About face
Several tech companies have stopped selling facial recognition to the police. Here is what we should ask now.
First, IBM announced that it would stop selling facial recognition technologies. Earlier this week, Arvind Krishna, the boss, said:
Then Amazon, likely the market leader, followed by saying it would enforce a year-long moratorium on the use of its platform, Rekognition, by US police forces:
Yesterday, Microsoft announced a ban on police use of its surveillance technologies until federal regulation is in place.
Image credit: Buolamwini & Gebru
Several large technology companies have for months been arguing in favour of a regulatory-legislative framework for AI and its applications. But, for some reason, until now they weren't entering into many voluntary moratoria. Back in January, I offered my thoughts on "the real reasons technology companies want regulation":
I was relieved to see IBM, then Amazon and Microsoft step up and pause the sale of facial recognition systems to the police. Back in July 2019, the Ada Lovelace Institute, where I am a director, had called for a moratorium on facial recognition technologies.
These moratoria give us enough time and breathing space to come up with the correct regulatory framework. And this isn’t only an issue of racial bias, but of generally automating bias, of creating a persistently surveilled citizenry without adequate accountability and protections.
So, absolutely, well done on hitting the pause button.
But…
It was EV#16, from July 2016, where I first tackled the issue of racial bias in image recognition technologies.
It was February 2018 when Timnit Gebru and Joy Buolamwini published their paper Gender Shades, which compellingly showed the racial biases creeping into this now mainstream technology. (Original paper here.)
At least four years of mainstream debate on this topic. And countless books, such as Safiya Noble’s Algorithms of Oppression, Cathy O’Neil’s Weapons of Math Destruction, Ruha Benjamin’s Race After Technology and Virginia Eubanks’s Automating Inequality.
Karen Hao has a typically excellent deep dive into activists’ efforts to persuade Amazon to stop selling its Rekognition tool to law enforcement. The firm spent 18 months or more trying to discredit researchers. (Amazon’s systems demonstrated the worst racial bias.)
Meredith Whittaker, with whom I spoke on the podcast recently, points out that “Amazon tried to discredit their research. It tried to undermine them as Black women who led this research. It tried to spin up a narrative that they had gotten it wrong—that anyone who understood the tech clearly would know this wasn’t a problem.”
Back in January, I spoke with Microsoft’s President, Brad Smith, on this and other topics. Brad was clear that he felt these technologies needed to be regulated by appropriate democratic mechanisms. Laws are best made by responsive legislators informed by the experience of firms operating in the market, not exclusively by lobbyists or corporate fiat. But as a consequence of these principles, Microsoft would continue to sell facial recognition tech.
Brad’s view in our conversation was definitely nuanced (my emphasis):
And so this week, Microsoft, amongst others, voluntarily changed its position—along the lines that the firm said it might.
The question I’m wrestling with is what changed?
It isn’t clear that facial recognition technologies were used in the murder of George Floyd. George Floyd wasn’t the only Black American murdered by the police. Racial (and other) biases in facial recognition, and in other AI systems, were not suddenly discovered this week. I’ve discussed them extensively in this newsletter and on the podcast since 2016.
Nor were the problems of systemic racism or police overreach new to us. Why weren’t those clear issues sufficient to motivate these firms? Why did it take the substantial public outcry against racism across the world to trigger an action? What does it tell us about the limits of corporate decision-making in the face of rapid technological change?
This narrow question lays bare the challenges we face in regulating novel technologies that leap ahead of the ability of governments and civil society to understand and manage them. The technology firms didn’t think these technologies were problematic enough to press pause until public opinion (and presumably employee sentiment) swung against them.
And now, by executive fiat, they do think these technologies are too toxic to be unleashed on us without a moment of considerable reflection. “Regulation by outrage” fills the policy gap after public criticism, threats of regulation and mea culpas. In this case, four years of waiting for a constructive response are followed by the minimum necessary measures to avoid further reputational damage, rather than by systemic solutions.
Of course, the good news is that two of the leading players in this sector, Amazon and Microsoft, will endeavour to force a legislative discussion about these technologies, which will hopefully lead to a suitable legal framework under which they operate. The answer to my question above, “what changed?”, is that the firms buckled under public sentiment.
The bad news is that other important players provide surveillance technologies, such as Huawei, NEC, Hikvision, and Clearview AI, which built its gigantic image base by scraping photos of Facebook users in violation of Facebook’s terms of service (I wrote about Clearview in EV#253).
As EV member Stephanie Hare points out to me in private correspondence, police departments “can find creative ways to leverage the use of facial recognition technology by the private sector—a growing and totally unregulated area. We need to regulate surveillance technologies as a whole, not rely on individual companies to self-regulate as and when they see fit (and for only a year in the case of Amazon).” She also joins me in questioning how these US-centric decisions will impact other countries, where surveillance continues to be a prolific, unregulated industry with important geopolitical repercussions.
This leaves us with three other big, outstanding issues:
Happy to hear respectful comments below.
Best, Azeem
P.S. Thanks to Stephanie Hare, Mark Bunting and Carly Kind for valuable feedback on this letter.