Issue: About Face, June 13, 2020

Exponential View, by Azeem Azhar


💡 About face

Several tech companies have stopped selling facial recognition to the police. Here is what we should ask now.

Azeem Azhar Jun 13      

 

First, IBM announced that it would stop selling facial recognition technologies. Earlier this week, Arvind Krishna, the firm’s boss, said:

 

IBM firmly opposes and will not condone the uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms.

 

We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.

 

Then Amazon, likely the market leader, followed, saying it would enforce a year-long moratorium on the use of its platform, Rekognition, by US police forces:

 

We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.

 

Yesterday, Microsoft announced a ban on police use of its surveillance technologies until federal regulation is in place.

 


Image credit: Buolamwini & Gebru

 

Some large technology companies have been arguing for months in favour of some kind of regulatory-legislative framework for AI and its applications. Yet, until now, few were willing to enter into voluntary moratoria. Back in January, I offered my thoughts on “the real reasons technology companies want regulation”:

 

If [big technology firms] aren’t kept in check by competition, then what will? What will the limits to the big tech firms’ power be?

 

And the only answer then might be government, the state, the regulator, unfashionable though it may be to say. And knowing that a decent strategy for any large tech firm is the ‘woke’ one, focus on emotive and important issues early, such as facial recognition. So that we don’t ask the really hard questions.

 

I was relieved to see IBM, then Amazon and Microsoft step up and pause the sale of facial recognition systems to the police. Back in July 2019, the Ada Lovelace Institute, where I am a director, had called for a moratorium on facial recognition technologies.

 

Occupying the middle ground between inaction and prohibition, a moratorium provides for time and space for informed thinking and the building of public trust.

 

These moratoria give us time and breathing space to come up with the correct regulatory framework. And this isn’t only an issue of racial bias, but of automating bias more generally, of creating a persistently surveilled citizenry without adequate accountability and protections.

 

So, absolutely, well done on hitting the pause button.

 

But…

It was in EV#16, back in July 2016, that I first tackled the issue of racial bias in image recognition technologies.

 

It was in February 2018 that Timnit Gebru and Joy Buolamwini published their paper Gender Shades, which compellingly showed the racial biases creeping into this now-mainstream technology. (Original paper here.)

 

At least four years of mainstream debate on this topic. And countless books, such as Safiya Noble’s Algorithms of Oppression, Cathy O’Neil’s Weapons of Math Destruction, Ruha Benjamin’s Race After Technology and Virginia Eubanks’s Automating Inequality.

 

Karen Hao has a typically excellent deep dive into activists’ efforts to persuade Amazon to stop selling its Rekognition tool to law enforcement. The firm spent 18 months or more trying to discredit the researchers. (Amazon’s systems demonstrated the worst racial bias.)

 


 

Meredith Whittaker, with whom I spoke on the podcast recently, points out that “Amazon tried to discredit their research. It tried to undermine them as Black women who led this research. It tried to spin up a narrative that they had gotten it wrong—that anyone who understood the tech clearly would know this wasn’t a problem.”

 

Back in January, I spoke with Microsoft’s President, Brad Smith, on this and other topics. Brad was clear that he felt these technologies needed to be regulated through appropriate democratic mechanisms: laws are best made by responsive legislators informed by the experience of firms operating in the market, not exclusively by lobbyists or corporate fiat. And, as a consequence of those principles, Microsoft would continue to sell facial recognition tech in the meantime.

 

Brad’s view in our conversation was definitely nuanced:

 

Should we just stop the technology until we solve them, and not allow it to be used for anything, or should we allow it to go forward and address the issues, perhaps more precisely? We’re definitely in the second camp. I look at it and say: is this a problem that can be solved with a scalpel, or should we pull out a meat cleaver? I worry that if you pull out a meat cleaver and you just say, we’re not going to allow this technology to be used at all, or not at all by the public sector, we’re going to stop ourselves from, in fact, doing the work needed to solve the problems, because it takes work. It takes experience, it takes learning.

 

I just think it’s unrealistic to expect all of these countries around the world to adopt laws on any kind of meaningful timeframe. So let’s figure out the right principles. Let’s expect and ask companies to act voluntarily. And let’s work to enshrine these in the law.

 

And so this week, Microsoft, amongst others, voluntarily changed its position—along the lines that the firm said it might.

 

The question I’m wrestling with is: what changed?

 

It isn’t clear that facial recognition technologies were used in the murder of George Floyd. George Floyd wasn’t the only Black American murdered by the police. Racial (and other) biases in facial recognition, and in other AI systems, were not suddenly discovered this week. I’ve discussed them extensively in this newsletter and on the podcast since 2016.

 

Nor were the problems of systemic racism or police overreach new to us. Why weren’t those clear issues sufficient to motivate these firms? Why did it take substantial public outcry against racism across the world to trigger action? What does it tell us about the limits of corporate decision-making in the face of rapid technological change?

 

This narrow question lays bare the challenges we face in regulating novel technologies that leap ahead of the ability of governments and civil society to understand and manage them. The technology firms didn’t think these technologies were problematic enough to press pause until public opinion (and, presumably, employee sentiment) swung against them.

 

And now, by executive fiat, they do think these technologies are too toxic to be unleashed on us without a moment of considerable reflection. “Regulation by outrage” fills the policy gap after public criticism, threats of regulation and mea culpas. In this case, four years of waiting for a constructive response were followed not by systemic solutions but by the minimum measures necessary to avoid further reputational damage.

 

Of course, the good news is that two of the leading players in this sector, Amazon and Microsoft, will endeavour to force a legislative discussion about these technologies, which will hopefully lead to a suitable legal framework under which they can operate. And the answer to my question above, “what changed?”, is that the firms buckled under public sentiment.

 

The bad news is that there are other important players providing surveillance technologies, such as Huawei, NEC, Hikvision, and Clearview AI, which built its gigantic image database by scraping photos of Facebook users in violation of Facebook’s terms (I wrote about Clearview in EV#253).

 

As EV member Stephanie Hare points out to me in private correspondence, police departments “can find creative ways to leverage the use of facial recognition technology by the private sector—a growing and totally unregulated area. We need to regulate surveillance technologies as a whole, not rely on individual companies to self-regulate as and when they see fit (and for only a year in the case of Amazon).” She also joins me in questioning how these US-centric decisions will impact other countries, where surveillance continues to be a prolific, unregulated industry with important geopolitical repercussions. 

 

This leaves us with three other big, outstanding issues:

 

  1. What do the firms actually think the rules should be? How will they use their lobbying clout to frame those rules? And how effectively will the rules be developed? We need to understand the firms’ internal logic and their processes of decision-making.

  2. What of the other big issues around the social settlement with tech infrastructure players as they become our new public space? Facial recognition is a de minimis business for these massive firms, but it does represent the thin end of a regulatory wedge. Imagine the foot-dragging over something that really affects their bottom line. Regulators will find that, if they don’t have the capacity to respond in a timely way, the private sector will become the gatekeeper of how new technologies get implemented.

  3. What do we learn about the pipeline of exponential, non-neutral, dual-use technologies—how should they be dealt with?

 

Happy to hear respectful comments below.

 

Best,

Azeem

 

P.S. Thanks to Stephanie Hare, Mark Bunting and Carly Kind for valuable feedback on this letter. 

 
