Amazon Facial Recognition Confuses Members of Congress with Criminals

The ACLU stunt is intended to warn against using tech to identify suspects.

An American Civil Liberties Union experiment with Amazon's facial recognition software showed the technology confused 28 lawmakers with the mugshots of criminals.

Since this is Congress we're talking about, we should be clear: The lawmakers aren't criminals. The pictures that Amazon's "Rekognition" software matched to them were of different people.

This stunt is intended to serve as a wake-up call for lawmakers and law enforcement leaders about the potential problems of using facial recognition software to identify suspects. Privacy and technology experts and activists have been warning for years that the technology remains far too flawed to be used to identify criminal suspects, and the tech struggles particularly with differentiating between the faces of minorities.

The experiment here bore out that concern. Only 20 percent of the members of Congress are racial minorities. But 39 percent of the members of Congress who were mistaken for criminals were racial minorities.

The ACLU warns of the potential consequences of widespread implementation of such flawed recognition software:

If law enforcement is using Amazon Rekognition, it's not hard to imagine a police officer getting a "match" indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins. Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.

An identification — whether accurate or not — could cost people their freedom or even their lives. People of color are already disproportionately harmed by police practices, and it's easy to see how Rekognition could exacerbate that. A recent incident in San Francisco provides a disturbing illustration of that risk. Police stopped a car, handcuffed an elderly Black woman and forced her to kneel at gunpoint — all because an automatic license plate reader improperly identified her car as a stolen vehicle.

Amazon is marketing Rekognition to police to be used as an identification tool, and the ACLU notes that a sheriff's department in Oregon is doing exactly what the ACLU did here, matching people's faces to mugshot databases.

A representative for Amazon told The New York Times that the ACLU used the tools differently than how they expect law enforcement will. The ACLU used the default mode of 80 percent confidence in the match; Amazon recommends that police use a 95 percent threshold. "It is worth noting that in real world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment," Amazon's rep said in a statement.
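The threshold dispute is easy to illustrate. The sketch below is a toy, not Rekognition's actual API: the face IDs and similarity scores are invented purely to show how raising the confidence threshold from the 80 percent default to Amazon's recommended 95 percent shrinks the set of returned "matches."

```python
# Hypothetical similarity scores for one probe photo against a mugshot
# database. These names and numbers are illustrative, not real
# Rekognition output.
candidate_matches = [
    {"face_id": "mugshot-101", "similarity": 97.2},
    {"face_id": "mugshot-102", "similarity": 88.5},
    {"face_id": "mugshot-103", "similarity": 81.3},
]

def matches_above(candidates, threshold):
    """Keep only candidates whose similarity meets the confidence threshold."""
    return [c["face_id"] for c in candidates if c["similarity"] >= threshold]

# At the 80 percent default all three mugshots come back as "matches";
# at the recommended 95 percent, only one does.
print(matches_above(candidate_matches, 80))  # ['mugshot-101', 'mugshot-102', 'mugshot-103']
print(matches_above(candidate_matches, 95))  # ['mugshot-101']
```

The point of contention is that nothing in the software forces the stricter setting; the threshold is just a number the operator picks.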

I can't imagine how anybody would find that reassuring. Not only do law enforcement officers have a lengthy history of stubbornly arresting and imprisoning people over cases of mistaken identity, but Zuri Davis recently noted that one police chief was just flat-out arresting random innocent men in order to clear burglaries. Imagine being able to blame it on technology.

And keep in mind, in the midst of a federal government implementing a remarkably harsh crackdown on immigrants (both legal and illegal), the Department of Homeland Security is implementing a facial recognition system in airports that will scan everybody boarding international flights, including American citizens, all for the purpose of trying to catch illegal immigrants with expired visas. We already see cases of immigration officials detaining American citizens—some of them for months or even years—over mistaken identities.

But proponents note that facial recognition software has a place and does serve a purpose when it's accurate. Officials used facial recognition tools to identify the alleged shooter at the Capitol Gazette in Maryland. Given the man's historical obsession with that newspaper, though, it was only a matter of time before they figured out who he was.

The ACLU and dozens of other civil rights groups want Amazon to stop offering this surveillance tool to police, and they're calling on Congress to enact a moratorium on the use of facial recognition tools until the software is improved to the point that these mistakes are much less likely.


Humans are still much better at recognizing faces from bad photos than computers are. We might eventually get to the point where computers can “zoom and enhance” a photo beyond the original resolution, but we’re not there yet.

“It is worth noting that in real world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment,” Amazon’s rep said in a statement.

As much as I distrust facial recognition technology, Amazon is right here. At least for now, I don’t think that the Department of Public Works is automatically issuing warrants for everyone with a hat size of 7 1/2 based on facial recognition hits. Even your bullshit ‘red light’ camera has a human do the final review.

I understand what the ACLU is warning about, but I think they would do better to scrutinize how the tech is used more than how the tech actually works.

I think I’m legitimately okay with limiting the technology the government has access to. I do not trust them with it, and I do not trust many people to have enough understanding of the technology to meaningfully stand against its abuses.

I believe that’s an impossible standard. It’s the genie-out-of-the-bottle problem. I think the better approach is to make sure we know how, when, and where those tools are being used. In some cases, that will probably rely on whistleblowers and leaked press reports, because the government will lie to us about its use of the technology anyway; there’s precedent for that.

They could easily just turn the threshold down further if they wanted to. That detail is easy to gloss over, too, or to play off as user error.

All it probably does is compute a distance between two faces, some real number between 0 and 1, and then apply a threshold below which it calls a match. Drop that threshold down to, say, 50 percent and you would probably match just about every human.
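That guess can be sketched concretely. Real systems embed each face as a vector of hundreds of numbers; the three-dimensional vectors and thresholds below are made up purely to show the mechanics of distance-plus-threshold matching, not how Rekognition actually works.

```python
import math

# Toy face "embeddings": real systems map each photo to a vector of
# hundreds of floats; these 3-dimensional vectors are purely illustrative.
face_a = [0.1, 0.8, 0.3]
face_b = [0.15, 0.75, 0.35]   # a similar face
face_c = [0.9, 0.1, 0.7]      # a dissimilar face

def distance(u, v):
    """Euclidean distance between two embeddings; smaller means more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def is_match(u, v, threshold):
    """Declare a match when the distance falls below the threshold."""
    return distance(u, v) < threshold

# A strict threshold matches only genuinely similar faces; loosen it
# enough and almost any pair of faces will "match".
print(is_match(face_a, face_b, 0.2))  # True
print(is_match(face_a, face_c, 0.2))  # False
print(is_match(face_a, face_c, 2.0))  # True
```

The last line is the commenter's point: with a loose enough threshold, everyone matches everyone.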

The experiment here bore out that concern. Only 20 percent of the members of Congress are racial minorities. But 39 percent of the members of Congress who were mistaken for criminals were racial minorities.

That isn’t the conclusion given the higher rate of crime by some minorities. The rate of mistakes could be exactly the same but with very different population pools (and their rates) to start with.
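The baseline being disputed can be made explicit with a little arithmetic, using only figures from the article plus one loudly assumed premise: that the software errs at the same rate for every face. Note this sketch does not model the commenter's alternative explanation, the racial makeup of the mugshot pool itself, which could shift the numbers independently.

```python
# Illustrative arithmetic only; the equal-error-rate premise is an
# assumption, not a finding from the ACLU test.
total_mismatched = 28           # members falsely matched (from the article)
minority_share_congress = 0.20  # share of Congress that is minority
observed_minority_share = 0.39  # share of the mismatches that were minority

# If the software erred at the same rate for every face, the mismatches
# should roughly mirror the makeup of Congress:
expected_if_unbiased = minority_share_congress * total_mismatched
observed = observed_minority_share * total_mismatched

print(round(expected_if_unbiased, 1))  # 5.6
print(round(observed, 1))              # 10.9
```

Roughly eleven minority members mismatched versus an expected six; whether that gap reflects unequal error rates or the composition of the comparison pool is exactly what the comment is questioning.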

“It is worth noting that in real world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment”

I come for the alt text, I stay for the article, I get waylaid by the comments.

I wish other Reason writers did alt text. It’s a real treat when Scotty “Love Shack” Shackford has an article up. Sometimes I wish Love Shack and 2chili would rebel and chuck the proverbial Reason tea into the proverbial harbor. They seem like the most likely candidates to at least request that KMW bring comrade Gillespie in for a spanking.