UK law enforcement has proudly been using facial recognition tech for a few years now. As is the case with any new law enforcement tech advancement, it's being deployed as broadly as possible with as little oversight as agencies can get away with.

As of 2015, UK law enforcement had 18 million faces stashed away in its databases.

Presumably, the database did not contain 18 million criminals and their mugshots.

Concerns were raised but waved away with promises to put policies in place at some point in the future, along with grandiose claims of 100% reliability. The latter, naturally, came from the police inspector who headed the facial recognition department. Caveat: the system had only been tested on a limited set using "clear images."

What works well in theory, or with limited datasets, doesn't necessarily work well in practice. Here's how things went down when the facial recognition program was deployed in the wild.

Quote:

The controversial trial of facial recognition equipment at Notting Hill Carnival resulted in roughly 35 false matches and an erroneous arrest, highlighting questions about police use of the technology.

The system only produced a single accurate match during the course of Carnival, but the individual had already been processed by the justice system and was erroneously included on the suspect database.

Yeah, that's going to keep UK citizens from being menaced by terrorists, drug dealers, and whatever other threats were cited to keep the facial recognition program from being derailed by concerned legislators and citizens. And while the tech was busy failing to do its job, a few thousand photos of people engaged in nothing more than being criminally underdressed were added to the pot of randomly-drawn faces for the next round of facial recognition roulette.