A “top-of-the-line” automated facial recognition (AFR) system, trialled for the second year running at London’s Notting Hill Carnival, couldn’t even tell the difference between a young woman and a balding man, according to a rights group worker invited to view it in action. Because yes, of course they did it again: London’s Met police once more deployed the controversial, inaccurate, largely unregulated technology to spot troublemakers. And once again, it did more harm than good.

Last year, it proved useless. This year, it proved worse than useless: it blew up in their faces, producing 35 false matches and one wrongful arrest of somebody erroneously tagged as being wanted on a warrant for a rioting offense.

[...] During a recent, scathing US House oversight committee hearing on the FBI’s use of the technology, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record. Yet the system’s recognition algorithm misidentifies people during criminal searches 15% of the time, with black women being misidentified most often.

The Garda Síochána has proposed to expand its surveillance of Irish citizens by swelling the amount of data it collects on them: more CCTV and ANPR set-ups, plus the introduction of facial recognition and body-in-a-crowd biometrics. [...] The use of AFR technology is already fraught in the UK, where the independent biometrics commissioner warned the government back in March that it risked inviting a legal challenge. It is no less of an issue in Ireland, where the Data Protection Commissioner (DPC) audited Facebook in 2011 and 2012 and scolded the Zuckerborg over its use of facial recognition technology.