
Don’t Believe the Algorithm

Blind faith in machines (and machine learning) has left us vulnerable to biased and incoherent AI. The solution? A healthy dose of skepticism and human oversight.

By Hannah Fry

Sept. 5, 2018 10:27 a.m. ET

The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TV cameras, the police hoped to catch potentially dangerous criminals and prevent future crimes.

