This AI Lie Detector Flags Falsified Police Reports

Minority Report

Imagine this: You file a police report, but back at the station, they feed it into an algorithm — and it accuses you of lying, as though it had somehow looked inside your brain.

That might sound like science fiction, but Spain is currently rolling out a very similar program, called VeriPol, in many of its police stations. VeriPol’s creators say that when it flags a report as false, it turns out to be correct more than four-fifths of the time.

Lie Detector

VeriPol is the work of researchers at Cardiff University and Charles III University of Madrid.

In a paper published earlier this year in the journal Knowledge-Based Systems, they describe how they trained the lie detector on a data set of more than 1,000 robbery reports — some of which police had identified as false — to pick up on subtle signs that a report wasn't true.
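The paper's general approach — learning which wording patterns distinguish confirmed-false reports from genuine ones — can be illustrated with a minimal bag-of-words Naive Bayes sketch. This is not the authors' actual model or data; the example reports, labels, and function names below are all hypothetical stand-ins for illustration.

```python
from collections import Counter
import math

# Toy stand-in data; the real training set was over 1,000 Spanish-language
# robbery reports, with features and models described in the paper.
reports = [
    ("my phone was stolen from behind while I walked", True),   # labeled a false report
    ("phone stolen from behind no witnesses", True),
    ("two men threatened me with a knife and took my wallet", False),
    ("a man grabbed my bag outside the bank and ran", False),
]

def train(data):
    """Collect per-class word counts and document counts."""
    words = {True: Counter(), False: Counter()}
    docs = {True: 0, False: 0}
    for text, label in data:
        docs[label] += 1
        words[label].update(text.split())
    return words, docs

def log_score(text, words, docs, label):
    """Log-probability of the text under one class, with add-one smoothing."""
    vocab = len(set(words[True]) | set(words[False]))
    total = sum(words[label].values())
    logp = math.log(docs[label] / sum(docs.values()))
    for w in text.split():
        logp += math.log((words[label][w] + 1) / (total + vocab))
    return logp

def flag_as_false(text, words, docs):
    """True if the classifier thinks the report is more likely fabricated."""
    return log_score(text, words, docs, True) > log_score(text, words, docs, False)

words, docs = train(reports)
print(flag_as_false("phone stolen from behind", words, docs))  # True
```

The point of the sketch is only that such a system learns surface-level word statistics, not ground truth about events — which is why its output is a probability, not proof.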

Thought Crime

In pilot studies in Murcia and Malaga, Quartz reported, follow-up investigation confirmed the algorithm's suspicion about 83 percent of the time it flagged a report as false.
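That 83 percent figure is what machine-learning practitioners call precision: of the reports the system flags, how many turn out to be truly false. A quick sketch with hypothetical counts (the pilot studies were not broken down this way in the source) shows how it is computed:

```python
# Hypothetical counts, for illustration only.
flagged = 100          # reports the algorithm flagged as false
confirmed_false = 83   # of those, later confirmed false by investigators

precision = confirmed_false / flagged
print(f"precision: {precision:.0%}")  # precision: 83%
```

Notably, precision says nothing about the reports the system wrongly clears, nor about how often a truthful filer is among the flagged 17 percent — which is exactly the gap the closing question turns on.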

Still, the project raises uncomfortable questions about allowing algorithms to act as lie detectors. Fast Company reported earlier this year that authorities in the United States, Canada, and the European Union are testing a separate system called AVATAR that they want to use to collect biometric data about subjects at border crossings — and analyze it for signs that they’re not being truthful.

Maybe the real question isn’t whether the tech works, but whether we want to permit authorities to act upon what’s essentially a good — but not perfect — assumption that someone is lying.