EU Testing AI Lie Detectors at Border Crossings

Border control checkpoints in the European Union are testing AI-based lie-detector technology to ferret out those trying to cross the border illegally.

The system, which has the clunky name of “iBorderCtrl,” is funded by the EU and is intended “to deliver more efficient and secure land border crossings to facilitate the work of border guards in spotting illegal immigrants, and…contribute to the prevention of crime and terrorism.”

Luxembourg is the main coordinator, and trials are “about to start” in Hungary, Greece, and Latvia, the EU says. Cyprus, the United Kingdom, Poland, Spain, and Germany will also be testing it.

Prior to traveling, people will be required to upload pictures of their passport, visa, and proof of funds online. Then, an animated guard (personalized to the traveler’s gender, ethnicity, and language) will ask questions via the webcam.

According to New Scientist, questions include “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”

Then, when the traveler gets to the border, those flagged as low risk during the pre-screening stage will get a short re-evaluation, while “higher-risk passengers” will get a more “detailed” check.

That includes using a hand-held device to cross-check information and compare facial images captured via the webcam with passports and photos taken on previous border crossings, as well as fingerprinting, palm-vein scanning, and face matching.

“We’re employing existing and proven technologies—as well as novel ones—to empower border agents to increase the accuracy and efficiency of border checks,” according to project coordinator George Boultadakis of European Dynamics in Luxembourg, who says the iBorderCtrl system will “move beyond biometrics and on to biomarkers of deceit.”

The EU has invested nearly $5 million in the endeavor but, as Gizmodo notes, having an AI gather data from only a select few countries could have negative effects, especially if it is training facial-recognition technology, which has shown racial bias in the past.