Prof. Rada Mihalcea and her collaborators have developed lie-detecting software that analyzes a speaker’s words and gestures.

New Lie-Detecting Software from U-M Uses Real Court Case Data

By studying videos from high-stakes court cases, University of Michigan researchers are building unique lie-detecting software based on real-world data.

Their prototype considers both the speaker's words and gestures, and unlike a polygraph, it doesn't need to touch the subject in order to work. In experiments, it was up to 75 percent accurate in identifying who was being deceptive (as defined by trial outcomes), compared with humans' scores of just more than 50 percent.

With the software, the researchers say they've identified several tells. Among other behaviors, lying individuals moved their hands more, tried to sound more certain and, somewhat counterintuitively, looked their questioners in the eye a bit more often than those presumed to be telling the truth.

The system might one day be a helpful tool for security agents, juries and even mental health professionals, the researchers say.

To develop the software, the team used machine-learning techniques to train it on a set of 120 video clips from media coverage of actual trials. They got some of their clips from the website of The Innocence Project, a national organization that works to exonerate the wrongfully convicted.

The "real world" aspect of the work is one of the main ways it's different.

"In laboratory experiments, it's difficult to create a setting that motivates people to truly lie. The stakes are not high enough," said CSE Prof. Rada Mihalcea who leads the project with Mihai Burzo, assistant professor of mechanical engineering at UM-Flint.

"We can offer a reward if people can lie well — pay them to convince another person that something false is true. But in the real world there is true motivation to deceive."

The videos include testimony from both defendants and witnesses. In half of the clips, the subject is deemed to be lying. To determine who was telling the truth, the researchers compared their testimony with trial verdicts.

To conduct the study, the team transcribed the audio, including vocal fill such as "um," "ah" and "uh." Researchers then analyzed how often subjects used various words or categories of words. They also counted the gestures in the videos using a standard coding scheme for interpersonal interactions, one that scores nine different motions of the head, eyes, brow, mouth and hands.
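The article doesn't spell out the exact feature set, but the process it describes (rates of word categories in the transcript plus counts of coded gestures per clip) can be sketched in a few lines. Everything below, including the category word lists, the gesture codes and the function name extract_features, is an illustrative assumption rather than the researchers' actual scheme:

```python
# A minimal sketch of the kind of feature extraction the article describes.
# The word categories and gesture codes are illustrative assumptions,
# not the team's actual coding scheme.
from collections import Counter

# Hypothetical word categories, loosely inspired by the article
# (vocal fill, first-person pronouns, certainty words).
WORD_CATEGORIES = {
    "vocal_fill": {"um", "ah", "uh"},
    "first_person": {"i", "we", "me", "us"},
    "certainty": {"definitely", "absolutely", "never", "always"},
}

# Hypothetical gesture codes; the article says the real scheme scores
# nine motions of the head, eyes, brow, mouth and hands.
GESTURE_CODES = ["head_forward", "gaze_up", "brow_raise",
                 "scowl", "one_hand", "both_hands"]

def extract_features(transcript: str, gestures: list[str]) -> dict[str, float]:
    """Turn one clip's transcript and annotated gestures into a feature vector."""
    tokens = transcript.lower().split()
    counts = Counter(tokens)
    features = {}
    # Rate of each word category, normalized by transcript length.
    for category, words in WORD_CATEGORIES.items():
        features[category] = sum(counts[w] for w in words) / max(len(tokens), 1)
    # Raw count of each coded gesture in the clip.
    gesture_counts = Counter(gestures)
    for code in GESTURE_CODES:
        features[f"gesture_{code}"] = gesture_counts[code]
    return features

# Example with one made-up clip.
print(extract_features("um I definitely did not see him",
                       ["both_hands", "scowl", "both_hands"]))
```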

The researchers fed the data into their system and let it sort the videos. When it used input from both the speaker's words and gestures, it was 75 percent accurate in identifying who was lying. That's much better than humans, who scored only slightly better than a coin flip.
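The article doesn't name the learning algorithm, so the sketch below stands in scikit-learn's DecisionTreeClassifier and random numbers for the real features; only the shape of the experiment, 120 labeled clips evaluated on held-out data, comes from the text:

```python
# A minimal sketch of training and scoring a deception classifier.
# The classifier choice and the random features are stand-ins, not the
# team's actual pipeline; assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the real data: 120 clips, each reduced to a feature vector
# of word-category rates and gesture counts (see the sketch above), with
# labels taken from trial outcomes (0 = truthful, 1 = deceptive).
X = rng.random((120, 9))
y = np.repeat([0, 1], 60)  # half the clips are deemed deceptive

clf = DecisionTreeClassifier(random_state=0)
# 10-fold cross-validation reports accuracy on held-out clips.
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.2f}")
```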

This image from the researchers' paper shows facial expressions and hand gestures from real-life trial clips. Starting at the top left-hand corner: a deceptive trial with forward head movement; a deceptive trial with movement of both hands; a deceptive trial with movement of one hand; a truthful trial with raised eyebrows; a deceptive trial with a scowling face; and a truthful trial with an upward gaze.

"There are clues that humans give naturally when they are being deceptive, but we're not paying close enough attention to pick them up. We're not counting how many times a person says 'I' or looks up. We're focusing on a higher level of communication."

In the clips of people lying, the researchers found several common behaviors (a brief counting sketch follows the list):

• Scowling or grimacing of the whole face. This appeared in 30 percent of lying videos vs. 10 percent of truthful ones.

• Looking directly at the questioner more often than truthful subjects did.

• Gesturing more with the hands.

• Using language that conveyed more certainty.
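Per-behavior rates like the 30-percent-vs.-10-percent figure above boil down to counting which labeled clips contain a coded behavior. A small sketch, with invented clip records:

```python
# Compare how often a coded behavior appears in deceptive vs. truthful
# clips. The clip records below are invented, for illustration only.
def behavior_rate(clips: list[dict], behavior: str, label: str) -> float:
    """Fraction of clips with the given label that contain the behavior."""
    group = [c for c in clips if c["label"] == label]
    hits = sum(1 for c in group if behavior in c["gestures"])
    return hits / max(len(group), 1)

clips = [
    {"label": "deceptive", "gestures": {"scowl", "both_hands"}},
    {"label": "deceptive", "gestures": {"gaze_direct"}},
    {"label": "truthful", "gestures": {"brow_raise"}},
    {"label": "truthful", "gestures": {"gaze_up"}},
]
print(behavior_rate(clips, "scowl", "deceptive"))  # 0.5 on this toy data
```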

"Deception detection is a very difficult problem," Burzo said. "We are getting at it from several different angles."

For this work, the researchers themselves classified the gestures, rather than having the computer do it. They're in the process of training the computer to do that.

The research team also includes research fellows Veronica Perez-Rosas and Mohamed Abouelenien.

A paper on the findings titled "Deception Detection using Real-life Trial Data" was presented at the International Conference on Multimodal Interaction and is published in the 2015 conference proceedings. The work was funded by the National Science Foundation, John Templeton Foundation and Defense Advanced Research Projects Agency.