When everybody lies: Voice-stress analysis tackles lie detection

As big data and analytics are increasingly considered the go-to technologies for teasing veracity from volumes of information, the real truth is that people lie -- sometimes quite effectively, essentially negating reams of data on creditworthiness, employment performance and personal references.

Agencies have seen their share of headlines about rogue employees passing security clearances. An insider security threat or leak can damage business and national security, ruin reputations and even cost human lives, so organizations are keen to identify deception.

Although various technologies have been applied to determining whether a person is telling the truth, many experts believe that no foolproof method of lie detection exists. Nevertheless, since the early 1900s people have used available technology -- from measuring changes in blood pressure and pupil dilation to linguistic analysis and magnetic resonance imaging -- to try to sift fact from fiction.

The polygraph, today’s disputed yet de facto standard, was invented in 1921 and is currently used by many organizations, including law enforcement and intelligence agencies, to interrogate suspects and screen new employees. A polygraph machine looks at heartbeat, perspiration, breathing and other physical factors that are influenced by stress. Too many stress indicators could mean that a subject is feeling guilty or is worried about their response. If stress levels remain the same throughout the questioning, then no deception is detected.

While the polygraph has been a standard tool for law enforcement in criminal investigations, some police departments are using computer voice stress analysis (CVSA) in their investigations and parole programs. In fact, a U.S. federal court recently ruled that sex offenders can be required to submit to CVSA examinations as part of their post-release supervision.

One such voice examination tool, the CVSA II, manufactured by the National Institute for Truth Verification, runs on a variety of platforms -- including mobile devices. The company claims it works whether the subject is face to face with an investigator or talking over the phone. The system uses a microphone plugged into a computer to quantify and analyze frequency changes in the subject’s responses that indicate vocal stress. As the subject speaks, the computer displays and numbers each voice pattern. At the end of the evaluation, an algorithm scores the results.
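The description above suggests a pipeline of frame-by-frame frequency analysis followed by a summary score. The vendor's algorithm is proprietary and not public, so the sketch below is only a rough illustration of the general idea: it estimates each frame's dominant vocal frequency with textbook autocorrelation, then scores a response by how much that frequency jitters between frames. All function names here are hypothetical.

```python
import math
import statistics

def dominant_period(frame, sample_rate):
    """Estimate the dominant vocal period of one audio frame via
    autocorrelation -- a common textbook pitch-tracking technique,
    not the vendor's proprietary method."""
    n = len(frame)
    best_lag, best_corr = 0, 0.0
    # Search lags corresponding to roughly 50-400 Hz, the human vocal range.
    for lag in range(sample_rate // 400, sample_rate // 50):
        corr = sum(frame[i] * frame[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag / sample_rate if best_lag else 0.0

def stress_score(frames, sample_rate=8000):
    """Score a response by the relative variability of its per-frame
    frequency estimates: higher jitter is read as higher vocal stress."""
    freqs = [1.0 / p for f in frames
             if (p := dominant_period(f, sample_rate)) > 0]
    if len(freqs) < 2:
        return 0.0
    return statistics.stdev(freqs) / statistics.mean(freqs)
```

Higher scores mean greater frame-to-frame pitch variability, which voice-stress systems treat as one possible indicator of stress -- not as proof of deception.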

But criminal investigations represent only the tip of the iceberg for an automated system that can flag human deception. Such technology could be invaluable in personnel screening, defense and homeland security, border control and airport security, as well as for financial institutions, contact centers and insurance providers -- in short, anywhere human deception is a liability.

The Department of Homeland Security’s National Center for Border Security and Immigration at the University of Arizona developed a screening system called the Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR), which is designed to flag suspicious or anomalous behavior that warrants further investigation by a trained human agent in the field.

The kiosk-based automated system conducts brief interviews in a number of screening contexts, such as trusted traveler application programs, personnel reinvestigations, visa application reviews or similar scenarios where truth assessment is a key concern. AVATAR uses non-invasive sensors to track pupil dilation, eye and body movements and changes in vocal pitch in an effort to identify suspicious or irregular behavior that deserves further investigation.
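AVATAR's actual models are not public, but the screening pattern described -- compare a subject's readings across several sensor channels and flag the ones that deviate -- can be sketched with a simple per-channel z-score rule. The channel names and threshold below are illustrative assumptions, not the system's real features.

```python
import statistics

def flag_anomalies(baseline, interview, threshold=2.0):
    """Flag sensor channels whose interview reading deviates from the
    subject's own baseline readings by more than `threshold` standard
    deviations. A toy rule-based screen, not AVATAR's actual model."""
    flagged = []
    for channel, readings in baseline.items():
        mean = statistics.mean(readings)
        stdev = statistics.stdev(readings)
        if stdev == 0:
            continue  # no baseline variation, so this channel cannot be scored
        if abs(interview[channel] - mean) / stdev > threshold:
            flagged.append(channel)
    return flagged
```

In this framing, a flagged channel only marks the traveler for follow-up, mirroring the article's point that the system routes anomalies to a trained human agent rather than rendering a verdict itself.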

AVATAR has been tested in several simulation exercises and at the U.S.-Mexico border. Its first field test was in December 2013 in Romania.

Nemesysco, an Israel-based company specializing in voice analysis solutions, uses layered voice analysis (LVA), which identifies various types of stress levels, cognitive processes and emotional reactions that are reflected in the properties of a subject’s voice. Nemesysco emphasizes that LVA is not the same as voice stress analysis but instead uses a unique technology to detect “brain activity traces” using the voice as a medium. By using wide-range spectrum analysis to detect minute involuntary changes in the speech waveform itself, the company says, LVA can detect anomalies in brain activity and classify them in terms of stress, excitement, deception and varying emotional states.

Beyond Verbal Communications, another Israel-based firm that bills itself as an emotional analytics company, is among a number of businesses that are working on adapting voice recognition technology to a variety of applications such as improving call center interactions and monitoring airline pilots for fatigue.

Beyond Verbal offers its software as a cloud-based licensed service. By connecting to its API and SDK, third-party developers can use the technology for a variety of purposes in a range of fields.

It has even released a “home” version of its emotion-decoding voice recognition software. “With the click of a button and about 20 seconds of speech, the Moodies app gives users the option to analyze their own voice as well as understand the emotions of individuals around them,” the company said in its announcement of the iOS app. Similar “for-fun” emotion-analysis or lie-detection apps are available for Android.

In the end, any detection method is only as good as the investigator using it and the questions posed. And there will always be doubt: whatever deception detection technology an investigator prefers, humans can still sometimes outwit it.