In a small-scale study of forty videotaped conversations, researchers at the University at Buffalo’s Center for Unified Biometrics and Sensors (CUBS) were able to correctly identify whether subjects were telling the truth or lying a whopping 82.5 percent of the time.

Keep in mind that even the most expert human interrogators average around 65 percent accuracy, according to Ifeoma Nwogu, a research scientist at CUBS quoted by the UB Reporter, the University at Buffalo’s newspaper.

“What we wanted to understand was whether there are signal changes emitted by people when they are lying, and can machines detect them? The answer was yes, and yes,” Nwogu said.

Others involved with the CUBS research were Nisha Bhaskaran, Venu Govindaraju and Mark G. Frank, a professor of communication and a behavioral scientist who focuses his research on human facial expressions and deception.

The new CUBS system tracks eye movement, one of the many factors analyzed by the Future Attribute Screening Technology (FAST) system that the Department of Homeland Security (DHS) has been heavily researching.

By leveraging a statistical model of how human eyes move during non-deceitful, regular conversation as well as when someone is lying, the system can reportedly detect lies with surprising accuracy.

When someone’s eye-movement pattern differed between the two situations, the system assumed the individual was lying; those who displayed consistent eye movement in both scenarios were judged to be telling the truth.
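The study's actual model isn't spelled out here, but the baseline-versus-deviation idea can be sketched in a few lines of Python. The saccade-rate numbers, the Gaussian baseline, and the two-sigma threshold below are illustrative assumptions of mine, not figures from the CUBS research:

```python
import statistics

def fit_baseline(samples):
    """Fit a simple Gaussian model (mean, stdev) to baseline
    eye-movement measurements, e.g. saccades per second during
    regular, non-deceitful conversation."""
    return statistics.mean(samples), statistics.stdev(samples)

def looks_deceptive(baseline_model, interview_samples, z_threshold=2.0):
    """Flag the interview segment if its average eye-movement rate
    deviates from the baseline by more than z_threshold standard
    deviations; consistent movement is treated as truthful."""
    mean, stdev = baseline_model
    interview_mean = statistics.mean(interview_samples)
    z = abs(interview_mean - mean) / stdev
    return z > z_threshold

# Hypothetical saccade rates during casual conversation...
baseline = fit_baseline([2.1, 2.3, 1.9, 2.0, 2.2, 2.1])
# ...compared against rates during pointed questioning.
print(looks_deceptive(baseline, [2.0, 2.2, 2.1]))  # consistent -> False
print(looks_deceptive(baseline, [3.4, 3.6, 3.5]))  # markedly different -> True
```

Note that a sketch like this flags any deviation from baseline, which is exactly why (as discussed below) nervousness or discomfort can register the same way a lie does.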

Previous research, which used human observers to code facial movements, documented a marked difference in the amount of eye contact individuals made when telling what was considered to be a high-stakes lie.

Nwogu and her colleagues built upon this earlier research by creating an automated system that could both verify and improve upon the data human coders used to successfully detect deceit and differentiate it from truthful statements.

The research from Nwogu and colleagues used a sample size of just forty, too small to support statistically significant conclusions, yet Nwogu says their findings were still exciting.
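To get a feel for how much uncertainty a sample of forty carries, consider a back-of-the-envelope 95 percent Wilson confidence interval around the reported 82.5 percent (33 of 40 correct). This calculation is my own illustration, not something from the study:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion
    (z=1.96 gives an approximate 95 percent interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 33 of 40 conversations classified correctly (82.5 percent).
lo, hi = wilson_interval(33, 40)
print(f"{lo:.3f} to {hi:.3f}")  # roughly 0.68 to 0.91
```

The interval stretches from roughly 68 percent to 91 percent, so the low end sits barely above the 65 percent accuracy attributed to expert human interrogators.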

The findings suggest that computers may very well be able to learn enough about the behavior of a person in a relatively short period of time that they might be able to outperform even the most experienced of investigators.

To improve detection across real-world conditions, the researchers included videos of people with a range of head poses, in various lighting, with assorted skin colors, and with face-obstructing items such as glasses.

The next step in this research, according to Nwogu, will be to draw from a larger sample size of videos and to develop more advanced automated pattern-recognition models to suss out liars.

Thankfully, Nwogu isn’t claiming that the technology is foolproof: some people are able to maintain their normal eye-movement patterns while lying and thus trick the system.

However, she does say that automated deceit detection systems could indeed be used in law enforcement and security screenings.

In reality, they are already being field tested by the DHS and perhaps other federal agencies as well under the banner of “threat assessment” and “malicious intent detection.”

While it might be beneficial in some ways, I think that the risks are much greater than the rewards, since the DHS seems to want to use this as a kind of pre-crime technology.

They seek to create a world where if a computer says you’re lying, you become instantly criminalized, even if you are just darting your eyes around or your skin temperature is raised because you are nervous.

As I have pointed out in my previous coverage of such technology, the physiological signals monitored by these systems vary wildly from person to person.

This is likely why these studies avoid sample sizes that would actually make the findings statistically significant, as larger samples would likely diminish the results.

The DHS tests of the FAST system are heavily redacted, so it is almost impossible to tell how effective these systems supposedly are.

There are also the concerns raised by retired Federal Bureau of Investigation (FBI) counterintelligence special agent Joe Navarro, a founding member of the FBI’s Behavioral Analysis Unit and a 25-year FBI veteran.

He told Scientific American, “I can tell you as an investigator and somebody who’s studied this not just superficially but in depth, you have to observe the whole body; it can’t just be the face,” adding that failing to take body language into account could result in “an inordinate amount of false positives.”

Scientific American makes a great point that human law enforcement officers today have to take “into account that interrogations can make even honest people a little anxious,” which is obviously something a machine cannot do.

This could result in wholly innocent people being treated as potential criminals just because they’re uncomfortable being questioned by police, and this is something that should never happen in the United States or anywhere else, for that matter.

Madison Ruppert is the Editor and Owner-Operator of the alternative news and analysis database End The Lie and has no affiliation with any NGO, political party, economic school, or other organization/cause. He is available for podcast and radio interviews. Madison also now has his own radio show on Orion Talk Radio from 8 pm to 10 pm Pacific, which you can find HERE. If you have questions, comments, or corrections feel free to contact him at admin@EndtheLie.com


Activist Post is an Independent News blog for Activists challenging the abuses of the establishment.
