Category Archives: Mechanical methods

Detailed commentary from Patrick Barkham in the Guardian (18 Sept), exploring the use of ‘lie detecting’ machines in the UK. He covers the use of voice stress analysis in benefit offices and insurance companies, and polygraphy for sex offenders. Interesting stuff, and well worth reading in full over on the Guardian site. Here’s a flavour:

… Voice stress analysis systems have been used for more than five years in the British insurance industry but have yet to really catch on, according to the Association of British Insurers. There was an initial flurry of publicity when motor insurance companies introduced the technology in 2001 but it is still “the exception rather than the norm,” says Malcolm Tarling of the ABI. “Not many companies use it and those that do use it in very controlled circumstances. They never use the results of a voice risk analysis alone because the technology is not infallible.”

… Next year, in a pilot study, the government will introduce a mandatory polygraph for convicted sex offenders in three regions. … Professor Don Grubin, a forensic psychiatrist at Newcastle University… admits he was initially sceptical but argues that polygraphs are a useful tool. “We were less concerned about accuracy per se than with the disclosures and the changes in behaviour it encourages these guys to make,” he says. “It should not be seen as a lie detector but as a truth facilitator. What you find is you get markedly increased disclosures. You don’t get the full story but you get more than you had.”

…critics argue that most kinds of lie-detector studies are lab tests, which can never replicate the high stakes of real lies and tend to test technology on healthy individuals (usually students) of above-average intelligence. Children, criminals, the psychotic, the stupid and even those not speaking in their first language (a common issue with benefit claimants) are rarely involved in studies.

Benefit claimants and job seekers could be forced to take lie detector tests as early as next year after an early review of a pilot scheme exposed 126 benefit cheats in just three months, saving one local authority £110,000.

The news report also points out that many are skeptical:

Experts in America, where the most comprehensive scrutiny of the technology has taken place, warn that the technology is far from failsafe. David Ashe, chief deputy of the Virginia Board for Professional and Occupational Regulation, said: ‘The experience of being tested, or of claiming a benefit and being told that your voice is being checked for lies, is inherently stressful. Lie detector tests have a tendency to pass people for whom deception is a way of life and fail those who are scrupulously honest.’

Reading beyond the headlines, it’s clear that the pilot study is not finished, it hasn’t been properly evaluated, and no decision has yet been made. In ‘Lie detector beats benefit fraud’, silicon.com (3 Sept) reveals:

A spokesman for the Department for Work and Pensions (DWP) – which funded the pilot – told silicon.com the department will evaluate the technology when the trial is completed next May. He said the DWP will “look at the evaluation results and see if it’s viable, see if it’s something to work on and see if other councils are interested in doing it”. If the benefits are seen as sufficient, the system could potentially be rolled out across the country, although no firm plans are currently in place.

But this hasn’t stopped others jumping on the VSA bandwagon, as the Telegraph (9 Sept) and BBC Online (7 Sept) report that Birmingham Council is next in line to adopt the system.

More Deception Blog posts on this story here and here, and more generally on VSA here.

…The goal of this study was to test the validity and reliability of two popular VSA programs (LVA and CVSA) in a “real world” setting. Questions about recent drug use were asked of a random sample of arrestees in a county jail. Their responses and the VSA output were compared to a subsequent urinalysis to determine if the VSA programs could detect deception.

Both VSA programs show poor validity – neither program efficiently determined who was being deceptive about recent drug use. The programs were not able to detect deception at a rate any better than chance. The data also suggest poor reliability for both VSA products when we compared expert and novice interpretations of the output. …

However, the researchers did find that arrestees who knew they were going to be VSA tested “were much less likely to be deceptive about recent drug use than arrestees in a non-VSA research project” (though they do admit that the non-VSA project was not carried out in exactly the same way as the VSA study). The authors suggest that regardless of its validity, a VSA device may produce a bogus pipeline effect, which is perhaps why so many law enforcement agencies believe that it works:

When police officers report that VSA programs “work,” they generally mean that they were able to obtain a confession from suspects by telling them that the computer “said they were lying.” The potential problem, of course, is with false confessions. Several high profile cases have emerged in the past decade that suggest impressionable suspects may confess to a crime that they did not commit because they believed the software. The rationalization is usually that they “must have forgotten” that they did it. Obviously, the bogus pipeline effect of VSA products has important positive and negative implications (p.86).

The authors also highlight the financial implications, estimating the cost of training just one person from each of the 1,400 law enforcement agencies that claim to use VSA at more than $16 million. The software and laptop cost nearly $10,000 per agency, and “computer upgrades can increase the cost to almost $13,000” (p.4). Yikes.
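Taking the report’s figures at face value, the arithmetic can be checked with a quick sketch (the per-agency training figure is my own back-calculation, not a number the report states):

```python
# Figures quoted in the report (p.4)
agencies = 1400              # law enforcement agencies claiming to use VSA
total_training = 16_000_000  # estimated cost of training one person per agency
software_laptop = 10_000     # approximate software + laptop cost per agency
with_upgrades = 13_000       # per-agency cost once computer upgrades are included

# Back-of-the-envelope training cost per agency
per_agency_training = total_training / agencies
print(f"Training per agency:  ~${per_agency_training:,.0f}")  # ~$11,429

# Nationwide hardware/software outlay, with and without upgrades
print(f"Software and laptops: ${agencies * software_laptop:,}")  # $14,000,000
print(f"With upgrades:        ${agencies * with_upgrades:,}")    # $18,200,000
```

In other words, the hardware and software bill alone would be roughly the same size as the training bill.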

Download the full text report as a pdf from the link below, and read more on the Deception Blog about VSA here and about the bogus pipeline effect here.

Hat tip to Prof Peter Tillers for pointing us to a paper from Charles Keckler, George Mason University School of Law, on admissibility in court of neuroimaging evidence of deception. Here’s the abstract:

The last decade has seen remarkable progress in understanding ongoing psychological processes at the neurobiological level, progress that has been driven technologically by the spread of functional neuroimaging devices, especially magnetic resonance imaging, that have become the research tools of a theoretically sophisticated cognitive neuroscience. As this research turns to specification of the mental processes involved in interpersonal deception, the potential evidentiary use of material produced by devices for detecting deception, long stymied by the conceptual and legal limitations of the polygraph, must be re-examined.

Although studies in this area are preliminary, and I conclude they have not yet satisfied the foundational requirements for the admissibility of scientific evidence, the potential for use – particularly as a devastating impeachment threat to encourage factual veracity – is a real one that the legal profession should seek to foster through structuring the correct incentives and rules for admissibility. In particular, neuroscience has articulated basic memory processes to a sufficient degree that contemporaneously neuroimaged witnesses would be unable to feign ignorance of a familiar item (or to claim knowledge of something unfamiliar). The brain implementation of actual lies, and deceit more generally, is of greater complexity and variability. Nevertheless, the research project to elucidate them is conceptually sound, and the law cannot afford to stand apart from what may ultimately constitute profound progress in a fundamental problem of adjudication.

“Health, Disability, and Employment Law Implications of MRI” – Stacey Tovino, Hamline University School of Law

From a deception researcher’s point of view, the chance to hear from Steven Laken of commercial fMRI deception detection company Cephos will be particularly interesting.

Mind Hacks also notes that ABC Radio National’s All in the Mind on 23 June featured many of the speakers from this conference in a discussion of neuroscience, criminality and the courtroom. The webpage accompanying this programme has a great reference list. For those interested in deception research, I particularly recommend Wolpe, Foster & Langleben (2005) for an informative overview of the potential uses and dangers of neurotechnologies and deception detection.

Wow. Mind Hacks is right. A great article from the New Yorker on fMRI and deception detection. Here’s a little snippet but as the article is freely available online you should really head on over there and read the whole thing:

To date, there have been only a dozen or so peer-reviewed studies that attempt to catch lies with fMRI technology, and most of them involved fewer than twenty people. Nevertheless, the idea has inspired a torrent of media attention, because scientific studies involving brain scans dazzle people, and because mind reading by machine is a beloved science-fiction trope, revived most recently in movies like “Minority Report” and “Eternal Sunshine of the Spotless Mind.” Many journalistic accounts of the new technology—accompanied by colorful bitmapped images of the brain in action—resemble science fiction themselves.

And later, commenting on University of Pennsylvania psychiatrist Daniel Langleben’s studies that kicked off the current fMRI-to-detect-deception craze:

Nearly all the volunteers for Langleben’s studies were Penn students or members of the academic community. There were no sociopaths or psychopaths; no one on antidepressants or other psychiatric medication; no one addicted to alcohol or drugs; no one with a criminal record; no one mentally retarded. These allegedly seminal studies look exclusively at unproblematic, intelligent people who were instructed to lie about trivial matters in which they had little stake. An incentive of twenty dollars can hardly be compared with, say, your freedom, reputation, children, or marriage—any or all of which might be at risk in an actual lie-detection scenario.

Heinz and Suzanne Offe have just published a paper in Law and Human Behavior, in which they present the results of a study exploring when and how the controversial Control Question Test works in polygraph testing.

The logic of the CQT is that innocent subjects will respond more strongly to Control Questions (CQs, which relate to previous history of – or inclination towards – wrong-doing) than to Relevant Questions (RQs, which relate to the particular offence being investigated). Guilty subjects, on the other hand, will, it is theorised, respond more strongly to RQs.
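That comparison logic can be sketched as a toy scoring rule. This is purely illustrative: real CQT scoring involves detailed numerical scoring of several physiological channels by a trained examiner, not a single comparison like the one below.

```python
def cqt_verdict(cq_response: float, rq_response: float) -> str:
    """Toy version of the CQT comparison logic.

    cq_response / rq_response: strength of physiological arousal
    to the Control and Relevant questions respectively.
    """
    if cq_response > rq_response:
        return "innocent"      # stronger reaction to control questions
    elif rq_response > cq_response:
        return "guilty"        # stronger reaction to relevant questions
    return "inconclusive"

print(cqt_verdict(cq_response=0.8, rq_response=0.3))  # innocent
print(cqt_verdict(cq_response=0.2, rq_response=0.9))  # guilty
```

The sketch makes the CQT’s central assumption visible: everything hinges on the control questions genuinely provoking stronger arousal in innocent subjects, which is exactly the point Offe and Offe set out to test.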

In order for this procedure to be effective, it is claimed, subjects need to be convinced that being judged ‘not guilty’ depends on them giving socially desirable responses to the CQs. Examiners will tell their subjects something along the following lines:

“I want to find out whether you are the sort of person capable of [the crime under investigation] based on your history. So the questions I am going to ask you about your history will allow me to make these judgements about you. Now, tell me if you have ever taken something that was not yours…”.

In reality the explanations are a lot more detailed than this, all designed to raise the anxiety an innocent subject might feel at the prospect of being accused of something they did not do. (Offe and Offe give a detailed example of how this is done in the first appendix to their study.)

However, as Offe and Offe point out, it is debatable whether or not this type of questioning actually does increase the salience of the CQs for subjects.

The researchers set out to test the workings of the CQT by giving a mix of students and law enforcement trainees the opportunity to steal some money. Participants were allowed to choose for themselves whether or not to steal, making the simulation more realistic. They were then polygraphed under various conditions, in which the researchers tested whether explaining the CQs in detail made a difference to the ability to discriminate between guilty and innocent subjects.

I’ve been out of the country for the last couple of weeks and missed the start of what looks to be an interesting series from UK’s Channel 4 on lie detection. Luckily the trusty Mind Hacks is on hand to pick it up!

Lie Lab is a three-part TV series where they use the not-very-accurate brain scan lie detection method to test high profile people who have been accused of lying.

Read all about it on the Mind Hacks post here, or on the Channel 4 website here.

This is the question asked in the May 2007 issue of The Scientist, which discusses the recent commercialisation of fMRI for lie detection, and concludes with a good summary of the persistent problems using this technology in forensic contexts:

[…] in reality, a nonconsensual test-taker need only move his or her head slightly to render the results useless. And there are other challenges. For one, individuals with psychopathologies or drug use (overrepresented in the criminal defendant population) may have very different brain responses to lying, says [New York University Psychology prof Elizabeth] Phelps. They might lack the sense of conflict or guilt used to detect lying in other individuals. […]

If a person actually believes an untruth, it’s not clear if a machine could ever identify it as such. Researchers including Phelps are still debating whether the brain can distinguish true from false memory in the first place. […]

Jed Rakoff, US District Judge for the Southern District of New York, says he doubts that fMRI tests will meet the courtroom standards for scientific evidence (reliability and acceptance within the scientific community) anytime in the near future, or that the limited information they provide will have much impact on the stand.

[…] According to Rakoff, the best way to get at the truth in the courtroom is still “plain old cross-examination.” And in the national security sphere, there’s “much more to detecting spies than the perfect gadget,” [Marcus Raichle, professor at the Washington University in St. Louis School of Medicine] agrees. “There’s some plain old-fashioned footwork that needs to be done.”

See also:

Hat tip to Mind Hacks (11 May), which has a detailed commentary on the article.

If podcasts are your thing, you can also listen to an interview with Ken Alder, author of a new book on the polygraph, on the Bat Segundo show (mp3). As the Anti-Polygraph Blog points out, you have to sit through a little silliness first…

An eerie image of a magenta, blue-green and yellow face glows on a screen as a government employee steps behind a heat-sensing camera on this sprawling U.S. Army base. Not far away, researchers are studying lasers’ ability to detect muscle contraction. Other technology tracks the movement of a person’s eyes.

Liars beware. The Defense Department facility that trains the people who run the government’s polygraph machines is looking to an even higher plane of technology in its quest to separate fact from fiction.

The Trades Union Congress has called for the Department for Work and Pensions to abandon plans to use voice stress analysis in benefit centres (press release, 4 May). Quite right too. They say:

The Government should abandon plans to trial lie detector tests for people claiming benefits because the accuracy of the technology has not been scientifically proven, and individuals with genuine cases are likely to be discouraged from applying for the help they desperately need, says the TUC today (Friday).

[…] a TUC briefing ‘Lies, damned lies and lie detectors’ says that the science just isn’t there to back up the technology, and any use of the software when dealing with benefit claimants means that the innocent are just as likely to fall foul of the system as the genuinely guilty.

The TUC says that the problem with the lie detection technology that the DWP intends to use is that it cannot detect lies. Voice risk analysis and lie detectors can only detect, with varying accuracy, changes in the body, such as heart or breathing rate, or any changes in the tone, pitch or tremors in the voice.

An article in the March 2007 issue of Sexual Abuse: A Journal of Research and Treatment presents the results of an experimental comparison between child molesters’ responses on a questionnaire and their responses when attached to a fake lie detector known as a ‘bogus pipeline’. Here’s the abstract:

Questionnaires are relied upon by forensic psychologists, clinicians, researchers, and social services to assess child molesters’ (CMs’) offense-supportive beliefs (or cognitive distortions). In this study, we used an experimental procedure to evaluate whether extrafamilial CMs underreported their questionnaire-assessed beliefs. At time one, 41 CMs were questionnaire-assessed under standard conditions (i.e., they were free to impression manage). At time two, CMs were questionnaire-assessed again; 18 were randomly attached to a convincing fake lie detector (a bogus pipeline), the others were free to impression manage. The results showed that bogus pipeline CMs significantly increased cognitive distortion endorsements compared to their own previous endorsements, and their control counterparts’ endorsements. The findings are the first experimental evidence showing that CMs consciously depress their scores on transparent questionnaires.

Lie detectors will be used to help root out benefit cheats, Work and Pensions Secretary John Hutton has said. So-called “voice-risk analysis software” will be used by council staff to help identify suspect claims. It can detect minute changes in a caller’s voice which give clues as to when they may be lying. The technology is already used by the insurance industry to combat fraud and will be trialled by Harrow Council, in north London, from May.

In a Washington Monthly article (April 07) entitled The Big Lie (How America became obsessed with the polygraph—even though it has never really worked), David Wallace-Wells reviews Ken Alder’s recently published book The Lie Detectors.

[…] The device has been derided by teams of experts as junk science, hardly more reliable than methods of pure chance, barred from the courts, a favorite tool of overzealous investigators and an instrument of state-sponsored vigilantism, a handmaiden to McCarthyism, an accomplice to the pink scare, and a nightmare vision of justice as arbitrary and expansive as the judgment of a totalitarian court, in a box no bigger or more conspicuous than the briefcase of a company man. And yet, as Ken Alder shows in his revealing, colloquial social history The Lie Detectors, by the time scientific scrutiny finally caught up to the scientistic ambition of the device in the late 1980s, generations of Americans had been seduced by it.

The article concludes with a charming quote from G. K. Chesterton:

“Who but a Yankee would think of proving anything from heart-throbs?” asked G. K. Chesterton’s fictional detective, Father Brown. “Why, they must be as sentimental as a man who thinks a woman is in love with him if she blushes.”

This report examines how DOE’s new polygraph screening policy has evolved and reviews certain scientific findings with regard to the polygraph’s accuracy. As part of its continuing oversight of DOE’s polygraph program, the 110th Congress could address several issues, including whether DOE’s new screening program is sufficiently focused on a small number of individuals occupying only the most sensitive positions; program implementation; the desirability of further research into scientific validity of the polygraph and possible alternatives to the polygraph; and whether to continue or discontinue polygraph screening.

Charles Honts and Susan Amato have just published a study in Psychology, Crime & Law that indicates that an automated polygraph test may lead to more accurate results than one administered by a human being. As Honts and Amato explain:

Much of the criticism of polygraph practice has focused on the polygraph examiners […who have been] criticized for a variety of reasons, including, but not limited to: poor training, bias, incompetence, inability to use statistically relevant information, and for being an uncontrollable and unquantifiable variable in the conduct of polygraph tests.

Participants were randomly assigned to lie (‘guilty’) or tell the truth (‘innocent’) conditions. The human version of the test was conducted by an experienced polygraph examiner, and in the automated version the participants were given their questions via audio tape recording.

In this study, around two thirds of the guilty participants who had been tested by a human were correctly judged to be guilty, and 63% of the innocent participants were correctly judged innocent. In the automated version, however, correct ‘guilty’ decisions rose to 79% and correct ‘innocent’ decisions to 76%. It’s worth noting that the human examiner was not the one who made the decision about guilt or innocence – this was calculated in exactly the same way for both the ‘human’ and ‘automated’ conditions, via statistical analysis of the polygraph readings.
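Using the rates reported above, the human/automated contrast is easy to tabulate. The overall-accuracy figures below are my own simple averages, which only hold if the guilty and innocent groups were the same size (as the balanced 2×2 design suggests):

```python
# Reported correct-classification rates from the Honts & Amato study
rates = {
    "human":     {"guilty": 0.667, "innocent": 0.63},
    "automated": {"guilty": 0.79,  "innocent": 0.76},
}

for condition, r in rates.items():
    # With equal group sizes, overall accuracy is the simple mean of the two rates
    overall = (r["guilty"] + r["innocent"]) / 2
    print(f"{condition}: {overall:.1%} overall accuracy")
```

On those assumptions the automated test comes out roughly 13 percentage points ahead of the human-administered one.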

Here’s the abstract:

The present study examined the effects of automating the Relevant-Irrelevant (RI) psychophysiological detection of deception test within a mock-screening paradigm. Eighty participants, recruited from the local community, took part in the study. Experimental design was a 2 (truthful/deceptive) by 2 (human/automation) factorial. Participants in the deceptive conditions attempted deception on two items of an employment application. Examinations conducted with the automated polygraph examination were significantly more accurate than examinations conducted by the human polygraph examiner. Statistical analyses revealed different patterns of physiological responses to deceptive items depending upon the automation condition. Those results have potentially interesting theoretical implications. The results of the present study are clearly supportive of additional efforts to develop a field application of an automated polygraph examination.

[…] Deception arises in our brains. The utility of finding a way to look under the hood directly for the source of deception is undeniable. Not surprisingly, a number of researchers have been trying to find correlates in the brain for truth and lies. […] Now a couple of American companies are claiming to be able to do just that. No Lie MRI in Tarzana, Calif., and Cephos Corporation in Pepperell, Mass., use fMRI scanning to uncover deception. No Lie MRI asserts that its technology “represents the first and only direct measure of truth verification and lie detection in human history.” Both companies say that their technology can distinguish lies from truth with an accuracy rate of 90 percent.

[…] What evidence do No Lie MRI and Cephos Corporation offer for their assertion of 90 percent accuracy in detecting lies? A look at the studies cited on No Lie MRI’s website is not reassuring. The company links to one done using 26 right-handed male undergraduates; to another with 22 right-handed male undergraduates; and to a third one with 23 right-handed participants (11 men and 12 women).

Cephos links to just three fMRI studies, one using a total of 61 subjects (29 male and 32 female of whom 52 were right-handed); another using 14 right-handed adults who did not smoke or drink coffee; and a third one that tested 8 men. So adding up the studies cited by these two companies, we get a total of 154 subjects whose brains have been probed for lying in controlled laboratory settings.
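The subject counts quoted above can be totted up directly (study sizes as cited in the article):

```python
# Sample sizes of the studies each company links to, as quoted in the article
no_lie_mri = [26, 22, 23]   # studies cited on No Lie MRI's website
cephos = [61, 14, 8]        # studies cited on Cephos's website

print(sum(no_lie_mri))            # 71 subjects across No Lie MRI's studies
print(sum(cephos))                # 83 subjects across Cephos's studies
print(sum(no_lie_mri + cephos))   # 154 subjects in total
```

A total of 154 laboratory subjects is a strikingly thin evidence base for a 90 percent accuracy claim, which is precisely the article’s point.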

[…] Right now its accuracy has not yet been proven beyond a reasonable doubt. Or as Stanford law professor Hank Greeley succinctly put it: “I want proof before this gets used, and proof is not three studies of 40 college students lying about whether or not they are holding the three of spades.”


Disclaimers

Where postings include copyright material, this is used in accordance with Fair Use exemptions. The fair use of a copyrighted work for purposes such as criticism, comment, news reporting, teaching, scholarship, or research is not generally considered an infringement of copyright. However, if you wish to use this copyrighted material for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.

Just because I post a link to an article here does not necessarily imply that I endorse the content. You should exercise your own critical judgement when assessing the reliability and truthfulness of some of the reports, particularly those forwarded from news services.

Photos make the site more engaging and celebrate the talents of some wonderful amateur (and professional) photographers who upload their work to Flickr and similar sites. I always try to credit the photographer (unless they are my own photos!) and only use pictures that have been licensed under a Creative Commons license. If I have used your photograph and you are unhappy about it for whatever reason, please let me know and I will of course remove it.