Category Archives: Applications

The APA’s Monitor on Psychology this month has an entertaining and interesting article about how children lie, and how we get better at deceiving as we grow up. Here’s a taster, but you can read the whole thing for free on the APA site here.

…As humans, we are as much defined by our economy with the truth as we are by our cooperation. But that’s not necessarily a bad thing, say psychologists. Lying is a cognitive signal that people understand what others are thinking, the important cognitive milestone known as theory of mind. As children grow older, their lying becomes more sophisticated and takes on the characteristics of their respective cultures, revealing to psychologists rich cognitive properties beneath the deceptively common practice.

Children first begin lying verbally around age 3, the time when language development and the ability to control one’s own mental skills combine to form a child’s theory of mind. Also at this age, children have learned their parents’ rules and the consequences of breaking them. …A child’s initial lies tend to be of the punishment-escaping variety. They’re not yet aware of the moral qualms associated with lying… It’s essentially a logic puzzle to them.

… By age 4, children can reliably tell the difference between harmful lies and little white ones, and they stop lying indiscriminately. But, as any lawyer can tell you, the lies don’t drop out altogether. Instead, children develop lying into a social skill.

The article goes on to describe several recent research studies, including a great experiment by psychologist Victoria Talwar from McGill University which demonstrated how lying sophistication increases with age.

A press release from Blackwell Publishing (28 Nov) highlights a new study coming out in the next issue of the journal Psychophysiology.

In order to prevent false positive results in polygraph examinations, testing is set to err on the side of caution. This protects the innocent, but increases the chances that a guilty suspect will go unidentified. A new study published in Psychophysiology finds that the use of a written test, known as Symptom Validity Testing (SVT), in conjunction with polygraph testing may improve the accuracy of results.

SVT is an independent measure that tests an entirely different psychological mechanism than polygraph examinations. It is based on the rationale that, when presented with both real and plausible-but-unrelated crime information, innocent suspects will show a random pattern of results when asked questions about the crime. SVT has previously been shown as effective in detecting post-traumatic stress disorder, amnesia and other perceptual deficits for specific events.

The study finds that SVT is also an easy and cost-effective method for determining whether or not a suspect is concealing information. In simulated cases of mock crime questioning and feigned amnesia, it accurately detected when a participant was lying.

Furthermore, when used in combination with the preexisting but relatively uncommon concealed information polygraph test (CIT), test accuracy is found to be higher than when either technique is used alone.

“We showed that the accuracy of a Concealed Information Test can be increased by adding a simple pencil and paper test,” says lead author Ewout Meijer of Maastricht University. “When ‘guilty’ participants were forced to choose one answer for each question, a substantial proportion did not succeed in producing the random pattern that can be expected from ‘innocent’ participants.”
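The statistical logic behind the forced-choice format can be illustrated with a toy calculation (a sketch only — the item counts and thresholds below are invented for illustration, not taken from the study): an innocent suspect who genuinely lacks crime knowledge should score at chance when forced to pick one answer per question, so a score far *below* chance suggests the person recognises the correct answers and is deliberately avoiding them.

```python
from math import comb

def below_chance_p(correct: int, n_items: int, n_options: int) -> float:
    """Probability of getting `correct` or fewer items right by pure
    guessing on `n_items` forced-choice questions with `n_options`
    alternatives each (one-tailed binomial, lower tail)."""
    p = 1 / n_options
    return sum(
        comb(n_items, k) * p**k * (1 - p) ** (n_items - k)
        for k in range(correct + 1)
    )

# An innocent guesser on 20 two-alternative items expects ~10 correct.
# Scoring only 4 correct is very unlikely by guessing alone -- the
# below-chance pattern expected from someone avoiding the true answers.
print(below_chance_p(4, 20, 2))   # very small => suspicious
print(below_chance_p(10, 20, 2))  # consistent with guessing
```

The point of the calculation is that failing to look random is itself informative: producing a convincingly random answer pattern on purpose turns out to be hard.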

A press release (2 Nov) heralds the publication of a new study by Professor Sean Spence from the University of Sheffield, who claims the research shows that fMRI “could be used alongside other factors to address questions of guilt versus innocence”. It’s an interesting study on two counts: one, it appears to be the first time that fMRI lie-detection research has been carried out using a real world case (as opposed to contrived experiments), and two, the research was funded by a TV company and featured on a TV documentary earlier this year. The study is currently in press in the journal European Psychiatry (reference below).

An academic at the University of Sheffield has used groundbreaking technology to investigate the potential innocence of a woman convicted of poisoning a child in her care. Professor Sean Spence, who has pioneered the use of functional Magnetic Resonance Imaging (fMRI) to detect lies, carried out groundbreaking experiments on the woman who, despite protesting her innocence, was sentenced to four years in prison. …Using the technology, Professor Spence examined the woman’s brain activity as she alternately confirmed her account of events and that of her accusers. The tests demonstrated that when she agreed with her accusers’ account of events she activated extensive regions of her frontal lobes and also took significantly longer to respond – these findings have previously been found to be consistent with false or untrue statements.

In the acknowledgements section of the paper the authors reveal that the study “was funded by Quickfire Media in association with Channel Four Television”. The case Spence et al. describe as that of “Woman X” was featured in Channel 4’s Lie Lab series (and if you’re really interested, you can easily identify X in a couple of clicks). Although unusual, this isn’t the first time that research featured on TV has found its way into academic journals: see, for example, Haslam and Reicher’s academic publications based on their controversial televised replication of Zimbardo’s Stanford Prison Experiment.

In theory, I am not sure it necessarily matters if a study is done for the TV, if the study is carried out in an ethical and scientific way, and the subsequent article(s) meet rigorous standards of peer review. Nor does it always matter if the academic research then receives wider publicity as a result. In this case, however, I hope that anyone picking up and reporting further on this story reads the actual paper, in which Spence and his co-authors consider carefully the implications of the study and the caveats that should be applied to the results:

To our knowledge, this is the first case described where fMRI or any other form of functional neuroimaging has been used to study truths and lies derived from a genuine ‘real-life’ scenario, where the events described pertain to a serious forensic case. All the more reason then for us to remain especially cautious while interpreting our findings and to ensure that we make explicit their limitations: the weaknesses of our approach (p.4).

The authors go on to discuss alternative interpretations of their results: Perhaps X had told her story so many times that her responses were automatic? Perhaps the emotive nature of the subject under discussion (poisoning a child) gave rise to the observed pattern of activation? Maybe X used countermeasures (such as moving her head or using cognitive distractions)? Perhaps she “has ‘convinced herself’ of her innocence … she answered sincerely though ‘incorrectly’”? In this case, perhaps the researchers have “merely imaged ‘self-deception’” (p.5)? For each argument, the authors discuss the pros and cons, remaining careful not to claim too much for their results, and pointing out that further empirical enquiry is needed.

These cautions are also echoed in Spence’s comments at the end of the press release:

“This research provides a fresh opportunity for the British legal system as it has the potential to reduce the number of miscarriages of justice. However, it is important to note that, at the moment, this research doesn’t prove that this woman is innocent. Instead, what it clearly demonstrates is that her brain responds as if she were innocent.”

Mind Hacks discusses an article in which Raymond Tallis “laments the rise of ‘neurolaw’ where brain scan evidence is used in court in an attempt to show that the accused was not responsible for their actions”.

Julian Boon, Lynsey Gozna and Stephen Hall have a paper forthcoming in the journal Personality and Individual Differences exploring whether it’s possible to ‘fake bad’ on the Gudjonsson Suggestibility Scales (GSS). These tests measure ‘Interrogative Suggestibility’ (IS), which is defined as “the extent to which, within a closed social interaction, people come to accept messages communicated during formal questioning, as a result of which their subsequent behavioural response is affected” (Gudjonsson & Clark, 1986, p. 84). People who are high in IS are more susceptible to making false confessions under interrogative pressure, in a police or military interrogation scenario, for instance. However, as the authors point out, some offenders might be motivated to appear suggestible or vulnerable even if they are not. For instance, if an offender wanted to retract a statement or confession, or “in circumstances where the successful demonstration of vulnerability may lead to a reduction in a fine or sentence or even to escaping a custodial sentence”.

Gudjonsson’s tests for suggestibility are now quite widely known and it’s relatively easy to find information about the procedure on the internet. This presents a problem: the reliability of the GSS depends on test takers being unaware of the purpose of the test. Boon et al. thus explored whether knowledge of the purpose of the test influenced performance, as well as examining the performance of those who deliberately tried to fake bad.

The ‘test aware’ group in this study performed differently on the suggestibility measures compared to the fakers and to a control group who had just been given the standard test. The fakers also produced a different pattern of results. Comparing the fakers’ results on suggestibility measures to the norms for individuals who are mentally impaired revealed that the results were almost identical. However, fakers could be discriminated from genuinely mentally impaired people because they performed better on a test of memory. The authors suggest that this unusual combination of results could be a ‘red flag’ for faking bad:

Specifically, this red flag could be where individuals’ scoring profiles revealed near identical scores on the principal suggestibility measures to those of the intellectually disabled norms, while simultaneously they were scoring significantly higher on the free recall measures.
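The red-flag profile described above amounts to a simple conjunctive decision rule, which can be sketched schematically (all thresholds and score values below are invented for illustration — real use would require properly validated cut-offs against the published GSS norms):

```python
# A schematic decision rule for the 'red flag' profile: suggestibility
# scores near the intellectually disabled norms combined with markedly
# better free recall. Yield and Shift are the principal GSS suggestibility
# measures; the tolerance and margin values here are purely illustrative.
def red_flag(yield_score: float, shift_score: float, recall: float,
             norm_yield: float, norm_shift: float, norm_recall: float,
             tol: float = 1.0, recall_margin: float = 5.0) -> bool:
    matches_norms = (abs(yield_score - norm_yield) <= tol
                     and abs(shift_score - norm_shift) <= tol)
    recall_too_good = recall >= norm_recall + recall_margin
    return matches_norms and recall_too_good

# A profile matching the impaired norms on suggestibility but with much
# better free recall is flagged as possible faking bad.
print(red_flag(10, 9, 22, norm_yield=10, norm_shift=9, norm_recall=8))
```

The key design feature is the conjunction: neither high suggestibility nor good recall is suspicious on its own — it is the implausible combination that does the discriminating.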

The authors report that the interviewer administering the tests didn’t know which conditions participants had been allocated to, but tried to guess. She was only correct 58% of the time, suggesting that participants were good at fooling the interviewer. This shows the value of being able to detect faking via measures in the tests rather than simply relying on the judgement of the interviewer.

The limitations of this study are the usual ones – participants were undergraduate students whose motivation for faking bad is probably rather less than that of real criminals trying to escape a prison sentence.

Stephen Porter and colleagues have a paper in the April 2007 issue of Canadian Journal of Behavioural Science exploring the differences between truthful and fabricated accounts of traumatic experiences.

They examined the written accounts of students fabricating and giving truthful accounts of traumatic events and found that:

… narratives based on false and genuine traumatic events showed several qualitative differences, some contrasting our predictions. Whereas we predicted that participants would be able to produce fabricated events that appeared to be as credible as truthful accounts, we found that fabricated events were rated lower on plausibility by coders with no knowledge of their actual veracity. This suggests that mistakes in the courtroom may result from liars who are able to effectively distract attention from their stories by manipulating their demeanour and speech (e.g., tone) (p.88).

In other words, lie catchers need to focus on what is being said, and try to avoid being misled by non-verbal behaviour.

In addition, attention to specific types of details in the narratives helped to discriminate honesty from deception. When relating a fabricated experience, participants were unable to provide the same level of contextual information as when relating a genuine experience. They provided fewer time and location details and their reports were abbreviated overall, despite our prediction that they may be more detailed in an attempt to make their trauma stories more credible and to elicit sympathy (p.88).

As far as I can see, the following, from the instructions given to participants, was the only attempt to motivate them:

Your goal in this section is to provide a believable (but fabricated) traumatic memory report. These reports will be shown to legal professionals and students (if you consented to this aspect of the study) in future research for them to determine how credible your experience appears (p.83).

It doesn’t appear from the description of the method that participants had much time to prepare their truthful or fabricated accounts. Perhaps it is not surprising, then, that the results did not conform to the researchers’ predictions? Perhaps real life malingerers, with the results of a court case at stake, and time to practice their account, might try harder to make their stories credible, and be better at it?

…genuine and fabricated reports of trauma could be differentiated based on the patterns of traumatic stress or symptoms reported. It was anticipated that symptoms on the three measures of traumatic stress would be exaggerated when participants were fabricating. The results provided strong evidence for this hypothesis (p.88).

Detailed commentary from Patrick Barkham in the Guardian (18 Sept), exploring the use of ‘lie detecting’ machines in the UK. He covers the use of voice stress analysis in benefit offices and insurance companies, and polygraphy for sex offenders. Interesting stuff, and well worth reading in full over on the Guardian site. Here’s a flavour:

… Voice stress analysis systems have been used for more than five years in the British insurance industry but have yet to really catch on, according to the Association of British Insurers. There was an initial flurry of publicity when motor insurance companies introduced the technology in 2001 but it is still “the exception rather than the norm,” says Malcolm Tarling of the ABI. “Not many companies use it and those that do use it in very controlled circumstances. They never use the results of a voice risk analysis alone because the technology is not infallible.”

… Next year, in a pilot study, the government will introduce a mandatory polygraph for convicted sex offenders in three regions. … Professor Don Grubin, a forensic psychiatrist at Newcastle University… admits he was initially sceptical but argues that polygraphs are a useful tool. “We were less concerned about accuracy per se than with the disclosures and the changes in behaviour it encourages these guys to make,” he says. “It should not be seen as a lie detector but as a truth facilitator. What you find is you get markedly increased disclosures. You don’t get the full story but you get more than you had.”

…critics argue that most kinds of lie-detector studies are lab tests, which can never replicate the high stakes of real lies and tend to test technology on healthy individuals (usually students) of above-average intelligence. Children, criminals, the psychotic, the stupid and even those not speaking in their first language (a common issue with benefit claimants) are rarely involved in studies.

Anne Reed has posted a couple of thoughtful pieces on deceptive jurors over at her Deliberations Blog.

In part I of “When Jurors Lie” Anne highlights the extent of the problem of potential jurors lying to get onto a jury. For example:

One of the jurors who convicted Martha Stewart, “by far the most outspoken juror on the panel,” failed to disclose an arrest and three lawsuits against him on a jury questionnaire that reportedly asked for this information. Just in recent months on Deliberations, we’ve had the football fan who faithfully responded when asked who his favorite team was, but said nobody asked him whether he hated the plaintiff Oakland Raiders, which he did; the juror in the California Peregrine trial who at first said he wasn’t reading media coverage of the trial, but later admitted he had; and the blogging lawyer juror who bragged that he’d slipped through voir dire by describing himself as a “project manager,” leaving out the lawyer part.

In part II Anne considers how to recognise a lying juror, pointing out how difficult detecting deceit can be. But she offers some interesting – and above all practical – tips for trying to weed out deceptive jurors during the voir dire process, including asking about experiences and behaviours (rather than about attitudes and beliefs), being aware of your own stereotypes, paying attention to the language jurors use under questioning, and leveraging peer pressure by asking the whole group of potential jurors to buy into the importance of voir dire.

Anne also wonders why lawyers get so twitchy about deceptive jurors. She suggests that it’s

partly because a lying juror puts a lawyer personally at risk, and we take it personally… But I want to suggest there’s something else at work here too, a deep belief we share with all those other jurors who don’t lie. People believe the courtroom is a place where justice happens.

Hat tip to Prof Peter Tillers for pointing us to a paper from Charles Keckler, George Mason University School of Law, on admissibility in court of neuroimaging evidence of deception. Here’s the abstract:

The last decade has seen remarkable progress in understanding ongoing psychological processes at the neurobiological level, progress that has been driven technologically by the spread of functional neuroimaging devices, especially magnetic resonance imaging, that have become the research tools of a theoretically sophisticated cognitive neuroscience. As this research turns to specification of the mental processes involved in interpersonal deception, the potential evidentiary use of material produced by devices for detecting deception, long stymied by the conceptual and legal limitations of the polygraph, must be re-examined.

Although studies in this area are preliminary, and I conclude they have not yet satisfied the foundational requirements for the admissibility of scientific evidence, the potential for use – particularly as a devastating impeachment threat to encourage factual veracity – is a real one that the legal profession should seek to foster through structuring the correct incentives and rules for admissibility. In particular, neuroscience has articulated basic memory processes to a sufficient degree that contemporaneously neuroimaged witnesses would be unable to feign ignorance of a familiar item (or to claim knowledge of something unfamiliar). The brain implementation of actual lies, and deceit more generally, is of greater complexity and variability. Nevertheless, the research project to elucidate them is conceptually sound, and the law cannot afford to stand apart from what may ultimately constitute profound progress in a fundamental problem of adjudication.

“Health, Disability, and Employment Law Implications of MRI” – Stacey Tovino, Hamline University School of Law

From a deception researcher’s point of view, the chance to hear from Steven Laken of commercial fMRI deception detection company Cephos will be particularly interesting.

Mind Hacks also notes that ABC Radio National’s All in the Mind on 23 June featured many of the speakers from this conference in a discussion of neuroscience, criminality and the courtroom. The webpage accompanying this programme has a great reference list. For those interested in deception research, I particularly recommend Wolpe, Foster & Langleben (2005) for an informative overview of the potential uses and dangers of neurotechnologies and deception detection.

If you were a police officer, what sort of interview style would offer you the best chance of detecting whether or not your interviewee was telling lies? Aldert Vrij and his colleagues ran a study to find out:

In Experiment 1, we examined whether three interview styles used by the police, accusatory, information-gathering and behaviour analysis, reveal verbal cues to deceit, measured with the Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM) methods. A total of 120 mock suspects told the truth or lied about a staged event and were interviewed by a police officer employing one of these three interview styles. The results showed that accusatory interviews, which typically result in suspects making short denials, contained the fewest verbal cues to deceit. Moreover, RM distinguished between truth tellers and liars better than CBCA. Finally, manual RM coding resulted in more verbal cues to deception than automatic coding of the RM criteria utilising the Linguistic Inquiry and Word Count (LIWC) software programme.

In Experiment 2, we examined the effects of the three police interview styles on the ability to detect deception. Sixty-eight police officers watched some of the videotaped interviews of Experiment 1 and made veracity and confidence judgements. Accuracy scores did not differ between the three interview styles; however, watching accusatory interviews resulted in more false accusations (accusing truth tellers of lying) than watching information-gathering interviews. Furthermore, only in accusatory interviews, judgements of mendacity were associated with higher confidence. We discuss the possible danger of conducting accusatory interviews.

In the discussion, Vrij and colleagues summarise:

The present experiment revealed that style of interviewing had no effect on overall accuracy (ability to distinguish between truths or lies) or on lie detection accuracy (ability to correctly identify liars). In fact, the overall accuracy rates were low and did not differ from the level of chance. This study, like so many previous studies (Vrij, 2000), thus shows the difficulty police officers face when discerning truths from lies by observing the suspect’s verbal and nonverbal behaviours.

In other words, if law enforcement officers want to increase their chances of detecting deception, they need to make sure interviewers use an information gathering approach. But simply watching that interview (live or on tape) might not help them decide whether or not the suspect is telling the truth – they may need to subject a transcript to linguistic analysis to give themselves the best chance.
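The kind of automatic linguistic coding used in the study can be sketched in miniature (a sketch only — the word lists below are invented placeholders, and the real Reality Monitoring criteria and LIWC dictionaries are far larger and empirically derived): count words from categories associated with genuinely experienced events, such as perceptual and spatial/temporal detail, and compare rates across statements.

```python
import re

# Illustrative mini-lexicons only: real RM/LIWC category dictionaries
# contain many more words and are validated against coded corpora.
CATEGORIES = {
    "perceptual": {"saw", "heard", "felt", "smelled", "bright", "loud"},
    "spatial":    {"behind", "near", "inside", "left", "upstairs"},
    "temporal":   {"then", "before", "after", "while", "morning"},
}

def rm_scores(text: str) -> dict:
    """Rate of category words per 100 tokens -- a crude stand-in for
    automatic Reality Monitoring coding of an interview transcript."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1
    return {
        name: 100 * sum(t in words for t in tokens) / total
        for name, words in CATEGORIES.items()
    }

statement = "I heard a loud bang behind the house, then I saw smoke."
print(rm_scores(statement))
```

Even this toy version shows why a transcript is needed: the cues being counted are properties of the words themselves, invisible to someone attending to demeanour.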

Even if it doesn’t result in better ‘live’ judgements of veracity, an information gathering approach has another advantage for the law enforcement officer: it maximises the number of checkable facts elicited from the suspect, and being able to check a fact against the truth is pretty much the most effective means of uncovering false information. Of course, someone can provide false information without deliberately lying: if they have misremembered something, for instance, or are passing on something that someone else lied to them about. But then the point of any law enforcement interview is to get to the truth, which is a higher goal than simply uncovering a liar, in my opinion.

As always with lab-based studies, there are some limitations. Vrij et al., for instance, acknowledge that “in practice elements of all three styles may well be incorporated in one interview” but explain that “we distinguished between the three styles in our experiments because we can only draw conclusions about the effects of such styles by examining them in their purest form”.

Further problems, which are difficult to overcome in structured lab settings, arise because participants were assigned randomly to ‘guilty’ (liars) or ‘innocent’ (truth tellers) conditions. In the real world, individuals who are prepared to put themselves in a situation in which they might later have to lie may differ in their ability to lie effectively from those who try to stay out of such situations. And real guilty suspects make a decision about whether they are going to lie (a few confess from the start, others will offer partial or whole untruths). It’s an issue that is open to empirical test: let participants choose whether they want to be in the ‘guilty’ or ‘innocent’ conditions (or have four conditions: guilty choice/guilty no choice/innocent choice/innocent no choice).

Also, in this study the liars were told what lie to tell (as opposed to being able to make one up). Real guilty suspects who decide to lie will presumably choose a lie that they think they stand a good chance of being able to get away with. In real world conditions, the perception by the guilty individual of what sort of situation they’re in, the evidence against them, the plausible story they can tell to explain away the evidence, and their ability to lie effectively are probably all important.

Dr. Vasudevi Reddy from the University of Portsmouth has garnered a fair amount of publicity for a study that “identified seven categories of deception used between six months and three-years-old”, according to the Daily Telegraph (1 July), which also reveals:

Whether lying about raiding the biscuit tin or denying they broke a toy, all children try to mislead their parents at some time. Yet it now appears that babies learn to deceive from a far younger age than anyone previously suspected. Behavioural experts have found that infants begin to lie from as young as six months. Simple fibs help to train them for more complex deceptions in later life. Until now, psychologists had thought the developing brains were not capable of the difficult art of lying until four years old.

[…] Infants quickly learnt that using tactics such as fake crying and pretend laughing could win them attention. By eight months, more difficult deceptions became apparent, such as concealing forbidden activities or trying to distract parents’ attention. By the age of two, toddlers could use far more devious techniques, such as bluffing when threatened with a punishment.

[…] We still do not have a full picture of the development of deceptive actions in human infants and toddlers or an explanation of why it emerges. This paper applies Byrne & Whiten’s functional taxonomy of tactical deception to the social behaviour of human infants and toddlers using data from three previous studies. The data include a variety of acts, such as teasing, pretending, distracting and concealing, which are not typically considered in relation to human deception. This functional analysis shows the onset of non-verbal deceptive acts to be surprisingly early. Infants and toddlers seem to be able to communicate false information (about themselves, about shared meanings and about events) as early as true information. It is argued that the development of deception must be a fundamentally social and communicative process and that if we are to understand why deception emerges at all, the scientist needs to get ‘back to the rough ground’ as Wittgenstein called it and explore the messy social lives in which it develops.

In a quick search I couldn’t find the “three previous studies” referred to, but I did find some other work from Reddy that indicates she has been conducting research in this area for some time. This is from the abstract of Newton, Reddy & Bull, 2000:

…the deceptions of a 2½-year-old child over a 6-month period were shown to be varied, flexible, context appropriate and too complex to be ‘blind’ learned strategies. It is argued that children’s deceptive skills develop from pragmatic need and situational exigencies rather than from conceptual developments; they may learn to lie in the same way as they learn to speak.

Wow. Mind Hacks is right. A great article from the New Yorker on fMRI and deception detection. Here’s a little snippet but as the article is freely available online you should really head on over there and read the whole thing:

To date, there have been only a dozen or so peer-reviewed studies that attempt to catch lies with fMRI technology, and most of them involved fewer than twenty people. Nevertheless, the idea has inspired a torrent of media attention, because scientific studies involving brain scans dazzle people, and because mind reading by machine is a beloved science-fiction trope, revived most recently in movies like “Minority Report” and “Eternal Sunshine of the Spotless Mind.” Many journalistic accounts of the new technology—accompanied by colorful bitmapped images of the brain in action—resemble science fiction themselves.

And later, commenting on University of Pennsylvania psychiatrist Daniel Langleben’s studies that kicked off the current fMRI-to-detect-deception craze:

Nearly all the volunteers for Langleben’s studies were Penn students or members of the academic community. There were no sociopaths or psychopaths; no one on antidepressants or other psychiatric medication; no one addicted to alcohol or drugs; no one with a criminal record; no one mentally retarded. These allegedly seminal studies look exclusively at unproblematic, intelligent people who were instructed to lie about trivial matters in which they had little stake. An incentive of twenty dollars can hardly be compared with, say, your freedom, reputation, children, or marriage—any or all of which might be at risk in an actual lie-detection scenario.

A press release from the Association for Psychological Science (13 June) draws attention to research by Elke Geraerts, a psychology postdoc at Harvard and Maastricht Universities. Geraerts and her colleagues have a paper coming out next month in Psychological Science, presenting results of research on the accuracy of ‘recovered memories’, described in the press release as “one of the most contentious issues in the fields of psychology and psychiatry”.

The press release explains:

A decade or so ago, a spate of high profile legal cases arose in which people were accused, and often convicted, on the basis of “recovered memories.” These memories, usually recollections of childhood abuse, arose years after the incident occurred and often during intensive psychotherapy. […]

[…] Recovered memories are inherently tricky to validate for several reasons, most notably because the people who hold them are thoroughly convinced of their authenticity. Therefore, to maneuver around this obstacle Geraerts and her colleagues attempted to corroborate the memories through outside sources.

The researchers recruited a sample of people who reported being sexually abused as children and divided them based on how they remembered the event. […] The results […] showed that, overall, spontaneously recovered memories were corroborated about as often (37% of the time) as continuous memories (45%). Thus, abuse memories that are spontaneously recovered may indeed be just as accurate as memories that have persisted since the time the incident took place. Interestingly, memories that were recovered in therapy could not be corroborated at all.

The July issue of Psychological Science isn’t online yet, but the paper is available as a pdf on Geraerts’ website – access via the link below.

The development of lying to conceal one’s own transgression was examined in school-age children. Children (N = 172) between 6 and 11 years of age were asked not to peek at the answer to a trivia question while left alone in a room. Half of the children could not resist temptation and peeked at the answer. When the experimenter asked them whether they had peeked, the majority of children lied. However, children’s subsequent verbal statements, made in response to follow-up questioning, were not always consistent with their initial denial and, hence, leaked critical information to reveal their deceit. Children’s ability to maintain consistency between their initial lie and subsequent verbal statements increased with age. This ability is also positively correlated with children’s 2nd-order belief scores, suggesting that theory of mind understanding plays an important role in children’s ability to lie consistently. (c) 2007 APA, all rights reserved

Last month we reported that, according to a study by Leif A. Strömwall, Pär Anders Granhag and Sara Landström, by ages 11-14 children are able to deceive adults 54% of the time when given the chance to prepare their lies (and even when they can't prepare, the figure is 43% …).

Plants may not malinger, but people often do. In the latest issue of the Journal of Forensic Sciences (vol 52, no. 3, May 2007), Ryan Hall and Richard Hall discuss research on detecting malingered PTSD. From the abstract:

Posttraumatic stress disorder (PTSD) is a condition that can be easily malingered for secondary gain. For this reason, it is important for physicians to understand the phenomenology of true PTSD and indicators that suggest an individual is malingering. This paper reviews the prevalence of PTSD for both the general population and for specific events, such as rape and terrorism, to familiarize evaluators with the frequency of its occurrence. The diagnostic criteria for PTSD, as well as potential ambiguities in the criteria, such as what constitutes an exposure to a traumatic event, are reviewed. Identified risk factors are reviewed as a potential way to help differentiate true cases of PTSD from malingered cases. […] We then examine how the clinician can use the clinical interview (e.g., SIRS, CAPS), psychometric testing, and the patient’s physiological responses to detect malingering. […] The review includes a case, which shows how an individual used symptom checklist information to malinger PTSD and the inconsistencies in his story that the evaluator detected. We conclude with a discussion regarding future diagnostic criteria and suggestions for research, including a systematic multifaceted approach to identify malingering.

… according to a press release from the Economic and Social Research Council (7 June):

Shifting uncomfortably in your seat? Stumbling over your words? Can’t hold your questioner’s gaze? Police interviewing strategies place great emphasis on such visual and speech-related cues, although new research funded by the Economic and Social Research Council and undertaken by academics at the University of Portsmouth casts doubt on their effectiveness. However, the discovery that placing additional mental stress on interviewees could help police identify deception has attracted interest from investigators in the UK and abroad.

[…] In a series of experiments involving over 250 student ‘interviewees’ and 290 police officers, interviewees either lied or told the truth about staged events. Police officers were then asked to tell the liars from the truth tellers using the recommended strategies. Those paying attention to visual cues proved significantly worse at distinguishing liars from those telling the truth than those looking for speech-related cues.

[…] However, the picture changed when researchers raised the ‘cognitive load’ on interviewees by asking them to tell their stories in reverse order. Professor Aldert Vrij explained: “Lying takes a lot of mental effort in some situations, and we wanted to test the idea that introducing an extra demand would induce additional cues in liars. Analysis showed significantly more non-verbal cues occurring in the stories told in this way and, tellingly, police officers shown the interviews were better able to discriminate between truthful and false accounts.”

Asking an interviewee to tell their story in reverse order is not a new interview technique – it’s one of the techniques used in the Cognitive Interview, more usually deployed to get maximum detail in statements from victims and witnesses.

There are also detailed articles in the UK Times and Daily Telegraph newspapers based on (and building on) this press release.

More details, and links to downloadable reports, are available on the ESRC website via this link.

The Self-Appraisal Questionnaire (SAQ, Loza, 2005) is a self-report measure designed to predict violent and nonviolent recidivism. According to the authors of this study, published in the latest issue of the Journal of Interpersonal Violence, the SAQ has been shown, in several studies, to be valid in a variety of different “populations, settings, cultures, gender, and age groups” (p.672).

Despite this, some researchers wonder about the validity of the SAQ, because it is a self-report measure:

Such doubts arise from beliefs that self-report measures have inferior validity relative to professionally rated measures (Kroner & Loza, 2001) and a widespread belief that self-report questionnaires are more susceptible to lying and deception, especially when used in offending populations (p.673).

So the authors set out to investigate whether the SAQ is vulnerable to manipulation by offenders:

Two studies were conducted to investigate the vulnerability of the Self-Appraisal Questionnaire (SAQ) to deception and self-presentation biases. […] In the first study, comparisons were made between 429 volunteer offenders who completed the SAQ for research purposes and 75 offenders who completed the SAQ as a part of the psychological assessments process required for consideration for early release. In the second study, 106 participants over two sessions completed the SAQ and the Balanced Inventory of Desirable Responding. Participants completed both measures under two separate sets of instructions: (a) Answers would be used for research purposes, and (b) answers would be used for making decisions about their release to the community [from the abstract].

Their results suggest that the SAQ is not vulnerable to manipulation:

The nonsignificant differences between the SAQ scores of the participants who completed the SAQ for research and those of the participants who completed the SAQ for decision on release purposes in two separate studies provide further support to the previously reported findings that the SAQ as a self-report measure [is] not affected by deception and self-biases (p.679).

If podcasts are your thing you can also listen to an interview with Ken Alder, author of a new book on the polygraph, on the Bat Segundo show (mp3). As the Anti-Polygraph Blog points out, you have to sit through a little silliness first…

An article in the March 2007 issue of Sexual Abuse: A Journal of Research and Treatment presents the results of an experimental comparison between child molesters’ responses on a questionnaire and their responses when attached to a fake lie detector known as a ‘bogus pipeline’. Here’s the abstract:

Questionnaires are relied upon by forensic psychologists, clinicians, researchers, and social services to assess child molesters’ (CMs’) offense-supportive beliefs (or cognitive distortions). In this study, we used an experimental procedure to evaluate whether extrafamilial CMs underreported their questionnaire-assessed beliefs. At time one, 41 CMs were questionnaire-assessed under standard conditions (i.e., they were free to impression manage). At time two, CMs were questionnaire-assessed again; 18 were randomly attached to a convincing fake lie detector (a bogus pipeline), the others were free to impression manage. The results showed that bogus pipeline CMs significantly increased cognitive distortion endorsements compared to their own previous endorsements, and their control counterparts’ endorsements. The findings are the first experimental evidence showing that CMs consciously depress their scores on transparent questionnaires.

Deliberations, a blog about “Law, news, and thoughts on juries and jury trials” has been keeping my attention since its launch in February.

Anne Reed, a trial lawyer and jury consultant from Wisconsin, posts regularly on research and news relating to juries and court cases. If you have an interest in the psychology of juries – or even just a more general interest in forensic psychology – it’s well worth adding to your list of required reading.


Disclaimers

Where postings include copyright material, this is used in accordance with Fair Use exemptions. The fair use of a copyrighted work for purposes such as criticism, comment, news reporting, teaching, scholarship, or research is not generally considered an infringement of copyright. However, if you wish to use this copyrighted material for purposes of your own that go beyond ‘fair use,’ you must obtain permission from the copyright owner.

Just because I post a link to an article here does not necessarily imply that I endorse the content. You should exercise your own critical judgement when assessing the reliability and truthfulness of some of the reports, particularly those forwarded from news services.

Photos make the site more engaging and celebrate the talents of some wonderful amateur (and professional) photographers who upload their work to Flickr and similar sites. I always try to credit the photographer (unless they are my own photos!) and only use pictures that have been licensed under a Creative Commons license. If I have used your photograph and you are unhappy about it for whatever reason, please let me know and I will of course remove it.