Brain Scan Lie-Detection Deemed Far From Ready for Courtroom

A landmark decision has excluded fMRI lie-detection evidence from a federal court case in Tennessee.

The defense sought to use brain scans of the defendant to show he had not intentionally defrauded the government. In a 39-page opinion, Judge Tu Pham delivered both a rebuke of this kind of fMRI evidence as it stands today and a roadmap for how future defendants might satisfy the Daubert standard, which governs the admissibility of scientific evidence.

“It has no automatic binding force on any other court, but because it’s been so carefully done, it will very likely carry a lot of persuasive value,” said Owen Jones, a professor of law and biological sciences at Vanderbilt University, who observed the entire hearing.

The specific facts of the Tennessee case revolve around whether defendant Lorne Semrau, CEO of two nursing home facilities, intentionally had his employees fraudulently fill out Medicare and Medicaid forms. Semrau claims he acted in good faith and that the government directions were unclear; the government argues his companies made an extra $3 million by marking up a variety of services beyond their assigned value. The brain scans were intended to show Semrau is telling the truth today about his behavior in the past.

As Jones pointed out to Wired.com in May, with the fMRI scans, “the defense is attempting to introduce evidence of the brain’s current assessment of the brain’s former mental state.”

To get the brain scans into federal court, the evidence had to meet the Daubert standard, named for the 1993 Supreme Court case that established rules for scientific testimony. Daubert has multiple prongs, but they don’t form a literal checklist: judges are allowed to examine the evidence holistically.

Judge Pham, who presided over this evidentiary hearing, summarized his reading of Daubert: Reasonable tests to apply and ideas to consider include “(1) whether the theory or technique can be tested and has been tested; (2) whether the theory or technique has been subjected to peer review and publication; (3) the known or potential rate of error of the method used and the existence and maintenance of standards controlling the technique’s operation; and (4) whether the theory or method has been generally accepted by the scientific community.”

In walking through the use of fMRI in the case, the judge highlighted multiple areas where it fell short of the standard. First, he called attention to the difficulty of extrapolating from laboratory studies of lying, where the consequences of being caught are nonexistent, to a real-world situation like the Semrau case.

“While it is unclear from the testimony what the error rates are or how valid they may be in the laboratory setting, there are no known error rates for fMRI-based lie detection outside the laboratory setting, i.e. in the ‘real-world’ or ‘real-life’ setting,” Pham wrote in his decision.

But Pham did not take his criticism too far. He could imagine, he wrote, that even if we didn’t know how well fMRI worked in the real world, it could still be deemed admissible.

“The court notes that potential or known error rates is but one factor under the Daubert analysis,” Pham wrote, “and that in the future, should fMRI-based lie detection undergo further testing, development, and peer review, improve upon standards controlling the technique’s operation, and gain acceptance by the scientific community for use in the real world, this methodology may be found to be admissible even if the error rate is not able to be quantified in a real world setting.”

More damaging to Semrau’s case was that the neuroscience community has not accepted fMRI lie detection as ready for use in real-world situations. “No doubt in part because of its recent development, fMRI-based lie detection has not yet been accepted by the scientific community,” Pham plainly wrote.

Pham was also less than impressed with the scientific methodology employed by Cephos, the company that conducted the lie-detection test. After Semrau failed one of the two tests he’d agreed to take, Cephos CEO Steven Laken scanned him a third time, claiming that Semrau had been tired.

“Assuming, arguendo, that the standards testified to by Dr. Laken could satisfy Daubert, it appears that Dr. Laken violated his own protocols when he re-scanned Dr. Semrau,” Pham wrote.

On balance, Hank Greely, Stanford law professor and co-director of the Law and Neuroscience Project, did not find Cephos’ case for its product’s scientific accuracy compelling.

“It seems almost laughable that Cephos could parade this as a great method when, in this very case, they tried it three times and got one result twice and the other one once,” Greely wrote in an e-mail to Wired.com. “In the only ‘real world’ test we’ve got evidence about, their accuracy rate was either 66.7 percent or 33.3 percent.”

Finally, there was a small twist at the end of the Tennessee judge’s opinion: he cited a different evidentiary standard, entirely outside the scientific realm, as a second basis for excluding the evidence. Rule 403 of the Federal Rules of Evidence provides for the exclusion of evidence “on Grounds of Prejudice, Confusion, or Waste of Time.”

In applying Rule 403 to this case, Pham compared Semrau’s situation to the case law surrounding polygraphs obtained by defendants unilaterally, saying they presented “similar issues.” In those cases, courts did not look kindly on tests performed solely to bolster the credibility of a witness without both prosecution and defense having been involved.

“Dr. Semrau risked nothing in having the testing performed, and Dr. Laken himself testified that had the results not been favorable to Dr. Semrau, they would have never been released,” Pham noted.

Furthermore, as the judge noted by quoting extensively from the prosecution’s cross-examination, Cephos claims only to be able to offer a general impression of whether someone is being deceptive. Although the test asks dozens of individual questions, Laken admitted that his company’s method cannot tell whether someone is lying or telling the truth on any specific one of them.

That is to say, Laken would not say that Semrau was telling the truth in response to a question like, “Did you enter into a scheme to defraud the government by billing for AIMS tests conducted by psychiatrists under CPT Code 99301?” but was willing to say that Semrau was “more overall” telling the truth.

Given the slipperiness of that method, “the court fails to see how his testimony can assist the jury in deciding whether Dr. Semrau’s testimony is credible,” Pham concluded.

“That’s a really interesting critique of the Cephos method — and one that none of us had really noticed before this testimony because we hadn’t realized that Laken would say that he couldn’t give an opinion on individual questions,” Greely said. “If that’s Laken’s final position, it makes a courtroom use of this technology seem unlikely.”

All in all, the decision found multiple instances where fMRI evidence did not meet the standards of evidence in the United States. While that’s a victory for opponents of the courtroom use of fMRI, like Greely, it might also offer proponents a clear path to shoring up the use of lie-detection scans.

“There will certainly be further litigation over fMRI lie detection in future cases. I expect that the companies marketing this research for forensic purposes will likely conduct new tests in light of the report recommendation to address some of the articulated weaknesses,” Jones said.