A HIGH-profile court case in Massachusetts is once again casting doubt on the claimed infallibility of fingerprint evidence. If the case succeeds it could open the door to numerous legal challenges.

The doubts follow cases in which the testimony of fingerprint examiners has turned out to be unreliable. The most high-profile mistake involved Brandon Mayfield, a Portland lawyer, who was incorrectly identified from a print recovered at the scene of the Madrid terrorist bombings of 11 March 2004. Despite three FBI examiners plus an external expert agreeing on the identification, Spanish authorities eventually matched the print to an Algerian.

Likewise, Stephan Cowans served six years in a Massachusetts prison for shooting a police officer before being released last year after the fingerprint evidence on which he had been convicted was trumped by DNA.

No one disputes that fingerprinting is a valuable and generally reliable police tool, but despite more than a century of use, fingerprinting has never been scientifically validated. This is significant because of the criteria governing the admission of scientific evidence in the US courts.

The so-called Daubert ruling introduced by the Supreme Court in 1993 set out five criteria for admitting expert testimony. One is that forensic techniques must have a known error rate, something that has never been established for fingerprinting.

The reliability of fingerprinting is at the centre of an appeal which opened earlier this month at the Massachusetts Supreme Court in Boston. Defence lawyers acting for Terry Patterson, who was convicted of murdering an off-duty policeman in 1993, have launched a so-called "interlocutory" appeal midway through the case itself to test the admissibility of fingerprinting. Patterson's conviction relies heavily on prints found on a door of the vehicle in which the victim died.

A key submission to the appeal court is a dossier signed by 16 leading fingerprint sceptics, citing numerous reasons for challenging the US Department of Justice's long-standing contention that fingerprint evidence has a "zero error rate", and so is beyond legal dispute. Indeed, fingerprint examiners have to give all-or-nothing judgements. The International Association for Identification, the oldest and largest professional forensic association in the world, states in a 1979 resolution that any expert giving "testimony of possible, probable or likely [fingerprint] identification shall be deemed to be engaged in conduct unbecoming".

Material in the dossier includes correspondence sent to New Scientist in 2004 by Stephen Meagher of the FBI's Latent Fingerprint Section in Quantico, Virginia, author of a pivotal but highly controversial study backing fingerprinting. The so-called "50K study" took a set of 50,000 pre-existing images of fingerprints and compared each one electronically against the whole of the data set, producing a grand total of 2.5 billion comparisons. It concluded that the chances of each image being mistaken for any of the other 49,999 images were vanishingly small, at 1 in 10⁹⁷.

But Meagher's study continues to be severely criticised. Critics say that showing an image is more like itself than other similar images is irrelevant. The study does not mimic what happens in real life, where messy, partial prints from a crime scene are compared with inked archive prints of known criminals.

When New Scientist highlighted these issues in 2004 (31 January 2004, p 6), Meagher's response to our questions arrived too late for publication. He wrote that critics misunderstood the purpose of his study, which sought to establish that individual fingerprints are effectively unique - unlike any other person's print. "This is not a study on error rate, or an effort to demonstrate what constitutes an identification," he wrote (the letter can be read at www.newscientist.com/article.ns?id=dn7983). By the time New Scientist went to press, the FBI had not responded to our requests for comment.

But critics of fingerprinting have seized on this admission and included it in the dossier as evidence that the 50K study doesn't back up the infallibility of fingerprinting. "It shows that the author of the study says it doesn't have anything to do with reliability," says Simon Cole, a criminologist at the University of California, Irvine and one of the 16 co-signatories of the dossier.

Cole says that Meagher's replies to New Scientist demolish claims by the courts, the FBI and prosecution lawyers that the 50K study is evidence of infallibility. He says the letter has already helped to undermine fingerprint evidence in a recent case in New Hampshire.

Whatever the decision in the Patterson case, the pressure is building for fingerprinting's error rate to be scientifically established.

One unpublished study may go some way to answering the critics. It documents the results of exercises in which 92 students with at least one year's training had to match archive and mock "crime scene" prints. Only two out of 5861 of these comparisons were incorrect, an error rate of 0.034 per cent. Kasey Wertheim, a private consultant who co-authored the study, told New Scientist that the results have been submitted for publication.

But evidence from qualified fingerprint examiners suggests a higher error rate. It comes from proficiency tests cited by Cole in the Journal of Criminal Law & Criminology (vol 93, p 985). From these he estimates that false matches occurred at a rate of 0.8 per cent on average, and in one year were as high as 4.4 per cent. Even if the lower figure is correct, it would equate to 1900 mistaken fingerprint matches in the US in 2002 alone.

Examiners' objectivity called into question

Fingerprint examiners can be heavily influenced by external factors when making judgements, according to research in which examiners were duped into thinking matching prints actually came from different people.

The study, by Itiel Dror and Ailsa Péron at the University of Southampton, UK, suggests that subjective bias can creep into situations in which a match between two prints is ambiguous. So influential can this bias be that experts may contradict evidence they have previously given in court. "I think it's pretty damning," says Simon Cole, a critic of fingerprint evidence at the University of California, Irvine.

Dror and Péron arranged for five fingerprint examiners to determine whether a "latent" print matched an inked exemplar obtained from a suspect. A latent print is an impression left at a crime scene and visualised by a technique such as dusting. The examiners were also told by a colleague that these prints were the same ones that had notoriously and incorrectly been matched by FBI fingerprint examiners last year in the investigation into the Madrid bombings. That mismatch led to Portland lawyer Brandon Mayfield being incorrectly identified as one of the bombers.

What the examiners didn't know was that the prints were not from the bombing case at all. Each pair of prints, different for each examiner, had previously been presented in court by that same expert as a definite match.

Yet in the experiment only one of the experts correctly deemed their pair a match. "The other four participants changed their identification decision from the original decision they themselves had made five years earlier," says Dror. Three claimed the pair were a definite mismatch, while the fourth said there was insufficient information to make a definite decision. Dror will present the results at the Biometrics 2005 conference in London next month.

One solution, says Cole, might be for each forensics lab to have an independent official who distributes evidence anonymously to the forensic scientists. This would help to rule out any external case-related influences by forcing the scientists to work in isolation, knowing no more about each case than is necessary. At the moment fingerprint examiners asked to verify decisions made by their colleagues do not receive the evidence "blind". They already know the decision colleagues have made.

Paul Chamberlain, a fingerprint examiner with the UK Forensic Science Service who has more than 23 years' experience, says: "The FSS was aware of the need for a more robust scientific approach for fingerprint comparison." But he questions the relevance of the study. "The bias is unusual and it is, in effect, an artificial scenario," he says.