Forensic Examiners Pass the Face-Matching Test

The first study to test the skills of law enforcement officers trained in facial identification has found that they perform better than both the average person and computer algorithms.

The research also suggests that trained forensic facial examiners identify faces in a different way from the small number of “super-recognisers” who are naturally very good at face matching.

“Super-recognisers tested in previous studies appear to rely on automatic, holistic processes when they compare facial images, but forensic examiners use analytical methods,” says Dr David White of UNSW Australia.

“The examiners’ superiority was greatest when they had a longer time to study the images,” White says, “and they were also more accurate than others at matching faces when the faces were shown upside down. This is consistent with them tuning into the finer details in an image rather than relying on the whole face.”

Because of the increased use of CCTV, images captured on mobile phones and automatic face recognition technology, the comparison of facial images to identify suspects has become an important source of evidence. “These identifications affect the course and outcome of criminal investigations and convictions,” White says. “But despite calls for research on sources of human error in forensic proceedings, the performance of the experts carrying out the face matching had not previously been examined.”

The study, which has been published in the Proceedings of the Royal Society B, tested an international group of 27 forensic facial examiners with many years of experience who were attending a meeting of the Facial Identification Scientific Working Group. The group’s member agencies include the FBI and police, customs and border protection services in the US, Australia and other countries.

The trained experts were given three tests in which they had to decide if pairs of images were of the same person. Their performance was compared with a control group of non-experts who were attending the same meeting, as well as a group of untrained students.

The pairs of images used were so challenging that computer algorithms were 100% wrong on one of the tests. For some of the tests, participants were given either 2 seconds or 30 seconds to decide.

“Overall, our study is good news. It provides the first evidence that these professional examiners are experts at their work. They were consistently more accurate on all tasks than the controls and the students,” White says.

“However, it is important to note that although the tests were challenging, the images were relatively good quality. Faces were captured on high-resolution cameras in favourable lighting conditions, and subjects were looking straight at the camera. This is often not the case when images are extracted from surveillance footage.”