A Libyan rebel soldier flashes the “V for victory” sign as he prepares for battle in the eastern city of Ajdabiya (AFP/Getty Images)

Terrorists and enemy insurgents are difficult to identify because they often conceal their faces with scarves and masks. A new algorithm has shown surprising promise in identifying individuals by their characteristic “V for victory” signs.

Fingerprints used to be the cutting edge of biometric profiling, but this field has expanded significantly over the years. Today, people can be identified according to the unique shape of their bodies, including ears, eyes, nose—and even the shape of their ass. It’s also possible to glean a person’s identity based on the quality of their voice and the manner in which they walk.

Now, thanks to the work of Ahmad Hassanat from Jordan’s Mu’tah University, we can add another biosignature to the list: the “V for victory” sign. A pre-print of the study can now be found on arXiv.

It might seem random, but as Technology Review points out, one of the more terrifying images of the 21st century “is the image of a man in desert or army fatigues making a ‘V for victory’ sign with raised arm while standing over the decapitated body of a Western victim.” Hyperbolic, for sure, but its point is well taken. The faces of these troublemakers are typically obscured, making identification difficult. That’s where the new biometric tool comes in.


Image: A. B. A. Hassanat et al., 2016

To develop the tool, Hassanat and his colleagues recruited 50 volunteers and photographed each one making a V sign with the right hand. From each image, they extracted five feature points: the tips of the two raised fingers, the lowest point of the “valley” between them, and two points on the palm. They then computed various measurements from these points, including the shapes of the triangles the points form, and used a statistical technique to distill 16 features useful for identification. Finally, a machine-learning algorithm was trained to recognize different V signs.
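To get a feel for the kind of geometry involved, here is a minimal sketch of extracting hand-shape features from five landmark points. This is an illustrative assumption, not the paper’s actual method: the specific distances, triangle areas, and normalization below are invented for the example, and the real study distilled its measurements into 16 features.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def triangle_area(a, b, c):
    """Area of the triangle a-b-c via the shoelace formula."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

def v_sign_features(index_tip, middle_tip, valley, palm_a, palm_b):
    """Hypothetical geometric features from five V-sign landmarks.

    The choice of features here (pairwise distances plus a few triangle
    areas, normalized by palm width) is an assumption for illustration.
    """
    pts = [index_tip, middle_tip, valley, palm_a, palm_b]
    # All pairwise distances between the five landmark points (10 values)
    dists = [dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]
    # Areas of triangles formed by the fingertips, valley, and palm points
    areas = [
        triangle_area(index_tip, middle_tip, valley),
        triangle_area(index_tip, valley, palm_a),
        triangle_area(middle_tip, valley, palm_b),
    ]
    # Normalize by palm width so features are roughly scale-invariant,
    # i.e. insensitive to how far the hand is from the camera
    scale = dist(palm_a, palm_b)
    return [d / scale for d in dists] + [a / scale ** 2 for a in areas]
```

Because the features are ratios, the same hand photographed closer or farther away yields (ideally) the same feature vector, which is one way a system like this could cope with varying distance. A classifier would then be trained on these vectors, one labeled set per person.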

The technique allowed the researchers to identify individuals with accuracy ranging from 40 to 90 percent, depending on the quality of the image and variables such as distance.


It’s a promising start. But Hassanat and his colleagues will have to show that the tool works on a much larger dataset and that it doesn’t produce too many false positives (which seems pretty likely). Finally, given that this is now a thing, it’s reasonable to assume that people who don’t wish to be identified will simply stop making V signs.