The paper, accepted to appear in a computer vision conference workshop next month and detailed in Jack Clark’s ImportAI newsletter, shows that identifying people covering their faces is
possible, but there’s a long way to go before it’s accurate enough to be relied upon. Researchers used a deep-learning algorithm—a flavor of artificial intelligence that detects patterns within massive amounts of data—to find specific points on a person’s face and analyze the distance between those points. When asked to compare a face concealed by a hat or scarf against photos of five people, the algorithm was able to correctly identify the person 56% of the time. If the face was also wearing glasses, that number dropped to 43%.

But those imperfect results don’t mean the paper should be ignored. The team, with members from the University of Cambridge, India’s National Institute of Technology, and the Indian Institute of Science, also released two datasets of disguised and undisguised faces for others to test and improve the technology. (Data has been shown to be a key component for driving progress in the field of AI; when deep-learning algorithms have more data to analyze, they can identify patterns in the data with greater accuracy.)

AI analyzes 14 points on a face to identify a person.


The algorithm for identifying disguised faces maps 14 points on a person’s face, and then uses the distance between those points to identify them again in other images. The points mainly focus on the area around the eyes, but also designate the tip of the nose and corners of the mouth. (The placement of these markers indicates that large glasses alone might confuse the algorithm, but the paper’s authors don’t disclose the accuracy of a glasses-only test.)
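The point-and-distance idea described above can be sketched in a few lines. This is a simplified illustration, not the paper's actual method: the synthetic coordinates, the pairwise-distance feature vector, and the nearest-neighbour comparison are all assumptions made for the example.

```python
import math
from itertools import combinations

def landmark_distances(points):
    """Build a feature vector of pairwise Euclidean distances between
    facial landmarks. For 14 points this yields 14 * 13 / 2 = 91 distances."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def identify(probe_points, gallery):
    """Return the name in `gallery` (a dict of name -> landmark points)
    whose distance vector is closest to the probe's, i.e. a simple
    nearest-neighbour match on the distance features."""
    probe_vec = landmark_distances(probe_points)
    return min(
        gallery,
        key=lambda name: math.dist(probe_vec, landmark_distances(gallery[name])),
    )
```

Because the features are distances *between* points rather than raw positions, the comparison is unaffected if the whole face shifts within the frame, which is one reason landmark distances are a common representation in face matching.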

Glasses have been researched before as a way to evade facial recognition. A team of Carnegie Mellon researchers printed glasses with a pattern custom-built to fool facial-recognition algorithms into misidentifying the wearer as someone else, such as a celebrity. Other attempts at beating identifying algorithms have included custom scarves with patterns that look, to machines, like human faces.

But faces aren’t the only way to identify a person: other AI research indicates that the way a person walks is almost like a fingerprint. Researchers achieved more than 99% accuracy when using a deep-learning algorithm to distinguish one person’s gait from those of 154 others.

The new research skips past generic fear-mongering about artificial intelligence to get more directly at the realistic implications of AI systems being developed and used by unchecked or authoritarian government powers. Facial-recognition technology that could bypass disguises, for example, would be immensely useful for identifying political dissidents or protestors.

As technology writer and New York Times opinion contributor Zeynep Tufekci tweeted, “Too many worry about what AI—as if some independent entity—will do to us. Too few people worry what *power* will do *with* AI.”
