Facial Recognition is getting really accurate, and we have not prepared

There’s reason to believe facial recognition software is getting very accurate. According to Facial Recognition Software Advances Trigger Worries, a WSJ article by Laura Mills, a Russian company called NTechLab has built software that “correctly matches 73% of people to large photo database.” The statistic comes from a test in which celebrities were recognized in a database of a million pictures.

Now comes the creepy part. The company’s founders, two 20-something Russian tech dudes, are not worried about the ethics of their algorithms. Here are their reasons:

Because it’s already too late to worry. In the words of one of the founders, “There is no private life.”

They don’t need to draw a line in the sand about who gets this technology, because “we don’t receive requests from strange people.”

Also, the technology should be welcomed rather than condemned because, as one of the founders put it, “There is always a conflict between progress and some scared people. But in any way, progress wins.”

Thanks for the assurance!

Let’s compare those reasons not to worry with the reasons we do have to worry, which include:

The founders are in negotiations to sell their products to state-affiliated security firms from China and Turkey.

Moscow’s city government is planning to install NTechLab’s technology on security cameras around the city.

They were already involved in a scandal in which people used their software to identify and harass women who had allegedly acted in pornographic films online in Russia.


I was working on facial recognition in my machine vision class back at UNCW in 2005 – 2007. It was pretty awesome back then. Back in those days we had a pretty decent success rate. Our biggest problems were introducing facial changes such as beards and scars. But even back then we could take a picture of the right side of your face and have the software derive the left side of your face for a complete picture. Remember, that was 2005 – 2007. An eternity in the computer world. So I find Mathbabe’s first sentence confusing. It should be changed to “Facial recognition is getting super accurate.”

People have been working on facial recognition software for a long time. A lot of research money has been invested in it. Now that it is actually starting to become effective, it will have profound effects on our society. The technology will probably be sold to security and law enforcement, and advertising companies. We probably won’t know how it is used until after it is well established. By then it will be very difficult to change.

US police already regularly monitor social media sites, particularly during protests. For example, the analytics firm Geofeedia was in the news just today after cops used it to identify and arrest protesters in Oakland and Baltimore.

But you can 3d print yourself a new face and an AK-47, so it’s kind of a draw isn’t it?

Are you aware of Geoffrey West’s work at the Santa Fe Institute on why businesses die but cities don’t? Under the hypotheses of his work, constant economic growth requires exponentially accelerating progress, which implies that our legal systems, with their at-best linear capacity to respond, are doomed to fail.

The day will come, for privacy reasons, when people start wearing masks in public of their favorite or most detested person. Imagine going shopping and only seeing the faces of Donald Trump or Hillary Clinton everywhere you look.

This does seem worrisome, but I’m skeptical of the accuracy in real-world conditions. “Celebrities recognized in a database of a million pictures” sounds like an easier task, with lots of training data per person. If facial recognition really does work well, I’ll look forward to a Google Glass-type device that can remind me of the names of people I see out and about.
With that said, the guys’ assurances don’t seem credible. Moreover, misleading accuracy statistics could make things worse if the system were actually deployed: e.g., even if it’s supposedly improved to 90% accuracy, a large number of people could still be misrecognized as someone they don’t want to be associated with.
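The base-rate worry in that comment can be made concrete with a quick back-of-the-envelope calculation. A minimal sketch, with made-up numbers (these are illustrative assumptions, not NTechLab’s published figures): even a system with a 90% hit rate and a tiny per-comparison false-match rate produces so many false matches against a million-face database that any individual “match” is almost certainly wrong.

```python
# Back-of-the-envelope: false matches when searching a large face database.
# All numbers here are illustrative assumptions, not NTechLab's figures.

database_size = 1_000_000      # faces in the gallery
true_match_rate = 0.90         # chance the right person is flagged as a match
false_match_rate = 0.001       # chance any single wrong face is flagged (0.1%)

# Searching for one person: how many wrong faces get flagged on average?
expected_false_matches = false_match_rate * (database_size - 1)
print(f"Expected false matches per search: {expected_false_matches:.0f}")

# Even with a 90% hit rate, a given flagged face is almost never the target:
precision = true_match_rate / (true_match_rate + expected_false_matches)
print(f"Chance a flagged face is actually the right person: {precision:.4%}")
```

With these (assumed) numbers, each search flags about a thousand innocent people alongside the occasional true hit, which is the sense in which a headline “90% accuracy” can hide the real problem.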

Ethical conundrums aside, I also wonder how accurate their technology is when tested on a pool of people who are not predominantly white & cis; there’s a great Model View Culture article, written by engineer Alyx Baldwin, about the failings of recognition on non-white subjects & the issue gender creates when implementing & training these types of technologies: https://modelviewculture.com/pieces/the-hidden-dangers-of-ai-for-queer-and-trans-people

The article was mainly about human-based recognition, but many of the same problems that plague a human-based recognition system also plague automated recognition systems. And in many cases it’s not an either/or situation, at any rate. For example, in court cases, humans (on the jury) would be making the final judgment (at least as it stands now).

One interesting thing about the case of the mismatch referred to in the article is that the bank robber was wearing a baseball cap and sunglasses, which hid many of the most important facial features (e.g., the eyes) that both human-based and automated recognition systems key in on and take measurements from. That made it virtually impossible to make a match with high confidence. Note to facial recognition system designers: any smart bank robber will probably be aware of this (some might even use ski masks, if you can imagine that).

Unfortunately, even without sunglasses and a cap, real-life situations are rarely as ideal as the facial recognition algorithmists would like them to be. Of course, that means that even an automated system that is “73% accurate” (whatever that means) under ideal (or even good) conditions might be much less accurate in practice. In fact, it could actually be quite worthless in practice (or worse).

A necessary (but not sufficient) condition for talking about the “accuracy” of a facial recognition system is that one knows the “true” face.

But unfortunately, for many facial recognition applications, one does not even know whether a test face is part of the database used for matching.

Also, because there is not yet a probabilistic (frequency) characterization of the “key” human facial features/measurements (or even universal agreement about which features/measurements are key), it does not really make sense in such cases to even talk about the “probability” that a match is correct.
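That open-set point can be illustrated with a toy sketch. Everything below is a made-up simplification for the sake of the argument: each “face” is just a point in a 2-D feature space (real systems use high-dimensional embeddings), and matching is a simple nearest-neighbor check against a distance threshold. When the probe face was never enrolled, the system must either reject it or confidently name the wrong person, and no single “accuracy” number captures both failure modes.

```python
import random

# Toy open-set matcher (illustrative only; not a real face recognizer).
# Each "face" is a point in 2-D feature space; same person => nearby points.

random.seed(0)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Gallery: 100 enrolled people at random feature locations.
gallery = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(100)}

THRESHOLD = 3.0  # accept a match only if the nearest gallery face is this close

def identify(probe):
    """Return the best gallery id if within threshold, else None (open set)."""
    best_id = min(gallery, key=lambda i: dist(probe, gallery[i]))
    return best_id if dist(probe, gallery[best_id]) <= THRESHOLD else None

# Closed-set probe: person 7 seen again, with a little measurement noise.
noisy_7 = (gallery[7][0] + 0.5, gallery[7][1] - 0.5)
print(identify(noisy_7))   # usually recovers 7 (another enrollee could sit closer)

# Open-set probe: a stranger who was never enrolled.
stranger = (random.uniform(0, 100), random.uniform(0, 100))
print(identify(stranger))  # None, or a spurious match to some innocent enrollee
```

Tightening the threshold trades spurious matches on strangers for missed matches on enrolled people, which is why a single headline number like “73%” tells you very little without knowing how non-enrolled probes were handled.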