Facial Recognition: Should We Fear It or Embrace It?

Advances in deep learning and artificial neural networks have propelled the speed and accuracy of facial recognition to new levels. But who's making sure the tech is not being abused? At least one tech CEO is in favor of regulation to address this.

Facial-recognition technology is not new, but it has progressed immensely in the past few years, mainly because of advances in artificial intelligence.

Naturally, this has drawn the interest of Silicon Valley, advertising agencies, hardware manufacturers, and the government. But not everyone is thrilled. The American Civil Liberties Union (ACLU) and 35 other advocacy groups, for example, sent a letter to Amazon CEO Jeff Bezos demanding that his company stop providing advanced facial-recognition technology to law enforcement, warning that it could be misused against immigrants and protesters.

What's Different Now?

Early iterations of the technology, which dates back to the 1960s, were clunky. Police had to create a facial-recognition database, which required a human user to specify key points on a photo of each subject's face, such as the center of pupils and corners of the eyes, mouth, and nose. The system then used those points to calculate and register landmark distances of the subject's face.

For the recognition phase, the operator would repeat the marking process with new images, and the system would compare the distances to what it contained in its database. Even then, the operator needed to adjust the system to account for the tilt, rotation, and lean of the subject's head.
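That early, manual process boils down to comparing vectors of landmark distances. Here's a minimal sketch of the idea, using made-up landmark coordinates; normalizing by the inter-pupil distance is one simple way to make the comparison tolerant of image scale:

```python
import math

def landmark_distances(points):
    """Compute all pairwise distances between facial landmarks,
    normalized by the inter-pupil distance to reduce scale effects.
    Points are (x, y) tuples: pupils first, then nose, mouth, etc."""
    scale = math.dist(points[0], points[1])  # inter-pupil distance
    return [math.dist(points[i], points[j]) / scale
            for i in range(len(points))
            for j in range(i + 1, len(points))]

def best_match(probe, database):
    """Return the name of the database entry whose distance vector
    is closest (in squared-error terms) to the probe face's vector."""
    probe_vec = landmark_distances(probe)
    def diff(entry):
        vec = landmark_distances(entry[1])
        return sum((a - b) ** 2 for a, b in zip(probe_vec, vec))
    return min(database, key=diff)[0]
```

Because the vectors are scale-normalized, a photo of the same face taken from twice the distance still matches, but the tilt and rotation corrections the operators had to make by hand are exactly what this naive version lacks.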

Now, advances in deep learning and artificial neural networks have propelled the speed and accuracy of facial recognition to new levels. Computer vision, which lets computers make sense of different objects in images, makes facial-recognition systems increasingly efficient at detecting facial elements with little or no need for human assistance or correction. With the explosion of cloud computing, ubiquitous connectivity, and the internet of things, we can integrate facial recognition into many more devices and applications, for better or worse.
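Under the hood, modern systems no longer compare hand-marked distances: a neural network maps each face to a fixed-length embedding vector, and two faces are declared a match when their embeddings are similar enough. A toy illustration of just the comparison step (the embedding values and threshold here are invented; real systems learn the embedding from millions of faces):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means
    identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when similarity clears the threshold.
    The threshold is a tunable trade-off between false matches
    and missed matches."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```

The key operational point is that threshold: lowering it catches more true matches but also flags more innocent bystanders, a trade-off that matters enormously when the system is pointed at a crowd.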

Facebook, for example, recently rolled out a feature that uses facial recognition to alert users if someone uploads a photo of them. This is supposed to give users more control over their online identities by preventing others from impersonating them or posting pictures of them without their consent. But privacy advocates worry that Facebook, a company whose business thrives on collecting and mining user data, will use the technology to deepen its understanding of users' preferences and target them with personalized ads and other content.

Law Enforcement Logs On

Law enforcement agencies are interested in advanced facial recognition not only in their labs but also in the streets, at the border, in their vehicles, and on body cameras and glasses. Ideally, it will help identify criminals and victims in real time, as the UK's South Wales Police did last year.

With millions of CCTV cameras, China has one of the most sophisticated and invasive surveillance networks. In recent years, it has added real-time facial recognition to its network; authorities showed the effectiveness of the system during a demo in which they located and apprehended a BBC reporter in just seven minutes. In April, Chinese law enforcement also used the system to identify and arrest a financial crime suspect at a concert with more than 50,000 attendees.

In the United States, police departments have been testing Amazon's Rekognition system. In Washington County, Oregon, police reported that the system's results were 75 percent accurate, but a more recent test of the same service marked 28 members of the US Congress as people with criminal backgrounds. The Orlando Police Department in Florida opted to let its Amazon contract expire.

One of the biggest concerns privacy advocates and experts point to is the lack of regulation and oversight regarding the use of this technology. According to a 2016 investigation by privacy advocacy groups, more than half the US adult population is subject to face-scanning systems. Can we trust law enforcement to be fair and objective in its use of facial recognition? A look into the performance of the facial-recognition technology used by the UK Metropolitan Police showed that 98 percent of matches the system made were mistakes.
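A 98 percent mistake rate is less surprising than it sounds: when nearly everyone in a scanned crowd is innocent, even a small false-positive rate swamps the handful of true matches. A quick back-of-the-envelope calculation makes the point (all numbers below are illustrative, not taken from the Met's trials):

```python
def expected_false_positives(crowd_size, suspects_in_crowd, tpr, fpr):
    """Base-rate sketch: expected true and false matches when a
    system with true-positive rate `tpr` and false-positive rate
    `fpr` scans a crowd containing very few actual suspects."""
    innocents = crowd_size - suspects_in_crowd
    true_hits = suspects_in_crowd * tpr   # suspects correctly flagged
    false_hits = innocents * fpr          # innocents wrongly flagged
    return true_hits, false_hits
```

With, say, 10 suspects in a crowd of 50,000, a 90 percent hit rate, and a 1 percent false-positive rate, the system flags about 9 suspects and roughly 500 bystanders, so about 98 percent of its matches are false alarms. That is how a headline figure like the Met's can arise even from a system that sounds accurate on paper.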

What's certain is that the technology is still fallible. Like all deep learning systems, facial recognition is only as good as the quality of the data it's trained with, and it can behave erratically when it hasn't seen enough examples. For instance, a recent study of two popular face analysis services by IBM and Microsoft found that both systems were significantly more accurate on male faces than female faces and on lighter faces than darker faces.
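Audits like the one described above boil down to computing accuracy separately for each demographic group instead of reporting a single aggregate number, which can hide large disparities. A minimal sketch (the group labels and records are hypothetical):

```python
def accuracy_by_group(predictions):
    """Given (group, was_correct) records, report per-group accuracy
    so disparities hidden by an aggregate score become visible."""
    totals, correct = {}, {}
    for group, ok in predictions:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}
```

A system that scores, say, 90 percent on one group and 60 percent on another still averages out to a respectable-looking overall number, which is exactly why the IBM and Microsoft results only surfaced once researchers broke the scores down.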

The Future of Facial Recognition

I usually take the words of tech executives with a grain of salt, but Microsoft President Brad Smith's recent essay on the opportunities and challenges surrounding facial recognition is an interesting and balanced read on the direction the industry should take.

Smith acknowledges the recent concerns raised over the potential misuse of facial recognition while also reminding us of the positive uses it can serve. Microsoft was recently entangled in an internal debate over its work with Immigration and Customs Enforcement (ICE); more than 100 employees called on the company's leadership to cancel contracts with ICE amidst child separations at the border. ICE uses Redmond's Azure Government cloud service, not its facial-recognition tech, but the incident highlighted the need for laws regulating the use of facial recognition, Smith writes.

"It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike," Smith says, naming the auto industry as an example where regulations have set standards for passenger safety.

Meanwhile, Smith also recognizes the tech sector's responsibility in reducing the risk of bias in facial recognition technology and establishing ethical guidelines to make sure their applications are not used for purposes that violate human rights. "'Move fast and break things' became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people's fundamental rights are being broken," Smith writes.

The bottom line is to recognize and embrace facial recognition as a powerful technology but remember that power can go both ways. In Smith's words, "All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause."

About the Author

Ben Dickson is a software engineer and tech blogger. He writes about disruptive tech trends including artificial intelligence, virtual and augmented reality, blockchain, Internet of Things, and cybersecurity. Ben also runs the blog TechTalks. Follow him on Twitter and Facebook.
