Facebook Wants to Own Your Face, Here’s Why That’s a Privacy Disaster

(in)Secure is a weekly column that dives into the rapidly escalating topic of cybersecurity.

Scanning your face is easier than remembering a password; that's for
sure. But while facial recognition technology has gone mainstream with
Apple’s FaceID and Microsoft’s Windows Hello, we’re only now thinking
through the cybersecurity and privacy concerns.
Now Facebook, among other companies, is finding questionable ways to
use this new data. To get to the bottom of just how dangerous that
could be, we spoke with Theresa Payton, the former White House Chief
Information Officer for the George W. Bush administration. She’s now
deeply involved in the world of cybersecurity — and has some serious
concerns about how Facebook intends to use the tech.

Your face belongs to you, doesn’t it?

Facial recognition technology has great potential, even in the world
of cybersecurity. In the case of authentication, for example, it makes
locking devices and accounts simpler for those who are slow to move to
methods like two-factor authentication. But, as Theresa Payton
explained, there’s a dark side to the technology.

Facebook says scanning and recognizing your face helps “protect you from a stranger using your photo to impersonate you.”

“I believe there are a lot of really cool things that could come out
of this technology, but recent history tells us we need to play out
worst-case scenarios," Payton told Digital Trends. "We need to
understand that new technology will always be released a year or two
before we really understand the ramifications of securing that data, as
well as the legal aspects of protecting privacy.”
According to a recent New York Times report,
Facebook’s use of facial recognition to pick your face out of photos
has a handful of civil rights organizations up in arms. Using artificial
intelligence and its own proprietary algorithm, Facebook already knows
your face as well as your best friend does.
In Facebook’s own words, scanning and recognizing your face helps
“protect you from a stranger using your photo to impersonate you.” At
least, that’s what it said when it first tried to introduce the
technology in Europe six years ago. Facebook pulled back when EU
regulators started asking questions about security and privacy – but
now, the issue has returned.

You might think Facebook would retire the idea completely due to these
previous concerns, along with the recent Cambridge Analytica data
scandal, yet the company has no plans to stop.
“They said, ‘Okay, we learned a lot,’ and basically, ‘We want to make
it easier to authenticate, to classify their photos and videos,’” said
Payton. “They basically said you shouldn’t worry about this, because
we’re going to let the users control facial recognition.”

“This is cool technology, but why don’t we all take a step back and talk about the uses [of Facebook’s facial recognition]”

Facebook’s plans to analyze your face don’t stop with photos and authentication. As reported by WWD,
the social media giant wants to monetize facial recognition further
with what it calls “augmented commerce.” The idea is to help brands
transform simple Facebook ads into interactive AR experiences. The
problem? No one knows what Facebook or its ad partners will do with the
data gained from scanning your face.
And that’s only the beginning. Facebook holds several worrisome and
downright creepy patents regarding facial recognition technology. One Minority Report-like
patent described a way to set a “trust level” for each person who
enters a store. By recognizing their faces and connecting them to the data
in their Facebook profile, the system could figure out which shoppers
were “trustworthy,” or could unlock special deals. Other disturbing patents include a system for tracking your emotions by scanning your face and matching that to what you’re currently looking at.

“You are not going to get a new face,” said Payton. “This is cool
technology, but why don’t we all take a step back and talk about the
uses and applications of that technology and play out future security
and privacy concerns?”
She has a point. It’s not hard to imagine a day when biometrics are
accurate and routinely used for accessing your bank account. If your
face was then stolen, that could be incredibly problematic. But that
wouldn’t happen, right?

Biometrics won’t save us

Technologies like facial recognition and fingerprint scanners are often
seen as safer alternatives to simple passwords. But if that data is
not secured, the consequences are catastrophic. We’ve already seen it
happen. In 2015, the Office of Personnel Management had a breach that
resulted in the theft of 5.6 million unencrypted fingerprints.
“I’m incredibly worried about the ease with which biometrics could be stolen and used for nefarious purposes,” said Payton.

“Play out those scenarios with this technology and come up with your countermeasures for that.”

With massive machine learning infrastructure to power biometric
scanning in place for companies like Google, Facebook, Apple, and
Microsoft, we tend to assume those companies are also hiding that data
away in a digital locked vault. Payton says our ability to protect our
biometric data is “woefully lacking right now.”
It seems biometric authentication is only worth implementing if companies are willing to do the hard work of securing the data.
“Here’s what I’d say to these technology companies…Let us know that
you are thinking through these worst-case scenarios,” she said. “Play
out those scenarios with this technology and come up with your
countermeasures for that. If we at least get those assurances, that’d be
incredibly helpful given the current state of affairs.”
Payton isn’t calling for an end to biometric scanning and facial
recognition. Instead, she proposed a more responsible way to use it
hand-in-hand with other technology. Rather than rely solely on something
like a fingerprint scanner, Payton’s advice was for companies to
combine it with behavioral-based data that could act as biometric
two-factor authentication. A system might be able to know things like
when the individual typically makes transactions, what kind of operating
system they use, or how fast they type.

“There’s a lot of biometrics and behavioral-based information; if you
match the two together, then you have assurances of who that person
really is,” she insisted.
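To make Payton's idea concrete, here is a minimal sketch of what pairing a biometric match with behavioral signals might look like. Every field name, weight, and threshold below is invented for illustration; this is not how any real product implements it.

```python
# Toy illustration of "biometric two-factor": a biometric match alone is
# not enough; the attempt must also look behaviorally familiar.
# All signals, weights, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    face_match_score: float   # 0.0-1.0 from a hypothetical face scanner
    hour_of_day: int          # when the attempt happened
    os_name: str              # reported operating system
    typing_speed_wpm: float   # measured typing speed

@dataclass
class UserProfile:
    usual_hours: range        # hours the user normally logs in
    usual_os: str
    usual_typing_wpm: float

def behavioral_score(attempt: LoginAttempt, profile: UserProfile) -> float:
    """Return 0.0-1.0: how closely behavior matches the user's history."""
    score = 0.0
    if attempt.hour_of_day in profile.usual_hours:
        score += 1 / 3
    if attempt.os_name == profile.usual_os:
        score += 1 / 3
    # Within 20% of the user's usual typing speed counts as a match.
    if abs(attempt.typing_speed_wpm - profile.usual_typing_wpm) <= 0.2 * profile.usual_typing_wpm:
        score += 1 / 3
    return score

def authenticate(attempt: LoginAttempt, profile: UserProfile) -> bool:
    # Require BOTH a strong face match and familiar behavior, so a
    # stolen face scan by itself cannot unlock the account.
    return attempt.face_match_score >= 0.9 and behavioral_score(attempt, profile) >= 2 / 3

profile = UserProfile(usual_hours=range(8, 18), usual_os="Windows", usual_typing_wpm=60.0)
normal = authenticate(LoginAttempt(0.95, 10, "Windows", 58.0), profile)
# A perfect face match at 3 a.m. from an unfamiliar machine is rejected.
suspicious = authenticate(LoginAttempt(0.95, 3, "Linux", 20.0), profile)
```

The design point is the `and`: neither factor on its own grants access, which is exactly the assurance Payton describes.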
But it’s not too late, Payton argued. We’ve seen the worst social
media has to offer in the past couple of years, but if we could wind
back the clock and warn ourselves when this was all beginning, our world
might look different than it does today.
“If that worst-case scenario had been played out in the late 1990s
and early 2000s, maybe things would have been a little different on
these social media platforms,” said Payton. “Let’s not repeat that type
of mistake with these newer technologies we’re introducing.”