We’re giving up too much (data) and getting too little in return.

By now we’ve all heard the cliche: If you’re getting something for free on the internet, and you can’t identify the product that’s being sold, then you are the product. Your attention, your purchasing potential, your data, and, increasingly, your vote. That’s what you’re trading.

This is the devil’s bargain we’ve all made with Facebook. In exchange for the ability to reunite with long-lost friends, see photos of our nieces and nephews, share cat videos, and get into endless political arguments, we give up our data, massive amounts of it. Not merely who we are, where we are, and what we do, but also the people who are important to us. The things we liked, and how enthusiastically we liked them. The things we read, the comments we made. How “engaged” we were.

In turn, Facebook uses that data to determine what to show us next, make suggestions as to other things we might like, and display ads for products that, theoretically at least, we might be interested in acquiring.

It also promised, very grudgingly and after a lot of external pressure, to give us control over this data. To determine who gets to see it and what they get to do with it. At least, that’s what Facebook told us. Not surprisingly, the reality is a little different.

That brings me to Cambridge Analytica and the 50 million user profiles it managed to extract from the social network with Facebook’s permission. In case you haven’t already heard this story to death, here’s a quick summary. If you have, you can skip the next few paragraphs.

In 2013 Facebook was approached by an academic named Aleksandr Kogan who wanted to create psychographic profiles of Facebook users using an app called ThisIsYourDigitalLife. Users who downloaded the app filled out a personality questionnaire; they also allowed the app to access their profile data, including their network of friends.

Some 270,000 gullible Facebook users took the quiz. By mining the profiles of each of their friends, the researcher managed to compile a database of 50 million people, enough data to start creating detailed personality profiles of users based on their likes and interests. Among other things, the research attempted to gauge sentiment on gun rights, immigration, healthcare, and environmental issues.

Combined with voter registration records, these profiles could then be used to identify targets for political ads. (In the U.S., those candidates were Ted Cruz and Donald Trump, but Cambridge Analytica claims to have used similar techniques in elections around the globe.)

At the time, this kind of data mining was perfectly within the bounds of Facebook’s terms and conditions. In fact, the only violation the researcher committed was sharing the information with a third party, Cambridge Analytica. When the social network found out about that, back in 2015, it ordered that the app be removed and the data destroyed. Only last week, when this news finally became public, did Facebook boot Cambridge Analytica from its ad platform.

Facebook never bothered to check whether the data had been destroyed (it apparently still exists). And even if it had been, the genie was already out of the bottle. The mining had been done, the models created. Facebook did, however, benefit financially from the ads Cambridge bought during the 2016 election.

Those 270,000 people willingly handed over their personal information to take a stupid quiz. That’s their right; there’s no law against taking dumb Facebook quizzes. But 49,730,000 people did not give their permission to have their data used in the service of courting gun lovers and immigration foes. And while that’s not illegal, it should be.

As a technology journalist who specializes in privacy issues, I’ve followed Facebook pretty closely. I’ve watched the company survive outrage after outrage, as it attempted to monetize more and more of its users’ data, only to be forced to back off and offer its 2 billion members more privacy controls.

So I’ve been growing increasingly disenchanted with Facebook over the years. It really kicked in during the summer of 2016, when I noticed how the social network facilitated and benefited from the spread of political propaganda, aka “Fake News.” Where did all these anonymous or pseudonymous sites come from, and why was Facebook promoting them? I decided to find out.

I was one of the first journalists to write about this phenomenon, for the Guardian, in August 2016. It turns out that Facebook was the primary distribution method for these enormously profitable sites, many of them based overseas. Without Facebook, most of them probably would not have survived.

As news emerged of Russia’s election manipulation, including the purchase of targeted Facebook ads, and as Facebook’s strident denials gradually morphed into admissions of guilt, I grew even more disgusted.

Cambridge Analytica is the last straw for me.

You can argue that the company’s data mining had little impact on the election, or that it was doing the things commercial marketers do all the time. I don’t care. The deal isn’t fair anymore. We’re giving up too much and getting too little in return. I don’t need to engage in yet another pointless political argument with a total stranger; there are better ways to stay in touch with family and friends.

As a journalist I still have to cover Facebook. But I don’t have to give the company my data. So I’m leaving and taking my data with me. For years I’ve maintained a pseudonymous (aka fake) account on Facebook to test things. I will use that account to keep abreast of the next outrage Facebook commits. Fake data for the kings of Fake News. Seems appropriate.