Why Data Privacy Needs Legal Protection

It began with the dream of a more interconnected world—so how did it all go horribly wrong? The question echoes throughout The Great Hack, a Netflix documentary on the work of Cambridge Analytica, the London-based political consultancy that helped influence the 2016 U.S. election. The firm used stolen Facebook data to create targeted social-media campaigns aimed at millions of American voters—typically in swing states—that its algorithms had determined were particularly receptive to fake-news items and inflammatory ads. Strategically inundated with false information, these “persuadables,” as they were known, were considered more likely to vote for Donald Trump.

Which, it now seems clear, they did. But the Trump campaign was not the only client of Cambridge Analytica (whose vice president at the time was Steve Bannon). It had already helped the Leave campaign during the Brexit referendum (described in the film as a “Petri dish for Trump”), along with electoral campaigns in Trinidad, Lithuania, Kenya, Ghana, and elsewhere. Virtually every campaign that did business with Cambridge Analytica won. The company shut down last year, but the techniques it pioneered are almost certain to be used in elections for the foreseeable future.

Cambridge Analytica was just one cog in the massive data-gathering system that Silicon Valley has helped create. The company’s strategy was based on “psychographics,” the study and classification of people according to psychological attributes such as attitudes, values, and aspirations. Harvesting this information has never been easier, thanks to ever-evolving technology, and it’s being used for far more than influencing elections. From our iPhones to our smart speakers, all the technology we use is constantly learning more about who we are, tracking the decisions we make. We don’t even know how much data is being collected, or from where, or when. Privacy-protection laws are limited, and big tech firms circumvent liability and responsibility with long, dense “terms and conditions” that we agree to without reading or understanding.

The first comprehensive study of this system was Shoshana Zuboff’s 2018 The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Zuboff defines surveillance capitalism as “the unilateral claiming of private human experience as free raw material for translation into behavioral data.” Rapid technological advancement in combination with standard corporate greed brought us here, she argues. These companies aim to predict and alter human behavior to work toward social uniformity, in turn maximizing profit. It’s a homogenizing process, one that threatens the very nature of what it means to be human. In Zuboff’s words: “Group pressure and computational certainty replace politics and democracy, extinguishing the felt reality and social function of an individual existence.”

Among the companies Zuboff implicates are Amazon, Apple, Google, Microsoft, and Facebook. Facebook is particularly adept at collecting this information because of the nature of its platform and the reach it has acquired through subsidiaries like WhatsApp and Instagram. In tracking not just our posts and photos but also our reactions to the posts and photos of friends, our visits to news and consumer sites, our purchases, and more, it is essentially building vast dossiers on hundreds of millions of users. Facebook has also allowed third parties to read our messages. On September 4, Google was fined $170 million for targeting ads to minors on YouTube whose information the company had illegally harvested in violation of the Children’s Online Privacy Protection Act. There could be—and likely is—more surveillance we don’t even know about. Of course, there’s also the risk of that data being stolen. Facebook cofounder and CEO Mark Zuckerberg recently testified before Congress about the platform’s expansive data breach, and the Federal Trade Commission approved a $5 billion fine against Facebook for the way it mishandled the personal information of its users.

How can these abuses be prevented? The best solution is increased regulation. Privacy is largely considered a bipartisan issue, and politicians on both sides of the aisle, from Minnesota Senator Amy Klobuchar (D) to Missouri Senator Josh Hawley (R), have introduced legislation that would give consumers greater transparency and control over what data companies are collecting, how they’re using it, and what they’re doing to protect it. Some of this legislation would allow consumers to opt in or out of data collection altogether. Yet none of these bills has been put to a vote, in part due to squabbles over how much enforcement power the FTC should have, and whether a federal law would override state regulations. California stands out for pioneering tough privacy legislation: the state’s Consumer Privacy Act obligates businesses to obtain permission from consumers for the types of data they’re collecting and to provide an annual consumer report with this information. The act also protects consumers who choose to opt out of sharing their data against service discrimination, and it increases fines and penalties on businesses that fail to meet the new requirements.

But what if people were given ownership rights over their data—not only to retain some control over it, but also perhaps to be compensated for it? This idea is receiving greater attention, especially now that data (hard to believe as it seems) has surpassed oil to become the most valuable commodity on earth. California Governor Gavin Newsom recently called for a “data dividend” that would require companies to pay you for your information. None other than Cambridge Analytica’s former business-development director, Brittany Kaiser, has spearheaded the #OwnYourData campaign, which goes even further, advocating to treat data rights like human rights—as basic, inalienable entitlements belonging to every person.

But the central concern remains: the harvesting, manipulation, and exploitation of our personal data threaten our democracy, our ability to make decisions freely, and our individual nature. The Great Hack should be required viewing for all American citizens—though they may also want to note the irony of its airing on Netflix, which in recent years has crowdsourced solutions to optimize its algorithm for tailoring personal recommendations to its customers.