From insurance to surveillance: Experts debate privacy in the age of Big Data

That’s the one thing that four people from very different professions agreed on during a panel at a Kenyon College political-science conference about technology’s impact on privacy in the 21st century.

Julia Angwin, a senior reporter at ProPublica, described her attempt to completely avoid being tracked while writing a book about whether privacy was dead. She turned off cookies, left Gmail, moved to an encrypted file-hosting service, and tried to quit Facebook. She hesitated to quit LinkedIn, even after realizing that she never logged in anymore, because she didn't want to miss out in the future. She called this concern "loss aversion."

In the end, after taking all these steps and even adopting a fake identity with which she registered for a credit card and bought a cellphone, she had spent a lot of money for very little real privacy. Her friends wouldn’t email her using an encryption protocol called PGP. Data brokers still retained her most sensitive information.

What Angwin wanted, she realized after this ordeal, was what people get when they buy a car: safety, reliability, and accountability. She wanted assurance that safeguards existed against things going wrong, either on the road or after an accident. To this end, she launched a campaign.

“I’m on a campaign to rebrand privacy,” she told the crowd. “It’s human rights.”

Kirk Herath, the chief privacy officer at insurance giant Nationwide, agreed that Big Data could be scary but said that it also offered many under-discussed benefits.

For one thing, he said, it reduces the risk of insurers making a bad deal that costs them money. This might sound self-serving, but as Herath pointed out, insurers that make fewer bad bets can afford to make more bets overall, because they lose less money. He mentioned a payday-loan company that analyzes loan applicants' use of capital letters in their sentences and the amount of time they spend filling out the application. Using this data, it can assemble a remarkably accurate picture of an applicant's default risk.
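The kind of behavioral scoring Herath described can be sketched in a few lines. The features, weights, and threshold below are invented for illustration; the actual lender's model is not public.

```python
import math

# Hypothetical sketch: scoring default risk from two behavioral
# signals on a loan application. Weights are invented, not real.

def default_risk(caps_ratio: float, seconds_on_form: float) -> float:
    """Return a probability-like default-risk score in (0, 1).

    caps_ratio: fraction of letters typed in uppercase (0.0 to 1.0)
    seconds_on_form: time spent completing the application
    """
    # Invented assumption: heavy capitalization raises the score,
    # while more time spent on the form (careful reading) lowers it.
    z = -1.0 + 4.0 * caps_ratio - 0.005 * seconds_on_form
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to (0, 1)

careful = default_risk(caps_ratio=0.05, seconds_on_form=600)
hasty = default_risk(caps_ratio=0.60, seconds_on_form=45)
```

A real underwriting model would combine many more features and fit its weights to historical repayment data, but the principle is the same: cheap, incidental signals stand in for expensive direct measurements.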

At Nationwide, Herath said, life-insurance underwriting based on data collection is more accurate than underwriting that relies on drawing a customer's blood or other physically intrusive techniques. It might be scary, he argued, but it benefits everyone.

In her remarks, Angwin mentioned a story she wrote about how Staples charged people different prices based on their ZIP codes, which the company collected through data mining. ZIP codes turned out to be a proxy for proximity to a Staples competitor: the closer you were to a store like Office Depot, the lower the price you saw on Staples's website.

But proximity to a Staples competitor turned out to be a pretty good proxy for affluence and race: whiter, richer people were more likely to live in areas that contained multiple stores. The result was that minorities saw higher prices than white people.
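The mechanism Angwin's reporting uncovered is simple enough to sketch. The ZIP codes, distances, prices, and demographic shares below are invented for illustration; only the logic, discounting by proximity to a competitor, reflects the story.

```python
# Hypothetical sketch of proximity-based pricing. All data invented.

BASE_PRICE = 19.99
DISCOUNT = 0.15   # discount shown to shoppers near a competing store

# ZIP code -> (miles to nearest competitor, share of minority residents)
ZIP_DATA = {
    "10001": (1.2, 0.30),   # dense area, several stores nearby
    "39701": (24.0, 0.65),  # few stores within driving distance
}

def quoted_price(zip_code: str) -> float:
    """Return the price shown to a shopper from the given ZIP code."""
    miles_to_competitor, _ = ZIP_DATA[zip_code]
    if miles_to_competitor <= 20.0:
        return round(BASE_PRICE * (1 - DISCOUNT), 2)
    return BASE_PRICE
```

Nothing in the pricing rule mentions race, yet because competitor density correlates with neighborhood demographics, the output does: the hypothetical whiter, denser ZIP code sees the lower price.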

Herath, nodding back to Angwin’s story, said that he advised insurers not to data-mine in ways that targeted protected classes like racial minorities. “Data itself is not bad,” he said. “It’s the use that’s bad.”

Laura Donohue, a professor at Georgetown University Law Center and director of its national security program, moved the ball forward by explaining why the Fourth Amendment, the country’s traditional guardian of privacy, was inadequate in the modern era.

The Founders, Donohue explained, based their understanding of privacy on the English notion that a man's house is his castle. In part, this meant that physical location defined the scope of the privacy that one could expect. For a long time, this was uncontroversial. After all, it would seem odd to tell a police officer that he may not act on something happening in a yard plainly visible from the street.

But as Donohue pointed out, new technology like Wi-Fi, GPS, drones, and network-connection metadata has changed how people conduct themselves. They now transmit obviously private, personal information in physically public ways and places, through open air, with signals crisscrossing parks and sports stadiums.

David Greene, a senior staff attorney and the civil-liberties director at the Electronic Frontier Foundation, proposed a solution to the privacy concerns that Angwin and Donohue raised.

Greene pointed to the test that the Supreme Court set up when it upheld the NAACP's right to refuse to provide the state of Alabama with its membership list. This information, he noted, went straight to First Amendment protections of freedom of expression and association. And the Court's test required government data requests to serve a compelling interest unrelated to the goal of suppressing expression, which was plainly Alabama's goal at the time.

Greene called for a “rigorous examination” of all data-access requests along those same lines.


The U.S. government was able to overcome privacy concerns in establishing its bulk-surveillance programs by arguing that, because the data it targeted was held by companies and not American citizens, those citizens had no expectation of privacy in the handling of that data.

Under Greene’s revised standard, this carve-out for third parties would vanish, and, he said, many government requests for access to data would meet their demise in federal courts.

"I'm not actually a privacy fetishist or an extremist," Greene said. But even if you don't mind Google knowing everything about you, he added, you know that you can't stop Google from complying with a warrant and turning your entire digital life over to the government.

From insurance underwriting to fighting terrorism to serving relevant Internet ads, the United States is awash in data—and data collection. As Angwin and Herath’s stories indicate, Big Data has gone from a buzzword to a fact of life. Whether Americans can expect new laws remedying its excesses, from government surveillance to customer profiling, is anyone’s guess.