Tackling the Risks of Big Data

The Federal Trade Commission (FTC) held a workshop on big data last week called “Big Data: A Tool for Inclusion or Exclusion?” The panelists’ consensus answer to that question was both: Big data analytics can bring enormous benefits to society, but it can also wrongfully label individuals, stigmatize protected classes and perpetuate societal prejudices and discrimination.

Examples abounded particularly on the benefits side—think predicting flu outbreaks based on online searches or targeting advertisements only to those who are likely to want them and, yes, uncovering discrimination through big data analysis.

But the possible harmful uses of big data were front and center, too. After all, fighting such harms is what the FTC, as the nation’s consumer protection agency, does. These harms ranged from speculative scenarios, such as inferring creditworthiness from the fact that a consumer buys vegetable seeds, to decisions based on “proxy” variables that are statistically associated with protected minorities, thereby inadvertently or surreptitiously discriminating against them, to the well-known, real-life example of the pothole-finding mobile app whose results were skewed in favor of wealthier Boston neighborhoods with greater smartphone penetration until the city supplemented the app’s data through other means.

One speaker called this dazzling range of opportunity and detriment the “data paradox”: Data can be used to help or to hurt. Equally dazzling was the panorama of opinions on how to deal with it. They ranged from taking a “wait-and-see” approach before rushing into new legislation to devising a new regulatory framework for big data. Some participants explained how existing credit, finance, employment and consumer protection laws already cover many big data risks. Others suggested that the sheer variety of big data applications precludes a “one-size-fits-all” approach, or pointed out that protection from harm in this context is “situational” and that “framework models” for addressing such context-specific risks and harms should be devised.

Fortunately, as several speakers noted, one such model already exists. It’s the risk-based approach to privacy, also referred to as “benefit-risk” analysis. It’s already available and, in some cases, already part of the law. See the “legitimate interest” ground for data processing in the EU Data Protection Directive, mandatory Privacy Impact Assessments in an increasing number of jurisdictions and the FTC’s unfairness authority, for example. And it’s uniquely suitable for addressing the challenges of big data and the massive data explosion that’s still ahead of us with the Internet of Things (IoT).

So what exactly is this risk-based approach to privacy?

The approach posits that in an era where data practices are simply too complex for the average person to comprehend, much less actively manage, the burden for preventing harm and safeguarding privacy should fall on the businesses amassing and using the data. Thus, businesses should assess the potential harms of their proposed data uses to individuals and, potentially, society, devise the appropriate controls and then consider whether to proceed in light of any residual, unavoidable risk and the countervailing benefits of the proposed use. In other words, they should base their data-use decisions on contextual risk assessments. And since they would be following a structured and coherent risk-assessment methodology, businesses would be able to later justify their use decisions in the event of, say, an FTC investigation. Not only would this approach allocate the responsibility for privacy protections more appropriately, it would also improve the effectiveness of privacy protections, not least by replacing the common but dubious reliance on meaningless “consents” from individuals.
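To make the assess-mitigate-weigh sequence concrete, here is a minimal, purely illustrative sketch of what a structured, documentable benefit-risk assessment might look like in code. The `Harm` class, the likelihood-times-severity scoring, the `mitigation` factor and the decision threshold are all assumptions of this toy model, not any established privacy-risk methodology; real frameworks are still being refined, as discussed below.

```python
from dataclasses import dataclass

@dataclass
class Harm:
    """One identified potential harm of a proposed data use (toy model)."""
    description: str
    likelihood: float  # 0.0-1.0, estimated probability of occurrence
    severity: float    # 0.0-1.0, estimated impact on individuals

def residual_risk(harms, mitigation=0.0):
    """Aggregate risk score remaining after controls are applied.

    `mitigation` is the fraction of raw risk the proposed controls are
    expected to remove -- an assumption of this sketch, not a standard.
    """
    raw = sum(h.likelihood * h.severity for h in harms)
    return raw * (1.0 - mitigation)

def assess_use(harms, benefit_score, mitigation, threshold=0.5):
    """Return a documented go/no-go record for a proposed data use,
    weighing countervailing benefits against residual risk."""
    risk = residual_risk(harms, mitigation)
    return {
        "residual_risk": risk,
        "benefit": benefit_score,
        "proceed": benefit_score - risk >= threshold,
    }

# Example: a proposed analytics use with two identified harms.
harms = [
    Harm("re-identification of individuals", likelihood=0.2, severity=0.9),
    Harm("discriminatory proxy variables", likelihood=0.1, severity=0.8),
]
decision = assess_use(harms, benefit_score=0.9, mitigation=0.5)
```

The point of the sketch is less the arithmetic than the record it produces: because each decision is derived from explicit, stored inputs, the business could later justify it in the event of, say, an FTC investigation.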

Thus, especially in the context of big data and the IoT where notice and consent are becoming increasingly impractical, impossible or illusory, a risk-based approach to privacy can deliver appropriate protections nonetheless.

Indeed, responsible companies already include such assessments in their organizational compliance programs. But more work needs to be done to refine and socialize the privacy-risk methodology for the big data context. Consensus still needs to grow around which kinds of harms should be considered and how to quantify and weigh them; the same goes for the benefits. But that work is underway in many corners, from industry groups, EU legislators and U.S. government agencies to privacy think tanks—including work by the Centre for Information Policy Leadership—on the risk-based approach and a privacy-risk framework.

I’m with the panelists who see the huge potential of the risk-based approach for both delivering effective privacy protections and enabling the benefits of big data. If deployed properly, this approach will allow big data to get bigger safely.

The same can’t be said of the blanket application of concepts such as data minimization, collection and purpose limitations and consent—though they should remain available as possible controls for specific contexts following a risk analysis and may also remain relevant in some non-big data contexts. And the risk-based approach can help calibrate their implementation where they are appropriate. But, as cornerstones of data protection, they are fundamentally at odds with the promise of big data.

As one panelist said, the biggest risk is not using data enough. He’s right. Size matters in big data analytics. More data will result in more knowledge and improve the quality and integrity of big data outcomes. Less data will result in the opposite, as illustrated by the Boston pothole-finding app. If there’s one thing that became clear at the FTC workshop, it’s that there is a lot at stake in choosing the right path forward. Luckily, we have some very good ideas of where to go.

2 Comments

Lacking a clear path forward and a ubiquitously implementable solution for protecting people's privacy and avoiding harmful uses, companies are left relying on those "increasingly impractical, impossible or illusory" consents. Risk-based approaches to protecting information are becoming more common in large companies, but the social-impact dimension of risk has not been put forward as a "thing" that will cause financial or reputational harm.
In order for companies to move more toward self-assessed appropriate-use models for how and when to use big data, the courts will need to start applying greater scrutiny to privacy policies and consents. Until the courts begin to reject them and deem them insufficient mechanisms, companies are left with little incentive to change, and that assumes there is something they can change to.

Dana • Sep 25, 2014

Companies are thinking about information “about their customers” as an unrealized “asset.” However, much of that data may be lost in file shares or data silos, undiscoverable and unprotected. So what can be seen as a “risk” may also be viewed as an “asset” when accessed and protected appropriately. I think there is also a very interesting opportunity for companies to repurpose their compliance programs that have traditionally been viewed as a “cost center” for the business, to help them turn this previously untapped information into a business asset. This not only creates a quantifiable “return on investment” for data security and privacy programs, but also helps the company increase productivity and stay out of “regulatory hot water”.
