Privacy Harm Is in the Eye of the Beholder

Privacy law in the U.S. is weaker than in most places, but hey, at least we’ve got Section 5.

While many countries around the world have affirmative privacy protections for most data, the U.S. instead relies on a hundred-year-old prohibition against deceptive business practices that merely bars companies from tricking people about their data practices. In recent years, the FTC has expanded its interpretation of Section 5’s ban on deceptive practices to apply not just to misstatements but also to material omissions—that is, when a company effectively deceives consumers by failing to mention a potentially controversial privacy practice. This line of enforcement is all in the name of creating external accountability for privacy practices and a transparent market for personal information. That market is far from perfect, and I think the law should do more to empower people to assess various privacy practices and control the flow of their information.

Still, at bottom, the U.S. has always had one (fairly low!) baseline: don’t lie about what you’re doing.

Recently, however, even this weak standard has been called into question—by two sitting Commissioners of the FTC, no less. Commissioners Maureen Ohlhausen and Joshua Wright have both indicated that the FTC shouldn’t bring deceptive practice cases against companies absent some objective assessment of consumer harm.

This is an extremely dangerous idea that risks upsetting consumer self-determination—and supplanting individual choice with a paternalistic assessment of values by a regulator.

People are increasingly taking privacy considerations into account when making market choices. A desire for occasional seclusion and some control over personal information are core human values, and sometimes we might want to demand some assurances from companies we interact with about how our information is going to be treated. Now others might think those personal choices are irrational—they might argue that personalized advertising is completely benign, that total surveillance is both inevitable and desirable, and that you’d have to be wearing a tinfoil hat to try to limit data collection.

But that’s not someone else’s decision to make for me: individuals should be free to value considerations such as privacy however they want. In economics, this idea is called utility—the degree of subjective satisfaction that an individual derives from certain choices. Privacy law should—at the very least—encourage greater transparency about privacy practices so consumers can make their own determinations about the value of privacy.

I might, for example, be willing to pay $30 a month to limit behavioral advertising by my Internet service provider. Apple and Fitbit recently adopted policies to prevent wearables’ information from being sold to data brokers. If these privacy promises are violated, should that deception be sanctioned, or should regulators first engage in a cost-benefit analysis to determine if I was really harmed by the transfer of my personal information?

This question recently came up in the FTC’s settlement with Nomi Technologies, an analytics company that passively monitors cell phone signals to analyze how people physically traverse retail establishments. In its privacy policy, Nomi indicated that consumers could opt out of this tracking in two ways—either by visiting the Nomi website, or by opting out in the stores where Nomi operates. Nomi provided the first, but didn’t require its stores to offer a more prominent or contextual in-store opt-out. The FTC found this to be a fairly straightforward deceptive practices case.

Commissioners Ohlhausen and Wright, however, both dissented from the FTC’s decision. Commissioner Ohlhausen summarily concluded that the FTC “should use its limited resources to pursue cases that involve consumer harm,” offering no further explanation. It’s not clear from this statement how that harm should be determined or by whom, but in other public statements, Commissioner Ohlhausen has argued that the FTC should exercise regulatory humility by taking only those cases where it determines that deceptive practices lead to concrete and observable harms—as determined by the FTC. She has also argued for considering the benefits of certain data practices when deciding whether to bring a case for deception.

Commissioner Wright in his dissent focuses primarily on the materiality of the deceptive statement instead of harm (arguing that Nomi’s policy offered consumers another, easier way to opt out of data collection), but elsewhere he has advanced the idea that the FTC should only act in the face of objective harm. In a recent speech to the Chamber of Commerce, he stated that the FTC should articulate “cognizable” harms before intervening, and stated—or at least strongly implied—that Nomi’s failure to offer its promised opt-out did not result in an injury, at least not compared to the business insights provided by Nomi’s tracking. He also emphasized the importance of doing a “cost-benefit analysis” before taking cases, and argued that the FTC should weigh the harms and benefits of a particular practice in its deception cases as it already does in enforcement cases alleging “unfair” business practices.

Unfairness and deception are very different concepts, however.

Under its unfairness authority, the FTC is statutorily required to make a value judgment prior to intervening on behalf of consumers unable to protect themselves. Under its deception authority, the FTC is only supposed to look for statements and practices that are likely to mislead consumers trying to make their own decisions. The FTC should not be asking whether a company’s misrepresentations led to a loss from the FTC’s perspective; what matters is whether they did from the individual’s. If a used car dealer offered an F-150 for sale but delivered a Silverado, a regulator shouldn’t perform a cost-benefit analysis about which is the better truck. It should instead require a transparent market where people get what they pay for—whether regulators think those decisions are rational or not.

There is no question that consumers benefit tremendously from a lot of data collection, but privacy authorities must not paternalistically permit privacy deception because they believe the possible benefits outweigh individual concerns that they deem unworthy. Commissioner Wright recently said of the Internet of Things that “the fact that there are millions of data points is not—in and of itself—a privacy risk.” I disagree, and I suspect many others do too.

As individuals’ ability to enforce the law themselves continues to erode, the FTC has an obligation to hold companies accountable for the assurances they make—and not to substitute its own views about the merits of personal privacy for those of self-interested consumers.
