I. Introduction and Summary

Current online privacy debates focus on respecting the privacy interests of Internet users while accommodating business needs. Formal and informal proposals for improving consumer privacy offer different ideas for privacy regulation and privacy self-regulation, sometimes called codes of conduct. [1] Some in the Internet industry continue to advance or support ideas for privacy self-regulation. Many of these same players proposed and implemented privacy self-regulatory schemes that started in the late 1990s.

Missing from current debates on self-regulation in the online privacy arena is a basic awareness of what happened with the first round of industry self-regulation for privacy. Also missing are the lessons that should have been learned from the failures of past privacy self-regulatory efforts.

This report reviews the history of the leading efforts that comprised that early wave of privacy self-regulation, which occurred from 1997 to about 2007. One purpose of this report is to document the facts about that first wave of self-regulation. The other purpose of this report is to inform current discussions about the recent past. A key finding of this report is that the majority of the industry self-regulatory organizations that were initiated have now disappeared. The disappearance of a self-regulatory organization constitutes a failure of the self-regulatory scheme.

This is not the first World Privacy Forum report on privacy self-regulation. In 2007, the World Privacy Forum (WPF) issued a report on the National Advertising Initiative’s early efforts at business-operated self-regulation for privacy. The report was The NAI: Failing at Consumer Protection and at Self-Regulation. [2] In 2010, the WPF issued a report on privacy activities of the Department of Commerce, The US Department of Commerce and International Privacy Activities: Indifference and Neglect. [3] The Commerce report reviewed in some detail the government supervised self-regulatory Safe Harbor Framework for personal data exported from Europe to the US. Unlike most other privacy self-regulatory efforts, the Safe Harbor Framework continues to exist, largely because of the government role. But the Safe Harbor Framework is deficient in enforcement and some other areas, and it cannot be counted as successful.

The privacy self-regulation programs reviewed in this report were effectively a Potemkin Village of privacy protection. Erected quickly, the schemes were designed to look good from a distance. Upon closer inspection, however, the protections offered were just a veneer. The privacy Potemkin Village fell down soon after the gaze of potential regulators drifted elsewhere. Efforts such as the Individual Reference Service Group (IRSG) and the National Advertising Initiative (NAI) are examples of classic, failed privacy self-regulatory efforts. These and other poorly designed privacy self-regulation schemes had limited market penetration and insufficient enforcement. Still, that was enough to fend off regulators until political winds blew in other directions.

Many participants in the debate are new to the issue and are unaware of recent history. Even the Federal Trade Commission has a short memory. The FTC appeared to acknowledge the limits of self-regulation when it concluded in 2000 that self-regulatory programs fell “well short of the meaningful broad-based privacy protections the Commission was seeking and that consumers want.” [4] But in 2010, a staff report from the FTC continued to show support for self-regulation as an alternative to legislation, seemingly ignoring the Commission’s own experience from ten years earlier. [5] The pressure to believe that “this time, things will be different” remains significant. This belief is fueled by industry pressure, industry desire for no formal regulation, a continually shifting political environment, and the absence of meaningful rulemaking authority at the Federal Trade Commission.

This report offers a simple and clear history lesson. Industry self-regulation for privacy as it has been done in the past has failed. Past industry self-regulatory programs for privacy have lacked credibility, sincerity, and staying power. This report does not propose a new model for self-regulation, but it does conclude with some suggestions for a different approach that is based on a defined role for consumers, more transparency, better definitions, and firmer commitments by those subject to self-regulation. [6]

It is beyond the scope of this report to consider whether the public’s demands for greater privacy protections should be met with legislation, self-help mechanisms, some yet untested form of activity (regulatory, co-regulatory, or otherwise), or nothing at all. [7] This report is offered as a resource to help those who are debating these questions today.

Characteristics Common to Privacy Self-Regulation

This report reviews early industry self-regulatory activities for privacy during the years just before and after 2000. This period was the high watermark for privacy self-regulation. This report distinguishes between industry efforts at self-regulation and government efforts. For most industry-supported self-regulatory efforts for privacy, a clear pattern developed in the years covered by this review. Feeling pressure from Federal Trade Commission scrutiny and from legislative interest, industry self-regulatory efforts for privacy developed quickly in an attempt to avoid any formal regulation. These self-regulatory activities typically shared some or most of the following characteristics:

Self-regulatory organizations were most often based in Washington, D.C., where potential regulators are.

Self-regulatory organizations formulated their rules in secret, typically with no input from non-industry stakeholders.

The governing boards of privacy self-regulatory organizations typically included no non-industry members, and there were typically few or no consumer representatives.

Privacy self-regulatory rules covered only a fraction of an industry or covered an industry subgroup, leaving many relevant business practices and many players untouched.

Privacy self-regulation organizations were short-lived, typically surviving for a few years, and then diminishing or disappearing entirely when pressure faded.

Privacy self-regulation organizations were structurally weak, lacking meaningful ability to enforce their own rules or maintain memberships. Those who subscribed to self-regulation were usually free to drop out at any time.

Privacy self-regulation organizations were typically underfunded, and industry financial support in some cases appeared to dry up quickly. There was no long-term plan for survival or transition.

Not all of these characteristics were present in government-supervised self-regulatory efforts, although those efforts were not necessarily any more successful.

Summary of Privacy Self-Regulatory History

Self-regulatory efforts do not fall neatly into narrow categories. Broadly speaking, however, they fell into two categories: industry-supported and government-supported. One exception exists that is a mix of government, civil society, industry, and academia.

Industry-Supported Self-Regulatory Programs

• The Individual Reference Services Group was announced in 1997 as a self-regulatory organization for companies providing information that identifies or locates individuals. The group terminated in 2001, deceptively citing a recently-passed regulatory law as making the group’s self-regulation unnecessary. However, that law did not cover IRSG companies.

• The Privacy Leadership Initiative began in 2000 to promote self-regulation and to support privacy educational activities for business and for consumers. The organization lasted about two years.

• The Online Privacy Alliance began in 1998 with an interest in promoting industry self-regulation for privacy. OPA’s last reported substantive activity appears to have taken place in 2001, although its website continues to exist and shows signs of an update in 2011, when FTC and congressional interest recurred. The group does not accept new members. [8]

• The Network Advertising Initiative had its origins in 1999, when the Federal Trade Commission showed interest in the privacy effects of online behavioral targeting. By 2003, when FTC interest in privacy regulation had diminished, the NAI had only two members. Enforcement and audit activity lapsed as well. NAI did not fulfill its promises or keep its standards up to date with current technology until 2008, when FTC interest increased. [9]

• The BBBOnline Privacy Program began in 1998, with a substantive operation that included verification, monitoring and review, consumer dispute resolution, a compliance seal, enforcement mechanisms, and an educational component. Several hundred companies participated in the early years, but interest did not continue and BBBOnline stopped accepting applications in 2007. The program has now disappeared.

Government-Supported Self-Regulatory Efforts

Not all privacy self-regulatory efforts were solely industry supported. Some were government sponsored in some manner, and one effort involved consumers, academics, and public interest groups as well as industry. These efforts included:

• The US-EU Safe Harbor Framework began in 2000 to ease the export of data from Europe to US companies that self-certified compliance with specified Safe Harbor standards. Three studies have documented that compliance was spotty; many, and perhaps most, companies that claimed to be in the Safe Harbor did not meet its requirements. The Department of Commerce continues to run the program but has undertaken negligible oversight or enforcement. Thus, the Safe Harbor Framework is a form of government-supervised self-regulation, but with little evidence of active supervision. Some EU data protection authorities recently rejected reliance on the Safe Harbor Framework because of its lack of reliability.

• The Children’s Online Privacy Protection Act (COPPA), which passed in 1998, involves both legislation and self-regulation. It is technically a form of government-supervised self-regulation. The COPPA law provides for a safe harbor provision [10] that is sometimes cited as a self-regulatory program. Industry participation in the COPPA safe harbor program is not widespread. Under COPPA, the same statutory standards apply whether a business is in the COPPA safe harbor program or not.

Combination Self-Regulatory Efforts

• The Platform for Privacy Preferences Project (P3P) is a standard for communicating the privacy policies of a website to those who use the website. A user can retrieve a standardized machine-readable privacy policy from a website and use the information to make a decision about how to interact with the website. Sponsors presented a prototype at an FTC Workshop in 1997, and the first formal technical specification came in 2000. Major web browsers still support P3P in part, and there is some usage by websites. A 2010 study found widespread errors in the implementation of P3P requirements and found that large numbers of websites using P3P compact policies misrepresent their privacy practices, misleading users and rendering the privacy protection tools ineffective.

This report does not aim to be comprehensive. We have limited the scope to the early, leading efforts. Some privacy self-regulatory efforts developed or revived more recently. [11] The Network Advertising Initiative began in 1999 and nearly disappeared a few years later. NAI revived around 2008, when FTC interest in online privacy reawakened and industry again felt threatened by regulation and legislation. This report discusses the early iteration of the NAI. The NAI issued a new set of self-regulatory principles in 2008, and membership increased. The revival of NAI follows the earlier pattern so far. Because the new NAI effort is still underway, this report does not attempt to evaluate the NAI’s post-2008 efforts. The new NAI looks a lot like the old NAI, however. Also not reviewed in this report is TRUSTe. [12]

_____________________________________

Endnotes

[1] This report uses self-regulation instead of the term codes of conduct.

[5] Federal Trade Commission, Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Business and Policymakers (Preliminary Staff Report 2010) at 66, http://ftc.gov/os/2010/12/101201privacyreport.pdf, (last visited 9/20/11) (“Such a universal [Do Not Track] mechanism could be accomplished by legislation or potentially through robust, enforceable self-regulation.”)

[6] The National Consumer Council (UK) published a checklist for self-regulatory schemes in 2000 that remains worthy of attention. Models of self-regulation: An overview of models in business and the professions 51-52 (November 2000), available at: http://www.talkingcure.co.uk/articles/ncc_models_self_regulation.pdf (last visited 9/21/2011). The checklist offers the following requirements for a “credible” self-regulatory scheme: 1. The scheme must be able to command public confidence. 2. There must be strong external consultation and involvement with all relevant stakeholders in the design and operation of the scheme. 3. As far as practicable, the operation and control of the scheme should be separate from the institutions of the industry. 4. Consumer, public interest and other independent representatives must be fully represented (if possible, up to 75 per cent or more) on the governing bodies of self-regulatory schemes. 5. The scheme must be based on clear and intelligible statements of principle and measurable standards – usually in a Code – which address real consumer concerns. The objectives must be rooted in the reasons for intervention. 6. The rules should identify the intended outcomes. 7. There must be clear, accessible and well-publicised complaints procedures where breach of the code is alleged. 8. There must be adequate, meaningful and commercially significant sanctions for non-observance. 9. Compliance must be monitored (for example through complaints, research and compliance letters from chief executives). 10. Performance indicators must be developed, implemented and published to measure the scheme’s effectiveness. 11. There must be a degree of public accountability, such as an Annual Report. 12. The scheme must be well publicised, with maximum education and information directed at consumers and traders. 13. The scheme must have adequate resources and be funded in such a way that the objectives are not compromised. 14. Independence is vital in any redress scheme which includes the resolution of disputes between traders and consumers. 15. The scheme must be regularly reviewed and updated in light of changing circumstances and expectations.

[7] For a thoughtful discussion of self-regulation and analysis of alternatives, see Ira S. Rubinstein, Privacy and Regulatory Innovation: Moving Beyond Voluntary Codes, 6 I/S A Journal of Law and Policy for the Information Society 356 (2011), available at http://www.is-journal.org/hotworks/rubinstein.php (last visited 9/20/11).

[9] This report evaluates the original NAI self-regulatory program that existed until 2007/2008.

[10] 15 U.S.C. §§ 6501-6506.

[11] The Digital Advertising Alliance self-regulatory program is not analyzed in this report, as it was launched in July 2009 and falls outside the scope of this study. See http://www.aboutads.info (last visited 9/21/11).

[12] TRUSTe, a privacy seal that continues to exist, became a for-profit company in 2008. Saul Hansell, Will the Profit Motive Undermine Trust in Truste?, New York Times (July 15, 2008), http://bits.blogs.nytimes.com/2008/07/15/will-profit-motive-undermine-trust-in-truste (last visited 2/14/11). TRUSTe has morphed significantly in its scope, purpose, and composition during its lifetime, and as such requires a separate discussion. TRUSTe is discussed in this report in the context of the first iteration of the NAI program and in the context of P3P. For more on TRUSTe see also Ben Edelman, Certifications and Site Trustworthiness (Sept. 25, 2006), http://www.benedelman.org/news/092506-1.html (last visited 2/14/11) (“Of the sites certified by TRUSTe, 5.4% are untrustworthy according to SiteAdvisor’s data, compared with just 2.5% untrustworthy sites in the rest of the ISP’s list. So TRUSTe-certified sites are more than twice as likely to be untrustworthy.”). See also the discussion of the Platform for Privacy Preferences (P3P) later in this document for a reference to numerous TRUSTe certified websites that had errors in implementation of P3P requirements.

Roadmap: Many Failures – A Brief History of Privacy Self-Regulation in the United States: I. Introduction and Summary
