Privacy Solutions

Among Internet users, there are a variety of concerns about privacy, security and the ability to access content. Some of these concerns are quite serious, while others may be more debatable. Regardless, the goal of this ongoing series is to detail the tools available to users to implement their own subjective preferences. Anonymizers (such as Tor) allow privacy-sensitive users to protect themselves from the following potential privacy intrusions:

Advertisers Profiling Users. Many online advertising networks build profiles of likely interests associated with a unique cookie ID and/or IP address. Whether this assembling of a “digital dossier” causes any harm to the user is debatable, but users concerned about such profiles can use an anonymizer to make it difficult to build such profiles, particularly by changing their IP address regularly.

Compilation and Disclosure of Search Histories. Some privacy advocates such as EFF and CDT have expressed legitimate concern at the trend of governments subpoenaing records of the Internet activity of citizens. By causing thousands of users’ activity to be pooled together under a single IP address, anonymizers make it difficult for search engines and other websites–and, therefore, governments–to distinguish the web activities of individual users.

Government Censorship. Some governments prevent their citizens from accessing certain websites by blocking requests to specific IP addresses. But an anonymizer located outside the censoring country can serve as an intermediary, enabling the end-user to circumvent censorship and access the restricted content.

Reverse IP Hacking. Some Internet users may fear that the disclosure of their IP address to a website could increase their risk of being hacked. They can use an anonymizer as an intermediary between themselves and the website, thus preventing disclosure of their IP address to the website.

Traffic Filtering. Some ISPs and access points allocate their Internet bandwidth depending on which websites users are accessing. For example, bandwidth for information from educational websites may be prioritized over Voice-over-IP bandwidth. Under certain circumstances, an anonymizer can obscure the final destination of the end-user’s request, thereby preventing network operators or other intermediaries from shaping traffic in this manner. (Note, though, that to prevent deep packet inspection, an anonymizer must also encrypt data).

Remember, remember the Fifth of November,
The Gunpowder Treason and Plot Privacy Dashboard, so hot,
I know of no reason
Why the Gunpowder Treason Privacy Dashboard
Should ever be forgot.

Sorry, I couldn’t resist, this being Guy Fawkes day (a major traditional holiday for Britons and, more recently, geeky American libertarians such as myself, who dress up as V for Vendetta for Halloween). Google’s announcement of its Privacy Dashboard (TechCrunch) is a major step forward in both informing users about what data Google has tied to their account in each of Google’s many products and in empowering users to easily manage their privacy settings for each product. If users decide they’d rather “take their ball and go home,” they can do that, too, by simply deleting their data.

See what data is associated with your account in 23 of Google’s products (Google notes that it will incorporate its 18 other products in the near future).

Directly access the privacy management settings for that account.

Access more information: “Links to relevant help articles and information pages.”

Some critics have complained in the past that it’s too hard to find privacy settings links on Google and other sites. Indeed, Google could have made it easier—and now they have! Google has taken another major step forward in user education and empowerment—just as it pioneered transparency into its interest-based advertising product with the Ad Preference Manager launched in March (which I applauded here). (The Dashboard is only for data tied to a user’s Google account, while the APM is tied only to a cookie on the user’s computer.)

The Dashboard really couldn’t be much easier to use—yet we can be sure it won’t be good enough for some privacy zealots who arrogantly presume that their fellow homo sapiens are basically vegetables with hair—unable to use any tool online, no matter how simple, and barely able to tie their own shoelaces without government reminding them how. The principled alternative is to “Trust People & Empower Them.” Because privacy is so profoundly subjective and because there is an inherent trade-off between clamping down on data and the many benefits enjoyed by Internet users from sharing their data, Adam Thierer and I have argued that “household standards” set by individuals should trump “community standards” imposed on everyone from above.

PFF summer fellow Eric Beach and I have been working on what we hope is a comprehensive taxonomy of all the threats to online security and privacy. In our continuing Privacy Solutions Series, we have discussed and will continue to discuss specific threats in more detail and offer tools and methods you can use to protect yourself.

The taxonomy of 21 different threats is organized as a table that indicates the “threat vector” and goal(s) of attackers using each threat. Following the table is a glossary defining each threat and providing links to more information. Threats can come from websites, from intermediaries such as an ISP, or from users themselves (e.g., using an easy-to-guess password). The goals range from simply monitoring which (or what type of) websites you access to executing malicious code on your computer.

Please share any comments, criticisms, or suggestions as to other threats or self-help privacy/security management tools that should be added by posting a comment below.

Today’s Washington Post has a story entitled U.S. Web-Tracking Plan Stirs Privacy Fears. It’s about the reversal of an ill-conceived policy adopted nine years ago to limit the use of cookies on federal Web sites.

In case you don’t already know this, a cookie is a short string of text that a server sends a browser when the browser accesses a Web page. Cookies allow servers to recognize returning users so they can serve up customized, relevant content, including tailored ads. Think of a cookie as an eyeball – who do you want to be able to see that you visited a Web site?
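The mechanics are easy to see in code. Here is a minimal sketch using Python’s standard http.cookies module; the cookie name and value are invented for illustration:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header assigning a (hypothetical) visitor ID.
server_cookie = SimpleCookie()
server_cookie["visitor_id"] = "abc123"          # arbitrary example value
server_cookie["visitor_id"]["path"] = "/"
set_cookie_header = server_cookie["visitor_id"].OutputString()

# Browser side: on later visits, the browser echoes the cookie back
# in a Cookie request header, letting the server recognize the return visit.
returned = SimpleCookie()
returned.load("visitor_id=abc123")

print(set_cookie_header)              # e.g. visitor_id=abc123; Path=/
print(returned["visitor_id"].value)   # abc123
```

The whole mechanism is just these two headers: the server offers a short string, and the browser decides whether to store it and hand it back — which is exactly the decision your browser settings control.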

Your browser lets you control what happens with the cookies offered by the sites you visit. You can issue a blanket refusal of all cookies, you can accept all cookies, and you can decide which cookies to accept based on who is offering them. Here’s how:

I recommend accepting first-party cookies – offered by the sites you visit – and blocking third-party cookies – offered by the content embedded in those sites, like ad networks. (I suspect Berin disagrees!) Or ask to be prompted about third-party cookies just to see how many there are on the sites you visit. If you want to block or allow specific sites, select the “Sites” button to do so. If you selected “Prompt” in cookie handling, your choices will populate the “Sites” list.

I recommend checking “Accept cookies from sites” and leaving unchecked “Accept third party cookies.” Click the “Exceptions” button to give site-by-site instructions.

There are many other things you can do to protect your online privacy, of course. Because you can control cookies, a government regulation restricting cookies is needless nannying. It may marginally protect you from government tracking – they have plenty of other methods, both legitimate and illegitimate – but it won’t protect you from tracking by others, including entities who may share data with the government.

The answer to the cookie problem is personal responsibility. Did you skip over the instructions above? The nation’s cookie problem is your fault.

If society lacks awareness of cookies, Microsoft (Internet Explorer), the Mozilla Foundation (Firefox), and producers of other browsers (Apple/Safari, Google/Chrome) might consider building cookie education into new browser downloads and updates. Perhaps they should set privacy-protective defaults. That’s all up to the community of Internet users, publishers, and programmers to decide, using their influence in the marketplace. (I suspect Berin is against it!)

Artificially restricting cookies on federal Web sites needlessly hamstrings federal Web sites. When the policy was instituted it threatened to set a precedent for broader regulation of cookie use on the Web. Hopefully, the debate about whether to regulate cookies is over, but further ‘Net nannying is a constant offering of the federal government (and other elitists).

By moving away from the stultifying limitation on federal cookies, the federal government acknowledges that American grown-ups can and should look out for their own privacy.

In the first entry of the Privacy Solution Series, Berin Szoka and Adam Thierer noted that the goal of the series is “to detail the many ‘technologies of evasion’ (i.e., empowerment or user ‘self-help’ tools) that allow web surfers to better protect their privacy online.” Before outlining a few more such tools, we wanted to step back and provide a brief overview of the need for, goals of, and future scope of this series.

We started this series because, to paraphrase Smokey the Bear, “Only you can protect your privacy online!” While the law can play a vital role in giving full effect to the Fourth Amendment’s restraint on government surveillance, privacy is not something that can simply be created or enforced by regulation because, as Cato scholar Jim Harper explains, privacy is “the subjective condition that people experience when they have power to control information about themselves.” Thus, when the appropriate technological tools and methods exist and users “exercise that power consistent with their interests and values, government regulation in the name of privacy is based only on politicians’ and bureaucrats’ guesses about what ‘privacy’ should look like.” As Berin has put it:

Debates about online privacy often seem to assume relatively homogeneous privacy preferences among Internet users. But the reality is that users vary widely, with many people demonstrating that they just don’t care who sees what they do, post or say online. Attitudes vary from application to application, of course, but that’s precisely the point: While many reflexively talk about the ‘importance of privacy’ as if a monolith of users held a single opinion, no clear consensus exists for all users, all applications and all situations.

Moreover, privacy and security are both dynamic: The ongoing evolution of the Internet, shifting expectations about online interaction, and the constant revelations of new security vulnerabilities all make it impossible to simply freeze the Internet in place. Instead, users must be actively engaged in the ongoing process of protecting their privacy and security online according to their own preferences.

Our goal is to educate users about the tools that make this task easier. Together, user education and empowerment form a powerful alternative to regulation. That alternative is “less restrictive” because regulatory mandates come with unintended consequences and can never reflect the preferences of all users.

In our ongoing “Privacy Solutions Series” we have been outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online. These tools and methods form an important part of a layered approach that we believe offers a more effective alternative to government-mandated regulation of online privacy. [See entries 1, 2, 3, 4] In this installment, we will be exploring CCleaner, a free Windows-based tool created by UK-based software developer Piriform that scrubs your computer’s hard drive and cleans its registry. We’ll describe how CCleaner helps you destroy data and protect your private information.

Whenever you move files to the Recycle Bin and subsequently empty it, the affected files remain on your computer: deleting a file does not actually remove its contents from the disk. The reason for this is important and, in many ways, beneficial. Many computer file systems work like an old library catalog. A file entry is like a catalog card: it holds a reference to the place on the hard drive where the file’s contents are actually stored. When a user deletes a file, the computer does not clean the affected hard drive space. Instead, to extend the analogy, it simply removes the catalog card that points to that space and marks the space as free for new files. This is usually beneficial because scrubbing the space occupied by a file can take a while; if you want evidence of this, look no further than the length of time required to fully reformat a hard drive (reformatting actually clears the disk’s contents). The practical implication is that when you delete an important memo from your computer, it is not actually gone. Similarly, when you clear your browsing history, it is not gone. The bottom line is that anyone who can access your hard drive (a thief, the government, etc.) could recover many or all of the files you deleted.

The solution to this problem is to ensure that when a file is deleted, the space on the hard drive occupied by that file is not simply flagged as available but is overwritten with unintelligible data. One of the best programs for accomplishing this is CCleaner (which formerly stood for “Crap Cleaner”!).
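The overwrite-before-delete idea can be sketched in a few lines of Python. This is an illustration only, not how CCleaner itself is implemented: the function name is ours, and a real scrubber must also contend with filesystem journaling, SSD wear leveling, and multi-pass overwrite schemes.

```python
import os

def overwrite_and_delete(path):
    """Overwrite a file's contents with random bytes before unlinking it,
    so the freed disk space no longer holds the original data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # replace contents with random noise
        f.flush()
        os.fsync(f.fileno())        # push the overwrite to the physical disk
    os.remove(path)                 # now remove the "catalog card" entry

# Example: create a throwaway file, then shred it.
with open("memo.txt", "w") as f:
    f.write("confidential")
overwrite_and_delete("memo.txt")
print(os.path.exists("memo.txt"))  # False
```

The key difference from an ordinary delete is the overwrite-and-sync step: after it, even someone reading the raw disk sectors finds only random bytes where the memo used to be.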

As noted in the first installment of our “Privacy Solutions Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In the last installment, we covered the privacy features embedded in Microsoft’s Internet Explorer (IE) 8. This installment explores the privacy features in the Mozilla Foundation’s Firefox 3, both the current 3.0.7 version and the second beta for the next release, 3.5. (NOTE: The name for the next version of Firefox was just changed from 3.1 to 3.5 to reflect the large number of changes, but the beta is still named 3.1 Beta 2.) We’ll make it clear which features are new to 3.1/3.5 and which are shared with 3.0.7. Future installments will cover Google’s Chrome 1.0, Apple’s Safari 4, and some of the more useful privacy plug-ins for browsers. The availability and popularity of privacy plug-ins for Firefox such as AdBlock (which we discussed here), NoScript and Tor significantly augment the privacy management capabilities of Firefox beyond what is currently baked into the browser. In evaluating the Web browsers, we examine:

In some of the upcoming installments we will be exploring the privacy controls embedded in the major web browsers consumers use today: Microsoft’s Internet Explorer (IE) 8, the Mozilla Foundation’s Firefox 3, Google’s Chrome 1.0, and Apple’s Safari 4. In evaluating these browsers, we will examine three types of privacy features:

As a means of introducing myself to TLF readers, this is an article that I wrote for the PFF blog in September that has not been previously mentioned on the TLF. Most of my other PFF blog posts have been cross-posted by Adam Thierer or Berin Szoka, but I’ve taken ownership of those posts so they appear on my TLF author page.

This is the first in a series of articles that will focus directly on technology instead of technology policy. With an average age of 57, most members of Congress were at least 30 when the IBM PC was introduced in 1981. So it is not surprising that lawmakers have difficulty with cutting-edge technology. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed, but no insult to the reader’s intelligence is intended.

This article focuses on cookies–not the cookies you eat, but the cookies associated with browsing the World Wide Web. There has been public concern over the privacy implications of cookies since they were first developed. But to understand them, you must know a bit of history.

According to Tim Berners-Lee, the creator of the World Wide Web, “[g]etting people to put data on the Web often was a question of getting them to change perspective, from thinking of the user’s access to it not as interaction with, say, an online library system, but as navigation th[r]ough a set of virtual pages in some abstract space. In this concept, users could bookmark any place and return to it, and could make links into any place from another document. This would give a feeling of persistence, of an ongoing existence, to each page.”[1. Tim Berners-Lee, Weaving The Web: The Original Design and Ultimate Destiny of the World Wide Web. p. 37. Harper Business (2000).] The Web has changed quite a bit since the early 1990s.

Today, websites are much more dynamic and interactive, with every page being customized for each user. Such customization could include automatically selecting the appropriate language for the user based on where they’re located, displaying only content that has been added since the last time the user visited the site, remembering a user who wants to stay logged into a site from a particular computer, or keeping track of items in a virtual shopping cart. These features are simply not possible without the ability for a website to distinguish one user from another and to remember a user as they navigate from one page to another. Today, in the Web 2.0 era, instead of Web pages having persistence (as Berners-Lee described), we have dynamic pages and “user-persistence.”
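On the server side, user-persistence can be as simple as a session table keyed by a cookie value. The following toy sketch uses names of our own invention, not any real framework’s API:

```python
import uuid

# In-memory session store keyed by a cookie value (names are hypothetical).
sessions = {}

def handle_request(cookie_id):
    """Return (cookie_id, session), minting both for first-time visitors."""
    if cookie_id is None or cookie_id not in sessions:
        cookie_id = uuid.uuid4().hex        # value the server would Set-Cookie
        sessions[cookie_id] = {"cart": []}
    return cookie_id, sessions[cookie_id]

# First request arrives with no cookie: the server creates a session.
cid, session = handle_request(None)
session["cart"].append("book")

# A later request from the same browser sends the cookie back,
# so the server finds the same shopping cart.
_, same_session = handle_request(cid)
print(same_session["cart"])  # ['book']
```

Every feature listed above — language preferences, login state, shopping carts — reduces to this pattern: a small identifier held by the browser, pointing at state held by the site.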

This paper describes the various methods websites can use to enable user-persistence and how this affects user privacy. But the first thing the reader must realize is that the Web was not initially designed to be interactive; indeed, as the quote above shows, the goal was the exact opposite. Yet interactivity is critical to many of the things we all take for granted about web content and services today.

The goal of our “Privacy Solution Series,” as we noted in the first installment, is to detail the many “technologies of evasion” (i.e., user-empowerment or user “self-help” tools) that allow web surfers to better protect their privacy online—and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that, in our view, provides an effective alternative to government-mandated regulation of online privacy.

In this second installment, we highlight Adblock Plus (ABP), a free downloadable extension for the Firefox web browser (as well as for the Flock browser, though we focus on the Firefox version here).

Purpose: The primary purpose of Adblock Plus is to block online ads from being downloaded and displayed on a user’s screen as they browse the Web. In a broad sense, this functionality might be considered a “privacy” tool by those who consider it an intrusion upon, or violation of, their “privacy” to be “subjected” to seeing advertisements as they browse the web. But if one thinks of privacy in terms of what others know about you, Adblocking is not so much about “privacy” as about user annoyance (measured in terms of distracting images cluttering webpages or simply in terms of long download times for webpages). In this sense, ABP may not qualify as a “technology of evasion,” strictly speaking. But, as explained below the fold, ABP does allow its users to “evade” some forms of online tracking by blocking the receipt of some, but not all, tracking cookies.
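Conceptually, a blocker like ABP works by matching each outgoing request URL against a filter list before the browser fetches it. A toy sketch of that matching step (the patterns below are invented for illustration, not drawn from any real filter subscription):

```python
import re

# Toy blocking rules in the spirit of Adblock-style filter lists
# (these patterns are invented for illustration).
BLOCK_PATTERNS = [r"/ads/", r"\btracker\b", r"doubleclick\.example"]

def should_block(url):
    """Return True if the request URL matches any blocking pattern."""
    return any(re.search(p, url) for p in BLOCK_PATTERNS)

print(should_block("http://cdn.example.com/ads/banner.gif"))  # True
print(should_block("http://news.example.com/article/42"))     # False
```

Because a blocked request is never sent at all, the blocking server also never gets the chance to set a tracking cookie — which is how a tool aimed at annoyance ends up doing some privacy work as a side effect.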

Cost: Like almost all other Firefox add-ons, both the ABP extensions and the filter subscriptions on which it relies (as described below) are free.

Popularity / Adoption: While there are a wide variety of ad-blocking tools available, Adblock Plus is far and away the leader. ABP has proven enormously popular since its release in November 2005 as the successor to Adblock, which was first developed in 2002 and reached over 10,000,000 downloads before being abandoned by its developer, and which even today garners nearly 40,000 downloads a week. This history of Adblock provides further details.

ABP was named one of the 100 best products of 2007 by PC World magazine and is now the #1 most downloaded add-on for Firefox, with over 500,000 weekly downloads, up significantly from just a few months ago. In a blog post last month, ABP creator Wladimir Palant estimated that “no more than 5% of Firefox users have Adblock Plus installed,” but that percentage is bound to grow as more people discover Adblock. As one indicator of ABP’s popularity, the number of Google searches for “Adblock” has nearly eclipsed the number of searches for “identity theft,” which seems like a far more serious concern than having to look at web ads.