Blog

Do you need some stimulating reading material for this long holiday weekend? Here’s a great option: the latest issue of Timothy McSweeney’s Quarterly Concern, The End of Trust. This is a collection of essays and interviews about technology, privacy, and surveillance, featuring many EFF authors—including EFF Executive Director Cindy Cohn, Special Advisor Cory Doctorow, and board member Bruce Schneier.

The End of Trust is on sale online and in bookstores now, but it’s also free to download under a Creative Commons BY-NC-ND license. In addition to essays from EFFers, contributors include anthropologist Gabriella Coleman examining anonymity, Edward Snowden tackling blockchain, and EFF Pioneer Award winner Malkia Cyril zeroing in on the historical surveillance of black bodies.

EFF has read and reviewed every piece of The End of Trust, and it’s a smart, thought-provoking, and entertaining issue. We are proud to be part of this project, and hope you enjoy it.

Keeping up with Facebook privacy scandals is basically a full-time job these days. Two weeks ago, Facebook announced a massive breach with scant details. Then, this past Friday, it released more information, revising earlier estimates about the number of affected users and outlining exactly what types of user data were accessed. Here are the key details you need to know, as well as recommendations about what to do if your account was affected.

30 Million Accounts Affected

The number of users whose access tokens were stolen is lower than Facebook originally estimated. When Facebook first announced this incident, it stated that attackers may have been able to steal access tokens—digital “keys” that control your login information and keep you logged in—from 50 to 90 million accounts. Since then, further investigation has revised that number down to 30 million accounts.

The attackers were able to access an incredibly broad array of information from those accounts. The 30 million compromised accounts fall into three main categories. For 15 million users, attackers accessed names and phone numbers, emails, or both (depending on what people had listed).

For 14 million, attackers accessed those two sets of information as well as extensive profile details, including:

Username

Gender

Locale/language

Relationship status

Religion

Hometown

Self-reported current city

Birthdate

Device types used to access Facebook

Education

Work

The last 10 places they checked into or were tagged in

Website

People or Pages they follow

Their 15 most recent searches

For the remaining 1 million users whose access tokens were stolen, attackers did not access any information.

Facebook is in the process of sending messages to affected users. In the meantime, you can also check Facebook’s Help Center to find out if your account was among the 30 million compromised—and if it was, which of the three rough groups above it fell into. Information about your account will be at the bottom in the box titled “Is my Facebook account impacted by this security issue?”

What Should You Do If Your Account Was Hit?

The most worrying potential outcome of this hack for most people is what someone might be able to do with this mountain of sensitive personal information. In particular, adversaries could use this information to turbocharge their efforts to break into other accounts, particularly by using phishing messages or exploiting legitimate account recovery flows. With that in mind, the best thing to do is stay on top of some digital security basics: look out for common signs of phishing, keep your software updated, consider using a password manager, and avoid using easy-to-guess security questions that rely on personal information.

The difference between a clumsy, obviously fake phishing email and a frighteningly convincing phishing email is personal information. The information that attackers stole from Facebook is essentially a database connecting millions of people’s contact information to their personal information, which amounts to a treasure trove for phishers and scammers. Details about your hometown, education, and places you recently checked in, for example, could allow scammers to craft emails impersonating your college, your employer, or even an old friend.

In addition, the combination of email addresses and personal details could help someone break into one of your accounts on another service. All a would-be hacker needs to do is impersonate you and pretend to be locked out of your account—usually starting with the “Forgot your password?” option you see on log-in pages. Because so many services across the web still have insecure methods of account recovery like security questions, information like birthdate, hometown, and alternate contact methods like phone numbers could give hackers more than enough to break into weakly protected accounts.

Facebook stated that it has not seen evidence of this kind of information being used “in the wild” for phishing attempts or account recovery break-ins. Facebook has also assured users that no credit card information or actual passwords were stolen (which means you don’t need to change those), but for many that is cold comfort. Credit card numbers and passwords can be changed, but the deeply private insights revealed by your 15 most recent searches or 10 most recent locations cannot be so easily reset.

What Do We Still Need To Know?

Because it’s cooperating with the FBI, Facebook cannot discuss any findings about the hackers’ identity or motivations. However, from Facebook’s more detailed description of how the attack was carried out, it’s clear that the attackers were determined and coordinated enough to find an obscure, complex vulnerability in Facebook’s code. It’s also clear that they had the resources necessary to automatically exfiltrate data on a large scale.

We still don’t know what exactly the hackers were after: were they targeting particular individuals or groups, or did they just want to gather as much information as possible? It’s also unclear if the attackers abused the platform in ways beyond what Facebook has reported, or used the particular vulnerability behind this attack to launch other, more subtle attacks that Facebook has not yet found.

There is only so much individual users can do to protect themselves from this kind of attack and its aftermath. Ultimately, it is Facebook’s and other companies’ responsibility to not only protect against these kinds of attacks, but also to avoid retaining and making vulnerable so much personal information in the first place.

Earlier this week, Google dropped a bombshell: in March, the company discovered a “bug” in its Google+ API that allowed third-party apps to access private data from its millions of users. The company confirmed that at least 500,000 people were “potentially affected.”

Google’s mishandling of data was bad. But its mishandling of the aftermath was worse. Google should have told the public as soon as it knew something was wrong, giving users a chance to protect themselves and policymakers a chance to react. Instead, amidst a torrent of outrage over the Facebook-Cambridge Analytica scandal, Google decided to hide its mistakes from the public for over half a year.

What Happened?

The story behind Google’s latest snafu bears a strong resemblance to the design flaw that allowed Cambridge Analytica to harvest millions of users’ private Facebook data. According to a Google blog post, an internal review discovered a bug in one of the ways that third-party apps could access data about a user and their friends. Quoting from the post:

Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.

The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public.

It’s important to note that Google “found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.” Nevertheless, potential exposure of user data on such a large scale is more than enough to cause concern. A full list of the vulnerable data points is available here, and you can update the privacy settings on your own account here.


What would this bug look like in practice? Suppose Alice is friends with Bob on Google+. Alice has shared personal information with her friends, including her occupation, relationship status, and email. Then, her friend Bob decides to connect to a third-party app. He is prompted to give that app access to his own data, plus “public information” about his friends, and he clicks “ok.” Before March, the app would have been granted access to all the details that Alice had shared with Bob, even those not marked public. Similar to Facebook’s Cambridge Analytica scandal, a bad API made it possible for third parties to access private data about people who never had a chance to consent.
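Google hasn’t published the faulty code, but the behavior it describes can be sketched in a few lines of Python. Everything below (the field names, the visibility flags, the function names) is hypothetical; the point is simply how an app authorized by Bob could end up reading fields Alice shared only with friends:

```python
# Hypothetical sketch of the flawed visibility check; none of these
# names come from the actual Google+ API.

ALICE_PROFILE = {
    "name":         {"value": "Alice",             "visibility": "public"},
    "occupation":   {"value": "Engineer",          "visibility": "friends"},
    "relationship": {"value": "Single",            "visibility": "friends"},
    "email":        {"value": "alice@example.com", "visibility": "friends"},
}

def app_visible_fields_correct(profile):
    """What a third-party app authorized by a friend should see:
    only the fields the profile's owner marked public."""
    return {k: f["value"] for k, f in profile.items()
            if f["visibility"] == "public"}

def app_visible_fields_buggy(profile, viewer_is_friend):
    """The bug: the app inherits everything the *friend* could see,
    including friends-only fields the owner never made public."""
    allowed = {"public"}
    if viewer_is_friend:
        allowed.add("friends")
    return {k: f["value"] for k, f in profile.items()
            if f["visibility"] in allowed}
```

The correct check keys off the field owner’s public/private choice; the buggy one keys off whatever the authorizing friend happens to be able to see.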

Google also announced in the same post that it would begin phasing out the consumer version of Google+, heading for a complete shutdown in August 2019. The company cited “low usage” of the service. This bug’s discovery may have been the final nail in the social network’s coffin.

Should You Be Concerned?

We know very little about whose data was taken, or by whom, if it was taken at all, so it’s hard to say. For many people, the data affected by the bug may not be very revealing. However, when combined with other information, it could expose some people to serious risks.

Email addresses, for example, are used to log in to most services around the web. Since many of those services still have insecure methods of account recovery, information like birthdays, location history, occupations, and other personal details could give hackers more than enough to break into weakly protected accounts. And a database of millions of email addresses linked to personal information would be a treasure trove for phishers and scammers.

Furthermore, the combination of real names, gender identity, relationship status, and occupation with residence information could pose serious risks to certain individuals and communities. Survivors of domestic violence or victims of targeted harassment may be comfortable sharing their residence with trusted friends, but not the public at large. A breach of these data could also harm undocumented migrants, or LGBTQ people living in countries where their relationships are illegal.

Based on our reading of Google’s announcement, there’s no way to know how many people were affected. Since Google deletes API logs after two weeks, the company was only able to audit API activity for the two weeks leading up to the bug’s discovery. Google has said that “up to 500,000” accounts might have been affected, but that’s apparently based on an audit of a single two-week slice of time. The company hasn’t revealed when exactly the vulnerability was introduced.

Even worse, many of the people affected may not even know they have a Google+ account. Since the platform’s launch in 2011, Google has aggressively pushed users to sign up for Google+, and sometimes even required a Google+ account to use other Google services like Gmail and YouTube. Contrary to all the jokes about its low adoption, this bug shows that Google+ accounts have still represented a weak link for its unwitting users’ online security and privacy.

It’s Not The Crime, It’s The Cover-Up

Google never should have put its users at risk. But once it realized its mistake, there was only one correct choice: fix the bug and tell its users immediately.

Instead, Google chose to keep the vulnerability secret, perhaps waiting for the backlash against Facebook to blow over.


The blog post announcing the breach is confusing, cluttered, and riddled with bizarre doublespeak. It introduces “Project Strobe,” and is subtitled “Protecting your data...” as if screwing up an API and hiding it for months was somehow a bold step forward for consumer privacy. In a section headed “There are significant challenges in creating and maintaining a successful Google+ product that meets consumers’ expectations,” the company explains the breach, then gives a roundabout, legalistic excuse for not telling the public about it sooner. Finally, the post describes improvements to Google Account’s privacy permissions interface and Gmail’s and Android’s API policies, which, while nice, are unrelated to the breach in question.

Overall, the disclosure does not give the impression of a contrite company that has learned its lesson. Users don’t need to know the ins and outs of Google’s UX process, they need to be convinced that this won’t happen again. Google wrote a pitch when it was supposed to write an apology.

With its latest update, Privacy Badger now fights “link tracking” in a number of Google products.

Link tracking allows a company to follow you whenever you click on a link to leave its website. Earlier this year, EFF rolled out a Privacy Badger update targeting Facebook’s use of this practice. As it turns out, Google performs the same style of tracking, both in web search and, more concerning, in spaces for private conversation like Hangouts and comments on Google Docs. From now on, Privacy Badger will protect you from Google’s use of link tracking in all of these domains.

Google Link Tracking in Search, Hangouts, and Docs

This update targets link tracking in three different products: Google web search, Hangouts, and the Docs suite (which includes Google Docs, Google Sheets, and Google Slides). In each place, Google uses a variation of the same technique to track the links you click on.

Google Web Search

After you perform a web search, Google presents you with a list of results. On quick inspection, the links in the search results seem normal: hovering over a link to EFF’s website shows that the URL underneath does, in fact, point to https://www.eff.org. But once you click on the link, the page will fire off a request to google.com, letting the company know where you’re coming from and where you’re going. This way, Google tracks not only what you search for, but which links you actually click on.

Google uses different techniques in different browsers to make this type of tracking possible.

In Chrome, its approach is fairly straightforward. The company uses the new HTML “ping” attribute, which is designed to perform exactly this kind of tracking. When you click on a link with a “ping” attribute, your browser makes two requests: one to the website you want to go to, and another (in the background) to Google, containing the link you clicked and extra, encoded information about the context of the page.

A search result in Chrome (top) and its source code, including the tracking “ping” attribute (bottom).

In Firefox, things are more complicated. Hyperlinks there look normal at first. Hovering over them doesn’t change anything, and there’s no obvious “ping” attribute. But as soon as you click on a link, you’ll notice that the URL shown in the bottom left corner of the browser – the one you’re about to navigate to – has changed into a Google link.

Watch the URL in the lower left-hand corner: before clicking, it looks normal, but after pressing the mouse button down, it’s swapped out for a Google link shim.

How did that happen? For each link, Google has set a piece of JavaScript code to execute, in the background, on “mousedown”—the instant your mouse button is pressed on the link (but before you release the click). This code replaces the normal URL with a link shim that redirects you through Google on the way to your destination. Since your browser doesn’t navigate away from the search page until you release the mouse button, the code has more than enough time to slide a tracking link right under your nose.

In the background, JavaScript changes the link the instant that you click on it.
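If you’re curious what one of these redirect links actually contains, a few lines of Python can unwrap it. This is a sketch that assumes the common /url?q=... shape of Google’s redirect URLs; the exact host, path, and parameter names are assumptions and may vary:

```python
from urllib.parse import urlparse, parse_qs

def unwrap_link_shim(url):
    """Recover the real destination from a Google-style redirect link.
    Assumes the common /url?q=... shape; parameter names vary."""
    parts = urlparse(url)
    if parts.netloc.endswith("google.com") and parts.path == "/url":
        params = parse_qs(parts.query)
        for key in ("q", "url"):  # destination is usually in one of these
            if key in params:
                return params[key][0]
    return url  # not a recognized shim; leave it untouched
```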

Google Hangouts and the Google Docs Suite

In Hangouts and the Docs suite, the tracking is less sophisticated, but just as effective. Try sending a link to one of your friends in a Hangouts chat. Although the message might look like an innocuous URL, you can hover over the hyperlink to reveal that it’s actually a link shim in disguise. The same thing happens with links in comments on Google Docs, Google Sheets, and Google Slides. That means Google will track whether and when your friend, family member, or co-worker clicks on the link that you sent them.

These tracking links are easy to spot, if you know where to look. Simply hover over one and you’ll find that it’s not quite what you expect.

Hovering over the link in a Hangouts window (right) reveals that it actually points to a Google link shim (bottom).

These link shims may be more nefarious than their web search counterparts. When you use Google search, you’re engaging in a kind of dialog with the company. Many users understand, even if they don’t like it, that Google provides search results in exchange for ad impressions and collects a good deal of information as part of the bargain. But when you use Hangouts to chat with a friend, it feels more private. Google provides the chat platform, but it doesn’t serve ads there, and it shouldn’t have any business reading your messages. Knowing that the company is tracking the links you share, both when you send them and when they’re clicked, might make you think twice about how you communicate.

We will continue investigating the ways that Facebook, Google, Twitter, and others track you, and we’ll keep teaching Privacy Badger new ways to fight back. In the meantime, if you’re a developer and would like to help, check us out on GitHub.

Any work to find a more secure and user-friendly solution than passwords is worthwhile. However, the devil is always in the details—and this project is the work of many devils we already know well. The companies behind this initiative are the same ones responsible for the infrastructure behind security failures like SIM-swapping attacks, neutrality failures like unadvertised throttling, and privacy failures like supercookies and NSA surveillance.

Research on moving user-friendly security and authentication forward must be open and vendor- and platform-neutral, not tied to any one product, platform, or industry group. It must allow users to take control of our identities, not leave them in the hands of the very same ISP companies that have repeatedly subverted our trust.

Some providers have begun offering Single Sign-On, or SSO, which serves as an alternative to keeping track of multiple passwords. When you see options to “Sign in with Facebook” or “Sign in with Google” on other websites, that’s an example of SSO. A recent Facebook breach points to the pitfalls of an SSO system that is not well implemented, or that is not published and developed openly for community auditing; on the whole, though, this method can be a big win for usable security.

Project Verify appears to fall under this category. With Single Sign-On, you authenticate once to the SSO provider: a corporate server, a site using a standard like OpenID, or, in the case of Project Verify, your mobile phone provider. When you then log in to a separate site or app, it can request authentication from the SSO provider instead of asking you to register with a new username and password. You may then have to approve that login or registration with the SSO provider, sometimes using multi-factor authentication.

From EFF's own Privacy Badger to Tor for Android to Safari's Tracking Protection feature on iOS, users have more options than ever before to enhance their privacy when they go online with their mobile devices. They shouldn’t have to compromise that privacy in order to secure their accounts.

Stronger alternatives than SMS and email are available now: two-factor authentication through the U2F standard and Time-based One-Time Passwords (TOTP) each offer superior security. Neither one is perfect on its own—both suffer from accessibility concerns, and TOTP can be abused by advanced phishing attacks. However, neither of these standards compromises the user's privacy.
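To see how little personal information TOTP actually needs, here is a minimal sketch of RFC 6238 in Python. The one-time code is derived entirely from a shared secret and the clock; no phone number, SMS message, or carrier is involved:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based One-Time Password (RFC 6238) over HMAC-SHA1.
    The code depends only on a shared secret and the current time."""
    if for_time is None:
        for_time = time.time()
    # Count of time steps since the Unix epoch, as a big-endian counter.
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

An authenticator app and the server each run this same computation, and a login succeeds when the codes match within the current time step; the output can be checked against the published RFC 6238 test vectors.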

One of the few things we know about the details of Project Verify is that users will be identified using a combination of five data points: phone number, account tenure, phone account type, SIM card details, and IP address.

Two of these, phone number and IP address, raise particular concerns.

Tying accounts to phone numbers has generated a growing list of problems for users in recent years, including but not at all limited to the weakness of SMS verification mentioned above. An increasingly common scam involves criminals contacting providers with the name and phone number of an account they hope to hack into and claiming they either have a new phone or have lost their SIM card. When the provider sends or gives them the new SIM and deactivates the real user's original card, the hacker is then able to use SMS-based multi-factor authentication and/or account reset tools to take over the user's accounts.

The use of phone numbers for verification can cause other sorts of problems when a phone is lost, a phone number is changed, or an employee changes jobs but a service used for work requires SMS verification. In the case of a data breach, a personal phone number included in the data can expose a user to scams, harassment, or further hacking attempts.

In the U.S., social security numbers have already shown us what can happen when an assigned, nearly impossible-to-change number morphs into an essential identifier and a target for identity thieves. Our mobile phone numbers are going down the same road as our social security numbers, with the added problem that they were never private in the first place. Let's break that link, not strengthen it.

Further, the use of IP addresses could reveal quite a bit to wireless providers or even site operators, even if you are using privacy-protective measures like Tor or a mobile VPN. Tor users in particular should steer well clear of Project Verify’s service for this reason.

For Project Verify to work, your logins to third-party apps and websites must talk to your wireless provider, whether or not you're logging in over a VPN, Tor, a local wifi network, or even using a separate device altogether. With ISPs such as those in the Mobile Authentication Taskforce given free rein to track and sell users’ usage data, it is extremely dangerous to give them even more visibility into users’ logins on or off their network.

The Project Verify site states, "The platform will only share consumers' data with their consent." However, this still leaves a lot of wiggle room for carriers. Will consent be obtained through explicit and granular opt-in Project Verify functions, or will this be one of the many forms of consent buried in the user's subscriber agreement with no clear avenue for opt-out? Users should not have to worry about their data being collected by a third party simply to enable a more secure means of managing logins.

Ironically, we can't verify much about the project. What we know is that it's asking us to allow the same mobile carriers responsible for enormous, and intentional, privacy failures to become the gatekeepers of identity authentication in an attempt to combat a real problem with a solution that's both concerning and conveniently beneficial to them—which, if history is any indication, is a verifiably bad idea.

If you found yourself logged out of Facebook this morning, you were in good company. Facebook forced more than 90 million Facebook users to log out and back into their accounts Friday morning in response to a massive data breach.

According to Facebook’s announcement, it detected earlier this week that attackers had hacked a feature of Facebook that could allow them to take over at least 50 million user accounts. At this point, information is scant: Facebook does not know who’s behind the attacks or where they are from, and the estimate of compromised accounts could rise as the company’s investigation continues. The extent to which user data was accessed and accounts were misused is also unclear.

What is clear is that the attack—like many security exploits—took advantage of the interaction of several parts of Facebook’s code. At the center of this is the “View As” feature, which you can use to see how your profile appears to another user or to the public. (Facebook has temporarily disabled the feature as a precaution while it investigates further.) Facebook tracked this hack to a change it made to its video uploading feature over a year ago in July 2017, and how that change affected View As.

The change allowed hackers to steal Facebook “access tokens.” An access token is a kind of “key” that controls your login information and keeps you logged in. It’s the reason you don’t have to log into your account every time you use the app or go to the website. Apparently, the View As feature inadvertently exposed access tokens for users who were “subject to” View As. That means that, if Alice used the View As feature to see what her profile would look like to Bob, then Bob’s account might have been compromised in this attack.
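Facebook hasn’t described its token format, but a bare-bones sketch shows why a stolen access token is as good as a password. In this hypothetical scheme, the server signs a user ID and an expiry time; anyone who later presents the token is treated as logged in, and the password is never consulted:

```python
import base64
import hashlib
import hmac
import time

# Hypothetical signing key, held only by the server; not Facebook's design.
SIGNING_KEY = b"server-side secret"

def issue_token(user_id, ttl_seconds=3600):
    """Mint a signed token at login time; holding it proves a past login."""
    payload = f"{user_id}|{int(time.time()) + ttl_seconds}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token):
    """Return the user id if the token is genuine and unexpired, else None.
    Note that the password never appears: whoever holds the token is 'you'."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    user_id, expiry = payload.decode().rsplit("|", 1)
    if int(expiry) < time.time():
        return None  # expired
    return user_id
```

This is also why “resetting” access tokens works as a remedy: once the server stops honoring old tokens, every holder of a stolen token, like every legitimate user, is forced to log in again.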

This morning, in addition to resetting the access tokens and thus logging out the 50 million accounts that Facebook knows were affected, Facebook has also reset access tokens for another 40 million that have been the subject of any View As look-up in the past year.

Add “a phone number I never gave Facebook for targeted advertising” to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives’ own previous statements, the company has been using contact information that users explicitly provided for security purposes—or that users never provided at all—for targeted advertising.

Two-Factor Authentication Is Not The Problem

First, when a user gives Facebook their number for security purposes—to set up 2FA, or to receive alerts about new logins to their account—that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.)

But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It’s not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a problem with how Facebook has handled users’ information and violated their reasonable security and privacy expectations.

There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a “second factor” code when you log in. Other types of 2FA—like authenticator apps and hardware tokens—do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies—Google notably among them—also still follow that outdated practice.

Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook's repeated claims that we have “complete control” over our own information, but has also seriously damaged users’ trust in a foundational security practice.

Until Facebook and other companies do better, users who need privacy and security most—especially those for whom using an authenticator app or hardware key is not feasible—will be forced into a corner.

Shadow Contact Information

...if User A, whom we’ll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we’ll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call “shadow contact information,” about a month later.

This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends’ phone books.

Even worse, none of this is accessible or transparent to users. You can’t find such “shadow” contact information in the “contact and basic info” section of your profile; users in Europe can’t even get their hands on it despite explicit requirements under the GDPR that a company give users a “right to know” what information it has on them.

As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and “shadow” contact data from non-essential use would be a good start.

Facebook has a problem: an infestation of undercover cops. Despite the social platform’s explicit rules that the use of fake profiles by anyone—police included—is a violation of terms of service, the issue proliferates. While the scope is difficult to measure, EFF has identified scores of agencies who maintain policies that explicitly flout these rules.

Hopefully—and perhaps this is overly optimistic—this is about to change, with a new warning Facebook has sent to the Memphis Police Department. The company has also updated its law enforcement guidelines to highlight the prohibition on fake accounts.

This summer, the criminal justice news outlet The Appeal reported on an alarming detail revealed in a civil rights lawsuit filed by the ACLU of Tennessee against the Memphis Police Department. The lawsuit uncovered evidence that the police used what they referred to as a “Bob Smith” account to befriend and gather intelligence on activists. Following the report, EFF contacted Facebook, which deactivated that account. Facebook has since identified and deactivated six other fake accounts managed by Memphis police that were previously unknown.

In a letter to Memphis Police Director Michael Rallings dated Sept. 19, Facebook’s legal staff demands that the agency “cease all activities on Facebook that involve the use of fake accounts or impersonation of others.”

EFF has long been critical of Facebook’s policies that require users to use their real or “authentic” names, because we feel that the ability to speak anonymously online is key to free speech and that forcing people to disclose their legal identities may put vulnerable users at risk. Facebook, however, has argued that this policy is needed “to create a safe environment where people can trust and hold one another accountable." As long as they maintain this position, it is crucial that they apply it evenly—including penalizing law enforcement agencies who intentionally break the rules.

We are pleased to see Facebook acknowledge that fake police profiles undermine this safe environment. In the letter to the Memphis Police Department, Facebook further writes:

Facebook has made clear that law enforcement authorities are subject to these policies. We regard this activity as a breach of Facebook's terms and policies, and as such we have disabled the fake accounts that we identified in our investigation.

We request that the Police Department, its members, and any others acting on its behalf cease all activities on Facebook that involve impersonation or that otherwise violate our policies.

EFF raised this issue with Facebook four years ago, when the Drug Enforcement Administration was caught impersonating a real user in order to investigate suspects. At the time of the media storm surrounding the revelation, Facebook sent a warning to the DEA. But EFF felt that it did not go far enough, since many other agencies—such as police in Georgia, Nebraska, New York, and Ohio—were openly using this tactic, according to records available online. Recently, EFF pointed out to Facebook that this prohibition is not clearly articulated in its official law enforcement guidelines.

People on Facebook are required to use the name they go by in everyday life and must not maintain multiple accounts. Operating fake accounts, pretending to be someone else, or otherwise misrepresenting your authentic identity is not allowed, and we will act on violating accounts.

We applaud this progress, but we are also skeptical that a warning alone will deter the activity. While Facebook says it will delete accounts brought to its attention, too often these accounts only become publicly known (say in a lawsuit) long after the damage has been done and the fake account has outlived its purpose.

After all, law enforcement often already knows the rules, but chooses to ignore them. A slide presentation for prosecutors at the 2016 Indiana Child Support Conference says it all:

The presenter told the audience: “Police and Federal law enforcement may create a fake Facebook profile as part of an investigation and even though it violates the terms and policies of Facebook the evidence may still be used in court.”

The question remains: what action should Facebook take when law enforcement intentionally violates the rules? With regular users, that could result in a lifetime ban. But, banning Memphis Police Department from maintaining its official, verified page could deprive residents of important public safety information disseminated across the platform.

It’s not an easy call, but it’s one Facebook must address, and soon. Or better yet, maybe it should abandon its untenable policy of requiring authentic names from everyday people who don’t wear a badge.

An important example: a 15-year-old technology called Server Name Indication (SNI), which allows a single server to host multiple HTTPS web sites. Unfortunately, SNI itself is unencrypted and transmits the name of the site you’re visiting in the clear. That lets ISPs, anyone with access to a tap on an Internet backbone, or even someone monitoring a Wi-Fi network collect a list of the sites you visit. (HTTPS will still prevent them from seeing exactly what you did on those sites.)
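To make concrete just how exposed SNI is, here is a minimal Python sketch of the kind of parsing a passive eavesdropper could do. The SNI hostname sits in the TLS ClientHello, which is sent before any encryption is negotiated, so it can be read out of a captured packet with a few lines of byte-walking. (The `build_demo_client_hello` helper below is our own simplified stand-in for a real captured handshake, constructed only so the example is self-contained.)

```python
import struct

def extract_sni(client_hello: bytes):
    """Walk an unencrypted TLS ClientHello record and pull out the SNI hostname.

    Anyone on the network path (an ISP, a backbone tap, a Wi-Fi sniffer) can do
    this, because the ClientHello is transmitted before encryption begins.
    Returns None if no server_name extension is present.
    """
    # Skip: record header (5), handshake header (4), version (2), random (32)
    pos = 5 + 4 + 2 + 32
    # Variable-length session ID (1-byte length prefix)
    pos += 1 + client_hello[pos]
    # Variable-length cipher suite list (2-byte length prefix)
    (n,) = struct.unpack_from(">H", client_hello, pos)
    pos += 2 + n
    # Variable-length compression methods (1-byte length prefix)
    pos += 1 + client_hello[pos]
    # Extensions block: total length, then a sequence of (type, length, data)
    (ext_total,) = struct.unpack_from(">H", client_hello, pos)
    pos += 2
    end = pos + ext_total
    while pos < end:
        ext_type, ext_len = struct.unpack_from(">HH", client_hello, pos)
        pos += 4
        if ext_type == 0x0000:  # server_name extension (RFC 6066)
            # list length (2), name type (1), name length (2), then hostname
            (name_len,) = struct.unpack_from(">H", client_hello, pos + 3)
            return client_hello[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None

def build_demo_client_hello(hostname: str) -> bytes:
    """Construct a minimal, illustrative ClientHello carrying an SNI extension."""
    name = hostname.encode("ascii")
    # server_name extension: type 0, ext length, list length, name type 0, name length
    sni_ext = struct.pack(">HHHBH", 0x0000, len(name) + 5, len(name) + 3, 0, len(name)) + name
    extensions = struct.pack(">H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"              # legacy client version (TLS 1.2)
        + b"\x00" * 32           # random (zeroed for the demo)
        + b"\x00"                # empty session ID
        + b"\x00\x02\x13\x01"    # one cipher suite (TLS_AES_128_GCM_SHA256)
        + b"\x01\x00"            # one compression method (null)
        + extensions
    )
    handshake = b"\x01" + struct.pack(">I", len(body))[1:] + body  # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake
```

Note that the hostname comes out in plaintext with no keys and no cryptography at all, which is exactly the leak that encrypted SNI is designed to close.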

We were disappointed last year that regulations limiting collection of data by ISPs in the U.S. were rolled back. This leaves a legal climate in which ISPs might feel empowered to create profiles of their users’ online activity, even though they don’t need those profiles in order to provide Internet access services. SNI is one significant source of information that ISPs could use to feed these profiles. What’s more, the U.S. government continues to argue that the SNI information your browser sends over the Internet, as “metadata,” enjoys minimal legal protections against government spying.

Today, Cloudflare is announcing a major step toward closing this privacy hole and enhancing the privacy protections that HTTPS offers. Cloudflare has proposed a technical standard for encrypted SNI, or “ESNI,” which can hide the identities of the sites you visit—particularly when a large number of sites are hosted on a single set of IP addresses, as is common with CDN hosting.

Working at the Internet Engineering Task Force (IETF), Cloudflare and representatives of other Internet companies, including Fastly and Apple, broke a years-long deadlock in the deployment of privacy enhancements in this area.

Hosting providers and CDNs (like Cloudflare) still know which sites users access when ESNI is in use, because they have to serve the corresponding content to those users. But significantly, ESNI doesn’t give these organizations any information about browsing activity that they would not otherwise possess: they see the same parts of your Internet activity with or without ESNI. So the technology strictly decreases what other people know about what you do online. ESNI can also potentially work alongside VPNs or Tor, adding another layer of privacy protection.

ESNI is currently in an experimental phase. Only users of test versions of Firefox will be able to use it, and initially only when accessing services hosted by Cloudflare. However, every aspect of the design and implementation of ESNI is being published openly, so when it’s been shown to work properly, we hope to see it supported by other browsers and CDNs, as well as web server software, and eventually used automatically for the majority of web traffic. We may be able to help by providing options in Certbot for web sites to enable ESNI.
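For readers who want to try the experiment, enabling ESNI in Firefox’s test builds reportedly involves flipping two about:config preferences (preference names as described around the time of Cloudflare’s announcement; they may change as the standard evolves):

```
network.trr.mode                 2      (enable DNS over HTTPS)
network.security.esni.enabled    true   (turn on encrypted SNI)
```

DNS over HTTPS matters here because the ESNI keys are published in DNS, and because without it the hostname would simply leak through plaintext DNS queries instead.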

We’re thrilled about Cloudflare’s leadership in this area and all the work that they and the IETF community have done to make ESNI a reality. As it gets rolled out, we think ESNI will give a huge boost to the goal of reducing what other people know about what you do online.

Five of the largest U.S. technology companies pledged support this year for a dangerous law that makes our emails, chat logs, online videos and photos vulnerable to warrantless collection by foreign governments.

Now, one of those companies has made a meaningful pivot, pledging support instead for its users and their privacy. EFF appreciates this commitment and urges other companies to do the same.

Microsoft’s long-titled “Six Principles for International Agreements Governing Law Enforcement Access to Data” serves as the clearest set of instructions by a company to oppose the many privacy invasions possible under the CLOUD Act. (Dropbox published similar opposition earlier this year, advocating for many safeguards.)

Briefly, Microsoft’s principles are:

The universal right to notice

Prior independent judicial authorization and required minimum showing

Specific and complete legal process and clear grounds to challenge

Mechanisms to resolve and raise conflicts with third-country laws

Modernizing rules for seeking enterprise data

Transparency

To understand how these principles could serve as a bulwark for privacy, we have to first revisit how the CLOUD Act does the opposite.

Under the CLOUD Act, the president can enter into “executive agreements” that allow police in foreign countries to request data directly from U.S. companies, so long as that data does not belong to a U.S. person or person living in the United States. Now, you might wonder: Why should a U.S. person worry about their privacy when foreign governments can’t specifically request their data? Because even though foreign governments can’t request U.S. person data, that doesn’t mean they won’t get it.

As we wrote before, here is an example of how a CLOUD Act data request could work:

“London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect those messages. The London police would receive no prior judicial review for this request. The London police could avoid notifying U.S. law enforcement about this request. The London police would not need a probable cause warrant for this collection.

Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages could be used to criminally charge the U.S. person with potentially unrelated crimes, too.”

Many of the CLOUD Act’s privacy failures—failure to require notice, failure to require prior judicial authorization, and the failure to provide a clear path for companies and individuals to challenge data requests—are addressed by Microsoft’s newly released principles.

The Microsoft Principles

Microsoft’s principles encompass both itself and other U.S. technology companies that handle foreign data, including cloud technology providers. That’s because the principles sometimes demand changes to the actual executive agreements—changes that will affect how any company that receives CLOUD Act data requests can publicize, respond to, or challenge them. (No agreements have been finalized, but EFF anticipates the first one between the United States and the United Kingdom to be released later this year.)

Microsoft has committed to the “universal right to notice,” saying that “absent narrow circumstances, users have a right to know when the government accesses their data, and cloud providers must have a right to tell them.”

Providing notice is vital to empowering individuals to legally defend themselves from overbroad government requests. The more companies that do this, the better.

Further, Microsoft committed itself to “transparency,” saying that “the public has a right to know how and when governments seek access to digital evidence, and the protections that apply to their data.”

Again, EFF agrees. This principle, while similar to universal notice, serves a wider public. Microsoft wants not only to inform users whose data is requested, but also to make broader information available to everyone. For instance, Microsoft wants all cloud providers to “have the right to publish regular and appropriate transparency reports” that disclose the number of data requests a company receives, which governments are making requests, and how many users are affected by requests. This kind of information is crucial to understanding, for instance, whether certain governments make a disproportionate number of requests and, if so, which countries’ persons they are targeting. Once again, EFF has graded companies on this issue.

Microsoft’s interpretation of transparency also includes a demand that any executive agreement negotiated under the CLOUD Act must be published “prior to its adoption to allow for meaningful public input.” This is the exact type of responsible procedure that Congressional leadership robbed from the American public when sneaking the CLOUD Act into the back of a 2,232-page government spending bill just hours before a vote. Removing the public from a conversation about their right to privacy was unacceptable then, and it remains unacceptable now.

Microsoft additionally demanded that any CLOUD Act data requests include “prior independent judicial authorization and required minimum showing.” This is a big deal. Microsoft is demanding a “universal requirement” that all data requests for users’ content and “other sensitive digital evidence” be first approved by a judicial authority before being carried out. This safeguard is nowhere in the CLOUD Act itself.

One strong example of this approval process, which Microsoft boldly cites, is the U.S. requirement for a probable cause warrant. This standard requires a judicial authority, often a magistrate judge, to approve a government search application prior to the search taking place. It is one of the strongest privacy standards in the world and a necessary step in preventing government abuse. It serves as a bedrock to the right to privacy, and we are happy to see Microsoft mention it.

Elsewhere in the principles, Microsoft said that all CLOUD Act requests must include a “specific and complete legal process and clear grounds to challenge.”

Currently, the CLOUD Act offers individuals no avenue to fight a request that sweeps up their data, even if that request was wrongfully issued, overbroad, or illegal. Instead, the only party that can legally challenge a data request is the company that receives it. This structure forces individuals to rely on technology companies to serve as their privacy stewards, battling for their rights in court.

Microsoft’s demand is for a clear process to do just that, both for itself and for other companies. Microsoft wants every executive agreement data request to show that prior independent judicial review was obtained, that a serious crime (as defined by the executive agreement) is under investigation, and that the request is not part of an investigation that infringes human rights.

Finally, a small absence: EFF would like to see Microsoft commit to “minimization procedure” safeguards for how requested data is stored, used, shared, and eventually deleted by governments.

A Broader Commitment

Microsoft’s principles are appreciated, but it must be noted that some of their demands require the work of people outside the company’s walls. For example, lawmakers will decide how much to include the public when negotiating executive agreements under the CLOUD Act. And lawmakers will decide what actually goes in those agreements, including restrictions on the universal right to notice, language about prior judicial review, and instructions for legal challenges.

That said, Microsoft is powerful enough to influence CLOUD Act negotiations. And so are the four companies that, as far as we know, still unconditionally support the CLOUD Act—Apple, Google, Facebook, and Oath (formerly Yahoo). EFF urges these four companies to make the same commitment as Microsoft and to publish principles that put privacy first when responding to CLOUD Act data requests.

EFF also invites all companies affected by the CLOUD Act to publish their own sets of principles similar to Microsoft’s.

As for Microsoft, Apple, Google, Facebook, and Oath, we can at least say that some have scored well on EFF’s Who Has Your Back reports, and some have shown a healthy appetite for defending privacy in court, challenging government gag orders, search warrants, and surveillance requests. And, of course, if these companies falter, EFF and its supporters will hold them accountable.

The CLOUD Act has yet to produce its first executive agreement. Before that day comes, we urge technology companies: support privacy and fight this dangerous law, both for your users and for everyone.