Blog

A decade ago, before social media was a widespread phenomenon and blogging was still a nascent activity, detaining citizens for their online activity was nearly unheard of outside a handful of countries—namely China, Tunisia, Syria, and Iran. Ten years later, the practice has become all too common, and remains on the rise in dozens of countries. In 2017, the Committee to Protect Journalists found that more than seventy percent of imprisoned journalists were arrested for online activity, while Reporters Without Borders’ 2018 press freedom barometer cited 143 citizen journalists imprisoned globally, and ten killed. While Tunisia has inched toward democracy, releasing large numbers of political prisoners following the 2011 revolution, China, Syria, and Iran remain major offenders, and have now been joined by several other countries, including the Philippines, Saudi Arabia, and Egypt.

When we first launched Offline in 2015, we featured five cases of imprisoned or threatened bloggers and technologists, and later added several more. We hoped to raise awareness of their plight and advocate for their freedom, but we knew it would be an uphill struggle. In two cases, our advocacy helped to secure their release: Ethiopian journalist Eskinder Nega was released from prison earlier this year, and the Zone 9 Bloggers, also from Ethiopia, were acquitted in 2015 following a sustained campaign for their freedom.

Award-winning Ethiopian journalist Eskinder Nega on the power of the Internet and journalism.

Today, the situation in several countries is dire. In Egypt, where a military coup brought the country back toward dictatorship, dozens of individuals have been imprisoned for expressing themselves. Activist Amal Fathy was detained earlier this year after a video she posted to Facebook detailing her experiences with sexual harassment in Cairo went viral, and awaits trial. And Wael Abbas, an award-winning journalist whose experiences with censorship we’ve previously documented, has been detained without trial since May 2018. We also continue to advocate for the release of Alaa Abd El Fattah, the Egyptian activist whose five-year sentence was upheld by an appeals court last year.

Three new Offline cases demonstrate the lengths to which states will go to silence their critics. Eman Al-Nafjan, a professor, blogger, and activist from Saudi Arabia, was arrested in May for her advocacy against the country’s ban on women driving, which was repealed just one month later. Ahmed Mansoor is currently serving a ten-year sentence for “cybercrimes” in his home country of the United Arab Emirates after being targeted several times in the past for his writing and human rights advocacy. And Dareen Tatour, a Palestinian citizen of Israel, recently began a five-month prison sentence after several years of house arrest and a lengthy trial for content she posted on social media that had been misinterpreted by police.

Advocacy and campaigns on behalf of imprisoned technologists, activists, and bloggers can make a difference. In the coming months, we will share more details and actions that the online community can take to support these individuals, defend their names, and keep them safe.

Here’s the not-so-secret recipe for strong passphrases: a random element like dice, a long list of words, and math. And as long as you have the first two, the third takes care of itself. All together, this adds up to diceware, a simple but powerful method to create a passphrase that even the most sophisticated computer could take at least thousands of years to guess.

In short, diceware involves rolling a series of dice to get a number, and then matching that number to a corresponding word on a wordlist. You then repeat the process a few times to create a passphrase consisting of multiple words.
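The whole procedure fits in a few lines of Python. The wordlist below is a generated stand-in keyed by five-dice rolls (the real EFF lists pair each of the 7,776 possible rolls with a memorable English word), and the `secrets` module stands in for physical dice:

```python
from itertools import product
import secrets

# Stand-in wordlist: maps every five-dice roll ("11111".."66666") to a
# placeholder word. In practice you would load one of EFF's published
# lists, which assign a real, memorable word to each of the 7,776 rolls.
wordlist = {"".join(roll): f"word{i}"
            for i, roll in enumerate(product("123456", repeat=5))}

def roll_five_dice():
    """Simulate five fair six-sided dice with a cryptographically secure RNG."""
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))

def diceware_passphrase(num_words=5):
    """Roll five dice per word and join the matching words into a passphrase."""
    return " ".join(wordlist[roll_five_dice()] for _ in range(num_words))
```

Physical dice work just as well; the only requirements are that each roll be uniformly random and that you keep whatever words come up.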

In 2016, EFF debuted a series of wordlists that can be used with five six-sided dice to generate strong passphrases. This year, we’re upping our game. At Dragon Con 2018 in Atlanta over Labor Day weekend, EFF will be testing new wordlists optimized for three 20-sided dice. Since Dragon Con is largely a fantasy and science fiction convention, we’ve also created four new wordlists drawn from fan-created Wikia pages for Star Trek, Star Wars, Game of Thrones, and Harry Potter.

If you’re at Dragon Con, come visit our table on the second floor of the Hilton Atlanta. EFF and Access Now are teaming up to teach people how to create passwords using giant 20-sided dice. Attendees will also be encouraged to write sentences or little stories using the words to help remember their passphrases. Participants who successfully create a strong passphrase will receive a gift (while supplies last).

We’re also releasing the wordlists and password worksheet online, so folks at home can play along:

(Note: Any trademarks within the wordlist are the property of their respective trademark holders, who are not affiliated with the Electronic Frontier Foundation and do not sponsor or endorse these passwords.)

How We Created the Wordlists

A diceware passphrase is just a set of rare and unusual words that is easy for humans to remember, but hard for computers to guess. When we set out to create fandom-specific wordlists, we weren’t sure where to gather unique but relevant words. Official encyclopedias for Star Trek and Star Wars only had hundreds of entries—nowhere close to the thousands of possible rolls of three 20-sided dice.

So, we began to look at the FANDOM Wikia pages for various science fiction and fantasy universes. At first, we tried using the unique page titles for sections like Memory Alpha and Wookieepedia. While we were easily able to gather enough words for wordlists, too many of the words were complicated, obscure names or words from fictional languages. They would have been too difficult for most fans to memorize—and memorability is one of the key features of the diceware technique.

Instead, we homed in on some of the most popular pages for various fandoms, limiting ourselves to the main Star Wars films, a selection of Star Trek episodes from the original series and Discovery, the Harry Potter books, and a few episodes from each season of Game of Thrones. Then we filtered the text of each page down to its unique words. As a result, our wordlists are mostly regular English words with a distinct flavor of the corresponding fandom.

Each wordlist contains 4,000 unique words, with each word appearing twice to cover the 8,000 possible outcomes of three 20-sided dice.
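One natural way to turn a roll of three 20-sided dice into a position in an 8,000-entry table is base-20 arithmetic (a sketch; the published lists may arrange their entries differently):

```python
def roll_to_index(a, b, c):
    """Map three 20-sided dice (each showing 1-20) to a unique index 0..7999.

    Treats the roll as a three-digit base-20 number: 20 * 20 * 20 = 8000
    outcomes, exactly matching an 8,000-entry wordlist.
    """
    return (a - 1) * 400 + (b - 1) * 20 + (c - 1)

# The lowest and highest rolls map to the ends of the table:
# roll_to_index(1, 1, 1)    -> 0
# roll_to_index(20, 20, 20) -> 7999
```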

The Math

For this method, it’s important to use carefully constructed wordlists. It’s also important that the user not modify the words after they’ve been chosen or re-roll for new words because they don’t like the original ones that came up. This process relies on randomness—so, the second some words on the list are prioritized over others or changed in the generation process, the mathematical analysis starts to fall apart.

To see why, we need to understand how to analyze the security of a passphrase.

Let’s assume an attacker trying to crack our passphrase knows the method we used (in this case, a particular fandom wordlist and three 20-sided dice). We also assume the attacker is going to use the most effective attack for that particular method. For our method, that means trying all combinations of words in the wordlist, rather than, say, trying every individual letter combination.

Assuming the attacker knows that our passphrase is made up of words from a particular list, the security of the passphrase is determined by how many possibilities there are. In our wordlists, there are 4,000 words, and we’re choosing five of them, so the number of possibilities is 4,000 times 4,000 times 4,000 times 4,000 times 4,000, which is about 10^18. Around 10^18 to 10^24 possibilities is usually a good range for most people to aim for. The easiest way to increase this number is to add another word to the passphrase using the same dice-rolling method.
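That arithmetic is easy to check, assuming five words drawn from a 4,000-word list:

```python
import math

wordlist_size = 4000
words_in_passphrase = 5

possibilities = wordlist_size ** words_in_passphrase  # 4000^5
entropy_bits = math.log2(possibilities)

print(possibilities)  # 1024000000000000000, i.e. about 1.0 x 10^18
print(entropy_bits)   # about 59.8 bits of entropy
```

Each added word multiplies the possibilities by 4,000, adding roughly 12 bits of entropy.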

How long will it take an attacker to crack this password in practice? That depends on how fast the attacker’s computer is. A typical desktop computer today can try about 15 million passwords per second; the world’s fastest supercomputer can try about 92 trillion passwords per second.

If you assume the attacker has a copy of the wordlist you used and a computer that can try 15 million passwords a second, it would take them over two thousand years to try every possible combination, cracking the password in just over a thousand years on average.

The world’s fastest supercomputer could crack that same password in an hour and a half on average, but not to worry: adding two more words to the password increases that time to almost three thousand years for even the fastest supercomputer.
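Those estimates can be reproduced with a few lines of arithmetic, assuming an attacker who must search half the keyspace on average:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

def average_years_to_crack(possibilities, guesses_per_second):
    """An attacker finds the passphrase after searching half the keyspace on average."""
    return possibilities / 2 / guesses_per_second / SECONDS_PER_YEAR

DESKTOP = 15e6          # ~15 million guesses per second
SUPERCOMPUTER = 92e12   # ~92 trillion guesses per second

five_words = 4000 ** 5   # ~10^18 possibilities
seven_words = 4000 ** 7  # two extra words

# A desktop needs on the order of a thousand years for five words;
# even the fastest supercomputer needs millennia for seven.
```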

Going back to school? This is a perfect time for a digital security refresh to ensure the privacy of you and your friends is protected!

It’s a good time to change your passwords. The best practice is to have passwords that are unique, long, and random. In order to keep track of these unique, long and random passwords, consider downloading a password manager.

As an additional measure, you can add login notifications to your accounts so that you can monitor logins from devices you don’t recognize.

Applying for an internship, job, fellowship, or for further education at a school? Worried about an embarrassing photo being found by a recruiter? Now’s a great time to check your social media privacy settings. Helping your friends with this daunting task? Consider looking through the Security Education Companion’s lesson plan on Locking Down Social Media. If your study group, student organizing club, or class uses Facebook Groups, you can help members understand who can see what is posted.

Looking for an app that has disappearing messages that are actually just between you and your recipient? You might want to try an end-to-end encrypted messaging app like Signal. A service is no fun without friends on it: teach your friends and family how to use end-to-end encrypted messaging with the Security Education Companion’s lesson plan.

Exciting new technology in the classroom can also mean privacy violations, including the chance that your personal devices and online accounts may be demanded for searches. If you’re a student, parent, or teacher, we’ve written tips for you.

If you’re a teacher, librarian, professor, or extracurricular leader looking for fresh material, try out our lesson plans from the Security Education Companion at sec.eff.org! We have an assortment of lesson plans on basic digital security concepts, such as threat modeling, end-to-end encrypted messaging, and password managers.

In the last few years, we’ve discovered just how much trust — whether we like it or not — we have all been obliged to place in modern technology. Third-party software, of unknown composition and security, runs on everything around us: from the phones we carry around, to the smart devices with microphones and cameras in our homes and offices, to voting machines, to critical infrastructure. The insecurity of much of that technology, and the increasingly discomforting motives of the tech giants that control it from afar, have rightly shaken many of us.

But the latest challenge to our collective security comes not from Facebook or Google or Russian hackers or Cambridge Analytica: it comes from the Australian government. Its proposed “Assistance and Access” bill would require the operators of all of that technology to comply with broad and secret government orders, free from liability and hidden from independent oversight. Software could be rewritten to spy on end users; websites re-engineered to deliver spyware. Our technology would have to serve two masters: its users, and whatever a broad array of Australian government departments decides is in “the interests of Australia’s national security.” Australia would not be the last to demand these powers: a long line of countries is waiting to demand the same kind of “assistance.”

In fact, Australia is not the first nation, even in the West, to think of granting itself such powers. In 2016, the British government took advantage of the country’s political chaos at the time to push through, largely untouched, the first post-Snowden law to expand, rather than contract, Western domestic spying powers. At the time, EFF warned of its dangers — particularly orders called “technical capability notices,” which could allow the UK to demand modifications to tech companies’ hardware, software, and services to deliver spyware or place backdoors in secure communications systems. These notices would remain secret from the public.

Last year we predicted that the other members of Five Eyes (the intelligence-sharing coalition of Canada, New Zealand, Australia, the United Kingdom, and the United States) might take the UK law as a template for their own proposals, and that Britain “… will certainly be joined by Australia” in proposing IPA-like powers.

That’s now happened. This month, in the midst of a similar period of domestic political chaos, the Australian government introduced their proposal for the “Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018.” The bill unashamedly lifts its terminology and intent from the British law.

But if the Australian law has taken elements of the British bill, it has also whittled them into a far sharper tool. The UK bill created a hodge-podge of new powers; Australia’s bill recognizes the key new powers in the IPA and has zeroed in on their key abilities: those of assistance and access.

If this bill passes, Australia will — like the UK — be able to demand complete assistance in conducting surveillance and planting spyware, from a vast slice of the Internet tech sector and beyond. Rather than having to come up with ways to undermine the increasing security of the Net, Australia can now simply demand that the creators or maintainers of that technology re-engineer it as they ask.

It’s worth underlining here just how sweeping such a power is. To give one example: our smartphones are a mass of sensors. They have microphones and cameras, GPS locators, fingerprint and facial scanners. The behavior of those sensors is only loosely tied to what their user interfaces tell us.

Australia seeks to give its law enforcement, border and intelligence services, the power to order the creators and maintainers of those tools to do “acts and things” to protect “the interests of Australia’s national security, the interests of Australia’s foreign relations or the interests of Australia’s national economic well-being”.

The “acts and things” are largely unspecified — but they include enabling surveillance, hacking into computers, and remotely pulling data from private computers and public networks.

The range of people who would have to secretly comply with these orders is vast. The orders can be served on any “designated communications provider”, which includes telcos and ISPs, but is also defined to include a “person [who] develops, supplies or updates software used, for use, or likely to be used, in connection with: (a) a listed carriage service; or (b) an electronic service that has one or more end users in Australia”; or a “person [who] manufactures or supplies customer equipment for use, or likely to be used, in Australia”.

As Mark Nottingham, co-chair of the IETF’s HTTP group and member of the Internet Architecture Board, notes, this seems to include “Everyone who’s ever written an app or hosted a Web site — worldwide, since one Australian user is the trigger — is a potential recipient, whether they’re a multimillion dollar company or a hobbyist.” It includes Debian ftpmasters and Linux developers; Mozilla and Microsoft; certificate authorities like Let’s Encrypt; and DNS providers.

This is not an error: when we were critiquing a similarly broad definition in the UK’s IPA, we pointed out that the wording would allow the authorities to target a particular developer at a company (while requiring them not to inform their boss), or a non-technical bystander who would not know the impact of what they were being asked to do. Commentators close to GCHQ denied this would be the case and said that this would be clarified in later documents — but subsequent draft codes of practice actually doubled down on the breadth of the orders, saying that it was deliberately broad, and that even café owners who operated a wifi hotspot could be served with an order.

There are some signs that the companies affected by these orders have learned the lessons of the IPA, and pushed back during the Assistance and Access bill’s preliminary stages. Unlike the UK law, the bill contains clauses forbidding orders that would require providers to “implement or build [a] systemic weakness or systemic vulnerability into a form of electronic protection” (S.317ZG), and preventing actions in some cases that would cause material loss to others lawfully using a targeted computer (e.g. S.199(3), pg. 163). Companies have an opportunity to be paid for their troubles, and billing departments can’t be targeted. There is also some attempt to prevent government agencies from forcing providers to “make false or misleading statements or engage in dishonest conduct” (S.317E).

But these are tiny exceptions in a sea of permissions, and easily circumvented. You may not have to make false statements, but if you “disclose information”, the penalty is five years’ imprisonment (S.317ZF). What counts as a “systemic weakness” is determined entirely by the government. There is no independent judicial oversight. Even counselling an ISP or telco not to comply with an assistance or capability order is a civil offence.

If the passage of the UK surveillance law is any guide, Australian officials will insist that while the language is broad, no harm is intended, and the more reasonable, narrower interpretations were meant. But none of those protestations will result in amendments to the law: because Australia, like Britain, wants the luxury of broad and secret powers. There will be, and can be, no true oversight — and the kind of malpractice we have seen in the surveillance programs of the U.S. and U.K. intelligence services will spread to Australia’s law enforcement. Trust and security in the Australian corner of the Internet will diminish — and other countries will follow the lead of the anglophone nations in demanding full and secret control over the technology, the personal data, and the individual innovators of the Internet.

“The government,” says Australia’s Department of Home Affairs web site, “welcomes your feedback” on the bill. Comments are due by September 10th. If you are affected by this law — and you almost certainly are — you should read the bill, and write to the Australian government to rethink this disastrous proposal. We need more trust and security in the future of the Internet, not less. This is a bill that will breed digital distrust, and undermine the security of us all.

Sen. Ron Wyden has sent a letter to the U.S. Department of Justice concerning disruptions to 911 emergency services caused by law enforcement’s use of cell-site simulators (CSS, also known as IMSI catchers or Stingrays). In the letter, Sen. Wyden states that:

Senior officials from the Harris Corporation—the manufacturer of the cell-site simulators used most frequently by U.S. law enforcement agencies—have confirmed to my office that Harris’ cell-site simulators completely disrupt the communications of targeted phones for as long as the surveillance is ongoing. According to Harris, targeted phones cannot make or receive calls, send or receive text messages, or send or receive any data over the Internet. Moreover, while the company claims its cell-site simulators include a feature that detects and permits the delivery of emergency calls to 9-1-1, its officials admitted to my office that this feature has not been independently tested as part of the Federal Communication Commission’s certification process, nor were they able to confirm this feature is capable of detecting and passing-through 9-1-1 emergency communications made by people who are deaf, hard of hearing, or speech disabled using Real-Time Text technology.

Researchers of CSS technology have long suspected that such devices, even professionally designed and marketed ones, would have a detrimental effect on emergency services, and now—for the first time—we have confirmation.

It is striking, but unfortunately not surprising, that law enforcement has been allowed to use these technologies, and has continued to use them, despite the significant and undisclosed risk to public safety posed by disabling 911 service—not to mention the myriad privacy concerns related to CSS use. What’s more, a cell-site simulator doesn’t just disrupt service for the specific person or persons being tracked; it likely disrupts service for every mobile device in the area, as it tricks every nearby phone into connecting to the fake base station in search of the target phone. This could be especially dangerous during a natural disaster, when IMSI catchers are being used to locate missing persons in damaged buildings or other infrastructure; cutting off 911 service at such a time could gravely endanger others trapped in dangerous situations.

Harris Corporation claims that its devices have the ability to detect and deliver calls to 911, but it admits that this feature hasn’t been tested. Put bluntly, there is no way for the public or policymakers to know whether this technology works as intended. Thanks to the onerous non-disclosure agreements that customers of Harris Corp and other CSS vendors have regularly been required to sign, there is very little public information about how cell-site simulators work and what their capabilities are. Even if a security researcher did audit a CSS, the results would be unlikely to ever see the light of day.

Furthermore, even if Harris’ technology works the way they claim it does, they are far from the only manufacturer of CSS devices. There are several other companies that manufacture such technology and we know even less about the workings of their technologies or whether they have any protections against blocking 911 calls. Cell-site simulators are now easy to acquire or build, with homemade devices costing less than $1000 in parts. Criminals, spies, and anyone else with malicious intent could easily build a CSS specifically to disrupt phone service, or use it without caring whether it disrupts 911 service.

The only way to stop the public safety and privacy threats that cell-site simulators pose is to increase the security of our mobile communications infrastructure at every layer. All companies involved in mobile communications, from the network layer (AT&T, T-Mobile, Verizon, etc.) to the hardware layer (Qualcomm, Samsung, Intel) to the software layer (Apple, Google), need to work together to ensure that our cellular infrastructure is safe, secure, and private from attacks by spies, criminals, and rogue law enforcement. For their part, policymakers such as Sen. Wyden can help by continuing to provide transparency on how IMSI catchers work and are used, and by providing funds to upgrade our aging cellular infrastructure.

Late last week, Reuters reported that Facebook is being asked to “break the encryption” in its Messenger application to assist the Justice Department in wiretapping a suspect's voice calls, and that Facebook is refusing to cooperate. The report alarmed us in light of the government’s ongoing calls for backdoors to encrypted communications, but on reflection we think it’s unlikely that Facebook is being ordered to break encryption in Messenger and that the reality is more complicated.

The wiretap order and related court proceedings arise from an investigation of the MS-13 gang in Fresno, California, and are entirely under seal. So while we don’t know exactly what method for assisting with the wiretap the government is proposing Facebook use, if any, we can offer our informed speculation based on how Messenger works. This post explains our best guess(es) as to what’s going on, and why we don’t think this case should result in a landmark legal precedent on encryption.

We do fear that this is one of a series of moves by the government that would allow it to chip away at users’ security, done in a way such that the government can claim it isn’t “breaking” encryption. And while we suspect that most people don’t use Messenger for secure communications—we certainly don’t recommend it—we’re concerned that this move could be used as precedent to attack secure tools that people actually rely on.

The nitty gritty:

Messenger is Facebook’s flagship chat product, offering users the ability to exchange text messages and stickers, send files, and make voice and video calls. Unlike Signal and WhatsApp (also a Facebook product), however, Messenger is not marketed as a “secure” or encrypted means of communication. Messenger does have the option of enabling “secret” text conversations, which are end-to-end encrypted and make use of the Signal protocol (also used by WhatsApp).

At issue here is a demand by the government that Facebook help it intercept Messenger voice calls. While Messenger’s protocol isn’t publicly documented, we believe we have a basic understanding of how it works—and how it differs from actual secure messaging platforms. But first, some necessary background on how Messenger handles non-voice communications.

When someone uses Messenger to send a text chat to a friend, the user’s client (the app on their smartphone, for example) sends the message to Facebook’s servers, encrypted so that only Facebook can read it. Facebook then saves and logs the message, and forwards it on to the intended recipient, encrypted so that only the intended recipient can read it. When the government wants to listen in on those conversations, because Facebook sees every message before it’s delivered, the company can turn those chats over in real time (in response to a wiretap order) or turn over some amount of the user’s saved chat history (in response to a search warrant).

However, when someone uses Messenger to initiate a voice call, the process is different. Messenger uses a standard protocol called WebRTC for voice (and video) connections. WebRTC relies on Messenger to set up a connection between the two parties to the call that doesn’t go through Facebook’s servers. Rather—for reasons having to do with cost, efficiency, latency, and to ensure that the audio skips as little as possible—the data that makes up a Messenger voice call takes a shorter route between the two parties. That voice data is encrypted with something called the “session key” to ensure that a nosy network administrator sitting somewhere between the two parties to the call can’t listen in.

This two-step process is typical in Voice over IP (VoIP) calling applications: first the two parties each communicate with a central server which assists them in setting up a direct connection between them, and once that connection is established, the actual voice data (usually) takes the shortest route.

Step 1: A central server facilitates a key exchange between two devices. The server cannot decrypt these keys.

Step 2: The session keys are then used for encrypting the call between the devices.

But in Messenger, some information related to the voice call does go through Facebook’s servers, especially when the call is first initiated. That data includes the session key that encrypts the voice data.

Step 1: The two devices communicate with a Facebook central server, sending their keys through the server.

This differs in a major way from other secure messaging applications like Signal, WhatsApp, and iMessage. All of those apps use protocols that encrypt that initial session key—the key to the voice data—in a way that renders it unreadable by anyone other than the intended participants in the conversation.

So even though Facebook doesn’t actually have the encrypted voice data, if it did somehow have that data, we’re pretty sure that it would have the technical means to decrypt it. In other words, despite the fact that the voice data is encrypted all the way between the two callers, it’s not really what we refer to as “end-to-end encrypted” because someone other than the intended recipient of the call—in this case Facebook—could decrypt it with the session key.
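To make that point concrete, here is a toy sketch (not Messenger’s actual cipher, and deliberately not real cryptography: just an XOR keystream for illustration) showing that anyone who holds the session key can decrypt the traffic, whether or not they ever touched the voice path:

```python
import hashlib
import secrets

def keystream(key, length):
    """Toy keystream derived from SHA-256 (illustration only, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(data, ks):
    """XOR the data against the keystream; applying it twice round-trips."""
    return bytes(a ^ b for a, b in zip(data, ks))

# The session key is negotiated during call setup. In Messenger's design
# as described above, it passes through Facebook's servers in readable form.
session_key = secrets.token_bytes(32)

voice_packet = b"one frame of call audio"
ciphertext = xor_bytes(voice_packet, keystream(session_key, len(voice_packet)))

# Anyone who saw the session key during setup can recover the plaintext,
# even though the voice data itself never crossed their servers:
recovered = xor_bytes(ciphertext, keystream(session_key, len(ciphertext)))
assert recovered == voice_packet
```

An end-to-end design like Signal’s removes exactly this capability, by ensuring the server only ever sees the session key in a form it cannot read.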

So what’s at stake in this case:

Assuming our technical understanding is roughly correct, Facebook can’t currently turn over unencrypted voice communications to the government without additional engineering effort. The question is what sort of engineering would be required, and what effect it would have on user security, both within Facebook and more generally. We’ve been able to identify at least four possible ways the government might ask Facebook to assist with its wiretap:

1. Force Facebook to retain the session key to the suspect’s conversation and turn it over to the government. The government would then use that key to decrypt voice data separately captured by the subject’s ISP (likely a mobile provider in this case).

2. Force Facebook to construct a man-in-the-middle attack by directing the suspect’s phone to route Messenger voice data through Facebook’s servers, then capture and use the session key to decrypt the data.

3. Force Facebook to push out a custom update to the suspect’s version of Messenger that would record conversations on the device and send them directly to the government.

4. Demand that Facebook simply figure out how to record the suspect’s conversations and turn them over—decrypted—to the government.

In broad strokes, these scenarios look similar to the showdown between Apple and the FBI in the San Bernardino case: the government compelling a tech company to alter its product to effectuate a search warrant (here a wiretap order). One obvious difference on the legal front is that the Apple case turned on the All Writs Act, whereas here the government is almost certainly relying on the technical assistance provision of the Wiretap Act, 18 U.S.C. § 2518(4). As we saw in the Apple case, the All Writs Act is a general-purpose gap-filling statute that allows the government to get orders necessary to further existing court orders, including search warrants. The Wiretap Act’s technical assistance provision is narrower and more specific, requiring communication service providers to furnish “technical assistance necessary to accomplish the interception unobtrusively and with a minimum of interference with the services.”

What are the limits of this duty to provide necessary technical assistance, and would it extend to the four possible demands we listed above? While we’re not aware of a judicial decision that’s directly on point, the Ninth Circuit Court of Appeals wrote in a well-known case interpreting this “minimum of interference” language that private companies' obligations to assist the government have “not extended to circumstances in which there is a complete disruption of a service they offer to a customer as part of their business.” And, invoking case law on the All Writs Act, the court held that an “intercept order may not impose an undue burden on a company enlisted to aid the government.”

The government could of course be expected to argue that the options above are not unreasonably burdensome and that Messenger service would not be significantly disrupted. These arguments might have some force if Facebook’s participation is limited to preserving the session key for the suspect’s conversations. After all, this information already likely passes through Facebook’s servers in a way that Facebook could choose to capture it. One unknown is to what extent Facebook sees its role in facilitating Messenger calls as ensuring the security of the calls. If, as in the Apple case, Facebook tried to make it difficult to bypass security features in the system, cooperation would potentially be quite disruptive. But the government might say that in this context Facebook is much like a webmail provider such as Gmail that uses TLS to encrypt mail between the user and Google. Google has the keys to decrypt this data, so it can comply with a wiretap. Facebook’s role isn’t exactly the same, but it certainly can obtain the session keys.

In the scenario where Facebook is being asked to push a custom update, the company might raise more forceful arguments like those made by security experts in the Apple case about the risks of undermining public trust in automatic security updates. Computer security is hard, and using a trusted channel to turn a suspect’s phone into a surveillance device could have disastrous consequences. And if the government is simply telling Facebook to “figure it out” (option 4), Facebook might have reason to question the necessity of its assistance as well as its feasibility, since the government would not have demonstrated why other techniques would be unsuccessful in carrying out the surveillance.

All of this points to a strong need for the public to know more about what’s going on in the Fresno federal court. The Reuters article indicates that Facebook is opposing the order in some respect, and we at EFF would love the opportunity to weigh in as amicus, as we did in San Bernardino. We hope the company will do its utmost to get the court to unseal at least the legal arguments in the case. It should also ask the court to allow amicus participation on any issues involving novel or significant interpretations of the Wiretap Act or other technical assistance law.

Most important, we cannot allow the government to weaponize any ruling in this case in its larger push to undermine strong encryption and digital security. The government’s narrative has long been that there is a “middle ground,” and that companies should engage in “responsible encryption.” It loves to point to services that use TLS as examples of encrypted data that can yield to lawful court orders for plaintext. Similarly, in the San Bernardino case, the FBI did not technically ask Apple to “break the encryption” in iOS, but instead to reengineer other security features that protected that encryption. These are dangerous requests that still put users at risk, even though they don’t involve tampering with the math supporting strong encryption.

We will follow this case closely as it develops, and we’ll push back on all efforts to undermine user security.

Giving Privacy Badger a Jump Start:

Teaching new Badgers to block from the get-go

When new users try Privacy Badger, they often wonder why it isn’t blocking anything right away. That’s because Privacy Badger learns about trackers as you browse; until now, it hasn’t been able to block trackers on the first few sites it sees after being installed.

With today’s update, however, new users won't have to wait to see Privacy Badger in action. Thanks to a new training regimen, your Badger will block many third party trackers out of the box.

We haven’t changed how Privacy Badger works. Instead, we’ve given young Badgers a head start.

For people who already use Privacy Badger, essentially nothing has changed. Privacy Badger still uses heuristics to learn what's tracking you and to decide what to block. For new users, Privacy Badger will already be trained to block many common trackers as soon as you download it.

One thing that sets Privacy Badger apart is that, unlike most ad- or tracker-blocking extensions, it does not use “blacklists,” or hand-assembled lists of domains to block. Blacklists rely on people knowing about particular tracking domains, and they can quickly become out of date. In addition, the for-profit companies behind some blacklist-based extensions give certain trackers preferential treatment in exchange for payment. As a result, many blacklist-based extensions (particularly those involved in the Acceptable Ads initiative) do not protect your privacy by default.

Privacy Badger doesn’t use blacklists. Instead, it uses heuristics to identify tracking behaviors and learn who is tracking you. Once your Badger has seen the same third-party domain track you on three different websites, it will start blocking that tracker.
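Privacy Badger’s real heuristics are more involved (and live in its open-source codebase), but the core “three strikes” rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not EFF’s actual implementation; the class and method names are invented for the example:

```python
from collections import defaultdict

# A third-party domain gets blocked once it has been observed
# tracking the user on three different first-party sites.
BLOCK_THRESHOLD = 3

class BadgerSketch:
    def __init__(self):
        # Maps each tracker domain to the set of first-party sites
        # on which it has been caught tracking.
        self.sightings = defaultdict(set)

    def observe(self, tracker_domain, first_party_site):
        """Record that tracker_domain exhibited tracking behavior
        (e.g. set a uniquely identifying cookie) on first_party_site.
        Repeat sightings on the same site only count once."""
        self.sightings[tracker_domain].add(first_party_site)

    def is_blocked(self, tracker_domain):
        """A tracker is blocked after its third strike."""
        return len(self.sightings[tracker_domain]) >= BLOCK_THRESHOLD

badger = BadgerSketch()
badger.observe("tracker.example", "news.example")
badger.observe("tracker.example", "shop.example")
print(badger.is_blocked("tracker.example"))  # False: seen on only two sites
badger.observe("tracker.example", "blog.example")
print(badger.is_blocked("tracker.example"))  # True: third strike
```

The key design point is that counting distinct first-party sites, rather than raw requests, distinguishes cross-site trackers from third parties that only appear on a single site.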

Using Selenium for automation, our new training regimen has Privacy Badger visit a few thousand of the most popular websites on the Web and save what it learns. Then, when you install a fresh copy of Privacy Badger, it will be as if your Badger had already visited and learned from all of those sites. As you continue browsing, your Badger will keep learning and build a better understanding of which third parties are tracking you and how to block them.

Every time we update Privacy Badger, we’ll update the pre-trained list as well. If you already use the extension, these updates won’t affect you. After you install Privacy Badger, it’s on its own: your Badger uses the information it had at install time combined with what it learns from your browsing. Future updates to the pre-trained list won't affect your Badger unless you choose to reset the tracking domains it's learned about. And as always, this learning is exclusive to your browser, and EFF never sees any of your personal information.

What if I don't want a pre-trained Badger?

If you already have Privacy Badger installed, you don’t need to do anything. Your Badger will ignore the pre-trained list and keep working as it has.

If you are a new user, or installing Privacy Badger on a new device, you can choose to clear all tracker data (including the pre-trained list) by going to the options page, selecting the “Manage Data” tab, and clicking on “Remove all.”

If at any time you want to forget all data that your personal Badger has learned and start over with the latest pre-trained data, you can do this by clicking on the “Reset” button also found under the Manage Data tab on the options page.

Although nothing has changed about the way Privacy Badger learns, we hope this update will make it easier for new users to get the most out of Privacy Badger.

You may have arrived at this post because you received an email from a purported hacker who is demanding payment or else they will send compromising information—such as pictures sexual in nature—to all your friends and family. You’re searching for what to do in this frightening situation.

Don’t panic. Contrary to the claims in your email, you haven’t been hacked (or at least, that’s not what prompted that email). This is merely a new variation on an old scam, popularly called “sextortion.” It’s a type of online phishing that targets people around the world and preys on digital-age fears.

We’ll talk about a few steps to take to protect yourself, but the first and foremost piece of advice we have: do not pay the ransom.

We have pasted a few examples of these emails at the bottom of this post. The general gist is that a hacker claims to have compromised your computer and says they will release embarrassing information—such as images of you captured through your web camera or your pornographic browsing history—to your friends, family, and co-workers. The hacker promises to go away if you send them thousands of dollars, usually with bitcoin.

What makes the email especially alarming is that, to prove their authenticity, the scammers begin the email by showing you a password you once used or currently use.

Again, this still doesn't mean you've been hacked. The scammers in this case likely matched up a database of emails and stolen passwords and sent this scam out to potentially millions of people, hoping that enough of them would be worried enough and pay out that the scam would become profitable.

EFF researched some of the bitcoin wallets being used by the scammers. Of the five wallets we looked at, only one had received any bitcoin, in total about 0.5 bitcoin or $4,000 at the time of this writing. It’s hard to say how much the scammers have received in total at this point, since they appear to be using different bitcoin addresses for each attack, but it’s clear that at least some people are already falling for this scam.

Here are some quick answers to the questions many people ask after receiving these emails.

They have my password! How did they get my password?

Unfortunately, in the modern age, data breaches are common and massive sets of passwords make their way to the criminal corners of the Internet. Scammers likely obtained such a list for the express purpose of including a kernel of truth in an otherwise boilerplate mass email.

If the password emailed to you is one that you still use, in any context whatsoever, STOP USING IT and change it NOW! And regardless of whether or not you still use that password, it's always a good idea to use a password manager.

And of course, you should always change your password when you’re alerted that your information has been leaked in a breach. You can also use a service like Have I Been Pwned to check whether you have been part of one of the more well-known password dumps.
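For the curious, Have I Been Pwned’s companion Pwned Passwords service can be queried without revealing your password: you send only the first five hex characters of the password’s SHA-1 hash, and match the returned hash suffixes locally (a k-anonymity scheme). A rough Python sketch, using the service’s public range API; the function names here are our own:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest of the password into the
    5-character prefix sent to the API and the 35-character suffix
    that is matched locally (never transmitted)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many times the password appears in known breaches,
    without ever sending the full password or its full hash."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<35-char hash suffix>:<breach count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

# Example (requires network access):
# print(pwned_count("password"))  # a famously breached password
```

Because only a 5-character hash prefix leaves your machine, the service learns nothing usable about the actual password you checked.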

Should I respond to the email?

Absolutely not. With this type of scam, the perpetrator relies on the likelihood that a small number of people will respond out of a batch of potentially millions. Fundamentally, this isn't much different from the old Nigerian prince scam, just with a different hook. By default, they expect most people will not even open the email, let alone read it. But once they get a response, and a conversation is initiated, they will likely move into a more advanced stage of the scam. It’s better not to respond at all.

So, I shouldn’t pay the ransom?

You should not pay the ransom. If you pay, you’re not only losing money but also encouraging the scammers to continue phishing other people. And if you do pay, the scammers may use that as a pressure point to continue to blackmail you, knowing that you’re susceptible.

What should I do instead?

As we said before, stop using the password that the scammer used in the phishing email, and consider employing a password manager to keep your passwords strong and unique. Moving forward, you should make sure to enable two-factor authentication whenever it is an option on your online accounts. You can also check out our Surveillance Self-Defense guide for more tips on how to protect your security and privacy online.

One other thing to do to protect yourself is apply a cover over your computer’s camera. We offer some through our store, but a small strip of electrical tape will do.

We know this experience isn't fun, but it's also not the end of the world. Just ignore the scammers' empty threats and practice good password hygiene going forward!

Example 1

I am aware one of your passphrase: password. Lets get directly to point. Not a single person has compensated me to investigate about you. You do not know me and you are probably wondering why you're getting this e mail?actually, I actually installed a software on the adult vids (sex sites) site and you know what, you visited this web site to have fun (you know what I mean). When you were viewing videos, your internet browser initiated working as a Remote control Desktop that has a key logger which provided me access to your display screen and also web cam. Right after that, my software program collected your complete contacts from your Messenger, FB, and email . After that I created a double-screen video. 1st part shows the video you were viewing (you've got a good taste haha . . .), and 2nd part shows the view of your webcam, and its u. You do have only 2 alternatives. We are going to understand these types of choices in aspects:1st solution is to disregard this message. In this case, I am going to send your actual video clip to just about all of your contacts and thus you can easily imagine about the disgrace you feel. Not to mention should you be in a relationship, just how it will eventually affect?Number two choice will be to pay me $3000. We will think of it as a donation. As a consequence, I most certainly will without delay eliminate your videotape. You will keep going on your daily life like this never happened and you will not hear back again from me.You'll make the payment through Bitcoin (if you do not know this, search for "how to buy bitcoin" in Google).

Example 3

?hat's up.If you were more vigilant while playing with yourself, I wouldn't worry you. I don't think that playing with yourself is very bad, but when all colleagues, relatives and friends get video record of it- it is obviously for u.I adjusted virus on a porn web-site which you have visited. When the victim press on a play button, device begins recording the screen and all cameras on your device starts working.мoreover, my program makes a dedicated desktop supplied with key logger function from your device , so I could get all contacts from ya e-mail, messengers and other social networks. I've chosen this e-mail cuz It's your working address, so u should read it.Ì think that 730 usd is pretty enough for this little false. I made a split screen vid(records from screen (u have interesting tastes ) and camera ooooooh... its awful ᾷF)Ŝo its your choice, if u want me to erase this сompromising evidence use my ƅitсȯin wᾷllеt aďdrеss- 1JEjgJzaWAYYXsyVvU2kTTgvR9ENCAGJ35 Ƴou have one day after opening my message, I put the special tracking pixel in it, so when you will open it I will know.If ya want me to share proofs with ya, reply on this message and I will send my creation to five contacts that I've got from ur contacts.P.S... You can try to complain to cops, but I don't think that they can solve ur problem, the investigation will last for several months- I'm from Estonia - so I dgf LOL

Example 4

I know, password, is your pass word. You may not know me and you're most likely wondering why you are getting this e mail, correct?In fact, I placed a malware on the adult vids (porn material) web-site and you know what, you visited this website to have fun (you know what I mean). While you were watching video clips, your internet browser initiated operating as a RDP (Remote Desktop) that has a keylogger which provided me access to your screen and also webcam. Immediately after that, my software program gathered your entire contacts from your Messenger, social networks, as well as email.What did I do?I made a double-screen video. 1st part shows the video you were watching (you have a good taste lmao), and 2nd part shows the recording of your webcam.exactly what should you do?

Well, I believe, $2900 is a fair price for our little secret. You'll make the payment by Bitcoin (if you don't know this, search "how to buy bitcoin" in Google).BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge(It is cAsE sensitive, so copy and paste it)

Note:You have one day in order to make the payment. (I have a specific pixel in this email message, and at this moment I know that you have read through this email message). If I do not get the BitCoins, I will definitely send out your video recording to all of your contacts including family members, coworkers, etc. However, if I do get paid, I'll destroy the video immidiately. If you want to have evidence, reply with "Yes!" and I will certainly send out your video to your 14 contacts. This is the non-negotiable offer, so please don't waste my personal time and yours by responding to this email message.

Maybe you’re a beginner to web development, but you’ve done the hard work: you taught yourself what you needed to know, and you’ve lovingly made that website and filled it with precious content. But one last task remains: you don’t have that little green padlock with the word “secure” beside your website’s address. You don’t yet have that magical “S” after “HTTP”.

If you want to:

prove that your site is not being impersonated (or prevent some malicious actor from pretending to be you)

do this all for free

Then, this post about getting an HTTPS certificate is for you! If transport-layer security, certificate authorities, and HTTPS are new concepts for you, check out this comic from How HTTPS Works: https://howhttps.works/.

The details about how to enable HTTPS on your site depend crucially on your hosting environment. Depending on the provider and software your site is hosted with, HTTPS setup could range anywhere from automatic, to a single click, to impossible (if your hosting provider specifically doesn’t allow HTTPS). For many web site owners, the most challenging or unfamiliar step in enabling HTTPS is getting a certificate, a document issued by a publicly-trusted certificate authority. A valid certificate is required for browsers to confirm that encrypted connections to your site are secure.

EFF helped create a free, automated, publicly-trusted certificate authority called Let’s Encrypt, which is now the most-used certificate authority on the web. In this post, we’re going to provide advice about the process of getting a certificate from Let’s Encrypt. It’s a convenient option in many cases because it doesn’t charge money for the certificates, they’re accepted by all mainstream browsers, and the certificate renewal process can often be automated with EFF’s tool Certbot.

There are also many other certificate authorities (CAs), which have different policies and procedures for getting certificates. Most will expect you to pay for a certificate unless you have some other relationship with them (for example, through a university that gets free certificates from a particular CA, or if you use a web host that has a commercial relationship with a CA to let subscribers get certificates at no additional charge). For most purposes, you won’t get a different level of privacy or security protection by choosing one CA rather than another, so you can choose whichever public CA you conclude best meets your needs.

We’ve compiled some resources that we’re sharing here for beginners who are new to getting their own HTTPS certificates from the Let’s Encrypt Certificate Authority.

This blogpost isn’t a full tutorial, but it’s intended to help you get started on the journey to an HTTPS certificate:

1. Check whether your web host already supports HTTPS.

2. Confirm with your web hosting provider to see what options are available for HTTPS.

3. Learn what system and software your server uses.

4. Troubleshoot until you find an appropriate tutorial to get HTTPS certificates for your site.

5. Check that HTTPS is working!

We’re trying to improve this process to encrypt the web. When Let’s Encrypt first launched in 2016, only 40% of website connections were encrypted. Today, that number is as high as 73%. Help push the web toward 100% encryption and make the Internet more secure for everyone.

1. Check whether your web host already supports HTTPS.

There’s a chance that your web host already provides an option to obtain a certificate automatically, either from Let’s Encrypt or a different CA. Check whether this is described on your web host’s site or in its administrative interface. You can also check whether your host is on this master list of web hosts supporting Let’s Encrypt, and whether it has up-to-date instructions.

If you find your web host on the list of supported providers, or you already know that it has a tutorial or guide for using its HTTPS support, follow their instructions for enabling HTTPS on your site. If it is not supported, proceed below.

2. Confirm with your web hosting provider to see what options are available for HTTPS.

See if your site administration page has an option to enable HTTPS.

A lot of providers—including many that aren't on that community list—use software like cPanel on some of their hosting plans to let subscribers configure their hosting services. cPanel normally has a feature to let the subscriber automatically get a certificate for free (which may be either from Let's Encrypt or another CA).

Some of cPanel's competitors such as Plesk also have this configurable option. However, some hosts may be running outdated software or have deliberately disabled the ability to get a free certificate.

Get in touch with your provider and ask about their options for HTTPS support.

Many providers are already working on making HTTPS available, or may already provide an HTTPS feature. You can contact them and ask whether this might be an option.

“Dear [company],

I would like to obtain a free HTTPS certificate for my site. I was wondering if this is already in the works?

Thank you.”

Your provider may then be able to guide you about whether your hosting plan allows you administrative access to the server (in which case a tool like Certbot may be relevant for you). See the next step if this is your circumstance.

3. Learn what system and software your server uses.

If your hosting provider doesn’t integrate Let’s Encrypt but you do have administrative access to your server, you can use software to obtain and install a certificate. This is dependent on what software your web server is using, and what operating system your server is running on.

If the above sounds like unfamiliar jargon and you’re not sure what software or system you’re using, don’t worry! You can email your webhost to get that information.

Try using the following language in an email to your webhost (adapted from Matt Mitchell).

“Dear [company],

I am using your hosting service. I’m interested in using Certbot to get a free certificate from Let’s Encrypt. Can you send me the support webpage on how to do this? In particular, I’m wondering how I can SSH into your server from my computer. I also need to know what software the server is using, and what system the server is running on.

Thank you.”

If you know what software and operating system your web server is on and know how to use the command line, Certbot might be a good tool for you.

Check EFF’s Certbot site to generate instructions for getting Let’s Encrypt certificates on Unix servers that you administer. If you don’t see your server’s software and operating system reflected on Certbot, or are unable to get a certificate from following the Certbot instructions for your configurations, proceed to step 4.

4. Troubleshoot until you find an appropriate tutorial to get HTTPS certificates for your site.

This is the messy part: there are many, many tutorials out there for many possible situations. If you’re new to using your command line, we recommend calling a friend with experience in configuring a Let’s Encrypt certificate on their site to help. Be prepared to copy and paste error messages, and spend some time troubleshooting.

Try checking the service https://letsdebug.net/ for an analysis of your setup that can help point out a number of common problems. Try searching the Let’s Encrypt Community Forum for similar questions. If you don’t find the answer from the community’s responses, try submitting your own question to the Let’s Encrypt Community Forum, or calling a friend.

Some other things to look for as you set up HTTPS include:

Set your certificate to renew automatically; Let’s Encrypt certificates expire after 90 days. This means you won’t have to go through the pains of manually configuring a new HTTPS certificate, or leave your site showing an expired-certificate warning in web browsers because you forgot to repeat these steps three months from now.

Redirect your site to HTTPS by default, so that visitors don’t fall back to an unencrypted HTTP connection.

Check with your site host if a wildcard certificate is available for you. This just means that it’ll apply to all your sites that are subdomains of the same domain (if the domain is “example.com”, the subdomains “transactions.example.com” and “email.example.com” will be covered by a “*.example.com” wildcard certificate).

Once you’ve found a tutorial and enabled HTTPS, you’re almost there!

5. Check that HTTPS is working!

Now, visit your site in your own browser and confirm that the HTTPS configuration is working. If you run into problems, the troubleshooting resources from step 4, such as Let’s Debug and the Let’s Encrypt Community Forum, can help.
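If you like scripting, one simple self-check is to connect to your site over TLS and look at the certificate’s expiration date. This is an illustrative sketch using Python’s standard ssl module, not an official diagnostic tool; the function names are our own:

```python
import socket
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Convert a certificate's 'notAfter' string (e.g. 'Jan  1 00:00:00 2030 GMT',
    as returned by getpeercert()) into days remaining from `now` (epoch seconds).
    A negative result means the certificate has already expired."""
    if now is None:
        now = time.time()
    return (ssl.cert_time_to_seconds(not_after) - now) / 86400

def cert_days_remaining(hostname, port=443):
    """Connect to `hostname` over TLS, verifying the certificate chain
    against the system trust store, and report days until expiry."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return days_until_expiry(not_after)

# Example (requires network access):
# print(round(cert_days_remaining("example.com")))
```

If the connection fails with a certificate verification error, that itself is a sign your HTTPS setup needs attention, for example a missing intermediate certificate in the chain.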

This is the latest step in the web’s massive shift from non-secure HTTP to the more secure, encrypted HTTPS protocol. All web servers use one of these two protocols to get web pages from the server to your browser. HTTP has serious problems that make it vulnerable to eavesdropping and content hijacking. HTTPS fixes most of these problems. That’s why EFF and others have been working to encourage websites to offer HTTPS by default.

Users should be able to expect HTTPS by default.

And browsers have been an important part of the equation to push secure browsing forward. Last year, Chrome and Firefox started showing users “Not secure” warnings when HTTP websites asked them to submit password or credit card information. And last October, Chrome expanded the warning to cover all input fields, as well as all pages viewed over HTTP in Incognito mode.

Chrome’s most recent move to show “not secure” warnings on all HTTP pages reflects an important, ongoing shift for user expectations: users should be able to expect HTTPS encryption—and the privacy and integrity it ensures—by default. Looking ahead, Chrome plans to remove the “Secure” indicator next to HTTPS sites, indicating that encrypted HTTPS connections are increasingly the norm (even on sites that don’t accept user input).

For website owners and administrators, these changes come at a time when offering HTTPS is easier and cheaper than ever thanks to certificate authorities like Let’s Encrypt. Certificate Authorities (CAs) issue signed, digital certificates to website owners that help web users and their browsers independently verify the association between a particular HTTPS site and a cryptographic key. Let's Encrypt stands out because it offers these certificates for free and in a manner that facilitates automation. And, with EFF’s Certbot and other Let’s Encrypt client applications, certificates are easier than ever for webmasters and website administrators to get.