Security for Journalists, Part Two: Threat Modeling

Jonathan Stray on how to protect yourself, your sources, and your scoop on sensitive stories

If you know that your work as a journalist will involve specific risks, you need a specific security plan. In part one of this series, we covered the digital security precautions that everyone in news organizations should take. If one of your colleagues uses weak passwords or clicks on a phishing link, more sophisticated efforts are wasted. But assuming that everyone you are working with is already up to speed on basic computer security practice, there’s a lot more you can do to provide security for a specific, sensitive story.

This work begins with thinking through what it is you have to protect, and from whom. This is called threat modeling and is the first step in any security analysis. The goal is to construct a picture—in some ways no more than an educated guess—of what you’re up against. There are many ways to do this, but this post is structured around four basic questions.

What do you want to keep private?

Who wants to know?

What can they do to find out?

What happens if they succeed?

After you answer these questions you will be able to make a security plan, a set of practices that everyone involved in the story must understand and follow. A security plan might involve specific software tools, but security doesn’t come from software. It comes from understanding your security problem thoroughly, and successfully executing a plan to mitigate your specific risks. Think process, not technology.

There are two ways to look at the goal of security work. A good security plan should plausibly reduce the risk of something bad happening, and mitigate the damage if the worst does happen—but security is also about making things possible. If you can ensure safety, you can do important work that would otherwise be too risky. Either way, the ethics of journalism demand that we attend closely to security.

What this Guide Is and Isn’t

This is a brief introduction to journalism information security. It goes into some detail on the technical aspects of information security because reporting now depends heavily on information technology. (And of course, Source is a data journalism resource.) It has much less to say on the legal, physical, and operational aspects of information security.

In reality, though, all aspects of security are indivisible. There is only one world. Not only are legal, physical, and operational concerns crucial to information security, but information security is only one part of security. This is not a guide to journalism security in general, which would include the physical safety of reporters and sources. For a broader introduction, see here.

In fact, this is not even a comprehensive reference on the technical aspects of information security. That is not possible in a single article. Nonetheless, I hope to provide a useful conceptual framework. My goal is to turn unknown unknowns into known unknowns.

Threat Modeling

There is no one-size-fits-all security. Threat modeling is a general approach to thinking through your security needs and coming up with a plan that suits your unique circumstance. To make this concrete, throughout the rest of this post I’ll refer to the following security scenarios. These are simplified versions of real situations that journalists have faced.

Police Misconduct.
You are reporting a story about local police misconduct. You have talked to sources including police officers and victims.
You would prefer that the police commissioner not know of your story before it is ready to be published, to avoid any possible interference.

Insider Trading Whistleblower.
You are reporting on insider trading at a large bank and talking secretly to a whistleblower who may give you documents. If they are identified before the story comes out, at the very least you will lose your source. The source might lose their job or face legal trouble.

Syria War Photographer.
You are a photojournalist in Syria with digital images you want to get out of the country. Some of the images may identify people working with the rebels who could be targeted by the government. A security failure could mean someone loses their life.

The threat modeling approach is based on the idea of asking what is threatened, by whom, and how. You can structure this thought process by asking the following questions.

What do you want to keep private?

Privacy is about protecting specific pieces of information. Aside from keeping someone’s identity secret, you may need to protect someone’s location, schedule, or address book. Address books or contact lists can be particularly sensitive, because they can reveal the identities of many people at once. The more specific you can be about what is secret, the better you can protect it.

In the Syria scenario, the photographs obviously cannot be public until the people in them are safe elsewhere. But although the digital files containing the photographs must not fall into the hands of the authorities, it’s not really the photographs themselves that need to be secret, but the identities of the people in them. It may also be necessary to protect the location and identity of the photographer.

In the whistleblower scenario it’s similarly important to protect your source’s identity. But in this case you also have to communicate with your source to receive the documents they want to give you. Encryption tools will protect the content of your communication from prying eyes in the bank’s IT department, but an eavesdropper will still be able to tell who you are talking to even if they don’t know what you are saying. For the police misconduct story, you may wish to keep the very fact that you are working on a story secret until you are ready to ask for comment.

Who wants to know?

This is often the easiest question to answer. The entity who wants to break your security is called the adversary. In our scenarios the adversaries are the Syrian government, the bank, and the police department. Very often, the adversary is the subject of the story. But it’s worth thinking about other adversaries. Who else would want what you have? Maybe a rival news organization would love to take a look at your juicy leak. Maybe you don’t want nosy customs agents to flip through the photos on your camera at a border crossing, or there could even be a government intelligence agency who might take an interest in your work.

Here are some general categories of adversaries to consider: the subject of the story, anyone with a financial interest, politicians, government agencies, police of various kinds, competitors, other sources you’ve talked to, and criminal organizations. This is not an exhaustive list. The more specific you can be about who poses potential threats, the better you can plan for them.

What can they do to find out?

Once you’ve worked out who your adversary is and what they want, you’re ready to ask how they can get it.

It’s kind of glamorous to imagine Syrian hackers or NSA snooping, but there are far more mundane methods. Different adversaries might search public materials for your traces, steal your laptop, file a subpoena, or call a new employee and ask for their password. There are many different kinds of “attacks” on your security.

A technical attack relies on hacking, installing malware, intercepting your communications electronically, or breaking codes. Remember, though, your adversary doesn’t get any points for difficulty, and non-technical attacks can leave you just as compromised.

A legal attack might involve a lawsuit to stop publication or compel disclosure, or a subpoena to force you or a third party to reveal information. Or you or your source could be arrested or otherwise detained.

A social attack is a con of some sort, relying on trust and deception. Your adversary could mount a phishing campaign, brazenly walk into your office during lunch and sit down at your computer, or call with a fake emergency to ask for passwords.

A physical attack involves theft of computers or data storage devices, installing malware on someone’s computer when they’re not looking, or generally interfering with your hardware or your person. A determined adversary can also just beat someone until they talk—a strategy which goes by the grim name of rubber-hose cryptanalysis when applied to “breaking” encryption.

You can’t know for sure what your adversary can try, but you can make some educated guesses. In the police misconduct scenario, your adversary is likely to use legal tools against you, and maybe even arrest you or your sources. In the insider trading scenario the bank might file a lawsuit, but their IT department will use technical tools to determine if someone is using a work computer to leak proprietary information. The current Syrian government has used both sophisticated technical attacks and horrific torture.

What happens if they succeed?

Your security plan is incomplete if you haven’t thought through what will happen when things go wrong.

To begin with, tracing through the consequences of a security failure can show you how to improve your protection. Good plans often include multiple overlapping security measures, an important strategy known as defense in depth.

But thinking through the consequences of a security breach is also an important reminder of what’s at stake. Security is never free: it costs time, money, and convenience. Suppose you have a hot piece of information but you’re away from your laptop where your secure communications tools are installed. Can you get away with sending an unencrypted text message, just this one time? That depends. How important is it that you send the message before you get back to your laptop, as compared to the possible consequences of interception?

Your analysis of consequences may also lead to the conclusion that there is no safe way to do a story. The ethics of journalism security are still evolving, but I propose that “do no harm” should be a basic principle.

It’s just a model

You won’t be able to answer the above questions definitively, because you probably don’t have solid information about your adversary’s capabilities and intentions. That’s why this is a threat “model.” Your security planning can only be as good as your assumptions; you can only protect against risks you’ve thought of. These questions are designed to make your assumptions explicit.

Even though you cannot know what your adversary will do, there are two types of information that are crucial to making educated guesses.

First, you can research your specific adversary. What is their history, what are they trying to achieve, and what have they done in the past? What are the relevant politics in your part of the world? Have other people faced similar adversaries? What happened in those cases? What does all this tell you about intentions and capabilities?

Second, you need to know what types of attacks are possible, and the difficulty or expense of each. You can see that someone could steal your laptop if they can get into your hotel room. Could someone unmask your source if they can get into your ISP? And if so, what could you do to stop them? How much would your adversary have to spend to buy malware that can steal your files remotely? These sorts of questions require detailed technical knowledge to answer. I’ll cover some of the basics in the rest of this article.

Digital Security Concepts

The threat modeling questions above are designed to give you a clearer picture of your security needs. The next step is to translate these needs into an actual plan. That requires understanding a variety of technical concepts. For the journalist working on a story with security risks, such technical knowledge is simply part of the job description. The sections below are a rapid refresher on digital communications technology.

How communications travel over the Internet

The Internet got its name from being a “network of networks.” Any communication between two points may travel through dozens of intermediate computers operated by different entities. Those computers belong to corporate networks, telecommunications companies, and technology companies that provide online services.

Suppose our insider trading source, alice@bigbank.com, sends an after-hours email from her desk at work to a reporter, bob@gmail.com, who is currently on his couch at home. The message first travels from Alice’s computer to the BigBank email server, then to the telecommunications company (telco) that BigBank pays for Internet service. The email probably passes through several different telcos before it finds its way to Google’s servers. When Bob checks his Gmail account from his couch, Google transmits the email from their server to Bob’s web browser, via several more telcos, ending with whatever company Bob pays for internet service to his home.

All of these intermediaries—BigBank, the telcos, and Google—may be able to read the contents of the email. The process is akin to passing a postcard from hand to hand over thousands of miles. Without encryption, everyone along the way can see what you wrote.

Fortunately there is already a lot of encryption built into the web: any time you go to a URL that starts with HTTPS you are using a secured connection between your computer and the server you are connecting to (the “S” in HTTPS stands for secure.) When Bob logs into Gmail, Google automatically redirects the connection to an HTTPS address, which means that the connection between Bob’s browser and Google’s servers is encrypted. But Google can still read your email, and other parts of the path from Alice’s office computer to Google may not be secure. For example, BigBank almost certainly keeps some record of the email.

Other messaging systems face similar issues. Your message passes from your computer or mobile device through dozens of different computers owned by different organizations. Some of those connections may be encrypted—such as connections from a browser using HTTPS—but some may not be, and usually any company which stores or processes your data has access to it.

The only reliable way to protect information transmitted over the internet is to encrypt it yourself before transmission, a practice called end-to-end encryption.

Privacy versus anonymity

Encryption will hide the contents of what you are saying, but not who you are saying it to.

Our Syrian reporter should be wary of any electronic communication with in-country sources. Even if she uses end-to-end encryption, anyone snooping on the network—say, the Syrian intelligence agency—will know who she is communicating with. This is a catastrophe if her source identities must remain secret, because monitoring her communications will produce an instant list of suspects for the authorities to investigate.

Just as it is possible to read the address on an envelope even when you can’t read the letter inside, there is a difference between the content of a communication—say, an email—and information about who sent it, who received it, when, using which server, and so on. This is the distinction between “content” and “metadata” that has been popularized by the recent NSA revelations (though the NSA also collects large amounts of message content.)

Metadata

Think of this envelope as your message passing through the Internet, including equipment that belongs to the telecommunications company as well as the bank’s corporate servers. All of these machines—and any person who has access to them—can read the address. They have to be able to read the address to deliver the message! But if you have sealed the envelope no one can read the contents of the letter. Encryption technology “seals the envelope,” protecting the contents of your communication. It does not hide the address.
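The envelope idea can be made concrete in a few lines of code. The sketch below uses a one-time pad, which is real (if impractical) cryptography, purely as a stand-in for whatever vetted cipher an actual tool would use; the sender, recipient, and message are of course invented. The point is that the routing metadata stays readable while the sealed content does not.

```python
import secrets

def seal(plaintext: bytes):
    """Toy one-time pad: XOR the message with a random key of equal length.
    Illustration only -- real tools use vetted, audited ciphers."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def unseal(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key recovers the original message.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

body, key = seal(b"Meet me at the parking garage.")

# The "envelope": addressing stays readable so the network can deliver it.
message = {
    "from": "alice@bigbank.com",  # metadata: visible to every intermediary
    "to": "bob@gmail.com",        # metadata: visible to every intermediary
    "body": body,                 # content: sealed, unreadable without the key
}

# An intermediary sees who is talking to whom, but only a key holder
# can recover what was said.
assert unseal(message["body"], key) == b"Meet me at the parking garage."
```

Every machine that relays this message can read the `from` and `to` fields, because it has to. Only someone holding the key can read the body.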

This is the distinction between privacy and anonymity. It may be enough to protect just the content of your communications, or you may need to keep the addresses secret as well.

Anonymity is best understood as the inability to link one identity to another, such as linking a pseudonym to a legal name or a location to an email address. There are different kinds of unlinkability which might be needed in different situations. For example, is it important that your adversary not know that your source is talking to you specifically, or will it be a problem if they talk to any reporter?

We’ll look at tools for both encryption and anonymity below.

Data at rest, data in motion

Data needs to be protected in two ways: when it’s being transmitted from one place to another, and when it’s stored somewhere. Your adversary could read your email by intercepting it as it is transmitted across the network, or they could steal your laptop and read the messages stored there.

The key to securing data at rest is to keep track of how many copies exist and where they are stored. In the paper era, intelligence agencies would number each physical copy of a classified document and keep records of its whereabouts. It’s much easier to make copies of digital material, for better or worse, but the same logic applies. How many copies are there of that sensitive photograph? It might be obvious that there’s a copy on your laptop, but what about your camera memory card? What about your laptop backup? USB sticks? Did you ever view the photo on your phone? And even on your laptop, how many copies of the same file do you have? Is there a copy in a temporary directory somewhere? Have you imported the data into different programs?

Your security plan needs to take into account all of the copies that need to be made, where each is stored, and how it is protected. Each copy of the data can be secured in a variety of ways. Again, consider threats of all kinds: technical, physical, legal, social.
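One practical way to audit copies on your own machine is to search by content rather than filename, since a copy in a temp folder or export directory may have been renamed. This is a minimal sketch, assuming only a directory to search and a known sensitive file:

```python
import hashlib
import os

def find_copies(root: str, target_path: str):
    """Walk a directory tree and list every file whose contents match
    target_path, regardless of filename -- renamed or stray copies
    show up alongside the original."""
    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    wanted = sha256(target_path)
    return [
        os.path.join(dirpath, name)
        for dirpath, _, names in os.walk(root)
        for name in names
        if sha256(os.path.join(dirpath, name)) == wanted
    ]
```

Note what this can’t do: it only finds whole-file duplicates on drives you can mount, not data embedded in other documents, cloud copies, or backups you no longer control.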

One of the simplest things you can do to secure data at rest is to use full disk encryption. As the name suggests, this encrypts everything on a drive, keyed to your password. Windows, Mac, and Linux have built-in tools for such encryption—but you do have to turn it on. Disk encryption is much stronger than a mere login password; without disk encryption an adversary can read your data merely by connecting the drive to a different computer. You can also encrypt external drives and USB sticks, as well as the entire contents of Apple and Android phones, and you should.

Full disk encryption is free and essentially zero inconvenience. Like two-step login, discussed in part one, there is no excuse not to use it. Every journalist should turn it on, on every computer and every phone they use.

You will also need to understand secure erase techniques and tools. There’s no use deleting a file just to have someone pull it out of the recycle bin or trash folder, and a dedicated adversary with access to your hardware can work wonders with appropriate data recovery technology. This is the difference between throwing a document out and feeding it into a shredder.

How to securely empty your trash.

On Mac computers, the Secure Empty Trash command will delete all files in the trash so that they cannot be recovered, but has no effect on any file previously deleted using the regular Empty Trash command. Use the Erase Free Space feature for that. On Windows, you can use the free Eraser utility to delete specific files, and CCleaner to clear all previously deleted data from your drive. While this will definitely prevent file recovery, note that your computer may still contain traces of deleted information, such as operating system logs, temporary files, or filenames in “Recently Opened” menus. (In general, it’s a good idea to use bland filenames for sensitive information.) If you are facing an adversary who might do forensic analysis on your hardware, you need to physically destroy the storage devices.
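For the curious, here is roughly what a secure-delete tool does under the hood: overwrite the file’s bytes before unlinking it. This is an illustrative sketch, not a replacement for a vetted eraser, and the caveat in the comment is the important part.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before removing it, so the old
    contents are not left sitting in unallocated disk space.

    Caveat: on SSDs and journaling filesystems, wear leveling and
    copy-on-write can leave stale copies that no user-space overwrite
    will touch. Full disk encryption or physical destruction is the
    stronger guarantee."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(length))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite out to the device
    os.remove(path)
```

This is the digital equivalent of the shredder: the file is destroyed in place rather than merely thrown out.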

Mobile devices

Smartphones are a security disaster. Not only do they store huge amounts of personal data, but they are inherently a location tracking device. Plus, the security tools for mobile devices are less mature than their desktop counterparts.

Consider for a moment what is accessible through your phone. Certainly your email and social media accounts. Perhaps your phone also has stored passwords to your online data in various cloud services. And of course, your phone has a copy of your address book. At minimum you should be using a lock code to protect your privacy, but this won’t stop a determined and technically savvy adversary from accessing your data, should they get their hands on your device. As noted above, you should also be encrypting all data on your Apple or Android phone. Phones also contain a microphone, which can be accessed remotely to listen in on you.

Even worse, phones produce a record of your location even if you are not using any mapping or GPS applications. In order to stay connected to the mobile network, your phone constantly switches between signal towers, each of which serves a particular small area. The phone company keeps this data, as well as a record of every text message, call, and data transmission, for billing purposes. Many phones and apps also store an internal record of GPS coordinates and wifi hotspots, and in some cases they transmit this information to corporate servers. It’s also possible for third parties to track the radio signals your phone emits using a device called an IMSI catcher, popular with both police and criminals.

In 2010, German politician Malte Spitz sued to obtain his location history data from the local phone company. Plotted on a map, and correlated with his public appearances and posts, the data paints an extremely detailed portrait of his life, activities, and associates, as an amazing visualization by Zeit Online shows.

The data generated by your smartphone can be extraordinarily revealing, especially this location data. Our crooked police commissioner doesn’t need to crack your anonymity scheme to figure out who your sources are, if they can just subpoena your phone records. Even if you didn’t call anyone, the location data can reveal who you met with—especially if it can be correlated with your source’s phone location.

It is possible to use so-called “burner” phones but it is actually quite tricky to set up and correctly use a phone that is not connected to your other identities. Unfortunately, the situation in which you’d most want a burner phone is when your adversary has some sort of access to phone company data (think subpoena), which is exactly the situation in which burners are hardest to use. For example, you can’t ever have your regular and burner phones turned on in the same place at the same time, and you can’t ever call your regular contacts from your burner phone. In many countries it is not even possible to activate a SIM card without giving your name or calling from another number.

Consider simply not using your phone for any kind of sensitive communications. Or even leaving it at home. Is this level of concern really necessary? Sometimes. It all depends on who your adversary is and what data they are likely to be able to access.

Document Metadata

Most electronic document files contain identifying information that you cannot normally see. (This “metadata” is distinct from the “metadata” at the center of the NSA revelations, which refers to the records of who talked to whom.) A Microsoft Word document stores the author name, creation date, and other information. PDF documents have been known to contain all sorts of hidden information. Photographs contain information about the camera, and if the camera has a GPS—or if the camera is a phone—then the digital file may also contain the location where the picture was taken.
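You can see this hidden information for yourself. A .docx file is really a zip archive, and the author name lives in a predictable place inside it. This minimal sketch pulls the creator field out of a Word document; point it at any .docx you have on hand.

```python
import re
import zipfile

def docx_author(path):
    """A .docx file is a zip archive; the author name and edit timestamps
    live in docProps/core.xml. Returns the <dc:creator> value, or None
    if the field is absent."""
    with zipfile.ZipFile(path) as z:
        xml = z.read("docProps/core.xml").decode("utf-8")
    match = re.search(r"<dc:creator>([^<]*)</dc:creator>", xml)
    return match.group(1) if match else None
```

The same core.xml file also records the last person to modify the document and when, none of which is visible in Word’s normal editing view.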

Document metadata can have damning consequences, as when Vice magazine inadvertently revealed the location of source John McAfee by publishing a photograph with location metadata.

Big oops.

There are tools to “scrub” or remove metadata from various kinds of files. These tools mostly work, when used correctly. But here’s a simpler, near-foolproof method to remove metadata: load the file of interest on a computer that has never been used by anyone you are trying to protect, and take screenshots. This ensures that only the information you can actually see makes it into the final file (but these files will have metadata created on the new computer, which is why it must be clean.)

Endpoint Security

There’s no need to intercept your email in transit if someone can just hack into your computer remotely and download your files. Unfortunately, this is not only possible, but there is a thriving market in tools to do just that. The problem of securing computers and the data on them—as opposed to securing the communications transmitted between computers—is known as endpoint security.

There are many ways a computer can be compromised. If the adversary can get a piece of software installed on your computer or mobile device without your knowledge, you lose. This can be accomplished with the unknowing cooperation of the user—as in a phishing attack—or by silently exploiting vulnerabilities remotely. Or they might find a way to get you to plug an infected USB stick into your computer. You have to assume that any USB stick that isn’t straight out of the packaging might contain malware, intentionally or inadvertently, including the cheap logo-imprinted USB sticks given out at conferences. The goal of the adversary may be to install a remote administration tool, a piece of software that can do things like record keystrokes (to reveal passwords) or even secretly transmit your files on command.

The most basic defenses against remote hacking are anti-virus tools and up-to-date software. It’s particularly important to keep your browser and operating system up to date. This will not protect you from all attacks, because not all known vulnerabilities have been disclosed and not all disclosed vulnerabilities have been fixed, but up to date software is generally going to be much safer than old versions.

If the adversary has physical access to your computer, anything is possible. They could install software, read files, or even install an inexpensive hardware device to record keystrokes. Generally, any device containing sensitive data or used for secure communication must be physically secured at all times. That usually means either on your person or locked up somewhere your adversary is unlikely to be able to get into. If you’re traveling somewhere where an adversary might want access to your laptop, don’t ever leave your laptop alone in your hotel room.

Will your adversary really go through the trouble of using sophisticated malware to break into your computers, or even secretly tamper with your hardware while you’re out? This depends on your circumstance, though as I said above, it can be very helpful to research what your adversary and similar adversaries have done before. This answer is a key part of your threat model. But note that remotely hacking into your computer is not only technically far easier than cracking properly encrypted data, but available as a turn-key capability to anyone who can afford the necessary software. Here’s a recent price list for one commercial provider.

Endpoint security is a difficult problem, one that is impossible to fully solve at this time. If you are concerned that your computer may be compromised, the most plausible defense is to buy a brand new computer, in person from a consumer retailer. At the very least, you should wipe the drives and re-install the operating system, or boot from a secured operating system like TAILS (see below.) In the worst case, when you must keep data from a determined, well-resourced, and technically sophisticated adversary, the only answer may be to store sensitive files on a computer that is never connected to any network at all. This is called an air gap and requires careful preparation and procedures.

Who Do You Trust?

All the encryption in the world won’t help you if you put your trust in the wrong place. When designing a security plan, you need to make choices about who you will share information and data with—both inside and outside your organization. This is sometimes called operational security, or opsec for short. The first rule of opsec is: don’t tell anyone.

Tell your editor, or not

Journalists have a long history of keeping things from their editors and colleagues, like the identities of sources. This can be frustrating, but it can also be more secure. Just as an adversary might get access to your files in a variety of ways, they might get information from anyone who knows it. An adversary can often do pretty much the same things to an editor that they can do to a reporter.

A compartmentalization policy, better known as “need to know,” restricts information access as much as operationally possible. Although this can be inconvenient and excludes people, it also spares them from needing to keep consequential secrets. Your security plan needs to specify who gets to know or store every kind of information that the reporting process might generate: source identities, but also notes, files, documents, communication records, and so on. You should have clear answers to the question of who knows which secrets, and who must protect what data.

This also means don’t tell your friends, don’t brag, don’t ever give unnecessary details. If you don’t absolutely need to share it with someone, don’t. This is a difficult policy to follow and denies you the many benefits of openness. Like all security, you have to decide whether you’re getting more from it than you’re paying for it, which in turn requires evaluating your risks.

Third-party storage

Storing confidential information with third parties can be risky. This includes every app or service you use to communicate, and every bit of your cloud storage. Aside from data you explicitly upload (say, to Dropbox) every online service creates implicit records such as logs of your access times, IP address, who you contacted, location traces, and so on.

Who has access to all of this? This is more than a question of “do you trust company X?” As usual, it’s helpful to imagine all the different ways your adversary might get access.

Can you ensure that every person working for the company is honest? Your private data is only as secure as the creepiest employee with access.

Will your service provider prove secure against technical attacks by your adversary? It may be difficult for you to know whether or not they are competent when it comes to security.

What will this organization do when served with a subpoena?

The legal risks of third-party storage are often the most damning concern. In the United States, the Fourth Amendment does not protect information that you have given to someone else to store or process. Of course you might also get served with a subpoena directly, but at least you’ll know about it and have the opportunity to contest it. A third party may not even notify you that anything has happened.

This is particularly an issue when it comes to third-party communication records, such as phone company records and ISP connection logs. In every country there are ways, legal or otherwise, to compel the production of such records. In 2012 the U.S. Department of Justice secretly obtained the calling records of more than 20 Associated Press phone lines for a period spanning several months. There are really only two ways to prevent such things: change government policy or don’t use the phone. The DOJ has since issued new guidelines saying that examining journalist communication records is an “extraordinary measure” that must be approved by the Attorney General. This may or may not be sufficient reassurance for you.

The global nature of the internet complicates this picture further. Where are the computers that store your information, physically? Where are the offices and employees of the companies that run these servers? Are they in a jurisdiction that is friendly or unfriendly to your adversary? If you can get a secure connection to a server located in and operated from a friendly country, the fact that someone else has your data may not be an issue. For example, Gmail may be a reasonable choice in Vietnam because Google does not have servers there, and all your connections to Gmail servers elsewhere leave the country via encrypted HTTPS connections. Google itself still has access to your email, but the Vietnamese authorities may be unable to compel Google to turn it over.

Many major tech companies now publish transparency reports that reveal how many times they’ve given user data to the authorities in each country.

Digital Security Tools

Every security professional gets asked about tools constantly, but software is not what gives you security. By the time you are selecting tools you should already have developed a solid threat model. You should know what you are protecting, and from whom, and how they might break your security.

Nonetheless, tools are important. Each has quirks and flaws and nuances, and is easier or harder to use correctly—and to get sources to use correctly. Here’s a brief overview of some of the most common tools. New tools are being developed all the time, and existing tools are occasionally shown to have serious vulnerabilities, so this is the part of this post I expect to go out of date most rapidly.

Cryptocat: easy encrypted chat

Cryptocat is probably the easiest of all security tools to use, which makes it a good choice for sources who are new to secure communication—that is, almost everyone. Simply enter your user name and the name of a chat room for instant secure group chat. You can also transmit files. Cryptocat is available as an extension for Chrome, Firefox and Safari, and as an iPhone app. It uses end-to-end encryption so not even the Cryptocat servers (or anyone who can hack into them) can read your messages, and does not log or store messages after they are transmitted.

After establishing a chat room, Cryptocat requires a simple authentication step to ensure you are really talking directly to your source. This protects against an adversary who may be able to alter the network traffic between you and your source, known as a man-in-the-middle attack. For example, a national government that controls all communications into and out of the country might try to intercept connections to the outside world, as Syria has done.

Cryptocat

Be aware that in-browser encryption is a relatively new concept, which means Cryptocat is less mature than other security tools. Previous versions had severe vulnerabilities. The code is open source and has since been audited by multiple people, which increases the likelihood that it is now secure. It is a great tool for many people owing to its simplicity of use, but more extensively vetted tools like OTR plus Pidgin or Adium (discussed below) are probably more appropriate against well-funded or technically sophisticated adversaries.

GPG: encrypted email

GPG is the gold standard for end-to-end secure email. It’s an open-source implementation of the PGP protocol, but you probably don’t want to use it directly. Instead, use an email application which supports GPG such as Thunderbird or Apple Mail, or a browser extension such as Mailvelope or Google’s forthcoming end-to-end Gmail encryption.

PGP is a powerful technology, perhaps the most mature and well-vetted end-to-end encryption protocol. It's probably as secure as mainstream encrypted communication can get, and secure against all adversaries when used appropriately. But GPG is an operationally complex tool. It takes more work than other technologies to set up and use correctly, including manual key generation and management. Unfortunately, these drawbacks are embedded in the 1990s-era design of PGP and are unlikely to be fixed.

Like all encryption technologies, GPG will not protect the identities of the people you are communicating with, only the message contents.
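For readers who want to see the moving parts, the basic GPG encryption step can be sketched as a short script. This is a sketch only; the email address and file name are placeholders, and the recipient's public key would need to be in your keyring already:

```python
import os
import shutil
import subprocess

def gpg_encrypt_cmd(recipient, path):
    """Build the standard gpg invocation that encrypts `path` to
    `recipient`'s public key. The --armor flag produces ASCII output
    (path.asc) that survives copy-and-paste into an email body."""
    return ["gpg", "--encrypt", "--armor", "--recipient", recipient, path]

cmd = gpg_encrypt_cmd("reporter@example.org", "notes.txt")
print(" ".join(cmd))

# With GnuPG installed, the file present, and the recipient's key
# imported, this performs the actual encryption:
if shutil.which("gpg") and os.path.exists("notes.txt"):
    subprocess.run(cmd, check=True)
```

In practice you would let your mail client or a browser extension do this for you; the point is that the underlying operation is a single, well-defined command.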

OTR: encrypted instant messaging

OTR stands for "Off the Record" instant messaging. Like PGP, it's a protocol rather than an application. It's supported by the Pidgin universal instant messaging app on all operating systems, Adium on the Mac, and various other clients on other platforms and devices. Note that this is completely different from the confusingly named "off the record" option in Google Chat and AIM, which only turns off chat logging and does not provide end-to-end encryption.

Secure instant messaging is a lot simpler to set up than secure email, and it can actually be more secure too. First, it is simpler to use so it’s less likely that you’ll make a bad mistake. Also, after the conversation is over the per-conversation encryption keys are destroyed so there is no way to recover what was said—assuming neither party kept a log of the conversation. Don’t keep chat logs; go into the settings on your IM software and make sure logging is turned off. This is probably the number one way that secure IM is compromised.

Like Cryptocat, you will need to do fingerprint verification with each new contact to ensure that you are really talking to whom you think you are. This authentication step is an easy, one-time process, and it's especially important if your threat model includes the possibility of a man-in-the-middle attack.
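A toy sketch of why fingerprint comparison catches a man-in-the-middle. The key bytes here are stand-ins, and real OTR fingerprints are derived from actual DSA keys, but the idea is the same: a hash condenses a long key into a short string that two people can read to each other over the phone or compare in person.

```python
import hashlib

def fingerprint(pubkey_bytes):
    """Reduce a public key to a short, human-comparable fingerprint.
    OTR clients display something similar: 40 hex digits in five
    groups of eight."""
    digest = hashlib.sha1(pubkey_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 8] for i in range(0, 40, 8))

real_key = b"-----BOB'S REAL PUBLIC KEY-----"   # stand-in bytes
mitm_key = b"-----ATTACKER SUBSTITUTE KEY----"  # stand-in bytes

print(fingerprint(real_key))

# If an attacker swapped in their own key mid-connection, the
# fingerprint each side sees would visibly differ:
assert fingerprint(real_key) != fingerprint(mitm_key)
```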

Tor: an anonymity building block

Tor provides anonymity online, or at least helps. The name stands for "The Onion Router" and that's sort of what Tor does: it routes your browser traffic through multiple computers using multiple layers of encryption. The end result is that the computer you connect to does not know where you are connecting from, and in fact no single computer on the internet has knowledge of the path your packets are taking. This obscures the IP address of the device you are using to connect to the Internet. Your IP address would otherwise reveal a lot about you, including your location. You can use Tor by downloading the Tor Browser, a custom version of Firefox.
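The onion layering can be illustrated with a toy sketch. This uses XOR in place of real encryption (Tor actually uses AES and negotiates keys per circuit), but it shows the essential idea: the client wraps the message once per relay, and each relay can remove only its own layer, so no single relay sees both the sender and the destination.

```python
import os

def xor_layer(data, key):
    # Toy "cipher": XOR with a repeating key. Illustration only,
    # not actual cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The client shares one key with each relay on its three-hop circuit.
guard_key, middle_key, exit_key = (os.urandom(16) for _ in range(3))

message = b"GET /article HTTP/1.1"

# Wrap the message in three layers, innermost (exit) layer first...
cell = xor_layer(xor_layer(xor_layer(message, exit_key), middle_key), guard_key)

# ...then each relay peels off exactly one layer as the cell passes:
cell = xor_layer(cell, guard_key)   # guard relay
cell = xor_layer(cell, middle_key)  # middle relay
cell = xor_layer(cell, exit_key)    # exit relay

assert cell == message
```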

Tor is a powerful technology, mature and well tested, and even the NSA has difficulty breaking it. However, it’s extremely important to understand what Tor does not do. For example, if you log into any of your regular accounts over Tor, the server on the other end (and anyone who can intercept the connection between the final Tor node and that server) still knows it’s you. That’s an obvious example, but there are many other behavioral risks. Tor does not hide the fact that you are using it, which can itself be used to identify you, as in this case.

Online anonymity is quite difficult. We are all embedded in a huge web of online accounts, logins, contact lists, habits, locations, and associations that is very difficult to break free from without leaving traces. However, there are a few recipes that are widely used. One key point: you can never reference or communicate with your non-anonymous identity from your anonymous identity. Ideally you should even access them from different computers, because browsers and operating systems have characteristic fingerprints, and because you don't want the details of your regular and anonymous identities stored in a single place.

Again, anonymity is all about linkability, which makes compartmentalization of information a key strategy. For a cautionary tale, consider how former CIA director David Petraeus was busted by correlating login IP addresses.

SecureDrop and GlobalLeaks: anonymous submission

SecureDrop and GlobalLeaks both exist to solve a specific problem: to allow people to submit material to journalists anonymously. Both are designed so that the source must access the drop site through the Tor Browser, meaning that not even the journalist knows who the source is—assuming the source does not identify themselves, voluntarily or accidentally. This is the anonymous leaking model of Wikileaks, operationalized into a robust tool and a set of recommended procedures.

GlobalLeaks may be simpler to set up and use, while SecureDrop has more carefully thought through the process of securely storing and reviewing submitted material. The recommended SecureDrop configuration includes an air-gapped viewing station running TAILS, and specifies a strict protocol for transferring material between machines using USB sticks and GPG keys. SecureDrop also supports simple two-way anonymized communication between the source and the journalist, through the drop site, which allows the journalist to ask questions about the source or material.

Many news organizations now run SecureDrop or GlobalLeaks servers, and yours should too.

Whisper Systems and Guardian Project: secure mobile communications

You’ll notice that all of the above tools run on desktop computers, not smartphones. Various organizations, including Open Whisper Systems and The Guardian Project, are trying to close that gap with an array of open-source applications for various platforms, based on the same secure standards as their desktop cousins.

Signal for Apple and Android provides easy encrypted voice calls and text messages. You can use your regular phone number, and the app lets you find friends who have installed Signal without transmitting your contact list to anyone else.

TextSecure for Android is an end-to-end encrypted replacement for standard text messaging. Includes group chat and emoji support :)

OSTel is a protocol for encrypted voice calls, supported by several apps.

All of these apps provide end-to-end encryption. They may also provide a certain level of unlinkability, because they bypass the traditional telephone network and so do not leave the usual call records. This does not automatically mean they leave no records at all, however. Any number of servers may keep logs. Stronger anonymity needs to start with a tool like Tor.

Jitsi: encrypted video calls

Jitsi provides end-to-end encrypted video calls and conferencing, à la Skype or Google Hangouts. You can download their app or use their new web client for easy encrypted video conferences. You can also run your own video conferencing server, which allows tighter access control and avoids the need to trust a third party.

It’s safe to say that open source encrypted multi-party video calling is a technology still in its infancy. Though the underlying secure protocols are well established, Jitsi has yet to go through an independent security audit. Therefore, it cannot be assumed to be secure against sophisticated attacks. As always, encryption alone cannot provide anonymity.

Silent Circle: commercial secure telephony

There is good reason to believe that open-source software can be more secure than proprietary systems. Open source software can be widely reviewed for bugs, and it’s hard for any one entity to introduce hidden back doors. On the other hand, usability, training, and support can be… variable. Silent Circle is a commercial secure voice, video, and text app for your phone. They offer a slick product with good support. And you can give one of their pre-paid “Ronin cards” to a source to get them started with secure communications, easily.

But does Silent Circle have the technical capability to build a secure product? Will they expose their users when compelled by a government, as Hushmail did and Lavabit did not? These questions are impossible to answer definitively, but at least the pedigree looks good: Silent Circle was co-founded by PGP creator Phil Zimmermann, which lends a certain amount of technical and ideological credibility.

TAILS: a secure operating system

There is no un-hackable operating system, but TAILS starts completely fresh from every boot; it doesn’t ever save anything to disk. You boot it from a DVD or USB stick inserted into almost any Intel-based computer (including Macs). Even if someone were able to hack into your computer while it was running TAILS, they would find no personal data there, and any back doors or malware they were able to install would simply disappear forever when you shut the computer off. Even better, everything you do online using TAILS is automatically routed through Tor.

TAILS is ideal if your threat model includes sophisticated attackers who might try to put malware on your computer, or if you need to make absolutely certain that your computer is not compromised but don’t have the budget to buy a fresh laptop (note that no operating system can protect against hardware tampering). It’s also great for setting up an anonymous communication machine that has no trace of your other identities. But as usual, be careful! There are a hundred ways you might accidentally reveal yourself.

Putting It All Together

Let’s say you’ve created a threat model for the story you’re working on. You’ve researched your adversaries and defined the technical, legal, social, and physical attacks you need to protect against. You know whether you need privacy or anonymity, and specifically what kind of privacy or anonymity. Now you’re ready to put together an actual security plan.

Security Recipes

To build your plan, try to rely on smaller “recipes” that solve particular sub-problems. If you do some research, you’ll find many such recipes. Here are a few widely used recipes for combining the above tools to achieve specific ends. Note: these recipes do not and cannot “solve” the problem of security. They are only building blocks that might be useful in your specific situation. Do your research before you use them.

Private chat: The journalist wishes to communicate with someone while hiding the content of the communication, but not the fact of communication. You might be in this situation if you need to communicate with a colleague in a context where the fact that you are in communication isn’t a security issue. In this case I would recommend that the parties use Cryptocat. There will still be logs showing that each party connected to the Cryptocat server, such as connection logs at either party’s employer or ISP. This is fine if either 1) the adversary cannot access those logs or 2) using Cryptocat is not itself revealing or suspicious.

Anonymous submission: The source wishes to deliver something, perhaps a file, to the journalist without leaving metadata traces that would identify them. The journalist can set up a SecureDrop server, which the source accesses using the Tor Browser. They must do so from a network that they are not normally associated with, such as open wifi at a cafe, because Tor use can itself be powerfully identifying. Be sure to store submitted material securely, and remove document metadata before publishing.
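The metadata-scrubbing step deserves illustration, because it's easy to forget. A sketch that strips the metadata entries from a .docx file, which is just a zip archive whose docProps/ entries hold the author name, organization, and edit timestamps. This is a simplification; for real scrubbing use a dedicated tool such as mat2, since metadata also hides in revision history and embedded media.

```python
import io
import zipfile

def strip_docx_metadata(src, dst):
    """Copy a .docx archive, dropping the docProps/ metadata entries."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if item.filename.startswith("docProps/"):
                continue  # author, company, timestamps live here
            zout.writestr(item, zin.read(item.filename))

# Demonstrate on an in-memory stand-in for a real .docx file.
src, dst = io.BytesIO(), io.BytesIO()
with zipfile.ZipFile(src, "w") as z:
    z.writestr("docProps/core.xml", "<author>Jane Reporter</author>")
    z.writestr("word/document.xml", "<doc>the story</doc>")

strip_docx_metadata(src, dst)
with zipfile.ZipFile(dst) as z:
    remaining = z.namelist()
print(remaining)
```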

Private, anonymous chat: Two parties wish to communicate such that an adversary who can log or intercept either end of the connection does not know who the other party is. Use an OTR-compatible instant messaging app, routed through Tor, connecting to a third-party chat server. The jabber.ccc.de chat server is popular, but several newsrooms also run their own chat servers. You’ll need to be sure that user names on that chat server cannot be linked to real identities, though you should be able to talk freely once the connection is established. Be sure to do fingerprint verification when you set up the channel. If your adversary is likely to try to compromise your endpoint security—that is, try to remotely install malware on your computer or phone—do all of this using the TAILS operating system, which includes an instant messaging client conveniently pre-configured to go through Tor. Remember to turn off logging on chat clients at both ends!

Anonymous web browsing: In this case the journalist or source wishes to access the web while leaving no traces of what they did on the computer they used. The best solution is to use TAILS. It routes all web access through Tor by default, and leaves no traces on the computer when it shuts down. It also prevents reading the contents of any drive installed on the computer, which helps prevent information leakage. As usual, the use of Tor can itself be identifying. Also, it should be obvious that no software can protect you from publicly posting information that can be used to identify you.

Travel without data: If you need to go somewhere where it may be difficult to maintain control of your devices, consider simply not bringing any sensitive information along. Get a new laptop, or take the hard drive out of your current laptop. Buy a burner phone, not to stay anonymous but so you aren’t walking in with your email and address book in your pocket. Use TAILS on your computer, combined with secure online storage, so nothing is ever stored on your person. Or if you must store information locally, use a single encrypted USB stick or memory card that you keep with you at all times, or a TAILS persistent volume.

Who must do what?

Your security plan must specify who does what, and how the security depends on that. Here are some questions your plan needs to answer, drawn from the concerns discussed above. This is not an exhaustive list.

How will all parties communicate?

Who will have access to what information? How?

What are the critical actions that must be performed correctly to stay secure?

Will you need to arrange face-to-face meetings? If so, how will you signal that? Where will you meet?

Where is confidential information stored, physically? How many copies?

Who has physical access to what equipment? Consider offices, servers, memory cards, etc.

What is the policy for archiving and deleting this data?

What information can be released publicly as part of reporting, and what information must remain confidential?

What about phones? Is their location tracking a problem?

Have you configured your equipment and software to be as safe as possible? For example, turn off GPS and chat logging.

What legal issues are also security issues in this case?

Have you consulted a lawyer? Do you need to?

Is the plan ethical? Do all parties understand the risks?

Could you bear the consequences of a major security failure?

Who is overall responsible for the correct implementation of the plan?

How will you communicate this plan to all parties?

Is this plan appropriate for everyone’s level of knowledge and skill?

Will you need to provide training or arrange for practice exercises?

I’ve emphasized the problem of communicating the plan. I regularly assign threat modeling homework, and almost always the major problem with my students’ plans is that they are too complex; they overestimate the ability of people with no prior information security experience to operate specialized tools.

Is your plan simple enough?

By now you realize that security is complex. “Security” isn’t even a single thing, but must always be understood relative to your goals and threats.

The required software can be difficult to set up. The protection that security technologies offer is riddled with caveats. And for some goals, like anonymity, a single mistake can defeat all the security work you’ve done. Meanwhile, journalists are almost always under time pressure and sometimes even in physical danger. And even if you become a security expert, your sources also need to use secure tools. Are they generally technologically fluent? Will they be able to reliably operate the software according to your plan? Can you even persuade them to attempt it?

Even experienced people frequently get security wrong. Wikileaks did not intend to release the full cache of 250,000 diplomatic cables, but through a complex series of events involving mistakes by both Wikileaks and The Guardian, they became public. If Wikileaks can’t get security right, what hope does anyone else have?

This is why asking whether everyone can really follow the plan has to be part of the plan. Once you’ve worked out your threat model and specified the tools and the process, you have to ask: where is someone likely to screw it up? And what happens when they do? Far too often, we are our own worst enemy when it comes to security.

The best plans are simple.

In the whistleblower scenario, you could choose not to use digital communication tools at all. Meet in person. Leave the phones at home or work. Take notes on paper. Keep the notes somewhere physically secure. If necessary, physically pass a USB drive or CD. Scrub document metadata before publishing.

In the Syria scenario, you could choose to delete any photo containing identifying information. Review the photos every evening when you get back to your hotel. Copy only “safe” photos to your laptop. Then wipe the camera memory card with a secure erase tool.

In the police misconduct scenario, this may be mostly a matter of operational security. No one who isn’t involved in producing the story should know of its existence before publication.
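The card-wiping step from the Syria scenario can be sketched in a few lines, with an important caveat spelled out in the comments: overwriting is what secure-erase tools do, but on flash media it is not a guarantee.

```python
import os
import tempfile

def overwrite_and_delete(path):
    """Overwrite a file with random bytes, then unlink it.

    A sketch of what secure-erase tools do. Caveat: SD cards and SSDs
    use wear-leveling, which can leave stale copies of the data
    elsewhere on the chip, beyond the reach of any overwrite. For
    truly sensitive material, full-card encryption from the start or
    physical destruction is safer.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))  # replace contents with random noise
        f.flush()
        os.fsync(f.fileno())       # force the write to the device
    os.remove(path)

# Demonstration on a throwaway file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"identifying photo data")
overwrite_and_delete(path)
wiped = not os.path.exists(path)
print(wiped)
```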

Paper can be a wonderful secure information storage and communication technology. Everyone understands exactly how it works, and it’s not possible to access it remotely.

Recap

This is a lot of information. There’s even more you’ll need to learn—as I said, no one can learn digital security in an afternoon. But hopefully you’ve now got the beginnings of a conceptual framework. Start with these questions:

What do you want to keep private?

Who wants to know?

What can they do to find out?

What happens if they succeed?

Answering these questions involves building up a picture of the security problems you face: a threat model. Building a realistic model requires that you understand both your adversary and the relevant technology. You will need to research what your adversary and similar adversaries have done in the past, to build up a model of what they want and what they might do. You will need to look under the hood of the technologies you intend to use, and understand how the technical intersects with the legal, physical, and social. After that—when you understand the security landscape you are operating in—you can move on to defining specific processes and tools to meet your needs. But plans and tools don’t make security; clear understandings and diligent habits do. The simplest plan you can come up with might be the best.

Resources

There’s a lot more to learn. Fortunately, there is starting to be a lot of good material on journalism information security.

The Digital First Aid Kit has resources for immediate digital security problems, such as a hacked account, lost device, or suspected malware.

The information security chapter of the Journalist Security Guide from the Committee to Protect Journalists is part of a larger guide with an especially good discussion of the physical safety of journalists and their sources.

The Security-in-a-Box guide comes in many different languages, and includes instructions on how to use most of the tools mentioned in this article.

Connect

OpenNews connects a network of developers, designers, journalists, and editors to collaborate on open technologies and processes within journalism. OpenNews believes that a community of peers working, learning, and solving problems together can create a stronger, more responsive journalism ecosystem. Incubated at the Mozilla Foundation from 2011-2016, OpenNews is now a project of Community Partners.