In an age of smartphones and social networking, e-mail may strike many as quaint. But it remains the vehicle that millions of people use every day to send racy love letters, confidential business plans, and other communications both sender and receiver want to keep private. Following last week's revelations of a secret program that gives the National Security Agency (NSA) access to some e-mails sent over Gmail, Hotmail, and other services—and years after it emerged that the NSA had gained access to full fiber-optic taps of raw Internet traffic—you may be wondering what you can do to keep your messages under wraps.

The answer is public key encryption, and we'll show you how to use it.

The uses of asymmetry

The full extent of the cooperation between the NSA and various technology companies is unclear. It will probably remain that way for the foreseeable future. For the time being, however, it seems likely that the standard cryptographic tools used to secure data "in flight"—that is to say, the SSL that protects data traveling between machines on the Internet—remain secure as long as certain best practices are used.

That protects against some threats, such as wholesale monitoring of Internet traffic of the kind the NSA is known to engage in, but it doesn't do anything to protect data that's "at rest." That is to say, SSL doesn't do anything to prevent a company like Google or Microsoft from handing over an archive of your e-mail in response to a court order. The e-mails are just lying around on some Google server somewhere.

If you don't want a government, service provider, employer, or unauthorized party to have access to your mail at rest, you need to encrypt the mail itself. But most encryption algorithms are symmetric, meaning that the encryption key serves a dual purpose: it both encrypts and decrypts. As such, people encrypting mail with a symmetric key would be able to decrypt other mail that used the same symmetric key. While this would protect against anyone without the key, it wouldn't be very useful as an encrypted e-mail system.

The solution to this is asymmetric cryptography. In asymmetric encryption there are two opposite keys, and a message encrypted with one key can only be decrypted with the other. The two keys are known as a private key, which as the name might suggest is kept private, and a public key, which is broadcast to the world. Each time you want to send an e-mail to someone, you encrypt it with the recipient's public key.
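The math behind this is easiest to see with a toy example. The sketch below uses RSA-style arithmetic with absurdly small primes — real keys are thousands of bits long, and nothing here is secure. It only illustrates how a message locked with the public key can be recovered only with the matching private key (the modular-inverse call requires Python 3.8+):

```python
# Toy RSA-style demonstration of asymmetric encryption.
# These tiny numbers are for illustration only and offer no security.

p, q = 61, 53              # two small primes (kept secret)
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: (e, n) is the public key
d = pow(e, -1, phi)        # private exponent: e * d ≡ 1 (mod phi)

message = 42                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the public key
plaintext = pow(ciphertext, d, n)  # decrypt with the private key

print(ciphertext)  # some number other than 42
print(plaintext)   # 42 — only the private key recovers the message
```

Real OpenPGP implementations layer padding, session keys, and symmetric ciphers on top of this primitive; GnuPG handles all of that for you.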

Asymmetric encryption is also used to perform mail signing. For this, the mail sender encrypts a hash, or mathematical fingerprint, of their file, producing a signature. Hashes are designed so that any small change to the message's text will produce a different hash value. Anyone reading the mail can then decrypt the signature using the sender's public key, giving them the original hash value. They can then compute the hash value of the mail they received and compare the two. If the values are the same, the message hasn't been modified. If they're not, it has—and we'll see the uses of this later on.
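The "small change, different fingerprint" property is easy to check with Python's standard hashlib module (SHA-256 is used here purely as an illustrative hash; in a real signature it is this digest that gets encrypted with the sender's private key):

```python
import hashlib

msg = b"Meet me at noon."
tampered = b"Meet me at noon!"  # a single character changed

h1 = hashlib.sha256(msg).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

# Any small change to the text yields a completely different fingerprint.
print(h1 == h2)  # False
```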

Making things even more complex, having encryption support isn't itself enough. To a great extent, you don't control the things that are in your own inbox. That's all mail that someone else has sent you. If you want your inbox to contain encrypted mail that only you can read, you need to be sure that people sending you mail are encrypting that mail when they send it. And if you want to be sure that everything in your sent mail folder is encrypted, you'll need to send other people encrypted mail.

As a result, e-mail encryption is not something you can impose unilaterally. To protect the contents of your account, you need to ensure that everyone you communicate with is in a position to handle encrypted mail—and is willing to use that ability.

Finally, e-mail encryption doesn't encrypt everything. Certain metadata—including e-mail addresses of both sender and recipient, time and date of sending, and the e-mail's subject line—is unencrypted. Only the body of the mail (and any attachments) gets protected.

If you're happy with these constraints, e-mail encryption is for you. Unfortunately, it can be complicated to use.

Cutting through the complexity

Few e-mail programs have PGP encryption features enabled by default. And even if they do, end users must still navigate a long and confusing series of steps. Tasks include generating the key pair that will lock and unlock the communications and storing the private key in a location where no one else can get it. It also requires securely sharing a public key with every single person who wants to send you a private e-mail and securely getting a unique public key from each person you want to send encrypted e-mail to. No wonder most people—reportedly including Glenn Greenwald, the Guardian reporter who exposed aspects of the secret NSA dragnet—need time to get up to speed.

Fortunately, free e-mail encryption programs are available for all major operating systems, and the ability to use them effectively isn't beyond the grasp of average computer users if they know where to look. What follows is a set of step-by-step instructions for using GnuPG, the open-source implementation of the PGP encryption suite, to send and receive encrypted e-mails on machines running Microsoft Windows and Mac OS X.

After that, we'll show readers how to use a similar crypto standard called S/MIME, which may prove simpler to deploy because it is already built into many desktop and mobile e-mail clients, including Outlook and Thunderbird. (Interested in S/MIME? Skip directly to page three.)

Linux will be touched on only briefly because much of the functionality is already included in various distributions and because many Linux users already have PGP down cold. (Users are invited to provide Linux instructions and screenshots in the comments following this article.)

PGP on Windows

The basic element you'll need to encrypt mail is software to generate and manage your key pair and make it work with whatever e-mail program you happen to use. On Windows, there's no shortage of proprietary apps that will do both, with Symantec's PGP Desktop E-mail being perhaps the best known. There's nothing wrong with this offering, but it's almost $200 for a single-user license. This tutorial will instead focus on the open-source Gnu Privacy Guard, which is available for free on Windows, Mac, and Linux platforms.

GnuPG, or simply GPG, is still primarily a command-line tool, meaning it lacks the graphical interface many end users feel more comfortable with. Rather than learn a long list of GPG commands, many e-mail users are better off installing a graphical implementation of GPG. On Windows, Gpg4win will give you everything you need to generate strongly encrypted messages that can be sent and later decrypted by the intended receiver using standard e-mail programs.

At the time of writing, the most recent version of Gpg4win is 2.1.1, and it's available here. After downloading such a sensitive piece of software, you'll want to confirm the installer hasn't been tampered with and truly came from Gpg4win rather than a site masquerading as gpg4win.org. To do that, check the SHA1 checksum of the downloaded file and make sure it matches the hash—a94b292c8944576e06fe8c697d5bb94e365cae25—listed on the Gpg4win download page. For those who prefer a graphical interface, install and open HashCalc. In the "data" box, navigate to the folder where the downloaded gpg4win-2.1.1.exe file is located. In our case, since the SHA1 hash calculated by HashCalc matches the SHA1 digest provided on the Gpg4win download page, we have a high degree of confidence the file we're about to install is genuine.

For readers who prefer command lines, Microsoft's File Checksum Integrity Verifier may be a better way to check the SHA1 hashes. You'll need to download and extract the FCIV package and follow the instructions in the readme text file, including making sure the folder containing the FCIV executable file has been added to the system path of Windows. With that out of the way, open a Windows command window and navigate to the folder containing the Gpg4win installer.
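If you have Python handy, a few lines of the standard library will compute the same digest as HashCalc or FCIV. This is an illustrative sketch; the expected hash in the comment is the one quoted above for gpg4win-2.1.1.exe:

```python
import hashlib

def sha1_of_file(path, chunk_size=8192):
    """Compute the SHA1 hex digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the digest published on the download page:
#   sha1_of_file("gpg4win-2.1.1.exe")
#   should equal "a94b292c8944576e06fe8c697d5bb94e365cae25"
```

If the digests don't match, don't install the file — download it again from the official site and recheck.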

Once you're sure you have the real gpg4win-2.1.1.exe, double-click on the file and click Yes at the User Account Control dialog. When presented with the Gpg4win installation welcome screen, click Next, and then click Next at the following window to accept the Gpg4win license agreement. The next screen will allow you to choose the precise GPG components you want to install. Make sure you install all available components, including GPA, which is short for the GNU Privacy Assistant. Click Next at the Choose Components screen and again at the Destination and Install Options screens.

The Choose Components screen displayed during the Gpg4win installation.

At the Install Options screen, make sure the "start menu" box is checked, click Next, and at the next window click Install. We won't be using S/MIME for now, so if you see any screens referring to Trustable Root Certificates, you can click the box to skip configuration and click Next. The installation is now complete.

When you click on your Start menu and choose All Programs, you should now see a Gpg4win folder. Highlight it and choose GPA. This is the GNU Privacy Assistant. We'll use it to generate our key pair, and later we'll use it to store the public keys of people who will receive our encrypted messages. The first time you open GPA, you'll see a screen asking if you want to generate a private key. That's exactly what we want to do, so click "Generate key now."

The Generate Key Now dialog presented by GPA.

In the screens that follow, enter your name and e-mail address. When asked if you want to back up your key, choose "Do it later." It's not that this step isn't important, but we'll want to back up the key only after we're satisfied that we've done everything correctly. Next, you'll need to choose a passphrase to protect your key. Your passphrase is like the password protecting an e-mail or Web account, except rather than preventing an unauthorized person from accessing your account, it prevents anyone from using your private key should the key ever be lost or stolen. In other words, the passphrase is extremely sensitive. It should have a minimum of nine characters, but 18, 27, or even 36 characters are even better. For more tips on generating a strong password, see Ars Senior Reporter Jon Brodkin's discussion of master passwords here. When you're finished, you'll have generated your first key pair: the public key you will share with other people so they can send encrypted messages that only you can read, and the private key you'll use to decrypt those messages.

While generating your key, be sure to set an expiration date rather than allowing it to remain valid forever. This way, keys that new users abandon, lose, or never end up using won't remain on public servers indefinitely. Remember also to back up your private key somewhere extremely safe; storing it on a USB stick kept in a locked box is one suitable method. You may also want to upload your public key to one or more public key servers. These servers give crypto users a way to make their keys available to others and to fetch other people's public keys.

Now that we've generated our first key pair, let's import the public key of someone else so we'll have it later when we're ready to send them our first encrypted e-mail. For this, get someone to give you their public key, preferably in person. It will look something like this:

Take the public key of a real-world contact and save it to a file named something like key.txt. If you don't have a real-world contact who has a public key, save the above public key to a file and name it key.txt. Now, with GPA open, choose the "Import" icon, navigate to the disk location of key.txt, highlight the file, and click Open. Congratulations. You've just imported your first public key. Don't get too excited just yet. You'll need to import a public key for each person you want to send encrypted mail to.

Until there is a seamless, idiot-proof, "big button" solution for using PGP encryption, the fact remains that it's just too hard for your average (non-IT-professional) person to use.

Online webmail providers aren't going to build it into their systems because then they wouldn't be able to mine emails for ad targeting. What's needed is an easy-to-use, cross-platform, client-agnostic, PGP encryption application. Something as easy to use as Facebook, that automatically shares everyone's public keys with all their friends and contacts, and automatically encrypts all communications. If someone gets it right they could make a fortune.

Edit: Sorry, I finished reading the article and what I wrote below is not what the story is about. The story is about encrypting the body of the message and sharing the keys and not about hosting email on their own servers.

I admit I only read the first page and I feel this is great for people who host their email server on their own machines. This ensures end to end encryption without a middleman.

For those of us who use an email service such as Gmail, wouldn't an easier solution be to use the POP3 settings to send all email to your local machine and have it deleted on the email server?

Supposedly, the Gov can look at email that has been left on the server after 3 months, so having the emails stored locally on your machine and deleted off the hosting server is a solution to keep email out of the servers.

Of course that is assuming that the Gov plays by the rules and assuming that the email hosting service really does delete email on their servers and does not hold on to deleted email indefinitely.

The concept of asymmetric encryption is appealing, but the problem for me is that I typically wipe my computer every 6 months or so, and keeping up with the key pair has proven remarkably challenging. Since I don't actually use the key pair for much of anything, keeping it safe while I reinstall is not something foremost on my mind. (As a power user, I tend to install an unthinkable amount of random crap constantly, and I find it's just easiest to wipe the whole thing to get that fresh "new computer smell" back.)

If more people I know used asymmetric key pairs, I'm sure I'd keep up with them better though. The other issue is that having a Galaxy Nexus, iPod touch, Nexus 10, desktop, and other small devices lying around implies a decent amount of effort to sync all of these devices to the same key pair.

Maybe someday I'll figure out a way to use this stuff, or maybe I'll write my own programs that take all the hassle out of it... who knows. Encryption is important.

I've been playing around with GnuPG for email encryption over the last week or so and have found that the biggest barrier is the difficulty in setting it up for the average user, so this writeup is great.

My next biggest question: How do you get your public key out there? Currently I've uploaded it to a keyserver and have a link to it in my email signature.

However, until everyone (or at least, everyone you communicate with) uses encryption, there's no use for it for the average user. I'd like to use encrypted email, but sadly, unless I feel like teaching every single person I know how to read my encrypted emails, it's useless to me.

The first person (or people) that makes a nice, transparent, hands-off solution for encrypted email is going to make millions.

Until there is a seamless, idiot-proof, "big button" solution for using PGP encryption, the fact remains that it's just too hard for your average (non-IT-professional) person to use.

Online webmail providers aren't going to build it into their systems because then they wouldn't be able to mine emails for ad targeting. What's needed is an easy-to-use, cross-platform, client-agnostic, PGP encryption application. Something as easy to use as Facebook, that automatically shares everyone's public keys with all their friends and contacts, and automatically encrypts all communications. If someone gets it right they could make a fortune.

I completely agree! As I stated in my post, this is really the issue I have with it. It's too much hassle, and it especially is for the average user.

There is a public infrastructure of OpenPGP public key servers. You can upload your public key there and you can fetch other people's public keys, making it easier to share.

However, this does not fix the issue of trusting keys, i.e., "I have this public key that says Peter Bright, but does it really belong to Peter Bright?" PGP/GPG uses a web of trust model: if Alice signs Bob's key and Bob signs Claire's key, then Alice will trust Claire's key too.

3) From the pop-up dialog, select the public key of your recipient from your list of public keys. Optionally (but recommended), check the box to sign your message and add yourself to the recipient list so you can decrypt it yourself.

4) Enter your own private key's passcode to enable GPGTools to use it to cryptographically sign your message.

"Since making encryption the default is in most cases impractical, most users of encrypted mail will be better served by encrypting only sensitive communications."

I've got to object to that recommendation. Only encrypting "sensitive" stuff is like broadcasting "Look here! Look here!" to the very people you're trying to hide your communications from, should they ever get hold of them. That's especially true since the metadata is not encrypted: you're broadcasting when and where you're e-mailing sensitive information.

If you really want to hide information but leave it on a public server you'll need to establish several different public keys and fill your inbox with mail to each. That will require the snooper to break several private keys to get to the "good stuff."

How much metadata is still exposed with these solutions? Seems like at least the recipient address would still have to be there, and probably the sending IP (though you could Tor that). Of course, when it occurred and how large a message would be visible (though you could pad a small message). If the sender identity is sufficiently hidden, I suppose that probably would be enough to inhibit building a good map of connections between senders/recipients.

I'm glad nothing I do is interesting enough that I have to actually worry about these issues ;-) Actually, I did use GnuPG for sending credit card information from web servers to merchants many years ago before giving up on trying to comply with PCI myself and outsourcing CC handling.

Something as easy to use as Facebook, that automatically shares everyone's public keys with all their friends and contacts, and automatically encrypts all communications

The problem with anything that automatically shares/befriends/encrypts in this situation is that it might automatically or accidentally share your private key with people you don't want it to. I think that is the big issue.

If your private key is in the cloud, you lose, so anything cloud/webmail style just plain doesn't work. You have to store your keys locally, or keep them encrypted, which starts to get back into 'lots of hassle' territory.

That is the real problem that needs solving. Keeping your private key safe enough and accessible enough. After that, the rest is trivial.

While I'm mildly happy to see it touched on, this article felt like the authors had an axe to grind from the start and proceeded to do so with gusto. The S/MIME section in particular is spartan, which is odd given that, as the authors grudgingly acknowledge, it's by far the easiest and most widespread. While later mentioning that support is built into mobile clients as well, no mention of how to set that up is made. Abstract "costs" are mentioned for a certificate with identity verification, but no mention of what said costs are ($60 for 2 years from StartCom, for anyone interested). Free certificates are not a problem; a quick phone call can be used for extra verification if you're truly concerned, but in practice that's not something someone needs to worry about and should not be an obstacle to doing better than plaintext. "Perfect" should never be the enemy of "good" for security, since all security is a matter of economics.

Finally, for communications with your own trusted parties you can also just make your own S/MIME certs and distribute them directly if you'd like (which will be both free and highly secure).

Quote:

In practice, using encrypted e-mail is awkward and annoying.

It really isn't. Signing improves email all by itself, and thus provides a direct incremental improvement even if the other side does nothing. All mail can be signed by default with no issue, and then encrypted when possible. It's two buttons in the client. Under iOS, at least, it's all automatic after adding the cert. It'd be absolutely fair to say that setup could be easier, but using it is easy; no one at our business has any trouble with it.

Quote:

Though S/MIME has been around for a long time and support is widespread both in desktop and mobile clients, its actual usage is rare.

Yeah good call! "In practice, using adaptive hashes is awkward and annoying. Though bcrypt has been around for a long time and support is universal for servers, its actual usage is rare" - said Dan Goodin never, presumably (or maybe that's your next article?). While support could certainly be improved further, popularity is sadly a very, very poor metric for ease or importance of security.

Quote:

Moreover, the all-too-common not-quite-spam that many of us receive on a regular basis—mailing lists, shopping receipts, bill notifications, and so on—won't ever send encrypted mail.

It could all be signed however, and that could eliminate many types of phishing and forgeries. It's also an easy way to reduce false positives in spam filtering. That even financial institutions do not sign mail is not because it's hard, it's for the same sort of reasons that many banks still require a 6-8 character alphanumeric password, a few "security questions" such as "Who was your mom?" and call it done.

Quote:

The process is also error-prone.

What. No it's not. Is the lock shut? It's encrypted. Is the lock open? It's not. If it's not possible to encrypt, the lock will be greyed out.

So yeah, disappointing all around. Secure email is one of the most convenient ways to have secure communications with another person anywhere on the world. That it's not perfect or could use more development is not a great reason to tell everyone not to bother, particularly given that improvements won't come without more demand.

While I'm mildly happy to see it touched on, this article felt like the authors had an axe to grind from the start and proceeded to do so with gusto. The S/MIME section in particular is spartan, which is odd given that, as the authors grudgingly acknowledge, it's by far the easiest and most widespread. While later mentioning that support is built into mobile clients as well, no mention of how to set that up is made. Abstract "costs" are mentioned for a certificate with identity verification, but no mention of what said costs are ($60 for 2 years from StartCom, for anyone interested). Free certificates are not a problem; a quick phone call can be used for extra verification if you're truly concerned, but in practice that's not something someone needs to worry about and should not be an obstacle to doing better than plaintext. "Perfect" should never be the enemy of "good" for security, since all security is a matter of economics.

Finally, for communications with your own trusted parties you can also just make your own S/MIME certs and distribute them directly if you'd like (which will be both free and highly secure).

Quote:

In practice, using encrypted e-mail is awkward and annoying.

It really isn't. Signing improves email all by itself, and thus provides a direct incremental improvement even if the other side does nothing. All mail can be signed by default with no issue, and then encrypted when possible. It's two buttons in the client. Under iOS, at least, it's all automatic after adding the cert. It'd be absolutely fair to say that setup could be easier, but using it is easy; no one at our business has any trouble with it.

I don't agree. Even with signatures, there's enough software out there (mailing lists, malware scanners in mail relays, that kind of thing) that ends up breaking the signatures and making scary error messages appear in mail recipients' clients.

Quote:

Yeah good call! "In practice, using adaptive hashes is awkward and annoying. Though bcrypt has been around for a long time and support is universal for servers, its actual usage is rare" - said Dan Goodin never, presumably (or maybe that's your next article?). While support could certainly be improved further, popularity is sadly a very, very poor metric for ease or importance of security.

bcrypt can be imposed unilaterally by any site that needs to hash passwords. e-mail encryption can't. That's why lack of uptake is a problem. If you want your own inbox to be secure, you have to persuade everyone who sends you mail to jump through the hoops.

Quote:

It could all be signed however, and that could eliminate many types of phishing and forgeries.

Yes, it could, and spammers and phishers could drop off the face of the earth. It isn't.

Yes, as mentioned, metadata isn't encrypted; only the contents are. There are other techniques to hide that you are communicating with someone at all, most of which are not useful or friendly to the everyday user: steganography in imageboards, Usenet, etc.

Email being plaintext is not really a bug. Back in the day, Usenet was plaintext as well. Thus evolved schemes for encoding binaries into the appropriate length and format for posting into text streams. The same applies to cryptographically signed or encrypted material: you can then use whatever transport you want.

I mostly agree with Xoa. Even just signing via your bank's certificate will reduce phishing (until some smart cookie puts something similar in the graphical HTML email, thus reducing it again to a PEBKAC problem.)

All that's happened over the last couple of weeks makes it clear to me that it's at least as important to hide the relationships that e-mail makes apparent as it is to stop people from reading its contents.

It'd be nice to have an email client that would be continuously adding noise to a distributed database. Whenever I wanted to send a message to someone, I'd start sending an encrypted stream of data that would appear to be noise. That way anyone listening wouldn't know that a message was being sent or who it was being sent to.

I guess it's not possible though - on some level every message has to have a to address. It doesn't really matter if it's user@domain or a 50 digit random string, you can always translate it back to the source.

I can't imagine trying to get most of the people I know to use encryption, and I shudder to imagine trying to walk half of them through this.

This is what I would classify as a technology problem, not a social problem. Once the technology is streamlined enough that it only requires the click of a checkbox, and yet people still don't adopt it, that would be a social issue. The fact that this article needs to be three pages long nicely illustrates how many technological hurdles to widespread adoption remain.

Having the messages themselves be encrypted is really the ***ONLY*** solution, for e-mail, even if you have your own personal e-mail server.

The SMTP protocol used between mail servers has no mandatory encryption support. So any e-mail that goes outside whatever webmail service you use (meaning pretty much anything where the sender and receiver are not in the same domain) is subject to wholesale monitoring of the Internet.

I mostly agree with Xoa. Even just signing via your bank's certificate will reduce phishing (until some smart cookie puts something similar in the graphical HTML email, thus reducing it again to a PEBKAC problem.)

Signing opens more programmatic ways to handle it automatically though that don't involve the user at all, which is always the best. There aren't that many banks in the world, or that many PayPals, or Gmails or whatever. In practice, a solution doesn't have to be infinitely scalable to provide tremendous value. If all banks and a number of major services signed all email as a matter of course, for example, it'd be perfectly possible for Microsoft/Apple/Mozilla/whomever to have a checked list, or even hardcoded list, of services where a class 3 signing cert from one of the majors is required or a matching message simply gets dumped without ever being displayed. If you know that, say, PayPal will never be sending an unsigned mail, then it's possible to be vastly more aggressive about detection and auto purging with no need to worry about false positives.

Also, the "signed/encrypted" chrome exists outside the email body, and could also be made more prominent (and should be, but one step at a time).

I guess it's not possible though - on some level every message has to have a to address. It doesn't really matter if it's user@domain or a 50 digit random string, you can always translate it back to the source.

Tor Mail (or similar other services, or set up your own). If you want to use email in an obfuscated fashion, well now you really are talking about something that's a lot more work. All the standard warnings that come with using Tor apply just as strongly, or even more so depending on exactly how important your anonymity is since you're sending actual content. It's not impossible though to at least make sure you aren't low hanging fruit.

Or I guess in many countries you could also just create throwaway accounts on free services at will and send them via war driving, though that requires significant work as well.

An option I missed is a solution for those who want to keep using web-based clients.

Mailvelope is a very easy way to set up PGP: it's a Chrome extension (a Firefox addon is on its way) that will encrypt/decrypt your mails locally. It's extremely easy and works great. It's a young project though, so some things are missing: signing messages and verifying them is not yet supported. But basic encryption works great, and the integration with any web-based client (literally every client: you can select any text field on a page and encrypt the text you type in it) is really, really polished.

E-mail encryption could be approached in two ways: user-to-user and domain-to-domain.

In the user-to-user approach, each user would share public keys with each other for encrypting e-mail. The challenge here is that this starts getting technical, and for many people e-mail is as technical as they want things to get. To change this you have to make encryption easier to use, especially for the people who aren't techies—the expectation should be just click and send. Maybe we need to revisit how public keys are shared?

In the domain-to-domain approach, domains would share a public key (maybe via a DNS record?) for encrypting e-mail in transport. This would mean any intermediate relays would not be able to see the content of the e-mail. The issue here is that while this solves snooping relays, it tells the people using the service nothing about whether the service itself can be trusted.

The ideal solution might be a combination of both approaches, but ultimately encryption, like many other security problems, comes down to an issue of trust and time. How much security is good enough, and how much is just being overly paranoid? It's always a trade-off between protecting yourself from prying eyes and keeping the convenience and ease of use of the tool.

Even though it still uses GPG, I've found it easier to get a couple of my friends using a full-on software suite like Retroshare, than to try and ever get them to use GPG for e-mail or any other forms of communication.

Which is fine, because they're, you know, public keys. The authorities don't need a subpoena to get them; they can just go download them all like everyone else. You might be confusing that with the issue of authorities getting forged certs in a public PKI system, which is separate.