End-to-End Encrypted GMail? Not So Easy

Last week Julian Sanchez urged Google to offer end-to-end encryption for GMail, so that your messages would be known to you and your browser (and your email correspondents) but not to Google itself. Julian explained why this would be a positive step for users and, arguably, for Google itself. Let’s talk about what would be required to make it happen.

We have had standards for end-to-end email encryption for a long time: PGP since at least 1996 and S/MIME since at least 2002. In these systems, each user has a private key that they use to encrypt and digitally sign their email. If two people know each other’s public keys, they can exchange email securely without the network, or even their email services, being able to read or tamper with the messages. This feature has long been supported in desktop email clients. What would we need to make it work for a cloud email service like GMail?
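The public-key model that PGP and S/MIME share can be illustrated with a toy textbook-RSA sketch in Python. This is deliberately tiny and insecure (small primes, no padding, no message formats) and is not how either standard actually encodes messages; it only shows the encrypt-with-public-key, decrypt-with-private-key, sign-and-verify shape:

```python
# Toy textbook RSA -- illustrative only, NOT secure.
# Classic small example: p=61, q=53 gives n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753   # one user's key pair: (N, E) public, D private

def encrypt(m: int) -> int:
    """Anyone who knows the public key (N, E) can encrypt."""
    return pow(m, E, N)

def decrypt(c: int) -> int:
    """Only the private-key holder (D) can decrypt."""
    return pow(c, D, N)

def sign(m: int) -> int:
    """Signing uses the private key..."""
    return pow(m, D, N)

def verify(m: int, sig: int) -> bool:
    """...and anyone can verify with the public key."""
    return pow(sig, E, N) == m

c = encrypt(65)
assert decrypt(c) == 65
assert verify(42, sign(42))
```

Real deployments use 2048-bit-plus keys, randomized padding (OAEP/PSS), and hybrid encryption, but the trust property is the same: the network and the mail service see only ciphertext.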
(I’m using GMail as an example here, not to pick on Google but because that’s the example Julian used. Other cloud email services would face the same challenges.)

There are two issues, which we’ll deal with one at a time. First, how would the crypto keys and crypto code be managed? Second, what about the features of GMail that rely on Google seeing your messages?

First, managing keys and code. To start with, we would need a place to store your private key. We could store it on your desktop, but this would conflict with the usual cloud model that gives you access from multiple devices. We could have Google store your private key for you, then download it to whatever device you’re using at the moment, but then what’s the point of encrypting your messages against Google? The best solution is to have Google store your private key, but encrypt your private key using a password that only you know. Then Google would download your encrypted private key to your device, you would enter your password, and the private key would be decrypted on the device.
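A minimal sketch of that password-wrapping step, using only Python's standard library. The function names are mine, and the HMAC-counter stream cipher is purely illustrative; a real deployment would use a vetted AEAD such as AES-GCM. The point is that the server stores only the wrapped blob, and the password (hence the key-encryption key) never leaves the device:

```python
import hashlib
import hmac
import secrets

def _keystream(kek: bytes, nonce: bytes, length: int) -> bytes:
    # Expand the key-encryption key into a keystream (HMAC in counter mode).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hmac.new(kek, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return bytes(out[:length])

def wrap_private_key(private_key: bytes, password: str) -> bytes:
    # Derive the key-encryption key from a password only the user knows.
    salt, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    stream = _keystream(kek, nonce, len(private_key))
    ct = bytes(a ^ b for a, b in zip(private_key, stream))
    return salt + nonce + ct          # this blob is safe to store server-side

def unwrap_private_key(blob: bytes, password: str) -> bytes:
    salt, nonce, ct = blob[:16], blob[16:32], blob[32:]
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return bytes(a ^ b for a, b in zip(ct, _keystream(kek, nonce, len(ct))))

key = secrets.token_bytes(32)
blob = wrap_private_key(key, "correct horse battery staple")
assert unwrap_private_key(blob, "correct horse battery staple") == key
```

The slow key-derivation function (here PBKDF2 with many iterations) matters: it is what makes offline password guessing against a stolen blob expensive.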

The next problem we would have to solve is how to do cryptography in the browser. A service like GMail has to run on lots of different devices with differently abled browsers. Presumably the cryptographic operations–including time-consuming public-key crypto operations–would have to be done in the browser, using the browser’s Javascript engine, which will be slow. It would be nice if there were a standardized API for in-browser crypto, but that doesn’t exist yet, and even when it does exist it will take a long time to be deployed so widely that a public service like GMail can rely on it being present on all devices.
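The interpreted-versus-native gap can be made concrete with a rough analogue in Python: a square-and-multiply modular exponentiation written in the interpreted language itself, compared with the runtime's native `pow()`. (The gap here understates the Javascript case, since Python's big integers are themselves implemented in C, but the principle is the same: crypto inside the scripting engine pays interpreter overhead on every operation.)

```python
import time

def slow_modexp(base: int, exp: int, mod: int) -> int:
    # Square-and-multiply written in the interpreted language,
    # analogous to doing bignum crypto in a browser's Javascript engine.
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

mod = (1 << 2048) - 159                 # arbitrary 2048-bit odd modulus
base, exp = 0x10001, (1 << 2047) + 1    # a private-key-sized exponent

t0 = time.perf_counter()
a = slow_modexp(base, exp, mod)
t_interp = time.perf_counter() - t0

t0 = time.perf_counter()
b = pow(base, exp, mod)                 # native implementation
t_native = time.perf_counter() - t0

assert a == b                           # same math, different speed
```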

What is most problematic is that the software code to do all of this–to manage your keys, decrypt messages, and so on–would itself be written and delivered by Google, which means that Google would, after all, have the ability to see your messages, simply by sending you code that silently uploaded your keys and/or data. So if your goal is to make it impossible for Google to see your messages, for the protection of you and/or Google, then you won’t have achieved that goal.

This problem–the inability to provide end-to-end encryption in a webapp, because the code can be modified by the app provider–has been the downfall of many a “brilliant” security idea over the years. The only solution we know is to acquire the secure functionality via a traditional download, incorporating carefully vetted code that cannot be modified or updated without user control. The code might be provided as a standalone app, or as a browser extension. We could do that for GMail (and at least one company has done it), but that would give up some of the portability that makes cloud email attractive.

The second major issue is how to keep messages secret while still providing GMail features that rely on Google seeing your messages. These features include spam filtering (which you couldn’t live without) and the content-based ads that Google shows next to your messages (which Google probably wouldn’t want to live without). Can these be provided without leaking the full content of messages to Google? I suspect the answer is a qualified yes–that pretty good versions of these features could be provided in a more privacy-friendly way–but that’s a topic for another day.

Comments

While it’s slightly less convenient, a straightforward way to secure the contents of your email is to:
1. Get the public key of your destination and store it locally.
2. Write your note offline, on your trusted local host.
3. Encrypt it offline, on your trusted local host, using your destination’s public key.
4. Send the encrypted file as an attachment via your Gmail or other cloud provider.

Details for efficiency, multiple addressees, etc. are left out.

Depending on your paranoia level, you may want to overwrite the plaintext file in local storage.
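That overwrite step can be sketched in Python as a best-effort shred (the function name is mine). Note the caveat: on journaling filesystems and SSDs the old blocks may survive the overwrite anyway, so this is a mitigation, not a guarantee:

```python
import os

def shred(path: str, passes: int = 1) -> None:
    # Best effort: overwrite the file's bytes in place, force them to
    # disk, then unlink the file.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```

Usage would be something like `shred("draft.txt")` once the encrypted copy has been attached and sent.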

Even though Google would have the capability to read your email, I think the end-to-end idea still has a lot of merit. Google would have to violate an agreement that they presumably would provide, where they promise not to send code to the browser that steals the user’s key. I think we’d all be a lot safer if Google provided end-to-end encryption as you describe. Most likely they would not actually provide malicious code that would steal your key. They would have a lot to lose in the way of FTC fines and class-action lawsuits if they violated their agreement. And email would be safe at rest, in the event that someone hacked into Google’s file system and found email messages there. I vote in favor, although for some reason, I don’t think my vote here counts.

Perhaps elliptic curve cryptography (ECC) (a type of public-key cryptography) could be used to reduce the burden on the browser. It is allegedly significantly less computationally intensive than other public-key crypto systems (such as the RSA system). This discussion makes me wonder what type of system Kim Dotcom (of Megaupload) is claiming to have created.

Even though Google would have the capability to modify their JavaScript code to upload the keys, doing encryption in the browser does have a benefit — it increases the required capabilities of the attacker. It changes the attack from a passive monitoring attack into an active attack and requires the attacker — which isn’t necessarily Google itself — to have that capability. I’m generally in favor of security measures that further limit the range of possible attackers, or raise their required capabilities for a successful attack, even if it doesn’t counter every possible attack. Furthermore, delivery of JavaScript that uploads the keys would probably be viewed as at least unfair and deceptive.

But part of the deal with GMail is that Google gets access to your content, so I would be very surprised if Google were to provide a service that interferes with that.

As to the speed of crypto in the browser, advances in computing hardware are making that less of an issue all the time.

It could make one aspect of spam filtering _easier_. If encrypting were to become mainstream, then signing would probably go mainstream too. That gives you a key to look up in some whitelist or reputation database.
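A sketch of that lookup in Python (the whitelist format and function names here are my invention, not any real spam filter's): fingerprint the sender's signing key and consult a local reputation table.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Stable identifier derived from the key, like a PGP fingerprint.
    return hashlib.sha256(public_key).hexdigest()[:16]

# Hypothetical reputation database keyed by fingerprint.
REPUTATION: dict[str, int] = {}

def record_good_mail(public_key: bytes) -> None:
    # Credit a key each time it signs a message the user accepts.
    fp = fingerprint(public_key)
    REPUTATION[fp] = REPUTATION.get(fp, 0) + 1

def is_trusted(public_key: bytes, threshold: int = 3) -> bool:
    return REPUTATION.get(fingerprint(public_key), 0) >= threshold

alice = b"alice-public-key-bytes"
for _ in range(3):
    record_good_mail(alice)
assert is_trusted(alice)
assert not is_trusted(b"unknown-sender-key")
```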

The problem is what to do with email from people without any rep at all: it’s tempting to assume they’re spammers, and worse, that approach would probably work _best_ most of the time, so new non-spammer identities would have a hard time.

The portability problem is what’s really so sad about all this; pgp/gpg/openpgp are _ancient_ by today’s standards and if you use Linux all day you get in the habit of just assuming that gpg is _there_, waiting for you to pipe things to/from. This ought to become baseline functionality that everyone can count on (no having-to-download _anything_) but of course it’s not. And yet if someone were to ship a device with a browser which couldn’t do HTTPS, the reviews would crucify the manufacturer. What a ridiculous situation.

I’d never leave gpg encryption up to a man in the middle or a web app like Gmail. There is no way to secure the private key. If you don’t want Google seeing your emails, then why would you trust them with your private key? Thunderbird + Enigmail, along with many other desktop clients, provide all your encryption needs while you maintain your private key locally on your machine. Hey, you can even plug Gmail into Thunderbird–imagine that.

This isn’t to Google’s advantage. Not being able to see what is in your email makes you a less valuable target for advertising, which is the reason Gmail is free. Corporate accounts can pay to remove the advertising and would then be a reasonable target for integrated end-to-end encryption.

But for the rest of us, the best way is to use a PGP-aware client, plus an encrypted file/service for keys to allow cross-platform access to your private keys.

So something like Thunderbird (with the Enigmail plugin), pgp/gpg, and KeePass (with syncing via Dropbox). That way you don’t have to trust the Google-provided web client, you have better functionality when offline, and you are much more secure. The biggest problems are social, mostly related to getting the people who email you to use gpg/pgp.

Thunderbird supports tagging, spam filtering, and search, and runs on Linux, OS X, and Windows. It’s hard to imagine similar options not being available on iOS and Android.

Disclaimer: Ed, you probably will tire of my comments… But I have been censored [for my very opinionated but never abusive posts] a lot more on my favorite website, so I am commenting more here.

“This feature has long been supported in desktop email clients. What would we need to make it work for a cloud email service like GMail?”

I am not sure I have ever used this feature in desktop e-mail clients, unless we are talking about StartTLS/SSL or other encrypted passing of e-mail, but that is only between me and the server–it still presumes the service provider could access/copy it en route. I guess I need to study more on that issue. I have a lot to learn for sure.

However, why even bother asking how to make it work for cloud e-mail? Am I losing my mind, or are you really considering cloud e-mail? Granted, you pretty much come to the conclusion that Google will still have access to it no matter how they implement such security. So I know that at least you are not losing your mind.

Perhaps my own security focus is eccentric, but how could anyone in their right security mind even think that cloud e-mail is good in any form? I am less worried about Google reading/accessing my e-mail than I am about a) identity thieves hacking in, and b) the government spying (since more and more they are taking my freedoms away). Suppose it is stored encrypted by Google, and they can’t read it and they can’t share it–if it were really encrypted. They still control it (see my response to your post on Facebook’s copyright statement). That means even if hackers and the government can’t access it, there are other problems. There is of course data security (e.g. what happens when you accidentally delete a message–can they get it back for you? Or if you purposely deleted a message, how do you know it is really “erased” if you don’t want that message anymore?). Or, what if Google decides they don’t like you–they can delete everything and leave you dry. Etc. Etc. Etc.

I never use cloud services because I WANT THE CONTROL. I want to know that nothing is still available by accident, and if and when I want something gone completely, to know it is gone completely. I want to know when backups are made and how to access them. Etc. Etc. Etc.

Granted, the reality is that most people use cloud e-mail, and are moving more and more to other cloud services as well. But I will never understand it; from a security perspective it seems the worst possible move ever. I understand the convenience of having access to the same content on multiple devices, but to me the security risks far outweigh the benefit, unless I am the one controlling my own cloud server. Just as I currently run my own e-mail server.

Disclaimer: My e-mail server is not a secured platform which is why I don’t do things via e-mail that need security, but at least I have control.

Obviously the user has to store their own private key. If they want to be able to use it from multiple trusted devices, e.g. their PC and phone, it’s a small file that can be copied by the user. (If they want to use it from untrusted devices, they’re missing many important points.)

Cryptocat went through the same issue about downloaded Javascript being untrustable, and went to the browser extension/plugin model. Yes, you still have to trust that Google didn’t backdoor the plugin, but you only have to validate it once when you install or update it, instead of with every message.
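That once-per-install check can be as simple as pinning a hash of the extension bundle. A sketch in Python (the pinned value below is computed inside the example for demonstration; in practice it would come from an out-of-band source such as a signed release announcement):

```python
import hashlib
import hmac

def bundle_digest(bundle: bytes) -> str:
    return hashlib.sha256(bundle).hexdigest()

def verify_bundle(bundle: bytes, pinned_hex: str) -> bool:
    # Constant-time compare; run once at install/update time,
    # not on every page load.
    return hmac.compare_digest(bundle_digest(bundle), pinned_hex)

bundle = b"...extension bytes..."
pinned = bundle_digest(bundle)     # normally published out of band
assert verify_bundle(bundle, pinned)
assert not verify_bundle(bundle + b"backdoor", pinned)
```

The design point is the amortization: one careful verification at install time replaces blind trust in whatever Javascript arrives with each page load.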

But spam filtering encrypted messages is a hard problem. For outbound mail from Gmail users you can’t do much except rate limiting and maybe captchas, and spam filtering inbound encrypted messages falls back to RBLs and rate limits. Unfortunately, while the cost of doing crypto eliminates some spammers, Google would presumably run a convenient key server, and spammers have vastly more CPU available than in the 90s–between GPUs, botnets, and general CPU improvements–while crypto keys haven’t gotten much longer, so encrypted spam may still be viable.

This kind of thing has been on my mind in a few different guises recently.

The new Megaupload is rumoured to support fully encrypted end-to-end transfers, using a public-key cipher suite written in JavaScript running in the end user’s browser, with HTML5 storage holding the user’s private key.

Looking to the future, the W3C is working on something that should work well here: a crypto API (http://www.w3.org/TR/WebCryptoAPI/) that should allow JS applications to call on native crypto operations during execution. These would be independently implemented in Firefox, Chrome, Safari, etc., so Google would have to rely on a strong interface that it cannot reasonably modify to extract a client’s private keys.

This would give the end user end-to-end encryption; however, once the data is in the browser and decrypted, what’s to stop Gmail sending all of that decrypted data back to the server in the next post request or even as a sneaky AJAX request? Not much, all told (unfortunately).

A crypto API as given by the W3C would also be a boon for things like online voting, as it would give the ability for elections to be run using something like FOO92.

Freedom to Tinker is hosted by Princeton's Center for Information Technology Policy, a research center that studies digital technologies in public life. Here you'll find comment and analysis from the digital frontier, written by the Center's faculty, students, and friends.