Gov’t, certificate authorities conspire to spy on SSL users?

SSL is the cornerstone of secure Web browsing, enabling credit card and bank details to be used on the 'Net in relative safety. We're all told to check for the little padlock in our address bars before handing over any sensitive information. SSL is also increasingly a feature of webmail providers, instant messaging, and other forms of online communication.

Recent discoveries by Wired and a paper by security researchers Christopher Soghoian and Sid Stamm suggest that SSL might not be as secure as once thought. Not because SSL itself has been compromised, but because governments are conspiring with Certificate Authorities, key parts of the SSL infrastructure, to subvert the entire system to allow them to spy on anyone they wish to keep tabs on.

With SSL, any two parties on the Internet can make a secure connection between them, through which information can be passed without eavesdroppers being able to listen in. However, the core technology used in initiating SSL connections has some problems. The first problem is that although it allows you to create a secure connection between two parties, it doesn't allow either party to prove that the person they're talking to is the one they think they're talking to. The second is that if an eavesdropper can intercept the initial negotiation, they can sit between the two other parties and decrypt and then re-encrypt the data sent between them, allowing them to see what's being sent, without either party knowing.

Such attacks, where someone sits between the two parties and listens in on their conversation, are known as "man-in-the-middle" attacks. They're a big threat when trying to perform private communication over an insecure medium. Fortunately, there's a solution.
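The second weakness can be sketched concretely. Below is a toy Python simulation of a man-in-the-middle attack on an unauthenticated Diffie-Hellman-style key exchange, the kind of negotiation SSL builds on. The numbers are deliberately tiny and purely illustrative; real deployments use keys of thousands of bits:

```python
# Toy demonstration of a man-in-the-middle attack on an unauthenticated
# Diffie-Hellman key exchange. Tiny illustrative numbers only.

P, G = 23, 5  # public modulus and generator (toy values)

def dh_public(secret):
    """Compute the public value g^secret mod p."""
    return pow(G, secret, P)

def dh_shared(their_public, my_secret):
    """Compute the shared secret (their_public)^my_secret mod p."""
    return pow(their_public, my_secret, P)

# Alice and Bob pick private exponents; Mallory sits in the middle.
alice_secret, bob_secret, mallory_secret = 6, 15, 13

# Mallory intercepts both public values and substitutes her own.
alice_sees = dh_public(mallory_secret)  # instead of Bob's value
bob_sees = dh_public(mallory_secret)    # instead of Alice's value

# Each victim now unknowingly shares a key with Mallory, not each other.
key_alice_mallory = dh_shared(alice_sees, alice_secret)
key_bob_mallory = dh_shared(bob_sees, bob_secret)

# Mallory can derive both keys herself, so she can decrypt traffic from
# one side and re-encrypt it for the other, invisibly.
assert key_alice_mallory == dh_shared(dh_public(alice_secret), mallory_secret)
assert key_bob_mallory == dh_shared(dh_public(bob_secret), mallory_secret)
```

Because neither side can verify who produced the public value they received, nothing in the exchange itself reveals Mallory's presence; that is the gap certificates exist to close.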

The solution to both of these problems is cryptographic certificates. A certificate provides an unforgeable proof of identity, allowing one person to verify that they are indeed talking to their bank (rather than a hacker), and by incorporating certificate data into the set-up of the secure connection, the man in the middle can no longer decrypt and encrypt the traffic without being detected.

The problem with certificates is that on its own, a certificate announcing "I am Amazon.com" doesn't mean much—anyone could make one. To deal with that, certain organizations are trusted by SSL software. If a certificate is issued by one of these companies, it will be trusted by SSL software. The reason these companies are trusted is that they make some promise to verify who people are before issuing them with certificates. In other words, before they'll give me a certificate that lets me claim to be a bank or a well-known online retailer, they'll check that I really am the bank or retailer, and only if I am who I say I am will they give me the certificate.

These organizations are called Certificate Authorities (CAs), and their role in the system is essential. Most Web browsers and operating systems have a set of certificates from a few dozen CAs, and will verify that the certificates used in any SSL connection can be traced back to one of those CAs. If the certificate can't be traced back, the software will typically display a warning about an untrusted connection, and might even refuse to connect entirely.
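The trust-anchor lookup a browser performs can be modeled roughly as follows. This is an illustrative Python sketch only: real validation also verifies each signature cryptographically and checks expiry, revocation, and hostname matching, and all the names below are invented:

```python
# Simplified model of certificate-chain validation: follow the chain of
# issuers, leaf first, until a locally trusted root is reached. Real
# validation also checks signatures, validity dates, and revocation;
# this only models the trust-anchor lookup. Names are illustrative.

TRUSTED_ROOTS = {"Example Root CA"}  # shipped with the browser/OS

def chain_is_trusted(chain):
    """chain: list of certs, leaf first; each cert is a dict with
    'subject' and 'issuer' names."""
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False  # broken chain: issuer doesn't match next cert
    # Trusted only if the final issuer is a known root.
    return chain[-1]["issuer"] in TRUSTED_ROOTS

site_chain = [
    {"subject": "amazon.example", "issuer": "Example Intermediate CA"},
    {"subject": "Example Intermediate CA", "issuer": "Example Root CA"},
]
print(chain_is_trusted(site_chain))  # True: traces back to a trusted root
```

A chain that ends at an unknown issuer fails this lookup, which is what triggers the browser's untrusted-connection warning.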

The weak link here is that if a CA could be persuaded to issue a certificate to, say, Amazon to someone who wasn't actually from Amazon, then all the protections fall apart. Anyone connecting to the person with that certificate would think that they were connecting to the real Amazon. Moreover, if the person could intercept traffic between would-be customers and the real Amazon, they could do the decryption/re-encryption trick to listen in on any traffic sent to and from the company.

Untrustworthy certificate authorities

Until now, it had been broadly assumed that the CAs were honest and wouldn't give certificates to people they shouldn't, thereby keeping the entire system trustworthy. Though there have been attacks on certain aspects of the cryptography and handling of certificates by software, the basic design of SSL has been solid, and these specific problems have been solved by tightening policies and fixing software. Unfortunately, these untrustworthy CAs render all the encryption technology irrelevant, as it can now be bypassed.

The security researchers found out that an Arizona-based network security company, Packet Forensics, was covertly selling a piece of hardware designed to perform these man-in-the-middle attacks—just as long as it could be provided with a suitable certificate. The existence of such a product makes no sense without the ability to retrieve such certificates—which meant that CAs must be handing over certificates so that they could be used with the device.

Software for security researchers and/or hackers that could perform this kind of man-in-the-middle attack has been around for some years, but its utility has always been limited due to the difficulties in getting appropriate certificates; the tools are useful in demonstrating the kind of attacks mentioned above, but have little practical value. The existence of hardware changes things substantially—nobody goes to the expense of designing and creating hardware devices unless they can use them.

Packet Forensics initially denied that it even sold the devices, but eventually admitted that they were real. The company sells hardware to law-enforcement agencies and similar groups, so these certificates might well be issued on the demand of a court order. But equally, they could be obtained through blackmail, or even outright theft.

This strikes a blow at the entire trust system integral to SSL. If CAs can't be trusted, SSL can't be used safely.

It gets worse

The set of CAs trusted by default by different browsers and OSes varies, but there are some commonalities between them all. A few big CAs like VeriSign are supported as standard across the board. These CAs might in turn be victims of court orders, blackmail, and so on. But many platforms go further, and include government CAs. That is, certificate authorities operated not by private, independent corporations, but by government departments (typically government telecommunications monopolies). The reason for this is to allow governments to avoid a dependence on external third parties for their cryptographic needs, but the result is this: any one of those governments could produce a certificate purporting to be from any site in the world, feed it into one of Packet Forensics' machines, and use it to eavesdrop on encrypted traffic. Because the browser will automatically trust a certificate issued by one of these government authorities, it won't provide any alert to the user that something is wrong with the certificate. Everything will appear to work as normal. It just won't be secure.

Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.
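That manual padlock check can be approximated in code. The sketch below assumes certificate data in the nested-tuple format Python's `ssl.SSLSocket.getpeercert()` returns; the certificate contents here are invented purely for illustration:

```python
# Sketch of the manual check described above: extract the issuing
# organization from a certificate in the format returned by Python's
# ssl.SSLSocket.getpeercert(), and compare it with the issuer we
# expect. The certificate below is illustrative, not real data.

def issuer_org(cert):
    """Pull organizationName out of getpeercert()-style issuer tuples."""
    for rdn in cert.get("issuer", ()):
        for name, value in rdn:
            if name == "organizationName":
                return value
    return None

expected = "VeriSign, Inc."
cert = {
    "subject": ((("commonName", "www.amazon.example"),),),
    "issuer": (
        (("countryName", "AE"),),
        (("organizationName", "Etisalat"),),
    ),
}

seen = issuer_org(cert)
if seen != expected:
    print(f"WARNING: issuer changed from {expected!r} to {seen!r}")
```

The hard part, as the article notes, isn't extracting the issuer; it's knowing which issuer is the right one for a given site in the first place.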

Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in, due to be released shortly, that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.

This gives governments considerable ability to intercept and eavesdrop on supposedly secure communications. It's true that the case is, at present, only circumstantial. Just because a company is selling man-in-the-middle hardware that requires the use of court-ordered certificates, and just because companies like VeriSign make a lot of money from security and surveillance does not necessarily mean that anyone has actually bought or used the technology. But it seems unlikely that a company would develop or promote a man-in-the-middle system if it could not be used. Though the weakness of the CA system is well-known, the prospects of real attacks on CA trust seemed slim. Not so any more. VeriSign, for its part, refuses to comment on the matter; other CAs, such as GoDaddy, insist that no such request has ever been made, nor would such a request be granted.

Update: VeriSign has commented to say, "VeriSign has never issued a fake certificate, and to do so would be against our policies."

The value to governments—enabling largely undetectable spying on, say, Gmail accounts—could be substantial, as such tools are widely used by terrorists and freedom fighters alike. It'd be useful in the growing international industrial espionage business, too. And governments are certainly known to be interested; Etisalat last year rolled out a BlackBerry patch that embedded spyware into RIM devices enabling monitoring of e-mail, so it can hardly be considered trustworthy (in spite of its widespread appearance in trusted CA lists).

A robust solution is hard to devise. The Electronic Frontier Foundation has made suggestions; certificates could be independently certified by notaries (though this only extends the level of coercion required), and the TOR anonymous routing system could be used to ensure that the same certificate was used regardless of location. This would detect compromises made in, say, a hotel, Internet café or ISP, but would be ineffective if the monitoring equipment were placed close to the target server. It might also be desirable to get browsers and OSes to trim their list of trusted CAs. In particular, those that are prone to control by oppressive regimes such as the Chinese CNNIC would be good candidates for removal, to ensure that browsers at least present a warning when connecting to sites with their certificates.

In spite of the concerns, however, SSL is still the best system we have, in general, and for connecting to public sites like Gmail or banking, it's the only option we have. Checking for those padlocks is still worth doing—even if it doesn't mean quite as much as we once thought it did.

There are a few other solutions: Monkeysphere (http://lwn.net/Articles/374805/), which compares the SSL certificates you see with the ones other people see, but only works on Linux at the moment; and RFC 5054, TLS-SRP, which uses a password-authenticated key exchange known as SRP (Secure Remote Password) to establish TLS without needing certificates, though it's better suited to mail servers than e-commerce websites.

The sad part is no one actually needed to back-door a perfectly good security protocol. Police forces and governments can't, or don't want to, go to the effort of doing traffic forensics, and instead want instant decryption so they don't have to establish probable cause to tap and decrypt traffic the old way... with a court order.

How about not providing trusted root CAs with browsers? That way the user has to choose who to trust. Really, the only way around this is for companies to take control of managing their own cert infrastructure. Cmon, how hard is it for Amazon to have a page offering its trusted root CA cert? You get it once, load it, and never see it again until the CA cert expires or you reload the box.

The existing system is indeed bothersome. This idea puts CAs on level with ISPs/Telecoms with regard to holding the key to privacy with regard to everyday transactions, if not worse, due to the technical complexity which places the idea out of the average man's thought space. Bah-humbug on that. All for a little healthy paranoia.

However, there is a legitimate market for these devices. In combination with an in-house CA, it allows internal communication to easily be encrypted, while allowing that company to monitor internal communication as it sees fit. I don't believe there is a reasonable expectation of privacy at the workplace, is there?

Though the weakness of the CA system is well-known, the prospects of real attacks on CA trust seemed slim. Not so any more.

That's an understatement. The possibility has always been obvious. The difference is that in the past, anyone who took it seriously was derided with snipes like "conspiracy theorist" or "tin foil hat", and now corruption in business and government has become so pervasive and shameless that people are finally taking this kind of thing seriously.

Quote:

Just because a company is selling man-in-the-middle hardware that requires the use of court-ordered certificates, and just because companies like VeriSign make a lot of money from security and surveillance does not necessarily mean that anyone has actually bought or used the technology.

Maybe you are perceiving the burden of proof backwards. We need some rational reason to believe that the MITM is *not* happening, and there isn't any. Pervasive surveillance has been an obsession of recent U.S. governments.

Quote:

VeriSign, for its part, refuses to comment on the matter; other CAs, such as GoDaddy, insist that no such request has ever been made, nor would such a request be granted.

The so-called Patriot Act, and other laws provide for "gag orders" that force intermediaries to not only cooperate, but also conceal the wiretapping from the citizens who are spied upon. This makes any such claims worthless.

Quote:

In spite of the concerns, however, SSL is still the best system we have, in general, and for connecting to public sites like Gmail or banking, it's the only option we have.

Correct about public sites - at least it still protects from random thieves. For parties who have some non-public connection, however, there is a better alternative: a system of keys shared by side channels is reliably secure. This will probably be the basis of increasingly popular alternatives to the CA system (until they are outlawed).

How can this be anything new? It was the obvious flaw of the system from the beginning. Problem is that there is no easy way of solving it and it's in business and governments interest to prevent true security.

For those that actually care about security there is side channel sharing of the keys (one and only secure way of doing it). This should be more than sufficient for most of the private communication and all of business communication as your IT department should be the first ones to touch the computer and get all the required keys in to it. Couple that with TPM the way IBM made it (that is user having the full and absolute control over it) and you should be reasonably safe even from government.

Could someone tell me why the existence of said hardware couldn't easily be explained by the need for a few organizations to use leaked private certificate keys? Instead of blackmailing a CA (not so easy), I just bribe an Amazon engineer to give me the Amazon certificate private key (arguably easier), and I need the exact same hardware to perform a MITM attack using this key?

Then the existence of the hardware wouldn't prove the existence of false certificates issued by corrupt CA, but only the existence of compromised-and-yet-still-in-use certificates.

Aside from the revelations that some CA's appear to be engaged in this activity, the flaw is really nothing new.

SSL has always depended on the CA's being trustworthy. I have had my doubts about this for a long time. When I have explained SSL to people in the past, I have for years added the caveat that the security depends on the Certificate authorities being trustworthy and resisting attempts by both criminals and governments to obtain phony certificates.

A site where truly sensitive information is being discussed, say democracy movements in China or Iran, should probably use a self-signed certificate. When a person first uses the site, they will need to add an exception, which stores a copy of the cert; then, even if a CA issues a cert for that site, a correctly configured browser can still pop up an alert that the cert is different from the one previously stored.
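The pinning behavior this commenter describes (sometimes called trust-on-first-use) can be sketched in a few lines. The hostname and certificate bytes below are stand-ins, not real data:

```python
import hashlib

# Trust-on-first-use pinning: on first contact, the certificate's
# SHA-256 fingerprint is stored; later connections are checked against
# it, so even a CA-signed substitute raises an alert. The DER bytes
# here are placeholders for a real certificate.

pin_store = {}  # hostname -> hex fingerprint

def check_pin(hostname, der_cert_bytes):
    fingerprint = hashlib.sha256(der_cert_bytes).hexdigest()
    stored = pin_store.get(hostname)
    if stored is None:
        pin_store[hostname] = fingerprint  # first use: remember it
        return "pinned"
    return "ok" if stored == fingerprint else "MISMATCH"

print(check_pin("activist.example", b"original-cert"))     # pinned
print(check_pin("activist.example", b"original-cert"))     # ok
print(check_pin("activist.example", b"mitm-issued-cert"))  # MISMATCH
```

The weakness, of course, is the first connection: if the attacker is already in place when the pin is stored, the wrong certificate gets pinned.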

It would have been nice if SSL had been designed in such a way as to allow CAs to be trusted only for certain portions of the DNS namespace. That way, EDUCAUSE could sign .EDU certificates, the NSA could sign .GOV certificates, and so on...but EDUCAUSE couldn't fake a .GOV site.

(An even better extension would be chaining them, so that MIT could sign any certificates for MIT.EDU but no other EDU domain name....)
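The scoping this commenter wishes for amounts to a DNS-suffix check. (X.509 does in fact define a name-constraints extension along these lines, though it has historically seen little deployment.) A minimal sketch, with invented CA names:

```python
# Sketch of namespace-scoped CAs: each CA may only sign hostnames under
# its delegated DNS suffixes. CA names and mappings are illustrative.

CA_NAMESPACES = {
    "EDUCAUSE": [".edu"],
    "NSA": [".gov"],
    "MIT CA": [".mit.edu"],  # the chained case: a sub-namespace
}

def ca_may_sign(ca_name, hostname):
    """Allow a CA to sign only hostnames under its permitted suffixes."""
    host = hostname.lower()
    return any(host.endswith(sfx) for sfx in CA_NAMESPACES.get(ca_name, []))

print(ca_may_sign("EDUCAUSE", "www.mit.edu"))    # True: inside .edu
print(ca_may_sign("EDUCAUSE", "www.irs.gov"))    # False: outside .edu
print(ca_may_sign("MIT CA", "web.mit.edu"))      # True: inside mit.edu
print(ca_may_sign("MIT CA", "www.harvard.edu"))  # False: other .edu domain
```

Under such a scheme, a government CA could still forge sites within its own namespace, but could no longer impersonate arbitrary sites worldwide.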

This is a serious problem, and forget about this just being the US government. I am sure we have the most influence, but you know every government in the world wants this power and wouldn't think twice about using it (and probably has!)

Because of the international nature of the Internet, CAs should never, and yes I mean never, work with any outside body to compromise their own certificates. It undermines the whole purpose for their existence

I do like the idea of letting consumers choose their own CAs, or at least informing the consumer of which CA is authorizing the certificate they are using on use. Still it is probably still an OK system for me to buy a DVD at Amazon.

Could someone tell me why the existence of said hardware couldn't easily be explained by the need for a few organizations to use leaked private certificate keys? Instead of blackmailing a CA (not so easy), I just bribe an Amazon engineer to give me the Amazon certificate private key (arguably easier), and I need the exact same hardware to perform a MITM attack using this key?

Then the existence of the hardware wouldn't prove the existence of false certificates issued by corrupt CA, but only the existence of compromised-and-yet-still-in-use certificates.

I don't see how that's better.

Scenario A: Government coerces someone at Verisign to give them a certificate for Yahoo Mail
Scenario B: Government coerces someone at Yahoo to give them a certificate for Yahoo Mail

I suppose you have a greater scope of the threat in the case of a CA because they can forge any website instead of just one, but that doesn't mean much if they can read your email either way.

Quote:

How about not providing trusted root CAs with browsers? That way the user has to choose who to trust. Really, the only way around this is for companies to take control of managing their own cert infrastructure. Cmon, how hard is it for Amazon to have a page offering its trusted root CA cert? You get it once, load it, and never see it again until the CA cert expires or you reload the box.

I've got a better idea. Keep the root CAs, but only use them once per website. The first time you ever connect to a particular website, you get a full page in the browser saying "this is a secured website: Verified by [whoever]" where the CA is emphasized, its place of origin is noted from a database inside the browser (San Francisco, CA vs. Beijing, China) and in addition you do all the "ask someone over TOR whether they see the same certificate" stuff and put up huge warnings and paint everything in red if any of the checks fail.

Then you ask the user if they want to permanently store the website's certificate. If they say yes, no more using the root CA for that website. You just use the local copy of the site's certificate from then on. And if the certificate ever changes, then you get a red screen with a huge warning that says the certificate has changed.

How about not providing trusted root CAs with browsers? That way the user has to choose who to trust. Really, the only way around this is for companies to take control of managing their own cert infrastructure. Cmon, how hard is it for Amazon to have a page offering its trusted root CA cert? You get it once, load it, and never see it again until the CA cert expires or you reload the box. You can have secure or easy, pick one.

Unfortunately, as someone running an internal corporate PKI with external trusted roots, this proposal is less of a solution than just doing away with CA-based certificates altogether. I cannot tell you the amount of work I have put into my environment to ensure that there is trust for my certificates across all of our necessary platforms... some of which don't allow for importing new trusted roots. Secondly, you are asking users to take more responsibility (do more work) for security, which isn't going to happen easily if at all. There is an excellent research paper out of MSFT by Cormac Herley (So Long, and No Thanks for the Externalities) that describes this phenomenon quite well.

In short, certificate based models require some type of trust. Whether it is authority-based trust (as used in the X.509 system used with TLS) or a "web of trust" (as used in PGP/GPG), you have the same issue. When discussing something like e-commerce, it gets worse. If you devolve the system to pure side-channel key distribution, how do you receive your trading partner's key? Do you contact an individual or office of the company? If so, how do you verify that person's identity and association with the company? Do you reverify the certificate or key information for each transaction? There are several issues here... and many make me reach for my own tinfoil hat even though they are possible and plausible.

The irony of it all is how easily and widespread stories of China's Great Firewall enter and dominate the national discourse here in the US, but stuff like this not so much. I get that censorship by governments is frowned upon by democratic societies, but at the same time these free thinking and freewheeling societies actively work to spy on free people within the society and of course without telling them.

Self-signed certificates and storing the certificate in the browser can be done. However, none of the companies (such as Google or Amazon) doing the self-signing can afford to go against court orders, and therefore it cannot be a solution.

I've got a better idea. Keep the root CAs, but only use them once per website. The first time you ever connect to a particular website, you get a full page in the browser saying "this is a secured website: Verified by [whoever]" where the CA is emphasized, its place of origin is noted from a database inside the browser (San Francisco, CA vs. Beijing, China) and in addition you do all the "ask someone over TOR whether they see the same certificate" stuff and put up huge warnings and paint everything in red if any of the checks fail.

Then you ask the user if they want to permanently store the website's certificate. If they say yes, no more using the root CA for that website. You just use the local copy of the site's certificate from then on. And if the certificate ever changes, then you get a red screen with a huge warning that says the certificate has changed.

So basically what SSH does. It's logical but unfortunately, it's yet another screen for users to robo-OK.

There is one big oversight in this article: a 'full' digital certificate contains two very different parts: a keypair, composed of a public key which anyone can access and a private key which, as its name implies, is private; and the certificate itself, which is signed by the CA and only contains the public key.

In fact, the CA as a rule does not generate the keypair; it is up to the customer to do it. The customer then sends a so-called "certificate signing request" to the CA, which contains only the public key and various attributes, and the CA cryptographically puts its stamp on it. At no point does the private key leave the customer's premises; moreover, the private key is usually generated inside a hardware device (HSM) and cannot leave it - the HSM does crypto calculations with the key if necessary but never gives it away.

If you want to perform a man-in-the-middle attack, you need both the public certificate and the corresponding private key. Cryptographic proof of ownership of the private key asserts that you are the rightful owner of the corresponding certificate; this is what TLS/SSL is about. Without the private key you cannot mount your attack, and the only way to get the private key is to get it from the customer, not the CA.

Bottom line:

1. No one has to coerce a CA into giving out a customer's certificate; they are public - it's the whole point of public key crypto.
2. No one can coerce a CA into giving out a customer's certificate + private key... because the CA never had access to the private key in the first place.
3. On well-run sites, the only way to get a copy of the certificate's private key would be to physically steal an HSM, something that would get noticed. So you cannot bribe a sysadmin either. There are processes such as dual approval, duty separation and so on, which ensure a single person can never have access to such systems.

1. No one has to coerce a CA into giving out a customer's certificate; they are public - it's the whole point of public key crypto.
2. No one can coerce a CA into giving out a customer's certificate + private key... because the CA never had access to the private key in the first place.

No, but the CA can sign a public key that corresponds to the attacker's private key on a certificate that says it belongs to e.g. gmail.com, in which case the attacker can impersonate gmail.com.

Quote:

3. On well-run sites, the only way to get a copy of the certificate's private key would be to physically steal an HSM, something that would get noticed. So you cannot bribe a sysadmin either. There are processes such as dual approval, duty separation and so on, which ensure a single person can never have access to such systems.

This is assuming that a) all companies Do The Right Thing and b) that the procedures you're talking about would mean a thing if the company is required to comply with a subpoena.

If an entity is powerful enough to threaten a single person, threatening 2 people probably isn't all that difficult either. Then again, if this is the case, using self-signed certificates isn't the answer either, since theoretically anyone could be threatened like this.

There should be a way to publish issued certificates, with a registration number, in a public place. Then I (or a browser) can look up the registration and see who really got it. For example, GMail gets a cert from VeriSign, and it is published as #12345678901234. I can see that number in the browser, then go to a public site and find who really has it. And GMail can see who has certs issued for its name.

Exactly. I don't see why some people are so much more willing to send their bank's routing number and checking account number through the mail, passing through hundreds of people's hands.

I didn't say I wanted to send a personal check; there are other forms of sending one-time money to a particular person through physical mail, some of which are handled by the banking system as if they were checks. (And there are one-time electronic transfers too, I know; postal money orders are just handier for me to use.)

There should be a way to publish issued certificates, with a registration number, in a public place. Then I (or a browser) can look up the registration and see who really got it. For example, GMail gets a cert from VeriSign, and it is published as #12345678901234. I can see that number in the browser, then go to a public site and find who really has it. And GMail can see who has certs issued for its name.

The problem is that it's completely insecure and makes it even easier to exploit the average user. If someone can reroute me to a fake amazon.com with a fake security certificate, then when I go to http://www.sslcheck.com it's likely they'll send me a fake version of that as well. It's less secure than the existing system.

The problem is that if the people creating or hosting that list can't be trusted, the system isn't secure. At the moment we rely on CAs and our Web browsers. It's fairly obvious what the browsers trust, but it's not obvious at all whether the CAs are trustworthy. If we have an online list, it's impossible to tell if that list, for starters, is even remotely real. If it's protected with a certificate itself, we still have the same problem.

edit: The silly thing is, I've generally been pretty security aware, but checking that the certificate that's confirming the page is even remotely real? Never crossed my mind. Maybe we need to move back to a non-controlled web of trust model, although users won't really work with that either.

1. No one has to coerce a CA into giving out a customer's certificate; they are public - it's the whole point of public key crypto.
2. No one can coerce a CA into giving out a customer's certificate + private key... because the CA never had access to the private key in the first place.

No, but the CA can sign a public key that corresponds to the attacker's private key on a certificate that says it belongs to e.g. gmail.com, in which case the attacker can impersonate gmail.com.

In fact, if I'm not mistaken many companies do this for their SSL proxy in order to filter SSL traffic. The proxy can generate a new cert on the fly for the website you visit, signed by the company's own CA which is trusted by corporate computers, so you don't get a warning on your browser.

Could someone tell me why the existence of said hardware couldn't easily be explained by the need for a few organizations to use leaked private certificate keys? Instead of blackmailing a CA (not so easy), I just bribe an Amazon engineer to give me the Amazon certificate private key (arguably easier), and I need the exact same hardware to perform a MITM attack using this key?

Then the existence of the hardware wouldn't prove the existence of false certificates issued by corrupt CA, but only the existence of compromised-and-yet-still-in-use certificates.

Because what company is going to make a product whose only purpose is illegal? I'm pretty sure running a MITM attack hits on some wiretapping laws. If they sell it, there has to be a legitimate use they can point to (where "legitimate" can be argued to include working with the government on other matters, since we're looking for government-endorsed legitimacy here).

There is one big oversight with this article: a 'full' digital certificate contains two very different parts: a keypair composed of a public key which anyone can access and private key which as its name implies is private, and the certificate itself which is signed by the CA and only contains the public key.

So?

The CA can generate, for anyone, a certificate saying "amazon.com". If you can get VeriSign on-side, you can get one that says "amazon.com" and was issued by VeriSign. You can even get it to support EV, so the address bar will go green (or whatever your preferred browser does).

That some details tucked away might say something else is irrelevant. The core data (the data visible when hovering over the certificate/etc.) will all appear correct.

There is a plugin for Firefox called Perspectives that offers some mitigation against this. It tracks the certificates for SSL web sites over time and notifies you of the length of time that the certificate has been in use. Presumably in a "classic" MITM attack (i.e. one w/o the CA's consent) you can say "Hm... what are the odds Amazon just changed its certificate a couple hours ago... maybe I'll just wait a day or so to buy this book".

Obviously, if you get a different certificate than the one they are tracking, you also get a warning. Equally obviously, the communications could be faked by the MITM, but this would require that he know about the plugin, and that he spoof their communications, too (I don't know if they use SSL for the verification).
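The heuristic the commenter describes can be sketched as a first-seen timestamp per certificate fingerprint. The one-day threshold, the hostnames, and the fingerprints below are illustrative assumptions, not anything Perspectives itself specifies:

```python
import time

# Record when each (hostname, certificate fingerprint) pair was first
# observed, and flag fingerprints that appeared only very recently: a
# certificate first seen moments ago is exactly what a freshly minted
# MITM certificate would look like. One-day cutoff is arbitrary.

first_seen = {}  # (hostname, fingerprint) -> unix timestamp of first sighting

def observe(hostname, fingerprint, now=None):
    now = time.time() if now is None else now
    key = (hostname, fingerprint)
    first_seen.setdefault(key, now)
    age_days = (now - first_seen[key]) / 86400
    return "suspicious" if age_days < 1 else "established"

t0 = 1_000_000_000  # fixed timestamp so the demo is deterministic
print(observe("amazon.example", "ab12cd", now=t0))              # suspicious
print(observe("amazon.example", "ab12cd", now=t0 + 7 * 86400))  # established
```

As the commenter notes, this only raises suspicion; it can't distinguish a legitimate certificate rollover from an attack on its own.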

This is a serious problem, and forget about this just being the US government. I am sure we have the most influence, but you know every government in the world wants this power and wouldn't think twice about using it (and probably has!)

I'll surely go along with you there. The watchers need to be watched, and the governators want their power. (The CAs want their power, too, I daresay.)

But governators and CAs aren't folks: they *consist* of folks, regular folks, working 9-to-5 (or some variety thereof). And just like you said, nonetheless, they need to be watched. For, surely, one of them will fall asleep at the wheel and miss a meeting where a change in the rules of rulemaking is announced. Or another will let a screen's worth of possible enforcement slip by. Or another will turn out to be a crook and do something loathsome.

These individuals, and their bosses, separately and collectively, need to be watched. A gruesome task.

If I were a terrorist, I'd be encrypting the body of my emails myself anyway.

funny thing is, terrorists use private sections of web forums, satellite phones and similar, but do all the talking in the open.

Or maybe they use nicknames for various topics. The best example I have heard of was a drug ring that got busted, where the police had a better grasp on the nicknames used for various drugs and amounts than the criminals did, thanks to wiretaps and such.