Another fraudulent certificate raises the same old questions about certificate authorities

For the second time this year, Iranian hackers have created a fraudulent …

Earlier this year, an Iranian hacker broke into servers belonging to a reseller for certificate authority Comodo and issued himself a range of certificates for sites including Gmail, Hotmail, and Yahoo! Mail. With these certificates, he could eavesdrop on users of those mail providers, even if they use SSL to protect their mail sessions.

It's happened again. This time, Dutch certificate authority DigiNotar has issued a fraudulent certificate for google.com and all subdomains. As before, Gmail appears to be the target. The perpetrator also appears to be Iranian, with reports that the certificate has been used in the wild for man-in-the-middle attacks in that country. The certificate was issued on July 10th, and so could have been in use for several weeks prior to its discovery.

DigiNotar has revoked the certificate, which provides some protection to users (though many applications do not bother checking for revocations). However, the company has so far not disclosed how the certificate came to be issued in the first place, making it unclear whether its systems' integrity has been restored. As a result, Google and Mozilla have both issued patches to Chrome and Firefox, respectively, that blacklist the entire certificate authority.

Microsoft says that Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2 check Microsoft's online Certificate Trust List. The company has removed DigiNotar from this list, so Internet Explorer on those systems should already distrust the certificate. The company will issue a patch to remove it from Windows XP and Windows Server 2003.

DigiNotar's silence also means that little is known about the perpetrator. Responsibility for the Comodo hack was claimed by someone describing himself as an Iranian sympathetic to, but independent of, the country's government. This latest hack could just as well be another independent effort, or a government action.

The absolute trust given to certificate authorities, and the susceptibility of that trust to abuse, has long been considered a problem. We wrote about the problem in March, and there has been no material improvement in the situation since then. The certificate authorities remain a weak link in the entire public key infrastructure, and though cryptographic systems can be designed that reduce this dependence, the scheme we have remains firmly entrenched, regardless of its flaws.

Update: DigiNotar's parent company, Vasco, has issued a statement about the issue. It claims that DigiNotar first detected a break-in on July 19th, and called in external auditors in response. DigiNotar and the auditors believed that the company had revoked all of the fraudulent certificates; however, "at least one" was apparently missed. An additional certificate has now been revoked. The statement does not rule out the possibility that there are other fraudulent certificates that haven't been revoked.


Any answers to these same old questions (i.e., what can/should replace SSL)?

SSL has nothing to do with this, Dirac; this is a question of authentication, not encryption. SSL has had occasional flaws, but this incident has nothing whatsoever to do with the encryption, which remained completely secure. Encryption is orthogonal to authentication: you can have one or the other or both (although encryption without authentication is obviously much more susceptible to MITM).

Authentication is a tough problem, though. There has to be a balance among level of verification and trust, security of any chains or webs, scalability, barrier to entry, and so forth. Some measures seem like they'd be easier to implement and still be effective: letting certs be signed by multiple authorities (which in turn enables requiring multiple authorities for what users deem critical domains), being able to assign different levels of trust to different authorities, and so on. More complex hybrids might also be useful, but that can get complicated quickly, and if it's too hard then people won't bother, which defeats the entire purpose.
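The multiple-signature idea above can be sketched in a few lines. This is a toy model, not real X.509: the "CA" names and keys are invented, and HMAC stands in for public-key signatures so the example stays self-contained. The threshold logic is the point.

```python
import hashlib
import hmac

# Toy model: each "CA" signs a certificate by HMAC-ing its bytes with a
# private key. Real X.509 uses public-key signatures; names/keys here are
# made up for illustration.
CA_KEYS = {"ca-alpha": b"k1", "ca-beta": b"k2", "ca-gamma": b"k3"}

def sign(ca, cert_bytes):
    return hmac.new(CA_KEYS[ca], cert_bytes, hashlib.sha256).hexdigest()

def verify_threshold(cert_bytes, signatures, trusted, required):
    """Accept the cert only if at least `required` distinct trusted CAs
    have produced a valid signature over it."""
    valid = {ca for ca, sig in signatures.items()
             if ca in trusted
             and hmac.compare_digest(sig, sign(ca, cert_bytes))}
    return len(valid) >= required

cert = b"CN=mail.example.com,pubkey=..."
sigs = {ca: sign(ca, cert) for ca in ("ca-alpha", "ca-beta")}
print(verify_threshold(cert, sigs, trusted=set(CA_KEYS), required=2))  # True
print(verify_threshold(cert, sigs, trusted=set(CA_KEYS), required=3))  # False
```

A user could then mark Gmail as "critical" and demand `required=2` for it, so compromising any single authority would no longer be enough.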

On the individual level I think a personal cert should be part of government-issued ID these days. That would help provide a foundation for some other efforts and be a logical extension of one of the government's main jobs, but it seems like a tough thing to rally support for.

According to https://twitter.com/johnwilander at least, it was effective in this case. His observations were retweeted by some Mozilla folks (which is how I saw it), which might mean they're considering support in the future. It feels like an ugly ad hoc approach, but if it's effective...

The certificate was issued on July 10th.

Chrome 13, which has pinning, was made current on August 9th. So for about a month, even up-to-date Chrome users were vulnerable, and not-up-to-date ones still are.

SSL isn't the problem; it's the X.509 CA system that's showing significant weaknesses at scale. The CA system is used to authenticate the certificates offered during SSL startup, and to verify the binding of the offered certificate to the purported domain name of the server. There are other ways to do this which can scale a lot better than the CA system we have today. For example, the Internet Engineering Task Force (IETF) has a working group (DANE) whose initial goal is to use Secure DNS (DNSSEC) to verify these SSL certificates based on the domain name of the web server. This isn't a quick fix -- anything so fundamental to the architecture of the web will take some years to roll out. However, there appears to be no silver-bullet solution that could arrive any faster. We can ask DigiNotar and Comodo and all the other CA companies to improve their security, but in a competitive and growing market like the X.509 CA system there will always be new entrants, and there will always be a nonzero operational error rate on security.
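To make the DANE approach concrete, here's a rough sketch of how the TLSA record it defines can be derived: with usage 3 (domain-issued cert), selector 1 (the public key), and matching type 1 (SHA-256), the record carries a hash of the certificate's DER-encoded SubjectPublicKeyInfo, published in the domain's own DNSSEC-signed zone. The DER bytes below are a placeholder, not a real key, and `www.example.com` is just an example name.

```python
import hashlib

# Placeholder for a real DER-encoded SubjectPublicKeyInfo; a real record
# would hash the actual bytes extracted from the server's certificate.
spki_der = b"\x30\x82\x01\x22placeholder-key-bytes"

# Matching type 1 = SHA-256 of the selected data (selector 1 = public key).
association = hashlib.sha256(spki_der).hexdigest()

# The zone owner would publish something like this, signed with DNSSEC:
print(f"_443._tcp.www.example.com. IN TLSA 3 1 1 {association}")
```

A browser resolving that record over DNSSEC could then check the server's presented key against the hash, with no third-party CA in the loop.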


I have what might be a silly question, but I was wondering what kind of legal obligations those certificate authorities have and whether they are, in any way, liable for damages that could be incurred by third parties as the result of their security failure.

If not, what incentive do they actually have to ensure the highest level of security?



The certificate was issued on July 10th.

Chrome 13, which has pinning, was made current on August 9th. So for about a month, even up-to-date Chrome users were vulnerable, and not-up-to-date ones still are.

Yes, which is why pinning was introduced. The fact that it wasn't working in software that didn't include it is not terribly surprising.

Don't get sensitive just because the somewhat effective but inelegant, currently shipping solution that I brought up would make a more interesting story than yours.

(Ars, do you realize that your mobile site's 'Reply' link has been broken since a couple of months ago?)

Ostracus wrote:

paulvixie wrote:

For example, the Internet Engineering Task Force (IETF) has a working group (DANE) whose initial goal is to use Secure DNS (DNSSEC) to verify these SSL certificates based on the domain name of the web server.

And some other weak point will be targeted.

Nothing can be completely secure, short of plugging your computer directly into the switch that the destination machine is attached to.

You guys are missing the point here. There's no need for information security majors and professionals to argue about authentication versus encryption. Diffie-Hellman, MITM, CBC, blah blah. The most effective (short-term) solution is to nuke Iran. Then you take care of about 150 birds with one stone. Problem solved. Then we're good until China starts coming up with fraudulent certificates.

Yes, the weaknesses in the hierarchical X.509 PKI have been known for years, acknowledged for years, but I have never seen any effort to fix it. It's not like it's hard.

A useful first step would be to use the existing PKI infrastructure only to download a new set of certs that come as part of a site bookmark the user is expected to use for future logins. That gives us several advantages:

- Forward secrecy, because later compromise of the X.509 cert doesn't compromise the downloaded cert, so future connections to the site are safe regardless of what happens to the issuing CA.

- Mutual authentication that doesn't depend on a password, because the site knows it can identify the user by the cert they issued.

- Phishing protection, because the site knows if the user used the bookmark, and thus can train the user to always use it instead of doing something stupid - like clicking on a URL.

So we could fix this with technology and standards. If we did, we could go a little further and fix the Firesheep hole along with it. That we don't bother is a pox on all our houses.
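The bookmark-cert idea above boils down to trust-on-first-use pinning. A minimal sketch, with invented names and an in-memory dict where real code would persist the pin and run normal X.509 validation on the bootstrap visit:

```python
import hashlib

# host -> pinned certificate fingerprint, saved with the bookmark.
pinned = {}

def fingerprint(cert_der):
    return hashlib.sha256(cert_der).hexdigest()

def first_visit(domain, cert_der):
    # Assume cert_der already passed ordinary X.509/CA validation here;
    # the existing PKI is used only for this one bootstrap step.
    pinned[domain] = fingerprint(cert_der)

def later_visit(domain, cert_der):
    # True only if the presented cert matches the one saved in the
    # bookmark, regardless of what any CA says afterwards.
    return pinned.get(domain) == fingerprint(cert_der)

first_visit("example.com", b"original-cert-der")
print(later_visit("example.com", b"original-cert-der"))    # True
print(later_visit("example.com", b"fraudulent-cert-der"))  # False
```

A DigiNotar-style fraudulent cert would fail the `later_visit` check even though it chains to a "trusted" root, which is exactly the property the poster is after.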

Ah, good reminder paulvixie, I knew I was spacing out on another obvious one. Yeah, DNSSEC could be another good step forward, separating out domain authentication from individual/organization authentication. The owner of a domain name should be able to have that cryptographically proven, for nothing. Just as a function of having control of the domain. The actual expensive full certification should only be for stuff like extended validation. Of course that runs into some barriers in the forms of existing CAs who make a good chunk of change off of basic domain certs.

Ostracus wrote:

paulvixie wrote:

For example, the Internet Engineering Task Force (IETF) has a working group (DANE) whose initial goal is to use Secure DNS (DNSSEC) to verify these SSL certificates based on the domain name of the web server.

And some other weak point will be targeted.

Safety is about raising the attacker's energy cost, not erecting some sort of magical absolute barrier. One generally aims to raise the cost barrier sufficiently above the value of whatever one is protecting.

I don't know if anyone here saw this (actual software) on slashdot a couple weeks ago, but it seems like an awesome solution to me. The best part is, if certificate validity is something that is important to you, you can start running this right now, without having to wait for any other pieces of infrastructure to get put into place.

Short version for those not clicking the link: you install a plugin into your web browser. Any time it goes to a TLS-secured site, the plugin contacts one or more servers, in various geographical locations around the internet, that you have previously configured in the program (and that you trust). You give these servers the URL (or at least the domain) that you are connecting to; they then open a connection to that domain, get the certificate from that domain's server, and send it back to you. At this point the plugin compares the cert that it got from the domain's server to what the other servers found. If it doesn't match, then you know that there is a man-in-the-middle attack of some type going on.

From here you could immediately look back at the chain of trust for both the valid cert from somewhere else, and the fraudulent one that you received and see exactly what CA issued the fraudulent one.
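The comparison step described above can be sketched as a simple vote. The notary names and the quorum rule (simple majority) are invented for illustration; the real software has its own configurable policy.

```python
from collections import Counter

def cert_agrees(local_fp, notary_fps):
    """Return True if a majority of notaries, connecting from their own
    corners of the internet, saw the same cert fingerprint we did."""
    votes = Counter(notary_fps.values())
    agreeing = votes.get(local_fp, 0)
    return agreeing > len(notary_fps) / 2

# Fingerprints reported by three hypothetical notaries for one domain.
notaries = {"notary-us": "ab12", "notary-eu": "ab12", "notary-asia": "ff99"}

print(cert_agrees("ab12", notaries))  # True: 2 of 3 notaries agree with us
print(cert_agrees("ff99", notaries))  # False: only 1 of 3 saw this cert
```

An attacker sitting between you and the site (but not between the notaries and the site) hands you a cert that loses this vote, which is what flags the MITM.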



Yes, which is why pinning was introduced. The fact that it wasn't working in software that didn't include it is not terribly surprising.

Don't get sensitive just because the somewhat effective but inelegant, currently shipping solution that I brought up would make a more interesting story than yours.

Huh?

The point is that Chrome users were vulnerable too. Pinning was introduced too late to help in this particular instance.

Saw it, but I'd love to have more information on the technical workings of Convergence. There is a video from Blackhat which nicely sums up the CA problem, and perhaps describes in more detail how Convergence works, but I haven't had the time to go through it in its entirety. Any idea where I can find a whitepaper or some documentation?

You guys are missing the point here. There's no need for information security majors and professionals to argue about authentication versus encryption. Diffie-Hellman, MITM, CBC, blah blah. The most effective (short-term) solution is to nuke Iran. Then you take care of about 150 birds with one stone. Problem solved. Then we're good until China starts coming up with fraudulent certificates.

This makes me sad.

You do realize that Iranians are varied in their support, right? I only know about four Iranian nationals, but they are all disillusioned with their country's promises and are leaning more pro-Western than anything. In contrast, I know quite a few people in China who (though they cannot say so, specifically) despise the restrictions their government places on them and the actions they believe their government takes.


I don't know if anyone here saw this (actual software) on slashdot a couple weeks ago, but it seems like an awesome solution to me. The best part is, if certificate validity is something that is important to you, you can start running this right now, without having to wait for any other pieces of infrastructure to get put into place.

Yeah, I saw that and my initial reaction was "that's an interesting idea, I'll look into it when I have more time to research more fully". After this, I think I will be installing it today.

Sadly, I was under the assumption that HSTS was already implemented. What is the difference between this and simply forwarding all requests to HTTPS with a redirect? Is it that a redirect still requires an observant end user to notice when it doesn't happen? How does HSTS fix that?

HSTS would provide no help against this attack. In this attack the attacker has a certificate that your browser thinks is valid. This allows them to decrypt all traffic, even if 100% of it is encrypted.

HSTS protects against a portion of your traffic not being encrypted: username, password, session ID, etc. Sites that redirect to a secure protocol only after the session has been established, with a session ID already sent over the wire, fail at this.
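In other words, HSTS replaces the server-side redirect with a client-side rule: once a host has sent the `Strict-Transport-Security` header over HTTPS, the browser rewrites later `http://` requests itself until `max-age` expires. A minimal sketch of that client behavior (hostnames invented, policy store just a dict):

```python
import time

hsts_store = {}  # host -> expiry timestamp of its HSTS policy

def record_header(host, header_value):
    # Parse e.g. "max-age=31536000; includeSubDomains" received over HTTPS.
    # (includeSubDomains handling is omitted in this sketch.)
    for part in header_value.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            hsts_store[host] = time.time() + int(part[len("max-age="):])

def effective_scheme(host, requested_scheme):
    # Upgrade plaintext requests for hosts with a live HSTS policy, so the
    # vulnerable pre-redirect plaintext hop never happens.
    if requested_scheme == "http" and hsts_store.get(host, 0) > time.time():
        return "https"
    return requested_scheme

record_header("mail.example.com", "max-age=31536000; includeSubDomains")
print(effective_scheme("mail.example.com", "http"))   # https
print(effective_scheme("other.example.org", "http"))  # http
```

That's also why it can't help against DigiNotar: the attacker's connection here was already HTTPS, just with a fraudulent cert the browser trusted.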

For example, the Internet Engineering Task Force (IETF) has a working group (DANE) whose initial goal is to use Secure DNS (DNSSEC) to verify these SSL certificates based on the domain name of the web server.

Every time I've read something about this topic, I've seen someone suggest this as an option, but the usual response is that it doesn't fix any of the problems with using a central authority for the authentication side of things. It just replaces one central authority with another, and in many cases it may just be different parts of the same company, since lots of CAs are also domain registrars. We might as well just stop trusting all the current CAs and set up new CAs within the current system; it would be much quicker and easier and would accomplish the same thing.


So why would y'all think instantly that IRAN - specifically - is the problem?

It's "Iranian hackers"?

What would random Iranian hackers be doing this for? Reading other people's spam? Not likely.

More likely, Iranian secret police, looking for internal dissidents, or spies, or saboteurs. Who are probably now in custody.

And the "Iranian hackers" were clumsy enough to get caught.

Gee. I wonder how un-clumsy the NSA is? They probably get bogus certificates issued before they butter their toast in the morning. And they never get caught, and if they did, nobody will ever mention it in the press.

I think the increased latency makes Convergence a non-starter. How many round trips through proxies(!) do you expect the browser to make before it can start loading the page?

If I understand correctly, just one. Convergence uses caching to store the notary information (the notary's "network perspective", i.e., the SSL certificate it receives when connecting to the server from its own corner of the internet). More in the video.