You can also not bother using CRLs, and just use OCSP, which is turned on by default (EV certificates require it or else the browser won't display the "green bar").

Because it does a live check only on the certificates actually presented, rather than downloading the whole CRL at intervals, OCSP uses fewer network resources for both you and the CA, updates faster (CRLs update every few days), and is generally superior in all ways. Like CRLs, OCSP responses are signed by the CA that issued them, and so cannot be tampered with.

You can even have your browser set to not trust the certificate presented if the OCSP query fails, which is a good fail-safe. I wish there were a "warn if OCSP check fails" option, rather than only "fail silently and allow the connection to proceed if OCSP fails" and "fail noisily and not work if OCSP fails". The former leaves people vulnerable, while the latter presents a DoS attack target.
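The three failure modes could be sketched like this (a toy Python sketch; the policy names and function are invented for illustration, not any browser's actual API):

```python
from enum import Enum
from typing import Optional

class OCSPPolicy(Enum):
    SOFT_FAIL = "soft-fail"   # proceed silently if the check errors out
    WARN = "warn"             # proceed, but surface a warning
    HARD_FAIL = "hard-fail"   # refuse the connection

def handle_ocsp_result(policy: OCSPPolicy, responder_reachable: bool,
                       status: Optional[str]) -> str:
    """Decide what to do with a connection given an OCSP outcome.

    status is "good", "revoked", or None when the responder
    could not be reached.
    """
    if responder_reachable and status == "revoked":
        return "block"    # every policy blocks a known-revoked cert
    if responder_reachable and status == "good":
        return "allow"
    # Responder unreachable: this is where the policies diverge.
    if policy is OCSPPolicy.HARD_FAIL:
        return "block"    # DoS-prone: block the responder, kill the site
    if policy is OCSPPolicy.WARN:
        return "allow-with-warning"   # the missing middle ground
    return "allow"        # soft-fail: silent, leaves users vulnerable
```

The interesting row is the unreachable responder: soft-fail silently allows, hard-fail turns a blocked responder into a denial of service, and the warn option is the compromise browsers mostly don't offer.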

Pushing out OS and browser updates to manually revoke those certificates is not a bad idea, particularly for those who have OCSP disabled for whatever reason, but there's not really any reason to manually install CRLs when OCSP exists.

The problem is that OCSP as currently implemented doesn't really work, precisely because of the two problems you describe (vulnerability if OCSP checks are nonfatal, failure to connect if they are). In fact, there are places where all OCSP is effectively blocked (airport wireless networks before you've paid, say, and paying requires SSL, so if OCSP is fatal then you can't pay). As a result, browsers default to non-fatal OCSP checks (with Chrome having a small unobtrusive warning if the check fails, which

The extent of the problem is surely a matter of whether the signing key was obtained or not. If the attackers obtained that, then revoking the certs issued on the CA's computer is almost immaterial, as the attacker can continue issuing certs on their own computer. Gah, this is such a mess.

That's why it's a good idea to have an offline root certificate that is only used for signing one or more intermediate issuing certificates. These intermediates then sign certs issued to the public.

If the intermediates get compromised, the root is brought out of storage, issues a revocation for the intermediates (which also revokes any keys issued by the now-compromised intermediate), signs a new intermediate, and is put back into storage. This greatly reduces the risk of the root being compromised. Since r
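The transitive effect is easy to see in a toy model (all names here are invented; real chain validation checks signatures, not string lookups):

```python
# Toy model of root -> intermediate -> leaf trust, illustrating why
# revoking a compromised intermediate invalidates every cert it issued.

CHAIN = {                       # cert -> its issuer
    "intermediate-1": "offline-root",
    "leaf-bank.example": "intermediate-1",
    "leaf-shop.example": "intermediate-1",
}

def is_trusted(cert: str, trusted_roots: set, revoked: set) -> bool:
    """Walk the issuer chain; fail if any link is revoked or unknown."""
    while cert not in trusted_roots:
        if cert in revoked or cert not in CHAIN:
            return False
        cert = CHAIN[cert]
    return cert not in revoked

roots = {"offline-root"}
assert is_trusted("leaf-bank.example", roots, revoked=set())
# Root revokes the compromised intermediate: every leaf it signed dies with it.
assert not is_trusted("leaf-bank.example", roots, revoked={"intermediate-1"})
```

Since the root's private key never touches a networked machine, the worst-case compromise is bounded at the intermediate level.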

No, I said the tools for managing OS certs are better (on all OSes). But yes, the OS store is more hardened against attack. And it makes no sense to have multiple DBs. Microsoft's update blocked their use in all compatible programs (Safari, Chrome, IE, etc). Aside from that, it's kind of embarrassing that Mozilla resorted to hardcoding the IDs, rather than editing the configs or pushing the bad certs across.

Anyway, a program is only as secure as the OS it's on, so your post is absurd even disregarding the abov

Why is a US-based CA inherently more trustworthy than one from Turkey? Fact of the matter is, TURKTRUST has a perfect security record, while the Comodo breach is just the latest in a long line of breaches at US CAs. And even if that weren't the case, it's still completely irrelevant to this breach. You can't possibly claim that a major browser should not have Comodo enabled by default, unless you're making the asinine claim that no CAs should be enabled by default.

I don't think the point is specifically the trustworthiness of TURKTRUST, but that it's an example of a certificate which is likely completely useless for him, just like a plethora of other such certificates, all of which are trusted by default. If any of those is broken, you are vulnerable by default. If you enable only those which you actually use, your vulnerability is greatly reduced.

As I write, Turkey has 305,678 FF4 downloads, many more than most EU and ex-USSR countries. Also it is a major US military ally, which adds to the need for trust by default.

If you look at Mozilla's pending-certificates page you'll see that getting an approval is a slow, painful (due to the bureaucracy needed) and expensive (due to the need for major law firms getting involved) matter. For example, visiting Mozilla headquarters and saying "I am the president of country/corporation X, please ad

No one said Mozilla should not offer that certificate for those who need it. They can even ship it with the browser for one-click enabling. However, what they should not do is enable it by default.

Ideally, when you first visit a site for which Mozilla has a root certificate but it's not enabled, it should tell the user what certificate it is, and if he wants to enable it (without a big, scary security warning, just "this site uses a certificate issued by TURKTRUST; the root certificate is available, do you

Removing or disabling the affected CA in the browser would be a simple enough workaround in this case, although you'd then have to trust individual certificates by hand. If previously seen certificates could be trusted directly, without fully trusting the CA, that would be even better. For example, I could trust that the existing Google certificates are good, but no longer trust the CA certificate that signed them.

You'd probably want separate levels of trust, so that certificate revocations would still
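Pinning previously seen certs could look roughly like this (a minimal sketch using SHA-256 fingerprints; the helper names and the in-memory pin store are made up for illustration):

```python
import hashlib

# Hypothetical pin store: fingerprints of certs we've seen and decided
# to trust directly, independent of whether their issuing CA is enabled.
PINNED = set()

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def pin(cert_der: bytes) -> None:
    """Record a cert as directly trusted (e.g. an existing Google cert)."""
    PINNED.add(fingerprint(cert_der))

def trusted_by_pin(cert_der: bytes) -> bool:
    """A pinned cert stays trusted even after its CA is disabled."""
    return fingerprint(cert_der) in PINNED
```

This is the "trust the certs I've seen, not the CA that signed them" idea: disabling the compromised CA would then only break sites you hadn't visited before.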

There's DNSSEC, which more and more ISPs and registries support. Then, if someone managed to hijack a certificate, he/she would also have to spoof Google's IP.

Hear, hear! The difference, the CAs will tell you, is that they verify and identify the organization rather than the domain name...

Poser = "mcdnalds.com"
Ronald = "mcdonalds.com"
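A quick string-similarity check shows just how close the poser's name is to the real one (a toy sketch; real browsers and CAs do far more than compare edit distance):

```python
from difflib import SequenceMatcher

def lookalike_ratio(a: str, b: str) -> float:
    """Crude similarity score in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

# The typosquatted name differs from the real one by a single dropped letter,
# so the ratio comes out very close to 1.0.
ratio = lookalike_ratio("mcdnalds.com", "mcdonalds.com")
print(f"similarity: {ratio:.2f}")
```

Which is exactly why "we verified the domain name" is such a thin guarantee: the name a victim glances at is nearly indistinguishable from the real thing.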

The reality seems to be that CAs keep making the process easier and easier, enriching themselves without having to show much for it in return... Many now offer a completely automated process to instantly obtain a cert... WTF?!?!?!

In my view the system would be better off if we all got SSL certs with our DNS names and t

You still want verification of the actual details, that the individual authorized for the cert is the one authorized to have the website, ad nauseam, but your idea is excellent.

Ok, how about this:

A DNS site's certificate has to be master-signed by a CA at level 3 trust (since it's infrastructure we're talking about) and then has to be counter-signed by one or more of the upstream DNS suppliers. This combines the web of trust with a superskeleton of validation that is independent of the web of trust. Comprom

Yeah, except if the situation had been reversed and Microsoft had done what Mozilla did, there would be pitchforks about how Microsoft was being evil. But no, this time it was Mozilla, and they can just do no wrong.

It's fundamentally different when it's an isolated incident versus standard operating procedure. MS wouldn't be getting anywhere near as much crap if it were just one vulnerability from time to time, rather than, as now, pretty much every vulnerability.

They have another system in place to handle certificate revocation [microsoft.com]. You can enable/disable this in IE under "Internet Options->Advanced->Security". I believe Safari and Chrome use this OS-level certificate handling too.

Can anyone explain to me why the whole SSL system isn't fundamentally broken in the first place? And by "fundamentally broken" I mean that it seems like trusting Certificate Authorities to vouch for people seems little different than trusting any random stranger.

On the other hand, of course, what choice do I have if I want to do something useful online? It's not like I can call up my bank president and make him pinky swear that if anybody sniffs my login credentials and steals all my money he'll reimburse

I'm not a security expert and my crypto knowledge is limited. But from what I can understand, the general principle here is that trusting somebody unknown is considered more dangerous than not trusting somebody you know. In addition, the meaning of "trust" in the SSL context is that "you can trust me that anything that happens between me and you is encrypted, will stay between you and me, and nobody else can hear us". It's not "trust me, visiting my website won't harm your computer or your person". There ha

Say a site devoted to dissidents purchases a cert signed by some CA like Verisign.

Now, say your government, someone else's government, or some random corporation has its own CA that is trusted by your browser. This government/corp wants to spy on your activity, so they generate a cert for dissidentsRus.org and set up a transparent proxy to intercept your traffic. While they're at it, they set up the same for your bank.

SSL is fundamentally broken. It allows only one signature per certificate. If it allowed multiple signatures, anyone could sign a certificate, and you could do things like check whether your friends trust it, or whether your bank does, and so on. Just like PGP/GPG.

Sensible sites would get their certificates signed by multiple authorities, and this would make it possible for browser users to disable e.g. Comodo certificates without losing access to a significant part of the WWW.
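The "any still-trusted signer" rule could be sketched like this (hypothetical, since real X.509 certs carry exactly one issuer signature; all CA names below are just examples):

```python
# Sketch of the multi-signature idea: a cert carries signatures from
# several CAs, and the client accepts it if ANY still-trusted CA signed it.

def is_acceptable(cert_signers: set, trusted_cas: set) -> bool:
    """Accept a cert if at least one of its signers is still trusted."""
    return bool(cert_signers & trusted_cas)

site_signers = {"Comodo", "SomeOtherCA"}

# User disables Comodo after the breach: the site keeps working because
# a second, untainted CA also signed its cert...
assert is_acceptable(site_signers, trusted_cas={"SomeOtherCA", "Verisign"})

# ...while a cert signed ONLY by the breached CA is rejected.
assert not is_acceptable({"Comodo"}, trusted_cas={"SomeOtherCA", "Verisign"})
```

The design point is that revoking trust in one CA then degrades gracefully instead of breaking every site that ever bought a cert from it.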

Admitting it was a mistake rather than coming up with some bogus excuse gives them points in my book. Whether the decision was made by marketing or is just company policy, it at least suggests they have one or two competent people over there.

Few things that are supposed to be "human condition" really are. That's usually just an excuse to not dig deeper. In this case, Mozilla happened to "err" on the side of non-disclosure just about the time it was releasing a new browser and really didn't need people mistaking the messenger for the message. Far better to let people worry about the security of other browsers.

Mozilla was the first browser vendor to patch.
SURE they could have told us exactly what they were patching, but they erred on the side of caution.
The fact that they want to be OPEN about everything is just a bonus and it's what differentiates Mozilla from every other browser vendor.

You didn't get what they did wrong. They knew about the issue 10 days before they disclosed it (and they were in fact forced to disclose it by a blogger). During that period, the affected, unsuspecting people in Iran may have been exploited, snooped on, arrested and/or executed. That's what they apologized for just now. But apologies won't help those victims (if there are any) one bit.

This has been debated endless times: when is it best to reveal that a vulnerability exists? One camp argues that it's best to delay announcements until there is no risk that the announcement will increase the degree of exploitation; the other argues that unannounced exploits are ALWAYS a danger, and that you should not assume that those who are potential threats are ignorant simply because the users are.

As you can probably gather, I tend to be in the latter group. It doesn't take a child long t

You are absolutely correct, and I'd extend that to say that since we don't know whether the attackers were able to obtain the master signing key (that key was not amongst those revoked), the risks caused by the embargo could be far worse than is currently known.