
CWmike writes "Hackers may have obtained more than 200 digital certificates from a Dutch company after breaking into its network, including ones for Mozilla, Yahoo and the Tor Project: a considerably higher number than DigiNotar acknowledged earlier this week, when it said 'several dozen' certificates had been acquired by attackers. Among the certificates acquired in the mid-July hack of DigiNotar, Van de Looy's source said, were ones valid for mozilla.com, yahoo.com and torproject.org, the site of a system that lets people connect to the Web anonymously. Mozilla confirmed that a certificate for its add-on site had been obtained by the DigiNotar attackers. 'DigiNotar informed us that they issued fraudulent certs for addons.mozilla.org in July, and revoked them within a few days of issue,' Johnathan Nightingale, director of Firefox development, said Wednesday. Van de Looy's number is similar to the tally of certificates that Google has blacklisted in Chrome."

All the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program: it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and fully embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible that a trust system used to verify domain names in 2011 is completely DNS-agnostic, and was in fact designed in the 1980s when people were still manually passing /etc/hosts files around! There could be many solutions better than good old X.509, but in reality the only reasonable direction we can choose today is the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.
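For readers who haven't seen those record types in the wild, a signed zone looks roughly like this. All names, timestamps, key tags and key/signature material below are illustrative placeholders, not real data:

```
; illustrative excerpt of a DNSSEC-signed zone (all key data elided)
example.com.      3600 IN DNSKEY 257 3 8 ( AwEAAc... )   ; KSK, hashed into the parent's DS
example.com.      3600 IN DNSKEY 256 3 8 ( AwEAAb... )   ; ZSK, signs the zone data
www.example.com.  3600 IN A      192.0.2.1
www.example.com.  3600 IN RRSIG  A 8 3 3600 ( 20111001000000 20110901000000
                                              12345 example.com. b2dF... )
www.example.com.  3600 IN NSEC   example.com. A RRSIG NSEC  ; authenticated denial
; and in the parent (com.) zone, the DS record that chains trust toward the root:
example.com.      3600 IN DS     12345 8 2 ( 49FD46E6... )
```

NSEC3/NSEC3PARAM replace NSEC where zone enumeration is a concern.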

In response to the DigiNotar incident, some people are removing the DigiNotar root CA from their computers. This way your computer will not trust _anything_ signed by DigiNotar.

With DNSSEC, if the people in charge of your DNS have an incident (hackers, malpractice or otherwise) which changes the "certificate" (for lack of a better word) for your website, you are stuck. There is no "root" certificate that you can remove.

It's true that in DNSSEC, it is potentially a huge logistical nightmare if a party entitled to 'bless' public keys as yours is persistently compromised (basically, an unprecedented display of ongoing incompetence to always be open to hijack).

However, on the flip side, the relatively long lifetime of certificates incurs the nastiness of CRLs and some amount of faith that CRLs make it to the right place at the right time. If a DNSSEC authority is compromised and fixes it within an hour, all signs of the compromise are gone as soon as the zone is re-signed and cached records expire.

I have to retract a bit; OCSP actually does a fair amount to address the issue of revocation, it's just that it isn't universal. Someone will have to explain how DNSSEC would be fundamentally any better than X.509 with ubiquitous OCSP.

But it does not provide a basis for trust because mere holdership of a DNS domain does not mean you are trustworthy.

It does not provide a cure for cancer either...

See my CAA and ESRV Internet drafts.

Maybe you should try submitting a paper to The Lancet too. With a little bit of luck you might catch a peer-reviewer off guard, amazed to notice that cancer starts with CA.

No seriously, the "trust" in "trusted third party" has nothing to do with the trust that you put in the second party (i.e. the server or business with which you are communicating). It has everything to do with the trust you put in the third party (the certification agency): that it correctly does its job (only giving certificates to properly identified entities and appropriately securing its infrastructure so that hackers and spies can't just "help themselves"). The threat that SSL certificates are supposed to protect against is wiretapping, not rogue businesses. I'm sure all of those shady banks that failed in the 2008/2009 crisis had valid SSL certificates, and rightly so!

Let me explain. I have been working on Web security now for 19 years. I was present at the original meetings at which the SSL system was proposed, I convened several of the relevant meetings.

At no time was government wiretap a design consideration for SSL. NEVER. In fact the claim is totally ridiculous, since at the time we were fighting a running battle with the FBI and the NSA, who were trying to stop us using strong cryptography at all. The original SSL design was limited to 40 bits and was very cle

Yes, I do realize that I'm making fun of somebody who believes that the purpose of SSL is to give you warm and fuzzy feelings about online shopping and that CAs certify businesses' good standing and integrity, who then acts all astonished when somebody dares to point out that SSL is supposed to protect against interception/manipulation of data in transit, and who then bolsters his position by pointing out his presence at some conferences where SSL was a subject.

Imagine that you are visiting Slashdot; wouldn't it be better to use SSL than en clair if the site supports it? Wouldn't it be better to have encryption with a duff cert than no encryption at all?

Why do you think it would be better to use SSL with a 'duff' cert than an unencrypted transport? What does it protect against? Most of those in a position to read your traffic would also be in a position to mount a MITM attack.

Anyone that can launch a man-in-the-middle attack can block OCSP verification requests, and for non-EV certificates they can do so in a way that causes all browsers to accept the certificate as valid with no kind of warning whatsoever.

But from what I've read, I'd consider that a failing of how OCSP is *implemented*, not how it is architected. First pass was returning 'tryLater', which looked innocuous enough, and lo and behold it was treated as innocuous (I would think that should count as a validation error if implemented properly). Second time around it was shown that most browsers would even treat a 500 error as 'close enough'. In all these cases, the problem is not that OCSP is incapable; it's that the browsers erred on the side of letting the connection proceed.

I just did that. If a certificate authority has been compromised and arbitrary signed certificates are being shown in the wild, it's probably best to deauthorize them and let them issue a new root CA and certificates to everyone once they have the person who leaked private info beaten within an inch of their life, and then 2.54cm more.
Firefox: Tools -> Options -> Advanced pane -> Encryption tab. "View Certificates". "Authorities" tab. Select DigiNotar Root CA, Edit Trust, De-select check boxes.

No idea what he's talking about... a cursory Google search [google.com] reveals that provision has been made to revoke certificates, so presumably he's making some larger point about something else....Damned if I know what that is, though. But I do follow the Convergence project and am testing out the browser plug-in... If Moxie reads Slashdot and sees this: Would you care to expound on the quoted Tweet?

You can revoke your own certificate. You cannot revoke someone else's certificate. With a web browser you can remove someone else's root certificate which means that your trust problems with that person go away.

Moxie means that with the current CA system you have several CAs, while with DNSSEC you in a way have just one CA. So if one CA messes up under the current system, you can remove that one CA. But with DNSSEC you can't remove that one CA, because it is the only one.

Oh, I know what he is trying to do, but he has no clue what the threat model is.

The threat model in this case is a well funded state actor that might well be facing a full on revolution within the next 12 months. It does not matter how convergence might perform, there is not going to be time to deploy it before we need to reinforce the CA system. [Yes I work for a CA]

I think it most likely that, with the fall of Gaddafi, we will see the Arab Spring spreading to Syria. We are certainly going to see a major ratcheting up of repressive measures in Syria and Iran. Iran knows that if Syria falls, their regime will be the next to come under pressure. In many ways the Iranian regime is less stable than some that have already fallen. There are multiple power centers in the system. One of the ways the system can collapse is the Polish model: the people of Poland didn't have a revolution, they just voted the Communist party out of existence. If the Iranian regime ever allows a fair vote, the same will happen there.

Anyone think that we will have DNSSEC deployed on a widespread scale in the next 12 months? I don't and I am one of the biggest supporters of DNSSEC in the industry. DNSSEC is going to be the biggest new commercial opportunity for CAs since EV. Running DNSSEC is not trivial, running it badly has bad consequences, the cost of outsourced management of DNSSEC is going to be much less than a DNS training course ($1000/day plus travel) but rather more than a DV SSL certificate ($6 for the cheapest).

The other issue I see with Convergence is that it falls into the category of 'security schemes that works if we can trust everyone in a peer to peer network'.

Wikipedia manages a fair degree of accuracy, but does anyone think that they really get up to 99% accurate? Until this year the CA system had had three major breaches, all of which were trapped and closed really quickly, plus about the same number of probes by security researchers kicking the tires. Until the DigiNotar incident, anyone who had revocation checking in place was 100% safe as far as we are aware; not a bad record really.

There is a population of about 1 million certs out there; even 200 bad ones would still mean 99.98% accuracy.
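A quick sanity check on those figures (both numbers are the commenter's estimates, not measured data):

```python
# Back-of-envelope: what fraction of a ~1M-cert population is unaffected
# if 200 fraudulent certs exist? (Both figures are assumptions from above.)
total_certs = 1_000_000
fraudulent = 200
unaffected_pct = 100 * (1 - fraudulent / total_certs)
print(f"{unaffected_pct:.2f}%")  # 99.98%
```

Of course, "accuracy" here says nothing about *which* 0.02% you happen to hit.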

Running a CA is really boring work. Not something I would actually do personally. To check someone's business credentials etc takes some time and effort. It is definitely the sort of thing that you want a completer-finisher type to be doing. Definitely not someone like me and for 95% of slashdot readers, probably not someone like you either.

The weak points in the SSL system are not the validation of certs by CAs. They are (in order): (1) the fact that SSL is optional, (2) the fact that the user is left to check for use of SSL, and (3) the fact that low-assurance certificates with a minimal degree of validation still result in the padlock display.

The weak point being exploited by Iran is the braindead fact that the Web requires users to provide their passwords to the Web site every time they log in. I proposed a mechanism in 1993 that does not require a CA at all and avoids that. Had RSA been unencumbered I would have adopted an approach similar to EKE that was stronger than DIGEST but again did not require a cert.

Certs are designed to allow users to decide who they can share their credit card numbers with. That is a LOW degree of risk because the transaction is insured. Certs are not intended to tell people it is safe to share their password with a site because it is NEVER safe to do that.

1. Actually, revocation checking does not solve the problem; at least, if someone had the CA private key, they could generate certificates with the same serial numbers as existing ones. OCSP/revocation lists only check serial numbers, not names, which makes them not useful for all possible problems.

2. I also think DNSSEC can be useful; it would be really helpful for the domain-owner to be able to make it clear that his website uses cert X and cert Y (which implies CA A and CA B), and not any other cert or CA. Deployment of DNSSEC is very slow though at the moment.
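Publishing "this domain uses cert X" in signed DNS is the idea behind the IETF's DANE/TLSA work. A sketch of such a record (the digest below is a made-up placeholder, not a real certificate hash):

```
; hypothetical TLSA record pinning the certificate used on port 443 of www.example.com
; usage 3 = domain-issued cert, selector 0 = full cert, matching type 1 = SHA-256
_443._tcp.www.example.com. IN TLSA 3 0 1 ( d2abde240d7cd3ee6b4b28c54df034b9... )
```

A validating client could then reject any certificate, however well CA-signed, that doesn't match the published association.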

1. Actually, revocation checking does not solve the problem; at least, if someone had the CA private key, they could generate certificates with the same serial numbers as existing ones. OCSP/revocation lists only check serial numbers, not names, which makes them not useful for all possible problems.

The only control in the system is to revoke the CA root and that can be effected on Windows by issuing a new CTL (as happened to revoke the Diginotar root) that drops the compromised root. The other browsers have similar mechanisms.

2. I also think DNSSEC can be useful; it would be really helpful for the domain-owner to be able to make it clear that his website uses cert X and cert Y (which implies CA A and CA B), and not any other cert or CA. Deployment of DNSSEC is very slow though at the moment.

The war could well be over by the time DNSSEC is deployed. The Iranian group has developed new attacks and dramatically escalated the sophistication of their attacks. The time between attacks has been weeks

There is a Secure Remote Password protocol [wikipedia.org] that allows you to authenticate both the server to you and yourself to the server at the same time. There's also RFC 5054 [ietf.org], aimed at incorporating it into TLS, unfortunately without any client support AFAIK.
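The core trick of SRP can be sketched end-to-end in a few lines. This is a toy demonstration of the math only: the modulus below is a small made-up placeholder, not one of the vetted RFC 5054 groups, and a real implementation additionally needs the RFC's exact encodings, padding, and explicit proof-of-key messages.

```python
import hashlib
import secrets

# Toy SRP walkthrough: the client proves it knows the password and both
# sides derive the same shared secret, yet neither the password nor the
# verifier ever crosses the wire.
N = 0xEEAF0AB9ADB38DD69C33F80AFA8FC5E8  # placeholder modulus, NOT secure
g = 2

def H(*parts) -> int:
    """Hash arbitrary values to an integer (simplified, not the RFC encoding)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return int(h.hexdigest(), 16)

# Enrollment: the server stores only (salt, verifier), never the password.
password = "correct horse battery staple"
salt = secrets.token_hex(8)
x = H(salt, password)
v = pow(g, x, N)          # password verifier
k = H(N, g)               # multiplier parameter

# Handshake: only A and B are exchanged.
a = secrets.randbelow(N); A = pow(g, a, N)                 # client ephemeral
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N   # server ephemeral
u = H(A, B)                                                # scrambling parameter

# The client re-derives x from the password; the server uses only v.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)

print(S_client == S_server)  # True: same secret derived on both sides
```

The equality holds because B minus k·v leaves g^b, so both sides compute g^(b·(a+u·x)) mod N; an eavesdropper sees only A and B.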

With HTTPS, the people you trust are the few hundred CAs your browser is configured to trust. It's way too many, and your vulnerability to them is a logical OR: if any one CA fails, you are vulnerable. It's a fucked up system. However, at least you can remove DigiNotar from your browser's trusted list.

With DNSSEC, you trust the root. They are your "trust anchor". And you get no choice about it.

Each system is fucked up.
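The "logical OR" point is easy to quantify. With n independently trusted CAs, each with some per-period compromise probability p, the chance that at least one fails grows fast (the numbers below are purely illustrative assumptions, not measured rates):

```python
# Chance that at least one of n trusted CAs fails, if each fails
# independently with probability p. Illustrative numbers only.
def any_ca_fails(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

print(f"{any_ca_fails(5, 0.01):.3f}")    # a handful of CAs: about 5%
print(f"{any_ca_fails(650, 0.01):.3f}")  # a browser-scale trust store: near certainty
```

The security of the whole system is governed by its weakest trusted member, not its strongest.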

This relates to the concept of "trust agility" that Marlinspike discussed. He wrote:

Worse, far from providing increased trust agility, DNSSEC-based systems actually provide reduced trust agility. As unrealistic as it might be, I or a browser vendor do at least have the option of removing VeriSign from the trusted CA database, even if it would break authenticity with some large percentage of sites. With DNSSEC, there is no action that I or a browser vendor could take which would change the fact that VeriSign controls the .com TLD.

If we sign up to trust these people, we're expecting them to behave well forever, without any incentives at all to keep them from misbehaving. The closer you look at this process, the more reminiscent of the CA system it becomes. Sites create certificates, those certificates are signed by some marginal third party, and then clients have to accept those signatures without ever having the option to choose or revise who we trust. Sound familiar?

I think Marlinspike was smart to start with defining the problem. And now, with Convergence, he's also trying to address it. Check it out. (And check out Perspectives. Perspectives is the project he based Convergence on.)

In light of the realisation that security can never be absolute, can we not have some sort of 'trust voting'? You pick a random set of trust mechanisms from a fixed pool available on your machine and the internet, and you let them decide whether or not something or someone can be trusted, and to what degree. You could even have a slide-bar at the bottom of your browser.

"If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC." -- Moxie Marlinspike

That's a fundamental mischaracterization of DNSSEC. You can't realistically remove individual DNS registrars now, but they all feed into registries, and you generally either trust those registries or you don't. If you don't, then you don't go to those TLDs. More to the point, this argument incorrectly tries to model the security of all websites

Unfortunately the registrar system is rather less trustworthy than you imagine. We have not to date encountered an outright criminal CA. We do however know of several ICANN registrars that are run by criminal gangs.

The back end security model of the DNS system is not at all good. While in theory a domain can be 'locked' there is no document that explains how locking is achieved at the various registry back ends. A domain that is not locked or one that is fraudulently unlocked is easily compromised.

The part of the CA system that has been the target of recent attacks is the reseller networks and smaller CAs. These are exactly the same sort of company that runs a registrar. In fact many registrars are turning to CAs to run their DNSSEC infrastructure since the smaller ones do not have the technical ability to do it in house. In fact a typical registrar is a pure marketing organization with all technical functions outsourced.

There are today about 20 active CAs and another 100 or so affiliates with separate brands. In contrast there are over a thousand ICANN registrars.

Sure, there are some advantages to incorporating DNSSEC into the security model. But to improve security it should be an additional check, not a replacement. Today DNSSEC is an untried infrastructure; it is grafted onto a legacy infrastructure that is very old and complex, and where security is an afterthought.

The current breach is not even an SSL validation failure. The attacker obtained the certificate by bypassing the SSL validation system entirely and applying for an S/MIME certificate that did not have an EKU (which it should). That makes it a technical exploit rather than a validation issue. DNSSEC is a new code base and a very complicated one. Anyone who tells you that it is not going to have similar technical issues is a snake-oil salesman.

We do however know of several ICANN registrars that are run by criminal gangs.

Ultimately, it doesn't matter. Bugs notwithstanding, DNSSEC is still provably no less secure than CA-based certs because if you can compromise DNSSEC, you can also change the contact info on a domain and get any CA to give you a cert for the domain. Therefore, even if every CA were above board, you still cannot trust the CAs (even the best CAs) to protect you from someone compromising the domain itself.

Both the current CA model and Moxie Marlinspike's proposed notary system already implicitly trust DNS registration data. When someone requests example.com, how does the CA (or notary) know that the requestor owns it? In a few rare cases, the CA (or notary) knows the requestor personally, but that's rare, and doesn't scale to the Internet. In the normal case, the CA (or notary) has no information other than DNS. The CA (or notary) will either check that the requestor's contact data matches the DNS whois

. . . and I appear to have misunderstood Moxie's system. It does not implicitly trust DNS at all. It does rely on SSL certs not to change, which I find odd, given that SSL certs tend to be replaced (either shortly before expiration or after a private key compromise.)

Any new SSL cert validation scheme needs to interoperate with the CA-based SSL cert validation scheme. The existing SSL cert validation scheme does have cert expiration, needed or not. Your bank is not going to switch to a self-signed perpetual cert when the overwhelming majority of its customers are relying on CA-based schemes that will claim the bank's site is unsafe. So certs are going to keep changing. For a new cert validation scheme to succeed, it must be able to accommodate this during the transition.

All the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program: it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and fully embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible that a trust system used to verify domain names in 2011 is completely DNS-agnostic, and was in fact designed in the 1980s when people were still manually passing /etc/hosts files around! There could be many solutions better than good old X.509, but in reality the only reasonable direction we can choose today is the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.

While a complete rework of the certificate signers is a good idea, and implementing DNSSEC widely is also a good idea, we're still going to need TLS, and that means certificates. DNSSEC doesn't provide any mechanism for encrypting the data stream after you've securely established you're talking to the right server, nor should it; that's not its job.

So DNSSEC protects against DNS poisoning and some MITM attacks; but there are plenty of other attacks, such as a fake gateway, passive listening to wifi traffic, AR

While I agree about DNSSEC as a possible solution, a lot of people probably don't, because DNSSEC is too much like a single-CA model, and many don't like that. I personally probably do trust the root to get it right; I just don't trust all the TLDs.

Also, you mention 8.8.8.8 and 8.8.4.4, but they don't have support for some of the basic parts of DNSSEC yet.

Which means if I have a working DNSSEC-setup on my end that can verify the DNSSEC key material I can't use them to check what Google gives me.

If you keep a spare house key on your front porch in a metal box marked spare house key you'll be robbed sooner or later. This is not a flaw of the lock and key security.

The public key system is working fine. What is not working so well is the trust model. The current system is fatally flawed in that security depends on not one of the many, many CAs failing. It doesn't matter if you choose a high-quality CA to sign a cert for your site; your users can still be fooled by a backwater CA you've never heard of and wouldn't trust to guard a dime.

No, it is a flaw of lock and key security because the human who operates the system is as much a part of it as the lock and the key. Though you might not blame the lock or key for that specific incident it is still a problem.

This is not to say that it is convenient or even possible to design a flawless security system, only that there is progress to be made in keeping one's eyes open as to the ways they can go wrong.

No it isn't, that is how we got locksets that stay locked, ones that relock on a timer, ones that can remind you if you've left them unlocked.
It is why my vehicle dings at me if I leave the keys in the ignition (or the lights on).
If I forget to lock my office door and someone steals my stapler, no big deal really. If I forget to lock the front door of the nursery and someone steals all the babies, big deal. So we consider the human element in designing the locks for those two places, despite the fact that the underlying lock mechanism is the same.

Not really no. We didn't get a single one of those things by blaming the lock and key system for human errors. We got them by recognizing that the lock and key work fine but the human needs a reminder or 3.

Had we instead blamed the lock and key, we'd have some other security system or (by now) we'd have just given up.

You are a moron if you think this is a cryptography problem. The cryptography works fine. The chain of trust is the problem. If you were to drive over to Google with a thumb drive, get their certificate, install it, and use just that for your encrypted connection to google.com, you are fine. The problem is that you are trusting some unknown third party for Google's keys. If you think that is a good plan for security, I have a plankpad errrr..... ipad I can sell you tonight in a parking lot.

There are indicators that the number is a lot more than just 200 certs - some speculate that there were log wipes involved, which means we can expect a very, very large number.

If that's true, it's wonderful that some browsers are blocking a bogus *.google.com cert. It'll be useless, however, if the attackers generated 50,000 OTHER *.google.com certs, along with multiple certs for world+dog.com.

As to the impact of this CA's incompetence, it's pretty evil when you con

"A side-by-side review comparing code contained in an upcoming version of Chrome increased the number of secure sockets layer certificates hardcoded in the browser's blacklist by 247. A comment accompanying the additions said: 'Bad DigiNotar leaf certificates for non-Google sites.'"

Regardless of what's happening now, some reactions were to blacklist individual certs. And likely some still are.

CAs are done, stick a fork in 'em. Just generate your own certs. A CA cert only increases your chance of getting MITM'ed (since you don't have sole control over distribution), and without a big store of certs in one place, they'll be harder to steal.

BTW, after giving Convergence a try, I still prefer Perspectives. Convergence's anonymization feature is nice but it uses a mechanism that installs a local CA, causing CertPatrol to go nuts, and it doesn't offer anywhere near the level of customization of Perspectives.

No one can "steal" your existing certificate unless they also steal your web server's private key. A CA can issue a fraudulent certificate for your site, but anyone can generate a self-signed certificate for your site as well. How does a CA make MITM attacks more likely? How many users visit your web site for the first time on an untrusted wireless network or in a country where the government may want to feed them a fake certificate anyway? Propagation and widespread trust of self-signed certs is what w

No one can "steal" your existing certificate unless they also steal your web server's private key. A CA can issue a fraudulent certificate for your site, but anyone can generate a self-signed certificate for your site as well. How does a CA make MITM attacks more likely?

Because the CA issues certs that the browser trusts. That fraudulent cert will work A-OK and give no warnings to users. It's as good as the one already installed on the web server. Because the CA will cave to government requests and is a nice juicy target for black hats, this cert is more likely to be issued fraudulently than if the keys are stored on a flash drive in your desk drawer.

How many users visit your web site for the first time on an untrusted wireless network or in a country where the government may want to feed them a fake certificate anyway?

AKA the "prayer method" - pray you don't get MITMed the first time. It would be very shortsighted, at best, to rely on this.

This is what the network notary system (Perspectives / Convergence plugin) is for, take a look at it. When you visit a site, it compares the cert your browser receives with what other computers around the world are seeing at the same time.

From the Perspectives project: Perspectives is a new approach to helping computers communicate securely on the Internet. With Perspectives, public "network notary" servers regularly monitor the SSL certificates used by 100,000s+ websites to help your browser
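The comparison logic behind such a notary scheme is simple to sketch. The fingerprints and quorum threshold below are made-up illustrations; real Perspectives also weighs how long each notary has observed a given key:

```python
# Accept a certificate only if enough independent vantage points
# ("notaries") report seeing the same fingerprint for the site.
def notary_verdict(seen_fp: str, notary_fps: list, quorum: float = 0.75) -> bool:
    if not notary_fps:
        return False                          # no notary data: fail closed
    agree = sum(fp == seen_fp for fp in notary_fps)
    return agree / len(notary_fps) >= quorum

observed = ["ab:cd:12", "ab:cd:12", "ab:cd:12", "ff:00:99"]  # 3 of 4 agree
print(notary_verdict("ab:cd:12", observed))  # True: the world sees what you see
print(notary_verdict("de:ad:be", observed))  # False: only you see this cert, likely MITM
```

The point is that a MITM attacker would have to intercept not just your connection but also the connections of geographically scattered notaries to go unnoticed.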

No, the government of Iran generated a key and a CSR for *.google.com, had DigiNotar sign them (not sure whether this was a social or a technical hack), and then deployed them inline for a MITM attack on the residents of the area their organization controls.

They have the key and the cert. They didn't get Google's key or cert, they have their own.

I wonder how many dissidents have died because of this sloppy CA and the reliance on the CA system.

Seriously, I wonder what percentage of software actually checks the CRLs. It's extra steps that are annoying to code, and I bet a lot of programmers just skipped them.

So even though these certs have been or will be revoked, that doesn't mean you're safe. If the programmers of the software you're using were lazy and didn't code the extra steps to fetch the CRLs (or maybe the CRL itself is inaccessible for some reason), then you're screwed.

This is one of those things that programmers would never have considered until it actually happened.

The problem is what they do when they can't reach that data. All the browsers out there now simply fail silently and go to the site anyway.

For some reason this is seen as a problem with CAs and not the broken browsers. But from the browser providers' perspective, 99% of their customers are really interested in getting to sites reliably and without fuss, and less than 1% are dissidents whose lives might be threatened.
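The policy difference being criticized here fits in a few lines. This is a simplified sketch; real OCSP (RFC 2560) responses carry more states and signatures than this:

```python
# Soft-fail: treat an unreachable or indeterminate revocation check as OK.
# Hard-fail: refuse the connection unless revocation status is known-good.
def accept_connection(ocsp_status: str, hard_fail: bool) -> bool:
    if ocsp_status == "good":
        return True
    if ocsp_status == "revoked":
        return False
    # "unreachable", "tryLater", HTTP 500 ... all land here
    return not hard_fail

print(accept_connection("unreachable", hard_fail=False))  # True: what browsers ship
print(accept_connection("unreachable", hard_fail=True))   # False: what the 1% would need
```

Soft-fail is exactly what lets a MITM neutralize revocation by simply blocking the OCSP responder.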

How long until we collectively admit that centralized SSL certs are actually causing more problems than they solve?

The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time. That narrows the attack vector to active MITM attacks where Mallory can intercept your first connection (if they want to actually get your data) and every connection thereafter (if they don't want to be noticed). It makes widespread surveillance impossible (they'd be noticed) and targeted attacks very unlikely to succeed.

You can even add a CA to that model: have the first-time dialog be "[ nobody | <CA name> ] certifies that <this key> belongs to <this site>. Does that sound OK to you? (looks good) (hell no)". In other words, just make self-signed certs less scary and CA-signed certs more scary... which would accurately reflect the actual level of security you're getting: both are probably OK, and one is a little more certified but certainly not golden. Only pop up the BIG SCARY WARNING when the cert changes, even if it's signed by the CA.
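A minimal sketch of that SSH-style pinning flow. The store format and cert bytes are stand-ins; a real client would fingerprint the DER-encoded certificate received during the TLS handshake:

```python
import hashlib

# Trust-on-first-use: pin a site's key fingerprint on first contact,
# then warn loudly if it ever changes, CA signature or not.
def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

def tofu_check(host: str, cert_der: bytes, store: dict) -> str:
    fp = fingerprint(cert_der)
    known = store.get(host)
    if known is None:
        store[host] = fp      # first visit: pin it (optionally ask the user)
        return "first-use"
    if known == fp:
        return "match"        # same key as every previous visit
    return "CHANGED"          # big scary warning, even if CA-signed

store = {}
print(tofu_check("slashdot.org", b"cert-bytes-v1", store))  # first-use
print(tofu_check("slashdot.org", b"cert-bytes-v1", store))  # match
print(tofu_check("slashdot.org", b"cert-bytes-v2", store))  # CHANGED
```

The open design question, raised further down the thread, is distinguishing a legitimate key rotation from an attack when "CHANGED" fires.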

Except in the case of countries like Iran and China, where they can easily do a permanent MITM attack for webmail providers if they wanted to for the first and any subsequent connections. I'm not saying the current system is perfect or even good, but your alternative is worse in many respects.

1) You have an optional CA. Sites like Gmail will get a cert. That (usually) covers the initial connection.

2) Pop a huge warning if the cert changes, even if the CA signs the new one. This is the really important part.

3) Even if all of the network is subverted AND all of the CAs are subverted, the MITM is still detected when people VPN to another country, or dial out, or travel, or the fingerprints are manually verified... you can't guarantee the availability of encryption, but you can always detect widespread surveillance.

2) Pop a huge warning if the cert changes, even if the CA signs the new one. This is the really important part.

There are firefox extensions which do just that: Certificate Patrol [mozilla.org]. If a certificate changes without reason (i.e. while still being far from expiration), a warning pops up.

However, the problem with this approach is, again, the stupidity of some webmail operators and ignorance of how certificates work.

Some large webmail providers (Yahoo, Google, ...) who have load-balanced banks of servers sometimes have half of their servers with one certificate, and the other half with another (possibly even signed by another CA).

How long until we collectively admit that centralized SSL certs are actually causing more problems than they solve?

A bit harsh, but the model has some issues due to obsolete objectives.

The SSH model works great

Only if you habitually visit the same place does it provide any significant reduction in risk, so if you see a product you want on an as-yet unvisited storefront, you have zero protection against MITM. Maybe they can't keep it up for days, but a single visit is sufficient to mess you up. If the server's key is compromised? You are pretty well screwed, as not fixing the problem *looks* more secure than if they fixed it (e.g. the big Debian weak-key debacle).

OCSP forces the CA to know and remember all certificates it has issued; this way, even if the private key was used to create a rogue sub-CA, it won't be valid, as OCSP won't give the "yes, it's valid" response. The DigiNotar case shows this is a real problem: they don't know which or how many certificates have been created. With the OCSP validation model, those certificates are useless.

So I'm a little naive on the underlying tech, but shouldn't OCSP, being a whitelist, require only that valid certs be remembered, while revoked/invalid certs could be forgotten? Or is it that DigiNotar didn't retain database info even for valid certs, which would be another damning indication of their general ineptitude as a CA?

It does this for a very good architectural reason, it's a whitelist mechanism: you need to get a response saying "yes, we did issue this cert, it's valid", not the blacklist mechanism of CRL "we did revoke those certificates".

Nope, OCSP is blacklist just like CRLs. In fact it was specifically designed to be 100% bug-compatible with CRLs. Attempts were made to turn it into a whitelist mechanism, but were ruled out of scope by the standards group involved, because CRLs were the way you do things, and making OCSP a whitelist would make it more functional than a CRL, which wasn't permitted. So it's just as broken as CRLs, while providing the (dangerous) illusion that it isn't.
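The distinction the posts above are arguing over fits in a few lines. This is a toy model, with serial numbers standing in for certificates and made-up function names:

```python
# Blacklist (CRL-style): a cert is good unless the CA lists it
# as revoked. Whitelist (the proposed model): a cert is good
# only if the CA remembers issuing it.

def blacklist_ok(serial, revoked):
    return serial not in revoked

def whitelist_ok(serial, issued):
    return serial in issued
```

A rogue cert minted with a stolen CA key is unknown to the CA, so it sails through a blacklist check (the CA can't revoke what it doesn't know exists) but fails a whitelist check. That's exactly why the DigiNotar "we don't know what was issued" situation is fatal under the blacklist model.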

Only if you habitually visit the same place does it provide any significant reduction in risk, so if you see a product you want on an as-yet unvisited storefront, you have zero protection against MITM.

Your home ISP isn't going to MITM you. They want to keep you as a customer. The coffee shop you visit isn't going to. They don't want to get prosecuted for credit card fraud. Same thing with a hotel network.

I'd expect it from random TOR exit nodes, but why would you use an anonymity network to shop with a credit card?

Passive eavesdropping is a real concern, but what's an example of a network where people would engage in active MITM attacks *hoping* that someone will try to send secret information over it?

The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time.

You can (theoretically) do that too with SSL. Connect to the site, and you get a certificate warning. Instead of blindly accepting the certificate, read the SHA1 fingerprint (which is displayed in the dialog box asking for acceptance), and call the helpdesk of the business with which you're interacting to verify that it is the correct one. After accepting the certificate once, your browser now has it in its cache, and it knows forever (or rather: until expiration) that you're connecting to the same site as you did the first time.
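For the curious, the colon-separated fingerprint that dialog shows is nothing magic; it's just a hash of the certificate's DER bytes. A sketch (SHA1 because that's what 2011-era dialogs displayed; the same idea works with SHA-256):

```python
import hashlib

def sha1_fingerprint(der_bytes):
    # Hash the raw DER-encoded certificate and format the digest
    # as the familiar AA:BB:CC:... string browsers display.
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

Reading that string to the helpdesk over the phone is an out-of-band channel, which is what actually defeats the MITM.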

> The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time.

It works great for sites with one up to a few certs. There are distributed (Akamai-style) sites out there that will present you a different cert with almost every page refresh! PITA... Normally hidden, since your browser will "trust" all of them anyway, but a pin-the-key model would warn on every refresh.

Let's say you were hoping to insinuate yourself unnoticed into traffic destined for a particular site - for the sake of argument, let's use the Tor project. What would be the best way to do this without someone suspecting you had a specific target in mind? Stealing a couple hundred certs all at once, only one of which is related to your project, comes immediately to mind.

It's not like similar approaches haven't been taken before, even in the non-digital world. I seem to recall that was one explanation John Muhammad gave for the DC Sniper attacks - he really wanted to kill his ex-wife, and hoped killing a bunch of other people would keep suspicion from him.

As an example, I downloaded the cert Google offered up on encrypted.google.com. It had no OCSP specified, but it did have a CRL specified. Now, is Firefox checking the CRL embedded in the cert or not? I think it is, but the only way to confirm would be to actually try to hit a site with a revoked cert. FF by default is configured to only use OCSP if the cert has the information embedded in it, which this Google cert didn't. Which doesn't give me the warm fuzzy about other certs, either. I checked a few others. The Verisign sites, including RapidSSL, have an OCSP URI embedded. So that's better.

My point is that the whole revocation business remains slipshod and saying that you 'revoked' the certificate doesn't mean a hell of a lot in reality.

Just delete DigiNotar from your trusted CAs. Honestly I was just going to wait for the revocation lists like everybody else but seeing the scope of this now I think they've earned the right to be fired from the Internet forever.

This is still a manual process, which is great now, a month after 200 certificates were actively used in the wild, and also great for those who read Slashdot. I've removed it too, but what about the rest of the family who don't read Slashdot?

Yes, this is why browsers are also shipping updates with certs explicitly distrusted.... and why the fact that DigiNotar did not tell browsers about the problem a month and a half ago when it happened is such a huge issue.

Can't see anyone having posted this, but Mozilla have instructions [mozilla.com] on how to remove DigiNotar as a trusted CA in your Firefox. I'm sure other browsers have similar processes.

I also note they've just released [mozilla.com] a new Firefox (and Thunderbird) version that has removed the CA entirely - good response:

Because the extent of the mis-issuance is not clear, we are releasing new versions of Firefox for desktop (3.6.21, 6.0.1, 7, 8, and 9) and mobile (6.0.1, 7, 8, and 9), Thunderbird (3.1.13, and 6.0.1) and SeaMonkey (2.3.2) shortly that will revoke trust in the DigiNotar root and protect users from this attack. We encourage all users to keep their software up-to-date by regularly applying security updates. Users can also manually disable the DigiNotar root through the Firefox preferences.

Yeah, and why not hunt down the people who run the company and execute them? Send their ashes into the sun, burn down their house, and remove their names from all public record and make it illegal to ever speak of them again? /sarc

My general desire for Freedom of Speech requires me to object to that last part of the proposed punishment. After all, it'd be useful to be able to mention them when using them as an example of what not to do.

Any enterprise running Firefox just got sent two huge signals. The first: get the hell off Firefox. Pronto. The second: reconsider CAs for trust models, if your security needs exceed the limitations of that model. I agree. It can be a big hole for certain applications, but nothing is airtight. It depends upon the security requirements.

In the case of the first signal, Mozilla told the enterprise customer this point blank weeks ago. They're not interested in supporting enterprise customers, but enterprise customers are only now getting the message.

The problem here is that any CA that is in my list of root certificates is able to create a valid certificate for, say, www.google.com, and that some CA can be tricked into giving someone other than Google such a certificate. That is not enough, though: the attacker also has to redirect traffic that should go to www.google.com to their own server. The whole thing is mostly dangerous because _many_ people go to www.google.com in the first place; the same attack against, say, my homepage would have very little potential for damage.
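The structural flaw described above is easy to see in a toy model: validation asks only "does the chain end at *some* trusted root?", never "is this root supposed to vouch for this domain?". All names below are illustrative:

```python
# Toy trust store: any root in the set can vouch for ANY domain.
TRUSTED_ROOTS = {"VeriSign", "DigiNotar", "SomeOtherCA"}

def chain_validates(cert):
    # Real validation also checks signatures, expiry, etc.; the
    # point is that the issuer/domain binding is never examined.
    return cert["issuer"] in TRUSTED_ROOTS

legit = {"domain": "www.google.com", "issuer": "VeriSign"}
rogue = {"domain": "www.google.com", "issuer": "DigiNotar"}
```

Both certs validate, because nothing in the model ties google.com to any particular CA; one compromised root compromises every domain on the Web.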