
Google Chrome doesn't do typical CRL/OCSP checks; instead, it depends on CRLSets. In simple terms, Google scoops up the CRLs from most CAs, trims them down, and delivers the resulting CRLSet to the browser via the update mechanism. Google claims this is more secure, but it mostly seems to speed up the browser artificially.
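To make the idea concrete, a CRLSet can be modelled as a local map from an issuer's public-key (SPKI) hash to the serial numbers that issuer has revoked. The sketch below is purely illustrative (the names and values are made up, and this is not Chrome's actual on-disk format); it just shows why a lookup against a pushed list needs no network round trip.

```python
# Hypothetical sketch of the CRLSet idea: a map from an issuer's
# SPKI hash to the serial numbers it has revoked. Illustrative only;
# not Chrome's real data format.

crlset = {
    "spki-hash-of-example-intermediate": {0x0561, 0x0ABC},
    "spki-hash-of-some-other-CA": {0x1001},
}

def is_revoked(issuer_spki_hash, serial):
    """Local, instant lookup -- no network round trip to the CA."""
    return serial in crlset.get(issuer_spki_hash, set())

print(is_revoked("spki-hash-of-example-intermediate", 0x0561))  # True
print(is_revoked("spki-hash-of-some-other-CA", 0x9999))         # False
```

The trade-off is visible immediately: lookups are free, but the set only knows about revocations that Google chose to push.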

Soft Fail

Much like an attacker can terminate the OCSP request, they could also drop traffic to and from the Chrome update service. Receiving the new CRLSet is prone to the same flaw: it relies on the information actually arriving at the browser. If the attacker controls the transport layer, how can we expect to receive new information?
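The soft-fail problem described above can be reduced to a few lines of decision logic. This is a sketch with invented function names, not any browser's real code; it shows that under soft fail an attacker who can drop the revocation lookup gets the certificate accepted for free.

```python
# Sketch of soft-fail vs hard-fail revocation checking. An attacker who
# can present a stolen certificate can usually also drop the OCSP/CRL
# (or CRLSet-update) traffic. Names are illustrative, not a real API.

def soft_fail_check(ocsp_response):
    # Soft fail: any network error is treated as "not revoked".
    if ocsp_response is None:          # blocked / timed out
        return "accept"
    return "reject" if ocsp_response == "revoked" else "accept"

def hard_fail_check(ocsp_response):
    # Hard fail: no definitive answer means no connection.
    if ocsp_response in (None, "tryLater"):
        return "reject"
    return "reject" if ocsp_response == "revoked" else "accept"

# The attacker simply blocks the lookup (response never arrives):
print(soft_fail_check(None))  # accept -- the check was defeated
print(hard_fail_check(None))  # reject
```

Hard fail closes the hole but turns every captive portal and flaky OCSP responder into a site outage, which is why browsers have been reluctant to adopt it.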

Incomplete

In light of Heartbleed, and the proof that private keys can be retrieved, the number of certificate revocations taking place is likely to explode. Continually expanding the CRLSet to incorporate all of these new revocations seems like a cumbersome process. Unless Chrome receives only a delta of the list, the file could become quite large. And since Google trims the list, and not all CAs are included, the question of how complete the list can ever be only gets worse.

Delay

Once a revocation request is processed by a CA, their CRL and OCSP responses will reflect it. Without knowing exactly what the lag is between a revocation and an updated CRLSet reaching the browser, it's hard to say just how bad this will be. If attackers gain access to a private key, they are likely to use it as soon as possible, so the browser needs to know about the revocation at the earliest possible opportunity. The best way to do that seems to be asking the CA directly (or depending on OCSP stapling).

Performance / Privacy

Certificate revocation checks come with an inherent performance and privacy cost. The browser needs to perform a DNS lookup and a TCP round trip to the CA and wait for a response, and in doing so it discloses your IP address and the sites you visit to the CA. Whilst a CRLSet avoids both of these issues, OCSP stapling lets the host avoid them too: IIS supports OCSP stapling out of the box, and Apache and nginx both support it with minimal configuration.

Whilst the CRLSet does offer some benefits, I would have thought it better to work in tandem with CRL/OCSP checks, rather than instead of them, which is how Chrome is configured by default. The browser could use the CRLSet as a 'quick reference' for high-importance revoked certificates, and then continue with the normal checks you would expect.

GRC have published a breathless piece attacking a straw man argument: “Google tells us ... that Chrome's unique CRLSet solution provides all the protection we need”. I call it a straw man because I need only quote my own posts that GRC have read to show it.

The original CRLSets announcement contained two points. Firstly that online revocation checking doesn't work and that we were switching it off in Chrome in most cases. Secondly, that we would be using CRLSets to avoid having to do full binary updates in order to respond to emergency incidents.

In the last two paragraphs I mentioned something else. Quoting myself:

“Since we're pushing a list of revoked certificates anyway, we would like to invite CAs to contribute their revoked certificates (CRLs) to the list. We have to be mindful of size, but the vast majority of revocations happen for purely administrative reasons and can be excluded. So, if we can get the details of the more important revocations, we can improve user security. Our criteria for including revocations are:

1. The CRL must be crawlable: we must be able to fetch it over HTTP and robots.txt must not exclude GoogleBot.
2. The CRL must be valid by RFC 5280 and none of the serial numbers may be negative.
3. CRLs that cover EV certificates are taken in preference, while still considering point (4).
4. CRLs that include revocation reasons can be filtered to take less space and are preferred.”
In short, since we were pushing revocations anyway, maybe we could get some extra benefit from it. It would be better than nothing, which is what browsers otherwise have with soft-fail revocation checking.

I mentioned it again, last week (emphasis added):

“We compile daily lists of some high-value revocations and use Chrome's auto-update mechanism to push them to Chrome installations. It's called the CRLSet and it's not complete, nor big enough to cope with large numbers of revocations, but it allows us to react quickly to situations like Diginotar and ANSSI. It's certainly not perfect, but it's more than many other browsers do.”

“The original hope with CRLSets was that we could get revocations categorised into important and administrative and push only the important ones. (Administrative revocations occur when a certificate is changed with no reason to suspect that any key compromise occurred.) Sadly, that mostly hasn't happened.”

And yet, GRC managed to write pages (including cartoons!) exposing the fact that it doesn't cover many revocations and attacking Chrome for it.

They also claim that soft-fail revocation checking is effective:

The claim is that a user will download and cache a CRL while not being attacked and then be protected from a local attacker using a certificate that was included on that CRL. (I'm paraphrasing; you can search for “A typical Internet user” in their article for the original.)

There are two protocols used in online revocation checking: OCSP and CRL. The problem is that an OCSP response covers only a single certificate, and OCSP is used in preference because it's so much smaller, removing the need to download CRLs. So the user isn't expected to download and cache the CRL in the first place, and the claimed protection doesn't apply.

However, someone is clearly downloading CRLs, because CloudFlare are spending half a million dollars a month serving them. Possibly it's non-browser clients, but the bulk is probably CAPI (the library that IE, Chrome and other software on Windows typically use, although not Firefox). A very obscure fact about CAPI is that it will download a CRL once it has 50 or more OCSP responses cached for a given CA certificate. CAs know this and split their CRLs so that they don't get hit with huge bandwidth bills. But a split CRL renders the claimed protection from caching ineffective, because the old certificate for a given site is probably in a different part.
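The CAPI behaviour described above can be modelled in a few lines. This is a toy model with invented names, not Microsoft's implementation; it only illustrates the "50 cached OCSP responses per CA triggers a CRL download" threshold.

```python
# Toy model of the CAPI behaviour described above: once 50 or more OCSP
# responses are cached for one CA certificate, fall back to fetching
# that CA's full CRL. Purely illustrative.

from collections import defaultdict

CRL_FALLBACK_THRESHOLD = 50

class RevocationCache:
    def __init__(self):
        self.ocsp_responses = defaultdict(dict)  # ca -> {serial: status}
        self.crl_downloads = []                  # CAs whose CRL we fetched

    def record_ocsp(self, ca, serial, status):
        self.ocsp_responses[ca][serial] = status
        if len(self.ocsp_responses[ca]) >= CRL_FALLBACK_THRESHOLD:
            self.crl_downloads.append(ca)        # would fetch the CRL here
            self.ocsp_responses[ca].clear()

cache = RevocationCache()
for serial in range(50):
    cache.record_ocsp("ExampleCA", serial, "good")
print(cache.crl_downloads)  # ['ExampleCA']
```

A CA that splits its CRL into many small parts keeps each individual download cheap, which is exactly why the accidental caching benefit evaporates.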

So I think the claim is that doing blocking OCSP lookups is a good idea because, if you use CAPI on Windows, then you might cache 50 OCSP responses for a given CA certificate. Then you'll download and cache a CRL for a while and then, depending on whether the CA splits their CRLs, you might have some revocations cached for a site that you visit.

It seems that argument is actually for something like CRLSets (download and cache revocations) over online checking, it's just a Rube Goldberg machine to, perhaps, implement it!

So, once again, revocation doesn't work. It doesn't work in other browsers and CRLSets aren't going to cover much in Chrome, as I've always said. GRC's conclusions follow from those errors and end up predictably far from the mark.

In order to make this post a little less depressing, let's consider whether it's reasonable to aim to cover everything with CRLSets. (Which I mentioned before as well, but I'll omit the quote.) GRC quote numbers from Netcraft claiming 2.85 million revocations, although some are from certificate authorities not trusted by mainstream web browsers. I spent a few minutes harvesting CRLs from the CT log. This only includes certificates that are trusted by a reasonable number of people, and I only waited for half the log to download. From that half I added the CRLs already included in CRLSets and got 2,356 URLs after discarding LDAP ones.

I tried downloading them and got 2,164 files. I parsed them and eliminated duplicates, though my quick-and-dirty parser skipped quite a few. Nonetheless, this very casual search found 1,062 issuers and 4.2 million revocations. If they were all dropped into the CRLSet, it would take 39 MB.

So that's a ballpark figure, but we need to design for something that will last a few years. I didn't find figures on the growth of HTTPS sites, but Netcraft say that the number of web sites overall is growing at 37% a year. It seems reasonable that the number of HTTPS sites, thus certificates, thus revocations will double a couple of times in the next five years.

There's also the fact that if we subsidise revocation with the bandwidth and computers of users, then demand for revocation will likely increase. Symantec, I believe, have experimented with certificates that are issued for the maximum time (five years at the moment) but sold in single year increments. When it comes to the end of the year, they'll remind you to renew. If you do, you don't need to do anything, otherwise the certificate gets revoked. This seems like a good deal for the CA so, if we make revocation really effective, I'd expect to see a lot more of it. Perhaps factor in another doubling for that.

So we would need to scale to ~35M revocations in five years, which is ~300MB of raw data. That would need to be pushed to all users and delta updated. (I don't have numbers for the daily delta size.)
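The arithmetic behind those numbers is easy to check. Using the figures from the post (4.2 million revocations taking 39 MB, and three doublings over five years from HTTPS growth plus induced demand), the per-entry cost and the projected total come out as follows:

```python
# Back-of-envelope check of the numbers above: 4.2M revocations at 39 MB
# gives roughly 10 bytes per entry; three doublings lands near 35M
# entries and ~300 MB of raw data.

revocations_now = 4_200_000
size_now_mb = 39
bytes_per_entry = size_now_mb * 1024 * 1024 / revocations_now
print(round(bytes_per_entry, 1))       # ~9.7 bytes per revocation

projected = revocations_now * 2 ** 3   # three doublings in five years
projected_mb = projected * bytes_per_entry / (1024 * 1024)
print(projected, round(projected_mb))  # 33600000 entries, ~312 MB
```

So ~35M revocations and ~300 MB of raw data are consistent round-number readings of the same calculation.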

That would obviously take a lot of bandwidth and a lot of resources on the client. We would really need to aim at mobile as well, because that is likely to be the dominant HTTPS client in a few years, making storage and bandwidth use even more expensive.

Some probabilistic data structure might help; I've written about that before. But then you have to worry about false positives and the number of reads needed per validation. Phones might use flash for storage, but it's remarkably slow.
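A Bloom filter is the classic example of such a structure, so here is a minimal sketch (my own illustration, not any browser's design). Note the failure mode is benign for revocation: a false positive would force a fallback check on a good certificate, never silently accept a revoked one.

```python
# Minimal Bloom filter sketch: a compact probabilistic set that can
# answer "possibly revoked" / "definitely not revoked". Illustrative
# parameters, not a tuned design.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("ExampleCA:0x0561")
print("ExampleCA:0x0561" in bf)  # True
print("ExampleCA:0x9999" in bf)  # False (with overwhelming probability)
```

Even so, a filter over tens of millions of entries still costs real memory, and each membership test costs several random reads, which is exactly the flash-storage concern above.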

Also, would you rather spend those resources on revocation, or on distributing the Safe Browsing set to mobile devices? Or would you spend the memory on improving ASLR on mobile devices? Security is a big picture full of trade-offs, and it's not clear that revocation warrants all that spending.

So, if you believe that downloading and caching revocation information is the way forward, I think those are the parameters that you have to deal with.

I copy-pasted his blog post without permission but in good faith (trusting that he welcomes it). Should he or an affiliated party demand a takedown, please just erase my comment.

If browsers did proper OCSP checking, i.e. connected only to sites for which they got a definitive OCSP response (a signed yes or no, not a tryLater), it would be secure enough. But browsers don't: they mostly accept tryLater responses, except for Extended Validation certificates.

If servers all supported OCSP stapling there would be no performance impact, because the OCSP response is sent by the server during the SSL handshake. If that's not the case, the browser has to make another query, this time to a different server, just to get the OCSP response. And as the blog you mentioned points out, this might fail due to captive portals or similar network filtering.

As also mentioned in the blog: an attacker might try to block downloads of the CRLSet in order to use a revoked certificate. But they must keep this up continuously after the revocation, whereas with OCSP it is enough to interrupt access to the OCSP server only while the client tries to reach the specific website, i.e. it is enough to mount the attack inside an internet café or similar place.

As for the delay issue: CRLs issued by a CA have a lifetime (more precisely, a date by which to expect the next CRL), and OCSP responses do too, so a browser will cache these responses rather than asking every time. This might be different with OCSP stapling, but in that case the server caches the data it got from the CA and sends it inside the stapled response. OCSP stapling might actually give the fastest notification.
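The caching behaviour described here hinges on that validity window (the nextUpdate field in CRLs and OCSP responses). The sketch below is an invented illustration, not any browser's code: a cached answer is served until its expiry time, after which the client must re-query.

```python
# Sketch of response caching driven by the nextUpdate field: CRLs and
# OCSP responses both carry a validity window, so a client re-fetches
# only once the cached copy expires. Names are illustrative.

import time

class ResponseCache:
    def __init__(self):
        self.cache = {}  # key -> (status, next_update_timestamp)

    def put(self, key, status, next_update):
        self.cache[key] = (status, next_update)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(key)
        if entry and entry[1] > now:
            return entry[0]          # still valid: no network query
        return None                  # expired or missing: must re-fetch

cache = ResponseCache()
cache.put("example.com", "good", next_update=1000.0)
print(cache.get("example.com", now=500.0))   # 'good' (from cache)
print(cache.get("example.com", now=1500.0))  # None (expired, re-query)
```

This is also why revocation is never instantaneous for cached clients: a freshly revoked certificate stays "good" until the previously fetched response runs out.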

That said, I don't expect a huge time difference between OCSP queries and CRLSets, because Google pushes updates to these sets to the browser, probably with techniques similar to those already used for Safe Browsing data (differentially compressed updates).

Whilst using the CRLSet feature in Chrome is supposed to offer better performance, how does performance substitute for the security of users if there is no certainty of actual trust? A case in point was the recent CloudFlare challenge website, whose certificate was revoked. I decided to test my two main browsers to make sure I wasn't still being taken to the website in question. Chrome still took me to the website, trusting the certificate for 2-3 days. Firefox picked up the revocation almost instantly and blocked me from accessing the website with a warning. Firefox also has an option to reject a certificate outright if the OCSP check fails.

I recall reading a quote from Google in the past defending CRLSets on privacy grounds: with OCSP, CAs can see the websites you visit, even if with CRLSets we are essentially just passing that trust over to Google. I accept that point, but another issue brought up was that OCSP only works when there is no problem and fails open when there is one, i.e. soft-fail checking.

I'm not just talking about an attacker with a MITM set up on the local network. If the attacker can stop requests to the OCSP or CRL service (or even requests to update Chrome's CRLSet), it has surely all fallen apart already. In the wake of Heartbleed, where an attacker may hold a certificate's private key, the user needs a proper check of whether a certificate is valid before trusting it.

I really don't know a solution, other than using OCSP stapling and the CRLSet in tandem as a quick check. That said, I'm not an expert in this area; these are just the thoughts of an infosec university student.

Not quite sure how I've been voted negatively to that answer. Just my thoughts on the issue after commenting on Scott's blog and twitter. I'm new to this though, give me time :)
– Dal, Apr 16 '14 at 6:47