
This article was originally published on Scott Helme's blog and is reprinted here with his permission.

We have a little problem on the web right now and I can only see it becoming a larger concern as time goes by: more and more sites are obtaining certificates, vitally important documents needed to deploy HTTPS, but we have no way of protecting ourselves when things go wrong.

Certificates

We're currently seeing a bit of a gold rush for certificates on the Web as more and more sites deploy HTTPS. Beyond the obvious security and privacy benefits of HTTPS, there are quite a few reasons you might want to consider moving to a secure connection that I outline in my article, Still think you don't need HTTPS?. Commonly referred to as "SSL certificates" or "HTTPS certificates," they are being obtained at a rate we've never seen before in the history of the web. Every day I crawl the top one million sites on the Web and analyze various aspects of their security, and every six months I publish a report. You can see the reports here, but the main result to focus on right now is the adoption of HTTPS.

Not only are we continuing to deploy HTTPS, the rate at which we're doing so is increasing, too. This is what real progress looks like. The process of obtaining a certificate has become simpler over time and now, thanks to the amazing Let's Encrypt, it's also free to get them. Put simply, we send a Certificate Signing Request (CSR) to the Certificate Authority (CA), and the CA challenges us to prove our ownership of the domain. This is usually done by setting a DNS TXT record or hosting a challenge code at a random path on our domain. Once this challenge has been satisfied, the CA issues the certificate and we can then present it to visitors' browsers and get the green padlock and "HTTPS" in the address bar.
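For the curious, here's roughly what that first step looks like in code. This is only an illustrative sketch, assuming Python with the third-party cryptography package and example.com as a placeholder domain; real-world ACME clients such as Certbot wrap the whole exchange with the CA for you.

```python
# Illustrative sketch only: generate a private key and a CSR for a placeholder
# domain using the third-party "cryptography" package. ACME clients such as
# Certbot automate this whole exchange with the CA.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# The private key is the secret that must never leave our control.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The CSR names the domain we want a certificate for and is signed with our key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
    .sign(key, hashes.SHA256())
)

# This PEM blob is what gets sent to the CA; the CA then challenges us to prove
# control of the domain (DNS TXT record or a file at a known path) before issuing.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```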

The problem is when things don't go according to plan and you have a bad day.

"We've been hacked!"

Nobody ever wants to hear those words, but the sad reality is that we do—more often than any of us would like. Hackers can go after any number of things when they gain access to our servers and often one of the things they can access is our private key. The certificates we use for HTTPS are public documents—we send them to anyone that connects to our site—but the thing that stops other people using our certificate is that they don't have our private key. When a browser establishes a secure connection to a site, it checks that the server has the private key for the certificate it's trying to use, and this is why no one but us can use our certificate. If an attacker gets our private key, though, things change.

Now that an attacker has managed to obtain our private key, they can use our certificate to prove that they are us. Let's say that again: if your key is stolen, that means there is somebody on the Internet who is not you, who can prove that they are you. This is a real problem, and before you think "this will never happen to me," you should recall Heartbleed. This tiny bug in the OpenSSL library allowed attackers to steal private keys and you didn't even have to do anything wrong for it to happen. On top of this there are countless ways that private keys are exposed by accident or negligence. The simple truth is that we can lose our private keys, and when this happens, we need a way to stop an attacker from using our certificate. We need to revoke the certificate.

Revocation

In a compromise scenario, we revoke our certificate so that an attacker can't abuse it. Once a certificate is marked as "revoked," Web browsers will know not to trust it, even though the certificate itself hasn't expired and is otherwise valid. The owner has requested revocation, and no client should accept it.

Once we know we've had a compromise, we contact the CA and ask that they revoke our certificate. We need to prove ownership of the certificate in question, and once we do that, the CA will mark the certificate as revoked. Now that the certificate is revoked, we need a way of communicating this revocation to any client that might require the information. Right after the revocation, visitors' browsers don't know about it—and of course, that's a problem. There are two mechanisms that we can use to make this information available: a Certificate Revocation List (CRL) or the Online Certificate Status Protocol (OCSP).

Certificate Revocation Lists

A CRL is a really simple concept and is quite literally just a list of all certificates that a CA has marked as revoked. A client can contact the CRL Server and download a copy of the list. Armed with a copy of the list the browser can check to see if the certificate it has been provided is on that list. If the certificate is on the list, the browser now knows the certificate is bad and it shouldn't be trusted. The browser should throw an error and abandon the connection. If the certificate isn't on the list then everything is fine and the browser can continue the connection.
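To make that concrete, here is a rough sketch of what a client-side CRL check could look like, assuming Python with the third-party cryptography and requests packages. A real client would also cache the list and verify the CRL's signature against the issuing CA, both of which are skipped here.

```python
# Rough sketch of a CRL lookup; not a complete or production-grade check.
import requests
from cryptography import x509

def is_revoked_via_crl(cert: x509.Certificate) -> bool:
    # Certificates embed the URL of the CA's CRL in the CRL Distribution Points extension.
    cdp = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints).value
    crl_url = cdp[0].full_name[0].value  # first distribution point, first URL

    # Download the (potentially large) list and parse it; most CRLs are DER encoded.
    crl = x509.load_der_x509_crl(requests.get(crl_url, timeout=10).content)

    # The certificate is revoked if its serial number appears on the list.
    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None
```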

The problem with CRLs is that they contain a lot of revoked certificates from the particular CA maintaining them. Without getting into too much detail, they are broken down by each intermediate certificate a CA has, and the CA can fragment the lists into smaller chunks. Regardless of how it's broken up, though, the point I want to make remains the same: the CRL is typically not an insignificant size. The other problem is that if the client doesn't have a fresh copy of the CRL, it has to fetch one during the initial connection to your site, which can make your site look much slower than it actually is.

This doesn't sound particularly great—so how about we take a look at OCSP?

Online Certificate Status Protocol

OCSP provides a much nicer solution to the problem and has a significant advantage over the CRL approach. With OCSP, we ask the CA for the status of a single, particular certificate. This means all the CA has to do is respond with a good/revoked answer, which is considerably smaller than a CRL. Great stuff!
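As a sketch of what that looks like on the wire, the snippet below builds and sends an OCSP request, again assuming Python with the third-party cryptography and requests packages. Note that the request identifies exactly one certificate, which matters for the privacy discussion that follows; a real client would also verify the responder's signature and the response's freshness.

```python
# Rough sketch of an OCSP status check; signature and freshness validation omitted.
import requests
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes, serialization

def ocsp_status(cert: x509.Certificate, issuer: x509.Certificate) -> ocsp.OCSPCertStatus:
    # The responder URL is published inside the certificate's
    # Authority Information Access extension.
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess).value
    url = next(desc.access_location.value for desc in aia
               if desc.access_method == AuthorityInformationAccessOID.OCSP)

    # The request names one specific certificate, so the CA learns which site is being visited.
    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    resp = requests.post(url,
                         data=req.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"},
                         timeout=10)

    # GOOD, REVOKED, or UNKNOWN (raises if the responder did not return a successful response).
    return ocsp.load_der_ocsp_response(resp.content).certificate_status
```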

It is true that OCSP offers a significant performance advantage over fetching a CRL, but that performance advantage comes with a cost (don't you hate it when that happens?). The cost is a pretty significant one, too: your privacy.

When we think about what an OCSP request is—the request for the status of a very particular, single certificate—you may start to realize that you're leaking some information. When you send an OCSP request, you're basically asking the CA this:

Is the certificate for pornhub.com valid?

So, not exactly an ideal scenario. You're now advertising your browsing history to some third party that you didn't even know about, all in the name of HTTPS—which set out to give us more security and privacy. Kind of ironic, huh?

Hard fail

But wait: there's something else. I talked about the CRL and OCSP responses above, the two mechanisms a browser can use to check if a certificate is revoked. They look like this.

[Image: CRL and OCSP checks. Credit: Scott Helme]

Upon receiving the certificate, the browser will reach out to one of these services and perform the necessary query to ultimately ascertain the status of the certificate. But what if your CA is having a bad day and the infrastructure is offline? What if it looks like this?

[Image: CRL and OCSP servers down. Credit: Scott Helme]

The browser has only two choices here. It can refuse to accept the certificate because it can't check the revocation status, or it can take a risk and accept the certificate without knowing the revocation status. Both of these options come with their advantages and disadvantages. If the browser refuses to accept the certificate, then every time your CA has a bad day and its infrastructure goes offline, your site goes offline, too. If the browser continues and accepts the certificate, then it risks using a certificate that could have been stolen, exposing the user to the associated risks.

It's a tough call—but right now, today, neither of these actually happens.

Firefox users who want the browser to always perform an OCSP check and abort the connection if the OCSP server cannot be reached: go to about:config and set security.OCSP.require to true.

Caveats you should be aware of:

* If the OCSP server for a website goes down, you won't be able to connect unless you turn off this preference.
* If you connect to networks with captive portal pages that are served over HTTPS... you most likely won't be able to access that captive portal page, because the required OCSP check for the captive portal page's SSL/TLS certificate gets blocked.
* Firefox syncs this preference by default, so if you are thinking of enabling it on your home computer but disabling it on your laptop (which might connect to WiFi with captive portal pages), you'll also want to set the preference services.sync.prefs.sync.security.OCSP.require to false.

There is no additional privacy impact because, unlike Chrome, Firefox is already performing those OCSP checks anyway. If you'd rather have the complete opposite of what I described above (for example, because you're worried about your privacy), you can disable OCSP checks entirely in Firefox by setting security.OCSP.enabled to false. Note that Firefox syncs this preference as well.

Just 20% of the top sites redirect to HTTPS? That's a sad state of affairs. Surprisingly even one big certificate authority very recently (weeks) still allowed entering confidential data over plain, unencrypted HTTP in some forms.

Just 20% of the top sites redirect to HTTPS? That's a sad state of affairs. Surprisingly even one big certificate authority very recently (weeks) still allowed entering confidential data over plain, unencrypted HTTP in some forms.

When you're that big, you can't just decide one day to turn on HTTPS.

Even if technically it's just a matter of adding a few lines to a few config files, there's a massive amount of edge cases to work through—for example, want to use the latest and most secure set of TLS ciphers? It's not just a matter of using the "Modern" cipher suite line. You have to balance that against, say, your target audience's projected device usage. If you don't want to shut out people using Android 4.x, for example, prepare to wind back your cipher suite to far less secure levels and offer a bunch of TLS 1.1 ciphers. Anticipate needing to be able to serve up pages to people on WinXP and IE7? Prepare to go even further back.

It's damn hard when you're dealing with millions of visitors a day to just "turn on HTTPS." You gotta plan that shit, sometimes to ridiculous detail. And that's not even getting into the fucked up insanity that is dealing with ad networks, many of whom don't care about TLS at all.

If you think about it, how revocation checking evolved over the years is a great example of the iterative and sometimes extremely jury-rigged way in which many of the Internet standards we've come to rely on were developed.

At first there was just the CRL, but people worried that it would be too big to download frequently.

So Delta CRL was invented to allow CAs to publish smaller CRLs between the "full" ones.

Quickly they realized that this was a stupid idea that only kicked the can down the road, so OCSP was proposed to provide a simple yes/no on a given certificate instead of having to download a multi-MB CRL.

Then site operators balked at putting their livelihoods in the hands of CAs to keep up OCSP responders, so they talked browsers into not doing hard revocation check failures...

Personally I think browsers need to respect ALL revocation check mechanisms, in the order of how cost efficient they are, like this:

1. If the server presents a stapled response, respect that.
2. If there's no stapled response or the response is unknown, then contact the OCSP server directly and respect the response.
3. If 2 fails, then try to download the CRL; however, there may be little point to this, as CRL and OCSP responses are frequently served from the same domain and so are likely to be attacked together in a malicious scenario.
4. If all mechanisms fail, present a hard failure to the user, with an option to override and an explanation of the risk (see the sketch below).
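Purely to illustrate the ordering above, here is a hypothetical sketch in Python. The check_stapled, check_ocsp, and check_crl helpers are assumed stand-ins for the real mechanisms, each returning "good", "revoked", or None when that check is unavailable; no browser actually works this way today.

```python
# Hypothetical sketch of the proposed check order; not how any real browser behaves.
from typing import Callable, Optional

Check = Callable[[], Optional[str]]  # returns "good", "revoked", or None if unavailable

def revocation_status(check_stapled: Check, check_ocsp: Check, check_crl: Check) -> str:
    # 1. Prefer the OCSP response stapled into the TLS handshake by the server.
    # 2. Otherwise ask the CA's OCSP responder directly.
    # 3. Otherwise fall back to downloading the CRL.
    for check in (check_stapled, check_ocsp, check_crl):
        status = check()
        if status in ("good", "revoked"):
            return status

    # 4. Every mechanism failed: hard-fail and let the user decide whether to
    #    override once the risk has been explained.
    return "hard-fail"
```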

HPKP, CAA and CT all address the issue of rogue certificates. There really needs to be just one RFC that weaves together all these plus stapling into some coherent whole that everyone can just implement.

I'm not sure OCSP and CT will help on their own, especially in the scenario where you get hacked.

If that happens, the attacker will have your private key and the certificate (which they can get from CT logs) so they don't have to get a new certificate.

The hardest thing is to figure out you were hacked. Let's assume that's not the problem and look at what can happen.

The attacker can do several different things:

1. Use the control of your server to get a new certificate -> this will appear in CT. However, in most cases the attacker would go after a free certificate, i.e., Let's Encrypt. If they delete the so-called account key afterwards, you will not be able (or it will be very hard) to revoke your certificate (https://community.letsencrypt.org/t/how ... il/22000/5).

2. Simply steal your private key -> CT will not help; you may be able to revoke your certificate.

3. Use the server control and revoke your certificate -> you get a proper denial of service ("distributed" is not required for a complete shutdown) and your business may suffer tangible losses if it uses the website to generate revenue.

I agree that OCSP is great (although it's often built as "an API" on top of the CRL, so it doesn't provide any added security value).

CT is great, but there aren't many services that would help you use it to identify rogue certificates (especially when you automate the renewal process). We are building security alerts for our https://keychest.net (free service), so I know it is not easy at all.

I just tried https://revoked.scotthelme.co.uk/ across multiple browsers. Firefox, Edge, and IE11 all did the right thing and displayed revoked errors. Chrome and the Chromium-derived Vivaldi both acted like nothing was wrong and showed damning green locks.

I was a member of a cert signing committee at a major company that used a publicly recognized cert chain. We issued certs for certain devices scattered around the internet in physical locations that weren't very secure, so the certificates were only valid for a few days. There were tens of thousands of devices, which means that we were issuing over ten thousand publicly recognized certificates every day.

If you want to maintain your own infrastructure in house so that you aren't dependent on another company for creating up-to-date OCSP responses, good luck. It's a PITA to set up, is ridiculously expensive, and any maintenance on the HSM is going to require a minimum of like 10 people to maintain all of the required security and chain of custody (at least 2 people for each role). Oh, and you're going to want at least two of those for high availability.

If you're not maintaining your own infrastructure, then you're dependent on other people to maintain an OCSP server. And God help you if you put Must-Staple on there and the OCSP responder goes offline (which it will at some point). The article is correct in that OCSP is a great supplementary feature, but it is nowhere near as reliable as hosting a CRL file (which is literally just a hosted file).

Fun fact: for the first year or two that we operated our cert chain, it wasn't limited in scope to just our set of domains. This meant we could issue publicly recognized certs for anyone (e.g., Google.com, etc.). Updating the whole cert chain to limit the scope was a massive amount of work, but definitely good for everyone. The idea of Certificate Transparency and maintaining a publicly visible list of all issued certificates is a good thing, although in our case it would have been a bit unwieldy with the list growing by a few million each year.

Reducing validity periods for certificates is a sound idea, but the bigger issue beyond revocation to me is the fact that the oversight process for CAs appears to be completely broken (see Symantec) and the CAs themselves don't seem to care about the enormous risks of not following proper procedures and enforcing baseline requirements (again, see Symantec).

I just tried https://revoked.scotthelme.co.uk/ across multiple browsers. Firefox, Edge, and IE11 all did the right thing and displayed revoked errors. Chrome and the Chromium-derived Vivaldi both acted like nothing was wrong and showed damning green locks.

True ... just tried that. Though the AV suite I have installed (ESET) barks that the site uses an invalid certificate when going through Vivaldi. Apparently their safe web browsing feature takes care of certificate management. I am not sure if that's for better or worse given recent oopses with many AV suites.

I just tried https://revoked.scotthelme.co.uk/ across multiple browsers. Firefox, Edge, and IE11 all did the right thing and displayed revoked errors. Chrome and the Chromium-derived Vivaldi both acted like nothing was wrong and showed damning green locks.

Safari on iOS loads it just fine and shows a padlock next to the site name (implying everything is hunky dory). This is on the newest 10.3.2.

Just 20% of the top sites redirect to HTTPS? That's a sad state of affairs. Surprisingly even one big certificate authority very recently (weeks) still allowed entering confidential data over plain, unencrypted HTTP in some forms.

When you're that big, you can't just decide one day to turn on HTTPS.

Even if technically it's just a matter of adding a few lines to a few config files, there's a massive amount of edge cases to work through—for example, want to use the latest and most secure set of TLS ciphers? It's not just a matter of using the "Modern" cipher suite line. You have to balance that against, say, your target audience's projected device usage. If you don't want to shut out people using Android 4.x, for example, prepare to wind back your cipher suite to far less secure levels and offer a bunch of TLS 1.1 ciphers. Anticipate needing to be able to serve up pages to people on WinXP and IE7? Prepare to go even further back.

It's damn hard when you're dealing with millions of visitors a day to just "turn on HTTPS." You gotta plan that shit, sometimes to ridiculous detail. And that's not even getting into the fucked up insanity that is dealing with ad networks, many of whom don't care about TLS at all.

I agree that HTTPS is hard to do on a big site, you can't do it overnight, and it takes a lot of planning. But I don't really agree with the conclusion simply because HTTPS isn't a new thing. We're talking years.

Security in general is hard, and the larger the impact, the more planning is needed. But just 20% of the top million sites means someone wasn't planning ahead, or just puts compatibility for some users above the security of all the others. Or maybe is saving some money in the short term.

The good news is that statistics say more than half of all web traffic is encrypted, which sounds better.

P.S. Imagine a bank justifying leaving the banking site/app on HTTP due to the migration being too complicated or too expensive. Replace bank with online retailer, public institution, hospital, insurance company, etc. What's your first thought?

With certificate revocation, it seems like the site itself needs a way to handle two certificates and the ability to flag one as revoked. So if you need to revoke a certificate you mark it that way and if the browser reaches your site and sees two certificates with the old one showing it has been revoked, then it forces a new lookup with a hard fail.

EDIT: On second thought, I guess this wouldn't really prevent man-in-the-middle attacks since the browser would never reach the official site in the first place, and therefore never know about the revoked certificate.

-----

And on a different note, isn't it weird that revoke (with a K) becomes revocation (with a C)?

I agree with your arguments - HTTPS is hard to do on a big site, you can't do it overnight, and it takes a lot of planning. But I don't really agree with the conclusion simply because HTTPS isn't a new thing. We're talking years.

...

That's okay—you don't have to agree with it for it to be the case.

Quote:

P.S. Imagine a bank justifying leaving the banking site/app on HTTP due to the migration being too complicated or too expensive.

Not really a valid comparison, since financial institutions have to comply with explicit regulations with respect to online banking and security and don't really have a choice about it. Same with a lot of other institutions that have to provide, say, SOx-compliant audit trails.

A more apt example would be a large business without a clear business incentive to provide HTTPS support. Look, for example, at my old employer. No HTTPS on their main site anywhere, because there's no real need to. The stuff that matters to them, like their customer-facing portal where airlines can manage their aircraft, or the external-facing employee HR portal, or the SSL-VPN access page, are all HTTPS. But the main site? No reason to spend the money to do it, because it would involve a lot of validation testing and there's no obvious return.

I agree with your arguments - HTTPS is hard to do on a big site, you can't do it overnight, and it takes a lot of planning. But I don't really agree with the conclusion simply because HTTPS isn't a new thing. We're talking years.

...

That's okay—you don't have to agree with it for it to be the case.

Quote:

P.S. Imagine a bank justifying leaving the banking site/app on HTTP due to the migration being too complicated or too expensive.

Not really a valid comparison, since financial institutions have to comply with explicit regulations with respect to online banking and security and don't really have a choice about it. Same with a lot of other institutions that have to provide, say, SOx-compliant audit trails.

A more apt example would be a large business without a clear business incentive to provide HTTPS support. Look, for example, at my old employer. No HTTPS on their main site anywhere, because there's no real need to. The stuff that matters to them, like their customer-facing portal where airlines can manage their aircraft, or the external-facing employee HR portal, or the SSL-VPN access page, are all HTTPS. But the main site? No reason to spend the money to do it, because it would involve a lot of validation testing and there's no obvious return.

If a news website like Ars can have HTTPS, then Boeing can too, especially since they use it on other websites.

I agree with your arguments - HTTPS is hard to do on a big site, you can't do it overnight, and it takes a lot of planning. But I don't really agree with the conclusion simply because HTTPS isn't a new thing. We're talking years.

...

That's okay—you don't have to agree with it for it to be the case.

Quote:

P.S. Imagine a bank justifying leaving the banking site/app on HTTP due to the migration being too complicated or too expensive.

Not really a valid comparison, since financial institutions have to comply with explicit regulations with respect to online banking and security and don't really have a choice about it. Same with a lot of other institutions that have to provide, say, SOx-compliant audit trails.

A more apt example would be a large business without a clear business incentive to provide HTTPS support. Look, for example, at my old employer. No HTTPS on their main site anywhere, because there's no real need to. The stuff that matters to them, like their customer-facing portal where airlines can manage their aircraft, or the external-facing employee HR portal, or the SSL-VPN access page, are all HTTPS. But the main site? No reason to spend the money to do it, because it would involve a lot of validation testing and there's no obvious return.

I feel like you're making a point out of refuting the need for security/encryption by finding a hypothetical reason why it's acceptable even when putting users' data at risk. And my example was clearly about delicate user data being left unencrypted, not about having to encrypt generic public data before sending it to the user. I guess your arguments can be used to excuse any kind of failure of due diligence and good security practices. "It's hard, it's expensive, it takes a lot of planning".

If your site has millions of visitors, then you can most likely afford to do it right. And if you still don't do it, it's a problem.

But I'll keep your point of view in mind for any future security blunders that can be excused by the same arguments.

I like DNSSEC, but it has its own problems. Offloading certificate validation to it in its current state just seems to be too much of a putting-all-your-eggs-in-one-basket thing. There really needs to be some kind of overarching and comprehensive standard that takes care of all identity-proofing needs. However, that would all be an expensive overhaul, so we are back here in the practical hell of patching up a boat full of holes.

If your site has millions of visitors, then you can most likely afford to do it right. And if you still don't do it, it's a problem.

But I'll keep your point of view in mind for any future security blunders that can be excused by the same arguments.

"Afford" is a tricky word in these circumstances. I'm not trying to excuse laziness or justify running HTTP in an increasingly HTTPS world. I'm just providing context. It's never as simple as "just turn on HTTPS." There are always reasons why sites are lagging.

The reasons may be dumb or otherwise indefensible, but they are reasons, nonetheless. It's never as simple as "they don't want to do it because they're evil."

If your site has millions of visitors, then you can most likely afford to do it right. And if you still don't do it, it's a problem.

But I'll keep your point of view in mind for any future security blunders that can be excused by the same arguments.

"Afford" is a tricky word in these circumstances. I'm not trying to excuse laziness or justify running HTTP in an increasingly HTTPS world. I'm just providing context. It's never as simple as "just turn on HTTPS." There are always reasons why sites are lagging.

The reasons may be dumb or otherwise indefensible, but they are reasons, nonetheless. It's never as simple as "they don't want to do it because they're evil."

I don't think I ever mentioned that "turning on HTTPS is a simple matter, one click away". Is there a reason why you argue that point as if I did?

P.S. I noticed you insist on "the reason" regardless of whether it's "dumb or otherwise indefensible". Is there a point in using such reasons in any conversation when trying to excuse the result?

I like DNSSEC, but it has its own problems. Offloading certificate validation to it in its current state just seems to be too much of a putting-all-your-eggs-in-one-basket thing. There really needs to be some kind of overarching and comprehensive standard that takes care of all identity-proofing needs. However, that would all be an expensive overhaul, so we are back here in the practical hell of patching up a boat full of holes.

Honestly, the biggest problem with DNSSEC is that it's really fucking hard to understand how to implement it, especially if you're doing split-horizon DNS.

I use Cloudflare as my authoritative DNS for my sites, and fortunately they make it pretty easy: I enable DNSSEC for each domain at my registrar, and then I enable it for each domain at Cloudflare by toggling a switch. Turning it on "manually" (that is, if you're trying to enable it and set it up yourself) is seriously complicated as fuck.

Or, put another way: I've turned on long-duration HPKP for all my sites and it ultimately wasn't that difficult to do, but DNSSEC legit scares me.

I like DNSSEC, but it has its own problems. Offloading certificate validation to it in its current state just seems to be too much of a putting-all-your-eggs-in-one-basket thing. There really needs to be some kind of overarching and comprehensive standard that takes care of all identity-proofing needs. However, that would all be an expensive overhaul, so we are back here in the practical hell of patching up a boat full of holes.

The only disagreement I would have is that if we went down a rip-it-out-and-rebuild route with a single overarching standard, there is no guarantee we will see all the holes (in fact, we almost certainly won't). So I think incremental improvement is the way forward regardless of expense. Produce these standards as add-ons, see which gain adoption.

Most of the holes people found in these standards probably didn't occur to people until they were at or near implementation and someone tried to break it. If we started again we would just end up in the same situation but with significantly more effort spent to get there.

Seems like CAA records are somewhere between minimally effective and actively harmful if DNSSEC is not being used. For example, if the attacker can specify arbitrary CAA values, now your site appears invalid with its own certificate, or perhaps valid with the attacker's CA-issued certificate.

Granted, it's an extra step you have to go through if you want to impersonate a site, but one that seems low cost.
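For readers who haven't bumped into CAA before: it's just a DNS record listing which CAs may issue certificates for a domain (it constrains issuance, not browser validation), which is why its trustworthiness hinges on DNSSEC. A hedged sketch of looking one up, assuming Python with the third-party dnspython package and example.com as a placeholder domain:

```python
# Illustrative sketch: read a domain's CAA records, which tell CAs (not browsers)
# who is allowed to issue certificates for it. Requires the "dnspython" package.
import dns.resolver

def print_caa(domain: str) -> None:
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except dns.resolver.NoAnswer:
        print(f"{domain}: no CAA records (any CA may issue)")
        return
    for record in answers:
        # e.g. tag "issue" with value "letsencrypt.org" permits that CA to issue.
        print(f"{domain}: {record.flags} {record.tag.decode()} {record.value.decode()}")

print_caa("example.com")
```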

Having seen problems with this a lot over the last couple years, I have to ask how Certificate Transparency and *-staple will jive with corporate TLS proxies.

For those that haven't heard of these: your organization wants to ensure (for various reasons, some of which might be related to regulatory compliance) that all of your at-work web traffic is scanned. Secure web is great for you, but it makes it as hard for your organization to intercept as it does for an attacker to do the same. Your organization gets around this by having a proxy server act as the client, then re-encrypting a new TLS connection to your browser. Because the proxy doesn't have access to the server's private key, it uses its own, and issues its own "fake" cert for that key. The admin of your computer adds the trusted CA cert for the corporate CA that issues these fake server certs. It's all an elaborate, "acceptable," man-in-the-middle attack on secure web.

Although I understand the motivations for doing TLS proxy, it has always felt wrong. That is, I'd rather see that a site was outright blocked at work than know that the connection to my bank is being decrypted on an enterprise server that is probably less well protected than the point-of-sale consoles at Target. What happens when we adopt security technologies that come into direct conflict with the interests of corporate IT goals? Who wins this fight? What happens if corporate IT wins? So far, HSTS and cert pinning have compromise solutions for TLS proxy--solutions that can potentially be used by hackers who can pull off the scenarios listed in the graphics in this article. Do we want more compromises like this?

Having seen problems with this a lot over the last couple years, I have to ask how Certificate Transparency and *-staple will jive with corporate TLS proxies.

For those that haven't heard of these: your organization wants to ensure (for various reasons, some of which might be related to regulatory compliance) that all of your at-work web traffic is scanned. Secure web is great for you, but it makes it as hard for your organization to intercept as it does for an attacker to do the same. Your organization gets around this by having a proxy server act as the client, then re-encrypting a new TLS connection to your browser. Because the proxy doesn't have access to the server's private key, it uses its own, and issues its own "fake" cert for that key. The admin of your computer adds the trusted CA cert for the corporate CA that issues these fake server certs. It's all an elaborate, "acceptable," man-in-the-middle attack on secure web.

Although I understand the motivations for doing TLS proxy, it has always felt wrong. That is, I'd rather see that a site was outright blocked at work than know that the connection to my bank is being decrypted on an enterprise server that is probably less well protected than the point-of-sale consoles at Target. What happens when we adopt security technologies that come into direct conflict with the interests of corporate IT goals? Who wins this fight? What happens if corporate IT wins? So far, HSTS and cert pinning have compromise solutions for TLS proxy--solutions that can potentially be used by hackers who can pull off the scenarios listed in the graphics in this article. Do we want more compromises like this?

If you think POS terminals are secure, you're delusional, my friend.

I guess my statement wasn't obvious enough, so I'll make it explicit: I don't think POS terminals are secure, and I know corporate proxies are just as bad.

So it's a "one comment - one reply" kind of conversation yet you have to resort to sarcasm and name calling about this? That was really uncalled for... It was fair to simply admit that what you were refuting weren't my arguments as you suggested.

Does this by any chance have anything to do with the discussion we had last time I criticized one of your articles (yes, I'm aware this one isn't yours)? Because this headed the same way real fast.

Not consciously—I legit don't remember, and I just spent the past few minutes scanning your post history and I don't see any comments on pieces I've written (which isn't surprising—I haven't written much in the past ~12 months because I'm in more of a supervisor role now).

Quote:

To get back on topic, I think we agree that the reasons for not encrypting sensitive data may be dumb and indefensible yet some people and site owners still use them to excuse their choices.

I absolutely agree, to a point—many of the reasons being supplied are indeed dumb. But they're not all easily dismissible, and a blanket statement to that effect lacks perspective. There are legitimate reasons for large site operators to not yet offer https, even if those reasons appear silly or even malicious (like, "No capex forecast this FY for overhead time to update web infrastructure to https," or "Cost to implement far exceed hard or soft benefits derived from implementation," or "Business-mandated dependencies or rules prevent HTTPS rollout for one reason or another," or even something like, "Long-term signed agreement with outsourced tech company providing web support means we can't go HTTPS for a few more years because we signed a stupid contract").

Doesn't mean that those reasons aren't shitty and ultimately user-hostile, but they are legitimate, real reasons—they don't always signal that the company is incompetent or dumb. (Though sometimes they might, especially when dealing with SMBs without a tech staff that understands the issues—like a local business w/an online store or something similarly sized).

Just 20% of the top sites redirect to HTTPS? That's a sad state of affairs. Surprisingly even one big certificate authority very recently (weeks) still allowed entering confidential data over plain, unencrypted HTTP in some forms.

When you're that big, you can't just decide one day to turn on HTTPS.

Even if technically it's just a matter of adding a few lines to a few config files, there's a massive amount of edge cases to work through—for example, want to use the latest and most secure set of TLS ciphers? It's not just a matter of using the "Modern" cipher suite line. You have to balance that against, say, your target audience's projected device usage. If you don't want to shut out people using Android 4.x, for example, prepare to wind back your cipher suite to far less secure levels and offer a bunch of TLS 1.1 ciphers. Anticipate needing to be able to serve up pages to people on WinXP and IE7? Prepare to go even further back.

It's damn hard when you're dealing with millions of visitors a day to just "turn on HTTPS." You gotta plan that shit, sometimes to ridiculous detail. And that's not even getting into the fucked up insanity that is dealing with ad networks, many of whom don't care about TLS at all.

To add to this - Nick Craver from Stack Overflow posted about their transition, which took 4 years of planning and working through issues before flipping the switch (which only took 2 months).

Doesn't mean that those reasons aren't shitty and ultimately user-hostile, but they are legitimate, real reasons—they don't always signal that the company is incompetent or dumb. (Though sometimes they might, especially when dealing with SMBs without a tech staff that understands the issues—like a local business w/an online store or something similarly sized).

It makes me very, very happy that I can no longer reply to this with "Pot, Kettle." Thanks for bringing HTTPS to everyone rather than just subscribers.

The primary issue you guys had was your ad networks, wasn't it?

Also, beyond that, did having HTTPS set up for subscribers make it easier to transition to HTTPS for all? Like, other than working out what to do for ads, did the process boil down to flipping a switch once the ads problem was sorted out?