8 Answers

SSL/TLS has a slight overhead. When Google switched Gmail to HTTPS (from an optional feature to the default setting), they found that CPU overhead rose by about 1% and network overhead by about 2%; see this text for details. However, this is for Gmail, which consists of private, dynamic, non-shared data, hosted on Google's systems, which are accessible from everywhere with very low latency. The main effects of HTTPS, as compared to HTTP, are:

Connection initiation requires some extra network round trips. Since such connections are "kept alive" and reused whenever possible, this extra latency is negligible when a given site is used with repeated interactions (as is typical with Gmail); systems which serve mostly static content may find the network overhead non-negligible.
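That connection reuse is easy to observe. The sketch below (a self-contained illustration using Python's standard library, with a local plain-HTTP server standing in for a real site) shows that two successive requests over one kept-alive connection reuse the same underlying socket, so the setup cost, which for HTTPS would include the TLS handshake, is paid only once:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def socket_reused(port):
    """Send two GETs over one connection; report whether the same
    underlying socket served both (i.e., keep-alive worked)."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    conn.getresponse().read()
    first_socket = conn.sock
    conn.request("GET", "/")  # no new TCP (or TLS) setup needed here
    conn.getresponse().read()
    reused = conn.sock is first_socket
    conn.close()
    return reused

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(socket_reused(server.server_address[1]))  # True: one connection, two requests
    server.shutdown()
```

With HTTPS the saving is larger than with plain HTTP, since each fresh connection would also pay the handshake round trips described above.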

Proxy servers cannot cache pages served over HTTPS (they do not even see those pages). Again, there is nothing static to cache with Gmail, but that is a very specific context. ISPs are extremely fond of caching, since network bandwidth is their lifeblood.

HTTPS is HTTP within SSL/TLS. During the TLS handshake, the server shows its certificate, which must designate the intended server name -- and this occurs before the HTTP request itself is sent to the server. This prevents virtual hosting, unless a TLS extension known as Server Name Indication is used; this requires support from the client. In particular, Internet Explorer does not support Server Name Indication on Windows XP (IE 7.0 and later support it, but only on Vista and Win7). Given the current market share of desktop systems using WinXP, one cannot assume that "everybody" supports Server Name Indication. Instead, HTTPS servers must use one IP address per server name; the current status of IPv6 deployment and the IPv4 address shortage make this a problem.
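For illustration, here is how SNI looks from the client side in Python's standard `ssl` module (a sketch; the hostname is whatever site you connect to). The `server_hostname` argument is what places the target name in the ClientHello, before any HTTP bytes are exchanged:

```python
import socket
import ssl

# Whether the underlying TLS library supports the SNI extension;
# true on any modern build.
print(ssl.HAS_SNI)

def connect_with_sni(hostname, port=443):
    """Open a TLS connection, sending `hostname` in the ClientHello's
    SNI extension so the server can pick the right certificate."""
    context = ssl.create_default_context()
    raw_sock = socket.create_connection((hostname, port), timeout=10)
    # server_hostname both fills in the SNI field and is used to
    # verify the certificate the server sends back.
    return context.wrap_socket(raw_sock, server_hostname=hostname)
```

Calling `connect_with_sni("example.com")` would return a verified TLS socket; a client without SNI support simply omits the name, forcing the server to guess which certificate to present.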

HTTPS is "more secure" than HTTP in the following sense: the data is authenticated as coming from a named server, and the transfer is confidential with regard to whoever may eavesdrop on the line. This is a security model which does not make sense in many situations: for instance, when you watch a video on YouTube, you do not really care whether the video really comes from youtube.com or from some hacker who (courteously) sends you the video you wish to see; and that video is public data anyway, so confidentiality is of low relevance here.

Also, authentication is only done relative to the server's certificate, which comes from a Certification Authority that the client browser knows of. Certificates are not free, since the point of certificates is that they involve physical identification of the certificate owner by the CA (I am not saying that commercial CAs price their certificates fairly; but even the fairest of CAs, operated by the Buddha himself, would still have to charge a fee for a certificate). Commercial CAs would just love HTTPS to be "the default". Moreover, it is not clear whether the PKI model embodied by X.509 certificates is really what is needed "by default" for the Internet at large (in particular when it comes to the relationship between certificates and the DNS -- some argue that a server certificate should be issued by the registrar when the domain is created).

In many enterprise networks, HTTPS means that the data cannot be seen by eavesdroppers, and that category includes all kinds of content filters and antivirus software. Making HTTPS the default would make many system administrators very unhappy.

All of these are reasons why HTTPS is not necessarily a good idea as default protocol for the Web. However, they are not the reason why HTTPS is not, currently, the default protocol for the Web; HTTPS is not the default simply because HTTP was there first.

+1 because static content takes a hit. Consider a site with multiple <img> tags. Since each image may be fetched over a separate connection, the computational overhead of, say, a 2048-bit or 4096-bit key exchange can become fairly significant on mobile platforms, where increased CPU usage quickly drains the battery; users might avoid your site because, for one reason or another, they think it drains their battery. This is of course one merit of hosting non-confidential static content on a separate server (without SSL).
– Puddingfox Jun 6 '11 at 1:17

@puddingfox: I used to work as a PM for mobile development, and as I see it, if security is required then the current favorite is a platform-specific native app, which uses an encrypted, compressed, data-only REST/SOAP API to talk to the web service. The SSL setup uses e.g. a 2048-bit key (remember, after setup you switch to e.g. 128-bit AES), but CPU load for encryption is not a real problem. The real problems with in-browser apps over SSL are network latency and buggy JavaScript support in mobile browsers. That's why native apps are preferred.
– Jesper Mortensen Jun 6 '11 at 7:19


@puddingfox: a browser will open only a few SSL connections to a given site (e.g. 3 or 4), using HTTP keep-alive to send several successive HTTP requests within each. Moreover, only the very first one needs the asymmetric key exchange (the one where a 2048-bit-or-so key is involved); the other connections will use the SSL/TLS "resume session" feature (faster handshake, fewer messages, symmetric crypto only). Finally, most SSL/TLS servers use an RSA key, and the client's part of RSA is cheap (the server incurs most of the cost in RSA).
– Thomas Pornin Jun 6 '11 at 10:58


There is also a lot of other highly cacheable content that could benefit from digital signatures adding integrity and authentication without necessarily needing confidentiality, and which doesn't have the problem of needing to be a seekable stream, like videos do. It would be really good if there were a standardized mechanism in HTTP to add digital signatures which browsers can check to ensure integrity, while still allowing the signed content to be cached by intermediate caches. That would be an option in the middle between HTTPS and HTTP.
– Lie Ryan Feb 10 at 8:02

I agree, there's no reason to leave anything overlooked. One thing I've wondered about, which relates to your point on public information: are the URLs viewed during HTTPS transactions to one or more websites from a single IP distinguishable? For example, say the following are HTTPS URLs to two websites visited by one IP over 5 minutes: "A.com/1", "A.com/2", "A.com/3", "B.com/1", "B.com/2". Would monitoring of packets reveal nothing, reveal only that the IP had visited "A.com" and "B.com", reveal a complete list of all HTTPS URLs visited, reveal only the IPs of "A.com" and "B.com", or something else?
– blunders Jun 6 '11 at 0:15

A telephone number on a "brochure-ware site" might be completely public information. That doesn't mean being able to spoof a telephone number on that website isn't a security risk.
– Christian Sep 9 '14 at 15:49

@JesperMortensen, you are confusing "doesn't need security" with "doesn't need privacy". Yes, the data is public, but that doesn't mean that we can skip HTTPS (a MITM can simply return a bogus, misleading page).
– Pacerier Feb 16 at 9:55

+2 @D.W.: Guess it makes sense that it'd be hard to have a universal method for sharing application specific implementations; meaning you couldn't just make a copy of the keys and the configs to render the decryption built into the debugger. And yes, I'd seen that listed in @Thomas answer, which was posted after @Mike Scott; left it as is, since I'd wondered if @Mike Scott had another reason. Cheers!
– blunders Jun 6 '11 at 18:47

@D.W. Windows XP isn't supported either. Its end of support in April 2014 removes the last major barrier to SNI deployment.
– tepples Feb 19 at 0:40

@tepples, my comment was written 4 years ago. Obviously, the situation with Windows XP has evolved over time. (And, it's not relevant whether Windows XP is supported or not. What's relevant is how many users use Windows XP -- i.e., how many users won't be able to use your website if you turn on SNI. As I wrote in my comment, see Thomas Pornin's answer.)
– D.W. Feb 19 at 0:51

@D.W. I was pointing out that your comment had since become outdated. I apologize for not being more explicit about this. But even if you had a lot of XP users, what's the point in making a secure connection to a machine that is itself likely to be compromised with a zero-day?
– tepples Feb 19 at 0:57

@tepples, it is not that simple. What is the value of that user to the site owner? The answer will depend on the site. They don't want to lose that user, and might be able to monetize that user (e.g., that user might want to buy something from them, so they might lose a sale if they block such users). I'm largely sympathetic to your argument, but again, what I'm saying is that this is not as cut-and-dried as you are trying to make it out to be. If you want to persuade others, it's important to understand the considerations that drive their decisions.
– D.W. Feb 19 at 0:59

No one has pointed out a clear problem that arises from using http as default, rather than https.

Hardly anyone bothers to write the full URI (including the https:// scheme) when requesting a resource that needs to be encrypted and/or signed for various purposes.

Take Gmail as an example: when users visit gmail.com, they are in fact using the default protocol, HTTP, rather than HTTPS. At this point security has already failed in scenarios where an adversary is intercepting the traffic. Why? Because it is possible to strip HTTPS links out of the pages and point them to HTTP (an "SSL stripping" attack).

If HTTPS were in fact the default protocol, your sessions with websites would have been protected.
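One server-side countermeasure that has since been standardized is HTTP Strict Transport Security (HSTS, RFC 6797): once a browser has seen the header over a genuine HTTPS connection, it refuses plain-HTTP access to that host for the stated period, which defeats later stripping attacks (the very first visit remains exposed). A minimal sketch in Python's standard library, with a local plain server standing in for the HTTPS side:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Instruct browsers to use HTTPS for this host (and subdomains)
        # for the next year. Browsers only honor this when it is
        # received over a valid HTTPS connection.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def fetch_hsts_header(port):
    """Return the Strict-Transport-Security header the server sends."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    response = conn.getresponse()
    response.read()
    value = response.getheader("Strict-Transport-Security")
    conn.close()
    return value

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), HSTSHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(fetch_hsts_header(server.server_address[1]))
    server.shutdown()
```

This mitigates, but does not fully solve, the default-protocol problem: the header only helps on visits after the first one made over untampered HTTPS.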

As to the question of why HTTP is chosen over HTTPS, the various answers above apply. The world is just not ready for widespread use of encryption yet.

Thomas has already written an excellent answer, but I thought I'd offer a couple more reasons why HTTPS is not more widely used...

Not needed. As Jesper's answer insightfully points out, "the majority of information on the web doesn't need security". However, the growing amount of tracking by search engines, ad companies, country-level internet filters and other "Big Brother" programs (e.g. the NSA) is raising the need for greater privacy measures.

Speed. HTTPS often feels slow because of the extra round trips and the extra requests for certificate revocation information (CRLs, OCSP, etc.). Thankfully SPDY (created by Google, and now supported in all major browsers) and some interesting work from CloudFlare are helping shift this.

Price of certificates. Most certificate authorities charge exorbitant amounts of money (hundreds of dollars) for a certificate. Thankfully there are free options, but these don't get as much publicity (not sure why).

Price of IP addresses. Until IPv6 becomes widespread, websites will face the rising scarcity (and thus cost) of IPv4 addresses. SNI is making it possible to use multiple certificates on a single IP address, but with no SNI support in Windows XP or IE 6, most sites still need a dedicated IP address to provide SSL.

The server administrator needs to purchase and renew certificates for each domain. The process of installing a certificate is time-consuming: for obvious security reasons you can't simply re-use the same certificate across domains or just generate one yourself; you have to have each certificate signed by a certificate authority.

Additional IP addresses required

Name-based hosting (virtual hosting) is a widely used practice allowing many different domains to be hosted from the same IP address. With HTTPS, however, this mechanism breaks down, because the server would need to know which certificate to present during the SSL/TLS handshake, before the HTTP request containing the hostname can be decrypted.

IP addresses (at least until IPv6 becomes viable on its own) are becoming a scarce resource.

There is a workaround to this, supported by newer browsers and servers, called Server Name Indication (SNI), which effectively implements name-based hosting at the SSL/TLS layer. We're finally getting to the point where most browsers support this, so this limitation is going to be much less of an issue going forward.
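On the server side, Python's standard `ssl` module exposes this mechanism through `SSLContext.sni_callback` (Python 3.7+): the callback runs during the handshake, receiving the hostname from the client's SNI extension before any certificate is sent, so the server can switch to the context holding that site's certificate. A sketch with hypothetical hostnames (in a real deployment each per-site context would load its certificate with `load_cert_chain`):

```python
import ssl

# One TLS context per hosted site; in a real deployment each would
# call load_cert_chain() with that site's certificate and key.
site_contexts = {
    "a.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    "b.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
}

listening_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def select_certificate(ssl_socket, server_name, original_context):
    """Invoked mid-handshake with the SNI hostname the client sent;
    swapping the socket's context changes which certificate is
    presented. Unknown names fall through to the default context."""
    if server_name in site_contexts:
        ssl_socket.context = site_contexts[server_name]

listening_context.sni_callback = select_certificate
```

Wrapping the listening socket with `listening_context` then serves the right certificate per name on a single IP address, which is exactly the name-based hosting behavior HTTP has always had.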

Slow pace of change

HTTPS was a modification to an existing protocol, HTTP, which was already very much entrenched before many people were starting to think about security. Once a technology has become established and as ubiquitous as HTTP was, it can take a very long time for the world to move to its successor, even if the reasons for changing are compelling.

Hi fjw - this is a very old question, with good answers, and an accepted answer. Your answer doesn't bring anything new - I'd encourage you to contribute by answering new questions.
– Rory Alsop♦ Sep 9 '14 at 13:29

Do you really classify this as a low-quality answer? I'm really sorry, I came upon this as I was researching something and felt that the existing answers failed to address what I thought were some important points. I'd strongly disagree that this is a "low-quality answer".
– fjw Sep 9 '14 at 14:32

It was flagged to me by community members, and I agree with them. My comment above explains my point.
– Rory Alsop♦ Sep 9 '14 at 14:46