Month: May 2011

Over a year ago, when we first announced SPDY, the most prominent criticism was the requirement for SSL. We've become so accustomed to our insecure HTTP protocol that making the leap to safety now seems daunting.

Since that time, many things have happened, and it is now more clear than ever that SSL isn't an option; it's a matter of life and death.

SSL was invented primarily to protect our online banking and online purchasing needs. It has served us fairly well, and almost all banks and ecommerce sites use SSL today. What nobody ever expected was that SSL would eventually become the underpinnings of safety for political dissidents.

Last year, when China was caught hacking into Google, were they trying to steal money? Two months ago, when Comodo was attacked (and suspected the Iranian government), did the attackers forge the identities of Bank of America, Wells Fargo, or Goldman Sachs? No. They went after Twitter, Gmail, and Facebook: social networking sites. Sites where you'd find information about dissidents, not cash. To say that these attacks were used to seek and destroy dissidents would be speculation at this point. But these incidents show that the potential is there, and that governmental intelligence agencies are using these approaches. And of course, it is a well-known fact that the Egyptian government turned off the Internet entirely so that its citizens could not easily organize.

The Internet is now a key communication mechanism for all of us. Unfortunately, users can't differentiate safe from unsafe on the web. They rely on computer professionals like us to make it safe. When we tell them that the entire Web is built upon an unsecured protocol, most are aghast. How could we let this happen?

As we look forward, this trend will increase. What will Egypt, Libya, Iran, China, or Afghanistan do to seek out and kill those who oppose them? What does the US government do?

Fortunately, major social networking sites like Facebook and Twitter have already figured this out. They are now providing SSL-only versions of their services which should help quite a bit.

So does all this sound a little dramatic? Maybe so, and I apologize if this sounds a bit paranoid. I'm not a crypto-head, I swear. But these incidents are real, and the potential is real so long as our Internet remains insecure. The only answer is to secure *everything* we do on the net. Even seemingly benign communications must be encrypted, because users don't know the difference, and for some of them, their lives are at stake.

Last year, Google's Adam Langley, Nagendra Modadugu, and Bodo Moeller proposed SSL False Start, a client-side-only change to reduce one round trip from the SSL handshake.

We implemented SSL False Start in Chrome 9, and the results are stunning, yielding a significant decrease in overall SSL connection setup times. SSL False Start reduces the latency of an SSL handshake by 30% [1]. That is a big number. And reducing the cost of an SSL handshake is critical as more and more content providers move to SSL.
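The round-trip arithmetic behind a number like that can be sketched with a toy model (my own simplification, not Chrome's actual measurement: it counts network round trips only, assumes a full non-resumed handshake, and ignores crypto CPU time):

```python
# Back-of-the-envelope model of TLS connection setup time, counting
# network round trips only (ignores CPU time for the crypto itself).
def tls_setup_time(rtt_ms, false_start=False):
    tcp_handshake = rtt_ms          # SYN / SYN-ACK; client may then send
    # A full TLS handshake costs 2 round trips before application data can
    # flow; with False Start the client sends data after just 1, alongside
    # its Finished message.
    tls_handshake = rtt_ms * (1 if false_start else 2)
    return tcp_handshake + tls_handshake

rtt = 100                                # a hypothetical 100ms round trip
full = tls_setup_time(rtt)               # 300ms: 1 RTT TCP + 2 RTT TLS
fast = tls_setup_time(rtt, True)         # 200ms: 1 RTT TCP + 1 RTT TLS
saving = (full - fast) / full            # ~33% of total setup time
```

Under this model, shaving one round trip cuts roughly a third off the SYN-to-handshake-complete time, which lands in the same neighborhood as the measured 30%.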

Our biggest concern with implementing SSL False Start was backward compatibility. Although nothing in the SSL specification (also known as TLS) explicitly prohibits False Start, there was no easy way to know whether it would work with all sites. Speed is great, but if it breaks the user experience for even a small fraction of users, the optimization is non-deployable.

To answer this question, we compiled a list of all known HTTPS websites from the Google index and tested SSL False Start against all of them. The result of that test was encouraging: 94.6% succeeded, 5% timed out, and 0.4% failed. The sites that timed out were verified to be sites that are no longer running, so we could ignore them.

To investigate the failing sites, we implemented a more robust check to understand how the failures occurred. We disregarded sites that failed due to certificate errors or problems unrelated to False Start. Finally, we discovered that the sites which didn't support False Start were using only a handful of SSL vendors. We reported the problem to the vendors; most have fixed it already, and the others have fixes in progress. The result is that today we have a small, manageable list of domains where SSL False Start doesn't work, and we've added them to a list within Chrome where we simply won't use False Start. This list is public and posted in the Chromium source code. We are actively working to shrink the list and ultimately remove it.

All of this represents a tremendous amount of work with a material gain for Chrome SSL users. We hope that the data will be confirmed by other browser vendors and adopted more widely.

[1] Measured as the time between the initial TCP SYN packet and the end of the TLS handshake.

When you turn on IPv6 in your operating system, the web is going to get slower for you. There are several reasons for this, but today I'm talking about DNS. Every DNS lookup is 2-3x slower with IPv6.

What is the Problem?

The problem is that current implementations of DNS do both an IPv4 and an IPv6 lookup in serial rather than in parallel. This is operating as per the specification.

The â€œAâ€ request there was the IPv4 lookup, and it took 39ms. The â€œAAAAâ€ request is the IPv6 lookup, and it took 40ms. So, prior to turning IPv6 on, your DNS resolution finished in 39ms. Thanks to your IPv6 address, it will now take 79ms, even if the server does not support IPv6! Amazon does not advertise an IPv6 result, so this is purely wasted time.

Now you might think that 40ms doesn't seem too bad, right? But remember that this happens for every host you look up. And of course, Amazon's webpage uses many sub-domain hosts. In the web page above, I saw more of these shenanigans.

The average website spans 8 domains. A few milliseconds here and a few milliseconds there, and pretty soon we're talking about seconds.

The point is that DNS performance is key to web performance! And in these 3 examples, we've slowed down DNS by 102%, 567%, and 75% respectively. I'm not picking out isolated cases. Try it yourself; this is "normal" with IPv6.

What About Linux?

Basically all operating systems do the same thing. The common API for doing these lookups is getaddrinfo(), and it is used by all major browsers. It does both the IPv4 and IPv6 lookups, sorts the results, and returns them to the application.
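Python exposes the same OS API, so the behavior is easy to poke at. A small sketch (using "localhost" so it resolves even without a network; a real site would trigger the serial A/AAAA queries discussed above):

```python
import socket

# getaddrinfo() with AF_UNSPEC asks for both IPv4 and IPv6 results in one
# blocking call; under the hood the resolver issues the A and AAAA queries
# (serially, on the systems discussed above) and returns the sorted merge.
results = socket.getaddrinfo("localhost", 80, socket.AF_UNSPEC,
                             socket.SOCK_STREAM)

for family, socktype, proto, canonname, sockaddr in results:
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```

Note that the caller gets no say in how the two lookups are scheduled; the application sees only the final merged list, which is exactly why the fix can't easily come from above the API.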

In this particular case, we only wasted 75ms, when the actual request would have completed in 18ms (416% slower).

But It's Even Worse

I wish I could say that DNS latencies were just twice as slow. But it's actually worse than that. Because IPv6 is not commonly used, the results of IPv6 lookups are not heavily cached at DNS servers the way IPv4 addresses are. This means an IPv6 lookup is more likely to need multiple DNS hops to complete a resolution.

As a result, it's not just that we're doing two lookups instead of one. It's that we're doing two lookups and the second lookup is fundamentally slower than the first.

Surely Someone Noticed This Before?

This has been noticed before. Unfortunately, with nobody using IPv6, the slowness was an acceptable risk. Application vendors (namely browser vendors) have said, "this isn't our problem; host resolution is the OS's job."

The net result is that everyone knows about this flaw. But nobody fixed it. (Thank goodness for DNS Prefetching!)

Just last year, the "Happy Eyeballs" RFC was introduced, which proposes a workaround to this problem by racing connections against each other. This is an obvious idea, of course. I don't know of anyone implementing this yet, but it is certainly something we're talking about on the Chrome team.
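The racing idea can be sketched in a few lines (a simplified illustration of the concept, not the RFC algorithm: the real proposal staggers attempts, giving IPv6 a short head start, rather than firing everything at once; the function name and structure here are my own):

```python
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def racing_connect(candidates, timeout=5.0):
    """Try every (family, sockaddr) candidate concurrently and return the
    first socket that connects, closing any slower winners. A toy sketch
    of the Happy Eyeballs idea."""
    def attempt(family, addr):
        sock = socket.socket(family, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
        except OSError:
            sock.close()
            raise
        return sock

    winner, last_error = None, None
    with ThreadPoolExecutor(max_workers=len(candidates)) as pool:
        futures = [pool.submit(attempt, fam, addr) for fam, addr in candidates]
        for future in as_completed(futures):
            try:
                sock = future.result()
            except OSError as exc:
                last_error = exc
                continue
            if winner is None:
                winner = sock      # first success wins the race
            else:
                sock.close()       # a slower duplicate; discard it
    if winner is None:
        raise last_error or OSError("all candidates failed")
    return winner
```

One simplification worth noting: this sketch waits for all attempts to finish before returning, whereas a production implementation would hand back the winner immediately and clean up the losers in the background.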

What is The Operating System's Job?

All browsers, be it Chrome or Firefox or IE, use the operating system to do DNS lookups. Observers often ask, "why doesn't Chrome (or Firefox, or IE) have its own asynchronous DNS resolver?" The problem is that every operating system, from Windows to Linux to Mac, has multiple name-resolution techniques, and resolving hostnames in the browser requires using them all, based on the user's operating system configuration. Examples of non-DNS resolvers include NetBIOS/WINS, /etc/hosts files, and Yellow Pages. If the browser simply bypassed all of these and exclusively used DNS, some users would be completely broken.

If these DNS problems had been fixed at the OS layer, I wouldn't be writing this blog post right now. But I don't really blame Windows or Linux; nobody was turning this stuff on. Why should they shine a part of their product that nobody uses?

Lesson Learned: Only The Apps Can 'Pull' Protocol Changes

IPv6 deployment has been going on for over 10 years now, and there is no end in sight. The current plan (like IPv6's "break the Internet" day) is the same plan we've been following for 10 years. When do we admit that the current plan to deploy IPv6 is simply never going to work?

A lesson learned from SPDY is that only the applications can drive protocol changes. The OSes, bless their hearts, can only do so much and move too slowly to push new protocols. There is an inevitable chicken-and-egg problem where applications won't use a protocol because OS support is not there, and OSes won't optimize it because the applications aren't there.

The only solution is at the Application Layer: the browser. But that may be the best news of all, because it means that we can fix this! More to come…

Over the next few days, I'm going to be posting some blogs about IPv6 performance.

The results are pretty grim, but my aim is not to make everyone despair.

There is a solution, and I think I can see light at the end of the tunnel. My theory is that we've been approaching IPv6 deployment incorrectly for the last 10 years. It seems obvious now, but it wasn't obvious 10 years ago, and things have certainly changed which enable this new mechanism.