Tuesday, November 17, 2009

Google has been pushing hard to make everything on the web faster, and it has set a new benchmark for how web applications are designed and how well they perform. The company has been very innovative in this area: a lot of initiatives have come out of it to improve website performance, like http://code.google.com/speed/, and Google has recently open-sourced some tools and tutorials to help developers build fast websites.

One of their most interesting attempts is a proposal to replace the HTTP protocol itself. But why? Below is the answer given by Google:

“

Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.

Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.

Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, a typical header size of 700-800 bytes is common. For modems or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency to send requests.

Redundant headers. In addition, several headers are repeatedly sent across requests on the same channel. However, headers such as the User-Agent, Host, and Accept* are generally static and do not need to be resent.

Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format.”
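The first point above — one outstanding request per connection — is exactly why browsers open six parallel connections per domain. A minimal Python sketch makes the effect visible; it simulates the 500 ms server delay with a sleep rather than issuing real HTTP requests, so the numbers are illustrative only:

```python
import time
from concurrent.futures import ThreadPoolExecutor

SERVER_DELAY = 0.5  # the 500 ms server delay mentioned in the quote
NUM_RESOURCES = 6

def fetch(resource_id):
    # Stand-in for one HTTP request; the sleep simulates server latency.
    time.sleep(SERVER_DELAY)
    return resource_id

# One connection: requests are serialized FIFO, so the delays add up.
start = time.time()
for i in range(NUM_RESOURCES):
    fetch(i)
serial_elapsed = time.time() - start

# Six "connections": requests proceed in parallel, as browsers do today.
start = time.time()
with ThreadPoolExecutor(max_workers=6) as pool:
    list(pool.map(fetch, range(NUM_RESOURCES)))
parallel_elapsed = time.time() - start

print(f"serial: {serial_elapsed:.1f}s, parallel: {parallel_elapsed:.1f}s")
```

Six serialized fetches cost about 3 seconds, while six parallel ones cost roughly the delay of a single fetch — which is the latency browsers claw back by multiplying connections instead of fixing the protocol.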
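The header-size and redundancy points are easy to check for yourself. The sketch below compresses a representative request-header block with zlib (used here purely as an illustration of the potential savings; the header values are made up, not captured from a real browser):

```python
import zlib

# A representative set of request headers in the 700-800 byte range
# cited above (illustrative values, not a real browser capture).
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) "
    "AppleWebKit/532.0 (KHTML, like Gecko) Chrome/3.0.195.33 Safari/532.0\r\n"
    "Accept: application/xml,application/xhtml+xml,text/html;q=0.9,"
    "text/plain;q=0.8,image/png,*/*;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3\r\n"
    "Cookie: session=0123456789abcdef0123456789abcdef; "
    "prefs=lang%3Den%26tz%3DUTC%26theme%3Dlight\r\n"
    "\r\n"
).encode("ascii")

compressed = zlib.compress(headers)
print(f"raw: {len(headers)} bytes, compressed: {len(compressed)} bytes")
```

Even on a single request the compressed block is substantially smaller, and since headers like User-Agent and Accept* repeat verbatim on every request over the same channel, a stateful compressor shared across the connection would save even more.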