Google has created a new HTTP-based protocol, "SPDY" (pronounced "speedy"), to reduce client-server latency in HTTP. "We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support."

In the page, they are again asking for community work on this. They have reached a point where ........

Well, that's more or less how open-source works, isn't it? A core team builds the basics of something, then looks for collaborators to help it grow?

For something as fundamental as the HTTP protocol, Google certainly can't do everything themselves - they need people who make web servers, and web browsers, and HTTP and web services libraries to pick up what they've done, and incorporate it into their own projects...

Header compression is something that I can certainly see being useful. Web apps using AJAXy techniques, web services - they're characterized by having relatively little content, meaning the uncompressed headers could often be half the traffic being transmitted.
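For a sense of scale, here's a rough sketch of what generic zlib compression (the SPDY draft compresses headers with zlib) does to a typical AJAX request's header block. Every header value below is invented for illustration:

```python
import zlib

# A typical set of request headers for a small AJAX call: the JSON payload
# coming back might be a few hundred bytes, so these headers really can be
# half the traffic. All values here are made up.
headers = (
    b"GET /api/updates?since=12345 HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20090910 Firefox/3.5\r\n"
    b"Accept: application/json\r\n"
    b"Accept-Language: en-us,en;q=0.5\r\n"
    b"Accept-Encoding: gzip,deflate\r\n"
    b"Cookie: session=abcdef0123456789; prefs=compact\r\n"
    b"Connection: keep-alive\r\n\r\n"
)

compressed = zlib.compress(headers)
print(len(headers), len(compressed))  # the header text shrinks substantially
```

And that's without a preset dictionary of common header names, which SPDY also uses, and without the cross-request gains from keeping the compression context alive over the whole connection.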

I see one downside to this. With only one TCP connection, losing a packet will pause the transmission of ALL resources until the lost packet is retransmitted. Because of the way TCP congestion avoidance works (increase speed until you start losing packets), this will not be a rare occurrence. There are two ways around this: use multiple TCP streams, or better, use UDP.

...but the resources to be retransmitted are also now smaller and more efficient, helping to negate it. So, if it becomes a problem, do a little reconfiguration, and change default recommendations on new pieces of network infrastructure. The networks will adapt, if it's a problem.

If it ends up working out, it can be worked into browsers and web servers all over, and many of us can benefit. Those who don't benefit can safely ignore it, if it's implemented well. We all win. Yay.

The Real Problem we have is that old protocols have proven themselves extensible and robust. But, those protocols weren't designed to do what we're doing with them. So, if you can extend them again, wrap them in something, etc., you can gain 90% of the benefits of a superior protocol, but with easy drop-down for "legacy" systems, and easy routing through "legacy" systems. This is generally a win, when starting from proven-good tech, even if it adds layers of complexity.

You can implement detection and retransmission of lost packets on top of UDP. The problem is TCP's in-order delivery. Because of it, when you lose a packet, everything received after it waits until the lost packet is retransmitted. With UDP you can use the data in new packets right away, regardless of whether an older packet is missing and has to be retransmitted.
Imagine loading many images on a page simultaneously: a packet is lost, and because only one TCP connection is in use, all the images stall until the lost packet is retransmitted.
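The stall can be sketched with a toy model of the two delivery rules. This is not real TCP or UDP, just the in-order constraint that causes head-of-line blocking:

```python
def deliver_in_order(received, expected_count):
    """TCP-style: packets are handed to the application strictly in
    sequence, so delivery stops at the first gap."""
    have = set(received)
    delivered = []
    for seq in range(expected_count):
        if seq not in have:
            break          # gap: everything after it must wait
        delivered.append(seq)
    return delivered

def deliver_any_order(received):
    """UDP-style: whatever arrived is usable immediately."""
    return sorted(set(received))

# Packet 2 was lost in transit; packets 0, 1, 3, 4, 5 arrived fine.
arrived = [0, 1, 3, 4, 5]
print(deliver_in_order(arrived, 6))   # [0, 1] -- 3, 4, 5 stall behind the gap
print(deliver_any_order(arrived))     # [0, 1, 3, 4, 5]
```

With six images multiplexed on one connection, the in-order rule means one lost packet blocks data for all of them, even though most of that data is already sitting in the receive buffer.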

HTTP headers are huge. Payloads are huge. 375 Mbps is not enough when you have dozens of largish HTTP requests flying over the wire. Remember, that's mega BITS, not mega BYTES, and that's just a measure of bandwidth, not latency. Also keep in mind that as soon as a request grows larger than a single packet/frame, performance can quickly tank. If compression keeps the entire request under the MTU, you can get huge latency reductions.

It was my understanding that SSL compressed the stream as a side effect of encryption, and that headers are within the encrypted stream - so if they are using SSL exclusively, why would you need to compress headers?

Please enlighten us - what are those purposes? For bonus points, explain the SoC students working on Haiku, FreeBSD, and other OSes and projects Google has never used.

Mindshare: they gain a bunch of Summer of Code students who will likely support and use Google's platform in the future, and simultaneously deny their competitors (Microsoft, Yahoo, etc.) the chance to gain mindshare with the next generation of devs.

The picture you try to paint suggests this bunch of students has the CHOICE to "support and use Google's platform in the future". They are students; I assume they can think for themselves and understand what marketing means.
I don't see any problem with Google's SoC, and I am happy that a big company like Google tries to be open in at least some way and shares a lot of its research and code. Even if its only motive is to make a lot of money or to market itself, that is the case for every company. With Google we at least get something in return...

Google is not going to be more successful just because of a faster implementation of HTTP. Every internet user would benefit from a faster WWW, though, and anybody contributing to such a goal, paid or unpaid, successful or unsuccessful, deserves credit and respect.

So, after ripping all the features for Chrome off from their competitors and offering none new, now they want to copy paste Opera Unite into their new client-server HTTPish protocol. Will Google ever create anything innovative?

What do you mean, 'rip off' and 'offering none'? Their source is there for everyone else to 'rip off' too, and every WebKit change they've made has been committed back to WebKit; they didn't fork.

As stupid as it sounds, probably the most noteworthy feature of Google Chrome (and the one that differentiates it the most) is that it puts its tabs above the address bar. Innovative? I wouldn't say that - it just plain makes more sense that way. But it certainly wasn't ripped off from anyone else.

As for Opera Unite??? What in the hell are you talking about? This SPDY stuff isn't even remotely related to that, and I mean not even r e m o t e l y.

Chrome = Speed Dial, top tabs, bookmark syncing, etc.
Opera DID have all of these before Chrome. The first two were introduced in Opera before anywhere else, and I believe Opera was the first browser to integrate bookmark sync.
SPDY, on the other hand, is more like Opera Turbo, which compresses HTTP streams but also reduces image quality.
Hell, even GMail wasn't the first 1GB mail service - I remember a Mac fansite (Spymac? it's a strange site now) that offered 1GB of free email before Google did.

Opera is often innovative but doesn't put much energy into refinement. Google, on the other hand, waxes and polishes an idea and makes it shiny for the user.

[rant]And seriously; it being open source doesn't automatically mean that any business can and will adopt it.. It's better, sure.. but that doesn't stop their world domination ^_^[/rant]

Opera Turbo is a proxy. If you don't mind your data being routed through Europe and heavily compressed beyond recognition.

SPDY is _not_ a feature in some web browser--it is a communications standard that anybody could implement in any browser. They have created a test version in Chrome, but Mozilla could just as well implement it too.

Both of the technologies do the same thing -- compress webpages. One does it via a proxy, the other does it through protocol implementation. And a proxy is much easier to integrate as compared to a wholly new standard. Unless you have something racially against europe, if it sends me my pages faster I have no issues. Images? Yes! It's for viewing web pages faster on slow dialups. That's the exact intent. So other than your personal bias against opera, there's not much else different.

To sum it up, both of them do *exactly* the same thing - compress web pages. One does it via a proxy, the other is a wholly new standard. Now read the part where I said, Opera innovates and Google polishes it.

"Unless you have something racially against europe, if it sends me my pages faster I have no issues."

Way to jump to conclusions. Depending on where you are, re-routing through Europe could make things slower.

"if it sends me my pages faster"

See that part there? Opera checks whether the pages you get really are faster with Turbo on. If not, it warns you and disables itself.

World domination by open source software is no problem, because bad behavior by such an open source project immediately leads to forks. Just look at what happened to XFree86: they got forked by X.Org the second they started behaving funny (the restrictive license change).

I do not get, why people don't seem to grasp the difference between world domination by a closed source entity vs. world domination by an open source entity.
It is as different as night and day.

I think email as it exists also carries some painful legacy decisions - although I don't know which is harder to ditch: HTTP or SMTP?

HTTP: it would be nice to have a new protocol like SPDY, but stop and think about how many services and applications were designed with only HTTP in mind... it hurts. Browsers change every few months; enterprise-level applications don't. If anything, SPDY could at least be used as an auxiliary or complementary browser data pipeline. But calls to replace HTTP mostly come from performance issues, not catastrophic design flaws (enter SMTP)...

SMTP: the fact that you're expected to have an inbox of gargantuan capacity so every idiot in the world can send you pill offers to make your d!@k bigger is as stupid as taking pills to make your d!@k bigger. As it exists today, any trained beagle can spam millions of people and disappear with no recourse. Terabytes of "Viva Viagra!" is due to the simple fact that the sender is not liable for the storage of the message - you are, you sucker. If the message is of any actual importance, the sending server should be available for the recipient to retrieve when they decide to. This provides many improvements over SMTP such as:

1) confirmation of delivery
--- you know if it was accessed and when - The occasional 'send message receipt' confirmation some current email clients provide you with is flaky and can easily be circumvented - this could not be.

2) authenticity
--- you have to be able to find them to get the message, they can't just disappear. geographic info could also be used to identify real people (do you personally know anyone in Nigeria? probably not...)

3) actual security
--- you send them a key, they retrieve and decode the message from your server.

4) no attachment limits
--- meaning, no more maximum attachment size because you're retrieving the files directly from the sender's 'outbox'. "please email me that 2.2GB file" OK! now you can! Once they've retrieved it, the sender can clear it from their outbox - OR, send many people the same file from ONE copy instead of creating a duplicate for each recipient. This saves time, resources, and energy (aka $$$)!

5) the protocols and standards already exist
--- sftp and pgp would be just fine, a simple notification protocol (perhaps SMTP itself) would send you a validated header (sender, recipient, key, download location, etc) which you could choose to view or not.

You'll still get emails, but spammers will be easily identified because their location (and perhaps an authenticity stamp) will point to the server - if not, you can't get the message even if you wanted to. And again - if it's so damned important I know senders will be happy to hold the message till recipients pick it up...? right?
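Purely to make the idea concrete, here's a sketch of what such a notification record might contain. Every field name and value here is invented - this is nobody's standard, just the shape of the proposal above:

```python
# The sender keeps the message body on their own server; only this small
# "validated header" travels to the recipient, who fetches the body when
# (and if) they choose. All fields are hypothetical.
notification = {
    "sender": "alice@example.com",
    "recipient": "bob@example.net",
    "subject": "That 2.2GB file you asked for",
    "download_location": "sftp://mail.example.com/outbox/msg-4711",
    "key": "base64-encoded-session-key-goes-here",
    "size_bytes": 2_362_232_012,
}

def worth_fetching(note, max_bytes):
    """Recipient-side policy: the spam-resistance of the scheme lives
    here - nothing is downloaded unless the recipient decides to."""
    return note["size_bytes"] <= max_bytes
```

The point of the sketch is that the storage cost and the reachability requirement both land on the sender, which is exactly what today's SMTP gets backwards.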

But we're talking about HTTP here, which I can say isn't quite as broken. Although they should keep working on SPDY, because give it a few years and the world will find a way to break it...

Seriously though... Google Wave... when comparing it to GMail, it could replace GMail if the Wave protocol were available for everyone to join in on. It actually COULD replace e-mail and do a sweet job of it.

Good link, but it kind of makes me think this architecture will never get adopted. Nine years after its namesake, and I sure as heck never heard of it (although it does exactly what I was looking for). But there lies the problem: how do you enable its adoption on a widespread basis, without breaking compatibility, and without locking into a vendor's service? Google Wave, innocent as it is, is still a service delivered by a company. I'm looking for an architectural change (like I.M.2000) that could be adopted transparently; perhaps we'll have to wait till email is completely unusable for it to really change...?


Google's evil twin. Their motto is "Do No Good". They're a closed and proprietary company constantly seeking to usurp the web with their own proprietary technologies and patents - a bit like Microsoft, you could say!

In a way I applaud the idea of addressing latency. Handshaking, the process of requesting a file is one of the biggest bottlenecks remaining on the internet that can make even the fastest connections seem slow.

To slightly restate and correct what Kroc said: every time you request a file, it takes the equivalent of two (or more!) round trips to/from the server before you even start receiving data. In the real world that's 200-400ms if you have what's considered a low-latency connection; and if you are making a lot of hops between point A and point B, or worse, are on dialup or satellite, or are just connecting to a server overtaxed with requests, it could be up to one SECOND per file, regardless of how fast the throughput of your connection is.

Most browsers try to alleviate this by opening multiple concurrent connections to each server - the usual default is eight. Since the file sizes differ, there is also some overlap across those eight connections, but if the server is overtaxed, many of those can be rejected and the browser has to wait. As a rule of thumb, the best way to estimate the overhead is to subtract eight, reduce to 75%, and multiply by 200ms as the low and one second as the high.

Take the home page of OSNews for example - 5 documents, 26 images, 2 objects, 17 scripts (what the?!? Lemme guess, jquery ****otry?) and one stylesheet... That's 51 files, so (51-8)*0.75==32.25, we'll round down to 32. 32*200 = 6.4 seconds overhead on first load on a good day, or 32 seconds on a bad day. (subsequent pages will be faster due to caching)
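That rule of thumb reads directly as code. Same numbers as the 51-file example above (the function name and parameters are just this sketch's, not anything standard):

```python
def handshake_overhead_seconds(file_count, concurrent=8,
                               overlap=0.75, rtt_low=0.2, rtt_high=1.0):
    """The rule of thumb above: subtract the connections that run in
    parallel, scale by an overlap factor, round down, then multiply by
    the per-request round-trip cost (good day / bad day)."""
    effective = int((file_count - concurrent) * overlap)  # round down
    return effective * rtt_low, effective * rtt_high

low, high = handshake_overhead_seconds(51)
print(low, high)  # roughly 6.4 seconds on a good day, 32 on a bad one
```

It's only an estimate, of course, but it makes clear why cutting the request count matters far more than shaving bytes off the payload.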

So these types of optimizations are a good idea... BUT

More of the blame goes in the lap of web developers many of whom frankly are blissfully unaware of this situation, don't give a **** about it, or are just sleazing out websites any old way. Even more blame goes on the recent spate of 'jquery can solve anything' asshattery and the embracing of other scripting and CSS frameworks that do NOT make pages simpler, leaner, or easier to maintain even when they claim to. Jquery, Mootools, YUI, Grid960 - Complete rubbish that bloat out pages, make them HARDER to maintain than if you just took the time to learn to do them PROPERLY, and often defeat the point of even using scripting or CSS in the first place. CSS frameworks are the worst offenders on that, encouraging the use of presentational classes and non-semantic tags - at which point you are using CSS why?

I'm going to use OSNews as an example - no offense, but fair is fair and the majority of websites have these types of issues.

First we have the 26 images - for WHAT? Well, a lot of them are just little .gif icons. Since they are not actual content images and degrade badly with CSS off, I'd move them into the CSS and use what's called a sliding-background or sprite system, reducing about fifteen of those images to a single file. (In fact it would reduce some 40 or so images to a single file.) That one file would probably be smaller than the current files' combined size, since things like the palette would be shared and you may get better encoding runs. Looking at the rest, about 22 of those 26 images should probably be one or two images total. Let's say two; that's 20 handshakes removed, aka three to fifteen seconds shaved off first load.
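The sprite trick boils down to a few lines of CSS - the file name and pixel offsets here are made up for illustration:

```css
/* One image holds every icon laid out in a strip; each class just
 * shifts the visible window onto it. */
.icon          { width: 16px; height: 16px;
                 background: url(sprites.png) no-repeat; }
.icon-rss      { background-position:   0     0; }
.icon-comment  { background-position: -16px   0; }
.icon-print    { background-position: -32px   0; }
/* ...and so on: 40-odd separate GIF requests become one request. */
```

One handshake instead of dozens, and the shared palette usually makes the combined file smaller too.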

Of the 12 scripts, about half are advertising (wow, there's advertising here? Sorry, Opera user, I don't see it!), so there's not much optimization to be done there, EXCEPT that it's five or six separate adverts. If people aren't clicking on one, they aren't gonna click on SIX.

But, the rest of the scripts? First, take my advice and swing a giant axe at that jquery nonsense. If you are blowing 19k compressed (54k uncompressed) on a scripting library before you even do anything USEFUL with it, you are probably ****ing up. Google analytics? What, you don't have webalizer installed? 90% of the same information can be gleaned from your server logs, and the rest isn't so important you should be slowing the page load to a crawl with an extra off-server request and 23k of scripting! There's a ****load of 'scripting for nothing' in there. Hell, apart from the adverts the only thing I see on the entire site that warrants the use of javascript is the characters left counter on the post page! (Lemme guess, bought into that ajax for reducing bandwidth asshattery?) - Be wary of 'gee ain't it neat' bullshit.

... and on top of all that you come to the file sizes. 209k compressed / 347k uncompressed is probably TWICE as large as the home page needs to be, especially when you've got 23k of CSS. 61k of markup (served as 15k compressed) for only 13k of content with no content images (they're all presentational), most of that content being flat text, is a sure sign that the markup is fat, bloated, poorly written rubbish - with more 1997 to it than 2009. No offense, I still love the site, even with its poorly thought out fixed metric fonts and fixed-width layout - which I override with an Opera user.js.

You peek under the hood and it becomes fairly obvious where the markup bloat is. An ID on body (since a document can only have one body, what the **** are you using an ID for?), unnecessary spans inside the legend, an unnecessary name on the h1 (you need to point to top, and you've got #header RIGHT before it!), an OL nested inside a UL for no good reason (for a dropdown menu I've never seen - lemme guess, scripted and doesn't work in Opera?), an unnecessary wrapping div around the menu and the side section (which honestly I don't think should be a separate UL), those stupid bloated AJAX tabs with no scripting-off degradation, and the sidebar lists doped to the gills with unnecessary spans and classes. Just as George Carlin said "Not every ejaculation deserves a name", not every element needs a class.

Using MODERN coding techniques and axing a bunch of code that isn't actually doing anything, it should be possible to reduce the total filesizes to about half what it is now, and eliminate almost 75% of the file requests in the process... Quadrupling the load speed of the site (and similarly easing the burden on the server!)

So really, do we need a new technology, or do we need better education on how to write a website and less "gee ain't it neat" bullshit? (Like scripting for nothing or using AJAX to "speed things up by doing the exact opposite")

I think what pisses me off the most is that I've made websites where I want, for example, geometric shapes, but I can't do it without a weird combination of CSS and gif files. Why can't the W3C add even the most basic features that would let you get rid of large amounts of this crap? Heck, if they had a geometric tag which let me create a box with curved corners, I wouldn't need the Frankenstein code I use today.

What would be so hard to create:

<shape type="quad" fill-color="#000000" corners="curved" />

Or something like that. There are many things people bolt onto pages that shouldn't need to be there if the W3C got its act together - yet the W3C members have done nothing to improve the situation in the last 5 years except drag their feet on every single advancement put forward, because some jerk in a mobile phone company can't be bothered to update the specifications in their products to handle the new features. Believe me, I've seen the conversations, and it is amazing how features are being held up because of a few nosy wankers holding sway in the meetings.

While it's hardly simple, SVG was actually intended for exactly this kind of thing. The problem is that only Webkit allows you to use SVG anywhere you'd use an image.

Gecko and Opera allow you to use SVG for the contents of an element only. Internet Explorer doesn't support SVG at all, but allows VML (an ancestor of SVG) to be used in the same way you can use SVG in Gecko and Opera.

So the functionality is there (in the standards) and has been there since 2001. We just aren't able to use it unless we only want to support one browser. Cool if you're writing an iPhone application, but frustrating otherwise.

As for your specific example, you can do that with CSS, using border-radius. Something like this:
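A minimal sketch (the 8px radius is an arbitrary example; the vendor-prefixed forms are what browsers shipped at the time):

```css
/* A black box with curved corners, no images needed. */
.rounded-box {
    background-color: #000;
    -moz-border-radius: 8px;    /* Gecko (Firefox 3.x) */
    -webkit-border-radius: 8px; /* WebKit (Safari, Chrome) */
    border-radius: 8px;         /* the eventual CSS3 property */
}
```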

Of course, as with everything added to CSS or HTML since 1999, it doesn't work in Internet Explorer.

Blaming the W3C for everything hardly seems fair, considering that these specs were published almost a decade ago, and remain unimplemented. Besides, there are plenty of other things to blame the W3C for. Not having actually produced any new specs in almost a decade, for example.

Since Adam already spilled the beans in one of the Conversations, I may as well come out and state what is probably already obvious: There is a new site in the works, I'm coding the front end.

_All_ of your concerns will be addressed.

The OSnews front end code is abysmally bad. Slow, bloated and the CSS is a deathtrap to maintain (the back end (all the database stuff) is very good and easily up to the task).

Whilst we may not see eye to eye on HTML5/CSS3, I too am opposed to wasted resources, unnecessary JavaScript and plain crap coding. My own site adheres to those ideals. Let me state clearly that OSn5 will be _better_ than camendesign.com. I may even be able to impress you (though I doubt that).

That just means you don't use Google Analytics (or don't know how to use it). It is a very powerful piece of software that can't be replaced by Analog, Webalizer or digging through logfiles.

No, it's just that the extra handful of minor bits of information it presents is only of use to people obsessing over tracking instead of concentrating on building content of value - usually making such information only of REAL use to the asshats building websites whose sole purpose is click-through advertising bullshit, or who are participating in glorified marketing scams like affiliate programs... such things having all the business legitimacy of Vector Knives or Amway.

I agree, this isn't anything new. I remember reading about how this could be done way back in 1999 (the original article author is probably working for Google now).

This should be the W3C's job, to update web standards and promote the new updated versions. Instead, the W3C works on useless crap like "XML Events", "Timed Text", XHTML 2.0 and "Semantic Web" (which is due to reach alpha state some time after the release of Duke Nukem Forever).

Let's face it, HTTP 1.1 is abandonware, and I think we have to applaud Google for taking the initiative, actually implementing something, and trying to put some weight behind the push. By the same token, let's see Google push more for IPv6 and the ideas suggested by two of the people in the comments for this article :-)

I thought the comment about Internet Explorer not waving any flags was uncalled for... I think you should direct that hatred towards the slacking company behind the shitty product! Oh, and it's a bit off-topic as well... it's not really Microsoft's fault HTTP is crap?

It's nobody's specific fault that HTTP is crap, but then what matters is who is going to do anything about it.

Microsoft have had total dominance of the web for almost a decade. At no point during that time did they attempt to improve the status quo. At no point did they say that "You know, HTTP is slow and could do with improving". They just coasted along with careless disdain.