Chrome 27, released today, is 5 percent faster and includes conversational search

Google gets faster page loading just by downloading files in a different order.

Google has updated the stable version of Chrome to version 27. On top of the usual bug and security flaw fixes, the new version is claimed to load webpages about 5 percent faster on average.

Finding a 5 percent improvement in a browser that's already fast is no mean feat. The better performance comes from making Chrome smarter about the way it uses the network: being more aggressive to download things in some instances and being less aggressive in others.

HTML pages generally include references to many other files that the browser needs to download before it can show a complete page to the user: CSS, JavaScript, and images. These can themselves have dependencies; HTML files can embed other HTML files, CSS files can reference images or other CSS files, and scripts can cause other scripts to be loaded, for example.
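The dependency walk described above can be sketched as a breadth-first traversal of the resource graph. This is a minimal illustration, not WebKit's code; the resource names and the `DEPENDENCIES` map are made up.

```python
from collections import deque

# Hypothetical dependency map: each resource lists the further
# resources it references once parsed.
DEPENDENCIES = {
    "index.html": ["style.css", "app.js", "hero.jpg"],
    "style.css": ["fonts.css", "bg.png"],
    "app.js": ["analytics.js"],
    "fonts.css": [],
    "bg.png": [],
    "hero.jpg": [],
    "analytics.js": [],
}

def discover(root):
    """Breadth-first walk of the resource graph, roughly what a
    browser does as it parses a page and finds new references."""
    seen, queue, order = {root}, deque([root]), []
    while queue:
        res = queue.popleft()
        order.append(res)
        for dep in DEPENDENCIES.get(res, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return order
```

Each discovered resource would be handed to the download scheduler as it surfaces, rather than after the whole walk completes.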

Although Google is switching to its own Blink rendering engine, Chrome 27 still uses the WebKit engine. WebKit detects the resources that are needed and puts them in a download queue. A component called the scheduler then sets about downloading all the resources. When doing this, it has to make various trade-offs. Some resources (like HTML itself) are needed more urgently than others (like images). Network bandwidth is finite, and browsers in general should not make too many connections to any one server.
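The trade-offs the scheduler makes can be sketched as a priority queue with a cap on connections per server. The `Scheduler` class, the priority values, and the per-host cap of 6 are illustrative assumptions, not Chrome's actual implementation.

```python
import heapq

# Rough priority classes as the article describes: HTML is most
# urgent, images least. The numbers are illustrative.
PRIORITY = {"html": 0, "css": 1, "js": 1, "image": 3}

class Scheduler:
    """Toy download scheduler: dispatches queued resources in
    priority order while capping in-flight connections per host."""

    def __init__(self, max_per_host=6):
        self.max_per_host = max_per_host
        self.queue = []   # heap of (priority, seq, host, url)
        self.active = {}  # host -> in-flight count
        self.seq = 0      # tie-breaker keeps FIFO order within a class

    def enqueue(self, url, host, kind):
        heapq.heappush(self.queue, (PRIORITY[kind], self.seq, host, url))
        self.seq += 1

    def dispatch(self):
        """Start every queued download that fits under the per-host cap;
        everything else goes back on the queue for later."""
        started, deferred = [], []
        while self.queue:
            prio, seq, host, url = heapq.heappop(self.queue)
            if self.active.get(host, 0) < self.max_per_host:
                self.active[host] = self.active.get(host, 0) + 1
                started.append(url)
            else:
                deferred.append((prio, seq, host, url))
        for item in deferred:
            heapq.heappush(self.queue, item)
        return started
```

With a cap of 2, for example, enqueueing three images and an HTML file on the same host starts the HTML first, one image alongside it, and defers the rest.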

In WebKit, this scheduler is part of the rendering engine itself and as such, each tab (which has its own renderer) has its own scheduler. Chrome 27 moves the scheduler to be shared across the entire browser. This gives the scheduler a better view of the current network activity. Downloads belonging to background tabs, for example, can now be run at a lower priority than those of visible tabs.
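The benefit of the browser-wide view can be shown with a small priority tweak: requests from hidden tabs are demoted behind anything a visible tab needs. The function name and penalty value here are hypothetical, just to illustrate the idea.

```python
# A browser-wide scheduler can see which tab each request belongs to,
# so requests from hidden tabs can be demoted. The penalty of 10 is
# an illustrative value, not Chrome's.
def effective_priority(base_priority, tab_visible, background_penalty=10):
    """Lower number = more urgent. Background-tab requests are pushed
    behind everything a visible tab has queued."""
    return base_priority if tab_visible else base_priority + background_penalty

requests = [
    ("visible-tab image", effective_priority(3, True)),
    ("background-tab html", effective_priority(0, False)),
]
requests.sort(key=lambda r: r[1])
# Even the visible tab's lowest-urgency resource (an image, 3) now
# outranks the hidden tab's HTML (0 + 10 = 10).
```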

Chrome 27 also changes how the scheduler works. If it detects that the network is idle (something it can now do, thanks to its holistic view of the browser's network activity), it will try to pre-load resources that are probably going to be used.
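The idle-preload behavior reduces to a simple rule: only speculate when nothing real is in flight anywhere in the browser. A minimal sketch, with made-up function and variable names:

```python
def maybe_preload(in_flight, speculative_queue):
    """If the network is idle (no in-flight requests across the whole
    browser), start fetching a resource that will probably be needed.
    Returns the resource to preload, or None if the network is busy."""
    if in_flight == 0 and speculative_queue:
        return speculative_queue.pop(0)
    return None
```

The key point is the `in_flight == 0` check: only a scheduler with a holistic view of every tab's network activity can evaluate it reliably.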

It also scales back some activity: the WebKit scheduler would try to fetch an unlimited number of images at a time. The new one limits this to ten concurrent images, which reduces bandwidth contention. This in turn means that the first few images download faster, and as these images tend to be the ones that are currently visible, it allows the page to render sooner.
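The arithmetic behind this is straightforward: with fair bandwidth sharing, each of n concurrent transfers gets 1/n of the link, so the first image finishes after roughly n times its solo download time. A toy model (the sizes and bandwidth are made up):

```python
def first_image_done(num_concurrent, size_kb=100, bandwidth_kbps=1000):
    """With fair sharing, each of n concurrent transfers gets
    bandwidth/n, so the first one finishes after n * size / bandwidth
    seconds. Sizes and link speed are illustrative."""
    return num_concurrent * size_kb / bandwidth_kbps

# 50 images at once: the first one lands after 5.0 s.
# Capped at 10:      the first one lands after 1.0 s.
```

Since the first images fetched tend to be the ones above the fold, the cap gets visible content on screen sooner even though total page bytes are unchanged.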

The new version also includes a bunch of more or less unspecified "improvements" to the built-in spell checking and predictions in the Omnibox. For developers, there's an extension to the HTML5 FileSystem API that enables files to be stored to, and synced with, Google Drive.

Chrome 27 users will also see a new feature if they use Google as their search engine. The search box has picked up a microphone icon. Click it, and you can speak to the search engine and dictate your searches. It should even support Siri-esque conversational requests.

That said, at the time of writing, this feature seemed extremely unreliable, most times telling us that we had no Internet connection and refusing to let us speak to the search engine.

I wonder if there is a stability tradeoff with the shared scheduler. As it is now, each tab is its own little sandbox, and if something goes wrong it doesn't break everything else. This update seems like it might be trading some of that sandboxing for more speed.

"I wonder if there is a stability tradeoff with the shared scheduler. As it is now, each tab is its own little sandbox, and if something goes wrong it doesn't break everything else. This update seems like it might be trading some of that sandboxing for more speed."

I don't think that's exactly true even before this change. I know tabs could be put in the same process for a few reasons, and I was under the impression that there was only one JavaScript interpreter running; I could be wrong about that, though.

I updated to the latest Chrome on my Nexus 4 and the scrolling performance is very poor. It auto-hides the address bar when scrolling down and makes it reappear when scrolling up again. Flicking my finger up and down makes it really choppy.

"Chrome 27 users will also see a new feature if they use Google as their search engine. The search box has picked up a microphone icon. Click it, and you can speak to the search engine and dictate your searches. It should even support Siri-esque conversational requests."

Did I miss something? I have had only the standard version (no beta or nightly builds) of Chrome on my PCs, and I've had the microphone icon (and have used it) on the Google.com search box for quite a while.

"I updated to the latest Chrome on my Nexus 4 and the scrolling performance is very poor. It auto-hides the address bar when scrolling down and makes it reappear when scrolling up again. Flicking my finger up and down makes it really choppy."

Yet Opera Mini on my N8 (Symbian on a 600 MHz CPU plus an older PowerVR GPU) is smooth as butter. The original Lumia 800 with Windows Phone 7 was smooth too (and that had a 1 GHz CPU). I know I'm not comparing like-for-like, but still. Can anyone explain all the complaints about a choppy experience on Android, even with core apps? It seems like the problem is still there (I last used Android on Eclair).

"Yet Opera Mini on my N8 (Symbian on a 600 MHz CPU plus an older PowerVR GPU) is smooth as butter. The original Lumia 800 with Windows Phone 7 was smooth too (and that had a 1 GHz CPU). I know I'm not comparing like-for-like, but still. Can anyone explain all the complaints about a choppy experience on Android, even with core apps? It seems like the problem is still there (I last used Android on Eclair)."

The only app I have any choppiness in is Chrome. It has the best tab interface, in my opinion, and is fine for general websites like Ars Technica. (For reference, I use a Galaxy Nexus.)

However, it seems to struggle badly with longer, more complex pages like Facebook and similar websites.

Because of this, I've switched to Dolphin, but I keep checking each update hoping it improves.

"I wonder if there is a stability tradeoff with the shared scheduler. As it is now, each tab is its own little sandbox, and if something goes wrong it doesn't break everything else. This update seems like it might be trading some of that sandboxing for more speed."

The sandboxing has two main roles:
- Preventing a misbehaving component (mostly plugins) in one page from crashing the entire browser
- Preventing a security flaw in one page from gaining access to other pages

This was necessary because websites have become applications in scope and complexity, and the browser needs to manage security, scripting, plugins, rendering, background tasks, and so on. A browser today is probably more complex than a full operating system from the '90s, and operating systems have a lot of shared components.

Chrome's scheduler must be stable and tested enough to have been deemed worthy of promotion to a shared component. Given Chrome's background of technical excellence, I trust the choice is right.

"I updated to the latest Chrome on my Nexus 4 and the scrolling performance is very poor. It auto-hides the address bar when scrolling down and makes it reappear when scrolling up again. Flicking my finger up and down makes it really choppy."

This just makes sense. I always wondered why browsers need to open so many connections. No, it will not make the website load faster; the speed is the same, so opening 100 connections or 10 makes no difference if your bandwidth is limited, which of course it is. And the other party (the web server) will not respond better under more load.

It may actually be faster to use fewer connections; otherwise, on websites with a lot of images, you end up with broken images halfway down the page. The problem is web servers. Most of them have a limit on connections per user. The worst one was always Explorer; it opened as many as 10 times more connections than the rest. Some companies may even block this on their firewall as a false positive. The reason is that today's websites need to handle more and more traffic, and users open several tabs; this is really just too intensive for most web servers, in particular smaller websites on shared hosts. Browsers should make better use of the connections they have, instead of going on a killing spree of connections to web servers.

If I understand this correctly, it will walk the resources, asynchronously adding them to a fetch queue while reusing connections to a given server. This way resources can be requested concurrently without opening a new connection per resource.
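That reading matches how persistent (keep-alive) connections are generally used: queued fetches are bucketed per host so one connection can serve many requests. A minimal sketch of the grouping step, with hypothetical URLs:

```python
from collections import defaultdict
from urllib.parse import urlparse

def group_by_connection(urls):
    """Bucket queued fetches by host so each persistent (keep-alive)
    connection serves many requests instead of a new socket being
    opened per resource."""
    buckets = defaultdict(list)
    for url in urls:
        buckets[urlparse(url).netloc].append(url)
    return dict(buckets)
```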

"I updated to the latest Chrome on my Nexus 4 and the scrolling performance is very poor. It auto-hides the address bar when scrolling down and makes it reappear when scrolling up again. Flicking my finger up and down makes it really choppy."

"Yet Opera Mini on my N8 (Symbian on a 600 MHz CPU plus an older PowerVR GPU) is smooth as butter. The original Lumia 800 with Windows Phone 7 was smooth too (and that had a 1 GHz CPU). I know I'm not comparing like-for-like, but still. Can anyone explain all the complaints about a choppy experience on Android, even with core apps? It seems like the problem is still there (I last used Android on Eclair)."

Because Android doesn't prioritize the UI input the way every other modern mobile OS does. It never will either.

"I updated to the latest Chrome on my Nexus 4 and the scrolling performance is very poor. It auto-hides the address bar when scrolling down and makes it reappear when scrolling up again. Flicking my finger up and down makes it really choppy."

"Yet Opera Mini on my N8 (Symbian on a 600 MHz CPU plus an older PowerVR GPU) is smooth as butter. The original Lumia 800 with Windows Phone 7 was smooth too (and that had a 1 GHz CPU). I know I'm not comparing like-for-like, but still. Can anyone explain all the complaints about a choppy experience on Android, even with core apps? It seems like the problem is still there (I last used Android on Eclair)."

"Because Android doesn't prioritize the UI input the way every other modern mobile OS does. It never will either."

Wasn't "Project Butter" meant to fix all this? (I haven't used a modern Android device in over a year, so I can't say for sure.)

"...Most of them have a limit on connections per user. The worst one was always Explorer; it opened as many as 10 times more connections than the rest. Some companies may even block this on their firewall as a false positive..."

Off topic(?): Firefox fan here. Honestly, I really like Chrome (dev build) and actually switched to it as my default for a while, but I just ran back to Firefox Nightly after a delightful new default feature: on pages you connect to over HTTPS, Firefox blocks all insecure elements on the page.

"This just makes sense. I always wondered why browsers need to open so many connections. No, it will not make the website load faster; the speed is the same, so opening 100 connections or 10 makes no difference if your bandwidth is limited, which of course it is. And the other party (the web server) will not respond better under more load."

"It may actually be faster to use fewer connections; otherwise, on websites with a lot of images, you end up with broken images halfway down the page. The problem is web servers. Most of them have a limit on connections per user. The worst one was always Explorer; it opened as many as 10 times more connections than the rest. Some companies may even block this on their firewall as a false positive. The reason is that today's websites need to handle more and more traffic, and users open several tabs; this is really just too intensive for most web servers, in particular smaller websites on shared hosts. Browsers should make better use of the connections they have, instead of going on a killing spree of connections to web servers."

You're making a LOT of assertions with fsck-all basis in reality.
(a) There are PLENTY of sites which pump out data at substantially slower rates than my Internet bandwidth.
(b) Any serious site these days is using some sort of load balancing, so your multiple requests don't all get handled by the same CPU. And even if they did:
(c) Throwing a large number of requests at a site means you pay the latency cost (which for small assets dominates the bandwidth cost) only once rather than once per asset.
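The latency point can be made concrete with a rough model: total transfer time is fixed by bandwidth either way, but each serial round-trip adds latency, and parallelism amortizes it. The asset counts, RTT, and transfer times below are illustrative.

```python
import math

def fetch_time(assets, rtt, per_asset_transfer, parallel):
    """Rough model: each batch of `parallel` requests pays one
    round-trip time, then transfers share the link, so the total
    transfer term is the same either way; only the latency term
    shrinks with parallelism."""
    batches = math.ceil(assets / parallel)
    return batches * rtt + assets * per_asset_transfer

# 20 small assets, 100 ms RTT, 10 ms transfer each:
# serial (1 at a time): 20 * 0.1 + 0.2 = 2.2 s
# 6 in parallel:         4 * 0.1 + 0.2 = 0.6 s
```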