Steve first outlines how JavaScript can affect how the browser renders a page:

The growing adoption of Ajax and DHTML means today’s web pages have more JavaScript than ever before. The average top ten U.S. web site[1] contains 252K of JavaScript. JavaScript slows pages down. When the browser starts downloading or executing JavaScript it won’t start any other parallel downloads. Also, anything below an external script is not rendered until the script is completely downloaded and executed. Even in the case where external scripts are cached, the execution time can still slow down the user experience and thwart progressive rendering.

He then took the Alexa top ten web sites and tracked, based on the functions called, how much of the code was executed before the onload event. The results are below:

Now, it is easy to understand why this is the case. There are factors such as the simplicity of putting all the code in one file, and the feeling that caching will make the point moot (which Steve argues against). Steve acknowledges this:

The task of finding where to split a large set of JavaScript code is not trivial. Doloto, a project from Microsoft Research, attempts to automate this work. Doloto is not publicly available, but the paper provides a good description of their system. (You can hear the creators talk about Doloto at the upcoming Velocity conference.) The approach taken by Doloto uses stub functions that download additional JavaScript on demand. This might result in users having to wait when they trigger an action that requires additional functionality. Downloading the additional JavaScript immediately after the page has rendered might result in an even faster page.
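The paper doesn't publish code, but the stub idea can be sketched roughly as follows. All names here are mine, not Doloto's; the real system rewrites the code automatically and fetches function bodies over the network, whereas this sketch just injects a `fetchBody` function so the first-call behavior is visible.

```javascript
// Hypothetical sketch of the stub idea: a lightweight stub stands in for a
// function whose real body has not been downloaded yet. Names are mine;
// real Doloto rewrites code and fetches bodies automatically.
function makeStub(name, fetchBody, registry) {
  return function () {
    if (!registry[name]) {
      // First call: block while the real implementation is fetched and
      // installed in the registry. This is the pause users may notice.
      fetchBody(name, registry);
    }
    // Delegate every call (including the first) to the real function.
    return registry[name].apply(this, arguments);
  };
}
```

On first invocation the stub fetches and installs the real function, then delegates to it on every call; that first invocation is exactly where a user on a slow link would see the wait the article mentions.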

GWT actually solves two problems. First, it can detect classes, functions, fields, and parameters in your script that are never used and remove them, and it aggressively obfuscates everything (much more so than typical JS minifiers/packers, and in a gzip-friendly way), further reducing the size.

Secondly, there is the proposal for the GWT.createAsync() method, which I believe they have already proven via a prototype. This differs from Microsoft’s approach in that the developer chooses the split points and then calls GWT.createAsync() to instantiate a lazily loaded class. The compiler will then compute the transitive closure of every method reachable from that lazily loaded class (and not reachable from the entry point of the program), and ensure that those classes/methods are placed in a separate JS file.

This has pros and cons and differs from Doloto:
Con: you have to manually pick the split points.
Pro: You avoid the issues with trying to make the stub functions synchronous (since the JS has to be loaded asynchronously).
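As a rough illustration of the asynchronous style (names and structure are mine, not GWT's actual generated code): at a split point the caller passes a callback, the secondary JS file downloads in the background, and the page stays responsive in the meantime. The DOM is reached only through a `doc` parameter so the control flow can be exercised without a browser.

```javascript
// Rough illustration of a developer-chosen split point; names are mine,
// not GWT's actual output. The extra JS file is injected as a script tag
// and the callback fires once it has downloaded and run.
function loadSplitPoint(url, onReady, doc) {
  var script = doc.createElement('script');
  script.src = url;               // the compiler-produced secondary JS file
  script.onload = function () {   // fires when the split-off code has run
    onReady();
  };
  doc.getElementsByTagName('head')[0].appendChild(script);
}

// At a split point the developer would write something like:
//   loadSplitPoint('deferred.js', function () { startDesignTool(); }, document);
```

The key difference from the stub approach is visible in the signature: the caller must supply a continuation, so there is no attempt to make the load look synchronous.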

I think the Doloto approach, like the Volta approach to tier-splitting, suffers from a leaky abstraction. They try to hide too much from the developer, whereas I think achieving an optimal split, whether between progressive-loading stages or between client and server, in a way that maximizes user experience takes a little bit of developer forethought and probably can’t be totally automated.

Zazzle’s Web site makes extensive use of deferred loading of UI components. The shirt design tool, for example, loads the supporting JavaScript widget class code and component markup for various features only when those features are invoked.

The UIZE Framework, which is used throughout the zazzle.com Web site, provides a mechanism to facilitate deferred loading / load-on-demand / lazy loading / whatever you want to call it. The JS client interaction of every page is managed by a single instance of the “page widget” (a concept that UIZE introduces). Because UIZE, at its core, offers a widget hierarchy, child widgets on the page can access features provided by their host context, in this case the page widget. It is the page widget that offers support for loading components on demand (receiving functionality from a parent on the widget tree is a concept I call the “abundant context”).
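I'm not familiar with UIZE's actual API, but the "abundant context" idea, a child widget walking up the widget tree until some ancestor (ultimately the page widget) provides the feature it needs, can be sketched like this. All names are hypothetical.

```javascript
// Hypothetical sketch of context lookup in a widget tree; not UIZE's API.
function Widget(parent, features) {
  this.parent = parent || null;     // host widget, or null for the root
  this.features = features || {};   // features this widget provides
}

// Walk up the tree until some ancestor provides the named feature.
Widget.prototype.getFromContext = function (name) {
  for (var node = this; node; node = node.parent) {
    if (node.features[name]) return node.features[name];
  }
  return null;
};
```

The page widget at the root would register something like a component-loading feature, and any descendant could reach it without knowing where in the tree it lives.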

Finding the balance between caching benefits of static JS libraries and load-only-as-much-as-needed is kind of an art form. I think it’s still best left to developers, although you could argue that with enough machine intelligence and data upon which to base decisions, a server could provide the “split” points. It seems like an incredibly complex problem to solve, and its solution could be quite intrusive into coding practices and framework architecture, and I fear that it’s a problem that’s more “cool” to solve than incredibly compelling – kind of in the diminishing returns territory.

OT: With the exception of Wikipedia and – surprise – MSN, none of these sites validate. No wonder our clients still think web standards don’t matter, when names like these on the list still don’t seem to care.

Comment by Gordon — May 15, 2008

Interesting but old news ;)
We’ve had JS on demand in Ajax Callback for more than nine months now, and all of the JS for all of our widgets (more than 40 widgets) is in fact less than the size (before gzip) of the “average top web site”… ;)
Oh yeah @Gordon
We’re 100% validating after Glory ;)
(meaning in roughly 20 days from now)

Many frameworks have their own ways of solving this problem. I used to work with Backbase, which simply checks which ‘tags’ are present in the document and then loads the correct files (which even contain the CSS for those widgets).

In a not-too-complicated environment every programmer can choose for himself what to load and when. And modern browsers do not need the script tag in the head, so we could also add our script tags just before the end of the body, right? That should solve the progressive rendering problem Steve describes.

Comment by vitrus — May 16, 2008

For a while I’ve been thinking about building an ‘import’ method similar to Python’s, using the mootools Assets.javascript class, but I just haven’t gotten around to it yet. That way you can load the things you need after the page loads, instead of messing with page download and rendering times.
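A plain-JS version of that idea (not the MooTools API; all names here are mine) just needs a script injector plus a cache, so each URL is fetched once and later imports fire their callback immediately.

```javascript
// Hypothetical sketch of a Python-like importScript; not MooTools' API.
// `loaded` remembers which URLs have already been fetched.
var loaded = {};
function importScript(url, onReady, doc) {
  if (loaded[url]) {        // already imported: fire the callback at once
    onReady();
    return;
  }
  var script = doc.createElement('script');
  script.src = url;
  script.onload = function () {
    loaded[url] = true;     // cache so repeat imports skip the network
    onReady();
  };
  doc.getElementsByTagName('head')[0].appendChild(script);
}
```

A real version would also need to handle a load that is already in flight (queueing callbacks rather than injecting a second tag), but the cache shows the basic shape.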

Comment by tj111 — May 16, 2008

I find you have to be careful about getting over-zealous with post-page-render downloading of JavaScript, because it makes the page seem unresponsive. Especially in IE6.

IE6 sometimes gets stuck in “loading” mode forever (often when you accidentally have a 404 image or something of the sort), so if you wait until the page has fully rendered to load the JavaScript, you might never get it. If this is the route you want to take, I think you should set a small timeout to download it instead of waiting for the page to be fully rendered.
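The timeout idea can be sketched with a once-guard, so the download starts on whichever fires first, the page load event or a short timer, but runs exactly once. Names are mine; the load hook and timer are injected so the guard logic is testable outside a browser.

```javascript
// Sketch of the timeout fallback (names are mine): start the download on
// whichever happens first, page load or a short timer, but only once.
function loadOnceSoon(startDownload, delayMs, onPageLoad, schedule) {
  var started = false;
  function once() {
    if (!started) {
      started = true;
      startDownload();
    }
  }
  onPageLoad(once);        // normal path: after the page has rendered
  schedule(once, delayMs); // fallback: IE6 may never fire the load event
}
```

In a browser, `onPageLoad` would attach the handler to window load and `schedule` would be setTimeout; injecting them keeps the once-guard easy to exercise.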

If you only download functionality when the user wants to perform the action, that’s even worse for perceived responsiveness. If you’re on a really fat pipe you might not notice, but anyone on dialup (yes, there are still a lot of them) will cringe at how sluggish your app is. People are used to a page taking a while to load when their connection is slow and don’t mind waiting (well, not as much, at least), but waiting every time they want to do something once the page is “already loaded” is something they don’t forgive.