Dojo 1.3 Is Out!

If you’re already using Dojo, this should be a no-brainer upgrade. It’s out-and-out better. As a quick example, dojo.create("tagname", { /*properties*/ }) is now the preferred way to build DOM nodes quickly. Its simple API will be natural to anyone who has used dojo.attr(). Even better, Pete’s exciting PlugD version of dojo.js has been updated to 1.3 as well.
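To make the `dojo.create` pattern concrete, here's a conceptual sketch (not Dojo's actual source) of what `dojo.create("tagname", { /*properties*/ })` does: build a node, then apply each property the way `dojo.attr(node, name, value)` would. The minimal `fakeDocument` stub stands in for the browser DOM so the sketch is self-contained; names like `createLike` are this example's own, not Dojo API.

```javascript
// Minimal stand-in for the browser DOM so the sketch runs anywhere.
var fakeDocument = {
  createElement: function (tag) {
    return { tagName: tag.toUpperCase(), attributes: {} };
  }
};

// Conceptual sketch of dojo.create: make the node, then set each
// property on it, dojo.attr-style.
function createLike(doc, tag, props) {
  var node = doc.createElement(tag);
  for (var name in props) {
    if (Object.prototype.hasOwnProperty.call(props, name)) {
      node.attributes[name] = props[name]; // apply property to the new node
    }
  }
  return node;
}

// Usage, mirroring the call shape from the post:
var link = createLike(fakeDocument, "a", { href: "#", innerHTML: "hi" });
```

The appeal of the real API is the same as the sketch's: one call, one node, with properties applied in a single pass.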

1.3’s Core features the new “Acme” CSS selector engine, which provides a big speed boost for many fast-path operations. I blogged before about the work we did to make Acme fast, and rest assured it is (in aggregate, across all use cases) quicker than any other selector system you can get your hands on today. But selector performance isn’t where it’s really at, and I’ve been saying that for a long time.
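The “fast path” idea behind selector engines like Acme can be sketched roughly like this (an illustration of the general technique, not Dojo's actual implementation): when a selector is trivially simple — a bare tag, an `#id`, or a `.class` — the engine can skip the full parser and dispatch straight to a cheap native lookup such as `getElementById` or `getElementsByTagName`.

```javascript
// Sketch of fast-path detection: a selector that is exactly one tag
// name, one #id, or one .class (no combinators, attribute tests, or
// pseudo-classes) can bypass the full selector parser entirely.
function isFastPathSelector(selector) {
  return /^(?:[\w-]+|#[\w-]+|\.[\w-]+)$/.test(selector);
}

isFastPathSelector("#main");   // true  -> can use getElementById
isFastPathSelector(".item");   // true  -> can use a class scan
isFastPathSelector("div > p"); // false -> needs the full engine
```

Since simple selectors dominate real-world code, winning on the fast path tends to matter more than winning on exotic ones.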

Luckily, Pete Higgins decided to prove it and has been working on a new set of benchmarks with the help of other toolkit vendors (to ensure fairness) called “TaskSpeed”. Dojo 1.3 wins by a wide margin. Across all the reported browsers so far, Dojo is at least 2 times faster than other toolkits on common DOM operations. We’ve worked very hard over the years to make sure that Dojo’s APIs don’t encourage you to do things that will hurt you later, and TaskSpeed finally shows how much this philosophy pays off:

The numbers above are from TaskSpeed, a new toolkit benchmark developed by Pete Higgins with tests contributed by other toolkit authors to ensure fairness. Shorter is better.

DOM manipulation is a big bottleneck in today’s apps, usually just behind network I/O, and these tests demonstrate how Dojo’s approach to keeping things fast pays off not just on micro-benchmarks like CSS selector speed, performance improvements to single toolkit functions, or even file size — but on aggregate performance where it really matters. Dojo’s modern, compact syntax for these common operations doesn’t slow it down, either. For instance, if you go check out the TaskSpeed reporting page, you’ll see that where browsers are slowest (IE6/7/8, etc.), Dojo’s focus on performance pays off most. Why use a toolkit that’s going to hurt you when it really counts, particularly when Dojo is so easy to get started with?

Dojo’s Core has been designed from the ground up with APIs that encourage you to do things that are fast and keep you from doing things that are slow unless you really know what you’re doing. In some cases, we’ve made hard size-on-the-wire tradeoffs in order to keep actual app performance speedy. That hard engineering doesn’t show up in micro-benchmarks, single-test release-over-release improvements, or the “my toolkit is smaller” comparisons that some would prefer web developers focus on. It’s easy to win rigged games, after all. It’s only when you see APIs composed together in real-world ways, across browsers, that you can start to see the real impact of a toolkit’s design philosophy. Dojo is designed to help you make things that are awesome for users, and that means they need to be FAST.

Other toolkits have released performance numbers of late, and most of them have been either reported badly or run without much rigor, so it’s exciting to see everyone finally pitching in to build end-to-end tests that show how library design decisions interact with the real-world realities of browsers. The TaskSpeed tests have been designed to be both even-handed and reliable (no times below timer resolution, etc.). The reporting page is also designed to make the results understandable and put them in context. A lot of care has been taken to keep this benchmark honest. JavaScript developers have suffered at the hands of chart junk for far too long.
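The “no times below timer resolution” point deserves a concrete sketch. One standard way to stay above timer noise is to run a task in a loop until enough wall-clock time has elapsed, then report time per iteration — this is the general technique, not TaskSpeed's exact harness code:

```javascript
// Run `task` repeatedly until at least `minDurationMs` of wall-clock
// time has passed, then report the average cost per run. Because the
// total elapsed time is far larger than the timer's resolution, the
// per-iteration figure isn't dominated by clock granularity.
function timeTask(task, minDurationMs) {
  var iterations = 0;
  var start = Date.now();
  var elapsed = 0;
  do {
    task();
    iterations++;
    elapsed = Date.now() - start;
  } while (elapsed < minDurationMs);
  return elapsed / iterations; // average milliseconds per run
}
```

Timing a single run of a sub-millisecond task against a millisecond-resolution clock yields mostly zeros and noise; looping like this is what makes a benchmark's numbers trustworthy.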

I can’t do 1.3 justice in a single blog post, so I recommend that you check out these resources and then just dive in:

6 Comments

From a high-level perspective, it seems like the Sly chart gave a good approximation of the summation across all browsers; it wasn’t intended as a chart to describe the relative performance difference between specific browsers. If you want to show only one chart, that one isn’t too bad.

I would argue that some of the charts shown in the TaskSpeed reports aren’t very useful, for instance the entire “Libraries” section at the bottom.

That being said, this is great stuff! I definitely agree that more benchmarking needs to be done, not just the CSS selectors.

The Sly chart is junk because it uses “stacked bars”. Stacking values without, at a minimum, providing a table to explain them does more to obscure the data than to present it evenly. Also, having re-run Sly through a better set of benchmarks, I’ve found that their “additive error values without variance” chart wildly misrepresents the actual performance impact of their engine. It’s not faster; it just looks that way because they used a bad benchmark and a type of chart that works to obfuscate, not clarify, data.

FWIW, I don’t hold it against the Sly folks…benchmarking is hard. It requires experience and some background in interpreting numbers. Many engineers don’t have that background.

Also, FWIW, I argued against the “Libraries” section at the bottom, but it’s not a huge loss to have them. They simply add detail about relative browser performance at the toolkit level. The more important thing about the TaskSpeed charts is that they show by-toolkit numbers on a normalized scale, grouped by browser. Without that view (which Sly didn’t provide), the numbers are more-or-less noise.

Dojo’s create() method doesn’t support the full stan-like syntax for nested creation. It’s (currently) more suitable for creating individual nodes. I had argued for a hierarchical version, but it was deemed too big (and potentially too slow) for Dojo Core. We may add something like that back if we can come to a resolution on that.

@alex: Ok, interesting to hear, thanks. I’ve learned to enjoy the mochi-style but I’ve understood (and measured) it to not necessarily have the optimum performance, though I’d argue most of (my) html is create-once-use-many.
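The hierarchical, stan-like creation discussed above might look something like this nested-array builder — a hypothetical illustration of the API shape, not anything Dojo Core actually shipped. Each node is `[tagName, attributeObject, ...children]`, where children may be strings or further arrays:

```javascript
// Hypothetical stan/JsonML-style builder: turn a nested array spec
// into an HTML string in one call, instead of one create() per node.
function buildHTML(spec) {
  if (typeof spec === "string") return spec;           // text node
  var tag = spec[0], attrs = spec[1] || {}, html = "<" + tag;
  for (var name in attrs) {
    if (Object.prototype.hasOwnProperty.call(attrs, name)) {
      html += " " + name + '="' + attrs[name] + '"';
    }
  }
  html += ">";
  for (var i = 2; i < spec.length; i++) {
    html += buildHTML(spec[i]);                        // recurse into children
  }
  return html + "</" + tag + ">";
}

// One call builds a whole subtree:
var markup = buildHTML(
  ["ul", { "class": "menu" },
    ["li", {}, "first"],
    ["li", {}, "second"]]
);
// markup === '<ul class="menu"><li>first</li><li>second</li></ul>'
```

For create-once-use-many HTML, a declarative spec like this is convenient, but walking it recursively is exactly the kind of per-node overhead that made the hierarchical version a hard sell for Core.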

5 Trackbacks

[…] want the TaskSpeed library task test suite to be lost in the Dojo 1.3 announcement. Alex called it out: Pete Higgins has been working on a new set of benchmarks with the help of other toolkit vendors […]