The qooxdoo team has created tests for TaskSpeed with some surprising results:

On IE, qooxdoo is by far the fastest framework.

Across browsers and frameworks, qooxdoo ranked first on all versions of IE (i.e. 6, 7 and 8), and made its lowest mark coming in third on Firefox 3.0. This exceptional IE performance also leads to the best overall score. The IE results are a big surprise, and we’ll try to investigate what we do differently (better) than all the other JavaScript libraries.

As always, performance tests should be taken with a grain of salt. It’s hard to judge whether all implementations are really equivalent. For example, in the jQuery tests, John Resig implemented everything in a pure jQuery way. There are obvious optimizations he consciously omitted, but the result apparently reflects genuine jQuery coding style. There is no official qooxdoo way to work with the DOM yet, so we modeled our tests closely after the Dojo and jQuery tests.

If there is no standard way to access the DOM in qooxdoo, what do these results reflect? I thought TaskSpeed was about profiling common DOM operations? Classical RIA vs. low-level JavaScript framework … apples vs. oranges.

@SubtleGradient:
All the qooxdoo tests in TaskSpeed use synchronous DOM operations. We do have an asynchronous DOM layer, which creates DOM nodes on demand, but it is not used in TaskSpeed.
The chart generated by TaskSpeed (covering only IE8 and Firefox 3), however, is a bit unfair because the x-axis starts at one second. Since qooxdoo requires a little over a second to run, it appears to take almost no time. For this reason I’ve used the IE7 chart in my blog post, which starts at 0.

@digitarald:
There is no standard way to access the DOM in the sense that, until recently, qooxdoo was not supposed to be used for DOM operations only. The DOM API was simply not promoted as a public API. The real standard way was to use the widgets, which are implemented on top of our DOM API. Like ExtJS with ExtCore, we are now opening up this DOM API, and I think it’s completely reasonable to compare this part of qooxdoo with low-level libraries.

@WebReflection:
I would appreciate the addition of your pure DOM tests. This could define the baseline, and we as framework developers could better identify performance issues. I would suggest you fork TaskSpeed on GitHub and commit your changes. I’m sure Peter Higgins will integrate your patches. This is how I did it.

That is just one example of many where the author is using slow-ass stuff instead of faster methods. Maybe the guy who wrote this should try using the methods that someone using each framework correctly would use, instead of writing everything to take as long as possible.

That’s exactly the point of having the library authors write the test to their API. The API provided is expected to be an abstraction that developers will use. The purpose of the benchmark is to test real-world use of those abstractions—essentially, to test the performance of the API.

Perhaps you wouldn’t use the Element() DOM builder abstraction, but many of us do because it provides a lot of utility in a generally concise API. Replacing it with innerHTML is not a 1:1 replacement, and it becomes more and more problematic as you incorporate user data into the DOM nodes you’re creating.
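The user-data point is the crux: innerHTML parses markup, so untrusted text has to be escaped before concatenation, while a builder API that assigns text content directly sidesteps the issue. A minimal sketch in plain JavaScript (the `escapeHTML` helper is hypothetical, not part of any library under test):

```javascript
// Hypothetical helper: escape user-supplied text before it is ever
// concatenated into an innerHTML string. A builder-style API that sets
// text content directly would make this step unnecessary.
function escapeHTML(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Unescaped user data injected via innerHTML becomes live markup:
const userInput = '<img src=x onerror=alert(1)>';
const unsafe = '<li>' + userInput + '</li>';             // markup injection
const safe   = '<li>' + escapeHTML(userInput) + '</li>'; // inert text
```

This extra escaping work is part of what an innerHTML-based “optimization” quietly drops when it replaces a builder abstraction in a benchmark.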

There’s a very valid reason for testing the various libraries’ DOM APIs as such: people are using them, and it’s good to know what kind of performance we’re getting.

See eyelidlessness, it looks to me like Dojo and jQuery are using string insertion via innerHTML to add their li’s in this test round. For apples-to-apples’ sake, why would you use one library’s full Element creation method to create and inject sub-nodes for a test, while skipping that additional logic processing on the same test for a different library? I have to think that skipping the steps a library’s element-creation-and-insertion function performs, and using innerHTML instead, has to make a performance difference. If I am way off here, just let me know…
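The structural difference being described can be sketched without a browser. Here a tiny stub stands in for `document.createElement` so the snippet runs anywhere; the point is the shape of the work, not real DOM cost:

```javascript
// Minimal stand-in for document.createElement, so the contrast between the
// two construction styles is visible outside a browser.
function createElement(tag) {
  return {
    tag: tag,
    children: [],
    appendChild(child) { this.children.push(child); return child; },
  };
}

// Style A: one node object and one appendChild call per <li> — roughly what
// a full Element-builder API does under the hood.
function buildListWithNodes(items) {
  const ul = createElement('ul');
  for (const text of items) {
    const li = createElement('li');
    li.text = text;
    ul.appendChild(li);
  }
  return ul;
}

// Style B: concatenate one markup string and hand it over in a single shot —
// the innerHTML shortcut some tests used instead.
function buildListAsString(items) {
  return '<ul>' + items.map(t => '<li>' + t + '</li>').join('') + '</ul>';
}
```

Timing Style A for one library against Style B for another measures two different tasks, which is the apples-to-apples complaint.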

I have to agree with csuwldcat. Mostly because I don’t think there is a “preferred” way to do these things amongst, say, jQuery users. If there is, it’s just due to examples getting copied.

There are many ways to do the same thing in any of these languages, and which one is chosen depends on the situation and the programmer. So what are we testing the speed of? One programmer’s favorite solution, which may be based on clarity rather than speed.

Yeah, I tested some of the MooTools test snippets using standard methods of the MooTools library, done the way the examples right in the docs specify, and the millisecond totals were notably lower (a better score) on many tests. My buddy redid one of the jQuery tests – the one that does the (div[rel^=foo]) selector, something like that, I forget the specific one – and it went from over 300ms in Google Chrome to 60ms. These tests are flawed severely enough to warrant rescinding the conclusions the author is coming to. Some libs get to use native methods mixed with the lib methods, some don’t; some tests grab waaay more elements in their tag-name selector queries than others – just poor baselines for comparison. A more relevant title for TaskSpeed in its current state would be RandomTaskSpeed… lol, about as useful as an Austin Powers henchman!

Hmm, I was under the impression that the tests were written/reviewed by people from the respective communities. Weren’t they? If they were, I assume they reflect the culture and the spirit of those communities. If they weren’t, they should be reviewed – as far as I know, all the code is publicly available.

I think for the sake of comparability the tests should weed out innerHTML altogether (unless, maybe, a test cannot be implemented otherwise). It’s about library performance, not about bypassing the API.

@csuwldcat – I would love to see the JQ test that went faster. Provided it follows the test’s “English description” in procedure order, number of iterations and return values, there is no reason a better test should not be used. John Resig reviewed/rewrote the JQ tests and expressed the same concerns about the iterations and whatnot – but the fact of the matter is everyone _should_ be doing identical “tasks”, under identical constraints. I would hardly call it random. I thought it interesting to see the code required to accomplish identical tasks in the different libraries.
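The “identical tasks, identical return values” constraint can be checked mechanically before anything is timed. A rough sketch of such a guard in plain JavaScript – the two task implementations here are stand-ins, not actual TaskSpeed tests:

```javascript
// Two hypothetical implementations of the same "task": both must return the
// same value, or comparing their timings is meaningless.
function taskWithConcat(n) {
  let out = '';
  for (let i = 0; i < n; i++) out += '<li>' + i + '</li>';
  return out;
}
function taskWithJoin(n) {
  const parts = [];
  for (let i = 0; i < n; i++) parts.push('<li>' + i + '</li>');
  return parts.join('');
}

// Guard: refuse to time implementations whose return values differ, then
// run each one for the same number of iterations.
function timeEquivalent(impls, arg, iterations) {
  const expected = impls[0](arg);
  for (const impl of impls) {
    if (impl(arg) !== expected) throw new Error('implementations disagree');
  }
  return impls.map(function (impl) {
    const start = Date.now();
    for (let i = 0; i < iterations; i++) impl(arg);
    return Date.now() - start;
  });
}

const times = timeEquivalent([taskWithConcat, taskWithJoin], 100, 1000);
```

A harness like this would have caught a “faster” rewrite that quietly returns something different from the task description.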

As stated previously, TaskSpeed was not ‘ready’ to be released. I’d mentioned the use of .innerHTML in places in the initial post regarding this suite, and I either have addressed or plan to address a lot of the other statements above with the “real initial announcement” – which will hopefully include YUI, ExtJS, and @WebReflection’s “PureDom” library.

There are a _number_ of issues I want to address regarding the suite, the charts, the tests … Please hold off on bashing until I’ve had a chance to ‘justify my actions’.

Hey, no prob man, I just want to see a true test of the best the libs can do, without commingling regular DOM methods, and tuned to use the library methods best suited for each task. In that spirit of constructive effort, I would be willing to redo some of the MooTools tests that were not using the most efficient MooTools methods available, and my co-worker – the one who got the huge performance boosts by tweaking the jQuery tests – would be willing to do that lib’s methods as well. We would use strictly library methods and try for the most efficient possible. Just say the word, P-Higs!