Webby thoughts, mostly about interesting applications of ecmascript in relation to other open web standards. I live in Mountain View, California, and spend some of my spare time co-maintaining Greasemonkey together with Anthony Lieuallen.

2010-09-19

I have been poking around with some performance tests and profiling lately: first with Steve Souders' Browserscope, which gets the job done at the convenience cost of setting up a publicly hosted web page and tying it to a Google account (which you might first have to create), and, for loading performance (different ways of organizing script tags, images, iframes, style sheets and so on), Cuzillion, by the same author.

Today I found and toyed around a bit with jsperf.com by Mathias Bynens, which does much of the boring setup work for you when it comes to micro-benchmarking one snippet of javascript against another, assuming the thing you want to test is synchronous and doesn't involve complex preconditions. (I wonder what makes Array copying using a saved Array.prototype.slice significantly slower than the other, more evenly performance-matched variants?) As should always be stated when mentioning benchmarks and optimization: this kind of micro-benchmarking matters less than finding your code's hot spots and picking better algorithms where relevant. That said, it's still academic fun to play with this kind of thing.
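For context, the kinds of variants such an Array-copying test pits against each other might look something like this (a sketch of my own; the exact test cases on jsperf may differ):

```javascript
var arr = [1, 2, 3, 4, 5];

// variant 1: direct method call on the array
var viaMethod = arr.slice();

// variant 2: a saved reference to Array.prototype.slice,
// invoked via call() -- the surprisingly slow one
var slice = Array.prototype.slice;
var viaSaved = slice.call(arr);

// variant 3: a plain manual copy loop
var viaLoop = [];
for (var i = 0; i < arr.length; i++) viaLoop[i] = arr[i];

// variant 4: concat with no arguments also yields a shallow copy
var viaConcat = arr.concat();
```

All four produce equivalent shallow copies; the interesting part is only how fast each runs in a given engine.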

For this kind of small test, jsperf.com does all the Browserscope setup work for you, and lets others improve (fork) your test after the fact, adding versions you didn't think of yourself that do the same thing. I have a vague plan of making it run some performance tests for a little code snippet I whipped up recently that does a deep copy of a nested (JSON:able) structure from a parent window where there may be crud on Object.prototype (the code should run in an iframe free of such), to benchmark against a mere JSON.parse(JSON.stringify(taintedObject)), which would also clean away the crud:
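A minimal sketch of such a structural deep copy (the function name and exact shape are mine, not necessarily the original snippet) recurses over own properties only, rebuilding arrays and plain objects with the local frame's constructors so inherited crud from the parent window's Object.prototype doesn't tag along:

```javascript
// Deep-copies a JSON:able value from another frame, rebuilding it in
// this (clean) frame. Only own, enumerable properties survive, which
// is also what strips any Object.prototype additions from the source.
function deepCopy(value) {
  if (value === null || typeof value !== "object")
    return value; // primitives (and null) copy by value

  // cross-frame-safe Array check: instanceof fails across windows
  if (Object.prototype.toString.call(value) === "[object Array]") {
    var arrayCopy = [];
    for (var i = 0; i < value.length; i++)
      arrayCopy[i] = deepCopy(value[i]);
    return arrayCopy;
  }

  var objectCopy = {};
  for (var key in value)
    if (Object.prototype.hasOwnProperty.call(value, key))
      objectCopy[key] = deepCopy(value[key]);
  return objectCopy;
}
```

The hasOwnProperty guard is what makes this equivalent to the JSON round-trip for cleaning purposes: inherited properties never make it into the copy.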

What surprised me about the above code, when testing it on the kinds of payloads it would typically handle (data somewhat heavy on largeish strings), was that it was actually a bit faster than the browser's native JSON codecs, whereas for larger inputs native code always won out. Sometimes you just have to test stuff to find out what wins.
