What is ASM.JS?

Now that mobile computers and cloud services have become part of our lives, more and more developers see the potential of the web and online applications. ASM.JS, a strict subset of JavaScript, is a technology that provides a way to achieve near-native speed in browsers, without the need for any plugin or extension. It is also possible to cross-compile C/C++ programs to it and run them directly in your browser.
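To make the "strict subset" part concrete, here is a minimal sketch of what an asm.js module looks like. The module name and function are illustrative, not taken from any real codebase; the essential parts are the `"use asm"` pragma, which tells a supporting engine to validate and ahead-of-time compile the body, and the `|0` annotations, which are asm.js type coercions marking values as 32-bit integers.

```javascript
// A minimal asm.js-style module (illustrative). It takes the standard
// library, a foreign-function object, and a heap buffer, per the asm.js
// module signature.
function AsmModule(stdlib, foreign, heap) {
  "use asm";

  // Add two 32-bit integers; the |0 coercions give the validator
  // the static type information it needs.
  function addInt(a, b) {
    a = a | 0;
    b = b | 0;
    return (a + b) | 0;
  }

  return { addInt: addInt };
}

// Because asm.js is a subset of JavaScript, any engine can run this
// code; engines without asm.js support simply skip the optimization.
var mod = AsmModule({}, {}, new ArrayBuffer(0x10000));
console.log(mod.addInt(2, 3)); // 5
```

This fallback behavior is exactly why no plugin is needed: the same source runs everywhere, and only gets faster where the engine recognizes the pragma.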

In this post we will compare JavaScript and ASM.JS performance in different browsers, trying out various kinds of web applications and benchmarks.

For some time it has been possible to build and run QtWebKit on Linux using Google's V8 JavaScript engine instead of the default JavaScriptCore. I thought it would be good to see some numbers comparing the runtime performance of the two engines in the same environment and also measuring the performance of the browser bindings.

Many people use OProfile for system measurements on Linux. It is a very handy profiling tool (it supports hardware performance counters and has very low runtime overhead), although it is still in its alpha phase. Using OProfile to measure the performance of an application is a straightforward task on Linux: you get the latest OProfile for your machine or device, build it, configure and run the daemon, and start measuring an application or your whole system. Finally, you can execute several scripts which report the performance numbers to you (according to your reporting requests). It sounds easy, doesn't it?

Perhaps the first question to pop into somebody's mind is why we need a fast JavaScript parser. A browser does so many things, so why focus on JavaScript parsing, which probably consumes only a small amount of time? Earlier, we thought so too, but OProfile yielded some surprising results. The JavaScript parser was often responsible for 15-20% of the runtime of libWebKit. Further analysis showed that popular web sites usually come with big JavaScript source files (about 200-400k). These JS files contain a lot of source code, much of it browser-specific. The expensive operation here is the syntax checking of the whole source code, as the browser must reject the execution of invalid JS files.
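The whole-source syntax check can be illustrated from JavaScript itself. The `Function` constructor parses its body the same way the engine parses a script, so the sketch below (the helper name is ours, purely illustrative) shows how a single stray token anywhere in a large file forces the whole thing to be rejected before any of it runs.

```javascript
// Illustrative helper: parse a source string without executing it.
// The Function constructor performs a full syntax check of the body
// and throws SyntaxError on the first invalid construct.
function isValidJs(source) {
  try {
    new Function(source); // parses, but does not run, the body
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) return false;
    throw e; // anything else would be unexpected here
  }
}

console.log(isValidJs("var x = 1; function f() { return x; }")); // true
// One unbalanced token late in the file invalidates the entire script:
console.log(isValidJs("var x = 1; function f() { return x; } )")); // false
```

This is why parsing speed matters even for code that never executes: every byte of those 200-400k files must pass through the syntax checker first.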

I have a quite fundamental problem... Let's assume that I came up with a bright new idea for speeding up the execution of JavaScript programs. Once implemented, of course, I'd like to measure its effect and present the results to the world.

Benchmark suites exist exactly for this purpose. Now, if it takes 100 seconds for the unaltered JavaScriptCore to execute the whole SunSpider benchmark, and this drops to 77 seconds with my improvements applied, then what should I report?
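The question is less trivial than it looks, because the same pair of numbers yields two different-sounding figures depending on how you frame it. A small sketch (the function name is ours) of the two common framings:

```javascript
// Two common ways to present the same measurement: the percentage of
// time saved, and the speedup factor. They are not the same number.
function reportImprovement(baselineSec, improvedSec) {
  var timeReduction = (baselineSec - improvedSec) / baselineSec; // fraction of time saved
  var speedup = baselineSec / improvedSec;                       // throughput factor
  return { timeReduction: timeReduction, speedup: speedup };
}

var r = reportImprovement(100, 77);
console.log((r.timeReduction * 100).toFixed(1) + "% less time"); // 23.0% less time
console.log(r.speedup.toFixed(2) + "x speedup");                 // 1.30x speedup
```

So the 100-to-77-second result can honestly be reported either as a 23% reduction in running time or as a 1.30x speedup, and readers tend to perceive those two claims quite differently.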