I realise I’m behind the times, but I’ve only just discovered the DownloadHelper add-on for Firefox, which allows you to download videos and convert them into a format suitable for viewing on an iPhone. This has allowed me to watch a number of Google IO talks from 2008 and 2010, with the aim of understanding a little more about Android. I thought the following talks were particularly good and have added some brief notes on each of them.

The Dalvik VM is used to run Java applications on the Android platform. The talk covered the reasons for needing a new, more concise bytecode format, which is typically generated by converting Java class files. The architecture of the VM was also covered, with a good explanation of how the interpreter is optimised by inlining the dispatch routine into each bytecode handler and aligning the individual handlers on instruction-cache boundaries.
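To make the dispatch discussion concrete, here is a minimal sketch of a register-based bytecode interpreter in Python. Note that this sketch uses a central dispatch loop for clarity; the optimisation described in the talk is the opposite, copying (inlining) the fetch-and-dispatch step into the end of every handler, with Dalvik's real handlers hand-written and aligned on instruction-cache boundaries. The opcodes and encoding here are invented for illustration, not Dalvik's.

```python
# Toy register-based interpreter: each instruction is (opcode, *operands).
def run(code, regs):
    pc = 0
    while pc < len(code):                 # central dispatch loop
        op, *args = code[pc]
        pc = HANDLERS[op](regs, args, pc)
    return regs

def op_const(regs, args, pc):             # CONST dst, value
    dst, value = args
    regs[dst] = value
    return pc + 1

def op_add(regs, args, pc):               # ADD dst, src_a, src_b
    dst, a, b = args
    regs[dst] = regs[a] + regs[b]
    return pc + 1

HANDLERS = {"CONST": op_const, "ADD": op_add}

program = [("CONST", 0, 2), ("CONST", 1, 3), ("ADD", 2, 0, 1)]
print(run(program, [0, 0, 0]))  # -> [2, 3, 5]
```

The cost the talk is attacking is the loop overhead above: every instruction pays for the jump back to the top and the table lookup, which inlined, cache-aligned handlers avoid.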

On Android, a modified Linux kernel uses copy-on-write as a means of sharing pages between OS processes. A zygote process is created at system startup: a VM instance with a number of system classes and libraries preloaded. When a new VM instance is required, the zygote is forked. Of course, this means we want to avoid writing to shared pages, since writing to a page causes it to be copied and no longer shared with the zygote. This led to the design decision of keeping the mark bits used by the garbage collector separate from the objects themselves.
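The mark-bit decision can be sketched as follows. This is a conceptual illustration only, with object "addresses" reduced to list indices: the point is that marking writes only to a small side bitmap, so the pages holding the preloaded zygote objects are never dirtied and stay shared copy-on-write.

```python
# Side bitmap for GC mark bits, kept away from the objects themselves.
class MarkBitmap:
    def __init__(self, heap_size):
        self.bits = bytearray((heap_size + 7) // 8)  # 1 bit per object slot

    def mark(self, index):
        self.bits[index // 8] |= 1 << (index % 8)

    def is_marked(self, index):
        return bool(self.bits[index // 8] & (1 << (index % 8)))

heap = ["zygote-object"] * 1024   # stands in for shared, preloaded pages
marks = MarkBitmap(len(heap))
marks.mark(42)                    # the GC touches the bitmap, not the heap pages
assert marks.is_marked(42) and not marks.is_marked(43)
```

Had the mark bit lived in each object's header, the first full GC in a forked VM would have written to (and so copied) every page containing a zygote object.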

A trace JIT has recently been added to the Dalvik VM. The talk covered why a trace JIT is better suited to a mobile platform than a method-level JIT. In particular, the interpreter is still used for the large portions of code that the system does not deem hot. This keeps things nice and compact, with the code expansion that comes with JIT compilation limited to the hot paths through the code. The system will also freely pitch (discard) generated code once it reaches a certain size, restarting the generation process.
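A hedged sketch of the selection policy described above: count executions of each potential trace head, compile a trace once its counter passes a threshold, and pitch the whole code cache when it grows past a cap. The threshold, cap, and names here are made-up illustrative values, not Dalvik's actual tuning.

```python
HOT_THRESHOLD = 3   # executions before a trace head is considered hot
CACHE_CAP = 2       # max compiled traces before the cache is pitched

counters = {}
code_cache = {}

def execute(trace_head):
    if trace_head in code_cache:
        return "compiled"                       # run the generated code
    counters[trace_head] = counters.get(trace_head, 0) + 1
    if counters[trace_head] >= HOT_THRESHOLD:
        if len(code_cache) >= CACHE_CAP:        # cache full: pitch everything
            code_cache.clear()
            counters.clear()
        code_cache[trace_head] = f"native-code-for-{trace_head}"
    return "interpreted"                        # cold code stays interpreted

for _ in range(3):
    execute("loop-A")            # third pass makes loop-A hot
assert execute("loop-A") == "compiled"
```

Pitching everything and recounting sounds wasteful, but on a memory-constrained device it trades a little recompilation for a hard bound on the code cache.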

A good introduction to APKs (application packages), tasks and processes. This talk covered the lifecycle events to which an application binds, allowing it to load and persist its data state and (separately) its UI state as focus moves between applications and as applications are shut down.
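The data-state/UI-state split can be sketched like this. The class and method names below are hypothetical, not the real Android Activity API: the idea is that durable data is persisted the moment focus is lost (the process may never run again), while transient UI state is saved separately and handed back if the system kills and recreates the process.

```python
class SketchActivity:
    def __init__(self, storage, saved_ui_state=None):
        self.storage = storage                    # stands in for persistent storage
        self.notes = storage.get("notes", [])     # reload durable data state
        self.scroll = (saved_ui_state or {}).get("scroll", 0)  # restore UI state

    def on_pause(self):
        # Focus moved away: persist data now, we may be killed without warning.
        self.storage["notes"] = self.notes

    def save_ui_state(self):
        # Returned to us later if the process is recreated.
        return {"scroll": self.scroll}

storage = {}
a = SketchActivity(storage)
a.notes.append("buy milk")
a.scroll = 120
a.on_pause()
ui = a.save_ui_state()
b = SketchActivity(storage, ui)   # process killed and recreated
assert b.notes == ["buy milk"] and b.scroll == 120
```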

A great introduction to the technique the V8 engine uses to get good performance from JavaScript. In brief, V8 imposes structure on JavaScript object instances, which are essentially dictionaries, grouping objects together via a hidden class that identifies the set of objects with the same properties (as long as they were added to the dictionary in the same order). This extra structure allows native code to be generated that can use inline caches, with the class object guarding the inlined action, to get good performance. The talk also covers the garbage collector – this has two generations, with the first generation collected via copying, and the second generation using mark-sweep or mark-compact depending on heuristics.
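A small simulation of the hidden-class idea in Python (class names invented for illustration): objects that add the same properties in the same order end up sharing a shape via shared transitions, so a generated stub can guard on the shape pointer and then read the property from a fixed slot — which is exactly what an inline cache remembers.

```python
class Shape:
    def __init__(self, slots=()):
        self.slots = slots          # property names, in insertion order
        self.transitions = {}       # property name -> next shape

    def add(self, name):
        if name not in self.transitions:
            self.transitions[name] = Shape(self.slots + (name,))
        return self.transitions[name]

EMPTY = Shape()

class Obj:
    def __init__(self):
        self.shape = EMPTY
        self.values = []            # one slot per property, ordered by shape

    def set(self, name, value):
        if name not in self.shape.slots:
            self.shape = self.shape.add(name)   # shared transition
            self.values.append(value)
        else:
            self.values[self.shape.slots.index(name)] = value

p = Obj(); p.set("x", 1); p.set("y", 2)
q = Obj(); q.set("x", 3); q.set("y", 4)
r = Obj(); r.set("y", 5); r.set("x", 6)
assert p.shape is q.shape            # same insertion order -> same hidden class
assert p.shape is not r.shape        # different order -> different class
x_slot = p.shape.slots.index("x")    # an inline cache caches this slot index
assert q.values[x_slot] == 3
```

The inline-cache guard is the `p.shape is q.shape` identity test: if an object arrives with the cached shape, the stub reads the fixed slot directly; otherwise it falls back to the slow dictionary-style lookup.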

The standard JavaScript objects are also written in JavaScript. This requires a bootstrap phase, in which a fresh collection of system objects is compiled into a part of the heap that can then be dumped and reloaded into a new VM. In the old days, when I worked on Lisp systems, we had the same mechanism. Virtually all of the system was written in Lisp, and it was bootstrapped by taking an existing Lisp image, loading the new code into it while noting the heap segment into which it was allocated, and using that segment as the initial heap for the new system, which could then read in the other parts of the system. Writing a system in itself is great because performance improvements to the compiler and runtime improve the performance of the system itself.

There seem to be loads of systems around for compiling a high-level language down to JavaScript so that it can execute in a browser. GWT is one of the more mature of these, and this talk gave a good overview of some of the recent improvements to the GWT system.