NAMASTE: Adaptive Optimization in Interpreters

Interpreters inhabit a sweet spot on the performance/price curve of programming language implementation: an interpreter is cheaper to implement than a compiler, but compilers usually produce faster code. This is particularly true for high abstraction-level interpreters, where many traditional optimizations are not effective. Consequently, such interpreters have a reputation for being too slow.

This talk focuses on a previously successful line of research in purely interpretative optimizations carried out at the Compilers and Languages group of the Institute of Computer Languages, and presents new results from continuing this line of research. In particular, we give a brief overview of purely interpretative inline caching using quickening and show how to leverage this foundation for a novel adaptive optimization: native machine-abstraction execution, or NAMASTE for short. We have implemented NAMASTE and report that this technique boosts our previously reported maximum speedup by more than 40%: from a factor of up to 2.4 to a factor of up to 3.4. Since this technique, too, is purely interpretative, it offers the usual benefits of interpreters: ease of implementation, portability, and, compared to a just-in-time compiler, constant and low memory usage.