Like the original Squeak VM, Cog is implemented and developed in Smalltalk and translated into a lower-level language to produce the production VM. Being a Smalltalk program, it is a delight to develop. Cog is available under the MIT open source license and is unencumbered for commercial deployment. Cog lives on GitHub at https://github.com/OpenSmalltalk/vm. See README.md for more information on the repository.

Cog’s performance relative to the existing Squeak interpreter varies, depending on the benchmark chosen. As of early 2011, the Cog JIT used strong inline caching techniques and stack-to-register mapping that resulted in a register-based calling convention for low-arity methods. Due to the complexity of the Squeak object representation, it had a limited set of primitives implemented in machine code that, for example, excluded object allocation. Performance of the early-2011 JIT for the nbody, binarytrees and chameneos redux benchmarks from the Computer Language Shootout is in the range of 4 to 6 times faster than the interpreter. As of mid 2014 the new Spur object representation provides the Cog JIT with more opportunities to optimize. Performance for the same set of benchmarks is 4 to 11 times faster than the interpreter, and overall Cog Spur runs the shootout benchmarks on x86 in roughly 40% less time than VisualWorks’s HPS.
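To give a feel for what an inline cache buys, here is a minimal sketch in Python (purely illustrative — Cog implements this in machine code at each send site, not with objects like these): a monomorphic inline cache remembers the receiver's class and the method found by the last full lookup, so repeated sends to receivers of the same class skip the lookup entirely.

```python
# Illustrative sketch of a monomorphic inline cache (not Cog's actual code).
# A call site caches (receiver class, method) from its last full lookup;
# a send whose receiver has the cached class takes the fast path.

class InlineCache:
    def __init__(self):
        self.cached_class = None
        self.cached_method = None
        self.lookups = 0  # counts slow-path (full) method lookups

    def send(self, receiver, selector):
        cls = type(receiver)
        if cls is self.cached_class:               # cache hit: no lookup
            return self.cached_method(receiver)
        self.lookups += 1                          # cache miss: full lookup
        method = getattr(cls, selector)
        self.cached_class, self.cached_method = cls, method
        return method(receiver)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

site = InlineCache()
for _ in range(5):
    site.send(Point(3, 4), "magnitude")
print(site.lookups)  # only the first send pays for a full lookup -> 1
```

In the real JIT the "cache" is the send instruction itself: the call is patched to jump to a method entry point that checks the receiver's class, which is why a hit costs only a compare and branch.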

Cog is now the standard VM for Squeak, Pharo, Newspeak and Scratch on the Raspberry Pi. Cog currently has back ends for x86, ARMv6 and x64 (x86_64), with a MIPSEL back end in preparation. Spur provides both 32-bit and 64-bit support: Squeak 5.0 uses Spur and is available in either 32-bit or 64-bit versions, while Pharo is currently transitioning to Spur for the upcoming Pharo 6 release.

I am writing an occasional series of blog posts describing the implementation on this site; see the Cog category at the left side of the page. With Clément Béra I’m working on adaptive optimization (a.k.a. speculative inlining) at the image level, optimizing from bytecode to bytecode. We call this project Sista, which stands for Speculative Inlining Smalltalk Architecture. Clément’s image-level (entirely in Smalltalk, above the virtual machine) adaptive optimizer is called Scorch. Sista should offer about a 3x speed-up (roughly a 66% reduction in run time) for conventional Smalltalk code. The combination of Spur and Sista should yield almost a 5x speed-up over the original Cog. See the side bar for more information.
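The core idea behind speculative inlining can be sketched in a few lines of Python (again purely illustrative — Scorch works on bytecode, and the class names here are invented for the example): after profiling shows that one receiver class dominates a call site, the send is replaced by the inlined method body protected by a class-check guard; if the guard ever fails, execution falls back (deoptimizes) to an ordinary dynamic send.

```python
# Illustrative sketch of speculative inlining with a guard (not Scorch's
# actual code). Fraction and Scaled are hypothetical example classes.

class Fraction:
    def __init__(self, n, d):
        self.n, self.d = n, d
    def as_float(self):
        return self.n / self.d

class Scaled:
    def __init__(self, v):
        self.v = v
    def as_float(self):
        return float(self.v)

def generic_send(receiver):
    # Ordinary dynamic dispatch: look the method up on the receiver.
    return receiver.as_float()

def make_optimized(expected_class):
    # "Compile" a specialized version of the call site, speculating that
    # the receiver will be an instance of expected_class.
    def optimized(receiver):
        if type(receiver) is expected_class:   # speculation guard
            return receiver.n / receiver.d     # inlined Fraction body
        return generic_send(receiver)          # guard failed: deoptimize
    return optimized

call = make_optimized(Fraction)
print(call(Fraction(1, 2)))  # fast inlined path -> 0.5
print(call(Scaled(3)))       # guard fails, generic fallback -> 3.0
```

Inlining the body exposes it to further optimization (constant folding, eliminating the send overhead entirely), which is where the projected speed-up for conventional Smalltalk code would come from.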

Cog is simply a small part of a Smalltalk system. The name is appropriate, but inspired in part by another beautiful Cog.