Friday, December 20, 2013

Building a Lisp Interpreter from Scratch -- Part 13: Updates

Quite a few of the posts in this series are woefully out of sync with the current code base, so I thought I'd do a post on the most significant developments.

Memory System

It turns out that we can have our cake and eat it too: native pointers/memory via malloc/free that we can still tag with object types. The magic incantation that makes this possible is posix_memalign(), which lets us allocate aligned memory chunks; if we request memory aligned to a specified boundary, we get back pointers with the corresponding lower bits zeroed out. Voila, we now have a place to tag our object types.
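A minimal sketch of the trick (the tag values, the 16-byte alignment, and the helper name are illustrative assumptions, not the actual code base):

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>
#include <stdint.h>
#include <assert.h>

typedef uintptr_t OBJECT_PTR;

#define CONS_TAG 0x1                     /* hypothetical tag value */
#define TAG_BITS 4                       /* 16-byte alignment frees the low four bits */
#define TAG_MASK ((1u << TAG_BITS) - 1)

/* Allocate an aligned chunk and tag the resulting pointer with the object type. */
OBJECT_PTR alloc_tagged(size_t size, uintptr_t tag)
{
  void *p;
  if (posix_memalign(&p, 1 << TAG_BITS, size) != 0)
    return 0;                            /* allocation failed */
  /* posix_memalign guarantees the low TAG_BITS bits are zero... */
  assert(((uintptr_t)p & TAG_MASK) == 0);
  return (OBJECT_PTR)p | tag;           /* ...so the tag fits right there */
}
```

Freeing such an object just means masking the tag back off before calling free().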

Using native pointers this way means that we don't need to manage the heap ourselves. No more mucking around with free lists, start and end segments, and so on. Life just became a whole lot simpler.

Object System

OBJECT_PTRs are still unsigned ints, but they're no longer indexes into the heap decorated with tag values; they are uintptr_t values (obtained from posix_memalign(); see above) to which the tag has been appended. The same logic for marshalling and unmarshalling objects still applies, of course; the only catch is that the number we get after lopping off the tag bits is not an index into a heap managed by ourselves, but a native pointer.
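The marshalling/unmarshalling logic thus reduces to a couple of bit operations; the helper names and the 4-bit tag width below are assumptions, not the actual macros in the code base:

```c
#include <stdint.h>
#include <assert.h>

typedef uintptr_t OBJECT_PTR;

#define TAG_MASK 0xF   /* assumes 16-byte-aligned allocations */

/* The tag lives in the low bits of the OBJECT_PTR... */
static inline uintptr_t get_tag(OBJECT_PTR o) { return o & TAG_MASK; }

/* ...and stripping it yields the native pointer originally
   returned by posix_memalign(). */
static inline void *get_ptr(OBJECT_PTR o)
{
  return (void *)(o & ~(uintptr_t)TAG_MASK);
}
```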

The other change in the object system has to do with integers/floats and with continuation objects. Integers and floats are also created using posix_memalign(), with the corresponding tag appended to the pointers; this is somewhat inefficient, since new space is allocated for each occurrence of a number, but it doesn't seem to affect performance that much. As for continuation objects, we create a one-word space and plonk the relevant stack object there. The earlier hackery is thus mercifully done away with.
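Boxing an integer then looks something like the sketch below (INTEGER_TAG and the function names are hypothetical; only the technique -- one aligned word per occurrence, tag appended -- comes from the post):

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>
#include <stdint.h>
#include <assert.h>

typedef uintptr_t OBJECT_PTR;

#define INTEGER_TAG 0x2   /* hypothetical tag value */
#define TAG_MASK    0xF

/* Each occurrence of a number gets its own freshly allocated,
   16-byte-aligned word, with the integer tag stashed in the pointer. */
OBJECT_PTR make_integer(int n)
{
  void *p;
  if (posix_memalign(&p, 16, sizeof(int)) != 0)
    return 0;
  *(int *)p = n;
  return (OBJECT_PTR)p | INTEGER_TAG;
}

/* Unboxing: strip the tag, dereference the native pointer. */
int get_integer(OBJECT_PTR o)
{
  return *(int *)(o & ~(uintptr_t)TAG_MASK);
}
```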

Garbage Collection

The main change here is that garbage collection no longer happens only after each top-level expression has been evaluated. While that approach was simpler, it breaks down when a large computation is in progress and no memory is available because GC hasn't happened yet. The solution is to trigger GC after every, say, 1000 instructions of the virtual machine. The trick is to protect the VM's registers (reg_accumulator, reg_current_env, and their brethren) from being garbage collected -- this is done by keeping them pinned to the grey set.
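The trigger mechanism can be sketched as below; the collector here is a stand-in stub, and everything except the register names, the grey-set idea, and the 1000-instruction interval is an assumption:

```c
#include <stdint.h>

typedef uintptr_t OBJECT_PTR;

/* VM registers (names from the post); the real VM has more of these. */
OBJECT_PTR reg_accumulator, reg_current_env;

#define GC_INTERVAL 1000   /* collect every 1000 VM instructions */

static unsigned long instr_count = 0;
static unsigned long gc_runs = 0;

/* Stand-in for the real collector. In the actual code, the VM
   registers are kept pinned to the grey set, so the mark phase
   always treats them as live and never reclaims them. */
static void gc(void)
{
  gc_runs++;
  /* pin_to_grey_set(reg_accumulator);
     pin_to_grey_set(reg_current_env);
     ... mark and sweep ... */
}

/* Called once per VM instruction dispatched. */
void vm_step(void)
{
  if (++instr_count % GC_INTERVAL == 0)
    gc();
}
```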

Serialization

Since we're not managing the heap ourselves, using TPL for serialization is no longer feasible, as we don't have a single pointer to serialize. We therefore serialize and deserialize the relevant OBJECT_PTRs ourselves. The format chosen is JSON, and we use the cJSON library for this (Update: this has since been replaced with a homegrown JSON parser, on account of the need to handle large JSON arrays in a performant way). One positive effect of this change is that the image file is now practically tiny (about 600K), compared to earlier, when the entire heap -- whether fully used or not -- was dumped to hard disk. Oh, we also now 'serialize' the loaded foreign libraries. Update: serialization and deserialization are now possible at the level of individual objects too, through the SAVE-OBJECT and LOAD-OBJECT special forms.
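To give a flavour of the per-object approach, here is a toy serializer for a boxed integer; the JSON shape, tag mask, and function name are purely illustrative (the real image format serializes whole object graphs):

```c
#include <stdio.h>
#include <stdint.h>

typedef uintptr_t OBJECT_PTR;

#define TAG_MASK 0xF

/* Emit one boxed integer as a small JSON record: the tag identifies
   the object type, the value is read through the untagged pointer. */
int serialize_integer(char *buf, size_t n, OBJECT_PTR o)
{
  int *p = (int *)(o & ~(uintptr_t)TAG_MASK);
  return snprintf(buf, n, "{\"tag\": %lu, \"value\": %d}",
                  (unsigned long)(o & TAG_MASK), *p);
}
```

Deserialization goes the other way: parse the record, allocate a fresh aligned word, and re-tag the new pointer.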

The UI

The UI is a bit more user-friendly now; there is paren matching, some rudimentary indentation (nothing fancy, but better than nothing), and using the workspace efficiently got a bit easier. Not sure whether I mentioned this in one of the earlier posts, but there is now a debugger window that helpfully shows the in-progress frames and their respective local variables. We're not in Smalltalkville yet, but getting there.

P.S. There is now also a special form called 'profile' which takes an expression as its argument and prints the profiling information: for each sub-expression called by this expression, the number of times the sub-expression was called, how much wall-clock and CPU time it took, and the number of words allocated during its evaluation. Screenshots below.