The Problem

Tight feedback loops are awesome.

For me, tight feedback cycles are right at the heart of every
methodological advance that the software industry has made over the
past twenty years.

CI, CD, TDD and lean and agile methods all work because they leverage
the exponential gains that we get by amplifying learning through
feedback. The tighter the loop and the higher the information
content, the faster we can move and adapt.

Reducing the time and friction involved in QA and testing processes
means we can increase the scope of what can be tested on developers’
own machines, bringing this valuable feedback right into developers’
own development cycle.

With in-memory databases, embedded web servers and suchlike we can
make fairly realistic end-to-end automated testing practical right
within the development cycle, and sometimes, with just a little more
ceremony, we can even spin up reasonable simulacra of
deployment architectures using tools like Vagrant.

However, it only takes a momentary lapse to lose valuable
time. Forgetting to run tests before an svn commit or git push can
lead to failed builds with knock-on effects and delays.

Client-side
git hooks
solve this for projects in git. You can easily set up git
to automatically run all your checks every time you push (or even at
each commit). Never again will you commit and see a wave of builds
kicking off on your CI servers only to suddenly remember you missed
running the unit tests after your last little tweak.

However git hooks are private to your repository so it takes a little
effort to share them amongst developers. In many scenarios
(particularly fairly centralised development models that are typical
in commercial closed-source development), it is appropriate to
enforce client-side hooks at a project level.

I’ve wanted to do that in the past but haven’t found a particularly
satisfying way to do it. Then while working on a node.js project this
week, a colleague of mine pointed me at an effective solution in
javascript-land
(git-pre-hooks).

It’s a simple approach: define the hooks you want in your project
metadata and allow the build tool to deploy and manage your git hooks.

There may be many implementations of this same approach out there but
I want one for Leiningen and I couldn’t find an
equivalent, so I wrote it.
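The project.clj configuration looks something like this - an
illustrative sketch only (the hook keys and version shown here are
assumptions; the README is the authoritative reference):

;; in project.clj
:plugins [[lein-githooks "0.1.0"]]   ; version illustrative
:githooks {:auto-install true
           :pre-push ["test"]
           :pre-commit ["check"]}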

Then when you next run any leiningen task (this is the optional
:auto-install bit), the hooks will be installed or updated. The
hook scripts themselves simply invoke leiningen to run the specified
commands, so there are currently a lot of JVMs spinning up in the
process and it’s not the zippiest.

You can use it more manually if you prefer (turn :auto-install off
and use lein githooks install to manage the hooks) and you can
override project settings in your own profiles.clj.

Full instructions in the README.

…and feedback is, of course, welcome.

Continuous Testing

Hooks are not the only approach to having tests run frequently inside
the development cycle. Continuous testing runs tests repeatedly or
on every file change. This has similar or even better
benefits. Midje’s autotest
is an example in Clojure.

However, even if you are using continuous testing you might want to
run other tasks such as static code analysis (from
Eastwood for example) on pre-push.

This time round I won’t say much about the contents of the
chapter. The translation is an accompaniment to the book and the blog
post is in no way a substitute. Buy the book if you don’t already have it!

Chapter 3

Chapter three of
LiSP
begins by reviewing a range of control structures, historical and
modern, that incorporate the notion of an escape - a transfer of
control that is non-local but nevertheless less powerful than
arbitrary ‘goto’. Unlike ‘goto’, you can only escape back to places
you’ve already been.

prog, return

catch, throw

block, return-from

These can be used for optimisation, for instance, for shortcutting out
of a deeply nested search procedure or for error handling as per the
near-ubiquitous try / catch constructs in popular OO languages.

Each pair provides a means of marking a continuation at which control
will resume and a means of transferring control to that continuation.

In addition the book covers unwind-protect, a form that interacts
with the various other escapes to provide an analogue of the familiar
finally construct, ensuring that a body of code is run when control
escapes a block regardless of how the control escapes.

The notion of continuation that is gestured at by these control
structures is famously provided as a first class object in
Scheme. call-with-current-continuation,
a.k.a. call-cc, enables a continuation to be identified, named and
stored (with indefinite extent) for later use.

Using call/cc, any of the other control flow primitives can be simulated.
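For instance (a sketch in the book’s Scheme-flavoured lisp rather
than Clojure), the escape-only behaviour falls out directly:

(call/cc
  (lambda (k)        ; k names the continuation of the whole form
    (+ 1 (k 42))))   ; invoking k abandons the pending (+ 1 ...)
;; => 42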

Furthermore, by reifying this notion of continuation in our
interpreter, using either the facilities in the underlying lisp
(call/cc) or via a translation to continuation passing style
(“CPS”), any of these control structures can be provided as primitives
in our daughter lisp.

Chapter three describes both these methods although most of it is
devoted to the strategy we need in Clojure - CPS.
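To give a flavour of the style (an illustration in Clojure, not the
book’s code): in CPS every function takes an explicit continuation k
and delivers its result by calling it, instead of returning.

;; (1 + 2) * 3, with control threaded through continuations
(defn plus-k [a b k] (k (+ a b)))
(defn times-k [a b k] (k (* a b)))

(plus-k 1 2 (fn [sum] (times-k sum 3 identity)))
;; => 9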

The Translation

The translation sticks closely to the strategy in the book with a few
renamings here and there. It’s pretty generously commented, so inspect
the source for more detail.

Instead of directly mirroring the object orientation approach in the
book, I’ve eradicated the few non-essential uses of implementation
inheritance in the book (e.g. full-environment inheriting
null-environment…) in favour of a more idiomatic protocol / record
representation in Clojure. Further discussion of this below.

I’ve implemented call/cc, block / return-from and catch /
throw. I’ve left out unwind-protect for now at least, despite its
obvious utility. As our evaluator still lacks even such
creature-comforts as an extensible global namespace, it’s hardly a
pressing issue.

A Note on Object Modelling

Over years of dealing with Java and C++ and some pretty large
codebases, I’ve become extremely wary of implementation
inheritance. Not dogmatically - any language feature is fair game to
developers striving for the simplest and cleanest expression of their
intentions - but enough to acquire a firm conviction that inheritance
is heavily overused in both ecosystems.

While deep inheritance hierarchies may seem like a great way of
modelling your concepts when you have a blank slate, they are a recipe
for some extremely tight and non-obvious coupling that can deadlock
refactoring attempts in later phases once those concepts have shifted
and the original model is no longer a good fit. The problems can be
particularly acute when the inheritance hierarchy has spread across
several semi-autonomous modules or (worse) long-lived code branches.

Dynamic languages and languages with superior type inference can
alleviate some of these difficulties - you can mitigate coupling by
leaving things unsaid or allowing some flexibility in the meaning of
what you have said. And various approaches to mixins and traits have
been conceived in an attempt to reconcile the fidelity of modelling
with evolvability.

Nevertheless, part of the problem is simply inherent. It seems that
the more intricate a representation of a conceptual hierarchy, the
harder it is to change. The issue is compounded by the scant care
that developers normally take to control the API they present to
subclasses (the protected methods) and to manage method
visibility in general.

So in Java, I would generally start with a flat duality of interface
and implementation these days, using composition and other approaches
where possible and only indulging in implementation inheritance in
cases that seem particularly benign.

Happily, this is the basic paradigm that Clojure’s protocols
encourage too. However, in translating chapter three’s evaluator into
Clojure I don’t think it’s appropriate to go to protocols / records in
all cases.

define-class sets up a true implementation inheritance relationship
here. The definition of continuation contributes a k field to all
subclasses and automatically defines accessor methods continuation-k
and set-continuation-k! which are available on all subclasses.

This would correspond roughly to the following Java - where the
generic functions have become methods defined inline within the class definitions.

/**
 * Unified call interface for functions, primitives and
 * continuations.
 */
public interface Invokable {
    void invoke(Object[] vals, Environment r, Continuation k);
}

public class Continuation implements Invokable {

    private final Continuation k;

    public Continuation(Continuation k) {
        this.k = k;
    }

    public Continuation getK() {
        return k;
    }

    // Default, intended to be overridden
    public void resume(Object value) {
        k.resume(value);
    }

    // This is a case of API adaptation, *not* to be overridden
    public final void invoke(Object[] vals, Environment r, Continuation k) {
        resume(vals[0]);
    }
}

public class IfContinuation extends Continuation {

    private final Form et;
    private final Form ef;
    private final Environment r;

    public IfContinuation(Continuation k, Form et, Form ef, Environment r) {
        super(k);
        this.et = et;
        this.ef = ef;
        this.r = r;
    }

    public Form getET() { return et; }
    public Form getEF() { return ef; }
    public Environment getR() { return r; }

    // override resume for custom behaviour
    public void resume(Object value) { ... }
}

public class BeginContinuation extends Continuation { ... }

This illustrates three subtly different uses of implementation inheritance:

getK - provides access to the inner continuation; it would be
dangerous for subclasses to override or alter this behaviour, as they
would very likely violate assumptions made in the class or elsewhere

resume - which merely provides a sensible default that almost no
subclasses will need

invoke - which adapts the Continuation#resume API to a more generalised
invoke API. This adaptation involves only the Invokable and
Continuation abstractions and it would be incorrect to even want to
override it elsewhere.

It’s tempting perhaps to define a Resumable interface and work where
possible at a higher level of abstraction, but the parts of the
evaluator which work with continuations - the implementations of block
and catch for instance - need to follow the chain of wrapped
continuations via getK, so going the extra mile with interface
segregation doesn’t buy us much in this case.

Both object systems provide for method implementation at the level of
an intermediate base class, even if the Java approach is not as open
to future extension.

By contrast, Clojure protocols and records do not provide for
intermediate base classes at all. So a different approach again is appropriate:

Each continuation is a separate record type and does not share a
common implementation base class. Therefore each redefines the
wrapped continuation, k.
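In outline it looks something like this (names and method bodies are
illustrative; evaluate and evaluate-begin stand in for the
interpreter’s entry points):

(defprotocol Continuation
  (resume [this value]))

;; no shared base class - each record declares its own k field
(defrecord IfContinuation [k et ef r]
  Continuation
  (resume [_ value]
    (evaluate (if value et ef) r k)))

(defrecord BeginContinuation [k e* r]
  Continuation
  (resume [_ value]
    (evaluate-begin e* r k)))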

From other code, access to the wrapped continuation is available via
(:k cont) and available only because each and every continuation
defines k. A more careful implementation might define a protocol for
access to this information but in the spirit of simplicity and a
preference for data over code (which we like in the Clojure world)
a better implementation still would probably just rename the field to
something sensible and leave it accessible to the full range of
built-in functions. k was chosen merely to stick close to the book’s implementation.

The Continuation protocol is actually the Resumable interface we
considered earlier. No default implementation of resume is defined.

Invokable is replaced by a multimethod, which is a) closer to the
original implementation and b) allows this sort of adaptation at
the abstraction level quite easily.
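A sketch of the shape (the dispatch values and method bodies here are
illustrative, not the translation’s exact code):

;; dispatch on the type of the thing being invoked
(defmulti invoke (fn [f vals r k] (type f)))

;; native Clojure functions: apply, hand the result to the continuation
(defmethod invoke clojure.lang.IFn
  [f vals r k]
  (resume k (apply f vals)))

;; continuations: adapt resume to the generalised invoke signature
(defmethod invoke lispic.chapter3.cont.Continuation
  [cont vals r k]
  (resume cont (first vals)))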

Despite protocols “solving the expression problem” for Clojure by
allowing extension to arbitrary predefined types, they do not in and
of themselves solve the further problem of protocol adaptation.

If we had realised Invokable as a protocol we would then have had
to extend Invokable to each and every implementation of
Continuation to effect the adaptation. Or extend the protocol to
Object and manage our own type-based method dispatch at that level.

In some circumstances extending protocols to an exhaustive
enumeration of implementations might be reasonable (see
BlockLookup in the translation for instance) but in the case of
protocol adaptation it is clearly not reasonable.

Other approaches exist (see
clojure protocol adapters
for instance) but the multimethod approach is simple, flexible and powerful.

The fiddly bit though is the interaction of isa?, which multimethods
use to resolve method dispatch, with the protocol extension
relationship. If you’re dispatching on #(type %) you need to be
very careful that you’re referring to the interface generated by the
protocol rather than the protocol itself:

;; Continuation is a var referencing the protocol, defined with defprotocol
(isa? (type (BeginContinuation. nil nil nil)) Continuation)
;; => false

;; lispic.chapter3.cont.Continuation is the fully qualified class
;; name of the corresponding interface
(isa? (type (BeginContinuation. nil nil nil)) lispic.chapter3.cont.Continuation)
;; => true

Other Stuff

There’s an extremely rich literature on continuations out there that I
couldn’t even begin to cover. The book itself discusses delimited or
composable continuations briefly and there are various approaches to
these. See David Nolen’s
delimc for an experimental
implementation in Clojure and a set of pointers for further reading.

Monadic implementations of continuations are also available in
two of the prominent Clojure monad
libraries though monads and category theory are in the main orthogonal
to the concerns of the book.

Rich Hickey’s
announcement
of some upcoming changes to the seq functions in clojure.core prompted
a bit of discussion.

It’s probably to be expected that a few people would mistake the
behaviour of (map f) for partial application. And given that map is
often understood as the action of the list functor on functions, of
course there would be raised eyebrows about the dedication of the new
single argument arity map function to what looks like a very different
and unexpected beast.

The thing that made me gulp a little was the backwards nature of
transducer composition.

If you’re not paying attention when Rich demonstrates the analogy
between composing transducers and the thread-last macro, you might
read straight past the similarity between:

(->> aseq (map f) (filter p))

and

(sequence (comp (map f) (filter p)) aseq)

and not even twig that this is backwards from the way that
composition normally works.

(comp f g) - otherwise known as f ∘ g or fg or g;f - is g then f.

So (comp (map f) (filter p)), whilst it reads pleasantly forwards
as map with f then filter with p - and that is the behaviour -
is actually doing whatever it is doing in the other order - whatever
(filter p) does then whatever (map f) does.

If you didn’t spot this, then Rich’s
elucidation of the
type signature of (map f) shouldn’t allow you to remain in ignorance:

map : (a -> b) -> (x -> b -> x) -> (x -> a -> x)

Yes. (map f) for an f which maps as to bs returns something that
goes from b-ish things to a-ish things.

It’s not very hard to see why this has to be the case. If you can’t
see it yet, let me help.

Consider:

(sequence (comp (map f) (map g)) aseq)

where aseq is a sequence of as ([a] in Haskell). f takes a to b and g takes b to c.

aseq knows how to reduce itself when given an a-reducing function;
that is at the heart of what reducers and transducers are
about. Efficiency can be gained by letting individual sequence types
handle reduction themselves.

So aseq can reduce itself given a reducing function for as (r
-> a -> r). So whatever it is that comp delivers to sequence or
transduce must be of the type (r -> a -> r).

Only f knows anything at all about as so it must be (map f) that
returns an (r -> a -> r). And for composability with other maps we
know that it must accept something of the same shape.

Hence:

map f : (r -> b -> r) -> (r -> a -> r)

…and by analogy,

map g : (r -> c -> r) -> (r -> b -> r)

…so the only way these compose is (map g) then (map f). Or (map f) ∘
(map g). Or (map g);(map f). Or:

(comp (map f) (map g))

…which really means: create something that can turn a c-reducer
into a b-reducer so we can feed it into something which can turn a
b-reducer into an a-reducer so aseq can reduce it. These
“things”, when run within the context of aseq’s reduction, will
transform each a all the way back into a c, using the logic in
each of the transducers in the stack.
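A quick REPL check makes the ordering vivid:

;; the transformations apply left-to-right even though comp composes
;; right-to-left: each element is inc'd first, then filtered
(sequence (comp (map inc) (filter even?)) [1 2 3 4])
;; => (2 4)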

There might well be deep category-theoretical reasons for the
backwards composition but I find it much easier to wrap my head around
simply by considering it as a means of ultimately providing an
a-reducer to something that can reduce as.

What is the most important thing about the code that you write?

Not that it works.

It is that it is clear and comprehensible. If we can understand it we can make it work.

What is the second most important thing about the code that you write?

Still not that it works. The second most important thing is whether it
is properly tested.

Without automated tests we can’t even tell whether it works. But if
it’s comprehensible, in a pinch, we can write the tests.

Whether it works is at most the third most important thing about the
code you write. If it makes sense we can write the tests. If it’s
tested we can fix the code.

Does this mean you can deliver code that doesn’t work? Of course
not. We demand that the code works. And therefore also that it is
clear and comprehensible and it is tested.

If there is anything controversial about this it is the ordering of
numbers one and two, not the position of three.

To unpick an important subtlety however:

the most important thing about the software you deliver is that it works

whether you are engaged in writing code or delivering software or
both depends rather on how enlightened your organisation is, what
your role is within it and what sort of commercial and contractual
factors apply

Chapter 2

Chapter two of LiSP covers the concept of namespaces, the
distinction between Lisp-1s and Lisp-2s, dynamic binding and how
recursion can work in different binding schemes (including a section on the
Y combinator - see
here
for the combinator in Clojure.)

A number of variations on the interpreter are suggested and explored.
The implementation I’ve provided in
this gist is more or
less the “Lisp-3” (!) that is defined by separating out the namespaces
for variables, functions and dynamic variables.

Notes on the translation

For no good reason I’ve used an entirely different implementation of
mutable cons cells this time round, using Java arrays rather than
mutable deftypes.
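The idea in miniature (a sketch of the shape, not the gist’s exact
code):

;; a cons cell is a two-element object array, mutable in place
(defn cons-cell [a d] (object-array [a d]))
(defn car [c] (aget ^objects c 0))
(defn cdr [c] (aget ^objects c 1))
(defn set-car! [c v] (aset ^objects c 0 v))
(defn set-cdr! [c v] (aset ^objects c 1 v))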

I’ve also split part of the evaluator out into a multimethod
evaluate-sexp.

Special forms can then be added by adding the appropriate defmethods
(although the new features of this evaluator depend on all the
separate environments being passed through all calls so it’s clearly
not the case that such features can be added without invasive changes
to the evaluator).
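The shape is roughly this (argument order and names are illustrative):

;; dispatch on the symbol in operator position; every method threads
;; the variable, function and dynamic environments through
(defmulti evaluate-sexp (fn [[op & _] env fenv denv] op))

;; quote: return the quoted form unevaluated
(defmethod evaluate-sexp 'quote
  [[_ x] env fenv denv]
  x)

;; if: note that every recursive call passes all three environments
(defmethod evaluate-sexp 'if
  [[_ test then else] env fenv denv]
  (if (evaluate test env fenv denv)
    (evaluate then env fenv denv)
    (evaluate else env fenv denv)))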

The mutable environments are still based on a-lists (or the clojure
equivalent, seqs of vectors) in atoms but accessed through protocols.
This has meant some switching around of parameters so they no longer
match the book but the result is cleaner.

funcall - Lisp-2s

In Lisp-2s like Common Lisp, only symbols in function position are
looked up in the function environment; symbols elsewhere are looked up
in the normal variable environment.

Or equivalently if binding is handled using cells on the symbol
itself, the function cell of the symbol is only used when the symbol
is in function position - otherwise the variable cell is used. (Any of
us more familiar with elisp than CL will probably recognise the
“symbol’s value as variable is void” error message.)

To access functions from the function environment in any other
position, the function special form must be used.

Relatedly, as the function application approach expects to evaluate a
symbol in function position in order to acquire the function to apply,
calling a function that is already stored as a value in the normal
value environment needs special handling. This is the role of the
funcall special form.

In order to populate the function environment we also provide flet
(and labels - see later).
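Putting the pieces together in the daughter lisp (an illustrative
snippet, assuming * among the primitives):

;; flet binds double in the function environment; function fetches it
;; from outside function position; funcall applies a function-as-value
(flet ((double (x) (* 2 x)))
  (funcall (function double) 21))
;; => 42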

This function / funcall protocol is one of the awkwardnesses of
Lisp-2s in comparison to Lisp-1s like Scheme and Clojure.

Our daughter lisp has the following namespace characteristics for the
function environment.

Dynamic variables

The representation of functions has changed in order to accept a
dynamic environment at call time.

Therefore:

make-function accepts only env and fenv but returns a fn requiring
a denv

defprimitive needs to provide functions that accept the dynamic
environment so we provide a simple with-env wrapper to adapt the
call protocol of the native clojure functions

In contrast to Common Lisp and Clojure which streamline access to
dynamic variables at the expense of extra decoration at the definition
site (cf. ^:dynamic), our lisp makes the dynamic nature explicit at
the access site.

So we have a dynamic special form which is equivalent to function
but accesses the dynamic environment instead of the function
environment. And a dynamic-let which is equivalent to flet,
extending the dynamic environment instead of the function environment.
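For example (illustrative):

;; dynamic-let extends the dynamic environment; dynamic reads from it
(dynamic-let ((depth 1))
  (+ (dynamic depth) 1))
;; => 2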

So note that Common Lisp has an invisible access protocol for the
dynamic namespace but an explicit, visible access protocol for the
function environment. Clojure has an invisible access protocol for
both. Our Lisp-3 has an explicit, visible access protocol for both.

Neither CL nor Clojure allows dynamic (‘special’) variables to share a
name with non-dynamic variables so in reality it’s just one
namespace - the variables are distinguished at definition time using
defparameter or similar (CL) or ^:dynamic (Clojure).

Our daughter lisp has the following namespace characteristics for the
dynamic environment.

Recursion

I’ve used the simple labels special form approach to providing for
mutual recursion, which works by mutating the environment under the
covers, so there’s a small period of time where the function
environment contains :uninitialised values. This is invisible to the
daughter lisp though, as no code can be executed during this period.
(We’re still single threaded only here.)
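A sketch of labels at work (assuming =, - and booleans among the
primitives):

;; labels makes both bindings visible in both bodies, so mutual
;; recursion just works
(labels ((even? (n) (if (= n 0) true (odd? (- n 1))))
         (odd?  (n) (if (= n 0) false (even? (- n 1)))))
  (even? 10))
;; => true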

At the REPL

Demonstrating the protocols for using dynamic and function environments:
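Something along these lines (a sketch, assuming evaluate as the entry
point with freshly initialised env, fenv and denv in scope):

(evaluate '(dynamic-let ((x 1))
             (flet ((f () (dynamic x)))
               (dynamic-let ((x 2))
                 (funcall (function f)))))
          env fenv denv)
;; => 2 - f sees the dynamic binding in force at call time, not at
;; definition time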

Core War

Core War is the grandaddy of
programming games. If you date it from the initial article in
Scientific American which effectively started the whole phenomenon
then it’s 30 years old this year.

Competing programs (written in rather an esoteric assembly code -
“redcode”) battle for control of a memory space (in the “MARS” -
Memory Array Redcode Simulator) by bombing or trapping each other
while trying to evade enemy fire and replicating themselves around the
MARS. A warrior wins if all its opponent’s processes die (by executing
an “illegal” instruction - e.g. DAT or divide by zero).

These programs (or “warriors”) get pretty complex and there’s a
hard-to-penetrate terminology that’s used to describe and analyse
them. I’m not sure how many beginners enter the fray anymore.

A thriving early ecosystem with
weekly newsletters seems to have
died down these days - the
various forums don’t see much activity. I’m sure
it’s alive and kicking but from the outside it certainly has the
charming aura of a community whose heyday is behind it.

Despite this, there are a lot of
hits on GitHub
and the main “hills” are still open
for business although again there’s not much activity and there is a
notably narrow selection of authors represented.

I think it’s long overdue a resurgence but I’m not sure what would
kickstart it. It’s entirely possible more people are interested in
writing MARS implementations than warriors to fight in them…

Despite which, for a bit of fun I hacked out a quick
Clojure implementation of the
ICWS94 Draft standard the other day. So far at least, it’s only an
implementation of the interpreter (not the assembler) and it doesn’t
implement the PSpace feature yet.

And, by steering clear of external dependencies, it’s pretty easy to
compile as ClojureScript as well so we can run and visualise battles
in the browser. (It’s not the first time this has been done,
obviously. There’s at least one JavaScript MARS implementation out
there.) I think a lot of the Core War literature would be much easier
to assimilate if battles were embedded in the pages to illustrate the
behaviour of the warriors under discussion.

Imp vs Dwarf

Just for starters, here’s a slow motion battle between the two
original archetype warriors, an imp (that copies itself forward
through the core) and a dwarf (or stone) that bombs core at intervals
of four instructions.
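Both warriors are tiny. Their classic forms (shown here in later
redcode syntax; the 1984 originals differed in detail) look like this:

; Imp: copies itself one cell forward, forever
MOV 0, 1

; Dwarf: sits still and bombs every fourth cell
ADD #4, 3     ; advance the pointer in the DAT's B-field by 4
MOV 2, @2     ; copy the DAT bomb to wherever it points
JMP -2        ; loop back to the ADD
DAT #0, #0    ; the bomb, doubling as the pointer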

Without an assembler, it’s a bit of a pain to load up sophisticated
warriors right now which is a shame because these two are a poor
representation of the weird and wonderful Core War menagerie that’s
evolved over the past thirty years. There are scanners and vampires
and replicators and so on.

This isn’t much of a battle. Imp cannot win - if it overwrites the
dwarf, the dwarf becomes an imp, executing the copy forward
instructions itself forever. The result is a draw after a limited
number of cycles have passed. Dwarf can get lucky and bomb the imp at
the cell at which it’s about to execute in which case the result is a
victory for the dwarf.

This is a crude visualisation that doesn’t indicate which programs are
executing where - it just colours the cells by the type of instruction
they contain so you can see the code changing as the programs execute.
pMARS and other simulators use colour to
distinguish the different warriors instead which gives a better idea
of who’s doing what. Other drawbacks: it doesn’t display the results
and it doesn’t show any activity at all if instructions aren’t
changing. Still, it’s a start.

See corewar.co.uk for a really good
collection of Core War resources. (It’s actually part of a
web ring too - remember those?)

I’ll release the ClojureScript code if and when there’s anything worth
looking at or anyone wants to look. The component above is just a
few bits of Om and
core.async strung together.
As well as obvious improvements to the visualisation, to be a useful
means of embedding battles in web pages this would need to offer a
sensible JavaScript API and accept warrior definitions as redcode
(embedded or sourced elsewhere).

So it had to be. We had a little over an hour and a half to cobble
something together from scratch. It had the sniff of a challenge about
it and there was Guinness and pizza for fuel.

A few things were clear immediately:

We didn’t have time to learn Overtone
properly. So it was copy-and-paste plagiarism for the basics.

We needed a Bodhran sample. We couldn’t pronounce it but we knew we needed it.

We needed a lot of diddly.

Overtone

overtone.core and (boot-external-server) got us started. Dan had
the SuperCollider IDE open, which might have been running the
SuperCollider server. We weren’t sure… We had to kill a process or
two along the way.

Everything we needed we copied and pasted from the Overtone cheatsheet
and wiki docs and then iteratively and incrementally made it more Irish.

We couldn’t get the flute synth working with a sensible envelope in
time so we stuck with an ugly saw wave but generally making noise
wasn’t difficult.

Our namespace, diddly.core might be a fairly accurate genre
description for some of our speaker-busting early experiments…

The Bodhran

This was nice and easy once the WiFi started behaving itself. One line
of code and the Bodhran was ours, thanks to Overtone’s simple helpers
for downloading samples from freesound:

(def bodhrun (sample (freesound-path 65833)))

The Diddly

Simplest, cheapest, quickest way to an Irish vibe. Mix together a lot
of this:

and this:

Randomly. Interminably.

How did we achieve this? In the simplest way conceivable. We play a
note on every first tick, every third tick and, half the time, on a
second tick too.

Add in our kick drum and bodhran and it looks like this. Probably a
bit of time reading overtone docs for at and friends could tidy this
up somewhat:
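It came out along these lines (a reconstruction in spirit rather than
the original code; kick and diddly-note stand in for our kick drum
synth and the random-pitch saw voice):

(defonce metro (metronome 160))

(defn play-bar [beat]
  ;; kick and bodhran on the downbeat
  (at (metro beat) (kick) (bodhrun))
  ;; melody on ticks one and three, and half the time on tick two
  (at (metro beat) (diddly-note))
  (when (< (rand) 0.5)
    (at (metro (inc beat)) (diddly-note)))
  (at (metro (+ beat 2)) (diddly-note))
  ;; schedule the next bar
  (apply-by (metro (+ beat 3)) #'play-bar (+ beat 3) []))

(play-bar (metro))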

Tonality

We didn’t have time for any clever note selection. It was a case of
pick a scale and select randomly from it for every note. So there is
no state at all. On every metronome tick a pitch is selected fresh
from the list with no regard for what has come before or what shall go after.

We went with pure D major, hoping for a fairly bright sound. A few
brief experiments in the Mixolydian didn’t work out so we ditched them.

We stayed within a single octave to avoid crazy leaps and rigged the
frequencies a little to cause the “melody” to hover around the tonic.
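In code, the pitch selection amounted to little more than this (a
sketch using Overtone’s scale and note helpers; the exact weighting
we used was ad hoc):

;; one octave of D major, with the tonic repeated to bias rand-nth
;; toward hovering around it
(def d-major
  (concat (scale :d4 :major)
          [(note :d4) (note :d4)]))

(defn random-pitch []
  (rand-nth d-major))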

The Result

We did get a lot of jitter despite Dan having a pretty beefy laptop.
This surprised us a bit. Nonetheless the timekeeping was good enough
to convey the intended impression. We think everybody enjoyed it. We
certainly did.

One day we’ll learn Overtone properly and do something really good
with it.

Lisp in Small Pieces
is a deep exploration of some of the fundamentals of programming that proceeds by describing a series of lisp evaluators and compilers over the course of 400 pages or so.

Personally I find the text pretty baffling at times and the English is
often rather curious. Also the Kindle version I have garbles and
shrinks some of the code which adds a certain challenge to the
reading. Nevertheless the depth and detail of the content are awe-inspiring.

…of Clojure

There have probably been a few gestures towards porting examples from
the book to Clojure already (I know of a
gist by Fogus at least).

I don’t intend to produce a complete port of the code from LiSP (like
these).
I’m probably not interested enough.

However I have the book and have worked through some of the
implementations. And, my implementation language of choice is for the
most part Clojure. So it’s at least possible a few gists will appear.

In the following, I’ll have no hesitation in committing crimes against
idiomatic Clojure in the name of faithfulness to the book and crimes
against the book in the name of idiomatic Clojure. Just a warning.

Chapter 1

Chapter one delivers a simple interpreter that is heavily parasitic on
the underlying lisp. As the implementation language in the book is
assumed to be Scheme we need to build a little bit of a compatibility
shim in Clojure in order to stay close to the implementation in the book.

In particular, we need:

Mutable environments

Mutable cons cells

Mutable Environments

The interpreted language provides a set! special form for mutating
variables. The environment data structures used in chapter one are
simple A-lists that support set! by mutating the cdr of the cons cells.

At this stage I decided to stick with a similar structure in my
Clojure implementation (atom wrapped round seq of seqs) and provided
lookup, update! and extend operations as per the book.
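In shape, something like this (a sketch - the real versions are in
the gist; extend is renamed extend-env here to dodge
clojure.core/extend):

;; the environment: an atom wrapping a seq of [name value] pairs
(defn extend-env [env names values]
  (atom (concat (map vector names values) @env)))

(defn lookup [env name]
  (if-let [[_ v] (some #(when (= name (first %)) %) @env)]
    v
    (throw (ex-info "No such binding" {:name name}))))

(defn update! [env name value]
  (swap! env (fn [bindings]
               (map #(if (= name (first %)) [name value] %) bindings)))
  value)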

Fogus chose a more idiomatic map implementation which I think is
equivalent except for pathological cases like:

((lambda (x x x) (+ x x)) 1 2 3)

…and in that it provides for set! adding new bindings automatically.

A large part of the exploration of the book regards the behaviour of
the environment and the global environment in particular so I decided
to stick with the book on this. At this stage there is no dynamic
creation of global variables so the implementation creates some
initial variables in the environment (foo, bar, fib, fact) for
use by programs.

The Evaluator

The evaluator itself is a straight translation from the book differing
only in language specifics like atom?.

Mutable Cons Cells

Just as in the Scheme implementation, functionality is exposed to the
child lisp using definitial and defprimitive macros (substituting
Clojure’s defmacro for the Scheme “hygienic” macro approach).

However Clojure is very different from Scheme. If we want to provide
similar facilities to the child language (and mutability is essential
to many of the problems discussed in the book) there’s some work to
do. In particular we need to provide mutable cons cells, which means
the cons we need to expose to the child lisp is most certainly not
the cons of Clojure.

We could work around this by putting atoms around pairs (and lists
would contain nested atoms in their cdr positions) but I chose instead
to brute-force my way out:
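The brute force in question: a deftype with mutable fields (a sketch
of the shape - the gist has the real thing):

(defprotocol MutableCons
  (car [c])
  (cdr [c])
  (set-car! [c v])
  (set-cdr! [c v]))

(deftype Cell [^:unsynchronized-mutable head
               ^:unsynchronized-mutable tail]
  MutableCons
  (car [_] head)
  (cdr [_] tail)
  (set-car! [_ v] (set! head v))
  (set-cdr! [_ v] (set! tail v)))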

As the gist illustrates, providing cross-cutting enhancements like
tracing, or enhancing any of the component parts (like the
environment) is currently difficult because of the inflexible design.

There are numerous ways in which the design might be improved:

the environment functions (lookup, update!, extend) are crying
out to become a protocol, allowing alternative implementations to be
more easily substituted

the evaluator itself could be pulled apart as a multimethod which
dispatches on the symbol in the car position - a similar approach is
used in the core.async go macro implementation (see -item-to-ssa here
and some explanation
here)

a monadic approach to the evaluation could be introduced to provide
for the clean addition of cross-cutting concerns (see Wadler’s “The
essence of functional programming”
here for
an example of iterative enhancements to an interpreter by altering
the monad)