Sunday, August 20, 2006

Lisp is built out of an amazing idea: like everything else in the machine, code is also data. This is true in any language, but Lisp's syntax makes the code especially easy to manipulate as data.

This feature allows new language constructs to be implemented as program transformations, without touching the core language. Given this uncommon ability, how could any language ever beat Lisp in terms of level of abstraction? Any missing functionality, it would seem, can only be temporarily missing, awaiting a sufficiently dedicated macro writer. Surely, Lisp must be the most high-level language ever designed!

Well, I think not. Lisp is low-level.

Macros may allow you to write programs more abstractly, but writing the macros themselves is more painful than it needs to be. A macro manipulates a raw syntax tree, devoid of meaning, one which may even contain new macro constructs it has never heard of. How can it be expected to do the right thing?

The only hope, for the poor macro, is to leave the new constructs alone. Thus, some constructs shadow the influence of other constructs, effectively preventing a class of independently developed language extensions from being used together. That's a shame, and it's a direct consequence of Lisp's macro facilities being too low-level a tool.

What would a high-level macro system look like? Personally, my bet is on more advanced reflection mechanisms. In an upcoming post, I'll explain how reflection and macros are interrelated, and provide a much needed example to illustrate the kind of things which Lisp cannot do.

Oh, I remember the reflection thing now! Let me write it here before I forget again.

Instead of manipulating the AST as a concrete piece of data, we should manipulate the AST as an abstract type. For example, we could have a class Block with a method to obtain the list of statements it surrounds, and there would be classes like While and For that would subclass Block.

This way, we could extend the language by implementing more subclasses. More details to follow in a couple more years.
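To make the idea concrete, here is a minimal Python sketch of the class hierarchy described above. The names Block, While, and For come from the post; the Node base class, the field names, and the RepeatUntil extension are my own hypothetical additions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional


class Node:
    """Any AST node; statements and expressions alike, for simplicity."""


@dataclass
class Block(Node):
    """An abstract construct that surrounds a list of statements."""
    body: List[Node] = field(default_factory=list)

    def statements(self) -> List[Node]:
        return self.body


@dataclass
class While(Block):
    condition: Optional[Node] = None


@dataclass
class For(Block):
    init: Optional[Node] = None
    condition: Optional[Node] = None
    step: Optional[Node] = None


# Extending the language amounts to adding another subclass:
@dataclass
class RepeatUntil(Block):
    condition: Optional[Node] = None
```

A macro (or any other tool) written against the Block interface would then work unchanged on RepeatUntil, because it asks the node for its statements instead of pattern-matching on raw syntax.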

I'll clearly never write the follow-up blog post I was planning to write, but I can at least write a comment.

The problem I see with lisp macros is that they manipulate an opaque blob of syntax, without necessarily understanding all the pieces. For example, a macro could traverse the AST and replace all the λs with lambda, but it would have to do so blindly, without understanding whether some λs perhaps have a different meaning in some of the sub-trees. As a result, two macros which both use the same placeholder, and both want to replace it with something else, cannot be used together. For example, if macroA wants to replace λs with As and macroB wants to replace λs with Bs, then unless one of the macros is specifically designed to detect the other's calls, an expression such as (macroA (λ (macroB λ))) will expand to (A A) instead of the intended (A B).
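The blind-replacement problem can be simulated in a few lines of Python, modeling S-expressions as nested tuples. This is a deliberately naive, non-hygienic expander, not how any real Lisp works, but it reproduces the (A A) expansion from the example above:

```python
def replace(tree, old, new):
    """Blindly replace every occurrence of `old` in the tree,
    the way a naive macro would: with no notion of scope."""
    if tree == old:
        return new
    if isinstance(tree, tuple):
        return tuple(replace(t, old, new) for t in tree)
    return tree


def expand(tree):
    """Expand macros outside-in: macroA rewrites λ to A in its body,
    macroB rewrites λ to B."""
    if isinstance(tree, tuple) and tree:
        if tree[0] == "macroA":
            return expand(replace(tree[1], "λ", "A"))
        if tree[0] == "macroB":
            return expand(replace(tree[1], "λ", "B"))
        return tuple(expand(t) for t in tree)
    return tree


# (macroA (λ (macroB λ)))
prog = ("macroA", ("λ", ("macroB", "λ")))
print(expand(prog))  # ('A', 'A'), not the intended ('A', 'B')
```

Note that nesting the macros the other way around, (macroB (λ (macroA λ))), expands to (B B): the two macros don't commute, which is exactly the conflict discussed further down.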

Hygienic macro systems help because they allow for the case in which two macros both want to name their special placeholder λ, by distinguishing between the two λs via some other means. This is definitely better, but it's still not perfect: what about identifiers bound in the user's namespace? If macroC wants to replace λC-bound identifiers with C and macroD wants to replace λD-bound identifiers with D, we can't use them both together unless one specifically checks for the other's λ binder. So (macroC (macroD (λC [x] (x (λD [x] x))))) expands to (C C), not (C D). For simplicity, let's ignore hygiene and stick with the simpler example.

At first I thought the problem was the opaqueness, and that the solution was to attach more metadata to the pieces of the AST: is the identifier macroB a macro? Then leave this subtree alone. Or even more precisely: is this identifier a macro? If so, what are the identifiers whose meaning it changes? If they include λ, then leave that subtree alone.

Today we have Racket's syntax-parse, a much better macro system in which the macro specifies very precisely the format of the ASTs it can deal with. So instead of getting the wrong answer (A A), we'll get a macro-expansion error telling us that macroA doesn't expect to see macroB within its input AST. That is much better, but it still doesn't allow us to use macroA and macroB together. Clearly macroA needs to change its input format to allow macroB, but that might be too much to ask; we can't expect every macro writer to explicitly support every other macro. What we can do is design syntactic guidelines for what binding patterns should look like, and then macroA could be changed to support that specific binding pattern by noticing when x or λ gets shadowed and not replacing it within that subtree. Now we can write a family of macros which are compatible with each other.
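The shadowing-aware replacement can be sketched in the same nested-tuple model as before. The ("bind", name, body) form is a hypothetical stand-in for the agreed-upon binding pattern; the point is only that replacement stops wherever the placeholder is rebound:

```python
def replace_respecting_shadowing(tree, old, new):
    """Replace `old` with `new`, but leave alone any subtree in which
    `old` has been shadowed by a ("bind", old, body) form."""
    if tree == old:
        return new
    if isinstance(tree, tuple):
        # A ("bind", name, body) form rebinds `name` for its body.
        if len(tree) == 3 and tree[0] == "bind" and tree[1] == old:
            return tree  # `old` means something else in here; hands off
        return tuple(replace_respecting_shadowing(t, old, new) for t in tree)
    return tree


prog = ("λ", ("bind", "λ", ("λ", "x")))
# The outer λ is replaced; the inner, shadowed one is left alone.
print(replace_respecting_shadowing(prog, "λ", "A"))
```

With both macros using this discipline and the shared binding convention, each one automatically steps around subtrees where the other has claimed the placeholder, without either macro knowing the other exists.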

Later on, as I was working on my Master's thesis, I became obsessed with commutativity, and with how things which combine non-commutatively are often considered to conflict. I recognized the problem with (macroA (λ (macroB λ))) as an example of that situation: macroA and macroB both want to transform an AST, but the order in which they do so results in a different output: with one order, all the λs become As, and with the other order, they all become Bs. My solution to these kinds of conflicts is to ask the macros, or whichever participants are conflicting, to work with a much more restricted set of transformations which are carefully designed to guarantee commutativity.

I now think that commutativity is a red herring: the important part is to come up with a shared system, such as the above variable-shadowing system, which captures all the relevant ways in which we want to make sure that the participants don't conflict. A restricted set of transformations is one such system, but it's way too restrictive.