Notes on the implementation of Supercompilation in GHC

This page is very much a draft and may be incorrect in places. Please fix problems that you spot.

Problems

The whistle implementation: what do the subterms of a context mean?

The Simplifier gets confused by wrong OccInfo on things, so we run occurrence analysis at the end of supercompilation. The occurrence analyser in turn gets confused by having the wrong Unfoldings on Ids. We currently zap things here and there, but this is not the right way to do it.

Scalability

The whistle is THE bad guy when it comes to performance: we spend 35% of our time on whistle tests. It is trivial to run in parallel.
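To make the cost concrete: the whistle test is a homeomorphic embedding check against every previously seen term, and each of those checks is independent (hence trivially parallel). A minimal sketch over a hypothetical untyped term type — `Term` and `embed` are illustrative names, not GHC's:

```haskell
-- Minimal sketch of the homeomorphic embedding that the whistle tests for.
-- The Term type and function names are illustrative, not GHC's.
data Term = Var String | App String [Term]
  deriving (Eq, Show)

-- embed s t: does s embed homeomorphically into t?
embed :: Term -> Term -> Bool
embed s t = dive s t || couple s t
  where
    -- s embeds into some immediate subterm of t
    dive s' (App _ ts) = any (embed s') ts
    dive _  _          = False
    -- the roots match and the arguments embed pairwise
    couple (Var _)    (Var _)    = True
    couple (App f ss) (App g ts) =
      f == g && length ss == length ts && and (zipWith embed ss ts)
    couple _ _ = False

main :: IO ()
main = do
  print (embed (Var "x") (App "f" [Var "y"]))                       -- True (dive)
  print (embed (App "f" [Var "x"]) (App "f" [App "g" [Var "x"]]))   -- True (couple)
  print (embed (App "f" [Var "x"]) (App "g" [Var "x"]))             -- False
```

The recursion visits every subterm pair in the worst case, which is why testing a new term against the whole history dominates the running time.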

Figure out the arity of each top-level (lambda-lifted) function, and only inline when it is saturated. (Write notes in the paper explaining why this might be good.) NB: linearity becomes simpler, because a variable cannot occur under a lambda.

Refined whistle-blowing test

Neil's msg idea

Later

Faster representation for the memo table: a finite map keyed on the head function.
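A sketch of what such a representation could look like, under a made-up term type (`insertTerm` and `candidates` are illustrative names): the whistle then only compares a new term against previous terms with the same head function, instead of the whole history.

```haskell
-- Sketch of a memo table keyed on the head function.  All names here are
-- illustrative, not GHC's.
import qualified Data.Map as Map

data Term = Var String | App String [Term]
  deriving (Eq, Show)

headOf :: Term -> Maybe String
headOf (App f _) = Just f
headOf (Var _)   = Nothing

type MemoTable = Map.Map String [Term]

insertTerm :: Term -> MemoTable -> MemoTable
insertTerm t tbl = maybe tbl (\f -> Map.insertWith (++) f [t] tbl) (headOf t)

-- The only candidates the whistle has to test against
candidates :: Term -> MemoTable -> [Term]
candidates t tbl = maybe [] (\f -> Map.findWithDefault [] f tbl) (headOf t)

main :: IO ()
main = do
  let tbl = insertTerm (App "g" [Var "y"])
          $ insertTerm (App "f" [Var "x"]) Map.empty
  print (length (candidates (App "f" [Var "z"]) tbl))   -- 1
  print (length (candidates (App "h" []) tbl))          -- 0
```

This is sound only if terms with different head functions can never embed in a way the whistle cares about, which holds when the whistle restricts itself to terms of the form R&lt;g es&gt; as described below.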

Use idUnfolding and a bitmap for letrec/toplevel things instead of traversing the binds list.

Using lazy substitutions

Case-of-case duplication

Post-pass to identify deepId

Post-pass to undo redundant specialisation??

Neil does "evaluation" before specialising, to expose more values to lets, and maybe to turn lets into linear lets. We don't. Yet.

A substitution-based implementation exists that transforms append, reverse with accumulating parameters, basic arithmetic and similar things. There are still bugs in the implementation, mainly around name capture. It takes 8 seconds to transform the double-append example, but there is still plenty of room for improvement with respect to performance.

The typed intermediate representation has caused some trouble, but nothing fundamental.

Shortcomings of the prototype:

Should use a state monad

Uses eager substitution

Divide by zero

Homeomorphic embedding for types? Currently all types are regarded as equal (like literals). Decision: leave it this way for now.

The msg does not respect alpha-equivalence: if we match a lambda against a lambda and the binders differ, we say "different". Decision: deal with alpha-equivalence in the msg when we have the new algorithm working.
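Dealing with alpha-equivalence amounts to threading a renaming between binders instead of comparing their names. A minimal sketch with a made-up `Expr` type (not the real msg code):

```haskell
-- Sketch of comparing terms up to alpha-equivalence: carry a renaming
-- from left binders to right binders instead of requiring the binder
-- names to coincide.  The Expr type is illustrative, not the real msg.
data Expr = V String | Lam String Expr | Ap Expr Expr

alphaEq :: Expr -> Expr -> Bool
alphaEq = go []
  where
    go env (V x) (V y) = case lookup x env of
      Just y' -> y == y'                            -- x bound: must map to y
      Nothing -> x == y && y `notElem` map snd env  -- both must be free
    go env (Lam x e) (Lam y f)  = go ((x, y) : env) e f
    go env (Ap e1 e2) (Ap f1 f2) = go env e1 f1 && go env e2 f2
    go _ _ _ = False

main :: IO ()
main = do
  print (alphaEq (Lam "x" (V "x")) (Lam "y" (V "y")))   -- True
  print (alphaEq (Lam "x" (V "x")) (Lam "y" (V "z")))   -- False
```

The `notElem` check guards against a right-hand binder capturing what is a free variable on the left, e.g. `\a -> x` versus `\x -> x`.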

Open questions

Should R contexts include let-statements? Need to worry about name capture even more then.

Should matching for renamings be modulo permutation of lets? (Performance vs code size)

Consider (\x xs. append x xs). Do we inline append, and create a specialised copy? (Of course, identical to the original definition.)

Yes: provided we don't create multiple specialised copies, we are effectively copying library code into the supercompiled program. Then we can discard all libraries (provided we have all unfoldings).

No: then we need to keep the libraries around. Moreover, it's not clear that we can always inline everything; for example, things that use unsafePerformIO.

(Given no cycles in imports) perhaps the things we cannot inline should be put at the top level of the same module, and the old module discarded?

Can we improve the homeomorphic embedding so that append xs xs is not embedded in append xs ys?
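One plausible refinement, sketched below with made-up names and no claim that it is the right answer: during the coupling step, thread a variable mapping and insist that the same left-hand variable always matches the same right-hand variable. With the plain embedding any variable embeds into any variable, so the two terms couple; with the consistency requirement the repeated xs cannot match both xs and ys.

```haskell
-- Sketch of a refined homeomorphic embedding that tracks which variable
-- matched which, so repeated variables on the left must match repeated
-- variables on the right.  Term and embed are illustrative names.
data Term = Var String | App String [Term]

-- Each element of the result is a consistent variable mapping witnessing
-- an embedding; the embedding holds iff at least one exists.
embed :: Term -> Term -> Bool
embed s t = not (null (go [] s t))
  where
    go env s' t' = dive env s' t' ++ couple env s' t'
    dive env s' (App _ ts) = concatMap (go env s') ts
    dive _ _ _ = []
    couple env (Var x) (Var y) = case lookup x env of
      Just y' -> [env | y == y']        -- x already matched: must agree
      Nothing -> [(x, y) : env]         -- record the new match
    couple env (App f ss) (App g ts)
      | f == g, length ss == length ts = foldPairs env (zip ss ts)
      | otherwise = []
    couple _ _ _ = []
    foldPairs env [] = [env]
    foldPairs env ((s', t') : rest) =
      concatMap (\env' -> foldPairs env' rest) (go env s' t')

main :: IO ()
main = do
  print (embed (App "append" [Var "xs", Var "xs"])
               (App "append" [Var "xs", Var "ys"]))   -- False
  print (embed (App "append" [Var "xs", Var "ys"])
               (App "append" [Var "xs", Var "ys"]))   -- True
```

Whether this refinement preserves the well-quasi-order property that makes the whistle a sound termination test is exactly the open question; the sketch only shows the mechanics.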

Differences from Supero

1) Call-by-value vs call-by-name/need. I am not completely clear on whether Supero preserves sharing. (Pscp does; otherwise the improvement theory is not usable and the proof does not go through. Previous discussions between Neil and Peter concluded that this is not sufficient: more expressions need to be inlined, so for performance reasons we do not want to preserve all sharing. Simon later pointed out that we must preserve sharing.)

2) Termination criterion. Both use the homeomorphic embedding, but there are some small differences.

a) Pscp checks against fewer previous expressions than Supero. (Pscp only looks at expressions of the form R&lt;g es&gt;, where R is an evaluation context for call-by-name and g is a top-level or local definition; let-statements, for example, are not bound in our contexts.) This could be beneficial for compilation speed, but I don't have any concrete numbers right now. The drawback is that our approach cannot perform Distillation, a more powerful transformation that makes the checks even more expensive. This should not be hard to change in an actual implementation, though.

b) Generalisation. Pscp currently uses the msg, and Neil has devised a better way to split expressions. Switching between them should be a matter of changing a couple of lines of code in an actual implementation.

c) simpleTerminate. This roughly corresponds to a mixture of values and what pscp has labeled "annoying expressions". Neil was spot on in earlier discussions about what annoying expressions are: something where evaluation is stuck, normally because of a free variable in head position, for example the application "x es". Simon has an interesting algorithm formulation that avoids annoying expressions altogether and is probably simpler to implement.
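A rough sketch of recognising such an annoying expression, under a made-up `Expr` type (none of this is Supero's or pscp's actual code): walk down the application spine and see whether a variable sits in head position.

```haskell
-- Sketch of spotting an "annoying expression": evaluation is stuck
-- because a (free) variable sits in head position, as in "x es".
-- The Expr type and function names are illustrative.
data Expr = Var String
          | Lam String Expr
          | App Expr Expr
          | Con String [Expr]

-- Head of the application spine
spineHead :: Expr -> Expr
spineHead (App f _) = spineHead f
spineHead e         = e

isValue :: Expr -> Bool
isValue Lam{} = True
isValue Con{} = True
isValue _     = False

-- Stuck: not a value, and the head of the spine is a variable
isStuck :: Expr -> Bool
isStuck e = not (isValue e) && case spineHead e of
  Var _ -> True
  _     -> False

main :: IO ()
main = do
  print (isStuck (App (Var "x") (Con "Nil" [])))        -- True: "x es"
  print (isStuck (App (Lam "y" (Var "y")) (Var "z")))   -- False: a redex
```

In a closed top-level term, a variable in head position is necessarily free, so the check does not need an environment; a real implementation over GHC Core would also have to treat case scrutinees and casts.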

3) Inlining decisions. Neil has a more advanced way of determining when things should be inlined. I believe that the solution is some combination of the inliner paper, Neil's work and possibly some kind of cost calculus for inlining.

What is not mentioned in either of our works, though, is that GHC has a typed intermediate representation. System FC has casts that might get in the way of transformations (so effectively they are another kind of annoying expression). Rank-n polymorphism is another potential problem, but I am not sure whether it will be a problem in System FC. These are not fundamental problems that will take two months to solve, but I think that implementing a supercompiler for System FC is more than just two days of intense hacking, even for someone already familiar with GHC internals.