So this is about how to make your already-fast Haskell programs faster without doing the hard work yourself. I’ll walk through the approach of using a GA library to breed solutions, and show GA-discovered performance improvements in already hand-optimized programs submitted to the Computer Language Benchmarks Game.

As a taste, the GA found an inlining hint combination resulting in an 18% reduction in runtime for the parallel k-nucleotide benchmark (a program that had already had extensive hand optimization!). Sweet.

Background

A modern optimizing compiler like GHC is a complex beast, with a barrage of optimizations available to transform code from lambda calculus to assembly. GHC follows a compilation-by-transformation approach, doing as much as possible of its code improvement via “correctness-preserving, and hopefully performance-improving, program transformations”. Deciding when to perform some particular transformation is hard, so instead the compiler has many, many tunable flags, as well as allowing hints in the source in the form of pragmas to let the programmer make domain-specific optimizations that may not be generally applicable.

Since the compiler doesn’t always get the thresholds for optimization right, exposing, for example, inlining hints to the programmer can have significant benefits when the programmer knows something more about how the code is to be used. I’ve seen 10x to 100x performance improvements in inner loops from careful overriding of the default heuristics used by the “Illustrious Inliner” (in Data.Binary).

The problem is the complexity of it all. If we have n inlinable functions or compiler flags, there are 2^n combinations of inlining suggestions we can give to the compiler (double that if we start disabling inlining, and even more if we start setting particular phases). Even if the programmer has a few heuristics in mind to help with pruning, the search space is still huge. For these reasons, it is hard to know precisely when a particular flag, option, or inlining hint will be of benefit (and the same goes for parallelism hints, and strictness hints).

So, let’s have the computer traverse the search space for us.

Acovea

With a large search space for our optimization problem, one classic technique is to use an evolutionary algorithm to minimize some cost function. And for breeding the best set of compiler flags for a given program, we can use an off-the-shelf solution: acovea.

Given a specification of the available flags, acovea uses these as variables to fill out an optimization search space, which it then traverses, using GA techniques to hang on to useful flags, breeding them to find a semi-optimal combination. acovea is relatively simple to use:

Compile the C++ libs (libevocosm, libcoyotl and libacovea)

Develop a specification of the flags to tune for your compiler (or reuse the defaults for GCC)

Then launch the “runacovea” wrapper script on your program

Go away for a few hours

Come back and you’ll be presented with suggested optimistic and pessimistic flags, best flag combinations, generally good combinations, and measurements against any baselines you set up. I’ve used acovea in the past (for optimising the GCC flags used in a polymer chemistry simulation) and in this post we’ll see if we can adapt it to solve other kinds of optimization problems.

An example: optimising a C program

Acovea comes with some benchmark programs to test out how it works.

First, if it doesn’t have a specification for your compiler, you’ll need to make one. A compiler specification is just an XML file with a list of all the flags you want to try permuting. Here’s a quick one I made for GCC 4.3 on a Core 2 Duo. It sets up some baseline flag combinations that tend to be good (-O, -O2, -O3) and then lists different flags to try.
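For reference, the skeleton of such a spec looks roughly like the sketch below. I’m reconstructing the schema from memory of acovea’s bundled GCC configs, so the attribute names and the ACOVEA_* substitution tokens should be checked against your acovea version:

```xml
<?xml version="1.0"?>
<acovea_config>
    <acovea version="5.1" />
    <description value="gcc 4.3, Core 2 Duo" version="1.0" />

    <!-- how to invoke the compiler; the ACOVEA_* tokens are filled in per run -->
    <prime command="gcc"
           flags="-std=gnu99 -lm ACOVEA_OPTIONS -o ACOVEA_OUTPUT ACOVEA_INPUT" />

    <!-- baselines the evolved flag sets are measured against -->
    <baseline description="-O2"
              flags="-std=gnu99 -lm -O2 -o ACOVEA_OUTPUT ACOVEA_INPUT" />
    <baseline description="-O3"
              flags="-std=gnu99 -lm -O3 -o ACOVEA_OUTPUT ACOVEA_INPUT" />

    <!-- the genes: individual flags the GA may toggle -->
    <flags>
        <flag type="simple" value="-fgcse" />
        <flag type="simple" value="-funroll-loops" />
        <flag type="simple" value="-fomit-frame-pointer" />
    </flags>
</acovea_config>
```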

The input program must print only its “fitness” to standard output: a value indicating how good the program is, which the solver will try to minimize. By default we time the program’s run and have it print that time as its fitness, so smaller fitness numbers mean faster programs.

We can then run the evolver with a given input program and compiler spec as arguments,

For a quick test like this, we’ll limit the size of each population of programs, the number of populations, and the number of generations to run, to avoid the search taking too long. Run on the fftbench.c distributed with acovea, the result looks something like this:

So in this short 5-minute run, we found a combination of flags that was pretty close to -O3. If we let it run overnight, it might well find a good 10-20% improvement over our best generic defaults. Fun stuff!

Evolving a faster Haskell program

We can do the same thing with a Haskell compiler too. First, we need a specification of GHC’s optimisation flags. To start with, let’s just use the tool to answer a couple of simple questions when developing production Haskell code:

should I use -O1 or -O2?

should I use the C or native backend?

To answer these questions, we’ll start with a simple (incomplete) GHC specification file with just those flags available, like so:

by default, we’ll use --make -fforce-recomp -v0 to allow full recompilation and linking

we leave all optimisations off by default

as a baseline, we’ll use the known “good” flags

-O2 -funbox-strict-fields -fvia-C -optc-O3 -optc-march=core2

-O2 -funbox-strict-fields -fasm
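Assembled into acovea’s format, the GHC portion of such a spec might look like the fragment below (again, the exact schema is from my reading of acovea’s GCC examples, so treat it as illustrative rather than verified):

```xml
<prime command="ghc"
       flags="--make -fforce-recomp -v0 ACOVEA_OPTIONS -o ACOVEA_OUTPUT ACOVEA_INPUT" />

<baseline description="via-C"
          flags="--make -fforce-recomp -v0 -O2 -funbox-strict-fields -fvia-C -optc-O3 -optc-march=core2 -o ACOVEA_OUTPUT ACOVEA_INPUT" />
<baseline description="native"
          flags="--make -fforce-recomp -v0 -O2 -funbox-strict-fields -fasm -o ACOVEA_OUTPUT ACOVEA_INPUT" />

<flags>
    <flag type="simple" value="-O1" />
    <flag type="simple" value="-O2" />
    <flag type="simple" value="-fvia-C" />
    <flag type="simple" value="-fasm" />
</flags>
```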

This lets us crank up all the optimisations and pick between the GHC backends. With more time, we can traverse a larger search space, and start including more speculative GHC flags (like -fliberate-case-threshold). Also, if we really have time on our hands, we can include all the GCC flags as well! (-optc-…).

Optimizations in GHC can have a huge impact, which is good when we’re searching for them. But GHC is also somewhat problematic: (afaik) there are optimizations baked into the -O and -O2 levels that we can’t turn on or off via individual flags. As a result we must always include -O and -O2 themselves as available optimisations in our spec.

Note the last one defines a monster search space of 2^120 flag combinations.

Timing a Haskell program

So now we’ve got a spec for GHC, let’s see if it can find some sensible flags to optimize our program. We’ll use as input an obsolete language shootout benchmark, recursive, since it’s small. I’m hoping it will tell me that either -O2 -fasm or -O2 -fvia-C -optc-O3 is sensible.

First, we have to modify the program to emit its fitness, not some other output (while being careful that the work still actually gets done … pesky lazy languages). To do this, I change the ‘main’ function to contain the following wrapper:
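A sketch of such a wrapper, with a stand-in `work` function since the real benchmark body isn’t shown here (`getCPUTime` returns picoseconds; `timeout` takes microseconds; I’m taking `rnf` from Control.DeepSeq):

```haskell
import Control.DeepSeq   (rnf)        -- fully force the result, without printing it
import Control.Exception (evaluate)
import System.CPUTime    (getCPUTime)
import System.Timeout    (timeout)
import Text.Printf       (printf)

-- stand-in for the real benchmark kernel
work :: Double
work = sum [sin x | x <- [1 .. 100000 :: Double]]

main :: IO ()
main = do
    start <- getCPUTime
    r     <- timeout (10 * 1000000) (evaluate (rnf work))   -- 10 second cap
    end   <- getCPUTime
    case r of
        Nothing -> printf "%f\n" (1.0e9 :: Double)  -- timed out: report a hopeless fitness
        Just () -> printf "%f\n" (fromIntegral (end - start) / 1.0e12 :: Double)
```

acovea then simply reads the printed number back as the program’s fitness.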

I’ve set an arbitrary upper limit of 10 seconds on the program (which acovea seems to treat as a failure), and then we measure the CPU time the program uses. I also have to be careful to replace any IO functions with code that forces the data to be evaluated, but doesn’t print it out; `rnf` comes in handy here. We also have to modify the program to parse its arguments from a string, not the command line.

So now our program prints out its fitness (in CPU time) when run:

$ ghc -O2 --make A.hs
Linking A ...

$ time ./A
0.3366

./A 0.34s user 0.00s system 97% cpu 0.350 total

I expect -O2 -fasm to be around the best we can do for this program (possibly -O2 -fvia-C -optc-O3).

Evolving a better set of GHC flags

We can now bring the two together and use the GA lib to find a good set of flags for this program. Note: GAs are slow to converge on a good solution.

the difference between no optimizations and the optimized result is more than a factor of 10.

the best measurement was taken with -O2 -fexcess-precision (this uses the native code generator, not the C backend)

the final results include noise (e.g. without -fvia-C the -optc- flags have no effect)

GHC’s native code gen outperformed the GCC backend on this code.

It would be useful to have a “shrinking” phase at the end to remove noise (the way QuickCheck does). But for now we can do that by hand, after which acovea believes

ghc -O2 -fasm -fexcess-precision

is the way to go here. Let’s check the assembly for the best variants. For just the Fibonacci function, which GHC first specialises to Double (and where most of the benchmark’s time is spent), the native code backend gives:

This looks identical to the output with plain -fasm, and runs in the same time, so -fexcess-precision is noise here. Acovea declares that -fasm beat the C backend on this program. That’s interesting: we’d already assumed -fvia-C was best for this benchmark, but that looks to be wrong. Progress!

We’ll now look at a full scale example: finding the best inlining strategy for some already highly optimised code.

Evolving the inliner

As we talked about before, the GHC inliner is a complex thing, but one with lots of optimization potential. By default, GHC tries to inline things using some magic SimonPJ heuristics, described in http://darcs.haskell.org/ghc/compiler/simplCore/Simplify.lhs. Since it’s often useful to override the inliner’s default strategy, GHC allows us to place INLINE and NOINLINE pragmas on functions, like so:
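For example (function names invented for illustration):

```haskell
dot :: [Double] -> [Double] -> Double
dot xs ys = sum (zipWith (*) xs ys)
{-# INLINE dot #-}        -- please inline me at every call site

table :: [Int]
table = map (* 2) [1 .. 1000]
{-# NOINLINE table #-}    -- never inline; keep one shared copy
```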

Note you can also disable inlining on a function, or add a phase annotation to say in which simplifier phase of the compiler the inlining should fire. Looking at these is further work.

In high performance code, inlining can enable all sorts of interesting new optimizations to fire, and in some cases can turn GHC into a sort of whole program optimizer, by inlining entire libraries into the user code, and then specializing them precisely for that particular use. Fusion-enabled libraries like uvector work like this. The problem in general code though is knowing when an INLINE is going to help. And for this we want to get computer support.

The main trick to having acovea program our INLINE pragmas is to make sure we can turn them on and off via compiler flags. To do this, we’ll use CPP: each potential INLINE point is lifted into a CPP symbol that acovea can then switch from its specification file.

In the source we’ll identify each inlining site with a new CPP symbol:

key_function :: Int -> String -> (Bool, Double)
INLINE_1   -- a CPP symbol: expands to nothing, or to {-# INLINE key_function #-}

and then build a custom acovea XML file for the inline points in our program. By default, our command will disable all inlining points:
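In the spec’s compile command, that amounts to defining every such symbol as empty (prime-command syntax as sketched earlier, not verified against acovea’s schema; note -cpp so GHC runs the preprocessor):

```xml
<prime command="ghc"
       flags="--make -fforce-recomp -v0 -cpp -DINLINE_1= ACOVEA_OPTIONS -o ACOVEA_OUTPUT ACOVEA_INPUT" />
```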

GHC will now call cpp first, with a bunch of acovea-determined INLINE points redefined via a flag:

<flag type="simple" value="-DINLINE_1={-# INLINE key_function #-}" />

So now acovea can run our program, flicking switches to turn on and off inlining at each point we suggested it look at. It’ll then measure the result and try to find the best combination.

Results

Initially I tried this approach on two programs from the Computer Language Benchmarks Game. At the time of writing, the results are available on the shootout itself, with the faster programs (after inline optimization) listed first. Note that k-nucleotide is a multicore parallel program.

In both cases the result found by the GA was an improvement over the hand-optimized (and inlined) version. In the case of k-nucleotide, it was 18% faster than the hand-optimized version. For nbody, it was a marginal improvement (one better inlining site was found).

That is, there’s a free win if we inline vel1 (which was a let-bound variable used in two places). Instead of saving the result and reusing the value, we actually get marginally faster code if we just duplicate that small computation.
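Schematically, with everything except the vel1 name made up, the two shapes are:

```haskell
-- sharing: compute vel1 once, use it twice
advanceShared :: Double -> Double -> Double
advanceShared dt m =
    let vel1 = dt * m
    in  vel1 + vel1 * 0.5

-- duplicating: recompute the small expression at each use site;
-- in this benchmark, the duplicated form came out marginally faster
advanceDup :: Double -> Double -> Double
advanceDup dt m = (dt * m) + (dt * m) * 0.5
```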

The improvement is marginal, but that one inlining site (vel1) is enough to make a consistent small difference. At first, acovea appeared to have found a dramatic improvement (from 3.4s to 2.1s): it quickly decided that it was a good idea to inline htPayload.

That’s interesting, since htPayload was not identified as a good inlining site in the original program (by hand), and hasGenome was! The hand-optimized version was going awry. The final suggestions were:

Interestingly, the best-of-the-best was measured at the end as worse than the common options. Applying these inlining hints, the result is now on the shootout as GHC #4.

And that’s it, we’re using a GA to evolve the inliner!

What next?

If we’re going to program the inliner, it can be beneficial to transform functions in ways that allow more inlining: in particular, the manual worker/wrapper transformation. There’s been some work already on using a genetic algorithm to evolve optimal strictness hints for GHC using a similar approach (hints that can be turned on and off). This should also be doable via acovea, using CPP once again to flick the switch.
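As a reminder of the shape of that transformation (a hypothetical function, not code from the benchmarks): the thin wrapper is cheap to inline at every call site, while the recursive worker stays put.

```haskell
sumTo :: Int -> Int
sumTo n = go n 0                 -- thin wrapper: safe and cheap to inline
  where
    go :: Int -> Int -> Int      -- the worker does the real (recursive) work
    go 0 acc = acc
    go k acc = go (k - 1) (acc + k)
{-# INLINE sumTo #-}
```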

Next steps are to automate the construction of acovea specifications and the running of timed Haskell programs, using a tool to emit the inlining, flag and runtime specs for a program. I also don’t know whether something like simulated annealing would give better results, or results sooner.

There are a number of GHC runtime flags that are interesting to mess with (heap size, number of generations, scheduling ticks, and, for multicore programs, the number of cores to use and how to tweak the parallel garbage collector). These are all runtime flags, though, and acovea only really lets us set things as ‘compile time’ flags; we’d have to bake them into the executable (doable with a C inclusion).

Finally, Tim Harris and Satnam Singh outlined an approach to discovering good `par` points to add parallelism to Haskell programs, in Feedback Directed Implicit Parallelism (.pdf). They annotated the runtime to log the actual costs of subexpressions in a profiling phase, and then used that information to place `par` hints on expressions in the source code. It seems to me that we could use the GA approach to breed `par` hints as well, discovering implicit parallelism in existing programs.

Coincidentally, after writing this article, I found that Tyson Whitehead had suggested the GA approach to improving inlining on the GHC mailing list only a couple of weeks ago. There has also been prior research on genetic algorithms for finding compiler heuristic values, and for inliners in particular: see, for example, “Automatic Tuning of Inlining Heuristics” by John Cavazos and Michael F. P. O’Boyle. I’m sure there’s other work.

This is really cool! On a more basic level, I’ve been wondering if there exists a framework for testing alternative function definitions (e.g. by parsing a Haskell source file and extracting alternative definitions defined in comments). It seems like there is a lot of potential for things like this in a functional language.

In the Java space I have kicked around an idea that I think is somewhat related to this. The idea is to take a complete set of automated tests for an app and run them in such a way as to detect dead code. Later, all the dead code could be deleted and the suite rerun. The idea isn’t just to delete code that you have written, but to delete code from your libs (log4j, dom4j), app server (JBoss, Tomcat & Spring), database (MySQL), even the OS (Linux). So that when you are done you have a software stack that can pretty much only pass your test suite, but hopefully run on a 10-year-old machine.

The danger of dropping code like in John’s approach is that if there is /anything/ not covered by your test suite, it’s certain to implode for someone, somewhere, at some point :)

Better to use static analysis to figure out what’s needed and what’s not, IMO – assuming you’re not loading code by introspection or anything like that. Heck, even better, just make sure the dead code isn’t paged in until needed (as with most compiled languages) – if Java can’t do that, then that’s where you should probably start…