He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. –Thomas Jefferson

Saturday, September 19, 2009

Linux Builds Part II: The Acceleration Incantation

Ubuntu offers a system monitor that can show graphs of how system resources are being used. It's very interesting to turn all the graphs on and build a large project without fiddling with the build. You'll see some interesting things.

First, the CPU usage will jump up and down, and so will the disk activity - but you'll rarely see them both high at the same time. That's because the compiler typically operates in three phases on a source file:

It reads the source file and all the headers. This is disk-intensive but not CPU-intensive.

Then it does all the usual compiling stuff like lexical analysis and parsing and code generation and optimizing. This makes heavy use of the CPU and RAM, but doesn't hit the hard disk much.

Then it writes the object file out to disk. Again, the disk is very busy, and the CPU just waits around.

So at any one time, the compiler is making good use of the CPU or the disk, but not both. If you could keep them both busy, things would go faster.

The answer to this is parallel builds. Common build tools like make and jam offer command line options to compile multiple files in parallel, using separate compiler instances in separate processes. That way, if one compiler process is waiting for the disk, the Linux kernel will give the CPU to another compiler process that's waiting for the CPU. Even on a single-CPU, single-core computer, a parallel build will make better use of the system and speed things up.

Second, if you're running on a multi-CPU or multi-core system and not doing much else, even at its peak, CPU usage won't peg out at the top of the panel. That's because builds are typically sequential, so they only use one core in one CPU, and any other compute power you have is sitting idle. If you could make use of those other CPUs/cores, things would go faster. And again, the answer is parallel builds.

Fortunately, the major C/C++ build systems support parallel builds, including GNU make, jam, and SCons. In particular, GNU make and jam both offer the "-j X" parameter, where X is the number of parallel jobs to compile at the same time.

The graph above shows what I would generally expect the results of parallel builds to be on a particular hardware configuration, going from left to right.
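To make the -j flag concrete, here's a toy Makefile (a stand-in, not a real project) with two independent one-second targets. The sequential build takes about two seconds; with -j 2, both targets run at once and the wall time roughly halves.

```shell
# A toy Makefile with two independent targets (recipes need tab indentation,
# so printf with \t is used rather than a heredoc)
printf 'all: a.txt b.txt\na.txt:\n\tsleep 1; echo a > a.txt\nb.txt:\n\tsleep 1; echo b > b.txt\n' > Makefile

# Sequential build: targets run one after the other, ~2 seconds
make

# Clean and rebuild in parallel: both targets at once, ~1 second
rm -f a.txt b.txt
make -j 2
```

A common starting point is one job per core, e.g. `make -j "$(nproc)"` on Linux, though the post's later measurements suggest experimenting around that value.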

When running with one compile at a time, sequentially, system resources are poorly utilized, so a build takes a long time.

As the number of compiles running in parallel increases, the wall time for the build drops, until you hit a minimum. This level of parallelization provides the balanced utilization of CPU, disk, and memory we're looking for. We'll call this number of parallel compiles N.

As the number of compiles passes N, the compile processes will increasingly contend for system resources and become blocked, so the build time will rise a bit.

Then as the number of parallel compiles continues to rise, more and more of the compile processes will be blocked at any time, but roughly N of them will still be operating efficiently. So the build time will flatten out, and asymptotically approach some limit.

Anticipating further posts: that curve is what you actually see, except that the rise after the minimum is small, often so small that the times in the flat tail are only slightly higher than the minimum time.

A Brief Aside On Significance

In any physical system, there's always some variation in measurements, and the same is true of computer benchmarks. So an important question in this kind of experimentation is: when you see a difference, is it meaningful or just noise?

To answer that, I ran parallelized benchmarks on Valentine (a two-core Sony laptop) and Godzilla (an eight-core Mac Pro). In each case, the Linux kernel was built twenty times with the same settings. Here are the results:

Generally speaking, a difference of one sigma or less is probably not significant, while a difference of two sigma or more is probably significant. So I'll generally use the rule of thumb, based on the above, that differences between individual values of 2% or less are probably not significant and may easily be due to experimental error (noise).
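The mean and sigma behind that rule of thumb can be computed from a set of build times like so. The numbers here are made-up stand-ins for illustration, not the actual measurements from Valentine or Godzilla.

```shell
# Hypothetical build times in seconds (invented values, not the post's data)
times="100.2 101.5 99.8 100.9 100.4"

# Mean and sample standard deviation (sigma) via awk
echo "$times" | tr ' ' '\n' | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean  = sum / n
        sigma = sqrt((sumsq - sum * sum / n) / (n - 1))
        printf "mean=%.2f sigma=%.2f\n", mean, sigma
    }'
```

For these stand-in values, 2 sigma works out to roughly 1.3% of the mean, in the same neighborhood as the 2% rule of thumb above.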

About Me

I write software for Mac, Windows, and Linux.
My academic background includes a B.S. (mathematics and chemistry) and graduate work in organic chemistry. In between, I worked for two years as a bench chemist for Radio Shack. Upon leaving grad school, I fell into a career as a programmer, first on Macs, then DOS, Unix, and Windows. After working for a couple of startups, and doing Mac AutoCAD when there was such a thing, my wife and I had a consulting company for many years. Following that, I did a variety of contracts, and now work primarily on embedded Linux, but still do some Mac and Windows programming.
In the open source world, I contribute code to the GTK/GDK and X.org ecosystems.