Raumfeld is a multi-room audio system (as the name, translated from German, suggests: "Raum" = "room", "Feld" = "field"). Raumfeld began as a startup in Berlin and is now owned by Lautsprecher Teufel, a leading manufacturer of speakers (also based in Berlin).

The Raumfeld family of speakers all communicate over standard Wi-Fi and are controlled remotely from Android and iOS devices. They can stream directly from various popular music-streaming services (that is, the speaker connects to the service, not the phone). One speaker can stream to another over Wi-Fi, with automatic compensation for the time delay (stereo pairs can still be wired together for optimal quality). All of this means there has to be complex firmware running on those devices.

Codethink delivered this project for Raumfeld to improve the firmware build system, leaving the client’s own development resources free to concentrate on customer-facing activity.

The Original Build System

Raumfeld have been shipping devices since 2009. Some of these are now obsolete and no longer sold, but all of them are still supported. There are 3 different machine architectures that the firmware has to target, and multiple devices using each architecture. Did I mention that each device image actually consists of multiple Buildroot images? As well as a filesystem image for each device, there is an installer image for each device that deploys the real filesystem. This leads to a total of 15 different Buildroot builds to manage.

Developers could do local, incremental builds quickly enough to test their changes, but a clean rebuild of the firmware for all platforms took a full 8 hours. This made the test and release process a major inconvenience.

Easy Wins

An obvious route to faster builds is to use a faster computer to build on. Teufel replaced the aging build server they were using with a new machine, which reduced build times to just under 4 hours.

Another quick win when doing multiple Buildroot builds is to build the toolchain once as a separate step and then use the external toolchain backend in subsequent builds. The toolchain takes about 15 minutes to build and we went from building eleven toolchains to three: one for each architecture in use. This got build time for a full rebuild down to under 2 hours.
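In Buildroot terms, this means building one toolchain-only configuration per architecture, then pointing every device configuration at its output through the external toolchain backend. The option names below are standard Buildroot kconfig symbols, but the path and prefix values are purely illustrative:

```
# Fragment of a device .config: consume a toolchain built in a separate step
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
# Illustrative path to the output of the toolchain-only Buildroot build
BR2_TOOLCHAIN_EXTERNAL_PATH="/builds/toolchain-arm/host"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-buildroot-linux-gnueabi"
```

With this in place, each device build skips the 15-minute toolchain stage entirely and reuses the shared toolchain for its architecture.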

However, this also added 3 more Buildroot targets into the build process, leading to a total of 18. The overall build was driven by a couple of simple shell scripts which knew nothing about the dependencies between the tasks they were running. It was also easy to accidentally overwrite one build with another as they all took place in the same buildroot.git clone. The next step was to get a handle on this complexity.

Buildroot.cmake

The core functionality of Raumfeld’s devices is provided by a set of C and C++ modules developed internally. Codethink did the work of converting the build systems for these modules to CMake. Previously they used a mix of GNU Autotools and hand-written Makefiles. The move to CMake meant one consistent syntax for all of the build instructions, as well as much better IDE integration and the ability to do incremental builds across all of the modules.
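To give a flavour of the conversion, here is a minimal sketch of what one module's CMakeLists.txt might look like after the move; the project, target and file names are invented for illustration:

```cmake
cmake_minimum_required(VERSION 3.5)
project(audio-sync C CXX)

# Hypothetical module layout: one shared library plus its public headers
add_library(audio-sync SHARED src/sync.c src/clock.c)
target_include_directories(audio-sync PUBLIC include)

# Installing the outputs lets the other modules build against this one
install(TARGETS audio-sync LIBRARY DESTINATION lib)
install(DIRECTORY include/ DESTINATION include)
```

The same handful of commands replaces both the Autotools boilerplate and the hand-written Makefiles, which is what makes incremental builds across modules straightforward.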

Having used CMake there, we looked to CMake again to create a 'toplevel' build system that would manage all of the Buildroot builds. CMake's forte is generating 'low-level' build systems that run compilers and linkers directly, but there is precedent for using CMake as a 'meta' build system: the ExternalProject module, for example.

What we ended up with was Buildroot.cmake, a module that helps you drive Buildroot builds from CMake. Here’s a simple example of the CMakeLists.txt file building a toolchain with Buildroot, then using that toolchain to build a rootfs:
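The original code listing has not survived in this copy, so the sketch below reconstructs the idea using a hypothetical buildroot_target() command; the real Buildroot.cmake API may use different names and options:

```cmake
cmake_minimum_required(VERSION 3.5)
project(raumfeld-firmware NONE)

# Hypothetical module providing the buildroot_target() command
include(Buildroot)

# Build the ARM toolchain once, as its own out-of-tree Buildroot build
buildroot_target(toolchain-arm
  DEFCONFIG configs/toolchain_arm_defconfig
  OUTPUT    ${CMAKE_BINARY_DIR}/toolchain-arm)

# The rootfs build consumes that toolchain via the external toolchain
# backend; declaring the dependency makes CMake build the toolchain first
buildroot_target(rootfs-arm
  DEFCONFIG configs/rootfs_arm_defconfig
  OUTPUT    ${CMAKE_BINARY_DIR}/rootfs-arm
  DEPENDS   toolchain-arm)
```

The important property is that the dependency between the two builds is stated explicitly, so the generated build system can order (and parallelise) the 18 Buildroot targets correctly.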

Using the Buildroot.cmake module, we could remove the shell scripts that hardcoded a specific build order and replace them with a CMakeLists.txt file that is a bit longer, but is explicit and exact about the inputs, outputs and interdependencies of the build process. Each Buildroot build also runs as an out-of-tree build, so there’s no risk of mixing two targets together and having to start from scratch.

This clarity was vital for our further work on reducing the time spent doing clean rebuilds of the firmware. We had more or less reached the limit of optimising build times. The next step was to implement a reliable caching mechanism, so we could avoid building something altogether if it hadn’t changed. We will talk about this in our next article.