Observations on development and related issues

Monthly Archives: February 2012

The most common step in creating software is building it. Usually this means running make or equivalent and waiting. This step is so universal that most people don’t even think about it actively. If one were to look at what the computer is doing during a build, one would see compiler processes taking 100% of the machine’s CPUs. Thus the system is working as fast as it possibly can.

Right?

Some people working on Chromium doubted this and built their own replacement for Make, called Ninja. It is basically the same as Make: you specify a list of dependencies and then tell it to build something. Since Make is one of the most used applications in the world and has been under development since the 70s, surely it is already as fast as it can possibly be.

Right?

Well, let’s find out. Chromium uses a build system called Gyp that generates makefiles. Chromium devs have created a Ninja backend for Gyp. This makes comparing the two extremely easy.
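Switching between the two is roughly a matter of two commands (paths as in a typical Chromium checkout; the exact details may vary between versions):

    # Tell Gyp to emit Ninja build files instead of makefiles.
    GYP_GENERATORS=ninja ./build/gyp_chromium

    # Build with Ninja instead of make.
    ninja -C out/Debug chrome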

Compiling Chromium from scratch on a dual core desktop machine takes around 90 minutes with makefiles; Ninja builds it in less than an hour. On a quad core machine the makefiles take ~70 minutes and Ninja ~40. Running make on a tree with no changes at all takes 3 minutes; Ninja takes 3 seconds.

So not only is Ninja faster than Make, it is faster by a huge margin, and especially in the use case that matters most to the average developer: small incremental changes.

What can we learn from this?

There is an old (and very wise) saying that you should never optimize before you measure. In this case the measurement seemed to indicate that nothing was to be done: CPU load was already maximized by the compiler processes. But sometimes your tools give you misleading data. Sometimes they lie to you. Sometimes the “common knowledge” of the entire development community is wrong. Sometimes you just have to do the stupid, irrational, waste-of-time thingie.

This is called progress.

PS I made quick-n-dirty packages of a Ninja git checkout from a few days ago and put them in my PPA. Feel free to try them out. There is also an experimental Ninja backend for CMake, so anyone with a CMake project can easily see what kind of speedup they would get.
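For a CMake project the experiment goes roughly like this (assuming a CMake build that has the Ninja generator):

    mkdir build && cd build
    cmake -G Ninja ..   # generate build.ninja instead of makefiles
    ninja               # build as usual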

One of the most annoying things about creating shared libraries for other people to use is API and ABI stability. You start going somewhere, make a release, and then realize that you have to totally change the internals of the library. But you can’t remove functions, because that would break existing apps. Nor can you change structs, the meanings of fields, or do any other maintenance that would make your job easier. The only bright spot on the horizon is that eventually you can do a major release and break compatibility.

We’ve all been there and it sucks. If you choose to ignore stability because, say, you have only a few users who can just recompile their stuff, you get into endless rebuild cycles and so on. But what if there was a way to eliminate all this in one, swift, elegant stroke?

Well, there is.

Essentially every single library can be reduced to one simple function call that looks kind of like this:
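(A sketch; the names here are invented, only the shape matters.)

    /* The one and only entry point. Every operation the library
     * will ever support goes through this single generic call. */
    void *do_action(void *object, const char *command, void *data);

Want to start the car? Call do_action(car, "start_engine", NULL). Want to know how fast it is going? Call do_action(car, "get_speed", NULL). One function, infinite extensibility, and the ABI never needs to change again.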

Dynamic detection of features and changing behavior based on them makes apps work with every version of the library ever. The car could even be changed into a moped, tractor, or a space shuttle and it would still work.

For added convenience the basic commands could be given as constant strings in the library’s header file.
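For instance (continuing with the invented car example):

    /* In the library's public header. These strings are now the
     * entire documented command vocabulary. */
    #define CAR_START_ENGINE "start_engine"
    #define CAR_GET_SPEED    "get_speed"
    #define CAR_STOP_ENGINE  "stop_engine"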

Deeper analysis

If you, dear reader, after reading the above text thought, even for one microsecond, that the described system sounds like a good idea, then you need to stop programming immediately.

Seriously!

Take your hands away from the keyboard and just walk away. As an alternative I suggest taking up sheep farming in New Zealand. There’s lots of fresh air and a sense of accomplishment.

The API discussed above is among the worst design abominations imaginable. It is the epitome of Making My Problem Your Problem. Yet variants of it keep appearing all the time.

The antipatterns and problems in this one single function call would be enough to fill a book. Here are just some of them.

Loss of type safety

This is the big one. The arguments in the function call can be anything and the result can be anything. So which one of the following should you use:
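(Continuing the invented car example from above.)

    int speed;

    /* Result through the out parameter? */
    do_action(car, "get_speed", &speed);

    /* Or through the return value? */
    speed = *(int *) do_action(car, "get_speed", NULL);

    /* Or was the command string just "speed" after all? */
    do_action(car, "speed", &speed);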

You can’t really know without reading the documentation. Which you have to do every single time you use any function. If you are lucky, the calling convention is the same for every function. It probably is not. Since the compiler does not and cannot verify correctness, what you essentially have is code that works either by luck or by faith.

The only way to know for sure what to do is to read the source code of the implementation.

Loss of tools

There are a lot of nice tools to help you. Things such as IDE code autocompletion, API inspectors, Doxygen, even the compiler itself as discussed above.

If you go the generic route you throw away all of these tools. They represent dozens upon dozens of man-years of work spent just to make your job easier. All of that is gone. Poof!

Loss of debuggability

One symptom of this disease is putting data in dictionaries and other high level containers rather than in plain variables to “allow easy expansion in the future”. This is workable in languages such as Java or Python, but not in C/C++. A quick look at a gdb session shows why this is a terrible idea.
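Roughly (hypothetical types; the exact container does not matter):

    /* A plain struct: "print p" in gdb shows x and y instantly. */
    struct point {
        int x;
        int y;
    };

    /* The "future-proof" version: the fields live behind a generic
     * string-keyed container. "print obj" in gdb shows nothing but an
     * opaque pointer; to find x you get to walk the container's
     * internals by hand, one dereference at a time. */
    struct object {
        void *properties; /* maps field names to values, somehow */
    };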

Your objects have now become undebuggable. Or at the very least extremely cumbersome, because you have to dig out the information you need one tedious step at a time. If the error is non-obvious, it’s source code diving time again.

Loss of performance

Functions are nice. They are type-safe, easy to understand and fast. The compiler might even inline them for you. Generic action operators are not.

Every single call to the library needs to first go through a long if/else tree to inspect which command was given, or do a hash table lookup, or something similar. This means that every single function call turns into a massive blob of code that destroys branch prediction and pipelining and all those other wonderful things HW engineers have spent decades optimizing for you.
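Inside the library the dispatcher tends to look something like this (a sketch, with the invented names from above):

    #include <string.h>

    /* Every call pays for this string comparison cascade before any
     * actual work begins. A plain function would jump straight to
     * the right code. */
    void *do_action(void *object, const char *command, void *data) {
        if (strcmp(command, "start_engine") == 0) {
            /* ... */
        } else if (strcmp(command, "get_speed") == 0) {
            /* ... */
        } else if (strcmp(command, "stop_engine") == 0) {
            /* ... */
        } /* ...and dozens more branches... */
        return NULL; /* unknown command */
    }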

Loss of error-freeness

The code examples above have been too clean. They have ignored the error cases. Here are two lines of code to illustrate the difference.
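Something like this (invented names, as before):

    int speed = car_get_speed(car);                /* a real function: cannot fail */
    void *r = do_action(car, "get_speed", &speed); /* generic: must always be checked */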

Since the generic call cannot provide any guarantees the way a real function can, you always have to inspect the result it provides. Maybe you misspelled the command. Maybe this particular object does not have an x value. Maybe it used to, but the library internals have changed (which was the point of all this, remember?). So the user has to inspect every single call, even for operations that cannot possibly fail. Because they can, they will, and if you don’t check, it is your fault!

Loss of consistency

When people are confronted with APIs such as these, the first thing they do is to write wrapper functions to hide the ugliness. Instead of a direct function call you end up with a massive generic invocation blob thingie that gets wrapped in a function call that is indistinguishable from the direct function call.
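That is, something along these lines (invented names, as ever):

    /* Every user of the library ends up writing this same wrapper,
     * slightly differently, with slightly different bugs. */
    static int car_get_speed(void *car) {
        int speed = 0;
        do_action(car, "get_speed", &speed);
        return speed;
    }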

The end result is an abstraction layer covered by an anti-abstraction layer; a concretisation layer, if you will.

Several layers, actually, since every user will code their own wrapper with their own idiosyncrasies and bugs.

Loss of language features

Let’s say you want the x and y coordinates from an object. Usually you would use a struct. With a generic getter you cannot, because a struct implies memory layout and is thus part of the API and ABI. Since we can’t have that, all arguments must be elementary data types, such as integers or strings. What you end up with are constructs such as this abomination here (error checking and the like omitted for sanity):
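Perhaps along these lines (hypothetical names, error checking still omitted):

    /* Two separate round trips through the generic machinery to
     * fetch what should have been a single two-field struct. */
    int x = *(int *) do_action(obj, "get_x", NULL);
    int y = *(int *) do_action(obj, "get_y", NULL);

With a struct it would be one type-safe call returning both coordinates at once.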

How can one avoid this trap? There is really no definitive answer, but there is a simple guideline to help you get there. Simply ask yourself: “Is this code solving the problem at hand in the most direct and obvious way possible?” If the answer is no, you probably need to change it. Sooner rather than later.

I played around with btrfs snapshots and discovered two new interesting uses for them. The first one deals with unreliable operations. Suppose you want to update a largish SVN checkout but your net connection is slightly flaky. The reason can be anything: bad wires, an overloaded server, electrical outages, and so on.

If SVN is interrupted mid-transfer, it will most likely leave your checkout in an inconsistent state that can’t be fixed even with ‘svn cleanup’. The common wisdom on the Internet is that the way to fix this is to delete or rename the erroneous directory and do an ‘svn update’, which will either work or not. With btrfs snapshots you can just take a snapshot of your source tree before the update. If the update fails, nuke the broken directory and restore your snapshot. Then try again. If it works, just get rid of the snapshot dir.
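In shell terms the whole dance is roughly this (assuming the checkout lives in a btrfs subvolume named src):

    # Cheap copy-on-write snapshot before the risky operation.
    btrfs subvolume snapshot src src-backup
    (cd src && svn update)

    # On failure: nuke the wreckage and promote the snapshot.
    btrfs subvolume delete src
    mv src-backup src

    # On success: just drop the snapshot.
    btrfs subvolume delete src-backup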

What you essentially gain are atomic operations on non-atomic tasks (such as svn update). This has been possible before with ‘cp -r’ or similar hacks, but they are slow. Btrfs snapshots can be done in the blink of an eye and, being copy-on-write, they take essentially no extra disk space.

The other use case is erroneous state preservation. Suppose you hack on your stuff and encounter a crashing bug in your tools (such as bzr or git). You file a bug on it and then get back to doing your own thing. A day or two later you get a reply on your bug report saying “what is the output of command X”. Since you don’t have the given directory tree state around any more, you can’t run the command.

But if you snapshot your broken tree and store it somewhere safe, you can run any analysis scripts on it at any time in the future. Even possibly destructive ones, because you can always run them in a fresh snapshot. Earlier these things were not feasible because making copies took time and possibly lots of space. With snapshots they take neither.
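Concretely, something like this (names invented):

    # Preserve the broken state for future forensics.
    btrfs subvolume snapshot broken-tree broken-tree-saved

    # Later: run even destructive analysis on a throwaway copy.
    btrfs subvolume snapshot broken-tree-saved scratch
    (cd scratch && ./analysis-script)
    btrfs subvolume delete scratch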