First I took a look at existing benchmark code that was meant to be used internally at Adobe. Some of the tests could be abstracted (removing all the Adobe specific bits), and some could not. Some were written for very specific compiler bugs and needed to be expanded. And some needed to be split into multiple, more specific benchmarks before they would be useful.

Many of the hand-optimized versions of our code showed recurring patterns in the techniques used to make them faster. If we repeatedly have to apply an optimization by hand, then that optimization is an opportunity to improve the compiler – and that means it needs a benchmark written for it.

I also went through my compiler, performance, and language reference books and thought about areas that could cause performance issues. That added a lot of ideas, but many of them are difficult to test accurately. Also, some of them are such well known and well solved problems that they may never cause a performance issue. But I don’t trust compilers – I prefer to verify that they are doing the right thing.

But the most useful way I've found to come up with items to test is by example.

An application recently showed a slowdown when parsing a certain file type, but only on one platform. A quick profile showed that the function isdigit() was about four times slower on that platform than on others (relative to the other functions involved) and was being called frequently by the file parser. So I wrote a quick benchmark to test isdigit(), and found that the compiler in question shipped a very inefficient implementation of it.

We could have stopped after reporting the bug to the compiler vendor. But shouldn’t other developers know about this? What if related functions are slow and causing problems for other applications? What if the performance regressed on other compilers/platforms or in a later release of this compiler?

So I expanded the quick benchmark into something more general, and added the rest of the common functions from ctype.h. Then I added baseline versions of a few routines for verification and comparison. That's how I found that isspace() was another order of magnitude slower than isdigit() under that compiler, and that both were slower than the obvious lookup table approach to implementing the ctype functions.
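For reference, the "obvious" lookup table approach looks something like the sketch below: one 256-entry table of classification flag bits, indexed by the character value, so each test is a single load plus a bit mask. The names (`my_isdigit` and friends) and the flag layout are my own illustration, not any library's actual implementation, and this toy table handles only the "C" locale.

```c
/* Sketch of a lookup-table implementation of ctype-style functions.
   Names and flag values are illustrative; real libraries also handle
   locales, EOF, and the full set of classification categories. */
enum { DIGIT = 1, SPACE = 2, UPPER = 4, LOWER = 8 };

static unsigned char class_table[256];

/* Populate the table once; a real library would build it statically. */
static void init_table(void) {
    for (int c = '0'; c <= '9'; ++c) class_table[c] |= DIGIT;
    class_table[' ']  |= SPACE;
    class_table['\t'] |= SPACE;
    class_table['\n'] |= SPACE;
    class_table['\v'] |= SPACE;
    class_table['\f'] |= SPACE;
    class_table['\r'] |= SPACE;
    for (int c = 'A'; c <= 'Z'; ++c) class_table[c] |= UPPER;
    for (int c = 'a'; c <= 'z'; ++c) class_table[c] |= LOWER;
}

/* Each test is one table load and one bitwise AND. */
static int my_isdigit(unsigned char c) { return class_table[c] & DIGIT; }
static int my_isspace(unsigned char c) { return class_table[c] & SPACE; }
static int my_isalpha(unsigned char c) { return class_table[c] & (UPPER | LOWER); }
```

A baseline like this serves double duty: it verifies that the library functions return the right answers, and it sets a performance floor that any reasonable implementation ought to meet.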

I wouldn’t have thought to look at the ctype functions for performance problems. They’re so old and so widely used that I didn’t immediately think they could cause application-level slowdowns. Hmm, what other common C library functions might not be performing well? What other assumptions are we making about our compilers and support libraries that could be hiding important performance problems? What else should we be testing?