Code Review

Why Names Matter. A common motif in many fantasy settings is the idea that naming something gives you power over it.

And nowhere is that more true, in my opinion, than when writing code. Names in code are simultaneously the most and least important thing. After all, a large part of the compiler or assembler’s job is to take all the names of variables, functions, and classes and get rid of them in favour of addresses, indices, or register numbers. It’s like humans giving names to cats – no self-respecting cat is ever going to actually use the name a mere human gave them, even if they will occasionally deign to show interest when called by it. So the names don’t matter. The reason they do, of course, is that only a relatively small proportion of the average program is written for the compiler’s benefit; the rest is written for other humans. So why is it that, as programmers, we tend to treat our fellow humans so badly? The big problem is that in the short term, and in some cases the medium term, it’s easy to get away with it.
Performance metaprogramming. Programming comes down to making a solution with the appropriate set of trade-offs for a set of constraints.

A typical choice is speed versus memory. One easy way to optimize a piece of code is to create specialized versions for different circumstances. Although this seems like an unalloyed good, it often comes down to choosing performance over clarity. Now, I’m not much of one for fancy C++. Given the option I’d program in C99, and I regard the upcoming C++0x standard with the same dread as I would a high school reunion. Still, templates have their uses. This snippet abstracts some SPU code I changed; the SPU, as you know, has only static branch prediction:

    // we anticipate this conditional will be false
    if (__builtin_expect((a > b), 0))
        c += a;
    else
        d += 1;

As it turned out, m_do_expensive is constant for any run of ManifoldJob.

By moving the test into a template parameter, we give the compiler the information it needs to completely elide the test (and it does).
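A minimal sketch of the idea, assuming a simplified stand-in for the real job code (the names here are mine, not the original SPU source):

```cpp
#include <cassert>

// The runtime flag becomes a compile-time template parameter.
// Inside each instantiation the condition is a constant, so the
// compiler drops the untaken branch entirely: no branch, no misprediction.
template <bool DoExpensive>
int manifold_step(int a, int b) {
    if (DoExpensive) {      // constant at compile time
        return a + b;       // stand-in for the "expensive" path
    }
    return b + 1;           // stand-in for the cheap path
}

// The caller selects the instantiation once, outside the hot loop,
// instead of testing the flag on every iteration.
```

The cost is code size: you get one copy of the function per instantiation, which is exactly the speed-versus-memory trade-off mentioned above.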
Teaching the high-performance mindset. A few years ago, I started working on a project where, through various technical decisions, a much larger portion of the team’s programmers would have to write pretty high-performance code.

Now, as I’m sure most of you can imagine, this is easier said than done. Most large teams I know work the same way: most programmers write code that works, and a few people write the code that needs to be fast (including fixing other people’s code that needs to be fast and isn’t). But what happens when, suddenly, the portion of the code that needs to be fast grows tenfold? A hundredfold? Do your high-performance people have the bandwidth to cope with the increase? What if you throw parallelism into the mix? Here are some random thoughts about the process of injecting a high-performance mindset into the brains of programmers who have typically spent their time thinking about other types of problems.

What most programmers know about performance. They know about scale. They know Big-O notation.
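Big-O tells you how cost scales, not what it costs. A toy illustration (entirely my own, not from the post) of why that distinction matters to the high-performance mindset:

```cpp
#include <cassert>
#include <cstddef>

// Both functions are O(n), so Big-O analysis cannot tell them apart --
// yet one does roughly three times the work of the other.
long sum_once(const int* a, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

long sum_thrice(const int* a, std::size_t n) {
    long s = 0;
    for (int pass = 0; pass < 3; ++pass)        // same asymptotic class,
        for (std::size_t i = 0; i < n; ++i)     // ~3x the memory traffic
            s += a[i];
    return s / 3;                               // same result as sum_once
}
```

Knowing scale is necessary but not sufficient: the constant factors that Big-O discards are exactly where most day-to-day performance work happens.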
Designing an API. How do you design an API?

My experience is that, just like with most other issues concerning code formatting and standards, every programmer has their own set of preferences. Which of course means this post is entirely based on my personal views, and I will obviously assume you agree with the choices I make. This is part two in my series of posts about building a profiling library. The previous post covered the base code for measuring elapsed time. Just like the code for that post, the code for this and the next parts will be available through github, released into the public domain: Let’s get down to business!

Consistent. And now we apply these principles to the problem of designing a profiling API. Everything else we add might violate one or more of the rules, so we need to tread carefully. Going forward, we also want a function to enable and disable profiling, as well as a way to insert generic log messages into the profiling data stream.
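The two additions mentioned above could look something like this. This is only a sketch under my own assumptions; none of these names come from the actual library, and the real API may well be a set of free functions rather than a struct:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: an enable/disable switch plus generic log
// messages pushed into the same event stream as timing data.
struct Profiler {
    Profiler() : enabled(false) {}

    void set_enabled(bool on) { enabled = on; }

    // Log messages pass through the same gate as every other event,
    // keeping the API consistent: when profiling is disabled,
    // nothing at all is recorded.
    void log(const std::string& msg) {
        if (enabled)
            events.push_back(msg);
    }

    bool enabled;
    std::vector<std::string> events;  // stand-in for the real data stream
};
```

Routing log messages through the same enabled check as timing events is one way to honour the consistency rule: callers never have to ask which parts of the API respect the switch.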
A Random Walk Through Geek-Space. In my opinion, one of the most insidious forms of technical debt is what I like to call future-coding.

You might also call it over-engineering, second-system syndrome, etc. It’s code written in an overly elaborate and general way in order to pre-emptively handle use cases that, the future-coder reasons, we’ll “probably see in the future”. I find it more treacherous than plain “stupid code” because it predominantly affects smart people.