April 2011 Archives

Let me preface this by saying I'm not a code review expert. I just happen to have experienced everything from a very formal code review meeting to a very informal peer-review system. And I like code reviews. No, I love them. They really do help produce better code overall, and they also help developers familiarize themselves with the codebase. Done properly, reviews end up saving the company significant amounts of money and developer time by catching errors before they make it into production.

So, here's what I've seen. Formal reviews built around checklists and formal meetings are time-consuming, but they are also very thorough--something that's helpful when you have a detailed set of requirements. Slightly less formal peer audits can be useful, but there are a few problems I've noticed. First of all, depending on who reviews the code, different issues tend to be caught. Theoretically, everyone should be working from the same guidelines, but different reviewers pick up on different things. That's human nature. Also, one book on developing code review standards made a very good point: don't sweat the small stuff. Depending on the circumstances, "small stuff" may mean different things to different people. For one person, spaces vs. tabs may not be relevant; for another, maybe it's eliminating the requirement for Hungarian notation. In either case, you need to figure out what works well for you.

Now, here are some of the problems I've seen: because reviews are completed by different people, some reviewers will approve a change with no comments, while others will pick the code apart to the point where there's nothing left. Even more amusing is getting comments that are completely unhelpful or flat-out wrong on working code, while other code is being checked in that won't even compile. This all boils down to the absence of one particularly vital role: the moderator. The moderator is an independent third party whose job it is to check the comments, ensure they are all applicable, and ensure all concerns are properly addressed before the review passes. By placing all the power in a single reviewer, you destroy the system of checks and balances that would otherwise exist. If the reviewer doesn't like your code for some reason, sorry, but not today.

The most effective system I have seen so far was a rather informal one using Atlassian Crucible (yes, I really was pleased with their product), but where reviewers were randomly assigned to each review. A few of the reviewers (a minimum of two) would look the change over for defects and suggestions. Only defects (such as security issues or gross violations of the standards) needed to be corrected; suggestions for improvement were left to the discretion of the programmer. A moderator would observe the comments, decide which ones actually needed to be fixed, and overall steer the discussion in productive directions. After reaching the minimum number of reviews, the review would be closed with a simple pass/fail status, and a failed status would require re-review. The moderator would then ensure all comments were appropriately addressed in future reviews.
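If it helps to see the moving parts of that workflow laid out, here's a minimal sketch of it in Python. To be clear, this is purely illustrative--all the names (`Comment`, `assign_reviewers`, `moderate`, `outcome`) are made up, and it models my description above, not anything Crucible actually does:

```python
import random
from dataclasses import dataclass

# Hypothetical model of a moderated pass-around review.
# Names and structure are illustrative, not taken from any real tool.

@dataclass
class Comment:
    author: str
    text: str
    is_defect: bool  # defects block the review; suggestions do not

def assign_reviewers(team, author, minimum=2):
    """Randomly pick the minimum number of reviewers, excluding the change's author."""
    pool = [person for person in team if person != author]
    return random.sample(pool, minimum)

def moderate(comments, moderator_keeps):
    """The moderator filters the comments down to those actually worth addressing."""
    return [c for c in comments if moderator_keeps(c)]

def outcome(comments):
    """Pass/fail: only remaining defects fail a review; suggestions are optional."""
    return "fail" if any(c.is_defect for c in comments) else "pass"

# Example run: two randomly chosen reviewers comment, and the
# moderator keeps only the defect, dropping the style nitpick.
team = ["alice", "bob", "carol", "dave"]
reviewers = assign_reviewers(team, author="alice", minimum=2)
comments = [
    Comment(reviewers[0], "SQL query built by string concatenation", is_defect=True),
    Comment(reviewers[1], "prefer tabs over spaces here", is_defect=False),
]
kept = moderate(comments, moderator_keeps=lambda c: c.is_defect)
print(outcome(kept))  # a defect remains, so this review fails and needs re-review
```

The point of the sketch is the separation of concerns: reviewers generate comments, the moderator decides which ones count, and only unresolved defects can fail the review.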

Basically, using this method, reviews were fast and simple, and didn't eat up large amounts of anyone's time. At the same time, the comments that came out of a review were, on the whole, helpful, and the process provided a forum for a developer to make their case if they disagreed with some of the comments. Take this for what you will, but I have found moderated informal pass-arounds to be the most effective reviews.