I'm working on a project solo and have to maintain my own code. Usually code review should be done by someone other than the author, so the reviewer can look at the code with fresh eyes. I don't have that luxury. What practices can I employ to more effectively review my own code?

Answer: Checklist & Refresh (7 Votes)

It seems the common sentiment is that self-review is not effective. I disagree, and I think self-review can catch a lot of issues if done thoroughly.

Here are tips from my few years of experience:

Have a rough checklist handy. These are things you want to flag while you read your code.

Take your code review offline. It might sound wasteful, but print out pages you can annotate and flip back and forth, or use the digital equivalent: nicely highlighted PDFs synced to an iPad that is then taken offline. Get away from your desk, so that all you do is review your code, distraction-free.

Do it early in the morning rather than at the end of a working day. A fresh pair of eyes is better. In fact, it can help to have been away from the code for a day before reviewing it afresh.

Just an FYI—these guidelines were part of recommendations made by Oracle a few years ago when I was working there, where the aim was to catch bugs "upstream" before the code went into testing. It helped a lot, although it was considered a boring job by a lot of developers.

Answer: Delay & Document (11 Votes)

First, set your code aside for as long as practical. Work on something else, some other piece of code. Even after a day, you will be amazed at what you will find.

Second, document your code. Many programmers hate to document their code, but make yourself sit down and write out documentation: how to use the code and how it works. By looking at your code in a different way, you will find mistakes.

It has been said that true mastery of a subject is the ability to teach it to someone else. With documentation you are trying to teach someone else your code.

Answer: Go Historical & Static (9 Votes)

The Personal Software Process technique for reviews might be useful, although it relies on having historical data about your work and quality of products.

You start with historical data about your work products, specifically the number and types of defects. There are various methods of classifying defects. You can develop your own, but the idea is that you need to be able to tell what mistakes you are making along the way.

Once you know what kinds of mistakes you make, you can develop a checklist that you can use during a review. This checklist would cover the top mistakes that you make that you think can best be caught in a review (as opposed to using some other tool). Every time you review a work product, use the checklist and look for those mistakes or errors, document them, and fix them. Revise this checklist from time to time to make sure you are focusing on real, relevant problems in your code.

I would also recommend using tool support when it makes sense. Static analysis tools can help find some defects, and some even support style checking to enforce consistency and good code style. (This page offers some explanations of the difference between static code analysis and code review.) Using an IDE with code completion and syntax highlighting can also help you prevent or detect some problems before you click "build". Unit tests can cover logic problems. And if your project is sufficiently large or complex, continuous integration can combine all of these into a regularly-run process and produce nice reports for you.

Answer: Print & Check (2 Votes)

I usually print out all my code, sit down in a quiet environment, and read through it. I find a lot of typos, issues, and things to refactor that way. It's a good self-check that I think everyone should do.

Think you know the best way to review your own code or disagree with the opinions expressed above? Leave your answer in the comments or bring it to the original post at Stack Exchange.

13 Reader Comments

This was a nice read for me. I program as a side task to my survey work, and am the only person I have access to with any knowledge of either the language (I use PowerBasic Win and Dos), or the techniques involved (I'm the lone data-crunching tech geek in a 5 person hydrographic survey company). Consequently, I am the only person who can review my code.

Of all the ideas listed, I really like the documentation exercise, because it does the double duty of both facilitating a thorough review and getting an incredibly important piece of work done. I think the trick to make it most effective is also well exposed in the suggestions above: walk away from your code for a day or two before you review and document it. I think the problem lies in the way we read things. Just as we don't read every letter of every word but pattern match whole words, we also pattern match whole expressions in our own code when we are in the zone. This necessary efficiency is directly counterproductive to the task of review, which is precisely to NOT take anything for granted, especially the internal contents of little hunks of code that our mind normally sees as whole units. When we are away from the code, when our minds have dropped the thousand-detail context we hold while in the coding zone, that is when we can more easily NOT unitize whole hunks of familiar expressions that probably contain a few errors.

Also, when I am writing that documentation a day or two later, I try to make the target audience of my documentation someone other than me, instead of just code comments for myself. I write code comments and notes as though my task is to fully explain the operations to a co-worker who has only a bare grasp of the programming language. I think this approach nicely combines the document technique with rubber ducky in a pro-duck-tive way. And avoids the forever-alone feeling I would have were I actually talking to an inanimate object, even if that's ultimately what I am doing.

Created an account to mention a site I have been working on for a few months related to this exact thing. It may not be exactly what the OP is looking for as all code reviews are currently publicly accessible.

It's still very early in development, so there is a lot changing, but the basics are there of creating a code review and allowing commenting. You don't need to register with the site if you don't mind entering captchas. All comments are left inline with the code so as to maintain context; just click a line in the code to leave your comment. I know it's not very pretty right now (I'm a developer, not a designer), but I'm working with some designers, so hopefully that will change.

Sorry if this seems shameless or spammy. This post just seemed in line with what I am trying to offer folks, and I feel like it's a better option than something like Stack Exchange's Code Review option (I'm a bit biased, though).

I'm not sure you can beyond doing a thorough and careful job of all the things you're supposed to do. Sure, you can come back later, but it will still be your brain rethinking the same problem. You'll probably end up with nearly, but not quite, the same thing.

I think there's benefit to rethinking your code and designs, but you should already be doing that as you go. I wouldn't call it review. It's more like solving the problem multiple times with a different strategy each time to find the best strategy.

And when you're nearly done you should go back through every line of your patch(es) thinking about them in the context of the whole patch.

Properly commenting your code for Doxygen, running it through, and checking the generated docs and call graphs can help you pick up a few things.

Some of the flowcharting tools can help too, as they show up the logic/decision chain.

Haven't yet found similar tools for VHDL/Verilog. Only really used Xilinx or Altera for HDL.

But regardless, self-reviewed code is a bad idea.

Better to write comments first, and pseudocode if needed, then write the code. Not that that always works or is practical. It can be easier when working on embedded systems where you need to set lots of config registers.

OT: it would be nice if Freescale improved their datasheets to be more like Microchip's and clearly explained all the register conditions and the order of setting them to get valid configurations.

1. Write code that is self-explanatory. Break long, complex constructs down into small, independent functional units.
2. If you are using any dirty tricks, document them clearly.
3. If you are documenting your code, do not explain WHAT it does (that should be obvious from function and parameter names); explain HOW it does what it does.
4. Try to compile your code at the highest warning level available without getting any warnings. If it would take a lot of casts to silence the warnings, you should rethink your design.
5. Use static analysis tools WHILE you write the code (e.g. for C++ you can use PC-Lint and the Visual Lint add-in for Visual Studio for background analysis).

3. If you are documenting your code, do not explain WHAT it does (that should be obvious from function and parameter names); explain HOW it does what it does.

And why. Nothing like seeing a huge equation that makes no sense - sure, I can follow the equation, but I don't know why it's there and what you plan to use the results for, especially since this is a method that displays data to a console.

3. If you are documenting your code, do not explain WHAT it does (that should be obvious from function and parameter names); explain HOW it does what it does.

And why. Nothing like seeing a huge equation that makes no sense - sure, I can follow the equation, but I don't know why it's there and what you plan to use the results for, especially since this is a method that displays data to a console.

I can't stress this enough. Usually, you can figure out what the code does, and how, just by reading it. But the motivation to write that piece of code, and to write it in that particular way, is quite difficult to guess. And if you're new to the code and barely understand the architecture of the whole thing (nobody seems to care about UML...), knowing WHY that code was written can help immensely.

Here I want to quote a fine tale related to software productivity metrics and quality, published in Scott Rosenberg's excellent book Dreaming in Code:

There is no reliable relationship between the volume of code produced and the state of completion of a program, its quality, or its ultimate value to a user.

Andy Hertzfeld tells a relevant tale from the early days at Apple about his mentor Bill Atkinson, a legendary software innovator who created Quickdraw and HyperCard. Atkinson was responsible for the graphic interface of Apple's Lisa computer (a predecessor of the Macintosh). When the Lisa team's managers instituted a system under which engineers were expected to fill out a form at the end of each week reporting how many lines of code they had written, Atkinson bridled. "He thought that lines of code was a silly measure of software productivity," wrote Hertzfeld in his account of Macintosh history, Revolution in the Valley. "He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code."

The week that he was asked to fill out the new management form for the first time, Atkinson had just completed rewriting a portion of the Quickdraw code, making it more efficient and faster. The new version was 2000 lines of code shorter than the old one. What to report?

He wrote in the number -2000!

If counting lines of code is treacherous, other common metrics of software productivity are similarly unreliable...