Where I work, we have very few rules about what constitutes a code review. In addition to going through the code, I tend to briefly test out the software I'm reviewing (in my case, it's always our website) just to see if there are any glaring errors, at least if the project is large. There have been cases where a reviewer here gave anywhere from zero to many suggestions in their feedback, yet the site did not load at all or simply did not function as it was supposed to.

There are also times when I look at the code and believe there may be a semantic error, but I would have to test it out to make certain.

Finally there are some very obvious errors that can come up from even brief testing that may not be obvious at all from looking at the code (or perhaps I'm just an inferior reviewer). Moreover, I think that some developers and testers may have a very different mindset in terms of how they use software, so a reviewer may find issues that a tester would not (and certainly vice versa).

I believe that code review is a very important step in the development process for catching errors in maintainability, semantics, consistency, standards, and even syntax. The questions I put forth are: should code reviewers actually test the software? If so, is it their responsibility or should this practice be controlled as part of the development process? If they are to test, how much testing is reasonable? If they find errors in their testing, how much time (if any) should they spend looking at the code to find the error vs. simply providing reproduction steps for the developer to debug on their own?

4 Answers

A good code review will examine the code as it is. This can be difficult for the developer themselves, who tends to read what they intended the code to do rather than what it actually does.

I covered a lot of your question in my answer to trust-factor-in-code-review. I would expect code under review to have already been tested by the developer; further testing should be covered by the various test suites. I'll expand on testing a little, since that is your specific question.

I would answer "No, testing is not part of the code review process." However, there are classes of errors that are easier to spot in a code review than in testing, and there are coding techniques more prone to error that can be identified in a review. For example, a code review can identify poor variable naming, which is difficult to automate.
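As a purely hypothetical illustration (the functions and names below are invented, not taken from anyone's codebase): both versions behave identically and would pass the same tests, but only a human reviewer is likely to object to the first.

    # Hypothetical example: identical behaviour, very different readability.
    def calc(d, x):
        return d * x * 0.2

    def vat_amount(net_price, quantity):
        # Clear names document intent; no automated test enforces this.
        VAT_RATE = 0.2
        return net_price * quantity * VAT_RATE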

It might be appropriate to review the related unit tests. Things to look for include:

Do they test the non-trivial functionality?

Do they test the important functionality?

Do they test important edge cases?

Do they reasonably cover success cases?

Do they reasonably cover failure cases?

For any non-trivial code, you will not be able to cover the full set of functionality in a reasonable amount of time. However, the unit tests should ensure that the code is functional and fails when expected.
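As a rough sketch of what covering both a success case and a failure case can look like, assuming a Python codebase using the standard unittest module (the pricing module and parse_discount_code function are invented for illustration):

    import unittest

    from pricing import parse_discount_code  # hypothetical function under review

    class ParseDiscountCodeTests(unittest.TestCase):
        def test_valid_code_returns_percentage(self):
            # Success case: well-formed input yields the expected value.
            self.assertEqual(parse_discount_code("SAVE10"), 10)

        def test_malformed_code_raises(self):
            # Failure case: the code fails when it is expected to fail.
            with self.assertRaises(ValueError):
                parse_discount_code("not-a-code")

    if __name__ == "__main__":
        unittest.main()

A reviewer skimming tests like these can quickly see whether the items on the list above have an equivalent in the test suite.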

Watch for code that implements functionality already available in libraries. In addition to its own test cases, library code gets heavily tested through use, and may include important optimizations and safety checks. Either way, your programmer's re-implementation won't get fixed when the library does. If library functionality needs to be extended, use a wrapper (developed once for the project or organization). Use as much "not invented here" technology as possible.
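A minimal sketch of the wrapper idea, assuming a Python project that wants slightly stricter URL handling than urllib.parse gives by default (the wrapper name and the extra rule are invented for illustration):

    from urllib.parse import urlparse, ParseResult

    def parse_http_url(url: str) -> ParseResult:
        # Project-wide wrapper: delegate the parsing to the standard library
        # and add only the project-specific rule (here: http/https only).
        result = urlparse(url)
        if result.scheme not in ("http", "https"):
            raise ValueError(f"unsupported scheme: {result.scheme!r}")
        return result

If the standard library later fixes a parsing bug or adds a safety check, the wrapper picks it up for free; a hand-rolled parser would not.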

As you do reviews, build up a checklist of things to check. This will help you ensure each code review is as good as you can make it.

As you identify problems, you may also want to keep a checklist of the problems each individual tends to introduce. Ask developers for a list of the common errors they make and use it; share your list with them, and check that developer's future code for those problems. As a developer improves, problems they have learned to avoid can be removed from their list.

In my opinion it depends on how much test coverage the code changes have, and what sort of testing your QA team is performing. During review, you can see in the code the various inputs that are handled. If tests correctly cover all of these inputs, then I would think it'd be a waste of time for the code reviewer to manually test these. However, if some (or all) of these inputs are not covered by tests, it would be prudent for the code reviewer to test these inputs, as the QA team, who presumably does not have access to the source, will not necessarily hit these cases in their testing.
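A hedged sketch of what "tests cover the inputs handled in the code" might look like, again with invented names, using unittest's subTest so each handled input gets its own check:

    import unittest

    from shipping import shipping_cost  # hypothetical function under review

    class ShippingCostTests(unittest.TestCase):
        def test_each_handled_region(self):
            # One sub-test per input the code explicitly branches on.
            cases = {"domestic": 5.0, "eu": 9.0, "international": 15.0}
            for region, expected in cases.items():
                with self.subTest(region=region):
                    self.assertEqual(shipping_cost(region), expected)

If the reviewer sees a branch in the code with no corresponding case like these, that gap is worth either a quick manual test or a review comment.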

In the case where there is virtually no test coverage (e.g. at my current company), I would say it is virtually a necessity to have the code reviewer perform functional testing.

I sympathize with having very few requirements around code review. It's also my understanding, although I can't remember why, that unstructured code review isn't effective at all. The Fagan Method is a very tightly structured approach to code review.

That being said, I can't see why testing shouldn't be a small part of code review. Without running a few tests, a reviewer may only have a vague suspicion of what's wrong with a piece of code.

By the time the code gets to a code review, the developer should have written unit tests, successfully built both the code under review and the unit tests, and have all currently-written unit tests passing. Because of this, the code reviewer should not have to test the code - the author of the code should have already done it. If a developer submits code for a code review that does not execute, compile, or pass tests, they should be held accountable for wasting the time of everyone involved in the code review.

That said, it might be a good idea to also review the unit tests that correspond to the code under review to ensure that all the essential cases are covered, there are no extraneous test cases that would slow down test execution, and perhaps point out some other cases that should be covered if there's sufficient time and/or resources. Improving test quality can only improve the overall quality of the project.