4 Answers

At my work we have the following procedure for code reviews. It has worked well for us so far, and we have found it to be very time-efficient, especially in terms of man-hours. We do not allocate any specific time to the reviews: every commit or merge to the trunk must be reviewed, and it takes as long as it takes for the reviewer to OK it.

Edit:
The time it takes, of course, depends on the magnitude of the change. Small features and bug fixes take minutes. Large new features, refactorings, or changes that affect many parts of the system can take half a day to review and another day to address all the issues that come up as a result.

To make this system work, it is crucial to commit to the trunk often, so that the changes stay a manageable size. You do not want a situation where you have to review a year's worth of somebody's code.

In our project, every significant change to the system is reviewed by the team leader, or together with another developer who will be the main "consumer" of the new module. We talk on Skype and either use Rudel in Emacs (a plugin for collaborative editing that lets several users edit the same file live), or TypeWith.me (Piratepad), or one of us shares his screen over Skype.

It's hard to quantify this, because mundane changes, like new views, pages, etc., are not reviewed. We do review new modules, major updates, and refactorings. For big changes, code review can take from 10% to 30% of the time, but it's worth it.

I can say that pair programming, where two programmers actually edit the same file at the same time rather than just sitting at the same computer, is a lot better than the usual office practice of looking over someone's shoulder.

For simple things like naming conventions and scope errors we use our own or open-source automatic tools (jslint, pylint, pyflakes, pep8). And we don't limit commits and pushes: we use Mercurial, which makes branching and merging very easy (easier, I have to say, than in Git). Bugs are not a code review matter.
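As a rough sketch of how such an automated check might be wired into a commit, assuming Python: the `run_checks` helper and the exact linter list here are illustrative, not our actual hook.

```python
"""Sketch of an automated style gate: run whichever of the named
linters is installed over a set of files, and report their exit
statuses. The helper names are hypothetical, not a real hook API."""
import shutil
import subprocess

LINTERS = ["pylint", "pyflakes", "pep8"]  # pep8 is nowadays called pycodestyle


def run_checks(paths):
    """Run every installed linter over the given files.

    Returns a dict mapping linter name to its exit status. A tool
    that is not installed is simply skipped, so the check degrades
    gracefully on machines without the full toolchain.
    """
    results = {}
    for tool in LINTERS:
        if shutil.which(tool) is None:
            continue  # linter not installed on this machine
        proc = subprocess.run([tool, *paths], capture_output=True, text=True)
        results[tool] = proc.returncode
    return results
```

A Mercurial `precommit` hook could then call `run_checks` on the files about to be committed and veto the commit if any linter returned a nonzero status, so style nits never reach the human reviewer.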

We hold team meetings where changes and new features are announced, but not everyone really pays attention there. We should probably do a bit more code review.

As for studies, Smart Bear Software will send you a small book, Best Kept Secrets of Peer Code Review, for free. It contains a number of articles on various aspects of code review, including studies of how much time reviews should take and how effective they are.

Every organisation and code base is different, so it is difficult to get an industry-wide value.
If you are really serious, you should start collecting metrics. That is, do the code review until it is satisfactorily done, including rework, and record the data in a database (LOC, code complexity, programming language, time spent, etc.). Then also collect metrics on your defect rate during testing. As long as code review keeps reducing that rate, it should pay for itself. If defects do come back from testing, collect metrics on how much time was spent fixing them. Build up this data in your organisation, create baselines, and you will be able to predict the cost quite accurately. The terms to search for further learning are Cost of Quality and Cost of Poor Quality.

The only caveat is that this can start to become bureaucratic, and how well it works depends on the organisation's culture.