Continuous Integration with Jenkins

The first tool to consider is the one that will automatically invoke every other tool for each modification of your source code. It is really important to have software to do this, for several reasons:

It does boring and repetitive tasks

It always does the same thing, whereas a developer may sometimes make mistakes

It can build your software 24 hours a day

It can build the software for each modification of your source code

The standard steps a continuous integration tool should perform are:

Detect modifications in the source code repository (svn, git, …)

Build the software (gcc, Microsoft Visual Studio, …)

Use compiler features to avoid potential issues; for example, zero warnings is a best practice.

Run unit tests

Run automatic integration tests

Do some custom checks on the source code

Use custom scripts to check coding guidelines

Use tools to do static source code analysis (Coverity, …)

Build release packages
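The steps above can be sketched as a small pipeline driver. This is a minimal illustration, not a real CI server: the step names and commands (make targets, check_guidelines.py, cov-analyze) are hypothetical placeholders you would replace with your own tools.

```python
import subprocess
import sys

# Hypothetical step list: replace each command with your real build,
# test and analysis tools.
STEPS = [
    ("build", ["make", "all"]),
    ("unit tests", ["make", "test"]),
    ("coding guidelines", ["python", "check_guidelines.py"]),
    ("static analysis", ["cov-analyze", "--dir", "cov-out"]),
    ("release package", ["make", "package"]),
]

def run_pipeline(steps):
    """Run each step in order and stop at the first failure."""
    for name, cmd in steps:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"step '{name}' failed with exit code {result.returncode}")
            return False
    return True
```

In practice a CI server such as Jenkins plays this role, triggering the steps on every commit; the sketch only illustrates the ordering and the fail-fast behaviour.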

Static Source Code Analysis with Coverity

While peer code review is necessary for software changes, doing part of the job with a tool saves time and increases quality. There are several static source code analysis tools on the market, and they find an incredible number of defects that developers have left behind, sometimes because they were junior developers, sometimes because the algorithm was complex and the defect was not obvious.

One of the most important points to check when evaluating this kind of tool is the false-positive rate. If the rate is too high, your developers will waste precious time analyzing defects that are not real problems. However, a zero false-positive rate is probably not achievable: if a tool produces no false positives, it most likely also finds very few defects. Moreover, it is worth analyzing the false positives themselves; they are often linked to complex source code that even a human would have trouble analyzing. That is why some advise fixing all false-positive findings as well: it simplifies the source code and helps developers fully understand what the program does.

It is important to note that static source code analysis can help find bugs in your software, but also errors that are not yet bugs and could become bugs after a later modification.

I used Coverity on a one-million-SLOC code base. The first time I ran it, I was really impressed by its findings. It helped improve quality from the very first run. After that, it is necessary to integrate the tool with the continuous integration system in order to maintain the quality level.

Efficient Code Review with ReviewBoard

When managing a software project with several teams and many developers scattered over several sites, it is sometimes difficult to maintain code quality: you have to ensure that changes are technically correct and fit the coding rules and the software architecture. For this, remote developers need to communicate about a proposed change or the correction of a defect. Teams often start by setting up a process based on patch files sent by email. There are several drawbacks to this method:

The emails may not be archived and visible to all developers

The comments on the reviewed code are inserted in the email, formatted differently from one developer to another, which makes it hard for someone else to take over the work

The process is completely manual which increases the risk of errors and omissions.

It is difficult to produce indicators on code reviews.

One way to overcome these problems is to use a tool that automates certain tasks, and centralizes and standardizes code reviews. ReviewBoard, an open source web tool, does this job quite well.

ReviewBoard can be used in two ways: pre-delivery and post-delivery reviews. I use both approaches: pre-delivery when there is an identified risk in the delivery or doubt about the solution, and post-delivery to systematically inform the maintainer of a module that a delivery was made in his scope, avoiding unpleasant surprises that would otherwise be discovered much later, with more damage.

Using pre-delivery reviews

In this mode, the developer who wants to submit a code change generates a patch file and manually creates a new code review request in ReviewBoard. This is done simply by uploading the file through the tool's web interface. Once the review request is created, you just add the reviewers to whom you want to submit the change. When the request is published, the reviewers are automatically notified by email.

Using post-delivery reviews

ReviewBoard provides a useful command line tool to publish new code review requests. It is therefore very easy to integrate with a continuous integration system (e.g. Jenkins), creating a review request for each new modification detected in the code base. In practice, I wrote a script that lets developers subscribe to a particular piece of code: a subscribed developer is automatically added as a reviewer when a change is delivered in his scope, and he is notified of the change by email.
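A sketch of such a subscription script in Python, assuming a hypothetical SUBSCRIPTIONS table mapping path prefixes to developer names. The `rbt post` command with its `--publish` and `--target-people` options comes from RBTools, ReviewBoard's command line client; the rest is illustrative.

```python
import subprocess

# Hypothetical subscription table: maps a path prefix in the repository
# to the developers who want to review every change in that scope.
SUBSCRIPTIONS = {
    "src/model/": ["alice"],
    "src/view/": ["bob", "carol"],
}

def reviewers_for(changed_files):
    """Collect every subscriber whose path prefix matches a changed file."""
    reviewers = set()
    for path in changed_files:
        for prefix, names in SUBSCRIPTIONS.items():
            if path.startswith(prefix):
                reviewers.update(names)
    return sorted(reviewers)

def post_review(revision, changed_files):
    """Publish a post-delivery review request with RBTools' `rbt post`."""
    reviewers = reviewers_for(changed_files)
    cmd = ["rbt", "post", "--publish",
           "--target-people", ",".join(reviewers), revision]
    subprocess.run(cmd, check=True)
```

The CI job would call `post_review` with the revision identifier and the list of files touched by the change, both of which the version control system provides.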

Code review

Once the code review request has been created, it is possible to annotate the modified code line by line or in groups of lines. It is also possible to make general comments on the change. The reviewer then publishes the review and the author of the request is notified by email. The author can then fix his delivery and resubmit a change: the process starts again.

In terms of ergonomics, the tool is again well designed: the ReviewBoard code viewer supports syntax highlighting, and reading the code and its modifications is pleasant. You can really concentrate on the review without having to fight the tool.

Coding Standard Checking

It is important to define a coding standard when developing your software. And it is just as important to check that the coding standard is actually applied.

To avoid doing too much checking in peer reviews, which is time consuming, it is easy to integrate home-made scripts into the continuous integration system. Most of the time, simple regular expressions applied to the source code are enough to find errors.
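For example, a minimal guideline checker could look like the following sketch. The three rules shown (no goto, no tabs, 120-character line limit) are hypothetical examples; a real coding standard would define its own list.

```python
import re

# Hypothetical rules: each pair is (compiled pattern, message).
RULES = [
    (re.compile(r"\bgoto\b"), "goto is forbidden by the coding standard"),
    (re.compile(r"\t"), "use spaces instead of tabs"),
    (re.compile(r".{121}"), "line longer than 120 characters"),
]

def check_source(text):
    """Return a list of (line number, message) violations found in `text`."""
    violations = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                violations.append((lineno, message))
    return violations
```

The CI job runs this over every source file and fails the build when the returned list is not empty, so violations are reported with their line numbers in the build log.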

I have also used this kind of script successfully to verify that the software architecture is not broken. For example, if your software is organized following the Model-View-Controller (MVC) pattern and models, views and controllers are in different folders, it is easy to check that no view file is included in a model file.
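Such an architecture check can be sketched as follows, assuming a C/C++ code base where view headers are included with a path starting with "view/" (the directory names are illustrative):

```python
import re
from pathlib import Path

# Matches both #include "..." and #include <...> directives.
INCLUDE_RE = re.compile(r'#include\s+["<]([^">]+)[">]')

def broken_mvc_includes(model_dir):
    """List model files that include a header from the view layer.

    Assumes the layout described in the text: view headers are
    included with a path starting with "view/".
    """
    offenders = []
    for path in Path(model_dir).rglob("*.[ch]*"):
        for match in INCLUDE_RE.finditer(path.read_text(errors="ignore")):
            if match.group(1).startswith("view/"):
                offenders.append((str(path), match.group(1)))
    return sorted(offenders)
```

Running this in the CI system and failing the build when the list is not empty keeps the layering intact as the code base grows.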

Improving the Quality of Legacy Source Code

Sometimes you will have to build a better development chain on top of an old source code base with many quality defects. In this case, it is often not possible to fix all quality issues before applying the new development chain. The right approach is to take the current quality level as the reference: the target is to always do better, never worse.

Let's take the example of compilation warnings. Suppose you have 1024 compilation warnings in the software:

In the script that checks the number of warnings, make the build fail if at least one new warning has been introduced. If the warning count is still 1024, do not break the build.

If the current build has fewer warnings, for example 1020, take this as the new reference

Return to the first step while warnings remain

When no warnings remain, change your compilation flags to treat warnings as errors (for example -Werror with gcc)

Communicate this rule to your developers, and you will see the number of warnings decrease until it reaches 0. This can take several weeks or several months, depending on your software.
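The ratchet described above can be sketched in a few lines of Python. The reference count is stored in a file kept by the CI job; the file name and the way the current warning count is obtained are assumptions, not part of any particular tool.

```python
from pathlib import Path

# Hypothetical location of the stored reference, kept by the CI job.
REFERENCE_FILE = Path("warnings_reference.txt")

def check_warning_ratchet(current_count, reference_file=REFERENCE_FILE):
    """Fail the build when warnings increase; lower the bar when they decrease.

    Returns True when the build may proceed.
    """
    if reference_file.exists():
        reference = int(reference_file.read_text())
    else:
        reference = current_count  # first run: current level becomes the reference
    if current_count > reference:
        print(f"build failed: {current_count} warnings, reference is {reference}")
        return False
    if current_count < reference:
        print(f"new reference: {current_count} warnings (was {reference})")
    reference_file.write_text(str(current_count))
    return True
```

The CI job would count the warnings in the build log (for example by grepping for the compiler's warning marker) and pass that number to this check; once the reference reaches zero, the script becomes redundant and the compiler's warnings-as-errors flag takes over.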