In a previous post I wrote about the benefits of frequent local commits and publishing them as a perfect sausage; that is, a single, perfect commit. Today, I want to slightly adjust my advice.

It seems obvious that perfect sausages make it easier to track and review changes to a code base, but that’s not necessarily always the case. Let me explain.

Imagine a typical scenario. Developer Joe needs to implement a small feature. While coding away, he suddenly realizes that the existing code is not really prepared for the new functionality: some rework of the class hierarchy is necessary. Being the clean coder that he is, he also renames quite a few really bad identifiers, factors out some common functionality, and adds missing comments here and there. Eventually, he implements the feature, codes a little, tests a little, refactors some more, and finally releases everything (squashed together) as a perfect sausage.

Imagine further that busy developer Sue has been given the task of peer-reviewing what Joe implemented. Naturally, she already has a lot of other work on her plate, and when she takes a look at Joe’s changeset, she is shocked: to implement this little feature, Joe changed 800+ lines of code, far more than the 50 lines she expected. Sue has to learn the new class hierarchy and skim many editorial changes. By the time she finally reaches the meat of Joe’s change, her attention has dropped to a level where she might overlook that Joe introduced a nasty off-by-one error (both in the implementation and the unit tests, of course).

I believe it would have been much easier for Sue if Joe had released his initial refactorings (possibly spread over many small local commits) as an atomic “refactoring sausage” first, followed by a smaller atomic “functionality sausage”. Then she could have focused on one change at a time, or even skipped the refactoring altogether and reviewed only the functional modifications. Other developers could learn about the new class hierarchy just by looking at the refactoring commit, without the burden of all the changes needed for the new feature.
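Concretely, the difference for the reviewer can be sketched with plain Git. The repository, file names, and commit messages below are invented for illustration:

```shell
#!/bin/sh
# Sketch: two separate commits let a reviewer inspect each change in isolation.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email joe@example.com
git config user.name Joe

# Some pre-existing code.
echo "class Widget" > widget.txt
git add widget.txt && git commit -qm "initial code"

# First sausage: the refactoring, on its own.
echo "class Gadget" > widget.txt
git commit -qam "refactoring: rename Widget to Gadget"

# Second sausage: the small feature, on its own.
echo "feature code" > feature.txt
git add feature.txt && git commit -qm "feature: add the small feature"

# Sue can now review either change independently:
git show --stat HEAD~1   # just the refactoring
git show --stat HEAD     # just the functionality
```

A reviewer who trusts the automated tests could even skip `HEAD~1` entirely and look only at the functional commit.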

Another benefit of doing larger refactorings as a discrete step is reduced overall risk: a) any regressions will likely be caught by the continuous integration machinery, and b) developers working in the same area are less likely to run into merge conflicts.

Differentiating between refactoring and functional commits requires discipline, though. If you find out during feature development that some refactoring is necessary first, you just stash your current changes, do the refactoring, release the refactored code, pop your original changes back, and continue working on the feature. But in my view, the overall benefit outweighs this little inconvenience.
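With Git, that workflow might look like the following minimal sketch (all file names and commit messages are invented):

```shell
#!/bin/sh
# Sketch of the stash-based discipline: refactor on a clean tree,
# release it separately, then resume the feature work.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email joe@example.com
git config user.name Joe
echo "class Widget" > widget.txt
git add widget.txt && git commit -qm "initial code"

# 1. Feature work is underway when the need for a refactoring surfaces.
echo "half-finished feature" > feature.txt
git add feature.txt

# 2. Set the unfinished feature work aside.
git stash push -q

# 3. Do the refactoring on a clean tree and release it on its own.
echo "class Gadget" > widget.txt
git commit -qam "refactoring: rename Widget to Gadget"

# 4. Bring the feature work back and carry on.
git stash pop -q
git add feature.txt
git commit -qm "feature: finish the small feature"
```

Note that `git stash pop` can produce conflicts if the refactoring touched the same lines as the stashed feature work; those have to be resolved before continuing.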

Some practical advice: First, don’t get religious about it! If only trivial refactorings are required (like changing a name or comment here and there), it’s probably not worth the effort. Instead, do it as you go, or, for extra credit, as a “refactoring sausage” after you have released the “functionality sausage”. Second, it might be useful to establish a convention that makes it clear from the commit message what category a particular commit belongs to. Here is an example that discriminates at a fine level of granularity:
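One possible convention (the category names here are my own illustration, not a fixed standard) is to prefix every commit message with its category:

```
[feature]     adds or extends user-visible functionality
[bugfix]      fixes incorrect behavior
[refactoring] restructures code without changing behavior
[docs]        touches only comments or documentation
[style]       renames, formatting, and other cosmetic changes
```

With such prefixes, a reviewer can tell from `git log --oneline` alone which commits carry functional changes and which ones are safe to skim.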

The take-away is this: big commits that mix major refactorings and functionality are often detrimental to traceability and make reviews harder. As a general rule, don’t mix significant functional and non-functional changes: a sausage is easier to digest when it’s consumed in smaller pieces.