Test Before or Test After, that is the Question

In Jeff Langr’s blog, Jeff responded to an assertion (from someone Jeff calls Schmoo) that writing tests after developing a unit of production code takes less time than using TDD to create production code and its tests. For starters, I am happy the discussion is about when to write the unit tests and not if.

I think a model would help us talk about this issue. Real numbers for the model would be even better, but those are hard to come by; for now, let's define the model and hope someone can later put numbers to it.
If we are interested only in production and test code time, and not in other factors like better design and its future savings (though I am interested in those too), we can look at these areas where time is spent:

TProdCode is the time to write the production code

TTestCode is the time to write the test code

TFindMistakes is the time to find the mistakes the tests reveal

TFixMistakes is the time to fix those mistakes

TManageFindAndFixBugs is the time to manage the bug list, and track down and fix the bugs that don’t get caught by tests

“Done” for both TDD and TAD is the sum of these factors:

TProdCode + TTestCode + TFindMistakes + TFixMistakes
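The model above can be sketched as a tiny calculation. All numbers here are illustrative assumptions, not measurements; the point is only that the two approaches distribute time differently across the same four terms.

```python
# Toy done-time model. Every number below is a hypothetical assumption
# (hours) chosen only to illustrate the shape of the comparison.

def done_time(prod, test, find, fix):
    """Done = TProdCode + TTestCode + TFindMistakes + TFixMistakes."""
    return prod + test + find + fix

# Illustrative profiles: TAD spends a bit less writing code and tests,
# but more finding and fixing the mistakes the tests reveal.
tdd = done_time(prod=10, test=5, find=0.5, fix=1)
tad = done_time(prod=8, test=4, find=3, fix=3)

print(f"TDD done-time: {tdd} h")  # 16.5 h with these assumed inputs
print(f"TAD done-time: {tad} h")  # 18.0 h with these assumed inputs
```

With these made-up inputs TDD comes out ahead; flip the assumptions and TAD wins. That is exactly why the argument below is about which terms dominate in practice.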

Schmoo’s assertion, if I have it right, is that TTAD-Done < TTDD-Done. Jeff asserts the opposite.

Schmoo thinks TTestCode for TAD is less because fewer tests are needed. I think he would also argue that TProdCode is lower because you can just write the code all at once rather than evolving it line by line.

Jeff points out that with TAD, the test and fix cycle (TFindMistakes + TFixMistakes) will generally be longer. It will be harder to find problems revealed by the tests in TAD. Let’s dissect that assertion:

TFindMistakes is close to zero for TDD and takes time in TAD, maybe a lot.

TFixMistakes may take longer in TAD because some of the working TAD code might depend on the mistake, or the design might have a critical flaw requiring bigger changes. In TDD mistakes are discovered sooner, so code developed after a mistake is found won’t come to depend on it.

Neither Jeff nor Schmoo mentions the lifetime cost of mistakes that are not discovered by either TAD or TDD. TManageFindAndFixBugs is hard to quantify. I would assert that the thoroughness of the TDD tests results in fewer mistakes that become future bugs.

My experience is that TDD reduces TFindMistakes and TFixMistakes. Schmoo might argue that this saving is negated by writing fewer tests and writing the production code all at once. That’s not my experience, so I would say that TDD wins, or at least ties, TAD. When we factor in TManageFindAndFixBugs, I think TDD is a clear winner. Why? The average cost of managing, finding, and fixing bugs is high, and more mistakes slip by TAD than TDD.
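The lifetime-cost argument can be sketched the same way. Again, every number is a hypothetical assumption; the only claim encoded here is the one in the text: escaped bugs are expensive, and TAD lets more of them escape.

```python
# Toy lifetime-cost model. All inputs are illustrative assumptions (hours),
# not measurements.

def lifetime_time(done, escaped_bugs, cost_per_bug):
    """Done-time plus TManageFindAndFixBugs for mistakes the tests missed."""
    return done + escaped_bugs * cost_per_bug

# Assumed average hours to log, track down, and fix one shipped bug.
COST_PER_BUG = 8

# Assumed: TAD reaches "done" a little later here AND lets more bugs escape.
tdd_total = lifetime_time(done=16.5, escaped_bugs=1, cost_per_bug=COST_PER_BUG)
tad_total = lifetime_time(done=18.0, escaped_bugs=3, cost_per_bug=COST_PER_BUG)

print(f"TDD lifetime: {tdd_total} h")  # 24.5 h with these assumed inputs
print(f"TAD lifetime: {tad_total} h")  # 42.0 h with these assumed inputs
```

Even if you assume TAD ties or slightly beats TDD on done-time, a high cost per escaped bug multiplied by a higher escape count dominates the total, which is the shape of the "clear winner" claim above.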

If we had some real numbers, maybe we could prove it. But again, I’m glad we are talking about when and not if tests should be written.

2 thoughts on “Test Before or Test After, that is the Question”

Nice way to lay it out, James. I also believe T(prodCode) and T(testCode) are diminished. Prod code, because you keep modules more decoupled and cohesive, and thus more reusable; you also keep a simpler, and thus more maintainable, design.

Test code, because it’s a giant pain in the ass to write tests after the fact for code that wasn’t designed for testing. My rough estimate is that when I wrote tests after, it was about a 1-to-1 correspondence: an hour of pure greenfield production development took about an hour to write tests for, sometimes longer, and even then at our diminished coverage levels. With TDD, that same amount of production code takes about an hour and a quarter total, sometimes less, with high coverage and a much better design.

Honestly, I don’t make these arguments to say that TDD is better than TAD, so much as I make them to say that TAD is a waste of time. If you aren’t willing to do TDD, fine, but then invest your money elsewhere. I’d probably put it into more and better acceptance tests.