reflective software development, as a service


TDD: three easy mistakes

I run introductory training in test-driven development quite frequently these days, and each time I do, the same basic mistakes crop up, even among teams who already claim to practice TDD. Here are the three mistakes I see most often:

1. Starting with degenerate or edge-case tests:

These feel easy to write, and they give a definite feeling of progress. But that is all they give: a feeling of progress. Tests like these really only prove to ourselves that we can write a test.
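A commenter below quotes one such test by name, wordCounterCanBeCreated; the rest of this sketch is an assumption about what it might have looked like, with a plain main method standing in for a JUnit test:

```java
// Hypothetical reconstruction of a degenerate first test. Only the name
// wordCounterCanBeCreated comes from the discussion below; the rest is
// assumed.

class WordCounter {
    // no behaviour yet, and this test demands none
}

public class WordCounterTest {
    public static void main(String[] args) {
        // "wordCounterCanBeCreated": passes, yet proves nothing a
        // Product Owner would pay for (it can never fail)
        WordCounter counter = new WordCounter();
        if (counter == null) {
            throw new AssertionError("word counter can be created");
        }
        System.out.println("green");
    }
}
```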

When written first like this, they don’t deliver any business value, nor do they get us closer to validating the Product Owner’s assumptions. When we finally get around to showing him our work and asking “Is this what you wanted?”, these tests turn out to have been waste if he says “Actually, now I see it, I think I want something different”.

And if the Product Owner decides to continue, that is the time for us to advise him that we have some edge cases to consider. Very often it will turn out to be much easier to cope with those edge cases now, after the happy path is done. Some of them may already be dealt with “for free”, as it were, simply by the natural shape of the algorithm we test-drove. Others may be easy to implement by adding a Decorator or modifying the code. Conversely, if we had started with the edge cases, chances are we would have had to work around them while we built the actual business value, and that would have slowed us down even more.
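The Decorator route might look something like this sketch; the Counter interface and the “happy 2” result format are assumptions, since the post’s original listings are not preserved here:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A sketch of the Decorator idea: the happy-path algorithm stays clean,
// and a wrapper bolts the null/empty edge cases on afterwards.

interface Counter {
    String count(String text);
}

class WordCounter implements Counter {
    // happy-path algorithm only: no null checks in here
    public String count(String text) {
        Map<String, Integer> frequencies = new LinkedHashMap<>();
        for (String word : text.trim().split("\\s+")) {
            frequencies.merge(word, 1, Integer::sum);
        }
        StringBuilder result = new StringBuilder();
        frequencies.forEach((word, n) ->
                result.append(word).append(' ').append(n).append('\n'));
        return result.toString().trim();
    }
}

class NullSafeCounter implements Counter {
    private final Counter inner;

    NullSafeCounter(Counter inner) {
        this.inner = inner;
    }

    // the edge case lives here, outside the core algorithm
    public String count(String text) {
        if (text == null || text.trim().isEmpty()) {
            return "";
        }
        return inner.count(text);
    }
}
```

So `new NullSafeCounter(new WordCounter()).count(null)` quietly returns an empty result, and the happy-path algorithm never had to know.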

So instead, begin with a test for the happy path. This way you will get to ask the Product Owner that vital question sooner, and he will have invested less before he knows whether he wants to proceed. And you will have a simpler job to do, both while developing the happy path and afterwards, when you come to add the edge cases.
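A happy-path first test might look something like this; the count(String) API is taken from the comments below, while the “happy 2” result format is an assumption. Again, a plain main stands in for a JUnit test:

```java
// Hypothetical happy-path first test: one real, valuable example from
// the Product Owner's domain. The result format is an assumption.

class WordCounter {
    public String count(String text) {
        // "fake it 'til you make it": the simplest thing that passes
        return "happy 2";
    }
}

public class HappyPathTest {
    public static void main(String[] args) {
        String result = new WordCounter().count("happy happy");
        if (!result.equals("happy 2")) {
            throw new AssertionError("expected 'happy 2' but got " + result);
        }
        System.out.println("green");
    }
}
```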

2. Writing tests for invented requirements:

You may think that your solution will decompose into certain pieces that do certain things, and so you begin by testing one of those and building upwards from there.

For example, in the case of the word counter we may reason along the following lines: “We know we’ll need to split the string into words, so let’s write a test to prove we can do that, before we continue to solve the more difficult problem”. And so we write this as our first test:
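The original listing is not preserved here; the sketch below is a guess at its shape, and the splitIntoWords name and signature are invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical reconstruction of the "scaffolding" test described
// above; splitIntoWords is an invented requirement, which is exactly
// the problem.

class WordCounter {
    public List<String> splitIntoWords(String text) {
        return Arrays.asList(text.split("\\s+"));
    }
}

public class SplitterTest {
    public static void main(String[] args) {
        List<String> words = new WordCounter().splitIntoWords("the cat sat");
        if (!words.equals(Arrays.asList("the", "cat", "sat"))) {
            throw new AssertionError("expected [the, cat, sat]");
        }
    }
}
```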

No-one asked us to write a method that splits the string into words, so yet again we’re wasting the Product Owner’s time. Equally bad, we’ve invented a new requirement on our object’s API and locked it in place with a regression test. If this test breaks some time in the future, how will someone looking at this code cope? A test is failing, but how is he to know that it was only a scaffolding test, and should have been deleted long ago?

So start at the outside, by writing tests for things that your client or user actually asked for.

3. Writing a dozen lines of code in order to get the next test to pass:

When the bar is red and the path to green is long, TDD beginners often soldier on, writing an entire algorithm just to get one test to pass. This is highly risky, and also highly stressful. It is also not TDD.

Suppose, for example, that the next test we pick demands the full word-frequency output for a whole sentence. This is a huge leap from the current algorithm, as any attempt to code it up will demonstrate. Why? Well, the code duplicates the tests at this point (“happy” occurs as a fixed string in several places), so we probably forgot the REFACTOR step! It is time to remove the duplication before proceeding; if you can’t see it, try writing a new test that is “closer” to the current code:
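Since the original listings are lost, the sketch below assumes the faked code still hard-codes “happy”; a “closer” test simply swaps in a different word, forcing only the small generalisation of deriving the word from the input:

```java
// Assumed state after removing the duplication: the word is derived
// from the input instead of being the fixed string "happy". The test
// and result format are illustrative assumptions.

class WordCounter {
    public String count(String text) {
        String[] words = text.trim().split("\\s+");
        // generalised just enough: one distinct word, counted
        return words[0] + " " + words.length;
    }
}

public class CloserTest {
    public static void main(String[] args) {
        WordCounter counter = new WordCounter();
        // the new, "closer" test: same shape as before, different word
        if (!counter.count("sad sad").equals("sad 2")) {
            throw new AssertionError("expected 'sad 2'");
        }
        // the earlier happy-path test still passes
        if (!counter.count("happy happy").equals("happy 2")) {
            throw new AssertionError("expected 'happy 2'");
        }
    }
}
```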

After making this relatively simple change, we have now test-driven part of the algorithm with which we struggled earlier. At this point we can try the previous test again; and this time if it is still too hard, we may wish to ask whether our chosen result format is helping or hindering…

So if you notice that you need to write or change more than 3-4 lines of code in order to get to green, STOP! Revert to green. Now either refactor your code in the light of what just happened, so as to make that test easier to pass, or pick a test closer to your current behaviour and use that new test to force you to do the refactoring.

The step from red bar to green bar should be fast. If it isn’t, you’re writing code that is unlikely to be 100% tested, and which is prone to errors. Choose tests so that the steps are small, and make sure to refactor ALL of the duplication away before writing the next test, so that you don’t have to code around it whilst at the same time trying to get to green.


14 thoughts on “TDD: three easy mistakes”

I agree that the first test “wordCounterCanBeCreated” is a common mistake, but I disagree that starting with edge cases is in general a mistake.

They have the benefit of helping to define the API without having to worry about the implementation. A number of design decisions have already been made as a result of writing some very quick tests, e.g. the name of the class (WordCounter), the name of the method that does the counting (count), and the interaction with the class: count takes a parameter, rather than, say, the parameter being passed to the constructor with a no-argument count method, or count being a static/class method. You’ve also defined a precondition for the count method, that null is an illegal argument, rather than returning an empty result when passed null.

Personally I often find it useful to flesh out these sorts of decisions with the simple edge cases rather than the “fake it ’til you make it” approach.

Hi Brian, I’m nervous about design decisions like that, because they haven’t been checked in realistic scenarios. But my biggest objection to these edge-case tests is that they put this code into the project too early, and all too often I find it just gets in the way. Personally I’m happier adding these cases when I have a working algorithm, than I am trying to build / design a good algorithm while tippy-toeing around the null checks etc.
Of course, we all prefer different sets of trade-offs.

As you say it’s all about trade-offs. I haven’t had the experience of the error cases/empty case getting in the way. Do you find that you use “Fake it ’til you make it” often? I rarely use it in practice. I hadn’t considered it before but it seems like the edge case tests/”fake it ’til you make it” could be two different paths to get to the same point.

I wonder if there is a hard distinction between “edge case” and “simplest happy path”. Similar to Brian, I tend to use the simplest happy path as the first case, to sketch out the interface without worrying about implementation. Sometimes the “simplest happy path” and something that could be considered an edge case are identical.

Yes, you (both) are probably right. My main concern here is to have beginners avoid procrastinating, and to start delivering value as soon as possible. I find that real cases work best for that.
And for myself, I tried both approaches under reasonably controlled conditions, and I found that I work best with a non-trivial first test.

To me the big benefit of leaving the edge cases until later is that they actually allow the project, should it decide to do so, to ship working code that provides benefit. Sure, it has errors in it, but taking that risk can be worth more than waiting until you have finished polishing every edge case that can be thought of.

I don’t see invented requirements as a great evil, particularly when teaching TDD. Often I find those unfamiliar with the concepts have a hard time getting to the “big” requirement, and it can be beneficial to create a set of micro-requirements that are testable on the way to building the big thing. The point is that TDD code should be treated like all other code: it is not written in stone, and you should encourage such tests to be thought of as temporary, a series of stepping stones, to be thrown away as soon as they no longer help you.

Regarding your comments about missing a refactor, or having to alter too many lines of code to go from red to green, I feel this is an appropriate time to mention the ‘Transformation Priority Premise’ – a set of guidelines to help you write tests in an order which should lead not only to smaller incremental changes in the code under test, but also to a more efficient algorithm.

… or testing border cases that are not specified. I always make a point of teaching “Only test essential stuff!” Essential meaning: it is in scope and we know how the system should behave. If we don’t know, we start asking, but we never write a test before the desired behaviour has been clarified.