This is a question about the fundamental approach of TDD, so the example below is as simple as possible, which might make it seem a little pointless; but of course the question applies to more complicated situations as well.

Some colleagues and I are currently discussing and trying out some basic TDD ways of coding. We came across the question of how to deal with cheap solutions that pass existing but not yet comprehensive test cases (TCs). In TDD one writes a TC which fails, then implements whatever it takes (and not more!) to make the TC pass. So the task at hand would be to make the TC green with as little effort as possible. If this means implementing a solution which uses inside knowledge of the TC, so be it. The reasoning is that later TCs will check for more general correctness anyway, so that first solution would need to be improved then (and only then).

Example:

We want to write a comparison function for a data structure with three fields. Our comparison shall return whether the given values are equal in all three fields (or differ in at least one). Our first TC only checks whether a difference in the first field is detected properly: it passes (a,b,c) and (a,b,c) and checks for a correct detection of equality, then it passes (a,b,c) and (x,b,c) and checks for a correct detection of inequality.

Now the cheap approach would be to implement only a comparison of the first field, because this is enough to pass this TC. Keep in mind that this can be done because we know that later tests will also check for equality of the other two fields.
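To make the setup concrete, here is a minimal Python sketch of the rudimentary TC and the "cheap" implementation it permits. The names (`Triple`, `equals`) and the dataclass representation are assumptions for illustration, not from the question:

```python
from dataclasses import dataclass

@dataclass
class Triple:
    a: object
    b: object
    c: object

def equals(left, right):
    # "Cheap" implementation: compares only the first field,
    # which is all the rudimentary TC below actually exercises.
    return left.a == right.a

def test_detects_difference_in_first_field():
    assert equals(Triple("a", "b", "c"), Triple("a", "b", "c"))
    assert not equals(Triple("a", "b", "c"), Triple("x", "b", "c"))
```

Note that `equals` is a deliberate bug with respect to the stated requirement, yet the test above is green.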

But of course it does not seem very useful to implement such a (more or less) nonsense solution; every programmer doing this would be knowingly writing a bug. It obviously seems more natural to write a decent comparison right away in the first iteration.

On the other hand, writing a correct solution without a TC which checks it might lead to the situation that a TC which tests the behaviour more thoroughly never gets written. Then there is behaviour which was written without having a TC for it (i.e. which is not developed test-driven).

Maybe a proper approach is to not write such rudimentary TCs (like the one only checking the first field) in the first place, but that would mean demanding perfect TCs in the first iteration (and of course in more complex situations one will probably not always write perfect TCs).

So how should one deal with rudimentary TCs? Implement a cheap solution or not?

I honestly don't get people who take TDD too seriously. The end goal should be that you end up with a good implementation that is thoroughly tested. Who cares if you are writing code for tests you haven't yet implemented? Sometimes you only know how to formulate the tests after you have thought about and started writing the code and seen how it all holds up (prototyping).
– AK Feb 14 '15 at 10:46

One advantage of the minimal tests (which result from a minimal implementation in each step) that I haven't seen mentioned: if there is a failure, you get a more granular test failure that better indicates where the problem is. If the case testing equality of 'a' and 'b' passes, but the one testing equality of 'b' and 'c' fails, that tells you more about what might be wrong than a single failing test of 'a', 'b' and 'c'.
– Sean Burton Jan 12 '18 at 15:30

6 Answers

I think your problem arises only because the requirement is very simple and the solution comparing all 3 values at once may be just a one-liner. Give it slightly more complex requirements, and it makes perfect sense not to implement anything beyond the scope of the already implemented test cases.

Nevertheless, your "cheap approach" has indeed one advantage which makes it less nonsensical than you might think: chances are much better that you don't forget to add all of the important test cases. If you implement the three-value comparison at once, there is a certain probability that you omit further test cases, since you are already in the mental state of "being done". If, however, you know your code is "not ready" yet, and you force yourself not to change it without further test cases, chances are much higher that you actually take the time to add those test cases.

Especially for learning TDD, I recommend "TDD as if you Meant it", an exercise invented by Keith Braithwaite to train developers in doing TDD in even smaller steps. Applied to your example: in this exercise, your first step would not even be to implement a function with one equality check; you would implement the equality check in the testing code first, and then refactor it out afterwards into the comparison function.
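The two steps of that exercise might look roughly like this in Python. This is a hypothetical sketch; the tuple representation and function names are assumptions for illustration:

```python
# Step 1: the behaviour lives inside the test itself.
def test_equality_inline():
    left, right = ("a", "b", "c"), ("a", "b", "c")
    assert left[0] == right[0]   # the check is written directly in the test

# Step 2: only once the test passes is the check refactored out
# into a named function that production code can use.
def equals_first_field(left, right):
    return left[0] == right[0]

def test_equality_refactored():
    assert equals_first_field(("a", "b", "c"), ("a", "b", "c"))
```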

Implementing the "cheap" solution first is a good idea, not just because it forces you to write tests that cover all expected behaviour, but also because it sometimes ends up with you writing a simpler solution than the one you had in your head.

A good example of this is given in the book Beautiful Code, which describes the FIT testing framework. This system works by scanning HTML documents for tables containing data to plug into test cases, and then produces an output document with each table row coloured red or green. A naive approach would be to use an HTML parser, but by taking the tests one at a time and writing the simplest possible solution at each step, the authors arrived at a much simpler solution using plain string manipulation.

Another thing to consider is Robert C. Martin's "Transformation Priority Premise". This is an extension of TDD that gives some extra rules for writing your code as a sequence of simple transformations (similar to refactoring, but with the goal of changing behaviour in controlled ways rather than preserving it), and it can give some very interesting results. It's well worth investigating further, I think.

I am not sure this is really a good argument. For lots of problems I can think of, there is the risk of starting with a too-simple approach which cannot be generalized easily and must then be thrown away or redesigned later. Of course, the already written tests will make this smoother. But IMHO it is usually better to think first and implement afterwards, with or without TDD.
– Doc Brown Feb 9 '15 at 16:10

@DocBrown Perhaps. I've certainly seen it go both ways. But I think you gain more on the occasions when it does work out than you lose on the occasions it doesn't.
– Jules Feb 9 '15 at 16:14


... for example, when you are unlucky: for the requirement of scanning arbitrary HTML, using an already available third-party HTML parser might be the best and simplest solution. By starting with simple string manipulation, the risk is high of reinventing the wheel and piling up more and more HTML parsing code which gets harder and harder to throw away, even though throwing it away might turn out to be the better decision.
– Doc Brown Feb 9 '15 at 16:16

Yes, I also think that taking a step back as early as possible and creating a more general approach to solve a task will lead to better solutions in terms of maintainability. If a behaviour is implemented without any thought about the idea behind it, we are more likely to get into trouble with each new feature requirement that comes later.
– Alfe Feb 10 '15 at 9:35

I would say that the refactoring phase in the TDD cycle is the second most important step, and the one that makes the switch from simple, cheap code to the intended code. The simplest example I can think of is a constructor.

My first test can be that my new class receives a certain parameter, and I check the value of a property. That's pretty easy to implement, right? I can make it pass very cheaply. But once it passes, the implementation is left to a refactor.

I can implement a getter/setter for that parameter.

Or maybe my class makes a GET request to an external API (mocked) and stores the result in the property.

Or that property is the result of a math operation.

But all those cases are tested with the same premise: you input a value and expect another; the inner workings are irrelevant, you want the same result. That, to me, is the great advantage of cheap solutions.
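A rough Python sketch of that idea: the same black-box test constrains each refactored implementation, because it only checks input against an observable value. All names here are hypothetical:

```python
class CheapThing:
    def __init__(self, value):
        self.value = value                   # cheapest code that passes

class ComputedThing:
    def __init__(self, value):
        self.value = self._normalize(value)  # later refactor: derive it

    @staticmethod
    def _normalize(value):
        return int(value)                    # stand-in for a math operation

def check_value_is_42(cls):
    # the test never mentions the inner workings of the class
    assert cls(42).value == 42

check_value_is_42(CheapThing)
check_value_is_42(ComputedThing)
```

The refactor swaps the internals while the test stays untouched.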

A lot of this boils down to what "cheap", "useless" and "as little effort as possible" actually mean.

Here's another simple example: Write a function that returns the result of two numbers added together.
Test:

check func(2, 2) = 4
// Simple:
func(a, b) { return a + b }

I would like to think this is the first strategy I would take. Seems simple enough to me, but now that I think about it, what about an even simpler approach:

//Simpler
func(a, b) { return 4 }

It is simpler and probably useless, but did it really take the least amount of effort? It seems like the first reasonable solution that pops into your head takes the least amount of effort, unless the amount of typing is a productivity problem. If something is so obvious that it is a waste of time, don't do it. This is the advantage of having experience, and a possible explanation for why some people feel good programmers can be 10x more productive: they don't waste time repeating mistakes or writing code they don't need.
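The way TDD resolves the tension between the two snippets is triangulation: a second example makes the hard-coded `return 4` untenable. A minimal Python sketch of this, with the same hypothetical `func`:

```python
def func(a, b):
    # the general solution, forced once more than one example exists
    return a + b

assert func(2, 2) == 4   # the original TC; "return 4" would also pass this
assert func(3, 5) == 8   # the TC that kills "return 4"
```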

Yeah, »… if it's a waste of time, don't do it«, all right, but what alternative do you propose? That's my question. We currently have two proposals in the discussion (extend the TCs first, or write the correct solution anyway), and I have problems with both.
– Alfe Feb 10 '15 at 9:40


Btw, I think everyone would agree that writing just return 4 will get you nowhere, and that's a great point to make to show that just making the TC green cannot be the way to go.
– Alfe Feb 10 '15 at 9:43


You start with what you think is the most reasonable test. In your example you start with just comparing the first elements, and the test passes. So what? It's not a waste. If you can't get your code to compare the first elements correctly, you're going to have trouble with the rest. If you feel you "cheated" the test, write another test to catch the cheat. That's what programming is all about, IMHO.
– Jeff O Feb 12 '15 at 22:55

@Alfe "... or write the correct solution anyway": how would you know it is correct and, more importantly, that it remains correct? Quite often, ensuring proper test cases beforehand may seem over the top for the initial implementation, but they start shining when someone inadvertently makes a change that introduces a bug, and that bug goes unnoticed because the test cases were written after the fact and may well suffer from the myopia of a developer who thought they were already done.
– Marjan Venema Feb 20 '15 at 11:39

You're correct that demanding perfect test cases is unrealistic for most code (with a possible exception for Mars Rover code).

I'd argue that it's necessary to honor the intent of the test case / specification in addition to the 'letter of the law'. In your example, the developer has the opportunity to write an obvious bug. The correct action would be to fix the test case first, then write the code.

Tests are code too, and are also subject to bugs, incomplete implementations, etc.

What about the notion that, by being allowed to change the TC, the developer could make the TC green even more easily, namely by taking aspects out of the TC? This could happen unintentionally, of course (I'm not talking about cheaters but about hasty situations). Do you really think it is a good idea to allow adjustments to the TC? Is that still the fundamental(istic) way of TDD?
– Alfe Feb 9 '15 at 15:59

@Alfe If the TC is buggy or obviously incomplete, wouldn't it make sense to correct the oversight? (In this case, expanding test coverage; contracting it would be a different matter.)
– Dan Pichelman Feb 9 '15 at 16:05


The TC is certainly not buggy. One might call it incomplete, all right, but that's more or less a matter of taste, lacking a clear definition of when it would be complete. Coverage in TDD should always be at 100% (if done correctly), so expanding the TC before the implementation it is supposed to test exists would not change the coverage.
– Alfe Feb 9 '15 at 16:11

@Alfe Instead of expanding an existing test case, you can always add an extra one. Each test should cover just one aspect of a function/method. That doesn't mean there can only be one assert (it often takes more than one assert to verify an aspect); but your second scenario should probably have its own test method within the test class instead of being added to the first.
– Marjan Venema Feb 20 '15 at 11:43

The problem you have is really in how you think about writing your test cases. You are getting stuck in the TDD dogma of needing to have only one failing test at a time.

You define the behaviour you want: a function that determines whether two groupings of three elements have the same values. Then you define a test case checking only the first element. But you know from the definition of the problem that you have at least four cases to test (equal, 1st not equal, 2nd not equal, 3rd not equal). So you would create these test cases immediately, so that they define the behaviour of the code.

In your example, the behaviour actually tested is that the first element of the two groupings is checked for equality. That is an incomplete definition of the behaviour, and you would end up creating the naive solution that you have, thus needing to do more work as you go on.

Use your tests to define one behavior and write tests that fail and only pass when that behavior has been properly created.
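Writing all four cases up front might look like this. A minimal Python sketch; the tuple representation and test names are assumptions, not from the question:

```python
# The four cases the problem definition implies:
# equal, 1st not equal, 2nd not equal, 3rd not equal.

def equals(left, right):
    return left == right   # tuple comparison covers all three fields

def test_equal():
    assert equals(("a", "b", "c"), ("a", "b", "c"))

def test_first_differs():
    assert not equals(("a", "b", "c"), ("x", "b", "c"))

def test_second_differs():
    assert not equals(("a", "b", "c"), ("a", "x", "c"))

def test_third_differs():
    assert not equals(("a", "b", "c"), ("a", "b", "x"))
```

With all four tests in place, the first-field-only implementation fails immediately instead of surviving until a later iteration.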

If I understand you right, your answer is "if you don't like strict TDD, try BDD". Though this will surely work for lots of people, I think it evades the core point of the question, which is IMHO "is there any value in doing TDD strictly?"
– Doc Brown Feb 10 '15 at 6:38

Thank you @DocBrown, that's basically the core of my question: how strict (or "fundamentalistic") should one be about TDD dogma, and is there any benefit in being so? The other point I'd like to address in Schleis's answer is this: the idea of being able to write "complete" TCs which define the behaviour "completely" is a wrong one. In my minimal example it is possible, of course, but as soon as the real task gets more complex, you will simply have the situation that you have written a bunch of TCs, start implementing, and then notice that the behaviour isn't completely defined by them. What then?
– Alfe Feb 10 '15 at 9:24

@Alfe What then: you add more test cases. TDD doesn't require you to be complete from the start. In fact, it specifically allows for requirements and implementations to change over time and to be refactored or added to as the perception of what the behaviour should be changes. Test cases that test for behaviour later found to be redundant can and should be removed. Test cases are a living, breathing code base, just like the production code. Think of them as the dynamic documentation of your requirements (which change over time) and as a history of bugs solved.
– Marjan Venema Feb 20 '15 at 11:49


@Alfe And to answer your question of how strictly you should apply something: dogma is never the answer to anything. Keep using your brain. When you start to rely on the strict, dogmatic application of anything, everybody loses.
– Marjan Venema Feb 20 '15 at 11:50