The Red-Green-Refactor cycle for TDD is well established and accepted: we write one failing unit test and make it pass as simply as possible. What are the benefits of this approach over writing many failing unit tests for a class and making them all pass in one go?

The test suite still protects you against writing incorrect code or making mistakes in the refactoring stage, so what's the harm? Sometimes it's easier to write all the tests for a class (or module) first as a form of 'brain dump' to quickly write down all the expected behavior in one go.
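
(For concreteness, one turn of the cycle might look something like this. The sketch is in Ruby/RSpec, to match an answer further down, and the Stack class is purely hypothetical.)

# RED: write a single failing test first.
describe "Stack" do
  it "is empty when created" do
    expect(Stack.new).to be_empty   # fails: Stack doesn't exist yet
  end
end

# GREEN: the simplest implementation that makes it pass.
class Stack
  def empty?
    true
  end
end

# REFACTOR: tidy up with the passing test as a safety net,
# then repeat with the next failing test.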

Do what works best for you (after some experimentation). Blindly following dogma is never a good thing.
– Michael Borgwardt, Apr 2 '12 at 10:40

I daresay that writing all your tests at once is much like writing all your app code at once.
– Michael Haren, Apr 2 '12 at 13:29

@MichaelHaren All tests for a class (or functional module), sorry for the confusion
– RichK, Apr 2 '12 at 13:32

Addressing the "brain dump" issue: Sometimes there are points in the testing/coding when you realize the need for several different specific input tests, and there is a tendency to want to capitalize on the clarity of that realization before you get distracted with the minutiae of the coding. I usually manage that by maintaining a separate list (e.g. Mylyn), or else with a list of comments in the Test class of different things I want to remember to test (e.g. // test null case). However, I still only code one test at a time, and instead work my way down the list systematically.
– Sam Goldberg, Apr 2 '12 at 13:51

@Chad I was not the downvote, but I believe this answer misses the obvious. Test-driven development is about using the tests to drive the design of the code. You write the tests individually in order to evolve the design, not just for testability. If it were only about the test artifacts then this would be a fine answer, but as is, it is missing some crucial information.
– Joshua Drake, Apr 2 '12 at 14:49

I didn't downvote this, but I thought about it. It's far too brief an answer to a complex question.
– Mark Weston, Apr 2 '12 at 15:03

+1 for concentrating on one thing at a time; our ability to multitask is overrated.
– cctan, Apr 3 '12 at 6:52

One of the difficulties when writing unit tests is that you are writing code, and that in itself can be prone to error. There is also the possibility that you will need to change your tests later as a result of a refactoring effort while you write your implementation code. With TDD, this means you could get a little too carried away with your testing and find yourself needing to rewrite a lot of essentially "untested" test code as your implementation matures over the course of the project. One way to avoid this sort of problem is to simply focus on doing one single thing at a time; this minimizes the impact of any change on your tests.

Of course, this largely comes down to how you write your test code. Are you writing a unit test for every individual method, or are you writing tests that focus on features/requirements/behaviours? One option is to take a Behaviour-Driven approach with a suitable framework and focus on writing tests as if they were specifications. This would mean either adopting BDD outright, or adapting BDD-style testing if you wish to stick with TDD more formally. Alternatively, you might stay entirely within the TDD paradigm, yet alter the way you write tests so that instead of focusing on testing methods individually, you test behaviours more generally as a means of satisfying the specific requirements of the features you are implementing.
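
As a sketch of that last distinction, in Ruby/RSpec (the ShoppingCart class and its interface are hypothetical):

# Method-focused: one test per method, tightly coupled to the API surface.
describe "ShoppingCart#add_item" do
  it "appends the item to the items list" do
    cart = ShoppingCart.new
    cart.add_item(price: 3)
    expect(cart.items.size).to eq(1)
  end
end

# Behaviour-focused: the test reads as a specification of a requirement.
describe "a shopping cart" do
  it "totals the prices of everything added to it" do
    cart = ShoppingCart.new
    cart.add_item(price: 3)
    cart.add_item(price: 4)
    expect(cart.total).to eq(7)
  end
end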

Regardless of the specific approach you take, in all the cases that I've described above you are using a test-first approach, so while it may be tempting to simply download your brain into a lovely test suite, you also want to fight the temptation to do more than is absolutely necessary. Whenever I am about to start a new test suite I start repeating YAGNI to myself, and sometimes even throw it into a comment in my code to remind me to stay focused on what is immediately important, and to only do the minimum required to satisfy the requirements of the feature I am about to implement. Sticking to Red-Green-Refactor helps to ensure that you will do this.

I think that by writing all your tests at the start, you miss out on the process of TDD. You are not really developing test-first; you are simply guessing up front which tests you will need. That will be a very different set of tests from the ones you end up writing if you do them one at a time as you develop your code (unless your program is trivial in nature).

Most business and enterprise applications are technically trivial in nature, and seeing as most applications are business and enterprise applications, most applications are trivial in nature as well.
– maple_shaft♦, Apr 2 '12 at 11:22

@maple_shaft - the technology may be trivial, but the business rules are not. Try building an app for 5 managers who all have different requirements and refuse to listen to some BS about your simplistic, elegant, less-is-more, minimalist design.
– Jeff O, Apr 2 '12 at 12:06

@JeffO 1) It isn't BS. 2) An elegant, minimalist design requires good software development skills. 3) The ability to reconcile the requirements of 5 different managers who don't have more than 5 minutes a week to waste with you, and still pull off a minimalist design, requires an excellent software developer. Pro tip: software development is more than just coding skills; it is negotiation, conversation and taking ownership. You gotta be the alpha dog and bite back sometimes.
– maple_shaft♦, Apr 2 '12 at 12:15

If I understand correctly, this answer is begging the question.
– Konrad Rudolph, Apr 2 '12 at 12:44

@maple_shaft I think that was what Jeff O was getting at with his comment, no?
– ZweiBlumen, Apr 2 '12 at 13:24

I do “write” all the tests I can think of up front while “brainstorming”; however, I write each test as a single comment describing it.

I then convert one test to code and do the work so it will compile and pass. Often I decide I don’t need all the tests I thought I did, or that I need different tests; this information only comes from writing the code to make the tests pass.

The problem is that you can’t write a test in code until you have created the methods and classes it tests; otherwise you will just get lots of compiler errors that get in the way of working on a single test at a time.

Now, if you are using a system like SpecFlow, where the tests are written in “English”, you may wish to get the customers to agree to a set of tests while you have their time, rather than creating only a single test.
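
A minimal sketch of that comment-first workflow, in Ruby/RSpec to match the other examples in this thread (StringCalculator and its behaviour are hypothetical):

# The brain dump: each intended test is only a comment for now.
# - returns zero for an empty string
# - parses a single number
# - rejects negative numbers

# Then convert exactly one comment into code and make it pass:
describe "StringCalculator" do
  it "returns zero for an empty string" do
    expect(StringCalculator.add("")).to eq(0)
  end
end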

Yes, while agreeing with the answers above that point out the problems with coding all your tests first, I do find it very helpful to dump my overall understanding of how the current method should behave as a set of test descriptions without any code. The process of writing these down tends to clarify whether I completely understand what's required of the code I'm about to write, and whether there are edge cases I haven't thought about. I find myself much more comfortable coding the first test and then making it pass after I've outlined my "overview" of how the method should work.
– Mark Weston, Apr 2 '12 at 12:07

TDD is a highly iterative approach which (in my experience) better fits the way development really happens. Usually my implementation takes shape gradually during this process, and each step may bring further questions, insights and ideas for testing. This is ideal for keeping my mind focused on the actual task, and it is very efficient because I only need to keep a limited number of things in short-term memory at any point in time. This in turn reduces the possibility of mistakes.

Your idea is basically a Big Test Up Front approach, which IMHO is more difficult to handle, and may become more wasteful. What if you realize midway through your work that your approach is not good, your API is flawed and you need to start all over, or use a 3rd party library instead? Then a lot of the work put into writing your tests up front becomes wasted effort.

That said, if this works for you, fine. I can imagine that if you work from a fixed, detailed technical specification, on a domain you are intimately experienced with, and/or on a fairly small task, you may have most or all the necessary test cases ready and your implementation clear right from the start. Then it might make sense to start by writing all tests at once. If your experience is that this makes you more productive in the long run, you needn't worry too much about rulebooks :-)

In my (limited) experience with TDD, I can tell you that every time I've broken the discipline of writing one test at a time, things have gone badly. It's an easy trap to fall into. "Oh, that method is trivial," you think to yourself, "so I'll just knock out these two other related tests and keep moving." Well, guess what? Nothing is as trivial as it seems. Every time I've fallen into this trap, I ended up debugging something I thought was easy, but turned out to have weird corner cases. And since I'd gone on to write several tests at once, it was a lot of work to track down where the bug was.

You are assuming that you know what your code will look like before you write it. TDD/BDD is as much a design/discovery process as it is a QA process. For a given feature you write the simplest test that would verify that the feature is satisfied (sometimes this may require several because of a feature's complexity). That first test you write is loaded with assumptions of what the working code will look like. If you write the entire test suite before writing the first line of code to support it, you are making a litany of unverified assumptions. Instead, write one assumption and verify it. Then write the next. In the process of verifying the next assumption, you might just break an earlier assumption so you have to go back and either change that first assumption to match reality or change reality so that first assumption still applies.

Think of each unit test you write as a theory in a scientific notebook. As you fill out the notebook you prove your theories and form new ones. Sometimes proving a new theory disproves a previous theory so you have to fix it. It's easier to prove one theory at a time rather than trying to prove say 20 at once.

Beyond just focusing on one thing at a time, a core tenet of TDD is to write the least code possible to pass the test. When you write one test at a time, it is much easier to see the path to writing just enough code to make that test pass. With a whole suite of tests to satisfy, you don't come at the code in small steps but have to make one large leap to make them all pass in one go.

Now if you don't limit yourself to writing the code to make them all pass "in one go," but rather write just enough code to pass one test at a time, it might still work. You'd have to have more discipline to not just go ahead and write more code than you need, though. Once you start down that path, you leave yourself open to writing more code than the tests describe, which can be untested, at least in the sense that it isn't driven by a test and perhaps in the sense that it isn't needed (or exercised) by any test.
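
As a sketch of what "just enough code" means in practice, here are two successive iterations of a hypothetical add function (in Ruby, like the other examples here; the second iteration replaces the first):

# Iteration 1: the first test, and the least code that passes ("fake it").
describe "add" do
  it "adds 2 and 3" do
    expect(add(2, 3)).to eq(5)
  end
end

def add(a, b)
  5
end

# Iteration 2: a second test forces the generalization ("triangulation").
describe "add" do
  it "adds 1 and 1" do
    expect(add(1, 1)).to eq(2)
  end
end

def add(a, b)
  a + b   # written only because a test demanded it
end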

Getting down what the method should do, as comments, stories, a functional specification, etc., is perfectly acceptable. I would wait to translate these into tests one at a time though.

The other thing that you can miss by writing the tests all at once is the thinking process by which passing one test can prompt you to think of other test cases. Without a bank of existing tests, you need to think of the next test case in the context of the last passing test. As I said, having a good idea of what the method is supposed to do is very good, but many times I've discovered possibilities that I hadn't considered a priori, which only occurred to me in the process of writing the tests. There is a danger that you might miss these unless you specifically get into the habit of asking: what new tests can I write that I don't already have?

The Red-Green-Refactor cycle is a checklist intended for developers new to TDD. I'd say it's a good idea to follow this checklist until you know when to follow it and when you can break it (that is, until you no longer have to ask this question on stackoverflow :)

Having done TDD for close to a decade I can tell you that I very seldom, if ever, write many failing tests before I write production code.

You are describing BDD, where some external stakeholder has an executable specification. This may be beneficial when there is a predetermined up-front specification (for example a format spec or an industry standard, or where the programmer is not the domain expert).

The normal approach is then to gradually cover more and more acceptance tests, which is the progress visible to the project manager and customer.

You usually have these tests specified and executed in a BDD framework such as Cucumber or FitNesse.

However, this is not something you mix up with your unit tests, which are much closer to the nitty-gritty implementation details, with a lot of API-related edge cases, initialization issues and so on, heavily focused on the item under test, which is an implementation artifact.
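
To illustrate the separation between the two layers, here is a sketch using cucumber-ruby with rspec-expectations (the feature text, step definitions and cart object are all hypothetical):

# features/checkout.feature -- the customer-facing acceptance test:
#   Scenario: applying a discount code
#     Given a cart containing a 100-dollar item
#     When I apply the code "SAVE10"
#     Then the total should be 90 dollars

# features/step_definitions/checkout_steps.rb -- the Ruby glue code:
When(/^I apply the code "([^"]*)"$/) do |code|
  @cart.apply_discount(code)
end

Then(/^the total should be (\d+) dollars$/) do |total|
  expect(@cart.total).to eq(total.to_i)
end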

The red-green-refactor discipline has a lot of benefits, and the only upside you can hope for by typing them up front is to break even.

One test at a time: the main advantage is focus on one thing. Think of depth-first design: you can go deep and stay focused, with a fast feedback loop. You may miss the scope of the whole problem, though! That's the moment (large) refactoring comes into play; without it, TDD doesn't work.

All tests at once: analysis and design may reveal more of the scope of the problem. Think of breadth-first design: you analyze the problem from more angles and add input from experience. It's inherently harder, but may yield an interesting benefit, namely less refactoring, if you do 'just enough of it'. Beware: it's easy to over-analyze and yet completely miss the mark!

I find it hard to recommend one over the other in general, because the factors are many: experience (especially with the same kind of problem), domain knowledge and skills, how refactoring-friendly the code is, the complexity of the problem...

I'd guess that if we focus more narrowly on typical business applications, then TDD, with its fast, almost trial-and-error approach, would usually win in terms of effectiveness.

Assuming that your test framework supports it, what I would suggest is that instead of implementing the tests you want to braindump, you write descriptive pending tests that you will implement later. For example, if your API should do foo and bar but not biz, just add the following code (this example is in RSpec) to your test suite, then attack the tests one by one. You get your thoughts down quickly and can address your issues one at a time. When all the tests pass, you will know you have addressed everything from your braindump.

describe "Your API" do
it "should foo" do
pending "braindump from 4/2"
end
it "should bar" do
pending "braindump from 4/2"
end
it "should not biz" do
pending "braindump from 4/2"
end
end
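
(A note if you are on a newer version of RSpec: "pending" now runs the example body and expects it to fail, so for empty placeholders like these, "skip" with the same message, or simply an "it" with no block at all, is the closer equivalent.)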