I've recently been working on a web development project on a team of fewer than five people. Things are going well, but we have now decided to go with test-driven development.

So far, what I've learned is that one of the ultimate goals is that when you test your software, every test should pass. However, our current situation is that we have more open issues/stories than we can handle within an iteration or two, and we're a bit confused about how to actually handle test-driven development in this situation.

One thing we could do is that when each of us picks a story, that person writes the tests for it, and each of us works within the red-green-refactor cycle. Another thing we could do is try to create test cases for all open issues up front; then each of us could pick one up and work on eliminating the red for that issue.

The first approach looks nice to me because we can ensure that what we've done is green across the board. However, the latter approach seems more realistic, as it reflects the imperfect state of our software. I'm not sure how this is managed in the real world, or what the best practice is.

3 Answers

I would only create tests for a story right before work on it starts, and only for stories you plan to complete this iteration. They should probably be created by the developer who will eventually implement the functionality for the story.

The problem with creating all the tests in advance for all your stories is that you risk the stories changing. It's better to take things on one story at a time. When you finish a sprint/iteration/whatever, you should re-examine the issues left to complete. Often, they will have to change because of something that happened in the previous work cycle. If you had already written tests for them, those tests would also have to change.

By writing the tests during the development of the story, you make sure that the tests and the development work stay in sync.

You are not allowed to write any production code unless it is to make a failing unit test pass.

You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.

You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
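As a concrete sketch of those three rules in action (the `slugify` function and its behavior here are hypothetical examples, not from the question), the cycle for one small piece of a story might look like this:

```python
import unittest

# Red first: the test below was written before slugify existed. Per rule 2,
# the resulting NameError/compilation failure already counts as a failing test.

def slugify(title):
    # Green: only enough production code to pass the currently failing test
    # below -- no speculative extra features (rule 3). Refactor comes after.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_surrounding_whitespace_is_stripped(self):
        self.assertEqual(slugify("  Hello  "), "hello")

if __name__ == "__main__":
    unittest.main()
```

Each new requirement (say, collapsing repeated spaces) would start the same way: one new failing test, then the minimal change to make it pass.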

Regardless of whether you are working on a new story or fixing an open issue, this does not change.
Do it one unit test at a time.

Writing unit tests for new code is easy. It can be more troublesome for existing code: as your fix goes deeper into the existing codebase, more and more unit tests need to be created. To reiterate: make each test pass before proceeding to create any new test.
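For the existing-code case, one common approach (a sketch with hypothetical names, not code from the question) is to surround the legacy function with characterization tests that pin down its current behavior before adding the one failing test that drives the fix:

```python
import unittest

# Hypothetical existing function we needed to fix. Before changing it, we
# recorded its current, correct-enough behavior in tests so the fix could
# not silently break existing callers.
def last_items(items, n):
    """Return the last n items of a list (n <= 0 yields an empty list)."""
    return items[-n:] if n > 0 else []

class TestLastItems(unittest.TestCase):
    # Characterization tests: behavior that must NOT change.
    def test_returns_last_two(self):
        self.assertEqual(last_items([1, 2, 3], 2), [2, 3])

    def test_zero_returns_empty(self):
        self.assertEqual(last_items([1, 2, 3], 0), [])

    # The single new failing test that drove the fix -- written red-first,
    # one test at a time, as described above.
    def test_n_larger_than_list_returns_all(self):
        self.assertEqual(last_items([1, 2], 5), [1, 2])

if __name__ == "__main__":
    unittest.main()
```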

Another thing we could do is try to create test cases for all open issues.

The person or people you assign to this will get very bored, very quickly. The "cover this code with unit tests" project is unexciting and lacks a clear goal (OK, it looks clear at the beginning, but it isn't).

In addition, this approach wastes all the knowledge the engineers gather along the way. When I've written a failing test, I've discovered:

what the software is supposed to do

how the software authors claim that should be done

what the difference is between the expectation and reality

I'm in a prime position to fix the bug and get the test to pass. If instead I move on to writing a different test, then I've squandered that opportunity.