Tuesday, March 27, 2007

Six reasons why TDD is necessary

I'll list here a few benefits of TDD (mostly on why tests are necessary) as I understand them:

You want to be confident about your code. With tests backing your code up, you always know it will work without problems. You get confidence from successful test runs. Every time you see a green bar, you get satisfaction.

Even when you have the inevitable bugs, it's easy to isolate them. The failing test points to the exact location; you can quickly reproduce the bug with another test and fix it. For example, if a test method "testBlah" fails, you can look directly into the code of that test and the real code it exercises. Tests bring fewer surprises, without wading through huge logs and coding horrors.

Writing good tests forces you to write good code, and hence better low-level design. For example, if I tell you to write a test for a thousand-line C# method, it will be nearly impossible, so you'll break that method into smaller methods and test each of them. With each test written, your code becomes cleaner and more reliable.
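To make this concrete, here is a minimal sketch (the class and method names are hypothetical, not from the post): once a small piece of logic is extracted from a big method, testing it is just calling it with plain inputs and checking the output.

```java
// Hypothetical sketch: instead of one huge method, extract a small,
// pure unit that is trivial to test in isolation.
public class OrderCalculator {

    // Small extracted method: easy to call from a test with plain inputs.
    public static double applyDiscount(double total, double discountPercent) {
        if (discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("discount must be 0..100");
        }
        return total - (total * discountPercent / 100.0);
    }

    public static void main(String[] args) {
        // A "test" for the extracted unit: concrete input, expected output.
        System.out.println(applyDiscount(200.0, 10.0)); // prints 180.0
    }
}
```

A thousand-line method offers no such seam; the extraction is what makes the test possible.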

Tests are one of the best ways to learn about a library or framework; tests are documentation. For example, if you want to learn how Hibernate.NET works, or how to use its APIs, and don't want to RTFM, you can have a quick peek at the Hibernate tests and you'll know how to use it.
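As a small illustration of tests-as-documentation (using a standard JDK class rather than Hibernate, to keep it self-contained): reading the test below tells you exactly how the API is meant to be called, without opening a manual.

```java
// A test doubling as documentation: it shows the API's intended usage.
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDocTest {
    public static void main(String[] args) {
        // The test documents the contract: push, then pop in LIFO order.
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop()); // prints 2
        System.out.println(stack.pop()); // prints 1
    }
}
```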

Talking of TDD in its original sense: if you've always written code first, you've never worried about the testability of your code. Whereas if you develop test-first, your code becomes testable. And there are many reasons why you want to make your code testable...

With automated tests, you don't need to repeat yourself with debugging sessions; it's once and for all. Even if you have to debug, it will be at a much smaller scale. For example, suppose a bug keeps coming back, and each fix takes days because it involves debugging the entire component. Even if that component is as simple as building SQL and mapping fetched rows to objects, it can turn out to be a debugging nightmare. If you'd written tests for it, they would have saved you hours of pain stepping through the buggy code.
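A minimal sketch of that "build SQL" example (the builder below is hypothetical, not any real library): with a test pinning down the expected query string, a regression shows up as a red bar instead of a debugging session.

```java
// Hypothetical sketch: a tiny query builder whose output can be
// verified by a test instead of by stepping through a debugger.
public class SqlBuilder {

    // Pure function: given a table and columns, return the SELECT statement.
    public static String select(String table, String... columns) {
        return "SELECT " + String.join(", ", columns) + " FROM " + table;
    }

    public static void main(String[] args) {
        System.out.println(select("users", "id", "name"));
        // prints: SELECT id, name FROM users
    }
}
```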

I hope this list will be helpful for those who are still wondering why TDD has become such a phenomenon, or are still skeptical about it.

Believe me, tests save your life and make it much easier :). If you liked this post and found it useful, please share it with others; your comments are welcome.

9 comments:

My short experience with TDD says that it is best understood by practicing it. Keep the above points at the back of your mind, and while implementing you'll automatically be able to relate to any or all of them. So take a framework and start using it. It really pays off.

One thing that happens to us while only 'listening' to TDD is that we start to feel it is only meant for BIG applications, having a lot of business logic. Those are the applications where we see all the above points materializing. But there is nothing preventing us from using TDD for small, trivial applications. Who knows how they grow into big ones? In such applications all these points might not be realized, like the "low level design". Trivial apps might not have any design, just some piece of code. What say?

None of these reasons really explain why we should be writing our tests first. They simply state that tests are good.

The bottom line is that if you're diligent about making your code testable, it will be testable regardless of whether you write your tests first or last. And writing tests last feels a lot more intuitive and less impractical (e.g. IDE problems) to most developers.

TDD specifically is about writing tests first, which isn't the same as writing tests, as it specifies an order.

If you tell me to write a test for a thousand line C# method, I can write a simple one. Run the method. Does it throw an exception? No? Ok, passed. It's an awful test, but most are.

I'll try to work out what the method is supposed to achieve and test that, by looking at its inputs and outputs (including 'this' parameters and any globals). If it tries to achieve too much to write a good test for, then I'll consider breaking it up. That way I can sensibly choose how to break it up.

It's not necessarily true that reading the tests tells you how to use an API. For example, in most of my tests, I call ContextUtility.debugContext(), which gives me a Context object, but that Context object is suitable only for running tests. Real code that needs a Context should accept it as a parameter to a static method, rather than instantiating it.

If a new programmer joined my project without realising this, he'd cause trouble.

The green bar is a little bit of a problem. When I wrote some Lisp code recently, I had some automated tests that failed, which were basically for features I hadn't bothered to implement yet.

When I scrolled to that point in the source code, I evaluated the tests that should pass, and they did, then I evaluated all the tests that shouldn't pass, and they didn't.

In other words, my issue tracker was my source code. If I used JUnit (I use something similar in Java), I'd be tempted not to write the tests for the features I've not started writing yet, because they'd a) probably not compile, and b) fail and annoy me.

"If you've always written code first then you never worried about testability of your code".

That's untrue. Writing tests first limits your ability to quickly evolve new code. For example, with some code that just outputs stuff to the screen, I might decide what format the stuff should be in while I'm writing the code, then write the test later. That doesn't mean I'm not considering testability.

When you have a huge code base, a huge network of objects interacting with each other, and you make a change in one class, how do you ensure that no other code is broken? By reading the code, or by walking through the entire application? How good would it be to have something automatic that says your change has broken this, this, and this?

When we write tests first, we are documenting what we want a piece of code to do every time it is run, irrespective of what happens to the environment (the network of objects it uses or delegates to). And this documentation is executable. That's a respite! Even a year after writing the code, I know what I expected from it.

This is a strawman. It has nothing to do with the testability of code, and is not necessarily true.

I do see where you're coming from. How can writing a test, then writing code to pass it, then refactoring the code and its tests be more efficient than simply refactoring the code until you're happy with it before finally writing the tests?

The answer is that writing tests first ensures test coverage of your most important code, and provides a safety net for the refactoring. If your units are small enough, the tests should be trivial and not likely to prevent quick evolution of new code.

Agreed that "most of the points are about tests and not TDD in its real sense", but my primary motivation for practicing TDD has always been making testable designs. I don't agree that developers can be test-conscious by default; it's a myth!

I used to write tests and code in parallel (more or less code first), then found myself getting into the loop of "IDE refactoring" the code and failing "old" (stale) test cases. What's the use of those tests if they are not doing what they are supposed to? Had I always been following TDD, would they not always have stayed useful?

On your generalization "test last is more intuitive and less impractical" :

"Test last" may be intuitive because most developers write tests to verify that the code works, not to make it testable; as for its practicality, just try it out...

I don't agree with this assumption; nothing limits your ability to evolve the code/design. For example, refactoring demands TDD as its basis for evolving code. I've done mammoth refactorings without writing tests first, and I know the regression impact and pains of it beyond words. The generally beloved "test last" isn't of any significant help in such scenarios, but greatly frustrating.

Any sufficiently good developer will not write as simplistic a test as you mention; writing good tests (validating preconditions) is implied.

You'll sensibly end up with granular pieces of methods, but without tests backing them up, how is it assured that they work the same way as the single method did? People generally don't have the luxury of repeated trial and error.

A n00bie can be inducted, and that's not a very good reason to believe tests are not understandable, unless the "new" programmer is from outside the programming world.

Well, your "print to console" example is fairly trivial; I don't see any sign of testability there either.

I admit I have never practiced TDD on any non-academic project where I was not just exploring TDD. But I will speak based on what I learnt from these projects as well as other projects where I did not employ TDD.

"None of these reasons really explain why we should be writing our tests first."

Yeah, the reasons in the post are really more about tests in general and not about TDD specifically.

"And writing tests last feel a lot more intuitive and less impractical (e.g. IDE problems) to most developers."

While at face value this seems right, I don't think writing tests first needs to be awkward. If you're like me, as you write code you have a general idea of the immediate goals you are trying to achieve with it. For example, if you were writing an XmlWriter class, you would know the output it needs to produce given some input DOM tree. And this is generally the case: you have at least a basic idea of the inputs and outputs for the portion of code you are working on at the moment. TDD is just a way to formalize these ideas that are already floating around your head and already guiding your design and code. It gives you something more concrete while you design and develop.
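A minimal sketch of that idea, with a hypothetical (much simplified) XmlWriter: since the expected output is known up front, the test can exist before the implementation does.

```java
// Hypothetical sketch: the expected output is known in advance,
// so the assertion can be written before the implementation.
public class XmlWriter {

    // Simplified stand-in for "produce XML output from input data".
    public static String element(String tag, String text) {
        return "<" + tag + ">" + text + "</" + tag + ">";
    }

    public static void main(String[] args) {
        // Test written first: we already knew the output we wanted.
        String expected = "<name>Alice</name>";
        String actual = element("name", "Alice");
        System.out.println(expected.equals(actual)); // prints true
    }
}
```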

"TDD is clearly unnecessary, as working code predates TDD."

Design, UML, XP, and other Software Engineering practices are clearly unnecessary too, as working code predates them. TDD is another practice in the Software Engineering toolset. It can be useful without being strictly necessary.

"If you tell me to write a test for a thousand line C# method, I can write a simple one. Run the method. Does it throw an exception? No? Ok, passed. It's an awful test, but most are."

I think this is the same point you are trying to make, but I'm not sure: one of the benefits of TDD is that it steers you away from writing one big method that really should not be one big method. When you write the tests first, you have to design in such a way that you have manageable units to test.

"For example, in most of my tests, I call ContextUtility.debugContext(), which gives me a Context object, but that Context object is suitable only for running tests."

Nothing wrong with mock objects for tests. More below.

"Writing tests first limits your ability to quickly evolve new code"

Drawing on a point from above - having tests first is really a way to have something more concrete from the beginning. It means more rigor. When you design your software system without having tests first, you will already have an idea of how the various objects in your system interact with each other and the external environment. The tests are a way to bring the ideas from your head into actual code. For example, when you create a mock object interacting with some other set of objects, you have something in code which describes and tests the way the objects under test are supposed to behave in your design.
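A minimal sketch of that last point, with hypothetical names: a hand-rolled mock standing in for a collaborator, so the test states in code how the object under test is supposed to interact with it.

```java
// Hypothetical sketch: the interface describes how the object under
// test talks to its collaborator; the mock records that interaction.
interface Mailer {
    void send(String to, String body);
}

class Registration {
    private final Mailer mailer;

    Registration(Mailer mailer) {
        this.mailer = mailer;
    }

    void register(String email) {
        // ... persist the user somewhere ...
        mailer.send(email, "Welcome!");
    }
}

public class MockDemo {
    public static void main(String[] args) {
        final String[] captured = new String[2];
        // Mock implementation: no real e-mail, just record the call.
        Mailer mock = (to, body) -> { captured[0] = to; captured[1] = body; };

        new Registration(mock).register("a@example.com");
        System.out.println(captured[0] + " / " + captured[1]);
        // prints: a@example.com / Welcome!
    }
}
```

The mock here is the "design in code" the comment describes: the test pins down the expected conversation between the objects, independent of any real environment.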