I'm a new programmer (I've only been learning for about a year), and in my effort to get better I recently learned about TDD. I want to get into the habit of using it, since it seems very helpful, and I wanted to check that I'm using it correctly.

What I'm doing:

Think of a new method I need.

Create a test for that method.

Fail test.

Write method.

Pass test.

Refactor method.

Repeat.
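In code, one pass through that cycle might look like the following minimal Python sketch (the `slugify` function and its expected behaviour are invented here purely for illustration, using the standard `unittest` module):

```python
import unittest

# Steps 2-3: write the test first. Run it before slugify exists and
# it fails (red), which is exactly what we want to see.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Steps 4-5: write just enough code to make the test pass (green).
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Run the test; only once it passes do we refactor (step 6).
suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point of the ordering is that the first run of the test must fail for the right reason before any implementation code is written.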

I'm doing this for EVERY method I write. Are there some I shouldn't bother with? Later on I usually think of a way to test my existing methods in a different way or situation. Should I write these new tests, or should I not bother, since each method already has a test of its own? Can I be OVER-testing my code? I guess that's my main concern in asking this.

EDIT

Also, this was something I was just wondering. When doing something like making a GUI, would TDD be necessary in that situation? Personally, I can't think of how I would write tests for that.

8 Answers

What you are describing as a workflow isn't, in my opinion, the spirit of TDD.

The synopsis of Kent Beck's book on Amazon says:

Quite simply, test-driven development is meant to eliminate fear in
application development. While some fear is healthy (often viewed as a
conscience that tells programmers to "be careful!"), the author
believes that byproducts of fear include tentative, grumpy, and
uncommunicative programmers who are unable to absorb constructive
criticism. When programming teams buy into TDD, they immediately see
positive results. They eliminate the fear involved in their jobs, and
are better equipped to tackle the difficult challenges that face them.
TDD eliminates tentative traits, it teaches programmers to
communicate, and it encourages team members to seek out criticism.
However, even the author admits that grumpiness must be worked out
individually! In short, the premise behind TDD is that code should be
continually tested and refactored.

Practical TDD

Formal automated testing, especially unit testing every method of every class, is just as bad an anti-pattern as not testing anything. There is a balance to be struck. Are you writing unit tests for every setXXX/getXXX method? They are methods as well!

Also, tests can help save time and money, but don't forget that they cost time and money to develop, and since they are code they also cost time and money to maintain. If they atrophy from lack of maintenance, they become more of a liability than a benefit.

Like anything of this kind, there is a balance, and it can't be defined by anyone but yourself. Dogma in either direction is probably more wrong than correct.

A good metric is code that is critical to the business logic and subject to frequent modification based on changing requirements. Those things need formal, automated tests; that is where you get a big return on investment.

You are also going to be very hard pressed to find many professional shops that work this way. It just doesn't make business sense to spend money testing things that, for all practical purposes, will never change after a simple smoke test is performed. Writing formal automated unit tests for .getXXX/.setXXX methods is a prime example of this: a complete waste of time.
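To make the point concrete, here is what a typical accessor unit test looks like (a Python sketch; the `Account` class is hypothetical). The test is nearly a mirror image of the one-line code it exercises, which is why its return on investment is so low:

```python
class Account:
    def __init__(self):
        self._owner = None

    def set_owner(self, owner):
        self._owner = owner

    def get_owner(self):
        return self._owner

# The "test" restates the implementation almost verbatim -- it can
# only fail if a one-line accessor is wrong.
def test_owner_accessors():
    account = Account()
    account.set_owner("Alice")
    assert account.get_owner() == "Alice"

test_owner_accessors()
```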

It is now two decades since it was pointed out that program testing
may convincingly demonstrate the presence of bugs, but can never
demonstrate their absence. After quoting this well-publicized remark
devoutly, the software engineer returns to the order of the day and
continues to refine his testing strategies, just like the alchemist of
yore, who continued to refine his chrysocosmic purifications.

That pretty much addresses what I was concerned about. I was feeling that I shouldn't be testing every method like I was, but wasn't sure. Looks like I may still need to read some more about TDD.
– Zexanima May 18 '12 at 18:33

@kevincline Most of the time setXXX/getXXX are not needed at all :)
– Chip May 18 '12 at 20:20


When you memoise that trivial getXXX and get it wrong, or introduce lazy loading in your getXXX and get it wrong, then you will know that sometimes you really do want to test your getters.
– Frank Shearar May 19 '12 at 11:04
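A hypothetical sketch of the kind of bug Frank is describing: a getter that gets memoised later but never invalidates its cache (the `PriceList` class here is invented for illustration):

```python
class PriceList:
    """A made-up class whose getter was later memoised -- badly."""

    def __init__(self, rates):
        self._rates = rates
        self._total = None  # cache added when the getter was memoised

    def add_rate(self, rate):
        self._rates.append(rate)

    def get_total(self):
        # Subtle bug: the cache is never invalidated, so rates added
        # after the first call are silently ignored.
        if self._total is None:
            self._total = sum(self._rates)
        return self._total

prices = PriceList([10, 20])
first = prices.get_total()   # primes the cache with 30
prices.add_rate(5)
second = prices.get_total()  # still 30 -- a getter test would catch this
```

A test asserting that `get_total` reflects the new rate would fail immediately, which is exactly when testing a "trivial" getter pays off.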

Thanks for the good links. I'm half way through reading "Clean Code" and have a few other books lined up to read like "Code Complete" so there is still a lot more I need to learn. I just figured I would see if I was thinking about all this in the right direction.
– Zexanima May 18 '12 at 18:48


@Zexanima: You're doing way better than most of us were after a year. Just trying to point you to the next step.
– pdr May 18 '12 at 18:53


I think these 3 rules you link to, as idyllic as they may sound, are exceptionally dogmatic and unrealistically rigid for 99% of the production shops anyone will encounter.
– Jarrod Roberson May 18 '12 at 21:19


@FrankShearar or it can be seen as the impractical blathering of a fundamentalist extremist and disregarded wholesale. I have worked in shops that had this dogmatic attitude; they took the dogma literally and missed the point, writing tests that didn't test any of their actual code in a practical fashion and ending up just testing their Mocking and Dependency Injection frameworks, which at best obscured what was important.
– Jarrod Roberson May 18 '12 at 21:46


My point was that dogmatism comes from blindly accepting something as gospel. Take the polarising statement, try it out, make it force you out of your comfort zone. You cannot evaluate the tradeoffs involved in TDD if you do not try the 3-point-all-or-nothing extreme approach, because you will have no data.
– Frank Shearar May 19 '12 at 11:02

There is such a thing as over-testing. You want to make sure your unit tests overlap as little as possible; there's no point in having multiple tests verify the same conditions in the same piece of code. If you refactor your production code and many tests overlap that section, you will have to go back and fix all of those tests. If they do not overlap, one change will break at most one test.

Just because you thought of a better way of writing a test, I would not go back and start rewriting it. That's like the individuals who keep writing and rewriting the same class or function trying to make it perfect. It will never be perfect, so move on. When you discover a better method, keep it in the back of your mind (or note it in the test's comments). The next time you are in there and see an immediate benefit to switching to the new way, that's the time to refactor. Otherwise, if the feature is done and everything works, leave it working.

TDD focuses on delivering immediate value, not simply making sure every function is testable. When you add functionality, start by asking "what does the client need". Then define an interface to give the client what it needs. Then implement whatever it takes to make the test pass. TDD is almost like testing use case scenarios (including all the "what-ifs"), rather than simply coding up public functions and testing each one.

You asked about testing GUI code. Look up the "Humble Dialog" and "MVVM" patterns. The idea behind both of these is that you create a set of "view model" classes that don't actually have any UI-specific logic. However, these classes hold all the business logic that is typically part of your UI, and they should be 100% testable. What's left is a very thin UI shell, and yes, typically that shell is left without test coverage, but at that point it should have almost no logic.
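As a rough sketch of the idea (the class names and validation rules here are invented for illustration), the view model holds the state and rules that a dialog would otherwise bury in event handlers, so it can be tested without creating a single window:

```python
class LoginViewModel:
    """Holds the UI state and rules; knows nothing about widgets."""

    def __init__(self, auth_service):
        self._auth = auth_service
        self.username = ""
        self.password = ""
        self.error_message = ""

    def can_submit(self):
        # A business rule the UI would otherwise embed in a click handler.
        return bool(self.username) and len(self.password) >= 8

    def submit(self):
        if not self.can_submit():
            self.error_message = "Username and an 8+ character password required."
            return False
        return self._auth.log_in(self.username, self.password)

# Fully testable with a fake in place of the real auth service:
class FakeAuth:
    def log_in(self, user, password):
        return True

vm = LoginViewModel(FakeAuth())
vm.username = "alice"
vm.password = "short"
assert not vm.can_submit()
vm.password = "longenough"
assert vm.submit()
```

The actual dialog then just binds its text fields and buttons to the view model's attributes and methods, and that thin binding layer is what goes untested.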

If you have a large body of existing code, as a few others suggested, you shouldn't start adding unit tests absolutely everywhere. It would take forever, and you won't get much benefit from adding unit tests to the 80% of classes that are stable and will not change in the near (or not so near) future. However, for new work I find using TDD with ALL code to be extremely beneficial. Not only do you end up with a suite of automated tests when you are done, but the development itself has huge benefits:

By considering testability, you will write code which is less coupled and more modular

By considering your public contract before anything else, you will end up with public interfaces which are much cleaner

As you are writing code, verifying new functionality takes milliseconds, compared to running your entire application and trying to force execution down the right path. My team still releases error-handling code which has not even been executed ONCE, just because they couldn't get the right set of conditions to happen. It is amazing how much time we waste when those conditions do happen later in QA. And yes, a lot of this code is what someone would've considered "not an area for a lot of change in the future once smoke testing is done".

There are some methods that don't get tests of their own, namely the tests themselves. However, there is something to be said for adding some tests after the initial code has been written, such as boundary conditions and other values, so that a single method may end up with multiple tests.

While you can over-test your code, that usually happens when someone wants to test every possible permutation of inputs, which doesn't quite sound like what you are doing. For example, if you have a method that takes in a character, do you write a test for every possible value that could be entered? That is where you'd get into over-testing, IMO.
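For instance (a sketch; the function below is hypothetical), rather than feeding a character-handling method every possible value, boundary-value tests cover the edges of each range plus one representative from each region:

```python
def is_printable_ascii(ch):
    """True for printable ASCII characters, space (0x20) through tilde (0x7E)."""
    return " " <= ch <= "~"

# Over-testing would loop over all possible characters. Sufficient:
# test the boundaries and one representative on each side of them.
assert is_printable_ascii(" ")         # lower boundary
assert is_printable_ascii("~")         # upper boundary
assert not is_printable_ascii("\x1f")  # just below the range
assert not is_printable_ascii("\x7f")  # just above the range
assert is_printable_ascii("A")         # representative interior value
```

Five assertions give essentially the same confidence as an exhaustive sweep, because any off-by-one error in the range check shows up at a boundary.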

Ahh, okay. That's not what I'm doing. I just usually end up thinking of a different situation I could test my methods in a while down the line, after I've already made their initial tests. I was just making sure those 'extra' tests were worth making, or if it was overdoing it.
– Zexanima May 18 '12 at 18:03

If you work in small enough increments you can usually be reasonably sure your test actually works. In other words, having a test fail (for the right reason!) is in itself testing the test. But that level of "reasonably sure" won't be as high as the code under test.
– Frank Shearar May 20 '12 at 9:28

Tests are code, so if you can improve a test, go ahead and refactor it. If you think a test can be improved, change it. Do not be afraid to replace a test with a better one.

When testing your code, I recommend avoiding specifying how the code is supposed to do what it does; tests should look at the results of the methods. This will help with refactoring. Some methods do not need to be explicitly tested (e.g. simple getters and setters) because you will use them to verify the results of other tests.
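A small sketch of the difference (the `ShoppingCart` class is invented for illustration): the test asserts on the observable result rather than on how items are stored internally, and the getter is exercised along the way:

```python
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        self._items.append((name, price))

    def get_total(self):
        return sum(price for _, price in self._items)

# Result-focused test: we assert on the outcome (the total), not on
# the internal list that add_item maintains. get_total is verified
# "for free" -- if it were broken, this test would fail.
cart = ShoppingCart()
cart.add_item("book", 12.50)
cart.add_item("pen", 2.50)
assert cart.get_total() == 15.0
```

Because the test never touches `_items` directly, the internal representation can be refactored (say, to a dict) without rewriting the test.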

I was writing tests for getters and setters too, so thanks for that tip. That'll save me some unneeded work.
– Zexanima May 18 '12 at 18:10

"Some methods do not need to be explicitly tested (i.e. simple getters and setters)" - You've never copy/pasted a getter and setter and forgotten to change the field name behind it? The thing about simple code is that it requires simple tests -- how much time are you really saving?
– pdr May 18 '12 at 18:13

I don't mean that the method isn't tested at all. It is just checked implicitly, by confirming the results of other methods or during the setup of a test. If the getter or setter doesn't work properly, the test will fail because the properties weren't set correctly. You get them tested for free, implicitly.
– Schleis May 18 '12 at 18:17

Getter and setter tests don't take long, so I probably will continue to do them. However, I never copy and paste any of my code so I don't run into that issue.
– Zexanima May 18 '12 at 18:18

My opinion on TDD is that the tooling has created a world of 'point and click' style developers. Just because the tools create a test stub for each method doesn't mean you should be writing tests for every method. Some people are 'renaming' TDD as BDD (behaviour-driven development), where the tests are much larger-grained and intended to test the behaviour of the class, not each fiddly little method.

If you design your tests to exercise the class as it's intended to be used, then you start to gain some benefits, especially as you write tests that exercise more than a single method at a time and test the interaction of those methods. Think of it as writing tests for a class rather than for its methods. In any case, you must still write 'acceptance tests' that exercise the combination of methods to make sure there are no contradictions or conflicts in how they are used together.

Don't confuse TDD with testing; it's not. TDD is designed so that you write code to exercise your requirements, not to test your methods. It's a subtle but important point that's often lost on people who blindly write test code for every method: you should be writing tests that make sure your code does what you want it to do, not tests that merely confirm the code you wrote works the way you wrote it.

When you start learning TDD, yes, you should blindly follow the dogmatic approach of not writing a single line of code except to make a failing test pass, and writing only enough of a test to fail (and fail for the right/expected reason).

Once you have learned what TDD is about, THEN you can decide that certain kinds of things aren't worth testing. This is the same approach you should follow for everything, and the Japanese martial arts call this "shuhari". (The link also explains how one can progress through the stages of learning without a teacher which is, I suspect, how most people have to learn.)