Thursday, August 14, 2008

Ready, Set, Panic!!!

I saw a TV commercial recently that reminded me of what it's like for developers learning advanced object-oriented development techniques. The commercial showed a golfer standing in front of a golf ball, club in hand. Before he swings, he thinks about all of the tips his friends have given him about golf:

He's thinking about so many things that instead of hitting the ball, he releases the club in his backswing and nails his caddy - after all, he forgot to grip the club. I couldn't find the clip on YouTube, but I found something very similar to give you an idea:

I had a similar experience when learning to water-ski on one ski. I didn't get on top of the water until I realized the only thing I needed to remember was to push hard with my back foot. That was it - when the boat engine roared, I pushed with my back foot. Voila!

When I began working for Jeffrey Palermo, I struggled to find a common theme to guide my development decision making. The list of rules to follow when writing maintainable, object-oriented software is immense:

For developers dedicated to excellence (but unfamiliar with advanced OOP), the list above can basically prevent them from ever writing any code. In my case: what will Jeffrey think of the code I commit? How do I *know* I'm not writing the legacy code that will torture my team in the future? Is there a simple rule (like pushing with my back foot) that I can focus on to be successful?

Fortunately, I found just such a rule to address all of the concerns listed above. It is... drum roll... 100% Unit Test Coverage. Specifically:

1. Unit test every meaningful input combination
2. In Onion Architecture terms, test the behavior of only one domain object or service at a time
3. Test all delegation from one class to another
4. Keep your tests small and readable

Voila!

By testing all delegation logic, you're almost forced to define an interface for each of your services so you can mock them when unit testing the classes that depend on them.
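To make the idea concrete, here's a minimal sketch of what "test the delegation" looks like. The post is .NET-centric, but the pattern is the same everywhere; this example is in Python with the standard library's mocking tools, and the `OrderProcessor`/`notifier` names are hypothetical, not from the post.

```python
from unittest.mock import Mock

# Hypothetical example: an OrderProcessor that delegates to a notifier
# abstraction, so the collaborator can be mocked in a unit test.

class OrderProcessor:
    def __init__(self, notifier):
        self.notifier = notifier  # depends on an abstraction, not a concrete class

    def place_order(self, order_id):
        # ...domain logic would go here...
        self.notifier.send_confirmation(order_id)  # the delegation under test

# Unit test: verify the delegation without touching real infrastructure.
notifier = Mock()
processor = OrderProcessor(notifier)
processor.place_order(42)
notifier.send_confirmation.assert_called_once_with(42)
```

Because the test only cares that `send_confirmation` was called with the right argument, the notifier never needs a real implementation - which is exactly the pressure that pushes you toward defining interfaces for your services.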

By testing every meaningful input combination, you're forced to break up code into small, simple classes. Methods with high cyclomatic complexity will create too many input combinations. A class with 16 input combinations can often be broken up into two classes with four input combinations each, tested independently. You just cut the number of test cases in half (4 + 4 = 8, versus 4 * 4 = 16).
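Here's a toy illustration of that arithmetic (a hypothetical pricing example, sketched in Python; none of these names come from the post). Four boolean inputs in one class would mean 2 * 2 * 2 * 2 = 16 combinations; split into two classes, each needs only 2 * 2 = 4.

```python
# Hypothetical split: one class handling (is_member, is_rush, is_heavy,
# is_international) at once would need 16 test cases. Two focused classes
# need 4 each - 8 total.

class MembershipDiscount:
    def rate(self, is_member, is_rush):
        # 2 * 2 = 4 input combinations to test
        if is_member and not is_rush:
            return 0.10
        return 0.0

class ShippingSurcharge:
    def amount(self, is_heavy, is_international):
        # another 2 * 2 = 4 combinations, tested independently
        base = 5.0 if is_heavy else 0.0
        return base + (10.0 if is_international else 0.0)

# 4 + 4 = 8 unit tests now cover what would have taken 4 * 4 = 16.
assert MembershipDiscount().rate(True, False) == 0.10
assert ShippingSurcharge().amount(True, True) == 15.0
```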

By unit testing your classes, you prevent any dependencies on infrastructure. For example, you can't include code like DateTime.Now directly in your classes. Otherwise, you'll need to reset your system clock to produce a predictable unit test result - yuck.
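The standard escape hatch is to inject the clock as a dependency. A minimal sketch, in Python rather than the post's C# (`datetime.now` standing in for `DateTime.Now`; the `Greeter` class is a made-up example):

```python
from datetime import datetime

# Hypothetical sketch: inject a clock instead of reading the system time
# directly, so unit tests stay deterministic.

class Greeter:
    def __init__(self, clock=datetime.now):
        self.clock = clock  # production code uses the real clock by default

    def greeting(self):
        return "Good morning" if self.clock().hour < 12 else "Good afternoon"

# Unit test: supply a fixed time - no system-clock fiddling required.
fixed = lambda: datetime(2008, 8, 14, 9, 0)
assert Greeter(clock=fixed).greeting() == "Good morning"
```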

The mistakes you do make will be small and easy to correct. After all, your unit test suite will catch any refactoring mistakes you make.

The next time you're feeling overwhelmed keeping up with the Palermos, Millers, and the like, challenge yourself to achieve 100% Unit Test Coverage. If you're like me, you'll find it rarely leads you astray.

9 comments:

Yeah, you are right - while I'm reading your post, it's exactly what is happening to me. I'm reading a lot of books about DDD and patterns (Fowler), and I see too many things that I need to apply. Unfortunately my company is not applying TDD, but I'm trying it on my freelance projects.

Thanks for this post. I'm new to TDD and a lot of the concepts you write about. I'm excited about learning them, but I admit to feeling a bit overwhelmed by everything there is to learn. That said, I wanted to make one comment. Your post seems to almost contradict a post Jeffrey Palermo made yesterday. He specifically says, "Testability is a side-effect of good design," whereas you're saying to focus on testability in order to arrive at good design. I think I see both points, but given my current level of TDD-fu, I think I'm going to go with your approach.

Kevin, I am with you in striving for 100% unit test coverage. However, I hear many bloggers saying 100% coverage should not be the target, because even with 100% coverage some paths of code can remain untested (of course, the CC metric can help here).

Five months back, when I introduced TDD to my then-new team, one of the senior testers, who comes from a 20-year Waterfall background, asked me about the % of coverage I am looking for. His argument was that achieving even 70% test coverage is very hard for any application (at least a web app). In spite of my best effort, I couldn't convince him that I am talking about testing at the unit level vs. his perception of testing the whole system.

What do you think about coverage for the whole application, keeping in mind that even 100% unit test coverage will not give 100% coverage for the app?

1) Only certain parts of your code base can be covered with automated tests. For example, code-behind files in ASP.NET are almost impossible to cover. As a rule, when I say 100%, I'm not talking about those projects. However, the amount of code in code-behind files and other difficult-to-test areas can be greatly minimized if you work at it.

2) 100% code coverage does not necessarily mean 100% validation. You need to make the proper assertions for your tests to have any value. TDD prescribes how to accomplish this through red-green-refactor. One thing is certain, though: 50% code coverage absolutely means your code can't be more than 50% validated through automated testing. In other words, 100% code coverage doesn't guarantee 100% validation, but anything less than 100% absolutely indicates you're accepting risk when changing code.
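A tiny made-up example of that coverage-vs-validation gap (sketched in Python; the `discount` function is hypothetical): both tests below execute every line of the production code, but only one of them actually validates anything.

```python
# Hypothetical illustration: a test can execute every line (100% coverage)
# while asserting nothing, so it validates nothing.

def discount(price):
    return price * 0.9

def test_covers_but_validates_nothing():
    discount(100)  # line executed, result ignored: coverage without validation

def test_actually_validates():
    assert discount(100) == 90.0  # the assertion is what gives the test value

test_covers_but_validates_nothing()
test_actually_validates()
```

A coverage tool would score both tests identically, which is exactly why coverage is a necessary floor, not a sufficient guarantee.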