I read Kent’s ‘Test Driven Development By Example’ book a couple of years ago and remember enjoying it, so I was intrigued to see what it would be like to watch some of those ideas put into practice in real time.

As I expected, much of Kent’s approach wasn’t surprising to me, but a few things stood out:

Kent wrote the code inside the first test and didn’t pull it out into its own class until the first test case was working. I’ve only used this approach in coding dojos when we followed Keith Braithwaite’s ‘TDD as if you meant it’ idea. Kent wasn’t as stringent about writing all the code inside the test, though – he only did this when he was getting started with the problem.
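A minimal sketch of that flow, using a hypothetical Money example in Python (the names and API here are my invention, not Kent’s actual code):

```python
import unittest

class MoneyTest(unittest.TestCase):
    def test_five_dollars_times_two_is_ten_dollars(self):
        # Step 1: to begin with, all the "production" logic lives
        # right here inside the test...
        amount = 5
        product = amount * 2
        self.assertEqual(10, product)

# Step 2: only once that first test passes is the logic pulled out
# into its own class, and the test updated to use it.
class Money:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        return Money(self.amount * multiplier)
```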

He reminded me of the ‘calling the shots’ technique when test-driving a piece of code: we should predict what’s going to happen when we run a test rather than just blindly running it. Kent pointed out that this is a good way to learn something – if the test doesn’t fail or pass the way we expect it to, then we have a gap in our understanding of how the code works, and we can then do something about closing that gap.
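As a tiny, contrived Python illustration of calling the shot (my sketch, not something from the screencasts): state the prediction before running the test, then compare it with what actually happens.

```python
def add(a, b):
    # Deliberately unfinished stub - it always returns 0 for now.
    return 0

# Calling the shot: I predict the assertion below FAILS, because
# add() still returns a hard-coded 0 rather than a real sum.
prediction = "fail"

try:
    assert add(2, 3) == 5
    outcome = "pass"
except AssertionError:
    outcome = "fail"

if prediction == outcome:
    print("Prediction confirmed - understanding intact.")
else:
    print("Prediction wrong - there's a gap to investigate.")
```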

I was quite surprised that Kent copied and pasted part of an existing test almost every time he created a new one – I thought that was just something that we did because we’re immensely lazy!

I’m still unsure about this practice. Although Ian Cartwright points out the dangers of doing it, it does seem to make for better pairing sessions: the navigator doesn’t have to sit twiddling their thumbs while their pair types out what is probably a fairly similar test to one of the others in the same file. Having said that, it could be argued that if your tests are that similar then perhaps there’s a better way to write them.
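One such ‘better way’, at least in Python, is a table-driven test: `unittest`’s `subTest` lets several near-identical copy/pasted tests collapse into one loop over examples. The leap-year rules here are just a hypothetical stand-in:

```python
import unittest

def is_leap(year):
    # Standard Gregorian leap-year rules.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_applies_the_leap_year_rules(self):
        # One table-driven test instead of four copy/pasted ones;
        # subTest reports each failing case individually.
        cases = [(2000, True), (1900, False), (2004, True), (2001, False)]
        for year, expected in cases:
            with self.subTest(year=year):
                self.assertEqual(expected, is_leap(year))
```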

For me, the main benefit of not copy/pasting is that it puts us in a mindset where we have to think about the next test we’re going to write. I got the impression that Kent was doing that anyway, so it’s probably not such a big deal.

Kent used the ‘present tense’ in his test names rather than prefixing each test with ‘should’. This is an approach I came across when working with Raph at the end of last year.

To use Esko Luontola’s lingo I think the tests follow the specification style as each of them seems to describe a particular behaviour for part of the API.

I found it interesting that he includes the method name as part of the test name. For some reason I’ve tried to avoid doing this, and I often end up with really verbose test names when a more concise name that includes the method name would have been far more readable.
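A hypothetical Python illustration of the difference (the Stack class is mine, not from the screencasts) – the second name is shorter and still reads as a sentence, because the method name does some of the work:

```python
import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    # Verbose 'should' style that avoids naming the method:
    #   test_should_return_the_most_recently_added_element_when_one_is_requested
    # Present tense, with the method name included:
    def test_pop_returns_the_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual("second", stack.pop())
```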

One thing I don’t think I’ve quite grasped yet is something Kent pointed out in his summary at the end of the 4th screencast. To paraphrase, he suggested that the order in which we write our tests/code can have quite a big impact on the way the code evolves.

He described the following algorithm to help find the best order:

Write some code.

Erase it.

Write it in a different order.

Repeat.

I’m not sure whether Kent intended that cycle to be followed only when practising or whether it’s something he’d do with real code too. It’s an interesting idea either way, and since I haven’t ever used the technique I’m intrigued as to how it would impact the way code evolves.

There were also a few good reminders across all the episodes:

Don’t parameterise code until you actually need to.

Follow the Test – Code – Cleanup cycle.

Keep a list of tests to write and cross them off as you go.
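The first of those reminders can be sketched with a hypothetical example: the speculative version pays for flexibility nobody has asked for yet, while the plain version does exactly what the current tests require.

```python
# Speculative: parameters no test has asked for yet, each one an
# extra path to document, test, and maintain.
def format_price_premature(amount, currency="USD", separator=",",
                           decimals=2, symbol_first=True):
    ...  # imagined flexibility

# Sufficient: exactly what today's callers and tests need.
def format_price(amount):
    return "${:.2f}".format(amount)

print(format_price(3.5))
```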

Overall it was an interesting series of videos to watch and there were certainly some good reminders and ideas for doing TDD more effectively.

Regarding the TDD game that Kent describes at the end of Episode 4 (write code, erase it, write it in a different order, repeat), I do believe he intends to practise it each time. He describes the goal of TDD as “Clean code that works.” To achieve that, he keeps refactoring until the code is as good as it can be in terms of design and usability.