Thursday, 16 August 2012

For a little while now I've been collecting Refactoring Kata exercises in a github repo, (you're welcome to clone it and try them out). I've recently facilitated working on some of these katas at various coding dojo meetings, and participants seem to have enjoyed doing them. I usually give a short introduction about the aims of the dojo and the refactoring skills we're focusing on, then we split into pairs and work on one of these Refactoring Katas for a fixed timebox. Afterwards we compare designs and discuss what we've learnt in a short retrospective. It's satisfying to take a piece of ugly code and after only an hour or so make it into something much more readable, flexible, and significantly smaller.

Test Driven Development is a multifaceted skill, and one aspect is the ability to improve a
design incrementally, applying simple refactorings one at a time until
they add up to a significant design improvement. I've noticed in these dojo meetings that some pairs do better than others at refactoring in small steps. I can of course stand behind them and coach to some extent, but I was wondering if we could use a tool that would watch more consistently, and help pairs notice when they are refactoring poorly.

I spent an hour or so doing the GildedRose refactoring kata myself in Java, and while I was doing it I had two different monitoring tools running in the background, the "Codersdojo Client" from http://content.codersdojo.org/codersdojo_client/, and "Sessions Recorder" from Industrial Logic. (This second tool is commercial and licenses cost money, but I actually got given a free license so I could try it out and review it). I wanted to see if these tools could help me to improve my refactoring skills, and whether I could use them in a coding dojo setting to help a group.

Setting it up and recording your Kata
The Codersdojo Client is a ruby gem that you download and install. When you want to work on a kata, you have to fiddle about a bit on the command line getting some files in the right places, (like the junit jar), then modify and run a couple of scripts. It's not difficult if you follow the instructions and know basically how to use the command line. You have a script running all the time you are coding, and it runs the tests every time you save a source file.

The Sessions Recorder is an Eclipse plugin that you download and install in the same way as other Eclipse plugins. It puts a "record" button on your toolbar. You press "record" before you start working on the kata.

Uploading the Kata for analysis
When you've finished the kata, you need to upload your recording for analysis. With the Codersdojo Client, when you stop the script, it gives you the option of doing the upload. When that's completed it gives you a link to a webpage, where you can fill in some metadata about the Kata and who you are. Then it takes you to a page with the full final code listing and analysis.

The Sessions Recorder is similar. You press the button on the Eclipse toolbar to stop recording, and save the session in a file. Then you go to the Industrial Logic webpage, log into your account, and go to the page where you upload your recorded Session file. You don't have to enter any metadata, since you have an account and it remembers who you are, (you did pay for this service after all!) It then takes you to a page of analysis.

Codersdojo Client Analysis
The codersdojo client creates a page that you can make public if you want - mine is available here. It gives you a graph like this:

It's showing how long you spent between saving the file, (a "move"), and whether the tests were red or green when you saved. There are also some general statistics about how long you spent in each move, and how many modifications there were on average. It points out your three longest moves, and has links to them so you can see what you were doing in the code at those points.

I think this analysis is quite helpful. I can see that I'm going no more than two or three minutes between saving the file, and usually if the tests go red I fix them quickly. Since it's a refactoring kata I spend quite a lot of moves at the start where it's all green, as I build up tests to cover the functionality. In the middle there is a red patch, and this is a clear sign to me that I could have done that part of the kata better. Looking over my code I was doing a major redesign and I should have done it in a better way that would have kept the tests running in the meantime.

Towards the end of the kata I have another flurry of red moves, as I start adding new functionality for "Conjured" items. I tried to move into a more normal TDD red-green-refactor cycle at that point, but from this graph it doesn't look like I succeeded very well. I think I rushed past the "green" step without running the tests, then did a big refactoring. It worked in the end, but I think I could have done that better too.

Sessions Recorder Analysis
The Sessions Recorder produces a page which is personal to me, and I don't think it allows me to share it publicly on the web. On the page is a graph that looks like this:

As you can see it also shows how long I spend with passing and failing tests, in a slightly different way from the Codersdojo Client's graph. It also distinguishes compiler errors from failing tests, (pink vs red).

This graph also clearly shows the areas I need to improve - the long pink patch in the middle where I do a major redesign in too large a step, and the red bit at the end where I'm not doing TDD all that well.

The line on the graph is a "score" where it is awarding me points when I successfully perform the moves of TDD. Further down the page it gives me a list of the "events" this score is based on:

(This is just some of the events, to show you the kinds of things this picks up on.) "New Green Test" seems to score zero points, which is a bit disappointing, but adding a failing test gets a point, and so does making it pass. "Went green, but broke other tests" gets zero points. It's clearly designed to help me successfully complete red-green-refactor cycles, not reward me for adding test coverage to existing code, then refactoring it.

There is another graph, more focused on the tests:

This graph has mouseover texts so when you hover over a red dot, it
shows all the compilation errors you had at that point, and if you hover
over a green dot it tells you which tests were passing. It also distinguishes "compiler errors" from a "compiler rash": a "compiler rash" is a more serious compilation problem that affects several files.

You can clearly see from this graph that for the first part of the kata I was building up the test coverage, then just leaning on those tests and refactoring for the rest. It didn't notice that I had two @Ignore'd tests until the last few minutes, though. (I added failing tests for Conjured Items near the start, then left them until I had the design suitably refactored near the end.)

I actually found this graph quite hard to use to work out what I need to improve on. There seem to be three long gaps in the middle, full of compilation errors where I wasn't running the tests. Unlike with the Codersdojo Client, there isn't a link to the actual code changes I was making at those points. I'm having trouble working out just from the compiler errors what I should have been doing differently. I think one of these gaps is the same major redesign I could see clearly in the Codersdojo Client graph as a too big step, but I'm not so sure what the other two are.

There are further statistics and analysis as well. There is a section for "code smells", but it claims not to have found any. The code I started with should surely qualify as smelly? I'm not sure why it hasn't picked up on that.

Conclusions
I think both tools could help me to become better at Test Driven Development, and could be useful in a dojo setting. I can imagine pairs comparing their graphs after completing the kata, discussing how they handled the refactoring steps, and where the design ended up. Pairs could swap computers and look through someone else's statistics to get some comparison with their own.

The Codersdojo Client is free to use, and works with a large number of programming languages and any editor. You do have to be comfortable with the command line though. The Sessions Recorder tool only supports Java and C# via Eclipse. It has more detailed analysis, but for this Refactoring Kata I don't think it was as helpful as it could have been.

The other big difference between the tools is about openness. The Sessions Recorder keeps your analysis private to you, and if you want to discuss your performance, it lets you do so with the designers of the tool via a "comment on this page" function. I haven't tried that out yet so I'm not sure how it works, that is, whether you get feedback from a real person as well as the tool.

The Codersdojo Client also lets you keep your analysis private if you want, but in addition lets you publish your Kata performance for general review, as I have done. You can share your desire for feedback on twitter, g+ or facebook. People can go in and comment on specific lines of code and make suggestions. That wouldn't be so needed during a dojo meeting, but might be useful if you were working alone.

Further comparison needed
On another occasion I tried out the Sessions Recorder on a normal TDD kata, and found the analysis much better. For example, this graph shows me doing the Tennis kata from scratch:

This
shows a clear red-green pattern of small steps, and steadily increasing
score rewarding me for doing TDD correctly. Unfortunately I didn't do a
Codersdojo Client session at the same time as this one, for comparison. A further blog post is clearly needed for this case... :-)

Monday, 6 August 2012

Please note - As of March 2013, I have rewritten this post in the light of further experience and discussions. The updated post is available here.

I feel like I've spent most of my career learning how to write good automated tests in an agile environment. When I downloaded JUnit in the year 2000 it didn't take long before I was hooked - unit tests for everything in sight. That gratifying green bar is near-instant feedback that everything is as expected, my code does what I intended, and I can continue developing from a firm foundation.

Later, starting in about 2002, I began writing larger granularity tests, for whole subsystems; functional tests if you like. The feedback that my code does what I intended, and that it has working functionality has given me confidence time and again to release updated versions to end-users.

Often, I've written functional tests as regression tests, after the
functionality is supposed to work. In other situations, I've been able
to
write these kinds of tests in advance, as part of an ATDD, or BDD
process. In either case, I've found the regression tests you end up with need to
have certain properties if they're going to be useful in an agile
environment
moving forward. I think the same properties are needed for good agile functional tests as for good unit tests, but
it's much harder. Your mistakes are amplified as the scope of the test increases.

I'd like to outline four principles of agile test
automation that I've derived from my experience.

Coverage

If you have a test for a feature, and there is a bug in
that feature, the test should fail. Note I'm talking about coverage of
functionality, not code coverage, although these concepts are related.
If your code coverage is poor, your functionality coverage is likely
also to be poor.

If your tests have poor coverage, they
will continue to pass even when your system is broken and functionality
unusable. This can happen if you have missed out needed test cases, or
when your test cases don't properly check what the system actually did. The consequence of poor coverage is that you can't refactor
with confidence, and need to do additional (manual) testing before
release.
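To make the distinction between code coverage and functionality coverage concrete, here's a minimal sketch in plain Java (the priceWithDiscount method and its bug are invented for illustration): a test can execute every line of a method, giving full code coverage, yet miss the bug entirely because it never checks what the system actually did.

```java
public class CoverageExample {
    // Hypothetical production code with a bug: the discount is never applied.
    static int priceWithDiscount(int price, int discountPercent) {
        return price; // should be: price - price * discountPercent / 100
    }

    public static void main(String[] args) {
        // Weak test: exercises the method (100% code coverage), but asserts
        // nothing about the outcome, so it stays green despite the bug.
        priceWithDiscount(200, 10);
        System.out.println("weak test: passed");

        // Stronger test: checks the actual result, and catches the bug.
        int result = priceWithDiscount(200, 10);
        if (result != 180) {
            System.out.println("strong test: failed, expected 180 but got " + result);
        }
    }
}
```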

The aim for automated regression tests is good Coverage: If you break
something important and no tests fail, your test coverage is not good enough. All
the other principles are in tension with this one - improving Coverage will often impair the others.

Readability

When you look at the test case, you can read it through and understand what the test is for. You can see what the expected behaviour is, and what aspects of it are covered by the test. When the test fails, you can quickly see what is broken.

If your test case is not readable, it will not be useful. When it fails you will have to dig through other sources outside of the test case to find out what is wrong. Quite likely you will not understand what is wrong and you will rewrite the test to check for something else, or simply delete it.
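As a sketch of the difference (plain Java rather than JUnit, with an invented isLeapYear example): both tests below check the same behaviour, but only one tells you what is expected, and what is broken when it fails.

```java
public class ReadabilityExample {
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // Unreadable: a magic value, and a failure message that explains nothing.
    static void test1() {
        if (!isLeapYear(2000)) throw new AssertionError("t1");
    }

    // Readable: the name and message state the expected behaviour, so a
    // failure immediately tells you which rule is broken.
    static void yearsDivisibleBy400AreLeapYears() {
        if (!isLeapYear(2000))
            throw new AssertionError("2000 is divisible by 400, so it should be a leap year");
    }

    public static void main(String[] args) {
        test1();
        yearsDivisibleBy400AreLeapYears();
        System.out.println("both pass, but only one would fail readably");
    }
}
```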

As you improve Coverage, you will likely add more and more test cases. Each one may be fairly readable on its own, but taken all together it can become hard to navigate and get an overview.

Robustness

When a test fails, it means the functionality it tests
is broken, or at least is behaving significantly differently from
before. You need to take action to correct the system or update the test
to account for the new behaviour. Fragile tests are the opposite of Robust: they fail often for no good reason.

Robustness problems you often run into include tests that are not isolated from one another, duplication between test cases, and flickering tests. If you run a test by itself and
it passes, but fails in a suite together
with other tests, then you have an isolation problem. If you have one broken feature and it causes a large number of test failures, you have duplication between test cases. If you have a test that fails in one test run, then passes in the next when nothing changed, you have a flickering test.
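A minimal sketch of an isolation problem (plain Java; the shared counter is invented for illustration): testB passes when run on its own, but fails in a suite where testA has already run, because both depend on shared mutable state.

```java
public class IsolationExample {
    static int counter = 0; // shared mutable state -- the root of the problem

    static void testA() {
        counter++;
        if (counter != 1) throw new AssertionError("testA expected counter == 1");
    }

    static void testB() {
        counter++;
        // Run alone, counter is 1 here and the check passes. Run after
        // testA, counter is 2 and the test fails for no good reason.
        if (counter != 1) {
            System.out.println("testB failed in the suite: isolation problem");
        }
    }

    public static void main(String[] args) {
        testA();
        testB();
    }
}
```

Flickering tests have the same flavour, except the hidden dependency is on something like the clock, the network, or thread timing rather than another test.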

If
your tests often fail for no good
reason, you will start to ignore them. Quite likely there will be real failures hiding amongst all the
false ones, and the danger is you will not see them.

As you improve Coverage you'll want to add more checks for details of your system. This will give your tests more and more reasons to fail.

Speed

As an agile developer you run the tests frequently. Both (a) every time you build the system, and (b) before you check in changes. I recommend time limits of 2 minutes for (a) and 10 minutes for (b). This fast feedback gives you the best chance of actually being willing to run the tests, and to find defects when they're cheapest to fix.

If your test suite is slow, it will not be used. When you're feeling stressed, you'll skip running them, and problem code will enter the system. In the worst case the test suite will never become green. You'll fix the one or two problems in a given run and kick off a new test run, but in the meantime someone else has checked in other changes, and the new run is not green either. You're developing all the while the tests are running, and they never quite catch up. This can become pretty demoralizing.

As you improve Coverage, you add more test cases, and this will naturally increase the execution time for the whole test suite.

How are these principles useful?

I find it useful to remember these principles when designing test
cases. I may need to make tradeoffs between them, and it helps just to step back and assess how I'm doing on each principle from time to time as I develop.

I also find these principles useful when I'm trying to diagnose why a test
suite is not being useful to a development team, especially if things have got so bad they have stopped maintaining it. I can often
identify which principle(s) the team has missed, and advise how to
refactor the test suite to compensate.

For example, if the problem is lack of Speed you have some options and tradeoffs to make:

Invest in hardware and run tests in parallel (costs $)

Use a profiler to optimize the tests for speed the same as you would production code (may affect Readability)

Push down tests to a lower level of granularity where they can execute faster. (may reduce Coverage and/or increase Readability)

Identify key test cases for essential functionality and remove the other test cases. (sacrifice Coverage to get Speed)

Explaining these principles can promote useful discussions with people new to agile, particularly testers. The test suite is a resource used by many agile team members - developers, analysts, managers etc - in its role as "Living Documentation" for the system, (see Gojko Adzic's writings on this). This emphasizes the need for both Readability and Coverage. Automated tests in agile are quite different from those in a traditional process, since they are run continually throughout the process, not just at the end. I've found many traditional automation approaches don't deliver enough Speed and Robustness to support agile development.

I hope these principles will help you to reason about the automated tests in your suite.

About Me

I'm a self employed consultant, with a focus on agile automated testing. I'm the author of "The Coding Dojo Handbook" (http://leanpub.com/codingdojohandbook). I'm married to Geoff Bache, creator of StoryText and TextTest. I tweet with username @emilybache.