Functional tests verify that your application works as specified - in fact, the tests can act as a specification. The tests can be written in a 'human readable' form (the user story) and then, through your test framework, turned into an 'executable specification'. More on the human-readable specification shortly.

As well as telling you when a feature is complete (helping to resist developer gold-plating), functional tests can serve as proof of delivery: if you have agreed the written text of the user stories with the customer, then a passing test suite is evidence that you have met their specification.
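To make the idea of an 'executable specification' concrete, here is a minimal sketch of a user story turned into a functional test with Python's unittest module. The `Spreadsheet` class is a hypothetical stand-in for the application under test, not Resolver One's actual API; the story text lives in the test as a comment.

```python
import unittest


class Spreadsheet:
    """Hypothetical stand-in for the real application under test."""

    def __init__(self):
        self.cells = {}

    def set_cell(self, ref, value):
        self.cells[ref] = value

    def get_value(self, ref):
        value = self.cells.get(ref, "")
        if isinstance(value, str) and value.startswith("="):
            # Crude formula support - just enough for the story below:
            # substitute cell references, then evaluate the expression.
            expr = value[1:]
            for name, cell_value in self.cells.items():
                if not (isinstance(cell_value, str) and cell_value.startswith("=")):
                    expr = expr.replace(name, str(cell_value))
            return eval(expr)
        return value


class TestSummingCells(unittest.TestCase):
    def test_user_can_sum_two_cells(self):
        # User story: "As a user, I can enter numbers in two cells
        # and see their sum in a third cell."
        sheet = Spreadsheet()
        sheet.set_cell("A1", 2)
        sheet.set_cell("A2", 3)
        sheet.set_cell("A3", "=A1+A2")
        self.assertEqual(sheet.get_value("A3"), 5)
```

The test reads almost as plainly as the story itself, which is what lets an agreed set of stories double as a checkable specification.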

As your test suite develops it becomes invaluable. Firstly, your functional tests will often pick up failures that your unit tests miss. This is particularly true when new features interact in unforeseen ways with existing ones. Every developer has had the experience of a shiny new feature breaking an old one...

Perhaps less expected is that your test suite will become massively useful as you tackle big refactors. When you make core changes to your application you will expect a lot of your unit tests to break - and it can be hard to know which breakages are expected (because of the changes) and which are genuine failures.

As you start to fix the application in light of the refactoring, your functional tests will tell you when you are done - when the application's functionality is fully restored.

We've found this in a very real way at Resolver Systems. The first major refactoring I remember was when we moved Resolver One from being a single-threaded application to doing recalculations on a background thread. This changed just about everything internally - but to the user it was only a single new feature: recalculations no longer blocked the user interface.

After making the main change internally (and hunting down threading errors), we repeatedly ran the functional tests, fixing everything they reported as broken. I can't imagine trying to hunt down bugs and work out what else needed to change without the help of the automated test suite.

Continuous Integration

Your test infrastructure:

- Running your tests should be part of an automated build
- To get the best out of them, run them continuously or on every check-in
- We used to use CruiseControl.NET
- We broke it and replaced it with a custom script - 130 lines of Python
- Now we have a distributed build system
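The custom-script approach is simpler than it might sound. Below is a minimal sketch of the core polling loop such a script needs - not our actual 130-line script, and with the VCS query and build command left as injected callables (`get_revision` and `run_build` are illustrative names) so the sketch stays independent of any particular version control system or test runner.

```python
import time


def build_if_changed(get_revision, run_build, last_built):
    """Run one polling cycle: trigger a full build when a new
    revision has appeared since the last build.

    get_revision: callable returning the current VCS revision id
                  (e.g. by shelling out to your VCS client)
    run_build:    callable that runs all the tests and returns
                  True if they passed
    Returns the revision id that is now built.
    """
    revision = get_revision()
    if revision != last_built:
        passed = run_build()
        print("revision %s: %s" % (revision, "OK" if passed else "BROKEN"))
        return revision
    return last_built


def watch(get_revision, run_build, poll_seconds=60):
    """Loop forever, rebuilding on every check-in."""
    last_built = None
    while True:
        last_built = build_if_changed(get_revision, run_build, last_built)
        time.sleep(poll_seconds)
```

A real script would also record results somewhere visible to the team and notify whoever broke the build, but the skeleton above is most of the machinery.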

Preferably you should run a full build (all tests) before every check-in. Functional tests can be slow - our distributed test system, which lets us spread a build across many machines, helps here (it displays failures and exception tracebacks through a web application while the tests are still running).

At Resolver Systems we now have around 300 functional tests (growing all the time), plus about 4500 unit tests. A full build on a single machine takes over four hours. Our continuous integration setup uses our distributed test system to spread the tests over three machines - giving us slightly faster feedback between checking in and seeing whether we have broken the build.
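The heart of spreading a build across machines is deciding which tests go where. Our actual system is more involved, but the core scheduling idea can be sketched as a greedy partition: assign the longest-running tests first, always to the machine with the lightest load, so the machines finish at roughly the same time. All names here are illustrative.

```python
def partition_tests(durations, machine_count):
    """Assign tests to machines, longest first, so that each machine
    gets a roughly equal share of the total expected run time.

    durations: dict mapping test name -> expected duration in seconds
    Returns a list of {"tests": [...], "load": total_seconds} dicts,
    one per machine.
    """
    machines = [{"tests": [], "load": 0.0} for _ in range(machine_count)]
    # Greedy "longest processing time" heuristic: place each test on
    # the machine that currently has the least work.
    for name, seconds in sorted(
        durations.items(), key=lambda item: item[1], reverse=True
    ):
        lightest = min(machines, key=lambda m: m["load"])
        lightest["tests"].append(name)
        lightest["load"] += seconds


    return machines
```

For example, tests taking 10, 9, 2 and 1 seconds split across two machines come out as 10+1 and 9+2 - an even 11 seconds each. With timings as lopsided as slow functional tests versus fast unit tests, this kind of balancing is what turns a four-hour build into something closer to the slowest single machine's share.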

As we turn to user stories, we will see how creating and implementing them can drive development.