The idea of acceptance tests -- a set of tests that must pass before an application can be considered finished -- is certainly not new. Indeed, the value of testing an application before delivering it is relatively well established. But the way most organizations do it comes too late in the process and is not well-integrated with the actual development process. A new approach called acceptance-test-driven development (ATDD) can change that.

Traditionally, testers prepare test plans and execute tests manually at the end of the software development phase. Acceptance testing is done relatively independently of development activities. In some organizations, QA departments also use automated testing tools such as HP's Quick Test Pro, but, again, this activity is generally siloed away from the rest of the development activity.

Testing an application only after it has been developed has several significant drawbacks. Most important, feedback about problems arrives so late in the process that correcting bugs of any size becomes difficult. This results in costly rework, wasted developer time, and delayed deliveries.

ATDD takes a different approach. Essentially, ATDD involves collaboratively defining and automating the acceptance tests for upcoming work before it even begins -- a simple inversion that turns out to be a real game-changer. Rather than validating what has been developed at the end of the development process, ATDD actively pilots the project from the start. Rather than being an activity reserved to the QA team, ATDD is a collaborative exercise that involves product owners, business analysts, testers, and developers. And rather than just testing the finished product, ATDD helps to ensure that all project members understand precisely what needs to be done, even before the programming starts.

In addition, acceptance tests are no longer confined to the end of the project and performed as an isolated activity. Instead, ATDD tests are automated and fully integrated throughout the development process. As a result, issues are raised faster and can be fixed more quickly and less expensively, the workload on QA at the end of the project is greatly reduced, and the team can respond to changes faster and more effectively.

ATDD in practice

Consider how ATDD typically works in the context of an agile development project. As a rule, a software project aims to deliver a number of high-level "features" (sometimes called functionalities or capabilities) to its users. A feature is a general value proposition relating to something the application can do for the user, expressed in terms you might put on a product flyer or press release; for example, a feature of an online real estate lease-management application might be "Manage property repairs."

Features are generally too big to implement all at once, so they are broken into smaller, more manageable chunks. In agile circles, these chunks are often expressed in the form of user stories -- a short sentence capturing what the user wants from a particular piece of functionality. For example, user stories for the "Manage property repairs" feature might include "Issue work order" and "Approve invoice."

A user story cannot stand alone, however; it is merely the promise of a conversation between developers and users about a particular requirement. The details about what needs to be implemented will arise from this conversation and will then be formalized as a set of objective, demonstrable acceptance criteria. For example, the "Approve invoice" story would need acceptance criteria such as "the user can approve an invoice for an amount less than the agreed maximum" and "the user cannot approve an invoice if the amount exceeds the agreed maximum."

Acceptance criteria determine when a particular user story is ready to be deployed into production. But they do much more than record what should be tested at the end of an iteration. Acceptance criteria are drawn up as a collaborative exercise, at the start of the iteration, with developers, testers, and product owners involved. As a result, they help ensure that everyone on the team has a clear vision of what is required. They also help provide clear guidelines for developers as to what needs to be implemented. (These guidelines are even more effective if the developers doing the programming are practicing test-driven development, or TDD.)

Note that acceptance criteria are not designed to be exhaustive -- there will be more technical tests for that. Instead, they are used as much for communication as they are for verification. They take the form of working examples, which is why ATDD is sometimes referred to as "specification by example."

Acceptance-test-driven development is not just limited to agile projects. Even teams using more formal and detailed use cases, or more traditional approaches such as the software requirements specification (or SRS) documents, can benefit from having verifiable, automated acceptance criteria as early as possible.

Automating your acceptance tests

A key part of acceptance criteria is that they are automated. They are not simply stored in a Word document or Excel spreadsheet; instead, they are living, executable tests. This is important -- for ATDD to be effective, the automated acceptance tests need to be run automatically whenever a change is made to the source code. So it is vitally important to have a tool that will integrate smoothly into your build process and that can be run on your build server with no human intervention.
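For example, in a Maven-based build (an assumption for this sketch; the principle applies to any build tool), the standard Failsafe plugin can bind acceptance tests to the build's verify phase, so they run with no human intervention whenever the project is built:

```xml
<!-- Runs acceptance-test classes during the integration-test/verify phases -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, a plain `mvn verify` on the build server executes the acceptance tests along with everything else.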

Automated acceptance tests not only serve to test the application; they also provide an objective measurement of progress (in agile projects, working software is considered to be the only true measure of progress). The tests can also give an idea of the relative complexity of each feature and story, because a functionality that is long and complicated to test is likely to also be long and complicated to develop. This in turn can give a useful heads-up to product owners needing to set priorities.

Although you certainly can write automated acceptance tests using conventional unit testing tools such as TestNG, there are several dedicated ATDD tools. These tools are focused as much on communication and feedback as they are on testing.

ATDD tools

ATDD is more an approach than a toolset, but there are several tools that can make things easier.

FitNesse is one of the earliest ATDD tools. Using FitNesse, users enter their requirements in tabular form in a wiki, and developers write code behind the scenes to run the test data stored in the wiki against the actual application. When the tests are executed, the table is colored according to whether the tests succeeded or failed. FitNesse is very useful when your acceptance criteria can be expressed in terms of tables of data and expected results, although it can also express acceptance tests as a series of steps.
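For example, a FitNesse table for the invoice-approval rule might look like the following, where ApproveInvoice is a hypothetical fixture class a developer would write to connect the table to the application; amount and maximum are inputs, and the trailing question mark marks approved? as an expected result:

```text
|ApproveInvoice           |
|amount|maximum|approved? |
|500   |1000   |true      |
|1500  |1000   |false     |
```

When the table runs, each approved? cell turns green or red depending on what the application actually did.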

More recently, other tools have emerged that support behavior-driven development, or BDD. This technique encourages developers to think in terms of the behavior of an application, and to express requirements in a narrative, business-readable style rather than in low-level technical terms. Cucumber is a popular tool from the Ruby community that allows you to express your acceptance criteria using the "given-when-then" structure commonly used in agile projects. It is also easy to use Cucumber with Java. JBehave uses a similar approach, with stories expressed in text files and tests written using annotated Java classes. Easyb is a similar tool based on the Groovy language.
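For instance, a Cucumber feature file for the invoice story might read as follows (the wording is illustrative, not taken from a real project):

```gherkin
Feature: Approve invoice

  Scenario: User approves an invoice below the agreed maximum
    Given the user has selected an open invoice for 500 euros
    And the agreed maximum amount is 1000 euros
    When the user approves the invoice
    Then the invoice should be marked as approved
```

Each Given/When/Then line is then matched to a step definition written by a developer, which is where the application is actually exercised.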

Concordion is another, more recent ATDD tool. In Concordion, acceptance tests are expressed as HTML pages containing free-form text and tables. Special attributes embedded in these pages are bound to Java fixture classes, which execute the checks and render the results back into the HTML.
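A Concordion page for the same rule might look something like the sketch below, where approve() is a hypothetical method on the backing Java fixture class:

```html
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
  <body>
    <p>
      If the invoice amount is <span concordion:set="#amount">500</span>
      and the agreed maximum is <span concordion:set="#maximum">1000</span>,
      then approving the invoice should
      <b concordion:assertEquals="approve(#amount, #maximum)">succeed</b>.
    </p>
  </body>
</html>
```

The fixture method receives the values set on the page, and the word "succeed" is highlighted green or red in the output depending on what the method returns.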

All these tools place a high emphasis on readability and communication. The code below illustrates how an acceptance criterion might be expressed using Easyb:

scenario "User can approve an invoice for an amount less than the agreed maximum", {
    given "the User has selected an open invoice"
    and "the User has chosen to approve the invoice"
    and "the invoice amount is less than the agreed maximum amount"
    when "the User completes the action"
    then "the invoice should be successfully approved"
}

Once the acceptance criteria are defined in this way, the corresponding test code can then be written in more conventional programming languages such as Java, Groovy, and Ruby.

In addition to showcasing Easyb, this code snippet demonstrates the communication focus of ATDD tools. Automated acceptance criteria are expressed in high-level terms that make sense to business managers as much as to software engineers and programmers. Most ATDD tools also generate reports that express the test results in familiar business terms.

Tests that have been written in this way, but with no backing test code, are marked as "pending." At the start of an iteration, all the acceptance criteria are in this state. As development progresses, the next step is to implement them, which is where the actual code that tests the application is written. So these reports not only tell you which tests pass and fail; they also provide a way to track the progress of your project, by indicating what work remains to be done.
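To give an idea of what the backing test code does, here is a minimal self-contained sketch in plain Java (Easyb itself would attach similar logic as Groovy closures to each step). The Invoice and InvoiceService classes are hypothetical stand-ins for the application under test, not part of any real project:

```java
// Hypothetical domain class standing in for the real application's invoice.
class Invoice {
    final double amount;
    boolean approved = false;
    Invoice(double amount) { this.amount = amount; }
}

// Hypothetical service implementing the business rule under test.
class InvoiceService {
    private final double agreedMaximum;
    InvoiceService(double agreedMaximum) { this.agreedMaximum = agreedMaximum; }

    // Approval succeeds only for amounts below the agreed maximum.
    boolean approve(Invoice invoice) {
        if (invoice.amount < agreedMaximum) {
            invoice.approved = true;
        }
        return invoice.approved;
    }
}

public class ApproveInvoiceScenario {
    public static void main(String[] args) {
        InvoiceService service = new InvoiceService(1000.0);

        // given: the User has selected an open invoice below the agreed maximum
        Invoice smallInvoice = new Invoice(500.0);
        // when: the User completes the action
        // then: the invoice should be successfully approved
        System.out.println(service.approve(smallInvoice));   // true

        // An invoice over the agreed maximum must be rejected.
        Invoice largeInvoice = new Invoice(1500.0);
        System.out.println(service.approve(largeInvoice));   // false
    }
}
```

In a real project this logic would live in step definitions driven by the BDD tool, so that the readable scenario and the executable check stay connected.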

Taking a slightly broader perspective, automated acceptance tests are like any other automated tests -- they should be stored in your version control system and executed periodically on your continuous-integration (CI) server (at least on a nightly basis, but preferably whenever a change is made to the application source code). Getting fast feedback when acceptance tests fail is essential. You can also configure your CI server to publish the results of the acceptance tests where they can be easily consulted by nondevelopers. Fortunately, modern CI tools such as Jenkins integrate well with virtually all of the common BDD tools.
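As a sketch, a declarative Jenkins pipeline for this might look like the following; the Maven goal and the report path are assumptions based on a typical Failsafe setup:

```groovy
pipeline {
    agent any
    stages {
        stage('Acceptance tests') {
            steps {
                // Runs the automated acceptance tests as part of the build
                sh 'mvn verify'
            }
        }
    }
    post {
        always {
            // Publish the results where the whole team can consult them
            junit '**/target/failsafe-reports/*.xml'
        }
    }
}
```

Triggering this pipeline on every commit gives the fast feedback described above.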

Automating acceptance tests for Web applications

When it comes to implementing ATDD for a Web application, a wide range of open source and commercial tools is available. Given this wide range, choosing your tool with care is important; it can mean the difference between a set of automated acceptance tests that is easy to maintain in the future, and one that quickly becomes unusable due to prohibitive maintenance costs.

Modern automated Web-testing tools, both commercial and open source, fall into three categories:

Record/replay

Script-based

Page objects

Record/replay tools, such as Selenium IDE and JAutomate, let a user step through a Web application, recording the user's actions as a test script. Although tempting in its simplicity, this approach is in fact a poor strategy. The low-level scripts generated by these tools are fragile and hard to maintain. For example, there is no reuse of testing logic between scripts, which makes maintaining the scripts very costly.

Script-based testing tools support a slightly more flexible strategy. Tools such as Selenium, Watir, Canoo WebTest, and the commercial Quick Test Pro fall into this category, using tests written in a programming language such as Java, Ruby, or VBScript. However, this strategy is still quite low-level, focusing on the technical details of the Web tests rather than the business requirements that they are testing. It also requires strong discipline and structure to avoid duplication within the scripts. Again, this tends to make tests more fragile and harder to maintain.

Good automated acceptance tests should be high-level, expressed in business terms. They need to isolate the what from the how. Doing so ensures that, if the implementation details for a particular screen should change, the changes would only minimally affect the low-level test code and not affect the high-level tests. Ideally, you want to maintain a level of abstraction between what a Web page does in business terms ("Approve an invoice") and how it does it ("Click the invoice in the invoice list, wait for the details to appear, then click on the Approve button").

The page-objects pattern, well supported by Selenium 2/WebDriver in particular, is an excellent choice for ATDD tests. High-level acceptance criteria need to be expressed in high-level business terms (the what), and then implemented under the hood using a set of well-structured, maintainable page objects. For example, an automated acceptance test is expressed in business terms and implemented as a series of steps. Each step uses page objects to interact with the Web application. These levels of abstraction make the acceptance tests considerably more stable and maintainable.
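To make these layers concrete, here is a minimal self-contained sketch in Java. The Browser interface is a hypothetical stand-in for a real driver such as Selenium WebDriver, and the element IDs are invented for the example; the point is that the step layer speaks pure business language, while only the page object knows about clicks and waits:

```java
// Hypothetical stand-in for a real browser driver such as Selenium WebDriver.
interface Browser {
    void click(String elementId);
    void waitFor(String elementId);
}

// The page object: the only place that knows *how* the screen works.
class InvoiceDetailsPage {
    private final Browser browser;
    InvoiceDetailsPage(Browser browser) { this.browser = browser; }

    void open(String invoiceId) {
        browser.click("invoice-" + invoiceId);   // click the invoice in the list
        browser.waitFor("invoice-details");      // wait for the details to appear
    }

    void approve() {
        browser.click("approve-button");
    }
}

// The step layer: expressed in business terms (the *what*).
class InvoiceSteps {
    private final InvoiceDetailsPage page;
    InvoiceSteps(InvoiceDetailsPage page) { this.page = page; }

    void approveInvoice(String invoiceId) {
        page.open(invoiceId);
        page.approve();
    }
}

public class PageObjectSketch {
    public static void main(String[] args) {
        // A trivial Browser implementation that records the actions performed.
        StringBuilder log = new StringBuilder();
        Browser browser = new Browser() {
            public void click(String id)   { log.append("click:").append(id).append(" "); }
            public void waitFor(String id) { log.append("wait:").append(id).append(" "); }
        };

        new InvoiceSteps(new InvoiceDetailsPage(browser)).approveInvoice("42");
        System.out.println(log.toString().trim());
        // click:invoice-42 wait:invoice-details click:approve-button
    }
}
```

If the screen layout changes, only InvoiceDetailsPage needs to be updated; the business-level steps, and the acceptance criteria above them, are untouched.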

The regression-test bonus

Defining and automating your acceptance criteria up front makes a lot of sense. Not only does it provide clear goals for developers, it also gives excellent visibility into what features are being implemented, how they will work, and how the project as a whole is progressing. And, as a bonus, ATDD also provides a broad set of regression tests.

Many open source tools exist to help you implement an ATDD strategy in your project -- I've put together a compendium of resources. Although you can use conventional unit-testing tools for ATDD, dedicated ATDD tools provide a stronger emphasis on communication and reporting, which are key parts of the ATDD approach. And for Web applications, automated testing tools based on the page-objects pattern are an excellent choice when it comes to implementing the tests themselves.