It’s hard to visualize how a system will function just by reading the requirements specification. Tests based on requirements help make the expected system behaviors more tangible to the project participants.

And the simple act of designing tests reveals many problems with the requirements, long before you execute the tests on an operational system. In fact, if you begin developing tests as soon as portions of the requirements stabilize, you’ll often discover problems while it’s still possible to correct them quickly and inexpensively.

REQUIREMENTS AND TESTS

Tests and requirements have a synergistic relationship. They represent complementary views of a system. Creating multiple views of a system — written requirements, diagrams, tests, prototypes, and so forth — gives you a much richer understanding of it.

Agile software development methodologies often emphasize writing user acceptance tests in lieu of detailed functional requirements. Thinking about the system from a testing perspective is valuable, but that approach still leaves you with just a single representation of requirements knowledge.

Writing black-box (functional) tests crystallizes your vision of how the system should behave under certain conditions. Vague and ambiguous requirements will jump out at you because you won’t be able to describe the expected system response. And, when business analysts, developers, and customers walk through the tests together, they’ll achieve a shared vision of how the product will work.

Of course, you cannot test your system when you’re still at the requirements stage because you haven’t written any executable software yet. Nonetheless, you can begin deriving conceptual tests from use cases or user stories very early on.

You can then use the tests to evaluate functional requirements, analysis models, and prototypes. The tests should cover the normal flow of the use case, alternative flows, and the exceptions you identified during requirements elicitation and analysis.

LESSONS ON TESTING REQUIREMENTS

A Personal Testing Requirements Victory

For me, a personal experience really brought home the importance of combining test thinking with requirements specification.

Simple Project

I once asked my group’s UNIX scripting guru, Charlie, to build a simple email interface extension for a commercial defect-tracking system we were using. I wrote a dozen functional requirements that described how the email interface should work. Charlie was thrilled. He’d written many scripts for people but had never seen written requirements before.

Testing Discovery

Unfortunately, I waited a couple of weeks before I wrote the tests for this email function. Sure enough, I had made an error in one of the requirements. I found the mistake because my mental image of how I expected the function to work, represented in about twenty tests, was inconsistent with one of the requirements. Chagrined, I corrected the defective requirement before Charlie had completed his implementation, and when he delivered the script, it was defect free.

Proven Success

Had I not caught the error before implementation, it surely would have resulted in a defective email interface. That would have meant attempting to locate and fix the problem retroactively, causing delays and headaches in the process. It was a small victory, but small victories add up.

CONCEPTUAL TESTS: Chemical Tracking System

As an illustration, consider an application called the Chemical Tracking System. One of its use cases, “View Order,” lets the user retrieve an order for a chemical from the database and view its details. Some conceptual tests for this use case might be the following:

User enters order number to view, order exists, user placed the order.

Expected result: show order details.

User enters order number to view, order doesn’t exist.

Expected result: display message, “Sorry, I can’t find that order.”

User enters order number to view, order exists, user didn’t place the order.
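These conceptual tests are just condition/expected-result pairs, so they can be captured as simple data even before any system exists. The sketch below is a hypothetical illustration (the class and field names are assumptions, not part of the article); note how a test whose expected result you cannot yet state flags exactly the kind of vague or ambiguous requirement described earlier.

```python
# Hypothetical sketch: capturing the "View Order" conceptual tests as data.
# The class and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ConceptualTest:
    description: str      # the condition being exercised
    expected_result: str  # the behavior the requirements should specify

view_order_tests = [
    ConceptualTest(
        description="Order exists and was placed by this user",
        expected_result="Show order details",
    ),
    ConceptualTest(
        description="Order number does not exist",
        expected_result='Display message, "Sorry, I can\'t find that order."',
    ),
    ConceptualTest(
        description="Order exists but was placed by another user",
        expected_result="",  # not yet specified: a requirements gap to resolve
    ),
]

# A test with no statable expected result points at a vague or
# ambiguous requirement, long before any code exists.
gaps = [t.description for t in view_order_tests if not t.expected_result]
print(gaps)
```

Walking a list like this through a review with business analysts, developers, and customers is what builds the shared vision of product behavior.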

Ideally, a business analyst will write the functional requirements and a tester will write the tests, both working from a common starting point — the user requirements — as shown in Figure 1.

Ambiguities in the user requirements and differences of interpretation lead to inconsistencies between the views represented by the functional requirements, models, and tests. Finding those deviations reveals the errors.

And, as developers gradually translate the requirements into user interface and technical designs, testers can elaborate on the early conceptual tests and transform them into detailed procedures.

FOUR ELEMENTS OF REQUIREMENTS TESTING

If the notion of testing requirements seems abstract, a specific example may help. Let’s see how a team working with the Chemical Tracking System tied together requirements specification, analysis modeling, and early test-case generation.

The following shows a use case, a functional requirement, part of a dialog map, and a test, all of which relate to the task of requesting a chemical.

Figure 1. Development and testing from a common source.

1. USE CASE

A fundamental use case for this system is “Request a Chemical.” This use case includes a path that permits the user to request a chemical container that’s already available in the chemical stockroom. This option would help the company reduce costs by reusing containers already on hand instead of buying new ones.

Here’s the use case description:

The Requester specifies the desired chemical to request by entering its name or chemical ID number. The system either offers the Requester a new or used container of the chemical from the stockroom, or lets that person order a new container from a vendor.

2. FUNCTIONAL REQUIREMENT

Here’s a bit of functionality associated with this use case:

If the stockroom has containers of the chemical being requested, the system shall display a list of the available containers. The user shall either select one of the displayed containers or ask to place an order for a new one from a vendor.

3. DIALOG MAP

A dialog map is a high-level overview of a user interface’s architecture, modeled as a state-transition diagram. Figure 2, shown below, illustrates a portion of the dialog map for the “Request a Chemical” use case that pertains to this function. The boxes in this dialog map represent user interface displays (dialog boxes, in this case), and the arrows indicate possible navigation paths from one display to another.

Figure 2. Portion of the dialog map for the “Request a Chemical” use case.
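Because a dialog map is just a state-transition diagram, it can also be expressed as a transition table, which makes the later tracing exercise mechanical. The sketch below is a hypothetical rendering: the display IDs (DB40, DB50, DB60, DB70) come from the text, but the event labels are assumptions for illustration.

```python
# Hypothetical sketch of a dialog map as a state-transition table.
# States are user interface displays; a transition maps
# (current display, event) -> next display.
# Display IDs follow the article; event names are assumptions.

dialog_map = {
    ("DB40", "valid chemical ID entered, stock available"): "DB50",
    ("DB50", "container selected"): "DB70",
    ("DB50", "order new container"): "DB60",
}

def navigate(display: str, event: str) -> str:
    """Return the next display, or raise if the map permits no such transition."""
    try:
        return dialog_map[(display, event)]
    except KeyError:
        raise ValueError(f"No transition from {display} on {event!r}")

print(navigate("DB40", "valid chemical ID entered, stock available"))
```

A table like this stays independent of interaction details (buttons, menus), just as the abstract tests do, so the two artifacts can be checked against each other.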

4. TEST

Because this use case has several possible execution paths, you can envision numerous tests to address the normal flow, alternatives, and exceptions. The following is just one test, based on the path that shows the user the available containers in the chemical stockroom:

At dialog box DB40, enter a valid chemical ID; the chemical stockroom has two containers of this chemical. Dialog box DB50 appears, showing the two containers. Select the second container. DB50 closes and container two is added to the bottom of the Current Chemical Request List in dialog box DB70.

Suppose the test lead for the Chemical Tracking System, Ramesh, wrote several tests like this one, based on his understanding of how the user might interact with the system to request a chemical.

Such abstract tests are independent of implementation details. They don’t describe clicking on buttons or other specific interaction techniques. As the user interface design activities progressed, Ramesh refined these abstract tests into specific procedures.

By tracing the execution path for each test on the model, you can find incorrect or missing requirements, correct errors in the dialog map, and refine the tests.

To test the requirements, Ramesh first mapped the tests against the functional requirements. He checked that each test could be “executed” against the existing set of requirements. He also made certain that at least one test covered every functional requirement. Such mapping usually reveals omissions.
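Both of Ramesh's checks amount to set comparisons between requirement IDs and the requirements each test exercises. A minimal sketch, with hypothetical requirement IDs and test names (none of these identifiers appear in the article), might look like this:

```python
# Illustrative sketch of the two mapping checks: every test must be
# supported by existing requirements, and every requirement must be
# covered by at least one test. All IDs here are hypothetical.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Which requirements each conceptual test exercises (built by inspection).
test_coverage = {
    "test-available-containers": {"REQ-1", "REQ-2"},
    "test-order-new-container": {"REQ-4"},  # refers to a requirement nobody wrote
}

referenced = set().union(*test_coverage.values())

# Check 1: tests that rely on a requirement that doesn't exist (an omission).
orphan_refs = referenced - requirements

# Check 2: requirements no test covers.
untested = requirements - referenced

print(orphan_refs, untested)
```

Here the deliberately seeded mismatches surface immediately: one test presumes a requirement that was never written, and one requirement has no test at all, which is exactly the kind of omission the mapping reveals.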

Next, Ramesh traced the execution path for every test on the dialog map with a highlighter pen. The red line in Figure 3 shows how the preceding sample test traces onto the dialog map.


Imagine that after “executing” all the tests in this fashion, the navigation line in Figure 2 labeled “order new container,” which goes from DB50 to DB60, hasn’t been highlighted. There are two possible interpretations:

Figure 3. Tracing a test onto the dialog map for the “Request a Chemical” use case.

The navigation from DB50 to DB60 is not a permitted system behavior. The business analyst needs to remove that line from the dialog map. If the Software Requirements Specification (SRS) contains a requirement that specifies the transition, the business analyst must also remove that requirement.

The navigation is a legitimate system behavior, but the test that demonstrates the behavior is missing.

When I find such a disconnect, I don’t know which possible interpretation is correct. However, I do know that all the views of the requirements — SRS, models, and test — must agree.

Suppose that another test states the user can take some action to move directly from DB40 to DB70. However, the dialog map doesn’t contain such a navigation line, so the test can’t be “executed.” Again, there are two possible interpretations:

The navigation from DB40 to DB70 is not a permitted system behavior, so the test is wrong.

The navigation from DB40 to DB70 is a legitimate function, but the dialog map and perhaps the SRS are missing the requirement that allows you to execute the test.
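Both kinds of disconnect can be detected mechanically by comparing the set of transitions the tests traverse against the edges in the dialog map. The sketch below uses the transitions discussed in the text plus illustrative assumptions; note that the code can only surface the mismatch, not decide which interpretation is correct.

```python
# Hypothetical sketch: comparing transitions exercised by the tests
# against the edges in the dialog map. Each non-empty difference set
# flags a disconnect that the business analyst and tester must resolve;
# the code cannot say which interpretation is correct.

map_edges = {
    ("DB40", "DB50"),
    ("DB50", "DB70"),
    ("DB50", "DB60"),  # the "order new container" line
}

# Transitions traversed when "executing" every test on the dialog map.
tested_edges = {
    ("DB40", "DB50"),
    ("DB50", "DB70"),
    ("DB40", "DB70"),  # a test claims this direct path exists
}

# Map edges no test highlights: forbidden behavior, or a missing test?
untraced = map_edges - tested_edges

# Test paths absent from the map: wrong test, or a missing requirement?
unmappable = tested_edges - map_edges

print(untraced, unmappable)
```

In this toy data, `untraced` contains the DB50-to-DB60 line no test highlighted, and `unmappable` contains the DB40-to-DB70 path a test assumed, mirroring the two scenarios above.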

In these examples, the business analyst and the tester combined requirements, analysis models, and tests to detect missing, erroneous, or unnecessary requirements long before any code was written. Every time I use this technique, I find errors in all the items I’m comparing to each other.

As one consultant, Ross Collard, pointed out, “Use cases and tests work well together in two ways: If the use cases for a system are complete, accurate, and clear, the process of deriving the tests is straightforward. And if the use cases are not in good shape, the attempt to derive tests will help to debug the use cases.”

I couldn’t agree more. Conceptual testing of software requirements is a powerful technique for controlling a project’s cost and schedule by finding requirement ambiguities and errors early in the game.

ABOUT KARL WIEGERS

Karl has provided training and consulting services worldwide on many aspects of software development, management, and process improvement. He has authored five technical books, including “Software Requirements,” and written more than 175 articles. Karl has led process improvement activities in small application development groups, Kodak’s Internet development group, and a division of 500 software engineers developing embedded and host-based digital imaging software products.