An Uncomfortable Truth about Agile Testing

One characteristic of agile development is continuous involvement from testers throughout the process. Testers have a hard and busy job. Jeff has finally started to understand why testing in agile development is fundamentally different.

In organizations that have adopted agile development, I often see a bit of a culture clash between testers and the rest of the staff. Testers will ask for product specifications they can test against to verify that what was built meets those specifications, which is a reasonable thing to ask for. The team often has a user story and some acceptance criteria in which any good tester can quickly poke holes. "You can't release the software like this," the tester might say. "You don't have enough validation on these fields here. And what should happen if we put large amounts of data in this?"

"This is just the first story," I'd say. "We'll have more stories later that will add validation and set limits on those fields. And, to be honest, those fields may change after we demonstrate the software to the customer—that's why we're deferring those things for now."

"Well, then there's no point in testing now," the testers would usually say. "If the code changes, I'll just need to retest all this stuff anyway. Call me when things stop changing."

I can understand their concern, but I also know we need to test what we've built so far—even if it's incomplete, even if it will change. That's when I realized what testing is about in agile development. Let me illustrate with a story:

Imagine you're working in a factory that makes cars. You're the test driver, testing the cars as they come off the assembly line. You drive them through an obstacle course and around a track at high speed, and then you certify them as ready to buy. You wear black leather gloves and sunglasses. (I'm sure it doesn't work that way, but humor me for a minute.)

For the last week, work has been a bit of a pain. When you start up your fifteenth car of the day, it runs rough and then dies after you drive it one hundred yards from the back door of the plant. You know it's the fuel pump again, because the last five defective cars you've found have all had bad fuel pumps. You reject the car and send it back to have the problem properly diagnosed and fixed. You may test this car again tomorrow.

Now, some of you might be thinking, "Why don't they test those fuel pumps before they put them into the cars?" And you're right, that would be a good idea. In the real world, they probably test every car part along the way before it gets assembled. In the end, they'll still test the finished car. Testing the parts of the car improves the quality downstream, all the way to when the car is finally finished.

Testing in agile development is done for much the same reason. A tester on an agile team may test a screen that's half finished, missing some validation, or missing some fields. It's incomplete—only one part of the software—but testing it in this incomplete stage helps reduce the risk of failures downstream. It's not about certifying that the software is done, complete, or fit to ship. To do that, we'd need to drive the "whole car," which we'll do when the whole car is complete.

By building part of the software and demonstrating that it works, we're able to complete one of the most difficult types of testing: validation.

I can't remember when I first heard the words verification and validation. It seemed like such nonsense to me; the two words sounded like synonyms. Now I know the distinction, and it's important. Verification means the software conforms to its specification; in other words, it does what you said it would do without failing. Validation means the software is fit for use, that it accomplishes its intended purpose. Ultimately, the software has no value unless it accomplishes its intended purpose, which is a difficult thing to assure until you actually use it for that purpose. The person best qualified to validate the software is someone who would eventually use it. Even if the target users can't tell me conclusively that it will meet its intended purpose, they often can tell me if it won't, as well as what I might change to be more certain that it will.
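The distinction might be sketched in code. This is a hypothetical example, not from the article: the function, the shipping rule, and the dollar threshold are all invented for illustration.

```python
# Verification vs. validation, in miniature.
# (Hypothetical example: the function name, the business rule, and
# the $50 threshold are invented for illustration.)

def shipping_cost(order_total):
    """Spec: orders of $50 or more ship free; otherwise a flat $5."""
    return 0 if order_total >= 50 else 5

# Verification: automated checks that the code conforms to the
# stated specification. These we can run on every build, even while
# the rest of the feature is half finished.
assert shipping_cost(50) == 0      # boundary: exactly $50 ships free
assert shipping_cost(49.99) == 5   # just under the threshold
assert shipping_cost(0) == 5       # empty-ish order still pays flat rate

print("verified against spec")
```

Note what the tests cannot do: they can't tell us whether a $50 free-shipping threshold is the right rule in the first place. That's validation, and it takes a real user looking at working software.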

Building software in small pieces and vetting those pieces early allows us to begin to validate sooner. Inevitably, this sort of validation results in changes to the software. Change comes a bit at a time, or it arrives continuously throughout an agile development process, which agile people believe is better than getting it all at once in a big glut at the end of the project, when time is tight and tensions are high.

Being a tester in an agile environment is about improving the quality of the product before it's complete. It also means becoming an integrated and important part of the development team. Testers help ensure that each little bit of the software that's complete is verified before its users validate it. They're involved early, helping describe acceptance criteria before the code is written, and their experience is valuable for finding problems that would likely cause validation issues with customers.

At the end of the day, an agile tester will likely pore over the same functionality many times to verify it as additional parts are added or changed. A typical agile product should see more test time than its non-agile counterpart. The uncomfortable truth for testers in agile development is that all of this involves hard work and a fair amount of retesting the same functionality over and over again. In my eyes, this makes the tester's role more critical, and more relied upon, than ever.

User Comments

1 comment

Jeffrey Croft

I think that before agile came along we had iterative development based on Use Case flows, which was very Tester 'friendly' as you could easily identify with the high-level requirements, and each Use Case flow was a meaningful user action that delivered a purposeful outcome in terms of an 'actor'.

Developers did not like it so much as that's not the way they develop and deliver software i.e. to build up enough of each component to provide a complete Use Case flow.

Now with agile it's much more Developer-centric, with early sprints delivering small features that may or may not expose some functionality that's actually testable. One thing I do think suffers is traceability, with often a disjoint between the Requirements / Technical spec and the stories. What does the story deliver in terms of the overriding spec that the tester has been poring over so much?

I know it's the nature of the beast, but testers should also be wise to what's 'firm' and what's likely to change, so as not to invest a lot of time inventing complex test scenarios or criteria for stories that have features that are very much still in the discussion stage.

About the author

Jeff Patton leads Agile Product Design, a small consultancy that focuses on creating healthy processes that result in products that customers love. Articles, essays, and blog can be found at www.AgileProductDesign.com. Information about public classes, including Certified Scrum Training, can be found at www.AgileProductDesign.com/training.

StickyMinds is a TechWell community.
