The Cost of Software Testing

Let's face it: Testing is regarded as the number one bottleneck in the software delivery process. Whether you’re practicing DevOps, continuous integration, continuous delivery, or continuous everything, testing is consistently considered the primary holdup for delivery. Most people simply conclude that developers are value centers, and testers are cost centers.

Well, it is correct that testers are cost centers. The more they test, the more time they need, so the more they cost. But the same is true for developers: The more they develop, the more time they need, and the more they cost, too. It's easy to apply this logic to any person in your company who contributes to the software delivery process.

I think we are missing something important here, especially when we talk about testers. First, we shouldn't talk about testers; we should talk about testing. Second, we shouldn't just see the cost of testing, but also the value of testing—and then we should ask how this value relates to the costs.

That's what most people overlook. The reason for this is simple: Testing is inherently abstract. Through testing you don't produce something tangible; you collect quality-related information about your software, like risks. You then share this information with other people, such as product owners, to enable them to make decisions about shipping, fixing bugs, etc.

It’s obvious that this activity—let's call it testing—has some value. Hence, people who test—let's call them testers—add value to the software delivery process. So, we should reframe our thinking: "good" developers and "good" testers are value centers, while "bad" developers and "bad" testers are cost centers. It's the quality of the work, not the role, that determines whether someone is a cost center or a value center.

You might now be saying, "Well, everybody can collect quality-related information about the software in one way or another. Do we really need dedicated testers?"

It’s true: Everybody can test. If that happens to be your developer, so be it. But everybody can also test badly. So you probably want someone on your team who can test more reliably, more deeply, and more efficiently than anyone else on your team. In the same way, you probably want someone on your team who can develop software more reliably, more sustainably, and more efficiently than anyone else on your team.

So, for the same reason most companies have dedicated developers, dedicated product owners, dedicated product managers, dedicated support engineers, dedicated marketing strategists, dedicated documentation experts, dedicated UI designers, dedicated UX experts, and so on, they also have dedicated testers on their teams.

These companies understand the value of professional testing. They don't reduce the act of testing to the number of test cases that have been created, just as they don't reduce the act of development to the amount of code that has been written, nor the work of a product owner to the number of user stories that have been created.

These are the companies that realize it takes specialists in each field to succeed with their software in the long run.

We have to look at testing as a "cost containment" process. It helps reduce the money wasted on rework and other factors that carry both "hard" dollar and "soft" dollar impacts. Testing does have a cost, but it also has value when done correctly, so think about how the cost is outweighed by the value.

Testing's focus is to gather information about the software under test and its perceived quality, or value. Testing brings that information to light and presents it to the affected parties, from the end user all the way up to the CEO. That information can then influence their perception of the software's quality and whether they will release or use it. This leads to hard dollar impacts, such as sales and revenue, and to soft dollar impacts, such as the reputation of the company or group producing the software.

Users are fickle: if you put out a crappy product, they will let you know, and you will lose money through lost sales and renewals. And if the software makes it harder for users to do their jobs, there is the hidden cost of lost productivity as well as the increased cost of supporting it. There is more to discuss here, but these are some of the highlights people need to be aware of regarding the "cost" of testing.

You're right, there is so much more to say about the "cost" of testing. To me, your "hard dollar" and "soft dollar" impacts are manifestations of risks. Through testing we collect information about these risks, but are we really mitigating them (i.e., preventing potential problems) and thereby ensuring "cost containment"? I don't think so. Michael Bolton once described it this way: "Just as smoke detectors don’t prevent fires, testing on its own doesn’t prevent problems. Smoke detectors direct our attention to something that’s already burning, so we can do something about it and prevent the situation from getting worse. Testing directs our attention to (potential) problems. Those (potential) problems will persist - presumably with consequences - unless someone makes some change that fixes them." This implies that testers don't contain costs themselves; rather, they play a crucial role in containing costs by making people aware of the related (potential) problems. So, instead of calling testing a "cost containment" process, we could call it an "information service". What do you think of that? I would love to hear your opinion. Best, Ingo.

About the Author

Ingo Philipp is on the product management team at Tricentis. In this role his responsibilities range from product development and product marketing to test management, test conception, test design, and test automation. His experiences with software testing embrace the application of agile as well as classical testing methodologies in various sectors including financial services, consumer goods, commercial services, healthcare, materials, telecommunications, and energy.