Over the years I have been wondering why folks involved in testing are not more eager to adopt practices that we have shown numerous times to be superior to current practices. Let’s face it, making sure that all the interesting scenarios and corner cases have been properly, effectively, and efficiently tested is not an easy task. In fact, the human mind is horrible at this kind of creative, combinatorial exercise – we are so poor at doing combinatorics and calculations in our heads that we easily overlook scenarios we should be focusing on. We just lose track of things.

This is where automated test design comes in. We outsource the process of figuring out how the application should be tested to a computer. Instead of writing the test cases manually, we create graphical models that capture the intended and expected behavior of the application under test. We model the behavior of the application by drawing the flows and user interactions at a high level of abstraction, and then we let a computer do the actual test design. The computer is responsible for figuring out how the application operates and how it should be tested in order to meet all our goals and objectives.
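To make this concrete, here is a minimal sketch of the idea in Python. The states, events, and login flow are hypothetical examples chosen for illustration; real model-based testing tools work from much richer graphical models, but the core mechanism is the same: the model is a graph, and the tool derives test cases that systematically cover it.

```python
# A minimal sketch of automated test design from a behavioral model.
# The model: each (state, event) pair maps to a resulting state.
# All state and event names here are hypothetical examples.
from collections import deque

MODEL = {
    ("LoggedOut", "enter_valid_credentials"): "LoggedIn",
    ("LoggedOut", "enter_invalid_credentials"): "LoginError",
    ("LoginError", "retry"): "LoggedOut",
    ("LoggedIn", "open_settings"): "Settings",
    ("Settings", "back"): "LoggedIn",
    ("LoggedIn", "log_out"): "LoggedOut",
}

def shortest_path(start, goal_edge):
    """BFS for the shortest event sequence from `start` that ends by
    traversing `goal_edge` = (state, event)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal_edge[0]:
            return path + [goal_edge[1]]
        for (src, event), nxt in MODEL.items():
            if src == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return None  # goal edge unreachable from start

def generate_tests(start="LoggedOut"):
    """One test case per model transition, so every modeled behavior
    is exercised at least once (all-transitions coverage)."""
    return [shortest_path(start, edge) for edge in MODEL]

for test in generate_tests():
    print(" -> ".join(test))
```

Note that the tedious combinatorial work – finding a path that reaches each transition – is done by the computer, not the tester; the human effort goes into getting the model right.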

When writing test cases manually we can get away with a “job half done” by not fully understanding what the application is supposed to do and then resorting to suboptimal testing efforts like passing the buck to the test execution team to figure it out. Compared to the more formal approach of modeling, writing tests manually is much more forgiving. In manual practices there is hardly anything that would enforce meaningful metrics for measuring the completeness and quality of the testing effort, so we fall back on heavily misused and often only modestly useful metrics such as the number of test cases, source code statement coverage, and so on.

Not so with the more formal approach of modeling and using a computer for automatic test design and test case generation. Soon after you start to model graphically, you begin to see that in order to model the intended application behavior you really need to understand how the application is actually supposed to operate. Drawing a model forces you to clearly understand the intended operational characteristics of the application, and in doing so it typically raises a lot of questions about ambiguous or omitted requirements. The mere act of modeling the intended application behavior often improves the quality of the requirements, and a lot of defects can be spotted already at the specification and requirements stage, before a single line of code is written. A few extra minutes spent upfront making sure the requirements and application behavior are exactly what is intended saves a lot of time debugging errors later.

In this way modeling can be quite an unforgiving activity, which, as part of a formalized process, is a good thing. If you have not fully grasped how the application is supposed to operate, you cannot really create a good model, nor can you create good and comprehensive test cases manually. If the application logic is not well defined, again, it is quite impossible to model. The unfortunate part of all this is that people don’t even see these issues when they are doing manual testing. They are not even aware that they are doing subpar work. And this is most likely why, in some cases, people feel intimidated by adopting a more formal approach to testing: there is no escaping poor quality work – it’s all there in front of everybody to see.

The main business argument for switching to graphical modeling and automated test design is straightforward: reducing testing costs and time by 30-50% or more over manual test design is simply good business. But if a company’s practices do not allow it to measure the difference between mediocre, good, and very good testing, it is quite impossible to make the methodological and cultural changes that modeling requires. The biggest misunderstanding is that modeling is difficult. It is not. Thinking through the application details upfront, rather than deferring them to execution, is the difficult yet critical part.

This whole mentality and lack of understanding often extends well beyond the testing organization, and, in fact, it is often management that pushes or supports outdated and inefficient approaches. But if companies would rather maintain their manual testing status quo, it must be that they are so profitable that the cost and quality benefits of modeling and automated test design don’t matter. Huh?