Automated vs. exploratory – we got it wrong!

There is a misconception floating around that Agile Testing is mostly about automated checking. At the same time, the testing community has been fighting for decades over the value of test automation versus the value of exploratory testing. I find these two extreme positions unhelpful for doing any constructive testing. Most recently I ended up with a model that explains the underlying dynamics quite well to testers, to programmers, and to their managers. Here’s the model.

Reasons to repeat a test

There are several reasons why you would want to repeat a test. In my consulting work I have met teams that could hardly remember anything about their work from two weeks ago. With the ever more complex systems that we deal with, automated tests can help us capture our current thought process today, so that we won’t have forgotten it once we revisit our code.

In his book Tacit and Explicit Knowledge, Harry Collins explains that a transformation of a message can yield knowledge transfer. Automated tests are a way to transform our current understanding of the software system in a way that we can utilize tomorrow to remember it. The transformation of our current understanding of the software system into an automated test then becomes a pointer to the past.

This pointer comes in two flavors. First, TDD practitioners often talk about test-driven design. Automated unit tests are a by-product of that work. When using TDD, automated tests become a way to foster our design understanding in an executable manner. Automated unit tests put constraints on our understanding of the underlying technology. They help us get from the chaos of technological incompetence to a complex or even complicated understanding of the system we have to deal with.
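To make this "pointer to the past" concrete, here is a minimal sketch of a TDD-style unit test. The ShoppingCart class and all names are hypothetical, invented for illustration; the point is that the test codifies today’s design understanding in an executable form.

```python
# A hypothetical class whose design emerged test-first; the test below
# documents the design decisions so we can re-read them tomorrow.

class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        # Design constraint captured test-first: prices may not be negative.
        if price < 0:
            raise ValueError("price must be non-negative")
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_total_sums_item_prices():
    # This check is the executable "pointer to the past": it records
    # our current understanding that total() aggregates item prices.
    cart = ShoppingCart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    assert cart.total() == 12.5
```

Re-running such a test weeks later replays the design reasoning we would otherwise have forgotten.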

On the other hand, automated end-to-end tests help us reach a better understanding of the underlying business domain for our code. Using a specification workshop can help us discover the right amount of domain knowledge to get started with. We might also find out during implementation that there is yet more to discover in this regard. When we automate our domain understanding, we are probably working at the level of an end-to-end test from a business user’s perspective. We thereby also codify the view of the business problem that we are trying to solve. After all, Acceptance Test-driven Development, Domain-driven Design, and Behavior-driven Development are all about understanding the business context, and the very business problem that we would like to solve with our software product. Automated tests derived from any of these approaches help us remember our domain understanding tomorrow.
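A business-facing check of this kind might look like the sketch below: a Given/When/Then structure written in plain Python rather than a BDD framework. The discount rule and every name here are hypothetical, the sort of example a specification workshop might produce, not taken from any real system.

```python
# Hypothetical business rule discovered in a specification workshop:
# orders of 100 or more get a 10% discount.

def discounted_price(order_total):
    if order_total >= 100:
        return order_total * 0.9
    return order_total


def test_large_order_gets_volume_discount():
    # Given a customer with an order of 120
    order_total = 120.0
    # When the price is calculated
    price = discounted_price(order_total)
    # Then a 10% discount is applied
    assert price == 108.0


def test_small_orders_pay_full_price():
    # The workshop also pinned down the boundary's other side.
    assert discounted_price(50.0) == 50.0
```

The Given/When/Then comments keep the test readable from the business user’s perspective, which is the point of codifying domain understanding at this level.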

Overall, this explains the struggles between TDD, ATDD, and BDD together with DDD. You can use any of these techniques to explore the technology or the domain. They have their advantages at times when you don’t understand the underlying technology or the underlying business domain. They help you tackle one or the other, at times even both. In the end, whether automated or scripted, we foster an understanding of our software with a repeated test.

Reasons to explore

With all this automation in place there is really no reason why we would need to explore, right? Sorry, that’s obviously wrong. There are certain things that we know, and we can make this knowledge explicit for tomorrow. But there is also stuff that we don’t know. And this comes in two different flavors:

Conscious Ignorance is all the stuff that we are aware we don’t know. Any risks that might become real on a project are examples of conscious ignorance. Since we are aware of these, we can come up with mitigation strategies to tackle them.

Unconscious Ignorance, on the other hand, is all the stuff that we don’t know we don’t know. Dan North referred to this as second-order ignorance. These are all the risks that drive down our projects, since we couldn’t come up with mitigation strategies for them.

Exploratory Testing really is about tackling these two kinds of ignorance. We want to use manual exploratory tests to check whether any risks have become real and whether we should put our mitigation strategies in place now. Automated tests might then become one mitigation strategy for a particular risk, but exploratory testing is another one in this regard.

On the other hand, all the stuff that we were not aware of that could become a problem can be explored with manual tests. We would like to find out about serious stuff that could become more than a burden on our development effort tomorrow, so that we can learn more about the problems and the mitigation strategies for them. Exploratory Testing helps us discover the stuff that we are not aware of yet. In the end, if we were aware of it, we could have written an automated check for it, right?

While automated tests focus on codifying the knowledge we have today, exploratory testing helps us discover and understand the stuff we might need tomorrow. This is crucial, since the stuff we were unaware of might become an obstacle in tomorrow’s work. After we have learned about things we didn’t know, we can make a more conscious decision whether to dive deeper and automate that stuff, or whether to ignore the new information for the time being. Exploratory Testing then really is about learning new things, about our conscious and unconscious ignorance regarding the project we are dealing with.

Are we dealing with the right problem?

In the past year, I learned about polarity management. In essence, when struggling between two opposite poles, we should rather wonder whether we are solving the right problem. Manual exploratory testing and automated testing form such a polarity. Applying thoughts from polarity management, I wonder whether we solve the right problem by answering the question of manual versus automated testing with an extreme position.

Automated tests help us remember our current knowledge about the technology and the domain of our software. Exploratory Testing helps make us aware of our ignorance in certain regards, our underlying assumptions. Rather than asking ourselves whether we should strive for automated or manual testing, we should ask ourselves how much automated testing we can sacrifice to enable us to learn new things about our product, and how much knowledge we will need tomorrow. Of course, the latter piece is also part of our unconscious ignorance. Really, it’s about finding the right balance between learning things we are unaware of yet, and fostering the understanding we have today. So, how much learning can you skip to get that knowledge codified? How much codified knowledge can you skip to tackle more risks? The answer to these questions surely deserves some contextual thought.