A new model for test strategies… (An update to the Heuristic Test Strategy Model).

The Heuristic Test Strategy Model

The heuristic test strategy model was created by James Bach to offer a set of patterns for designing a test strategy.

The model has been very valuable to me in the past, not just in helping me to think about how context affects my testing strategies, but in helping me to form test ideas and talk about test techniques within my overall testing activities, including my strategy.

A change in my context

Recently, I’ve changed my path by beginning a new adventure at eBay.
In my initial month or so with the company, I’ve been speaking to a lot of people about testing in general and about test strategies. So I dug out the trusty HTSM from James’ blog, but my “spidey-sense” tingled when I tried to use it as the starting point for discussing how to design a test strategy. I had a niggling feeling that something was missing.

Coincidentally, I noticed a tweet from a conference relating to a slide showing the HTSM, and I started a conversation about my slight niggle. The responses on Twitter helped me to dive a bit deeper into the HTSM and the problems I was seeing within the context of how I was trying to use it.

At the heart of it, I realised that the HTSM seemed to be missing a bit about test approaches and the “how” side of that. The model details context influences, which have arrows into test techniques. I feel that in my context, it would be useful to have the approaches listed in between. Additionally, the model has an arrow from techniques to show an output of “perceived quality”. I agree that the output is perceived quality, but I feel that it may be useful to detail the test execution part between the techniques and the output.

A new model based on the Heuristic Test Strategy Model

I began drafting a new model, based on the HTSM. I stretched it out and added a section to indicate test approaches, using my own “information and testing” model that I’ve had for a while. I also added a section on test ideas, and another section around continuous testing, (which I’ll explain later).

It was looking pretty busy, but I had a coherent top down flow: Context -> Test approaches (with a sub-model detailing the value of the approaches) -> driving test ideas (using test techniques, heuristics, plus more) -> and an indication that test execution produces an output of “perceived quality”.

It looked reasonable, but I decided to loop in a friend to get feedback on the structure of the model.

Refactoring the model with the help of a friend(ly tester) – Thanks Richard Bradshaw!

Richard provided some valuable ideas from his perspective of how the model could be structured to be easier to understand.
Richard offered some good feedback on how I should expand more on the test execution part of the model, and reshape the model from a top down flow to show the relations between approaches, test ideas and test execution more clearly.
Thanks for your input, Richard! I’ve got another model that I’m keen to get your opinions on again too 😉

The model

Without further ado, here is the new model – explanation will follow below:

There are 6 key aspects that build up this strategy model: Context, Approaches, Driving Test Ideas, Structure, Test Execution, and Continuous Testing. Each section has deeper information within the boxes and the model shows the linkage between each section. The output of it all is “perceived quality”.

A deeper dive into the model and each section follows…

At the top of the model is “Context” (turquoise box). This highlights that context is the overarching starting point for test strategies, in a similar way to how the HTSM’s three properties form the context. I’ve expanded these properties out to explicitly include “people and skills”, “risks” and “tools” as influencing factors that contribute to forming our context. These are implicit in the HTSM, but my reasoning for splitting them out is to make sure they are explicitly considered.

Context guides our approaches to testing. I’ve explicitly called out 3 approaches to testing (green boxes), for which I’ve detailed the testing artefacts and structure relating to each approach (purple boxes):

A scripted approach – using test cases to enable a person to follow steps and assert expectations of how the software should work, having a quantitative focus for reporting, driven by metrics.

An automation approach – using coded scripts to automatically manipulate the software and assert expectations of how the software should work, having a quantitative focus for reporting, driven by change detection.

An exploratory approach – using test charters and session based test management to structure investigation, taking testing notes throughout each session, with a qualitative focus for reporting.

Note: automation can also be used within an exploratory approach to assist the investigation, e.g. using scripts to automatically generate test data or set up a system in order to be able to investigate. And using an exploratory approach also uncovers information that can lead to having explicit expectations that we can automate to detect future changes.
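To make the distinction between the approaches concrete, here is a minimal sketch in Python (all names are illustrative, not from any real project) contrasting an automated check that asserts an explicit expectation with a small script that merely supports exploration by generating varied test data:

```python
import random

def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for the software under test."""
    return round(price * (1 - percent / 100), 2)

# Automation approach: codify an explicit expectation and assert it.
# This is change detection: the check fails if the behaviour drifts.
def test_ten_percent_discount():
    assert apply_discount(100.0, 10) == 90.0

# Exploratory support: automation that generates varied inputs for a
# human to investigate with, rather than asserting a fixed expectation.
def random_prices(n: int, seed: int = 42) -> list[float]:
    rng = random.Random(seed)  # seeded so a session can be reproduced
    return [round(rng.uniform(0.01, 999.99), 2) for _ in range(n)]

test_ten_percent_discount()
print(random_prices(3))
```

The first function encodes explicit information we already have; the second produces inputs whose outcomes we don’t yet know, feeding the investigation rather than replacing it.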

Taking an exploratory approach for our testing puts a huge emphasis on the autonomy of the tester: generating test ideas while exploring, executing those ideas and uncovering information, all while building up testing notes to report. The model therefore shows the exploratory approach feeding into the “driving test ideas” section (orange box). Context also drives our test ideas – this is where test techniques, types of testing, types of risks, heuristics, oracles, etc. help to stem test ideas. The “driving test ideas” box is also a zoomed-in view of the “investigation” part of the “test execution” box.

The box in the middle of the model represents “test execution” (red box). The small model within the test execution section shows the relationship that each approach has with “information”. There are many properties of information: explicit, tacit and implicit information that we have, unknown information that we are aware of, and unknown information that we are unaware of.

A scripted approach relies on our existing information (i.e. our expectations), with the intent of confirming those expectations. We can only assert the explicit information that we have, that sets our expectations.

An automation approach also relies on us having expectations in order to codify our assertions to confirm those expectations automatically. If we are using automation to automatically assert our expectations, this also relates only to the explicit information that we have.

An exploratory approach to testing is investigative, with the intent of uncovering new information. An instance of a test (noun) relates to unknown information that we are aware of (i.e. we can question and investigate to uncover information surrounding those unknowns, translating it into knowledge). It also relates to tacit and implicit knowledge that hasn’t been, or can’t be, portrayed explicitly. The activity of testing (verb) also gives us the opportunity to become aware of more unknowns that we were previously unaware of.

The “information” box within the “test execution” section of the model also represents the multitude of elements regarding the software that we test continuously (blue circle).

Testing the ideas
We can test the ideas for new features at the point where they come through, whether they aim to solve a problem, generate revenue or serve any other purpose, and whether they come from the business, the dev team, the users, or any other stakeholder.
If we are involved early in this cycle, we can utilise an exploratory approach to uncover information. We can question the idea, uncover and challenge assumptions, uncover and highlight risks with the idea, and highlight problems and other information about the idea. Testing the ideas is done using an exploratory approach.

Testing the artefacts

In the same way that we can test the ideas, we can also question and challenge the artefacts that we create and obtain. By “artefacts”, I am referring to: epics, user stories, acceptance criteria, data flow diagrams, risk maps, business flow diagrams, feature files, mind maps, models, requirement specification documents, etc.
We can question the info supplied within the artefacts and uncover and discuss risks and other variables about the artefacts to stem more information and dispel assumptions in the same way that we do when we test the ideas. Testing the artefacts is done using an exploratory approach.

Testing the designs

When the artefacts are created, they are used for various activities: design activities, programming activities and testing activities. The design activities produce more design artefacts that we can also test. If you have wireframes or models that represent the software, how it should look, its usability, etc., then we should be testing these too. We can ask questions and uncover information surrounding various risks at that level too. Testing the designs is done using an exploratory approach.

Testing the code

Code reviews are an important part of building a high quality product. Anyone should be able to read through the code and test it from different perspectives, e.g. from the perspective of how well the code is written and whether there is a way to make it more secure, testable, maintainable, scalable, performant, etc.; or from the perspective of investigating potential product risks at code level. In addition to this, unit testing is ideally included as part of the development activities, whether you are using TDD to design the code or not. Testing the code is done using an exploratory approach (investigative activities). Unit testing is done using an automation approach (assertive activities).
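As a small sketch of the “assertive activities” side, a unit test pins down one explicit expectation about a single unit of code. This example uses Python’s standard `unittest` module; the function under test is purely illustrative:

```python
import unittest

def slugify(title: str) -> str:
    """Illustrative unit under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # Each test asserts one explicit expectation; a failure signals
    # that the unit's behaviour has changed (change detection).
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase_is_unchanged(self):
        self.assertEqual(slugify("hello"), "hello")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note how narrow the tests are: they confirm explicit information we already hold, while the investigative perspectives above (security, maintainability, product risks in the code) remain exploratory work.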

Testing the software

We also need to investigate the software. We can spin up test environments or dev environments and operate the software to test it. I’m not just talking about a software UI here; we should test at any interface: APIs and HTTP interfaces, server and database interfaces, components and individual integrations, etc. The UI is in addition to all these things. There are also various kinds of interfaces relating specifically to the UI, e.g. screen sizes, resolutions, browsers, operating systems, input methods, etc.

Much of this testing relates to investigating product risks, and some of it will be based on asserting the expectations that we have of how the software should work: all the information that we’ve gained from testing the ideas, artefacts and designs, translated into expectations that we can assert against the product. When we test the software, we use an exploratory approach to investigate as well as the scripted and automation approaches to assert.

Testing the processes

We should also remain conscious about testing our processes. If you are working within an Agile methodology, there should be a goal of continuous improvement. Agile processes such as Scrum and Kanban have an inherent purpose of striving for iterative process improvements. I believe that testing our current processes is a sure-fire way of uncovering information about problems and areas for improvement. The majority of us probably do this implicitly already, but being explicit and conscious about it will bring it to the fore and put more emphasis on actions. Testing the processes is done using an exploratory approach.

The infinite loop

I’ve included this loop to emphasise that these testing activities are ongoing. As long as the project is ongoing, we need to be thinking about testing at each of these layers of the SDLC.

Finally, the grand finale of the model is the output: “Perceived quality”.
This is the same as in the HTSM. I completely agree with James and Michael that the level of quality we believe the product to possess, based on our discoveries, will always be just that: a perception. There will always be things we don’t think of, which could change that perception of quality as soon as we become aware of them. And time can change this perception too.

Comments and thoughts?

I fully understand that all models are fallible, and this model is not supposed to be used as a checklist. In the same way that Michael and James do with the HTSM, I invite you to critique the model, adjust it, mould it, enhance it and make it your own. I’ll be happy if this helps stem more conversations and healthy debates too.

I hope the model is useful in stemming ideas for anyone looking to create a test strategy on a page, or something similar, or for just talking about testing in general. Please share your stories with me if this model does help you in these ways.

Great stuff! I think it is very important that people like you re-invent stuff. I also find HTSM helpful, but it would be sad if it was the best that humankind ever produced. I am glad that it is helpful to you and I bet it will be for many. Keep it up!

I think the relationship between the approaches (green) and test execution (red) can be made more explicit given that both scripted and automation are (by definition) related to the explicit information available (and may indeed be the source of that information in some contexts). Conversely, the exploratory approach relates directly to that which is not explicit and is the primary reason for its existence I guess.

The picture in my head is reminding me of Del Dewar’s MoT talk last year (I think?) and how he depicted tacit vs explicit knowledge, I’ll find some time to review it and maybe draw a model to reflect …

I really like the model and appreciate the effort you’ve expended to think it through thoroughly and explain it. There are 2 things I feel are under-represented: 1. Our need to influence the team to design the system to mitigate important risks and to be testable. 2. Closing the loop and feeding qualitative & quantitative data back into your test approach.