Thursday, 21 November 2013

I have seen on far too many occasions, whilst working in testing, people spending months gathering information and creating test plans to cover every single requirement, edge case and corner case. Some people see this as productive and important work; in the dim and distant past I did too. I have learnt a lot since then, and I no longer think it is as important as people make it out to be. I see it as a waste of effort and time, time which would be better spent actually testing the product and finding out what it is doing. This is not to say that 'no' planning is the right approach to take, rather that the test planning phase may be better suited to defining what you need to do to start doing some testing. It is more important to discover things that could block you, or even worse, prevent you from testing at all. This article looks at the planning phase of testing from my own personal experience and viewpoint.

A starting point for this article was re-reading an article that Michael Bolton wrote for a previous edition of Sticky Minds magazine called 'Testing Without a Map'. In it Michael talked about using heuristics to help guide your testing effort; at the time he suggested using HICCUPS as a guide to your testing, with a focus on inconsistencies. That article was about useful approaches when actually testing the product, rather than the planning phase. This article focuses on what happens before you actually test.

The only way to know something is to experience it, and by experiencing what the software is doing you are testing it. My own experience is that there is normally a delay between something being developed and there being something testers can test (yes, even in the world of Agile); this is the ideal time in which we can and should do some test planning. But what do we include in our plan? If we follow the standard IEEE approach to test planning, we get the following areas recommended for inclusion in the test plan.

4. Features to be tested (derived from risk analysis and test strategy), including:

a. Detail the test items
b. All (combinations of) software features to be tested or not (with reason)
c. References to design documentation
d. Non-functional attributes to be tested

5. Approach (derived from risk analysis and test strategy), including:

a. Major activities, techniques and tools used (add here a number of paragraphs for items of each risk level)
b. Level of independence
c. Metrics to evaluate coverage and progression
d. Different approach for each risk level
e. Significant constraints regarding the approach

6. Item pass/fail criteria (or: completion criteria), including:

a. Specify criteria to be used
b. Example: outstanding defects per priority
c. Based on standards (ISO 9126 parts 2 & 3)
d. Provides unambiguous definition of the expectations
e. Do not count failures only; keep the relation with the risks

7. Suspension and resumption (for avoiding wastage), including:

a. Specify criteria to suspend all or a portion of tests (at intake and during testing)
b. Tasks to be repeated when resuming testing

8. Deliverables (detailed list), including:

a. Identify all documents -> to be used for the schedule
b. Identify milestones and standards to be used

9. Testing tasks for preparing the resource requirements and verifying that all deliverables can be produced.

10. The list of tasks is derived from Approach and Deliverables and includes:

a. Tasks to prepare and perform tests
b. Task dependencies
c. Special skills required
d. Tasks grouped by test roles and functions
e. Test management
f. Reviews
g. Test environment control

11. Environmental needs (derived from points 9 and 10), specifying the necessary and desired properties of the test environments, including:

a. Hardware, communication, system software
b. Level of security for the test facilities
c. Tools
d. Any others, such as office requirements

WOW – if we did all of this, when would we ever get time to test? The problem is that in the past I have been guilty of blindly following this, using test plan templates and lots of cut and paste from other test plans. Why? It was how we had always done planning, and I did not question whether it was right or wrong, or even useful. Mind you, in the back of my mind I would wonder why we were doing this, since nobody ever read the plan or updated it as things changed. Hindsight is a wonderful thing!

My thinking about what we really need to do when planning has changed drastically, and now I like to do just enough planning to enable a 'thinking' tester to do some testing of the product. The problem we face in our craft is that we make excuses not to do what we should be doing, which, by the way, is actual testing. We try to plan in far too much detail and map out all possible scenarios and use cases, rather than focusing on what the software is doing. Continuing with the theme of 'the map' from Michael Bolton's article, Alfred Korzybski once stated that

“The map is not the territory”

As a reader of this article what does that imply to
you?

To me it was an epiphany moment: it was when I realised that we cannot, and should not, plan what we intend to test in too much detail. What Korzybski was saying is that no matter how much you plan, and however detailed your plan is, it will never match reality. In some ways it is like designing a map with a 1:1 scale. How useful would you find that kind of map for getting around? Would it be of any use? Would it actually map the reality of the world you can see and observe? It would not be dynamic, so anything that had changed or moved would not be shown. What about interactive objects within the map? They are constantly changing and moving, and as such, by the time you get hold of the map it is normally out of date. Can you see how that relates to test plans?

What this means in the reality of software testing is that we can plan and plan and plan, but that gives no indication of the testing we will actually do. After a discussion with Michael Bolton on Skype, he came up with a great concept: we need to split planning time into test preparation and actual planning.

You need to spend some time getting ready to test: getting your environments, equipment and automation in place. Without these in place, you could be blocked from actually starting to do some testing. This is vital work, and far more important than writing down step-by-step test scripts.

The purpose of testing is to find out information, and the only way to do this is to interact with the application. Most things are discovered by exploration and accident rather than by planning; when something you planned for actually happens, it is more than likely a coincidence. The problem with doing too much planning is that it becomes out of date by the time you get to the end of your testing. It is much better to have a dynamic, adaptive test plan that changes as you uncover and find more to test. One of the ways I have adopted this is through the use of mind maps; there have been many articles in the testing community on this subject, so if you want to know more I suggest you Google 'mind maps and software testing'.

The problem we have is that people are stuck in the mentality that test cases are the most important thing to produce when we start test planning. There is a need to move away from test cases towards missions (goals): something you could do and achieve in a period of time, and, more importantly, something that is reusable, where how it is used depends on the context and the person doing the mission. When planning, you only need to plan enough to start testing (as long as your test prep has been done); then, as you test, you will uncover interesting information and start to map out what you actually see, rather than what you thought you might see. Your test plan will grow and expand as you become rich in information and knowledge from what you find and uncover.

Accurate forecasts aren't possible because the world is not
predictable

So it is wise not to plan too far ahead: plan only enough to do some testing, find out what the system is doing, and adjust your planning based upon the hard information you uncover. Report this to those that matter; the information you find could be what is valuable to the business. Then look for more to test: you should always have a backlog, and the backlog should never be empty. The way I do this is to report regularly what we have found and what new missions we have generated based upon the interesting things we came across. I then re-factor my missions based upon:

The customer priority – how important is it to the customer that we do this mission?

AND

The risk to the project – if we did this mission instead of the one we had planned to do next from the backlog, what risk does that pose to the project?
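The re-factoring above can be sketched as a simple weighted ordering of the backlog. This is only my own illustration of the idea, not a formula from anywhere; the mission names, the 1-5 scales and the equal weighting are all assumptions:

```python
# Sketch: ordering a mission backlog by customer priority and project risk.
# The missions, the 1-5 scales and the equal weighting are illustrative assumptions.

missions = [
    {"name": "Explore login error handling", "customer_priority": 5, "project_risk": 3},
    {"name": "Vary report export formats",   "customer_priority": 2, "project_risk": 4},
    {"name": "Check audit-log consistency",  "customer_priority": 4, "project_risk": 5},
]

def score(mission):
    # Higher customer priority and higher project risk both push a mission up the backlog.
    return mission["customer_priority"] + mission["project_risk"]

backlog = sorted(missions, key=score, reverse=True)
for m in backlog:
    print(m["name"], score(m))
```

In practice the scores would come from a conversation with the customer and the team, and would be revisited every time new information comes out of a testing session.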

To summarise, we need to think more about how much planning we do, and think critically about whether producing endless pages of test cases during test planning is the best use of our resources. We need to plan enough to do some testing, then adapt our test plan based upon the information we uncover. There is a need to re-evaluate what you intend to do often, and to adapt the plan as your knowledge of the system increases. It comes down to this: stop trying to map everything, and map just enough to give you a starting point for discovery and exploration.

*Many thanks to Michael Bolton for being a sounding board and providing some useful insights for this article.

Having followed Elisabeth on Twitter (@testobsessed) and used her test heuristics cheat sheet extensively, I was very excited when I found out that she was releasing a book about exploratory testing, and I was fortunate to receive an early ebook version. The following is my review of the book and of the things I found interesting, which I hope others may find interesting too.

The book begins with an explanation of testing and exploration, in which she mentions the debate on testing and checking; to me this gives a good grounding in the context Elisabeth sets for what follows in the book. I especially like the point she makes regarding the need to interact with the system:

Until you test—interact with the software or system, observe its actual behaviour, and compare that to our expectations—everything you think you know about it is mere speculation.

Elisabeth brings up the point of how difficult it is to plan for everything, and suggests we plan just enough. The rest of the first chapter goes into more detail on the essentials of exploratory testing and making use of session-based test management.

One part of the book I found useful was the practice sessions at the end of each chapter, which help you recap what was explained within it. If you are the type to normally skip this kind of thing (like myself), on this occasion I would recommend that you give them a go; they really do help you understand what has been written in the chapter.

The next chapter introduces charters to the reader, and for me this is the most useful and important chapter of the book. It helped me to clarify some parts of the exploratory testing approach that I was struggling with, and simplified my thoughts. Elisabeth explains a rather simple template for creating your own charters:

Explore (target)

With (resources)

To discover (information)

Where:

Target: Where are you exploring?

Resources: What resources will you bring with you?

Information: What kind of information are you hoping to find?
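The template is simple enough to capture as a small structure if you keep your charters in tooling rather than on cards. As an illustration only (the `Charter` class and the example charter below are my own, not from the book):

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory testing charter, following the Explore/With/To discover template."""
    target: str       # where you are exploring
    resources: str    # what you will bring with you
    information: str  # what kind of information you hope to find

    def __str__(self):
        return f"Explore {self.target} with {self.resources} to discover {self.information}"

# A hypothetical charter for a login feature:
charter = Charter(
    target="the login page",
    resources="a list of malformed usernames",
    information="how input validation errors are reported",
)
print(charter)
```

The point of the template is the discipline, not the tooling: each charter names a target, the resources you will use, and the information you are after, and nothing more.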

The rest of the chapter takes this template and, using examples, provides the reader with a way to create charters simply and in some cases quickly. Along the way she introduces rules that one may wish to follow to avoid turning charters into bad charters. She also offers advice on how to get information for new charters (joining requirement/design meetings, playing the headline game).

What, you do not know what the headline game is? Well you need to buy the book to find out.

I have started to use this template to create charters for my own testing, going so far as to add it into my mind map test plans. For me, this very useful and simple approach to chartering exploratory testing was alone worth the price of the book.

The following chapter takes you on a journey through the importance of being able to observe and notice things. This is a key element of exploratory testing, and looking for more things to test is a part of it. Elisabeth talks about our biases and how easy it is for us to miss things, and provides examples of how we may try to avoid some of them. She talks about the need for testers to question, and question again, to be able to dig deep and uncover information that could be useful. This chapter is helpful for uncovering hidden information, and it suggests ways in which you can get more information about what you want to explore without the need for requirement documents. This is important, since it is better to have the skills that allow you to ask questions.

The next few chapters of the book look at ways in which you change or alter the system to uncover more information by means of exploration. These chapters take the cheat sheet Elisabeth and others produced and add a lot more detail and practical ways to look at the system from a different perspective. They include titles such as:

Find Interesting Variations

Vary Sequence and Interactions

Explore Entities and Their Relationships

Discover States and Transitions

A great deal of this is found in part two of the book, a section I repeatedly return to for quick inspiration on what I can do to explore the system more. It gives some great techniques for finding variants in your system and for modelling the way the system is working. It provides useful ways to help you find the gaps in your system, or even in your knowledge.

In the middle of the book there is a chapter called 'Evaluate Results', in which Elisabeth asks if you know the rules of your system. If you do not, it would be useful to explore and find them. She explains the meaning of rules using 'never' and 'always'. If you have a rule saying the system should always do something, then explore; the same goes for 'never': you can explore and uncover where these rules are broken. This chapter also looks at outside factors such as standards and external and internal consistency. All of these are important when exploring the system, and in this chapter Elisabeth reminds us to be aware of such things.

The final section of the book is titled 'putting into context'.

The chapter 'Explore the Ecosystem' expands upon the 'Evaluate Results' chapter and asks you to think about external factors such as the OS and third-party libraries. Elisabeth gives a great tip in this chapter on modelling what is within your system, what is external, and how they interface. I have found this extremely useful for working out where I can control the system and where things are outside of my control. Once this has been done you can then, as Elisabeth suggests, ask the 'What if' questions of these external systems. If you want to know more about these 'What if' questions, again, I recommend reading the book.

Also within this section, Elisabeth gives advice on how to explore systems with no user interface. For someone such as myself, who works with very few user interfaces, there was a lot of useful information in this chapter, especially in making me think of ways in which I could manipulate the interfaces and explore the APIs.

Next, Elisabeth talks about how to go about exploring an existing system, and gives some great tips on how to do this, such as:

Recon Session

Sharing observations

Interviewing to gather questions

This chapter is useful for those who are testing, or have tested, an existing system and need new ideas to expand their exploration.

Elisabeth then talks about exploring the requirements, which is very useful for those who have requirement documentation; within the chapter there are lots of suggested ways in which you can explore them. One great suggestion is taking a test review meeting and turning it into a requirements review. Elisabeth offers many other suggestions on how to create charters from the requirements and use them during your exploratory testing sessions.

The final chapter of the book is about thinking of exploratory testing throughout the whole development of the system, and how to make exploratory testing a key part of your test strategy. The key point I took from this chapter was the following:

When your test strategy includes both checking and exploring and the team acts on the information that testing reveals, the result is incredibly high-quality software

Elisabeth gives some real-life experiences and stories of how she went about ensuring 'exploring' is a key part of testing. This chapter is very useful for those who want to introduce exploratory testing and are not sure how to go about it.

At the end of the book there is a bonus section on interviewing for exploratory testing skills and some details about the previously mentioned cheat sheet.

I recommend that all testers have a copy of Explore It, as well as anyone who works with testers. There is information in this book that can help developers with their unit tests by making them ask 'have I thought of this?'. It can be used by product owners to put together their own charters for things they feel are important to be investigated or explored.