"My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni" - James Bach

My LinkedIn Profile : http://www.linkedin.com/in/shrinik

For views, feedback - do mail me at shrinik@gmail.com

Sunday, July 20, 2008

Exploratory Testing SHOCK ....

A colleague of mine, the other day, expressed his struggle to make exploratory testing work for his team (scalability, and making it a best practice!!!). He said, "Exploratory testing is HIGHLY person DEPENDENT - that is the biggest problem for me... Do you have any process document for doing exploratory testing well? I will have it included in our testing process framework. BTW, that will help us earn some kudos from our CMMI level 5 assessment team."

I said, "... that is true... and why only exploratory testing - any good sapient testing is 'person', that is human, dependent. Good testing requires a thinking, inquisitive human mind. Are you planning to get testing done by machines or robots, so that your person dependency goes away? If yes, kindly ask the CMMI team to order a few robots for a testing POC (proof of concept)."

He could not answer for a while... then responded in a low voice, "I knew you would say something like this... but I have an answer... Automation!!!! I have raised a request to buy 10 licenses of the #1 tool in the test tools market... Howzzzzzzzzzzzzzzzt?"

7 comments:

It sounds like a man who doesn't trust skilled labor. Would he rather fly in an airliner run by a robot? Would he rather speak to a robot when he calls a company because a human receptionist won't treat him identically every time? Is he married to a robot? Does he crave robot children?

Does he mistrust himself? Could it be that he himself has so little skill in testing that he craves to transfer it to a robot?

I'm going to go out on a limb here and claim that both you and Mr. Bach missed your colleague's point. At root, I think that person just wanted a way to fit ET into some manageable process. If you dust away all of the adversarial parts of that conversation, that's what I see remaining.

So . . . is there a way to fit ET into a manageable process where you work? Is there a way to make it measurable and predictable in some way? Playing devil's advocate: if not, doesn't that make ET a risk to any schedule?

Do your testers document what they tried and what they found? Do they share that with other testers and developers, get feedback, and use that to drive further investigation? If so, there's already some kind of (possibly informal) process you can point at.

Even if that's not the best solution in your case, I'd claim it's better when everybody wins. Nobody likes being an angry robo^H^H^H^Htester.

My colleague's sole frustration was that ET is "human" (skill) dependent; he was looking for some magic pill that would make all ET a "predictable" and "repeatable" process, so that he could put that process in place and wish all his testing problems away.

As is apparent from my colleague's next comment, he plans to circumvent the pains of "person" dependency through "automation" - there appears to be a total misunderstanding of the "human" requirement for any sapient testing. That is the point of this post, and of Mr. Bach's view.

>>So . . . is there a way to fit ET into a managable process where you work?

Yes, there can be. I would like to know what you mean by a "manageable" process, though. Does it require skilled humans or not?

But that is not the point of my colleague, who was unable to comprehend the "person" dependency of ET.

>>> Is there a way to make it measurable and predictable in some way?

When people (especially managers) want everything to be measurable and predictable, they typically miss the point that anything associated with humans is likely to have an element of vagueness and uncertainty. That is a strength at times and a weakness at others. Anything that is perfectly measurable and predictable is not necessarily good - and vice versa.

We must also note that this predictability and measurability comes at a cost - more often than not, the cost of very poor results.

>>> if not, doesn't that make ET a risk to any schedule

ET is evolving... I believe our community's (context-driven testing) current focus is on improving the practice so that it improves the quality of testing and the agility of testing (impromptu testing)... once we get good at that, we can think about making it manageable and so on... There is so much to research about human thinking, modeling, cognitive challenges, observation, heuristics... Right now the focus is on the "individual" human; managing a group of testers doing exploratory testing, and studying their behaviours, comes next...

>>>Do your testers document what they tried and what they found?

In the ET world, or sapient testing practice, whether to document the test procedure and outcomes is best left to the testers and their managers. They may document things or may not, depending upon the project context and testing mission. Insisting on documenting everything that testers do is a "worst" practice in our industry.

>>> Do they share that with other testers and developers, get feedback, and use that to drive further investigation? If so, there's already some kind of (possibly informal) process you can point at.

As I mentioned - my colleague's concern is to make the process completely independent of humans.

I suggest reading about Session-Based Test Management (SBTM) and the current work of Cem Kaner, James Bach, Jonathan Kohl, and Michael Bolton. ET's structure makes it very adaptable to almost every project context.
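For readers new to SBTM: its central artifact is the session sheet, a lightweight record of one time-boxed, chartered block of exploratory testing. Here is a minimal sketch of what such a record might capture - the field names and all the values are illustrative, not the canonical template:

```python
# Illustrative sketch of an SBTM-style session record. The fields loosely
# follow the published session-sheet idea (charter, time box, task
# breakdown, notes, bugs, issues); names and values here are made up.
session = {
    "charter": "Explore the invoice-export workflow for data-loss bugs",
    "tester": "A. Tester",
    "duration_min": 90,  # a time-boxed "normal" session
    # Task breakdown: how the session time was split, in percent.
    "task_breakdown": {
        "test_design_and_execution": 60,
        "bug_investigation_and_reporting": 25,
        "session_setup": 15,
    },
    "test_notes": [
        "Exports over 10 MB appear to silently truncate line items",
    ],
    "bugs": ["BUG-101: truncated export over 10 MB"],
    "issues": ["Need a realistic large-invoice data set"],
}

# Sanity check: the breakdown should account for the whole session.
breakdown_total = sum(session["task_breakdown"].values())
print(f"charter: {session['charter']}")
print(f"breakdown adds up to {breakdown_total}%")
```

The point of such a record is not documentation for its own sake: it gives the tester and the test lead something concrete to debrief over, without scripting the testing itself.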

>>> Nobody likes being an angry robo^H^H^H^Htester.

But someone up there (managers) would want the testers to be so... that would make their job easier. It is easy to manage a group of people who behave in a predictable and repeatable way. Otherwise, how else could managers manage things they hardly understand...

For the record, I am a tester. I'd like to think that I'm a skilled, thinking one. Just my opinion, of course.

I think you're focusing a little too much on "automation", which IMHO was just your colleague's naive attempt to make testing more predictable. Any claim that automation is the magic pill is really a cry for help. It means that someone needs to learn about other options.

I'm just suggesting that you meet in the middle. It's possible to have a documented process that still allows for humans to um . . . be humans.

Sure, testing isn't like working on an assembly line, but that doesn't mean that there can't be some kind of agreed upon process. Giving pointy haired bosses a little insight into what the testers are actually doing will make them happy. Even giving them some semi-bogus metric like "improved code coverage via ET by x%" might make them go away and stop arguing. (Minor digression: PHBs love percentages.)

>>> So . . . is there a way to fit ET into a manageable process where you work? Is there a way to make it measurable and predictable in some way? Playing devil's advocate, if not, doesn't that make ET a risk to any schedule?

>>> Sure, testing isn't like working on an assembly line, but that doesn't mean that there can't be some kind of agreed upon process. Giving pointy haired bosses a little insight into what the testers are actually doing will make them happy. Even giving them some semi-bogus metric like "improved code coverage via ET by x%" might make them go away and stop arguing.

Testing isn't a risk to any schedule, because you really can stop testing any time you like.

The question is do you want to stop testing? Do you feel okay about stopping testing? If you're satisfied with the questions you've asked and the answers you've obtained, then you can stop. If you still have questions, keep going.

The unknown might be a risk to a schedule. Fixing the problems that you've discovered might be a risk to the schedule.

Testing itself is entirely manageable and predictable, if people bother to think about it. Consider some of these ideas:

1. I have thirty areas of concern to which I'd like to assign an hour of testing each.

2. I have six systems to set up, and I'll spend an average of 60 minutes each doing it. (The first one will take the longest--I'm guessing two hours--but I'll learn stuff that will make the later ones take only 30 minutes each.) Then I'll spend a day testing on each of these systems.

3. I have twenty-four general scenarios to test, each covering from one to twenty different variations on the data model. As of now, the project owner says that I have to get through each and every one of these, and I don't know how long it's going to take to do that, because the application is very shaky at the moment and there's a lot of repair work to be done. But I'll have a better idea tomorrow--and I know that the project owner wants to ship in eight weeks. Maybe we should get together at the end of the week and see how it's all going.

4. My agile team has created a very short list of tasks to accomplish in this two-week cycle. They're doing a lot of unit testing, so I'm going to be more focused on workflow stuff, and I'll need some time--I dunno, a couple of days?--to work through some transactions that the new code is supposed to support, and some variations on those transactions.

Any one of these is a kind of prediction; there are infinite variations on the theme. So testing is predictable, isn't it?
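Estimate 2 above, for instance, boils down to simple arithmetic. A sketch (the 8-hour working day is my assumption; the setup times are the ones from the estimate):

```python
# Back-of-the-envelope version of estimate 2: six systems to set up,
# the first taking ~2 hours, later ones ~30 minutes thanks to what was
# learned, then a day of testing on each system.
SYSTEMS = 6
FIRST_SETUP_MIN = 120      # first setup: the guessed two hours
LATER_SETUP_MIN = 30       # each subsequent setup
TEST_DAY_MIN = 8 * 60      # assumed 8-hour testing day per system

setup_total = FIRST_SETUP_MIN + (SYSTEMS - 1) * LATER_SETUP_MIN
test_total = SYSTEMS * TEST_DAY_MIN
total_min = setup_total + test_total

print(f"setup: {setup_total} min, testing: {test_total} min")
print(f"total: {total_min / 60:.1f} hours")
```

The numbers themselves matter less than the fact that the estimate is explicit: when reality diverges from it, there is something concrete to revise.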

Ah... but what if the predictions don't come true? Well, then someone has to make decisions. Those decisions might involve adjustments to product scope, staffing, budget, schedules, contracts, test coverage, development resources... Making decisions about such things is called management. Thus testing is manageable, isn't it?

>>> So testing is predictable, isn't it? Ah... but what if the predictions don't come true?

Taking it a bit further... when and how can predictions related to testing become false? A few parameters that influence a prediction might be: the testing mission, test environment availability, the state (or quality) of the application being tested, the preparedness and/or speed of the developers fixing the bugs reported by test, and the overall number of bugs logged by test. It is important, in my opinion, that a tester (after giving or committing to an estimate) does not go into exile and work in a cave. For testing predictions to come true, many external factors need to work in synchronization.

As you mentioned, testing can be HIGHLY predictable - we testers can stop at any given point in the project. Thereafter it is up to management to see if it has any questions about the software that still need to be answered.

So in a way, "predictability" - a favorite term of project managers and business sponsors - takes on a whole new meaning when it comes to testing. Testing is predictable in one way and not in another, depending upon how you look at it.

>>>Making decisions about such things is called management. Thus testing is manageable, isn't it?

When you say testing is manageable, I am sure you are NOT saying that testers make such decisions. Probably what you are trying to say is that testing can be performed in such a way that it lends itself to management making those decisions.

Excellent points, Michael - I thought I had not addressed the issue of how ET's manageability and measurability could be perceived as a risk. Thanks for the comments...