Below is a response we wrote to the latest Tester Magazines Newsletter article, "What's All the Fuss About? Structured vs Unstructured Testing". This was emailed directly to the author, Geoff Horne, but after his reply suggested it be used in the next edition of his magazine, we felt it would be best published on our own Hello Test World blog.

If you have any thoughts, we'll be looking forward to them in the comments.

Hello Geoff,

We've read your article in the mid-edition newsletter of 21/05/2013. While we have little to comment on in Colin Cherry's part of the article, we would like to challenge some of the things you state in yours.

1) Your selection to attend KWST #2 was by no means a given. The decision fell in the timeframe between your email of 14/5/2012 and the sending of the invites on 27/05/2012. There is a selection process which involves conferring with the organisers of the event as to the make-up of those 20 people. In plain terms, you were voted in. By no means do we have a "loyalty card scheme". We cannot know everyone in the testing industry, so we rely heavily on people we know suggesting people we should invite.

2) "Seemingly "brave new world"" – how long would you say (in years) does it take for a brave new world to be established? The context driven testing school was formed by Cem Kaner, Brett Pettichord, James Bach and Brian Marick in 2001, and the practices were present years before that. We'd stipulate that it is just as tried and proven. The fact that you referred to it as a brave new world only highlights Colin's point about testers not looking beyond their own back yard.

3) The "brave new world" was Unstructured Testing – yet Exploratory Testing and Context Driven Testing are far from unstructured. In fact, there are many ways to structure Exploratory Testing so that it is accountable, auditable, reportable, and plannable. The opposite of exploratory testing is scripted testing, not structured testing. Secondly, Exploratory Testing (ET) and Context Driven Testing (CDT) are not synonyms. CDT is testing driven by the principles of the context driven school, much like how Agile development is development driven by the principles laid out in the Agile Manifesto.

ET is an approach to testing that focuses on the tester’s skill and judgement to guide their testing as they are in the moment of testing.

So ET and CDT are far from unstructured. But that is actually secondary, as it was not really what the discussion at KWST was about. The discussion we had was about counting test cases to inform decisions, and the wider practice of supplying metrics. You were adamant about "crunching the numbers" without giving any proof or scientific reasoning behind what you were doing or why. This is, at best, called pseudo-science. We noted that while this is a common practice in many projects, that does not make it good or lead to successful projects. Neither you, nor anyone else for that matter, could prove any correlation between metrics and the success of a project.

The discussion we had was never about unstructured testing and we would contend that there is no such thing as unstructured testing. It was also not about scripted vs unscripted or any one of those discussions. So we do have some disconnect with what you are writing and our take on what was said.

4) Interestingly enough, you then proceed to describe why rigid methodologies fail in most projects. Part of what we do at KWST is to talk about our experiences (Experience Reports or ERs) and challenge one another to try to find different, and hopefully better, solutions. And we are really pleased to hear that you felt challenged and reflected on what you were doing. But it does appear that you have decided to keep doing the same things you have been doing your whole testing career, never questioning whether there is a better way or just a different way. This is the exact thing we try to challenge and improve on at KWST amongst ourselves.

5) You then get to the part where you wonder if you've always been a context driven tester. We would contend that your testing, as we understand it from your descriptions, is far from context-driven. As defined by the founders of CDT (who are mentioned above), Context Driven Testing (and the Context Driven School of Testing) is much more than acknowledging the context of a project or organisation. It is a set of guiding principles, which are:

The value of any practice depends on its context.

There are good practices in context, but there are no best practices.

People, working together, are the most important part of any project’s context.

Projects unfold over time in ways that are often not predictable.

The product is a solution. If the problem isn’t solved, the product doesn’t work.

Good software testing is a challenging intellectual process.

Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Any tester who is aligned with the above principles allows the context to drive the appropriate response, whereas a tester who is merely context aware is most likely to pay lip service to context (and then continue with their own "tried and true" methods). Knowing context and adapting to context are two different things. A context driven tester will reject best practice, knowing that best practice in testing is a fallacy. However, they see that there are many good practices that will morph depending on the project.

Context Driven does not mean taking an ISO or similar standard and watering it down to something one is able to do and can still charge $$$ for without blushing. You stand corrected. You are not (yet) a context driven tester.

6) As for the getting along bit. Different schools of software testing do not all get along because the paradigms behind them are completely incompatible. We can have mutual respect, but such respect is earned and built over time and through shared experience. There have been many attempts, more or less successful, for the differing factions to communicate. The discourse is what makes us progress, not the harmony. We admit that there are discussions that may not sit well with many an individual, and we can understand that the tone of some of the more aggressive leaders in the field can rub people up the wrong way, including us sometimes. But we are human after all, and we can see from history that discourse is our MO and that it sometimes escalates. It escalates because we are passionate about our profession and want to see it flourish. We want to see our field move forward and be respected, and provide real, lasting value to the projects we work on.

It is a widespread illusion that testing is all known and defined. We'd argue the exact opposite: the whole of IT is still in its infancy and evolving. How dare we be so arrogant as to assume we know everything or have a best practice? Having worked in testing for 10, 20 or more years doesn't mean we are right (though we could still be successful from a financial perspective). For how long did humanity believe the world was flat? That sicknesses were caused by demons, that the atom was the smallest particle, that witches existed, that the Earth was the centre of the Universe… and all those that believed these things were learned, intelligent, successful and totally wrong individuals.

7) We always find it quite hard to follow "If someone else is doing the same however in a manner you don't like or agree with then unless you are that person's manager et al, "live and let live"". There are non-combative ways to express an opposing viewpoint and challenge someone. How else do people get exposed to new ideas and improve their thinking? This is what many in our community are attempting to do by attending peer conferences like KWST, OZWST and WeTest, and by engaging in social media. Secondly, these are issues that affect the profession at large, even if they occur in apparent isolation. They set the expectation of what a software tester does and the value they provide on a project (and ultimately what organisations are willing to pay for such services). They affect the market and the demand for certain services.

Thirdly, it's a matter of personal ethics. There are certain practices that we feel provide little value at best, distract people from what's important, and can actually mislead them at worst. Some of us feel compelled to challenge these practices when we hear about them. The practice of counting test cases is one of them, which is why we challenged you on it in the hope that you would tell us what you actually did with the data: how you collected it, how you manipulated it, and then what you did with the outcome.

We are glad that you reflected on your experience at KWST, but we do feel our 'camp' has been misrepresented in your article. Since you have publicly disclosed your experience and thoughts, we would also like to express our view in the form of this email, as we are part of this story.

Best Regards

Brian Osman, David Greenlees, Oliver Erlewein


A little addendum for those that might have gotten confused. Geoff references IEEE 802.3 as a testing standard. What he probably means is IEEE 829. The standard he is referencing is the Ethernet standard. Thank you Laurent Bossavit (@Morendil) for pointing that out.

I am also planning a response to the article. There will be some crossover with some of the points you make above, but I will be focussing on the point that I believe the current division between context driven and non context driven testers is a good and necessary thing – and not something we should try to smooth over.

I am hoping it will be in the next NZ Tester – but if not I might ask you guys if I can submit it here.

1) Just sharing my experience here guys, I stated what I observed & what it reminded me of, not what I thought the reality might have been.

2) Again, just sharing my experience. Yes I'm aware that ET has been around a long time, however this was the first time that I'd heard it promoted as the only viable way to test, which is the way I interpreted it from KWST #2 & other discussions since. I'd always seen ET as part of the testing arsenal, not the arsenal in its entirety, & that concept is very much "brave new world" to me.

3) Understand where you’re coming from however given previous gaffes of me calling the spade a spoon, I was trying to find the best way of describing the two approaches hence the term was used colloquially rather than scientifically.

4) Not at all. Just because I haven't publicly declared my support of CDT & ET to the world to the exclusion of all else doesn't mean that I haven't taken on board the possibility of benefits in an appropriate context, nor that I've never questioned or considered better or different ways.

5) Everyone I talk to who terms themselves a context-driven tester also turns out to be an ET proponent, which to my mind negates the whole CDT concept, if I’m understanding it correctly.

6) And I think this is the crux of the issue. ET proponents appear to position it as the next step in testing evolution and anyone who does not agree to drop everything & follow is a world-is-flat religious test-scripting nutcase. I have no issues with the ET “movement” other than this.

7) The intent of the article was not to discredit any camp & if it seems like yours has been misrepresented, can I respectfully suggest that it may be because of the position it appears to me & others you have taken of ET being the only reasonable way to test. If I have misunderstood this please do feel free to correct.

The Tester Magazines are simply attempting to bridge a gap & refocus the testing community back on reality, which is that we all have a job to do and (hopefully) want to do it to the best of our abilities. We are therefore challenging everyone to concentrate energies on doing just that without all the fuss and grandstanding.

Hi Geoff
Someone once said to me “Often when we think we’re arguing about conclusions, we’re actually arguing about premises”. I wonder if that’s what has happened here.

I can only speak for myself, but I feel fairly confident that I know the positions of Brian, Richard, Oliver, and James Bach, and it's that no one would ever call ET the 'only viable way to test'. That is, unless we had carefully defined upfront what we mean by ET, and what we mean by 'test' (see the checking vs testing discussions). However, if we take "test" as "stuff that testers do on a project", then no, no one would ever suggest that.

However, I do take the position that good scripted testing results from good exploratory testing. But like the ‘manual vs automation’ debate, if there isn’t a good reason to script, why script?

However, this doesn't really matter, because my memory of the discussion was quite different. If I recall correctly, the discussion was on the practice of counting test cases, and estimating the number of test cases required to test something, as a method of estimating testing effort and tracking progress. There was some challenging of this idea, and it was that issue – not scripting vs exploratory or anything else – that was discussed (unless I missed some offline discussion).

Just a couple of other points: “5) Everyone I talk to who terms themselves a context-driven tester also turns out to be an ET proponent, which to my mind negates the whole CDT concept, if I’m understanding it correctly.”

That's an interesting point, but it's more of a reaction against the factory school way, which promotes 'repeatable' step-by-step test scripts with expected results as the only way to test, with ET relegated to an afterthought and given labels such as "unstructured" testing or "informal" testing. A lot of CDTers, myself included, focus on this aspect a lot, as we see ET, when structured appropriately, as a much leaner and more effective way to test. Maybe that makes me come across like you mention in point 6, but I see it as an equal but opposite reaction to the factory school doctrine.

But coming back to “Often when we think we’re arguing about conclusions, we’re actually arguing about premises”. Maybe we’re coming from a completely different starting point. The ISTQB glossary defines testing as:

"The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects."

This definition definitely puts testing in more of a confirmatory role: to check that certain stated claims are met. This would emphasise a more scripted approach.

Whereas I, and many others, prefer Cem Kaner's definition: "Testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test."

This definition puts testing in a much more investigatory role: to discover problems with the product. And this emphasises a more exploratory approach.

Geoff,
Let’s stick with the numbered theme, as I’m sure you value consistency.

1. If this was not reality, then why did you use quotation marks in your article? “yes of course you can come”. You’re quoting the organisers incorrectly. So was it really what you observed? We also still don’t see how your gold elite card status has any relevance to our selection processes.

2. You’re getting ET confused with CDT. In our reply we were talking about CDT. The two are very different.

Can you clearly recall the time when ET was promoted as the only viable way to test? We were all there, with many others, and none of us can recall this. Please also note that the CDT principles clearly state there is no Best Practice, just good practice, so there is no way we could promote ET as anything like a best practice.

3. Perhaps if you were looking to find the best way of explaining them, you should have asked, or perhaps researched?

It's been proven time and time again how dangerous it is in our industry to use colloquial language to explain things. We often have different definitions for the same words and terms, let alone for different ones. When conferring with each other we need to be clear and concise, and if we're not sure then we need to say so, in order to move forward and not get caught up on labels unless it's required. We would think this is especially important in a public press such as your magazine.

4. Apologies if we misunderstood, however through your writing it definitely appeared that you had not reflected, or undertaken a great deal of research into CDT or ET. If you had, then the title and content of the article would have been completely different (see above).

5. We don’t think you understand correctly. We may be proponents of ET, but ET can also take many forms, shapes, and sizes. ET is not a process; there are no steps that must be followed on every occasion. ET is the simultaneous act of learning, designing, testing, and evaluating. This by no means has to be done the same way each time.

Therefore, even if we do promote ET, we are allowing for many different ways to undertake it: whatever way suits your context. Your misconception here emanates from the false assumption that we treat ET as a testing panacea. As stated above, this is an impression that only you have.

6. Completely disagree. Oh, and we think you’re mixing up ET and CDT again. Your point would have made more sense if you had written CDT instead of ET.

Of course, we can only speak for ourselves and those that we know, however we don’t know anyone from our community that is not willing to listen to other’s views on testing. Yes, we may challenge as we see fit, but how else do we progress our wonderful craft?

It is at this point that we see many factory school proponents walk away from the discussion. If they are not willing to demonstrate the effectiveness of their approach, then how are we supposed to learn from it? From our own investigations and knowledge of the processes, we know this methodology is being overused and misused. It certainly has its place, but not in the way it is being advertised and promoted.

Let’s have the debates, like adults. Let’s learn from them. Let’s build a better craft.

7. You have clearly misunderstood, Geoff. This should be clear from the responses above. As our response has shown, it did not only "seem" as if we were misrepresented; you actually did misrepresent us.

“The Tester Magazines are simply attempting to bridge a gap & refocus the testing community back on reality, which is that we all have a job to do and (hopefully) want to do it to the best of our abilities. We are therefore challenging everyone to concentrate energies on doing just that without all the fuss and grandstanding.”

We don't see any fuss or grandstanding. The grandstanding and fuss was actually your article, which forced us into this situation. You are the one using aggressive terminology in your original write-up.

What we see is a community of testers wanting to make testing better; wanting the C-levels of the world to respect the craft for what it truly is, which is not simply confirming written requirements and providing meaningless statistics.

This is our reality. You say it yourself above, that we want to do this to the best of our abilities.

And we’re sorry, but your post scriptum does not deserve a response. Again, it is just inflammatory.

Just backing up what Aaron says above. I consider myself to be a strong proponent of ET as part of a context driven test approach.

If I look at a typical approach to a testing project here at Tait, we might start with a test / test system design phase in advance of hands-on exploratory test execution (which is where most of the test effort goes).

In the later phases we often introduce some scripted elements into our testing – either through automation or (if we need it) as part of an acceptance test process.

In this case the test scripts themselves are an output of an exploratory test process, produced to meet a specific set of project / customer / partner needs.

I stick the banner "Context Driven Testing" across the top of that process because that's the way I see it. It is also a process that puts a heavy emphasis on exploratory test methods – but we do acknowledge and use other tools as well.

So I identify with, and promote exploratory test methods as part of an overall context driven approach to testing. But I use other tools as well.

It might seem like a fine distinction to make – but it is an important one.

Well, I finally got around to reading the article in question. I'm reminded of my eagerness as a youth to watch The Life Of Brian, excitedly wondering "what all the Jihad was about". On that point I found myself a little disappointed.