SBTM vs QC

It resulted in what might charitably be termed a storm of controversy, with noted testing thought leader James Bach (among others) campaigning against the use of Quality Center (QC hereafter) on test projects more or less as a matter of principle. The discussion (or at least some of it) that took place on 22nd June can be viewed here.

Although there is undoubtedly some merit in James’ assertion that this problem could be solved by way of increased credibility and my simply refusing to use the tool, I must confess to preferring a slightly more diplomatic approach, particularly where my main source of current income is concerned. Perhaps when I’m an internationally renowned testing consultant I’ll have a bit more freedom to pick and choose who and what I work with, but until then… (Though this view does obviously raise the question: which comes first – reputation or client? I’ll save that for another time though.)

So as things stand, I’m working with an agile team on what is essentially a waterfall project. We deliver our product by way of Jenkins continuous integration and BDD/ATDD style testing with a layer of rigorous exploratory testing on top. A large technology outsourcing company then integrates it with their [government] data processing system. I want to manage my exploratory testing in a Session Based manner. They want me to report on my testing via Quality Center.

What’s a tester to do?

First of all I should probably define what Session Based testing means to me, since the inevitable response to this post will be – “but that’s not Session Based Testing.” A Session Based Test (SBT hereafter) is a test idea/case that has been conceptualised as a mission or charter (or tour if you’re a Whittaker fan) and that will be executed as a discrete test session, or block of test execution time. Test ideas/cases therefore map to charters, charters are executed as sessions, and the session notes are de facto test results.
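By way of a rough sketch (the names and example text here are my own illustration, not anything from a real SBTM tool), that mapping of idea → charter → session → notes might be modelled like so:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A test idea framed as a mission, e.g. 'Explore CSV import with malformed data'."""
    mission: str

@dataclass
class Session:
    """A timeboxed, discrete block of test execution against a single charter."""
    charter: Charter
    duration_minutes: int
    notes: list[str] = field(default_factory=list)  # the de facto test results

# One charter, executed as one session; the session notes are the record of results.
charter = Charter(mission="Explore CSV import with malformed data")
session = Session(charter=charter, duration_minutes=90)
session.notes.append("Importer silently drops rows containing embedded commas")
```

The point of the sketch is simply that there is no separate “test result” artefact: whatever the tester writes in the session notes *is* the result, which is precisely what sits awkwardly in a tool built around pass/fail test cases.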

My solution to the problem I posed above is relatively straightforward and most likely not new, but I thought I’d document it for posterity in any event:

There are clearly some problems to be addressed still. Not only am I combining test tools and approaches here, but the project is mixing it up with software engineering methodologies (agile and waterfall), resulting in a weird blend of both. Hence our agile team is delivering a continuously integrated product against a set of change-controlled requirements, not in sprints but in a big-bang towards the latter end of the development cycle. But I digress.

In step 4 we have the notion of finalised charters. This is where our ALM test management tool starts to rear its ugly, constraining head. Clearly in true Session Based Test Management the concept of a finished set of test charters is unlikely to exist. You would instead have something much more like my original mind map, wherein a charter leads to a branch which may lead to additional branches or recursions ad infinitum. This is the true nature of exploratory testing and it simply cannot be managed effectively in Quality Center. Certainly from a tracking perspective, the project test manager had a somewhat horrified expression when I informed him that I would likely add further tests in at a later date [step 6] based on the results of initial test charters. This approach to testing simply doesn’t stack up against QC’s inflexible reporting constraints.

Additional test statuses are also likely to be required for the individual charters, rather than the rudimentary Pass/Fail/Not Complete/Blocked variety that QC provides out of the box. Sami Söderblom provides similar insight here.
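As a sketch of what I mean (the extra statuses below are my own invention for illustration, not fields QC ships with), an extended status set for charters might look something like:

```python
from enum import Enum

class CharterStatus(Enum):
    # The rudimentary out-of-the-box statuses
    PASSED = "Passed"
    FAILED = "Failed"
    NOT_COMPLETE = "Not Complete"
    BLOCKED = "Blocked"
    # Hypothetical session-based additions
    CHARTER_REVISED = "Charter Revised"    # the mission changed mid-session
    FOLLOW_UP_NEEDED = "Follow-up Needed"  # the session spawned new charters
    INCONCLUSIVE = "Inconclusive"          # notes need a debrief before any verdict

# A session that generated new test ideas rather than a simple pass/fail verdict:
status = CharterStatus.FOLLOW_UP_NEEDED
```

Statuses like these acknowledge that an exploratory session’s most useful outcome is often more charters, not a binary verdict – which is exactly the information a pass/fail model throws away.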

Interestingly, Michael Bolton pitched in at the end of my Twitter discussion (see earlier) with his view that [QC style] bureaucracy is unlikely to add value to the project (particularly in relation to the associated [QC] cost), but that it is our client’s right to decide on the form of reporting. He neatly summarised what I believe to be the spirit of the dialogue, being that “we’re obliged to offer (at least) more efficient service.”

I’d like to think that I’ve gone some way towards achieving this, given the QC dogma our technology outsourcing partner bring with them. Maybe in the next phase of the project they’ll see the value of SBT and eliminate QC as a reporting tool entirely… But I doubt it.

- Simon

9 Comments

Nice post – a lot of testers find themselves in situations like yours (I’m talking to one at the moment and trying to help him find a good strategy so he can still be effective).
The Twitter discussion doesn’t appear – the link goes to your Twitter account and I don’t want to go scrolling through it. A shame, as it sounds like a good discussion to read.

Thanks for reading. I’ve updated the link so it points here which does an ok job of showing [some of] the debate. Unfortunately there doesn’t appear to be an elegant way of linking to Twitter conversations at present!

Just realize that you are writing about SBTM, which I created specifically in defiance of a client who wished my team to define, record, and count test cases and put them in Quality Center. They asked me to do something, I risked looking obstinate by saying no. And now you are benefiting. You’re welcome.

You have to ask yourself, once in a while: how do famous testers become famous? Is it by doing exactly what they are told? Absolutely not. There are many many experienced testers out there that no one has heard of, and no one ever will, because they either have no fresh ideas, or they don’t dare to offer them.

Maybe you feel helpless right now, but… What’s your plan to become the sort of person that people will listen to? What’s your plan to leave people behind who can’t appreciate what you do for them?

Thanks for reading/commenting James. I guess at some stage I may need to take a harder line. On this occasion I didn’t see the merit in doing so, hence the post.

It’s not really a matter of feeling helpless since, as you have noted previously, a tester always has the option of resigning, which in some contexts may be the right thing to do. I’m more of a have-my-cake-and-eat-it kinda guy though, so if I can see a middle road, I’ll tend towards it.

Documenting my experiences and the thinking behind them is one way of leaving a legacy for other testers I guess. Hopefully something more substantial will follow in the future… We all have to start somewhere though, right? And please do rest assured James, your thinking and advice are always greatly appreciated!

I pondered the same issue a while back with a previous employer but quickly came to the conclusion that trying to map qualitative data from something like SBTM onto a quantitative reporting model (QC or any other test case management system) was a waste of time and would ultimately betray the spirit of the changes I was looking to introduce. You can read my paraphrased discussion with them on quantitative versus qualitative here – http://bit.ly/OdMvUa and http://bit.ly/dWtmqJ

I tend to find a quantitative approach to reporting encourages distance from the stakeholders; as a result more emphasis is put on the numbers, coupled with less understanding of the story behind them. They’ll simply glance at the numbers, make a deduction and move on. Give them a qualitative report however, and they’ll start asking questions, which in turn encourages communication and brings everyone closer together.

Thankfully, my previous roles were about advocating change (much like my current role) so I had an audience who were willing to listen.

Instead of trying to contort all the valuable information you have from your testing sessions/charters into the QC beast with a pass/fail/blocked delineation, why not sit your stakeholders, managers, etc. down, explain that you strongly believe there’s a better way to do this, and get them to agree to let you run a Quality Center-free exploratory test effort so you can report the results in a qualitative fashion? Either do this for a particular iteration or, better still, side-by-side with another QC-based effort to illustrate the failings of the quantitative approach.

Do it well and you get the traction (and credibility) you’re looking for to take this forward and perhaps make your projects totally QC free.

If they’re resistant to the idea in the first instance or even worse refuse to look at anything that doesn’t involve QC, then perhaps this company isn’t the place for you?

Thanks Del – your insight is much appreciated. Context is clearly important here and I should mention in my defence that we’re up against quite a tight deadline with a massive (£30 million) penalty if we fail to deliver. Under these circumstances, advocating a change as drastic as removing QC from the equation entirely might be somewhat difficult for the stakeholders to stomach. Arguably we might end up with a better product as a result of the decreased overhead, but in my opinion, that is a discussion for another time. Next iteration possibly.

I’d also like to re-state (Twitter conversation refers) that I’m not encouraging the continued use of Quality Center. Merely trying to find a means of peaceful co-existence. It’s interesting hearing experiences from those that have been down a similar path, so thanks for your comments!

Seems like you touched a nerve there, so the discussion drifted away from useful ways to improve working procedures…
So I will try to detach it from “QC” – that’s the last time I’ll write the “devil’s” name in this comment.
(I currently favour XStudio in its free form, and prefer roughly any of these free ALMs over MS-Word test documents.)

What are we looking for, and what do we like in SBTM and the like?
1. Short writing form.
2. Still manageable.
3. Retrieving *only important* feedback into Debriefing with as little work as possible.
4. Retrieving *only important* feedback into Reporting with as little work as possible.
5. Allowing to easily edit existing tests, or add tests we might wish to run next time.

I have to say that one of the things I liked most in SBTM was the insistence on constant Debriefing – I think this is one of the things I miss most in other forms of testing. It’s not that one can’t do it, just that it isn’t done, due to numerous excuses.

Now I look at your diagram above, and it brings me back to a question I once tried to raise and got only a partial answer to:
“What do we gain in using MindMaps vs. a tree in any ALM/test management tool?” – We gain some colour and icon-placing abilities (I guess we could get those in a tree too if we insisted), and we more or less acquire the need to handle import/export whenever we wish to switch between the two.
Depth of trees is not much limited in most tools, so I would just take this block off (I don’t follow the MM hype – sue me).
Some ALMs, like PractiTest, allow you to view the tree (same content) in different views based on attributes – by version, by product & feature, etc.

The depth of the test we write can be just the test name/header in the tree with all other values at their defaults, but hopefully it will also include some test purpose in the details field, to better explain what we aim for.

While running tests from an ALM (and I prefer a short format like XStudio’s Tabular launcher – a manual execution engine), you can still report all the feedback you need – and yes, in most tools it will be useful to add several types of comments, which will make it easier to filter later.
Results and comments are already uploaded seamlessly into the DB.
No need to write bugs in the comments as in SBTM – these are already linked if needed.

In most ALMs, it is quite easy to add additional tests while you execute – much easier than when using MS-Word the way most of us do these days, where the whole doc has to be approved at once…

If we need to show the traces of our actions in a bug, we can use a tool such as qTrace (with all the GUI automation tools out there, it’s amazing that we still don’t have a decent action recorder that produces a readable textual description).
We may also use that to map which objects we have used less during the cycle.

Still need to consider how to follow the session time frames – but this seems like a minor issue (surely a timer countdown is not the preferred means for getting a useful test).

About the author

Simon works with teams of all shapes and sizes as a test lead, manager & facilitator, helping to deliver great software by building quality into every stage of the development process. He specialises in automation and performance testing and is also skilled in Java development and application security testing.
When he's not working, Simon runs the #BrummieTesterMeetup, co-organises #MEWT, hosts #Testbash and edits/writes content for the Ministry of Testing & The Testing Planet. His favourite activity of all, though, is spending time with his amazing family.