Category: Rapid Testing (page 2 of 2)

Last February and March I had the privilege to speak at Belgium Testing Days in Brussels and at TestBash in Brighton about what testing can learn from social science. In a series of daily blog posts I am going to write about this subject: why I chose this topic, what sources I studied and, finally, how I have applied this material to my work.

Rapid Software Testing

In the Rapid Software Testing class I took in 2011, Michael Bolton talked about being empirical and a critical thinker as a tester; about collecting data from experiments using a heuristic and exploratory approach; about reporting by telling stories about testing instead of only reporting figures and numbers. Testing is about providing valuable information to inform management decisions. This awesome class empowered me to connect the dots of things I had been thinking about for years. It also pointed me towards a lot of books and information “outside” the IT and testing domain, and it triggered me to learn more about social science.

Test reports

Do you recognize test reports like this?

I used to write test reports like that. I was counting test cases and issues and advising my clients whether to take applications into production. But what do these numbers tell us? What if we didn’t test the most important functionality in the software? Numbers don’t mean anything without context!

Another example was an assignment I did at a telecom company years ago. Testing was estimated in numbers of test cases. “We have 8 weeks to test, so we can do 800 test cases” was a normal way to plan and estimate testing. Somewhere along the project my project manager told me his budget had been cut by 10%. He asked me to drop 80 test cases from our scope of 800. What was he thinking? As if all test cases take equally long to create, execute and report!
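The fallacy is easy to show with numbers. Here is a minimal sketch (the durations are invented for illustration) of why dropping 10% of the test cases almost never equals 10% of the effort:

```python
# Hypothetical example: test cases do not all cost the same to run.
# Durations (in minutes) are made up for illustration purposes.
import random

random.seed(1)
# 800 test cases with execution times between 5 and 120 minutes
durations = [random.randint(5, 120) for _ in range(800)]
total = sum(durations)

# Dropping the 80 *cheapest* cases saves far less than 10% of the effort...
cheapest_saving = sum(sorted(durations)[:80]) / total
# ...while dropping the 80 most *expensive* cases saves far more.
priciest_saving = sum(sorted(durations)[-80:]) / total

print(f"drop 80 cheapest cases: {cheapest_saving:.1%} of effort saved")
print(f"drop 80 priciest cases: {priciest_saving:.1%} of effort saved")
```

Which 80 cases you drop matters far more than the count itself; the number “80” on its own says nothing about saved budget, let alone about the risk you take on by not running them.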

Exact or social science?

Testing and informatics (the science of information) are often seen as exact or physical sciences. People perceive that computers always do exactly the same thing, and this gets reflected in the way they think about testing: a bunch of repeatable steps to see if the program is working and the requirements are met. But is that really what testing is all about? I like to think of testing more as a social science. Testing is not only about technical computer stuff; it is also about human aspects and social interaction.

Traditionally the focus in testing is on technical and analytical skills, but testing requires a lot more! Testing is also about communication, human behaviour, collaboration, culture, social interaction and (critical, creative and systems) thinking. The seven basic principles of the Context-Driven School tell us that people, working together, are the most important part of any project’s context; that good software testing is a challenging intellectual process; and that only through judgement and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Quality

Can we measure the quality of software? And can we do that objectively? When I ask people about quality they often refer to requirements: “Quality is compliance to functional and non-functional requirements”. In my experience I have never seen a document that contained all requirements for a software product. We can argue that requirements engineers have to do a better job. Are they doing a bad job? Can we solve the problem by writing better requirements? When discussing quality I like to use coffee as an example. I like strong, black coffee without any sugar or milk. But what if you do not like coffee? For somebody who doesn’t drink coffee, my cup of coffee has no value at all. But it is still the same cup. How can that be? And how about the taste? Why doesn’t coffee from an average office machine taste very good, while it meets the “requirements” I just mentioned? And what if I change my mind? Not so long ago I drank lots of cappuccino; nowadays I don’t like it any more. That is why I like the definition by Jerry Weinberg and the additions made by James Bach and Michael Bolton.

Quality is value to some person (Jerry Weinberg)
Quality is value to some person who matters (James Bach)
Quality is value to some person at some time (Michael Bolton)

I began to believe that there is much more to quality than requirements alone. I also believe that software quality is very subjective and will change over time. To better understand the subjective, human aspects of software quality I started to study social science in general and our thinking and qualitative research in particular.

Qualitative and quantitative research

Quantitative research is about quantities and numbers. The results are based on numeric analysis and statistics. There is nothing wrong with numbers, but we need to understand the story behind them! Like the test report example: what is the story behind these numbers? What did we test? And how good was our testing? That is where the qualitative aspects come in. Qualitative research is focused on differences in quality and is usually done for more exploratory purposes. It is more open to different interpretations; it accepts and deals with ambiguity, situation-specific results and partial answers. When doing this, testers may be more prone to bias and personal subjectivity.

I updated the first part of my write-up about RST a little. You can find that post here.

Heuristic test strategy model

An important part of the RST course is the heuristic test strategy model, which teaches us the mnemonics. It is fun learning how easily these can be remembered: the dumber the mnemonic, the more memorable it is…

If you want to know what the letters mean, just watch this video I shot during class.

Test Strategy and test plans

After the impressive part on the heuristic test strategy model, test strategy and test plans were discussed some more, with a nice definition of a (test) plan:

Plan (set of ideas that guide your test process) = Strategy (set of ideas that guide your test design) + Logistics (set of ideas that guide your resources to fulfil the strategy)

“Usability is often a testability problem.”

Of course I have thought of testability before. I can also remember several occasions where I have asked developers to help me with scripts or tooling to do my work more efficiently and more rapidly. On the other hand, are testers asking for testability enough? Testers can make their work far more efficient by collaborating with developers. Ask yourself the question: “How often do I discuss testing with developers?” In agile development this is becoming common practice more and more, but shouldn’t all testing be facilitated by good tooling? It will help testing become more rapid. Let the computer do the counting (in logging)!
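As a small illustration of letting the computer do the counting: a sketch (the log format and its lines are invented) that tallies log entries by severity, the kind of tiny tool that makes a product more observable and testing more rapid:

```python
# Count log lines per severity level - a tiny example of tooling that
# supports testability. The log format shown here is invented.
from collections import Counter

log_lines = [
    "2011-06-08 10:00:01 INFO  user logged in",
    "2011-06-08 10:00:03 ERROR payment failed",
    "2011-06-08 10:00:04 WARN  slow response (2.3s)",
    "2011-06-08 10:00:05 ERROR payment failed",
]

# In this format the severity is the third whitespace-separated field.
severities = Counter(line.split()[2] for line in log_lines)

for level, count in severities.most_common():
    print(f"{level}: {count}")
```

A dozen lines like these, pointed at a real log file, answer “how many errors did we see during this test run?” instantly and without counting mistakes.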

The faster we can test, the more chance we have to find bugs. Bad testability gives bugs a chance to hide and is therefore a serious issue. RST gave me insight into the importance of testability, and while studying the appendices of the course material I found some nice heuristics (Heuristics of Software Testability by James Bach, Satisfice, Inc.) testers can use to gain insight into testability: Controllability, Observability, Availability, Simplicity, Stability and Information. While reading I realized that this is an obvious list, but I can’t remember a single project I did or have seen where testers were aware of all these factors that influence their work.

Safety language

Another eye-opener was safety language! Testers tend to be very precise. Results are compared in detail, using 5 decimals if possible, and after a couple of tests we say: it works! Since I attended Michael’s TestFraming tutorial at TestNet in July, I already knew the importance of telling good stories:
1. The product story, about how the product works, what fails and what might fail.
2. The testing story, about how we have tested the product, what we want to test and what we won’t test at all.
3. The story about how good testing was, the testability of the product and the risks and costs.

This subject also came up during the intervision session we had last week at work, where 6 colleagues who went to EuroStar last year shared their experiences and takeaways. One of the topics discussed was “elevator pitches”: the importance of presenting and effective communication.

Accuracy versus precision

“Sometimes we testers spend too much time finding precise answers where accurate answers would do.” The difference between accuracy and precision is nicely demonstrated on Wikipedia; I like the target analogy very much. Another thing I took away as an obvious point, but very true and not applied often enough: in testing we should let the risk define the level of accuracy and precision needed.
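The target analogy can also be put in code. A sketch with invented measurements of a known true value: accuracy is how close the average lands to the truth, precision is how little the measurements spread:

```python
# Accuracy vs. precision, illustrated with invented measurements
# of a quantity whose true value we take to be 10.0.
from statistics import mean, pstdev

TRUE_VALUE = 10.0

precise_but_inaccurate = [12.01, 12.02, 11.99, 12.00]  # tight cluster, wrong spot
accurate_but_imprecise = [8.0, 12.5, 9.5, 10.1]        # scattered around the truth

def accuracy_error(samples):
    """Distance of the average from the true value (lower = more accurate)."""
    return abs(mean(samples) - TRUE_VALUE)

def precision_spread(samples):
    """Standard deviation of the samples (lower = more precise)."""
    return pstdev(samples)

print(f"precise group:  error {accuracy_error(precise_but_inaccurate):.2f}, "
      f"spread {precision_spread(precise_but_inaccurate):.2f}")
print(f"accurate group: error {accuracy_error(accurate_but_imprecise):.2f}, "
      f"spread {precision_spread(accurate_but_imprecise):.2f}")
```

The first group is very precise but consistently wrong; the second is roughly right but noisy. Risk should tell us which of the two qualities the situation actually demands.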

A quote to finalize this post: “Testing is about questioning and learning under conditions of fundamental uncertainty”.

Via Twitter I read the post by Brian Osman blogging about his experience with RST by James Bach, which triggered me to finish the first post on RST today.

On 8, 9 and 10 June I attended the Rapid Software Testing training in Utrecht. Michael Bolton was invited by my employer to train almost 50 testers and test managers in one week. This great course got me thinking and my mind is overflowing with ideas! This post captures some interesting topics from the course, based on the notes I took.

Inspiration!

It was cool to see how Michael really knows how to inspire his audience. The first day gave me a nice view on testing vs. checking. This wasn’t new to me since I have seen Michael speak on several occasions. The part about “It works really means: it works, to some degree, on my machine, and it appears to fulfill some requirement at some point in time” made me think about the meaning of testing and how others look at it. The view that testability is important and the great use of heuristics also inspired me to start using these in my daily work.

Asking questions

Michael is brilliant at asking questions and demonstrating “good” and “bad” behaviour by testers and stakeholders. The value of credibility and the use of safety language were discussed. It was interesting to see that the whole class agreed with Michael without much discussion. I can imagine that lots of testers agree, but in our daily work we do not use safety language very often. We should train ourselves more in using it and in carefully formulating our testing story. Testers should also be asking more questions! A great sentence I heard Michael use a couple of times could help us: “Hmmm, interesting, let’s talk about that!”

Great testing questions

During the course we made a list of Great Testing Questions on a flip chart. This is the list from our class, but there are a million more questions testers can ask.

Who is my client?

Can I ask questions?

Are there more rules?

What is the budget?

Are there more like this?

Are there more of these?

What is the risk?

But more about safety language and questions later.

How RST can work

The best takeaway from the first day were my thoughts on how RST can work. During one of the exercises we made mind maps while listing product elements. After this 15-minute exercise it became clear to me how powerful using a mind map and heuristics can be for creating test ideas rapidly. My RST buddy Matthias and I worked together, inspiring each other with every new idea. Imagine what our list could become if we asked a designer or programmer to review it. Two-way inspiration in a split second! And how many more new ideas could we derive from the requirements or the client?

The best quotes from the first day (because we were served many) could be: “A tester’s responsibility is to remain unsure when everybody is sure!” and “Testing is about questioning and learning under conditions of fundamental uncertainty”.

Context-driven tester?

Since I did the Rapid Software Testing course I know: I am (or at least want to become) a context-driven tester! Although I often practised the basic principles of the context-driven school in the past and “Lessons Learned in Software Testing” has been my favorite testing book for years now, the course made it perfectly clear to me that testing should be context-driven. RST: a must-do for every tester who takes his or her job seriously.