"My vote for the World’s Most Inquisitive Tester is Shrini Kulkarni" - James Bach

My LinkedIn Profile : http://www.linkedin.com/in/shrinik

For views, feedback - do mail me at shrinik@gmail.com

Sunday, December 01, 2013

Connection between Software Metrics and News

I discovered Maria Popova's Brain Pickings accidentally, and I am happy that I did. It is fully loaded with stuff that makes you think almost every time you read her blog. If you have not already signed up for her newsletter and are not aware of Brain Pickings, I strongly recommend you sign up. If you have a curious mind, you cannot afford to miss this "interestingness hunter-gatherer and curious mind at large". Thanks, Maria, for keeping us busy reading and absorbing the stuff you keep serving to a knowledge-hungry, curious world.

In a recent post she explores (or re-explores) the book "Does My Goldfish Know Who I Am?", and in the narration that follows its central theme of curious and urgent questions from kids, I found a paragraph about news. I could immediately make a connection with how software metrics are produced and consumed.

Thanks to my confirmation bias towards anything that criticizes software metrics, I sat down this Sunday afternoon (while busy finishing all my piled-up work) to write this post. If a blogger feels strongly about an idea, they will find the time to write about it.

To the question "What will newspapers do when there is no news?", the book answers:

"Newspapers don’t really go out and find the news: they decide what gets to count as news. The same goes for television and radio ....The important thing to remember, whenever you’re reading or watching the news, is that someone decided to tell you those things, while leaving out other things. They’re presenting one particular view of the world — not the only one. There’s always another side to the story"

Wow, that seems absolutely right to me. Exactly the same thing goes for software metrics. The producers of metrics decide what they want the consumers (managers, stakeholders) to see and absorb, while leaving out some unpleasant things that probably matter. How often have you seen testing produce results that confirm what stakeholders are looking for? Zero Sev 1 and Sev 2 bugs in the open state, and two Sev 3 bugs with clear workarounds. At a release Go/No-Go meeting, what could be sweeter news than this? If, as a stakeholder, you wanted the release to happen, you would not question these numbers at all. Thanks to confirmation bias.

Given management's preference for numbers and summarized data, it is very easy to hide the things that matter. And there is always another side to the story - sorry, the numbers (numbers themselves are astonishingly incapable of telling any story, let alone the right one). Why does this work (or apparently work)? Our brains are wired for optimism: we like to hear good stories (good numbers) and, most importantly, stories that confirm our existing world view. This is where critical thinking comes in as a savior. To me, critical thinking is about questioning one's own suppositions and line of thinking. "Am I missing anything here?" or "Is my understanding right? Should I seek contradictory information, if it exists?" are examples of critical thinking. For software testers this is very CRITICAL - we should be the last people to say "all right, this is right".

Sadly, as is the case with news, the metrics madness goes on: year after year, consultants mint money in the name of software engineering and software process, and metrics rule our lives as software folks.

While I am writing all this, I need to think critically as well: am I being overly negative and dismissive about metrics and news?

4 comments:

You are right, there is another side to the story. It's not always necessary that we play with "numbers" just to make a senior-level "Stakeholder" happy.

IMHO, only once the test team identifies the true "Objective" of collecting data for a "Metric" for a relevant "Audience" can we achieve something valuable.

What is important is how we analyze the information after looking at the Metrics.

For example: within an internal team of 10 testers working on 10 different modules, the team decided to provide information on "Test Coverage" and the related "Defect Density Distribution (severity-wise)" on a weekly basis. Now, we have ten iterations in total, and in the 3rd iteration we looked at the metric and found...

Coverage per module seems good, say 40%, but high-severity functional defects are absent, or only a few low-severity defects are present. Analyze it (IMO) from this point of view: 1. Shouldn't we look back at our test cases and find out whether they really cover the high-risk areas? 2. Have we deployed a balanced amalgamation of test techniques, with the help of functional and technical testers, that can surface defects early in the STLC?
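The check described in that scenario (coverage looks healthy, yet no high-severity defects are turning up) could be sketched roughly like this. This is a minimal illustration, not anyone's actual tooling; the module names, coverage numbers, defect counts, and the 40% threshold are all hypothetical:

```python
# Hypothetical per-module data: (coverage %, defects found by severity).
modules = {
    "login":   (42, {"sev1": 0, "sev2": 0, "sev3": 4}),
    "billing": (45, {"sev1": 2, "sev2": 1, "sev3": 3}),
}

def needs_review(coverage, defects):
    """Flag the suspicious pattern: decent coverage but zero Sev1/Sev2
    defects - a hint that the tests may be missing high-risk areas."""
    high_sev = defects["sev1"] + defects["sev2"]
    return coverage >= 40 and high_sev == 0

flagged = [name for name, (cov, d) in modules.items() if needs_review(cov, d)]
print(flagged)  # → ['login']
```

The point is not the arithmetic but the question the flag raises: a module in `flagged` prompts the team to revisit its test cases and techniques, rather than to report the "good" numbers as-is.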

I don't think it is always about "what we decided to show" and then presenting accordingly to make someone happy. In a team where delivering good quality is the "Objective", the team asks itself questions through the information from relevant and important "Testing Metrics" and then looks at the Test Strategy, Plan, and Design accordingly.