Question: What metric or easily understood information can my test team provide users, to show our contribution to the software we release?

I just got back from vacation and am looking at a beautiful pie chart that shows the following per iteration:

# of features delivered

# of bugs found in test vs. prod

# of bugs fixed

# of test cases executed

After a series of buggy production releases, my team (or at least the BAs) has decided to provide users with colorful charts depicting how hard we’ve been working each iteration. My main gripe is being asked to give my BAs a single # representing executed test cases.

Second, the pie chart looks like all we do is test. One slice lists 400 tests. Another lists 13 features...strange juxtaposition.

Third, I’m not even sure how to produce said count. I certainly don’t encourage my test team to exhaustively document their manual test cases, nor do I care how many artifacts they use to store distinct tests. Do I include 900+ automated UI test executions? Do I include thousands more unit test executions? Does the final # tell users anything about quality? Does it represent how effective testers are? Not to me. Maybe it does to users...

PR is important, especially when your reputation takes a dive. I, too, want to show the users how hard my QA team works. I want to show it in the easiest possible way. I could provide a long list of tests, but they don't want to read that. What am I missing? What metric or easily understood information can my test team provide users, to show our contribution to the software we release?

When testers found bugs that were already in production, we used to just fix them. Soon we realized, fixing them may do more harm than good. For example, we have a time control in one of our apps that accepts input in a format like HH:MM:SS. We noticed inconsistent behavior across instances of these controls in the app; things like some control instances would force leading zero while others would not, some would allow double-clicking-to-select time units while others would not.
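The standardization fix we attempted amounts to routing every control instance through one shared formatter. A minimal sketch of that idea, assuming a hypothetical `normalize_time` helper (not our actual app code):

```python
import re

def normalize_time(raw: str) -> str:
    """Normalize a time string to canonical HH:MM:SS,
    forcing leading zeros on every unit."""
    match = re.fullmatch(r"(\d{1,2}):(\d{1,2}):(\d{1,2})", raw.strip())
    if not match:
        raise ValueError(f"not a valid HH:MM:SS time: {raw!r}")
    h, m, s = (int(part) for part in match.groups())
    if h > 23 or m > 59 or s > 59:
        raise ValueError(f"time out of range: {raw!r}")
    return f"{h:02d}:{m:02d}:{s:02d}"

# If every time control funnels input through the same normalizer,
# the leading-zero behavior is identical everywhere in the app.
print(normalize_time("9:5:3"))     # 09:05:03
print(normalize_time("09:05:03"))  # 09:05:03
```

That consistency is exactly what the bug we logged asked for; as the next part of the story shows, the users had other ideas.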

We logged a bug to standardize these time controls throughout our app. When the bits went to prod, the users screamed bloody murder. They hated the change and complained about disruptions. It turns out the users didn’t even know the inconsistency existed in the first place. As devs, BAs, and testers, we’re in and out of said time controls all over the app. But in production, users tend to work in only one of about 10 different modules, based on their jobs. They couldn’t care less how the time control worked in neighboring modules.

“Don’t fix bugs unless users want them fixed.”

This mantra also applies to larger problems testers find. A room full of devs and testers can pat themselves on the back, thinking users will love them for certain bug fixes, only to find the users had adjusted to the broken code and want it back. And the danger increases the longer your app has been in production.

“Oh! I just came up with a killer test. It’s totally going to fail! I know the dev is not coding for this scenario. In fact, the whole team will be impressed that I even came up with this brilliant test. Dude, I can’t wait to see the look on the developer’s face when he finishes his code and I log this bug. He’s totally going to have to refactor everything. I’m such a sneaky tester, he he he…”

This post was inspired by a comment made by my former QA Manager and mentor, Alex Kell, during an excellent agile testing presentation he recently gave.

Who am I?

My typical day: get up, maybe hit the gym, drop my kids off at daycare, listen to a podcast or public radio, do not drink coffee (I kicked it), test software or help others test it, break for lunch and a Euro-board game, try to improve the way we test, walk the dog and kids, enjoy a meal with Melissa, an IPA, and a movie/TV show, look forward to a weekend of hanging out with my daughter Josie, son Haakon, and perhaps a woodworking or woodturning project.