Sunday, November 19, 2006

For years I have contemplated writing an article on the problems with Metrics. It seems that Joel Spolsky just wrote it for me.

Now, again, don't get me wrong(*) - Metrics can be very helpful, but they need to be explained in context. For example, let's say that the data points to fewer and fewer defects found per week, or even per day. Why, that's a "converging trend line on defects open" - the product is ready to ship!

Or, it could just be the month of December, when everybody took vacation.

Or maybe a tester quit, and the lead and supervisor are spending a lot of time interviewing and no time testing - without context, the uninformed reader begins to make up explanations for the behavior of the data. That can be very, very bad.

What bugs me the most about metrics, and, well, I'll be brutally honest here - is the purpose they seem to serve.

As they are presented in the textbooks (and I have read a lot of them), Metrics make things easier. After all, once we define "Good" and put performance metrics in place, then all the decision maker has to do is breathe easy (when the numbers keep going higher) or make a stink (when they don't).

Now, read that paragraph again. I submit that in some cases, it's not really about making the job easier - it's about creating a situation where people don't have to think. After all, they can just manage to the numbers.

When I think of the context of software development, every time I can think of a situation where someone was clinging to an idea because the alternative was scary and involved thinking for themselves, it has gone badly. CMM, UML, RUP, CASE, record/playback testing, Agile ... take your pick - when the motivation was to solve all our problems and avoid thought ...