Are you a victim of your own metrics?

by Arlen Ward

Every once in a while I see a news report about someone hitting a first responder’s vehicle while it is stopped alongside the road. Given the number of flashing lights on an emergency vehicle these days, I cannot imagine the driver doesn’t see them. It seems another factor is at play: the driver’s focus. All the lights and activity capture the driver’s attention, and their vehicle just follows their eyes. Bam.

“When a measure becomes a target, it ceases to be a good measure.” –Goodhart’s Law

For difficult-to-articulate goals, a common practice is to replace the goal with some sort of summary measurement: a value or statistic that gives the decision maker a view of the overall system. The problem is that while this approach may work for simple systems, as systems become more complex we lose fidelity in our model if we rely too heavily on summary statistics. One measurement doesn’t cut it anymore. A single measurement can hide complexity, conceal problems, and obscure visibility into the system, leading to bad decisions made on the basis of a poorly chosen metric.

Model Fidelity: the degree of exactness with which something is copied or reproduced

In his 2016 TED talk “The Surprising Habits of Original Thinkers,” Adam Grant mentioned (9:50) that a lot can be determined about your job performance and work commitment based on the internet browser you use. This doesn’t mean that to get better at your job you need to download a new browser. The measurement is not the target. Another example was a nonprofit I learned about a few years ago. They were collecting children’s books to distribute to families that didn’t have any. The basis of the charity was a study that found that kids who grew up in homes with books did better in school than those who didn’t. Clearly that meant we needed to put books in all the homes. Starting a conversation about correlation and causation was not terribly welcome in the middle of a book drive.

Project and portfolio management are areas where this happens quite a bit. Given the fuzziness of the forecast inputs, using net present value (NPV) as the metric for project decisions reduces a very complex system to a single value, a prime example of decisions based on too little model fidelity. It doesn’t take long before every project seems to have the requisite NPV. Is that because the quality of all the proposed projects improved over that time? That seems unlikely. I think Goodhart would agree.
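To make the point concrete, here is a minimal sketch of the standard NPV calculation with two invented cash-flow profiles (the numbers are mine, not from any real project). Both projects produce the same NPV at a 10% discount rate, even though one depends entirely on a single payoff in year two; the summary number erases that difference.

```python
def npv(rate, cash_flows):
    """Net present value: discount each period's cash flow back to today.

    cash_flows[0] occurs now (t=0), cash_flows[1] one period out, and so on.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Two hypothetical projects with identical NPV at a 10% discount rate:
steady   = [-100, 60, 60]    # payback spread over two years
back_end = [-100, 0, 126]    # everything rides on a single year-two payoff

print(round(npv(0.10, steady), 2))
print(round(npv(0.10, back_end), 2))
```

A decision rule that ranks on NPV alone would treat these two as interchangeable; only looking underneath the summary value reveals the very different risk profiles.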

If the decisions are important and your system is complex, it becomes extremely important to pick your metrics carefully and weigh them against your knowledge of the system.