Why Performance Measurement Fails

'I always like to plan ahead; that way, I don't do nuthin' right now.' – From the (vastly underrated) movie 'Tremors', starring Kevin Bacon.

Over the last decade or so, PRS has engaged in performance measurement projects for a number of departments and agencies, across a variety of functional areas. I would not classify all of those engagements as successful. Sometimes, performance measurement fails.

Understand, we did fulfill the terms of the contract, and set up a performance measurement framework that was custom-designed for the client. Despite that, in a couple of cases that framework was never really embedded in the management practice of the organization. And in at least one case the implementation took more than twice as long (and cost twice as much) as was originally planned. So we call these 'performance measurement fails' even though technically we did the job.

We've also been brought in by clients to 'fix' existing performance measurement frameworks. Typically these frameworks had been developed but never launched, or launched and then fizzled out.

Why does this happen? We've learned a few things over the years about why performance measurement fails, and how to avert it. Here are a few of the common Key Failure Factors (KFFs) and lessons learned from our experience.

KFF #1 – No dedicated client performance measurement resource. Even with the smartest and handsomest and most modest consultants on the job, it is very tough to make performance measurement really work in an organization without an internal 'custodian' of the process.

The whole key to the process is the consultant handing over their knowledge about performance measurement to the organization. If there is nobody there to receive it, the project will eventually go nowhere.

Lesson 1 – Before the work begins, make sure the client has named an internal custodian who will own the process after the consultants leave.

KFF #2 – Inadequate consultation/communication. We have seen an absolutely beautiful, technically brilliant PfM framework that no manager in the organization really bought into, believed in, or even understood very well. The framework had been developed more or less in isolation in the departmental evaluation unit; the people whose performance was being measured were inadequately consulted.

This was one we (Mr. Snelling and I) were hired to 'fix'; it took us about two days to figure out that it was un-implementable in its existing state, and to suggest another path.

Lesson 2 - It is just about impossible to successfully do performance measurement to someone; you can only do it with them.

KFF #3 - Delay in getting to data. In all our consulting engagements we relentlessly push our clients to get to the point of data collection. We tell them, DO NOT try to create the 'perfect' logic model, or craft the perfect measures, first time out.

Reason 1 – you can't do it (even we can't do it, and we're the handsomest, smartest, etc., etc.). A model is just an approximation of the real thing; don't spend inordinate effort trying to capture the minutiae of your program, just focus on key activities and objectives. Reason 2 – you don't REALLY understand what you need to measure about your organization until you see some results, i.e., data.

So, your goal when first embarking on performance measurement is to identify some key outputs and desirable outcomes that support the objectives of your organization, and then come up with a handful of measures you believe will reflect your results.

If the measures are any good, your people will want to see more, and will work to improve the framework over successive reporting periods.

Lesson 3 – like the Kevin Bacon character quoted above, you can plan and plot and draw logic models indefinitely. You only make progress when you do something; that is, get some data and see what it tells you. Rule of thumb: if you don't have some data in hand 90 days after starting, you are doing it wrong.