David Ainsworth: How do you measure the value of impact measurement?

David Ainsworth asks why a discipline so focused on measurement can provide so little evidence of its own achievements.

If you ever want to be really annoying, as I occasionally do, you can ask someone who works in impact measurement how they measure the impact of measuring impact.

It's a silly question with sensible ends. The charity sector is increasingly being asked to provide evidence of its effectiveness, but those doing the asking appear not to be subject to their own rules. There are no measures and metrics which can tell us whether a growing focus on outcomes and impact has really added value to the charity sector. Nor does anyone seem much interested in developing them.

It's a key question for a number of reasons, not least because the charity sector seems quite inclined to treat impact measurement with deep suspicion and - at times - downright antipathy.

At first glance this seems odd. After all, what could be more sensible than putting some checks in place to see whether you're actually doing any good?

It's not as simple as that, though. Measuring the impact of a charity's work is complex, time-consuming and expensive. It's hard to justify to donors and it diverts resources which could actually be used to help people. So doing it is not trouble-free.

An initial reluctance to dedicate resources to measurement has been made worse because those involved in promoting it have sometimes spoken in a language the sector does not understand, and seemed unwilling to understand the exigencies of life on the front line.

Added to that, much of the requirement to measure outcomes has been championed by funders, who often don't bother to align their requirements with each other or the charity's own processes. When it is done internally, it often seems to have been an exercise in PR to attract funding. It is seen by some as a self-justifying industry.

Suspicions around impact measurement

In short, impact measurement has developed a dubious reputation, and people are suspicious of it. So if you are going to ask charities to do it, it would be useful to be able to counter all of these objections with a single phrase: "Yeah, but it works."

You would have thought that an industry dedicated to measurement would be uniquely well placed to prove its own efficacy, but it has not panned out that way.

There’s some good circumstantial evidence. After all, the health sector has been using really robust metrics for a good long time, and there is definite evidence that doing so is useful.

But health tends to have nice, easy, binary metrics which can be used to test the efficacy of interventions – such as, but not limited to, “Is this bloke still breathing?” And in any case, evidence from one field is notoriously hard to apply in another. So those trying to get the sector to change its ways need to show charities that doing so will help them.

So why isn't there any evidence of the value of measurement?

Perhaps because those involved are worried about where it might end. Measuring the impact of measuring impact, after all, sounds like the social sector equivalent of an Escher painting - disappearing into infinity while going nowhere.

Or perhaps it's because it's too hard. Maybe there are just too many confounding factors to ever allow you to know the benefits of measurement. (It would certainly be difficult to find a control group - an organisation you could measure while it measured nothing.)

It doesn't appear that way, though. It feels more as if it has just never occurred to anyone to really ask. Those working in the field simply took it as read that things were the way they believed them to be.

Ironic, really.

Theory of change

So we can’t actually show that impact measurement has any impact. What do we hypothesise is happening?

First, I don’t believe there is a causal link between being able to demonstrate your impact and getting funding. It ought to be that demonstrating effectiveness is a prerequisite of funding, but it doesn't seem to be. I bet that if you did measure, you'd find little correlation.

There are certainly individual funders who fund preferentially on impact, but in most cases, getting money appears to rely far more on making friends and telling stories, or being able to put a tick in the right boxes. You just have to be good at looking useful. Actually being useful is probably helpful, but certainly not essential.

On the other hand, if you want to provide a better service, you probably need to measure whether what you’re doing works. And not only that, but you need to start the process with measurement built in. The theory of change is part of the jargon which makes impact measurement so tedious. But a theory of change is a simple thing:

1. Decide what change you want to see
2. Decide what you think is the best way of achieving it
3. Identify some way of measuring whether what you did was effective
4. Do your intervention
5. Take your measurement
6. See if it worked
7. If it worked, do it again and tell everyone
8. If it didn’t work, stop immediately, tell everyone, and do something different
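The loop above is simple enough to sketch in code. Here is a minimal illustration in Python — every name in it (`intervene`, `measure`, `threshold`) is a hypothetical placeholder for this article, not anything drawn from a real charity's process:

```python
def theory_of_change(intervene, measure, threshold, max_cycles=5):
    """A minimal sketch of the theory-of-change loop.

    intervene: callable that carries out the intervention and returns its raw results
    measure:   callable that scores those results against the change you wanted
    threshold: minimum score at which the intervention counts as having worked
    Returns True if the intervention kept working, False if it was stopped.
    """
    for cycle in range(max_cycles):
        results = intervene()      # do your intervention
        score = measure(results)   # take your measurement
        if score >= threshold:     # see if it worked
            print(f"cycle {cycle}: worked (score {score}) - repeat and tell everyone")
        else:
            print(f"cycle {cycle}: did not work (score {score}) - stop and change approach")
            return False
    return True
```

The point of writing it this way is that measurement is built in before the intervention starts: you cannot call the loop at all without first deciding what `measure` and `threshold` are, which is exactly the discipline the list describes.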

I tend to think that if you’re not using the process above, you are probably doing a lot of useless or even counterproductive things. I can’t help thinking that many organisations haven’t even got to step 1.

So we can test this. Go out and find the organisations which have used impact measurement religiously. See how much it cost. Count the number of times they were able to identify improvements. Show where it added value.