NYU: Measuring Impact, Valuing Investment

Last week, I had the pleasure of attending the Sixth Annual NYU Conference of Social Entrepreneurs here in New York. The day-long conference brought practitioners and academics together for a discourse on measuring impact and valuing (social) investment.

(Before I get much further, some needed disclosure: I sit on the conference’s Practitioner Advisory Board.)

The day kicked off with Jed Emerson, arguably the best-known expert in the fields of strategic philanthropy and impact investing. Of course, Jed began his talk by demurring: he claims he is not an expert in either metrics or impact investing; rather, the expertise is in the room and the larger community. I beg to differ, Jed. After all, your introduction of social return on investment (SROI) is one of the building blocks on which impact investing has been built.

But I digress. To Jed, the question is simple: “Are we maximizing total performance while generating real impact for our multiple investments?”

Easier said than done. Jed's quick (and clear) overview of the history of impact investing made plain that while much has been accomplished, we still have a long way to go. Specifically, he focused on two challenges:

First, the field needs social management information systems to track outcomes; otherwise, as he put it, "you generate crap data."

Second, we need to embed metrics into knowledge management in order to actively improve practice.

Following Jed’s talk was a great panel on impact investing, but as it focused primarily on domestic work, I won’t go into it here on NextBillion.

The afternoon, however, brought Laura Callanan of McKinsey's Social Sector Office to the stage, offering an excellent 10-point guide to good program evaluation. If you are involved with aid or social investing, LISTEN UP. Each point may seem obvious, but taken together they have real potential to help us all (myself included!) raise our game when it comes to program evaluations. Without further ado, the 10-point plan:

1. Hear the constituent voice.

2. Context matters!

3. Give feedback on what works and what doesn't.

4. Exercise rigor within reason. Use the right tool for the job to maintain credibility and understand feasibility; randomized controlled trials, for example, are not the right tool for every intervention. We're in a bubble around the "gold standard" for measurement, so be careful not to get caught up in it.

5. Drive assessment with learning. Start with a question: "What are we trying to learn that will help us do our work better?" Ask what the unintended consequences are and what external environment you're working in. Look at "expectation failures": not necessarily program failures, but things that simply did not live up to expectations.

6. Don't measure everything. "Funders are asking for reams of information that they never really look at." Don't shift the burden of information gathering onto the grantees. The Hewlett Foundation, for example, is cutting down its grant application and reporting forms by asking itself, "Do we use this information or not?"

7. Design assessment and strategy together. Assessment begins at the start of the program and continues throughout, not just for a few months at the end.

8. Don't let assessment sit on a shelf. "How many people have read an audit and done something differently?" How can audits be made more actionable?

9. Collaborate, don't dictate. Make sure your grantees have the resources to provide what you're asking of them.

10. Build off and build up. Learn what has already been tried (and worked) and tried (and failed). Use existing knowledge, and contribute more knowledge back into the sector.