Stop doing evaluation!

Anybody who has even dipped a toe into the waters of public engagement recently will know what I mean when I talk about the dreaded ‘i’ word.

Impact.

It seems to be everywhere – in funding applications, at conferences, even (for those of us fortunate to work in higher education) in the REF case studies. Impact is the word of the day, and proving that you have it is everybody’s goal. After all, why fund something that isn’t having an appreciable effect? Why spend time and resources embedding something into your practice if it isn’t going to change hearts and minds?

The problem, of course, is how to measure this. Evaluation is impact’s much talked-about but highly misunderstood little sibling. Sure, we need to evaluate our projects, but not just any evaluation will do. This is why I have massively stepped back the evaluation I do of my programmes, all but eliminating the usual gamut of questionnaires and surveys that used to be a must-have for any robust initiative.

Think about it this way: have you ever gotten a truly surprising answer to ‘did you enjoy this activity/event/project?’ Most people will have enjoyed it, a few won’t have, and that tells you… precisely nothing. Sure, if you’re developing something particularly new or experimental it might be worth checking if your audience enjoyed it, but nine times out of ten you’ll be able to tell how enjoyable something was without asking.

Same with ‘did you learn anything today?’ The facts and figures people might be able to recall and parrot back five minutes after finishing your event are all but worthless in measuring whether you had a real impact on their knowledge. I can memorise a phone number that I need to call – that doesn’t mean I learned it or that I’ll remember it tomorrow, much less in a year’s time.

True evaluation of impact is going to take a lot more effort and a lot more care than what we’re used to. We need to look at long-term changes, all the while understanding the many complex and intersecting factors at play when it comes to affecting people’s attitudes about science. Groups like the British Science Association and Wellcome have started undertaking studies into the longer-term impact of STEM projects, among other things, but it will still be many years before we have the data we need to know what makes a good, impactful project.

Despite the click-baity title this isn’t a call to stop all evaluation ever. But think about the questions you’re asking and what they’re telling you. Are they really informing best practice and proving impact, or are they just a waste of your audience’s time – and yours?

So: what questions are worth asking, and what impact should we be aiming for? That will be the subject of future blogs but I invite you to continue the discussion below!

2 responses to “Stop doing evaluation!”

It’s a funny one, right? I’m not sure you always need to do evaluations or impact analysis of outreach work – for example, in the instance of public talks that some researchers give for the sheer joy of talking about science. On the other hand, if your goal is longer term than having fun talking science one day, then you need to continually assess it just like any other goal.
It’s a big ask, especially given how many researchers are obliged to do outreach but are given neither time nor resources to do so. I think this is changing, slowly, but there are all kinds of associated pitfalls to dodge before we get it right. I don’t have the solution, but I do think that careful impact analysis, the kind that avoids simple box-ticking exercises, will be a part of that solution.

Very true that good evaluation takes time and resources – researchers are pressed for both among their many other responsibilities, and PE support staff are often no better off! Carving out time in the project plan to really think about your aims and how you’re measuring them, and ensuring you have the time to gather and analyse that data, is important!