Last week I had the good fortune to speak at the launch of a new book, Helen Kara’s Research and evaluation for busy practitioners: A time saving guide. Whilst the event – and the book – weren’t aimed specifically at a voluntary sector audience, my task was to think the unthinkable (or should that be research the unresearchable?) and ask just where the voluntary sector is at the moment when it comes to research and evaluation.

I reckon it’s a pretty mixed bag. My guess – i.e. not evidence based – is that once we put aside the charities whose primary purpose is research or public policy, there are parts of the sector that excel in being evidence driven: I’m thinking of the large national children’s charities, for example – organisations with dedicated research and evaluation staff and, hopefully, a culture of learning. But after that, it is still the case that there is a long tail of organisations where research and evaluation play a much smaller role in developing policy and practice. The reasons are likely to vary: lack of resources, or how they are prioritised, will figure highly. In many organisations, research and evaluation will be part of somebody’s job role, a slice of their daily time budget.

But is it still the case that there are organisations out there with a more difficult relationship with research and evaluation? What you might call the research refuseniks, the gut instinct operators, and in some cases the research abusers. Whether by accident or design, it strikes me that we need to help this long tail get better in its use of research and evaluation. The evidence hurdle gets higher – and harder to jump over – every year. The ‘do we make a difference’ challenge now means that organisations with small numbers of staff are sometimes expected to undertake complex needs analyses, cost benefit assessments, summative evaluations, SROI calculations, and so on. My point is, this is often skilled stuff. Given that there are probably few full time researchers, what should we do?

I’m conflicted on this. My desire to train practitioners and managers is tempered by the question I often ask myself: if the central heating broke down at home, would I try and fix it myself? The answer being a definite no. So I think the answer is an approach we’ve tried on the Cass Charity MSc, where I teach a module called Research Methods for Managers. Let’s think about practitioners in the voluntary sector who have to deal with research and evaluation as being creators, curators, commissioners and consumers. Let me explain.

The research creators are, as the name suggests, undertaking primary research. I think we need to put this group in contact with each other (more peer support, such as via VSSN) and provide them with guidance that supports without leading them into any bear traps – ARVAC’s guide to getting started in community research is a good example. We should also help to network these people with the academic community, where a surfeit of initiatives is trying to build community-university partnerships. But I’m not sure how realistic it is to aim for more creators – and I know I am possibly going against the research co-production grain here.

More numerous are likely to be the curators: those who are trying to pull together ‘state of the art’ type reviews – ideally, in my experience, in 2-4 pages. For this group I reckon we should be trying to make use of tools such as Rapid Evidence Assessments or, even better, getting our academic colleagues to produce more such briefings. One wonders whether the research councils should hand out far more brownie points for such reviews.

The commissioners are increasingly commonplace as voluntary organisations attempt to become more evidence-based. I’ve heard many a comment that much commissioning in our sector is, ahem, wasted. Bad practices – such as the idea that any commission should cost no more than £10,000, or that primary research is always necessary – can again be addressed by better dissemination of some good guidance. I think this is increasingly a critical group that we should support, whether in terms of skills or knowledge of where to buy from.

And finally, the research consumers. As we’ve moved to a knowledge economy we are faced with a deluge of data, information, knowledge, intelligence, insight…but what’s valuable? How do we sort the wheat from the chaff? This is a problem for the commissioners too. I wonder if it’s where we should concentrate our effort – helping busy practitioners and managers to commission and consume research and evaluation by empowering them to know what good research and evaluation looks like. Frameworks are already available to help public sector policy makers judge research quality, for example. Hence, at the launch of Research and evaluation for busy practitioners I argued that if the book was a song, it would be ‘Won’t Get Fooled Again’.

I reckon the advice in the book for practitioners is part of the answer to the sector’s uneven progress towards being ‘research ready’. But it’s only a part. We also need to focus on other parts of the system, for example the funders and commissioners who are asking for evidence. And it’s not enough for voluntary organisations to be research ready: researchers need to be sector ready. As such, I’m glad that NCVO is supporting the Alliance for Useful Evidence. We also need to think about issues at a sectoral, or sub-sectoral, level, such as standards and principles. This issue came up at the launch of Inspiring Impact.

The launch covered many other issues – including how we better translate research into policy and practice (avoiding the sudden reveal, what the excellent Jane Lewis called the ‘Ta Da!’ approach to research findings), and how as voluntary organisations we deal with research that shows we aren’t as good as we think or hope we are. I’ll blog about these another time if someone asks! In the meantime, this is a good book worth buying.

Research and evaluation for busy practitioners by Helen Kara is available to buy with 20% discount here.