Publish or perish? Metrics and research diversity

If we want to embed equality and diversity in research culture, any future use of metrics to assess research must not adversely affect specific groups or researchers.

Last year, I attended the Royal Society’s Annual Diversity Conference: ‘Pride and Prejudice – breaking down barriers in science’ and took part in a panel discussion on creating an inclusive research culture. This touched on the role of metrics for research assessment and implications for equality and diversity.

Who could metrics disadvantage?

Over half of those who responded to the consultation for the independent review of metrics in research assessment expressed concerns about increasing the use of metrics in research assessment. Many were worried that further use of metrics could disadvantage under-represented groups, such as early-career researchers, women, those with disabilities, and black and minority ethnic academics.

There are more than 132,000 academic staff in English HEIs. Approximately 44 per cent are female, 4 per cent have declared a disability, 9 per cent are from non-white ethnic groups, 17 per cent work part-time, and 5 per cent are on low-activity contracts. That’s a lot of people who could experience a disadvantage. (See information published by the Equality Challenge Unit for more information about disciplinary differences within these statistics.)

Approaches to research assessment look at ‘track record’. People who are further along in their career and those who spend more time actively doing research, writing grant applications, publishing papers, networking, supervising students and so on, can build up evidence of a stronger track record. In this system, greater input correlates with greater output.

But a focus on productivity creates a disadvantage for early-career researchers, for those whose individual circumstances constrain their ability to work long hours, and for those who have taken time out of research for personal reasons. Yet productivity is celebrated and championed.

Quality and quantity

As a nation, we pride ourselves on the return on our investment in research activity. As highlighted by Elsevier in 2013, the UK represents just 0.9 per cent of the global population, 3.2 per cent of R&D expenditure and 4.1 per cent of researchers, yet accounts for 9.5 per cent of downloads, 11.6 per cent of citations and 15.9 per cent of the world’s most highly cited articles. We call this ‘punching above our weight’ and it encourages us to seek more from less.

We favour the most productive and thereby disadvantage those who cannot spend all their time doing research-related activities.

But it is simplistic to assume that by counting the volume of someone’s inputs we can measure the quality of their research. Less doesn’t mean worse, just as more doesn’t mean better.

Analysis of the 2014 REF shows that the outputs of early-career researchers and staff with individual circumstances are of the same high quality as those produced by other staff – and why would we expect anything else?

REF guidance encouraged HEIs to submit all their excellent researchers for assessment, and the staff selection report found that more early-career researchers and staff with individual circumstances were submitted to the REF than to the 2008 RAE. While this is a positive outcome, demonstrating progress in addressing equality and diversity issues across the sector, it isn’t necessarily how things feel on the ground.

We want to invest in the most excellent, internationally competitive research. So, we evaluate previous research activity and look at track record. We do this by counting and comparing, scoring and ranking.

Traditionally, the academic community has used peer review to judge research quality. Peer review is not a perfect system but it has rules and processes, is open to all, and is readily understood and accepted by researchers, research managers and research funders. More recently, we have also started to consider the use of metrics: things that we can count or measure by numbers.

What is the appeal of using metrics? Well, numbers can give an objective measure for comparison and counting outputs could be a quicker, cheaper and less burdensome way of measuring research activity. We can count lots of things – books, papers, conference proceedings, citations, grant income, hours using a facility, number of collaborations, numbers of staff or students – but does this actually tell us what we want to know about research quality?

In the words of William Bruce Cameron: ‘not everything that can be counted counts, and not everything that counts can be counted.’

Is there a place for metrics?

The challenge is to reconcile three perspectives: How would researchers choose to demonstrate the quality of their research? What information do funders need in order to make their funding decisions? What assurances do policymakers require around research quality and funding?

But where does this leave us in terms of thinking about the use of metrics in research assessment and the implications for equality and diversity? Can we define good, reliable and useful measures of quality and excellence that do not adversely affect groups of researchers? Can any of these measures be metrics-based?

Well, The Metric Tide report recommends the use of ‘baskets of indicators’ alongside peer review, tailored to the research community being assessed, to capture the valuable aspects of research practice, output and impact within all disciplines. Building on lessons learnt from the REF, we must also continue working to embed equality and diversity within research cultures.

Ultimately, the goal is to identify the value, quality and significance of research activity as an indicator of future potential success.