#HEFCEmetrics

In response to HEFCE's call for responses to the proposition of using metrics in research assessment, many shared their thoughts. Here, we've collected the highlights from both blogs and Twitter.
UPDATE: We've added tweets from the October 7, 2014 "In metrics we trust?" event.

The independent review of the role of metrics in research assessment will consider how well metrics can be used across different academic disciplines to assess the excellence of research undertaken in the higher education sector.

Science has been extraordinarily successful at taking the measure of the world, but paradoxically the world finds it extraordinarily difficult to take the measure of science - or any type of scholarship for that matter. That is not for want of trying, as any researcher who has submitted a PhD thesis for examination or a manuscript for publication or an application for funding will know.

Update 24th June: 7,500+ views, 100s of shares, 200+ signatories! And a new post with some responses to further issues raised. The Higher Education Funding Council for England are reviewing the idea of using metrics (or citation counts) in research assessment. We think using metrics to measure research quality is a terrible idea, and we'll be sending...

We have had overwhelming support from a wide range of academics for our paper on why metrics are inappropriate for assessing research quality (200+ as of June 22nd). However, some have also posed interesting follow-up questions on the blog and by email which are worth addressing in more depth.

This Monday marks the end of the open consultation for HEFCE's Independent Review of the Role of Metrics in Research Assessment. Steve Fuller expands on his submission and also responds to other prominent critiques offered. He argues that academics, especially interdisciplinary scholars, should welcome the opportunity to approach the task of citation differently.

This post is a draft of my contribution to the 'Independent review of the role of metrics in research assessment' being undertaken by the Higher Education Funding Council for England (I will think about it over the weekend).

Statistician David Spiegelhalter explained that he thinks gathering metrics is a good thing, so long as those metrics are only used to supplement a more thorough examination of the quality of research.

"The use of a standard set of outputs data to track subsequent impacts of research and support evaluation studies is still experimental, and the most efficient route to capture this information, the most effective system to record it, and the best ways to use the data still need refining."

PLOS offered their perspective, which echoed concerns about the maturity of many bibliometrics measures and their ability to accurately capture the quality of research:

David Colquhoun wrote a passionate statement against the use of any metrics--including citations--in the REF. His most salient point was a call for more studies testing for correlations (or the lack thereof) between various metrics and the actual quality of research, as determined by a panel of expert reviewers. (It's worth noting that some studies to date have shown weak to medium-strength correlations between F1000 expert peer reviews and bibliometric indicators. 1, 2, 3, 4.)

The Higher Education Funding Council for England (HEFCE) gives money to universities. The allocation that a university gets depends strongly on the periodic assessments of the quality of their research. Enormous amounts of time, energy and money go into preparing submissions for these assessments, and the assessment procedure distorts the behaviour of universities in ways that are undesirable.

Next Monday 30th June 2014 at noon is the deadline to reply to the 'Call for Evidence' for HEFCE's Independent review of the role of metrics in research assessment. I share some quick notes on my personal position. The opinions expressed here are solely my own.

Research manager Simon Kerridge shared his thoughts on the drawbacks to simply adding bibliometrics to the existing REF review process, rather than optimizing the REF review process to better accommodate them:

Bibliometrics could make research assessments more robust and transparent. But they might also create more work than they save, cautions Simon Kerridge. After the 2008 Research Assessment Exercise the Higher Education Funding Council for England, spurred on by a 2006 government proposal, flirted with using citation counts in its next assessment.