---
layout: post
title: It is no use trying to replace the Impact Factor
categories: [open access, impact factor]
tags: [open access, impact factor]
published: True
---
At a session at OpenCon last weekend we discussed how to replace the impact factor. While the actual title of the session was "Taking on the Impact Factor", the subtitle was "how do we reform research assessment?" This gets to the heart of the matter and I wanted to jot down a few notes.

You can't replace the impact factor with something else, because to do so would misunderstand the reasons that people like IF. To "replace" IF, one would need a measure that acts as a proxy for scarcity. If it is as hard to publish in a venue with a high IF as it is to get a job, then IF works well to save labour for people on hiring panels. We know this is damaging and a poor, lazy way to appraise people. But it is precisely what people with little time actually like about IF.

So there's no point suggesting other metrics (at least in a quantified sense) to stand in for IF. Such a replacement is possible, but it would be futile. What we need is what the subtitle suggested: a reform of research assessment practices. This is a social change, not a technological or metrical one. Of course, changing what we measure can alter social practices through incentives, but that is not the same as tackling the core social issues. Changing _where_ we measure can also help to some extent (i.e. at the article rather than the journal level), but this can mean that the _predictive_ measure (even if false) desired by hiring and funding panels is lost.

If we want to get rid of the economic concentration of power caused by IF and other proxy measures for quality, then we need to find viable research assessment practices that don't require massive additional labour and that are at least as good as IF (the latter, many might say, is probably not difficult). Narrative statements about what research did and about people's roles might work here. A mixture of quantitative and qualitative measures might also be worthwhile. Statements of appraisal by reviewers (i.e. manually collated sentiment- and context-aware citations) could be interesting.

I feel that, at the end of the day, we reduce most multi-dimensional data and narratives about quality to numeric representations because they need to function as economic proxies (we can fund 10%; we can publish 30%; etc.). But that's not to say that the way we derive those figures, even if it becomes subjective, lacks power or cannot and should not be changed. In fact, it may be the very quantitative objectivity claimed by the formula of IF that gives it its false god-like status. Perhaps that's what we need to back away from.