Month: January 2017

Liaisons and other librarians working with faculty should be aware of Elsevier’s recent release of a new bibliometric, called the CiteScore Index (CSI). This metric will be a direct competitor to Thomson Reuters’ (now Clarivate Analytics’) ubiquitous Journal Impact Factor (JIF). The metrics are similar in that they both purport to measure the impact of academic journals as a ratio of citations a journal receives to the citable content it publishes.

While the JIF is based on content indexed in the Web of Science database, CSI will be based on the content in Scopus, which indexes a significantly larger number of titles (roughly 22,000 compared to 11,000).
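The citations-per-item calculation both metrics share can be sketched as below. Note that the details differ in practice: CiteScore counts citations in a given year to items from the prior three years, while the JIF uses a two-year window and a narrower definition of "citable items." The journal figures here are purely illustrative.

```python
def citations_per_item(citations: int, citable_items: int) -> float:
    """Impact score = citations received / citable items published
    in the metric's publication window."""
    if citable_items == 0:
        raise ValueError("no citable items published in the window")
    return citations / citable_items

# Hypothetical journal: 4,500 citations in 2016 to 1,500 items
# published during 2013-2015 (a CiteScore-style 3-year window).
score = citations_per_item(4500, 1500)
print(round(score, 2))  # → 3.0
```

Because the two metrics draw on different databases and different windows, the same journal can receive quite different scores from each, which is central to the comparison discussed below.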

If a journal’s impact is a consistent and measurable attribute, it stands to reason that its impact rank and score would be very similar regardless of who calculates the metric. However, preliminary analyses show that this is not the case. Librarians might wish to read the findings of early comparisons by Carl T. Bergstrom and Jevin West (developers of yet another metric, the Eigenfactor). Surprising no one, they report that Elsevier journals seem to enjoy a boost in ranking under the new CiteScore, while the scores for Nature and Springer journals (now owned by the same company, and a major competitor to Elsevier in this space) are lower than one might expect given their Impact Factors. Additionally, journals published by Emerald, which performed poorly compared to journals from other publishers in the same disciplines in our own analysis, have also seen a boost from the new metric.

These findings underscore the fact that reputational metrics are neither impartial nor objective and are subject to the influences of the entities that produce them. Librarians should be prepared to engage in critical evaluation of these metrics and to answer questions from faculty.

(Thank you to Klara Maidenberg, Assessment Librarian, for providing this information.)