Research evaluation is increasingly influenced by quantitative data. The journal impact factor (JIF), the mean citation count of items published in a journal over the preceding two years, has become particularly salient in this context, fueling what has been called “impact factor obsession”. This trend has met widespread opposition in the scientific community. The DORA declaration, for example, recommends that journal-based metrics such as the JIF not be used “as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”. Despite this opposition, however, these metrics continue to flourish.

The legal field has not escaped this ‘metrics’ wave. Law schools and legal journals are ranked by multiple global rankings. The key rankings for law schools are the Times Higher Education and Shanghai University subject rankings for law and the SSRN ranking of U.S. and international law schools. These global rankings are accompanied by local ones, such as the influential U.S. News ranking in the U.S., the Guardian’s ranking of UK law schools and the University Magazine ranking of the best Canadian law schools. Law journals are measured by four different rankings: Clarivate Analytics’ Web of Science Journal Citation Reports (JCR), Elsevier’s CiteScore, Scimago and Washington and Lee. Despite their quantitative appearance, these metrics’ pretense of objectivity is illusory. Because the metrics, and the bodies that produce them, exert growing influence on research evaluation, their structure and methodology deserve close scrutiny. In our paper we examine one particular metric, the influential ranking of law journals in Journal Citation Reports, and critically assess its structure and methodology. I will discuss our findings in the next post.