Tuesday, 27 March 2012

Alternative metrics for journal quality: the usage factor

Jayne Marks is starting things off for us on another glorious day here in Glasgow. Usage factor vs impact factor - why will the extra metric be useful? What research has been done so far? What next?

Usage factor vs impact factor

With pretty much all journals online, and COUNTER well established and respected, we have a good source of reliable data for exploring individual journals and their usage, as an alternative to citations (which underpin the impact factor). The impact factor, while widely respected and endorsed, is not equally applicable across disciplines (it's optimised for the hard sciences; in other areas, e.g. nursing or political science, content can be well used and valued but rarely cited) and is US-centric. The usage factor will provide a new perspective, available and applicable to all disciplines and to any publisher prepared to provide the data. It will better serve those disciplines where usage of the content is more relevant than citations.

Authors, editors and publishers were surveyed to assess whether such a metric would be of interest. Example data - 150,000 articles - was analysed to model different ways of calculating a usage factor (detailed report available from CIBER).

How is the usage factor calculated?

To avoid gaming, the usage factor is calculated using the median rather than the arithmetic mean. (Comments welcome to elucidate that - my maths is a bit rusty!) A range of usage factors would be published for each journal (afraid I missed the detail on this while pondering the arithmetic - I think it would mean across a range of years). The initial calculation for a title would be based on 12 months of data within a maximum usage window of 24 months. A key question is when the clock starts ticking - when the article is submitted? When it's published online? When it goes into an issue? Does it matter if publishers decide this differently?
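To see why the median resists gaming better than the mean, here's a quick sketch with made-up download counts (the numbers are purely illustrative, not real COUNTER data): inflating a couple of articles' downloads drags the mean up dramatically, while the median barely moves.

```python
import statistics

# Hypothetical download counts for ten articles in one journal
# (invented numbers, for illustration only).
downloads = [12, 15, 18, 20, 22, 25, 28, 30, 33, 40]

# A "gaming" attack: a script massively inflates two articles' counts.
gamed = downloads[:-2] + [5000, 8000]

print(f"mean:   {statistics.mean(downloads):.1f} -> {statistics.mean(gamed):.1f}")
print(f"median: {statistics.median(downloads):.1f} -> {statistics.median(gamed):.1f}")
# The mean jumps from ~24 to over 1300; the median stays at 23.5,
# because it depends only on the middle of the ranked distribution.
```

Any robot attack would have to inflate usage across most of a journal's articles to shift the median, which is much easier to spot in the COUNTER audit process.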

What might the future hold?

The team behind the usage factor suggests that ranked lists of journals by usage could be compiled, e.g. by COUNTER, to enable comparison. There are concerns about gaming, but the robustness of the COUNTER stats and the use of the median should help to repel most attempts (CIBER's view is that the threat is primarily from machine rather than human "attack"). The project's leaders continue to explore gaming scenarios and welcome input from "bright academics" who can help posit new ones ("we're not devious enough").

Work is still required on the infrastructure, for example, to understand how we can extract data from publishers and vendors. The project's ongoing work is being led by Jayne Marks (now at Wolters Kluwer) along with Hazel Woodward and a board of publishers, librarians and vendors. Thanks were noted to Peter Shepherd and Richard Gedye.

Question: shouldn't we be moving to article-based metrics? Marks: it could break down to that in due course.