Tracking Scholarly Influence Beyond the Impact Factor

“A very blunt instrument” is how Peter Binfield of the Public Library of Science describes the impact factor. It’s handy for librarians and others who make decisions about which journals to buy but not so dandy for evaluating specific papers and researchers.

Mr. Binfield is the publisher of the journal PLoS One and the PLoS community journals, like PLoS Computational Biology. PLoS works on an open-access model; the impact factor doesn’t reign supreme there as it does at so many subscription-based operations. Instead, the publisher emphasizes a variety of article-level metrics: usage statistics and citations, sure, but also how often an article is blogged about or bookmarked and what readers and media outlets are saying about it. The approach is part of a broader trend toward altmetrics, alternative ways of measuring scholarly influence.

Go to any PLoS article online and you will find a “metrics” tab at the top of the screen. It covers five categories: article usage; citations; social networks (currently the bookmarking sites CiteULike and Connotea); blogs and media coverage; and PLoS readers (a ratings system that lets users give an article one to five stars). Readers’ comments get a tab of their own.

PLoS began experimenting with article-level metrics in July 2009. The approach involves “a large basket of metrics,” Mr. Binfield says, but the two most significant categories are citations and usage statistics. For citation data, the publisher draws on four different databases: CrossRef, PubMed Central, Scopus, and Web of Science, with the last two being the most complete, according to Mr. Binfield.
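Because those citation databases overlap, their counts can’t simply be summed. A hypothetical sketch of how per-source counts might be reported side by side (the source names come from the article; the counts and the function are invented for illustration):

```python
# Hypothetical sketch: combining per-source citation counts for one
# article into the kind of "basket" described in the text. The source
# names match the databases mentioned; the counts are invented.

def summarize_citations(counts_by_source):
    """Return each source's count plus a headline 'best' figure.

    The databases overlap, so summing would double-count; reporting
    them side by side and taking the max as a headline number is one
    simple convention.
    """
    best = max(counts_by_source.values(), default=0)
    return {"by_source": dict(counts_by_source), "best": best}

article_citations = {
    "CrossRef": 12,
    "PubMed Central": 9,
    "Scopus": 17,          # Scopus and Web of Science tend to be
    "Web of Science": 15,  # the most complete, per Mr. Binfield
}

summary = summarize_citations(article_citations)
print(summary["best"])  # headline count from the most complete source
```

Showing every source’s number, rather than a single merged total, mirrors the “large basket of metrics” approach: readers can judge the spread for themselves.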

Usage details for each article come from nightly logs run through Counter, a service used by many publishers and libraries to keep track of how often an article is accessed or downloaded. PLoS filters out a lot of robot activity, too, “so we’ve done our best to make the data as clean as possible,” Mr. Binfield says. PLoS keeps a running total of downloads for each article, broken down by month and rendered as a graph.
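A minimal sketch of that kind of pipeline, assuming nothing about PLoS’s actual implementation: tally downloads by month from access-log records after filtering out obvious robot traffic by user-agent substring (the marker list and log format here are illustrative, not COUNTER’s real rules).

```python
# Minimal sketch (not PLoS's actual pipeline): tallying article
# downloads by month from access-log records, after dropping
# obvious robot traffic by user-agent substring.

from collections import Counter

ROBOT_MARKERS = ("bot", "crawler", "spider")  # illustrative list

def monthly_downloads(log_records):
    """log_records: iterable of (date 'YYYY-MM-DD', user_agent) pairs."""
    totals = Counter()
    for date_str, user_agent in log_records:
        ua = user_agent.lower()
        if any(marker in ua for marker in ROBOT_MARKERS):
            continue  # drop robot hits so the counts stay clean
        totals[date_str[:7]] += 1  # bucket by 'YYYY-MM'
    return dict(totals)

logs = [
    ("2011-01-03", "Mozilla/5.0"),
    ("2011-01-15", "Googlebot/2.1"),  # filtered out as a robot
    ("2011-02-02", "Mozilla/5.0"),
]
print(monthly_downloads(logs))  # {'2011-01': 1, '2011-02': 1}
```

The per-month buckets are exactly what a running, month-by-month download graph would be drawn from.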

“The vast majority of publishers don’t supply this information at all,” even if they have it, Mr. Binfield says. Especially if you’re a subscription-driven publisher, “you don’t necessarily want to highlight the fact that some of your articles have very few downloads. It exposes the flaws in the model.”

PLoS has had more mixed results with some other article-level metrics. Readers have shown little taste for the five-star rating system. That’s more a social problem than a technical one, according to Mr. Binfield. “Academics aren’t really interested in rating articles that way,” he says. “They don’t want to take the time or put their name behind it.” The comments feature “gets more traction” but not as much as it could.

Mr. Binfield hopes scholars will see the advantages of “post-publication commentary and discussion” around an article. “The intention is to have a dialogue with the author, where a reader might see a problem or have a question and the author might be able to respond, and keep a running record of what people thought about the article,” he says.

Blog and media coverage of specific articles often escapes notice, too. Research Blogging has been useful for tracking blog coverage, Mr. Binfield says. Media reports on scholarly articles often don’t include details—digital object identifiers or full titles, for instance—that would make them easier to collect. Twitter’s complicated, because tweets disappear and the article-level-metrics program is meant to be part of a more permanent archive.

The usefulness of many of these data sources depends on just how much information third-party sites are willing to share. “Openness facilitates more re-use and discovery,” Mr. Binfield explains.

Over all, the publisher believes that article-level metrics give PLoS a competitive advantage over publishers who don’t share information. “We see this as a powerful thing that demonstrates the power of open access,” Mr. Binfield says. “We really would like to see it adopted much more widely, and for every publisher to provide this kind of data on their articles.”