
A new paper is kicking up a storm in the world of altmetrics (a community that seeks to incorporate social coverage in the assessment of scholarly impact). Analysing the relationship between social metrics and more traditional measures, the study by Gunther Eysenbach in the Journal of Medical Internet Research (JMIR) concludes that highly tweeted papers are more likely to become highly cited.

Eysenbach’s study looked at a total of 4,208 tweets citing 286 separate JMIR papers. Unsurprisingly, 60% of the tweets were sent within two days of an article being published. There was a correlation between the number of tweets about a JMIR article and the number of citations it received in Google Scholar or Scopus 17-29 months later. Highly tweeted papers were found to be more likely to become highly cited, but the sample is arguably too small for firm conclusions: only 12 of the 286 papers were highly tweeted.

The key finding in the paper is that “highly tweeted articles were 11 times more likely to be highly cited”, which creates an impressive headline but requires a good deal more context for a full understanding.
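To see what sits behind a headline figure like this, a relative-risk calculation of the relevant shape can be sketched. The counts below are invented for illustration only (the study reports 12 highly tweeted papers out of 286, but the citation outcomes here are hypothetical, not the paper’s actual data):

```python
# Illustrative only: hypothetical counts, not the study's actual data.
# Suppose 12 papers are "highly tweeted" and 274 are not, and we observe
# how many in each group go on to become "highly cited".
highly_tweeted_and_cited = 9        # hypothetical count
highly_tweeted_total = 12
not_highly_tweeted_and_cited = 19   # hypothetical count
not_highly_tweeted_total = 274

# Risk (proportion becoming highly cited) in each group
risk_tweeted = highly_tweeted_and_cited / highly_tweeted_total
risk_not_tweeted = not_highly_tweeted_and_cited / not_highly_tweeted_total

# Relative risk: how many times more likely a highly tweeted paper
# is to become highly cited than a non-highly-tweeted one
relative_risk = risk_tweeted / risk_not_tweeted
print(f"relative risk ≈ {relative_risk:.1f}")
```

Because only 12 papers fall in the highly tweeted group, small changes in these counts swing the ratio substantially, which is the small-sample caveat raised above.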

Criticism

As Davis points out, there are far fewer barriers to microblogging than to generating an article citation, “which requires an author to produce a new piece of research, have it vetted by peers, and published in a journal, which is indexed by a reputable source that tallies citations”.

Eysenbach has purchased several domain names (twimpact.org, twimpactfactor.org and twimpactfactor.com), possibly with the goal of creating services to calculate and track twimpact and twindex metrics for publications and publishers. According to Davis, “it becomes hard to separate the paper’s contribution to science from its contribution to entrepreneurship”.

Eysenbach also serves as editor and shareholder of JMIR. Davis writes: “His reference list includes a total of 69 articles citing JMIR, only three of which cite articles for their content — 55 serve to cite data points, and 11 are unaccountable. If listing each paper was important for understanding the paper, the author could have listed them in a data appendix…The practice of serial self-citation by an author simultaneously serving as editor and shareholder of his journal appears as suspicious behavior”.

However, in response to the last criticism, of serial self-citation, JMIR has since issued a correction of the original article, removing the offending citations to dataset articles and replacing them with a list of articles in an appendix. Jason Priem (co-founder of the altmetrics project) commented on Davis’s post, describing the change as “a lovely example of how grassroots post-publication peer review is continuing to grow into a vital part of the scholarly communication system”.

Whether or not you agree with the validity of Eysenbach’s study, the very fact that it has been published and discussed so widely is surely a testament to the increasing importance of social metrics in evaluating article impact.