FEATURE

Altmetrics 101: A Primer

by James Careless
Altmetrics. They’re a new way to measure the impact and distribution of academic research, and they have quickly become the focus of a raging controversy.

There’s a reason for the debate: Altmetrics are being touted as a faster, fairer, and potentially more relevant way to assess a professor’s suitability for tenure. In other words, altmetrics bring “publish or perish” into the digital age, providing new measures that give scholars credit not only for being cited online but for shaping academic research and discussion in venues beyond traditional journals.

Here are just a few thoughts from the experts in the field to help sort through the altmetrics debate.

What Are Altmetrics?

The term “altmetrics” is short for “alternative metrics.” These are a range of nontraditional metrics that can be used to assess the impact that scholars have on research in their areas of study. They can include the number of article downloads, citation of research in online news/social media sources, Mendeley bookmarks (a web-based system for sharing and extracting information from PDFs and other electronic documents), and nontraditional forms of scholarship.
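To make the list above concrete, the signals could be gathered into a simple per-article record. This is a hypothetical sketch, not the data model of any real altmetrics service (providers such as ImpactStory each define their own categories and weightings):

```python
from dataclasses import dataclass


@dataclass
class AltmetricProfile:
    """Hypothetical per-article container for the kinds of
    nontraditional signals the text lists."""
    downloads: int = 0              # article download count
    news_mentions: int = 0          # citations in online news sources
    social_media_mentions: int = 0  # tweets, blog posts, etc.
    mendeley_bookmarks: int = 0     # saves in Mendeley reference libraries

    def total_events(self) -> int:
        # Naive unweighted sum; in practice different signal types
        # would be weighted and normalized differently.
        return (self.downloads + self.news_mentions
                + self.social_media_mentions + self.mendeley_bookmarks)


# Made-up numbers for one article:
profile = AltmetricProfile(downloads=420, news_mentions=3,
                           social_media_mentions=57, mendeley_bookmarks=12)
print(profile.total_events())  # 492
```

The naive sum illustrates one of the standardization problems discussed later in the article: different metrics measure different things, so simply adding them together hides more than it reveals.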

“Altmetrics are measures of scholarly impact mined from activity in online tools and environments,” says Jason Priem, a doctoral student, Royster Fellow at the University of North Carolina–Chapel Hill, and author of Altmetrics: A Manifesto (www.altmetrics.org). “Given the potential of Twitter as an altmetrics source, I think it’s fitting the word was, as far as I know, first used as part of a tweet.”

The Case for Altmetrics

Proponents see altmetrics offering a much-needed alternative to traditional measures such as the impact factor (which counts how often a journal’s recent articles are cited and serves as a proxy for the standing of the journal in which the research was published) in assessing suitability for tenure.
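For context, the standard two-year impact factor is a simple ratio: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch with made-up numbers, not data from any real journal:

```python
def impact_factor(citations_this_year: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor: citations in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items")
    return citations_this_year / citable_items_prev_two_years


# Hypothetical journal: 300 citations in 2013 to the 150 articles
# it published in 2011 and 2012.
print(impact_factor(300, 150))  # 2.0
```

Note that this is a journal-level average: it says nothing about how often any individual article (or any individual scholar) is cited, which is precisely the gap altmetrics proponents point to.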

“The notion that the impact factor can encapsulate the value of everything a scholar produces is a bit simplistic,” says Todd Carpenter, executive director of the National Information Standards Organization (NISO). Accredited by the American National Standards Institute, NISO is a nonprofit association that identifies, develops, publishes, and maintains information management technical standards.

“Altmetrics are a new way to measure the quality of published material,” says Joe Esposito, president of Processed Media and an independent management consultant working in publishing, software, and education technology. “Think of it in terms of conventional publishing. In the past, a magazine’s circulation was sufficient for establishing the value of advertising. But in today’s online world, we also have metrics like page views, unique visits, and links to Facebook and Twitter. New circumstances support new metrics, a fact that is as true for scholarly research as it is for advertising.”

Priem argues that the traditional methods of assessing scholarly impact are no longer good enough in the Brave New Digital World. “Citations only measure one kind of impact,” he says. “They also only measure scholarly articles, which is just one product in a world of many.”

Peer-reviewed scholarly articles also take time to generate citations worth measuring, adds Judy Luther, president of Informed Strategies, a scholarly publishing consulting firm. “It can take up to five years for a journal-published article to really show up in citations, and that occurs after the lengthy process of writing, submitting, and having an article peer-reviewed,” she says. “This is why people are seeking faster ways to measure academic impact, especially when tenure is at stake.”

The Case Against Altmetrics

Critics of altmetrics are sympathetic to these arguments, but they contend that altmetrics lack the rigor, reliability, and standards-based validity offered by traditional measures such as the impact factor.

“The real problem is, how do you measure the ‘goodness’ of an altmetric?” asks Phil Davis, head of Phil Davis Consulting, a firm that provides statistical analysis of citation, readership, publication, and survey data for editorial boards, scientific societies, and academic publishers. “How do you ensure that it is valid, empirically based, and provides a measurement that actually means something?”

This is where Davis sees a fundamental flaw in altmetrics, at least as they currently stand. These measurements are too vague, unregulated, and open to manipulation to act as reliable alternatives to traditional metrics, he says. “To come up with a meaningful altmetric, you need all those people arguing for it to provide a verifiable, authoritative base to justify its usage,” he says. “Otherwise, you can have people advocating for whatever metric suits their purpose simply because they didn’t win tenure by traditional means.”

Properly Validated Altmetrics

The storm surrounding altmetrics tends to cloud a simple truth: We live in an internet age in which knowledge is being disseminated meaningfully outside of accepted academic, peer-reviewed journals.

When we take this fact into consideration, a solution to the altmetrics debate arises: What is required is a mechanism for assessing specific altmetrics to ensure that they provide the same degree of academic reliability as the old, accepted measures.

This is the very point Davis, who lines up with the altmetrics skeptics, touches upon. But it also appeals to altmetrics evangelist Priem, who flips the argument: The impact factor is a limited way to evaluate scholarly impact, and in today’s internet-connected world, it is an increasingly inadequate one.

“Citation indexes allowed impact assessment to become scientific, in the same way the optical telescope created the science of astronomy,” says Priem. “But astronomers didn’t stop with the optical telescope. Today, they also rely on instruments that measure radio, infrared, and more exotic spectra—data sources Galileo couldn’t have imagined.”

But ensuring that these new instruments provide clear, accurate views is a topic NISO is looking at right now. “We hope to foster conversations to this end in 2013, with the goal of moving towards qualification and standardization of specific altmetrics,” says Carpenter. “There are a lot of ground-level issues to be tackled, such as what is the definition of alternative metrics? And what are the dangers of lumping them together, when different metrics measure different things?”

It is impossible to know how long this process of standardizing altmetrics will take or how successful it will be. It’s not just a matter of digesting current web-based metrics: New ones are bound to develop as new forms of communication catch on. After all, social media barely existed 10 years ago, and now it has become a powerful source of communal information.

In the interim, one thing is clear: An established system based on peer-reviewed printed journals is not sufficient in a digital age. Like it or not, altmetrics will become influential as the world becomes more web-based, just as peer-reviewed journals became important once the printing press was invented. As a result, what is really up for grabs is not whether altmetrics will take hold (because their ascendance is inevitable) but the usefulness, quality, and fairness of the information collected using them.

“I believe that in the coming years, the assessment of impact will come to rely on a similarly expansive set of tools, gathering data from all over the impact spectrum,” says Priem. “Altmetrics will, I hope, be a big part of that.”

James Careless is a freelance writer who specializes in information technology subjects. His credits include articles in Streaming Media, KM World, and Searcher magazines. Send your comments about this article to itletters@infotoday.com.