(Phys.org) — From a qualitative perspective, it's relatively easy to define a good researcher as one who publishes many good papers. But quantifying this is more complicated, since publications can be measured in several different ways. In the past few years, several metrics have been proposed that gauge an individual's scientific caliber based on the quantity and quality of their peer-reviewed publications. However, most of these metrics assume that all authors contribute equally when a paper has multiple authors. In a new study, researchers argue that this assumption biases these metrics, and they propose a new metric that accounts for the relative contributions of all coauthors, offering a rational way to capture a researcher's scientific impact.

The researchers, Jonathan Stallings, et al., have published their paper "Determining scientific impact using a collaboration index" in a recent issue of PNAS.

"Since we all have credit cards, it goes without saying that measuring credit is important in daily life," corresponding author Ge Wang, the Clark & Crossan Endowed Chair Professor in the Department of Biomedical Engineering at Rensselaer Polytechnic Institute in Troy, New York, told Phys.org. "How to measure intellectual credit is a hot topic, but a way has been missing to individualize scientific impact rigorously for teamwork such as a joint peer-reviewed publication. Our recent PNAS paper provides an axiomatic answer to this fundamental question."

Currently, one of the most common measures of an individual's scientific impact is the H-index, which reflects both a researcher's number of publications and the number of citations per publication (a measure of each publication's quality). Specifically, a scientist has a value h if h of their papers have at least h citations each, while the remaining papers each have no more than h citations. The H-index does not account for the possibility that some collaborators contributed more than others to a paper. There are also situations where the H-index falls short. For example, when a researcher has only a few publications that are highly cited, the researcher's h value is limited by the small number of publications regardless of their quality.
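The definition above translates directly into a short routine. In this sketch (the function name and sample citation counts are ours, not from the paper), papers are sorted by citation count and h is the largest rank whose paper still has at least that many citations:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# A researcher with few but highly cited papers is capped by paper count:
print(h_index([900, 850, 700]))              # -> 3, despite 2,450 citations
print(h_index([20, 15, 12, 9, 7, 5, 4, 2]))  # -> 5
```

The first case illustrates the shortfall the article describes: three papers with 900, 850, and 700 citations can never yield an h above 3.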

The scientist who originally proposed the H-index, Jorge E. Hirsch, noted that the index is best used when comparing researchers of similar scientific age and that highly collaborative researchers may have inflated values. He suggested normalizing the H-index based on the average number of coauthors. However, the researchers in the new study want to account for the coauthors' relative contributions axiomatically in order to minimize bias.

"Any quantitative measure of scientific productivity and impact is necessarily biased because intellect is the most complicated wonder that should not be absolutely measurable," Wang said. "Any measurement will miss something, which makes research interesting. When we have to measure a paper for multiple reasons, our axiomatic bibliometric approach is the best choice one can hope for."

The new measure of scientific impact is based on a set of axioms that determine the space of possible coauthor credits and the most likely probability distribution of what the researchers call a credit vector, which specifies the relative credit of each coauthor of a given paper. Because the method is derived from axioms, it is called the A-index.

In the A-index, each coauthor is assigned to a group. For a publication with just one author, that author always has an A-index of 1. Multiple coauthors who contribute equally to a publication would all be in the same group and split the credit equally. For example, four coauthors who contribute equally to a publication would each have an A-index of 0.25. But if each coauthor contributes a different amount, then they would not be in the same group, and the credit would be distributed in a weighted fashion. For example, four coauthors with decreasing credits would have A-indexes of 0.521, 0.271, 0.146, and 0.063, respectively.
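The example values above are consistent with a simple closed form for the fully ranked case: with n coauthors contributing strictly decreasing amounts, the author at rank k receives (1/n) × Σ_{j=k..n} 1/j. A minimal sketch, assuming that form (the function name is ours, and the paper's general axiomatic construction also handles tied rank groups, which this omits):

```python
from fractions import Fraction

def credit_vector(n):
    """A-indexes for n coauthors with strictly decreasing contributions.

    The author at rank k (1-based) receives (1/n) * sum(1/j for j = k..n).
    Exact rational arithmetic avoids float drift; convert at the end.
    """
    return [float(sum(Fraction(1, j) for j in range(k, n + 1)) / n)
            for k in range(1, n + 1)]

print(credit_vector(1))  # a solo author keeps full credit: [1.0]
print(credit_vector(4))  # ~ 0.521, 0.271, 0.146, 0.063, as in the article
```

Note that each vector sums to 1, so a paper's total credit is conserved no matter how many coauthors share it.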

The sum of a researcher's A-indexes, called the C-index, gives a weighted count of publications based on that researcher's relative contributions. The A-index (a single-paper metric) can also be used to weight an individual's share of the quality of a publication, whether quality is defined in terms of the journal's impact factor or the number of citations of the publication. The sum of these values is the productivity index, or P-index.
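The bookkeeping for both indexes is a pair of sums. A hedged sketch (function names and the sample record are ours for illustration): the C-index sums a researcher's per-paper A-indexes, and the P-index sums each paper's quality measure weighted by the researcher's A-index on it:

```python
def c_index(a_indexes):
    """Weighted publication count: sum of the researcher's per-paper A-indexes."""
    return sum(a_indexes)

def p_index(a_indexes, qualities):
    """Sum of each paper's quality (citations or impact factor), weighted by A-index."""
    return sum(a * q for a, q in zip(a_indexes, qualities))

# Hypothetical record: one solo paper and two collaborations.
shares = [1.0, 0.521, 0.271]   # A-indexes on three papers
citations = [40, 120, 10]      # quality measure for each paper
print(round(c_index(shares), 3))             # -> 1.792 weighted papers
print(round(p_index(shares, citations), 2))  # -> 105.23
```

In this toy record the heavily cited collaboration dominates the P-index even though the researcher's share of it is only 0.521, showing how quality and contribution are weighted together.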

When testing the C-index and P-index on 186 biomedical engineering researchers and in simulation tests, the researchers found that these metrics provide a fairer and more balanced way of measuring scientific impact compared with the N-index and H-index, the former of which is simply the number of a researcher's publications.

One important point of comparison is that, while a high H-index requires a large number of publications, a researcher can achieve a high P-index with just a few publications if they are published in journals with high impact factors or receive lots of citations. A researcher can also achieve a high P-index by publishing many moderately important papers. In this way, the P-index balances quantity and quality by accounting for relative contributions and not only relying on a researcher's total number of publications. This advantage makes the P-index useful for young researchers and for comparing researchers with different collaborative tendencies.

"Our axiomatic framework is a fair and sensitive playground," Wang said. "It should encourage smoother and greater collaboration instead of discouraging it, because it is well known that 1+1>2 in many cases and especially so for increasingly important interdisciplinary projects."

The researchers point out that the main criticism of the new metrics is the lack of a well-defined system of coauthorship ranking, a problem shared by all collaboration metrics. They emphasize that such a system must be developed for these metrics to realize their full potential.

The researchers also add that the A-index can be used to weight other metrics of scientific impact, such as the H-index. They hope to further investigate these possibilities in the future.


User comments: 13

However, most of these metrics assume that all authors contribute equally when a paper has multiple authors.

Oh boy - that is seriously flawed. There are big differences in what type of people wind up on the author list (in some disciplines it even differs who counts as author and who as coauthor; in some, the head of the department is always in the first author spot, whether he contributed or not; and the customs on author/coauthorship even differ from nation to nation). It's also nearly impossible to infer how much a coauthor contributed to a paper. In some cases it's substantive work, in others it's 'merely' testing or data collection.

The number of possible bias factors is huge. (not that I have a better idea. But such impact factor algorithms should always be taken with a grain - or better a lump - of salt)

Getting one's name on a publication is part of the 'game' of advancing oneself in the sciences. Quantifying this game for the purpose of generating conclusions about the underlying substance is a grave error.

You cannot evaluate contributions without understanding the context and background science. Only scientists in the same fields of discovery are qualified to give opinions of substance.

This ISN'T moneyball or sabermetrics. The ART of doing science research is subjective. You don't judge a painting's quality by its price... its price is derived from the judgements of experts and tastemakers.

I know a couple of administrators who would go for that line of reasoning ;)

Naaa, my only point was that a "brilliant" up and coming person, oft times might become a dud. While every now and again, some truly new science comes from an unexpected quarter.

Agreed. But I'd rather hire 10 brilliant up-and-coming scientists and live with the one dud instead of hiring 10 from an unexpected quarter on the off chance one will turn out to be brilliant.

It's not perfect - but the numbers game favors the one with the track record (quite heavily in my experience). But mostly it's decided based on how they present themselves, their work and their planned work. Impact factor just gets you the invitation to the interview - not the job.

He actually is. I think his stance that science shouldn't be awarded with prizes is pretty cool.

But then again we live in a real world - and not all sciences can be done with pen and paper (like his math). And with real world issues come real world problems. How do you choose the head of an institute? Would Perelman be the right man for the job? Despite his genius I'd say: No way.

Impact factors are important for the interface between scientists and the institutions where they have their jobs (be it in the industry or at universities).

AMONGST scientists (e.g. when they discuss their science at conferences) you will find that impact factors don't matter one bit (most scientists don't even KNOW their own impact factor).

I found that the most well known guy at a conference will happily discuss theories with the 'lowliest grad student' as readily as with one of his 'career peers'.

in reality, everything happens at the edge of the herd, and this website and anything on it would be aware of none of that.

You'd be surprised. Some of us here are actually directly in touch with people on the very edge of our respective specialty.

Science isn't so mysterious. People in science are also just people. You can go and talk to them like you can go and talk to most anyone if you get up the nerve to actually do it.

Heck, I even got PMs from authors of papers I commented on, here, twice, asking for review in one case (which I couldn't because the paper was over my head) and discussing my comment on another occasion because it actually seemed relevant as a qualifier for the statement the paper seemed to make. On other occasions I asked the authors directly for the paper or discussed an idea based on their work with them.

And I'm sure others have had similar experiences here.

This isn't the edge, I agree, but the way to the very edge is just an email away.

What is scientific impact supposed to mean? For example, the cold fusion finding is quite fundamental from a human society perspective, but it never appeared in high-impact mainstream journals. Most of mainstream physics still denies it. The contemporary impact system values research by how it contributes to the subsequent careers of scientists, not by its contribution to the rest of human civilization. Such criteria are not just harmful; they act as a brake on further progress. For example, scientists avoid research in new areas because they have nothing to cite there (no citations, no grants and salary). Aren't we paying scientists for original research in the first place?

What is scientific impact supposed to mean? For example, the cold fusion finding is quite fundamental from a human society perspective, but it never appeared in high-impact mainstream journals. Most of mainstream physics still denies it. The contemporary impact system values research by how it contributes to the subsequent careers of scientists, not by its contribution to the rest of human civilization.

Damn my eyes Zephyr, I agree with ya, the cold fusion guys haven't made an impact... it seems mainstream physics is the culprit, no one is writing papers on it...

We must pass a law: "Every 2nd Paper On The Nuclear Physics Must Be On The Cold Fusion For Ninety Years Now"... or for short we could just refer to it as the "Mainstream Fairness to Bunk Science Act".