Unique Researcher Identifiers

A unique identifier allows you to distinguish yourself from other researchers. It can be used in journal and grant submissions, and it standardizes your research persona, ensuring that you get recognition for your work. Many funders and publishers now require one.

ORCID is an open, non-profit, community-driven effort to create and maintain a registry of unique researcher identifiers and a transparent method of linking research activities and outputs to these identifiers. (From the ORCID website.)

Types of Metrics

The Journal Impact Factor is a measure of the frequency with which the average article in a journal has been cited in a particular year. The Journal Citation Reports (JCR) lists journals along with their impact factors and their ranking within their specific field(s).
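The standard JCR calculation divides citations received in a given year to a journal's items from the two preceding years by the number of citable items published in those two years. A minimal sketch (the function name and sample numbers are illustrative, not from the source):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Journal Impact Factor for year Y: citations in Y to items
    published in Y-1 and Y-2, divided by citable items from Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2023 to 2021-2022 items,
# 150 citable items published in 2021-2022.
print(impact_factor(600, 150))  # 4.0
```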

The h-index is an author-level metric that attempts to measure both the productivity and citation impact of the publications of a scientist or scholar.

The h-index is the largest number h such that h of an author's papers have each been cited at least h times: the value of h equals the number of papers (N) in the ranked list that have N or more citations.

Thus, if the h-index for an author is 10, it means that of all their published articles, at least 10 have been cited 10 or more times.
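The definition above translates directly into a short computation: sort an author's citation counts in descending order and find the last rank at which the count is still at least the rank. A minimal sketch (the function name and sample data are illustrative):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break  # remaining papers are cited fewer times than their rank
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers have >= 4 citations each)
```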

The g-index is an index for quantifying scientific productivity based on publication record (an author-level metric). It was suggested in 2006 by Leo Egghe. The g-index is the largest number g such that the author's top g papers have together received at least g² citations, which gives more weight to highly cited papers than the h-index does.
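Egghe's definition can be sketched in the same way as the h-index: accumulate citations over the ranked list and find the last rank g whose cumulative total reaches g². A minimal sketch (the function name and sample data are illustrative):

```python
def g_index(citations):
    """Largest g such that the top g papers have at least g*g citations in total."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank * rank:
            g = rank  # cumulative citations still cover rank squared
    return g

# The same hypothetical author used for the h-index example:
print(g_index([10, 8, 5, 4, 3]))  # 5 (30 total citations >= 5*5)
```

Because the g-index credits the cumulative citations of top papers, it is always at least as large as the h-index for the same record.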

Altmetrics are statistics sourced from the social Web that can be used to help you understand the many ways that your work has had an impact with other scholars, the public, policy makers, practitioners, and more. They are useful supplementary measures of impact, best used in tandem with traditional measures like citation counts. Together, the two types of metrics can illustrate the full impact of your work.


Things to Consider & Limitations

There are very clear limitations to using the different types of metrics currently produced and used. Some of the limitations to the traditional metrics are as follows:

Citation errors in indexing databases can lead to missing or inconsistent citation counts

Metrics can be biased depending on length of research career, field of research, and format of scholarly output

Metrics do not consider all forms of research output (e.g., datasets, software) or the qualitative value of research

Citation counts do not measure readership

Citation counts do not reflect accuracy or authoritativeness of the research product, journal, publisher or author