a. Expert opinion or knowledge, often obtained through the action of submitting a matter to, and its consideration by, experts; an expert's appraisal, valuation, or report. b. The quality or state of being expert; skill or expertness in a particular branch of study or sport. Oxford English Dictionary, second edition, 1989.

One could further say that expertise is the quality exhibited by people who we believe demonstrate an above-average ability to perform a non-trivial task (see Expert for tongue-in-cheek examples). Put another way, when one is not an expert in a given field, one often grants the perception of expertise to a person based on what other people say that person has demonstrated. That people often accept such claims prima facie is somewhat understandable, if ill-advised, and is just one of the many problems associated with assessing and quantifying human expertise. It can be argued that this behavior stems partly from a paucity of tools, metrics and software for characterizing expertise in areas other than sports, the one domain where excellent tools are available.

This article addresses the issues and tools regarding the problem of finding and assessing individual expertise, with particular focus on scientific expertise.


It can be argued that human expertise is the most valuable resource in the universe, more valuable than capital, means of production or intellectual property. Why? Unlike expertise, the other inputs of capitalism are now relatively generic: access to capital is global, as is access to means of production for many areas of manufacturing, and intellectual property can be similarly licensed. Expertise finding is also a key aspect of institutional memory, since an institution without its experts is effectively decapitated. However, finding and “licensing” expertise, the key to the effective use of these resources, remains much harder, starting with the very first step: finding expertise that you can trust.

Until very recently, finding expertise required a mix of individual, social and collaborative practices, a haphazard process at best. Mostly, it involved contacting individuals one trusts and asking them for referrals, while hoping that one’s judgment about those individuals is justified and that their answers are thoughtful.

At the other end of the spectrum are specialized knowledge bases that rely on experts to populate a specialized type of database with their self-determined areas of expertise and contributions, and do not rely on user recommendations. Hybrids that feature expert-populated content in conjunction with user recommendations also exist, and are arguably more valuable for doing so (e.g., LinkedIn).

Still other expertise knowledge bases rely strictly on external manifestations of expertise, herein termed “gated objects”, e.g., citation impacts for scientific papers or data mining approaches wherein many of the work products of an expert are collated. Such systems are more likely to be free of user-introduced biases (e.g., ResearchScorecard), though the use of computational methods can introduce other biases.

A number of interesting problems follow from the use of expertise finding systems:

Matching questions from non-experts against the database of existing expertise is inherently difficult, especially when the database does not contain the requisite expertise. The problem grows more acute the less the asker knows about the field, because of typical search problems: keyword queries over unstructured data that are not semantically normalized, and variability in how well an expert has set up their descriptive content pages. Improved question matching is one reason why semantically normalized third-party systems such as ResearchScorecard and BiomedExperts should be able to provide better answers to queries from non-expert users.
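The benefit of semantic normalization for question matching can be sketched as follows. This is a minimal illustration, not the implementation of any named system; the synonym table and expert profiles are invented for the example.

```python
# Map surface terms to a canonical concept, as a semantically
# normalized system would (real systems use ontologies, e.g. MeSH).
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "myocardial infarction": "myocardial infarction",
    "high blood pressure": "hypertension",
    "hypertension": "hypertension",
}

# Expert profiles as free-text keyword lists, as experts might write them.
EXPERTS = {
    "Dr. A": ["myocardial infarction", "hypertension"],
    "Dr. B": ["heart attack"],
}

def normalize(term):
    term = term.lower().strip()
    return SYNONYMS.get(term, term)

def match_keyword(query):
    """Naive keyword match: misses experts who used a synonym."""
    q = query.lower().strip()
    return [name for name, terms in EXPERTS.items()
            if any(q == t.lower() for t in terms)]

def match_normalized(query):
    """Match on canonical concepts instead of raw strings."""
    q = normalize(query)
    return [name for name, terms in EXPERTS.items()
            if any(q == normalize(t) for t in terms)]

print(match_keyword("heart attack"))     # finds only Dr. B
print(match_normalized("heart attack"))  # finds Dr. A and Dr. B
```

A raw keyword search for "heart attack" misses Dr. A, who described the same expertise as "myocardial infarction"; normalizing both query and profile to a shared concept recovers the match.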

Avoiding expert fatigue caused by too many questions and requests from users of the system (ref. 1).

Finding ways to avoid “gaming” of the system to reap unjustified expertise credibility.

Means of classifying and ranking expertise (and therefore experts) become essential if the number of experts returned by a query is greater than a handful. This raises the following social problems associated with such systems:

How can expertise be assessed objectively? Is that even possible?

What are the consequences of relying on unstructured social assessments of expertise, such as user recommendations?

How does one distinguish authoritativeness as a proxy metric of expertise from simple popularity, which is often a function of one's ability to express oneself coupled with a good social sense?

What are the potential consequences of the social or professional stigma associated with the use of an authority ranking (such as those used by Technorati and ResearchScorecard)?

Many types of data sources have been used to infer expertise. They can be broadly categorized based on whether they measure "raw" contributions provided by the expert, or whether some sort of filter is applied to these contributions.

Unfiltered data sources that have been used to assess expertise include, in no particular order:

user recommendations

help desk tickets: what the problem was and who fixed it

e-mail traffic between users

documents, whether private or on the web, particularly publications

user-maintained web pages

reports (technical, marketing, etc.)
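One simple way to combine heterogeneous unfiltered signals like those above is a weighted sum per person and topic. The signal names and weights below are assumptions chosen for illustration, not values from any deployed system.

```python
# Illustrative weights: how much each unfiltered signal contributes
# to a raw expertise indicator. Real systems would tune these.
WEIGHTS = {
    "user_recommendations": 1.0,
    "helpdesk_fixes": 2.0,   # a solved ticket is strong direct evidence
    "emails_on_topic": 0.1,  # e-mail volume is a weak, noisy signal
    "documents": 1.5,
    "web_pages": 0.5,
    "reports": 1.5,
}

def raw_expertise_score(signal_counts):
    """Weighted sum of per-topic signal counts for one person.
    Unknown signal types contribute nothing."""
    return sum(WEIGHTS.get(sig, 0.0) * n for sig, n in signal_counts.items())

score = raw_expertise_score(
    {"helpdesk_fixes": 4, "documents": 2, "emails_on_topic": 30}
)
# 2.0*4 + 1.5*2 + 0.1*30 = 14.0
```

Note how the weighting keeps thirty e-mails from outweighing four solved help desk tickets; without it, high-volume but low-evidence signals would dominate the indicator.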

Filtered data sources, that is, contributions that require approval by third parties (grant committees, referees, patent offices, etc.), are particularly valuable for measuring expertise in a way that minimizes biases that follow from popularity or other social factors: