Creating Academic 'Dream Teams'

A fantasy football-style application has been launched that allows research managers to compare the performance of their own researchers with that of imaginary "dream teams" drawn from other institutions.

Elsevier’s SciVal Strata tool allows users to draw together information on individual researchers or teams of researchers based on data from the Scopus database of bibliometric information. Researchers can be compared according to a number of metrics, including their annual publication output, citation counts and h- and g-indexes, which combine assessments of output and citation impact.
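For readers unfamiliar with the two composite metrics mentioned, the standard definitions are easy to compute from a list of per-paper citation counts. The sketch below is illustrative only (it is not Elsevier's implementation, and the example data are hypothetical): the h-index is the largest h such that h papers each have at least h citations, and the g-index is the largest g such that the g most-cited papers together have at least g² citations.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def g_index(citations):
    """Largest g such that the top g papers together have >= g**2 citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g


# Hypothetical researcher with five papers and these citation counts:
papers = [10, 8, 5, 4, 3]
print(h_index(papers))  # 4: four papers have at least 4 citations each
print(g_index(papers))  # 5: the top 5 papers total 30 >= 25 citations
```

Because the g-index weights a researcher's most highly cited work more heavily, it can exceed the h-index for the same publication record, as it does here.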

One of the system’s developers, Lisa Colledge, product manager at Elsevier, said the tool would allow research managers to judge how their team’s performance might be improved if they recruited specific individuals from other institutions, as well as how an envisaged dream team for a new center of excellence might perform.

"You can drag and drop researchers into any group and see what would happen," she said, adding that the tool had been developed on the basis of feedback from researchers, students, research managers and administrators.

"The most urgent gap people identified was being able to drill down into an organization or across organizations and look at the teams of researchers [involved]. The tool allows people to say: 'We have a strategy, now how do we implement it?' or: 'We are only as good as the people we have, so let's take a closer look at them.'"

John Curtis, director of research and public policy at the American Association of University Professors, warned that measures of ­research productivity advantaged "projects that are marketable or lead to immediate published results, to the detriment of basic science and broader conceptual works that are the building blocks for the exploration of new ideas."

He said using research productivity as the basis for individual assessment risked undermining teaching and weakening researchers’ commitment to their institutions by "de-emphasizing the importance of participating in the development of institutions that contribute to the common good."

Colledge admitted that there was increasing demand from research managers for metrics because they were "crying out" for simple, transparent ways to do a "quick comparison" of people. But she said that the Strata tool was not intended to remove the need for them to assess each researcher's output in the light of their particular circumstances. "Strata should not replace anything they are doing already. I hope it will be seen as a complementary tool that gives a different way of looking at things, that quickly throws up a few other points of interest that they would then look at in other ways, such as talking to somebody's line manager," she said.

Further, Colledge admitted that citation and output data were not yet normalized, making comparisons of researchers from different disciplines problematic. But she pointed to a "citation benchmark" option that allowed researchers to be assessed against the citation average in a particular field defined by a specific range of journals.
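The benchmarking idea described above can be illustrated with a minimal sketch. The assumptions here are mine, not SciVal Strata's: the field average is taken as mean citations per paper across a chosen set of journals, and a researcher's mean citations per paper is divided by that baseline, so values above 1.0 indicate above-benchmark impact for the field.

```python
def benchmark_ratio(paper_citations, field_average):
    """Mean citations per paper for one researcher, divided by the
    field's average citations per paper (an illustrative baseline).
    A ratio above 1.0 means the researcher's papers are cited more
    often than is typical for the field."""
    if not paper_citations or field_average <= 0:
        raise ValueError("need at least one paper and a positive field average")
    mean_cites = sum(paper_citations) / len(paper_citations)
    return mean_cites / field_average


# Hypothetical data: three papers, in a field averaging 4.0 citations per paper.
print(benchmark_ratio([10, 6, 2], 4.0))  # 1.5: 50% above the field benchmark
```

Dividing by a field-specific baseline is what makes cross-discipline comparison less misleading: a count that looks modest in a high-citation field may be well above average in a low-citation one.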

She said the system also allowed researchers to be meaningfully compared with others of an equivalent level of seniority in a similar field. Colledge added that researchers themselves might also find the tool useful to illustrate their prowess when writing funding applications, as well as to defend themselves against unfair appraisals.

"Our impression from talking to researchers is that they are used to being assessed from above and not having easy comeback to challenge things they think are unfair," she said. "With Strata, if they thought something being said was not justified, they could take a look themselves and understand it – or come back with some other information."