... though the financial services industry must surely be the most egregious instance of the misuse of performance indicators and performance pay, let’s not forget “metrics” is one of the great curses of modern times. It’s about computers, of course. They’ve made it much easier and cheaper to measure, record and look up the various dimensions of a big organisation’s performance, as well as to generate far more measurable data about many dimensions of that performance.

Which gave someone the bright idea that all this measurement could be used as an easy and simple way to manage big organisations and motivate people to improve their performance. Setting people targets for particular aspects of their performance does that. And attaching the achievement of those targets to monetary rewards hyper-charges them. Hence all the slogans about “what gets measured gets done” and “anything that can be measured can be improved”.

Thus have metrics been used to attempt to improve the performance of almost all the major institutions in our lives: not just big businesses, but primary, secondary and higher education, medicine and hospitals, policing, the public service – the Tax Office and Centrelink, for instance. Trouble is, whenever we discover new and exciting ways of minimising mental effort, we run a great risk that, while we’re giving our brains a breather, the show will run off the rails in some unexpected way. ....

I’ve long harboured doubts about the metric mania, but it’s all laid out in a new book, The Tyranny of Metrics, by Jerry Muller, a history professor at the Catholic University of America, in Washington DC....

Richard R. Ernst, the Nobel laureate, gives a clarion call below. We must thank Ernst for pointing out that literature and classical music have been spared from metrics. Had metrics been followed, Kazuo Ishiguro’s slow-paced work “The Remains of the Day” would have been missed by many as a good read.

"And as an ultimate plea, the personal wish of the author remains to send all bibliometrics and its diligent servants to the darkest omnivoric black hole that is known in the entire universe, in order to liberate academia forever from this pestilence. And there is indeed an alternative: Very simply, start reading papers instead of merely rating them by counting citations!"

"Start reading papers" requires mental effort, as you have mentioned. It takes time to read and understand. Counting citations and the Hirsch index (h-index) minimises mental effort. Prof. Geoffrey Bodenhausen's article, to which Prof. Ernst wrote his "follies" response, is below. https://www.researchgate.net/publication/49664045_Bibliometrics_as_Weapons_of_Mass_Citation
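As an aside, the h-index mentioned above has a precise definition: it is the largest number h such that the author has h papers with at least h citations each. A minimal sketch, using hypothetical citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one author's seven papers
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```

Note how much information the single number discards: here the 25-citation paper raises h no more than a 3-citation one does, which illustrates the kind of compression the bibliometrics critics object to.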

Then there is an article in the Guardian; it is very long and requires mental effort.

https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science

The question is: should the huge profit-making publishing houses give funds to universities to do research? This query is related to the excellent observation from the Guardian article below.

"The way to make money from a scientific article looks very similar, except that scientific publishers manage to duck most of the actual costs. Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place"

A high-impact-factor journal cites low-impact-factor journals in its reference lists. If one sits and reads the references in the luxury journals, one will find citations to papers from low-impact-factor journals. This paradox of high-IF journals citing low-IF ones itself reveals that the metric is truly a mismeasurement. One is surprised that the so-called rational scientific community globally accepted such a dangerously skewed measure for so many years to promote, select and fund faculty in universities and research centres. One cannot blame Australia alone for this.
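For context, the journal impact factor behind this paradox is just a two-year citation average over a whole journal, not a property of any individual paper. A minimal sketch of the standard formula (the numbers below are hypothetical):

```python
def impact_factor(citations_received, citable_items):
    """Two-year journal impact factor: citations received in year Y to
    items the journal published in years Y-1 and Y-2, divided by the
    number of citable items it published in those two years."""
    return citations_received / citable_items

# Hypothetical journal: 3000 citations in 2018 to its 2016-17 articles,
# of which it published 500 citable items in 2016-17
print(impact_factor(3000, 500))  # -> 6.0
```

Because it is a journal-wide average, it says nothing about where the citations in any one paper's reference list come from, hence the high-IF-citing-low-IF pattern described above.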

It is easy to criticise blind use of metrics. The book by Muller, which I have read, offers many examples. But all of us use heuristics of various kinds to reduce the effort associated with complex tasks. I would welcome constructive suggestions from "metric critics" for scenarios like the following:

1. You are on a panel reviewing proposals for funding of individual PIs. The panel will review 25 proposals, each 15 single-spaced pages long plus ~30 pages of supporting data (CVs etc.). At most 15% of the proposals can be selected for funding. You must prepare for and then run this review panel on top of all your day-to-day commitments.

2. A department head is required to write annual evaluations for ~40 faculty who wrote ~300 unique papers in the past year. These evaluations are used, in part, to determine salary raises.

Granted, I am not a department head, and I don't see a way out of #2. However, I do have experience with #1, and I read (studied!) them all. I feel that if one can't do that, one should politely decline to be on the review panel. In my personal opinion, it is not appropriate to do anything less when evaluating proposals that people have spent so much effort writing (as the funding field is rather competitive...).

https://www.nature.com/articles/423479a

This is a letter to the journal Nature by David Colquhoun:

"A useful method for job interviews that has been used in our department is to ask candidates to nominate their best three or four papers, then question them on the content of those papers. This selects against publication of over-condensed reports in high-impact journals (unless it is one of the relatively few genuinely important papers of this type). It also selects against ‘salami slicing’, and is a wonderful way to root out guest authors, another problem of the age. Experience has shown that candidates can have astonishingly little knowledge of the papers on which their names appear."

David Colquhoun, Department of Pharmacology, University College London, Gower Street, London WC1E 6BT, UK

The above letter concerns job interviews. The REF in the UK now has something similar. Unfortunately, I am not able to access it. The write-up below is similar to what David Colquhoun recommends for job interviews:

"4 quality publications over a period of 6 years. One may have published 400, but one can only declare 4 papers which one considers to be of international impactful quality (not impact factor). The papers are “READ” by nominated experts in the field and scored, and a GPA is obtained for the Unit of research submission – which determines how much money we get, depending on the score and the size of the submission."

Then there is DORA, below, which says that only 21 out of 96 universities in the UK have signed up for the responsible use of metrics – a low percentage.

About Me

I have fun at work trying to use quantum many-body theory to understand electronic properties of complex materials.
I am married to the lovely Robin and have two adult children and a dog, Priya (in the photo). I also write an even more personal blog Soli Deo Gloria [thoughts on theology, science, and culture]

Disclaimer

Although I am employed by the University of Queensland and funded by the Australian Research Council, all views expressed on this blog are solely my own. They do not reflect the views of any present or past employers, funding agencies, colleagues, organisations, family members, churches, insurance companies, or lawyers with which I currently have or in the past have had some affiliation.

I make no money from this blog. Any book or product endorsements will be based solely on my enthusiasm for the product. If I am reviewing a copy of a book and I have received a complimentary copy from the publisher I will state that in the review.