I lecture Biochemistry at a small university in Oporto. Although originally trained as an experimental biochemist, I have since moved my research into theoretical and computational chemistry and biochemistry. In this blog, I will mostly comment on recent (or not so recent...) research papers that have caught my attention. I hope someone will find it interesting/useful :-)

Friday, July 7, 2017

I am now writing a referee report. I usually frame my comments diplomatically and try to be constructive (you will have to take my word for it...). Unfortunately, my first comment to these authors is uncharacteristically harsh, and I wish I had not needed to write it:
"I do understand that productivity and impact metrics like the number of citations, h-index, etc. are wrongly used by institutions and funding agencies to measure research productivity, and that scientists are implicitly (or explicitly) pressured to inflate them. I cannot, in good conscience, agree with that practice but would have kept silent if a manuscript cited a couple of papers by the authors in the introduction. However, in this manuscript 46 references are cited, of which 23 (numbers 8-11, 15-19, 21-23, 34-42, 44-45) are from the current authors. None of these 23 citations refers to specific results from those papers: they are rather cited as examples of well-known facts which either require no citation or should cite seminal papers/reviews in the area. I will not accept this paper in any form, for publication in this or any other journal, if those references remain."

I am afraid such comments to authors and editors must become much more common to stop the continuous gaming of the system. As long as metrics are used for ends they were not designed for, authors will (more or less grudgingly) try to game them, if only to ensure that they do not "fall behind" in comparisons with colleagues who feel even less compunction about gaming. Race to the bottom, and all that...