Are the Ivory Towers still there? Or has science left this painful metaphor of senseless wisdom behind? The answer is: unfortunately not, not at all – many ivory towers still abound. Much of this has to do with the stranglehold of the so-called impact factors of scientific publications.

The ‘impact factor’ of a journal is based on how frequently the scientific articles it publishes are cited in other articles. Admittedly, this is one of the strangest definitions of ‘impact.’ The impact factor sets a whole train in motion: scientists are evaluated on how much they publish, multiplied by the impact factors of the journals in which they publish. This makes up a very large part of their job assessment and determines their chances of being elevated to higher positions. Sad but true, the relevance of the work, the uptake of ideas, or working with needy groups – these are usually not part of the job evaluation.

Interestingly, the impact factors are calculated by Thomson Reuters – the media giant that is behind so many global indicators. These indicators hold enormous sway – they are at the heart of an economy that is based so much on perceptions of reality rather than on reality itself. Thomson Reuters thus has a pivotal role in ‘informing us’, which sometimes involves the manipulation of its preciously guarded information. The most notorious case was LIBOR, where the estimated interbank interest rate was manipulated by panel members for a long time, and this was known to Thomson Reuters. Another indicator is the US Consumer Confidence index, again collected by Thomson Reuters, and a key factor in driving share prices up or down. For a while, Thomson Reuters released this vital indicator to select customers one minute ahead of its official release, allowing them to act faster in anticipation of changes in the stock markets. Ethics had little role to play here.

In the case of journal impact factors, manipulation is sometimes part of the game as well. Journal managers actively seek to be cited by the authors that publish in them. In an interesting version of ‘what you can measure you can manage’, a Guest Editor of a special issue asked us the following:

[Quote]Some time back, you must have received a note from the (blank) Journal Manager (blank) regarding some citation inclusion in the up coming special volume that we are Guest Editing. I was recently informed that the journal office has not heard back anything from you regarding this suggestion. On behalf of the Editorial team, I would request you to kindly respond at your earliest convenience on this, as the final publishing of the special issue is on-hold because of this small issue. I am reproducing part of (blank) 's email, for your ready reference:"In order to improve the journal impact, we wanted the selected papers in the special issue should cite each other (wherever relevant) and also cite the Editorial. I am attaching the list of papers of the special issue and your article to do this. Please cite at least the Editorial and two or more of the suitable research articles in your article and send it to me at your earliest convenience."[Unquote]

It does not always come as bluntly as this, but it is quite common that in the review of an academic paper editors ask for more articles to be cited. Though there is nothing wrong with being inspired and informed by the work of others, that is not what this is about. It is the impact factor riding high and creating its own dynamic.

What this adds up to is the good old Ivory Tower: clever brains talking to each other, quoting each other, and calling it ‘impact’. The larger picture, of course, is that we have an intellectual elite that has withdrawn from the world and is engaged in publishing for its own sake.

We are of the opinion that there needs to be more ethics in research and academia, and that impact should mean something different. We propose that research be systematically solution-oriented, with a preference for the ‘art of the simple and practical’. Researchers should not indulge in the ‘luxury of the complex’. There is a tendency to dabble in never-ending ‘complex frameworks’, which helps no one apart from the researcher, who in the meantime enjoys a pleasant, relatively stress-free life exchanging a vague world of images with other researchers. We need a different interpretation of ‘impact’.

Instead, we propose that research be solution-oriented and be evaluated and assessed as such. This may mean a number of things:

Topics are selected that are relevant in the eyes of identifiable next users. It is their reality that counts. Research is now often sold with the idea that it will inform ‘Decision Makers’ or ‘Policy Makers’, but it is hard to pinpoint who these are and hence whether they will derive any benefit from the research.

Topics are selected that have a fair chance of leading to a do-able, practical solution. Research questions may even be selected on the likelihood that they will come to something specific, practical and actionable.

The group that will benefit is engaged in the research where practical, but at a minimum is informed before, during, and after the research.

In no way are the lives of the persons involved in the research endangered, compromised or jeopardized.

Outcomes are made available as communication products that are preferably open access.

These rules are simple, and they make for a much more pleasurable, engaged type of research. From our own experience we can say that it is much more fun: seeing genuine impact, getting feedback, learning from how things are done rather than how they are conceptualized, working among people who do rather than with those for whom assessing and criticizing is second nature.

This is a bit like the discussion between the real and the virtual economy of a few years back: solution-oriented research moves around in the world of blood, beer and tears, rather than in derivatives that eventually become hard to put a finger on. Even for those who value peer review over praxis, our advice is: just try it – solution-oriented research is much more fun!

Comments

First, I agree that impact factors are both manipulated (I've been asked to cite papers in the publishing journal as well) and inaccurate. You can read my paper on how to reform the entire process: http://www.springerlink.com/content/2q80214867370564/

Second, I applaud your interest in application BUT (a) there's always a need for SOME open thinking and (b) there are plenty of opportunities to put existing ideas into practice (as you know).

Third, if you want impact, I'd suggest publishing more consultant reports (paid by government or grant money, say) as those have LOTS of useful information.