UPDATE: THE COMMITTEE’S REPORT HAS NOW BEEN PUBLISHED. THE DETAILS ARE AVAILABLE HERE.
While the Committee gave no specific reason for launching the current inquiry, it seems evident from the questions MPs have been asking that two particular incidents have been exercising their minds: the long-running saga over Andrew Wakefield and the MMR vaccine scare, and the so-called Climategate incident.

In the June 8th session the Committee asked questions not just about the efficacy of peer review, but about scientific fraud, bias, the willingness of universities to investigate allegations of misconduct, and the need to make research data freely available so that others can access, examine and test them.

MPs seemed particularly concerned that universities may be unwilling to investigate claims of misconduct. As MP Graham Stringer put it in one of the questions he asked the witnesses, “There is a certain amount of evidence that very little fraud is detected in universities and major research institutions in this country. Do you think we should be doing more to try and detect that, because in one sense there is an interest within those bodies not to discover or expose the problems they have, to sweep it under the carpet, isn’t there? If you are running a university and you find you have a researcher who just writes down his figures without doing the work, which has happened in one or two cases, the university doesn’t want to say that it has been employing a fraudster for 10 years, does it?”

Politicians also probed the witnesses about the use of journal impact factors as a “proxy measure for research quality” when assessing the performance of academics, and whether “the growth of online repository journals” like PLoS ONE is a “technically sound” development.
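For readers unfamiliar with the metric the MPs kept returning to, the journal impact factor is conventionally calculated as a two-year citation average. A minimal sketch, using invented figures purely for illustration:

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items")
    return citations_to_prev_two_years / citable_items_prev_two_years

# A hypothetical journal whose 2009-2010 papers drew 1,200 citations
# in 2011, across 400 citable items, would have a 2011 impact factor of 3.0
print(impact_factor(1200, 400))
```

The figure is a property of the journal, not of any individual paper in it, which is precisely why its use as a proxy for the quality of a researcher's work is contested.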

Robust defence

For their part the witnesses put up a robust defence of current practices. They denied that universities would cover up fraud; they dismissed suggestions that the impact factor is used as a proxy measure of quality; and they insisted that, while it might not be perfect, there is no practical alternative to traditional peer review.

In support of the latter claim they repeated the oft-made analogy with Winston Churchill’s description of democracy. Churchill famously described democracy as “the worst form of government, except for all those other forms that have been tried from time to time.” Thus it is with peer review, averred the witnesses: no one has come up with anything better.

Those with any experience or knowledge of how peer review works in practice might have been tempted to conclude that analogising peer review with democracy obfuscates the issue. At the very least, the comparison appears oxymoronic.

Such a conclusion was all the more likely in light of the opening question and answer. The Chair suggested that it might be helpful to conduct some research into the efficacy of the current system — on the grounds that “evaluation of peer review is poor”. To this Wellcome Trust director Sir Mark Walport replied: “Peer review is no more and no less than review by experts. I am not sure that we would want to do a comparison of a review by experts with a review by ignoramuses.”

Sir Mark’s statement can only have served to remind the audience that peer review is more oligarchic than democratic in effect. Rather than encouraging egalitarianism, it promotes elitism, and all the privileges one might associate with an old boys’ club (appositely perhaps, there was not a single female witness called to give evidence on June 8th).

Of course the Churchillian analogy is not really meant to suggest that peer review is a democratic process. Nevertheless the witnesses’ repeated claims that the current peer review system is “good enough” would surely be challenged by many junior researchers, who frequently complain that scholarly journals tend to be controlled by small elite groups of insiders, invariably senior researchers.

As one researcher pointed out to me recently, this is particularly problematic for those working outside North America and Europe. As he put it, “Peer-reviewed journals with a high impact factor are either dominated by certain gangs, or groups, or the editors rely on the opinion of reviewers too much.”

The upshot, he added, is that “a small guy from Russia, Brazil or Thailand will never get published, even with excellent results, unless he or she has a prominent Western colleague as a co-author.”

In fact, it is not just researchers in less privileged parts of the world who can struggle to get published in scientific journals today. Nor is it only junior researchers who complain about the peer review system. In 2009, for instance, 14 leading stem cell researchers wrote an open letter to journal editors highlighting their disquiet at the way in which the system operates.

Speaking to the BBC about the letter Professor Lovell-Badge commented: "It's turning things into a clique where only papers that satisfy this select group of a few reviewers who think of themselves as very important people in the field is published.”

Responding in a (separate) BBC interview, Sir Mark downplayed the criticism. Scientists, he said, “are always a bit paranoid” about peer review. And to make his point Sir Mark again used the analogy with democracy — peer review is not perfect, but it is the best system that the research community has been able to come up with.

Risks

As indicated, the June 8th witnesses also denied that the journal impact factor is used to evaluate researchers’ performance. “[W]e are very clear that we do not use our journal impact factors as a proxy measure for assessing quality,” David Sweeney, the director for research, innovation and skills at the Higher Education Funding Council for England (HEFCE), told MPs. “Our assessment panels are banned from so doing.”

Once again, however, we could expect researchers to dispute the claim that journal impact factors play no part in promotion and tenure decisions. Whatever funders do or do not do, they might say, universities (that is, their employers) make it quite clear that being published in high-impact journals is one of the best ways to advance an academic career.

Indeed, some universities (perhaps not in the UK, but certainly in other countries) operate schemes in which researchers are given cash bonuses if they succeed in being published in a high-impact journal. Universities in China, The Netherlands and Egypt are said to provide such incentives. In the case of Cairo University the details are publicly available on the institution’s web site. The accompanying table shows that faculty members published in a prestigious journal can earn a lump sum payment of between 2,000 and 100,000 Egyptian pounds (roughly £200 to £10,000), with the amount paid directly related to the impact factor of the journal.

But it would be wrong to imply that the June 8th witnesses were averse to new developments. They talked positively about the ways in which the Internet could improve the system, and they spoke enthusiastically about new online journals like PLoS ONE. “PLoS ONE has very good peer review”, said Sir Mark. They also cited with approval the development of web-based post-publication services like Faculty of 1000.

They were far less enthusiastic, however, about social networking tools like blogs, Twitter and Facebook. In contrast to PLoS ONE and Faculty of 1000 — which have been developed by, and are managed by, traditional publishers — these tools were viewed as dangerously anarchic, uncontrolled, and uncontrollable.

Describing the risks inherent in such services Sir Mark commented, “You have only got to look at the world of blogs, Twitter or anything else. Openness brings its own risks. If anyone can comment, then they can all say what they want, so of course there are risks like that.”

Professor Rick Rylance, chair-elect for Research Councils UK, appeared to agree: “You could end up in the rather ludicrous receding world of having to peer-review the post-review and the rest of it to find out whether it has worth.”

It is essential, added Rylance, for journals to continue to act as a quality filter: “Clearly, if those filters are removed, there is a danger that people will be relatively unbuttoned about things.”

Why this was problematic was not entirely clear, particularly as Sir Mark had earlier pointed out that in the humanities, “there is a long tradition of writing book reviews where one academic is scathingly rude about another academic.” Was the suggestion that, while it might be fine for humanities scholars, scientists should be preserved from scathing criticism?

“I hope they are a threat”

Not every witness evinced such a conservative view. Sweeney appeared to represent a more radical school. A little anarchy, he implied, might not be such a bad thing. “I think those risks exist but there are benefits,” he suggested. “We will have to adjust to the use of social networking in this area.”

And when the witnesses were asked if article-level metrics posed a threat to high impact journals, Sweeney commented: “I don’t care if they are a threat to the base journals because the journal ecology will develop based on competition and alternative ways of doing things. I am sure they will respond. In some ways, I hope they are a threat.”

Nonetheless, the overall impression created by the witnesses was that any change to the current system should be avoided, and that traditional peer review should be treated as sacrosanct — on the grounds that, whatever its drawbacks, it is the only practical way of ensuring the quality of published research. As indicated, there even appeared to be resistance to testing its efficacy.

Again, many would disagree that published research achieves the levels of quality implied by the witnesses. They would also challenge the proposition that traditional peer review remains the only acceptable way of doing things.

And they might add that, as a result of the intolerable pressure put on researchers to publish as many papers as possible, the quality of peer-reviewed research is actually falling. Finally, they might point out that since practically all papers are eventually published anyway (in one journal or another), quality is a relative concept in the world of scientific publishing today.

It is for these reasons that there are growing calls for change. Moreover, many believe that, far from being a threat to the quality of published research, social networking offers a viable alternative.

Richard Smith, the former editor of the prestigious journal BMJ, for instance, has reached the conclusion that the very notion of pre-publication peer review is flawed. The best way of assessing a paper, he suggests, is after publication, not before.

As Smith puts it, “The problem with filtering before publishing, peer review, is that it is an ineffective, slow, expensive, biased, inefficient, anti-innovatory, and easily abused lottery: the important is just as likely to be filtered out as the unimportant. The sooner we can let the ‘real’ peer review of post-publication peer review get to work the better.”

And today, he suggests, there is an effective way of doing this. “For journal peer review the alternative is to publish everything and then let the world decide what is important. This is possible because of the internet.”

In January Smith developed his views further on the BMJ blog. In a post entitled “Twitter to replace peer review?” he suggested that bloggers and tweeters may be more effective at assessing research than the traditional peer review system.

Smith related how the blogosphere had drawn attention to a major flaw in a paper published (and peer reviewed) by the high-impact journal Science. This claimed that scientists were now able to predict human longevity with 77% accuracy. “One week after the paper was published the authors acknowledged that they had made a technical error,” said Smith, “and shortly afterwards Science issued an ‘expression of concern’, meaning ignore this paper.”

Revolution underway

It is worth noting that the current inquiry into peer review is by no means the first such inquiry. It is further worth noting that all previous inquiries reached similar conclusions to those expressed by the June 8th witnesses: traditional peer review is as good as it gets.

In 1989, for instance, the UK Boden Report concluded that there are “no practical alternatives to peer review for the assessment of basic research.” It added, however, that this was no cause for complacency. “Rather the reverse. Peer review does have problems both in principle and in practice.”

The same sentiment was echoed in a 1995 Royal Society report on peer review.

The current inquiry, however, is taking place in a web-enabled world. Given this, and the increased pressure on researchers to publish, the Committee is confronted with two new questions: 1) have peer review practices deteriorated to the point where the status quo can no longer be justified? And 2) do the Internet, and the new social networking tools developed for it, offer a viable alternative to traditional peer review? Might these new tools indeed prove a superior way of reviewing, filtering and adjudicating on the quality and value of new research?

The most likely outcome, of course, is that the Committee will once again conclude that traditional peer review remains the only practical solution. And in doing so, it may even cite Churchill’s views on democracy.

But whatever the Committee concludes, we should not doubt that a revolution in peer review is underway. Today the status quo is under attack from young Internet-savvy researchers keen to shake the establishment's tree.

Smith cited one example; there was a similar incident last year, when another Science paper became the target of criticism in the blogosphere. The authors of this paper, which claimed to have demonstrated that arsenic-based life forms are possible, were accused of publishing flawed and inadequate research.

The firestorm of criticism was sufficiently compelling, and vocal, that Science subsequently published eight papers criticising the original research, and the paper’s authors appear to be back-pedalling. They have also agreed to release the bacteria so that other groups can try to reproduce their results.

Incidents like this are becoming commonplace, and some now believe that we are witnessing the beginning of the end for traditional peer review. Interestingly, the new approaches emerging on the Web appear to hold out the promise of providing a more democratic way of reviewing papers, one, moreover, that could prove both more efficient and more cost-effective.

In short, there may now be a practical alternative to traditional peer review.

These developments are inevitably viewed by the establishment as threatening, not least because they mean giving up some of the control it has traditionally enjoyed. But to resist these new forces would be to sit like King Canute ordering the tide to retreat.

Some of the issues explored by the committee on June 8th are highlighted in the edited questions and answers below. The asterisked headings are mine.

* On whether research funders should fund research on the effectiveness of peer review…

Q250 Chair: … Evaluation of editorial peer review is poor. Should you, as funders of research, contribute towards a programme of research to, perhaps, justify the use of peer review in publication and find out how it could be optimised?…

Sir Mark Walport: It all depends what you mean by "research". It is quite important to have a very straightforward understanding of what peer review is. Peer review is no more and no less than review by experts. I am not sure that we would want to do a comparison of a review by experts with a review by ignoramuses.

Q251 Chair: That’s not very nice, is it?

Sir Mark Walport: Having said that, we do conduct studies of peer review. The Wellcome Trust published a paper in PLoS ONE a couple of years ago in which we took a cohort of papers that had been published. We post-publication peer-reviewed them and then we watched to see how they behaved against the peer review in bibliometrics. There was a pretty good correlation, although there were differences. Experiments of one sort or another are always going on, but the fundamental question of whether you should compare expert review with just randomly publishing stuff I don’t think is something that anyone would be very keen to do. It lacks equipoise.

David Sweeney: Through our funding of JISC and through our funding of the Research Information Network, much work has been carried out in this area and we remain interested in further work being carried out where the objectives are clear.

Professor Rylance: Yes. We, too, would be open to trying to think about how that might be researched. We have to bear in mind that peer review is not a single phenomenon. It is peer review in relation to publication, grant awards, REF and so on. Again, there are differences between the natural sciences, the social sciences and the humanities. You would have to define the task a bit more carefully. We do, from time to time, undertake research on, for example, the influence of bibliometrics and its relationship to peer review, so work is going on in that way.

* On whether peer review limits the emergence of new ideas…

Q252 Chair: The Wellcome Trust highlighted a common criticism of peer review by saying: "It can sometimes slow or limit the emergence of new ideas that challenge established norms in a field." Do the others agree and what can be done about this?

Professor Rylance: Churchill once said that democracy was the worst system in the world apart from all the others. I think the same about peer review. Peer review is absolutely crucial, but, of course, it carries limitations of one kind or another in that it can slow down things. The volume of work load and so on and so forth is increasing but, none the less, we need to remain committed to the principle of doing peer review because, in the end, it is always the first and last resort of quality.

David Sweeney: We think that there is a risk, but we also look at the many experiments that are going on with social networking and modern technological constructs. We hope that the broad view that is taken of those will mitigate the risks which the Trust identified.

Sir Mark Walport: To be clear, the Wellcome Trust, in our submission, said: "Other commonly raised criticisms of peer review are…" We didn’t say that we agreed with that criticism. The issue is that peer review or expert review is as good as the people who do it. That is the key challenge. It has to be used wisely. It is about how the judgment of experts is used. It is about balancing one expert opinion against another. The challenge is not whether peer review is an essential aspect of scholarship because there is no alternative to having experts look at things and make judgments.

* On whether the growth of “online repository journals” like PLoS ONE is “technically sound”…

Q253 Chair: If that common criticism has validity, is the growth of online repository journals like PLoS ONE technically sound?

Sir Mark Walport: It is entirely sound. PLoS ONE has very good peer review. Sometimes there is a confusion between open access publishing and peer review. Open access publishing uses peer review in exactly the same way as other journals. PLoS ONE is reviewed. They have a somewhat different set of criteria, so the PLoS ONE criteria are not, "Is this in the top 5% of research discoveries ever made?" but, "Is the work soundly done? Are the conclusions of the paper supported by the experimental evidence? Are the methods robust?" It is a well peer-reviewed journal but it does not limit its publication to those papers that are seen to be stunning advances in new knowledge. It is terribly important to put to bed the misconception that open access somehow does not use peer review. If it is done properly, it uses peer review very well.

Professor Rylance: It is important to distinguish between peer review that is looking at a threshold standard, i.e. "Is this worthy of publication?" and peer review that is trying to say, "What are the best?" when you are over-subscribed in terms of the things you can publish.

* On whether the impact factor of the journal a paper is published in should be used as a proxy measure of quality when evaluating researchers…

Q255 Stephen Mosley: We have heard that the quality of journals, often determined by the impact factor of those journals, is becoming a proxy measure for research quality. Would you tend to agree with that assessment?

David Sweeney: With regard to our assessment of research previously through the Research Assessment Exercise and the Research Excellence Framework, we are very clear that we do not use our journal impact factors as a proxy measure for assessing quality. Our assessment panels are banned from so doing. That is not a contentious issue at all.

Sir Mark Walport: I would agree with that. Impact factors are a rather lazy surrogate. We all know that papers are published in the "very best" journals that are never cited by anyone ever again. Equally, papers are published in journals that are viewed as less prestigious, which have a very large impact. We would always argue that there is no substitute for reading the publication and finding out what it says, rather than either reading the title of the paper or the title of the journal.

Professor Rylance: I would like to endorse both of those comments. I was the chair of an RAE panel in 2008. There is no absolute correlation between quality and place of publication in both directions. That is, you cannot infer for a high-prestige journal that it is going to be good but, even worse, you cannot infer from a low-prestige one that it is going to be weak. Capturing that strength in hidden places is absolutely crucial.

* On whether the 2014 Research Excellence Framework will use impact factor as a measure of quality …

Q256 Stephen Mosley: We have had some very good feedback about the RAE process in 2008 and the fact that assessors did read the papers, did understand them and were able to make a subjective decision based on that. But we have had concerns. I know that Dr Robert Parker from the Royal Society of Chemistry has expressed a concern that the Research Excellence Framework panels in the next assessment in 2014 might not operate in the same way. Can you reassure us that they will be looking at and reading each individual paper and will not just be relying on the impact?

David Sweeney: I can assure you that they will not be relying on the impact. The panels are meeting now to develop their detailed criteria, but it is an underpinning element in the exercise that journal impact factors will not be used. I think we were very interested to see that in Australia, where they conceived an exercise that was heavily dependent on journal rankings, after carrying out the first exercise, they decided that alternative ways of assessing quality, other than journal rankings, were desirable in what is a very major change for them, which leaves them far more aligned with the way we do things in this country.

Q257 Stephen Mosley: That is a fairly conclusive response, is it not? Lastly, you were talking about PLoS ONE in answering the Chair’s questions. From what you were saying, there is a difference in standard between papers in PLoS ONE that might not be in that 5% most excellent bracket, but just so long as the work is technically sound and correct, they are in there without being excellent. With the impact factor of those repository journals gradually increasing, does it mean that the proxy use of peer-reviewed publications is even a less valid approach to assessing the quality of research in institutions in the future?

David Sweeney: I think we just don’t do that. We are not keen to do that. We want to assess — all the time we do work, every few years — how much we can use bibliometrics in a robust way, particularly as you aggregate the information over a large number of publications. At present we do not feel that the role that that should play is beyond informing the expert judgments that are made by panels. We are very conscious of the fact that our research assessment exercise has to go across all disciplines. There would be little argument that the use of metric information is really quite difficult in many disciplines. We are trying to have a consistent way of doing things. We are very keen to be abreast of the latest research but confident that peer review should remain the underpinning element.

Sir Mark Walport: If you are assessing an individual, there is simply no substitute for looking at their best output. If you are assessing a field, that is when you can start using statistical measures. You can start using things like the number of citations. If you look at most funders, they are very focused on asking people to tell them what their best publications are, sometimes limiting the numbers. For our Investigator Awards, we limit the number of publications to people’s best 20.

Professor Rylance: Following on from David’s point, in my field, in the humanities, the majority of publications are not in journals. They are in other forms like books or chapters in books and so on. There simply is not the bibliometric apparatus to derive sound conclusions for that reason.

* On whether formal training in conducting peer review should be compulsory…

Q258 Pamela Nash: Given the importance of peer review in both academic research and publishing, do you think that formal training in conducting peer review should become a compulsory part of gaining a PhD?

Sir Mark Walport: Part of the training of a scientist is peer review. For example, journal clubs, which are an almost ubiquitous part of the training of scientists, bring people together to criticise a piece of published work. That is a training in peer review. Can more be done to train peer reviewers? Yes, I think it probably can. PhD courses increasingly have a significant generic element to them. It is reasonable that peer review should be part of that. People sometimes talk about the opportunity cost of peer review. Peer review is a form of continuous professional development. It forces people to read the scientific literature and it gives a privileged insight into work that is not yet published. Most laboratories would involve, if not their PhD students, their early post-docs in peer review work.

Professor Rylance: I would echo and support that. It seems to me that research is a collective enterprise and that anyone who wishes to enter that field either as an academic or in some other capacity needs to understand that. So an engagement with the work of others of a judgmental or other kind is really quite important as part of that process.

Q259 Pamela Nash: I am aware that the "Roberts funding" provided training for PhD students until recently. Would any of you have any ideas on who could be responsible for continuing that funding for that training?

Sir Mark Walport: That funding is available. For example, the Wellcome Trust funds four-year PhD programmes, so we are providing funding for a longer period. The research councils can speak for themselves, but the four-year model of the PhD is becoming well established and that gives universities the opportunity to provide that transferable skills training.

Q260 Pamela Nash: But should specific peer review training be recommended when that funding is given?

Sir Mark Walport: We are not prescriptive in what universities teach. As I said, that would be a reasonable component of it.

* On reducing the burden on referees…

Q261 Pamela Nash: … Both Research Councils UK and the Wellcome Trust mentioned in their contributions to this inquiry that it would be favourable to reduce the burden — the bulk of the work — on referees of the peer review process. What would each of you propose to help streamline that process and reduce the burden on referees?

Professor Rylance: … One thing you can do is demand manage. If the burden is increasing, and we recognise that it is just in terms of volume and the complication of frequency, if you start to reduce the number of applications, that work load starts to reduce and the quality of peer review goes up, presumably, how do you demand manage in that situation? You could do it in a draconian way. You could, for example, say, "The quota for this university is whatever it is", based on historic performance. You could do it developmentally working with universities to filter their own application processes, such that ones which are not going to go anywhere in any reasonable scheme are filtered out at an early stage, or you could go for what, in the jargon, is called "triage" processes when you receive them. So you do a relatively light-touch first stage application and then you reduce others.

My personal view — there are differences of opinion about this — is that measures like quotas have quite significant downsides, of which probably the most significant is that they would discourage adventurous, speculative, blue skies applications because, naturally, if you have a quota, people tend to be conservative about what they are putting in in order to try and gain the best advantage…

David Sweeney: For us it is a volume problem. Obviously, more research is being done and more findings are being produced. We think that the amount that needs to go through the full weight of the peer review system need not continue to increase. Indeed, we are seeing initiatives in that. As part of our assessment exercise, we require four pieces of work over seven years from academics. In most disciplines, they will publish much more than that, but they do not submit it to the exercise because we are interested in selectively looking at only the best work. We would want to encourage academics to disseminate much of their work in as low burden a way as possible, but submit the very best work for peer review both through journals and then, subsequently, to our system. That is the only way to control the cost of the publication system. We must look for variegated ways of disseminating and quality-assuring the results.

Sir Mark Walport: The first thing is that the academic community is still highly supportive of the fact that peer review is an intrinsic part of the scholarly endeavour. To put some numbers on it, between 2006 and 2010 the Wellcome Trust made about 90,000 requests for peer review. We got about 50% usable responses. The response rate was a bit higher but not every referee’s report added value. That is a pretty good response rate, and much of that was international. We used the global scientific community to help review and they do that very willingly. People who are in environments where they know they cannot themselves get a Wellcome Trust grant are, nevertheless, willing to referee for us…

* On the withdrawal of funding for the UK Research Integrity Office (UKRIO)…

Q264 David Morris: Professor Rylance, have Research Councils UK and Universities UK withdrawn funding from UKRIO, and, if so, why?

Professor Rylance:… The original RIO was set up primarily with a remit for the biomedical sciences. It was set up on a fixed-term basis through a multi-agency system, which I am sure you are aware of, that included not just the funding councils, research councils and the Department of Health, but Wellcome were involved and other bodies. When that came to the end of its term, we had to make a decision about whether to continue. In other words, funding had stopped. It was not a question of withdrawing it. Do we continue that funding or do we not? There was a sense of two things. One is that it was really important to establish a body that had a remit and that that body should cover a broader range of disciplines than was the case with the original RIO. Secondly, we needed to disentangle various sorts of functions which were caught up within that original body. Could one be, for example, both a funder and an assurer of it, because you are clearly in quite a complicated relationship? Also, could you be both an assurer and an adviser, because, clearly, if you are giving advice which then turns out to be wrong, you would then be policing your own mistake at some level…

…The general conclusion was that, in its current format at that stage, RIO was not going to meet the sorts of needs that I have just described. We continued its funding for a little while and we are now thinking about different ways in which we can put together a collective agreement on it, probably through a concordat-style arrangement. The key player in this, just to complete the story, will be Universities UK. The reason Universities UK are key to this is that they are not themselves funders of research.

Q265 David Morris: Are you saying that it is moving more towards the subscription funding model? Is this a necessary change?

Professor Rylance: It will be a subscription model in the sense that it will involve a series of agencies that will participate in the funding of it...

…There is a genuine sense among the bodies that I have just described that we need a cross-disciplinary organisation to provide assurance and to link up the various assurance mechanisms that each funder has, to look at consistency and so on. That will be done, as I have described, through a concordat arrangement largely run through UUK, but that is as far as we have got at the moment.

Sir Mark Walport: … The Wellcome Trust was fully supportive of Research Councils UK on this matter. Research integrity is important. There is no argument and no debate about that. The question is where the responsibilities lie for ensuring that it happens. We believe very strongly that the responsibility for the integrity of researchers lies with the employers, so by and large that is the universities for university academics. It is clearly the research institutes for people employed by research institutes. That is why we support moving to a concordat between research funders and the employers whose researchers we fund that it is their responsibility, in the same way that health and safety is a responsibility that is delegated to employers. Frankly, we did not believe that UKRIO in the form that it was constituted was delivering what we needed.

Q273 Graham Stringer: … At our last evidence session we had the Pro-Vice-Chancellor responsible for research at Oxford here — I could give you the exact quote but I will not read it — who, basically, said that in his experience there had not been an occasion when they had had to investigate somebody for fiddling their results for fraudulent practices in research. On the other hand, we had another witness who told us that, if research institutions had not sacked at least one person, then they were not trying. Taking Oxford as an example, if you take your assertion that it should be the employers, that indicates that the employers are not carrying out that job. Certainly, in the case of Wakefield with the MMR scandal, the employers of Wakefield did nothing. I will now come to my question. Doesn’t that mean to say that there has to be a huge change in employers’ practices if your view was to be maintained?

Sir Mark Walport: Employers are responsible for the integrity of their employees in all sorts of aspects of life. They are responsible in business for making sure that they do not commit fraud and that the accounting is done well. I can’t possibly comment on whether individual universities are immune from the malpractice of their employees. I do not think it alters the fact that, as in health and safety, and all sorts of other aspects, such as the good behaviour of employers in respect of how they deal with students, this is an employer’s responsibility. Increasingly, universities are taking this very seriously. Of course, you can pick examples of where things go wrong. You can pick examples of where peer review hasn’t worked well. The Wakefield sad story is a very good example of that. That paper should never have been published. But that is not an argument against organisations doing it well. In a sense, the importance of the concordat will be that it sets out in extremely clear terms what the relationship is and what the roles and responsibilities of universities as employers are for the integrity of their employees.

Q274 Chair: It is clear that the universities would have responsibilities, but, taking your two examples of health and safety or fraud in conducting their business, in both of those instances there is an external regulator with statutory powers.

…

Sir Mark Walport: The question is what those statutory powers should be. Ultimately, it is clear that a scientist who has committed some form of scientific fraud, if I can put it that way, should lose their job. Does that then fall under some other regulator? Is it something that the courts should deal with? Probably not very often. In the case of medical research, Andrew Wakefield eventually met his come-uppance at the General Medical Council. There are ways of doing this.

Q275 Graham Stringer: But he did not, did he? He was struck off for bad ethical practice. The General Medical Council did not deal with whether his research was fraudulent or not. In a sense that is a bad example. If I can repeat Andrew’s point, yes, it is the employers’ responsibility, but who is going to keep the employers good?

Sir Mark Walport: That is where the funders will play a very serious role. We take research integrity very seriously as well. It is a grant condition that the work is done properly. From our perspective, in relation to an institution that failed to manage research integrity properly, we would have to question whether that was an institution at which we could fund research. It is not that we don’t take it seriously, but we believe that the mechanism for dealing with this has to be through the employer. Frankly, if the employer is unaware of things going wrong in the research, it is difficult to see how others would be aware while the employer was completely unaware. There are also whistleblowing procedures…

Q276 Pamela Nash: If I could take up that point, without an external regulator — you have just said that funders have a responsibility here on who they fund — surely, that is then an incentive for an academic institution to keep things quiet so that they don’t lose funding.

Sir Mark Walport: Not at all. It is the nature particularly of scientific research that errors are found out, and it can’t be in the interests of any good university not to have the research done to the highest possible standard….There is no incentive to cover up.

* On whether there is a need to provide greater openness and transparency in scientific data …

Q277 Graham Stringer: … Can I … quote from last week’s Scientific American, which makes the point really well? … It is by John P.A. Ioannidis: "The best way to ensure that test results are verified would be for scientists to register their detailed experimental protocols before starting their research and disclose full results and data when the research is done. At the moment, results are often selectively reported, emphasising the most exciting among them, and outsiders frequently do not have access to what they need to replicate studies. Journals and funding agencies should strongly encourage full public availability of all data and analytical methods for each published paper." Do you agree with that and do you follow those policies?

…

Sir Mark Walport: This is one of the arguments in favour of good peer review, because a good peer reviewer when reviewing a scientific paper actually probes and says, "Where are the controls? Where is the missing data?" That is the first thing. Secondly, we do explicitly ask investigators when they are generating datasets how they will handle the data. In general terms, we do encourage openness. In fact, at the moment there is a Royal Society inquiry on openness in science which is looking at the whole issue of openness of data. One has to recognise that there are both real costs and opportunity costs. Data is not an unalloyed good, as it were. It is something that has to be interpretable. It is quite easy to bamboozle by just putting out billions of numbers. It is actually a question of presenting the data in a way that is usable by others. But the principles of openness in science, of making data available and open, are something that the Wellcome Trust and other funders of biomedical research around the world are fully behind and completely supportive of.

Q278 Graham Stringer: Is what lies underneath that answer that you believe that codes, computer programs and all the data that would enable other researchers to replicate the work should be made available publicly?

Sir Mark Walport: Bearing in mind the feasibility and garbage in/garbage out, one has to be careful that the data is usable. Yes, increasingly very large datasets are generated. We want to maximise the value of the research that we fund. Therefore, openness is a very important principle. There are some other issues that need to be dealt with as well. For example, if you are dealing with clinical material, then the confidentiality of participants is paramount. You have to manage data so that they are appropriately anonymised and individuals cannot be identified. It has to be in the general interest of the advancement of science and knowledge. As you say, science is validated by its reproducibility. If you cannot see the data, that is a problem. Of course, the revolution of the power of the internet to make data available has meant that it is possible to put out data in ways that were never possible before.

…Broadly, it makes complete sense to make as much data available in as usable a form as possible. That is something that we strongly support. It is why the funding of institutions like the European Bioinformatics Institute, which is housed at Hinxton, is so important. The UK Government has a good track record in supporting the EBI and funding has recently been announced for an extension there as part of the European ELIXIR project. Making data available is something that is incredibly important.

David Sweeney: We believe in openness and efficiency in publicly funded research. Dr Malcolm Read took you through some of the issues at a previous hearing. We have funded and continue to fund projects that will push this area forward, such as UKRDS, and now some projects are looking at how cloud computing can help. Of course, we have learnt a lot from the research councils; the ESRC data archive has been a stunning success over many years…Technology is now allowing us to make advances, and through the work we fund we will learn a lot. Our objective is openness.

Q279 Graham Stringer: Where research is publicly funded, if I can paraphrase what you say, you are saying that the data should be publicly available. If there are good reasons for it being confidential, do you think it should be made available in a confidential depository to the reviewers and, potentially, for other researchers so that it is available in some form?

David Sweeney: That requires consideration of the particular circumstances and the sensitivity. Reviewers should have access to all the information. They need to assure themselves of the quality.

Professor Rylance: You start from that principle of openness and then ask why it is that you shouldn’t reveal the data, rather than starting from closure and then asking why you should reveal it.

* On the costs of storing large amounts of data…

Q280 Graham Stringer: You have mentioned that you could have a huge dataset. Some of it may be good data and some of it may be rubbish. Are there real problems of costs and, if there are, who should pay for those costs of storage? Are there any other practical problems of storing huge datasets?

Sir Mark Walport: There are very major costs. For example, the Sanger Institute this year alone has generated 1,000 human genome sequences. That is a massive data burden. Indeed, the costs of storing the data may in the future exceed the costs of generating it. Who should be responsible for doing that? It is, ultimately, a research funder issue, because we fund the research and so we have to help with the storage. It is like all of these things. Our funding is a partnership between the charity sector and the Government and it is a shared expenditure.

Professor Rylance: There are issues as well about obsolescence. At what point does this data become simply not relevant any more? The length of time for that will be discipline-specific and so on. There are a whole host of practical issues about how you do this. IP (intellectual property) is one, particularly in my area, to do with creative works, for instance.

* On the degree to which article level metrics can measure the quality of research, and whether they pose a threat to high impact journals…

Q281 Stephen Metcalfe: I would like to turn now to the importance of articles versus journals, if I may. As I know you are aware, PLoS ONE instituted a programme of article level metrics. Do you believe that that is a good way to judge a piece of published science and, therefore, you are judging it on its intrinsic merit rather than the basis of the publication that it is in?

Professor Rylance: Yes, absolutely. To echo what we were saying earlier on, it is intrinsic merit that we are after. It is not reputational or associational value.

David Sweeney: I am not entirely sure that I would say that article level metrics necessarily capture intrinsic merit. We should look at metrics of all kinds and try to judge where the collection and development of a metric adds value. As you drill down to individual articles, some metrics really are not entirely helpful. We have seen that with solid evidence in bibliometrics. Equally, we can see that some of the networking metrics may provide helpful information. I remain of the view that there will be no magic number, or even set of numbers, that captures intrinsic merit, but one’s judgment about the quality of the work, which may well be in the eye of the beholder, may be informed by a range of metrics.

Sir Mark Walport: I completely agree with David Sweeney on that. You can alter the number of times that an article is downloaded by merely putting some words in the title. There is good evidence that the content of the title influences the number of times that something is downloaded, so measuring download metrics can be very misleading. Different fields have different types of usage. Methods papers, typically, are extraordinarily heavily cited. There can be a long time before the importance of a paper is picked up. It is like all of these things; at a mass scale the statistics are helpful. If you want to assess the value of an individual article, I am afraid that there is no substitute for holding it in front of your eyes and reading it.

Q282 Stephen Metcalfe: You don’t see the article level metrics as a potential threat to the more established high impact journals.

Sir Mark Walport: They are not a threat. Web-based publishing brings new opportunities, because it brings the opportunity for post-publication peer review and for bloggers to comment. There are things like the Faculty of 1000, which provides commentaries on papers. There are more and more ways of finding papers among a long tail of publications. This is a fast-evolving space. As the new generation of scientists comes through who are more familiar with social networking tools, it is likely that Twitter may find more valuable uses in terms of, "Gosh, isn’t this an interesting article?" All sorts of things are happening. It is quite difficult to predict the future. It can only be an enhancement to have the opportunity for post-publication peer review. It has turned out to be quite disappointing in that scientists have been surprisingly unwilling to post detailed comments. When the Public Library of Science started, it had plenty of space where you could comment. Academics are remarkably loath to write critical comments about each other alongside the articles.

Q283 Stephen Metcalfe: Does anyone else want to add to that? …

Professor Rylance: No. I, personally, do not think it is a threat. There are two issues here. One is the recognition of merit. I entirely agree with my colleagues that, in the end, you have got to read the bloomin’ thing to see whether that is true. Then there is the issue about how people gain access to the good and the strong. That is a slightly different question.

David Sweeney: I don’t care if they are a threat to the established journals, because the journal ecology will develop based on competition and alternative ways of doing things. I am sure they will respond. In some ways, I hope they are a threat.

Q284 Stephen Metcalfe: You touched upon scientists being unwilling to get heavily involved in post-publication peer review. Philip Campbell from Nature told us that that may well be — I am summarising here — because there is no prestige or credit attached to that particular role and there is the risk of alienating colleagues by public criticism. Do you agree with that? Do you think that there should be a system of crediting people?

Sir Mark Walport: There are two separate issues. There are some very interesting community issues here. In the humanities, there is a long tradition of writing book reviews where one academic is scathingly rude about another academic.

… In the case of the scientific world, that tearing apart is done at conferences and at journal clubs. The scientific community does not have a culture of writing nasty things about each other. This is an evolving world.

Q285 Stephen Metcalfe: So introducing a system of credit-

Sir Mark Walport: On credit, I think one has to be realistic. Are you going to promote someone on the basis of the fact that they wrote a series of comments on other scientific articles? The hard reality is that the core activities of an academic in terms of their promotion and pay recognition are going to be around their own scholarship and their own educational activities. It can only be at the margins that you will get brownie points for having done post-publication peer review.

Q286 Stephen Metcalfe: Finally, if post-publication commentary were to grow, are you concerned about how you could ensure that there was no bias in that commentary, either positive or negative, either those wanting to build up someone’s reputation or those wanting to tear it down without anyone actually challenging them?

Sir Mark Walport: It is quite clearly a risk. We see that in every other walk of activity on the internet. You have only got to look at the world of blogs, Twitter or anything else. Openness brings its own risks. If anyone can comment, then they can all say what they want, so of course there are risks like that.

Professor Rylance: You could end up in a rather ludicrous regress of having to peer-review the post-review and the rest of it to find out whether it has worth. Sir Mark was talking about the way humanities scholars review each other’s work in print. Of course, one function for the journals that do that is to act as a quality filter to make sure that nothing defamatory, inaccurate or prejudiced is being said. Clearly, if those filters are removed, there is a danger that people will be relatively unbuttoned about things.

Sir Mark Walport: It is self-correcting in that the scientific community is constantly scrutinising each other. A scientist who wrote something that was particularly egregious would be subject to the peer review of their own community.

David Sweeney: I think those risks exist but there are benefits. We will have to adjust to the use of social networking in this area.

* On whether the peer-reviewed literature is fundamental to the formation of Government policy…

Q287 Chair: … You are familiar with the piece of work that we are undertaking. We have heard that researchers perceive peer review to be "fundamental to scholarly communications". Is peer-reviewed literature also fundamental to the formation of Government policy?

Professor Beddington: … The answer to that question is that science and evidence are clearly fundamental to Government policy, and peer review is a fundamental part of scientific evidence. That is not meant to be a cute response, but it is absolutely clear that the process of science involves peer review, and properly so, and that scientific evidence is essential to the evidence-based policy of the Government.

* On whether there is a need for research into the efficacy of peer review…

Q290 Chair: Evaluation of editorial peer review is poor. Do you think that there is a need for a programme of research in this area to test the evidence for justifying the use and optimisation of peer review in evaluating science?

Sir Adrian Smith: The short answer is no. It is an essential part of the scientific process, the scientific sociology and scientific organisation that scientists judge each other’s work. It is the way that science works. You produce ideas and you get them challenged by those who are capable of challenging them. You modify them and you go round in those kinds of circles. I don’t see how you could step outside of the community itself and its expertise to do otherwise. You have probably had it quoted to you already, but there was a paper in Nature in October 2010 in which six Nobel Prize winners were asked to comment on how they saw the peer review process. Basically, it was the old Churchillian thing that there are all sorts of problems with it but it is absolutely the best thing we have.

Professor Beddington: Peter Agre makes that point in that same article, saying: "I think that scientific peer review, like democracy, is a very poor system but better than all others."

Q291 Chair: That is twice that that has come up today.

* On the benefits of codifying the use of peer review, and whether UK scientific advisory groups are mandated to use peer review…

Q292 Stephen McPartland: I would like to ask you about Government use of peer review research. The US Congress has codified the use of peer review in Government regulations using the "Daubert Standard". In the US, the Supreme Court codified their use in the courtroom. Have you had any discussions with your American counterparts regarding how this works and what any of the benefits are?

Professor Beddington: … We would not see particular merit in excluding non-peer-reviewed information, because we have to recognise that there is a whole set of information that comes in as Government makes policy, some of it via the media, for example, evidence that is coming in to deal with emergencies. I don’t think a blanket decision on that would be helpful. The issue is obviously going to be that, when we provide scientific advice to Government, there will be a weighing of that advice, and the fact that certain advice is peer-reviewed, and appropriately so, or indeed has been highly cited in a praiseworthy way, will go into the balance of that advice. I think I would advise against a piece of legislation saying that only peer-reviewed material would be used. One would also have to question the definition of peer review and so on. I don’t think it would be something that I would be recommending to Government to think about adopting…

Q294 Stephen McPartland: Do you believe that a test should be developed to identify whether or not peer review is reliable? This Committee recommended in 2005, in a report entitled Forensic Science On Trial, that a test for expert evidence should be developed, building on the US Daubert test, and the Law Commission has now built on that and published a draft Criminal Evidence (Experts) Bill.

Professor Beddington: I would think that this has to be thought about on a case-by-case basis. Peer review is not a homogeneous activity. If one is starting to see that there are, for example, problems of peer review in a particular journal or in a particular area of science, that needs to be addressed by that journal and by the people who work in that particular area of science. If you posed the question, "Is the peer review process fundamentally flawed?" I would say absolutely not. If you asked, "Are there flaws in the peer review process which can be appropriately drawn to the attention of the community?" the answer is yes. From time to time that will happen and that’s the way to do it.

Sir Adrian Smith: And there will, from time to time, be misjudgments in that system. You can distinguish the system from particular cases within the system.

Professor Beddington: No, for the very reasons I gave in my answer to the Chair’s earlier questions. We would certainly always take into account peer-reviewed information in providing advice to Government. I don’t think we would ever exclude it, but that would not be the sole evidence. In fact, some of the evidence that would come in would depend on the area of science. For example, in a large part of social science the scholarship is developed by the production of books, quite often well after the event. Yet social research is extremely important to Government policy. We would have this but it would not necessarily have been published in a social research journal. By contrast, for example, if we are thinking in the context of some work on genomics, then one would be expecting that to have been peer-reviewed and that would be going into the evidence. Again, I just don’t think that one would seek to make regulation. I emphasise again that the evidence we use, scientific evidence including social research, will sometimes be peer reviewed. Obviously, we would not seek to exclude peer-reviewed material, but for these sorts of reasons we would not wish to exclude material that had not been peer reviewed.

* On the effectiveness of peer review in validating assertions made in articles submitted for publication…

Q296 Roger Williams: … In your opinion how well does the peer review process validate the assertions made in articles put forward for publication?

Professor Beddington: In a sense, both Adrian and I have answered that question earlier. Peer review does not guarantee that the results are correct. Science moves on through scepticism and challenge. We see all the time, in the journals published this week, people who have challenged peer-reviewed papers published some years ago and pointed out fundamental flaws in them, or new evidence that undermines their conclusions. That is the progress of science. We can’t say that it is a guarantee; manifestly it is not.

We can say that it is an awful lot better than bare assertion without evidence. Particularly when you are looking at scientific issues that are fundamental to policy — I have talked about this to this Committee before — the emergence of scientific consensus is very important. That is not to say you do not have sceptics or appropriate challenges, but peer review does not guarantee that and it never could…

* On whether peer reviewers should assess the underlying data supporting a research article as well as the article itself, and whether the raw data should be made freely available…

Q297 Roger Williams: Today, and increasingly, I guess, in the future, submissions in science will be accompanied by very large and complex sets of data. Do you think that the reviewers should be assessing that underlying data as well as the article that is being produced?

Sir Adrian Smith: In an ideal world, yes, but that is rather difficult, is it not, because data will come out of laboratories and field studies. As a reviewer, you can’t go off and replicate that. If you are trying to study somebody’s derivation of a mathematical formula, you can replicate it. The scientific argument and the data are rather different things, but the protocols that are in place for collecting data, for example in medicine, in conducting proper clinical trials and all the rest of it, operate in an environment where all the pressures and checks and balances are to get that right.

…

Q299 Roger Williams: Sir John, the Government is, obviously, a very substantial funder of science. Should it, as a matter of principle, require that all this raw data should be made available?

Professor Beddington: Adrian has made a parallel point. With Government-funded science, the push is to have data out into the open. There are some areas, for example shared data, where you have a mix of data and some of the ownership of that data lies outside the UK. You cannot make a hard and fast rule. In principle, though, the answer is that the more people who will look at the scientific problems from which we are wanting to get evidence the better. Therefore, transparency is, obviously, extremely attractive. From time to time, there will be timing issues, IP issues and so on, which will mean that transparency can be problematic. In the area we were looking at (the community of chief scientific advisers deals with this a lot of the time) we would be looking at material, and if it was not out in the open they would ask why not. If there is no good reason, they would urge that it be put out into the open. Indeed, research councils push exactly along these lines.

Sir Adrian Smith: There will always be issues of personal data protection, commercial interests and intellectual property and national security, so the situation is quite complex. I understand that the Royal Society will be doing a study sometime over the next 12 months that the Committee may well be interested in.

Q300 Roger Williams: I think there is agreement that this data should be made available, subject to all the concerns that you have expressed about IP and commercial interests. Another matter is the cost of all this. Who should bear that cost if it is going to happen on a greater scale than it has in the past?

Sir Adrian Smith: That is one of the issues that the Royal Society may well look at. Different communities, different cultures and different forms of data pose different issues, but there is a real problem…

* On whether there should be a legal requirement on institutions to conduct a timely inquiry in cases of publication fraud or misconduct, and then publish details of the incident and the disciplinary action that has been taken…

Q310 Gavin Barwell: In the past there has been a perception that publication fraud or misconduct has not always been investigated by the institutions in a timely fashion. Wakefield and MMR is an example. Should there be a legal requirement on institutions to conduct a timely inquiry and to publish the full findings of that inquiry and any disciplinary action that is taken?

Sir Adrian Smith: I don’t know whether you need to go to what "legal" means, but, if you think of the funding that goes into universities, some of it will come through the Funding Council, for instance through the QR stream, and some through research grants. With both the research councils and the Higher Education Funding Council, conditions of grant are attached which make it clear what the expectations of behaviour are. I think those are sufficient sanctions in themselves. An institution that would not follow up properly would be putting at risk its funding from HEFCE and the research councils.

Q311 Gavin Barwell: Are there specific conditions relating to what institutions should do if there is a suggestion that misconduct was taking place?

Sir Adrian Smith: Probably not.

Q312 Gavin Barwell: Do you think there ought to be?

Sir Adrian Smith: …My own view, having run a university for 10 years, is that the constraints you are under in terms of conditions from the many funders that one has are quite sufficient to frighten one into doing appropriate things.

Professor Beddington: The RCUK’s code of conduct, too, is a good guideline in terms of conflicts of interest and appropriate behaviour. Given that universities depend on a significant income from the research councils, they would be extremely unwise not to take forward very quickly any issues where they had detected fraud. The media would be commenting on it and other people in the same scientific area would be commenting on it. There would be a very substantial incentive for the universities to take this forward rather quickly.

Q313 Graham Stringer: … There is a certain amount of evidence that very little fraud is detected in universities and major research institutions in this country. Do you think we should be doing more to try and detect that, because in one sense there is an interest within those bodies not to discover or expose the problems they have, to sweep it under the carpet, isn’t there? If you are running a university and you find you have a researcher who just writes down his figures without doing the work, which has happened in one or two cases, the university doesn’t want to say that it has been employing a fraudster for 10 years, does it?

Sir Adrian Smith: I would disagree. When I ran a university, I would put it exactly the other way round. The institutional reputation will suffer much more long-term harm if you allow fraudsters to exist and do nothing about it. In fact, I think you would get a lot of brownie points in many communities if you publicly identified such people and threw them out. I think the incentives are all in the opposite direction.

Q314 Graham Stringer: It is surprising, therefore, is it not, … that there are no cases in Oxford, as the Pro Vice Chancellor told us, and that there are very few cases in other universities and research institutes where people have found fraudulent behaviour? In the case of Wakefield, even when fraudulent behaviour was found out, the institution investigated itself and found nothing wrong. The evidence we have is in the other direction, isn’t it?

Professor Beddington: I would not seek to comment on the Wakefield case. The issue here is that there are so many checks and balances in the way that science operates that fraudulent behaviour is highly likely to be detected by, initially, I suspect, gossip and then increasing concern that there is something wrong. That will happen. It may happen in the community and attention will then be drawn to the university, and it would be very unwise for the university to ignore that information. I have not experienced it in 25 years at Imperial College.

Q315 Graham Stringer: Can I ask why you won’t comment on Wakefield, because it is one of the great scandals of the last 10 or 12 years? It was not dealt with very well. Are there not things to be learnt from that?

Professor Beddington: Yes, there are. My reason for not commenting is that I haven’t read into it for a while, and I would like to re-familiarise myself before I commented, Mr Stringer, rather than any shyness on my part. I am not on top of the detail.

* On whether researchers are biased towards the products of the pharmaceutical companies that sponsor their research…

Q316 Graham Stringer: … Are there problems with peer review in other areas? For instance, there is a huge amount of research sponsored by pharmaceutical companies and companies that produce biomedical products. Do you believe that a lot of researchers in those areas are biased towards the products that those companies are selling?

Sir Adrian Smith: ... I don’t think a lot of the research itself is biased. There are biased reporting effects, because if you are doing clinical trials and you get negative results, there isn’t a journal of clinical trials that didn’t work. It is the ones that work that get published. There is a selection bias in that sense. Do not forget that at the end of the day these things have to get through the FDA or drug regulatory authorities if they are to come on to the market. Then you have incredibly close scrutiny of the protocols, the trials that were done, the conditions under which they were done and so on and so forth. I think there are tremendous checks and balances in the system against that.

* On whether there is a problem with colleagues reviewing each other’s papers in those areas where only a small number of researchers are working…

Q317 Graham Stringer: Are there structural problems where there are only three experts in a particular field, so that they are, effectively, all peer reviewing each other and they either agree or disagree? In one sense, that was the major criticism of those people who criticised the researchers at the University of East Anglia for their research, was it not? There is a very small pool of researchers in that area.

Professor Beddington: Yes, you have that, but people are always moving out of their own fields. There is academic interchange. If things are of sufficient importance, they are likely to get challenged, not necessarily by the top two experts in the field but by others who are around the fringes, particularly if they are of significant interest…

Q318 Graham Stringer: To finish on a fairly obvious question, nearly all of our witnesses have used the Churchillian quote, but when you get fraudulent papers that have been through the best process we have of peer review, do you think that has damaged that process? Getting back to Wakefield, his paper was peer reviewed. Do you think the peer review process has been damaged?

Sir Adrian Smith: How far do you want to take the Churchillian democracy analogy? There are bad things that happen within the peer review system. Not every MP who has been elected has behaved totally honourably.

Graham Stringer: What a shocking thing to say.

Sir Adrian Smith: You would not abandon the democratic process, presumably.

Graham Stringer: No. That would be terrible. Thank you.

Q319 Chair: Finally, are you aware of RCUK ever having cut funding because of fraud or allegations of fraud? If so, could you give us any examples?

Sir Adrian Smith: I would have to go back and look through the archives, as it were, and directly ask that of chief executives. I am not directly aware of a case.