The quality of scientific evidence in government depends heavily on the independent assessment of research. Pressure from those commissioning the research can threaten scientific integrity and rigorous policy-making. Edward Page reports that whilst there is strong evidence of government leaning, this leaning appears to have little systematic impact on the conclusions that researchers reach, thanks to disincentives within academic career structures and the presence of research administrators within government.

Do governments lean too much on the researchers who evaluate their policies? One can think of one reason why they would: to get good publicity. And one can think of a reason why researchers would give in to such pressure: to get contracts. But there are also plausible reasons to expect government not to lean on researchers: whether government genuinely wants to draw lessons from research or just wants good PR, the research needs to be rigorous rather than obsequious. Moreover, researchers might not be expected to give in to pressure, since their reputations are built on independence and can be destroyed by evidence or suspicion that their professional opinions are for sale.

Image credit: Andy Arthur (CC-BY)

The issue is an important one even though the signs are that the Coalition government is not as keen as New Labour was on securing evaluation evidence by commissioning research. If evaluation research is generated under pressure from sponsors intent on producing results that suit them, then the nature of this pressure, and the responses of those facing it, are at the very least relevant to our assessment of the quality of that evidence. To examine government leaning and researcher buckling, we conducted a study based on a web survey of 204 academics who had done research work for government since 2005, supplemented by interviews with 22 researchers.

The strongest evidence of government influence is found at the earliest stage of the research: setting up the research design. Some of our respondents offered comments along these lines:

… the real place where research is politically managed is in the selection of topics/areas to be researched and then in the detailed specification. It is there that they control the kinds of questions that are to be asked. This gives plenty of opportunity to avoid difficult results.

When asked, nearly half of our respondents (45 per cent) reported that “the government organisation(s) had a clear idea of the precise questions the research should examine”; 27 per cent said “the government organisation(s) had a broad idea of what the research should examine and left the definition of the precise research questions to me/us”; 26 per cent indicated that the development of the research questions was a joint effort with government; and 2 per cent did not know.

Whether or not government intervened in the research design appears to have a significant influence on whether the research produced was supportive of government policy, as reported by our respondents. Of those who were left to develop the research questions themselves, only 23 per cent produced a report broadly supportive of government policy, compared with 50 per cent of those working to questions that government had helped define. When government was involved in developing the research questions, whether alone or in conjunction with researchers, reports were substantially more likely to be supportive.

At no other stage in the research did government pressure have an impact on how supportive the final report turned out to be. This was not for want of trying. For instance, 52 per cent reported that they had been asked to make significant changes to their draft reports (i.e. changes affecting the interpretation of findings or the weight given to them). Some of our respondents elaborated along these lines: “There was a lot of dialogue back and forth at the end between us and the Department … before it was published to ensure they did not look bad. They wanted certain wording changed so that it was most beneficial from a PR and marketing point of view; and they wanted certain things emphasised and certain things omitted”. Yet such pressure seems to have had little effect on whether the end result was a favourable or critical report. If anything, those asked to make changes were more likely to produce critical reports (though this finding falls well short of statistical significance).

We must be careful about what we mean by “government” influence. There are at least four distinct groups within a ministry, each with a different relationship with researchers. First there are the officials responsible for research, possibly because they are researchers themselves. Where mentioned, these officials seem to have the best relationships with researchers and appear most likely to share a belief in the importance of the programme evaluation objectives of research. A second group is made up of “policy people”: officials with the task of looking after policy within the department, whether to amend, defend, expand or contract it. One respondent summarised concisely her view of the difference between these two groups: “the research manager places a lot of emphasis on research integrity, whereas the policy teams may have their own ideological or policy motives”.

The third group is the ministerial political leadership. Our survey evidence suggests, unsurprisingly, that ministers are highly likely, in the view of our respondents, to devalue the programme evaluation objectives of research: only 4 per cent of respondents (N=182) agreed with the proposition that “Ministers are prepared to act on evidence provided by social research even when it runs counter to their views”. The professionals and service providers in the programmes being evaluated make up a fourth group. Their views might be sought on an ad hoc basis as the research develops, or they might be part of “stakeholder” or “expert” steering groups. Several respondents and interviewees mentioned the role of service providers as a source of constraint on their research findings. One argued:

We met regularly with the Head of Research in the [Department] and also occasionally with their policymaking colleagues. One difficulty with these meetings was that they insisted representatives of the [organisation running programme being evaluated] attended. This made it quite difficult to discuss the report openly because these people’s livelihoods depended on the scheme. My part of the report was critical of the [programme] and I thought it inappropriate for the [department] to invite these people along. I felt it hindered honest and open discussion.

Overall there is sufficient evidence here to suggest that governments do lean on researchers. However, for the most part this leaning appears to have little systematic impact on the nature of the conclusions that researchers reach. The most effective constraint appears to be found when government specifies the nature of the research to be done at the outset. No other form of constraint has as powerful an effect on the degree to which the overall conclusions the researchers reach support government policy.

Our findings suggest that two main forces reduce the impact of government “leaning” on the character and quality of research reports. The first is the persistence, within academic career structures, of disincentives to compromising scientific integrity for the sake of securing government contracts. Our findings point to this, though we must note the shortcomings of our evidence base in this respect: it rests on academics reporting on their own behaviour. The second is the existence within government of a body of research administrators given significant responsibility for developing and managing research, standing between policy officials and politicians on the one hand and researchers on the other. In the absence of such a body of research administrators, the pressures on researchers to produce politically congenial research would likely be far stronger; without the disincentives to compromising scientific integrity, there would be serious cause for concern about the value of commissioned research.

This research by the LSE GV314 Group was recently published in the journal Public Administration. GV314 Empirical Research in Government is a final year undergraduate course in the Government Department at LSE. With a group of up to 15 students, Edward Page conducts a separate research project each year. More details on the project can be found here.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Professor Edward Page FBA is the Sidney and Beatrice Webb Professor of Public Policy in the Department of Government at LSE. His most recent books are Changing Government Relations in Europe (London: Routledge, 2010), co-edited with Mike Goldsmith, and Policy Bureaucracy: Government with a Cast of Thousands (Oxford: Oxford University Press, 2005), co-authored with B. Jenkins.


Comments

As the blog points out, the conclusion is based on academics reporting on their own behaviour. It also uses the term integrity to refer to academics, while hinting that research managers follow other, less honorable standards of professional conduct. So it is not clear whether the interviews covered a common tactic among academic researchers: agreeing with commissioners to follow the set questions/design, then answering other questions or following their own design in pursuit of their own academic agenda. The policy-irrelevant and politically irrelevant results this approach creates are one of the reasons many academic reports end up buried on the shared drive. It would have been useful to frame this in terms of research that responds to policy or politically relevant issues, decided through a democratic, political system. The research to inform and shape those policies has to be framed by this, and not by the academic interests of a few people. There are other funding sources for academics to pursue their own academic agenda.

