From pinstripes to poverty: a refugee banker’s first 100 days at Oxfam

In this final post (Chris Whitty and Stefan Dercon have opted not to write a second installment), Rosalind Eyben and Chris Roche reply to their critics. And now is your chance to vote (right) – but only if you’ve read all three posts, please. The comments on this have been brilliant, and I may well repost some next week, when I’ve had a chance to process.

Let’s start with what we seem to agree upon:

Unhappiness with ‘experts’ – or at least the kind that pat you patronizingly on the arm,

The importance of understanding context and politics,

Power and political institutions are generally biased against the poor,

We don’t know much about the ability of aid agencies to influence transformational change,

We suggest the principal difference between us concerns our assumptions about: how different kinds of change happen; what we can know about change processes; if, how and when evidence from one intervention can practically be taken and sensibly used in another; and how institutional and political contexts then determine how evidence is used in practice. This set of assumptions has fundamental importance for international development practice.

Firstly, we understand social change to be emergent and messy. Organised efforts to direct change confront the impossibility of any of us ever having a total understanding of all the sets of societal relationships and contested meanings that generate change and are in constant flux. New inter-relational processes are constantly being generated that in turn affect and change those already in existence. Complexity theory privileges a concern for process as much as goals and supports an approach that seeks to make a difference by working through relationships rather than focusing on narrowly defined pre-set projects and outcomes. It encourages being explicit about values and a concern for how an organisation’s intervention is judged by others, in particular by those that are meant to ultimately benefit, and the creation of effective feedback mechanisms – including, but not limited to, those produced by high quality research.

At their best, development practitioners often have to surf the unpredictable realities of national politics, spotting opportunities and supporting interesting new initiatives, acting like entrepreneurs or searchers rather than planners. They keep their eye on processes and look to ride those waves that appear to be heading in a direction that matches their own agencies’ mission and values, and which can support local coalitions for change. By contrast, assuming that development practitioners are in control and that change is predictable – as expressed through some of the demands of evidence-based planning approaches – prevents them from responding effectively to feedback in an often unpredictable and dynamic policy environment, and can, if badly managed, chain them to a desk. Ben Ramalingam’s blog site – Aid on the Edge of Chaos – offers current insights on complexity thinking in development.

It is easier to eradicate rinderpest in cattle and build bridges than to tackle police corruption or reduce violence against women because the first are examples of what Dave Snowden describes as complicated problems, while the latter are complex – an effect of there being so many collaborators involved in non-routine interventions, with an absence of consensus among them. Such issues can’t be ‘solved’ like a Sudoku puzzle. In that respect, we were puzzled by Chris and Stefan’s two examples of what we would describe as complex issues. We found the first – the effect of political quotas for women in rural India – to be somewhat superficial, and wondered why so little reference was made to the considerable number of studies from political sociology on the same topic that ask more probing questions and arguably provide more insightful understanding of what has been learnt in different contexts. The World Bank study on whether top-down large-scale interventions can stimulate bottom-up participation was, on the other hand, puzzling for exposing myths that perhaps only World Bank staff had previously believed in, while ignoring the very considerable body of sociological and anthropological knowledge on this topic. It led us to wonder whether you need economists to find something out for it to be accepted as evidence. Perhaps that explains some of ‘the evidence-barren areas in development’…

Which brings us to the second set of assumptions about how we know and therefore what is judged as evidence. This is about more than pluralism and mixed methods, though we recognise that recent advances, in this case funded by DFID, are important. Let’s start by insisting that a criterion for rigorous research is that it should be explicit about its assumptions or world-view. We suggest that a weakness in many studies is that they usually focus solely on the methodological and procedural and render invisible their ‘philosophical plumbing’. The evidence-based approaches that Stefan and Chris advocate are imposing a certain view of the world, just as our approaches do. Their claims to the contrary foreclose any possible discussion about the different intellectual traditions in interpreting reality. Theory invites argument and debate.

An interesting paper by Greenhalgh and Russell on evaluating health programmes notes how experimental approaches often ignore the tricky philosophical and political questions. Like the authors of that article, we take an approach that recognizes the partial (in both senses of the word) nature of our knowledge. How does this approach try to deal with unavoidable bias? Through seeking to use dialogic, democratic methods in which multiple perspectives and understandings of what is at stake are explored, and the use of multiple and hybrid approaches. The implications for practice are to be involved in mutual single- and double-loop learning and adaptation as you go along. This does not preclude specific studies commissioned from ‘experts’, but it is not they alone who should define the problem, nor should they assume that only their kind of knowledge has validity for collective efforts to try to secure greater equity and social justice. Knowledge and power are bed-mates. Our critique of ‘expertise’ – the laboratory references are an extreme example of the trend – is that expertise often uses its power to ignore other ways of knowing and doing, something Chris and Stefan would seem to agree with. Might it be that some of these ways might prove to be pretty good at tackling police corruption or reducing violence against women?

This is where reflexivity comes in. Those of us working as practitioners, bureaucrats and scholar activists in international development cannot escape the contradiction that we are strategizing for social transformation from a position in a global institution – international development – that can and does sustain inequitable power relations, as much as it succeeds in changing them. Reflexive practice seeks to address these power inequities by recognizing that (a) many problems we seek to address are the products of human interaction – and some very important problems for people with less voice go ignored for that reason, and (b) even if people are in agreement about there being a problem, they will often offer multiple diagnoses for its existence, and thus of course (c) multiple solutions, which need to be debated democratically with different kinds of evidence, based on alternative ways of knowing, and having the space to be heard.

We are heartened to note that Chris and Stefan believe “that all actions by external actors will interact with political forces and vested interests” and that “in many of the settings where development actors want to make a difference, power and political institutions are biased against the poor”. We would therefore assume that a reflexive donor would recognise that their power and agenda need examination as much as anyone else’s.

Chris and Stefan suggest ‘the commitment to evidence has opened up the space fundamentally to challenge conventional, technical approaches to aid.’ We would agree, but it would seem that the exception to this is when it comes to addressing the power of donors such as DFID, being honest about the domestic political pressures they are under, and assessing the possibility that their behaviour (including how evidence-based approaches are managerialised) may on occasions be undermining processes of development and social transformation. Is DFID drawing upon anthropologists or ethnographic researchers, as the Police in the UK have recently done, to understand how its policies on, for example, results or value for money change behaviour in the agency, and its relationships with others?

To imply that we are suggesting that ‘it is not worth trying to provide the best and most rigorous evidence to those who need to make difficult decisions’ is simply a wilful misstating of our position. On the contrary, we are arguing that there is more ‘evidence’ out there than some seem to admit, because their world view precludes seeing it as such. Where we in particular see the need for more evidence is about how the evidence-based and results agenda plays out in practice: how it affects the behaviour of development agencies and their staff, as well as their ability to support the promotion of the kinds of transformational change which are likely to make a significant difference to the lives of people living in poverty and injustice. It is odd that those who argue for more evidence seem rather reluctant to admit that this is needed!

15 comments

Thanks for the response Chris and Rosalind, I found this more useful and accessible than your first post. Good to set out where the key differences in assumptions and worldviews come about.

I agree in particular with “the need for more evidence about how the evidence-based and results agenda plays out in practice”… and in particular, where good intentions at a high level about the use of a particular type of evidence come unstuck in practice. I’m reminded of a DFID-funded project trying to bridge the ‘health-development divide’ (assuming similar backgrounds to the two starting points in the debate here, although not sure if that was fair – Lawrence Haddad blogged about the final workshop: bit.ly/RAWWZm). This failed because the project had in advance wanted to apply systematic reviews to “big cross-sectoral questions” … but the eventual questions weren’t answerable with systematic reviews.

If you’re interested in giving your own experiences of working within the evidence and results-based discourses, just a small plug for the Big Push Forward survey. All information will be used to inform the Politics of Evidence conference mentioned in the post – Please do visit and respond!

I was just about to click a vote for the Eyben/Roche ticket when I thought again.

It’s easier to win an argument from opposition, where passion and rhetoric can be detached from policy, than it is from government, where everything you say has to be tempered with fears of having a real impact on the situation.

Asking that DFID employees be “honest about the domestic political pressures they are under” is probably unrealistic, and possibly even a bit unfair. From within government there are inevitably going to be limits to how far you can stray from a given government line.

Whether this is in fact an argument against the whole concept of unilateral aid agencies, I’ll leave the reader to decide! (perhaps with a little help from this fantastic paper from the OECD)

It sounds like ideally we would have both rigorous research and the reflective learning of practitioners, with the different kinds of evidence (and there are more than these two – local people’s experience and opinions being another important one) treated equally. However, in practice there is limited money and time, and the requirements of donors usually lean towards counting and collecting evidence (at best, of different types) and not towards making time and money for practitioners to be reflective and engage in meaningful analysis – because it is hard to show quantitatively how this is ‘value for money’.
Also once any approach is ‘systematised’ then it influences how people operate – we need to look at approaches that value a variety of evidences.

While I have enjoyed this debate, I would like to raise an additional issue, missing from a debate that, thus far, has focused on the link (linear or otherwise) between research and policy. This concerns the politics of the research process itself, particularly in the current funding environment.

In their contribution posted yesterday, Chris Whitty and Stefan Dercon painted an image of open, reflexive scientific enquiry. However, in my own research on the co-evolution of science and policy in international crop research for development, the production of ‘evidence’ has to be understood in the context of institutions driven by the need to demonstrate ‘impact’ within ever-shorter timescales in order to secure funds and demonstrate continued ‘relevance’. In this case, a discourse of evidence-based policy and practice obscures an increasing tendency, in practice, to ‘defend the little that we know’, as one scientist put it, and to project certainty rather than acknowledge and debate uncertainties.

It strikes me the debate is missing a couple of things. The first is historical context. And I don’t mean it is ‘the same old, same old, over again’ (which it is, having raised its head in Australia in the mid-1980s and got a full blast in the mid-1990s). I mean the historical context of the development problem. Development, for both sets of bloggers, is ahistorical. In Duncan’s book there is a reference comparing Korea and Sudan as being at the same level of development in the 1950s, and see where they are now. He of course omitted that Korea had just had a big war and decades of a tough Japanese occupation, but prior to that was a very ‘developed’ country, and so in one sense recaptured its former glory. The same can apply to most projects and their success or lack of it. Unless our methodologies can take into account the historical context of the problem and the intervention, we are getting nowhere.

The second is that both sets of blogs are donor-focused. It is about evidence for the donor, when in fact it takes two to tango, and the recipient government and community agendas may be (or will be) quite different – so whose results are we measuring, and for whom?

It is heartening that this debate is taking place at all, and particularly encouraging that the quality of the blogs and the comments is so high. The development community has come a long way in our reflexivity and in our understanding of political economy since the days of the Washington Consensus and the even more paternalistic period that preceded it. I am increasingly optimistic about the ability of the coming generation of development researchers and practitioners to make a real impact on world poverty.

There is nothing in this article… The evidence is more than enough, but the will of the leaders of poor nations is lacking. And for this, blame the people of those nations for putting those kinds of leaders in power, or for letting them stay in power.

This is a conversational blog written and maintained by Duncan Green, strategic adviser for Oxfam GB and author of ‘From Poverty to Power’. This personal reflection is not intended as a comprehensive statement of Oxfam's agreed policies.