Thinking outside the box: evaluation and humanitarian action

In the conclusion to The Quality of Mercy, his classic 1984 analysis of the Cambodian refugee crisis, William Shawcross observed that "evaluations of humanitarian aid are not easy."(1)

"One problem," he continued, "is institutional. Humanitarian agencies do not often publish discussions of their work. They release lists of, and sometimes accounts of, the assistance they have given, but rarely offer real analysis… As a result, mistakes are repeated again and again from one disaster to another.” “Like all generalizations," Shawcross acknowledged, "this one has its exceptions." “But,” he concluded, "it applies both to UN organizations and to private agencies, large and small."

Writing two years later in Imposing Aid, her equally seminal account of the Ugandan refugee situation in southern Sudan, Barbara Harrell-Bond reached a similar conclusion.(2) "Inside the agencies," she stated, "it is well known that the same mistakes have been repeated over and over again. …It is assumed that the impact of development projects will be evaluated, but humanitarian programmes have never been subjected to the same scrutiny… The importance of evaluating the impact of relief programmes is not widely appreciated."

Interestingly, the two authors were also in broad agreement when they came to explain this unsatisfactory state of affairs. According to Shawcross, "deliberate and conscious learning from experience is not part of the non-profit welfare tradition… The refrain: 'we have no time or money to evaluate our efforts - the need is too great' is all too common among aid officials." And in the words of Harrell-Bond, "humanitarian work… is thought to be selfless, motivated by compassion, and by its very definition suggests good work." "As relief is a gift," she concluded, "it is not expected that anyone (most especially the recipients) should examine the quality or quantity of what is given."

The preceding quotations from The Quality of Mercy and Imposing Aid raise a number of important questions.(3) But the conclusion reached by both books - that humanitarian operations were largely exempt from serious evaluation or critical analysis - represented a valid critique of the situation that prevailed in the 1970s and 1980s.

A new scenario

Moving forward some 15 years to the present day, one encounters a very different scenario. For humanitarian evaluations have now become big business (in both a figurative and literal sense) attracting unprecedented levels of donor funding and agency commitment, as well as public and political interest.

While a detailed account of this trend lies beyond the scope of the current article, it can be illustrated by reference to four particular developments that have taken place over the past few years.

First, and in sharp contrast to the situation in the 1970s and 1980s, humanitarian operations are now regularly subjected to critical analysis and assessment.(4) Such reviews are increasingly undertaken by professional teams of consultants, funded by - but independent of - the operational agencies and donor states which have commissioned the review. It has also become common practice for evaluation reports to be reviewed in draft by a wide range of stakeholders and then to be placed in the public domain - a far cry from earlier days when such reviews of humanitarian operations tended to be shrouded in secrecy and distributed on a confidential basis.

The most prominent example of this new approach is to be found in the 1996 Joint Evaluation of Emergency Assistance to Rwanda - a million-dollar undertaking involving 52 researchers which led to the production of a five-volume report, more than 500 pages in length.(5) While the Rwanda evaluation was exceptional in its scale, the approach which it took - transparent, consultative, multidisciplinary and independent - has been replicated in a number of other recent studies: a UNICEF-sponsored review of Operation Lifeline Sudan; an independent evaluation of UNHCR's response to the Kosovo refugee crisis; and a global review of Danish humanitarian assistance, commissioned by DANIDA, to give just a few examples.(6)

A second manifestation of the new interest in humanitarian evaluation can be seen in the burgeoning literature on the subject. Prior to the mid-1990s, a great deal had been written about the evaluation of development projects but relatively little had been published on the question of evaluation in the humanitarian sector.

During the past two or three years that situation has changed very rapidly, with at least six major humanitarian actors (AusAid, DANIDA, ECHO, OECD, SIDA and UNHCR) all producing their own evaluation policies, guidelines and manuals.(7) In addition, the Relief and Rehabilitation Network of the Overseas Development Institute has published a comprehensive ‘good practice review’, focusing on the evaluation of humanitarian assistance programmes in complex emergencies.(8) The duplication of effort involved in the preparation of these documents can legitimately be criticized, but the fact that they have been published at all is a telling indicator of the importance currently attached to evaluation itself.

Third, recent years have witnessed a strengthening of the evaluation function in several major humanitarian agencies - a phenomenon that can be measured both in terms of the resources allocated to evaluation and in terms of the profile and influence which it enjoys within those organizations. While it is by no means the only agency to be affected by this trend, UNHCR provides a prime example.

At the end of 1998, UNHCR's evaluation function was effectively submerged within a larger unit whose principal task was that of ‘inspection’ - an oversight mechanism focusing primarily on managerial effectiveness and efficiency, rather than programme implementation and impact. The evaluation function was staffed by a single international staff member and had access to a very modest consultancy budget. Although high in quality, the evaluation reports produced by the unit were regarded as 'restricted' documents, and consequently had only a limited and internal distribution.

During the past year, a number of significant changes have been made to the evaluation function in UNHCR, many of them prompted by the recommendations of an independent review, funded by the Canadian government.(9)

The evaluation function has been separated from inspection, combined with that of 'policy analysis' and given an influential position within the Department of Operations, reporting directly to the Assistant High Commissioner for Refugees. Employing three international staff members, the new Evaluation and Policy Analysis Unit (EPAU) also has a substantially increased capacity to engage independent consultants. At the same time, UNHCR has introduced a new and more progressive evaluation policy, involving the unrestricted dissemination of the organization’s evaluation reports and a new commitment to stakeholder participation in the evaluation process.(10)

Fourth and finally, the new dynamism surrounding the issue of evaluation has been manifested by the increased level of interaction taking place between the personnel of different humanitarian organizations, whether they be UN agencies, NGOs, donor states, research institutes or private consultancy companies. As a result of such interaction, a ‘culture of evaluation’ finally appears to be emerging in the humanitarian sector - a culture that is based on some common principles (such as a commitment to transparency and the introduction of innovative evaluation techniques) and which cuts across the institutional boundaries and turf wars that all too frequently characterize the international humanitarian system.

Perhaps the foremost expression of this development is the establishment and expansion of ALNAP (Active Learning Network on Accountability and Performance in Humanitarian Assistance). Established in 1997, in the aftermath of the Joint Evaluation of the Rwanda emergency, ALNAP provides an important forum for the exchange of ideas and information among individuals and organizations engaged in the humanitarian sector. Its objectives are twofold: "to identify, share and uphold best practices in relation to monitoring, reporting and evaluation within the international system for the provision of humanitarian assistance" and "to move towards a common understanding of ‘accountability’ in the context of the international system." As these statements suggest, Harrell-Bond’s 1986 assertion that "the importance of evaluating the impact of relief programmes is not widely appreciated" is now considerably more difficult to sustain.

The changing context

The developments described above demonstrate that the institutional and normative impediments to humanitarian evaluation are considerably less onerous today than they were ten or fifteen years ago. But what exactly accounts for this new recognition of the need for humanitarian operations to be subjected to critical analysis? To answer that question, a number of related factors must be taken into account.

During the past decade, the scale, scope and visibility of humanitarian action have increased enormously, attracting much greater levels of international attention than was previously the case. With humanitarian agencies being thrust to the forefront of international politics in areas such as the Balkans and the Great Lakes region of Africa, it is hardly surprising that the activities of such organizations have become the subject of increased analysis and appraisal.

The need for such analysis and appraisal has been reinforced by the changing, and often innovative, character of humanitarian action during the past decade. Indeed, many of the most familiar concepts in the contemporary humanitarian discourse - ‘safe havens’, ‘temporary protection’, ‘negotiated access’, ‘humanitarian evacuation’ and ‘post-conflict reconstruction’, for example - were virtually unheard of just ten years ago. As the author of this article wrote in 1995, “many of the initiatives which have been taken during the past five years have been experimental in nature, hastily formulated to meet urgent and unexpected needs. Inevitably, some have proved more effective and equitable than others.”(11) It is precisely because of this very mixed record, and because of the growing belief that relief programmes often do as much harm as good (if not more), that humanitarian operations have attracted so much critical attention in recent years.

Donor states have played a major part in the growth of evaluative activity in the humanitarian sector. During the early and mid-1990s, with the onset of crises in countries such as Bosnia, Iraq, Rwanda and Somalia, not to mention the continuation of longstanding emergencies in countries such as Afghanistan, Angola and Sudan, international spending on emergency relief operations escalated very rapidly. At the same time, the governments of the industrialized states were under (or at least had placed themselves under) pressure to reduce domestic taxation, to limit public spending and to ensure that they received good value for their expenditures. In such a context, overseas aid programmes - and the agencies that implement such programmes - became a target of particularly close scrutiny.

Interestingly, donor state demands for ‘greater accountability’ in the humanitarian sector have fallen disproportionately on multilateral agencies such as UNHCR. This is partly because of the high levels of expenditure and perceived inefficiency of these organizations. But perhaps more fundamentally it is because donor states increasingly prefer to channel their resources through national NGOs and bilateral institutions. A significant consequence of this trend is that the UN agencies are now at least as transparent as many major NGOs in terms of evaluation, if not more so. Thus very few of the major British relief agencies make either internal or external evaluations of their work available on the internet, whereas this has become a common practice within the UN system.

This is somewhat surprising, as the recent emphasis placed upon humanitarian evaluation is directly linked to a recognition of the need for aid agencies and personnel to function in a more accountable and professional manner. And the NGOs have played a major role in stressing the importance of accountability, not least through their participation in initiatives such as the Red Cross Code of Conduct, the Sphere Project, the Humanitarian Ombudsman Project and People in Aid.(12)

While they vary in their specific objectives, such initiatives are based on some common principles: that the ‘beneficiaries’ of humanitarian programmes have rights which must be respected; that humanitarian personnel should work in accordance with agreed professional standards; and that aid organizations have an obligation to provide services of a certain quality. The dissemination of such principles, which act as an important antidote to the kinds of paternalism and amateurism witnessed by Shawcross and Harrell-Bond, has also contributed to the development of a more ‘evaluation-friendly’ culture in the humanitarian sector.

Finally, if we are to understand and explain the emergence of this new culture, then some broader international trends must be taken into account. Fifteen or 20 years ago, humanitarian organizations might have been prepared to withhold damaging information from their key constituents, to conceal their mistakes from public view and to maintain a dignified silence in the face of media criticism. They might also have been willing to downplay the need for evaluations, regarding such exercises as an inconvenience at best, and at worst a threat to their public image, their credibility and their fundraising potential.

Today, however, evaluations are welcomed (or at least tolerated) for precisely the opposite reason. In the increasingly crowded humanitarian marketplace, agencies which open themselves to external scrutiny, which acknowledge the difficulties they have encountered and which demonstrate an ability to learn from past experience may have a distinct advantage over their competitors.

Current challenges

As this article has explained, humanitarian programmes are now being subjected to critical analysis more regularly, more systematically and more openly than was the case in previous years. And that must be a welcome development. For evaluations have the potential to enhance the accountability and operational performance of humanitarian agencies, thereby improving the standard of protection and assistance which they can offer to people in need. As the following paragraphs suggest, a number of steps could be taken to ensure that this potential is more fully realized.

First, humanitarian evaluations would benefit from the introduction of alternative approaches and methodologies. There is particular scope for evaluations to be undertaken in a more consultative and participatory manner, enabling aid agency employees and programme beneficiaries to play a fuller part in the review. There is also an untapped potential for inter-agency evaluations and joint reviews, the latter involving a mixture of personnel drawn from UN agencies, NGOs, donor states, local institutions and academia.

Second, efforts should be made to engage a broader range of consultants in humanitarian evaluations - a field which tends to be dominated by a relatively small number of ‘experts’, a large proportion of them male, originating from the English-speaking world and from northern Europe. Both substantively and symbolically, it would be advantageous for this monopoly to be eroded.

Almost all of the relevant guidelines and handbooks produced in the past few years bear titles that refer to the evaluation of humanitarian assistance. Significantly, none of them refer to protection, or to human rights. A third challenge is to ensure that these concerns are central - rather than marginal - to the evaluation of any humanitarian programme.

Fourth, humanitarian evaluations should be characterized by higher degrees of professionalism and quality control. Unlike some aid administrators, the author of this article does not believe that humanitarian evaluation will ever become a science, or that it should become a discrete profession. Even so, there is a strong case to be made for the introduction of training initiatives for humanitarian evaluators, as well as an insistence that humanitarian evaluations conform to the standards that are routinely applied to academic research and analysis.

The independent team which reviewed UNHCR’s response to the Kosovo refugee crisis stated that the agency must develop a capacity to ‘think outside the box’. By this, they meant that UNHCR should be able to rethink its own assumptions, to look at situations from fresh angles and to question conventional wisdoms.

‘Thinking outside the box’ is a fifth and final challenge for those organizations and individuals who are engaged in the evaluation of humanitarian activities. Such reviews can all too easily become technocratic assessments, which simply ask whether a project or programme is meeting its stated objectives in an effective and efficient manner. Questions of a more fundamental nature - whether those objectives are the right ones, whether they correspond to the needs and aspirations of the beneficiaries, and whether entirely different approaches to the situation or problem at hand should be considered - are all too easily neglected. And by providing evaluators with narrow terms of reference which exclude such important issues, humanitarian organizations can discourage such questions from being posed at all.

In 1986, Barbara Harrell-Bond lamented the fact that there was “no tradition of independent, critical research in the field of refugee assistance.”(13) As demonstrated by the publication of journals such as Forced Migration Review, that is no longer the case. The task now is to ensure that the tradition of independent and critical research is brought to bear on the evaluation of humanitarian programmes.

Notes

3. For example, given the extent to which 'humanitarian' programmes were used for political, strategic and even military purposes during the Cold War era, did donor states not have an interest in limiting the extent to which those programmes were subjected to systematic analysis and evaluation?

4. At least 25 evaluations of the Kosovo emergency operation have been commissioned since mid-1999. Although Kosovo remains an exceptional case, even less high-profile emergencies such as those in Liberia and Sierra Leone have been the subject of multiple reviews.

5. Steering Committee of the Joint Evaluation of Emergency Assistance to Rwanda, The International Response to Conflict and Genocide: Lessons from the Rwanda Experience, Copenhagen, 1996.

9. PLAN:NET 2000, ‘Enhancement of the evaluation function in UNHCR’, Inspection and Evaluation Service, UNHCR, November 1998.

10. See ‘UNHCR opens up its evaluation reports to public scrutiny and invites NGO participation in evaluation missions’, Talk Back, vol 1, no 8, 1999, International Council of Voluntary Agencies, Geneva. EPAU’s work programme and reports can be found on the evaluation and policy analysis page of the UNHCR website (see above, note 6).

11. The State of the World's Refugees: In Search of Solutions, UNHCR and Oxford University Press, Oxford, 1995, p 14.

12. See ‘Humanitarian codes of conduct’ in The State of the World’s Refugees: A Humanitarian Agenda, UNHCR and Oxford University Press, Oxford, 1997, pp 46-7. See Publications section of this FMR for information on the Sphere Handbook.

13. Imposing Aid, op cit, p xi.
