Independent versus self-evaluation - is there a place for both?

As we come back to our blog series on the value-for-money of evaluation, let's start with the question of why we spend all this money on self- and independent evaluation. Couldn't we improve the value-for-money equation by saving on one or both of them?

Monitoring and evaluation spending adds to the cost of doing business. Business efficiency has been one of the reasons why institutions and private sector entities have been hesitant to invest in monitoring, and even more so in evaluation. To them, the cost of evaluation does not pay for itself in benefits or profits derived from the information. And that's likely even less so when there appears to be a double cost of separate self- and independent evaluation.

In an ideal world, we would have one evaluation system that is completely streamlined and efficient, produces the right kind of information at the right time, and does not shy away from difficult or unwanted messages. But, as many of us development professionals know, that is a tough call.

Value of Self-Evaluation. Self-evaluation has the advantage of being much closer to the action. It therefore has great potential to promote learning, if institutional incentives are right. The value of self-evaluation is two-fold: one, fixing problems as they arise, and two, making sure avoidable mistakes are not repeated. In both cases, unnecessary expenses can be spared, harm avoided, and relationships maintained or even improved.

Value of Independent Evaluation. This second tier of evaluation derives its value exactly from being at arm's length. It gives credibility to the institution as a whole, which increases the trust and confidence of shareholders and others. In addition, it can make up (in part) for weaknesses in the self-evaluation system. Importantly, independent evaluation produces value by, one, helping to fix systemic issues, and two, evaluating a set of interventions from a different perspective to generate new insights. This can lead to greater institutional efficiencies, and to identifying opportunities that would not be seen from the day-to-day operational perspective.

Estimating Value Added. The value of these gains from evaluation (self and independent) is hardly ever estimated. On the operational side, more effort is spent on pressing for higher quality evaluations or defending the results and performance of past programs. As evaluators we are still struggling to see whether our evaluations are effective, meaning recommendations are taken up and implemented. This leaves limited space to determine the value added from evaluation.

Cost of Evaluation Systems. A common lament from operations managers is that the cost of collecting data outstrips monies available for managing the project. This is even more so in humanitarian assistance or the private sector. The trade-off seems to be that instead of evaluation (direct and indirect costs), money could be spent on serving more people better, making a project work more efficiently, or reducing the cost of a private sector deal. To some extent, I am sympathetic to this position. It is not always obvious how evaluation systems (self or independent) speak to the information needs of decision-makers or program implementers. Data collection requirements seem endless, reports are unwieldy, and messages nuanced. Simple, sharp and clear answers about "what's next" don't emerge easily. And, often there are calls for more and more data collection.

Where does this leave us in the end?

Yes, there is space, even need for both self- and independent evaluation systems. They complement each other. Better self-evaluation will result in greater efficiencies of independent evaluation.

But, there is also need for action from all sides. Decision-makers and implementers need to work with the designers of self- and independent evaluation systems to clarify information needs. Challenges will arise as different needs have to be weighed against each other. For instance, immediate business needs might not pay due attention to development outcomes. Nonetheless, there is probably room for finding a better balance to achieve:

Greater effectiveness of self- and independent evaluation systems whereby evaluation evidence is used in decision-making, corrective action, and learning for future policies and programs,

Filling information gaps by developing a deeper understanding of the value added of evaluation evidence, the improvements it has (or could have) led to, and the resultant implications in terms of costs and savings.


Comments

Compared to the for-profit business sector, independent evaluation of programmes in social development is more important because of the complex nature of the work and of the expected results or outcomes. Of course, there are process-related evaluations that mainly check whether a certain work approach or methodology was effective. Even here, one needs to assess progress against a set of objectives of the work programme. Efficiency is important in commercial projects, while impact and sustainability matter for social development ones. Data collection and analysis in social development require both qualitative and quantitative methods, with an emphasis on the qualitative. For commercial business projects? "Value for money" can also mean different things in different cases. For example, ex-post evaluation of a social development project has only knowledge value, with very limited immediate practical value.

Muhammad, many thanks for your contribution. The for-profit business sector has actually made some big promises for the SDGs and more recently at COP21 when negotiating the new climate deal. From that point of view, they have committed to achieving more than profit maximization. For this reason, the interests of the private and public sectors are moving closer together in assessing the impacts that investments or policy interventions have on people and the environment. And, I believe both can learn from each other. In the private sector, profitability drives efficiency. In the public sector, delivering a public good and social development outcomes is paramount. If we can get both sides to learn from each other the good things they are doing, we've made this a better world. As for ex-post evaluations, I agree: individual projects don't necessarily tell you much. But in combination, bigger pictures may emerge, or trends no-one would see without stepping back to evaluate after the fact.

Anu, I agree that participatory approaches are very useful for many reasons. But, as said in previous blogs, I always advocate for a mixed method approach to ensure one draws on many sources of information and perspectives to get a deeper understanding of what works.

I found this post insightful and also very relevant to my experience at USAID, where I manage both projects and independent impact evaluations related to land tenure. I often hear the same concerns raised by colleagues and have taken to sharing the results of a recent World Bank (DIME) study, which found that project performance is better where projects are accompanied by an impact evaluation: http://documents.worldbank.org/curated/en/2015/01/23173058/impact-evaluation-helps-deliver-development-projects#
The findings of this study comport with my own experience at USAID: I have found that the process of designing an independent impact evaluation helps sharpen the development hypothesis (theory of change) of the project being evaluated and leads to more evidence-based project design and implementation. In addition to learning from the final evaluation results as we design new programs, we are also able to achieve real-time synergies and improve existing project effectiveness where impact evaluation baseline findings are used to test project assumptions and inform implementation strategies, such as by helping to identify the most common causes of land conflict that a project was designed to address.
I hope both of our institutions can continue to foster a learning culture that helps strengthen the synergies between cost-effective evaluations and results-oriented projects.

Mercedes, many thanks for this excellent example of how impact evaluation should be influencing project design, implementation and management. I have always wondered whether people who design the systems are already in tune for this kind of evidence-driven service delivery, and hence make best use of impact evaluation findings. Can it also work with/for people who are not (yet) convinced or committed to bring them around to adopting the same techniques and good management practices?

Evaluation is about effectiveness and impact more than efficiency. It always points the direction toward the higher goals of the organization. Many times decision-makers and implementers respond much quicker to urgent matters than to important ones, which diverts attention from the objectives and even from the activities that need to be done. Other managers are reluctant to say no to the most urgent matters because they are afraid of change. My experience has been that organizations develop 'cultures' that are very difficult to penetrate with new ideas, and that fear of change poses big resistance in the rank and file of organizations. In these circumstances evaluators need to empathize rather than sympathize with the actors in the organization. To me, sympathy means taking positions, and this is where most errors occur and the independent power of the evaluation is lost. Again, for self-evaluation, an evaluator should be concerned with the habits that shape the character of the team players in the organization rather than placing much emphasis on their actions. If best practice is upheld in the evaluation process, the long-term benefits are enormous.

Stephen, thank you for putting the important question of organizational culture on the table. It is the essential driver of how evidence is used in decision-making and corrective actions. You mention cultures that are not so receptive to new ideas. Likewise, there are others that struggle with critical feedback. Empathy is important in these situations, but not to the extent that it leads to apathy about change. Independent evaluation can address important systemic issues that, once resolved, overcome institutional bottlenecks and make life easier for people within organizations.

Thank you for addressing so clearly such a vital, and often misunderstood topic. In my experience, self-evaluation is a must for any organisation that is serious about making an impact. External evaluation is a definite advantage that can raise the value of self-evaluation to a different level and gain credibility for the organisation among its stakeholders.

We actually combine the two by having internal participatory evaluations moderated externally. In overseas development projects, the key to effective evaluation and to avoiding prohibitive costs (money that can be put to better uses) is to ensure the criteria are simple, limited, relevant and understandable by all. That means by the beneficiaries, who are poor people who may not have had much education. This can be done, provided experts communicate well and, in particular, do not talk down to them. That, unfortunately, is all too common wherever specialisms emerge.

John, thanks for this contribution. I can imagine the approach of having a facilitated self-evaluation process can be really effective, provided (as you say) the moderators allow space for the participants (including local poor populations) to take part and express their views. If done well, the self-learning could be incredible. At the same time, I wonder how information gets fed back to ensure problems are solved if/when they exist. Or is it up to the people to solve the issues on the ground? That could be very empowering if they have the wherewithal.

Caroline, this is an interesting and insightful analysis -- as usual. My take on this is somewhat different. While you distinguish self-evaluation from independent evaluation (reflecting the structure at the World Bank), I distinguish evaluation context according to who the client for the evaluation is. Evaluations can be conducted for the program manager or the program hierarchy, or for the funder of the program (which can be an outside agency, a Board of Directors, a government, etc.). Evaluation performed for the program hierarchy tends to focus on program performance (design, delivery, reach, efficiency), while evaluation performed for the funder tends to document program relevance, incremental impact, and comparative cost-effectiveness (these are presumptuous generalizations that admit many exceptions, of course). I think you imply that hierarchy-focused evaluations are conducted as "self-evaluations" whereas funder-focused evaluations are conducted independently; that's where I disagree, as it is conceivable that both types of evaluations could be performed under both models of evaluation production. Just food for thought, and thanks for your intellectual generosity again.

Benoit, many thanks for your contribution. I am glad to see diverse views, as mine is obviously shaped by my long career in multilateral organizations. It is in this exchange that we can learn about different models, each of which needs to fit the purpose for which it is needed.

From an organizational perspective, self-evaluation is the foundation for independent evaluation, as it provides not only documented evidence about progress but also insights into what implementers and stakeholders consider success (… and they may not always agree). However, self-evaluation needs ownership, commitment and incentives to be effective and add value. Furthermore, demand for self-evaluation from governing bodies would reinforce its importance. As Caroline points out, there seems to be continuous competition between budgeting for evaluation systems and for project activities. This is also a discussion about short-term, more visible project outputs versus medium- or long-term investment in learning and accountability. I'm wondering if we can break that stand-off by acknowledging and embracing that many of the programmes we invest in are characterized by uncertainty. The often long list of risks and assumptions that are part of implementation plans - and sometimes tend to be shelved once a project is approved - is a reflection of such uncertainty. Organizations seem far less reluctant to spend immense sums on planning, assuming you can lock in success with a very elaborate plan. What if we planned less and balanced that out with more resources and incentives for managing, monitoring and evaluation - self- and/or independent? For a discussion about Deep Uncertainty see: https://halshs.archives-ouvertes.fr/halshs-01166279


The Independent Evaluation Group evaluates the work of the World Bank Group to find what works, what doesn't, and why. IEG evaluations provide an objective assessment of World Bank Group results, and identify lessons learned from experience. Through independent evaluation, IEG is helping the World Bank Group achieve its twin goals of eradicating extreme poverty and boosting shared prosperity.