Experience tells us that a simplistic approach based on pre-canned policy recommendations derived from technical analyses and regressions simply doesn’t work. The reality is much more complex.

What are called “almost impossible problems” or “wicked problems”, i.e., the problems we face in complex systems, are solved through evolution, not design.

For evolution to work, there needs to be a process of variation and selection.

In development work today, there is a lot of proliferation without diversity, and certainly not enough selection.

Especially missing are feedback loops to establish what works and replicate it, while scaling down what doesn’t work.

One especially important feedback loop is the needs, preferences and experiences of the actual beneficiaries. Because too little effort is spent on rigorous impact evaluation and too much on process and activity evaluations, this feedback loop often doesn’t work. Direct feedback from the citizens themselves should be taken into account more seriously: “People care deeply about whether or not they get the services they should be getting.”

The establishment of better and more effective feedback loops is a crucial ingredient in improving program effectiveness: “We have to be better in finding out what is working and what is not working”.

In evolutionary terms: we should not impose new designs; rather, we should build better feedback loops to spur selection and amplification.

But as a direct consequence, we also need to acknowledge the things that don’t work, i.e., failures, and adopt and adapt what is working. At the international policy level, the necessary mechanisms to replicate successes or kill off failures do not exist.

These insights remind me a lot of a discussion I was recently involved in with a group of international development organizations working together in a network called the GROOVE. The discussion was about ‘integrating experiential knowledge and staff observations in value chain monitoring and evaluation’. During the webinar in which the discussion was held, two important insights were voiced that correspond with Owen’s points above:

Staff observations can add a lot of value to M&E systems in terms of what works in the field and what doesn’t.

There is a need for a culture of acknowledging and accepting failures in order to focus on successful interventions.

Now, what does this mean if we have – for example – to design a new project? Firstly, I think it is important that the project has an inception period during which a diversity of interventions can be tested. But we also need an effective mechanism to assess what impact these interventions have – if any. Here we run into the problem of time delays: often, the impact of an intervention is delayed and might become apparent too late, i.e., only after the inception period. Especially when we base our M&E on hard impact data, we might not be in a position to say which interventions were successful and which weren’t. Therefore, we need to rely on staff observations and the perceptions of the target beneficiaries. Again, a very good understanding of the system is necessary in order to judge the changes that happen in it.

As Eric Beinhocker describes in his book “The Origin of Wealth”, evolution is a very powerful force in complex systems. Beinhocker characterizes the economy as a complex system when he writes: “We may not predict or direct economic evolution but we can design our institutions to be better or worse evolvers”. I think the same goes for our development systems. We cannot predict or direct evolution in developing countries, but we can support the poor to become better evolvers. This also has strong implications for our view of sustainability, but I’m already sliding into the topic of another post.