Rethinking our methods for the cultural context

While evaluation is often portrayed as “objective”, evaluation – and evaluation tools – come from a particular world view.

At the first NSW Australian Evaluation Society session of the year, Kathryn Dinh, of Lotus evaluation, challenged us to think about how we could modify existing evaluation tools to respond to different cultural, political and philosophical contexts.

Kathryn’s PhD explored how existing evaluation tools fit with different world views, e.g. how Most Significant Change (MSC) fits with a Buddhist world view. Although MSC isn’t prescriptive about causal inference, in Western practice causality is generally seen as unidirectional. (That is, factor one exerts an effect on factor two.) A Buddhist world view, on the other hand, sees cause and effect as often interdependent, recursive and contemporaneous.

In MSC, the way stories of change are collected means the focus is on recency, and it is expected that stories will be supplemented by other data. However, in Buddhism, karma suggests past actions influence current outcomes, and intrinsic (rather than extrinsic) validation is important.

Kathryn concluded that MSC could be adapted for a Buddhist world view by encouraging participants to think more broadly about causality and look further back for influences and stories, as well as prompting evaluators to consider their reflections on stories of change.

Dinh’s research also considered the fit between contribution analysis and Confucianism and concluded that they were less compatible.

This made me wonder whether certain evaluation tools and approaches are more malleable than others. For example, rubrics – which can be co-developed with various stakeholders to identify value criteria and how they will be judged – would seem to be more fit for adaptation than some other approaches. We broke into discussion groups to consider this very issue.

Our group tackled program logics which, like MSC, typically have a linear understanding of causality. Contribution to higher-level outcomes (e.g. population outcomes defined in government strategies) is what is of value, and individual trajectories are subsumed within the overarching outcomes chain. Time is arbitrarily divided into neat chunks – short-, medium- and long-term (never mind that it is often unclear where short-term ends). “Success” is when the program is implemented as intended and achieves its intended outcomes. While logics do recognise that external factors might affect outcomes, these are generally in a box to the side that doesn’t force us to grapple with their impact.

However, logic models can and have been adapted in ways that better reflect other world views. They can be collaboratively developed to encompass different understandings of the important elements of an initiative and the outcomes to be achieved. Feedback loops can be added to show that progress is not always linear. Logics can be seen as living documents – to be updated as an initiative is implemented, and challenges emerge or adaptations are made to fit the context.

I have also worked on logics organised in the form of a circle, so that they better reflect the interrelationships between different actions at different levels. And we can use negative program theory to identify how a program might have the opposite of intended outcomes.

But there are limits to how far a logic model can be pushed before it looks more like a bowl of spaghetti than an organising tool. If this is the case, it might be more useful to turn to a different tool, such as a systems map with balancing and amplifying loops.

Another discussion group considered Randomised Controlled Trials (RCTs). This group found RCTs to be less malleable than logic models, but did identify some options for adaptation, such as realist RCTs.

My takeaway was that regardless of how fixed an evaluation approach or tool seems to be (or how rigidly it’s usually applied), it’s worth thinking about the possibility of applying it with greater flexibility.
