Friday, September 8, 2017

Why don't policy makers use evidence?

Some serious claptrap here, this time trying to explain why policy makers do not use evidence.

Until some time back, I had only one explanation: development is a faith-based activity, and evidence can, at best, have an effect at the margins, hastening the formation of a narrative strong enough to tip the balance in favour of change.

Now, as one sees more of the nature of the evidence being churned out in prestigious research journals, another dismal dimension suggests itself. What if the outcomes on which evidence is being generated did not require any evidence at all? What if the type of evidence actually required is of an altogether different nature? Worse still, what if those generating the evidence do not even know it - an unknown unknown?

No serious policy maker disputes that surprise inspections, or third-party quality audits, or Teaching at the Right Level, or supporting entrepreneurship and skill development, or providing consulting services to Small and Medium Enterprises, or nudging on everything, or environmental protection zones, or introducing congestion pricing, or graduation programs, or outcomes-based stuff, or PPPs... are all good, even great. Lack of evidence, or difficulty in appreciating the evidence, is not the reason for their non-adoption.

It is just that it is super hard to navigate the plumbing challenges and effectively implement them in weak systemic environments (weak state capacity and/or very limited or small markets). Or there are just no physical and human resources available to do so. Or catalysing the markets is very difficult. In the case of politicians, it is as Jean-Claude Juncker said: "We all know what to do, we just don't know how to get re-elected after we've done it"!

Researchers would do well to go beyond looking for such "soft" evidence and try to generate evidence that can help at least a few interested policy makers move forward with implementation. Unfortunately, even assuming we navigate the plumbing challenges and start searching for evidence that can inform implementation, there is unlikely to be much generalisable or actionable evidence.

3 comments:

Policy makers are like a "poor batsman" at the crease, facing Shoaib Akhtar and Brett Lee with a broken helmet, torn gloves, and an empty stomach. From their vantage point, they know all the "technique" (the right things to do) but are constrained by the aforementioned factors. All they think they need is to be provided with better conditions, and they will execute all the right techniques.

Researchers are like commentators sitting outside, partly oblivious to the batsman's condition. Their main job is to comment on the "technique". From their vantage point, it looks like the batsman lacks technique, judging from the way he reacts to the balls.

Additionally, things move very slowly in real life. So it is like watching this batsman face yorkers in slow motion. In slow motion, even minute deviations that might not matter in the larger picture seem pretty big (and become worth writing papers on).

The truth may lie somewhere in between. The batsman may choose to believe that he knows all the technique, but the focus on getting around impediments like torn gloves may have taken his mind off the technique itself; he might not have studied it in depth or kept up with the latest developments.

Similarly, no matter how hard they try, it is difficult for commentators to fully appreciate how much of the lapse in technique is down to the torn gloves.

Interaction will help both - commentators will appreciate the real-world factors, and policy makers will realise that they don't actually know as much technique as they think they do. Their excessive focus on the day-to-day picture may have constrained their ability to step back, look at the big picture, and reflect.

Even the best batsman in the world will gain from an external analyst who dissects his moves closely, finds faults in his patterns, and suggests corrections. This is why, as Atul Gawande says, even Olympic athletes have coaches. Needless to say, this is useful only if the batsman is in a position to implement the advice, which many aren't, due to impediments like the torn gloves and so on.

PS: Teaching at the Right Level is not that obvious. There is still a large group of influential people, mainly pedagogy theorists like Prof. Krishna Kumar (former NCERT Director), vehemently opposing it and writing articles against it.

I agree with the broader point you are making on marrying both views and the relevance of coaching, but I think that is a digression.

The point I am making is this - we now need to go beyond what much of development economics research is fixated on, and focus on generating takeaways that can help practitioners get things done. It is not that some of these issues never needed evidence (TaRL, until recently, was not a mainstream idea - which explains the problem that old-timers have with it), but that is no longer the case.

At a personal level, I have struggled with the uncertain elements of actually doing all these things. I could have done with some help in the form of implementation evidence as a starting point for my iterations...

"..... implementation evidence" ... better not to expect it from development researchers. Implementation goes under their "assumptions" section. :)

I don't think the incentives are aligned. Just as it is uncool for governments to do "last-mile plumbing things" compared to "inaugurating big projects", generating evidence on "the nitty-gritties" doesn't add much value to a professor's CV compared to pursuing "big ideas" and "theories of change".

It's also a matter of time and mindset change. Even today, the big growth theorists ridicule the idea/principle-testing randomistas for focusing on unnecessary details, thereby diverting valuable human resources away from working on "big growth theory questions" (the only things that matter). It takes a while for people to transition from idea/principle testing to the next level of detail - working on implementation evidence.

Maybe it will take someone like Angus Deaton, who made what was perceived as a mundane topic (the fine nuances of household surveys) cool. Or maybe the number of researchers will grow to the point where it becomes too crowded to pursue only big interventions.