The Practical Challenge of Doing What Works

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Michael Ford
March 5, 2018

“Show me what works and we will do it.” State policymakers said this to me on many occasions back in my practitioner days. It expresses an encouraging willingness to build public policy on strong social science. After all, who could be against what works? Unfortunately, the reality of governing human beings shows that the practice of evidence-based policymaking is far more complicated than simply finding and implementing what works.

Consider the backlash Boston Public Schools faced after using evidence to redesign school start times. It turns out Boston’s parents did not want what works; they wanted what was working for them. The whole episode reminded me of the recently departed Charles Lindblom’s classic The Science of “Muddling Through,” in which he wrote, “Agreement on policy thus becomes the only practicable test of the policy’s correctness.” Though Lindblom was specifically referring to agreement among administrators, the lesson easily translates to public acceptance of policies. If the governed do not accept a government policy, regardless of the evidence in its favor, that policy is doomed.

An equally large challenge facing the implementation of evidence-based policymaking is the observation, made by Lindblom in that very same essay, that individuals place varying levels of importance on different social objectives. As such, establishing the very premise of what works is often as vexing as determining whether a policy is delivering on its promise. To illustrate, I will use the case of the Milwaukee Parental Choice Program (MPCP), the nation’s oldest and largest urban private school voucher program.

At its founding the MPCP had multiple overlapping, and sometimes conflicting, social goals. Voucher policy was viewed simultaneously, by different constituencies, as a means of empowering low-income parents, a cost-savings reform, and a way to improve both public and private school test scores through competition. The presence of these multiple goals means different constituencies can, at the same time, voice rational, evidence-supported conclusions about whether the policy is working. So whose conclusions are heard?

Well, in public policy and administration, context matters a great deal, and the context of Milwaukee’s school voucher policy is a highly political one. In a February 7 tweet, Isral DeBruin, who works with a large cross-section of Milwaukee schools, put it succinctly: “With politically fraught topics like private school vouchers, I fear nobody can be seen as an ‘honest broker.’ It seems like no matter who you are, one side or the other can find some reason for why your research findings are partisan instead of academic.”

In theory, the problem of politics and confirmation bias in high-profile policy debates can be overcome by well-designed, transparent policy evaluations. But the MPCP is, in fact, one of the most studied public policies in recent history. While much is known about how Milwaukee’s school voucher policy works, there remains a debate over whether it works. Supporters point to real, measurable test score gains. Opponents point out how substantively small those gains are. Evidence piles up, and no consensus on its meaning emerges.

Of course, most public policy changes are less controversial than the Boston school start time or Milwaukee school voucher cases, and these outliers do not suggest evidence-based policymaking is a bad idea. High-quality policy evaluation and the pursuit of what works are both noble and necessary for the advancement of our society. But, as stated, evidence-based policymaking poses a practical challenge for policymakers, administrators and researchers. A few simple considerations can help overcome it.

First, researchers should always engage practitioners when designing policy evaluations. This simple step helps ensure methods and assumptions align with the practical needs of individuals in the field. Second, researchers should take care to include discussions of substantive significance when presenting results; the “why” and “how does this matter” questions should always be addressed. Third, zones of public acceptance should be measured and considered before rolling out a new policy, regardless of the evidence in its favor. A botched rollout à la the Boston Public Schools only serves to kill a potentially impactful policy change. Finally, we must all be humble when presenting the body of evidence for or against a given social policy; the full worth and potential of a policy can rarely be ascertained solely through a randomized controlled trial.

Author: Michael R. Ford is an assistant professor of public administration at the University of Wisconsin – Oshkosh, where he teaches graduate courses in budgeting and research methods. He has published more than two dozen academic articles on public and nonprofit board governance, accountability and school choice. Prior to joining academia, he worked for many years on education policy in Wisconsin.

