Can evidence improve America's schools?

It is not reasonable to expect research to resolve all issues or to erase all differences of opinion. We can but supply some information that we think reliable, and we will continue in the future to supply more. But it is up to the American people to decide what to do. The better their information, the wiser will be their decisions.

No, it’s not easy. Policy makers can exhort educators to adopt “evidence-based practices,” as Congress did in both No Child Left Behind and the Every Student Succeeds Act. Philanthropists and advocates of every ideological stripe can do the same, and they frequently do. Think tanks and scholars and evaluation shops and the What Works Clearinghouse (WWC) can pump out studies and practitioner guides. Structures can be put in place to incentivize education leaders to seek evidence about “what works”— results-based accountability systems, for instance, or the competition that comes with school choice. Insights from research can be embedded into academic standards like the Common Core. Yet it seems to me that all of these efforts have gotten limited traction. Education remains a field in which habit, intuition, and incumbency continue to play at least as large a role as research and data analysis.

The question is why, and what might be done about it. Many people much smarter than I have thought hard and long about these questions, among them Vivian Tseng at the W.T. Grant Foundation, Tom Kane at Harvard’s Center for Education Policy Research, and Michael Barber at Pearson. Some of the key problems they identify include:

Limited supply: There are undoubtedly more research findings to guide practice today than there were a generation ago; it’s no longer fair to call the What Works Clearinghouse the “Nothing Works Clearinghouse.” As IES founding director Russ Whitehurst told me, the Clearinghouse has identified 111 effective educational interventions in the last twelve years. Rigorous studies have made a big impact on teacher evaluations (for better or worse) and helped make the case for high-quality charter schools. (Ruth Neild, IES’s current acting director, points to yet more examples.) Still, we could all name dozens of practical questions for which education research still hasn’t provided definitive guidance.

Too much supply—of the wrong kind: Education is awash in reports, journal articles, emails, tweets, and news stories, all making claims about “what the research shows.” It’s too much for anyone to sift through, and much of it is bogus to start with, so some educators understandably shun it all and keep doing what they’ve always done.

Poor dissemination: A recent study from the National Center for Research in Policy and Practice found that fewer than one in five district administrators checks the What Works Clearinghouse “often” or “all the time” for research findings. Instead they look to books, turn to peers in professional associations, pick up ideas at conferences, and rely on state education departments and the news media. Maybe if the WWC and similar outlets (like this one) did a better job pushing out their findings, they’d have a better uptake rate. (I hear that a new and improved WWC website and social media strategy is coming next week.)

Weak incentives: Maybe test-based accountability and competition from school choice aren’t enough to entice leaders to seek out evidence-based interventions. Maybe what’s needed is an FDA for education, an entity with explicit regulatory authority to keep districts from purchasing dubious products and services. (Then again, if you thought Common Core was controversial….)

Ideology: It’s those education school professors! They’re fundamentally opposed to the reform agenda, measuring schools via student outcomes, and hard-nosed quantitative analyses. Our teachers and principals get trained to love the warm-and-fuzzy while in college or grad school, and they never recover.

Habits of practice in schools and districts: Maybe the problem is that educators aren’t particularly open to new research in the first place. Perhaps they’re weary of the “reform of the month.” Maybe educators distrust the “external validity” of national studies and only put faith in findings from studies about their own students and contexts.

There’s surely some truth in all those explanations, which means that we should stay open to a variety of solutions for addressing the problem. Some options include:

Book it! If education leaders often turn to books for ideas and evidence, let’s develop evidence-based books that might have an impact. Doug Lemov’s best-selling Teach Like a Champion demonstrated a market demand for specific, practical advice for teachers. I’d personally love to work on Evidence-Based Elementary Schools, which could share practices that boost achievement—especially for disadvantaged kids—including teaching a broad, content-rich curriculum. (Anyone out there want to pay for that or publish it?) It would help if universities rewarded junior scholars for publishing well-read books when making tenure decisions.

Get together! Professional associations and personal networks are key sources of information and ideas, so reformers and researchers should do a better job partnering with the key education groups that already exist, like ASCD, AASA, and NCTE. Another option, naturally, would be to create new ones. Tony Bryk’s “Networked Improvement Communities” represent one promising model. And how about a national network, or at least an annual gathering, for chief academic officers from large districts and charter management organizations? These folks don’t yet have their own association, perhaps because the role is relatively new. Yet they make key decisions that could and should be guided by evidence, including textbook selection, daily schedules, and so on.

Go small! As Tom Kane and others have been arguing, we might shift a hefty chunk of our research funding from large, national impact studies to smaller, local, “short cycle” evaluations. Such evaluations can help districts and charter networks learn quickly what’s working and what’s not, and adjust appropriately. And we should strive to produce studies that go beyond simply “what works” on average to uncover what works for particular kinds of students in specific situations.

To be frank, I’m not sure any of these strategies will gain traction, at least in our traditional school system. As I wrote in an earlier post about education leadership, nobody seems to know how to transplant the DNA of our best charter management organizations, like KIPP, into central office bureaucracies that have learned to pay more attention to the dictates of elected boards than to what’s best for kids. I don’t know why so many schools and school systems, including some I’m personally familiar with, seem so uninterested in tweaking their curricula, hiring, schedules, student assignments, or anything else that might make them 10 or 20 or 30 percent better.

One final thought: What Works circa 1986 was an earnest effort undertaken because then-Secretary of Education Bill Bennett said it was needed—and devoted much of the department’s discretionary budget to dissemination. I can think of no better mission for whoever takes that post next year than to push not just his (or her) agency, but also the field itself, to infuse U.S. schools with practices that actually help kids to learn.