Narrowing the rhetoric

The latest study out of the Manhattan Institute makes a mountain out of the proverbial molehill. While the report makes an important contribution to a largely neglected area of research, it also overextends its findings. What's more, co-author Jay Greene's suggestion that its discoveries provide evidence regarding the narrowing of the curriculum is somewhat disingenuous.

The study reports that particular fifth-grade students--those enrolled in Florida elementary schools that received an "F" from the state's accountability program in 2001-2002--made test score gains in reading and math, as well as in science, the following year. (A similar pattern for science was noted in the comparison schools.) This is worth knowing--part of the accumulating evidence that "accountability works" to boost student achievement. But it's important to be precise about what this study does and doesn't show.

First, the methodological concerns: We start with the report's title, Building on the Basics: The Impact of High-Stakes Testing on Student Proficiency in Low-Stakes Subjects. Sounds nice, but this is not actually an impact study; it's a simple correlation study and can therefore make no causal claims. Moreover, its design does not acknowledge that data are "nested" within classrooms, schools, and districts, an altogether different problem about which another whole article could be penned.

And because the researchers had only one year of science data (the state science test was first administered in 2002-2003), they had to develop a proxy baseline score in order to measure science achievement gains before and after the sanction occurred. Thus, the study substitutes math and reading scores for science scores, assuming that if the students had taken the science test before 2002-2003, their scores would have been similar to those the pupils received on tests in other subjects. This is a reasonable surmise; student proficiency in science is indeed correlated with proficiency in math and reading. Still, proxies are proxies, and the results may have looked different with real baseline science data.

Even if we give the researchers a pass on some of these methodological missteps, the study itself is restricted in scope (another of its authors agrees). Let's not forget: it examined just one state, in one year, in one grade. The results really aren't that surprising. As a matter of fact, an interim spillover scenario is easy to imagine: In 2001-2002, Florida doles out scarlet letter Fs, designed to embarrass low-performing schools and catalyze remedial action across the board; the very next year, Greene et al. observe data in multiple subjects and, lo and behold, they detect gains.

Put succinctly, this study does not prove that the school curriculum isn't narrowing in the aftermath of reading- and math-focused accountability regimens, and it's misleading to suggest that it does (see here).

Reliable survey data show that the time spent teaching subjects other than reading and math has, on the whole, decreased in the past few years (see here, here, here). And the Manhattan Institute study's contention that science is a "low-stakes" subject in Florida is not entirely true, either. If a state test is administered in a subject, regardless of whether it counts toward the school grade, that subject cannot be described as low-stakes, not from a teacher's standpoint. (Martin West, for example, found that schools in states that simply test in science spend roughly 26 percent more time a week teaching science than do their peers in non-testing states. Remember the educator truism that what is tested is what gets taught.) Keep in mind that Florida tested science for the first time in the very year on which this study is based. Can a reasonable person suppose that the state's schools therefore downplayed science teaching during that year?

This is a small-scale study of the relationship between a state's sanction policy and short-term achievement gains. Any "narrowing" questions are simply out of its scope. At a recent conference, co-author Marcus Winters admitted as much when queried by an audience member about potential neglect of non-tested areas: "I'll leave that [question] to the people who have done work on narrowing the curriculum." Indeed.