Periodic updates on developments in disability law and related fields.

Wednesday, September 05, 2012

Today in Misleading Headlines, Special Education Funding Edition

The headline on this story reads, "Academic success in special education not linked to spending, study finds." But the Fordham Institute study to which the story refers finds nothing of the kind. The study actually makes three findings:

1. There is a great deal of variation in special education spending and staffing across school districts in the United States.

2. In two pairs of school districts in each of five states (so ten pairs of districts total) -- pairs "chosen to illustrate the inverse relationship between special education inputs (spending) and outcomes (achievement)" (p. 11, emphasis added) -- the district with higher test scores for kids with special needs spent the same amount as or (usually) less than the district with lower test scores.

3. If school districts across the country with above-average staffing instead staffed at the national median, they would collectively save billions of dollars.

Based on these findings, the author recommends that the federal government should end maintenance-of-effort requirements that in many circumstances prevent states and school districts from cutting special education spending. The author also recommends that Congress should preserve and strengthen the subgroup accountability provisions of No Child Left Behind, and that it should give states more flexibility in using IDEA funds. And the author recommends that local districts should employ "better teachers, not more teachers or non-teachers" (p. 34) and should better manage special education caseloads.

I have sympathy for some, though not all, of these recommendations. But the crucial point is that the study does not advance the case for them. Much less does it show that academic success in special education isn't linked to spending. The only achievement comparison the study draws is the one among the ten pairs of districts, and those pairs were specifically chosen because, within each pair, the higher-achieving district wasn't the higher-spending one. This is a pretty blatant case of selecting on the dependent variable, and it eliminates any possibility of drawing a causal inference (e.g., that higher spending doesn't lead to higher achievement).
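To see why selecting on the dependent variable proves nothing, consider a toy simulation (entirely hypothetical numbers, not drawn from the study): even in a world where spending genuinely raises achievement, noisy data will still contain plenty of district pairs in which the higher-achieving district spends the same or less. Exhibiting ten such hand-picked pairs therefore tells us nothing about the underlying relationship.

```python
import random

random.seed(0)

# Hypothetical world: spending truly HELPS achievement (slope of 2),
# plus random noise from everything else that affects test scores.
districts = []
for _ in range(200):
    spending = random.uniform(5, 15)                    # per-pupil spending ($k)
    achievement = 2 * spending + random.gauss(0, 10)    # true positive effect + noise
    districts.append((spending, achievement))

# Now "select on the dependent variable": collect every pair in which
# the higher-achieving district spent the same amount or less.
inverse_pairs = [
    (a, b)
    for a in districts for b in districts
    if a is not b and a[1] > b[1] and a[0] <= b[0]
]

# Many such pairs exist even though, by construction, spending causes
# higher achievement in this world.
print(len(inverse_pairs) > 0)
```

A critic could hand-pick ten of these pairs and "illustrate" an inverse relationship, exactly as the study's district pairs were chosen; the illustration would be real, and the causal conclusion would still be false.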

The study plays cute with this point. It includes a parenthetical statement that "[w]e do not imply that these relationships are causal," and that, given that the district pairs "were chosen to illustrate the inverse relationship between special education inputs (spending) and outcomes (achievement)," it's "not surprising that they did, in fact, illustrate that relationship" (p. 11). But if the study's results have anything whatsoever to do with the recommendations included at the end, it must be because there is a causal relationship. Unless the study has shown that there is no connection between high spending and high achievement, it can provide no justification for policy recommendations that say districts should spend less. So the author at least trades on the suggestion that the study supports a causal inference -- a suggestion that the Washington Post's copy desk elevated into a headline.

A real study would compare a broad portfolio of demographically diverse districts, a portfolio chosen for its representativeness of districts across the country. It would not look at 20 districts chosen precisely because they illustrate the point the author wanted to make. And, unlike the Fordham Institute study, it would attempt to control not just for differences in socioeconomic characteristics of the districts it studied but also for differences in the types and extent of the disabilities in the children served by those districts.