A study by the Center on Education Policy casts doubt on the conventional wisdom that No Child Left Behind causes teachers to shortchange high- and low-performers, given the law’s incentives to get students to the proficient level.

“If accountability policies were indeed shortchanging high- and low-achieving students, we would expect to see stagnation or decline at the basic and advanced levels,” says Jack Jennings, CEP’s President. “Instead, the percentages of students scoring at the basic-and-above and advanced levels have increased much more often than they have decreased, especially in the lower grades.”

Hear, hear for higher test scores at all achievement levels. But how does that show high-achieving students aren’t suffering under NCLB? Testing measures where students are, not where they could or even should be. If there’s anything I learned teaching at a struggling school, it’s that the stronger students are largely assumed to be doing fine despite being neglected, a point nailed precisely in the Jack Kent Cooke Foundation’s “Achievement Trap” report a few years back.

Such children are dandelions. They will find a way to grow even in the harshest conditions. I can walk out onto the sidewalk and gather a bouquet of dandelions growing up through the pavement cracks. That doesn’t prove I’m a good gardener.

The assumption that the top kids are doing fine despite being neglected and denied level-appropriate resources is not limited to struggling schools. Even in highly affluent, highly educated suburban districts, I have heard administrators and some teachers argue against giving resources to the gifted (some object to the very existence of such programs) with the phrase “those kids will do fine anyway.” No one acknowledges that in such a district, where most of the kids probably are above average and a significant number are likely to be highly gifted, all kids should be challenged.

The likelihood that such tests truly measure the top of the achievement curve is laughable. The magnet high schools in the districts above use SAT scores as part of the admission process because none of the high-school tests (SSAT, etc.) can differentiate at the top. Accepted kids not uncommonly score in the mid-700s (old scoring) on both the SAT math and verbal sections, AS EIGHTH-GRADERS.

Notice how Jennings chose his words: he said he didn’t find evidence that NCLB caused a decline for higher- or lower-achieving students. He did not say that NCLB actually increased LEARNING. To show that, at a minimum the CEP would have had to compare state scores with NAEP. As always, the CEP was very circumspect.

This is the first of three studies. I get the feeling that the other two won’t be so narrowly targeted and will address what state-score increases actually mean. If so, will the pro-NCLB people turn on the CEP?

I sound like a broken record since I discovered Charles Payne, but his So Much Reform, So Little Change provides a good way to look at state scores. At a minimum, they mean that educators are working harder. Obviously something “real” is happening in classrooms. Whether it is producing any more real learning is still unclear.

Regardless, think of what a narrow point is being made. We spend tens of billions of dollars, and it’s big news when a study cannot find actual harm being done to upper- and lower-level students?

To be fair to the NCLB defenders, the CEP study does suggest that scores actually rose for both low and high achievers. It strikes me as a bit speculative to argue that such a rise is less than what it COULD have been.

I’m more concerned that the assessments themselves are not necessarily telling us what we want to know. Test-prep strategies can presumably drive up scores among low, medium, and high performers. It would be interesting to determine how challenging assignments have been, and to have a look at work by students at all performance levels. Alas, that strategy is very expensive.

Yes, Claus, it’s speculative, but so is the thesis that NCLB works for all students because reading scores go up, as if that were the alpha and omega of an education. I second John Thompson’s take above: are we really to the point where proof of no harm done counts as an achievement? That’s pretty depressing.

Are we sure no harm was done? I wonder, for example, how many top-performing high school students have to write extended analytical essays. Perhaps there has been no erosion on this score, but NCLB doesn’t really give us very reliable information about the rigor and “cognitive challenge” of assignments. (Sorry for the jargon!)