Thursday, January 08, 2009

Remember the Achievement Gap?

Text below is from the NEA Today, which has come out of its hidey-hole now that the big bad Bush is building a new security perimeter for his return to Texas. What a bunch of cowards. If this crew of self-serving bureaucrats had mobilized their 2 million members eight years ago, we would not have had this debacle to begin with.

But, instead, they played it safe, just as they are about to play it safe again as they cuddle up with the corporations to strangle the public schools with more tests and more money based on test scores.

What the charts show, of course, is that trading in the bigotry of low expectations for the racism of NCLB's impossible demands has done nothing to close the achievement gap.

. . . .Is it working? Or as President George W. Bush once asked, “Is our children learning?” We hear a lot about test-stressed students, curricula stripped down to make way for teaching to the test, and exasperated teachers leaving the profession. But NCLB supporters say if students do better in reading and math, and if low-income, minority students close the achievement gaps, that’s worth the agony.

And we do hear that in many schools, teachers are getting out of their silos and working together to help all children achieve.

What’s more, scores on state tests are definitely climbing.

So, is that proof of success?

No, it isn’t, according to leaders in the science of testing. Scores always rise when you put high stakes on a particular test, whether or not students actually know more. This phenomenon even has a name: Campbell’s Law.

Harvard University Professor Daniel Koretz, a leading test researcher, explains it with an analogy to polling before an election. Pollsters can’t call every voter. Instead, they choose a small sample. Let’s say a campaign polled 1,000 likely voters and poured all its energy into winning over just those voters, ignoring everyone else. It would probably see encouraging gains among the 1,000 voters, and then lose the election by a landslide.

Koretz says a math test works the same way: No test can cover all the skills from every angle that students should master, so the test is just a small sample. If you focus on teaching kids to correctly answer problems that use a particular question format and cover only a narrow range of skills, students will do better and better, that is, until someone asks them questions in a different way, or measures a different set of skills from the larger curriculum.
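Koretz's sampling argument can be sketched as a toy simulation (the numbers and the scoring model here are illustrative assumptions, not Koretz's data): imagine a curriculum of 100 skills, a high-stakes test that samples only 10 of them, and coaching that raises mastery only on those sampled skills. Scores on the familiar test climb, while a test drawn from the rest of the curriculum shows no gain at all.

```python
import random

random.seed(1)

CURRICULUM = list(range(100))                  # 100 skills in the full curriculum
TEST_SKILLS = random.sample(CURRICULUM, 10)    # the high-stakes test samples only 10

# Baseline probability that a student answers an item on any skill correctly.
baseline = {skill: 0.5 for skill in CURRICULUM}

# "Teaching to the test": mastery rises only on the sampled skills.
coached = dict(baseline)
for skill in TEST_SKILLS:
    coached[skill] = 0.9

def expected_score(mastery, skills):
    """Average probability of a correct answer over a set of test skills."""
    return sum(mastery[s] for s in skills) / len(skills)

# Scores on the high-stakes test climb well above the 0.5 baseline...
print(expected_score(coached, TEST_SKILLS))

# ...but a test sampling different skills from the same curriculum shows no gain.
other_skills = [s for s in CURRICULUM if s not in TEST_SKILLS][:10]
print(expected_score(coached, other_skills))
```

The point of the sketch is that both tests are legitimate samples of the same curriculum; only the one that teachers prepped for shows "improvement."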

Koretz carried out an ingenious demonstration of this phenomenon in the 1980s in a school district he had to agree not to name. The stakes on test scores in that district were “laughably low compared with today’s,” he says, but teachers did feel pressure to get scores up.

When the district switched to a new test, Koretz says, “scores dropped like a rock.” But over the next four years, they rose steadily.

Now comes the clever part: Koretz gave students the old test, the one that no longer carried high stakes so teachers didn’t prep students to take it. Their scores plummeted. His conclusion: Four years of rising scores did not reflect real achievement, just teaching to a new test.

Research on scores on high-stakes tests in Kentucky and Texas also showed Campbell’s Law in action.

So to see whether NCLB is really boosting achievement, we can’t rely on high-stakes state tests. We need to look at scores on a test for which students don’t get prepped.

Luckily, there is one: the National Assessment of Educational Progress (NAEP). It’s given to large, random samples of students periodically, but there are no scores for individual schools so nobody’s career is at stake.

Last Update: January 9, 2:35 pm

I'm not sure what the crack staff at NEA was doing with their charts, but here are a couple from NAEP that demonstrate the achievement gaps over time. For some odd reason the Age 9 chart would not upload, but it shows a very similar pattern. For more up-to-date comparisons from NAEP, go here.

2 comments:

I am new to your site and REALLY appreciate the postings. I plan to spend more time perusing it in the upcoming weeks.

I have one question, however. I keep looking at the graphs above and keep thinking the labels for "black" and "white" must need to be switched. I might be missing something obvious, and feel slightly awkward for asking, but could you please clear this up for me?

Also, here's something to read (if you haven't run across it yet): http://www.epi.org/policy/EPIPolicyMemorandum137.pdf

Jim, I can see that you used the graphs from the NEA website, but I don't understand the test scores attributed to "whites" and "blacks." Are the "blacks" supposed to be the higher scores (in pink) or are the higher scores actually the lower numbers (in yellow)?