Opinion: Some Cold, Hard Facts About Common Core State Standards

Teaching to the test is not necessarily a bad thing, nor is the use of data to evaluate teaching effectiveness.

Laura Waters

There’s nothing new under the sun, says Ecclesiastes, but New Jersey teachers, administrators, parents, students, and school board members may be forgiven for feeling otherwise as schools open this year. Beneath the familiar gush of warm, welcoming hugs is an undercurrent of anxiety. Like much of the country, New Jersey is magnifying its use of cold, hard data in order to focus on student growth and teacher proficiency.

Starting right now, our 590 school districts will implement the Common Core State Standards, an initiative that requires realignment of course content to fit more ambitious learning goals.

TEACHNJ, the tenure reform legislation passed last year at the Statehouse and primed for a full rollout this year, ties teacher evaluations and tenure decisions to student standardized test scores. And New Jersey is one of 14 states (plus the District of Columbia) preparing for the full implementation of PARCC (Partnership for Assessment of Readiness for College and Careers) tests next year, which assess student mastery of the Common Core.

While some welcome these new initiatives as gateways to higher academic outcomes for all students, others are less sanguine. Critics concerned about the overreliance on data in decisions about teachers’ tenure and compensation point to the insensitivity of algorithms to the ineffable nuances of teaching and learning, even when standardized test scores are weighted for disabilities and socioeconomic status. Many teachers are daunted by the prospect of compiling the voluminous portfolios intended to prove classroom effectiveness.

While union officials worry about the impact of data on their members’ job security, the Education Law Center (ELC) is alarmed at the impact of higher academic standards on children who endure placement in one of New Jersey’s poor urban districts.

The new PARCC tests, unlike our current high school tests, require meaningful mastery of course content. A recent press release from the ELC warned that “new and harder tests are on the way, and the bar for a high school diploma is about to become a moving target . . . If NJ adopts PARCC’s ‘college and career ready’ score as the threshold for high school graduation, thousands, or perhaps tens of thousands, of students will not get a diploma.”

At the other end of the economic spectrum, a cohort of mostly suburban parents (who exercised school choice by staying away from the districts that the ELC represents) worry about the emotional impact of high-stakes tests on their children. An organization called “United Opt Out National: the Movement to End Corporate Education Reform” includes instructions for boycotting New Jersey’s standardized tests.

Amidst all the catcalls, perhaps it’s useful to find a patch of common ground and defuse some of this gloomy discord. Here’s a short list to get us started:

Data is not evil. Using quantitative measurement as part of a profile of teaching effectiveness can aid educators and students. No one (at least no one with any credibility) claims that statistics can represent all facets of proficient classroom instruction.

Data-based teacher evaluations are here to stay. Gauging teacher effectiveness solely through qualitative measurement such as subjective, often cursory, classroom observations is an obsolete model that served neither students nor educators. On the other hand, we must be wary of constricting great instructors with cookie-cutter rubrics; that’s bad for the teaching profession and bad for kids. There’s a tipping point. We don’t know where it is yet, and we won’t find it until we try.

Any new initiative involves risks. However, unless we argue that New Jersey’s public schools are uniformly fine (and, again, no one with any credibility makes that argument), the upside of boosting student learning -- and that’s why we’re here, right? -- outweighs potential hazards.

Well-functioning schools have always used student growth data to inform instruction and teacher evaluations. It’s a matter of degree, not substance. Some schools are better at this than others, and a standardized system of data infusion holds all schools to a similar bar, promoting equity in a state that is pockmarked by academic inconsistency.

Like data, tests are not evil unto themselves. We’ve always tested students, usually without harming their psyches. Aaron Pallas, an outspoken critic of standardized testing and value-added metrics (Diane Ravitch calls him “one of the wisest education scholars in . . . the world”), remarks that “teaching to the test is not necessarily a bad thing if the content on the test is a representative sample of the broad array of skills and competencies it is intended to measure.” Like fine-tuning the degree of data infusion into teacher evaluations, we must find that delicate balance within classrooms themselves.

We won’t, however, find that balance without making inherently uncomfortable and occasionally clumsy shifts in emphasis. It goes with the territory. Perhaps there’s nothing new under the sun, but most likely there are new ways to use data in a collaborative quest to improve student learning.