Why some educators have serious problems with Minnesota’s school accountability system

Four years after its launch, Minnesota’s homegrown school accountability system is under increasing fire from a growing number of educators and policymakers for failing to accurately and fairly reflect all students’ academic growth.

“It’s wildly incomprehensible and arbitrarily designed,” says Kent Pekel, executive director of the Search Institute. Pekel, who has written a detailed critique of the MMR, was on the state advisory committee that agreed to what he says was supposed to be a temporary set of measurements. “It's just a bad system and it's time to move beyond it.”

Released last week, the latest round of Multiple Measurements Ratings (MMR) sparked a flurry of headlines touting state officials’ claims that the numbers showed two-thirds of Minnesota schools are on track to cut academic achievement gaps in half by 2017. Education Commissioner Brenda Cassellius chastised the Minneapolis and St. Paul districts for failing to participate in a state program she credits for progress in the on-track schools.

Yet both assertions are misleading, according to critics of the system — a number of whom are frustrated enough this year to break their silence on the issue.

Among the concerns is that the numbers are being used in political ways. “When you say two-thirds of schools are on track to close the gap, that suggests kids everywhere are making progress toward closing the gap,” says Pekel, the founder of the University of Minnesota’s College Readiness Consortium and an education official in the Clinton administration. “But that’s counting schools, not the number of kids. There are many more kids in the third of schools that are not making progress.”

Results of the Minnesota Comprehensive Assessments, released several weeks earlier, were essentially flat statewide. That those scores are used to calculate the MMR, which is touted as showing the gap closing, is confounding to many.

“The Minnesota Department of Education has consistently reported the outcomes of the MMR in ways that support the policies of the department and the Dayton Administration,” Pekel adds, “rather than as the objective agency that could serve as a convening force in Minnesota’s divisive educational debates.”

Specifically, many of the schools topping the rankings have worked with the department’s Regional Centers of Excellence because they are in small districts or in Greater Minnesota. The centers were designed to provide technical assistance that Minneapolis, St. Paul and other urban districts already had in house.

Yet a Sept. 1 Star Tribune story about this year’s MMR release described Minneapolis and St. Paul as “resistant” to asking the state for help.

“If the state really wants to meet its goals, we are going to have to see Minneapolis and St. Paul also improving,” Cassellius told the Strib. “We are ready to go all-in, but schools are locally controlled. But we are ready to do all hands on deck.”

One of Pekel’s top objections to the MMRs is that the results were supposed to be used to identify successful strategies so more schools could adopt them. It’s not clear this has happened, he says, and if the Regional Centers of Excellence have pinpointed great practices, “a much more aggressive state would be mandating them.”

“I appreciate the attempt to not just look at sheer proficiency rates,” says Mauri Melander, principal at Minneapolis’ Lucy Laney, a school that has consistently struggled to see its gains reflected in state measurements. “I just think every time you try a new formula you’re going to find flaws. You want two and two to always equal four, but it doesn’t.

“Before you know it, you’re not looking at children as children anymore,” she adds. “You’re just trying to fill pockets and that’s a sad place to be.”

Too complicated?

Melander’s colleague at Minneapolis’ Green Central, Matthew Arnold, also has numerous classrooms showing double-digit growth yet is contending with a failing designation. The coaching and grade-level teamwork that he is confident are driving the progress are the same strategies beginning to move the needle at Lucy Laney.

But because of the phenomenally complicated way the MMRs are calculated, neither school’s progress is acknowledged.

Department officials say they addressed a number of concerns about the methodology in 2014. But there are some factors that are non-negotiable, they counter, given that the system was designed to meet very specific federal requirements.

“Some people don’t realize we’ve been through two iterations of the MMR,” says Cassellius. “I think a lot of educators don’t know we tweaked it to get the variability out of it.”

“Its first weakness is its incomprehensibility,” says Pekel. “You can say what you want about the U.S. News [and World Report] rankings, but if you look in the back you can see how it is calculated.”

The MMR system compiles scores in three “domains” — proficiency, growth and progress toward cutting the achievement gap in half — plus graduation rates in the case of high schools, to come up with a number on a 0-100 index. How schools demonstrate progress in each domain varies.

Schools near the top earn “Reward” and “Celebration-eligible” designations, while those near the bottom are singled out for interventions. Because on some measures schools compete against one another rather than against a fixed goal, it’s hard for those at the bottom to move up.

Tougher for schools with diverse populations

MinnPost called Pekel after hearing from several school- and district-level assessment coordinators who spent time trying to figure out why they were in the bottom — despite what they felt was strong growth in the classroom.

Particularly in urban areas, leaders of schools with large populations of students of color have long groused privately that the MMRs do not reflect their year-over-year growth.

The methodology, they insist, is weighted against schools with concentrations of impoverished children of color. Schools with large numbers of poor white students can earn a quality designation by making progress on just one indicator, while those serving more diverse populations must meet a dozen targets.

Several of the schools touted in last year’s state press releases, for example, are located in Greater Minnesota; they have large numbers of low-income white students and not enough children of color, special education students and English-language-learners (ELL) to count. (A subgroup must have 20 students in order to be taken into account by the MMR.)

By contrast, urban schools with more diverse populations have to hit many more academic-performance targets to earn the same points. More baffling, students in a particular racial or ethnic group (Latinos, say) must also meet separate, often higher, targets based on poverty, special education and ELL status.

A complicated history

Some of this is due to federal law, Cassellius counters. “Every single student in every single group needs to count,” she says. “We need them to increase.”

Cassellius says it’s not realistic to look at changing the calculations behind the MMR while a rewrite of No Child Left Behind — the law that first mandated the reporting of student progress — is before Congress. (Over the summer both the U.S. House of Representatives and the Senate passed revisions that would let states take control of accountability, but it’s unclear whether lawmakers will be able to bridge the chasms between the two bills. A conference committee headed by Lakeville Republican Rep. John Kline has made little progress so far.)

The impasse in replacing the 2001 No Child Left Behind (NCLB) is part of the reason the MMR was created. After years of gridlock, in 2011 U.S. Secretary of Education Arne Duncan announced that states willing to develop better accountability systems could win waivers from NCLB.

At the time, it was not clear that Duncan would eventually grant waivers to most states. NCLB’s unfair and punitive consequences were beginning to be felt throughout the state, and officials feared they might have only a short window to win relief.

Pekel was on the panel that advised the state on its waiver application. The panel agreed that the goal — cutting the state’s yawning achievement gaps by half by 2017 — was both ambitious and reasonable. “We really didn’t know whether that was a window that was just going to close,” he says. “The feds did require a ratings system to add up to a number.”

All in all, the MMR was judged much better than NCLB’s assessment system, yet still far from perfect, he says. It was understood that a more sound system would be developed after the waiver was secured. Indeed, some of Pekel’s objections are reflected in the feds’ initial response to a draft of Minnesota’s winning waiver application.

State officials say they “tweaked” the system in 2014 when Minnesota sought a one-year extension of the waiver. Earlier this year, the state was granted a second waiver.

Depending on what happens in Congress, Cassellius says she envisions the World’s Best Workforce legislation, passed several years ago, as the spine of a better state model, one in which districts would have more local control over accountability.

“People are reacting to the judgment,” says Eric Moore, Minneapolis Public Schools research and evaluation chief. “Obviously the more groups you have, your work is cut out for you more than a school that has maybe one subgroup.”

Adding to this are frustrations that two Minneapolis high schools, Edison and Patrick Henry, would have qualified for recognition had fewer students opted out of the 2015 tests.

Comments

There are so many things wrong with our current education assessment system it is hard to know where to start. Let me begin with a compliment to MDE. The MMR rankings are better than the US News and World Report ranking of the 100 Best High Schools, which fails to recognize poverty, diversity, or ELL challenges. It is basically a ranking of the wealthiest high schools in America.

The public's patience with the assessment of their children is running out. I would agree with parents that these tests are not measuring ability and potential in crucial areas of child development and learning. They certainly do not measure whether a student is college-ready or even ready for life as an adult.

We continue to test only in a few areas: math, science, and reading/writing. We ignore multiple intelligences, soft skills coveted by the private sector, and individual student progress. The tests we use also favor teaching methods aimed at white, affluent, college-bound students. Students who are kinesthetic learners, hands-on and career-tech oriented, or skilled in the arts will not generally do well on the tests we currently have in place. Creativity and innovation skills are rarely measured by any testing company. With all that we now know about child development and brain development, I would think that assessments would be aligned to that research.

MDE has been wanting a single accurate test that measures student college readiness for some time. Such a test would be invaluable for some (but not all) college bound students as it would indicate specific work that teachers and students need to focus on to achieve the minimum standard for college entrance.

While waiting for this single measurement tool to be developed (it will likely take years to accomplish), I would hope that in 2015 we would have the sense to acknowledge the other skills needed to become a success in life. Not only a success at certain job sites, but also success as a productive citizen, as a valued family member, and as a contributing member of the larger community. And certainly now must be the time to measure creativity, innovation, and soft skills (such as teamwork and collaboration) that are so desperately needed in today's competitive high tech work environment.

I think we should test students to find their individual unique strengths rather than to try to fit everyone in the math/science mold. Some testing should be done in elementary school but must include multiple intelligences. We seem to be focused on labeling students, schools, and districts as failures rather than finding individual student strengths and encouraging them to pursue higher education opportunities that fit their skill set.

The MCA tests are worthwhile for measuring math and reading/writing and have their value in those areas. MAP testing does a good job of measuring individual student growth. ACT and SAT tests also have some value in measuring likely college readiness. The Accuplacer test is not a good indicator of college success, nor, I believe, are the entrance tests that force adult students to take remedial classes and use up their grant/loan money in the process. When community college students have been away from the academic world for years, how likely is it that they will need remediation in math? Very likely. On a positive note, there are colleges, like the U of M, that have developed very good creativity tests.

So to your point, yes, I would prefer better tests than the ones we have and more testing in areas that would identify student strengths. I believe parents would be very willing to have their children take tests that would indicate skills and interests in their areas of strength. Their criticism, I believe, is aimed at high-stakes tests that label their children as failures at a very early age and often keep that label throughout their middle and high school years. It is no wonder these students lack confidence in their ability to succeed in high school or college, or even the desire to try.

I would put the burden on our higher education institutions to provide testing in the areas in which they offer coursework. In other words, strong fine arts colleges would have testing tools that would identify those strengths in students. Community technical colleges with strong career tech programs would have tests identifying the students who would benefit from their coursework. Good middle and high school teachers do this all the time by encouraging students to pursue their strengths. An encouraging word from a teacher, telling a student that he or she is good at art, music, theater, welding, carpentry, computer repair, or writing, could, and probably would, make all the difference in the world to that student. In contrast, our current tests say to these same students: you do not have the math/science skills to succeed in college, therefore you will likely not be successful. What a waste of talent and human resources.

I know this is hard, but the main point is: what is important, you measure; what you measure, you get better at. This is how business works. I see no reason why education, which is an industry, is not subject to the same idea. No measurement system is perfect, but just about ANY system is better than nothing at all (which is what Education MN wants).

SOMEONE out there among all those PhD-Ed's has to be able to figure this out.