In lieu of a longer post, let's just say these sections are all straightforward -- much more so than the NECAP -- and map more closely to math as exercised in math class. Each section looks pretty much as you'd expect.

If this can be used as an alternate assessment in the RI graduation requirement, I'd recommend looking into it more closely, particularly the exact score required. But it is probably a better choice for getting over the hump, especially since there are more prep resources available.

I feel like a lot of the complaints about NECAP put folks in a damned-if-you-do, damned-if-you-don't position when it comes to design (set policy aside for a moment). Are we supposed to offer a test that looks for deeper domain knowledge and requires understanding at a conceptual level and linking disparate ideas together? Are we supposed to offer a very straightforward, factual-recall/algorithmic-application math test? Are we supposed to create strong barriers and walls between different mathematical concepts in an attempt to create very discrete and isolated assessments of specific tools? Are we supposed to provide problems that cross pedagogically derived units in an attempt to produce more authentic approximations of mathematical problem solving?

I have often heard the critique that tests cannot ask students to perform tasks that are truly complex and demonstrate deep learning and knowledge. So much of what people nationally (and generally) don't seem to like about broad-scale, scientific assessments is that they feel too unsophisticated to adequately assess knowledge. In Rhode Island, the complaint seems to be that a test which tries to do that is just too damn hard. The NECAP has questions that require connecting concepts in novel ways, which seems to me to demonstrate deeper command of the domain broadly. That is exactly the kind of quality we want in a more sophisticated assessment -- the kind so many say standardized tests never achieve.

And I know you're probably going to say you can't separate the policy from the measurement, but many people have tried to make the indictment of the policy all about the quality of the measurement. So I guess I'm just having a hard time understanding how you can simultaneously argue that these tests are bad measures -- because you can easily "prep" for them to see big gains without learning the content, and because they don't represent authentic activities -- while also saying they require complex problem-solving skills and combining different concepts in ways that demand deeper domain skills. It feels inconsistent.

The NECAP as a summative tool that was always used for accountability feels like a poor fit for directly informing instruction. I was never really on board with the folks at RIDE who were convinced this is how it should have been (or even was) designed and should be used. The information is not timely and is very difficult to understand, virtually by design, because of the requirements for accuracy, the summative nature of the test, and the complexity of testing such a broad domain of content (thinking HS only here). That's why many districts and schools have used various other products for interim and formative purposes, and rightly so. If the results of your students on the NECAP are a surprise, quite frankly, you're not doing your job. And I don't think many teachers or principals in RI are surprised at the results. They know their students are not prepared. The NECAP tells them very little new, other than some comparative information across a broader set of students. With an annual, summative exam, quite frankly, that seems totally appropriate.

I'm saying that it is a problem if you have a graduation test that you can't straightforwardly prep for. There needs to be a very clear remediation path for kids who fail the test the first time. I would just consider it part of the social compact of US schools.

NECAP is designed to drive improvement in mathematics instruction -- I'd say particularly to drive New Hampshire and Vermont from good to great -- but it is just the wrong tool for the job here.

If RIDE and the reform community in general have opened themselves up for much broader criticism through this obvious technical error, that's their problem. I'm happy to pile on in any way I can.

I guess I don't understand what you mean by "straightforwardly prep for," or at least I think you're potentially describing a unicorn.

Again, I'm thinking about this separately from the graduation policy, because these critiques of NECAP are not new; they're just finding a second wind.

So is there a test that can be "straightforwardly prepared for" that you feel would adequately reflect the learning we expect students to do? I feel like if the test were something that could be "straightforwardly prepared for," the critics would say it was dumbed down, that it reflected test prep rather than learning or content mastery, etc. And when there's a test for which the only way to prepare is to have deeper knowledge of the content, it's too hard, not relevant to students' lives, etc.

I'm well aware you'll take any opportunity to pile criticism onto RIDE and anyone you associate with the so-called reform community. However, I would think you deserve far more credit than that in your discussion around this topic. I think your critique is unique, I just don't want to overly interpret what you're saying.

What I want to understand is whether you think the NECAP is a test that effectively reflects deep mastery of complex material. Because there is a huge difference, in my opinion, between saying the test is invalid, pointless, and poorly designed, and saying the bar the test sets is inappropriate for graduation.

I am completely sympathetic to the notion that some folks believe partial proficiency on NECAP sets a floor that's too high. I just wish that if that's what folks are saying, they would make that argument, instead of all the other cruft that seems to come along with it.

I think a graduation test should be very conservative. It is essentially redundant. If many students are failing it, that's an indication of a systemic problem, not individual ones.

I certainly think the NECAP math test is intriguing, and I think anyone who gets proficient on it is more than prepared for college, but I don't feel confident about its validity.

I do think that in practice the NECAP 11th grade math test has been a failure, because it does not seem to have prompted math instruction to adapt to it. There is something about it that is just intractable, but I don't know enough about math instruction to pin that down.