Louisiana “Father of VAM” George Noell Is Back to Peddling VAM for LDOE

A professor at Louisiana State University (LSU), Noell also held the title of executive director, Superintendent’s Delivery Unit, Strategic Research and Analysis, Louisiana Department of Education (LDOE), from 2008 to 2012.

Beth Gleason assisted Noell with the 2011 VAM pilot as an educational research analyst manager for LDOE. In April 2013, she left LDOE; she now works for a nonprofit in Hawaii.

The explanations in the Noell-Gleason report did not match the report’s own tables of numbers, which showed just how poorly VAM performed at reclassifying teachers into the state’s four predetermined “performance” categories when teacher “performance” had not changed. I wrote about this in my December 2012 VAM Explanation for Legislators.

Wayne Free of the Louisiana Association of Educators (LAE) also addressed the inconsistency between Noell and Gleason’s words and their reported results:

Dr. George Noell and Dr. Beth Gleason, Strategic Research and Analysis, Louisiana Department of Education (DOE) write: “The results show moderate stability across years. Teachers who fell in the bottom 20% in 2007-2008 were likely to fall in the bottom 20% of results again (mathematics: 45.3%; ELA: 39.8%). They were unlikely to move to the top of the distribution one year later. Teachers who were in the top 20% in 2008-2009 were most likely to fall in that range in 2009-2010 (mathematics: 61.6%; ELA: 55.7%). They were unlikely to move to the bottom of the distribution one year later.”

The description of “results show moderate stability” is questionable. The data published by Noell and Gleason indicate that, “Teachers who fell in the bottom 20% in 2007-2008 were likely to fall in the bottom 20% of results again (mathematics: 45.3%; ELA: 39.8%).” Perhaps I am weak in mathematical analysis, but when I read data that are not consistent with the conclusions, I question the reasons.
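To see why “moderate stability” is a generous reading of those figures, consider a minimal simulation sketch. This is my own illustration with simulated scores, not LDOE data: when a teacher’s measured score is equal parts stable skill and year-to-year noise, the bottom-quintile “stay rate” lands in the same neighborhood as the 45.3 percent Noell and Gleason report.

```python
import numpy as np

# Illustrative simulation only -- simulated scores, not LDOE's actual data.
# Each "teacher" has a stable true effect plus year-to-year noise of equal
# size; we then ask how often bottom-quintile teachers stay in the bottom
# quintile the following year.
rng = np.random.default_rng(0)
n_teachers = 10_000
true_effect = rng.normal(0, 1, n_teachers)          # stable component
year1 = true_effect + rng.normal(0, 1, n_teachers)  # noisy estimate, year 1
year2 = true_effect + rng.normal(0, 1, n_teachers)  # noisy estimate, year 2

bottom1 = year1 <= np.quantile(year1, 0.20)  # bottom 20% in year 1
bottom2 = year2 <= np.quantile(year2, 0.20)  # bottom 20% in year 2

stay_rate = (bottom1 & bottom2).sum() / bottom1.sum()
print(f"Bottom-quintile stay rate: {stay_rate:.1%}")
# Roughly 40-50% stay put -- above the 20% pure chance would give, but a
# majority still move out of the category with no change in teaching.
```

In other words, a stay rate near 45 percent is exactly what half-noise measurement produces: most bottom-rated teachers get a different label the next year even though nothing about their teaching changed.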

Included in Free’s report is an appended email response dated May 10, 2012, from Noell to Free and copied to Gleason, in which Noell confirms just how poorly VAM reclassified teachers in the 2011 pilot.

Noell is also on record in this video by Herb Bassett as saying that no decisions regarding teacher ineffectiveness should be made using a single year of VAM. In fact, in the video excerpt Bassett features of Noell’s February 2011 testimony before the Advisory Committee on Educator Evaluations, Noell states that three years of evaluations should be used to make decisions of teacher effectiveness.

Noell’s 2011 pilot study was not based on three years of student data. To date, neither Noell nor any other Louisiana VAM researcher has provided detailed data from a pilot study demonstrating VAM stability across three years of teacher data.

By September 2012, George Noell was no longer using his previous LDOE-associated, executive director title connected to work on VAM in Louisiana public schools. Noell had disappeared from the Louisiana public school VAM process.

In March 2013, LDOE produced this VAM report with no acknowledged author except the generic LDOE. In it, LDOE states that VAM is stable across several years– but the report does not include any data tables detailing VAM reclassification accuracy across multiple years.

Furthermore, if VAM truly is stable, such stability should also show across different classes of students for the same teacher within the same year. I know of no study to date (in Louisiana or elsewhere) that examines this.

But VAM is good! Just take the LDOE mystery researchers’ word for it.

As it happens, Gleason left LDOE the month following the release of the above report.

Moreover, the Noell-LDOE connection that was quiet in 2013 seemingly remained silent at least for most of 2014.

In his PowerPoint, Noell offers an explanation of how VAM (ideally) works; of the care taken in deciding which students are included in a teacher’s VAM; of what variables are included in the prediction (such as prior student test scores); of who gets VAMed (teachers, principals, superintendents), including sample reports for these individuals; of some “strengths” and weaknesses of VAM; and of the history of VAM research in Louisiana.

No details on actual VAM data are included in the above PowerPoint. Fret not. Here is Noell’s “final” January 2015 VAM presentation for LDOE. It includes “evidence” of VAM classification stability– absent any documented evidence that the classifications can surely and solely be attributed to teacher control of their students’ VAM outcomes.

If teachers are to be graded on the test results of their students, it is on the heads of the “graders” to prove that their “grading” is sound.

No such proof from Noell or any other Louisiana VAMmer.

The holes in the January 2015 Noell VAM sell do not stop there.

Noell’s “final” January 2015 PowerPoint also includes 1) no link to a detailed research report; 2) no explanation of why he uses two years of teacher data in this presentation when he previously advocated that teacher-effectiveness decisions be made using three years of data; 3) no data tables reporting exact VAM reclassification rates by effectiveness category for 2012-13 teachers using 2013-14 data; 4) no external evidence establishing the “correctness” of either the initial 2012-13 teacher classifications or the 2013-14 “follow-up” classifications; and, finally, 5) no VAM interpretation cautions regarding any of the criticisms in this paragraph.
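For readers who have not seen one, the kind of reclassification table missing from the presentation is simple to produce. The sketch below uses simulated scores and illustrative percentile cuts (the four category labels are Louisiana’s; the numbers are not). Each row of the cross-tab shows where teachers in a given 2012-13 category landed in 2013-14.

```python
import numpy as np
import pandas as pd

# Hypothetical sketch of a year-over-year reclassification table
# (simulated scores; illustrative 10/40/30/20 percentile cuts).
rng = np.random.default_rng(2)
n = 2_000
effect = rng.normal(0, 1, n)          # stable teacher component
y1 = effect + rng.normal(0, 1, n)     # noisy score, "2012-13"
y2 = effect + rng.normal(0, 1, n)     # noisy score, "2013-14"

def categorize(scores):
    # Louisiana's four labels, assigned by within-year percentile cuts.
    cuts = np.quantile(scores, [0.10, 0.50, 0.80])
    labels = ["ineffective", "effective: emerging",
              "effective: proficient", "highly effective"]
    return pd.Categorical.from_codes(np.searchsorted(cuts, scores), labels)

# Each row is a 2012-13 category; entries are the share of those teachers
# landing in each 2013-14 category.
table = pd.crosstab(categorize(y1), categorize(y2), normalize="index",
                    rownames=["2012-13"], colnames=["2013-14"])
print(table.round(2))
```

A table of exactly this shape, filled with the real 2012-13 and 2013-14 classifications, is what the presentation would need to show before claiming stability.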

In slide 11, Noell offers information regarding “no bias” in VAM classification for teachers of students with low or high prior achievement. Again, this test presumes that VAM works in the first place. This “no bias” test could just as well mean that VAM is faulty for teachers of both kinds of students.

The next slide, number 12, waves several red flags. Noell compares distributions of 2012-13 and 2013-14 teachers who had 75 percent or more of their students rated at mastery or advanced to the corresponding distributions of state averages. (For 2012-13, teacher ratings were based on VAM; for 2013-14, teacher scores were based on a “VAM substitute” detailed here.) Noell’s conclusion is that teachers of students with higher test scores are not being penalized by VAM (or the “VAM substitute”).

What I immediately notice in slide 12 is that all four distributions are almost exactly the same– all four have approximately 10 percent of teachers rated “ineffective” and 20 percent rated “highly effective”– a result that matches quotas set by LDOE.

Teachers are told that they are responsible for student outcomes; then, the results are forced to fit a certain distribution. What this situation unequivocally demonstrates is that cut scores can be set to create any distribution of teacher evaluation classification outcomes that the one controlling the cut scores wishes.
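The point is easy to demonstrate. In the sketch below (simulated scores; the 10 percent and 20 percent shares mirror the slide-12 distributions), percentile-based cut scores hand back exactly the chosen category shares whether the cohort of teachers scores high or low:

```python
import numpy as np

# Sketch: percentile-based cut scores return whatever category shares the
# cut-setter chooses, no matter how the teachers actually scored. The 10%
# and 20% shares below mirror slide 12; the scores are simulated.
rng = np.random.default_rng(1)

def classify(scores):
    # Cuts at the 10th and 80th percentiles force ~10% "ineffective"
    # and ~20% "highly effective" -- by construction, for ANY score set.
    lo, hi = np.quantile(scores, [0.10, 0.80])
    return np.where(scores < lo, "ineffective",
                    np.where(scores >= hi, "highly effective", "effective"))

weak_cohort = rng.normal(-2.0, 1.0, 5_000)    # uniformly low scores
strong_cohort = rng.normal(+2.0, 1.0, 5_000)  # uniformly high scores

for name, cohort in [("weak", weak_cohort), ("strong", strong_cohort)]:
    labels = classify(cohort)
    shares = {c: round(float((labels == c).mean()), 2) for c in np.unique(labels)}
    print(name, shares)
# Both cohorts come out ~10% "ineffective" / ~20% "highly effective",
# even though one cohort scored far higher across the board.
```

Relative cut scores of this kind can describe only a teacher’s rank within the distribution; they cannot, even in principle, show that teachers as a group are succeeding or failing.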

In all four distributions in Noell’s slide 12, the largest category is “effective: emerging.” This is true even for Noell’s group of teachers with at least 75 percent of students scoring mastery or advanced.

The public is told that American education is failing as evidenced by low test scores. The public is also told that low test scores are the fault of “ineffective” teachers.

The public is not told that teachers with high percentages of students with high test scores are still being rated as “ineffective” by design.

In 2012-13, 10 percent (and in 2013-14, 9 percent) of VAM/”VAM substitute” teachers with those high numbers of best-scoring students were rated as “ineffective.”

I testified to the Act 240 subcommittee in November concerning VAM. I pointed to evidence in the Transitional Student Growth data that bias against teachers of high-performing students still exists. My analysis indicated that teachers in schools at the 99th percentile of students at mastery and above had a much higher than expected rate of ranking ineffective in VAM (Transitional Student Growth). I note in Noell’s PowerPoint that he only considered teachers with 75 percent of students at mastery or above. In the past, we have looked at teachers with 50 percent of students at mastery or above. (A proposed rule protecting them was withdrawn from BESE consideration by John White in January 2013.) It would seem that he has defined a very, very small sampling of teachers to support his position on the bias. How many teachers in the state have 75 percent of students at mastery or above? What are the stats when considering teachers with 50 percent of students at mastery or above, as we have considered in the past? Still, as you have suggested, should we even be concerned with measurement of student growth on these tests when the students are significantly above average?

Another problem with Noell’s presentation: He seems to consider only those teachers whose students score Mastery or Advanced at the end of the year, not those whose students scored Mastery or Advanced in the prior year but dropped to Basic or below this year. What we really need to know is what happens to teachers who started with previously high-performing students, not how the teachers who ended with high-performing students fared.

Noell’s VAM sell has lots of problems.

Here’s yet another sense-defying issue, a crucial piece of information apparently ignored in VAM-worshiping circles: empirical evidence establishing the teacher as The One in control of VAM outcomes.

Not the cut scores already shown as ridiculous, mind you. Just the teacher’s VAM scores for the teacher’s own students.

Dr. Noell, I just read your VAM presentation for BESE and the Act 240 advisory committee.

I have one question:

Do you have any research evidence to back the assumption that the teacher and the teacher alone is the single remaining catalyst for changes in student test scores from one year to the next?

Thank you.

–Mercedes Schneider

teacher, St. Tammany Parish

If VAM is used to hold teachers accountable for student learning, then VAM supporters like Noell must provide solid, empirical evidence to prove without doubt that for their VAM models, the only remaining influence in moving student scores is the teacher.


I work in the visual arts, and this whole discussion about mastery is out of whack with the aims of education in that field, unless you want to revert to some nineteenth-century drawing exercises from the French Academy that gave rise to the familiar “I can’t draw a straight line without a ruler.”

A quick update – I contacted Dr. Noell, and he clarified that the 75 percent threshold was determined by prior-year scores, as I agree it should have been. I was wary because, in the computation of bonus points in the 2013 School Performance Scores, LDOE used current-year scores instead of prior-year scores in determining which students would count toward the bonus.

Mercedes,
Latest news from Wisconsin: A new bill (Assembly Bill 1) directs the University of Wisconsin-Madison’s Value Added Research Center (VARC) to “statistically equate” achievement data and scores derived from nationally recognized, norm-referenced tests chosen by charter and voucher schools as an alternative to the Smarter Balanced CC assessments/MAP, etc. Public schools are locked into the federal requirements of Common Core, but charters and vouchers would be able to choose from a list of alternative tests provided by the DPI. Is it even possible to do this? I’ll be at a hearing tomorrow morning on this new bill. Help!

Deb, charters are able to play the “we’re private entities” game at will, so it is possible that they could get away with “not being regulated” to administer SBAC.

Ask about the reasoning behind admin’s offering charters this way out. It will likely involve some explanation that ties to charters’ “freedom” to operate outside of the regulations of traditional public schools.