Now 99.5% of the county’s nearly 1,200 teachers are effective or highly effective.

Only one teacher was found to be ineffective!

All of those hundreds of millions of dollars blown away to find one “unsatisfactory” teacher.

If the principals were doing their jobs, that one teacher would have been identified without the grand apparatus and all those millions might have been invested in the arts, playgrounds, reducing class size, social workers and other things that make a real difference.


“Deputy Superintendent Sandy Hollinger said the district looked at grading scales in Volusia, Marion, Lee and other counties during the revision process to see what was the most commonly used scale to have the overall evaluation reflect the principals’ observations.”

NYS is doing the opposite: they want the observations to be correlated with (i.e., reflect) the test scores (grading scales)! The NYS government is saying, no, we are perfect, our tests are perfect, our VAM is perfect; it is all the principals’ evaluations that are flawed. The evaluations should match our perfectness!

Teachers in Miami still have not received their VAM scores. It looks like we will not be getting our evaluations for 2011-12 until 2013. I questioned the head of the district’s RTTT grant about the value of an evaluation system that takes over nine months to produce results, but he still believes the district deserves “kudos” for this evaluation system. I am creating a human life faster than these people can complete an employee evaluation. The utter waste of time, money and resources is abhorrent. You can read more about it here: “We Wish You a Merry VAM Score and a Job for Next Year!” http://kafkateach.wordpress.com

This is obviously what we’ve all been shouting from the beginning. There was no problem with teacher quality. There was no problem with evaluation systems. There was no problem. Period. 1% of all teachers may need intervention/dismissal. A nationwide PAR program could have resolved this issue at little to no cost. And also worked.

This BS evaluation fiasco we find ourselves in now is yet another example of why we, as teachers, need to take charge of this profession. Hire our own, mentor our own, and if need be, dismiss our own. Not one of us wants a person in this career who shouldn’t be here.

Ironically, given the omnipotent layers of initiatives, responsibilities, accountabilities, etc., etc., etc., on top of our wonderful, supportive friends in the political and media worlds, I honestly do not feel we need to change much. If a person is willing to continue to subject themselves to my job on a daily basis, they’ve already earned my respect and admiration. I would be willing to work with any of them.

However, I firmly believe the days of a 50% attrition rate over five years are over. Either the percentage is growing or the timeline is shortening. Or both! Ours has evolved into the epitome of a self-weeding-out profession. No million-dollar, evidence-grounded, research-based, expert-consulted evaluation needed.

Hmm. How to keep your school budget low: make the working conditions so miserable that all your teachers leave within three or four years. Hey! All they have to do is beef up TFA and everything will be hunky-dory!

kafkateach–You’ve made me become more vocal and involved. I thank you.

In Brevard County, we received an email letter from our superintendent today, which said, in part:

“We want to announce to you that we have also made a change in the student achievement scores assigned to collaborative teams. As you know, this element compared the VAM scores of bottom quartile students with their grade level peers at the school. Data reviewed when setting these targets showed that these bottom-quartile students have in the past two years met their predicted performance at percentages higher than their peers. Such will be the case again this year. In fact, the most prevalent score in this 5-point element was 5. The second most received score was 3. The third was 4. Those scores keep teachers on pace for “effective” or “highly effective” ratings. In part, this success may have contributed to BPS having experienced an increase in bottom-quartile reading scores that was among the highest in the state (Our score the year before was actually below the state average). But the ranking aspect of this element created the possibility that some teams could receive a score below 3 even if the students they were supporting scored collectively above the district average. We have adjusted scores to eliminate this possibility. For teams whose students scored below their school-based, grade-level peers but scored at or above the district average, we have changed their score to 3. This has improved the evaluation score for 126 teachers. It has moved some of them up in evaluation categories (In the end, our 39% “highly effective” percentage is among the highest in the state). If a team remains with a score of 0, 1 or 2, it means that the group of students they were supporting not only scored below the district average as it relates to their predicted performance, they also scored below—in the case of 0 or 1, significantly below—the grade-level peers at their own school. Like the alignment element, this year provides us with new data that can allow us to shift from a ranking concept to performance targets based on historical results.”

What this means, essentially, is that the District didn’t like the way the scores were panning out, so, like our legislators and education higher-ups in last year’s FCAT score debacle, they changed the (final) scores. This, on an element we did not even know was part of the rubric. I do not know if I am one of the “lucky” 126, and I am not going to make the inquiries to find out.
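For what it’s worth, the adjustment the superintendent’s email describes boils down to a simple override rule. Here is a minimal sketch of that rule as I read it; the function name, parameters, and the way peer/district comparisons are represented are my own assumptions for illustration, not anything the district has published:

```python
def adjusted_team_score(raw_score, vs_school_peers, vs_district_avg):
    """Sketch of the district's stated fix (names/signature are hypothetical):
    a team whose bottom-quartile students scored below their school-based,
    grade-level peers (vs_school_peers < 0) but at or above the district
    average (vs_district_avg >= 0) has its element score raised to 3.
    All other teams keep their original 0-5 score."""
    if vs_school_peers < 0 and vs_district_avg >= 0 and raw_score < 3:
        return 3
    return raw_score

# A team ranked below its school peers yet at/above the district average
# is bumped up to a 3:
print(adjusted_team_score(2, vs_school_peers=-0.4, vs_district_avg=0.1))
# A team below both benchmarks keeps its original low score:
print(adjusted_team_score(1, vs_school_peers=-0.6, vs_district_avg=-0.2))
```

Note that under this reading, a score of 0, 1, or 2 can survive only when the team’s students trailed both the district average and their own school’s grade-level peers, which matches the email’s description.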

Last year, I earned a 10 on my PGP because I worked at it. I believed the process “must be okay. It must work.” I was diligent, professional, and conscientious. I cared about the process and system. I wanted there to be a numeric that proved what everyone already knew. I wanted to be able to point to my final “score” and exalt in the (supposed) authenticity of every evaluation ever performed in my classroom.

After all that, due to elements of the evaluation farce OVER WHICH I HAD NO CONTROL, I was not deemed highly effective.
As I sat in my administrator’s office listening to the explanation of VAM, the evaluation rubric, and how my numbers fell, I cried.
Then I became angry. No, not angry; livid.

This year my PGP “earned” an 8. Not because the rubric’ed elements were not there, but because the scorers, all of whom I highly respect, did not realize that one of the strategies employed in my PGP actually met the requirements of the rubric. I was told I could write an explanation of how my PGP DID deserve a 10, but . . .