This is a warm-up post to collect responses from the fellows attending my teaching demonstration during the 2012 University of Illinois Writing Project (#uiwp2012). The comments and replies represent the opinions of their authors and may not represent those of this site's owner.

Before we start today, please take some time to reflect on one or both of the questions below. I’d appreciate a reply to this post, if you’re willing to share!

What is the worst rubric-related situation you’ve ever been in?

Alternately, what’s been the best rubric-related situation you’ve been in?

I remember using a really skeletal rubric when I graded my first set of AP summer reading papers in 2000. I didn’t know these kids, I hadn’t read most of the books, and the rubric didn’t work well at all. I just found myself making up numbers under the thoughtless categories to justify the grade I felt the paper deserved. It was utterly devoid of meaning.

I’m having trouble coming up with a best, but I definitely had a bad rubric situation the last time I had a student teacher who was not confident grading student essays and wanted me to give her a rubric she could use. We talked through what a rubric for the assignment might consist of, but I wouldn’t write it for her. It got ugly.

The worst rubric situation I’ve been in was when I was working with a colleague from the other high school to create the rubric for the Social Studies quarterly assessment. We seriously spent four hours arguing about the value that should be placed on conventions, and another half-hour arguing about whether a comma changed the meaning of a requirement.

The best rubric situation was a day spent with other colleagues at the AP European History workshop, where we received training on the AP rubrics.

Worst: It was strongly suggested that I continue using a rubric from a previous year for a larger paper, but it didn’t fit my assessment ideals. Every section was allotted the same number of points, so grammar carried exactly the same weight as content. I only used it for that year.

Best: I re-did the same rubric for the following year. It broke everything down into smaller sections and weighted them accordingly. I thought it was a much better representation of the students’ work.

The worst would have to be using the ACT writing rubric on most of the writing assignments my students did. It lost meaning very quickly, and the students did not get much out of the “feedback.” The best would have to be the rubric we created last year for our sophomores. We had a clear idea of what we wanted our students to know, and the rubric allowed us to give them meaningful feedback about their writing, feedback they used to make revisions.

I HATED grading student short responses on the NYC ELA test using a rubric, but all teachers were required to participate. We did half-day norming sessions before we were allowed to grade on our own. It felt very artificial.

I worked with a professor in an online class who used a rubric of sorts for online discussions, which I actually think helped lift the level of conversation (because we used it each week, with the goal of improvement).

I don’t know if it’s the worst rubric-related situation, but I am not a fan of the ISAT writing rubric that I had to use when I was teaching 5th grade. I found myself inserted into a situation that was not aligned with my own values and beliefs. The best rubric-related situation is when I can create the rubric and my students can also have a voice.

I don’t have a bad experience, but I do have a good one. I took a class on assessment for my master’s and created a rubric for a science fair that could be fairly modified for ELLs and native English speakers. It was a collaborative experience, with two teachers and a testing expert working together. We then successfully implemented it at my school’s 4th grade science fair.

I used the writing rubric for the ISAT. We were trained by reading papers that had already been graded and comparing our scores with the graders’ scores. It was helpful to have their explanation for why they scored a paper the way they did. We then transferred this approach to our own school and our own students’ writing. It was fairly easy to score, but what we found was that if students don’t have a good understanding of the “score,” it was rather meaningless to them. I guess this was a good and bad experience.

The worst rubric situation I was ever in was, you know, I haven’t had one that I can remember. I must have blocked it out!
The best rubric situation I was ever in was one that I created for my ESL 4th and 5th grade students. They had been split off from another class that was extremely demanding, and their participation was not clearly defined… They just felt inadequate and defeated. We were studying tall tales, and I gave them the rubric before we started the unit. They were all able to perform and were so proud.

I had to use a common rubric when I first taught freshman comp. It was a fairly simple one, which is best, I think. Each section represented a percentage of the total grade. That helps new teachers especially put each aspect of the paper into perspective: grammar and punctuation are put in their place!

Worst situation: using a 3–5 Math extended response rubric to grade a skill that is identified as a “developmental” skill, not something my kiddos would necessarily have mastered, but which they were expected to do correctly, with a written response, in order to score well.

Best situation: leaving a blank space on a co-constructed writing rubric for the kids to choose what they wanted to be graded on, and offering double points for it. The kids loved this brilliant idea shared by our very own Becca!

Worst: I had to create universal rubrics for K–3 writing units while also teaching teachers to create their own teaching points based on their students’ writing. These two job requirements totally contradicted each other.

Best: watching a class of second graders, who usually resisted revising, use the writing rubric to make thoughtful revisions to their work.

Our high school’s final admissions evaluations used to be numerically based. The idea was for us to evaluate and quantify twenty to thirty admissions items. It was subject to a lot of personal and cultural bias. We eventually moved away from this attempt to make a qualitative activity seem more legitimate by artificially making it quantitative.