Over the weekend I wrote up a mini-thesis on my assessment methods, which, though standard operating procedure at my last school, are pretty foreign here. I gave it a deliberately confrontational title, “How Math Must Assess.” SOP here is to use whatever tests the manufacturer supplies and give them whenever the manufacturer arbitrarily decided to divide the textbook. I worry that this arbitrary approach to testing stirs up a lot of hatred for math. The implication I try to avoid, since it’s so cocky coming from a third-year teacher’s keyboard, is that I know how to do it better.

Regardless, Monday morning I sent copies of the six-page manifesto to my dept. head and my principal. Before the first period, the dept. head gave some positive feedback.

Tuesday, though, the principal e-mailed me, mentioned that my philosophy matched up with his, and asked me to take 20 minutes of the next staff meeting to present it. I always swore I’d never ever speak in front of teachers, having been on the receiving end of that arrangement too often, but this engagement circles around my favorite part about being a teacher and is way too tough to resist. Just gotta keep it entertaining somehow.


Kay Endriss

Hi Dan,
Thanks for posting this. I’m in the process of beginning to teach, and I wish my alternative licensure program had given us nuggets like this. It makes sense to me. I have a couple of logistics questions for you:
What type of rubric do you use for assigning a final grade?
Do you still administer traditional final exams?
Do you find students don’t have sufficient practice with longer tests? That could be a disadvantage you’d have to counteract, especially for AP classes…
How many questions per skill do you use on average for each assessment?
Many thanks for this, and for all of your posts so far (I’m reading as of 27Dec)!
You remind me of a former co-worker of mine. It may be that you are meant to do things other than teaching, but I bet there’s a lot more teaching in you yet. And once you leave, be aware, you will do nothing but teach. (Out in industry, I never realized how much of my job was teaching until I decided to go back to school so I could teach high school math.)
Grazie,
Kay

Thanks for sharing your thinking and methods, Dan. You’ve really given me some food for thought.

I’ve started and restarted writing this comment 6 times … I’ve got to mull this over a little bit but you’ve inspired me. I’ve got to do some serious rethinking about how I do assessment.

One question though … how do you come up with a numeric grade for each of your students? What weighting do you assign the concept checklist? Do you do other sorts of assessment as well such as project work or other long term assignments? Has the concept list replaced both tests and quizzes?

For long-term projects, group work, portfolio work, I just make the assignment worth a lot of classwork or homework points. “The linear data assignment is worth ten homework assignments,” I’ll tell them.

I’ve got to ask why you ask, though.

If you wonder how you’ll motivate students toward classwork when so much of their grade is tipped toward tests, then I share that concern. And my solution has been pretty amenable, this from a guy who shies away from direct instruction whenever possible.

If, however, you ask because, philosophically, you believe the long-term project-based assignments tell you what your students know, then I can’t really recommend this system, which is built ground-up on the belief that homework and classwork are less a measure of comprehension than of how hard your students work.

I guess the decisive hypothetical is this: would you force a kid to retake your class who turned in 0% of the long-term projects but who aced every test?

I like the idea of assessment catering to the individual student, and I also have a couple of questions for you, if you don’t mind. Let me just clarify, to see if I understood your policy. After the first assessment, a student gets 1-4/4. The next week he gets a new problem that assesses the same skill/concept/procedure. If both were 4/4, you are satisfied that he mastered the material, and this finishes the assessment for that material. Otherwise, the student will be assessed as many times as needed until he gets two 4s. Is that an accurate description? If I am correct, then it is possible for a student to have quite a long quiz if they are not managing to master a concept, which may become quite cumbersome for both you and the student. Otherwise, when do you stop testing certain material?

What I had questions about is the following:”After I tutor the student in the morning, I write up a fast one question assessment on a piece of scratch paper. I immediately correct it, re-adjust her grade, and give her the positive feedback that is the very momentum of student success.” You make it sound as if it is inevitable that the student will indeed solve the problem she is given correctly.
If in fact the student successfully solves the assessment problem that she is given, right after practicing it with you, how can you judge that this is something she in fact learned rather than something that she can reproduce shortly after practicing it?

Best,
e

I have to say that I unfortunately do not see how you make this work in terms of the grading scheme you talked about in response to Darren’s question. Let’s say it is the end of semester and Andy has the following in your grade book (out of 35 concepts you mention): 15 complete, 12 3s and 8 2s. What percentage of the 70% above does Andy get?

You’ve got it right up until: “Otherwise, student will be assessed as many times as needed until they get two 4s.”

Each semester, I expect only two or three students to master all the concepts. (While we’re here, those students are excused from this week’s final exam — what’s the point?) Most students will master only half and have a B-level understanding of the other half. (That’s one 4.)

For the weekly assessments, every student gets the same 6-concept test (or 3-concept, if I’d rather grade less that weekend) but they customize it depending on how many they’ve mastered. Some students finish inside of five minutes, having mastered all but the newest concept. Some students are still spiraling away at all six.
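That per-student customization can be sketched as a simple filter over each student’s score history. This is a toy model, not Dan’s actual gradebook; here “mastered” means two top-level scores of 4, per the scheme described above, and all the concept names and scores are made up:

```python
def concepts_to_test(history, week_concepts):
    """Return the concepts a student still takes this week: those they
    haven't yet passed twice at the top level (two scores of 4)."""
    def mastered(scores):
        return sum(1 for s in scores if s == 4) >= 2
    return [c for c in week_concepts if not mastered(history.get(c, []))]

# Hypothetical student: "slope" is mastered, the rest are in progress or new.
history = {"slope": [4, 4], "equations": [3, 4], "systems": [2]}
week = ["slope", "equations", "systems", "exponents"]
print(concepts_to_test(history, week))  # ['equations', 'systems', 'exponents']
```

A student who has mastered everything but the newest concept sees a one-question test; a student still spiraling sees all six.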

And I appreciate you hitting hard with the question about rote regurgitation versus long-term understanding in morning tutoring. I’m afraid I don’t have much resolution here. It’s an ongoing struggle to perfect my calibration.

For the record, since I’m no longer pitching this sale, there are students who come in the morning for tutoring and still botch the assessment. They get frustrated but they’re usually back the next morning. In the main, students come in ready to go. I ask if they’re sure? No tutoring necessary? They say, yeah, and they get to it.

If it’s a first 4 they’re after, I make the assessment B-level. Or “Proficient” in CST parlance. A slope problem with an integer answer, for example. The second 4 is so hard that — and this is the calibration I mentioned — they can’t just regurgitate their way to mastery. An “Advanced” problem, for example, where they calculate from four given coordinates that the slopes of two lines are -1 and 1, respectively, and have to justify why these lines are parallel, perpendicular, or neither.
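That second, harder problem is easy to check mechanically. Here’s a quick sketch, with made-up coordinates rather than the actual test item, of how the parallel/perpendicular/neither judgment falls out of the two slopes:

```python
def slope(p, q):
    """Slope of the line through points p and q (assumes non-vertical)."""
    return (q[1] - p[1]) / (q[0] - p[0])

def classify(line1, line2):
    """Classify two lines, each given as a pair of points."""
    m1, m2 = slope(*line1), slope(*line2)
    if m1 == m2:
        return "parallel"
    if m1 * m2 == -1:
        return "perpendicular"
    return "neither"

# Hypothetical coordinates giving slopes of -1 and 1, as in the example:
line_a = ((0, 0), (2, -2))   # slope -1
line_b = ((0, 1), (3, 4))    # slope 1
print(classify(line_a, line_b))  # perpendicular, since (-1)(1) = -1
```

The justification the student has to write is exactly the middle branch: equal slopes mean parallel, slopes multiplying to -1 mean perpendicular.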

It’s an issue. It’s always going to be an issue. If I’m ever sloppy with the calibration, my grades will become inaccurate. Thus far, I’m content with the arrangement but I know that every test is an opportunity to improve it.

As for the grading scheme, perhaps check The Audit for a sample grade printout.

From Andy’s example, it seems necessary to clarify that there’s a grade — a 4 — in between “complete” and “3.” The second time I assess a concept, it becomes worth 5 points. The way you have Andy’s grades, assuming I’ve assessed every concept twice, he has:

15 x 5/5
0 x 4/5
12 x 3/5
8 x 2/5
0 x 1/5
0 x 0/5

This puts Andy’s assessment score at: 127
And the total score at: 175

So his assessment grade is: 72.5%

And that grade weighs 70% on his final grade, which will describe his comprehension either very accurately or very inaccurately, depending on how well I calibrated my assessments.
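Andy’s tally above can be reproduced in a few lines. This is just a sketch of the arithmetic, assuming every concept has been assessed twice and is therefore scored out of 5:

```python
# Number of Andy's 35 concepts at each score out of 5
# (5/5 is "complete", i.e. both 4s earned).
andy = {5: 15, 4: 0, 3: 12, 2: 8, 1: 0, 0: 0}

earned = sum(score * count for score, count in andy.items())
possible = 5 * sum(andy.values())

print(earned, possible)                   # 127 175
print(f"{100 * earned / possible:.1f}%")  # Andy's assessment grade
```

That 127/175 ratio is the assessment grade, which then weighs 70% on the final grade.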

Mindy

I’ve been doing this type of assessment (different method, but same idea) this year as well.

And I have to agree with the statement you said about how this is your best year of professional development. And especially, this is the “year that assessment changed, and then changed everything for me.”

I spent four years teaching and not feeling comfortable with assessment (and especially with communicating to students and parents about those assessments and what they really meant). Then I became a gifted intervention specialist with more time to explore what assessment really should be.

Now, I’m teaching fifth grade and have radically changed so many of my practices. However, like you, I’m still perfecting. But the message I want to share with everyone who reads this is that, for the first time, I feel like I’m actually doing things in a meaningful way and that, for the first time, I can fully explain the “why” of what I’m doing. And, in a strange way, I think assessment is easier now and less time-consuming, because I get the information I need, the students get the information they need, the parents get the information they need, and it’s not this big “secret” about what we’re doing and why.

But, as one teacher trying to figure this out to another, I’m wondering how you handle report cards. For example, Johnny hasn’t yet figured out how to multiply fractions, but it’s the end of the quarter and grades must be assigned. He hasn’t yet mastered that skill, so perhaps he’s at a “B”. However, in the second quarter, he gets it. If we had a different reporting system, we could check him off on that skill on his “official report card”, but in my district we can’t go back and change grades. Do you have any thoughts on that?

Dan and Mindy
Me again. Carol Tomlinson (University of Virginia and ASCD) is the differentiation “guru,” and she has some very logical stuff to say about assessment. She contends that assessment is going on all the time and that there are many ways to assess. Mindy, you might already be familiar with her work.

“ASSESSMENT IS ONGOING AND TIGHTLY LINKED TO INSTRUCTION. Teachers are hunters and gatherers of information about their students and how those students are learning at a given point. Whatever the teachers can glean about student readiness, interest, and learning profiles helps the teachers plan next steps in instruction.” (Tomlinson, 1995)

I searched online for an article she wrote regarding assessment, grading, and “report cards” but couldn’t find it. I do have a hard copy that I would be glad to mail to you if you are interested. But the bottom line is, unless a district is ready to make changes to grade cards and progress reports, students’ grades will be sent home as they always have been.

Mindy

I am familiar with Tomlinson. Do you have a title of the article? She and many other voices have swirled together in my mind to help me form my understandings about education.

Unfortunately, I have to agree with you. I won’t be able to do much about the time of mastery if it happens after grades have been cut off.

To work with the situation, I’ve tried something this year that seems to be going well.

I’m sending home, with each report card, another sheet of info. On this sheet, I list all of the standards we’ve been working on that quarter and highlight any that the student is still “squeaky” on. (We use the term squeaky in my room because a squeak can always be oiled. It doesn’t represent a permanent deficiency.) So the parents can very clearly see what their child is doing well and can see areas that need work.

There are two reasons I do this. One: I truly want the students and parents to know meaningful information about their education. An “A” or “B” on the report card doesn’t really say much. And it helps me to see it all laid out for each student.

And two: I want the parents to start saying, “Why don’t we get this type of information all the time?” I want them to start expecting it and to start to question the way things are done. I want them to see the value in knowing specifics about their child’s learning. I’ve found that when parents start demanding, school districts listen. If parents see the value and want this type of information, perhaps it might become reality.

I do know that my parent/teacher conferences this year have been WAY more meaningful and successful than any I’ve had in the past. And that was because I had meaningful things to say, rather than just something like, “He didn’t do well on the chapter test.” What does that mean anyway?

Mindy

P.S. Dan, I’m so glad I’ve stumbled onto your website. I’ve been craving this type of dialogue with other teachers. As I said, I have a few teachers in my school who aren’t afraid (is that the right word?) to explore the fact that we might not be doing things in the most meaningful way. But we don’t have as much time as I’d like to discuss these important issues.

Anyone know of a dialogue like this going on about reading instruction? I have some questions, observations, and disagreements there also!! Reading is the area where I could really use some help.

Tough one, Mindy. So you guys give out quarter grades? Not just progress reports? The sort that go on a transcript? My school is pretty chill regarding Change of Grade forms. The decision doesn’t even flow through the district. I’m at a loss. That’s a setback to what is otherwise a system that emphasizes the ongoing-ness of assessment.

Mindy, email me at nbosch@aol.com and I’ll send you the name of the Tomlinson article, or the article itself. It sounds like you do a wonderful job in your classroom. After 21 years of teaching gifted students, I could not do what you do day in and day out. Kudos to classroom teachers, N

Mindy–here’s the abstract of the article found in Educational Leadership magazine March 2001:

Grading for Success-Carol Ann Tomlinson

Teachers who want to help all students succeed in their academically diverse classrooms often wonder how they can both teach to the needs of each student and grade each student. There is no quick fix, because grading grows from a philosophy of teaching and learning. Teachers should first consider the needs of each student and how they teach to meet the different learning needs of the students. Then, teachers need to grade for success in the same way that they teach and assess for success.

Some strategies for grading are to give differentiated tasks and grade students on how well they perform those tasks; to offer consistent, meaningful feedback that clarifies present successes and next learning steps; to look for growth patterns over time when assigning report-card grades; and to find ways to document individual growth and relative standing and explain them to parents and students.

michael

Reading your entry “How Math Must Assess” reminded me of a program we have instituted at my school, called Accelerated Math. It is broken up into objectives instead of chapters; you can get an example of how subjects are broken up at http://www.renlearn.com/mathrenaissance/ContentLibraries/

They also provide individual review cards for each objective, so that a student can review an objective they did poorly on.

The main difference seems to be that accelerated math is all managed by the computer instead of a teacher.

My biggest warning is that it uses a lot of paper: about 1,000 pages per student per year, and that is with using the program only once a week, with regular teaching the other four days.

If nothing else, you might want to poke around their website for some ideas.

Marie

Dan,
I have been developing an assessment system that is similar to yours in record keeping for my Algebra and 7th grade math classes. I’m interested in knowing a little more about how you design your assessments and grade them.
From the sample test on The Audit it looks like your assessments are not multiple choice. When you design assessments, are they ever multiple choice? Is your final exam multiple choice? I am wondering about the effects of this on their CST exams.
When grading assessments or assigning them, how does a student know if they’re after the first four/second four (5?)? Are there generally two questions (one “4” and one “5”) on each concept on a test?
I am in the process of refining my own system, and I really like many of your ideas!

Marie, great questions. I don’t use multiple choice and, yeah, that does worry me when we come to the CSTs so I make sure I throw up a lot of released questions as we go through the course. If I had to revamp this system in any way it’d be to go 100% multiple choice. I just know it’d make my life more difficult, making the questions too rigorous to guess, carefully inserting distractors, etc.

The 4-5 situation is like this: the first question on an assessment is the easiest they’ll see. If they get that one right but no more, they’ll have a 4/5 (= 80%) on the books, so I make it a B- level probably. Every problem thereafter is harder. An A+ level problem where I know, if they can pull off that problem, they’ve got the concept cold.

If students come in for tutoring, however, when I’m creating their mini-assessments, I check their grades and give them a B- level question if they haven’t cleared that hurdle yet.

Rich

Dan, I’d love to hear more of your thoughts on multiple choice. I do 0% multiple choice, other than when we’re doing our Stanford testing. Initially my students are less-than-thrilled when I tell them that I don’t “do” multiple choice, but I try to help them see that this way I’m actually looking at their work to see how they arrived at their conclusion/answer. I’m fairly strong on developing their process skills, so I fear that a focus only on the answer (i.e., via multiple choice-type testing) doesn’t encourage their process skills.

While a common argument against my format is that it’s “soft” on correct answers, I am extremely “hard” on correct answers (if hard = emphatic about). Ten years of engineering before my teaching career taught me well that accuracy in my math work is critical (I could tell you a story about an entire apartment site where the buildings are all about 6″ too low).

I’m keenly aware that my preference for non-multiple choice obviously leads to more grading work for me – I am writing this as I finish up the third batch of final exam papers!

Jackie

Some logistical questions: I’m assuming you have a different sheet for each student with the skills listed? Do you keep these sheets or do the students? (I picture a few of mine losing them.) Are the sheets preprinted with the concepts or are they updated through the semester?

How would this system work if one were teaching in different rooms each period (I can’t see myself dragging 125 folders around all day all over the building).

Rich, my desire to swap out free response for multiple choice is only to familiarize my kids with the multiple choice standardized tests. I’d require complete work — as I do now — but it doesn’t feel like a terribly necessary switch, does it?

Jackie, I record everything in our computer grading program. I’ll toss an assignment in like “15. Law of Sines,” and then record all of those concept scores there. The students have their own record which they update after I pass tests back but it isn’t essential to the process. I may be misunderstanding the question but I used this system for two years while running between three classrooms. So, promise, that isn’t a consideration.

Steve Peters

Multiple choice tests are interesting. If stuck, the student can start from the given answers and work backwards. A handy probability trick is also to eliminate all answers you don’t think are correct and then make a higher probability guess. These are problem solving skills that multiple choice testing helps to encourage.

That said, I’ve known several students who’ve used multiple choice test tricks as a crutch for lack of understanding with much success. I applaud you, Dan, for requiring full work to justify an answer to a multiple choice test question, but otherwise I’m wary of the format. It’s best not to teach bad habits, because in engineering you don’t always have a finite set of possible answers to work back from.

I understand the desire to familiarize the students with multiple choice questions, but make sure the tricks aren’t tolerated!

Rich

I’m not coming down all that hard on multiple choice, and I have to acknowledge that for the foreseeable future, students’ test-taking skills must include the ability to succeed at multiple choice tests (our era of standardized testing seems to confirm that). So I’d agree with Steve’s point that narrowing down the pool of possible answers to the best/smallest set sure helps — and I do coach my students on exactly that when we do our annual Stanford test (since we’re private, we don’t do the FCAT, which is Florida’s equivalent to your CST).

Now since I’m here and on the topic of assessment, I’d love your thoughts on this: our school (remember, I’m at the middle school level) doesn’t require any cumulative testing during the year until we get to final exams in May (just did them late last week). The primary rationale for doing them at all is to prepare our students for high school, when they will presumably take final exams twice a year in most every subject. Up until late May, though, there is no requirement imposed on any teacher to structure her/his testing other than just to do what’s deemed appropriate. And here is what I’m finding after such an extended period with no structured cumulative testing. (Yes, I know that in math much of what we do is by its very nature cumulative, and yes, I’m still expecting my students to remember what they learned all through their previous years of math in elementary school. But I’m talking about pulling out questions that we tested/quizzed on back last August, last September, last October, and so on. You could say an “explicitly” cumulative test, not an implicitly cumulative one.)

My students sort of get starry-eyed and tend to freeze up because they’re used to doing sprints for my class assessments, not marathons (a tangent – would some connectivists suggest that we should prepare them more for relays??). I’m thinking very strongly of adopting/adapting your proposed approach for weekly quizzes, based on a sliding set of questions, the four-point scale, etc. for next year. But what would you think if, once per quarter, I also threw in a cumulative test “experience” so that they could stay in shape for the longer haul too? That is, the first quarter’s test would cover what we learned in the first quarter, the second quarter’s test would cover the first and second quarters, and so on…. until we finally reached the final exam in May, and it didn’t seem so darned scary and huge!

They literally draw a line through the concepts they don’t need. It’s never made sense to me that they should take the same test.

After they’ve passed a concept twice, I put an x at the top of their tests in the box for that concept which they then record on their concept checklist.

The concept list runs in order throughout the year but they can come in whenever and take any one they want. At the end of the semester I take the concept averages and we review the lowest concepts.

You can make the curriculum as broad or as narrow as you want. I layer some concepts on top of each other where, for example, if a student can find the equation of a line given two coordinates, that counts for “Finding Slope” and “Finding Equations.” No need to do both.

For the record, establishing and revising my concept checklist is some of the most fun I have teaching.

Nineteen hands go up and nineteen kids ask me if they can take their concept checklists out and see what they don’t have to take.

I curse under my breath.

They check their lists and then put an x through the concepts they don’t have to take. This is strictly a note to themselves. I’ve got the number 5 in my gradebook beneath any test they don’t have to take.

Jackie

Going back to the multiple choice tests, it seems to me that the best way for students to prepare for them is to actually write multiple choice questions of their own. They need to get into the habit of trying to anticipate how those three or four wrong answers are chosen. They aren’t just pulled out of thin air, but rather are results of common computation/logical errors. If the students have practice trying to trick themselves or each other with their own multiple choice questions, they will be better able to get inside the heads of the test designers.

Matt

Thanks for the great info. I am wondering about your (and others’) opinions on what “mastery” means. Does getting a concept correct for two weeks in a row mean that they have mastered it? Do they forget it a week later?

Does anyone know of any research out there on how many times you have to accomplish something before it is considered “mastered?”

I am just curious. By the way, I love this idea and plan on implementing it in my 4th grade class this year.

They have to get a concept correct twice, not in a row, not necessarily on consecutive weeks.

If the test problems were similar, I’d worry they’d forget. I still worry but not as much because the second problem is much harder.

For instance, assessing multiplying polynomials in Algebra 1, the first time I give ’em two binomials to multiply. The second time they get trinomials or worse.

A lot of kids can pull that first 4. At the end it’s only worth a B-. Fewer can pull that second, which is worth the A+.

Do kids forget? Yeah. But that’s the nature of the thing. Everyone forgets everything given enough time. But given a system of objectives this clearly defined, can I pull them back quickly? Absolutely.
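The jump from binomials to trinomials is easy to illustrate with a coefficient-list multiplier. These are hypothetical example problems, not Dan’s actual assessments:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists,
    lowest degree first, by summing products of terms."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

# First 4 (B- level): two binomials, e.g. (x + 2)(x - 3) = x^2 - x - 6
print(poly_mul([2, 1], [-3, 1]))        # [-6, -1, 1]

# Second 4 (A+ level): trinomials, e.g. (x^2 + x + 1)(x^2 - 2x + 3)
print(poly_mul([1, 1, 1], [3, -2, 1]))  # [3, 1, 2, -1, 1]
```

Same underlying skill both weeks, but the second problem has many more partial products to track, which is what makes it hard to regurgitate.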

I’m curious about how you go about breaking a curriculum into the discrete units that you test. How do you decide what are the “atoms” of the course, the elements that will be the topic of each quiz question? Do you start out with deciding what a manageable number of units would be, then cluster as best you can and whittle away pieces that don’t fit in? Or do you start with analyzing each unit in the book and breaking it into components that are as independent of each other as possible?

In trying to break down Algebra II this latter way, I just end up with such a long list of skills to test that the simplicity of your grading system is lost. A gradebook that listed these skills/concepts as its entries would certainly be very informative, but the number of questions on a weekly quiz would be pretty large!

Of course, questions like this are better left for a week other than the one before school starts — who has time to write now? — but if some future entry would deal with any simplifying schemes, that would be welcome.

Oh and P.S. the winnowing process — deciding what’s important, which standard it matches, and how to teach with the end in mind — is my favorite part of the assessment process. Done wonders for me as an educator also.

Jack

A person named Michael posted about something called Accelerated Math. My school also has this program. The only things that this program teaches are a hatred of math and how to find “backdoors” to its multiple-choice format. The program uses “scan cards” similar to the ones used on the ACT and SAT. The teachers don’t have to check the assignments, so they feel free to assign 40-plus-problem assignments whenever they so choose. They often send out two assignments per day, per student. The teachers don’t really teach the material, and then expect the students to be able to pass the “objective” with no problems.

The way this program works is: pass an objective on an exercise, then test over said objective, and finally review the objective on a practice. If one fails to pass an objective, one must take it again and again until successful. If one fails the practice after successfully testing, one must test again; if one fails that, one must start the whole process over to gain mastery of the objective.

Seems like a fairly simple system, but the creators didn’t factor in the human component. What happens when a student doesn’t comprehend an objective? They fail and their average plummets; then they have to do it again and again until both their average and their self-esteem/self-confidence are depleted. What happens when the teachers are incompetent/bad teachers? They fail to teach the material, and the students find ways of working backward from the answer choices, not learning anything. Through this program, many students at my school have fallen behind due to lack of effort by the teachers, or lack of effort on their own part because they become extremely demoralized by the impossible situation they are placed in. All of this leads to hatred of mathematics by students, and to angry teachers, because all of their students cheat out of necessity.

Mike Carlin

1. Suppose the first assessment of the year tests skills A, B, and C. Do you write one problem for each? i.e., is it a 3-question test?

2. Suppose Suzie gets A and B correct, but gets C wrong. Suzie gets 4 points for A and 4 points for B. How many points does she get for C? Would it be 0 to 3, based on her work?

3. Now suppose Clarence gets C correct, but gets A and B wrong. So he gets a 4 for C and something less than 4 for A and B.

4. On the following week’s test, you have skills A, B, C, D, and E (i.e. you have a second crack at A, B, and C, and you introduce D and E for the first time). Are the questions for A, B, and C the easier types of questions, or the harder types? Clarence seems to need an easier question to answer for A and B but a harder one for C, while Suzie needs an easier question for C but harder ones for A and B. Of course, this easier vs. harder question issue resurfaces with each successive week. Do you provide an easier and a harder question for each skill tested on each test?

1. I write three-question tests and six-question tests, three to a side, basically depending on a) how much I feel like grading that weekend and b) how much we’ve reviewed old concepts that week. (I’ve gotta have a reason to believe we’ll do better.)

2. Yes.

3. Yes.

4. As much as this system flexes to the needs of the student, it doesn’t flex that far. Every classroom exam after the first one has the more difficult question. Has to. Otherwise a student might coast to a completion off B-level problems. However, if a student comes in on her own time, if I check her grade and she hasn’t scored a four on the concept, I give her the easier problem.

Becky

I stumbled on your website while looking for assessment alternatives for math and I was thrilled to find something I can use this year! Just a question: I teach in Alberta, Canada, and our curriculum covers more than just concepts. We are also expected to assess other areas (communication, connections, problem-solving skills, reasoning). Some specific examples to be assessed are: use math in everyday life; recognize how different math concepts are connected; explain how manipulatives, pictures or diagrams were used to arrive at an answer. Do you have ideas on how to assess these in addition to the tests you give?

Brian

I’m new to your blog, and I am really fascinated by your idea of assessment and homework and everything. You’re really making a block schedule work, it seems, which is tough. I applaud you for that.

I’m getting into just my second year teaching (and I’m loving it!). I’m curious — how is this assessment policy working out after a few years? Have you revised it at all? Have other teachers bought into it at all?

This assessment policy has aged well, Brian. The word is out this year that my class is “easy” — a charge which I’d emphatically deny. What appears easy to my students is simply a tighter grip on what matters (are you competent in the 30+ concepts that comprise Algebra?) and a looser grip on what doesn’t (acres of classwork, homework, and long cumulative tests).