Extra Tests, Bundled Objectives, and Changes for Next Year

Exam writing is looming just around the corner (May 30th is the big day this year). In response, I am turning to the traditional writing-procrastination technique: entertaining ideas about possible changes for next year’s classes (procrastinating on writing advising letters two years ago resulted in my moving to standards-based grading in the first place, so it can’t be all bad, right?). I foresee more of these posts as the need for writing the exams becomes increasingly undeniable.

Anyway.

First up: how to make weekends less hectic for me and more feedback-y for my students.

The Idea

Instead of students requesting specific objectives, they request test flavor combos (more like what they have been doing for many of our during-the-week quizzes this year). Objectives would be bundled together into related flavors (see below for my first draft), and students could pick up to two bundles per weekend.

Pros

Students will be narrower in their goals for an extra test, making it easier (and hopefully more likely) for them to arrive prepared for the requested objectives. This year, they were allowed to ask for up to 5 skills on an extra test. Some would choose 5 very different skills, resulting in an excessively long test (mini-exam, practically!) and almost definitely a set of 1’s on the skills requested. Limiting them to two bundles of related objectives would keep the extra tests a more uniform (and more focused) length.

I can create multiple versions of each flavor over the summer instead of spending Saturday night/Sunday morning pulling together problems for 20 to 30 individualized tests. I am envisioning one page for each bundle. Kids then get a one- or two-page test on Sunday. I just have to keep track of which versions have been used for which students, choose available pages, and print the two together. Very quick and easy compared to my current process.
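The version bookkeeping could be as simple as a lookup table keyed by student and bundle; here's a minimal sketch (the bundle names and version labels are hypothetical):

```python
import random

# Hypothetical: three versions of each bundle page written over the summer.
VERSIONS = {"CVPM": ["A", "B", "C"], "BFPM-FBD": ["A", "B", "C"]}

# used[student][bundle] = set of version labels already given to that student
used = {}

def pick_version(student, bundle):
    """Choose a version of this bundle that the student has not seen yet."""
    seen = used.setdefault(student, {}).setdefault(bundle, set())
    available = [v for v in VERSIONS[bundle] if v not in seen]
    if not available:
        raise ValueError(f"{student} has used every version of {bundle}")
    choice = random.choice(available)
    seen.add(choice)
    return choice
```

With three versions per bundle, a student could request the same flavor three weekends in a row before running out of fresh pages.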

If I create the tests over the summer, and if the pages that are used are always kept together as a solid page (instead of problems pulled apart, remixed, and maybe even a bit reworded per student), I can also write up solutions for every page over the summer. Students would be able to mark up their own extra tests in the same way that they love marking up their regular tests during class.

In addition to that more instant and satisfying feedback (plus the great work of figuring out what went wrong and working through the solution immediately after having spent time thinking intently about the problem), checking out of the extra test with me will be easier and more focused. I won’t have to check their answers and negotiate scores with them since they will already have corrected their own papers. Instead, we can get right to talking about what went well, what problems they are having, what to do about it, etc.

Skills are always put into a slightly broader context, so the cherry-picking phenomenon is further reduced.

Cons/Questions

Each skill might not be getting enough of its own due. If the skills are worthwhile feedback points on their own, why bundle them? Should the bundles really just be larger, less grainy objectives? I think not (of course I could be very wrong). I think they are still valuable on their own, but are also served well by being tested in a slightly larger context. I think it will be common for students to be able to repeatedly demonstrate one of the skills in a bundle while repeatedly struggling with another, so keeping the feedback narrower seems worthwhile.

Testing at the end of a grading period (quarter or semester) could become complicated for a student with widely spread A objectives that they still need to demonstrate. If they would need 3 or 4 of the bundles in order to hit every below-mastery A objective, should they be allowed to test on all of them in the final weekend of the term? Should they be allowed to cherry-pick just the As outside of the bundles at that point? I haven’t thought about this piece enough yet.

Would bundling the objectives make the extra tests seem more like the “retest” idea that I’m trying to avoid (these are supposed to be new tests on skills they are now ready to show, not a rehash of an old test, not an old test with maybe different numbers, like “retest” seems to connote)?

I am thinking more and more about the idea of moving toward less passive, more student-initiated/directed/invented assessments, and this change firmly keeps me where I’ve been. On the other hand, I don’t think I’m ready to make that next leap toward student-created assessments. I need more time to think, understand how it could work, and figure out how to take my students from their incoming expectations toward something requiring that level of radically unexpected commitment.

Will these more polished, packaged, pre-written tests signal to students that there is an available way to cheat the system (by getting copies of their friends’ Sunday tests and hoping for the same version when they request the same package)? For my students, and with the Honor Code at this school, I’m not especially worried about this possibility. Also, SBG still strongly disincentivizes cheating because subsequent tests of the same skills would reveal the lack of mastery, and only the latest scores would matter when it came time for grades.

Draft of Bundled Objectives for Honors Physics

Each test page will be sure to include, but will not be limited to, the set of objectives listed for that flavor.

1 – CVPM
1.1 A CVPM I can draw and interpret diagrams to represent the motion of an object moving with a constant velocity.
1.2 B CVPM I differentiate between position, distance, and displacement.
1.3 B CVPM I can solve problems involving average speed and average velocity. This objective is likely to be changed to something along the lines of “I can solve problems using CVPM.”

2 – BFPM – N3L and FBDs
2.1 A BFPM I can draw a properly labeled free body diagram showing all forces acting on an object.
2.2 A BFPM When given one force, I can describe its N3L force pair.

3 – BFPM – problem solving
2.1 A BFPM I can draw a properly labeled free body diagram showing all forces acting on an object.
2.3 A BFPM I can relate balanced/unbalanced forces to an object’s constant/changing motion.
2.4 B BFPM I can use N1L to quantitatively determine the forces acting on an object moving at a constant velocity.
2.5 B BFPM I can draw a force vector addition diagram for an object experiencing no net force.

4 – CAPM – graphs
3.1 A CAPM I can draw and interpret diagrams to represent the motion of an object moving with a changing velocity.
3.2 B CAPM I differentiate between acceleration and velocity. [combine 3.2 and 3.3?]
3.3 B CAPM I correctly interpret the meaning of the sign of the acceleration.
3.4 B CAPM I can describe the motion of an object in words using the velocity-vs-time graph.

8 – MTM – one-dimensional
5.1 A COMM I can calculate the momentum of and the impulse on an object (or system) with direction and proper units.
5.2 A COMM I can draw and analyze momentum bar charts for 1-D interactions (IF charts).

9 – MTM – two-dimensional
5.3 A COMM I treat momentum as a vector quantity.
5.5 B COMM I can use the conservation of momentum to solve 2-D problems.

10 – MTM – explain in words
5.4 B COMM I can explain a situation in words using momentum concepts.

12 – ETM – diagrams/conceptual
7.1 A ETM I can use words, diagrams, pie charts, and bar graphs (LOLs) to represent the way the flavor and total amount of energy in a system changes (or doesn’t change).
7.2 A ETM I identify when the total energy of a system is changing or not changing, and I can identify the reason for the change.
7.3 B ETM I identify thermal energy as the random motion of the tiny particles of a substance.

13 – ETM – problem solving
7.1 A ETM I can use words, diagrams, pie charts, and bar graphs (LOLs) to represent the way the flavor and total amount of energy in a system changes (or doesn’t change).
7.4 B ETM I can use the relationship between the force applied to an object and the displacement of the object to calculate the work done on that object.
7.5 B ETM I can use the conservation of energy to solve problems, starting from my fundamental principle.

14 – OPM
8.1 B OPM I can draw/interpret motion, force, and energy graphs for an oscillating particle.
8.2 B OPM I can identify simple harmonic motion and relate it to a linear restoring force.

15 – CFPM – general problem solving
9.1 A CFPM I can calculate the magnitude and direction of the acceleration for a particle experiencing UCM.
9.2 B CFPM I can use Newton’s 2nd Law to solve problems for a particle experiencing UCM.

16 – CFPM – universal law of gravitation
9.3 B CFPM I can use the Universal Law of Gravitation to solve problems.

17 – CFPM – energy
9.4 B CFPM I can use the conservation of energy to calculate the escape velocity for an object. This one also needs to be reworded to make it about solving energy problems where the distance from a planet changes considerably instead of just being about escape velocity.

18 – MTET – diagrams/conceptual
10.1 A MTET I can determine whether or not a collision was elastic by analyzing the motion information.
10.2 A MTET I can qualitatively represent the energy stored before and after any collision.

& in closing

I haven’t completely committed myself to the bundling idea yet, though I’ve thought about it for quite a while. I plan to run it by the current physics kids on the course evaluation late next week to get the student perspective.

15 thoughts on “Extra Tests, Bundled Objectives, and Changes for Next Year”

Awesome! I was thinking about doing a similar thing next year as well. I also had trouble with a TON of kids who procrastinated (like me..) and wanted to reassess certain quizzes this week, which happened to be the last week they were allowed to reassess. Tuesday lunch and Thursday after school were the last days. I had 75 new reassessments to grade!! Many kids came in and said they were ready, but with the line of kids to see me extremely long, I didn’t have the time to really probe each one to see how ready they were. Some were indeed ready, did the proper work, and learned the material, hurray! And some were not ready, to say the least. So my idea is to make the kids come in before reassessment day to get a special pass that I will sign after they talk over their old quiz corrections and additional problems worked. This will be stapled to their old quiz and used as a ticket to take the reassessment on the allocated day for reassessments. They cannot get a ticket to reassess on reassessment day itself. Hope this helps next year. I tried the e-mailing-me thing that you and Sam do, but it did not work too well. I’m going to try a paper version of this, kind of like an application for reassessment that they have to bring with them along with their work in order to get a ticket to reassess.

Thank you so much for posting so many great posts! They help so much Kelly!

I think that we all have the late term crush to one degree or another, unfortunately. I was toying with an idea similar to yours to help mitigate the issue and reduce the number of out-of-class assessments – having every other assessment a choose-the-menu type. Kids really do learn from the out-of-class reassessments, but I’d like to push that learning further forward in the term without denying the chance for real organic growth to be rewarded.

Your bundles (and the reasons for them, both pedagogical and logistical) are pretty much what I moved to this year when I made my standards less grainy – I have about that number of standards, excluding algebra, units, vectors, and error analysis, for my class now. It worked better for us, so I’m going to stick with it, if that’s reassuring for your plans to implement it. 🙂

Thanks, Josh. I don’t really want to switch to grading them on the bundles, but I think they might make keeping track of extra test versions a lot easier. The choose-your-flavor quizzes rock. Though you need to be prepared for a good number of students (mainly in the regular level class) not caring enough to ever choose. But the kids that do care (at both regular and honors level) really love the choose-your-flavor quizzes, and they make really good, targeted use of them.

How do these standards change for a regular or college prep physics course?

As for bundling standards, I think it would make it harder for students to know exactly what is required of them. Maybe if you bundle but then list the individual standards below each bundle, students could see exactly what goes into it. It is not like you are dealing with a huge number of standards in each unit, and reducing them down to two or three might seem to oversimplify it for the students.

Oops! I meant to write a paragraph about this, but forgot by the time I finished writing the post. The bundles would only be for requesting tests. I would still grade them on each individual objective, and I wouldn’t use the bundles at all for in-class testing. The idea is just to create an easier way for them to do a good job of requesting tests and for me to have a lot of that prep work done over the summer instead of the hurry up and wait game from weekend to weekend. Does that make sense?

I totally agree with you that grading it by bundles wouldn’t be ideal since it would cover up some of the usefully grainy feedback.

Upon thinking about this some more and looking at your extended list of standards on your class website, it seems that the standards are heavily weighted towards conceptual ideas. How do you account for scientific practices like formulating a hypothesis, designing/conducting an experiment, collecting data, analyzing the results, and such?

Our school has a policy where semester grades are calculated as 40% for each of the two quarters and 20% for the final exam. So I need to have quarterly grades. (I do have some flexibility in changing some of the quarterly grades). I am struggling with my desire to implement SBG in a way that provides consistent feedback to students and parents on students’ progress. I feel that I’m stuck with a “points” system to report progress.

I tried using the two-dimensional conjunctive-style grading (that you described in an earlier blog) in the CVPM and CAPM units this year, but ran into trouble keeping track of which students were mastering which objectives, though I muddled through the replacement testing.

In the later units I gave more focused quizzes. Sort of like targeting one quiz at level A objectives for that unit, and targeting the next quiz at the level B objectives. I can make the level A quizzes worth more points, which can have the effect of making it necessary for students to master the level A objectives. In a way this is like bundling the objectives by level. I’d like you to comment on this, please.

Also, if a student begins to demonstrate B-level mastery of, say, CAPM during the UBFPM unit, I am thinking about having that student take a CAPM B-level replacement quiz. I’d like your comments on this also.

To qualify for taking a replacement quiz, students had to make corrections to the original quiz, get help from someone to gain understanding or the material, and do a practice worksheet to demonstrate that understanding.

I still give the modeling worksheets as homework, but we also do quite a few of them in class. If homework is more than a week late, it expires and the student is required to do a replacement worksheet.

I want to improve upon what I did this year, and I’d like your comments (and comments from others also).

What a lot of great ideas and thinking here! I’m definitely lucky that I have flexibility in how I keep track of my grades throughout the year and how I calculate a final grade in the end. It sounds like you’ve already found some good ways to live within the constraints of points. I’ll write some of my initial impressions/ideas below (and I will try to make sure to respond to each of your points/requests above).

In terms of keeping track of things—have you tried using ActiveGrade? It has really helped me a lot. In the summer before trying out SBG (and before the invention of ActiveGrade), we spent a lot of time talking about how we were going to keep track of everything. There weren’t a lot of easy solutions that came to mind. If you want to include more non-numerical feedback, something like Blue Harvest could also be really useful.

When you were describing the idea of having A quizzes vs B quizzes (with A quizzes being more valuable), I was thinking about how to make students think of points more in the way that they think of objectives. Here’s the first idea I have (this is centered around using a system that mirrors my conjunctive-style grading, so please imagine it with other tweaks that make it the way you’d want it to be):

Figure out how many A objectives are in the first quarter. Divide 70 by the number of A objectives for that quarter. That is the number of points that each A objective is worth during that quarter (may vary by quarter). Each A-level quiz is worth a number of points equal to (the number of A objectives in that bundle) * (the value of each A objective for that quarter).

Do the same thing with B objectives in the first quarter, but this time divide 20 by the number of Bs for the quarter. That is the number of points each B objective is worth that quarter.
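As a sanity check, the arithmetic in the two paragraphs above can be sketched out directly (the objective counts below are made up for illustration; a real quarter would use its actual counts):

```python
def objective_points(num_a, num_b):
    """Split a 100-point quarter: 70 points spread evenly across A
    objectives, 20 across Bs, leaving 10 for the final piece."""
    return 70 / num_a, 20 / num_b

def quiz_value(bundle_size, per_objective):
    """A quiz is worth (number of objectives in its bundle) times
    (the per-objective value for that quarter)."""
    return bundle_size * per_objective

# Hypothetical quarter with 14 A objectives and 10 B objectives:
a_value, b_value = objective_points(num_a=14, num_b=10)
# each A is worth 5 points; each B is worth 2 points
```

So a two-objective A-level bundle would make a 10-point quiz that quarter, and the per-objective values shift automatically when a different quarter has more or fewer objectives.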

Create a tree structure where each bundle is dependent on bundles beneath it. So the B-level quiz for that model would be above the A-level quiz for the same model. The B-quiz for projectile motion would be above the B-quizzes for CVPM and CAPM. The central force quizzes would be above the UBFPM quizzes (both A- and B-level). Etc etc. Actually, even cooler might be to have your students decide on how to construct the tree as it goes along, so that you are building a concept-mapping activity right into the assessment structure and giving the students more control and ownership in the process. It would probably be helpful to give them an example tree of skills at the start of the year from some other arena that they would be able to understand by analogy (even if they don’t create the physics tree themselves).

You need a way of keeping track of each student’s score specifically in each part of the tree structure (I think this could be accomplished without too much trouble in either AG or BH). So. Here’s the conjunctive-SBG part snuck into points clothing (bear with me, this will take a few bullet points of setting up): In order to be eligible for any particular quiz, they must have mastered all of the bundles on which it depends (per the tree). So to take a CVPM B-level quiz, they must have current mastery on the CVPM A-level quiz. To take a PMPM quiz, they must have current mastery on the CAPM B-level quiz (and the CAPM/CVPM A-levels, and maybe others, depending on the specifics of your tree).

B quizzes would necessarily involve A objectives. It might be possible to show some holes in A-level understandings on a B quiz. That should affect the score for the A bundle, since having the score go up and down throughout the process of gaining deep mastery is a pretty essential part of SBG. With the point system, that can be a little rough (especially if parents are seeing grades constantly reported). Here’s the idea, though—if a student later regresses on a previously mastered A objective, it changes the score for that A-level bundle, but it just freezes the scores for all of the bundles dependent on that A-level one (that is, the other bundle scores still count as part of the grade, but cannot be improved yet). In order to test at any higher level in the tree in the future, the student must go back and regain a mastery-level score on that older A-bundle. Once they have, the other bundles are now unfrozen and can be tested on again. So more than just because of a higher point-worth (something that is sometimes too abstract or far-off-feeling to teenagers), the A-objectives (and even the earlier B-objectives, like solving kinematics problems) are very clearly essential. And lacking mastery on a core skill requires immediate attention and remediation.

It might be necessary to be a little more forgiving about what triggers tree-freezing when it comes to B-level bundles. It might be a certain number of Bs in the bundle that must be mastered to test at a higher level, or it might be that particular ones in the bundle must be mastered in order to advance. Solving problems with UBFPM should be a B-level, not an A-level objective (since it is about problem solving and not a conceptual skill or diagram). It still must be mastered in order to have a chance at solving problems with the central force model.
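A rough sketch of the eligibility-and-freezing logic from the last few paragraphs, assuming hypothetical bundle names, a simple dependency map, and a made-up mastery threshold on a 0-to-1 scale:

```python
# Hypothetical tree: each bundle lists the bundles it directly depends on.
DEPENDS_ON = {
    "CVPM-A": [],
    "CVPM-B": ["CVPM-A"],
    "CAPM-A": [],
    "CAPM-B": ["CAPM-A"],
    "PMPM-B": ["CVPM-B", "CAPM-B"],  # projectile motion sits above both
}
MASTERY = 0.8  # assumed threshold; pick whatever counts as "mastery"

def eligible(bundle, scores):
    """A quiz may be taken only if every bundle beneath it in the tree
    (directly or indirectly) currently sits at mastery level."""
    for dep in DEPENDS_ON[bundle]:
        if scores.get(dep, 0) < MASTERY or not eligible(dep, scores):
            return False
    return True

def frozen(bundle, scores):
    """A bundle is frozen (its score still counts toward the grade but
    cannot be improved) whenever something beneath it has regressed."""
    return not eligible(bundle, scores)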

I’m leaving the final 10 points of the grade (how to get above a 90) up to what you do already (or would like to do for that piece). Capstone project? Goal-less problems? Etc. Or, if those don’t make sense for a particular class, maybe the Bs can get you all the way to 100?

Throughout the quarter, the grade could be constantly reported since the total number of points available that quarter is 100. It will take quite a while for the students to get a grade that doesn’t sound ridiculous (“I have a 13 in Physics right now!”), so it would keep a bit of the feel of the your-grade-does-not-exist (until I have to turn it in at the end of the semester) aspect that I really like in my own system.

Which maybe leads to another piece to consider—whether the grade should start at 0/100 or 50/100 at the start of a quarter. If it makes more sense to start at 50, then each A would be worth 20/(number of As) points each quarter.

The old objectives (from a previous quarter) would still be relevant in future quarters, so they may continue to be tested (and may require being tested if they result in freezing the higher parts of the tree). At the end of a new quarter, it might make sense to update the grade for a previous quarter. Same after the exam.

Even if you ultimately have to do some averaging of an exam and quarters, the day-to-day business of learning physics would probably be rather SBG-like with this type of system. Also, they’d be really, really ready to crush an exam since they’ve been required to keep testing on old skills until they internalize them.

Anyway, that’s just the first draft of an idea that I’ve had so far about how to make SBG live within such a constrained points system.

One other thing—I really like your requirement of getting help from someone before being allowed to take a new test. How do you check for that? I’m imagining that they must have another person (a classmate, someone from another section, an older student who has taken the class before, their teacher, another physics teacher, etc) sign off on their practice. That could be really neat.

Thank you very much. You have given me some ideas. I followed up on your suggestion and looked at ActiveGrade. I would need to use our in-house grade book; also, I think keeping two separate gradebooks would be too time consuming (plus I don’t have a budget).

I like your idea of figuring out points per objective (with the different weights). That’s close to what I do now, but more objective-y. I would spread them out over 2-3 quizzes; for example, motion diagrams/maps might be worth, say, 15 points in the first quarter. I could make a motion map question worth 10 points on one quiz and 5 points on another, then grade each problem with the student earning 0%, 50%, or 100% of the points (i.e., 0, 1, or 2 level mastery). Parents would still see the points accumulating as they are accustomed to seeing. There will be a lot of low scores at times, but parents can take comfort in the knowledge that their student can be reassessed. They can also provide encouragement for their student to take the initiative to do so.
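The 0/50/100 scheme above maps cleanly onto a tiny helper; here's a sketch using the motion-map numbers from the comment (the 10- and 5-point split is the commenter's example, and the sample mastery ratings are made up):

```python
def problem_score(points_possible, mastery_level):
    """Map a 0/1/2 mastery rating onto 0%, 50%, or 100% of a
    problem's point value."""
    if mastery_level not in (0, 1, 2):
        raise ValueError("mastery level must be 0, 1, or 2")
    return points_possible * mastery_level / 2

# Motion maps worth 15 points for the quarter, spread as a 10-point
# problem on one quiz and a 5-point problem on another. Suppose the
# student earns full mastery (2) on the first and partial (1) on the second:
quarter_total = problem_score(10, 2) + problem_score(5, 1)  # 10 + 2.5 = 12.5
```

Parents would see 12.5 of 15 points accumulate, and a later reassessment of the second problem at mastery level 2 would bring the total to the full 15.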

I could create a conjunctive style grading matrix for each quarter, but have each student be responsible for keeping their personal version current. Students would be responsible for keeping all of their quizzes as well. When they want to be reassessed, they would select a “bundle” of objectives within that unit or across two units (but probably not more than that … not sure). Reassessment of A level objectives could be a single A level objective, or a narrow bundle of A level objectives. Both subject to approval, but putting the student in control. This would necessitate marking each problem on the quiz with an objective designation so the students could keep track of their progress on each objective. [This takes away the nightmare of me having to do this for 100+ students]. I could send the students an electronic version, and insist that they keep it current and stored on the school network server.

To answer your question, I usually just ask the student who they got help from, and then sometimes check up by thanking the other student and watching their reaction (or asking a specific question). Maybe having them sign off on the working materials would be a good idea to implement next year.

I might consider filling the other 10% with labs and projects … maybe some points for whiteboarding participation. We do two big projects every year, and I also want to have students improve their lab reports … this has been like pulling teeth … maybe making them level A objectives would be more effective?

Thank you again for your great ideas.

Joe

PS. The sequencing of your units differs from what the modeling people use in their workshops. I really like what you have done for an honors physics course. In a general physics course where I need to move at a more deliberate pace I’m not sure that it would work. What do you think?

I read a really great post about getting students to improve lab reports (I have it linked in a draft I’ve been writing about how poorly many students analyze their mistakes when asking for a new test, but the drafts tend to hang around for a while before being published). Here’s the link about lab report improvement. I suspect the answer is less grading, not more, but there are always multiple good paths to accomplishing those goals.

In terms of sequence, I use approximately the same sequence in the Honors Physics and in the regular Physics! classes, but the Honors kids get about 2x as far (and also some of their units have more depth). The only real difference in sequence this year is that I did oscillating particles before circular motion in the Honors class. The regular class got to circles, but skipped oscillating particles, then ran out of time.

I think the regular class needs to find its own groove. It used to have its own groove, but then we changed Honors Physics to match it (use Modeling) because it was working so well. Also, because of the slower pace, I think it would be nicer to get to more of the fundamental principles in the first semester than we currently do. It would feel like we’ve done more real physics by the time we get to the first exam.

I’ve been thinking about resequencing my units along the lines of what you have done, and I’m thinking about introducing MTM-1D just before CAPM. This would build on the BFPM and CVPM and introduce the concept of the cause of a change in velocity. Then move on to the CAPM.

One of the things I like about your sequencing is the way the topics sort of spiral together as the models build.

I’ve been thinking about the order of units myself. My Honors classes are going to stay the same as before, but I’m flipping the order in my regular classes for next year. I’m starting with energy transfer, then momentum transfer, then forces. I’m really excited about it, but haven’t done quite enough work yet to start posting about it.

I think the only problem with putting MTM between BFPM and UBFPM is that it means you’ll have to cycle back to MTM later because you will only be able to deal with systems that don’t have any net momentum transfer (since you don’t have the net force idea yet to change the total momentum of the system). I’m planning on coming back to ETM and MTM after the forces units to do energy with work and momentum with impulse.