Teaching what works

Desirable but impossible

I am increasingly convinced that one of the biggest problems we face in schools is a belief that because something is desirable it must be possible. I think we see this particularly when it comes to assessment, tracking and reporting.

What do we want!

I can completely understand that everyone would like to know what grade a pupil is going to get at the end of a course. It would mean that they and their parents could make plans for the future and schools could get their excuses ready. It would be even better to know what they are going to get if nothing else changed as they or we could then try and change things. They could work harder in particular subjects, hire tutors or receive targeted intervention. It would be desirable, but is it possible?

Short answer, no…

Let’s imagine that a pupil in Year 7 gets 68% on their geography assessment. Their KS2 data means the school have given them an eventual target of a 7. Are they on track?

We could apply GCSE grade boundaries and say that 68% would probably mean they got a 6 on this assessment. But of course this is nonsense. This isn’t a GCSE assessment. The GCSE grade boundaries don’t apply here. So let’s jump forward. It’s Year 10. That same pupil gets 68% on an assessment based on a past paper. Does this mean we can apply the grade boundaries and say they achieved a 6? Does it mean we can say they are on track for their 7? In both cases, no.

The GCSE grade boundaries apply to a particular paper: one assessing a range of topics that may not be represented in the part of the paper sat in this assessment. The real exam is also sat in tightly controlled conditions, at a time when pupils are juggling dozens of other exam papers and have little idea what will come up. The paper our pupil has just done in class is only a fragment of the eventual exam, sat in looser conditions, without a stack of other exams alongside it, and possibly after being told which areas to focus their revision on. The grade boundaries don’t apply.

If they sat the entire paper in exact exam conditions we might be able to say that they have indeed achieved a 6, but even then we can’t really say what they are on track for. Some pupils make sudden and rapid progress and things just seem to fall into place at the end of the course. Others seem to make these rapid gains early on and then plateau. Progress is not linear.

So in most assessments we can’t convert a score into a grade, and even when we can, we can’t convert it into a meaningful prediction. What can we do?

How am I doing?

Interested in views as a teacher/parent/leader. — Mark Enser 🌍 (@EnserMark) February 21, 2019

From this thread, a remarkable consensus seems to be emerging that would have been unthinkable a few years ago. Just use the raw score.

All we actually know is how the pupil has done, how the rest of the cohort did, and how this was different last time. Let’s stop pretending we know other things just because it is desirable and instead accept we need to deal with the cold reality.

This pupil got 68%. Maybe the class average is 75%. That tells us something, even if it isn’t as much as we would like. This score might put them 90th out of the 200 pupils in the year (counting up from the bottom). That tells us a little more. Perhaps their KS2 data (which, for all its many faults, is what we will be judged against) suggests they should be at 150/200. Suddenly this tells us a lot more: they seem to need some kind of intervention or support. Perhaps in the previous assessment they ranked 100/200, and the one before that 120/200. Now we have a pattern. Perhaps this is a similar story in every subject but history. Now we have a potential solution to explore – what is different there?
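The rank-and-compare approach above can be sketched in a few lines of code. This is a minimal illustration, not a tracking system: the pupil names, scores, expected ranks and the `gap` threshold are all invented for the example.

```python
# Sketch of rank-based tracking: rank each pupil's raw score within the
# cohort (counting up from the bottom), then flag anyone sitting well
# below the rank their KS2 baseline suggests. All data is hypothetical.

def rank_from_bottom(scores):
    """Map each pupil to their rank, where 1 = lowest score in the cohort."""
    ordered = sorted(scores, key=scores.get)
    return {pupil: i + 1 for i, pupil in enumerate(ordered)}

def flag_for_support(scores, expected_rank, gap=30):
    """Return pupils whose actual rank trails their expected rank by more than gap."""
    actual = rank_from_bottom(scores)
    return [p for p in scores if expected_rank[p] - actual[p] > gap]

# Tiny illustrative cohort (a real year group would have ~200 pupils).
scores = {"A": 68, "B": 75, "C": 81, "D": 55}
expected = {"A": 4, "B": 2, "C": 3, "D": 1}   # ranks implied by KS2 data

print(rank_from_bottom(scores))            # {'D': 1, 'A': 2, 'B': 3, 'C': 4}
print(flag_for_support(scores, expected, gap=1))   # ['A']
```

The point of the sketch is that every number in it is something we actually know – a raw score or a position in the cohort – rather than a converted grade.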

Of course this doesn’t tell us everything we desire to know. It doesn’t tell us what grade they will walk out with, but that is impossible, so let’s move on. It also doesn’t tell us if the entire cohort is under-performing, but then what will you actually do differently if they are? Or for that matter, what will you do differently if they aren’t? Suddenly stop trying so hard? Sit back and say “that’ll do”? Quite.

Conclusion

When it comes to assessment, reporting and tracking we need to stop trying to make the impossible possible and accept that just because something is desirable doesn’t mean we can do it. Let’s start being honest about what valid judgements we can make with assessment data and stop filling tracker sheets with junk. If we put junk data in, we will get junk conclusions out.

For more on what teaching would look like if we left it to the teachers, check out Teach Like Nobody’s Watching. Available for pre-order now.

5 thoughts on “Desirable but impossible”

Not everything good is ultimately worthwhile. Some things are better, and thus, more worthwhile. The question is always – ‘What is best for us and our students?’ It may mean getting rid of, or adjusting, what we already do that is good. Something like ‘good, better, best… make your good better, and your better best…’.

I’m interested in a hybrid approach, using CEM Centre information via MidYIS to set a baseline. That way we could compare a pupil’s rank among peers in this assessment with their baseline “rank”. One could standardise assessments against the original cohort’s mean and standard deviation. The only thing this masks is a whole cohort getting worse year on year, but you should be able to spot that in the raw scores.
But what would I report to parents…?
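The standardisation the comment describes amounts to converting raw scores into z-scores against a fixed baseline. A rough sketch, with baseline figures invented purely for illustration:

```python
# Standardise each cohort's raw scores against a fixed baseline mean and
# standard deviation, so comparisons survive year-to-year changes in the
# assessment. The baseline numbers here are made up for the example.

import statistics

def standardise(raw_scores, baseline_mean, baseline_sd):
    """Convert raw scores to z-scores against the original cohort's stats."""
    return [(s - baseline_mean) / baseline_sd for s in raw_scores]

this_year = [68, 75, 81, 55, 62]
baseline_mean, baseline_sd = 70.0, 10.0   # from the original cohort

z = standardise(this_year, baseline_mean, baseline_sd)
print([round(v, 2) for v in z])   # [-0.2, 0.5, 1.1, -1.5, -0.8]

# The caveat in the comment: if a whole cohort slips, every z-score
# slips with it, so the raw cohort mean is still worth watching.
print(round(statistics.mean(this_year), 1))   # 68.2
```

The caveat holds: a z-score only says how a pupil sits relative to the baseline distribution, so a cohort-wide decline shows up in the raw mean, not in any individual pupil’s standing.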