Monthly Archives: November 2014

I’ve been prolific in my complaints about schools buying into assessment systems which focus on tracking rather than assessment, pander to the myths of levels, or re-introduce burdensome approaches like APP. Every time, quite reasonably, several people ask me via Twitter: what are you doing instead?

I do my best to reply, but the reality is that what works for my school is not necessarily right for everyone. That said, I have shared the Key Objectives on which our model is based. However, what I really want to advise people to do is to access the NAHT materials which set out how to build a really effective model. Unfortunately, while I think the materials themselves are excellent, the problem seems to be that the NAHT has neither promoted them nor made them particularly accessible. So here’s my attempt to do so.

The NAHT framework for assessment

The NAHT model is broadly the same as that which led to my Key Objectives, although notably briefer in its set of objectives. There are a few key principles that underpin it, including:

Key Performance Indicators [KPIs] should be selected for each year group and subject, against which teachers can make assessments.

End-of-year descriptors, based on the KPIs, can be used for more summative judgements.

The whole process should include in-school and, where possible, inter-school moderation.

All of these things strike me as very sensible principles. The NAHT team which put together the materials to support this model went to some lengths to point out that schools (or groups of schools) may want to adapt the specifics of what is recorded for tracking purposes, but to support schools in doing so they have also provided examples of Key Performance Indicators for each year group and core subject area. These can be downloaded (rather frustratingly only one at a time!) from the NAHT website – regardless of whether or not you are a member.

The theory, then, is that assessment can take place throughout the year against specific objectives, rather than simply allocating children to meaningless code groups (‘3c’, ‘developing’, ‘mastery’, ‘step 117’, etc.). Over the course of the year, teachers and pupils can see progress being made against specific criteria, and can clearly identify those which still need to be covered. Similarly, at the end of each year, it is possible to make a judgement in relation to the overall descriptor for the year group. Schools may even decide to offer a choice of descriptors if they really wish.

Annual tracking of those who are, and are not, meeting the performance standard for the year group can be kept, with intervention targeted appropriately.

There are several advantages to the NAHT system: firstly, it provides a sensible and manageable approach to assessment that can actually be used to support progress as well as meaningful tracking; secondly, it doesn’t create unnecessary – or unrealistic – subdivisions or stages to give the impression of progress where none can reasonably be measured. Perhaps most importantly, it also provides a ‘safety in numbers’ approach for schools who fear that Ofsted will judge them on their choice. As a reputable professional organisation, the NAHT is a good backbone for any system – much more so than relying on the creations of data experts who, while clearly invaluable in creating tracking and analysis software, are not necessarily experts in education themselves.

The aspect which seems to worry colleagues about approaches such as mine and the NAHT’s is that they don’t offer easily “measurable” (by which they usually mean track-able) steps all through the year. The fear is – I suspect – that it wouldn’t be possible to ‘prove’ to Ofsted that your assessments were robust if you didn’t have concrete figures to rely on at termly, or six-weekly, intervals. Of course, the reality is that such things are nonsense, and it’s important that we recognise this as a profession. The robustness comes from the assessment and moderation approaches, not the labelling. The easy-steps approach serves only to obfuscate the actual learning for the benefit of the spreadsheet. We need to move away from that model. Through the use of internal and inter-school moderation, we can have confidence in our judgements part-way through a year, and can improve our professional understanding of our children’s learning at the same time.

Of course, plenty of software companies will have come up with clever gadgets and numbers and graphs to wow school leaders and governors – that is their job. But the question school leaders should really be asking software companies is not “what are you offering?”, but “what are you building that will match our requirements?”

I notice this week that the latest release of Target Tracker includes an option for filtering to show the NAHT Key Performance Indicators. Infomentor offers a similar option, which also allows schools to link the objectives directly to planning. They also have a setup where schools can opt for my Key Objectives instead if they prefer (which offer slightly more detail). David Pott has already demonstrated how SIMS can be used to track such assessments.

The options are out there, and schools should be looking for tracking systems that fit with good educational principles, not trying to tack the latter on to fit with the tracking system they’ve got.

It’s not often I quote the words of education ministers with anything other than disdain, but just occasionally they talk sense. Back in April, Liz Truss explained the ‘freedoms’ being given to schools to lead on assessment between key stages, and commented on the previous system of APP. She described it as an “enormous, cumbersome process” that led to teachers working excessive hours; a system that was “almost beyond satire, […] requiring hours of literal box-ticking”.

Not everybody agreed with the scrapping of levels, but the recent massive response to the Workload Challenge has shown that if there is one thing that teachers are in agreement about, it is the excessive workload in the profession. Now at least we have a chance to get rid of one of those onerous demands on our time.

And yet…

Just this evening I came across two tracking systems that have been produced by private companies and appear to mimic and recreate the administrative burden of APP. What’s more, they seem to have managed to take the previously complex system, and add further levels of detail. Of course, they attempt to argue that this will improve assessment, but our experience tells us that this is not the case.

A school’s assessment system could assess everything students are learning, but then teachers would spend more time assessing than teaching. The important point here is that any assessment system needs to be selective about what gets assessed and what does not…

The problem with the new models which attempt to emulate APP is that they fail in this. By trying to add a measure to everything, they suggest that they are more detailed and more useful than ever before. But the reality is that this level of detail is unhelpful: the demands on teachers’ time outweigh the benefits.

Once again, too many school leaders are confusing assessment with tracking. The idea that if we tick more boxes, then our conclusions will be more precise is foolish. If three sub-levels across a two-year cycle were nonsense, then three sub-levels every year can only be worse. Just because the old – now discredited – system allocated point scores each year doesn’t mean that we should continue to do so.

Assessment is not a simple task. By increasing the volume of judgements required, we reduce teachers’ ability to do it well: we opt for quantity over quality. We end up with flow-charts of how to make judgements, rather than professional dialogue of how to assess learning. We end up with rules for the number of ticks required. As Wiliam also says:

Simplistic rules of thumb like requiring a child to demonstrate something three times to prove they have ‘got it’ are unlikely to be helpful. Here, there is no substitute for professional judgement – provided, of course, ‘professional’ means not just exercising one’s judgement, but also discussing one’s decisions with others

If you’re a headteacher who has brought in a system (or, more likely, bought into a system) which implies that progress can be measured as a discrete level (or stage, or step) every term, that asks teachers to assess every single objective of the National Curriculum (or worse, tens of sub-objectives too!), or that prides itself on being akin to APP, then shame on you. There’s no excuse for taking a workload which the department itself admits was unreasonable, and replacing it with a larger one.

If you’re a teacher in a school that has adopted one of these awful systems, then I can only commiserate. Might I suggest that you print off a copy of this blog, and slide it under your headteacher’s door one night. I’d also highly recommend adding Dylan Wiliam’s article to it.

We need our school leaders to lead – not just repeat the mistakes of the past.

Teach Primary magazine

It’s only right that I confess that I write an article for each issue of Teach Primary and so couldn’t fairly be said to be completely impartial. That said, I do think it’s well worth subscribing, if only for gems like Wiliam’s article and others that come up each issue, along with resources, ideas and wisdom from actual teachers and leaders. http://www.teachprimary.com/

The draft performance descriptors were published a couple of weeks ago, and the consultation is still open for five more weeks, but I’m concerned by how few responses there seem to have been.

A significant overhaul of the current teacher assessment arrangements is proposed, one that will affect every primary school up and down the country, not to mention millions of pupils. My personal view is that the performance descriptors are a disaster. But three weeks into the consultation, when I submitted my response I got an email with identifier number 83. Surely more than 83 teachers, schools and organisations across the country must have a view on at least one of the five Yes/No questions asked?

In the interests of openness, and supporting others who feel similarly to me, here are some of the issues I have with the draft descriptors:

They suffer from the adverb problem, or similar nuances of language that serve to make judgements vague and unhelpful. Take a look, for example, at these two statements and try to spot the subtle differences. It might be possible to guess which implies the more advanced writer, but could you really quantify the difference?

Writing demonstrates some features of the given form, as appropriate to audience, purpose and context, arising from discussion of models of writing with similar structure, vocabulary and grammar.

Writing demonstrates features of selected form, as appropriate to audience, purpose and context, drawn from discussion of models of similar writing and the recording of ideas from pupils’ own reading.

As Tim Oates’ recent video clearly explained, one of the problems with levels was the combination of three different meanings: a test score, a “best fit”, and a “just in” meaning. Although the new system removes the first of these, the other two remain. We have a new threshold issue.

Another of the problems with levels was the use of the labels for children – it’s almost inevitable that some (many? almost all?) schools will end up using these labels as part of a tracking system, and so re-create another of the problems levels had.

One of the reasons for getting rid of levels was because it distracted attention from what a child can/cannot do by replacing it with a generic label. This system re-creates those problems.

Generally, these descriptors simply realise our fears of a new system of levels by another name.

Some teachers may be happy with that, and they’re entitled to say so in the consultation, but either way, surely there must be more than 82 other people who care?

I’ve sat through a fair few presentations at conferences myself, and have even given a few. Lately I seem to have been subjected to a particularly large number that lack any real direction or purpose, leaving me with no idea what impact they were meant to have. I have, though, noticed an increasing commonality between some of the poorer presentations I’ve seen, and while they’ve left me frustrated at having wasted my time, these tricks seem to go down a storm with audiences of primary educators. The same may be true of secondary colleagues, but I’ve less experience with them.

So if you’re not really bothered about achieving anything, and merely want to set up a career as a sort of after-dinner speaker for between meals, then I suggest the following patter is guaranteed to bring a healthy income:

Find some common sense statements and turn them into ‘bon mots’.

Take random useful qualities, or ideas, or just words and turn them into an acronym.
(Why Plan, Do and Review, when you can Plan, Research, Implement, Complete & Keep?)

If an acronym won’t work, create a diagram for the initials. Stars are good. Or hexagons.

Intersperse your words with asides about imaginary children you once met.

Use the Chart-Art facilities in PowerPoint to link seemingly unrelated things into a single diagram.

Present broad concepts with a background of a grid as if to imply a scientific graph.

Throw in a critical comment about Michael Gove (no need to worry about his successor yet).

Use the phrase “research shows”. No need to back this up with any references.

Throw in some references to books you’ve read. Implore people to read them.

Make another comment about Ofsted inspectors.

Say something that shows you understand how busy teachers are.

Refer to well-known scientists: a misquote from Einstein is as good as any real research.

Emphasise the importance of things other than Literacy and Numeracy.

Point out that SATs are not a real measure of children’s achievement.

Say again that it’s all about the kids; “that’s why we came into this profession”.

Make a reference to the teacher who gets snarky about her mug / chair / parking space.

Criticise the DfE. No specifics are necessary: just criticise the department somehow.

Show people some Buzzfeed style activity that shows their learning style.
Or their dominant brain hemisphere.
Or their balance of red, blue, green or yellow leadership style.

Acknowledge that evidence doesn’t support these ideas, but claim that they remain valid.

Blame secondary schools for something.

Raise the issue of the “mood hoover” or hard-to-engage staff member. No solutions needed.

Remind people that we’re preparing kids for an unknown future, so anything goes.

That should comfortably fill an hour or more. If you pad out the slot with anecdotes about children (your own, your class… a niece. Any will do) and comments that show how you were once an excellent teacher, then all you need now is a couple of common sense statements to underpin your work, or some popular messages that make your listener feel that they agreed with your every word: “we need more focus on the whole child” or “learning isn’t linear” or “teachers do the most important job in the world” are good examples.

Now, I wonder if there are any opportunities for running courses on how to run a course…?

It once again raised a question that I have long wondered about: why do we accept that secondary education costs so much more than primary?

I don’t have any more detail on the figures (yet), so it’s hard to know exactly what is being compared here. There could be a substantial difference in pupil numbers if we are comparing compulsory primary to compulsory secondary, particularly given the bulge in primary numbers at the moment, but either way it must almost certainly represent a greater number of primary school pupils for 75% of the cost of secondary.

Now, I’m not arguing that there should be no difference. I do recognise that there are some additional costs in resourcing secondary schooling, but are the costs really so much greater that the sector warrants an additional 33% or more spending?
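The headline figure is simple arithmetic. As a minimal sketch (with purely illustrative numbers – these are not real funding data), if total primary spending is 75% of total secondary spending for at least as many pupils, the per-pupil premium follows directly:

```python
def secondary_premium(primary_total, secondary_total,
                      primary_pupils, secondary_pupils):
    """Secondary per-pupil spend expressed as a premium over primary's."""
    primary_per_pupil = primary_total / primary_pupils
    secondary_per_pupil = secondary_total / secondary_pupils
    return secondary_per_pupil / primary_per_pupil - 1

# Equal pupil numbers, with primary's total at 75% of secondary's:
print(f"{secondary_premium(75, 100, 1000, 1000):.0%}")  # prints 33%

# With more primary pupils than secondary, the premium is higher still:
print(f"{secondary_premium(75, 100, 1200, 1000):.0%}")  # prints 60%
```

So even on the most generous reading – identical pupil numbers – secondary pupils attract a third more funding each; any primary bulge only widens the gap.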

When I queried this on Twitter, various possibilities were put forward, of varying degrees of justifiability to my mind, such as:

Exam boards costs
This seems reasonable to me, although I cannot believe these costs account for much of the difference.

Equipment (especially for practical subjects)
Again, I agree to an extent. However, many of these costs are infrequent, and are then spread across hundreds of pupils. Primary schools, on the other hand, often have to buy resources for relatively small numbers (and I challenge anyone to look at the costs of resources such as Numicon without their eyes watering).

Staffing
This is the big one, understandably. And clearly secondary schools need more staff, but do they need so many more staff than a typical primary on a “per head” basis? And it’s true that they have larger staffs and so more leaders per school, but does that still hold per pupil? For example, one secondary head can often be responsible for 2000 students, a number that would commonly be shared between up to 10 primary heads. Similarly, an admin team might necessarily be larger for a large school, but are the costs necessarily higher than the equivalent admin support across 5-10 primaries? And if so – why?

It is true that small extra costs here and there soon add up, but what if the discrepancy hadn’t previously existed? Would so many secondary teachers have TLRs compared to primary colleagues?

Take for example a smallish 800-place secondary school without sixth form. It wouldn’t be uncommon for there to be a TLR for a Head of Year post – or perhaps someone on the Leadership scale, and perhaps even a deputy head of year on a TLR. Alongside this it would be normal to have Heads of department on a range of TLRs, and in many cases second in department and other roles.

Now take an equivalent size of primary school. Again, it might be common to have paid heads of year, although often on the lowest TLR. These same staff would be expected to take on curriculum leadership roles, too. And often on a far fuller timetable than their secondary colleagues.

Is that because such roles are inherently more costly in secondary schools, or just because the money is more easily available for it?

I genuinely don’t know the answers. There are almost certainly costs of secondary schools that I haven’t considered. There are probably some diseconomies of scale that counter some of the presumed economies. But can any of that really justify spending 33% more on secondary pupils than their primary counterparts?

I have deliberately avoided getting into the details of post-16 and pre-compulsory education. I recognise that there are greater costs involved at either end of the system, particularly in the sheer number of staff required, but I’m not persuaded that those factors substantially affect the bigger picture.