Looking ahead to timetabling in the new school year, I found myself reflecting on the work Tom Sherrington did a few years ago on secondary timetables. Unfortunately, the primary curriculum timetable is not so easy to analyse: very few schools stick to a simple programme of x lessons of equal length per day, and few teach every lesson every week – or even every fortnight, as would be common in secondary.

Because of this, it’s much harder to get a sense of how much time schools are giving over to each subject, particularly given the changes of recent years and those on the horizon. So, I set out to try to find out as much as I could, through another of my Google surveys.

It’s impossible to present all of that information tidily, since every school’s situation is unique, but here I’ve tried to draw out some key things.

Weekly subject hours

Different schools take different approaches. Three schools might each offer 36 hours of Art a year: one with a weekly one-hour lesson, another with two hours every other week, and a third with a two-hour lesson every week but only every other half-term. Yet another might rely mainly on termly Art days to reach its quota. So we’re not comparing like with like here, but the table below attempts to show the average number of hours taught for each subject if evened out over 36 weeks of term (allowing a couple of weeks for being off-timetable) – all rounded to the nearest 5 minutes.
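The evening-out calculation described above is simple enough to sketch; this is my own illustration of the method (annual hours spread over 36 weeks, rounded to the nearest 5 minutes), not the survey's actual spreadsheet:

```python
def weekly_minutes(hours_per_year, weeks=36):
    """Average weekly minutes for a subject, evened out over the term-time year."""
    minutes = hours_per_year * 60 / weeks
    return 5 * round(minutes / 5)  # rounded to the nearest 5 minutes

# All of the Art arrangements above come to 36 hours a year, so however the
# hours are actually scheduled, they even out the same:
print(weekly_minutes(36))  # 60 minutes a week
```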

I don’t imagine anyone being massively surprised by any of those figures, but it certainly gives an indication of the narrowing of the primary curriculum. When the QCA last recommended teaching hours in 2002, it suggested an average of 55 minutes a week for the majority of foundation subjects. We’re now struggling to get above 30 for Geography!

Perhaps more surprising is the fact that although more time is given to the tested subjects in Year 6, the decline in ‘breadth’ is not huge. It seems that the curriculum is fairly limited across the whole of primary. The greatest breadth in curriculum, at least in timetable terms, appears to be in Year 3.

Regularity

Some years ago there was a clear government target for primary pupils to have at least 2 hours of timetabled PE each week. It seems that the target has achieved something, as PE is the only foundation subject which has ended up with significantly more than its previously recommended amount (1 hour 15 minutes in 2002). It’s also one of the few subjects with weekly slots: 98% of responses said they taught PE every week, and nearly 90% had more than 90 minutes of PE each week.

The only other subject that comes close to such regular weekly slots is Science, with around two-thirds of respondents saying they taught Science every week.

At the other end of the scale, Design & Technology is very rarely taught on a weekly schedule. This is perhaps not surprising given the amount of resource required for the subject. Nearly a third of schools appear to use standalone days each term or half-term for the subject instead:

Exceptional Cases

I collected data in categories rather than exact figures, so for those schools which said they had 7½ or more hours each week of English or Maths, I could only count the 7½ hours. In the end, more than half of responses (52%) gave an answer of 7½ hours a week or more for English. It seems, therefore, that if anything the figures above are under-estimates of the time given over to English.

Only about 10% of schools gave a similarly high answer for Maths, but this is still quite a significant number. Those figures rise to 57% and 16% respectively for pupils in Year 6.

At the other end of the scale, approximately 5% of responses said that they gave over no time to PSHE. The subject is not yet statutory, so presumably that figure will fall over the coming year or two. Around 4% of responses said they taught no Computing at all; I wonder if that’s more a confidence issue than a planned decision. Who knows?



As ever with such things, it is important to point out that this data is not a scientific sample, has not been verified, and could be completely meaningless. However, in the absence of any comparative data from the DfE, it is an attempt to give some vague indication of the national picture of schools that took part in the MTC sample.

At the time of writing, some 211 sets of data had been submitted to the open spreadsheet online. Because it’s an open spreadsheet, there’s no guarantee that it doesn’t have errors, or that some data hasn’t been damaged, or even completely made up. With that in mind, I have completed some very simple calculations based on the data to give some idea of indicative figures.

Overall Averages

The mean average of all pupils’ results was 18.4.

The mean average of all schools’ averages was also 18.4.
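The distinction between those two figures is easy to blur, so here is a toy sketch (with invented numbers, not the real submitted data) of how each average is computed, and why the two can differ when schools submit cohorts of different sizes:

```python
# Hypothetical mini-dataset: two schools, different cohort sizes.
schools = {
    "A": [20, 22, 18],  # three pupils
    "B": [15, 17],      # two pupils
}

# Mean of all pupils' results, pooled together.
all_pupils = [score for scores in schools.values() for score in scores]
pupil_mean = sum(all_pupils) / len(all_pupils)

# Mean of each school's own average (every school weighted equally).
school_means = [sum(v) / len(v) for v in schools.values()]
mean_of_means = sum(school_means) / len(school_means)

print(round(pupil_mean, 1))     # 18.4 - weighted towards the larger school
print(round(mean_of_means, 1))  # 18.0 - each school counts once
```

That the two national figures happened to agree suggests school size wasn't skewing the sample much.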

The following table shows the approximate cut-off points when comparing schools’ averages, to place schools into bands.

Perfect Scores

There was talk at one point of full marks being the expected threshold. It’s no longer clear that this is the case, or even that there will be a pass mark of any sort at all, but within the sample:

Overall proportion scoring 25/25: 17.4%

Bands for proportion scoring 25/25:

Pupil Scores

More pupils scored full marks than any other individual score, with scores clearly more likely to be at the top end of the scale.

School Averages

The majority of schools had an average score of between 16 and 20.

Does any of this mean anything? Not really… it’s a tiny sample from a voluntary pilot of a new test with no clear expectations, hastily compiled from questionable data. But some of it is at least slightly interesting.


Having shared our annual report template with a few interested teachers, I thought it was worth sharing the main template more widely. If you’re not interested in reading about it, then feel free to scroll to the bottom just to download the template… I’ll never know 🙂

It’s always struck me as odd that we seem to have contradictory wisdom about the main forms of report to parents. New teachers are always told that there should be “no surprises” at a parents’ evening. If children are falling behind, or misbehaving, or perhaps failing to complete homework, then parents should already know this rather than finding out in their 10-minute slot.

Why is it, then, that so many seem to presume the opposite for report-writing, as though parents know nothing of their child’s learning and need everything spelling out in detail? In truth, most parents receive broadly similar reports year after year, because children don’t change that much.

The need to fill extra lines of content means either repeating the banal detail of what has been taught (regardless of how well it has been learned), or of trying to find minutiae to discuss.

So, when it came to re-working the report template for my current school, I had a few things in mind:

I wanted to minimise the amount teachers have to write, while leaving room for comments about the important personal & social detail (the bit parents are really interested in!)

I wanted to be clear about where children met – or failed to meet – expectations, and to set clear expectations for excellence.

I wanted to give an opportunity to reflect on attainment in all subjects.

So, our report is made up of a number of sections (after the introductory statement):

This is clearly the most important part of the report, not least because this is the section memories are made of. In my school I do ask teachers to write a comment which incorporates the personal/social elements as well as some reference to attainment in the key subjects of English & Maths. It’s also the place to add in detail about any particular skill or expertise in other subject areas.
The whole box takes up to 10 lines – roughly 180 words max.

The subject attainment section is very brief in terms of outcomes, but quite clear for parents. I’m not a fan of the vocabulary of ‘Greater Depth’, but given its use in the statutory assessments, it seems to make sense to use it consistently across the school. Invariably these descriptors are not a great surprise to parents (mine, for example, were never going to expect me to achieve great things in PE!), but where they do highlight something, then parents can of course raise that at the open afternoon that follows shortly after reports are issued.

This section is something I brought with me from a previous school, and we had taken the idea from another school – so if your Nottinghamshire school was the originator, do let me know!
I like it because it’s a clear at-a-glance indicator of key areas of interest, including attendance which can sometimes come as a surprise to parents. I also like the clarity that “Good” is good, but that to be exceptional is, well, exceptional.

There is no doubt that adding a pupil comment creates additional work. I like to keep it as much because I think it’s something for pupils and families to look back on in years to come as because it’s an insight into their current achievements. It’s also a useful reflective opportunity for older pupils. (Pupils don’t see the rest of the report first; juniors type their entries and they get added electronically; infants write on smaller sheets of paper which are pasted into the template – achievable in a 1fe school.)

As for the targets, I don’t expect anything in-depth or insightful. For most children it’s at least one English and one maths target, often linked to key skills that can be practised at home, such as number bonds, key word spellings or regular reading. There might also be a personal/social target if appropriate, or behaviour in some cases. As I say to my staff, though, sometimes it’s also appropriate to put a target that just says “keep up the great work!”

I do manage a headteacher comment for every pupil, but as we only have 200 that’s perhaps more manageable than in some schools. (I haven’t pointed out to my staff that this means I actually write more for reports than any one teacher; I’m not sure the point would go down too well given all the other demands on them!)

Presenting the report

I’m always conscious that school reports are often kept for years, if not generations, and try to present them accordingly. Our template is set up as a 4-page document, which we print onto A3 white card and fold into A4 size. The front cover consists mainly of the large (attractive) logo and pupil name, and the back cover is pretty blank, but I think it makes the whole thing look a whole lot nicer.

As a school, we also currently track Key Performance Indicators in key subjects across the year, and so printed those out to accompany the report last year. I may take soundings from parents this year to see if they value that level of detail; I’m not clear that they would.

I also include a covering letter with reply slip. In theory this helps us to track receipt, but more importantly I hope it gives parents an opportunity to send positive responses and thanks to teachers which they might not otherwise have the opportunity to convey. I still keep some report reply slips from my teaching days – and I ditch others!

The Template

Well done if you read this far. No credit if you just skipped my words of wisdom. I have stripped out the school-specific content from the template (logos, etc.) and uploaded a version here which you are welcome to download, adopt, edit and re-share as you wish. No need to add any credit on the report (it’d look odd for a start!), but I’d be glad to hear if you found it useful.


A cynic might think that it suits the government to create confusion about pay increases, but whatever your view, it’s clear that this year’s changes have been complex. After years of fairly simple – if small – changes to pay, the soundbites surrounding this year’s changes along with the strange calculations about how it’s worked out have led to some confusion which I’m attempting to clear up here:

Extra funding

The DfE doesn’t have enough detail about which teachers earn what to allow it to make exact calculations at a school level, so although it announced that it would fund any increase over 1%, the calculations are not that straightforward. The ‘estimate’ it has used has been calculated by working out roughly how much the rise will cost nationally, and then sharing the money out between schools based on their size.

This means that a typical 420-pupil primary school will be given an additional sum of nearly £12,000 for the whole year, which is intended to cover the additional increase.

Of course, if that school has lots of staff on the upper and leadership scales who only get the 2%/1.5% rise, it may cover all the costs over 1%; if they have lots of staff on main scale who should be getting 3.5%, it may not.
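In other words, the grant is a flat per-pupil share rather than a school-specific costing. The sketch below derives a per-pupil rate purely from the post's example figures (roughly £12,000 for a 420-pupil school); it is an illustration of the sharing-out approach, not an official DfE rate:

```python
# Per-pupil rate implied by the example above (illustrative, not official).
per_pupil_grant = 12000 / 420  # roughly £28.57 per pupil per year

def grant(pupils):
    """Approximate annual pay-grant share for a school of the given size."""
    return round(per_pupil_grant * pupils)

print(grant(420))  # ~£12,000 for the example school
print(grant(210))  # half the pupils, half the grant - regardless of staffing
```

The final comment is the crux: two 210-pupil schools get the same sum even if one is staffed entirely on the main scale and the other on the upper scale.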

Which pay points are increasing

Many people are reporting that only teachers at the bottom/top of each pay scale have been given a pay rise by the government. This is a confusion that stems from a misunderstanding about teachers’ pay. For 5 years now, the government has only set out minimum and maximum rates for each pay scale. Although as teachers we’re used to talking about points M1 to M6, and U1 to U3, these no longer exist by statute. The government sets out the minimum a newly-qualified teacher can be paid, and the maximum any teacher on the main scale can be paid; everything else in between is for schools to decide – normally based on the recommendation of their local authority. (Many academies will also follow the national approach, so will have the same decision to make, although they can choose completely different pay scales if they prefer.)

For that reason, the percentage awards this year are only applied by the government to the minimum and maximum of each scale. It is then up to schools/LAs (and academies) to decide how to apply them elsewhere. What that means in practice is set out in some examples here:

A teacher on M1 who doesn’t move up to M2 (or its equivalent on the authority’s pay scales) will automatically get the 3.5% increase, because the new minimum amount will increase (from £22,917 to £23,729 outside London).

A teacher on M1 who is offered a pay rise by their school after appraisal or similar will get whatever pay rise their school/LA policy allows for. For most schools that still means a move to the equivalent of the old M2 point (£24,728), based on the recommended pay scales that the unions publish together. Whether the amount of the M2 payment is increased by 3.5% is up to the LA’s pay policy. The government would argue that it is providing funding for all teachers’ pay to increase; an individual school/LA/academy may decide that it can’t afford that increase and take the opportunity to set pay points that are lower than the union recommendations.

Although the maximum of the main scale has been raised (from £33,824 to £35,008), it does not mean that all teachers currently paid at the old maximum are entitled to the new maximum. Again, it’s up to the policy of the local authority or academy trust.
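The headline percentages can be sanity-checked directly against the statutory minima and maxima quoted above (figures outside London); a quick sketch:

```python
def uplift(old, new):
    """Percentage increase from old salary to new, to one decimal place."""
    return round(100 * (new - old) / old, 1)

print(uplift(22917, 23729))  # main scale minimum: 3.5%
print(uplift(33824, 35008))  # main scale maximum: 3.5%
```

Both statutory endpoints of the main range rise by 3.5%; it's everything in between that is left to school/LA policy.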

The animation below is an attempt to show the main options for employers:

It would be a very odd choice to increase the pay of those at the top of the scale, while not increasing the pay of those on M5, or its equivalent. But arguably that is a choice open to local authorities & schools.

The fact that this confusion still lingers shows how few local authorities and academy trusts have moved away from the well-understood point system. I’d imagine there is a good likelihood that a majority of authorities will move all teachers up by the respective 3.5% / 2% / 1.5% increases that the government announced.


Just a quick blog, inspired by this much more detailed and challenging one by Solomon Kingsnorth:

NEW BLOG: Please RT! The latest installment of my 'Small is Beautiful' series where I re-write the primary curriculum. This time on how the bloated curriculum is harming reading outcomes for children. Less fluff, more fluency. https://t.co/SzWIIQoDh1

I think he has a point about the importance of vocabulary, and it’s something we can easily underestimate. It’s also something we can worry that we’ll never be able to resolve, because there’s no way of knowing what vocabulary will come up in any given text or test.

So I took a look at this year’s KS2 Reading test paper and tried to identify some of the vocabulary required to answer each question. It’s not every word in the texts, but it’s also not just a case of the 10 marks theoretically set aside for vocabulary. In fact, I think there were 80 or more examples of vocabulary which might not have been met by pupils who don’t read regularly:

Q1: approximately, survive
Q2: disguise
Q3: razor-like, powerful
Q4: majority
Q5: develops, newborn
Q6: hibernate
Q7: captivity, territory
Q8: puzzling
Q9: vital, essential
Q10: extinction, survive, supplies, diminishing, poaching, territory
Q11: adopt, reserve
Q12: challenge
Q13: (none)
Q14: (none)
Q15: fascinating
Q16: protective, enfold
Q17: punished
Q18: mountainous, praised, lavishly
Q19: wounded, lame, circumstance
Q20: seized
Q21: (none)
Q22: vividly recall
Q23: frail, hobbled
Q24: hobbled, hesitate, peered
Q25: (none)
Q26: lit up
Q27: amusing, shocking, puzzling, comforting
Q28: arrives, injured
Q29: verses
Q30: suggests, bothered, basins, smelt
Q31: lifeless, ancestors
Q32: guardian
Q33: devices (left to my own devices)
Q34: recesses
Q35: dawned (dawned on me)
Q36: assorted, debris, network, grime
Q37: determination, thorough
Q38: impression, evidence, frightening, intensity, cautiously
Q39: justice, efforts
Q40: inspect, fashioned, ought

Only 10 of those questions are counted as vocabulary marks in the mark scheme; most of the others are listed as inference questions. The challenge of inference is often about interpreting complex language as much as it is about guessing what the writer intended.

Perhaps more importantly, very few of those words are specific to the texts they appeared in. Even in the case of the non-fiction text about pandas, much of the apparently technical vocabulary is applicable to plenty of other contexts that children meet in the course of the curriculum.

The link here to ‘tier two’ vocabulary is clear: there is plenty of vocabulary here that would come up in a number of different contexts, both through fiction and non-fiction reading.

Which rather makes me think that Solomon is on to something important: a significant part of teaching reading is about getting children reading, and reading to them.


A few years ago, I was invited to be a member of the panel that helped to select the original board of what has now become the Chartered College of Teaching. Aside from then becoming a member when it launched, I have had no further involvement, and have watched with interest as it has begun to develop.

I have been particularly interested in the arguments surrounding the make-up of the various parts of the college, as I know how much deliberation this caused me when selecting the original board. I know from the selection process that I wouldn’t have been up to the task of setting up the organisation as those members did. Equally, I know now that I wouldn’t have the time, or probably the knowledge, to be an effective member of the council.

But I do think that teachers should be at the heart of that process wherever possible. I would include school leaders in that in its broadest sense, but I hope that we’ll reach a stage where the backbone of the organisation is made up of people who still have to think about planning lessons on the train on the way to meetings.

There has been plenty of talk on Twitter about the current selection of candidates, and I’m minded to agree that there are too many potential council members who are not employed in schools – particularly in the fellows category. But there are also plenty of candidates who are based in schools; you just have to be able to find them.

I tried, by trawling through all the candidates’ statements, to identify their current roles and the sector they mainly work in. There are plenty of representatives from Higher Education, and probably disproportionate numbers from headteacher level. There are also too few from primary for my liking, but then… I didn’t stand either.

I guess the key thing is, members now have the chance to vote. Perhaps if this helps teacher members to find candidates they want to read more about, then the trawl will have been worthwhile.

Caveat: I have described current roles as best I can, based on the information I could find. If you know better about any candidate, please do let me know here or via Twitter.


Lots of primary schools are now using standardised tests in each year group to help monitor the progress of pupils. They can be useful for identifying those pupils who seem to have dropped behind their peers, or perhaps aren’t progressing through the curriculum as you might expect based on their prior attainment.

However, the fact that standardised scores from such tests look very much like the scaled scores issued for end of Key Stage assessments can cause confusion. If schools are aiming to predict outcomes at the end of Key Stage 2, it doesn’t make sense to treat the two as the same thing.

Standardised scores

Tests like Rising Stars’ PiRA and PUMA assessments, or the NFER tests, use standardised scores based on a sample of pupils who have taken the test. On a standardised scale, a score of 100 is the average achievement in a cohort. People are usually familiar with this idea from IQ tests. Scores above 100 suggest achievement that is above average, and vice versa. But even this we should take with caution.

Because no test is a perfect measure, it’s not wise to treat somebody with a score of 98 as any different from a score of 102; we just can’t be that accurate. Most test series will give you an indication of confidence intervals. That is to say, a range of scores within which you could reasonably expect a pupil to fall. For example, scoring 103 on a test might mean that you could be 95% sure that such a pupil would score between 99 and 107 if you kept testing them. Of course, we don’t keep testing them. We use the figures from a single test as an indicator of how they are doing compared to others their age.
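As a minimal sketch of that confidence-interval idea: the standard error of measurement of 2 below is an assumption chosen to match the worked example (a score of 103 with a 95% interval of roughly 99 to 107); real values vary by test and are published by the test provider.

```python
def confidence_interval(score, sem=2.0, z=1.96):
    """95% confidence interval for a test score, given its standard error
    of measurement (sem). z=1.96 corresponds to 95% on a normal curve."""
    half_width = z * sem
    return (round(score - half_width), round(score + half_width))

print(confidence_interval(103))  # (99, 107)
```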

Standardised scores are based on the familiar concept of the bell curve. Half of pupils will score below 100, and half will score above (well, after those who have scored exactly 100). For most school tests, only about one in six children will score above 115; similarly, only about one in six will score below 85.

Scaled scores

Scaled scores, while looking very similar to standardised scores, are in fact very different. For scaled scores, the 100 marker has been planned in advance: there is a threshold of attainment which pupils must cross in order to score at least 100. In the Key Stage 2 tests since 2016, considerably more than half of pupils have scored over 100.

In simple terms: it is easier to score 100+ in the national tests than in a standardised test like PiRA or NFER.

If we look again at the bell curve, around 75% of pupils achieved 100+ in KS2 maths. If we look at the top ¾ of achievers in a standardised test, then some of those pupils might have scored as little as 90 on the standardised scale. It’s not to do with whether the tests are easier or harder; it’s just that the scoring systems are different.

On the bell curve, while only 50% of children can score over 100 on the standardised test, around ¾ can – and do – on the statutory tests.

The problem is reversed when it comes to Greater Depth. On a standardised test, you would expect around ¼ of pupils to score 110 or higher. However, for KS2 maths, only 17% of pupils got a scaled score of greater than 110.
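Both comparisons can be checked directly against the bell curve, again assuming the conventional standardised scale (mean 100, SD 15):

```python
from statistics import NormalDist

# Conventional standardised scale: mean 100, standard deviation 15.
scale = NormalDist(mu=100, sigma=15)

# If ~75% achieved 100+ on the scaled KS2 test, the equivalent cut-off on a
# standardised scale is the 25th percentile - well below 100.
cutoff_for_top_75 = scale.inv_cdf(0.25)

# And the top quarter on a standardised scale starts at the 75th percentile.
top_quarter_starts = scale.inv_cdf(0.75)

print(round(cutoff_for_top_75))   # ~90, hence "as little as 90"
print(round(top_quarter_starts))  # ~110, hence ~1/4 scoring 110+
```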

Predictions

As ever, making predictions is a fool’s game. Scoring 95 on one year’s standardised test is no more an indicator of SATs success than England winning a match this year means they’ll win the World Cup next year.

If you rely on standardised scores for making your predictions of later scaled scores, then you may find yourself over-estimating your proportions at greater depth, and potentially under-estimating your proportions achieving the expected standard.

Rising Stars have provided indicative bands based on the correlation between their PiRA/PUMA tests and the national tests – but it’s not a perfect science.


The DfE announced today that it plans to introduce a multiplication tables check in Year 4 – and I’m angry.

I’m not alone in feeling angry, it seems, but my reasons are very different from those of so many. The multiplication check has been government policy for some time, has been moved to Year 4 on the basis of feedback from the profession, and will not form part of the high stakes assessment information that is published every year. Perhaps more importantly, the check focuses on something which is undoubtedly useful for mathematics. It’s a classic case of where teaching to the test is absolutely desirable.

So why the anger?

Well, the DfE also chose today – perhaps not coincidentally – to release the updates to the Teacher Assessment frameworks for KS1 and KS2. So while everyone was getting their knickers in a twist about whether an online check was helpful or harmful, the department managed to quietly sneak out the news that the useless writing assessment procedures we’ve been battling with for nearly three years now are here to stay.

It’s worth remembering that these are the frameworks against which statutory teacher assessments are made – the judgements which have seen wild volatility between and within local authorities, a failed moderation system, huge discrepancies in what is permitted, and a real lack of understanding of the circumstances under which judgements should be made. This is the system we’ll continue to have to use in the years to come.

Notably, the DfE doesn’t trust such judgements for the purposes of setting a baseline for secondary schools: the new Progress 8 measure ignores the Writing judgement completely. Yet it will remain an integral part of the high stakes assessment process against which primaries are judged. Schools and school leaders will continue to have to choose between honest, accurate assessment, and playing the system to ensure that schools remain above the floor and coasting standards.

It’s clear from recent years’ results that the system isn’t a fair or useful reflection of how pupils are achieving in schools, and that the high stakes use of the outcomes will unjustly damage schools and careers. It’s obvious to most that the framework offers no sensible judgement on the quality of children’s writing, or their skill as a writer.

Yet here we all are, arguing about whether a 25-minute quiz in Year 4 is the problem.

I can’t help but think that that’s exactly what the DfE hoped for.


In DfE terms, it’s early days for being able to make decisions about KS2 Writing outcomes. After all, it wasn’t so long ago that we were reaching February without any exemplification at all, so for the STA to have released its “particular weakness” scenarios as early as mid-January is progress!

However, publishing the materials is one thing. Providing the clarity that a high stakes statutory assessment process dearly needs is quite another. The example scenarios offer some insight into the thinking at the STA about this new ‘flexibility’, but seem to have deliberately skirted round the key issues that keep coming up, such as dyslexia!

In an effort to get a sense of the interpretations out there, I put together some very brief scenarios of my own, and asked Y6 teachers to say whether or not they thought such pupils would be awarded the expected standard. And as I feared, there is a real lack of clarity. The six example scenarios follow, accompanied by the pie charts showing decisions. In each case, the blue represents those who would award EXS (based on a sample of 668 responses).

Scenario 1

77% award EXS

Edith has shown herself to be a fluent and confident writer. She adapts her writing for a variety of purposes, and in many cases has evidence of elements of working at Greater Depth. However, there are no examples of the passive voice used in any of her writing, except through planned tasks.

Scenario 2

67% award EXS

Beowulf is a good writer, who meets almost all of the requirements for EXS. However, he has been identified as being at high risk of dyslexia. In his writing he has shown that he can use some of the Y5/6 words accurately. However, he struggles with some of the regular spelling patterns from the curriculum, and his work contains several errors, particularly for the more complex patterns.

Scenario 3

36% award EXS

Ethelred writes effectively for a range of audiences and purposes, with sound grammatical accuracy. He uses inverted commas correctly to mark speech, but does not yet consistently include punctuation within the inverted commas.

Scenario 4

71% award EXS

Boudicca writes well, showing an interesting range of language, sentence type and punctuation. However, she has developed a largely un-joined style of writing, which although clearly legible does not include the usual diagonal or horizontal strokes.

Scenario 5

55% award EXS

Cleopatra is a confident writer, who shows a good grasp of technical aspects and a beautiful joined style of writing. She enjoys writing fiction and can develop a good plot, with writing that flows well. However, in non-fiction texts she is not always able to use the devices that create cohesion between paragraphs. There are some examples of stock phrases used (On the other hand, Another reason, etc.) when writing in a formal style, but these are not consistent across the non-fiction texts she writes.

Scenario 6

92% award EXS

Englebert is a technically sound writer. He is able to adapt writing for fiction and non-fiction purposes and uses a variety of language and punctuation techniques. His spelling of common patterns is generally good. However, there are a number of examples of words from the Y5/6 lists which are mis-spelt in his writing generally. His teacher has shown that he could spell these words correctly when tested in the context of dictated sentences throughout the year.

Notably, all but one of the results were within 5 percentage points of the figures above when looking only at those who said they had had some training provided on this topic. The biggest difference came for scenario 4 (handwriting) where only 61% of those who said they’d been trained would award EXS compared to 71% of the full sample.

It’s hard to say what I expected when I set up these little scenarios. I certainly don’t know what any “correct” responses might be. I think I imagined that some would be fairly evenly split – as with the case of Cleopatra’s weak use of cohesive devices.

Scenario 6 has genuinely surprised me. I don’t know what a moderator would say, but my fear about dictated sentences would be that children could easily be tested on a handful of words each week, learned for Friday’s test, and then quickly forgotten. Is that sufficient to say they can spell at the Expected Standard? Who knows? (That’s not to say that I think ‘no’ is the correct answer either; I’m not persuaded that the importance of spelling those particular words is as great as the system might suggest).

I’m equally surprised at scenario 3. Is it really right that speech punctuation is so important that 2/3 of teachers would deny a pupil an EXS judgement on this alone – even when so many are happy to overlook spelling or handwriting failures?

As I say – I don’t have any answers. If any moderator – or perhaps an STA representative – would like to give a definitive response, I’d be glad of it. I suspect the closest we’d get to an official answer is that a moderator would have more evidence upon which to make a decision. Which is all well and good for the 3-4% of pupils whose work gets moderated. For everyone else, we have to hope that teachers have got it right. And judging by these results, that’s not that easy!