Category: Literacy

This is fascinating – the same writing task given to every child in 5 primary schools and their work judged using comparative judgement.

Paradigm Trust is using comparative judgement as part of its assessment for writing in English, science, and foundation subjects. We’ve just analysed the science writing.

The children all experienced dropping spinners with different length wings. Their teacher demonstrated how to measure how long different spinners took to fall, and the children were then asked to write up the experiment. We got 1200 scripts.

We used CJ to judge each script and award a score. By setting score thresholds for the expected standard and for greater depth, we were able to identify these scripts for each year group:
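For readers curious how a CJ score emerges from the judging: judges repeatedly answer “which of these two scripts is better?”, and the resulting pairwise decisions are fitted to a statistical model, commonly Bradley–Terry, to place every script on a single scale. Here is a minimal sketch in Python; the judgement data and script IDs are invented for illustration, and real CJ platforms use their own (more robust) fitting routines:

```python
import math

# Hypothetical pairwise judgements: (winner, loser) script IDs.
# In practice these come from thousands of "which is better?" decisions.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]

scripts = sorted({s for pair in judgements for s in pair})
strength = {s: 1.0 for s in scripts}  # Bradley-Terry strength parameters

# Simple iterative (MM) update: each script's new strength is its win
# count divided by the sum, over its comparisons, of 1/(p_i + p_j).
# Repeat until the estimates settle.
for _ in range(200):
    new = {}
    for s in scripts:
        wins = sum(1 for w, l in judgements if w == s)
        denom = sum(1.0 / (strength[w] + strength[l])
                    for w, l in judgements if s in (w, l))
        new[s] = wins / denom if denom else strength[s]
    total = sum(new.values())  # normalise so strengths sum to len(scripts)
    strength = {s: v * len(scripts) / total for s, v in new.items()}

# Report on a log scale, as scaled scores usually are.
scores = {s: math.log(strength[s]) for s in scripts}
for s in sorted(scores, key=scores.get, reverse=True):
    print(s, round(scores[s], 2))
```

Once every script has a score, setting a cut-off for “expected” or “greater depth” is just a threshold on that scale.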

Orbit: the path followed by a moon, planet or artificial satellite as it travels around another body in space (NASA).

This definition is only useful once you are already pretty secure in your understanding of the word.

To develop a subtle and nuanced understanding of a word such as orbit, learners need exposure to examples, especially the less common ones, such as the Mars Global Surveyor orbiting Mars, and to non-examples, where they are told, “this may look like an orbit (something going round something else), but it isn’t one.”

I developed the resource on the left from Theory of Instruction: Principles and Applications by Siegfried Engelmann and Douglas Carnine (Chapter 4).

I use the images, typically one at a time on a presentation slide, explaining why each is or isn’t an example. You can rattle through this quite quickly. Follow it up with a Hochman ‘but, because, so’.

In my book (due out this month!) I have adapted some of the Learning Scientists’ strategies for physics classrooms. In this blog, I am sharing a technique I like to use in my classes – similar/different.

Learners complete as many of the text boxes as they can, showing the similarities and differences between the two objects/concepts. Cognitive psychologists call this elaboration.

Elaboration works by highlighting the similarities and differences between concepts (I first used it for Hadrian’s Wall and Trump’s Wall). In physics, elaboration helps learners develop their knowledge by adding subtle details.

I do this by providing my learners with a sheet to complete. If I do this at the start of the lesson, I am also making use of retrieval practice and interleaving (great podcasts here). If I do it at the end of the lesson (as a check out), I am typically using it more as assessment.

I often make use of “solo, pair, share” – my students complete their sheet solo for two minutes, then pair-up with a neighbour for one minute – this gives me three minutes to check everyone and identify the answers I want shared (I usually put a dot beside the sentences I want read out). Sharing takes a further couple of minutes.

The publishers (Taylor & Francis) have asked me to prepare the text for a publicity poster. I’m pleased with the wording, so I thought I’d share. Tom Eden from T&F has done a great job with the graphics, so I’m looking forward to seeing what he does with this.

In my previous post (here) I tried to explain how the bar-model supports learning through dual coding. In this post, I want to use Cognitive Load Theory to explain how bar-models reduce cognitive load. (I should point out that, as of now, I have no research evidence that using bar-models leads to improved long-term learning and improved problem solving – but I’m working on it.)

This diagram represents the three elements of cognitive load (I’m referring to the book Efficiency in Learning, Clark, Nguyen and Sweller – 2006).

I’ve just received an email from TES advertising a book they are publishing titled tes guide to STEM. I was hoping to see a summary of the best evidence-based STEM practice. I haven’t read the book, so I might be 100% wrong here, but the choice of topics covered strikes me as odd – maybe old-fashioned.

I was convinced by the Singapore bar-model when I invigilated the 2016 Key Stage 2 maths reasoning exam. One of my pupils, who I’d come to realise wasn’t going to score well, was faced with this problem:

This post is part of a series – a symposium – on AfL. The previous posts are well referenced and the result of much thought. My contribution is more anecdotal and speculative.

Part one of the series is by Adam Boxer here. In it he sets the context for the posts that follow.

Part two is by Rosalind Walker here. She discusses the nature of school science and implications for the classroom.

Part three is by Niki Kaiser here. This post explores concepts, threshold concepts, misconceptions, knowledge and understanding.

Part four is by Deep Ghataura here. It is about the validity of formative assessment.

My post is about writing in science, how assessment often distorts writing and how we might be able to improve both scientific writing in school and its assessment.

Measuring and Improving The Quality of Science Writing in Schools

After 18 years of secondary science teaching, I left secondary school to become a primary school teacher. I had a suspicion that I would learn a lot. For three of my four primary years, I taught in year 6: SATs year. I had to learn quickly about teaching reading, writing and mathematics. But I have an issue with writing.

My experience is limited to a small group of primary schools in socially disadvantaged areas where literacy and numeracy are (rightly) prioritised. On one principal’s wall was an optician’s chart reading: