Research Ed 2016: evidence-fuelled optimism

One of the great things about the Research Ed conferences is that whilst their aim is to promote a sceptical, dispassionate and evidence-based approach to education, at the end of them I always end up feeling irrationally excited and optimistic. The conferences bring together so many great people and ideas that it’s easy to think educational nirvana is just around the corner. Of course, I also know from the many Research Ed sessions on statistics that this is sampling bias: the 750+ people at Capital City Academy yesterday are entirely unrepresentative of just about anything, and educational change is a slow and hard slog, not a skip into the sunlit uplands. Still, I am pretty sure there must be some research that says if you can’t feel optimistic at the start of September, you will never make it through November and February.

And there was some evidence that the community of people brought together by Research Ed really are making a difference, not just in England but in other parts of the world too. One of my favourite sessions of the day was the last one by Ben Riley of the US organisation Deans for Impact, who produced the brilliant The Science of Learning report. Ben thinks that English teachers are in the vanguard of the evidence-based education movement, and that we are way ahead of the US on this score. One small piece of evidence for this is that a quarter of the downloads of The Science of Learning are from the UK. There clearly is a big appetite for this kind of stuff here. In the next few years, I am really hopeful that we will start to see more and more of the results and the impact of these new approaches.

Here’s a quick summary of my session yesterday, plus two others I attended.

My session

For the first time, I actually presented some original research at Research Ed, rather than talking about other people’s work. Over the last few months, I have been working with Dr Chris Wheadon of No More Marking on a pilot of comparative judgment of KS2 writing. We found that the current method of moderation using the interim frameworks has some significant flaws, and that comparative judgment delivers more reliable results with fewer distortions of teaching and learning. I will blog in more depth about this soon: it was only a small pilot, but it shows CJ has a lot of promise!

Heather Fearn

Heather (blog here) presented some research she has been working on about historical aptitude. What kinds of skills and knowledge do pupils need to analyse historical texts they have never seen before, or to comment on historical eras they have never studied? The Oxford Historical Aptitude Test (HAT) asks pupils to do just that, and I have blogged about it here before. In short, I think it is a great test with some bad advice, because it constantly tells pupils that they don’t have to know anything about history to be able to answer questions on the paper. Heather’s research showed how misleading this advice is. She got some of her pupils to answer questions on the HAT, then analysed their answers and looked at the other historical eras they had referred to in order to make sense of the new ones they encountered on the HAT. Pupils were much better at analysing eras, like Mao’s China, where comparisons to Nazi Germany were appropriate or helpful. When asked to analyse eras like 16th-century Germany, they fell back on anachronisms such as talking about ‘the inner city’, because they didn’t really have a frame of reference for such eras.

This is a very brief summary of some complex research, but I took two implications from it, one for history teachers and one for everyone. First, the more historical knowledge pupils have, the more sophisticated their analysis can be and the more easily they are able to understand new eras of history. Second, there are profound and worrying consequences of the relentless focus in history lessons on the Nazis. Heather noted that her pupils were great at talking about dictatorships and fascism in their work, but when they had to talk about democracy, they struggled because they just didn’t understand it – even though it was the political system they had grown up with. This seems to me to offer a potential explanation of Godwin’s Law: we understand new things by comparing them to old things; if we don’t know many ‘old things’, we will always be forcing the ‘new things’ into inappropriate boxes; if all we are taught is the Nazis, we will therefore end up comparing everything to them. I think this kind of research shows we need to teach the historical roots of democracy more explicitly – perhaps by focussing more on eras such as the ancient Greeks, and the neglected Anglo-Saxons.

Ben Riley

Ben is the founder of Deans for Impact, a US teacher training organisation. The Science of Learning, referenced above, is their report on the key scientific knowledge teachers need in order to understand how pupils learn. In this session, Ben presented some of their current thinking, which is more about how teachers learn. Their big idea is that ‘deliberate practice’ is just as valuable for teachers as it is for pupils. However, deliberate practice is a tricky concept, and one that requires a clear understanding of goals and methods. We might have a clear idea of how pupils make progress in mathematics. We have less of an idea of how they make progress in history (as Heather’s research above shows). And we probably have even less of a clear idea of how teachers make progress. Can we use deliberate practice in the absence of such understanding? Deans for Impact have been working with K. Anders Ericsson, the world expert on expertise, to try to answer this question. I’ve been reading and writing a lot about deliberate practice over the last few months as part of the research for my new book, Making Good Progress?, which will be out in January. In the book, I focus on using it with pupils. I haven’t thought as much about its application to teacher education, but there is no doubt that deliberate practice is an enormously powerful technique which can lead to dramatic improvements in performance – so if we can make it work for teachers, we should.

One thought on “Research Ed 2016: evidence-fuelled optimism”

What I would like to see researched by non-educators (i.e., by mathematicians) is whether the way performance and progress statistics are generated and used gives an accurate picture of what’s going on in schools.

I am particularly outraged by the AVERAGING of Progress 8 and Attainment 8 numbers: it produces meaningless sludge that becomes the headline figures for a school.
Very much enjoyed “Seven Myths”!