peer instruction

Have you ever worked as a student tutor? Then you’ve probably felt like you understood the content of the course you tutored a million times better after tutoring it. Or at least that’s what I hear over and over again: People feel like they understand a topic. Then they prepare to teach it, and realise how much more there was to understand, and that only now do they actually understand it.

And there is research showing that you don’t actually need to teach in order to gain that deeper understanding; it is enough to anticipate that you will teach: “Expecting to teach enhances learning and organization of knowledge in free recall of text passages” by Nestojko, Bui, Kornell & Bjork (2014).

In that article, two groups of participants are given texts to study. One group is told that they will be tested on the text, the other that they will have to teach it to someone else, who will then be tested. After studying the text, all participants are tested (and nobody gets to teach). It turns out that merely expecting to teach had benefits similar to what we see in student tutors who actually taught: Participants expecting to teach recalled the text better and answered more questions about it correctly, especially questions regarding its main points.

So what does that mean for teaching? As the authors say: “Instilling an expectation to teach […] seems to be a simple, inexpensive intervention with the potential to increase learning efficiency at home and in the classroom.” And we should definitely use that to our advantage! :-)

It probably doesn’t come as a surprise to you that how you behave as an instructor influences how your students work during peer instruction phases. But do you know what you can do to make sure that student discussions are reaching the level of critical thinking that you want? I.e., how do you construct classroom norms? There is a paper by Turpen and Finkelstein (2010) that investigates just that.

On the continuum between low and high faculty-student collaboration, there are a couple of behaviors that mainly those instructors engage in who collaborate highly with their students: leaving the stage during PI phases to walk around and listen to or join student discussions, answering student questions, and hearing student explanations publicly (often several explanations from different students). Here students have many opportunities to discuss with the instructor, and the correct response is often withheld until the students have reached a consensus. Unsurprisingly, in classes where instructors are on the high end of faculty-student collaboration, students talk to the instructor more often, have a lower threshold for asking questions, and feel more comfortable discussing with the instructor.

Looking at student-student collaboration, there are again instructor practices that appear helpful. For example, low-stakes grading does not provoke competitive behavior the same way high-stakes grading would.

When using clickers, collaboration is more prevalent when discussion phases are sufficiently long, when collaboration is explicitly encouraged (“talk to your neighbor!”), and when the instructor often models scientific discourse. Modeling scientific discourse (“can you explain your assumption?”) is more effective when the instructor talks to student groups during peer instruction and they have the chance to practice the behavior rather than being one out of several hundred students listening passively, but even modeling the behavior you want in front of the class is better than not doing it.

Sense-making (in contrast to answer-making) can be encouraged by the instructor through practices like explicitly emphasizing sense-making, reasoning, and discussion rather than just picking an answer, which in turn means that ample time for discussion needs to be given.

Another practice is providing explanations for correct answers (also in the lecture notes) rather than just stating which answer was correct.

I find it really interesting to see that the observations made by researchers on concrete teaching practices can be related to what students perceive the classroom norms in a particular course to be. This means that you can explicitly employ those behaviors to influence the norms in your own classroom and create a climate where there is more interaction both between the students and yourself, and among the students. So next time you are frustrated about how students aren’t asking questions even though they obviously haven’t understood a concept, or about how they just pick a random answer without sufficiently thinking about the reasons, maybe try to encourage the behavior you want by explicitly stating what you want (and why) and by modeling it yourself?

Make sure the room stays silent during the first step of the clicker process.

When using clickers in class, there are many different possible ways of implementing clicker questions and peer instruction, for example the Mazur sequence (which is our default sequence) and the Physics Education Research Group at UMass (PERG) sequence. Let’s recall:

The Mazur sequence:

1. A concept question is asked

2. Students think individually for a couple of minutes

3. Students vote on the question

4. The result of the vote is shown as a histogram

5. Students are asked to convince their neighbor of their answer (“peer instruction”)

So the difference here is that in the Mazur sequence, students get the chance to think and vote individually before entering the peer-instruction phase, whereas in the PERG sequence, students first discuss in small groups and then discuss in an even bigger group (which is, in my experience, basically what happens anyway when you don’t explicitly ask students to think for themselves first in the Mazur sequence).

Firstly, for both sequences, students report that the clickers helped them learn compared to a conventional lecture, because they were more actively involved, felt motivated by the immediate feedback, and felt that the instructor adapted instruction to meet their learning needs.

Secondly, in both cases students liked peer instruction, for many of the reasons we use it: They felt that they were convinced by the best arguments in the discussion, thus practicing putting forward strong arguments as well as learning the “actual content” of the course. They also mention the benefit of scaffolding: learning something from someone who only just learned it themselves is easier than learning from an expert, because it is more accessible both in language and in the explanation itself.

But do the different sequences make a difference? Rhetorical question, of course they do!

Almost all students preferred starting with individual thinking and voting rather than with peer discussion. They state that the individual vote forced them to think for themselves, whereas in an initial peer discussion they might slide into a passive role and unthinkingly accept answers from others.

As for class-wide discussions, while some students liked hearing both correct and incorrect responses from outside their own peer group, and some also liked the pressure of knowing that you might be called upon to answer a question as a motivator for staying focussed in class, there are drawbacks, too. For example, it takes a lot of time, it is easy to drift away from the question, and it can easily become confusing, in addition to being potentially threatening. The benefit of class-wide discussion is seen mostly in cases where the class was clearly divided between two answer choices.

So based on this study, we should definitely make sure to have students vote individually before peer discussion, and this means enforcing silence in the classroom while the students think about what to vote.

—

David J. Nicol & James T. Boyle (2003). Peer Instruction versus Class-wide Discussion in Large Classes: a comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4)

On different approaches to peer-instruction and why one might want to use them.

Having sat in many different lectures by many different professors over the last year, and having given feedback on the methods used in most of those lectures, I find myself wondering how we can define a standard or even best practice for using clickers. Even when professors go through the classical Mazur steps, there are so many different ways they interpret those! Do we, for example, make sure that the first vote is really an individual vote, so that no interaction happens between students before they make this very first decision? I have not seen that implemented at my university. But does that matter? And why would one decide for or against it? I would guess that in most of the cases I have observed, no conscious decision was made at all – things just happen to happen a certain way.

A paper that I liked a lot, and which describes a framework for describing and capturing instructional choices, is “Not all interactive engagement is the same: Variations in physics professors’ implementation of Peer Instruction” by Turpen and Finkelstein (2009). I don’t want to talk about their framework as such, but there are a couple of questions they ask that I think are a helpful basis for reflection on our own teaching practices. For example, there are questions clustering around the topic of listening to students and using the information from their answers. One of them, “what do I want students to learn, and what do I want to learn from the students?”, might seem basic at first, but it really is not. What do I want students to learn? No matter what it is, this question implies another: “is the clicker question I am about to ask going to help them in that endeavor?”. The clicker question might be just testing knowledge, or it might make students think about a specific concept which they might get an even better grasp of by reflecting on your question.

And what do I want to learn from my students? The initial reaction of people I have talked with over the last year or so is puzzlement at this question. Why would I want to learn anything from my students? I am there to teach, they are there to learn. But is there really any point in asking questions if you are not trying to learn from them? Maybe not “learn” as in “learn new content”, but learn about their grasp of the topic, their learning process, where they are at right now. Do I use clicker questions as a way to test their knowledge, to inform my next steps during the class, to help them get a deeper understanding of the topic, to make them discuss? Those are all worthwhile goals, for sure, but they are different. And any one clicker question might or might not be able to help with all of those goals.

Another question is “do I need to listen to students’ ideas and reasoning and what are the benefits of listening to students’ ideas?”. Again, this is a question that I am guessing many people I have recently worked with would find strange. Why would I listen to student reasoning that doesn’t lead to the correct answer, or student reasoning that is different from how I want them to reason? Yes, I might learn something about where they go wrong, which might make it easier for me to support them in getting it right. But isn’t it a really bad idea to expose the other students to something that is wrong? I would argue that no, it is not a bad idea. Students need to learn to distinguish between good reasoning and bad reasoning. And they can only do that if they see both good and bad reasoning, and learn to figure out why one is good and one is bad. I know many people are very reluctant to have students explain the reasoning that led them to a wrong answer. It takes time and it doesn’t seem to lead towards the correct answer. But then what do we want? Answer-making or sense-making? Sense-making might involve taking a wrong turn occasionally, and realizing why it was a wrong turn before taking the right turn in the end. If the wrong answer isn’t elicited, it can’t be confronted or resolved.

I would really recommend you go read that paper. The authors describe different instructional choices that different instructors made, for example how they interact with students during the clicker questions. Did they leave the stage? Did they answer student questions? Did they discuss with students? (And yes, answering questions and discussing with students is not necessarily the same!) Even though there is no single best practice for using clickers, it is definitely beneficial to reflect on different kinds of practice, or at least to become aware that there ARE different kinds of practice. Plenty to think about!

As you might have noticed by now, I’m a big fan of concept questions combined with “talk to your neighbor” peer instruction. And studies show that talking to your neighbor is often more successful in teaching you new things than listening to the lecturer is.

In order to separate genuine learning during discussion from simply copying a knowledgeable neighbor’s answer, the authors of one such study first ask a multiple choice question, let the students vote, use peer instruction, and let the students vote again. They then ask a very similar question, which students who voted incorrectly on the first question would likely not be able to answer correctly either, unless they had gained understanding in the meantime. So if those students now answer correctly, that supports the idea that they gained understanding during the discussion rather than just being influenced by the knowledgeable students. And their data shows that this third vote consistently gives better results than the first vote, and, surprisingly, often even better results than the second vote after peer instruction.

The power of increasing understanding through conversations with the neighbor is also supported by the fact that 47% of students disagreed with the statement “When I discuss clicker questions with my neighbors, having someone in the group who knows the correct answer is necessary in order to make the discussion productive”. Discussing the concepts seems to be the key, not being convinced by someone more knowledgeable.