What is PER?

Physics Education Research (PER) is a field of research focused on understanding how students think about physics and how to teach physics more effectively. Researchers in PER typically have (or are working towards) PhDs in physics or education. PER is different from traditional education research because researchers in this field are experts in the subject of physics, and can therefore look at student learning in ways that are particular to physics content. Over the last few decades, researchers in PER have made enormous advances in understanding how students learn physics most effectively and in developing teaching methods that apply this understanding to achieve improved student learning.

What is the best way to teach physics?

According to PER, the best way to teach physics is through "interactive engagement" methods, defined by Hake as methods "designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors." See our Methods and Materials page for a complete list of methods that use interactive engagement, and our What makes them work page for a list of features that make these methods work.

How do we know that PER-based teaching methods really work?

Many studies in PER have illustrated that PER-based teaching methods lead to significant learning gains, much greater than traditional lecture. One of the most well-known and thorough of these studies was conducted by Hake in 1998. Hake collected data from more than 6000 students in introductory physics classes around the country in a range of academic settings with different types of instruction. All these students took a test called the Force Concept Inventory (FCI) at the beginning and end of instruction. The FCI is a multiple-choice test with conceptual questions based on research into student thinking about physics. These questions ask about students’ basic model of forces and acceleration and require minimal calculations. Many physics instructors are reluctant to give their students this test because the questions appear “too easy.” Most physics instructors imagine that their students should be able to answer these questions easily and fear that students will be insulted by their simplicity.

For each class, Hake calculated the improvement on the FCI using the normalized gain, a measure of how much the students learned as a percentage of how much they could have learned:

Normalized gain = <g> = (%post - %pre) / (100% - %pre)
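As a worked example, the formula above can be computed directly from a class's average pretest and posttest percentages. This is a minimal sketch; the function name and the sample scores are illustrative, not taken from Hake's data:

```python
def normalized_gain(pre_percent: float, post_percent: float) -> float:
    """Hake's normalized gain <g>: the gain actually achieved
    (%post - %pre) as a fraction of the maximum possible gain
    (100% - %pre)."""
    if pre_percent >= 100:
        raise ValueError("pretest average must be below 100%")
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# A hypothetical class averaging 40% on the FCI pretest and 70%
# on the posttest achieved half of its possible improvement:
g = normalized_gain(40, 70)
print(round(g, 2))  # 0.5
```

Note that the normalized gain depends only on the class averages, which is what lets Hake compare courses whose students started at very different pretest levels.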

His results are shown in the figure at right. Hake found that the average normalized gain for classes taught with traditional lecture instruction was 23%, with a range from 13% to 29%. There were no classes taught with this method in which students improved more than 30% between the beginning and the end of the course! For courses taught with interactive engagement methods, the average normalized gain was 48%, with a range from 21% to 69%.

Since 1998, many instructors have been inspired by the Hake study to try giving the FCI in their courses, including popular award-winning lecturers, and no one has ever reported a gain higher than 30% for a course using traditional lecture instruction. Many studies since 1998 have reported gains similar to or higher than those reported by Hake in courses using interactive engagement methods.

If you teach introductory physics, try out the FCI with your students. If you can get more than 30% gain, publish your results! Research-based assessment tools similar to the FCI exist in many other subjects as well. Our Assessment Tool Guide will soon include a complete list and a discussion of how to use them.

Our Methods and Materials page will soon include references to research studies documenting the effectiveness of each of the methods listed on this site.

Which PER-based teaching method should I use? Are some more effective than others?

One of the less well-known results of the Hake study was that, while there were large differences in student learning between courses using traditional lecture and those using PER-based teaching methods, there were no significant differences between different PER-based teaching methods. Many studies done since 1998 have confirmed that the variations in student learning between different implementations of a particular method are much greater than the variations between methods. Further, it is difficult to directly compare methods because they may cover different topics, have different goals, and be studied using different research methods. Hake concludes, "At this stage (a) the particular method used by an instructor may be less important than the skill of that instructor in promoting effective interactive engagement of students, (b) teachers might be well advised to try first those methods which best match their own inclinations, course objectives, teaching styles, students' characteristics, and resources."

Our Methods and Materials page allows you to filter teaching methods according to the criteria that matter to you in order to find the ones that best match your particular environment and goals.

How can it be true that people don't learn from lectures? I learned physics from lectures, and have managed to become a successful physicist. If no one learns from lectures, why do we go to talks at conferences?

Studies in PER (and other education research) show that people learn when they are actively engaged in the material. This is certainly possible in a lecture, if you are asking yourself questions and actively working to make sense of what you are hearing. The problem is that most students, unlike those of us who ended up as physicists, are not doing this during physics lectures, as demonstrated by many studies that show very little physics learning in classes taught with lectures. PER-based methods work by helping students learn to engage with physics in the way that physicists do.

It is possible for students to learn from a lecture if they are prepared to engage with it. For example, Schwartz et al. found that if students work to solve a problem on their own before hearing a lecture with the correct explanation, they learn more from the lecture. (For a short summary of this article aimed at physics instructors, see these posts - part 1 and part 2 - on the sciencegeekgirl blog.) Schwartz and Bransford argue that lectures can be effective "when students enter a learning situation with a wealth of background knowledge and a clear sense of the problems for which they seek solutions."

Will using PER-based teaching methods take up a lot of class time and make it difficult for me to cover enough content?

It is certainly true that you can't "cover" as much content when you take the time to have students actively work through it as you can when you simply explain it. A good rule of thumb for many PER-based teaching methods is that you will probably need to eliminate about 10% of your content. However, depending on the method and the implementation, you may find that you need to eliminate much more, much less, or none at all.

Since research suggests (see previous questions) that students don't learn much from lectures, simply covering the content may not be doing your students much good anyway. Some advocates of interactive engagement argue that in order to achieve the more important goal of students actually understanding anything in your class, you need to give up on the goal of covering a lot of content. Others recognize that institutional constraints often do not allow such a radical stance, and suggest that it is possible to use interactive engagement and still cover just as much content. One strategy that allows you to cover just as much content in your course, while still covering less in lecture, is to shift some of the content into out-of-class reading and/or homework. One way to do this is with Just-in-Time Teaching.

If you have institutional constraints that make content coverage an issue for you, you can find interactive engagement methods that allow for more content coverage on our Methods and Materials page by opening the "Coverage" section in the menu on the left and selecting "Many topics with less depth."

How do we know that the supposed benefits of PER-based teaching methods aren't just due to spending more time on task?

A study by Redish, Saul, and Steinberg provides one of the cleanest demonstrations that PER-based teaching methods achieve better learning gains with equal time on task. They replaced traditional recitations with Activity-Based Tutorials and found significant improvements without any additional time on task. In one case they actually reduced time on task and still found improvements: the same award-winning professor taught two sections of the same class. One section received three hours of lecture on the topic, "teaching to the test," plus a traditional recitation on that topic; the other received only one hour of lecture plus a tutorial in recitation, and the students in the second section did better.

A study by Lasry et al. demonstrated that peer discussion during Peer Instruction is more effective than asking students to spend an equal amount of time reflecting on the question on their own.

Do PER-based teaching methods help good students, or are they only for weak students?

Physics instructors often think that PER-based teaching methods are somehow "remedial" or only appropriate for weaker students. After all, most successful physicists learned from traditional lecture instruction, and we’ve been doing it for generations, so it must be the best way. However, there are several research studies showing that PER-based teaching methods help students at all levels of ability, including the strongest students. These results suggest that perhaps successful physicists learned physics in spite of, rather than because of, the traditional lecture instruction they received in school.

Beichner et al. found that using SCALE-UP at MIT and the University of Central Florida led to large improvements over traditional lecture on a range of assessments. The learning gains were large for the bottom, middle, and top third of the class, but largest for the top third, showing that these top students benefited even more than the weak students from this reformed instructional method.

At the University of Minnesota, Heller, Keith, and Anderson showed that when students used Cooperative Group Problem Solving, the solutions of students working in groups were significantly better than the solutions of the best individual student in the group working alone. Further, the problem-solving scores of the weak, medium, and strong students all improved over time. These results demonstrate that even the strongest students benefit from group work.

At Harvard, Crouch and Mazur found that in a class using Peer Instruction, no student initially gave the correct answer to a ConcepTest more than 80% of the time. Even at Harvard, there are no students who always know the answer to these conceptual questions. This suggests that such questions are valuable even for the strongest students.

In a longitudinal study of physics majors at the University of Colorado, Pollock found that students who had taken introductory electromagnetism courses using Tutorials in Introductory Physics with learning assistants received slightly better course grades in a traditionally taught junior-level electromagnetism course, suggesting that the tutorials did not hurt and may have helped these students' performance in one of the core physics courses most valued by physics faculty.

How can I facilitate students working well in groups?

Most PER-based teaching methods include some form of group work and provide a structure for incorporating group work into your class. Resources for supporting faculty in facilitating group work include the Carl Wieman Science Education Initiative's Guide to Group Work in Educational Settings and PhysTEC's LA Video Project. We'll have more resources here soon.

How do we know that the supposed benefits of group work aren't just due to the smart students telling the dumb students the answer?

Research at the University of Minnesota demonstrates that the gains from group work are too large to be explained by this alone. Heller, Keith, and Anderson showed that when students used Cooperative Group Problem Solving, the solutions of students working in groups were significantly better than the solutions of the best individual student in the group working alone. This demonstrates that all students were contributing to the group. Based on observations of group interactions, the researchers said, "…although the best problem solver in each group usually provided the leadership in generating approaches to the problem, the medium and lower ability students often provided the monitoring and checking to make sure that conceptual and procedural mistakes were not made."

Won't all this emphasis on conceptual understanding hurt traditional problem-solving and calculation abilities?

There are several research studies showing that using PER-based methods that emphasize conceptual understanding at the expense of calculations does not hurt, and can actually improve, traditional problem-solving.

At Harvard, Mazur tested this question using a final exam that was entirely focused on traditional problem solving. He gave the same exam in 1991, after implementing Peer Instruction, that he gave in 1985, when he was using traditional lecture methods. The average score in 1991 was 69%, compared to 63% in 1985, a statistically significant difference, despite his shift away from problem-solving during lecture.

Mazur also tested problem-solving ability by giving the Mechanics Baseline Test, a test similar to the FCI but with more quantitative questions, in classes using Peer Instruction and traditional lecture. Students in the Peer Instruction classes scored higher on the test as a whole and on the quantitative questions.

At North Carolina State University, Abbott et al. found that replacing a single traditional laboratory with a modified version of Tutorials in Introductory Physics on circuits led to no change in performance on most exam problems, but a dramatic improvement in scores on one traditional quantitative exam question involving the application of Kirchhoff's laws.

If students have an incorrect idea about physics, why not just show them a demonstration illustrating the correct principle? Surely seeing is believing!

Demonstrations are a valued tool of many physics instructors, who often think that one of the best ways to help students understand a phenomenon is to show them a demonstration that clearly illustrates it. Unfortunately, research shows that when students watch a demonstration, they may actually see something completely different from what we expect.

In a study at Harvard, Crouch, Fagen, Callan, and Mazur looked at scores on tests related to demonstrations for students assigned to one of four groups: those who didn’t see the demo, those who just observed the demo, those who were asked to predict the result before seeing the demo, and those who were asked to predict the result before seeing the demo and discuss it with their neighbor afterwards. They found that students who observed the demos did slightly better than those who didn’t at recalling the outcomes of the demos, but no better at explaining the reasons for those outcomes. Asking students to predict the results beforehand led to some improvement on both tasks, and asking them to also discuss the results led to a little more improvement. But even for the students who predicted and discussed, only 82% were able to recall the outcomes of the demos and only 32% were able to explain them. The authors suggest that perhaps there is a limit to how much students can learn from standard demos, which are based on what instructors think would be convincing, rather than on research into where students struggle most. In a follow-up study, Fagen found that when Interactive Lecture Demonstrations, which are based on research into student difficulties, were used instead of standard demonstrations, the scores of students who predicted and discussed the demos were much higher. (For more about this study, see the Learning about Teaching Physics Podcast on classroom demonstrations.)

Kraus found similar results in her doctoral dissertation at the University of Washington. She compared scores on pretest and exam questions between sections that had and had not seen related demos, for a wide variety of physics topics. She found that when the questions asked students to describe a phenomenon that was similar to or identical to a demonstration, students who saw the demo performed 10-25% better than students who did not see the demo. When questions required explanations or extending what they had seen in the demo to different situations, in most cases there was no difference between the groups that had and had not seen the demo. Even when students had seen a demo, 30-55% of them were unable to answer simple descriptive questions about it.

Halloun and Hestenes interviewed students after showing them demonstrations and found, “As a rule, students held firm to mistaken beliefs even when confronted with phenomena that contradicted those beliefs. When the contradiction was recognized or pointed out, they tended at first not to question their own beliefs, but to argue that the observed instance was governed by some other law or principle and the principle they were using applied to a slightly different case.”

What if my students don't like PER-based teaching methods?

Students may initially be uncomfortable with teaching methods that are unfamiliar and require them to engage in new (and often more difficult) ways. Instructors who have implemented these methods report that explaining what you are doing and why can help make students more comfortable, and that students can eventually get used to and come to appreciate PER-based teaching methods. One instructor describes how he makes this case to his students:

"I argue that it would be a waste of their time to have me simply repeat what is printed in the textbook or the notes. To do so implies that they are unable to read, and they ought to be offended when an instructor treats them that way. I explain how little one tends to learn in a passive lecture, emphasizing that it is not possible for an instructor just to pour knowledge in their minds, that no matter how good the instructor, they still have to do the work. I challenge them to become critical thinkers, explaining the difference between plugging numbers into equations and being able to analyze an unfamiliar situation."

Many instructors also find that there are only a few vocal students who dislike the new methods, while a less vocal majority actually appreciate them. If this is the case, you can help bolster your own confidence, quiet the vocal minority, and get useful feedback by giving students a survey about their impressions of your teaching methods early in the semester. Sharing the results in aggregate can help the vocal minority realize that they are a minority, and help everyone realize that you are taking their feedback seriously.