
Applying the science of teaching to the teaching of science

by James Fraser

“How can I tell if Peer Instruction is really working?”

As a beginning instructor, I was skeptical of teaching approaches that seemed to delegate teaching responsibilities to the student. Back in high school, I hated “group” work, which seemed to leave me (the keener) doing most of the work for my less motivated classmates. On the flip side, as an instructor, I hated the silence that followed my request for “Any questions?” There had to be a better way.

In the research literature, there is an overwhelming amount of evidence that Peer Instruction (PI) works for students in a variety of classrooms and subjects. But another meta-study of >5000 students is not what I needed. What I really needed to know was that PI worked for me and for my students. In my research lab, I am constantly testing hypotheses with quantitative data. Why not approach my teaching in the same way?

Actually there are a lot of reasons not to do this. For example, as instructors we have limited class time and if we are going to ask students to do anything extra that is not for grades, we need to find ways to get buy-in. In addition, there are few robust testing protocols available for someone who is not an educational researcher to follow.

Fraser listening to his students in a Peer Instruction class session.

I found, however, that taking a scientific approach to teaching actually SAVES time, and I suggest it will also allow you to improve implementation of any new approach to teaching through evidence-based analysis, rather than through hunches. An added bonus comes when you are able to actually report results such as these back to your students: “Through your hard work, you have achieved double the learning gains of a traditional lecture course.” And on my student evaluations, I have had students comment that they liked knowing they were doing better than standard classes.

Just a few weeks before the start of Fall term is the perfect time to plan how to take a scientific approach to improving your students’ learning. The key tool that you can implement to start teaching more scientifically is pre/post testing. The first step is to identify a simple instrument that (somewhat) aligns with your learning objectives and that has been used in a variety of classrooms, so you can access “baseline” results. Having this data will allow you to answer questions like: How do my students compare to students at other schools, or to students taught with other techniques? How did my students perform when I used this new teaching technique?

Don’t expect such an instrument to do everything. I lean toward those that can give you a sense of your students’ mastery of the fundamental concepts. These tests are often called conceptual inventories. For Newtonian mechanics, the universal standard is the Force Concept Inventory (translated into over 15 languages). Here is a simple protocol for using a conceptual inventory in your classroom:

1. Find time in the first week for students to do it: I do it in the lab orientation session using Scantron cards, but self-paced clicker tests or even online implementations also work (through the Interactive Learning Toolkit, or your own classroom management system).

2. Make sure the students take the inventory seriously, stressing that to teach them well, you need to know their strengths and weaknesses.

3. Do not assign marks for correctness, since that is not fair at this stage.

4. Feel free to use other “motivators”: “This will allow me to see how you compare to students from XXX [insert rival school name, or rival program in same school]”, “This will allow you to see how your background preparation compares to others in the class”. Be creative, but of course, honest!

5. After the unit or course is done, give them the exact same test again, using a similar testing protocol. I find 30–45 minutes in a tutorial is enough to do this, but it can be done online or in a lecture hall.

6. As soon as possible, report back the results to the students, within the context of other schools and teaching techniques (such as comparing to Fig. 1 below). I love being able to say “look, you guys are learning at the same pace as Harvard students.”

Figure 1. R.R. Hake, Am. J. Phys. 66(1): 64
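The scoring step in the protocol above is easy to automate. Below is a minimal Python sketch for grading a multiple-choice inventory against an answer key and reporting a percentage; the five-item key and student responses are invented for illustration (a real inventory such as the FCI is distributed with its own key and has many more items).

```python
# Sketch: scoring a multiple-choice conceptual inventory.
# The key and responses below are hypothetical, for illustration only.

def score(responses, key):
    """Return a percentage score for one student's responses."""
    correct = sum(r == k for r, k in zip(responses, key))
    return 100.0 * correct / len(key)

key = ["B", "A", "D", "C", "A"]          # hypothetical 5-item answer key
student = ["B", "A", "C", "C", "A"]      # one student's answers

print(score(student, key))  # 4 of 5 correct -> 80.0
```

Running the same function over the pretest and posttest response files gives the per-student pre/post percentages needed for the comparison in Figure 1.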

If the results are not what you want them to be, take the opportunity to change your teaching implementation by going back to the literature to find best practices, or ask an informed colleague to sit in on your class to give you a fresh perspective.

There is a wide range of instruments that you can use for pre/post testing. As a physicist, for my freshman students taking Electricity and Magnetism, I use the Conceptual Survey of Electricity and Magnetism (CSEM). Do I think these instruments are perfect?

A definite NO, but they are better than trying to test learning using just problem sets or exam results, which can be strongly affected by factors beyond my control, like copying or scheduling. Even worse is to abandon an innovative teaching strategy because of a few vocal students who are used to learning by rote and resistant to change. Having some additional “hard data” will help you make the best choices for your students overall.


6 Comments

Interesting article! Can you recommend any additional resources on this topic? I am very interested in quantifying the student learning results of my inverted classroom for my skeptical colleagues… Thank you!

Petr

August 31, 2012

Some intriguing observations here. Thanks for a good article.

Concerning your outline of a scientific approach to testing what’s been learned: it assumes that there is some basic knowledge to which the course contributes, and which could have been assumed to be available to the students prior to the course. Then you ask the students some questions from this volume of basic knowledge pre- and post-facto. That would measure what the contribution of the course to this basic knowledge was. That is valuable, but how about testing how much _new_ knowledge the students have acquired? That seems much harder because there is no baseline.

In other words, using the described approach one will measure how much a given course contributed to understanding that the students should have acquired in all of their previous education.

jamesmfraser

September 5, 2012

Petr, that is a great point! Some conceptual surveys do test basic knowledge that one would expect students to have already assimilated, but not all do. In my classes, students on average perform very poorly on the CSEM pretest since a lot of the material is brand new. The pretest is still useful since it provides a specific baseline for each student to measure their improvement over the term. Also, though the FCI tests understanding of basic Newtonian mechanics, I think its power is in using everyday language, so students who are surface learners will tend to do poorly. Only if the students have assimilated the knowledge and realize that it applies to their everyday world (and not just the classroom problems) will they do well.

Scott

jamesmfraser

September 5, 2012

Hi Scott, the pretest result is the independent coordinate on the graph, so it is really just a measure of how strong the students are before the course. The y-axis is (post-test − pre-test), so ideally your students land on the line with the steepest slope (slope magnitude 1). This would correspond to your students all getting a perfect score on the post-test. The meta-analysis described in that paper showed that “traditional lecture” provided results that averaged around the line with slope magnitude 0.23, and interactive engagement 0.48. This is a somewhat limited analysis, but is useful since the final output is a single number. Other researchers have proposed richer ways of understanding student learning gains from pre- and post-test analysis.
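[Editor's note: the slope discussion above can be made concrete with a few lines of arithmetic. The sketch below uses invented class averages to show Hake's normalized gain, g = (post − pre)/(100 − pre), and how a class with that g lands on the line gain = g × (100 − pre) in the Fig. 1 axes.]

```python
# Sketch of the quantities behind Hake's Fig. 1 (Am. J. Phys. 66, 64).
# x-axis: class-average pretest (%); y-axis: average gain = post - pre (%).
# A class with normalized gain g sits on the line gain = g * (100 - pre).
# The class averages below are hypothetical, for illustration only.

def normalized_gain(pre, post):
    """Hake's normalized gain <g> from class-average percentages."""
    return (post - pre) / (100.0 - pre)

pre, post = 40.0, 68.8             # hypothetical class averages (%)
g = normalized_gain(pre, post)     # about 0.48, typical of interactive engagement

# Recovering the raw gain from g lands back on the corresponding slope line:
gain_on_line = g * (100.0 - pre)   # about 28.8, i.e. post - pre
```

A traditional-lecture class with the same pretest average would sit near the g ≈ 0.23 line instead, a raw gain of only about 14 percentage points.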

Julie Schell is Director of OnRamps, an innovative dual-enrollment program at The University of Texas at Austin. She is also an Assistant Clinical Professor in the College of Education, where she teaches on Technology and Innovation. Julie is on the Board of Directors of the Flipped Learning Network and continues driving innovative pedagogy as a member of the Mazur Group at Harvard University.
Read more about Julie at www.julieschell.com.


Next Door Innovator

Next Door Innovator is a series on educational innovation inspired by Deborah Solomon's New York Times column, Questions For. We engage in short conversations with educators, students, and others to gain their insights about relevant topics. The series moderator is Julie Schell.

The Neighborhood

ABOUT THE NEIGHBORHOOD
In The Neighborhood blog post series, Turn To Your Neighbor invites innovative educators from around the globe to discuss a variety of education topics.