Summary: In this short article, the authors present the results of a study comparing three different in-class response methods: clickers, response cards, and hand-raising. About 140 introductory psychology students, mostly first-year undergraduates, were assigned to four different groups. The first group experienced a “standard lecture” with no formal response mechanisms. The other three groups experienced lectures enhanced with formal response mechanisms (clickers, response cards, hand-raising). The authors analyzed the students’ participation and accuracy rates on these in-class questions, as well as their performance on a post-lecture quiz and their responses to the Academic Emotions Questionnaire (AEQ) as a way of investigating impact on the students’ affective responses.

All three groups involving formal response methods reported significantly less boredom during lecture than the “standard” lecture group.

The clicker group appeared to answer in-class questions more honestly than the response card and hand-raising groups. This was the authors’ conclusion after noting that the percent of questions answered correctly using clickers more closely mirrored the percent of questions answered correctly on the post-lecture quiz. (There was a 22% drop in accuracy from during-lecture to post-lecture for clickers versus a 38% drop for hand-raising and 40% drop for response cards.)

Comments: The boredom finding isn’t particularly surprising, but it is encouraging. The accuracy finding is the more interesting one. Anecdotal evidence from instructors has certainly indicated that the hand-raising method leads to a “bandwagon” effect, in which students change their responses after seeing how their peers respond, making it difficult to use that method to accurately assess student understanding. However, I hadn’t heard until this article that the bandwagon effect was also present in the response card method, a method with a fairly long history in the teaching of psychology.

It’s worth noting that in this study, the lecturers did not practice any kind of agile teaching; they simply stated the correct answer to each question after the students voted. Although there was no significant difference in the four groups’ performance on the post-lecture quiz in this study, it’s quite possible that the clicker group would show an improvement were agile teaching implemented. By using response data to guide classroom lecture and discussion in ways that focus on student difficulties, I believe instructors can help students learn more. If that’s the case, and if clickers provide more accurate data on student difficulties, then that would be a strong argument for using clickers to improve student learning, not just to increase participation and reduce boredom.

One variable not discussed in the article is the type of question asked with the various response methods and on the post-lecture quiz. It’s possible that the effects seen here would be different with different types of questions: factual questions, conceptual questions, application questions, etc.

Finally, I like the use of the Academic Emotions Questionnaire here to explore the affective aspects of clickers. I’m glad to know about this instrument.
