No matter how much we know it’s coming, somehow we always seem surprised by how quickly the middle of the semester arrives. While this is often a time for mid-term exams and papers, it can also be an opportunity to encourage students to reflect on how they are learning, perhaps even to treat it as a metacognitive pause (link).

Collecting formative feedback about how students are learning can help both faculty members and students gain insight into what is working to support learning in the course. From an instructor’s standpoint, this feedback can be diagnostic, allowing for possible adjustments to the course or to certain pedagogical approaches. For students, it makes their learning more visible and encourages them to acknowledge what is working and what they might do to improve their learning and performance in the course.

Collecting this feedback can take several forms, from web-based surveys to more structured in-class approaches. Using a freely available, easy-to-use tool like Google Forms, you can create a brief series of questions that encourage reflection on learning in the course (sample mid-semester feedback form). The collected responses are automatically tabulated and can be used to facilitate follow-up discussion about how students are learning in the course. There are also more structured in-class approaches to collecting formative feedback, which can provide a greater level of detail than survey instruments. One such approach is the Small Group Instructional Diagnosis (SGID). In an SGID, a facilitator conducts a structured focus group in three phases – individual, small group, whole class – identifying examples of what is supporting learning and what could be done to enhance it. In each phase, students work to prioritize and reach consensus on their examples. The SGID facilitator then synthesizes this feedback and prepares a summary report to share with the faculty member. Participation in the process is completely confidential. More information on the SGID process can be found here (link).
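If you would rather work with the raw responses than with the summary charts Forms generates, the responses can also be exported as a CSV and tallied with a few lines of code. The sketch below is purely illustrative: the file name and question text are placeholders, not part of any actual form.

    # Hypothetical sketch: tally one question from a CSV exported from Google Forms.
    # The file name and question text below are placeholders.
    import csv
    from collections import Counter

    with open("midsemester_feedback.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    question = "Which course activity best supports your learning?"
    counts = Counter(row[question] for row in rows if row.get(question))

    # Print answers from most to least common to seed a follow-up discussion.
    for answer, n in counts.most_common():
        print(f"{n:3d}  {answer}")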

As you think about the approach of mid-term, the CLTR looks forward to consulting with faculty members to explore ways to collect formative feedback and better understand how students are learning in their courses. Colgate faculty should feel free to contact Doug Johnson (djohnson@colgate.edu) or Jeff Nugent (jnugent@colgate.edu) for assistance.

The recently released paper, “Unpacking Relationships: Instruction and Student Outcomes,” from the American Council on Education, provides a thoughtful overview of evidence-based teaching practices and how they support student learning. The paper begins by acknowledging the importance of relationships between students and faculty members – a hallmark of the liberal arts experience – as a foundational element of integrating students into the academic culture of higher education. Of equal, and perhaps even greater, importance is its recognition of meaningful engagement in learning environments as central to achieving student outcomes.

More than just a rhetorical framing, the question the paper’s author, Natasha Jankowski, invites us to consider is: “What is the relationship between instruction and student outcomes?” While answers to this question may appear obvious, she notes that despite broad awareness of the positive impact of evidence-based teaching practices, faculty members may not always implement these practices in their instruction.

The paper provides a valuable opportunity to reflect on our own pedagogy as we consider the instructional practice areas that play a critical role in contributing to learner success. Each of the five practice areas is clearly described, with examples and references to supporting research studies:

Transparency – emphasizing the importance of making teaching and learning visible for all students.

Pedagogical Approaches – focusing on practices that enhance student learning, involvement, and engagement. Additional attention is given to the role of high-impact practices in supporting deep learning.

Assessment – recognizing the value of a balanced assessment approach that integrates formative feedback into course design and provides multiple opportunities for students to demonstrate learning.

Self-regulation – encouraging and supporting learners to reflect not only on what they are learning, but also how they are learning.

Alignment – viewing alignment as a design principle that drives toward holistic coherence among content, teaching approach, activities, and assessments.

You may find it a helpful exercise to read through the paper, reflect on the practices you already engage in, and perhaps consider ways of further integrating or refining some practices to enhance student learning.

A number of Colgate faculty (some examples below) have been taking advantage of empirical findings from applied cognitive science. One class of findings, “Desirable Difficulties,” refers to conditions under which a learner feels that an approach is difficult or ineffective, but that approach actually leads to better learning. While at first these findings may seem oriented toward the learner (our students), they also carry implications for us as faculty.

An example of a desirable difficulty is retrieval practice. Students tend to prefer “re-studying” material to testing themselves on it, because forcing themselves to retrieve information is effortful, not always successful, and can feel premature. The data, however, are clear that retrieval practice (whether self-testing, low-stakes quizzes, or “real” tests) has a powerful impact on learning (not just memorization). A comprehensive review of this literature may be found here (Roediger & Karpicke, 2006), and the reference and resource section below has other related links.

Three Colgate faculty, Spencer Kelly and Neil Albert in Psychology, and Liz Marlowe in Art and Art History, were kind enough to share with me how they use frequent assessment techniques in their courses. As you will see, Spencer, Liz, and Neil report benefits to their approaches beyond the learning effects predicted by the literature.

Spencer wrote:

In my Language and Thought and Cognitive Neuroscience seminars, I ask daily cumulative exam questions at the start of almost every class that test students’ comprehension of the day’s readings and their ability to integrate readings and discussions across the semester. The questions range from 5 minutes to 30 minutes, with most of them being around 8 to 10 minutes. These questions (after dropping the two lowest scores) are worth roughly 35% of the student’s total grade.

The main benefit is that students come prepared every day not only to discuss the readings for that day but to explore them in the context of everything that has come before them. This creates a coherence to the discussion that drives home (on a daily basis) the main points and themes of the class. Another benefit is that students get immediate feedback on where they are in the class, and this allows them to make necessary adjustments. Not to mention it helps me know when I also need to adjust!

Thus, in addition to the potential learning effects, Spencer notes that his approach helps students come to class prepared, better integrate the information, and receive early and frequent feedback on their performance. Spencer also uses the results of the frequent assessments to adjust his teaching during the semester, rather than waiting for end-of-semester Student Evaluations of Teaching (SETs) and adjusting the course the next time it is offered.
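For those curious about the mechanics, the grading rule Spencer describes (drop the two lowest scores, then weight what remains at roughly 35% of the course grade) is straightforward to compute. The sketch below is a hypothetical illustration with made-up scores, not Spencer’s actual gradebook.

    # Hypothetical illustration of a drop-the-lowest grading rule.
    # The scores and weight below are made up for the example.
    def quiz_component(scores, drop=2, weight=0.35):
        """Average the scores after dropping the lowest 'drop' of them, then weight."""
        kept = sorted(scores)[drop:]
        return weight * (sum(kept) / len(kept))

    daily_scores = [8, 10, 6, 9, 7, 10, 5, 9]  # made-up scores out of 10
    print(f"Weighted quiz component: {quiz_component(daily_scores):.2f} out of {0.35 * 10:.2f}")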

Liz uses a graded mini-exam every class period, and wrote:

Students in my ARTS 101 class answer an exam question during the first 10 minutes of class (75-minute period) almost every day, for a total of 22-24 questions over the course of the semester. I drop their lowest score. After the first week of classes, the questions are always cumulative and usually comparative, drawing on something from earlier in the semester plus either the reading from the night before or the previous lecture. For example, I might show a vase with a scene of a ritual offering from Mesopotamia that we studied in week 2 and a plaque with a scene of a ritual offering from Crete that we examined the day before (in week 5), and ask them something like “How do the differences in the iconography of these two scenes reveal larger differences between the respective cultures that created them?” This is a very standard type of question in art history, but usually students are only asked to think in this sort of integrative way twice a semester, at the midterm and at the final exam. Having them do this more frequently not only keeps all the material fresh in the active part of their brain (I leave it to my colleagues in Neuroscience to put that in more precise language), but also shows them on a daily basis what art history is for, how studying a culture’s art allows us to develop a very big picture of the various ways human society has been organized as well as the differing values and ideals that various cultures have embraced over time. In other words, they see the pay-off of this discipline in a relatively low-stress context every day, rather than just at high-stakes exams twice a semester. Having them think cumulatively (and put their cumulative thinking into writing) every day increases not only how much they learn in the course but also their understanding of the value of learning such material.

Like Spencer, Liz notes that the frequent assessments help students integrate information from various parts of the course and treat their learning as incremental and connected, as opposed to knowledge packed in for a “brain dump” during a high-stakes exam.

Neil also uses student work to adjust his teaching (“Just in Time Teaching”, see reference and resource section below), and wrote:

Much of the focus of my Cognitive Neuroscience seminar is on reading primary source material for a biweekly theme (e.g., perception or learning). Class discussions focus on describing conceptual frameworks, main findings and the degree to which papers complement and contradict each other – both within and across themes. For each article we read, students complete a brief article response that includes the main findings of the paper and its greatest strengths and weaknesses; article responses are submitted in Moodle and due a little over an hour before class, but are not graded. The primary impact of these short responses is that students have thought carefully about key aspects of a paper before coming to class to discuss it – and they do so within a structured framework. I spend an hour or so before class going over these responses as I tailor my discussion plan based on the students’ mastery of the material. In class, we discuss these same aspects of each paper and students benefit from a reflective presentation of the strengths and weaknesses their peers have identified. Over the course of the semester, students become dramatically better readers and they are able to identify recurring strengths and weaknesses that occur across themes. Ultimately, students come to recognize the strengths and weaknesses of the relatively immature field of Cognitive Neuroscience and how to be a productive force in an imperfect world.

Thus, Neil’s approach of frequently asking students to reflect on the material builds in retrieval practice by its very design, while also encouraging students to come to class prepared and allowing him to adjust his approach to the material based on where students are struggling.

It is important to note that helping students use retrieval practice does not have to depend on graded assignments or take up a lot of class time. As another example, in my Psychology 309 course, I use a number of low-stakes quizzes that are completed outside of class and submitted electronically. Each student has a separate data set, so students can share concepts with one another, but each must do their own calculations; they cannot “check” answers with each other, since every answer will be unique. Setting up the quizzes takes some time, but I can then use spreadsheets and a form plug-in in Google to quickly grade responses and provide feedback.
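To make the idea concrete, here is a minimal sketch of how per-student data sets with automatically checkable answers might work. It is not the Google spreadsheet-and-plug-in workflow described above; the IDs, value ranges, and statistic are all hypothetical.

    # Hypothetical sketch of per-student data sets with automatic answer checking.
    # This is not the Google Forms/Sheets workflow described above.
    import random

    def dataset_for(student_id, n=10):
        """Generate a reproducible, student-specific data set seeded on the ID."""
        rng = random.Random(student_id)
        return [round(rng.uniform(50, 100), 1) for _ in range(n)]

    def expected_mean(student_id):
        """The instructor-side answer key: the mean of that student's data set."""
        data = dataset_for(student_id)
        return round(sum(data) / len(data), 2)

    def grade(student_id, submitted_mean, tol=0.01):
        """Mark a submission correct if it matches the key within a tolerance."""
        return abs(submitted_mean - expected_mean(student_id)) <= tol

    # Two students can discuss the concept, but their correct answers differ.
    for sid in ["student_a", "student_b"]:
        print(sid, "expected mean:", expected_mean(sid))

Because each data set is derived deterministically from the student’s ID, regrading or giving feedback later requires no stored answer key.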

If you have been experimenting with more frequent assessments, or have found other ways to enhance learning related to the “testing effect,” please share your approach, along with any consequences you noticed (pro and/or con), in the comments section.

The Spring 2016 schedule of events from the Center for Learning, Teaching, and Research has been released. The schedule features a panel discussion on ColgateX/edX and digital pedagogy, a series of engaging Teaching Tables, reading groups, and more. Please note that Teaching Tables have limited capacity and require pre-registration.

Course Redesign Workshop? If you might be interested in a course redesign workshop after White Eagle this spring, contact Doug Johnson (djohnson@colgate.edu). If there is sufficient demand, I will organize a mini-retreat on campus for those of us who are interested.