Wednesday, August 1, 2012

Barriers to Change

As promised in my last column (Learning from the Physicists, July 2012), this month I will look at work from the Physics Education Research (PER) community on the barriers to adoption of empirically validated teaching practices. I will conclude with three steps that the mathematics community needs to undertake if we are to improve the teaching and learning of undergraduate mathematics.

The primary source for this article is the report by Henderson et al. [1] of a survey completed by 722 physics faculty from the United States, but I will also draw on some of the other literature, and I recommend the slides from two talks given in June of this year: one by Melissa Dancy [2] at the conference on Transforming Research in Undergraduate STEM (Science, Technology, Engineering, and Mathematics) Education, and the other by Charles Henderson [3] at the American Society for Engineering Education.

For their study, Henderson et al. questioned faculty about their awareness and use of twenty-four different research-based instructional strategies (RBIS) for which there are published studies establishing their effectiveness. These include a variety of materials produced in the 1990s and 2000s as well as Mazur’s Peer Instruction, developed at Harvard. The 722 respondents represent just over 50% of those who were contacted. While there may be some selection bias, only 12% had no knowledge of any of these strategies, and an impressive 72% had used at least one of them. But 32% of those who had used at least one strategy no longer used any of them. At least in physics, the issue is not getting people to try proven but innovative approaches to teaching and learning; it is enabling them to stick with it.

In trying to understand what influences faculty decisions about the use of innovative approaches, Henderson et al. 
looked at twenty possible explanatory variables. These range from class size, type of institution, gender, rank, and number of years in the profession, to research productivity (measured separately by presentations, articles, and research grants within the past two years), to departmental encouragement and discussions with peers about teaching, to goals for teaching (the importance of developing conceptual understanding and problem-solving ability) and interest in research on instructional strategies.

At the first level, simple familiarity with some of the instructional strategies, two of the strong predictors are attendance at talks or workshops on teaching and regular reading of journal articles about teaching. The strongest predictor is attendance at one of the New Faculty Workshops (NFW), 3½-day workshops held by the American Association of Physics Teachers (AAPT) for new physics faculty. The intent of these workshops is to introduce new faculty to the results of Physics Education Research, so it is not surprising that those who have participated in an NFW are over ten times as likely to be aware of these strategies as the average faculty member. Two other variables were also significant: level of satisfaction that one’s goals for teaching were being met and whether one’s position was full- or part-time. At the next level, actually trying one of these strategies, a new but not surprising variable becomes important: interest in using RBIS.

I am intrigued by the list of variables that did not come into play: class size, type of institution, faculty rank or years in position, and research productivity. None of these significantly impacted knowledge of or willingness to try innovative strategies.

The critical stage, however, is not the decision to try RBIS but the willingness to stick with it. 
The only survey variables that were significant at this stage were a desire to try more RBIS and gender: women were significantly more likely to continue the use of RBIS than were men. Henderson et al. cite four other studies that also showed that women are far more likely to have student-centered physics classrooms than are men. The authors speculate that gender may be serving as a proxy for a collection of beliefs about and attitudes toward teaching.

Many researchers have studied the phenomenon of trying and then abandoning innovative approaches to teaching; the Henderson et al. article includes an extensive bibliography. The hurdles that faculty face include student complaints, inability to cover the same quantity of material, and weaker than promised student outcomes. One of the issues is that implementation frequently is not faithful. As I have seen first-hand both at Penn State and at Macalester, if the course that students experience is not cohesive, if bits and pieces are poorly articulated or motivated, students will complain and resist. That does not mean that faculty dare not try to modify a received instructional strategy. It does mean that whatever changes they incorporate into their classes must be reflected upon and monitored for effectiveness. In the Henderson, Beach, and Finkelstein [4] review of the literature on facilitating change in undergraduate STEM education, one commonly observed phenomenon is the importance of evaluation and feedback, an aspect of change implementation that often is omitted.

Among the interesting findings of the survey by Henderson et al. were the variables that demarcated the distinction between high and low users of RBIS. Low users were defined as those using one or two of these strategies, high users as those using three or more. This is the first time that some of the traditionally assumed barriers come into play: research productivity, type of institution, and class size. 
Highly productive faculty at research universities teaching large classes of introductory physics are significantly less likely to be high users of RBIS. What is surprising is that this is the first time that these variables arise. These highly productive research faculty were no less likely to know about or try some of these innovative strategies, just less likely to be high users.

Also intriguing is the fact that age, as implied by rank and number of years in service, had nothing to do with knowledge of or willingness to try new approaches to teaching. Young faculty who have attended a New Faculty Workshop are more likely to be familiar with the education research and to have tried some of these approaches, but they are no more likely than older faculty to stay with them.

What Are the Implications for Undergraduate Mathematics Education?

First, we need to conduct a comparable study of mathematics faculty. I suspect that knowledge of what is being done, and of the evidence for its effectiveness, is weaker among mathematicians than it is in the physics community. The combination of the near-universal acceptance of the Force Concept Inventory as a measure of student conceptual knowledge and the high-profile work of Eric Mazur on the use of clickers for Peer Instruction makes it far more difficult for a physicist to be ignorant of everything that is happening in Physics Education Research. Part of the study of awareness of and responses to Research in Undergraduate Mathematics Education will need to be a thorough assessment of the effect of Project NExT. We have mountains of anecdotal evidence of the importance and effectiveness of this program, but now we need to learn what impact it really has had. The New Faculty Workshops in physics do raise awareness of what can be done, but appear to have little long-term impact on adoption of these strategies. Project NExT is different in many respects, most notably in providing a much broader introduction to what it means to be a professional mathematician in an academic position. It also builds communities and provides mentorships. Project NExT is almost twenty years old; we should be able to get good data on its long-term influence.

Second, MAA and the mathematics community need to work on expanding knowledge of what works, including the development of web resources comparable to AAPT’s PERusersguide.org. As I explained last month, this will be a much more challenging undertaking than it has been in physics. However, the next CUPM Curriculum Guide, now in its early stages of development and slated for publication in 2015, will address some of this lack.

Third, as the literature amply demonstrates, simply developing effective strategies and putting them out there for people to find is not sufficient. 
As I travel around the country and talk with faculty at a broad range of colleges and universities, I encounter a lot of dissatisfaction with the way undergraduate mathematics instruction is now conducted and frustration with the difficulty of maintaining quality in the face of budget cuts. I also see a lot of uncertainty about how to proceed and a fear of undertaking radical changes that might prove disastrous.

MAA and the mathematics community need to develop mentorship programs that promote small, coherent steps and provide instruments for collecting feedback and monitoring the effectiveness of these efforts. This is part of the critical task of equipping faculty for continual development and refinement of their teaching. We also need to offer the kind of flexible guidance that enables each institution, ideally each instructor, to take ownership of these changes, allowing for modifications that meet local needs and preferences while signposting the dangerous mutations that would destroy the integrity and coherence of the strategy.

This third step will be complex and difficult, yet it is absolutely critical. I am optimistic that we can accomplish it. There is broad recognition that the teaching and learning of undergraduate mathematics needs to improve. Corporate, foundation, and government resources are lining up to bring about improvements. We also know much more than we ever have before about the barriers to change. In cooperation with the Physics Education Research community and all those working on STEM education, we can do this.

[1] Henderson, C., M. Dancy, and M. Niewiadomska-Bugaj. 2012. The Use of Research-Based Instructional Strategies in Introductory Physics: Where do Faculty Leave the Innovation-Decision Process? Accepted for publication in Physical Review Special Topics - Physics Education Research.

[2] Dancy, M. 2012. Educational Transformation in STEM: Why has it been limited and how can it be accelerated? Available at http://www.chem.purdue.edu/Towns/TRUSE/TRUSE docs/TRUSE Talks 2012/Dancy TRUSE Talk .pdf

[3] Henderson, C. 2012. The Challenges of Spreading and Sustaining Research-Based Instruction in Undergraduate STEM. Available at http://homepages.wmich.edu/~chenders/Publications/2012HendersonASEETalk.pdf

[4] Henderson, C., A. Beach, and N. Finkelstein. 2011. Facilitating Change in Undergraduate STEM Instructional Practices: An Analytic Review of the Literature. Journal of Research in Science Teaching. Vol. 48, no. 8, pp. 952–984.

"the issue is not getting people to try proven but innovative approaches to teaching and learning; it is enabling them to stick with it."

I (earnestly) suggest that the problem you are actually having is with "proven." You will find that these alternatives prove themselves only in their own terms, not in the terms we expect. For example, the following is what we would accept as proof without hesitation...

You teach an AP Calculus class using the current method and the students barely even pass, if they pass. You teach them using the new method and they score 4's and 5's.

But this is what we get for "proven" instead...

You teach an AP Calculus class using the current method and the students barely even pass, if they pass. You teach them using the new method and they do better on a "concept inventory".

Hake described my last post as being too verbose and redundant (and a lot of other things), and he was correct about the verbosity and redundancy. I tried to jam too much into 10 minutes. This time I have kept it very brief and very simple. There is NOTHING standing in the way of any of these approaches, or any approach for that matter, except proof.