2020-04-30 · http://teachbetter.co/blog/2020/04/30/exams-and-online-classes

“How are you handling exams?” That’s the question I’ve heard most from colleagues this semester as we transition our classes from in person to online, and it’s a tough nut to crack.

Exams have two major purposes: Measure how much students know about a subject and incentivize studying. A good in-person exam can satisfy both purposes, and two-stage exams add icing to the cake by increasing the amount students learn during the exam itself. I’m going to set aside the substance of the exam: it matters just as much for in-person exams as for online ones, so it doesn’t affect the move online. Let’s assume for the moment that we’re reasonably happy with how our in-person exams measure the skills we’re trying to teach. Right now, we’re just trying to figure out the best way to modify them for online.

The primary challenge with online exams is ensuring academic integrity, as it’s very hard to control or even observe students’ behavior during this kind of exam. It can be done on a small scale with proctoring services like Examity that have software that locks down a student’s computer and have humans (or AI) watching students through their webcams. These services are expensive, a little creepy, and just aren’t practical when you have more than 10-20 students.

I believe many long-time online instructors solve the integrity problem by giving students multiple choice tests that are randomly generated from a large question bank. If students have a limited number of minutes to complete the test, they just won’t have time to get help, and because they get different exams, they can’t just gather in a Zoom chat room and solve the same questions together. Just make sure you don’t ask lots of questions where the answers can be quickly looked up in the textbook. This approach works well when you either already have a large question bank or have the serious resources required to build such a bank: Developing good multiple choice questions is way harder than it looks.

This semester I don’t have an existing question bank or a lot of help to write such questions. Additionally, my applied econometrics course teaches students to build econometric models and analyze results, and it’s particularly challenging (though not impossible) to test these skills with multiple choice questions.

For my recent midterm exam, I chose a very different approach to assessment. I posted a PDF to the course web site that contained the same kind of pencil-and-paper, multi-part problem exam I usually give in person. Students had a 24-hour window to download and complete the exam. I made the exam open-book and open-note because I had no way to prevent them from looking through books and notes during the test. My in-person exams are mostly open-note, so this wasn’t a big change. I thought about asking them to work alone, but because I couldn’t enforce this either, I felt a non-negligible number of students would collaborate anyway. This would put the rule-abiding students at a big disadvantage, and that wouldn’t be fair. Instead, I allowed collaboration between members of the small groups that they have been working in all semester.

I strongly encouraged individuals to complete the exam on their own first and then meet to discuss their solutions—If you think this sounds an awful lot like a two-stage exam, you’re right! After the fact, many students told me they learned a lot during the exam through this collaboration process.

Scores on the exam were (not surprisingly) very high, but I worry about how well this structure measured individual learning and motivated studying. I believe some (though far from all) students simply copied the work of their teammates, and other students engaged in “just-in-time” studying figuring they could learn what they needed during the exam.

One tweak that could help would be to allow groups to start the exam at any point during the 24-hour period, but then give them just a couple hours to complete it. I can’t do that this semester because many of my groups have students in very different time zones. In the future I could make sure group members reside in similar time zones, but I didn’t want to break up groups that have built up quality social capital all term.

For me, the ten million dollar question is what to do for the final exam. I’m seriously considering a hybrid approach where I first ask them to take a short (say 10 question / 30 minute) randomly generated multiple choice exam on their own. Then they take a collaborative exam like the one I gave as a midterm. It would be much less work than a full-on multiple choice set up, but it would still let me identify those students who have no idea what’s going on and free-rode on the midterm. The collaborative piece would let me ask tougher questions and keep all the learning that happens during the exam. Their score would be a weighted average of what they get on the two parts.
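The scoring arithmetic for the hybrid approach is simple. Here’s a minimal sketch in Python; the 30/70 split and the example scores are purely illustrative, not a recommendation:

```python
def hybrid_score(mc_score, collab_score, mc_weight_pct=30):
    """Weighted average of an individual multiple-choice score and a
    collaborative exam score (both on a 0-100 scale). The 30/70 split
    here is illustrative only."""
    return (mc_weight_pct * mc_score + (100 - mc_weight_pct) * collab_score) / 100

# A student who free-rides on the collaborative part but struggles on
# the individual multiple-choice part still takes a real hit:
print(hybrid_score(40, 95))  # → 78.5
```

One design question is how heavy the individual piece should be: heavy enough to punish free-riding, light enough to keep the tougher collaborative questions at the center of the exam.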

One thing I know for sure is that I’ll talk with my students before making any decisions. They always bring up things I haven’t thought of. And when I do decide what to do, I’ll explain exactly why.

2020-04-17 · http://teachbetter.co/blog/2020/04/17/more-zoom-teaching

Since my last post on Zoom teaching, I got some great tips from my friends on Twitter and thought of a couple more things I want to share:

Another way to share an iPad screen is to log in to Zoom separately from that device. This makes it easier to manage your windows on the “main” computer, but be careful that you don’t randomly assign the iPad to a breakout room. Preassigning students to breakout rooms ensures this won’t happen. Thx @BenFinio and @tedsvo

You can share your iPad’s screen wirelessly using Zoom’s built-in AirPlay server. This allows apps like GoodNotes and PowerPoint to use Presenter Mode and show students your slides while you see more information (like notes and tools) on your iPad’s screen. Zoom says it is supported on Windows, but I have yet to see it actually work. Zoom’s AirPlay server is pretty solid on the Mac, though I use a wired connection myself. Thx @tedsvo and George Orlov

On Tuesday Zoom successfully preassigned 12 of my students to breakout rooms. I don’t know why, but this time it decided to leave the other 70 in the main room. Luckily, I just had to click “Recreate > Recover to pre-assigned rooms” from the breakout room dialog box to fix the problem.

Arranging windows is tricky in Zoom because it tends to move them around when you do things like share your screen. That’s why I start by sharing my slides. Then I put my students into grid view to maximize the number I can see. The student grid wants to be on top, so I move it aside before opening my other windows. I open Participants (so I can see hand raises), Chat (so I can see questions), and Breakout Rooms (so it’s fast to start them). Last, I put the student grid in place as close to my camera as possible without covering my toolbar. This way my students see me looking at them when I’m actually looking at them. The screen share window is big (so it’s high resolution) and covered up (since I look at my slides on the iPad). It would be nice if Zoom remembered these window settings from session to session or when I return from breakout rooms, but for now, I just get a lot of practice organizing my windows.

This last tip might seem obvious, but if you are going to have your students work on problems in breakout rooms, post them to your course website ahead of time. Your screen share isn’t propagated to the breakout rooms, so they will need to download the slides in order to see them.

Got more tips? Feel free to contribute in the discussion below or tweet @TeachBetterCo!

2020-04-13 · http://teachbetter.co/blog/2020/04/13/big-zoom-classes

It’s been a long time since I taught my first big lecture course, and at this point, I’ve got a system that I think works well. My students take notes while I lecture, but they also spend lots of class time working in small groups solving problems and answering meaty questions.

When colleges around the world (including my own) sent students home this spring and switched into online teaching mode, I wanted to replicate as much of the in-person experience as possible. This might sound familiar to regular readers of this site as that was my plan way back in 2014 when I taught a much smaller version of this class using Zoom in Yale’s Summer Session.

Cornell classes started up again last week, and we finally got to reconnect with our students. I have 130 scattered all over the world, and about 100 showed up for two live sessions. Overall it went smoothly, but there were definitely lessons learned:

It’s important to actually see at least some of your students while you teach, so ask them explicitly to turn on their cameras. The non-verbal feedback you get will tell you when you’re being unclear or boring. Do tell your students that being on camera is optional and that they shouldn’t do it if it makes them uncomfortable! Cornell alum Richard Thaler taught us all that default options matter, so be sure to make video on the default for your course meeting.

Require students to authenticate themselves through your school’s server. This helps prevent zoombombing. I thought getting zoombombed would be a fun distraction until I saw some truly scary video. Let’s not have that experience!

Some students will have trouble with the authentication process on the first day. If you can find a willing teaching assistant (or colleague), have them run a separate Zoom meeting that doesn’t require authentication for the first 15 minutes of class to help these students.

Get yourself an iPad Pro, put your slides on it, and annotate them in realtime while you teach. I use GoodNotes on the iPad for this. You won’t regret it. If you have a Mac, you can share your iPad screen from Zoom directly (wired or unwired). If you run Windows, it should theoretically work, but it’s finicky. My colleagues use AirServer to mirror their iPads on the PC, and then they share that window in Zoom.

Zoom breakout rooms are great—They let you organize your students into small groups in their own conference rooms with the push of a button. I did this in my summer classes whenever I asked my students to solve a problem that required substantial work. The Zoom user interface lets the instructor quickly jump between rooms and then bring the whole class back together when you’re ready. I believe online students really benefit from peer-to-peer contact, so I try to encourage it as much as possible.

Most people let Zoom randomly assign students to new breakout rooms each time. This worked well when my class was small and everyone knew each other. For a bigger class, I strongly recommend pre-assigning students to the same groups each time so they can build strong relationships during the semester. I do this in my in-person class, so this term they are already used to working in these fixed groups.

When they are in breakout rooms, students can press a button to ask for help—It pops a message up on the instructor’s screen with a button that takes you straight to that breakout room. Encourage your students to use this button.

Have all your students keep their microphones muted while you lecture. When they want to talk, they can press the hand-raise button. I was worried I wouldn’t see hand-raises, but it turns out I notice most of them as there’s a subtle (but not too subtle) pop up.

Record your class for students that are in far away time zones. I teach at 1:30 in the afternoon east coast (US) time, and that makes it pretty tough for my East Asian students. I set up Zoom to record automatically so I don’t forget to press record, but it means I usually need to trim the first few minutes of pre-class chit chat.

When class is over, stick around to answer questions and just provide live personal interaction with students who want it. Most instructors do this in physical classrooms, and it’s even more useful in an online environment.

Zoom will give you a report of who attended your class. I matched this report with my course roster after the first day to identify the students that didn’t show up or just showed up for a few minutes. That let me reach out to them to see if they needed help connecting or had any other issues I should know about.
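The matching step is essentially a set difference between the attendance report and the roster. Here’s a minimal sketch in Python; the emails, names, and minute counts are all made up for illustration:

```python
# Hypothetical course roster, keyed by student email.
roster = {
    "aa123@school.edu": "Ada A.",
    "bb456@school.edu": "Ben B.",
    "cc789@school.edu": "Cai C.",
}

# Hypothetical rows from a Zoom attendance report: (email, minutes attended).
report = [("aa123@school.edu", 75), ("cc789@school.edu", 4)]

minutes = dict(report)
never_joined = [name for email, name in roster.items() if email not in minutes]
brief_visits = [roster[email] for email, m in minutes.items() if m < 10]

print("Never joined:", never_joined)   # → ['Ben B.']
print("Only briefly:", brief_visits)   # → ['Cai C.']
```

In practice the fiddly part is normalizing identifiers, since students sometimes join Zoom with a personal account whose email doesn’t match the roster.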

Live Zoom classes really can be extremely similar to in-person lectures, even with lots of students. They are dynamic in ways pre-recorded lectures aren’t, and they allow for lots of instructor-student and student-student interactivity. It might take some time to get used to, but it’s totally worth the effort.

9:11⏯ Hearing the students’ voices. The Zen idea of the “beginner’s mind.” Doug’s search for the perfect explanation–which never means the same thing to the student. And we’re not like most of our students, because we were the invested ones.

11:48⏯ Kevin’s from a family of teachers and wanted to be a teacher since high school. Observing high school teaching cured him of part of that. Adjuncting Communications 101 to get rich. But now Kevin works with middle and high school students around Des Moines.

15:47⏯The documentary 13th (2016) and the prison system in America as ‘slavery by other means.’ College and high school students find Kevin online, and he Skypes into classrooms, and more. History as something more than names and dates.

20:11⏯ Coming into someone else’s learning space. Knowing who the learner is. Vs. thinking of our students in terms of deficits. Students as knowledge creators. Active learning doesn’t just mean being active: It means making knowledge, and students shouldn’t have to wait until senior year to do it. The freshman major course is a gateway, not a death march.

24:20⏯ Working your way through school as a pinsetter mechanic in a bowling alley. Kevin didn’t read a book and do quizzes. But do avoid running 220 volts through your body. Active learning also means a variety of ways of learning.

27:50⏯ Role models for high school and college. It matters that the teacher likes the students and enjoys teaching. Just because you enjoy a good lecture doesn’t mean you can do one. Letting the subject matter be complex.

32:32⏯ Where is there room for improvement? Where do you want to be in ten years? Teaching one class a year but trying NOT to make it a masterpiece. Still relying on lectures, however short. Getting quicker at giving feedback. Giving feedback on writing by focusing it on one area.

40:20⏯ Incorporating student reflection. Asking students how they might improve–and following through. Letting students earn back points on an exam by identifying what they did that did NOT work. Helping students get over the idea that “time on task” determines learning. The New Science of Learning: How to Learn in Harmony With Your Brain by Doyle and Zakrajsek

44:03⏯ A teaching mistake involving a table. But it did set the tone. Thanks and signing off.

Show Notes

5:00⏯ Defining active learning: The students need to do more than pay attention and take good notes. The importance of the instructor in active learning. The instructor in an Active Learning Classroom (ALC) may be quiet but may take 5,000 steps in 50 minutes. Active Learning can include lectures, they’re just not long and play a different role. “The professor’s just there to resolve the chord.”

11:28⏯ What does a classroom effectively customized for Active Learning look like? Good ALC design facilitates collaboration and physical movement. Supporting the flow of information around a classroom: whiteboards vs. screens. A polycentric space: there is no ‘front of the room.’ A good ALC gives instructors and students many ways to do the same thing.

18:21⏯ The challenges of researching the literature on active learning classrooms: different names for the same thing, similar names for different things. ALC’s have an impact on the institutions themselves. Professional development and what triggers it. Working with Steelcase: no pressure to shade the results. Steelcase likes ALC’s to have glass walls–and indeed, this seems to have some tangible benefit. Anecdata.

27:02⏯ Analog tools as ‘reducing the amount of friction between a student and her own thoughts.’ On the virtue of portable whiteboards. The on/off switch is the first thing to go. The need for low ignition costs for capturing ideas.

32:35⏯ Spaces designed for active learning (AL) foster AL strategies even when instructors are told NOT to do AL. Students make choices in all learning spaces.

37:24⏯ Equity issues and the ‘bowling alley’ classroom space. Sharing teaching resources with learners to downplay the centrality of a single screen. The big research challenge: comparing the same pedagogy in two different spaces. What’s the active ingredient–the AL or the ALC?

42:30⏯ Architectural features making certain AL behaviors easier, and those behaviors support things we know support learning: like engagement, motivation, etc. But what’s left on the table by not having the best room. How much are instructors missing out on when they do AL in a sub-optimal room? ALC’s make instructors more excited about AL practices. But then AL research becomes less convincing than AL anecdotes.

48:12⏯ Initiative fatigue: the flipped classroom, writing across the curriculum, time-on-task, and the zone of proximal development. But can you get access to the room?!

56:13⏯ How many students can you fit into an ALC? An experiment in getting more students in a smaller space by rotating its use and making the rest of the work online. Applying self-determination theory to learning: supporting competency, connectedness, and autonomy.

2019-04-05 · http://teachbetter.co/blog/2019/04/05/dangers

Machine learning is capable of amazing things. Speech recognition was a fragile novelty 15 years ago and now it’s ubiquitous. Self-driving cars are on the verge of breaking through. Chess and Go are now mastered by machines. At the same time we are gathering unprecedented amounts of data on our students. We track their behavior in class and their usage of the Learning Management System (LMS) outside class. We measure their performance through exam scores, quiz scores, answers to in-class questions, and evaluations of their writing. To supplement this information, we have demographics, surveys, and measures of their performance in other classes. It seems obvious that combining these two technologies should yield important insights into student learning, and in fact big money is being invested by the smallest and biggest edtech companies to do exactly this. And I think it’s really dangerous.

Machine learning is very good at prediction. It identifies what combinations of values of a large number of variables are associated with particular outcomes. e.g., Males, ages 12-18 who play video games are likely to enjoy Marvel movies. These predictions, while highly accurate, are often not easily phrased in human language. It’s as if the algorithm says “Trust me—I know from looking at the data that people with characteristics like Bob’s prefer Marvel movies to DC movies. Just don’t ask me why.” We’re only slowly figuring out how to summarize these patterns in ways that are useful beyond pure “trust me” predictions. Without insight into “why?” I’m not sure how much we can learn about student learning.

The bigger problem is that correlation does not equal causation. Doctors talk about risk factors for a disease. They don’t explicitly say that old age, fatty foods, and a passive lifestyle cause heart disease, even though they are strong predictors. Social scientists work extremely hard to figure out when an observed correlation is a causal effect. Vitamin D is unambiguously associated with great health outcomes, but a large study recently found that the relationship isn’t causal. Instead, people with high levels of vitamin D are those that spend more time outside, and it seems to be the outdoor physical activity that has the positive causal effect on health. That is, even though the positive association exists, supplementing people’s diets with Vitamin D has no effect on their health.

In my own classes, the students who spend the most time studying are often not earning the highest exam scores. If I were to interpret this as a causal effect, I would want to discourage them from studying so much. This doesn’t take into account the fact that the students who study the most are often starting with weaker skills than other students, and they are studying hard in order to catch up. It’s also possible that the students who study most are studying inefficiently.

Here’s another example: Students who attend my scheduled office hours tend to do better on my exams. It’s so seductive to interpret this as evidence of the value of my one-on-one teaching, but that would ignore what econometricians call selection bias: The students who attend office hours are often the most curious and hard-working and they would do better than other students even if I wrote gibberish on my blackboard and recited bad poetry when they came to my office.
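A toy simulation makes the selection-bias point concrete. All the numbers below are invented: in this model, office-hours attendance has zero causal effect on scores, yet attendees still outscore non-attendees, because the same unobserved motivation drives both attendance and performance.

```python
import math
import random

random.seed(1)

students = []
for _ in range(10_000):
    motivation = random.gauss(0, 1)
    # More motivated students are more likely to attend office hours...
    attends = random.random() < 1 / (1 + math.exp(-motivation))
    # ...and motivation also raises exam scores. Attendance itself
    # appears NOWHERE in the score equation: its causal effect is zero.
    score = 70 + 10 * motivation + random.gauss(0, 5)
    students.append((attends, score))

def mean(xs):
    return sum(xs) / len(xs)

attendee_scores = [s for a, s in students if a]
absentee_scores = [s for a, s in students if not a]
gap = mean(attendee_scores) - mean(absentee_scores)
print(f"Attendee advantage: {gap:.1f} points")  # clearly positive despite no causation
```

A naive algorithm (or instructor) looking only at the observed gap would credit office hours with several points of exam performance that they did nothing to produce.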

The best case scenario is that unleashing machine learning on student data identifies students at risk and allows us to focus our teaching energy on identifying what those students need in order to succeed. It’s also possible that as the technology improves it will generate interesting hypotheses about the causal determinants of academic success. But we will need to be very careful not to over-reach. What if we find that students who regularly interact with the LMS during the semester are more likely to get A’s? Does this mean we should push all students to do so? If it’s causal, then yes. Perhaps this spaced interaction induces more learning than cramming right before an exam. But it’s equally likely that students who have lots of other good study habits are the ones driving this positive association, that it’s these other good study habits (which we don’t observe) that actually induce more learning, and that just encouraging (or forcing) students to interact with the LMS more regularly would have no effect at all. It could even have a negative effect if students shift their effort away from more constructive activities.

At the beginning of the term most of my students walk in the door of my econometrics classroom knowing that correlation does not always equal causation. They spend the next several weeks learning methods that can tease out the difference through carefully designed experiments or a careful analyses of observational data. Machine learning is great for prediction, but right now it’s lousy for learning how causal processes work. And it’s knowledge of how the learning process actually works that we need to improve our teaching.

2019-04-02 · http://teachbetter.co/blog/2019/04/02/zipgrade

This semester I’m teaching two big classes, and for each, I’m giving two midterms and a final. All six of these exams are composed entirely of free response questions. Some questions require calculations, some require interpretations, and some require longer explanations. You wouldn’t think I’d have much use for an app like ZipGrade that’s designed to grade multiple choice quizzes, but you’d be dead wrong.

Standard assessments

Alongside my high-stakes exams, I also give a low-stakes multiple choice standard assessment of learning at the beginning and end of every semester. This is more common in other STEM disciplines, but I’m working with several colleagues to develop a suite of assessments (e.g., ESSA and AESA) that can be used in economics courses so that we too can have objective measures of student skills.

We’ve implemented these assessments as online Qualtrics surveys, but we usually give them in a classroom environment with bubble sheets where students have fewer distractions. To save us money and give us total control over the scanning process, we initially used the open source FormScanner software. We printed custom bubble sheets, scanned them to PDF’s, and processed them, but the process turned out to be quite sensitive to exactly how we printed and scanned. More often than not we ended up doing a fair bit of hand-tweaking.

ZipGrade is a bundled app and service that lets us print custom bubble sheets, scan answers using a phone (or tablet), and download the results. I was pretty skeptical that it would work well with large (100+) classes compared to something that used a sheet feeder, but it’s been fantastic.

Exam wrappers

At the beginning of every one of my exams, students answer a short set of questions about how they prepared for it. These “exam wrappers” include questions like “How many lectures did you attend?” “How many hours did you study specifically for this exam?” and “How many hours per week do you normally study for this class?” The questions are all multiple choice, and up until now the data entry has been tedious. Automating the process over the past couple weeks has been a great low-stakes opportunity to try ZipGrade.

ZipGrade provides three “standard” bubble sheets with room for 20, 50, and 100 questions, but I created a custom bubble sheet in about two minutes through their online wizard. My custom sheet has room for 30 questions and asks students to bubble in their 7 digit Cornell Student ID. They also write their name on the sheet.

I’ve learned it’s well worth the effort to upload my student roster (with names and id’s) into ZipGrade before fielding an exam because it lets ZipGrade tell me when a scanned “quiz” does not match. This isn’t common, but some students bubble in their id incorrectly or just skip the bubbling all together.

The magic happens when my students start handing in their tests. I made the bubble sheet the top page so I didn’t have to turn any pages before scanning. I put the app in scanning mode and just hold it over an exam to start. As soon as the little green squares on the screen line up with the little black squares on the exam, the phone buzzes and shows the matching student’s name and id. And without my pressing a single button, it’s ready to scan the next exam. Scanning is literally moving exams from one pile to another with one hand while lining up squares. I can scan about 20 per minute, and if you have a few helpers (e.g., teaching assistants), you can all scan simultaneously.

Tracking who took a test and who didn’t

Before I used ZipGrade, I had another fairly time consuming process where my TA’s and I would alphabetize all the physical exams and manually match them to the roster to identify any students that didn’t hand in an exam. I would follow up with those missing students to make sure they were okay.

ZipGrade radically speeds up the accounting process by showing the scanned names of the students whose id’s didn’t match any students in the class roster. I can then manually associate the exam with the correct student by tapping on the student’s scanned name and choosing “Change Student” from the review menu. The final step is to simply look at the class Grade Book Report on the ZipGrade website to see which students didn’t hand in exams.

The bottom line

The scanning technology behind ZipGrade is amazing—Its accuracy and speed enable use scenarios I didn’t think were feasible with an app that runs on a phone. The service piece of the system is easy to overlook, but the UI is clean and the functionality is solid. To me, this system is worth FAR more than the crazy low $7/year that ZipGrade charges. I understand that this might be a reasonable price for the underfunded K-12 teacher market, but I also believe most folks in higher ed would be happy to pay substantially more. I know I would.

2019-03-11 · http://teachbetter.co/blog/2019/03/11/invention-activities

Pretty much anyone who has talked to me recently has heard me sing the praises of invention activities. These differ from more typical in-class activities in that students are asked to grapple with challenging problems BEFORE they are taught how to solve them. The experimental work of Dan Schwartz and colleagues shows that this struggle prepares students well to learn from the lecture.

Over the last two years, George Orlov and I have developed and refined six invention activities that we’ve been using to teach key concepts in applied econometrics. These include Ordinary Least Squares, controlling for a categorical variable, interacting explanatory variables, difference in differences, regression discontinuity, and fixed effects. I’m thrilled to announce that we’ve written “Using Invention Activities to Teach Applied Econometrics” that includes the details of all six activities as well as our arguments for why you should use them.

The abstract: An invention activity is a teaching technique that involves giving students a difficult substantive problem that cannot be readily solved with any methods they have already learned. The work of Dan Schwartz and colleagues (Schwartz & Bransford, 1998; Schwartz & Martin, 2004), suggests that such activities prepare students to learn the “expert’s solution” better than starting directly with a lecture on that solution. In this paper we present six new invention activities appropriate for a college econometrics course. We describe how we introduce each activity, guide students as they work, and wrap up the activity with a short lecture.

Show Notes

1:49⏯ The best teacher Doug knows, bar none. Doug McKee welcoming Doug Robertson. Talking about how the fundamental principles of teaching apply equally to K12 and higher ed. You can’t be an overexposed K12 teacher.

5:10⏯ How long Doug Robertson’s been teaching and how many students he teaches. Classroom management as engagement. Engagement online vs. face-to-face.

9:39⏯ Student engagement comes from great lesson plans AND the teacher’s personality–Neither works alone. Doug Robertson stands on desks, uses puppets. Other great teachers take a calmer more conventional approach. Teaching like Paul McCartney (and Stevie Wonder) vs. teaching like KISS.

13:00⏯ Taking students out of the comfort zone. They’ve learned to play school, but some of that they must unlearn. Being a freeform teacher but giving the students an end goal. Struggle as necessary for learning. “I wanna see you do it wrong, and then I’ll help you.” Type A 4th graders vs. Type A college freshmen. Don’t scaffold so much that failure never happens. The minimalist program in instruction. Edutwitter’s ideology of freedom.

19:37⏯ Wanting to be a teacher. High school Doug McKee’s ambiguous answers. From lifeguarding to teaching: helping people learn to do something you love, and all of the excitement and energy. Developing a new skill–to get in touch with learning. Doug Robertson’s mother was a teacher. Teaching is the job most likely to be inherited. Doug McKee is a third-generation teacher.

27:10⏯ Doug McKee’s structure for the conversation: using Susan Ambrose et al’s How Learning Works. Comparing K12 vs. a college classroom. Meeting students where they are. Seeing students as functional human people. Spending 180 days with students 6 hours a day. First seeing their unique quirks, and then seeing them as having a rich inner life. Being The Authority and yet helping students question appropriately. “I Am The Man. And there is no man.”

35:39⏯ Watching people listen to music they don’t know much about. The Reaction Video or ‘Watching Watching’ Youtube genre as learning: encountering something new and unfamiliar, the discomfort of re-shaping your mental paradigms. “Lost in Vegas” YouTube Channel

39:44⏯ Helping students organize knowledge and giving students freedom. A presentation for Doug Robertson isn’t Google Slides: it isn’t a means, it’s aiming for a certain goal. Doug Robertson stumbling across his core teaching philosophy: the student who wouldn’t stop doodling. Not expecting the students to do the thing you want the way you want. Our brains encode information in multiple systems.

46:29⏯ Segue to thinking about how to motivate students. Motivation and entertainment, edutainment vs. motivating students to motivate themselves. “I believe in you, Sweetie” vs. giving students concrete reasons why they can do it. Not being scared of students doing crappy work: “This thing will be bad first. It’s okay.” A payoff of developing trust with your student: the student knows you know them. Combatting “I’m not a math person.” “You’re not good at math…yet.”

54:29⏯ Doug Robertson’s Hobby Project. Learn ventriloquism. Or to juggle. In three weeks. Document what you did & how it felt. “Grandma taught me to knit.” Encouraging students to try. Supporting good wrong answers. Accepting the fear of failure to banish it. “Goal-free problems” is the technical term for asking for many relevant answers vs. the one right answer. See Sweller (1988), “Cognitive load during problem solving: Effects on learning,” Cognitive Science 12:257–285.

1:02:38⏯ The Hobby Project was: teaching students how to learn. A memorable failure. Teaching persuasive writing through an unpersuasive topic. Then changing it to a hot topic. Students having conversations they care about. Hope for the future.

]]>2019-01-14T00:00:00-05:00http://teachbetter.co/blog/2019/01/14/more-on-two-stage-examsI have a lot of conversations with all sorts of people about teaching. Sometimes they are happy to listen, and sometimes it’s clear they’d rather be somewhere else. The one thing almost everyone gets excited about is the two-stage exam. The benefits of having students work together to solve exam problems they’ve just thought hard about are glaringly obvious, and the implementation costs compared to many other potential teaching innovations are minimal.

There isn’t a huge amount written down about two-stage exams, so I thought it would be useful to provide links to the resources I know of. In the comments below, please share papers/pages I’ve missed or burning questions that aren’t answered!

]]>2019-01-09T00:00:00-05:00http://teachbetter.co/blog/2019/01/09/ali-lessons-learnedI try to spend most of my time living in the present, but in practice, I end up spending a little too much of my time alternately planning for and worrying about the future. Winter break is a time for looking back and gleaning some lessons from the past. In this article I try to articulate some lessons learned from an intense 18 months running my department’s Active Learning Initiative.

In February of 2017, I got my dream job when the Cornell Department of Economics won an internal grant to transform our entire undergraduate core curriculum using evidence-based active learning methods. We had written a comprehensive proposal that detailed the process we would use: pairing teaching-focused postdocs with faculty to transform one class at a time and carefully estimating the impact of the new methods on student learning. Since we started work in earnest that summer, our students have learned a ton, and so have we:

Hire great postdocs. Our two postdocs (George Orlov and Daria Bottan) have been amazing. The key was to look for someone who has a real passion for teaching, has some knowledge of modern pedagogy, and has good quantitative skills. Specialized expertise in the area of the classes they will be transforming is a plus. This kind of candidate is absolutely out there.

Choose the right first class. If I could do it again, I’d pick a class that’s taught as a pure lecture by a faculty member who is open and even excited about trying active learning. This maximizes the probability of seeing real improvements in student outcomes, and it’s always nice to start with some success.

Draft learning goals for courses early and get a broad group of department faculty involved in reviewing those learning goals. It turns out faculty often have strong opinions about what should be taught in a course, even if they don’t normally pay much attention to it.

Communicate clearly with the instructor early on about the transformation process. A lot of measurement happens during the semester (e.g., COPUS, focus groups, student assessment), and it’s important to get instructor sign-off before it starts.

Assess entering students’ skills as best you can at the beginning of the semester. This is critical for controlling for differences when comparing outcomes across classes. We invested heavily in developing an economic statistics assessment that we gave our Applied Econometrics students. We gave our math-intensive introduction to economic statistics students a basic math skills assessment. And we gave our intermediate micro students the same math assessment as well as a micro principles assessment that we developed.

Assess exiting students’ skills as best you can. This is primarily how we measure the impact of our transformation effort. We’ve developed assessments for the Applied Econometrics course as well as the math-intensive intro stats course. Our intermediate micro assessment is still in development. Our plan is to eventually publish all of our assessments so instructors everywhere can use them.

Get demographic data on students. You can either collect your own or (if possible) get access to university administrative data. This is crucial for controlling for differences in student populations as well as estimating sub-population specific effects. There’s evidence that active learning is even more effective for URMs and women, and that using these teaching methods will reduce performance gaps–this data lets us see if that’s happening in our classes.

Create an explicit transformation plan for the class you’re transforming so it’s clear to everyone involved what’s going to be done. People have very strong (and different) ideas about what active learning is.

Communicate regularly with the whole department. The past few months I’ve been maintaining a shared folder of ALI documents for the whole department and sending a monthly update email. We also give occasional 5 minute updates at faculty meetings.

Share what you’re doing and what you’re finding with folks outside the university (e.g., conferences, invited talks and journals). This lends the effort credibility, and that’s crucial when you are trying to convince faculty to do something uncomfortable. Also, once you have good measures of student learning and are gathering all this other data, it enables really interesting research projects. We ended up submitting abstracts for four new projects to the AEA Conference on Teaching and Research in Economic Education (CTREE) this year.

Get the best TAs you can in your ALI classes, especially if you are changing what happens in discussion sections. A TA in an active classroom doesn’t just sit there—they spend a lot of time interacting with students and guiding them in activities.

While we’ve learned a lot and accomplished a lot in our first 18 months, our focus has been on process and measurement. The next 18 months will be about reaping the benefits, and I couldn’t be more excited!

]]>2018-12-21T00:00:00-05:00http://teachbetter.co/blog/2018/12/21/tbp-episode-78This fall Doug and Edward both taught classes of their own. In their latest episode, they reflect on their challenges, what they tried, and what they learned.

Show Notes

1:09⏯ What’s this episode all about? Looking back at the courses we taught this fall. What were our challenges? Podcaster, teach thyself. The challenges of educational research.

4:33⏯ Doug tries to re-work a course he’s taught many times: applied econometrics, the first economics class to be transformed through Cornell’s Active Learning Initiative. They measure in a baseline semester and then try new things in following semesters.

10:44⏯ Edward’s big challenges were: explaining what historians do, helping students avoid making the same mistakes again and again, helping them to write better, re-organizing the course while it was running, and rethinking assessment.

14:37⏯ Doug wasn’t changing the technology in his course: a laptop and an iPad work together so Doug can avoid being stuck behind a lectern. The iPad hybridizes the chalkboard and slides. Doug’s challenges were two-fold:

Doug used quizzes to encourage pre-reading before the lecture, but that wasn’t working well.

He used in-class “invention activities” to help students understand why the econometric methods were valuable, but sometimes students just drew a blank–so probably some scaffolding was in order.

20:49⏯ Edward also needed to do a lot of scaffolding, as it turned out. His students needed to find and then analyze primary and secondary historical documents, journalistic and academic criticism–as well as other sorts. To scaffold the distinctions, he gave quizzes on primary vs. secondary and journalistic vs. academic, and he started by giving the students documents to use. Then rethinking assessment based on “habits” rather than criteria: writerly, scholarly, critical, and historical. And scaffolding these habits by expanding the rubric week by week.

31:06⏯ Doug and his postdoc (George Orlov) refined their invention activities and plan to publish them this year. Sidetrack #1: Why do we create our own courses and course materials? Why don’t we share and publish materials to be reviewed and evaluated? Doug also gave up on pre-reading quizzes before the lecture and moved the quizzes and reading to after the lecture–and he made the quizzes harder. Sidetrack #2: Using quizzes for a very focused purpose: to help the students go deeper into the material.

36:01⏯ The surgical use of quizzes. Edward’s good news: The students’ writing improved dramatically. Once an auto-graded quiz is written, you can use it forever. You can configure the quiz so the students don’t simply search for the right answers. Three bits of bad news:

Re-building the course as you go leads to instructor errors. Edward learned a few tricks about managing versions in Canvas. Letting students know when you’ve made a mistake.

Challenges using ‘outcomes’ in Canvas: you can’t extract the performance along the dimensions you use for grading.

Rebuilding the course as you go gets you behind in grading. (It turns out, though, the correlation amongst student grades is high.)

46:09⏯ Sometimes our assessment just consists of telling students they did the prep work correctly. Using points rather than letter grades. You can’t assume the students know how to do some of these basic things.

50:11⏯ Focusing on what students achieve rather than whether or not they do all the work. Encouraging students to do the work when they may not have the greatest study skills. The final project or exam may show what the student learned–but it may not. That’s why our grading/weighting schemes are what they are.

57:17⏯ What worked for Doug: The revised invention activities worked much better. Some students responded to clicker systems from their dorm rooms. The quizzes were much harder–but the grades were higher than expected. The students had time to work on them, and they wanted to get high scores. The surgical uses of quizzes (again). A sample Evernote Dossier on BLAZING SADDLES

1:01:14⏯ Edward would like to use peer assessment, give space for students to revise their writing, and separate out dates and times from other forms of content (to avoid misunderstandings about due dates).

1:04:44⏯ Doug wants to focus next on group work and what kinds of standards and criteria students can use to rate each other, on ways to improve the students’ poster presentations, and on diving into the assessment data to see where the teaching could be improved.

1:09:05⏯ Try debriefing your semester with a colleague over coffee. What challenges did you face? What did you do to address them? What worked and what didn’t? What do you plan to do next? Signing off with “We Three Kings” recorded by Ben Devries

]]>2018-12-04T00:00:00-05:00http://teachbetter.co/blog/2018/12/04/sweating-the-detailsAs regular readers know, I’m a huge believer in iterative refinement. It’s important to try new things, and it’s equally important not to give up too soon. Pretty often, even with bad ideas, there’s a glimmer of promise that just needs to be nurtured. My first poster session was good, but they’ve become so much better over the years through incremental improvement.

Last night we had an awesome poster session with my 86 Applied Econometrics students, and I thought it would be useful to write down some recent lessons learned:

Figure out a good location early and reserve it. The room I’ve used for the last couple years is currently undergoing renovation, but the Cornell College of Agriculture and Life Sciences (CALS) graciously let me use their big fancy conference room in 401 Warren Hall.

Figure out how posters are going to be displayed early on, and if tech is involved, test it well before the big day. I far prefer providing big displays for students to show their posters electronically to requiring them to print out paper copies. The ClickShare stations we use are getting a little long in the tooth, but they’re mobile, easy, and mostly reliable. Another big thanks to CALS for sprucing them up and letting us use eight!

Organize the space. You absolutely need lots of open room for walking around. Place the posters around the edge. Provide some, but not too much seating. Provide a place to put coats and bags. Have tables for the food and drinks. Don’t forget trash cans.

Keep track of what you buy for food and drinks and make a note at the end about what you over or under bought. This time around egg nog was a big hit!

Get to the venue early—Moving furniture, setting up the food, and getting the music and lighting right takes time. It sure is nice when you have two super-competent teaching assistants to help.

Kick off the festivities with a very short speech about how excited you are to see what folks have been working on. Lay out the schedule and thank everyone for coming.

The first few poster sessions I did required a fair bit of focus on the logistics during the event. Now that I’ve got the routine down, I can spend almost all my energy on the poster projects themselves. I try hard to have a real conversation with every group about the substance of their project. My TAs do the same thing.

The session is always organized as a sequence of three rounds where a third of the students present in each round while the rest circulate. I noticed last time around that many students were arriving for their own round and then leaving, and that meant the audience was a little thin. This time I strongly encouraged students to attend at least one round in addition to their own, and it made a big difference.

I always have students vote (using a Google form) at the end of each round for best project and best poster. Last year I was shocked to see evidence of a whole lot of voting fraud. For example, there would be batches of several votes placed within seconds of each other for the same groups. This year voters had to identify themselves and vote for a group other than their own. Unless they’ve become far more sophisticated, this eradicated all the bogus voting.
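Incidentally, the telltale pattern–several votes for the same group landing within seconds of each other–is easy to check for mechanically if you export the form responses. Here’s a minimal sketch (the function name, data format, and thresholds are all illustrative, not part of any real tool):

```python
from datetime import datetime, timedelta

def flag_suspicious_batches(votes, window_seconds=10, min_batch=3):
    """Flag groups that receive a batch of votes packed into a few
    seconds. `votes` is a list of (ISO timestamp string, group) pairs,
    as you might pull from a form's response sheet (hypothetical format).
    """
    by_group = {}
    for ts, group in votes:
        by_group.setdefault(group, []).append(datetime.fromisoformat(ts))

    suspicious = set()
    window = timedelta(seconds=window_seconds)
    for group, times in by_group.items():
        times.sort()
        # Slide over sorted timestamps looking for min_batch votes
        # that all fall inside one short window.
        for i in range(len(times) - min_batch + 1):
            if times[i + min_batch - 1] - times[i] <= window:
                suspicious.add(group)
                break
    return suspicious
```

Of course, requiring voters to identify themselves (as described above) attacks the problem at the source; a check like this only tells you whether the pattern is still showing up.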

Was the poster session perfect? No way! But it was absolutely more productive and fun than ever. Given the logistics are in pretty good shape, I think I’ll be focusing on improving the actual content of the posters next time around. Progress!

]]>2018-11-21T00:00:00-05:00http://teachbetter.co/blog/2018/11/21/tbp-episode-77Jose Vazquez has been teaching economics at the University of Illinois for 14 years. He teaches one of the largest introductory microeconomics classes in the world every semester with more than 900 students. He also teaches one of the biggest intro micro MOOCs in the world: His Coursera course has had more than 100,000 students register in the last five years. He thinks deeply about how best to use his class time and what he wants students to do outside class. Our conversation covers a wide range as Jose explains what still excites him about teaching and how he got to where he is. We also talk about the joys of active learning, the importance of motivating our students, and the benefits (and costs) of peer assessment.

Show Notes

0:00⏯ Introducing and welcoming Jose Vazquez. Trying to focus on peer assessments. How Jose came to focus on teaching economics: bringing together research, social benefit, and teaching.

5:44⏯ The illusion that active learning methods means giving up control: if anything, it requires a high level of comfort. Putting the lectures on video–and putting his best economic jokes in the videos.

9:19⏯ Active learning is not just students working problems together. Some short pieces of lecture are usually needed. Jose: “You need to know the kinds of questions and misconceptions students are going to have.” Helping students identify their problems–so they can correct them. Comparing lecturing and lecture capture with listening to Miles Davis live and on CD: One’s better, the other is still pretty good!

15:56⏯ There’s still a lot we don’t know about classroom instruction. But we are starting to have the tools to collect the data we need. Teaching large classes also gives you more data.

19:41⏯ In 50 minutes, there’s not a lot of cognitive change. It’s wiser to focus on motivation. The illusion we sometimes have that they’re learning when we’re talking. Plus a similar notion that there’s one best way of expressing any idea. Uta Hagen writing a book that can’t be misunderstood.

25:10⏯ Benefits and challenges of peer evaluation: students giving each other either A’s or F’s. Using a very detailed rubric for the purposes of evaluating the results.

32:32⏯ Simplifying the rubric by getting rid of rows and columns: using a holistic rubric with only three levels. In favor of simpler rubrics: Doug’s rubrics for peer evaluation of projects.

38:24⏯ Can we reduce teaching to a science? It’s not just about learning objectives: it’s also about inspiration. Not everything can be quantified, and the process itself matters.

52:22⏯ Quality control in peer grading with Moodle: giving students a sample to grade, or having someone spot check the grades. Asking students to explain their score improves the results. (Also: students grading a few pieces of peer work reduces the effect of fatigue on grades.)

]]>2018-09-25T00:00:00-04:00http://teachbetter.co/blog/2018/09/25/tbp-episode-76Justin Cerenzia teaches history at St George’s School in Middletown, Rhode Island. We don’t usually have guests from high schools on the show, but Justin is no ordinary high school teacher. He’s also the director of the school’s teaching center and someone who pays keen attention to research on pedagogy across the board. In this episode we talk to Justin about how teaching methods and ideas being popularized in STEM fields can translate to the humanities.

11:08⏯ Historians learning from chemists about teaching. At St. George’s, the classrooms aren’t all that different from each other. How Justin got there: his education and teaching experience.

14:59⏯ Some ideas work across disciplines: the micro/macro distinction in economics and history. How the scope of the time frame helps make different things significant. The History Manifesto by Armitage and Guldi.

18:47⏯ Is flipping the classroom a big deal in the humanities? Justin thinks about how best to help his students learn.

21:25⏯ Stanford’s Sam Wineburg’s research on history education is critical of the sage-on-the-stage approach. Do Edward and Justin actually use learning objectives in the humanities? Oh yes.

26:52⏯ Can humanists use multiple choice questions? The AP History exam does–so Justin does. And a USC law school professor (Tom Lyon) uses clickers in a law school course to help students practice legal thinking live in class.

40:43⏯ How Justin teaches metacognition. Using non-digital games in the classroom to help students define concepts. When a student fails to learn is a good opportunity to think about metacognition. Humanity and teaching.

48:50⏯ Teaching and motivation. The ABCs of How We Learn. Intrinsic motivation vs. self-efficacy. Some ups and downs of students collaborating with peers.

]]>2018-09-10T00:00:00-04:00http://teachbetter.co/blog/2018/09/10/ipad-unleashed-videoSince writing about how I use my iPad Pro to control and annotate my presentations while walking around the classroom, I’ve had several readers request a video tutorial. Today your prayers are answered! George Orlov handles the Windows side and I handle the Mac side in this video:

Please do let us know if you have any questions at all in the comments below.

]]>2018-08-08T00:00:00-04:00http://teachbetter.co/blog/2018/08/08/tbp-episode-75Outside observers can give instructors valuable formative feedback, and with the right observers and the right instruments, classroom observation can also be a useful (if incomplete) measure of teaching quality. Our guest, Marilyne Stains, teaches in the Department of Chemistry at the University of Nebraska in Lincoln where she specializes in chemical and science education. She has used a range of measures of instructor and student behavior in her research and recently co-authored the largest-ever study of STEM teaching practices that analyzed classroom observation data for more than 2,000 classes. In this episode, we discuss the pros and cons of a variety of classroom observation techniques from reliable objective measures like COPUS to completely unstructured note-taking.

Show Notes

0:34⏯ Welcoming Marilyne Stains.
What are people doing in the classroom? The culture of privacy around teaching.
Faculty don’t automatically observe each other. But there is a lot to gain–for both the observed and the observer.

5:25⏯ COPUS (Classroom Observation Protocol for Undergraduate STEM) is an objective measure of instructor and student behaviors. It’s NOT a measure of teaching quality.
RTOP (Reformed Teaching Observation Protocol) measures the degree to which the instructor is doing inquiry-type teaching. The data collection gives the opportunity to, for example, measure the impact of faculty professional development. And these may be used for formative teacher assessment. COPUS has high inter-rater reliability, and it’s easy to train undergraduates to use it.

8:28⏯ Marilyne uses COPUS for research on teaching.
But she has colleagues who use COPUS for faculty development and formative feedback.
How might you observe a peer or be observed for faculty development? Have a conversation first.
Doug has been observed using COPUS ten times with an interval between, and he got valuable data. You may think you’re active, but you’re not–or vice-versa. COPUS is objective.

11:58⏯ There are lots of things going on in a classroom that COPUS doesn’t capture.
The OTOP (Oregon Teacher Observation Protocol) is also useful as a more qualitative measure.
It’s a problem in general when the observer doesn’t know much about teaching.
Summative teaching observation evaluation is a tough nut to crack.
We often assume that experience teaching translates into knowledge about teaching. Newer faculty might actually have more exposure to how learning works.
And senior faculty can ‘freak out’ when they visit an active learning classroom. Some faculty can imagine that active learning is less challenging: they may see it as entertainment.

17:28⏯ Humanities teachers have done active learning for a long time. It’s not just seminars, it’s group work, projects, peer instruction, etc. Faculty who want to do active learning may not know what to do. POGIL (Process-Oriented Guided Inquiry Learning) for chemistry, biology, and math is one set of resources. But seminars can still be run in a teacher-centered way. It’s the bad Socratic Method: “Everyone Socrates talks to is stupid.” Classroom observation doesn’t measure any of the teaching that happens outside the classroom.

22:43⏯ We may spend more time teaching outside the classroom than in–but we don’t know.
Many organizations set standards for and evaluate the quality of online courses, but there is no equivalent for face-to-face courses.

25:23⏯ Who should be doing the classroom observation? Only senior faculty observing pre-tenure junior faculty is just not enough.
Options include faculty outside the department from neighboring disciplines.
There are issues with observers with no disciplinary knowledge. So ultimately we may need different people who bring different lenses.
There are dangers in faculty feeling judged by colleagues who do research on teaching.
Observation is ideally one part of a conversation and driven by faculty questions and goals.

32:34⏯ Doug’s experience observing at another institution as part of a tenure review.
Marilyne gave Doug the NSF report on teaching quality evaluation, which referred to the OTOP (Oregon Teacher Observation Protocol), and Doug found that useful.
Some protocols focus on behaviors, others on how students participate in producing knowledge.

36:23⏯ COPUS is built on the TDOP (Teaching Dimensions Observations Protocol).
Marilyne had trouble establishing inter-rater reliability with TDOP and RTOP and found that easier with COPUS.
Doug and Edward go meta on observation protocols.

38:55⏯ Marilyne has been using COPUS to monitor change before and after participation in faculty development programs–and longitudinally.
They found that faculty behaviors do change, although the new behaviors show a downward movement over the long term.
Doug and Marilyne discuss experimental design, and Marilyne shares the actual process of recruiting research subjects.
Those who DON’T sign up have higher self-efficacy about teaching–which is probably why they didn’t sign up.
But the workshop raised the participants’ self-efficacy about teaching in just two days to the level of non-participants.
They tried to create communities to maintain the teaching behaviors, but they have not had luck.

45:05⏯ COPUS adoption across whole departments seems to be rare.
Doug suggests COPUS could be used to compare departments, not evaluate instructors.
There is little research to link COPUS behaviors to learning outcomes. UBC has seen trends.
The data may be noisy, like the correlation between a certain vitamin and certain health outcomes.
Doug worries about teachers trying to ‘game’ COPUS by pseudo-active learning.
Marilyne emphasizes that COPUS does NOT measure teaching quality.
People ask Marilyne what good teaching is, and she says there are too many factors.
Qualitative evaluation is hard–as you recognize when grading papers.

52:13⏯ A mistake in the faculty development classroom. Faculty don’t credit education research. “That’s not true in my classroom.” Using principles and ‘case studies’ instead of data.
Faculty believe in ‘personal empiricism’: “I tried it once, and it didn’t work.”
Using personal anecdotes to think about learning.
Building a culture of collecting and analyzing data in order to talk about learning.
The case for student portfolios.

]]>2018-07-10T00:00:00-04:00http://teachbetter.co/blog/2018/07/10/math-timeMy DBER journal club recently read “A Mathematician’s Lament.” While I couldn’t attend the actual discussion, I really enjoyed the essay. The gist is that math is practical, but it can also be a creative art form, and this is completely ignored in the vast majority of K-12 math classes. Kids have no exposure to math as play beyond gamified drills of arithmetic facts. Working mathematicians, on the other hand, don’t just know a whole bunch of definitions and algorithms–they actively create and try to see a beautiful abstract world in ways no one has before. Mathematicians have a lot more in common with painters and sculptors than they do with accountants or even engineers.

While reading the essay it occurred to me that this is exactly what the Math Club at my daughters’ elementary school brings to the table. Tom and I present new concepts and problems, and we let the kids play with them. The journey and the attitude are way more important than getting to the solution.

Playing with math is a pretty common activity in my home too (see Potty Math), and over the last year the best example is something we call Math Time. Several times a day one of us will spontaneously yell out “Math time!” Everyone else stops what they are doing to stare at the (digital) clock to see how the time can be turned into an equation.

At the beginning, the equations involved simply addition and subtraction:

3:12 → 3 = 1 + 2 or 3 - 1 = 2

Then we added multiplication and division:

12:34 → 12 = 3 x 4 or 12 / 3 = 4

These days it can get a little crazy:

3:29 → 3² = 9

or even better:

12:05 → 11:65 → 11 = 6 + 5

Whenever I think we’ve played it out, one of us comes up with a new variant. And when the math time well does eventually run dry, we will find something else because the math well will never run dry.
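If you want to let a computer play along, the digit search is simple to sketch. Here’s a toy version (the function name and the particular set of operations are just illustrative; our family rules keep mutating) that finds the basic addition/subtraction and multiplication/division equations in a clock reading:

```python
def math_time(hh, mm):
    """Return the simple equations hidden in a digital clock
    reading, Math Time style: e.g. 3:12 -> "3 = 1 + 2".
    """
    # Split the display into its digits, e.g. 12:34 -> [1, 2, 3, 4]
    digits = [int(d) for d in f"{hh}{mm:02d}"]
    found = []
    if len(digits) == 3:          # one-digit hour, e.g. 3:12
        a, b, c = digits
        if a == b + c: found.append(f"{a} = {b} + {c}")
        if a - b == c: found.append(f"{a} - {b} = {c}")
        if a == b * c: found.append(f"{a} = {b} x {c}")
        if b and a / b == c: found.append(f"{a} / {b} = {c}")
    else:                         # two-digit hour, e.g. 12:34
        ab = digits[0] * 10 + digits[1]
        c, d = digits[2], digits[3]
        if ab == c + d: found.append(f"{ab} = {c} + {d}")
        if ab == c * d: found.append(f"{ab} = {c} x {d}")
        if c and ab / c == d: found.append(f"{ab} / {c} = {d}")
    return found
```

Running it on the examples above, `math_time(3, 12)` finds both `3 = 1 + 2` and `3 - 1 = 2`, and `math_time(12, 34)` finds `12 = 3 x 4` and `12 / 3 = 4`. The exponent and borrowing variants are left as an exercise.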

]]>2018-06-20T00:00:00-04:00http://teachbetter.co/blog/2018/06/20/tbp-episode-74Monroe Weber-Shirk has taught engineering at Cornell for 24 years, and in 2005 he started the AguaClara Cornell program where he works closely with local partners, graduate students, and up to 80 undergraduates at a time. Together they develop, implement, and maintain sustainable water treatment facilities in multiple developing countries. It’s an incredible model of deeply engaged learning at scale, and in this episode Monroe tells us how it works and how he got here.

Show Notes

0:41⏯ Welcoming Monroe Weber-Shirk. The AguaClara Cornell clean water project and how it started. Monroe’s experience with education abroad: Honduran refugee camps. Goshen College (Monroe’s alma mater) requires all its students to spend some time studying outside the US. A long-held belief in the importance of experiential and engaged learning.

10:27⏯ How the student experience has changed. Scaling engaged learning from 25 students to 80 by finding student leaders. This is not a one-semester project: students may participate for four years. Students work in teams; the teams have leaders; more experienced students become experts and serve as research advisors.

15:02⏯ Learning about leadership. The team leads organize the syllabus, design and create tutorials, and deliver content. The instructor only gives two lectures per semester. Two lab sessions per week, sometimes a Monday night lecture, one symposium of student presentations. All the students are in upstate New York designing solutions for people in Honduras. Why not bring students to Honduras? The importance of partner organizations and student accountability.

22:40⏯ What do employers say they want in engineers? The student is not a widget. Going beyond a mechanistic view of how engineering solves problems. Both conventional and low-tech/appropriate designs have their problems. Complex systems fail. e.g., Jacques Tati’s Monsieur Hulot character often battles with modern technology. Even “appropriate technologies” like slow-sand technology are expensive and don’t work with dirty water. Monroe’s students design non-electrical systems with no moving parts, save for levers.

29:58⏯ Solving the right problem and designing solutions that can be maintained. The first plant failed. The second plant is ten years old and has been upgraded multiple times. Employers also want teamwork and leadership skills. The typical college course has a false model: that the student is a tabula rasa. Teaching servant leadership. Innovation requires failure. Some students go into careers that involve social justice. More people don’t have safe drinking water now than at any other time in history. Flint, Michigan and Ithaca public schools.

38:47⏯ The Big Question: Is there still a place for didactic lecture courses? The challenges of teaching in domains where the existing knowledge is not sufficient: Monroe’s focus is flocculation. Edward’s intro psych course and Doug’s two graduate micro-economics courses.

48:54⏯ Should we teach problems? Or theories? Monroe’s students have trouble applying theories they know well to a practical problem. Edward on teaching creative activities.

52:10⏯ Monroe’s teaching fails? Every course is an experiment. His students want better explanations in his lecture slides. On taking a knee during a lecture while a protest was taking place. Edward’s motto: It helps to care.

2018-05-03 · http://teachbetter.co/blog/2018/05/03/tbp-episode-73

Mac Stetzer from the University of Maine Department of Physics and Astronomy is an active physics education researcher with lots of experience teaching teachers how to teach physics better. In this episode he shares his lessons learned working with undergraduate learning assistants, graduate student teaching assistants, and teachers at the K-12 level.

Show Notes

0:35⏯ Welcome Mac Stetzer. The benefits of comparing teaching K-12 and college. Flashing back to Episode 11: Teaching Undergraduates and Preschoolers with Carla Horwitz. In K-12, professional development is required, while in higher ed it’s optional. Being a resource for more colleagues without pushing things on them. Mac’s position and UMaine’s faculty incentive grants. Crediting teachers and undergraduate ‘learning assistants’ for what they know about teaching and learning.

9:38⏯ How Mac discovered the study of the teaching of physics. The importance of teachers and students being comfortable with not knowing the answer. Teachers can also model misconceptions. The importance of listening to learners. The University of Washington Physics by Inquiry approach. Alan Schoenfeld’s research on metacognition in math education: when the instructor never makes mistakes, students get a false sense of how problem-solving works.

18:24⏯ Supporting students in the experience of struggle and making incorrect predictions as part of learning. Eliciting student errors without making the students feel incapable. How Doug handles this on the first day of the term. Doug’s high school math teacher proves 1=2.
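The “1 = 2” proof mentioned above is almost certainly some variant of the classic division-by-zero fallacy; the episode doesn’t spell out which version Doug’s teacher used, but the standard one runs like this:

```latex
\begin{align*}
  a &= b                 && \text{start with any two equal numbers}\\
  a^2 &= ab              && \text{multiply both sides by } a\\
  a^2 - b^2 &= ab - b^2  && \text{subtract } b^2 \text{ from both sides}\\
  (a+b)(a-b) &= b(a-b)   && \text{factor both sides}\\
  a + b &= b             && \text{divide by } a-b \text{ (invalid: } a-b = 0\text{)}\\
  2b &= b                && \text{substitute } a = b\\
  2 &= 1                 && \text{divide by } b
\end{align*}
```

The error hides in the division by \(a-b\), which is zero. The exercise works in a classroom precisely because students have to hunt for the one illegal step, which models the kind of productive error-finding discussed in this segment.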

22:32⏯ What is physics in K-12? Balancing, ramps, and tracks. Using the Physics by Inquiry approach. How to show and experience the concept vs. merely knowing the equations. Students who teach learn more, because they need to go beyond “this is the equation.” Sometimes teachers only know the takeaway concepts, not the underlying proof. Using analogies to teach and reason. Good teaching is good listening.

33:40⏯ Models: closing the gap between the current state and the goal state. But you need to know where the learner is. Leveraging what the students already think. Good teachers have good models of cognition and meta-cognition. Interpreting students’ responses can actually be tricky.

40:13⏯ Aligning the faculty development experience with the methods being taught. Seeing everyone as bringing something unique. Tailoring instruction in small groups. And allowing people to bring their own experiences to problem-solving.

46:24⏯ Reverting to didactic methods because you’re less comfortable with the material. So teaching also involves tolerating one’s own discomfort, which brings you closer to the student’s experience. That often gets lost.

51:30⏯ A short answer to a large question. Things you must include in a faculty development program. Get good learning materials. Create a climate where failing is okay. Go in depth: don’t just skim through it.