“I Tried It and It Didn’t Work!”

Someone sought me out recently to say that she’d tried something I had recommended and it didn’t work. “You need to stop recommending that to people,” she told me. “How many times did you try it?” I asked. “Once … and the students hated it,” she responded. This rather direct feedback caused me to revisit (and revise) a set of assumptions that can create more accurate expectations when implementing new instructional approaches.

No strategy, policy, activity, or assignment “works” the same way for every student.

Students experience course events and activities differently, depending on their background knowledge, prior learning experiences, and what happened before class at home, on the job, or over the weekend. For some of them the new instructional activity will be one of those great learning experiences, for others it will be satisfactory, and for some it will be less than either of you had hoped. It's hard to predict how many students will fall into each of these categories or the spaces in between.

No strategy, policy, activity, or assignment “works” for every teacher.

How you plan and execute a new activity matters. If you spend time on the design and implementation details, it will likely be a great learning experience for more students. Doing it well also means doing it your way. What you heard or read is how someone else made it work. Planning for an instructional change should include figuring out what will make it work for you. If you try something and it doesn't work, it could be a bad idea, but it could also be that it doesn't fit comfortably with how you teach.

No strategy, policy, activity, or assignment “works” in every course.

What you teach also plays a role in what you can do and how successful it will be. Some content is easier to discuss. Some lends itself to demonstration. Some content can be mastered in groups. It's useful to consider how the configuration of content shapes the instructional method. The shape of what we teach doesn't lock us into a predetermined set of approaches, but it does make some things easier to implement than others. It's important to consider what works in light of what you teach.

Make predictions but don’t be surprised by the outcome.

You want to select those new strategies, policies, or activities with high probabilities of success. And you want to enhance those chances further with careful planning, by preparing students, and by giving it your best effort. But don't assume doing everything right guarantees success, and the implication here goes both ways. There's also merit in trying some things that don't seem all that likely to work. Instructional changes often produce surprising results.

No new approach is the best it can be the first time you try it.

How many times should you try something that doesn't work? Do we tell students to give up after one try? And once is definitely not enough if the decision to abandon is based on student feedback focused on their feelings. New approaches can look good on paper and turn out to be not so good in practice. But often they can be fixed by fussing with the design details and trying again.

The success of any strategy, policy, activity, or assignment should be measured by how well it promotes learning.

We can't ignore student response to what's happening in class. Nobody wants to teach a class where students “hate” everything, and a class where students “like” everything is just as problematic. We do have professional standards to uphold. But how students feel about an instructional approach shouldn't be the main measure of its success. Did the new approach get students dealing with content and developing relevant skills? To answer that question, teachers must look at their learning outcomes and make some predictions about how the change will help students meet those outcomes. Then they must decide what kind of evidence indicates that learning has occurred. Lastly, they must collect and analyze the evidence. Then, and only then, is it an appropriate time to decide how successful the change has been. That way, any “it didn't work” conclusions can be more fact-based than feeling-based.

12 comments on ““I Tried It and It Didn’t Work!””

Great little article. And a heartfelt "amen!" for Dr. Weimer's rejection of student "feelings" as a primary indicator of success. But maybe this is a good forum to turn the underlying question around a bit. So much research time and publishers' ink has been wasted on this very easily answered question.

What works? Damn near anything works . . . sometimes, with some students, with some classes, with some teachers. Heck, stark terror works very nicely at the Parris Island Marine Corps Recruit Depot. Systematic brutality and child abuse worked very effectively for ancient Sparta, which never had walls . . . it had Spartans. Needless to say, I don't recommend these approaches for a twenty-first-century public school.

But let's ask a different question: what works with this group, for that objective? This requires first of all knowing your kids, and knowing the subject. "Big Data" is unlikely to help much with these questions. The current craze for data-based decision making has great potential at the policy and logistical levels. But in the classroom, teaching and learning remain intensely personal: an exchange between human beings who somehow have connected. This can be quantified at the assessment phase of teaching, and usually should be.

But let's not fool ourselves that reporting thousands of data points per kid for masses of kids will solve our problems. With such masses of data, the tendency will be to look at correlation only, then make blanket statements about treatment X working with N percent of the population, and follow that with yet another top-down dictation that might work for N percent but doesn't work at all next Tuesday with YOUR kids. Note the capitalization: they are YOUR kids. Don't abdicate your responsibility to the number crunchers.

""Big Data" is unlikely to help much with these questions. The current craze for data-based decision making has great potential at the policy and logistical levels. But in the classroom, teaching and learning remain intensely personal."

These are the most important sentences ever to appear on Faculty Focus.

I have read this in a number of articles that give advice on implementing e-portfolios. The evidence that e-portfolios can enhance student learning strongly suggests that this teaching and learning strategy can work. But the astute authors/researchers/teachers will state that its success depends on how it is implemented. The best teaching and learning strategy in the world will not necessarily produce good teaching and learning. It still depends on how the instructor has prepared the learning environment and whether or not the students wish to learn.

The most useful variable I found was the degree to which a student was equipped to be self-directed.

Students with background knowledge, learning skills, and motivation can take an assignment and run with it. Many of them can learn from anyone, in any kind of assignment.

Other students need more direction from the teacher in acquiring prior knowledge, learning skills, and in staying focused and motivated. Those learners need, in varying degrees, closer supervision and more interaction.

How do you find out a student's degree of self-direction? Through short graded assignments early in the course — assignments designed to help you learn more about how much (or little) close supervision your students require. And which students.

If 20% of the class can't read Chapter 1, for example, and arrive able to pass a quiz on its contents, you may need to place them in a different section where you show them explicitly what this course requires of them and how to achieve that. By giving unforgiving little tests early in the term, you can establish the standards for the course and identify students who are motivated enough to overcome their deficits and do well.

"Make predictions but don’t be surprised by the outcome."???
I'd change that to "…but expect the unexpected."
A couple of other adages seem to relate here:
[modified] "You can't teach every student every time with every strategy but…"
and
"Try, try again."
My magic number is at least three times, at least three resources, at least three examples…
Using teaching strategies is like cooking. The best cooks do not follow the recipe exactly. So too, here we are advised to "do it our own way." There certainly is no rule that says we can't make adjustments based on what may work better for our own specific circumstances.
Great advice. Thanks.

If we really believed that 'if it doesn't work, we should abandon it,' the lecture method would be ancient history. Learning styles and multiple intelligences theories tell us that one size fits nobody. Teaching, learning, and testing cannot be standardized.

I agree with Wade's comments. You have to take into account the type of student, the internal representation system each student uses most, and what you want to achieve from this question, this class, etc. I find some of my best "aha" moments have come when a technique I used caused students (or me) to see things in a different light. This often happens as we are discussing or presenting. I make notes and ask students for their opinions when they do the student surveys for each class and at the end of term.

The student surveys I get back for English lit are often contradictory: for every student who finds one strategy effective (e.g., peer review, class discussion, debate, journal writing, quizzes, group/individual work), there's an equal yet opposite student who finds the same approach a hindrance to their learning. The key is to rotate methods, allow students to choose between options when possible, and keep class time active. I also change my curriculum every single semester, because I'm always learning new material and new approaches myself. When it's fun for me, my students get more interested.