Author information

John Jensen is a licensed clinical psychologist and education consultant. His three-volume Practice Makes Perfect series is in publication with Rowman and Littlefield, education publishers. The first of the series, due in January, is Teaching So Students Work Harder and Enjoy It: Practice Makes Perfect. He welcomes comments sent to him directly at jjensen@gci.net.

Picture a state education administrator who wants to transform his state’s schools directly. A phone conversation might go like this:

Charlie (administrator): Hey John, how are you doing?

John: Pretty well. Can’t complain. Yourself?

Charlie: Give me a minute and I could find something to complain about, but I’m glad we could talk.

John: So tell me what you folks want to do.

Charlie: It’s the basic thing, increase learning across the board. I sent you our stats and you said you had a direct approach to suggest. Tell me about it.

John: I believe that a single policy, one request from the state level, can transform learning state-wide, but it depends on asking people to change something they do. Change in outcome depends on change in effort. Many districts continue doing basically what they do already and wonder why they don’t improve.

Charlie: I understand. It’s easy to get swallowed up with issues that are really out on the edge of the learning process.

John: If the focus people adopt doesn’t translate directly into altered student behavior, nothing changes. And if the behavior also doesn’t generate long-term learning, again little changes. Long-standing research has identified the activity linked best with permanence in learning.

Charlie: Instruction is delivered in lots of ways. What are you saying?

John: Could we divide the ways into two categories? First, initial practice obtains the knowledge, and later distributed practice holds onto it. You learn something, let it rest a while, bring it back up to mastery, let it rest a while, bring it back to mastery, and so on. Teachers might object that they don’t have time to return to prior material, but good teachers find that prior material is much easier to refresh than to master at first, so the time needed drops rapidly. Second, material learned well becomes a matrix to which subsequent learning readily attaches. New learning quickly finds a niche when the overall mental field is already stocked with well-mastered knowledge offering many links of association.

Charlie: Wouldn’t it be complicated to have everyone returning constantly to previous material on top of moving ahead with new material?

John: If you want mastered, permanent knowledge, you at least ask for it and let teachers and students decide how best to use their time to accomplish it. Once we have a clearly identified objective, we can begin removing the obstacles. If you know what you want but you never ask for it, who’s to blame? One approach, however, could solve the problem cleanly. It involves a different way of using the tests you already administer to guarantee distributed practice.

Charlie: We have such a public argument already about testing that I’m wary about adding another reason to make it contentious.

John: Contention dissolves quickly when something really works. Isn’t the argument over testing about whether it actually helps or hurts? I suggest instead that we can use it in a way that stimulates distributed practice. Today’s design practically demands cramming followed by discard of one subject after another throughout the entire curriculum.

Charlie: How do we “guarantee” that? What do you mean?

John: Think about the logical response to the structure of learning. You delineate subject matter carefully and then schedule a test for it at a particular date and time. What’s the only reasonable response?

Charlie: You study that specific material right up to the time of the test.

John: Right. Inescapably you cram a limited array of knowledge. Then because a different subject is imminent or you just had a “final,” you drop the subject for as long as you can get away with. You set it up for students to do that. Isn’t that obvious?

Charlie: What could be an alternative?

John: If you want students to study over a longer period of time, tell them you’ll test them unannounced over a longer period of time. Behavior follows information. Stop scheduling the test. Say to them, “You might be tested on this material anytime in the month.” Hearing that, what do they do? They would probably bring the material up to mastery as quickly as possible, and then maintain it there with distributed practice so they are always ready.

Charlie: There could be scheduling challenges.

John: Here’s how to resolve them. Use all the section-level tests teachers create as they go along, which together essentially contain the entire curriculum. In other words, if students master these tests, they know the entire subject beginning to end. A given test might not even be very long; it could be something you administer in fifteen minutes. A test doesn’t need to “cover” everything in order to stimulate study that does cover everything. To take up the least time, number each section-level test as you proceed through the semester, perhaps thirty of them. Place the same quantity of numbered counters in a bowl. Every week, randomly draw the number of a test to administer, reaching all the way back to the start of the year (or the prior year if you want), and also draw a day of the week from a cup. When students walk into class that day, you say, “Today we start with a brief exam on (e.g.) section 13, Chapter 2. Please put away your books.” Administer the test drawn and then proceed with the current lesson. Return the counter to its bowl, so it might come up again. Students need to keep fresh in mind all the sections represented by the counters that could be selected.
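The counter-drawing routine John describes can be sketched in a few lines of code. This is only an illustrative simulation: the function name, the thirty-section count, and the weekday list are assumptions for the example, not part of the plan itself.

```python
import random

WEEKDAYS = ("Mon", "Tue", "Wed", "Thu", "Fri")

def draw_review_test(num_sections, weekdays=WEEKDAYS):
    """Draw one numbered section test and one day of the week at random.

    The counter is conceptually returned to the bowl after each draw, so
    this samples with replacement: any section, however old, can recur.
    """
    section = random.randint(1, num_sections)  # which test to administer
    day = random.choice(weekdays)              # which day to spring it
    return section, day

# A semester with thirty section-level tests, as in the example above.
section, day = draw_review_test(30)
```

Returning the counter to the bowl is what sampling with replacement models: because `random.randint` never removes a number from play, section 13 remains eligible to come up again next month, which is exactly what keeps old material worth maintaining.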

Charlie: How would you assign grades?

John: Your goal is for all to master the material so they essentially obtain an “A.” As they are re-tested on a prior section, which occurs randomly due to your manner of selecting the test to administer, you use their last grade on each section to calculate their overall course grade. Because the tests they do take are a random sample of all the tests they could take, they provide a legitimate picture of students’ overall, permanent learning.
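The grading rule John states (use each student’s last grade on every section) can be sketched as follows; the data structure and function name are hypothetical, chosen only to make the rule concrete.

```python
def course_grade(score_history):
    """Average each section's most recent score to get the course grade.

    score_history maps section number -> scores in chronological order;
    a later re-test simply replaces the earlier result for that section.
    """
    latest = {sec: scores[-1] for sec, scores in score_history.items() if scores}
    return sum(latest.values()) / len(latest)

# A student re-tested on sections 1 and 3; only the latest scores count:
history = {1: [70, 92], 2: [85], 3: [78, 88]}
grade = course_grade(history)  # (92 + 85 + 88) / 3
```

Using only the latest score is the design point: a weak early attempt is erased entirely once the student re-tests well, which rewards maintained mastery rather than first-pass cramming.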

Charlie: It would surely give students a reason to do the distributed practice, and it should show that the state was serious about expecting learning in depth.

John: Do you notice any flaws?

Charlie: The hardest thing is convincing teachers to make a change. They’ve been “burnt” by so many new directions that didn’t live up to their hype.

John: I think that, like most people, teachers are ultimately moved by results. And if you ask a small change from them that obtains big results, they’re much more likely to buy in. You may want to offer a few seminars for them to absorb the plan, but they need no new skills. Only a few changes are needed to carry out the steps.

Charlie: You’ve written about more ideas than just this one. How do they fit in?

John: If people decide they want to obtain permanent learning efficiently, there’s much more to say about how. But our starting point is to declare that that’s what we want. Everything else depends on a decision to structure learning activity so it achieves depth of knowledge.

John Jensen is a licensed clinical psychologist and author of the three-volume Practice Makes Permanent series (Rowman and Littlefield). He will send a proof copy of the volumes to anyone on request: jjensen@gci.net

John Jensen: De-fogging High Stakes Testing, Part 4
EducationNews.org, July 1, 2013, by John Jensen, PhD

In my prior articles (Part I, Part II, Part III), I have suggested: 1) Don’t do anything to children that kills their motivation to learn, 2) gather data about the system by testing children anonymously and unscheduled, and 3) design learning around explanation instead of test-taking.

A fourth issue in the high-stakes testing debate is what we mean by “challenging” children.

The thinking goes like this: If their childhood is too comfortable, if life is too easy for them, they form an unrealistic picture of adult life and need to “man up.” Hard knocks enable them to learn to cope with stresses they will face later, and a test that merely provides information is just a temporary challenge.

“We need the test data,” a correspondent writes. “We need the kids to be invested and work on performing well. We just need to quit acting like it’s something other than data. It’s a weird situation.” She shifts then to children’s broader need.

“But wait,” she continues. “Should there be no tests? Life is a test. What you wear to an interview is a test. Climbing a tree is a test. Kids will have to take SAT, Acuplacer, drivers tests, medical and law school tests. We over-test to be sure, and use the data horrifically, but it’s no favor to them, is it, to protect them from all adversity. Tests are one form of useful adversity. No?”

So we want “useful adversity.” What’s that?

First, here’s harmful adversity: its crux is adversity you cannot successfully handle.

Any counselor, facing people’s problems day by day, commonly encounters thought-patterns that undermine mature behavior. People go to a counselor because they are stuck, and time and again the thinking in which they’re stuck originated with authority figures who attacked their belief in their intrinsic worth, in the efficacy of their efforts, in the likelihood of their eventual success, and so on. While some stresses stimulate, others inflict damage. The result is that people do not successfully handle what they face. So if you’re willing to stress a child, you should have sense enough to stop short of wounding him/her.

If that idea doesn’t elicit a spontaneous “Of course!” from you, you might examine your thinking. The big danger to children is from adults who exercise power over them without caring about its effect. Adults usually want to manage the education system, while the effect on individual students recedes from their screen. Pyrrhic victories become okay. Management controls the scene even though it destroys many students’ interest in learning.

Impact matters. The school is a significant authority to children, representing the first official face of society they ever encounter. Its initial message is “You are who we say you are,” even though a student’s condition is due to influences largely beyond his control. Entering from a home of semi-literate, addicted, criminal, indifferent, or non-functioning parents (or merely from prior classrooms that failed to teach him), he is not likely to place high on the school’s ladder of esteem. The more adversity we apply to him that he cannot successfully manage, the earlier he quits formal education.

If we need data about his district, school, or classroom, as my correspondent asserted, the need is for data describing the group, which we can obtain easily with no damage to the student. We simply administer anonymously, unpredictably, and spontaneously any test that does not offer direct help to the individual’s learning.

Useful adversity. Thinking explicitly for the student’s benefit, however, we place him with an effective teacher who administers diagnostic/formative tests and gets him on a track of instruction optimal for him.

For most people, useful adversity actually lies in a fairly narrow band at any given time although changing the level is not complicated. Every continuum of skill implies successive points of progress. Inescapably, the operative principle is “From the point you’re at now, move to the next.” It’s like crossing a stream by stepping on a few boulders protruding above the water. You don’t skip steps. Skills and knowledge are developed incrementally and assimilated in a process rather than a leap. You take a student “where he is at” and place before him tasks that he can perform with skills he already has, but that move him to a new point.

The core of such a system is arranging challenges a student can meet using existing skills with appropriate effort. A way to understand this is through the concept of an ability periphery.

Imagine standing with your toes on the edge of a field. Out in the field are tasks of varying difficulty. The easy ones are right in front of you. Further out are those you achieve sometimes. On the opposite edge of the field are tasks you fail at every time. The tasks, then, come in three kinds: easy, challenging, and impossible.

The middle band is particularly important: it is where most growth occurs and where struggle and effort pay off the most. Some aspects of the tasks are already assimilated, and using them moves one along the continuum, expanding understanding to the next level. Think of how essential it is to know the formula before trying to do the problems, or how bewildered students are when they don’t understand the terms needed to analyze the scenario. Effort in the mid-range succeeds because the student knows what to do with what is already under his control. Sometimes he fails and sometimes succeeds at the current micro-step, but he’s not confused. He knows where to point his effort.

At the too-easy edge of the ability periphery, this key (pointing his effort) remains unfulfilled. Tasks fall so far inside the reach of his competence that he can practically multi-task around them. Quantities of instructional time in most classrooms go to listening to the teacher—easily. Students may follow a teacher’s words while writing notes to classmates, doodling, conversing in whispers, or transferring information from one paper to another.

Easy tests require little considered thought—which is often the explicit aim of review questions beforehand. We want to make this coming test easy so students appear to be achieving and the teacher and school look good. Little constructive thought is demanded.

Effort that students can perform without even giving it their full attention probably doesn’t teach them much, and it also fails to sustain their interest. As soon as the teacher quits that kind of discussion, no student hand shoots up with a question. Nothing need actually be “taken to heart” nor “learned by heart,” both of which require focused effort.

At the far edge of the ability periphery are all the reasons for which schools classify students as lacking: all the wrong answers, all the test questions they should have known but didn’t, all the material taught too hastily to result in their increased competence, all the presentations they didn’t understand well enough to apply to the assignment given. Tested on this material, students predictably feel helpless. Anxious beforehand, they turn their feelings against themselves. A handful of class leaders do get it, and comparative grades let everyone else know that they should have gotten it also.

How students perceive the meaning of their failures matters fundamentally. Apart from the rare few who understand how to convert mistakes into success, most students cannot claim the learning from mistakes and failures as their own. The motivational margin is narrowest with the sub-group that concerns us. If they have fragile resolve due to a stream of messages telling them that they are not good at this sort of thing, it takes little added evidence to convince them that school is not for them. Something in them gives way and they leave school when they can.

While calling up knowledge by taking a test deepens learning, the limitations of the test remain in circumscribed knowledge, manner of thought, and messages of inadequacy. If we value students’ well-being, we obtain generalized data in other ways than high stakes testing. If we remain alert to the genuine impact on students due to practices that serve adult needs, we will find a different way to obtain the information.

Interest is sustained best, again, by tasks in the mid-range of the periphery where effort makes the most difference. Success is so close by, so near to hand, that just by changing their focus students recognize a corresponding change in skill. That’s the rewarding part. Even though mistakes may occur often, students’ aspirations are in such close reach of their actual achievement that it spurs their willingness to try again, try harder, have courage, assert faith in their ability, and exert the will to continue.

Students’ faith in such a process is essential, the belief that carrying it through day by day despite error and setback moves them ahead. Doing so they can recognize in weeks and even days that they improve, and buy into the idea that the partial success they experience is not accidental but came from themselves. With steady effort, their ability periphery continually shifts its boundaries. A task impossible in one month moves to the easy edge in the next. Once they commit to ongoing practice, their ability expands steadily.

To speed students along, all we need to do is focus them on tasks of moderate challenge—which, by the way, makes the entire Common Core enterprise unnecessary. The problem with U.S. education has not been that we have been uncertain what to teach. The problem instead has been how. If we were poor at teaching version 3 of a dozen alternate curricula, it doesn’t help to shift to version 9 unless we know how to teach it differently.

Group-based data may alert us to where we might expect the group’s mid-range tasks to fall, which we discover by anonymous testing. But to help a particular student, teachers employ diagnostic and formative tests to know where to point their next instruction. A teacher who cares about him as an individual accurately understands his needs by means of observation, interaction with him, and tests appropriate to the current focus of his learning effort. The learning effort is not designed for the test, but vice versa.

John Jensen: De-fogging High Stakes Testing, Part 3
EducationNews.org, June 3, 2013, by John Jensen, PhD

Is there a solid, rational, evidence-based reason to limit high-stakes testing?

In my previous articles about it, I suggested one: students’ motivation to learn is the linchpin of their progress. We should stop doing what undermines it. We should drop tests that discourage, embarrass, pressure, or threaten them.

If we need to collect information about their learning for decisions made outside the classroom, we can leave students anonymous. To make clear that a test is meant for understanding the group as a whole, they don’t put their name on it. Details about personal progress remain among teacher, student, and parents, where diagnostic/formative tests help provide guidance and immediate steps can address needs.

But without the pressure of high-stakes tests, what does education look like?

The question may seem strange, as though classrooms hold a mystery. In fact, for years testing has defined much of what we do. Without it hammering at us, we may be at loose ends and call to the teacher in the next room: “Hey Victoria. Can you remember what we were doing before tests took over our classroom?” What else is there?

The best alternative to test-organized learning, I believe, is explanation-organized learning. Explaining fuses together memory and sense-making, both of them standard goals of education. To appreciate the difference between test-organized and explanation-organized knowledge, we can use a section on the Civil War. Imagine a fifth grader standing up and saying:

Grant and Lee had this thing going of trying to figure out what the other would do. This head game was especially important for Grant because Lee was usually a brilliant tactical commander and had carried the fight north. We think of the South as Mississippi, but big battles like Fredericksburg, Antietam, and Gettysburg were all within a hundred miles of Washington D.C.! Lee knew he didn’t have forever. He had to knock out the Union Army, whereas Grant felt that with the greater resources of the North, he could eventually grind down the Confederates.

Contrast that with test-structured learning:

1. The top commander of the Union forces in the Civil War was_________ and of the Confederate forces was________.

2. The objective of the Confederates was__________, and of the Union forces was_____________.

3. Name three battles (__________, __________, and ________) within a hundred miles of an important northern city (_____________).

With nearly the same information in both, note the personal quality and integration of ideas in the first compared to the sterility of the second, and consider which would appeal more to students. The first implies that students can think and talk about a network of ideas, which is a clue to genuine knowledge: the ability to think about something without help.

On Wednesday you might have said to them, “Today and tomorrow I want you to explain back and forth to a partner everything we’ve studied about the Civil War. On Friday, I’ll draw names at random and ask you to stand and explain parts of it.” Using only classroom time, you organize knowledge so they (1) integrate their thinking, (2) sustain interest, (3) practice to gain mastery, (4) demonstrate mastery, and (5) develop ownership. Paring down the same ideas into isolated answers breaks up the natural links between ideas.

The shift away from narration of knowledge and the consequent loss of interest came home to me one day at a public library while browsing through books about math instruction. An old text explained concepts through the history of their development. Challenges in their lives moved mathematicians to pursue particular ideas. As I read into it, interest spurred by real-life time and place worked like a current carrying concepts along. People’s experience constituted a framework within which mathematical ideas could easily be recalled and developed.

So why would curriculum designers ignore such interest-generating details? My guess is that their objectives are too limited. When you believe students are in serious deficit and you want to give them at least something, you first provide them basic answers: doing problems according to formulas. For mathematicians, these would be only the conclusions of a long experience of effort. But upon noticing that answers installed to pass a test are not an adequate foundation for adult thought, we raise our standards and aim to teach more than bare-bones conclusions. We need them to think mathematically—that is, to be able to explain what they learn.

In the same library, I found a book with several dozen short chapters, each a conversation with someone about the challenges in their occupation, and how they met them daily.

The narratives were fascinating. People’s responses to their jobs expressed heart-felt values, and their thought processes were intriguing. Every occupation became interesting as people explained how they used their imagination and energy to work past obstacles. To interest students in a job they might pursue, we can assign them to learn someone’s explanation of their job and explain it to another student. We stimulate their thought by arranging for them to explain what they understand, which is a critical point. Knowledge comes alive when students put to work their own ability to assemble and express it.

Contrast this with test questions that supply information boiled down to four answer-options and the student places an X by a choice in the array instead of explaining the array itself. Students are robbed of appreciating how humans use knowledge to form their world. They are constantly left to react to others’ ideas.

Explanation is a better framework for student effort than are simple memory, selecting from options, or transferring knowledge from one paper to another. Expressing ideas builds a lifetime skill, sometimes drawing more on memory and sometimes more on sense-making, and also changes the social atmosphere of a school. If you present an idea to students and tell them, “Get a partner and explain this to each other so you both know it,” explaining becomes a bridge to relationships. Having another listen to our ideas cues our social instincts, stimulating us to elevate the quality of our thinking so the other grasps what we say.

Upon designing instruction so students explain what they learn, we need to grade it without creeping back to the very testing we tried to escape.

In our first article, we noted the solution of counting up sections completed. We can divide the curriculum into lessons, each having defined effort and a specific signal of completion. We check these off one at a time, and students graduate when their list is finished. Because the actual residue of learning retained varies among students, however, we could use a more exacting standard.

Three conditions solve the problem: (1) Credit effort for points learned instead of subtracting credit for points not learned. Stop tallying mistakes against an arbitrary standard. (2) Count up maintained knowledge instead of temporary knowledge. An idea learned receives one point of score the student keeps by retaining the learning itself. (3) Award a point of score for every new point of knowledge that took independent effort to learn. In sum, maintain positive points-of-knowledge, and count them up.

Maintaining knowledge constitutes a fundamental redirection of U.S. education. The transitory nature of what it purports to measure is an inherent problem with current testing. Nearly unquestioned in the education community is the assumption that knowledge appears and disappears. Once accepting this, we then depend on testing to drive students sporadically to “study hard” so we can catch at least some of their knowledge on the fly.

A student takes a 10-question quiz on a new section and gets 2 wrong. On a monthly test on several sections, he corrects some errors, makes more, is preoccupied with other activities, and scores 75%. For a “final” (which he anticipates because then he can drop the subject), he knows he must bring up his scores. Some knowledge appears, other knowledge disappears, and he scores 85% with his knowledge-wave at its peak. But if he were given the same test a month later, his score would drop to 65%, from a low B to an F.

Once presuming that we cannot fix knowledge (like “fixing” a photo), we must continually retest as the wave of knowledge rises and falls, and constantly repeat even superficial knowledge that soon dissipates. The alternative is designing instruction so material is retained permanently from the start (I explain how in detail in my books, cf. below).

For scoring this accumulating knowledge:

1. You may wish to award points for verbal explanations, as you would grade a written essay question. In the Civil War illustration above, you might tell them, “In explaining this section, include eight details or factors, and you have a score of eight.” Each detail required independent effort to learn, and would have counted against the student if gotten wrong on a test. But instead of waiting for a mistake on a test, you award one point of score for every point of knowledge a student produces, either for you or for a partner. In the explanation above, a score of eight points reasonably approximates the work that went into constructing it.

Explaining to a partner helps piggyback learning on students’ drive to show off their competence to peers, measure up to peer standards, and count up each other’s gains accurately. Students let you know who really knows what.

2. You can also score verbal explanation by time spent at it. When I first suggested this to several students, one came to me the next day. He had watched a TV science program the night before, and as I timed him, gave me a 19-minute discourse on the chemistry of the sun. The length of time one takes for an explanation (minus undue repetition, etc.) is an objective measure you can post. It appeals to students the same way they already count progress on many personal activities. Explaining-time and the accumulation of points of knowledge are both objective measures that correlate with effort and hence are appreciated by students.

3. Accounting for precise details accords with the mind’s natural bent. To learn anything, we focus on one aspect at a time even while organizing many. We nail down this piece and this and this, learning subjects in a step-wise fashion. All math is one step after another, each a new point of knowledge. The glossary of a middle school math text may contain 250 terms: a perfect year-long project, one term at a time. Word meanings, spellings, rules of grammar, definitions of important words in all subjects, parts of wholes, steps of processes, formulas, key details, important dates and events are all learned one point of knowledge at a time, and can be practiced, retained, and scored the same way.

4. By counting up points of knowledge and time-explaining, at the conclusion of a school year you can collect the data in an information-rich, one-page Academic Mastery Report summing up everything a student knows. Divide subjects into sections and report the maintained score of knowledge for each. (In my books noted below, I explain how to construct this report).

5. If such changes seem to require too much accounting, consider what testing does now. It counts up micro-details against students, while their natural orientation is to accumulate such details for themselves. Once you show them how to demonstrate their learning, score it, claim it, and monitor each other’s progress through partner practice, they readily cooperate and relieve you of detail work you now supply yourself.

6. If this approach sounds too novel, you can observe its effect on students within a couple of weeks. You’ll note that they respond eagerly because it meets their needs and there is no emotional downside to it. Because students know the increment of effort they exerted in order to learn a new point, they appreciate the logic of receiving a point of score for it. We invigorate them by counting up their increasing knowledge point by point.

There is no defensible pedagogical rationale for counting mistakes. Post everyone’s cumulative progress on a wall chart so they can inspire each other, and set a total of points of knowledge, achieved by all together, that will earn them a party.

John Jensen is a licensed clinical psychologist and author of the three-volume Practice Makes Permanent series (Rowman and Littlefield). He will send a proof copy of the volumes to anyone on request: jjensen@gci.net


John Jensen: De-fogging High Stakes Testing, Part 2 (May 28, 2013)


In my prior article (De-fogging high stakes testing, Part 1, May 17, 2013), I proposed a starting point for resolving the debate: first agree on our primary value. Then as we proceed, we align our plans with it.

I nominated student motivation as the condition we should refuse to surrender. We lose everything if we lose that, so we think first how to design instruction to sustain it in the first place. Once motivation propels progress, we consider how to assess it.

The current emphasis on testing has often reversed that order, tail wagging the dog. Measuring how bad off we are at anything does not tell us how to do it right. We easily find out how high a pole vaulter can jump, but knowing how to do it is knowledge of a different species. All our skill at testing does not tell us how to educate, but it can twist us into believing that knowledge equals passing tests!

While many have pointed out that testing tends to narrow a curriculum, a fundamental issue concerns the very nature of learning. Testing overdone distorts knowledge because thought takes on the structure of its use. If you know someone is about to ask you a question, you arrange your knowledge according to how you expect to answer. Students organize their effort according to how they will express it, doing inwardly what they will later do outwardly. How we foresee demonstrating to others that we “know our stuff” guides how we set it up beforehand.

Told “Your ten-question test Friday will be drawn from these thirty questions,” you picture clearly how you will demonstrate your knowledge, so you organize it that way. You practice answering each of the thirty questions and ignore whatever knowledge lies outside them. As you study question-answer, question-answer, test-organized learning configures your mind:

It discourages the formation of a comprehensive mental field, since the important effort is question-answer. This has lifetime consequences because the strongest intrinsic motives for learning emerge instead from the enjoyment of what we master. Bruner’s four motives for learning (competence, reciprocity, curiosity, and identification) all presume the presence of a personalized field of knowledge.

The more important the test, the more it limits all other kinds of knowledge. Student effort is forced to align with it. If you know you may be promoted, flunked, embarrassed, or praised on the basis of your test score, it is hard (almost impossible for the typical emotionally-driven student) to think independently past test requirements.

Question types tend to be those efficiently administered by paper and pencil and then machine-graded, which is a distorted way to know anything. In your own life, can you remember any time at all when a real situation asked you to think the way multiple-choice tests require you to think? All the mental energy we spend accommodating to test-structured thinking is essentially a waste, displacing something else more valuable.

The scope of the question is often chunked downward to a single word, phrase, or check-mark, so that integrated explanations do not receive their due. The common plea for teaching higher-order thinking goes unheard because even integrated thinking is unwelcome. To move to higher-order thinking, you must first possess a body of knowledge; only then can you consider angles to add.

It places ownership with the one asking questions and removes it from the one answering. If someone says to me, “Stand up and tell me all you know about X,” the ball is in my court. I have a chance to express myself, and am free to draw on my entire bank of resources in order to demonstrate my competence. If instead I am asked questions I can answer in a word or phrase, and then another and another, control remains outside me, and I find it harder to experience ownership of my knowledge.

This is not to disparage all tests. As an extension of the relationship between teacher and student, diagnostic/formative tests can help guide what students study. But if our bottom line is to refuse to administer any test that causes a student to vomit, where do we go next? And what about the needs of schools, districts, states, and the nation for data on which to base judgments about the allocation of resources and the design of policies?

The question invites use of a principle Robert Fritz refers to as structural tension in his book The Path of Least Resistance. It stimulates the mind to dig deeper and goes like this: (1) Acknowledge that you experience a conflict you have been unable to solve cleanly. (2) Identify the intractable facts or principles that appear to comprise the conflict. (3) Affirm both sides at once, refusing to allow either to pre-empt a solution. (4) Continue to focus the mind on sustaining both poles of the conflict until a resolution emerges.

In the present debate, the two poles are (1) “I refuse to injure student motivation, and current testing does that.” On the other hand, (2) “We need the information students can supply about their educational progress.” If you were personally tasked with solving this problem, what would you do?

You would examine each viewpoint more deeply, probing for any corner where movement was possible. Eventually, voilà, such a realization occurs. You notice that motivation lies at the individual level, but the data needed for decision-making occur at the group level! You realize you can obtain the latter without disturbing the former, making the unifying direction simple: evaluate classes and schools any way you like as long as students remain anonymous.

Obtain your information with as light a hand as you can so you don’t interfere with learning. Observers, for instance, can float in and out without affecting students, tallying this and that. But if you believe the data students supply is so important that you must interrupt their learning to gather it, don’t announce it ahead of time.

Pre-scheduling adds unnecessary pressure. The further ahead students see a test coming, the more likely they are to believe that they must cram if they can, and to fear consequences if they do poorly. Pre-scheduling also skews the data, making it appear that more learning exists than really does, since cramming-based knowledge dissipates quickly.

But arriving in the morning to face an unexpected test removes that anticipatory tension. You can make it almost incidental as you say to them:

Guess what! Today we’re going to do a favor for the state legislature. Those are the people who give us our money to buy gymnasium equipment and computers (etc., whatever students can identify with). They keep the school going and just want to know how we’re doing overall. Because it matters to everything we use to help you learn, we would like you to do your best on the test. But notice that we don’t ask you to write your name on it, so no one will even know your personal score. We just want to know about all of you as a group, as representing your school.

Two outcomes provide valuable information. First, how well do students cooperate, and second, how well do they score?

On the first point, the number of children sabotaging the test gives feedback about school atmosphere, about how well students feel they are part of a team with a common purpose. The school’s message may be cheerfully optimistic but a critical subtext is, “If you really hate school, we want to know this. We don’t scold you for not cooperating. You have provided us with something we need to take to heart if we are truly invested in your well-being.”

Every school hosts a handful who, despite every outreach toward them, feel like outsiders, but the cooperation overall is essential information. Any undercurrent of disaffection and alienation deserves top priority as a school assesses its practices.

For those who fear massive rebellion among students who suddenly receive a tiny measure of choice, an instructive note surfaced in the 1960s, a few years after the Soviet Union sent Sputnik into orbit in 1957. Many thought at the time that Soviet education might teach us something. Royce Van Norman wrote then in the Phi Delta Kappan:

Is it not ironic that in a planned society of controlled workers given compulsory assignments, where religious expression is suppressed, the press controlled, and all media of communication censored, where a puppet government is encouraged but denied any real authority, where great attention is given to efficiency and character reports, and attendance at cultural assemblies is mandatory, where it is avowed that all will be administered to each according to his needs and performance required from each according to his abilities, and where those who flee are tracked down, returned, and punished for trying to escape — in short in the milieu of the typical large American secondary school — we attempt to teach “the democratic system”?

The point of Van Norman’s surprise ending that I find relevant to our discussion of high-stakes testing is depersonalization. Because depersonalized people are more likely to rebel, we might weigh what we are doing: reducing students to a set of numbers that wounds their motivation? Are we so sure of the value of our data that we are willing to sacrifice children to get it?

This is not to blame any person or persons. I believe that the emphasis on testing has arisen from a well-intentioned but mistaken application of a principle beyond its proper venue. Control of mechanics and materials works to an amazing, microscopic degree in manufacturing, but not with people who remain unique. Control breaks down at the doorway of consciousness. We may force students’ physical compliance while their heart and mind journey elsewhere. Assembly-line approaches do not work even with material objects when they must be hand-tooled.

So if massive student resistance shows up when we ask them to take a test, we need to face the fact that we have generated it by how we treated them and guided them to treat each other. That fact deserves our first attention; it is the first condition we must address if instruction is to succeed. Attempting to teach students while ignoring that they are miserable guarantees failure.

If you do test anonymously, the second outcome of the test is a compilation of aggregate scores. Those who cooperate provide ample cross-district data for comparison purposes, meeting needs from the classroom upward with no harm to students who simply “helped out” the legislature.

In our final paper on high stakes testing, we will look at a practical alternative to designing instruction around test requirements, and also at a way to maintain objective accountability for the learning.



John Jensen: De-fogging High Stakes Testing, Part 1 (May 17, 2013)

The debate over high-stakes testing pits the need for assessing student progress against the negative effects of doing so. Three recent articles offer a glance into it.

In a guest post for Education Week (“Monty Neill: Building a Successful Test Reform Movement”, May 14, 2013), Monty Neill proposes halting or reducing state-level testing, citing as reasons teaching to the test, cost, school climate, time taken from teaching, a narrowed curriculum, and increased juvenile incarceration.

In the same issue, Michael Petrilli (“Am I Part of the Cure … or the Disease?”, May 14, 2013) maintains that the point is not testing but student achievement: even small gains in test-verified reading and math enhance life trajectories, and teaching quality is what limits better instruction. Acknowledging that testing can generate temptations to cheat, a culture of fear, and a narrowing of the curriculum, he would retain it nonetheless, suggesting as goals improving mediocre schools even a little and systematically teaching the skills that make the most difference.

Deborah Meier (“Problem vs. Solution: A Response”, Education Week, May 16, 2013) regards the testing issue as a distraction from more fundamental problems, such as a public polarized by a growing gap between rich and poor, and the wealthy steering resources to the schools their own children attend. She holds that a competitive education marketplace produces outcomes woefully wrong for children, and that public education should address problems one at a time in light of the entire spectrum of needs.

So apart from altering the nation’s political makeup, we face two immediate problems: one is improving education, the other finding out how well we do it. Both matter. Though a school’s quality may be low, how we test may depress even that.

There are many dogs in the fight about testing. Picture a round table discussion of stakeholders. At the table are a parent, teacher, district administrator, state legislator, and federal official. Each asserts, “I need to know X, and here’s why.” They are arguing over competing priorities when one of them points her thumb over her shoulder.

Seated against a wall is a student. Everyone falls silent as they realize he heard everything they said. Someone addresses him.

“So what do you want?”

“I just want to learn something,” he answers quietly.

The stakeholders try to resume their discussion but find no traction. Their urgency evaporates as they realize how superficial their demands are compared to the substance of the student’s need. The student is the elephant in the room. They look at each other and wonder, “How can we even begin to find a way to resolve this?”

By way of answer, consider a different analogy. Imagine you are on a research team investigating gases rising from the earth in a remote location. Your helicopter malfunctions and sets you down unexpectedly close to the emissions, and disembarking, your team realizes that it is in danger. Everyone must rapidly grab something and move away quickly. Before you are three canisters, one labeled AIR, another WATER, and a third FOOD.

Which do you seize? Your life may depend on your choice, and you recall the rule of three, that in general humans can live 3 minutes without air, 3 days without water, and 3 weeks without food. Knowing that in the toxic air of your surroundings you could be dead in three minutes, you grab the AIR canister first. Only after you have air under control do you pick up anything else. You secure your prime value before even considering a secondary one.

Back in the classroom, we search among the canisters concerned with testing to find the one labeled AIR. What is the most essential factor, the one we wish to establish with certainty, the one we refuse to sell off for the sake of a lesser value, the one to which we add others only if they do not detract from the first?

Finding an answer everyone can accept is, I believe, a direction that eventually resolves the dispute over testing. We first agree on our criterion value. I would like to nominate one on the basis of two axioms:

Axiom 1. Students progress through their own effort. Instruction works as it enables students to focus attention and apply effort on tasks that generate learning. The essence of instruction is directing students’ attention and effort.

Axiom 2. Effort is propelled by motivation. Aside from the sheer time available for their effort (jeopardized by countless intrusions including test-associated tasks), how students apply themselves arises directly from their interest, enthusiasm, ownership, sense of progress, and so on: signals of their motivational state directly preceding effort. If kids are bored and distracted and you want to teach them something, you either alter their motivation or forget about accomplishing anything. If in a psychological sense all behavior originates from a state that makes the behavior possible, we settle on students’ inner motivation as the key condition we must enhance.

A common complaint about testing, however, is exactly its effect on motivation. For teachers to appreciate this better, I would like them to experience an activity I often presented in training workshops in the 1970s. It goes like this. I’ll trust your imagination to figure out the lesson involved:

“We’re going to start off by giving you a spelling test for college freshmen,” the consultant announces to begin a morning. “We’ll assign you to activities later based on the scores you get. Please take out a blank sheet of paper.”

People groan but cooperate. In a serious tone the consultant then reads the words while people write them.

“Please exchange papers,” the consultant says crisply, and then spells each word on the board. Checkers mark off wrong answers on the paper they have, and hand it back to its owner.

“How many got none wrong?” the consultant asks, writing a zero on the board. I’ve never seen zero wrong, but if people miss none, their number is jotted beside the zero. Under it the consultant lists numbers 1-20.

“How many got (number) wrong?” he or she says, going down the column. Everyone raises their hand at some point to acknowledge their number of mistakes. Most scores tend to fall around half wrong with some missing as many as 17 of 20.

People laugh, moan, and remember emotionally how it felt to be measured by their mistakes. The exercise concludes with a discussion of its implication for instruction–how discouraged they remembered feeling when they were in school, how they may have refused to try, how they preferred to be graded down than be humiliated by trying and failing, how disheartened they were at being labeled poor at anything, and so on.

If we wish both to teach and assess in a way that enhances motivation, how can we?

Competency-based instruction offers a clue. You declare it acceptable for students to have different competencies to practice even if they do much work together. You identify a discrete skill or chunk of knowledge you want them to know, tell them exactly the work needed and the signal marking its completion, and check it off when it’s done. Developed this way, their record shows unbroken success. Wherever they are on the continuum, they just work steadily at the next step.

This approach frees students from a peculiar psychic burden. If I have five units of knowledge to acquire and accomplish that, my working memory tells me “I got five.” My score matches my effort. I own the five and take pride in it.

This changes if I am told, “We expected you to get ten but you only got five.”

Only? My success becomes failure for a reason beyond my control, and my effort is devalued. I feel like a failure solely because someone measures me against a standard that does not serve me personally.

Think about yourself. Intuitively, do you mark your knowledge by knowing something or by not-knowing something else? Surely the former. Not-knowing measures are inherently antithetical to students’ natural motivation. While they spontaneously compare themselves to peers, they do not regard a measure of their not-knowing as fair. They are constituted to emulate standards demonstrated by peers, but for this they only need objective information.

For schoolwork, a wall chart serves adequately by counting up cumulatively the contents of each one’s growing bank of knowledge. They can use the differences between them if they wish, but no one drives them to feel bad. (And check me if I’m wrong about this, but do not some teachers still believe that imposing bad feelings on students is their bottom-line motivator? I infer this from observing students who actively fear their teacher.)

Once acknowledging positive motivation as our preferred long-term resource, we don’t even hint to a student that his effort is of secondary importance. We are clear that if we organize his effort so it’s effective, recognize the effort, and count up its outcome objectively, he is more likely to repeat it. The objective count of his progress on the specified tasks reveals exactly what he has learned. If his motivation and effort-driven success remain our primary values, we have no need to confine him under someone else’s web of meaning.

In my next article, I will show how to arrange effort for optimal motivation while accounting for its results in a way that fulfills stakeholders’ needs for information.



John Jensen: Setting the Conditions for Boys – and Everyone – to Learn (May 14, 2013)


In his article “Solving the ‘Boy Crisis’ in Schools,” (Huffington Post, May 1, 2013), Michael Kimmel notes statistics indicating boys’ worse achievement in school than girls. He suggests boys’ perceptions of masculinity as the determining variable; what, in boys’ eyes, is respectable as “real work.”

While boys and girls may indeed view classroom work differently, a different elephant stands in the room. Its nature became clear to me in 1992 while watching my son play soccer in a light rain with his middle-school friends. They were motivated, by all visible measures, dashing about and encouraged constantly to the effort of the moment by their equally motivated coach (“Nicely done, nicely done!”). Standing on the sidelines, I reflected on what I knew about the boys: across-the-board mediocre students, but here, “motivated.”

The reason for the difference in the boys’ behavior from playfield to classroom struck me. It wasn’t the boys; it was the conditions! They were not unmotivated individuals. Instead they were subjected to unmotivating conditions! On the playfield they experienced practice, growth in competence, teamwork, clear direction, accurate accounting of their progress, and public recognition for it. In the design of soccer, effort gets you somewhere. But change places, focus on a different task, and their motivation changes instantly.

The point is critical: they instantly absorb every difference in glance, tone, concept, task, and relationship they perceive directed at them, and instantly adapt to it.

The point underlies teachers’ common experience of being required to try out some new approach the district buys. They know the first day whether it is reaching the kids. Maybe a couple of weeks are needed to tell for sure, but surely no more than that. In two weeks, any new teaching method reveals its portents. If it doesn’t work in two weeks, it probably won’t work at all.

Wondering how to generate the same enthusiasm in the classroom as on the soccer field set me on a twenty-year voyage of observation, development, testing, and application of methods that generate in students the same energy they experience on the playfield.

A central stream unites the factors coherently; they are not isolated characteristics forced into an aggregation. Around a common channel of energy, different emphases draw it onward. Follow the thread:

Players practice together in order to become more competent individually and collectively, enabling them to perform skillfully before peers and significant others. Objective scoring enables them to plot their advancing skill with tangible evidence of progress they can take pride in. Because each one’s success contributes to the whole, they give good feelings to each other and communicate effectively about the issues involved in displaying their competence.

Note the intrinsic harmony of this picture that is reproduced in one sport and activity after another: practice, competence, performance, scoring, pride, good feelings, and communication. In view of the bizarre events displayed on TV that draw eager participants and crowds of spectators, we might guess that any activity that reproduces these conditions generates enthusiasm!

If climbing a slide flowing with whipped cream can generate enthusiasm, why not classroom learning? Really, it’s simple. All we need is a better understanding of a few things mainstream education misunderstands.

1. Practice is calling up and demonstrating an internal model of an activity. For learning, this just means calling up the knowledge that went in before, and making it understandable to someone else. Two features are integral to the practice of learning: memory and sense-making. You have to call up factual parts with sheer retention, but then you integrate them so they make sense to someone else. And what do we call this two-step process? We call it explaining. Students need to explain every part of every course back to its beginning, and do so often and thoroughly enough that by the end of the course a “final exam” is superfluous. Everyone knows that everyone else knows it back to the start of the course. This is the “real work” boys can respect.

2. Competence. Competence is achieved only by practice. There is no shortcut. The hints and aids and review questions and test-taking methods and scaffolds serve mainly to suggest to students that there is a defensible alternative to actually knowing the material. Competent with the material, you can start thinking about it from any angle and work your way into everything else. You can explain it to anyone of any age or sophistication. This is “real work” boys can appreciate.

3. Performance. Barely a few minutes a day can flavor a whole day’s work. Performing is the moment of demonstrating what was practiced and revealing the competence achieved. All a teacher need do is keep track of every question learned by the class as a whole, write it on a slip of paper, and drop it in a bag that eventually incorporates everything taught for the entire term. At the end of each day, save five minutes to draw a slip, draw a student’s name, read the question, and let the student rise and answer (competently, again, because all the questions were already practiced peer to peer). The teacher leads the class in applause. To stimulate everyone’s investment in everyone else’s performance, reward the class based on the performance of individuals.

4. Objective scoring. To score all learning objectively, notice the step of advance that occurs on the basis of student effort. In what you teach, where does effort go, and how do you identify an increment of progress? That’s the part to score unit by unit: more vocabulary words, key terms, rules of grammar, parts of speech, steps of a process, factors in a formula, meaning of technical terms. Ask yourself “If this were on a final exam and they got it wrong, what score would I mark off?” but then give them that positive score for knowing it instead of focusing on the mistake. Ask yourself “How many distinct pieces of knowledge would I expect from this question when it is performed?” Allot those distinctions as the score for a given process. Post a wall chart with everyone’s names, and a column for every section of a course treated. Add numbers steadily to each column identifying the cumulative, objective count of what each student continues to know.

5. Communications and good feelings. The same principles hold for teaching these two skill areas quickly and effectively. Part can be learned thoroughly as academic knowledge—about friendship, managing feelings, mutual problem-solving, personal goal-setting, grasping values, and so on. Other parts are learned readily by practice: getting an idea, applying it, and giving oneself and others a tally in recognition of having done so. The process is not complicated.

To return to our initial concern of boys not doing well, how does the analysis above tweak the picture?

It suggests going straight to the conditions we know galvanize boys’ energy, noticing that they motivate girls as well. Let them all practice to become competent. Let them perform what they know, be applauded for it, and receive objective scoring of what their effort achieves, and let them practice (and be recognized for) good communications and generating good feelings. Take two weeks to prove out this approach and you’ll discover that the worries you had about your students evaporate. You will be able to see the knowledge being reproduced hour by hour and day by day, leaving you no doubt about the depth and breadth students achieve.

John Jensen is a licensed clinical psychologist and author of the three-volume Practice Makes Permanent series (Rowman and Littlefield). He will send a proof copy of the volumes to anyone on request: jjensen@gci.net

John Jensen: Picture the Brain Learning (May 6, 2013)

As a school consultant tasked with drawing individual students from their classroom for a specific purpose, I soon recognized when this was unwelcome. Glancing inside the door, I could see that all were concentrating, heads angled toward their desks, bobbing up and down rhythmically. If the activity was presentation, all eyes would be directed toward the teacher. Clearly in evidence was focus.

Walking in and saying to the teacher “I’d like to pull out Jeremy for a few minutes” could break everyone’s concentration and deprive Jeremy of high-value time. Particularly with teachers who fiercely guarded these delicious periods of concentration, pull-outs had to be scheduled carefully or not done at all.

The value of such concentrated activity was the focus of a recent Carnegie Mellon University study reported by Bob Sullivan and Hugh Thompson (“Brain, Interrupted,” New York Times, May 3, 2013).

Studies to date have found that multi-tasking causes all the tasks to suffer in effectiveness, apparently due to the cost of switching focus. Taking the research a step further, the Carnegie Mellon study first examined the effect on mental tasks of interruption compared to non-interruption. Interruption made the brain 20% dumber.

Other variables were introduced. Some research subjects were told to expect an interruption that later occurred; others were told to expect one that never came. In the former case, being able to expect the interruption improved mental efficiency from a 20% deficit to a 14% deficit. The results of expecting an interruption that did not occur, however, were startling: mental efficiency improved 43%, exceeding even the control group.

From these findings, researchers concluded that an expectation of interruption and an intent to counteract it enabled participants to learn how to adapt to the distraction and sustain their concentration. We draw from these findings that 1) concentration is important for mental tasks and 2) people can learn how to stay focused.

To appreciate the significance of this for classroom instruction, we can add a couple of other factors. To begin with, teachers know that distracted students are not learning while focused students usually are, but what’s the difference instructionally between these two conditions? Is it simply a matter of teacher discipline, pressing students to “pay attention”?

The difference lies deeper. In order to concentrate, students’ attention must shift from the teacher to a mental field they themselves sustain. Concentration and distraction each describe a relationship to a mental field, one attached to it and the other not. Students don’t just concentrate. They concentrate on something.

In distraction, the mind wanders off from a consistent focus, typically with attention directed outward to stimuli of sound, word, activity, or physical environment that bear little relationship to academic content. The distracted state is not a total loss, however, because alertness to changing outer conditions fills a role in our physical and emotional survival. Students allow themselves to be distracted by peers because it meets a perceived need.

In focus or concentration, however, the mind draws back from the outer stimuli to attend to internal stimuli it chooses consciously to invest in, internal stimuli it at least temporarily values more than the external stimuli. The presence of the teacher moves to the background, distractions from peers are fended off into a manageable periphery, and the mental field already carried within looms larger.

For the stimuli to be already accessible in the mind, they must have been placed there earlier—hence, dependent on memory and prioritization. The contents of the mind exist only because the individual has previously determined them to be important enough to save and return to for later processing. We might apply the word “education” to this added processing of sensory data, but it also incorporates the development of all the competences necessary for living.

The mental field is pivotal to education, however, because all consistent learning holds together as a field. To the extent that students perceive a course as a random aggregation of data bits, bound together only because they appear on the same test and lacking any intrinsic internal relationships, they can’t think about the information: they can’t pass their mind smoothly from one facet to another, feature to quality, event to concept, past to present, global to micro, system to detail.

A way to appreciate this point is to ask yourself “What am I good at?” In place of an academic discipline, you might name “personal relationships” as your field of mastery. But about whatever you choose, ask yourself then, “Can I think about this field with no further input right now?” Your answer is “Of course,” and you set about to demonstrate. Your mind picks any corner of the field, and zooms in on it. You draw up any experiences, actions, expectations, or questions, and dwell upon them. This very flexibility and breadth of movement within the field distinguishes your mind from the mind of someone less skilled. Your mastery means you can accomplish any task you wish within the field of attention you hold.

Now imagine that every school subject reached such a level of familiarity. Students could enter it (choosing to concentrate and determined to resist distraction, as noted above) and take up whatever tasks lay naturally at hand there. In academics, this would draw on prior input (retention) of an array of factual material, followed by its development in the direction the course suggested: more nuanced judgments, better problem-solving, mastery of sequences and relationships, and so on. All this would depend first on entry into the field and then on development of the material lying within.

The familiar activity that enables that development to occur is essentially explaining. Conducted for oneself, it leads to increased understanding; directed toward another, it amounts to making sense. Four implications follow for a teacher:

1. Teachers need to deliver essential input by one means or another.

2. Teachers need to create the conditions for students to learn concentration.

3. Students need to sustain concentration on the mental field so that its intrinsic order binds it into a field of understanding.

4. Students need to explain the field to each other so they learn to make sense of what they retain.


John Jensen: Mastering Practice, Teaching’s Key Tradeoff (April 29, 2013)

A tradeoff lies in how you use class time. Less of this, more of that, and you get different results. If you ever studied the piano, you’ll understand.

Let’s say your mother resolved to provide piano lessons for you and your brother, but from different teachers. Yours required an hour of practice a day and your mother had to sign off on it. Your brother’s teacher instead relied on “motivation.” She believed that his interest was the key, and to encourage it she taught music appreciation, played music for your brother, told stories about musicians, gave background on musical forms, but never required him to practice anything.

After a year of lessons, which of you could play the piano better, you or your brother?

We know the answer from one simple measure. All we need to know is which of you practiced more. By some remote chance, maybe your brother was inspired to go to the piano on his own and begin to learn it. Maybe the key spurring him to discipline himself through the difficulties of learning was exactly what his teacher supplied, but that’s an unlikely outlier.

Probably instead, your daily hour of practice vaulted you far ahead of your brother. Practice gave you confidence that you could learn. You noticed weekly improvement and that you were causing it by your practice. Progress built on itself. As your repertoire expanded, you realized ever more keenly that you could do piano, a thought that never crossed your brother’s mind—yet, anyway. Given comparable innate ability student to student, the bottom line for learning the piano is that progress is directly proportionate to the quality and quantity of practice.

Here we have a standard insight about progress, applying across the entire spectrum of human skill. Practice determines eventual competence, and turning to knowledge with this principle in mind, we would like to discern how to translate it into classroom time-use. In the standard U.S. educational environment, can we expect the principle that works everywhere else to apply also to academics? Will learning still correlate with the amount and quality of practice?

Let’s say you answer no. If so, why? How could you assert that?

Maybe you believe that your enthusiasm or assignments or classroom activities supply something beyond practice, that they replace it.

Actually, such influences don’t. They are incentives for or means of practice, but they do not replace it. They set practice in motion but do not substitute for it. In fact, we can assess every classroom activity for what we might call “the practice element,” a quality telling us that practice occurs. Something changes an activity from a use of time about knowledge into the practice of knowledge.

To discern the difference, we examine the role of practice. As referred to here, it is the repeated outward expression of an inward model. The key realization is that practice begins only after the inward model is robust enough to answer a teacher’s question about it. A teacher presents something and asks a question here and there to assure herself that she got across what she intended. By then, students have had no practice as yet; they have barely installed in their minds the model of the knowledge they need to express, and the teacher has barely confirmed that she has handed the model over to them. The input phase has occurred, but practice itself begins with the output phase by students.

Instead of this effective second step, what we see most commonly is teachers themselves continuing to practice the knowledge at hand. They explain and re-explain to all. They re-explain one to one to students who don’t get it. They answer questions. They try to anticipate questions and answer them ahead of time. They practice their knowledge hourly, but usually at students, as their talk re-exposes students’ minds to repeated input of the knowledge.

Students themselves begin practice only when the arrow of action reverses. After ideas have come in, students then express them outgoing, needing four to five times as long talking as they formerly spent listening. If the teacher takes the first 10-15 minutes of a 50-minute period for input, the remaining 35-40 minutes should be the students’ turn for practice. With the teacher’s time, students grasp the new material initially. With their own time, they internalize it.

The internalizing activity for any skill is typically performing the action thoughtfully. Knowledge has an activity of its own, which is explaining. The mind receives an approximate model of the knowledge and practice occurs by expressing, discussing, or writing it. The outward expression of knowledge already grasped I refer to here as “the practice element.” Its significance is that the degree of the practice element in an activity determines its value for learning. Let’s list the practice element in common classroom activities and then consider each more deeply:

Teacher explains = zero practice element

Students ask questions of the teacher = minimal practice element

Teacher assigns written questions = minimal practice element

Teacher asks scattered questions = medium practice element

Students write out their knowledge = medium to high practice element

Teacher gives pop quiz = high practice element

Students do Q-and-A practice with a partner = high practice element

Students perform their learning = high practice element

Students run all their learning as a mental movie = high practice element

Students explain the course back to the beginning = high practice element

While your personal approach to these activities may incorporate more practice than I note as the norm, the continuum between the first and last stages contains the key insight.

1. Teacher explaining has zero practice element. Practice requires output of an inner model, but this experience is the opposite. The teacher does all the output and students receive it. This is particularly telling since teachers appear generally to do 2/3 to 4/5 of the talking in most classrooms. Students’ minds often go into triage, dismissing what the teacher has already said in order to listen to what she says now. From a long presentation, students may retain almost nothing.

The practice element lies in the effort to express the knowledge, so whoever exerts that effort is the one practicing the knowledge. For this reason, teachers “learn a subject by teaching it.” They do the input/output cycle over and over—week by week, year by year. And if the teacher uses 70% of classroom time to talk, at best 30% remains for students to divide up. Do the math: 70% of a 50-minute period is 35 minutes of teacher time, leaving 15 minutes to apportion among students. With 15 students in the class, each would have one minute; with 30 students, half a minute, if time were divided equally. In practice, the dominant ones hold the floor while those needing it most remain silent, and most of their comments anyway are short enough to deliver by Twitter.
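The time arithmetic above can be checked with a short sketch (the 70% figure, the 50-minute period, and the class sizes are the article’s own; the function name is mine):

```python
def student_talk_time(period_min, teacher_share_pct, class_size):
    """Minutes of speaking time left per student when the teacher
    uses teacher_share_pct percent of the period."""
    remaining = period_min * (100 - teacher_share_pct) / 100
    return remaining / class_size

# 70% teacher talk in a 50-minute period leaves 15 minutes:
print(student_talk_time(50, 70, 15))  # 1.0 minute per student
print(student_talk_time(50, 70, 30))  # 0.5 minutes per student
```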

This is not to discount teacher talk, which often is the most effective way to convey new material. But once it’s delivered, further teacher talk pre-empts the time students need for practice. With inadequate practice time, their learning remains on the surface, dependent on the random movements of their attention.

2. Students asking questions of the teacher offers minimal practice element. While an individual student may benefit by calling up assorted data pieces from within, the effect for the class overall is that of someone else explaining, adding details to the input phase. And because student questions rely on student initiative, the teacher cannot rely on them to deepen learning for all. They typically clarify what the teacher has presented, and hence draw on information current in the classroom.

3. Teacher assigns questions that students answer in writing by referring to the Internet, a textbook, or a handout. Since students typically respond by transferring knowledge from one spot to another, they may draw little on their own retained knowledge, leaving this activity with a minimal practice element. They look things up, cut and paste, and track ideas organized on a basis other than their own thought, often just plugging unassimilated data into an assignment structure. Search-and-collect may help them form knowledge and so offers minimal practice, but they typically dismiss the result just when they could actually use the written form for practice in depth.

4. Teacher asking scattered questions of students offers a medium practice element. When questions follow right after a teacher presentation, the goal is often just maintaining attention and checking understanding rather than providing an opportunity for in-depth practice of the content. Only one student at a time answers the teacher, while everyone else listens. Deferring questions to the next day may help, but teacher-questioning allows only a few students to answer on selected points while the remainder coast. The benefit of answering a question is not spread evenly among all students or all ideas.

5. Students write summaries, essays, notes, and syntheses of their learning. This has a medium to high practice element. Thorough note-taking during a presentation calls both on understanding what’s presented and processing it into a summary form—potentially at least a medium practice element.

The most challenging practice arises from students writing while drawing just on what they have already learned from all their sources. As the assignment asks less of them, it relies more on their skills in search-and-copy. As they merely string together what they collect, the practice element diminishes. Since this tool can be used constantly with all kinds of learning, however, it remains an important option.

6. Teacher gives a pop quiz. Expression is confined to the limits of the questions, but the quiz at least elicits prior learning, so practice is involved. Its value is minimized when it is used only to assess students rather than to help them deepen their knowledge. Because it challenges students’ retention, it has a high practice element, though perhaps of limited value because it is employed infrequently.

7. Teacher breaks information into questions and answers and asks students to explain them to a partner until both know them. Here finally is clear-cut input and output. Presentation has already occurred and knowledge gathered. The teacher has made the information understandable and arranged it in a form suitable for practice. Students explain it to each other. They develop a mental model and then express it repeatedly to deepen and expand it.

Such practice also offers a logical end-point that encourages efficient time-use: “You’re done when you can explain it back anytime without looking.” In this high practice element activity, every minute spent at it deepens knowledge, and teachers can draw on it briefly or at length. It works with both new and familiar material, and deepens knowledge probably better than any other activity.

8. Teacher draws on knowledge learned for daily performances. A student’s name and a question are drawn randomly, and the student stands and answers impromptu. This activity leverages the value students place on peer opinion and admiration, and generates zest and interest. It has a high practice element, works with knowledge at all levels of sophistication, offers much stimulation with little time spent, and motivates partner practice.

9. Teacher conducts Mental Movie. Students close their eyes and review the day’s learning minute by minute. In this high practice element activity, they “run the film” of their day, bringing to mind everything they can recall. They discover the power of their mind to record with increasing detail each activity of their day.

Teachers need not worry that children will waste time if their eyes are closed. They love to exert effort in socially valued ways. This one matters because it expands their ability to practice and perform their knowledge in front of peers, and helps especially with subjects containing visual structures such as math and science. Observable forms, relationships, and sequences are absorbed as imagination paints them.

The more time teachers devote to medium and high practice element activities, the more practice students obtain per hour, and the deeper they learn. The more they do this, the easier the teacher’s responsibility becomes. If we cease extinguishing knowledge, understand the power of steady accumulation, and use students’ time to arrange for them to practice properly, their learning cannot fail to take off.


John Jensen: Evaluating Teachers Objectively (April 25, 2013)

In evaluating teachers, we want to know how much a teacher contributes to student learning. Is his or her contribution high, medium, low, or a threat? If we could determine this, presumably we could hire and retain those on the optimal end of the scale.

One challenge is to separate the teacher’s influence from those originating in the student, the student’s parents, or alternate conditions. A separate concern is whether we can even find out how much students learn.

Perhaps solving the last issue first might suggest how to gauge the teacher’s contribution. So first off, how do we tell what children have learned?

I submit that a practical criterion available to any teacher has been almost universally ignored. Only occasionally do I hear of (or recall) a teacher for whom it was the aim. The criterion is retained (instead of temporary) knowledge.

In a sense, all tested knowledge is “retained” in order to be tested. Some, however, has been engraved in a child’s mind for a lifetime, while other knowledge will disappear in a few days. A high school health teacher once showed me a test he had just given without realizing he had administered the same test two weeks earlier.

“Not one student remembered that they had had the same test two weeks ago!” he told me in amazement.

However they studied in that class, the outcome was temporary rather than permanent knowledge. So how can we separate deep, retained knowledge from temporary, surface knowledge that soon disappears? We do so with three conditions.

1. Entire course mastery. We make students continually responsible for the entire course back to the beginning of the year.

“But I do this already,” you may object. You may or may not. The crux is whether your manner of instruction and testing back up your wish. Two other conditions apply your intent.

2. Make all tests impromptu. Test any part of the entire course at random moments with no prior announcement. So that you do not unwittingly tip off students, make up a bag of cards, each containing the title of a section of the course, some brief and others more comprehensive. Randomly draw a day of the week for a test (which you do not tell students), and randomly draw the name of the section to be tested. You don’t tell students “Tomorrow is your test on…” as you always have. They find out instead as the period begins: “Put away your books. Today we’re testing Chapter three, section ten, about…”. You make the entire curriculum subject to retest at any time.

3. Keep the last grade. Whatever is the last grade a student receives for a given section goes into his or her transcript as that section’s grade of record. Sections again can be retested at random as they are drawn from the bag.
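The random draw described in condition 2 can be sketched in a few lines (the section titles and the helper name are illustrative, not from the article):

```python
import random

# The "bag of cards": one entry per section taught so far.
sections = ["Chapter 1, early civilizations",
            "Chapter 2, Greece",
            "Chapter 3, section 10, Rome"]
weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def draw_retest():
    """Draw an unannounced test day and a course section at random."""
    return random.choice(weekdays), random.choice(sections)

day, section = draw_retest()
# Announced only as the period begins:
print(f"{day}: put away your books; today we're testing {section}.")
```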

These three conditions substantially alter instructional focus. By replacing the grades granted for knowledge obtained by cramming, review questions, scaffolding, test construction, and teacher hints, the three conditions extract the learning practiced sufficiently to persist on its own—retained learning.

The difference is illustrated by an experience from my sophomore year in high school. After we had worked our way through a 400-page world history text, and with the end of the school year looming, a brave student asked the teacher one day, “What will be on our final exam?”

We all leaned back and grinned. Review questions! I’d never heard of them, but their promise was that we could dismiss everything else we had studied! Because of such conditions that make testing easier, most scores can be regarded only as approximations of what students continue to know.

The conditions I suggest instead declare forcefully that the goal is permanent retention of as much knowledge as possible. Both teacher and students are recognized for the scores revealing it: scores both valid and reliable, accurately reflecting the teacher’s ability to generate long-term learning.

A possible objection to this approach is that students already are tested too much, that testing takes time away from actual learning and presents a distraction. Many would like to turn back the trend (cf. “Texas Considers Reversing Tough Testing and Graduation Requirements,” New York Times, April 11, 2013).

The answer is to use tests to stimulate the practice that deepens knowledge. Not much class time is needed to achieve this. A ten-minute test twice a week may be enough by the means I suggest. Had such conditions been observed for the last couple decades, by now the issue of evaluating teachers would be moot. We would not be concerned about variances among them because all students would be learning simply by the standard focus on retained instead of temporary knowledge.

As the U.S. system gropes today for how to validate the substance of knowledge that might subsist behind a cloud of test scores, attention turns to the teacher qualities that make a difference. Anyone seeking this information should retrieve a landmark study by Arthur Combs and associates from the 1960s (“The Perceptual Organization of Effective Teachers,” Florida Studies in the Helping Professions, No. 37, in Arthur W. Combs et al., “Social Sciences,” Gainesville: University of Florida, 1969; cf. www.eric.ed.gov).

To summarize briefly, the researchers sought to discover the difference between the best and worst teachers. They obtained valid groups of each by asking freshmen entering Florida colleges to name their best and worst teachers, compiled those named consistently, and obtained two pools: those unanimously viewed as the best and those viewed as the worst. They visited these teachers, invited them to participate in a study, and administered one test after another to them, but turned up no differences.

Resorting finally to classroom observation, they found that clear differences existed not in the teachers’ behavior but in their belief systems. The good ones held positive beliefs about students, about learning, and about the world on twenty independent scales, while the worst held down the negative end of those scales. That these differences registered so powerfully with students year by year reveals that it really matters what teachers believe about what they do.

Separating other conditions from the teacher’s contribution, as we asked at the start, ceases to be an issue when children learn well. Whatever the teacher’s influence is, it hasn’t held students back. But when students aren’t learning, adults parse details mainly to find out whom to blame. We solve all these issues, in other words, if we simply design standard classroom activity so that the practice of learning produces long-term retention for all.

In sum, the two angles outlined above suggest a design for evaluating teachers. First, measure retained learning by the three steps noted above. Whatever the results are, the teachers arranged for them. Second, resurrect the tool that Combs and his team used to create their groups. Ask students past and present to name their best and worst teachers, and steadily winnow out the worst.

As the saying goes, this is not rocket science. Just be brave enough to insist on long-term learning and measure it objectively, and brave enough to invite comprehensive feedback. You will have no doubt which teachers rank high and low, and the conclusions will be solid, reliable, and politically defensible.

John Jensen is a licensed clinical psychologist and author of the three-volume Practice Makes Permanent series (Rowman and Littlefield). He will send a proof copy of the volumes to anyone on request: jjensen@gci.net


John Jensen: Re-thinking the Progressive Education Movement

by John Jensen, PhD | EducationNews.org, April 10, 2013


In "How to Build a Progressive Education Movement" (Edweek.org, April 2, 2013), David Bernstein scores the turn in recent years toward test-based education and proposes that the values embodied in the progressive movement of past years are urgently needed today. The initial means he suggests, unfortunately, are mostly negative: don't oppose all testing, don't bash business, don't oppose all school choice, and don't name it "progressive education."

While many values of progressive education will always remain valid (in passing he notes educating the whole child, enhancing creativity, and a focus on development), Bernstein appears unaware that the influence of progressive education essentially marked the beginning of the decline of American education.

What happened was simply a well-intentioned mistake. John Dewey, who practically embodied progressive education and whose thinking pervaded its design, wrote this in his influential 1916 book Democracy and Education:

“The development within the young of the attitudes and dispositions necessary to the continuous and progressive life of a society cannot take place by direct conveyance of beliefs, emotions, and knowledge. It takes place through the intermediary of the environment… The deeper and more intimate educative formation of disposition comes without conscious intent, as they gradually partake of the activities of the various groups to which they may belong.”

He builds on the impact of group norms by advocating communication, training, nurturing, cultivating, setting up conditions, direction and especially guidance.

Such a direction might have contributed to education's transformation except for a crucial mistake. Note where the element of effort lies in the paragraph cited above: adults are acting upon students, and students are "learning" by osmosis. Dewey subverted the role of active personal effort. "We never educate directly," he wrote, "but indirectly by means of the environment," and he specifically discounted "the piling up of knowledge." The unfortunate effect of this was that it gave teachers permission not to require the effort that until then had been the key to students' learning. Learning became familiarization in place of mastery.

Progressive education remains enshrined in U.S. education in what I refer to as "the Learn and Lose System," characterized by ten features. Note how each one listed below essentially declares that familiarization, rather than retained learning, is sufficient. To transform education overnight, one need only reverse each of these features:

1. Courses begin and end by plan.
2. No expressed intent to retain a body of knowledge.
3. No complete hard copy kept permanently.
4. Teaching of small pieces not integrated.
5. Recognition-based tests.
6. Personal interest usually irrelevant.
7. Pretest reviews designed to improve scores.
8. Scheduled tests encourage cramming.
9. "Final" exam declares an end-point to effort.
10. Both learning and non-learning equally dismissed.

I applaud Mr. Bernstein's appreciation of the need for a national movement. Progressive education, under whatever name, could make many contributions. A solid starting point, however, would be recognizing the enormous damage it has done and exerting the effort needed to reverse the conditions it has bequeathed to us. Effort properly directed is the coin of advancement, whether in system change or in student learning.
