This column originally appeared at RealClearEducation.com on July 29, 2014.

Over the weekend the New York Times Magazine ran an article titled “Why do Americans Stink at Math?” by Elizabeth Green. The article is as much an explanation of why it’s so hard not to stink as an explication of our problems. But I think that in warning about the rough road of math improvement, the author may not have gone far enough.

The nub of her argument is this: Americans stink at math because the methods used to teach it are rote, don’t transfer to the real world, and lead to shallow understanding. There are pedagogical methods that lead to much deeper understanding. U.S. researchers pioneered these methods, and Japanese student achievement took off when the Japanese educational system adopted them.

Green points to a particular pedagogical method as vital to deeper understanding. Traditional classrooms are characterized by the phrase “I, we, you”: the teacher models a new mathematical procedure, the whole class practices it, and then individual students try it on their own. That’s the method that leads to rote, shallow knowledge. More desirable is “You, Y’all, We”: the teacher presents a problem that students try to solve on their own. Then they meet in small groups to compare and discuss the solutions they’ve devised. Finally, the groups share their ideas with the whole class.

Why don’t U.S. teachers use this method? In the U.S., initiatives to promote such methods are adopted every thirty years or so--New Math in the ’60s, the National Council of Teachers of Mathematics Standards in the late ’80s--but they never gain traction. (Green treats the Common Core as another effort to bring a different pedagogy to classrooms. It may be interpreted that way by some, but it’s a set of standards, not a pedagogical method or curriculum.) Green says there are two main problems: lack of support for teachers, and the fact that teachers must understand math better to use these methods.
I think both reasons are right, but there’s more to it than that. For a teacher who has not used the “You, Y’all, We” method, this is bound to be a radical departure from her experience. A few days of professional development is not remotely enough training, but that’s typical of what American school systems provide. As Green notes, Japanese teachers have significant time built into their week to observe one another teach, and to confer.

Green’s also right when she points out that teaching mathematics in a way that leads to deep understanding in children requires that teachers themselves understand math deeply. As products of the American system, most don’t. Green’s take is that if you hand down a mandate from on high--“teach this way”--with little training, and hand it to people with a shaky grasp of the foundations of math, the result is predictable: you get fuzzy crap in classrooms that’s probably worse than the mindless memorization that characterizes the worst of the “I, we, you” method.

But I think there are other factors that make improving math even tougher than Green says. First, the “You, Y’all, We” method is much harder, and not just because you need to understand math more deeply. It’s more difficult because you must make more decisions during class, in the moment. When a group comes up with a solution that is on the wrong track, what do you do? Do you try to get the class to see where it went wrong right away, or do you let them continue and play out the consequences of their solution? Once you’ve decided that, what exactly will you say to try to nudge them in that direction? As a college instructor, I’ve always thought that it’s a hell of a lot easier to lecture than to lead a discussion. I can only imagine that leading a classroom of younger students is that much harder.

There are also significant cultural obstacles to American adoption of this method. Green notes that Japanese teachers engage in “lesson study” together, in which one teacher presents a lesson and the others discuss it in detail. This is a key solution to the problem I mentioned: teachers discuss how students commonly react during a particular lesson, and discuss the best way to respond. That way, they are not thinking in the moment, but know what to do. The assumption is that teachers are finding, if not the one best way to get an idea across, then a damn good one.
As Green notes, that often gets down to details such as which two-digit numbers to use for a particular example. An expectation goes with this method: that everyone will change their classroom practice according to the outcome of lesson study. This is a significant hit to teacher autonomy, and not one that American teachers are used to. It’s also noteworthy that there is no concept here of honoring or even considering differences among students. It’s assumed they will all do the same work at the same time.

The big picture Green offers is, I think, accurate (even if I might quibble with some details). Most students do not understand math well enough, and the Japanese have offered an example of one way to get there. As much as Green warns of the challenges in Americans broadly emulating this method, I think she may underestimate how hard it would be. It may be more productive to try to find some other technique to give students the math competence we aspire to.

A blog posting over at Schools Matter @ The Chalk Face has gathered a lot of interest--78 comments, many of them outraged.

The New York State Education Dept. has a website that is meant to help teachers prepare for the Common Core Standards. Author Chris Cerrone posted a bit of a 1st grade curriculum module on early civilizations. Here it is:

Cerrone asked primary grade educators to weigh in: "what do you think of the vocabulary contained in this unit of study?"

The responses in the 78 comments were nearly uniformly negative. As you might expect from that volume of commentary, the criticisms were wide-ranging, many of them directed more generally at standardized testing and at the idea of the CCSS itself.

But a lot of the commentary concerned cognitive development, and I want to focus there. This comment was typical.

There is an important idea at the heart of this criticism: developmental stages. This commenter specifically invokes Piaget, but you don't have to be a Piagetian to think that stages are a good way to think about children's thinking. Stage theories hold that children's thinking is relatively stable, but then undergoes a big shift in a relatively brief time (say, a few months) whereupon it stabilizes again.

So lessons would be developmentally inappropriate if they demanded a type of thinking that the child was simply incapable of, given his developmental stage.

I have argued in some detail that stage theories have two major problems: first, data from the last twenty years or so make development look like it's continuous, rather than occurring in discrete stages. Second, children's cognition is fairly variable day to day, even when the same child tries the same task.

I have argued elsewhere that trying to take a psychological finding and use it to draw strong conclusions about instruction--including what children are, in principle, ready for--is fraught with problems. How much more is that true when using a psychological theory rather than an experimental finding.

So if Piaget will not be our guide as to what 1st graders are ready for, what should be?

The experience of early elementary educators, of course, and some of the people commenting on the blog posting are or were first grade teachers. And almost unanimously, they thought this material was inappropriate for first graders. (Some thought kids this age shouldn't be learning about other religions. No argument there; that's a matter of one's values. I'm only talking about what kids can cognitively handle.)

But if we adopt a proof-of-the-pudding-is-in-the-eating criterion, lessons on ancient civilizations are fine because they are in use and children are learning. The material shown above is part of the Core Knowledge sequence, around for more than a decade and used by over a thousand schools. (NB: I'm on the Board of the Core Knowledge Foundation.)

And Core Knowledge is not alone. Another curriculum has had first-graders learn about ancient civilizations not for a decade, but for about a century: Montessori. (NB again: my children experienced these lessons at their school, and my wife teaches them--she's an early elementary Montessori teacher.)

Montessori schools teach the same "Five Great Lessons" at the beginning of first, second, and third grades. They are

The history of the universe and earth

The coming of life

The origins of human beings

The history of signs and writing

The story of numbers and mathematics

Photo from milwaukee-montessori.org

Naturally, these lessons are presented in ways that make sense to young children, but they are far from devoid of content. Montessori educators see them as the foundation and the wellspring of interest for everything to come: biology, geology, mathematics, reading, writing, chemistry and so on.

If it seems impossible or highly unlikely to you that 6 year olds could really get anything out of such lessons, I'll ask you to consider this. Our understanding of any new concept is always incomplete.

For example, how do children learn that some people they hear about (Peter Pan) are made up and never lived, whereas others (the Pharaohs) were real? Not by an inevitable process of neurological maturation that makes their brain "ready" for this information, whereupon they master it quickly. They learn it bit by bit, in fits and starts, sometimes seeming to get it, other times not.

And you can't always wait until children are "ready." Think about mathematics. Children are born understanding numerosity, but they understand it on a logarithmic scale--the difference between five and ten is larger than the difference between 70 and 75. To understand elementary mathematics, they must learn to think of numbers on a linear scale. In this case, teachers have to undo Nature. And if you wait until the child is "developmentally ready" to understand numbers this way, you'll never teach them mathematics. It will never happen.
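The arithmetic behind the logarithmic-scale claim is easy to check for yourself. This is just an illustrative sketch (the numbers match the examples above; nothing here comes from the developmental studies themselves): on a linear scale the gaps 5-to-10 and 70-to-75 are identical, but on a log scale the first gap is about ten times larger.

```python
import math

# On a logarithmic scale, the perceived "distance" between two numbers
# is the difference of their logarithms, not of the numbers themselves.
def log_distance(a, b):
    return abs(math.log(b) - math.log(a))

# Linear distances are equal: 10 - 5 == 75 - 70 == 5.
# Logarithmic distances are not: the gap from 5 to 10 feels far larger.
print(log_distance(5, 10))   # log(10) - log(5) = log(2) ≈ 0.693
print(log_distance(70, 75))  # log(75/70) ≈ 0.069
```

In other words, to a mind working on a log scale, going from 5 to 10 is a doubling, while going from 70 to 75 is barely a change at all.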

In sum, I don't think developmental psychology is a good guide to what children should learn; it provides some help in thinking about how children learn. The best guide to "what" is what children know now, and where you want their learning to head.

A teacher from the UK has just written to me asking for a bit of clarification (EDIT: the email came from Sue Cowley, who is actually a teacher trainer.)

She says that some people are taking my writing on the experiments that have tested predictions of learning styles theories (see here) as implying that teachers ought not to use these theories to inform their practice.

My own learning style is Gangnam

Her reading of what I've written on the subject differs: she thinks I'm suggesting that although the scientific backing for learning styles is absent, teachers may still find the idea useful in the classroom.

The larger issue--the relationship of basic science to practice--is complex enough that I thought it was worth writing a book about it. But I'll describe one important aspect of the problem here.

There are two methods by which one might use learning styles theories to inspire one's practice. The way that scientific evidence bears on these two methods is radically different.

Method 1: Scientific evidence on children's learning is consistent with how I teach.

Teachers inevitably have a theory--implicit or explicit--of how children learn. This theory influences choices teachers make in their practice. If you believe that science provides a good way to develop and update your theory of how children learn, then the harmony between this theory and your practice is one way that you build your own confidence that you're teaching effectively. (It is not, of course, the only source of evidence teachers would consider.)

It would seem, then, that because learning styles theories have no scientific support, we would conclude that practice meant to be consistent with learning styles theories will inevitably be bad practice.

It's not that simple, however. "Inevitably" is too strong. Scientific theory and practice are just not that tightly linked. It's possible to have effective practices motivated by a theory that lacks scientific support. For example, certain acupuncture treatments were initially motivated by theories entailing chakras--energy fields for which scientific evidence is lacking. Still, some treatments motivated by the theory are known to be effective in pain management.

But happy accidents like acupuncture are going to be much rarer than cases in which the wrong theory leads to practices that are either a waste of time or are actively bad. As long as we're using time-worn medical examples, let's not forget the theory of four humors.

Bottom line for Method 1: learning styles theories are not accurate representations of how children learn. Although they are certainly not guaranteed to lead to bad practice, using them as a guide is more likely to degrade practice than improve it.

Method 2: Learning styles as inspiration for practice, not evidence to justify practice.

In talking with teachers, I think this second method is probably more common. Teachers treat learning styles theories not as sacred truth about how children learn, but as a way to prime the creativity pump, to think about new angles on lesson plans.

Scientific theory is not the only source of inspiration for classroom practice. Any theory (or more generally, anything) can be a source of inspiration.

What's crucial is that the inspirational source bears no evidential status for the practice.

In the case of learning styles, a teacher using this method does not say to himself "And I'll do this because then I'm appealing to the learning styles of all my students," even if this was an idea generated by learning styles theories. The evidence that this is a good idea comes from professional judgment, or because a respected colleague reported that she found it effective, or whatever.

Analogously, I may frequently think about Disneyland when planning lessons simply because I think Disneyland is cool and I believe I often get engaging, useful ideas of classroom activities when I think about Disneyland. Disneyland is useful to me, but it doesn't represent how kids learn.

Bottom line for Method 2: Learning styles theories might serve as an inspiration for practice, but they hold no special status as such; anything can inspire practice.

The danger, of course, lies in confusing these two methods. It would never occur to me that a Disneyland-inspired lesson is a good idea because Disneyland represents how kids think. But that slip-of-the-mind might happen with learning styles theories and indeed, it seems to with some regularity.

Ben Goldacre is a British physician and academic, and is the author of Bad Science, an expose of bad medical practice that is based on wrong-headed science. For the last decade he has written a terrific column by the same name for the Guardian.

Goldacre has recently turned his critical scientific eye to educational practices in Britain. He was asked by the British Department for Education to comment on the use of scientific data in education and on the current state of affairs in Britain. You can download the report here.

So what does Goldacre say?

He offers an analogy of education to medicine; the former can benefit from the application of scientific methods, just as the latter has.

Goldacre touts the potential of randomised controlled trials (RCTs). You take a group of students and administer an intervention (a new instructional method for long division, say) to one group and not to another. Then you see how each group of students did.
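The logic of an RCT can be sketched in a few lines. This is a toy simulation, not anything from Goldacre's report: the student IDs, score distribution, and 5-point "boost" for the hypothetical long-division method are all made up, but the structure (random assignment, then a comparison of group outcomes) is the essential part.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

# Forty hypothetical students, randomly assigned to the new
# long-division method (treatment) or business as usual (control).
students = list(range(40))
random.shuffle(students)                     # random assignment is the key step
treatment, control = students[:20], students[20:]

# Simulated post-test scores (illustrative numbers only): mean 70,
# SD 10, with the intervention adding a hypothetical 5 points.
def post_test_score(boost):
    return random.gauss(70 + boost, 10)

treatment_scores = [post_test_score(5) for _ in treatment]
control_scores = [post_test_score(0) for _ in control]

diff = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Estimated effect of the intervention: {diff:.1f} points")
```

Because assignment is random, any systematic difference between the groups' scores can be attributed to the intervention rather than to pre-existing differences among students; that is the whole force of the design.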

Goldacre also speculates on what institutions would need to do to make the British education system as a whole more research-minded. He names two significant changes:

There would need to be an institution that communicates the findings of scientific research (similar to the American "What Works Clearinghouse.")

British teachers would need a better appreciation for scientific research, so that they would understand why a particular practice was touted as superior and could evaluate the evidence for the claim themselves.

I'm a booster of science in education

As someone who has written shorter and book-length treatments of the role that scientific research might play in education, I'm very excited that Goldacre has made this thoughtful and spirited contribution.

I offer no criticisms of what Goldacre suggests, but would like to add three points.

First, I agree with Goldacre that randomized trials allow the strongest conclusions. But I don't think that we should emphasize RCTs to the exclusion of all other sources of data. After all, if we continue with Goldacre's analogy to medicine, I think he would agree that epidemiology has proven useful.

As a matter of tactics, note that the What Works Clearinghouse emphasized RCTs to the near exclusion of all other types of evidence, and that came to be seen as a problem. If you exclude other types of studies the available data will likely be thin. RCTs are simply hard to pull off: they are expensive, they require permission from lots of people. Hence, the What Works Clearinghouse ended up being agnostic about many interventions--"no randomized controlled trials yet." Its impact has been minimal.

Other sources of data can be useful; smaller scale studies, and especially, basic scientific work that bears on the underpinnings of an intervention.

We must also remember that each RCT--strictly interpreted--offers pretty narrow information: method A is better than method B (for these kids, as implemented by these teachers, etc.) Allowing other sources of data in the picture potentially offers a richer interpretation.

Second, basic scientific knowledge gleaned from cognitive and developmental psychology (and other fields) can not only help us to interpret the results of randomized trials, that knowledge can be useful to teachers on its own. Just as a physician uses her knowledge of human physiology to diagnose a case, a teacher can use her knowledge of cognition to "diagnose" how to best teach a particular concept to a particular child.

I don't know about Britain, but this information is not taught in most American schools of Education. I wrote a book about cognitive principles that might apply to education. The most common remark I hear from teachers is surprise (and often, anger) that they were not taught these principles when they trained.

Elsewhere I've suggested we need not just a "what works" clearinghouse to evaluate interventions, but a "what's known" clearinghouse for basic scientific knowledge that might apply to education.

Third, I'm uneasy about the medicine analogy. It too easily leads to the perception that science aims to prescribe what teachers must do, that science will identify one set of "best practices" which all must follow. Goldacre makes clear on the very first page of the report that's NOT what he's suggesting, but non-doctors tend to see medicine this way: I go to my doctor, she diagnoses what's wrong, and there is a standard way (established by scientific method) to treat the disease.

That perception may be in error, but I think it's common.

I've suggested a different analogy: architecture. When building a house, an architect must respect certain basic facts set out by science. Physics and materials science loom large for the architect; for educators it might be psychology, sociology, and so on. The rules represent limiting conditions, but so long as you stay within those boundaries there are many ways to get it right. Just as physics doesn't tell the architect what the house must look like, so too cognitive psychology doesn't tell teachers how they must teach.

RCTs play a different role. They provide proof that a standard solution to a common problem is useful. For example, architects routinely face the problem of ensuring that a wall doesn't collapse when a large window is placed in it, and there are standard solutions to this problem. Likewise, educators face common problems, and RCTs hold the promise of providing proven solutions. Just as the architect doesn't have to use any of the standard methods, the teacher needn't use a method proven by an RCT. But the architect needs to be sure that the wall stays up, and the teacher needs to be sure that the child learns.

There's more to this topic--what it will mean to train teachers to evaluate scientific evidence, the role of schools of education. Indeed, there's more in Goldacre's report and I urge you to read it. Longer term, I urge you to consider why we wouldn't want better use of science in educational practice.

The importance of a good relationship between teacher and student is no surprise. More surprising is that the "human touch" is so powerful it can improve computer-based learning.

In a series of ingenious yet simple experiments, Rich Mayer and Scott DaPra showed that students learn better from an onscreen slide show when it is accompanied by an onscreen avatar that uses social cues.

Eighty-eight college students watched a 4-minute PowerPoint slide show that explained how a solar cell converts sunlight to electricity. It consisted of 11 slides and a voice-over explanation.

Some subjects saw an avatar that used a full complement of social cues (gesturing, changing posture, facial expressions, changes in eye gaze, and lip movements synchronized to speech) meant to direct student attention to relevant features of the slide show. Other subjects saw an avatar that maintained the same posture, kept its eye gaze straight ahead, and did not move (except for lip movements synchronized to speech).

A third group saw no avatar at all, but just saw the slides and listened to the narration.

All subjects were later tested with fact-based recall questions and with transfer questions (e.g., "How could you increase the electrical output of a solar cell?") meant to test subjects' ability to apply their knowledge to new situations.

There was no difference among the three groups on the retention test, but there was a sizable advantage (d = .90) for the high embodiment subjects on the transfer test. (The low-embodiment and no-avatar groups did not differ.)
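For readers unfamiliar with the statistic, d here is Cohen's d: the difference between two group means divided by their pooled standard deviation, with d = 0.8 or above conventionally considered a large effect. A minimal sketch, using made-up scores (not Mayer and DaPra's data), shows the computation:

```python
import statistics

# Cohen's d: difference between two group means, divided by the
# pooled standard deviation of the two groups.
def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical transfer-test scores, for illustration only.
high_embodiment = [8, 9, 7, 9, 8, 10, 9, 8]
low_embodiment = [7, 7, 6, 8, 7, 8, 6, 7]
print(round(cohens_d(high_embodiment, low_embodiment), 2))
```

A d of .90 thus means the high-embodiment group's average transfer score sat nearly a full standard deviation above the other groups' average, which is a substantial difference for a 4-minute manipulation.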

A second experiment showed that the effect was only obtained when a human voice was used; the avatar did not boost learning when synchronized to a machine voice.

The experimenters emphasized the social aspect of the learning situation: students process the slideshow differently because the avatar is "human enough" for them to treat it as a social partner, priming interactions like those learners would use with a real person. This interpretation seems especially plausible in light of the second experiment; all of the more cognitive cues (e.g., the shifts in the avatar's eye gaze prompting shifts in learners' attention) were still present in the machine-voice condition, yet there was no advantage to learners.

There is something special about learning from another person. Surprisingly, that other person can be an avatar.

In primary school, a student's relationship with his or her teacher has a significant impact on the student's academic progress. Students with positive relationships are more engaged and learn more (e.g., Hughes et al, 2008). In addition, teachers are more likely to have negative relationships with boys than with girls (e.g., Hamre & Pianta, 2001).

Previous research has not, however, accounted for the gender of the teacher. Perhaps conflict is more likely when teacher and student are of different sexes, and because there are more female than male teachers, we end up concluding that boys tend not to get along with their teachers.

Dependency: clinginess on the part of the student; sample item: "This child asks for my help when he or she really does not need help."

All in all, the data did not support the idea that boys connect emotionally with male teachers.

For Closeness, female teachers generally felt closer to their students than male teachers. Male teachers did not feel closer to either boys or girls, but female teachers felt closer to girls than they did to boys.

For Conflict, female teachers reported less conflict than male teachers did. Both male and female teachers reported less conflict with girls than with boys.

For Dependency, female teachers reported less dependency than male teachers did. There were no differences between boys and girls on this measure.

This research has been difficult to conduct, simply because most groups of teachers don't have enough male teachers in elementary grades to conduct a meaningful analysis. This is just one study, but the results indicate that all teachers--male and female--have a tougher time with boys. More conflictual relationships are reported with boys than with girls, and female teachers report less close relationships with boys.