
Archive for the ‘Classic Experiments’ Category

Yale Law School is hosting a conference on the Legacy of Stanley Milgram this Saturday. Unsurprisingly, many Situationist Contributors (Thomas Blass, Jon Hanson, Dan Kahan, and Tom Tyler) and Situationist friends (Phoebe Ellsworth, Doug Kysar, and Jaime Napier) will be participating. The conference agenda is below.

Saturday, October 26, 2013
Yale Law School
Sponsored by the Oscar M. Ruebhausen Fund

11:00-12:00: Situationism in law
Jon D. Hanson, Alfred Smart Professor and Director, Project on Law and Mind Sciences, Harvard Law School
Moderator: Douglas Kysar, Joseph M. Field ’55 Professor of Law, Yale Law School

12:00-12:15: Pick up box lunch

12:15-1:00: Reflections on the life and work of Stanley Milgram
Thomas Blass, Professor of Psychology, University of Maryland, Baltimore County, and author of The Man Who Shocked the World
Moderator: Tom Tyler, Macklin Fleming Professor of Law and Professor of Psychology, Yale Law School and Department of Psychology, Yale University

1:00-2:00: Obedience to Authority: Thoughts on Milgram as a filmmaker
Kathryn Millard, Professor of Film and Creative Arts, Department of Media, Music, Communication and Cultural Studies, Macquarie University, Sydney, AU
Moderator: Sarah Ryan, Empirical Research Librarian & Lecturer in Legal Research, Yale Law School

2:00-3:00: Inattentive bureaucrats or engaged followers? Understanding Milgram’s subjects
S. Alex Haslam, Professor of Psychology and ARC Laureate Fellow, School of Psychology, The University of Queensland, St. Lucia, AU
Moderator: Jaime Napier, Assistant Professor of Psychology, Department of Psychology, Yale University

3:00-4:00: Milgram’s legacy in social psychology
Phoebe Ellsworth, Frank Murphy Distinguished University Professor of Law and Psychology, University of Michigan
Moderator: Dan Kahan, Elizabeth K. Dollard Professor of Law and Professor of Psychology, Yale Law School

According to the headlines, social psychology has had a terrible year—and, at any rate, a bad week. The New York Times Magazine devoted nearly seven thousand words to Diederik Stapel, the Dutch researcher who committed fraud in at least fifty-four scientific papers, while Nature just published a report about another controversy, questioning whether some well-known “social-priming” results from the social psychologist Ap Dijksterhuis are replicable. Dijksterhuis famously found that thinking about a professor before taking an exam improves your performance, while thinking about a soccer hooligan makes you do worse. Although nobody doubts that Dijksterhuis ran the experiment that he says he did, it may be that his finding is either weak or simply wrong—perhaps the peril of a field that relies too heavily on the notion that if something is statistically likely, it can be counted on.

Things aren’t quite as bad as they seem, though. Although Nature’s report was headlined “Disputed results a fresh blow for social psychology,” it scarcely noted that there have been some replications of experiments modelled on Dijksterhuis’s phenomenon. His finding could still turn out to be right, if weaker than first thought. More broadly, social priming is just one thread in the very rich fabric of social psychology. The field will survive, even if social priming turns out to have been overrated or an unfortunate detour.
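The statistical point can be made concrete with a quick simulation: a real but small effect that clears the significance bar once will often fail to do so again in a same-sized replication, simply because statistical power is low. The effect size, sample size, and test in this sketch are illustrative assumptions, not figures from Dijksterhuis’s studies.

```python
import random
import statistics

def simulate_replication(true_effect=0.25, n=20, trials=5000, seed=1):
    """Fraction of 'significant' original studies whose same-sized exact
    replication is also significant, given a real but small effect
    (Cohen's d = true_effect). All numbers here are illustrative."""
    rng = random.Random(seed)
    significant_originals = 0
    successful_replications = 0
    for _ in range(trials):
        outcomes = []
        for _study in range(2):  # an original study and one replication
            treatment = [rng.gauss(true_effect, 1) for _ in range(n)]
            control = [rng.gauss(0.0, 1) for _ in range(n)]
            diff = statistics.mean(treatment) - statistics.mean(control)
            se = (statistics.variance(treatment) / n
                  + statistics.variance(control) / n) ** 0.5
            outcomes.append(diff / se > 1.96)  # rough one-sided z test
        if outcomes[0]:
            significant_originals += 1
            if outcomes[1]:
                successful_replications += 1
    return successful_replications / significant_originals

print(f"replication rate: {simulate_replication():.2f}")
```

With these illustrative numbers, only a small fraction of “significant” originals replicate even though the effect is genuinely real; a failed replication, by itself, does not prove the original finding was wrong.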

Even if this one particular line of work is under a shroud, it is important not to lose sight of the fact that many of the old standbys from social psychology have been endlessly replicated, like the Milgram effect—the old study of obedience in which subjects turned up electrical shocks (or what they thought were electrical shocks) all the way to four hundred and fifty volts, apparently causing great pain to another person, simply because they’d been asked to do it. Milgram himself replicated the experiment numerous times, in many different populations, with groups of differing backgrounds. It is still robust (in the hands of other researchers) nearly fifty years later. And even today, people are still extending that result; just last week I read about a study in which intrepid experimenters asked whether people might administer electric shocks to robots, under similar circumstances. (Answer: yes.)

More importantly, there is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences.

Read the rest of the article, including more about the importance of this shift toward encouraging replication, here.


Situationist friend Dave Nussbaum continues to write terrific posts over at Random Assignments. Below, we have re-blogged portions of his recent post about how President Obama’s support of gay marriage led Republicans to become more opposed to it.

Yesterday, Andrew Sullivan posted a new Washington Post/ABC News poll tracking changes in approval for legalizing same sex marriage. Sullivan noted that following Obama’s announcement this month that his support of equal rights for same sex couples has “evolved” into support for marriage, there has been a rise in support for legalizing gay marriage among Democrats and Independents. Meanwhile, among Republicans the reverse is true:

“As the country as a whole grows more supportive of gay equality, the GOP is headed in the other direction. Republican support for marriage equality has declined a full ten points just this year – a pretty stunning result. Have they changed their mind simply because Obama supports something? In today’s polarized, partisan climate, I wouldn’t be surprised.”

I wouldn’t be surprised either. This is how partisans often react to anything coming from the other side: whatever it is, they don’t like it. Partisans will argue that they are opposed to whatever it is the other side is proposing purely on its merits. We all like to believe that when we evaluate a policy we are responding to the policy’s content, but very often we’re far more influenced by who is proposing it.

For example, in a pair of studies published in 2002, Lee Ross and his colleagues asked Israeli participants to evaluate a peace proposal that was an actual proposal submitted by either the Israeli or the Palestinian side. The trick they played was that, for some participants, they showed them the Israeli proposal and told them it was the Palestinian one, or they showed them the Palestinian proposal and told them it came from the Israeli side (the other half of participants saw a correctly attributed proposal). What they found was that the actual content of the plan didn’t matter nearly as much as whose plan they thought it was. In fact, Israeli participants felt more positively toward the Palestinian plan when they thought it came from the Israeli side than they did toward the Israeli plan when they thought it came from the Palestinians. Let me repeat that: when the plans’ authorship was switched, Israelis liked the Palestinian proposal better than the Israeli one.

The same is true when it comes to Democrats and Republicans. In a series of studies published by Geoffrey Cohen in 2003 (PDF), he asked liberals and conservatives to evaluate both a generous and a stringent proposed welfare policy. Although liberals tend to prefer a generous welfare policy and conservatives tend to prefer a more stringent one, the actual content of the policy mattered far less than who proposed it. Not only were liberal participants perfectly happy to support a stringent policy when it was proposed by their own party (while the reverse was true for conservative participants), but neither side was aware of the influence of the source of the policy proposal. So even though their partisan affiliations were more important than the content of the policy, both liberal and conservative participants claimed that they were basing their evaluations of the welfare policy strictly on its content. New research by Colin Tucker Smith and colleagues, published in the current issue of the journal Social Cognition (4), suggests that the influence of the policy’s source on our evaluation of the policy’s content happens at an automatic level and can happen without our awareness.

So perhaps it should not be terribly surprising that President Obama’s support for marriage equality has led to increased support among Democrats and more opposition from Republicans. . . . [continued]

The 1950s were a bleak time if you were a social psychologist interested in the empirical study of thoughts and feelings and how they affect human behavior. At that time, experimental psychology was dominated by behaviorism, an approach which focused exclusively on observable behavior, exiling ephemeral concepts like beliefs and emotions outside the boundaries of proper science. But things were about to change.

The Theory of Cognitive Dissonance, published by Leon Festinger in 1957, was one of those things. The theory was based on the simple idea that when a person simultaneously holds two conflicting beliefs he will experience a feeling of discomfort – cognitive dissonance – and that he will be motivated to end that discomfort by reducing the conflict between the beliefs, often by changing one of them.

Today, the term cognitive dissonance has entered our vernacular and the idea that we change or discard beliefs that don’t suit us seems like common sense. Research on how people rationalize their beliefs has spread to political science, medicine, neuroscience, and the law, and is one of the cornerstones of our understanding of human psychology. But in 1957, at a time when the field of psychology was dominated by behaviorism, the notion was far more controversial. Luckily, Leon Festinger and his colleagues and students conducted numerous experiments that tested predictions derived from Cognitive Dissonance Theory that could not be accounted for by behaviorist principles.

One of my favorites among these experiments (PDF), published by Elliot Aronson and Judson Mills in 1959, had college women reading obscene words out loud (words so obscene that I don’t feel comfortable writing them here myself, but the F word is in there, as is a four-letter word that also means rooster, and remember, this was 1959!). The women were reading these words as an initiation to get into a discussion group about the psychology of sex – they had to prove they were not going to be too embarrassed to take part in the conversation. This was the “severe initiation” condition. Another group of women recited a milder list of words (e.g., prostitute, virgin); this was the “mild initiation” condition. The women then heard a recording of a discussion by the group to which they had gained entry – as it turned out, the discussion was, according to the study’s authors, “one of the most worthless and uninteresting discussions imaginable.” The question was which group of women would like the psychology of sex discussion group more, the ones who had to undergo the severe initiation or the mild one?

On his blog, The Natural Unconscious, Situationist Contributor John Bargh has posted a long response to an article written by a group of social psychologists who were unable to replicate one of Bargh’s classic studies. Here’s the opening paragraph of Bargh’s post:

Scientific integrity in the era of pay-as-you-go publications and superficial online science journalism. What prompts the return of the blog is a recent article titled “Behavioral Priming: It’s All in the Mind, but Whose Mind?” by Stéphane Doyen, Olivier Klein, Cora-Lise Pichon, and Axel Cleeremans. The researchers reported that they could not replicate our lab’s 1996 finding that priming (subtly activating in the minds of our college-age experimental participants, without their awareness) the stereotype of the elderly caused participants to walk more slowly when leaving the experiment. We had predicted this effect based on emerging theory and evidence that perceptual mental representations were intimately linked with behavioral representations, a finding that is very well established now in the field (see below). Following their failure to replicate, Doyen et al. went on to show that if the experimenter knew the hypothesis of the study, they were able to then find the effect. Their conclusion was that experimenter expectancies or awareness of the research hypotheses had therefore produced the effect in our original 1996 study as well—in other words, that there was no actual unconscious stereotype effect on the participants’ behavior.

Cruelty, violence, badness… This episode of Radiolab, we wrestle with the dark side of human nature, and ask whether it’s something we can ever really understand, or fully escape.

We begin with a chilling statistic: 91% of men, and 84% of women, have fantasized about killing someone. We take a look at one particular fantasy lurking behind these numbers, and wonder what this shadow world might tell us about ourselves and our neighbors. Then, we reconsider what Stanley Milgram’s famous experiment really revealed about human nature (it’s both better and worse than we thought). Next, we meet a man who scrambles our notions of good and evil: chemist Fritz Haber, who won a Nobel Prize in 1918…around the same time officials in the US were calling him a war criminal. And we end with the story of a man who chased one of the most prolific serial killers in US history, then got a chance to ask him the question that had haunted him for years: why?

For generations, social psychology students have read that Norman Triplett did the first social psychology experiment in 1898, when he found that children reeled in a fishing line faster when they were in the presence of another child than when they were alone.

But almost everything about that sentence is wrong. The new paper’s author, Wolfgang Stroebe of Utrecht University in the Netherlands, had recently published a handbook on the history of social psychology (with Arie W. Kruglanski) when he came across a 2005 reanalysis of Triplett’s data and dug further.

It turned out that the children in the study were turning a reel, but not reeling in a fishing line, and that Triplett was studying whether children performed better with competition. For his study, he eyeballed the data—an acceptable scientific practice in the 19th century—and decided that some children performed better when competing, some performed worse, and others were not affected. The 2005 analysis found that these results were not statistically significant by modern standards.

So the modern textbooks have the details of the study wrong. But they’re also wrong that Triplett was the first psychologist to look at how people are affected by each other.

In the 1880s, Max Ringelmann studied whether workers pulled harder when they were together than when they worked alone. In 1894, Binet and Henri published a study of social influence among children, and in 1887 Charles Féré authored a book that described experiments on how the presence of others could increase individual performance. But the field didn’t find its modern identity until 1924, says Stroebe, when Floyd Allport published a textbook defining social psychology as the experimental study of social behavior.

“I think the more interesting fact is that in the 1890s so many authors tried to answer questions relevant to social psychology with experimental methods,” Stroebe says. “This is much more important than to figure out who was really the first author.”

It’s time to fix the textbooks, Stroebe says. “I especially tried to get the article into a major journal in the hope that authors will take more notice of it than of articles published in historical journals.” He thinks his paper is important even though it isn’t at the cutting edge of research. “I was trained many decades ago in a period where one would have considered correcting the history of the origin of an important subfield of psychology to be important,” Stroebe writes in the conclusion of his article. “We even had a word for it. We called it scholarship.”

During my first several weeks as Stanley Milgram’s research assistant, I did the sorts of things that research assistants often do.

I transcribed Milgram’s dictations and drafts of research procedures into neatly typed pages. I began to keep files of research volunteers: their age, educational background, occupation, address, and phone number. I helped Milgram audition amateur actors for the important role of “experimenter” and the nearly-as-important role of “learner,” the research confederate whom we started to call the “victim.”

The real volunteers would be playing the role of “teacher” in what appeared to be an experiment where electric shocks were used to speed the learning of simple word pairs. As you probably know by now, 50 years later, the victim only pretended to be shocked and the experiment really measured obedience to authority.

When everything was ready for the first volunteers to assume the role of teacher and for the pretend-learner to become the victim, Milgram celebrated by inviting me to join him behind the large two-way mirrors in the Social Interaction Lab.

Observing the unfolding drama as I sat beside Stanley was not part of my official job description. But for the rest of that 1961 summer, I would work all day at scheduling subjects and doing other necessary support tasks, walk to my nearby apartment for a quick dinner, then return to the lab to watch what would happen next.

Neither Stanley nor I had any clear idea what moral dramas the next several hours would display. But in contrast to the artificial circumstances of most psychological experiments I had studied in graduate school until then, this felt like real life – this situation where every subject had to decide over and over again whether to administer the next higher shock on the shock board to another human being who was screaming in apparent pain.

What sorts of decisions did we see the teachers make? Over and over again they chose to obey the experimenter’s commands to shock the victim whenever he failed to remember the correct word pair. And on command they administered higher and higher voltage levels – or so the labels on the shock machine’s switches said, and the victim’s increasing screams appeared to confirm those levels.

Stanley and I had both expected modest levels of obedience at most. These were, after all, ordinary middle-class Americans, not chosen for either sadistic or rebellious tendencies.

But all teachers in the basic experimental situation went at least to 300 volts on the shock board, approaching the level labeled “Extreme Intensity Shock.” A stunning two-thirds of teachers obeyed all the way to the end of the shock board – 450 volts, “Danger Severe Shock X X X,” the red-lettered labels said.

Some subjects wept as they administered the higher-level shocks. Others smirked or giggled or laughed hysterically; still others sweated profusely or clenched their teeth or pulled their hair. But for the most part they obeyed.

Stanley and I were both appalled at such levels of obedience, but we could not ignore or deny what we saw. The subjects were sitting there right in front of us, very visible through the two-way mirrors in the bright laboratory illumination as their fingers depressed one shock switch after another.

Stanley Milgram remained a valued friend and mentor to me and to many others until his death at age 51, from a heart attack much like the one that had killed his father at a similar age. Stanley wrote clearly and thoughtfully about his research, especially in his book “Obedience to Authority” and his collection of essays, “The Individual in a Social World.”

I have been granted two decades longer to continue my own research and writing, some of it related to obedience, much of it struggling to understand the pioneering geniuses in a variety of fields. Most of my writing about the genius of Stanley Milgram can be found on my website, http://starcraving.com, especially in the section titled “Social Psychology.”

Everything was ready. The electrode was in place, threaded between the two hemispheres of a living cat’s brain; the instruments were tuned to pick up the chatter passing from one half to the other. The only thing left was to listen for that electronic whisper, the brain’s own internal code.

The amplifier hissed — the three scientists expectantly leaning closer — and out it came, loud and clear.

“We all live in a yellow submarine, yellow submarine, yellow submarine ….”

Dr. Gazzaniga, 71, now a professor of psychology at the University of California, Santa Barbara, is best known for a dazzling series of studies that revealed the brain’s split personality, the division of labor between its left and right hemispheres. But he is perhaps next best known for telling stories, many of them about blown experiments, dumb questions and other blunders during his nearly half-century career at the top of his field.

Now, in lectures and a new book, he is spelling out another kind of cautionary tale — a serious one, about the uses of neuroscience in society, particularly in the courtroom.

Brain science “will eventually begin to influence how the public views justice and responsibility,” Dr. Gazzaniga said at a recent conference here sponsored by the Edge Foundation.

And there is no guarantee, he added, that its influence will be a good one.

For one thing, brain-scanning technology is not ready for prime time in the legal system; it provides less information than people presume.

For another, new knowledge about neural processes is raising important questions about human responsibility. Scientists now know that the brain runs largely on autopilot; it acts first and asks questions later, often explaining behavior after the fact. So if much of behavior is automatic, then how responsible are people for their actions?

Who’s driving this submarine, anyway?

In his new book, “Who’s in Charge? Free Will and the Science of the Brain,” being published this month by Ecco/HarperCollins, Dr. Gazzaniga (pronounced ga-ZAHN-a-ga) argues that the answer is hidden in plain sight. It’s a matter of knowing where to look.

* * *

He began thinking seriously about the nature of responsibility only after many years of goofing off.

Mike Gazzaniga grew up in Glendale, Calif., exploring the open country east of Los Angeles and running occasional experiments in his garage, often with the help of his father, a prominent surgeon. It was fun; the experiments were real attempts to understand biochemistry; and even after joining the Alpha Delta Phi fraternity at Dartmouth (inspiration for the movie “Animal House”), he made time between parties and pranks to track who was doing what in his chosen field, brain science.

In particular, he began to follow studies at the California Institute of Technology suggesting that in animals, developing nerve cells are coded to congregate in specific areas in the brain. This work was captivating for two reasons.

First, it seemed to contradict common wisdom at the time, which held that specific brain functions like memory were widely — and uniformly — distributed in the brain, not concentrated in discrete regions.

Second, his girlfriend was due to take a summer job right there near Caltech.

He decided to write a letter to the director of the program, the eminent neurobiologist Roger Wolcott Sperry (emphasizing reason No. 1). Could Dr. Sperry use a summer intern? “He said sure,” Dr. Gazzaniga said. “I always tell students, ‘Go ahead and write directly to the person you want to study with; you just never know.’ ”

At Caltech that summer after his junior year, he glimpsed his future. He learned about so-called split-brain patients, people with severe epilepsy who had surgery cutting the connections between their left and right hemispheres. The surgery drastically reduced seizures but seemed to leave people otherwise unaffected.


This essay was published originally in the online version of the APS Observer:

This year is the 50th anniversary of the start of Stanley Milgram’s groundbreaking experiments on obedience to destructive orders — the most famous, controversial and, arguably, most important psychological research of our times. To commemorate this milestone, in this article I present the key elements comprising the legacy of those experiments.

Milgram was a 28-year-old junior faculty member at Yale University when he began his program of research on obedience, supported by grants from the National Science Foundation (NSF), which lasted from August 7, 1961 through May 27, 1962.

As we know, in his obedience experiments Milgram made the startling discovery that a majority of his subjects — average and, presumably, normal community residents — were willing to give a series of what they believed were increasingly painful and, perhaps, harmful electric shocks to a vehemently protesting victim simply because they were commanded to do so by an authority (although no shock was actually given). They did this despite the fact that the experimenter had no coercive powers to enforce his commands and the person they were shocking was an innocent victim who did nothing to merit such punishment. Although Milgram conducted over 20 variations of his basic procedure, his central finding obtained in several standard, or baseline, conditions was that about two-thirds of the subjects fully obeyed the experimenter, progressing step-by-step up to the maximum shock of 450 volts.

First and foremost, the obedience experiments taught us that we have a powerful propensity to obey authority. Did we need Milgram to tell us this? Of course not. What he did teach us is just how strong this tendency is — so strong, in fact, that it can make us act in ways contrary to our moral principles.

Milgram’s findings provided a powerful affirmation of one of the main guiding principles of contemporary social psychology: That often it is not the kind of person we are that determines how we act, but rather the kind of situation we find ourselves in. To perceive behavior as flowing from within — from our character or personality — is to paint an incomplete picture of the determinants of our behavior. Milgram showed that external pressures coming from a legitimate authority can make us behave in ways we would not even consider when acting on our own.

Foreshadowing the widespread attention the obedience experiments were to receive was an early article appearing in the New York Times, titled “Sixty-five Percent in Test Blindly Obey to Inflict Pain,” right after the publication of Milgram’s first journal report. Although Milgram had just begun his academic career and he would go on to do other innovative research studies — such as “The small-world problem” and “The lost letter technique” — they would always be overshadowed by the obedience work. Of the 140 or so talks he gave during his lifetime, more than a third dealt with obedience. His book Obedience to Authority: An Experimental View has been translated into 11 languages.

I believe that one of the most important aspects of Milgram’s legacy is that, in demonstrating our extreme readiness to obey authorities, he has identified one of the universals, or constants, of human behavior, straddling time and place. I have done two analyses to support this contention. In one, I correlated the results of Milgram’s standard obedience experiments and the replications conducted by others with their date of publication. The results: There was absolutely no relationship between when a study was conducted and the amount of obedience it yielded. In a second analysis, I compared the outcomes of obedience experiments conducted in the United States with those conducted in other countries. Remarkably, the average obedience rates were very similar: In the U.S. studies, some 61 percent of the subjects were fully obedient, while elsewhere the obedience rate was 66 percent.
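Blass’s two analyses are simple enough to sketch in code. The obedience rates and years below are made-up placeholders, not his actual data (which come from his published meta-analyses); the sketch only shows the shape of the computation: a Pearson correlation between publication year and obedience rate, and a comparison of mean U.S. and non-U.S. rates.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical replication data: (year published, % fully obedient, in U.S.?)
studies = [
    (1963, 65, True), (1968, 61, True), (1971, 62, False),
    (1974, 58, True), (1977, 67, False), (1981, 64, False),
    (1985, 60, True), (2009, 63, True),
]

years = [s[0] for s in studies]
rates = [s[1] for s in studies]
print("year-rate correlation:", round(pearson_r(years, rates), 2))

us = [r for _, r, in_us in studies if in_us]
abroad = [r for _, r, in_us in studies if not in_us]
print("mean U.S. rate:", statistics.mean(us))
print("mean non-U.S. rate:", statistics.mean(abroad))
```

On the real data, Blass reports essentially no correlation with year and closely matched U.S. and non-U.S. means (about 61 versus 66 percent), which is the basis for the universality claim above.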

A more recent, modified replication of one of Milgram’s conditions (Exp. #5, “A new base-line condition”), conducted by Jerry Burger, a social psychologist at Santa Clara University, supports the universality argument. Burger’s replication added safeguards not contained in Milgram’s original experiment. Although carried out 45 years after Milgram conducted the original Exp. #5, Burger’s findings did not differ significantly from Milgram’s.

From the beginning, the obedience studies have been embroiled in controversy about their ethics. They were vilified by some and praised by others. A well-known ethicist commented rhetorically: “Is this perhaps going too far in what one asks a subject to do and how one deceives him?” A Welsh playwright expressed his disdain by arguing that many people “may feel that in order to demonstrate that subjects may behave like so many Eichmanns, the experimenter had to act the part, to some extent, of a Himmler.” On the other hand, Milgram received supportive letters from fellow social psychologists such as Elliot Aronson and Philip Zimbardo. And in 1964, the American Association for the Advancement of Science (AAAS) awarded him its annual social psychology award for his most complete report on the experiments up to that time, “Some Conditions of Obedience and Disobedience to Authority.”

The furor stirred up by the obedience experiments, together with a few other ethically problematic studies, has resulted in a greater sensitivity to the well-being of the human research participant today. More concretely, the obedience experiments are generally considered one of the handful of controversial studies that led Congress to enact the National Research Act in 1974, which mandated the creation of Institutional Review Boards (IRBs). Harold Takooshian, one of Milgram’s outstanding students at CUNY, recalls him saying that “IRBs are an impressive solution to a non-problem.”

A distinctive aspect of the legacy of the obedience experiments is that they can be applied to real life in a number of ways. They provide a reference point for certain phenomena that, on the face of it, strain our understanding — thereby, making them more plausible. For example, Milgram’s findings can help us fathom how it was possible for managers of fast-food restaurants throughout the United States to fall for a bizarre hoax over a nine-year period between 1995 and 2004. In a typical case, the manager of an eatery received a phone call from a man claiming to be a police officer, who ordered him to strip-search a female employee who supposedly stole a pocketbook. In over 70 instances, the manager obeyed the unknown caller.

The implications of Milgram’s research have been greatest for understanding the Holocaust. In his book “Ordinary Men,” Christopher Browning, a historian, describing the behavior of a Nazi mobile unit roaming the Polish countryside that killed 38,000 Jews in cold blood at the bidding of their commander, concluded that “many of Milgram’s insights find graphic confirmation in the behavior and testimony of the men of Reserve Police Battalion 101.”

Legal scholarship and practice have made wide use of the obedience studies. Several Supreme Court briefs, as well as over 350 law reviews, have referenced them. The U.S. Army also has taken the lessons of Milgram’s research to heart. In response to a letter-writer’s question in December 1985, the head of the Department of Behavioral Sciences and Leadership at West Point wrote: “All cadets…are required to take two psychology courses…. Both of these courses discuss Milgram’s work and the implications of his findings.”

There is typically a gray cloud of gloom hovering over any discussions of Milgram’s research. This is not surprising since Milgram himself repeatedly and almost exclusively drew troubling implications. So let me end on a more positive note.

Milgram recognized that obedience is a necessary element of civilized society. As he once wrote: “We cannot have society without some structure of authority, and every society must inculcate a habit of obedience in its citizens.” So, once he felt that he had probed the destructive side of obedience in sufficient detail, he was ready to turn his attention to its positive aspects.

Milgram submitted a continuation grant proposal to NSF in early 1962, after he had completed almost all of the experimental conditions dealing with destructive obedience. One of the proposed experiments he listed in that grant proposal was titled “Constructive Obedience.” The proposal was approved only in modified form, with reduced funding, so Milgram never carried out such an experiment. Nonetheless, the fact that he planned it is informative, because it suggests that Milgram thought the unexpected strength of the obedient tendencies he had discovered so far was just one part of a more general, full-spectrum predisposition.

Embodied cognition, the idea that the mind is not only connected to the body but that the body influences the mind, is one of the more counter-intuitive ideas in cognitive science. In sharp contrast is dualism, a theory of mind famously put forth by René Descartes in the 17th century when he claimed that “there is a great difference between mind and body, inasmuch as body is by nature always divisible, and the mind is entirely indivisible… the mind or soul of man is entirely different from the body.” In the succeeding centuries, the notion of the disembodied mind flourished. From it, western thought developed two basic ideas: reason is disembodied because the mind is disembodied, and reason is transcendent and universal. However, as George Lakoff and Rafael Núñez explain:

Cognitive science calls this entire philosophical worldview into serious question on empirical grounds… [the mind] arises from the nature of our brains, bodies, and bodily experiences. This is not just the innocuous and obvious claim that we need a body to reason; rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment… Thus, to understand reason we must understand the details of our visual system, our motor system, and the general mechanism of neural binding.

What exactly does this mean? It means that our cognition isn’t confined to our cortices. That is, our cognition is influenced, perhaps determined by, our experiences in the physical world. This is why we say that something is “over our heads” to express the idea that we do not understand; we are drawing on the physical inability to see something over our heads and linking it to the mental feeling of uncertainty. It is also why we associate warmth with affection: as infants and children, the subjective judgment of affection almost always corresponded with the sensation of warmth, giving rise to metaphors such as “I’m warming up to her.”

Embodied cognition has a relatively short history. Its intellectual roots date back to the early 20th-century philosophers Martin Heidegger, Maurice Merleau-Ponty, and John Dewey, but it has only been studied empirically in the last few decades. One of the key figures in the empirical study of embodiment is University of California at Berkeley professor George Lakoff.

Lakoff was kind enough to field some questions over a recent phone conversation, where I learned about his interesting history first hand. After taking linguistics courses under Chomsky at MIT in the early 1960s, where he majored in English and Mathematics, he went on to study linguistics in grad school at Indiana University. It was a different world back then, he explained: “it was the beginning of computer science and A.I., and the idea that thought could be described with formal logic dominated much of philosophical thinking. Turing machines were popular discussion topics, and the brain was widely understood as a digital computational device.” Essentially, the mind was thought of as a computer program separate from the body, with the brain as general-purpose hardware.

Chomsky’s theory of language as a series of meaningless symbols fit this paradigm. It was a view of language in which grammar was independent of meaning or communication. In contrast, in 1963 Lakoff found examples showing that grammar was dependent on meaning. From this observation he constructed a theory called Generative Semantics, which was also disembodied, but which built logical structures into grammar itself.

To be sure, cognitive scientists weren’t dualists like Descartes: they didn’t actually believe that the mind was physically separate from the body, but they didn’t think that the body influenced cognition. And it was during this time, throughout the ’60s and ’70s, that Lakoff recognized the flaws of thinking about the mind as a computer and began studying embodiment.

The tipping point came after attending four talks that hinted at embodied language at Berkeley in the summer of 1975. In his words, they forced him to “give up and rethink linguistics and the brain.” This prompted him and a group of colleagues to start cognitive linguistics, which, contrary to Chomskyan theory and the entire mind-as-computer paradigm, held that “semantics arose from the nature of the body.” Then, in 1978, he “discovered that we think metaphorically,” and spent the next year gathering as many metaphors as he could find.

Many cognitive scientists accepted his work on metaphors even though it opposed much of mainstream thought in philosophy and linguistics. He caught a break on January 2, 1979, when he got a call from Mark Johnson . . . . What came next was one of the more groundbreaking books in cognitive science. After co-writing a paper for the Journal of Philosophy in the spring of 1979, Lakoff and Johnson began working on Metaphors We Live By, and managed to finish it three months later.

Their book extensively examined how, when and why we use metaphors. Here are a few examples. We understand control as being UP and being subject to control as being DOWN: we say, “I have control over him,” “I am on top of the situation,” “He’s at the height of his power,” “He ranks above me in strength,” “He is under my control,” and “His power is on the decline.” Similarly, we describe love as being a physical force: “I could feel the electricity between us,” “There were sparks,” and “They gravitated to each other immediately.” Some of their examples reflected embodied experience. For example, Happy Is Up and Sad Is Down, as in “I’m feeling up today,” and “I’m feeling down in the dumps.” These metaphors are based on the physiology of emotions, which researchers such as Paul Ekman have documented. It’s no surprise, then, that around the world, people who are happy tend to smile and perk up while people who are sad tend to droop.

Metaphors We Live By was a game changer. Not only did it illustrate how prevalent metaphors are in everyday language, it also suggested that a lot of the major tenets of western thought, including the idea that reason is conscious and passionless and that language is separate from the body aside from the organs of speech and hearing, were incorrect. In brief, it demonstrated that “our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.”

After Metaphors We Live By was published, embodiment slowly gained momentum in academia. In the 1990s, dissertations by Christopher Johnson, Joseph Grady and Srini Narayanan led to a neural theory of primary metaphors. They argued that much of our language comes from physical interactions during the first several years of life, as the Affection Is Warmth metaphor illustrates. There are many other examples: we equate up with control and down with being controlled because stronger people and objects tend to control us, and we understand anger metaphorically in terms of heat, pressure, and loss of physical control because when we are angry our physiology changes (e.g., skin temperature increases, heart rate rises, and physical control becomes more difficult).

* * *

As Lakoff points out, metaphors are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior. For example, in a study done by Yale psychologist John Bargh, participants holding warm as opposed to cold cups of coffee were more likely to judge a confederate as trustworthy after only a brief interaction. Similarly, at the University of Toronto, “subjects were asked to remember a time when they were either socially accepted or socially snubbed. Those with warm memories of acceptance judged the room to be 5 degrees warmer on the average than those who remembered being coldly snubbed. Another effect of Affection Is Warmth.” This means that we both physically and literally “warm up” to people.

The last few years have seen many complementary studies, all of which are grounded in primary experiences:

• Thinking about the future caused participants to lean slightly forward, while thinking about the past caused participants to lean slightly backwards. Future Is Ahead.

• Those who held heavier clipboards judged currencies to be more valuable and their opinions and leaders to be more important. Important Is Heavy.

• Subjects asked to think about a moral transgression like adultery or cheating on a test were more likely to request an antiseptic cloth after the experiment than those who had thought about good deeds. Morality Is Purity.

Studies like these confirm Lakoff’s initial hunch – that our rationality is greatly influenced by our bodies in large part via an extensive system of metaphorical thought. How will the observation that ideas are shaped by the body help us to better understand the brain in the future?

Last week, Phil Zimbardo delivered another remarkable lecture at Harvard Law School — this time tracing his journey from studying evil to inspiring heroism. We hope to post that video in several weeks. For his introduction, Situationist Editor Jon Hanson assembled this short video comparing Professor Zimbardo’s Prison Experiment and Professor Kingsfield’s Harvard Law School (The Paper Chase), both of which reached their 40th anniversary this year.

“The experiment requires that you continue. It is absolutely essential that you continue. You have no other choice, you must go on.”

These were the words spoken to participants of Yale professor Stanley Milgram’s social psychology experiment testing obedience to authority figures. Milgram’s experiment, conducted at Yale in the early 1960s, was one of the most controversial studies in the history of psychology and remains so today — 50 years since the experiment took place.

“This was a landmark study in psychology and in Yale history,” said psychology professor Jack Dovidio. “He had a profound impact on the public recognition, appreciation and, in some ways, concern of the power of psychology.”

“The Milgram experiment,” as it is now called, was designed to observe the extent to which individuals would perform acts that violated their personal conscience when under orders from an authority figure. Milgram hoped such research might explain how the German people allowed for the terrible war crimes committed in the Holocaust, Milgram wrote in his 1974 book “Obedience to Authority.”

During the experiment, a scientist — the “authority figure” — ordered participants to ask another individual a series of questions and administer increasingly painful electric shocks for every wrong answer. The intensity of the shocks started at a level of mild pain when the experiment began but could be built up to lethal doses of electricity as the experiment continued. Unbeknownst to the participant, the setup was fake — there was no real electricity shocking anyone, all other people in the experiment were actors, and the actual purpose of the study was to observe how much pain the participant would inflict under orders. Milgram found that 65 percent of participants administered the final, lethal shock.

The results of the Milgram experiment, published in the December 1963 issue of the Journal of Abnormal and Social Psychology, stunned the public, Dovidio said.

“Much of the public at the time criticized that psychology only told us things about human nature we already knew,” Dovidio said. “This showed there are a lot of things we really don’t know that are important to everyday life.”

In 1963, Milgram told the News that the experiment, which used 43 Yalies as participants and took place in Linsly-Chittenden Hall, reduced several “naturally poised” undergraduates to “twitching, stuttering wrecks, on the verge of nervous collapse.” In the process, Milgram said they proved themselves willing to obey people in positions of higher authority, even suggesting that they would agree to drop a bomb or push a button launching an atomic missile.

Milgram tested over 1,000 men from the Yale and New Haven community, some of whom he said fell into fits of “bizarre” laughter and flashed “unnatural smiles” as they pressed buttons marked “Danger: Severe Shock.”

Equally chilling as these accounts were the questions Milgram’s procedure raised about human testing in psychology. Milgram’s study incited national controversy and led in part to major human testing regulation reform from Yale administrators and the federal government.

“At the time, we didn’t have ethics committees or even consent forms for these tests,” Dovidio said. “Milgram’s study made people think more seriously about the ethics of research.”

By 1980, Yale had instituted reforms mandating that any experiment using paid subjects receive approval by a six-member Committee for the Protection of Human Subjects, and much tighter rules were put in place limiting the degree of deception that could be used in an experiment, a 1980 article in the News stated.

Throughout the reforms, Yale students did not forget Milgram’s role in the controversy. In a 1979 News article discussing potential weekend events at Yale, Arnold Schwartz ’79 suggested “The Milgram Show: Hilarious game show in which students are given a choice of flunking out of Yale or electrocuting fellow students into unconsciousness.”

In 2008, a Santa Clara University professor replicated an altered version of the experiment to see whether people today still obey orders that go against their conscience. A 2008 Ohio State University study applied statistical analysis to Milgram’s data, identifying which voltages were the crucial turning points after which participants refused to deliver further shocks.
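The Ohio State analysis itself is not reproduced here, but its basic idea, treating each participant’s maximum shock as a point of refusal and asking at which voltage levels refusals cluster, can be sketched in a few lines. The counts below are hypothetical, for illustration only; they are not Milgram’s actual figures, though the qualitative result (refusals clustering at 150 volts, where the learner first demands release) matches what that re-analysis reported.

```python
# Sketch of a break-off ("turning point") analysis in the spirit of the
# 2008 re-analysis of Milgram's obedience data.
# The counts below are HYPOTHETICAL, chosen only to illustrate the method.

# Shock level (volts) -> number of participants who refused to continue there.
refusals = {150: 6, 165: 1, 180: 1, 300: 2, 315: 1, 330: 1}
obedient = 26  # hypothetical count who continued all the way to 450 volts
total = obedient + sum(refusals.values())

# Hazard at each level: among participants still obeying when that level
# was reached, the share who quit there.
at_risk = total
hazards = {}
for volts in sorted(refusals):
    hazards[volts] = refusals[volts] / at_risk
    at_risk -= refusals[volts]

# The "turning point" is the level where the conditional probability
# of refusal peaks.
turning_point = max(hazards, key=hazards.get)
print(turning_point)
```

With these illustrative numbers, 150 volts has by far the highest refusal hazard (6 of 38 still-obedient participants quit there), so it is flagged as the turning point.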

In 1971, at Stanford University, a young psychology professor created a simulated prison. Some of the young men playing the guards became sadistic, even violent, and the experiment had to be stopped.

The results of the Stanford Prison Experiment showed that people tend to conform, even when conforming means that otherwise good people do terrible things. Since then, the experiment has been used to help explain everything from Nazi Germany to Abu Ghraib.

Now, in a new project, [Situationist Contributor] Philip Zimbardo, the psychologist who created the prison experiment, is trying to show that people can learn to bring out the best in themselves rather than the worst.

An Unwanted Legacy

Four decades after he created the Stanford Prison Experiment, Zimbardo says he’s still hearing about it.

“I hate the idea that the Stanford prison study is the main thing most people know me for,” he says.

Zimbardo has done many things. He was a professor of psychology at Stanford University for 40 years. He’s been president of the American Psychological Association. He’s written a book about the psychology of time and established a clinic for shy people. But he says his other achievements are often overlooked.

“Soon as people meet me, I go around the world, ‘Oh you’re the prison guy,'” he says.

Here’s how the experiment worked: Zimbardo recruited 24 male college students and paid them $15 a day to spend two weeks in a fake prison in a basement on the Stanford campus. Half the students were assigned to be guards, the others were prisoners.

As an educational video made about the experiment put it, “What happened surprised everyone, including Zimbardo. The illusion became reality. The boundary between the role each person was playing and his real personal identity was erased.” Some of the guards in the experiment became abusive, and prisoners showed signs of mental breakdown. After six days, Zimbardo shut the experiment down and sent everyone home.

‘Here I Am, This Evil Scientist’

His reputation was sealed: He was the guy who had revealed that normal people can do very bad things — if you expose them to wrongdoing, even evil, they’ll join in.

“So here I am, this evil scientist, creating this situation where evil is dominating good,” he says.

The problem is, Zimbardo doesn’t see himself that way. He sees himself as a force of good in the world, not evil. And so now, retired from teaching at the age of 78, he has a new project, one that aims to change his legacy in a dramatic way: to turn regular people into heroes.

Not comic book heroes. But, rather, someone who would have helped Jews escape the Holocaust. Or even something more ordinary, like standing up for a classmate who’s being bullied.

“Heroes are not extraordinary people,” he says. “They’re ordinary people who do an extraordinary thing, step out of themselves, put their best self forward in service to humanity. And it starts with internalizing heroic imagination, namely — I could do it.”

So he’s calling it the Heroic Imagination Project. It’s a nonprofit training organization based in San Francisco. One of the first programs has been to teach heroism at a charter high school, called ARISE, in one of the tougher neighborhoods of Oakland, Calif.

Over the course of the year, Clint Wilkins has been teaching students in the heroism course to recognize how their environment can shape their behavior. “As you can see,” he tells his students, “there are two kinds of ways of conforming, right? Do you guys remember which they are?”

Heroes Needed

Conformity is not an abstract concept to these students. Two years ago, a notorious instance of the bystander effect occurred not far from here: a 16-year-old girl was gang-raped by at least six men during a homecoming dance. Dozens of kids watched; some sent texts to their friends, telling them to come check it out. It took two hours before anyone called the police.

So this class is about training kids to break away from the pack, to be the person who defies conformity and does the right thing.

“They had to see the girl, you know?” says Phillip Johnson, a senior in the heroism class. “They had to see that girl go in the back with all those guys. Like if I see a group of guys in a circle, or something, I’m going to be like, what’s going on here? It’s like, oh. Woah. But that didn’t happen, apparently.”

The other students fall silent. Like Phillip, they’d like to think they would have been the one to call the cops. But if there’s one lesson to be learned from this class, it’s this: You aren’t always the person you think you’re going to be. Being able to imagine a different life for yourself is part of this school — and it’s the point of this heroism course.

It seems to be taking hold in Brandon Amaro. He’s a sweet-faced, 16-year-old kid who grew up in a farm town in southern California. Brandon says sometimes he feels like he could do something really exceptional with his life, something even his parents might not know he’s capable of. But then he starts having doubts.

He says it’s as if there are two Brandons. “The good one,” he says, “is like an over-energetic bee in my ear, always buzzing and buzzing, telling me, ‘You can do it, you can do it. Go for it.'” Then there’s the bad one, “who’s kind of like someone pressing down on your shoulders. He sees something good, he says, ‘No, you can’t try.'”

Brandon says when he imagines himself grown up, he’s just not so conflicted anymore. “The older me is going to be much more mature, more confident,” he says. “He’s going to walk and everybody’s going to just know it’s him. He’s going to know who he is.”

Can Courage Be Taught?

Zimbardo says he sees himself in these kids. After all, he grew up poor, too. “Growing up on welfare, in poverty, in the ghetto, in the south Bronx, amidst evil, drugs, prostitution and gangs and violence — I rose above that,” he says. “In some mystical way, I have always been the leader.”

But the question is, why did Zimbardo rise above? Why does anybody become a leader, or a hero, and someone else becomes a follower, or worse? And do we have a choice? Zimbardo’s class is teaching the students that they do.

But other social psychologists believe humans are more hard-wired than that. For example, they say criminal behavior comes from individual differences in personality, things like lack of self-control. These are differences we’re either born with or things we never learned as children.

Augustine Brannigan, who studies criminal behavior at the University of Calgary in Canada, is one of these people. When asked what he thinks of the idea of a heroism class, he replies, “Whether you can teach them to be heroes? No. What you can do is you can expose them to the narrative about heroes. If it takes, it takes. If it doesn’t, they still have the narrative, and they can respect it. But that doesn’t mean you’ve changed their behavior.”

Zimbardo aims to prove this thinking wrong. He’s betting that by studying heroic narratives, learning about human nature and taking on community service projects, the students will actually change the way they act.

A Practical Lesson

One afternoon, while the students are in heroism class, a fight breaks out in the hallway. Not students, but some neighborhood kids, possibly gang-affiliated, drift in from the street and start causing trouble. A teacher calls 911.

The students in the heroic imagination class cluster in the doorway, craning their necks to get a better look. And when they return to their seats, they begin to wonder: Maybe this was exactly one of those opportunities they’d been talking about, a chance to step up and be a hero. But it all happened so fast, and no one did anything.

“Students could have been, like, you know, someone come get this person,” remarks senior Phillip Johnson. “It shouldn’t have been a group of people watching.” On the other hand, other students argue, maybe having a teacher call the police was exactly the right thing to do.

Sometimes it’s hard to tell when it is the right time to do something extraordinary, they say, and when it’s better to just stay on the sidelines.