
Archive for the ‘Classic Experiments’ Category

“Steven Armstrong was the first child to arrive to Elliot’s classroom on that day, asking why “a King” (referring to Martin Luther King Jr.) was murdered the day before. After the rest of the class arrived, Elliot asked them what they knew about Negros. The children responded with various racial stereotypes such as Negros were dumb or could not hold jobs. She then asked these children if they would like to find out what it was like to be a Negro child and they agreed.”

Solomon Asch . . . . became famous in the 1950s, following experiments which showed that social pressure can make a person say something that is obviously incorrect.

This experiment was conducted with 123 male participants. Each participant was put into a group with 5 to 7 “confederates” (people who knew the true aims of the experiment but were introduced as participants to the naive “real” participant). The participants were shown a card with a line on it, followed by another card with 3 lines on it labeled a, b, and c. The participants were then asked to say which line matched the line on the first card in length. Each line question was called a “trial”. The “real” participant answered last or next to last. For the first two trials, the subject would feel at ease in the experiment, as he and the other “participants” gave the obvious, correct answer. On the third trial, the confederates all began giving the same wrong answer. There were 18 trials in total, and the confederates answered incorrectly on 12 of them; these 12 were known as the “critical trials”. The aim was to see whether the real participant would change his answer and respond in the same way as the confederates, despite it being the wrong answer.

Solomon Asch thought that the majority of people would not conform to something obviously wrong, but the results showed that participants conformed to the majority on 37% of the critical trials. However, 25% of the participants did not conform on any trial. 75% conformed at least once, and 5% conformed every time.
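Asch’s summary figures (the overall conformity rate across critical trials, plus the shares of participants who never, sometimes, or always conformed) are simple to compute from per-participant tallies. A minimal sketch in Python, using a hypothetical toy sample since the post reports only the aggregates, not the raw data:

```python
# Summary statistics of the kind Asch reported, from per-participant
# conformity counts. The sample below is illustrative, NOT Asch's data.
CRITICAL_TRIALS = 12  # trials on which the confederates answered wrongly

def summarize(conform_counts):
    """conform_counts[i] = number of critical trials on which
    participant i went along with the majority (0..12)."""
    n = len(conform_counts)
    overall_rate = sum(conform_counts) / (n * CRITICAL_TRIALS)
    never = sum(1 for c in conform_counts if c == 0) / n
    at_least_once = sum(1 for c in conform_counts if c > 0) / n
    always = sum(1 for c in conform_counts if c == CRITICAL_TRIALS) / n
    return overall_rate, never, at_least_once, always

# Toy sample of 8 participants:
print(summarize([0, 0, 3, 5, 12, 2, 7, 1]))
```

Note that the “37% of critical trials” figure and the “25% never conformed” figure answer different questions (trial-level versus participant-level), which is why both can hold at once.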

Milgram’s experiment was again repeated — this time as part of the BBC documentary “How violent are you?” first shown in May 2009. It’s another remarkable rendition. Of the 12 participants, only 3 refused to continue to the end of the experiment. The relevant portions of that documentary are below.

THE BELIEVER: I take it that one of the goals of the Stanford Prison Experiment was to build on Milgram’s results that demonstrated the power of situational elements. Is that right?

PHILIP ZIMBARDO: It was really to broaden his message and put it to a higher-level test. In Milgram’s study, we don’t know about those thousand people who answered the ad. His subjects were not Yale students, although he did it at Yale. They were a thousand ordinary citizens from New Haven and Bridgeport, Connecticut, ages twenty to fifty, and in his advertisement in the newspaper he said: college students and high-school students cannot be used. It could have been a selection of people who were more psychopathic. For our study, we picked only two dozen of seventy-five who applied, who on seven different personality tests were normal or average. So we knew there were no psychopaths, no deviants. Nobody had been in therapy, and even though it was a drug era, nobody (at least in the reports) had taken anything more than marijuana, and they were physically healthy at the time. So the question was: Suppose you had only kids who were normally healthy, psychologically and physically, and they knew they would be going into a prison-like environment and that some of their civil rights would be sacrificed. Would those good people, put in that bad, evil place—would their goodness triumph?

***

That situationist snippet should convince you to check out the rest of the interview! Also, it is worth pointing out that Sommers has a forthcoming collection entitled A Very Bad Wizard: Morality Behind the Curtain, which includes past interviews with philosophers and psychologists such as Galen Strawson, Michael Ruse, Jon Haidt, Frans de Waal, Steve Stich, Josh Greene, Liane Young, Joe Henrich, William Ian Miller, and Zimbardo. So, make sure to check it out as well once it comes out.

“The video below describes a remake of Milgram’s famous study originally done in the ’60s. Until recently, no one was authorized to replicate it due to ethical considerations. However, in 2007, ABC News was granted such permission and did so with many of the original researchers and some of the actual participants. New data was also added.”

Is your brain like a Swiss Army knife? . . . Is it jam-packed with specialized tools that are unfolded only when a specific situation arises? Or is it more all-purpose, with a few parts that tackle many different situations? Convention Keynoter and APS Fellow Nancy Kanwisher (Massachusetts Institute of Technology) is attempting to find out.

Following centuries of debate about specialized brain regions — from the phrenologists to Broca — the development of fMRI technology has ushered in a new era of studying brain regions. By monitoring the blood flow necessary to support neural activity, fMRI has allowed researchers to track which regions of the brain are involved in processing specific stimuli. With this new weapon in her arsenal, Kanwisher performed a now-classic study published in 1997. In it, participants sat in an fMRI scanner while looking at a series of faces and objects. She and her colleagues identified an area in the fusiform gyrus on the bottom surface of the temporal lobe that responded more strongly when the participants viewed faces than when they viewed objects. Dubbed the fusiform face area, this region seemed like it could be specialized for processing faces.

But the researchers could not yet be sure. What if the area responded to everything animate, or everything round? A decade of more detailed research confirmed their original hypothesis — the fusiform face area lived up to its name. At the same time, Kanwisher’s lab discovered two other specialized areas: the parahippocampal place area, which specializes in processing places, and the extrastriate body area, which specializes in processing images of the body.

These answers only lead to more questions. These areas are involved in processing certain categories, but do they merely process perceptual input or actually reflect conscious experience? What are the roles of genes and experience in wiring up these areas? And finally, how much of the brain is like this? Is our entire cortex broken up into small pieces, each with its own special domain? Kanwisher and her colleagues are tackling these questions head on.

Do these areas only engage in their respective categorical processing or do they perform other functions as well? For example, take the fusiform face area. It is most active when viewing faces, but it also shows lesser activity when the participant is looking at other visual stimuli, like objects. Something in the pattern of this lower activity could be crucial in processing input other than faces. Evidence against this idea comes from research on patients with neurological trauma, who sometimes lose face perception abilities without losing object perception. But the low chance of finding subjects with a lesion in just the right spot makes this research limited. Other researchers have turned to transcranial magnetic stimulation (TMS), a method that uses magnetic fields to transiently disrupt neural activity. The fusiform face area is too deep in the brain to be affected by TMS, but the extrastriate body area is closer to the scalp and susceptible to the TMS disruption. When the neural activity in this region is disturbed, participants are impaired in their ability to recognize bodies but have no difficulty recognizing faces or other objects. Although these specialized areas may collect information about other types of stimuli, it seems that they are only necessary for processing information of their specific type.

Are the functionally specific regions merely perceptual processors or do they reflect our conscious experience? To illustrate the difference between perception and experience, Kanwisher instructed the audience to pick up the 3-D glasses left on the seats. But, before we could put them on, she showed us two images, a red-tinted image of a face and a green-tinted image of a house. Then she superimposed the house on the face creating a red/green face/house jumble. But, when looking at this jumble through glasses with one red-tinted and one green-tinted lens, so that the house image goes to one eye and the face image to the other, you don’t experience a jumble — you experience a red face that fades to a green house and back and forth as your brain attempts to make sense of this new situation. Even though your experience of what you are seeing is changing, the image beamed to your retina is constant the whole time. Work from Kanwisher’s lab showed that in this situation, activity in the fusiform face area corresponds with one’s experience, not with the actual perceptual input.

Further, not only does the activity in specialized areas correspond with what we consciously see, it also corresponds with what we imagine. Kanwisher has put people in the fMRI machine and asked them to imagine familiar faces and places. The same areas are active when participants are imagining faces and places as when they are actually looking at faces and places. It’s not just what you are physically seeing, but what you are consciously aware of that is processed by this area.

So, where do these specialized areas come from? What role do genes and experience play in their construction?

* * *

For answers to those questions and the rest of Conkle’s summary of Kanwisher’s talk, click here.

To watch a video of Kanwisher’s Keynote presentation, click on the video below.

From the excellent Big Think, here’s a worthwhile video of social psychologist Robert Cialdini talking about some of the social psychologists who influenced his work, including Situationist contributor Phil Zimbardo.

Most of our readers are familiar with Walter Mischel’s landmark experiment on marshmallows, delayed gratification, and success. For the rest of you, here are a couple of videos, including one by Situationist Contributor Philip Zimbardo, summarizing the study.

Despite advances in computer graphics, few people would think virtual characters or objects are real. Yet placed in a virtual reality environment most people will interact with them as if they are really there. European researchers are finding out why.

In trying to understand presence – the propensity of humans to respond to fake stimuli as if they are real – the researchers are not just gaining insights into how the human brain functions. They are also learning how to create more intense and realistic virtual experiences, opening the door to myriad applications for healthcare, training, social research and entertainment.

“Virtual environments could be used by psychiatrists to help people overcome anxiety disorders and phobias . . . by researchers to study social behaviour not practically or ethically reproduced in the real world, or to create more immersive virtual reality for entertainment,” explains Mel Slater, a computer scientist at ICREA in Barcelona and University College, London, who led the team behind the research.

Working in the EU-funded Presenccia project, Slater and his team, drawn from fields as diverse as neuroscience, psychology, psychophysics, mechanical engineering and philosophy, conducted a variety of experiments to understand why humans interpret and respond to virtual stimuli the way they do and how those experiences can be made more intense.

For one experiment they developed a virtual bar, which test subjects enter by donning a virtual reality (VR) headset or immersing themselves in a VR CAVE in which stereo images are projected onto the walls. As the virtual patrons socialise, drink and dance, a fire breaks out. Sometimes the virtual characters ignore it, sometimes they flee in panic. That in turn dictates how the real test subjects, immersed in the virtual environment, respond.

Panic and pain . . . virtually

“We have had people literally run out of the VR room, even though they know that what they are witnessing is not real,” says Slater. “They take their cues from the other characters.”

In another instance, the researchers re-enacted controversial experiments conducted by American social psychologist Stanley Milgram in the 1960s that showed people’s propensity to follow orders even if they know what they are doing is wrong. Instead of using a real actor, as Milgram did, the Presenccia team used a virtual character to which the test subject was instructed to give progressively more intense electric shocks whenever it answered questions incorrectly. The howls of pain and protest from the character, a virtual woman, increased as the experiment went on.

“Some of the test subjects felt so uncomfortable that they actually stopped participating and left the VR environment. Around half said they wanted to leave, but said they did not because they kept telling themselves it wasn’t real,” Slater says.

All had physical reactions, measured by their skin conductivity, perspiration and heart rate, showing that, at a subconscious level, people’s responses are similar regardless of whether what they are experiencing is real or virtual. The plausibility of the events enhances the sense that what is happening is real. Plausibility, Slater says, is therefore more important to presence than the quality of the graphics in a VR environment.

For example, when a test subject was made to stand on the edge of a virtual pit, staring down at an 18-metre drop, their level of anxiety increased if they could see dynamically changing shadows and reflections of their virtual body even if the graphics were poor. In other experiments, the researchers made people believe that a virtual hand was their own – replicating in VR the so-called “rubber hand illusion” – or that they were looking at themselves from another angle, creating a kind of out-of-body experience. In one trial, they even gave male test subjects a woman’s body.

Help with phobias and paranoia

By understanding what makes people perceive virtual objects and experiences to be real, the researchers hope to create applications that could revolutionise certain psychiatric treatments. Patients with a fear of spiders or heights, for example, could be exposed to and helped to overcome their fears in virtual reality. Similarly people who are shy or paranoid about public speaking could be helped by having to face virtual people and crowds.

“One application we are working on is designed to help shy men overcome their fear of meeting women by making them interact with a virtual woman,” Slater says.

The technology is also being used for social research which, much like the Milgram experiments, would not be practical or ethical to conduct in the real world. One experiment due to be run at University College, London, will use a virtual environment to study how people respond to violence in public places, such as a bar fight between football hooligans.

Besides healthcare and research, more immersive VR would also help in training, potentially greatly improving the results of flight or driving simulators. Slater also envisions VR environments being used to train people to use prosthetic limbs and wheelchairs through mind control before trying them out in the real world. A brain-computer interface (BCI) developed for just such a purpose was tested in the Presenccia project and in a similarly named predecessor called Presencia, which received funding under the EU’s Sixth and Fifth Framework Programmes for research, respectively.

* * *


Though immersive VR is likely to have many applications in healthcare, research and training, the biggest market is probably entertainment. With the cost of VR technology coming down, people could eventually be exploring virtual worlds and interacting with virtual characters and other people through VR rooms in their homes akin to the “holodecks” seen in Star Trek, Slater says.

From Situationist friend Michael Cross, we received the following message regarding Tuesday night’s Jon Stewart interview of Cliff May on The Daily Show (below).

* * *


In the beginning of the interview, May says that he doesn’t believe anyone in the current or previous administration was “pro-torture.” He then explains that what have traditionally been called the “Torture Memos” are really “Anti-Torture Memos” because they draw lines regarding what are acceptable and unacceptable interrogation techniques.

* * *

What is interesting from a situationist perspective is that he then describes the intent of these memos as being to lay out a complex set of rules and requirements intended to prevent torture from occurring. What he fails to recognize is that the rules and requirements are so complex that most people, given the pressures of, say, war, would conclude that they had been met. What May seems to be saying, without knowing it, is that the Justice Department set up a Milgram experiment in which the stresses of war served as the man in the white lab coat.

* * *

Anyway, the interview is [posted above]. The relevant stuff is in the first few minutes; the remaining interview is just good, quality stuff.

In his Guardian article, “Hands up if you’re an individual,” Stuart Jeffries offers a brief summary of some social psychology classics. Below, we have included excerpts. After reviewing Milgram’s famous experiments on obedience, Jeffries writes:

* * *

This was one of the classic experiments of group psychology, though not all have involved duping volunteers into believing they had electrocuted victims. Group psychology has often involved experiments to explain how individuals’ behaviours, thoughts and feelings are changed by group pressures.

It is generally thought to have originated in 1898 when Indiana University psychologist Norman Triplett asked children to spin a fishing reel as fast as they could. He found that when the children were doing the task together they did so much faster than when alone. Triplett found a similar result when studying cyclists – they tended to record faster times when riding in groups rather than alone, a fact that he explained because the “bodily presence of another contestant participating simultaneously in the race serves to liberate latent energy not ordinarily available”.

More than a century later, social psychology explores how other people make us what we are; how unconscious, sometimes ugly, impulses make us compliant and irrational. Why, for example, do I smoke even though I know it could be fatal? How can there be such a gap between my self-image and my behaviour (this is known as cognitive dissonance)?

Why do high-level committees of supposed experts make disastrous decisions (for example, when a Nasa committee dismissed technical staff warnings that the space shuttle Challenger should not be launched, arguing that technical staff were just the kind of people to make such warnings – this is seen as a classic case of so-called “groupthink”)?

Why do we unconsciously obey others even when this undermines our self-images (this is known as social influence)? What makes us into apathetic bystanders when we see someone attacked in the street – and what makes us have-a-go-heroes? What makes peaceful crowds turn into rioting mobs?

Group psychological studies can have disturbing ramifications. Recently, Harvard psychologist [and Situationist contributor] Mahzarin Banaji used the so-called implicit association test to demonstrate how unconscious beliefs inform our behaviour. [Sh]e concluded from [her] research that the vast majority of white, and many black respondents recognised negative words such as “angry”, “criminal” or “poor” more quickly after briefly seeing a black face than a white one. . . .

* * *

The nature of conformism has obsessed social psychologists for decades. In 1951, psychologist Solomon Asch did an experiment in which volunteers were asked to judge the correct length of a line by comparing it with three sample lines. The experiment was set up so that there was an obviously correct answer. But Asch had riddled a group with a majority of stooges who deliberately chose the wrong answer. The pressure of the majority told on Asch’s volunteers. He found that 74% conformed with the wrong answer at least once, and 32% did so all the time.

What impulses were behind such conformism? Social psychologists have long considered that we construct our identities on the basis of others’ attitudes towards us. Erving Goffman, in The Presentation of Self in Everyday Life (1959), analysed social encounters as if each person was engaged in a dramatic performance, and suggested that each such actor was a creation of its audience.

Through such performances of self we internalise role expectations and gain positive self-esteem. We cast other individuals and groups in certain roles. Such behaviour may make some of us unconscious racists, but it also lubricates the wheels of social life.

French psychologist Serge Moscovici developed what is called social representation theory, arguing that shared beliefs and explanations held by a group or society help people to communicate effectively with one another. He explored the notion of anchoring, whereby new ideas or events in social life are given comforting redescriptions (or social representations). For example, a group of protesters against a motorway might be described demeaningly by the road lobby as a “rent-a-mob,” while the protesters themselves might anchor themselves more flatteringly as “eco-warriors”.

* * *

Social psychologists have also been long obsessed by the psychology of crowds. In 1895, French social psychologist Gustave Le Bon described crowds as mobs in which individuals lost their personal consciences. His book, The Crowd: A Study of the Popular Mind, influenced Hitler and led many later psychologists to take a dim view of crowds.

After the war, German critical theorist Theodor Adorno wrote of the destructive nature of “group psychology.” Even as late as 1969, Stanford psychologist [and Situationist contributor] Philip Zimbardo argued that a process of deindividuation makes participants in crowds less rational.

Most recent crowd psychology has not been content to brand crowds necessarily irrational. Instead, it has divided into contagion theory (whereby crowds cause people to act in a certain way), convergence theory (where crowds amount to a convergence of already like-minded individuals) and emergent norm theory (where crowd behaviour reflects the desires of participants, but it is also guided by norms that emerge as the situation unfolds). . . .

In the age of MySpace, Facebook and online dating, group psychologists are now trying to find out what goes on when we present ourselves to the world online, how we are judged for doing so and how groups are formed online. Other social psychology touches on such voguish areas of research as social physics (which contends that physical laws might explain group behaviour) and neuroeconomics (which looks at the role of the brain when we evaluate decisions and interact with each other), but the age-old concerns remain part of our zeitgeist.

Situationist Contributor Philip Zimbardo has authored the preface to a new edition of social psychologist Stanley Milgram’s seminal book Obedience to Authority. This is the second of a two-part series derived from that preface. In Part I of the post, Zimbardo describes the inculcation of obedience and Milgram’s role as a research pioneer. In this part, Zimbardo answers challenges to Milgram’s work and locates its legacy.

* * *

Unfortunately, many psychologists, students, and lay people who believe that they know the “Milgram Shock” study, know only one version of it, most likely from seeing his influential movie Obedience or reading a textbook summary.

He has been challenged for using only male participants, which was true initially, but later he replicated his findings with females. He has been challenged for relying only on Yale students, because the first studies were conducted at Yale University. However, the Milgram obedience research covers nineteen separate experimental versions, involving about a thousand participants, ages twenty to fifty, of whom none are college or high school students! His research has been heavily criticized as unethical for creating a situation that generated much distress for the person playing the role of the teacher, who believed his shocks were causing suffering to the person in the role of the learner. I believe that it was seeing his movie, in which he includes scenes of distress and indecision among his participants, that fostered the initial impetus for concern about the ethics of his research. Reading his research articles or his book does not convey as vividly the stress of participants who continued to obey authority despite the apparent suffering they were causing their innocent victims. I raise this issue not to argue for or against the ethicality of this research, but rather to stress that it is still critical to read the original presentations of his ideas, methods, results, and discussions to understand fully what he did. That is another virtue of this collection of Milgram’s obedience research.

A few words about how I view this body of research. First, it is the most representative and generalizable research in social psychology or social sciences due to his large sample size, systematic variations, use of a diverse body of ordinary people from two small towns—New Haven and Bridgeport, Connecticut—and detailed presentation of methodological features. Further, its replications across many cultures and time periods reveal its robust effectiveness.

As the most significant demonstration of the power of social situations to influence human behavior, Milgram’s experiments are at the core of the situationist view of behavioral determinants. It is a study of the failure of most people to resist unjust authority when commands no longer make sense given the seemingly reasonable stated intentions of the just authority who began the study. It makes sense that psychological researchers would care about the judicious use of punishment as a means to improve learning and memory. However, it makes no sense to continue to administer increasingly painful shocks to one’s learner after he insists on quitting, complains of a heart condition, and then, after 330 volts, stops responding at all. How could you be helping improve his memory when he was unconscious or worse? The most minimal exercise of critical thinking at that stage in the series should have resulted in virtually everyone refusing to go on, disobeying this now heartlessly unjust authority. To the contrary, most who had gone that far were trapped in what Milgram calls the “agentic state.”

These ordinary adults were reduced to mindless obedient school children who do not know how to exit from a most unpleasant situation until teacher gives them permission to do so. At that critical juncture when their shocks might have caused a serious medical problem, did any of them simply get out of their chairs and go into the next room to check on the victim? Before answering, consider the next question, which I posed directly to Stanley Milgram: “After the final 450 volt switch was thrown, how many of the participant-teachers spontaneously got out of their seats and went to inquire about the condition of their learner?” Milgram’s answer: “Not one, not ever!” So there is a continuity into adulthood of that grade-school mentality of obedience to those primitive rules of doing nothing until the teacher-authority allows it, permits it, and orders it.

My research on situational power (the Stanford Prison Experiment) complements that of Milgram in several ways. They are the bookends of situationism: his representing direct power of authority on individuals, mine representing institutional indirect power over all those within its power domain. Mine has come to represent the power of systems to create and maintain situations of dominance and control over individual behavior. In addition, both are dramatic demonstrations of powerful external influences on human action, with lessons that are readily apparent to the reader, and to the viewer. (I too have a movie, Quiet Rage, that has proven to be quite impactful on audiences around the world.) Both raise basic issues about the ethics of any research that engenders some degree of suffering and guilt from participants. I discuss at considerable length my views on the ethics of such research in my recent book The Lucifer Effect: Understanding Why Good People Turn Evil (2008). When I first presented a brief overview of the Stanford Prison Experiment at the annual convention of the American Psychological Association in 1971, Milgram greeted me joyfully, saying that now I would take some of the ethics heat off his shoulders by doing an even more unethical study!

Finally, it may be of some passing interest to readers of this book, that Stanley Milgram and I were classmates at James Monroe High School in the Bronx (class of 1950), where we enjoyed a good time together. He was the smartest kid in the class, getting all the academic awards at graduation, while I was the most popular kid, being elected by senior class vote to be “Jimmie Monroe.” Little Stanley later told me, when we met ten years later at Yale University, that he wished he had been the most popular, and I confided that I wished I had been the smartest. We each did what we could with the cards dealt us. I had many interesting discussions with Stanley over the decades that followed, and we almost wrote a social psychology text together. Sadly, in 1984 he died prematurely from a heart attack at the age of fifty-one.

[Milgram] left us with a vital legacy of brilliant ideas that began with those centered on obedience to authority and extended into many new realms—urban psychology, the small-world problem, six degrees of separation, and the Cyrano effect, among others—always using a creative mix of methods. Stanley Milgram was a keen observer of the human landscape, with an eye ever open for a new paradigm that might expose old truths or raise new awareness of hidden operating principles. I often wonder what new phenomena Stanley would be studying now were he still alive.

* * *

To read Part I of this post, click here. To read three related Situationist posts by Phil Zimbardo, see “The Situation of Evil,” Part I, Part II, and Part III.

In the mid-1970s, Situationist contributor Timothy Wilson, together with Richard Nisbett, conducted one of the best known social psychology experiments of all time. It was strikingly simple and involved asking subjects to assess the quality of hosiery. Situationist contributors Jon Hanson and David Yosifon have described the experiment this way:

Subjects were asked in a bargain store to judge which one of four nylon stocking pantyhose was the best quality. The subjects were not told that the stockings were in fact identical. Wilson and Nisbett presented the stockings to the subjects hanging on racks spaced equal distances apart. As situation would have it, the position of the stockings had a significant effect on the subjects’ quality judgments. In particular, moving from left to right, 12% of the subjects judged the first stockings as being the best quality, 17% of the subjects chose the second pair of stockings, 31% of the subjects chose the third pair of stockings, and 40% of the subjects chose the fourth—the most recently viewed pair of stockings. When asked about their respective judgments, most of the subjects attributed their decision to the knit, weave, sheerness, elasticity, or workmanship of the stockings that they chose to be of the best quality. Dispositional qualities of the stocking, if you will. Subjects provided a total of eighty different reasons for their choices. Not one, however, mentioned the position of the stockings, or the relative recency with which the pairs were viewed. None, that is, saw the situation. In fact, when asked whether the position of the stockings could have influenced their judgments, only one subject admitted that position could have been influential. Thus, Wilson and Nisbett conclude that “[w]hat matters . . . is not why the [position] effect occurs but that it occurs and that subjects do not report it or recognize it when it is pointed out to them.”
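The position percentages reported above can be checked against chance with a simple chi-square goodness-of-fit test. A quick sketch, using only the standard library; the subject count of 100 is an assumption for illustration, chosen so the reported percentages map directly onto counts (the excerpt gives only percentages):

```python
# Chi-square goodness-of-fit test against a uniform position preference.
# Assumed N = 100 subjects, so percentages double as counts (an assumption,
# not a figure from the study).
observed = [12, 17, 31, 40]               # choices of positions 1..4, left to right
expected = sum(observed) / len(observed)  # uniform null: 25 per position

chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value for df = 3 at alpha = 0.05 is 7.815.
print(f"chi2 = {chi2:.2f}")  # 19.76 -- well above 7.815
```

With these assumed counts the statistic far exceeds the 5% critical value, so a uniform preference across positions would be firmly rejected, consistent with the strong right-position effect Wilson and Nisbett describe.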

One of the core messages of more recent brain research is that most mental activity happens in the automatic or unconscious region of the brain. The unconscious mind is the platform for a set of mental activities that the brain has relegated beyond awareness for efficiency’s sake, so the conscious mind can focus on other things. In his book, “Strangers to Ourselves,” Timothy Wilson notes that the brain can absorb about 11 million pieces of information a second, of which it can process about 40 consciously. The unconscious brain handles the rest.

The automatic mind generally takes care of things like muscle control. But it also does more ethereal things. It recognizes patterns and construes situations, searching for danger, opportunities or the unexpected. It also shoves certain memories, thoughts, anxieties and emotions up into consciousness.

Much has been learned about how wrong our beliefs about what moves us can be, and about how what we don't know we know can influence us. Psychologist Susan Courtney has an absolutely terrific article in Scientific American titled “Not So Deliberate: The decisive power of what you don’t know you know.” We excerpt portions of her article below.

* * *

When we choose between two courses of action, are we aware of all the things that influence that decision? Particularly when deliberation leads us to take a less familiar or more difficult course, scientists often refer to a decision as an act of “cognitive control.” Such calculated decisions were once assumed to be influenced only by consciously perceived information, especially when the decision involved preparation for some action. But a recent paper by Hakwan Lau and Richard Passingham, “Unconscious Activation of the Cognitive Control System in the Human Prefrontal Cortex,” demonstrates that the influences we are not aware of can hold greater sway than those we can consciously reject.

Biased competition

We make countless “decisions” each day without conscious deliberation. For example, when we gaze at an unfamiliar scene, we cannot take in all the information at once. Objects in the scene compete for our attention. If we’re looking around with no particular goal in mind, we tend to focus on the objects most visually different from their surrounding background (for example, a bright bird against a dark backdrop) or those that experience or evolution have taught us are the most important, such as sudden movement or facial features — particularly threatening or fearful expressions. If we do have a goal, then our attention will be drawn to objects related to it, such as when we attend to anything red or striped in a “Where’s Waldo” picture. Stimulus-driven and goal-driven influences alike, then, bias the outcome of the competition for our attention among a scene’s many aspects.

The idea of such biased competition (a term coined in 1995 by Robert Desimone and John Duncan) also applies to situations in which we decide among many possible actions, thoughts or plans. What might create an unconscious bias affecting these types of competition?

For starters, previous experience in a situation can make some neural connections stronger than others, tipping the scales in favor of a previously performed action. The best-known examples of this kind of bias are habitual actions (as examined in a seminal 1995 article by Salmon and Butters) and what is known as priming.

Habitual actions are what they sound like — driving your kids to school, you turn right on Elm because that’s how you get there every day. No conscious decision is involved. In fact, it takes considerable effort to remember to instead turn left if your goal is to go somewhere else.

Priming works a bit differently; it’s less a well-worn route than a prior suggestion that steers you a certain way. If I ask you today to tell me the first word that comes to mind that starts with the letters mot and you answer mother, you’ll probably answer the same way if I ask you the same thing again four months from now, even if you have no explicit recollection of my asking the question. The prior experience primes you to repeat your performance. Other potentially unconscious influences are generally emotional or motivational.

Of course, consciously processed information can override these emotional and experience-driven biases if we devote enough time and attention to the decision. Preparing to perform a cognitive action (“task set”) has traditionally been considered a deliberate act of control and part of this reflective, evaluative neural system. (See, for example, the 2002 review by Rees, Kreiman and Koch — pdf download.) As such, it was thought that task-set preparation was largely immune to subconscious influences.

We generally accept it as okay that some of our actions and emotional or motivational states are influenced by neural processes that happen without our awareness. For example, it aids my survival if subliminally processed stimuli increase my state of vigilance — if, for example, I jump out of the way before I am consciously aware that the thing at my feet is a snake. But we tend to think of more conscious decisions differently. If I have time to recognize an instruction, remember what that means I’m supposed to do and prepare to make a particular kind of judgment on the next thing I see, then the assumption is that this preparation must be based entirely on what I think I saw — not what I wasn’t even aware of.

Yet Lau and Passingham have found precisely the opposite in their study: information we’re not aware of can influence even the most deliberative, non-emotional sort of decision more strongly than information we are aware of.

Confusing cues

Lau and Passingham had their subjects perform one of two tasks: when shown a word on a screen, the subjects had to decide either a) whether or not the word referred to a concrete object or b) whether or not the word had two syllables. A cue given just before each word — the appearance of either a square or a diamond — indicated whether to perform the concrete judgment task or the syllables task. These instruction cues were in turn preceded by smaller squares or diamonds that the subjects were told were irrelevant. A variation in timing between the first and second cues determined whether the participants were aware of seeing both cues or only the second.

As you would expect, the task was more difficult when the cues were not the same — that is, when a diamond preceded a square or a square a diamond. The surprising finding was that this confusion effect was greater when the timing between the cues was so close that the participants didn’t consciously notice the first cue. When the cues were mixed but the subjects were consciously aware of only the second instruction, their responses — and their brain activity as measured by functional magnetic resonance imaging (fMRI) — indicated that the “invisible” conflicting cue had made them more likely to prepare to do the “wrong” task. Although similar effects have been shown on tasks that involved making a decision about the appearance of the image immediately following the “invisible” image, this is the first time this effect has been demonstrated for complex task preparation.

It may not be surprising that we juggle multiple influences when we make decisions, including many of which we are not aware — particularly when the decisions involve emotional issues. Lau and Passingham, however, show us that even seemingly rational, straightforward, conscious decisions about arbitrary matters can easily be biased by inputs coming in below our radar of awareness. Although it wasn’t directly tested in this study, the results suggest that being aware of a misleading cue may allow us to inhibit its influence. And the study makes clear that influences we are not aware of (including, but not limited to, those brought in by experience and emotion) can sneak into our decisions unchecked.

Situationist contributor Philip Zimbardo has authored the preface to a new edition of social psychologist Stanley Milgram’s pathbreaking and now-classic book Obedience to Authority. This is the first of a two-part series derived from that preface. In this post, Zimbardo describes the inculcation of obedience and Milgram’s role as a research pioneer. In Part II, Zimbardo answers challenges to Milgram’s work and locates its legacy.

* * *

What is common about two of the most profound narratives in Western culture—Lucifer’s descent into Hell and Adam and Eve’s loss of Paradise—is the lesson of the dreadful consequences of one’s failure to obey authority. . . [T]hey are designed, as all parables are, to send a powerful message to all those who hear and read them: Obey authority at all costs! The consequences of disobedience to authority are formidable and damnable. Once created, these myths and parables get passed along by subsequent authorities, now parents, teachers, bosses, politicians, and dictators, among others, who want their word to be followed without dissent or challenge.

Thus, as school children, in virtually all traditional educational settings, the rules of law that we learned and lived were: Stay in your seat until permission is granted by the teacher to stand and leave it; do not talk unless given permission by the teacher to do so after having raised your hand to seek that recognition, and do not challenge the word of the teacher or complain. So deeply ingrained are these rules of conduct that even as we age and mature they generalize across many settings as permanent placards of our respect for authority. However, not all authority is just, fair, moral, and legal, and we are never given any explicit training in recognizing that critical difference between just and unjust authority. The just one deserves respect and some obedience, maybe even without much questioning, while the unjust variety should arouse suspicion and distress, ultimately triggering acts of challenge, defiance, and revolution.

Stanley Milgram’s series of experiments on obedience to authority, so clearly and fully presented in this new edition of his work, represents some of the most significant investigations in all the social sciences of the central dynamics of this aspect of human nature. His work was the first to bring into the controlled setting of an experimental laboratory an investigation into the nature of obedience to authority. In a sense, he is following in the tradition of Kurt Lewin, although he is not generally considered to be in the Lewinian tradition, as Leon Festinger, Stanley Schachter, Lee Ross, and Richard Nisbett are, for example. Yet to study phenomena that have significance in their real world existence within the constraints and controls of a laboratory setting is at the essence of one of Lewin’s dictums of the way social psychology should proceed.

This exploration of obedience was initially motivated by Milgram’s reflections on the ease with which the German people obeyed Nazi authority in discriminating against Jews and, eventually, in allowing Hitler’s Final Solution to be enacted during the Holocaust. As a young Jewish man, he wondered if the Holocaust could be recreated in his own country, despite the many differences in those cultures and historical epochs. Though many said it could never happen in the United States, Milgram doubted whether we should be so sure. Believing in the goodness of people does not diminish the fact that ordinary, even once good people, just following orders, have committed much evil in the world.

British author C. P. Snow reminds us that more crimes against humanity have been committed in the name of obedience than disobedience. Milgram’s mentor, Solomon Asch, had earlier demonstrated the power of groups to sway the judgments of intelligent college students regarding false conceptions of visual reality. But that influence was indirect, creating a discrepancy between the group norm and the individual’s perception of the same stimulus event.

Conformity to the group’s false norm was the resolution to that discrepancy, with participants behaving in ways that would lead to group acceptance rather than rejection. Milgram wanted to discover the direct and immediate impact of one powerful individual’s commands to another person to behave in ways that challenged his or her conscience and morality. He designed his research paradigm to pit our general beliefs about what people would do in such a situation against what they actually did when immersed in that crucible of human nature.

* * *

We’ll post Part II of this series later this week. You can review a sizeable collection of Situationist posts discussing the work of Stanley Milgram here.

When Eliot Spitzer resigned his governorship for committing the very crimes he’d publicly denounced only a few months before, he seemed mystifyingly inconsistent. Yet one character trait does shine through the separate, supposedly incompatible compartments of his life. A self-described “steamroller,” he had that self-confident drive to do what he’d decided needed doing, never mind others’ expectations, never mind who or what gets hurt. In politics, and in his sexual life, he embodied nonconformity. Voters ate it up when he ran for governor, because Americans have a prejudice in favor of lone wolves. Moral superiority, we like to think, belongs to the person who stands alone.

Until recently, social science went along with this idea. Lab-based research supposedly furnished slam-dunk evidence that, as the social psychologist Solomon Asch put it, “the social process is polluted” by “the dominance of conformity.” That research, though, was rooted in its time and place: The United States in the aftermath of World War II, when psychologists and sociologists focused on the conformity that made millions give in to totalitarian regimes.

Lately, however, some researchers have been dissenting from the textbook version. Where an earlier generation saw only a contemptible urge to go along, revisionists see normal people balancing their self-respect against their equally valuable respect for other people, and for human relationships. For evidence, revisionists say, look no further than those very experiments that supposedly proved the evils of conformity.

The psychologists Bert Hodges and Anne Geyer recently took a new look at a well-known experiment devised by Asch in the 1950s. Asch’s subjects were asked to look at a line printed on a white card and then tell which of three similar lines was the same length. The answer was obvious, but the catch was that each volunteer was sitting in a small group whose other members were actually in on the experiment. Asch found that when those other people all agreed on the wrong answer, many of the subjects went along with the group, against the evidence of their own senses.

But the question (Which of these lines matches the one on the card?) was not posed just once. Each subject saw 18 sets of lines, and the group answer was wrong for 12 of them. Examining all the data, Hodges and Geyer found that many people were varying their answers, sometimes agreeing with the group, more often sticking up for their own view. (The average participant gave in to the group three times out of 12.)

This means that the subjects in the most famous “people are sheep” experiment were not sheep at all — they were human beings who largely stuck to their guns, but now and then went along with the group. Why? Because in getting along with other people, most decent people know, as Hodges and Geyer put it, the “importance of cooperation, tact and social solidarity in situations that are tense or difficult.”

* * *

Without that kind of trust society would fall apart tomorrow, because most of what we know about the world comes to us from other people. . . .

. . . . Of course, no society should ask for knee-jerk obedience to any command. But, as the dissenters point out, there are dangers in a knee-jerk refusal to get in line. For example, in a version of the Milgram experiment in which the dupe is seated in a group of three, he will defy the “experimenter” and behave humanely — if the other two people refuse to inflict further shocks. That kind of conformity is, to put it mildly, desirable.

“The ability to map numbers onto a line, a foundation of all mathematics, is universal, says a study published this week in the journal Science, but the form of this universal mapping is not linear but logarithmic. The findings illuminate both the nature and the limits of the human predisposition to measurement, a foundation for science, engineering, and much of our modern culture.” Read more . . .

“Back in the 70’s, a classic study (PDF) showed that people using a photocopier were just as likely to give way to a line-pusher who gave the nonsense excuse “because I need to make copies”, as they were to one who gave the more sensible excuse “because I’m in a rush”. Ellen Langer and colleagues interpreted their finding as showing how mindless we often are. As soon as we hear the word “because”, we assume the excuse that follows is justified and respond accordingly. Now Scott Key and colleagues have replicated this classic study, with the further aim of finding out if some personality types are more likely than others to give way.” Read more . . .

“It seems that the closer animals are to humans, the easier it will be to explain the human phenomenon. But it may not help social and cognitive scientists to go too far too fast. Five years ago, Brosnan and de Waal did a similar experiment with capuchins and then chimps. They showed that capuchins and chimps refuse a reward if another individual gets more for the same task. However, this does not show that monkeys are averse to inequity, only that they reject a lesser reward when better rewards are available (see also here and here).” Read more . . .

“. . . . An intriguing hypothesis, that the occupation, and perhaps more importantly, the personal disposition of the individuals in this crowd explained their behavior that morning. It’s an intuitively appealing hypothesis as well: it seems logical that had the tunnel been filled with professional wrestlers, Baltimore Ravens defensemen, or gubernatorial candidates from Illinois, the outcome might not have been so peaceful. Or does it? Four decades ago, in a study that has become one of the most famous in all of psychology, John Darley and Daniel Batson wanted to test this very question, to assess the relative importance of predisposition and social context in shaping pro-social behavior.” Read more . . .

* * *

For previous installments of “Situationism on the Blogosphere,” click on the “Blogroll” category in the right margin.

From the Harvard Crimson, by Weiqi Zhang, here is a fascinating article titled “A Chance Road to Harvard” about the remarkable journey of Situationist contributor Mahzarin Banaji.

* * *

Fifteen-year-old Mahzarin R. Banaji says she dreamed of living the adventurous life of a secretary upon graduating from high school because she believed that further academic pursuit was useless and was thirsting for an independent life away from her home in Secunderabad, India.

But a little less than a decade later—after a series of self-described “fortuitous” events—Banaji found herself a student at Ohio State University, studying for a Ph.D. in social psychology. And in 2002 she became a Harvard professor at the invitation of University President Drew G. Faust, then-dean of the newly-founded Radcliffe Institute for Advanced Study.

Today Banaji is the Richard Clarke Cabot Professor of Social Ethics in the psychology department, where she has pioneered the study of unconscious prejudice. “Professor Banaji is one of the most celebrated, most cited, and most influential social psychologists of her generation for good reason—her work on unconscious bias has revolutionized how we think about the topic,” Psychology professor Daniel T. Gilbert wrote in an e-mail.

In his best-selling book “Blink,” Malcolm Gladwell devoted 12 pages to Banaji’s research on the Implicit Association Test, a methodology created by Banaji’s dissertation advisor, Anthony G. Greenwald, in 1994. The test is designed to measure the strength of automatic association and is an important tool in social psychology today.

“Individuals are created and shaped by social circumstances far more than they or their observers are able to recognize,” Banaji wrote in an unpublished biography for the Guggenheim Foundation. “Mainly in retrospect, I see my career as a textbook case of how fortuitous circumstances and responsive bystanders eased the path for my growth.”

THE ROAD TO THE IVORY TOWER

After a few late-night talks with her mother, who never attended college herself, Banaji suspended her plans to enter the typing pool and agreed to give college a try for one semester, after which the two agreed that Banaji would be free to choose her own path. That one semester proved to be worthwhile for Banaji. Initially attracted to Nizam College for its co-educational system and proximity to the largest cricket stadium in Hyderabad, Banaji says she found the cosmopolitan social and academic environment a liberating experience. Her ambition of becoming a secretary aborted, she went on to pursue an M.Phil/Ph.D. in general psychology at Jawaharlal Nehru University in New Delhi.

Mind-opening as JNU would prove to be, Banaji never finished her study there. “There are few occasions in one’s life when a course of action presents itself with such clarity that there is nothing to do but pursue it,” she wrote in her Guggenheim biography. While she was on the train home from New Delhi for the holidays, Banaji purchased five volumes of the Handbook of Social Psychology edited by Lindzey and Aronson, for five dollars, lured not so much by the books’ content as by their low price. By the time she arrived home 24 hours later, she had already devoured an entire volume. She says the combination of a focus on social process with an experimental approach presented by the book particularly appealed to her.

A year later, Banaji boarded a plane to Columbus, Ohio, leaving the psychophysics and Marxist sociology she had been studying in India behind. After receiving her Ph.D. from Ohio State in social psychology, Banaji traveled around the country as research assistant, instructor, and post-doc fellow in several different institutes, before she finally settled at Yale to study unconscious bias.

When the Radcliffe Institute for Advanced Study was founded in 2001, Banaji accepted the invitation to teach at Harvard, becoming a professor in the Department of Psychology and the Carol K. Pforzheimer Professor at Radcliffe.

‘THE MADONNA OF OUR FIELD’

Reflecting on her unusual career path, Banaji says she is reluctant to use her own life as a model for students. But she says she is a strong advocate for women in science careers.

Her implicit association experiments have shown that even female scientists can unconsciously associate men with terms like “astronomy” and “chemistry” and women with “music” and “history.”

Former University President Lawrence H. Summers once quoted Banaji on unconscious prejudice against women in science, recommending people to visit the Web site for the Implicit Association Test, which is maintained by Banaji and her former student Brian A. Nosek, now a psychology professor at the University of Virginia. Just months before, Summers had made his now-infamous comments that attributed the dearth of women in top science careers to “issues of intrinsic aptitude.”

Knowing this prejudice well, Banaji says she always goes out of her way to support aspiring female students in science.

“For younger women whose identity as women in science is not fully formed, I need to keep an eye out,” Banaji says. “If somebody like that comes along and asks, ‘I wanna give up mathematics for social studies,’ [I would suggest to her] ‘well, hold on, maybe you should go. But maybe you shouldn’t.’”

“As a young woman, I cannot tell you how she has influenced the generations after her,” Dana R. Carney, a former post-doctoral student of Banaji’s, wrote in an e-mail. “She is like the Madonna of our field: masculine, feminine, fierce, warm, irreverent, creative, inspiring.” Carney is now an assistant professor at Columbia University’s Graduate School of Business. She says she still remembers listening to Banaji’s “mesmerizing” speech as a first year graduate student, five years before she became her post-doc student. “I remember going up to her…and I shook her hand, and I told her that it’s so rare as a young woman you’ve got to model yourself after those people that sort of defy gender stereotype,” Carney recalls. “She is just a scientist. She is not a woman. She is not a man. She is just so inspiring.”

CRITICAL, BUT CARING

In 2006, Banaji journeyed to Philadelphia to pay tribute to her Ph.D. adviser Greenwald, who was receiving the Distinguished Scientist Award, a prestigious prize given by the Society of Experimental Social Psychology. Banaji delivered an address recounting her memory of Greenwald. “It made the entire room of five or six hundred people cry,” Carney says, referring to Banaji’s speech. “It was just mind blowing.”

“Tony Greenwald changed my life,” Banaji says. “He didn’t care what I knew, or how little I knew, or how poor a writer I was…he just wouldn’t let go. If I wrote a draft and I gave it to him he would mark it again, but the 13th draft he would still mark it. Every draft, I saved them all.”

Banaji attributes her pedagogical skill in part to Greenwald, saying that he has had an important impact on how she interacts with her students. Greenwald, however, says he disagrees.

“As a teacher, she has abilities that I barely comprehend,” he wrote in an e-mail. “I have no idea how she acquired her skill, other than to be sure that she didn’t get it from me.”

Greenwald does say that both he and Banaji require their advisees to learn how to thrive while receiving more criticism than praise, as they share the belief that perseverance in the face of criticism is a trait shared by almost all successful scientists.

“She just really had a unique combination of being incredibly rigorous and demanding, but at the same time you knew she supported you and that her heart was with you,” said Curtis D. Hardin, Banaji’s first Ph.D. student and now a professor at Brooklyn College. “For one thing, any paper, from paper in the class to thesis, was met with just incredibly detailed line by line comments, suggestions, questions…this is probably the biggest sign of love because it takes a lot to do that.”

Hardin adds that Banaji and her husband always open their home to graduate students. “She taught me how to cook simple Indian dishes, and we watched elections together. Their company was just so good,” Hardin says. “They knew what it was like to be a starting graduate student.”

Banaji is deeply involved in the undergraduate experience at Harvard, too. As Head Tutor for the Psychology Department, she leads a committee on undergraduate instruction and oversees all students writing theses. She also has three freshman advisees.

“Even though she’s my academic advisor she would talk to me like my old-time friend, and she really cares about how I am doing and my emotional state,” says Nam Hee Kim ’12, one of her advisees.

Looking back, Banaji says that her teaching experience in India at the age of five might have shaped how she communicates with students today. After Banaji and her sister were born, their mother, a school teacher, became unhappy that she had to stay at home and could not teach anymore. She had a carpenter make three small tables and very small chairs, and opened a school in the house.

“I would be home, so I had to deal with all the kids. So my mother would say, ‘you are five years old, you teach this four-year-old how to write letters.’ Every year I was teaching somebody one year younger than I,” Banaji says. “By the time I was eight, I had three years of teaching experience.”

A meta-analysis of data from eight of Milgram’s obedience experiments reveals previously undocumented systematicity in the behavior of disobedient participants. In all studies, disobedience was most likely at 150 v, the point at which the shocked “learner” first requested to be released. Further illustrating the importance of the 150-v point, obedience rates across studies covaried with rates of disobedience at 150 v, but not at any other point; as obedience decreased, disobedience at 150 v increased. In contrast, disobedience was not associated with the learner’s escalating expressions of pain. This analysis identifies a critical decision point in the obedience paradigm and suggests that disobedient participants perceived the learner’s right to terminate the experiment as overriding the experimenter’s orders, a finding with potential implications for the treatment of prisoners.

* * *

To download a pdf draft version of the article, click here. For a collection of Situationist posts discussing Stanley Milgram’s obedience experiments, click here.

Replicating one of the most controversial behavioral experiments in history, a Santa Clara University psychologist has found that people will follow orders from an authority figure to administer what they believe are painful electric shocks.

More than two-thirds of volunteers in the research study had to be stopped from administering 150 volt shocks of electricity, despite hearing a person’s cries of pain, professor Jerry M. Burger concluded in a study published in the January issue of the journal American Psychologist.

“In a dramatic way, it illustrates that under certain circumstances people will act in very surprising and disturbing ways,” said Burger.

The study, using paid volunteers from the South Bay, is similar to the famous 1974 “obedience study” by the late Yale University psychologist Stanley Milgram. In the wake of Nazi war criminal Adolf Eichmann’s trial, Milgram was troubled by the willingness of people to obey authorities — even if it conflicted with their own conscience.

* * *

The subjects — recruited in ads in the Mercury News, Craigslist and fliers distributed in libraries and community centers in Santa Clara, Cupertino and Sunnyvale — thought they were testing the effect of punishment on learning. “They were average citizens, a typical cross-section of people that you’d see around every day,” said Burger.

* * *

Burger found that 70 percent of the participants had to be stopped from escalating shocks over 150 volts, despite hearing cries of protest and pain. Decades earlier, Milgram found that 82.5 percent of participants continued administering shocks. Of those, 79 percent continued to the shock generator’s end, at 450 volts.
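The two figures the article gives for Milgram's original results can be combined into a single overall rate. This is an inference from the numbers quoted above, not a figure the article itself reports:

```python
# Arithmetic sketch of the obedience rates reported above.
# Percentages are from the article; combining them is an inference made here.
continued_past_150v = 0.825   # Milgram: fraction who continued past 150 volts
of_those_to_450v = 0.79       # Milgram: of those, fraction who went to 450 volts

# Implied overall full-obedience rate (continued all the way to 450 volts)
full_obedience = continued_past_150v * of_those_to_450v
print(f"Implied full-obedience rate in Milgram's study: {full_obedience:.1%}")

# Burger's replication stopped at the 150-volt point
burger_past_150v = 0.70
gap = continued_past_150v - burger_past_150v
print(f"Difference at the 150-volt point, Milgram vs. Burger: {gap:.1%}")
```

The implied full-obedience rate of roughly 65 percent matches the figure commonly cited for Milgram's baseline condition, and the comparison at the 150-volt point is the one Burger's design permits, since his procedure did not continue beyond that threshold.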

Burger’s experiment did not go that far.

“The conclusion is not: ‘Gosh isn’t this a horrible commentary on human nature,’ or ‘these people were so sadistic,’” said Burger.

“It shows the opposite — that there are situational forces that have a much greater impact on our behavior than most people recognize,” he said.

The experiment shows that people are more likely to comply with instructions if the task starts small, then escalates, according to Burger.

“For instance, the suicides at Jonestown were just the last step of many,” he said. “Jim Jones started small, asking people to donate time and money, then looked for more and more commitment.”

Additionally, the volunteers confronted a novel situation — having never before been in such a setting, they had no idea of how they were supposed to act, he said.

Finally, they had been told that they should not feel responsible for inflicting pain; rather, the “instructor” was accountable. “Lack of feeling responsible can lead people to act in ways that they might otherwise not,” said Burger.

“When we see people acting out of character, the first thing we should ask is: ‘What’s going on in this situation?’”

* * *

To read the entire article, click here. To watch an ABC news video about Professor Burger’s research, click on the video below.

Robert Zajonc, pioneer of social psychology, dies at 85

He witnessed and survived some of the worst of human behavior to become one of the world’s leading experts on how people behave.

And during the 85 years between his birth in Poland and death Dec. 3 in Palo Alto—a span that led him through Nazi bombings and prisons before winding toward a life in academia—Robert Zajonc laid the foundation for the field of social psychology by exploring the connections between how people feel and how they think.

As an emeritus professor of psychology at Stanford since 1994, Zajonc (his name rhymes with “science”) focused his research on genocide, racism and terrorism.

He had already made a name for himself while teaching at the University of Michigan, conducting groundbreaking experiments that attracted controversy and acclaim.

As the scientist who demonstrated and coined the “mere exposure effect,” Zajonc found that people have positive feelings about things they’re familiar with. In a series of studies in the 1960s, Zajonc flashed random images in front of his subjects—Chinese characters, faces and geometric figures. When asked which images they liked the most, the subjects picked the ones they saw the most.

Zajonc also made news a decade later when he found that larger families have lower overall IQ scores than smaller ones. His studies showed that IQs would decline among siblings from the oldest to the youngest. Part of the reason, he explained, was that older children had more time to receive the undivided attention of their parents.

He found that first-borns do better on college entrance exams, and older children who tutor their little brothers and sisters get the biggest benefit out of the arrangement.

“Explaining something to a younger sibling solidifies your knowledge and allows you to grow more extensively,” he told the New York Times last year. “The younger one is asking questions, and challenging meanings and explanations, and that will contribute to the intellectual maturity of the older one.”

These were the conclusions of an only child who triumphed over the most difficult situations, finally succumbing to the pancreatic cancer that he fought for the last few years.

Born in Lodz in 1923, Zajonc and his parents fled to Warsaw in 1939 when the Nazis invaded Poland. They moved into a relative’s apartment, but the building was bombed two weeks after they arrived. Zajonc’s parents were killed, and his legs were broken.

After recuperating in a hospital for six months, the 16-year-old was arrested by Nazi soldiers for not having any identification papers and was sent to a German labor camp. Put to work on a farm, he managed to escape with two other prisoners in 1942. They walked more than 200 miles into France, but were recaptured by the Germans after crossing the border and sent to a French prison.

He again staged a breakout with another prisoner, and the two walked for about 550 miles, stealing food and clothes before finding a fisherman who brought them to Ireland. From there, Zajonc made his way to England, where he worked as a translator for the U.S. Army.

After World War II, he came to the United States and earned bachelor’s, master’s and doctoral degrees from the University of Michigan. It was there that he established himself as a leading psychologist and met his wife, Hazel Rose Markus, who is the Davis-Brack Professor in the Behavioral Sciences at Stanford.

Along with showing how family structure influences intellectual performance and demonstrating that people prefer things they are familiar with, Zajonc’s work at Michigan also found that people who do something well do it even better with others watching. But they are more likely to make mistakes performing a new task in front of others than when working alone.

He also studied married couples and determined that, after years of unconsciously mimicking one another’s facial expressions, husbands and wives start looking alike.

Zajonc retired from Michigan in 1994 and became an emeritus professor at Stanford. He remained active, urging interdisciplinary research on massacres and analyzing responses to the 9/11 terrorist attacks.

In addition to Markus, Zajonc is survived by their daughter, Krysia Zajonc, of Puerto Viejo, Costa Rica; three sons from a previous marriage, Peter Zajonc of Nyack, N.Y., Michael Zajonc of Leuven, Belgium, and Joseph Zajonc of Seattle; and four grandchildren.

* * *

To read an excellent New York Times piece about Robert Zajonc, click here.