Archive for the ‘Illusions’ Category

Featuring members of the Harvard Placebo Study Group, “Placebo: Cracking the Code” examines the power of belief in alleviating pain, curing disease, and healing injuries.

The placebo effect is a pervasive, albeit misunderstood, phenomenon in medicine. In the UK, over 60% of doctors surveyed said they had prescribed placebos in regular clinical practice. In a recent Time Magazine article, 96% of US physicians surveyed stated that they believe that placebo treatments have real therapeutic effects.

Work on the placebo effect received an intellectual boost when the Harvard Placebo Study Group was founded at the beginning of 2001. The group is part of the Mind-Brain-Behavior Initiative at Harvard University, and its hallmark is an interdisciplinary approach to the placebo phenomenon. It is made up of eight members: Anne Harrington (Historian of Science at Harvard), Howard Fields (Neuroscientist at Univ. of California in San Francisco), Dan Moerman (Anthropologist at Univ. of Michigan), Nick Humphrey (Evolutionary Psychologist at London School of Economics), Dan Wegner (Psychologist at Harvard), Jamie Pennebaker (Psychologist at Univ. of Texas in Austin), Ginger Hoffman (Behavioral Geneticist at Harvard) and Fabrizio Benedetti (Neuroscientist at Univ. of Turin). The group’s objective is twofold: to devise new experiments that may shed light on the placebo phenomenon and to write papers that approach the placebo effect from different perspectives.

Solomon Asch . . . . became famous in the 1950s, following experiments which showed that social pressure can make a person say something that is obviously incorrect.

This experiment was conducted with 123 male participants. Each participant was placed in a group with 5 to 7 “confederates” (people who knew the true aims of the experiment but were introduced as fellow participants to the naive “real” participant). The group was shown a card with a line on it, followed by another card with 3 lines on it labeled a, b, and c. The participants were then asked to say which line matched the line on the first card in length. Each line question was called a “trial.” The “real” participant answered last or next to last. For the first two trials, the subject would feel at ease in the experiment, as he and the other “participants” gave the obvious, correct answer. On the third trial, however, all of the confederates would begin giving the same wrong answer. There were 18 trials in total, and the confederates answered incorrectly on 12 of them; these 12 were known as the “critical trials.” The aim was to see whether the real participant would change his answer and respond in the same way as the confederates, despite it being the wrong answer.

Solomon Asch thought that the majority of people would not conform to something obviously wrong, but the results showed that participants conformed to the majority on 37% of the critical trials. However, 25% of the participants did not conform on any trial. 75% conformed at least once, and 5% conformed every time.

The good folks at Big Think interviewed behavioral economist Dan Ariely and asked him about the nature of objective reality. Among other things, Ariely had this to say:

It turns out that if a physician comes to you and injects you with whatever – saline water – your body expects pain relief. And your body secretes substances that are very much like morphine. So it doesn’t matter what you get from the injection. You actually get pain relief from your own body as a reaction to that. Now you can’t just close your eyes and say, “Please can I have some pain killers.” That doesn’t work. But when a physician injects you with anything – even saline water – you get the pain relief that is actually a substance you can’t buy over the counter. It’s like morphine. But what is the reality there?

Our favorite radio program, This American Life, broadcast an especially situationist episode in July, which you can listen to here. The program’s description is as follows.

* * *

Prologue.

Amy Roberts thought it was obvious that she was an adult, not a kid, and she assumed the friendly man working at the children’s museum knew it too. Unfortunately, the man had Amy pegged all wrong. And by the time she figured it out, it was too late for either of them to save face. Host Ira Glass talks to Amy about the embarrassing ordeal that taught her never to assume she knows what someone else is thinking. (8 1/2 minutes)

Act One. The Fat Blue Line.

While riding in a patrol car to research a novel, crime writer Richard Price witnessed a misunderstanding that for many people is pretty much accepted as an upsetting fact of life. Richard Price told this story—which he describes as a tale taken from real life and dramatized—onstage at the Moth in New York. Price’s most recent novel is Lush Life, which he’s adapting for film. (12 minutes)

Act Two. Stereotypes Uber Alles.

When writer Chuck Klosterman got back from a trip to Germany, friends asked him what Germans were like. Did nine days as an American tourist make him qualified to answer? In this excerpt of an essay he wrote for Esquire magazine, Chuck explains why not. (6 minutes)

Act Three. Yes, No or Baby.

There are some situations where making judgments about people based on limited amounts of information is not only accepted, but required. One of those situations is open adoption, where birth mothers actually choose the adoptive parents for their child. TAL producer Nancy Updike talks to a pregnant woman named Kim going through the first stage of open adoption: reading dozens of letters from prospective parents, all of whom seem utterly capable and appealing. With so many likeable candidates to choose from, Kim ends up focusing on tiny details of people’s lives. (6 minutes)

Act Four. Paradise Lost.

Shalom Auslander tells the story of the time he went on vacation, pegged the guest in the room next door as an imposter and devoted his holiday to trying to prove it. Shalom Auslander is the author, most recently, of the memoir Foreskin’s Lament.

Daniel Dennett is the co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. Here is a brief Big Think video of Dennett discussing some of the problems of the human brain, including the “very sharp limit to the depth that we as conscious agents can probe our own activities.”

From Chautauqua Institution, here’s a worthwhile video in which renowned social psychologist Elizabeth Loftus, Distinguished Professor at the University of California, Irvine, discusses her remarkable research on human memory and the prevalence of false memories. She also explains how her findings are relevant for everything from law to dieting.

The op-ed excerpted below, “America’s Best Colleges: Merit by the Numbers,” by Harvard Law School Professor Lani Guinier and Columbia Law Professor Susan Sturm, appeared in the August 5, 2009, edition of Forbes. It eloquently examines the role universities play, and fail to play, in educating young people, and how they promote the system-justifying illusion of merit.

* * *

In its recent commencement issue, an Ivy League college newspaper displayed a snapshot of the Class of 2009 “by the numbers.” Although the students had by then been at the college for four years, all of the relevant “numbers” were based on a profile of the class at the time of enrollment. Prominently featured were the 157 children of alumni; the 9.7% of applicants who were admitted; and, last but not least, the median SAT math and verbal scores–740 and 750, respectively–of the class.

Amazingly, all the “merits” of the graduating seniors involved attributes that predated the students’ arrival on campus. Totally missing from the portrait was the “merit” that the Class of 2009 developed as a result of the four years they had spent at the school. Nor was any mention made of the contributions they were poised to make. Apparently, the defining qualities of merit–and what was most valued in the students–were the attributes they already had as incoming freshmen.

Selective colleges and universities, like this one, act more like consumers than producers of merit. They build their reputation based on the credentials of the people they admit rather than the contributions of the people they graduate. Trapped by a rankings culture that ties their reputation to admissions inputs, they also define themselves by whom they exclude. Schools that attract a lot of applicants, and then reject 92.3% of them, are held in the highest esteem.

But this process valorizes a uniform set of test-taking skills that produce results no better at predicting college performance than family wealth. In fact, Jesse Rothstein, a Princeton economist, found that the socio-economic status of a high school is a better predictor of what kind of grades its students will earn their first year in college than the individual SAT scores of its students. In effect, the testocracy reproduces privilege and stratifies the higher-education system by race and class. Individuals who perform well on high-stakes tests are awarded admission to college or law school as a prize for performance on a test that best predicts not aptitude, but parental income and education. This inequality effect of the SAT undermines a range of public values, from providing access to college independent of wealth and privilege to developing problem-solving capabilities.

The preoccupation with backwards-looking statistical criteria also severs the tie between admissions and mission. The testing regime deflects the college’s responsibility away from, for example, producing a diverse and dynamic learning environment that actually builds capacity among the students to become the leaders, thinkers and entrepreneurs of the next generation. Reputation based on numerical ranking assumes greater importance than reputation based on the development of innovative ideas and publicly spirited graduates. The primary function of admission becomes status and prestige enhancement for the institution itself and for those who enroll.

Beyond that, the pre-eminent role of high-stakes tests also negatively affects student engagement and learning. The standardized and time-limited nature of the SAT and the ACT fixate student attention on the mastery of test-taking techniques, rather than on developing qualities such as creativity, ability to collaborate, critical thinking and drive. It misleads, by reinforcing the view that ability is fixed when in fact intelligence is both malleable and incremental.

A static view of intelligence has been widely debunked; it undermines intellectual risk-taking. It is also self-fulfilling. According to social psychologist Carol Dweck, students who hold a “fixed” theory of intelligence expend enormous energy worrying about how smart they are–they become preoccupied with avoiding mistakes and are less likely to engage in the excitement of learning. In contrast, people who believe that one’s capacity to learn is “expandable” are more willing to challenge themselves.

An openness to learning from others is crucial in today’s complex environment. People who tackle the world’s problems with the benefit of different perspectives are less likely to get stuck in the same dead ends. As Scott Page, a professor of complex systems, has demonstrated, diverse groups have greater potential to generate effective and innovative solutions to tough problems.

* * *

To read the entire column, including Guinier and Sturm’s discussion of solutions and alternatives to the “testocracy,” click here.

Sam has been an active racist his entire life. For decades, he has called blacks demeaning names; he has written about their inferiority; he has threatened them and beaten them; he has attended lynchings.

Under great pressure from various acquaintances and friends, in his seventieth year of life, he stops using the “n” word and ends the explicit prohibition on hiring blacks at his factory.

Ten years later, however, his business still has an almost all-white workforce and no black managers, despite receiving many applications from black candidates.

Should we trust Sam that racial bias has nothing to do with the disparity?

If you are like me, despite hoping that Sam has changed, you are deeply skeptical. A person carries his past with him, and it continues to shape his life—even when he genuinely believes he has left it far behind.

The same is true of countries.

Our own dear old Uncle Sam has come a long way from the Montgomery bus boycott and the Greensboro sit-ins: today, fifty years later, there is broad agreement in society that bias and discrimination based on race are abhorrent.

But we must not forget that, in the history of our country, this consensus is a very recent development. For most of our past, bias and discrimination were the norm—permitted by statutory and constitutional law and supported by public opinion that openly held whites to be superior to blacks.

At a speech last week celebrating the 100th anniversary of the National Association for the Advancement of Colored People, President Obama made exactly that point, even as he urged black America to do its part to help black children succeed: “Make no mistake, no mistake: the pain of discrimination is still felt in America.”

When asked several days later about the arrest of African-American Harvard professor Henry Louis Gates, Jr. in his own home, Obama emphasized the “long history in this country of African-Americans being stopped disproportionately by the police” and suggested that the incident was “a sign of how race remains a factor in this society.”

There are many out there who strongly disagree with the president, who believe that we have reached the end of our long journey out of night—that we stand at the dawning of post-racial America. As former Bush administration official John Yoo argued in the Philadelphia Inquirer, protections for minorities written into employment law, election law, and college admissions “might have been justified in the 1960s . . . [but] they are necessary no longer.” Our nation has fulfilled its promise of creating a nation that ensures “the proposition that all men are created equal.”

While I share Yoo’s desire to embrace progress and to step into the light, I cannot ignore the evidence that suggests that his assessment is wrong.

First, blacks do not enjoy outcomes equal to those of whites with respect to income, education, health care, and numerous other areas.

Consider just the statistics on criminal law: Forty percent of felony defendants are black, and a black male is five times more likely to serve time in prison over his lifetime than a white male. Blacks also receive significantly higher bail amounts, are given longer sentences, and are more likely to be sentenced to death than their white counterparts. In fact, the more stereotypically black your features are, the more likely you are to receive the death penalty.

Second, this disparate impact appears to have its roots in implicit biases held by many Americans beyond their conscious awareness or control.

Over the last 10 years, hundreds of thousands of individuals have participated in research studies measuring their racial stereotypical associations using the Implicit Association Test, developed by psychology professors from Harvard, the University of Washington, and the University of Virginia.

Approximately 70 percent of those who have taken the test have demonstrated a preference for whites over blacks.

Just as critically, the test has significant value at predicting social judgment and behavior, as an overview analysis of 122 research reports published in the Journal of Personality and Social Psychology last month documented. Physicians with a white preference on the Implicit Association Test, for example, provided less effective treatments to hypothetical black coronary artery disease patients than to white patients. Likewise, individuals with a white preference on the test were more likely to shoot a black target in a simulation than a white target engaging in identical behavior.

This evidence does not mean that, today, Americans are all hate-filled bigots. One of the major findings is that many egalitarians—those genuinely committed to racial equality, including the test designers themselves—show automatic race preference.

What it means is that the hundreds of years of explicit racism in our country have left a mark within us. We may be completely unaware of its existence, but it is influencing our actions.

Uncle Sam is on the right path. The election of our first minority president and the likely confirmation of Judge Sonia Sotomayor to the Supreme Court are testaments to how far we have come, but they are welcome signs of progress, not an indication that we have reached our destination.

In the following TED Talk video, Dan Ariely, Professor of Economics at Duke University, behavioral economist, and the author of Predictably Irrational, offers some now-standard but still interesting illustrations of how situation influences our perception and choices.

Despite advances in computer graphics, few people would mistake virtual characters or objects for real ones. Yet, placed in a virtual reality environment, most people will interact with them as if they are really there. European researchers are finding out why.

In trying to understand presence – the propensity of humans to respond to fake stimuli as if they are real – the researchers are not just gaining insights into how the human brain functions. They are also learning how to create more intense and realistic virtual experiences, opening the door to myriad applications for healthcare, training, social research and entertainment.

“Virtual environments could be used by psychiatrists to help people overcome anxiety disorders and phobias . . . by researchers to study social behaviour not practically or ethically reproduced in the real world, or to create more immersive virtual reality for entertainment,” explains Mel Slater, a computer scientist at ICREA in Barcelona and University College, London, who led the team behind the research.

Working in the EU-funded Presenccia project, Slater and his team, drawn from fields as diverse as neuroscience, psychology, psychophysics, mechanical engineering and philosophy, conducted a variety of experiments to understand why humans interpret and respond to virtual stimuli the way they do and how those experiences can be made more intense.

For one experiment they developed a virtual bar, which test subjects enter by donning a virtual reality (VR) headset or immersing themselves in a VR CAVE in which stereo images are projected onto the walls. As the virtual patrons socialise, drink and dance, a fire breaks out. Sometimes the virtual characters ignore it, sometimes they flee in panic. That in turn dictates how the real test subjects, immersed in the virtual environment, respond.

Panic and pain . . . virtually

“We have had people literally run out of the VR room, even though they know that what they are witnessing is not real,” says Slater. “They take their cues from the other characters.”

In another instance, the researchers re-enacted controversial experiments conducted by American social psychologist Stanley Milgram in the 1960s that showed people’s propensity to follow orders even if they know what they are doing is wrong. Instead of using a real actor, as Milgram did, the Presenccia team used a virtual character to which the test subject was instructed to give progressively more intense electric shocks whenever it answered questions incorrectly. The howls of pain and protest from the character, a virtual woman, increased as the experiment went on.

“Some of the test subjects felt so uncomfortable that they actually stopped participating and left the VR environment. Around half said they wanted to leave, but said they did not because they kept telling themselves it wasn’t real,” Slater says.

All had physical reactions, measured by their skin conductivity, perspiration and heart rate, showing that, at a subconscious level, people’s responses are similar regardless of whether what they are experiencing is real or virtual. The plausibility of the events enhances the sense that what is happening is real. Plausibility, Slater says, is therefore more important to presence than the quality of the graphics in a VR environment.

For example, when a test subject was made to stand on the edge of a virtual pit, staring down at an 18-metre drop, their level of anxiety increased if they could see dynamically changing shadows and reflections of their virtual body even if the graphics were poor. In other experiments, the researchers made people believe that a virtual hand was their own – replicating in VR the so-called “rubber hand illusion” – or that they were looking at themselves from another angle, creating a kind of out-of-body experience. In one trial, they even gave male test subjects a woman’s body.

Help with phobias and paranoia

By understanding what makes people perceive virtual objects and experiences to be real, the researchers hope to create applications that could revolutionise certain psychiatric treatments. Patients with a fear of spiders or heights, for example, could be exposed to and helped to overcome their fears in virtual reality. Similarly people who are shy or paranoid about public speaking could be helped by having to face virtual people and crowds.

“One application we are working on is designed to help shy men overcome their fear of meeting women by making them interact with a virtual woman,” Slater says.

The technology is also being used for social research which, much like the Milgram experiments, would not be practical or ethical to conduct in the real world. One experiment due to be run at University College, London, will use a virtual environment to study how people respond to violence in public places, such as a bar fight between football hooligans.

Besides healthcare and research, more immersive VR would also help in training, potentially greatly improving the results of flight or driving simulators. Slater also envisions VR environments being used to train people to use prosthetic limbs and wheelchairs through mind control before trying them out in the real world. A brain-computer interface (BCI) developed for just such a purpose was tested in the Presenccia project and in a similarly named predecessor called Presencia, which received funding under the EU’s Sixth and Fifth Framework Programmes for research, respectively.

* * *

Though immersive VR is likely to have many applications in healthcare, research and training, the biggest market is probably entertainment. With the cost of VR technology coming down, people could eventually be exploring virtual worlds and interacting with virtual characters and other people through VR rooms in their homes akin to the “holodecks” seen in Star Trek, Slater says.

Skill in manipulating people’s perceptions has earned magicians a new group of spellbound fans: Scientists seeking to learn how the eyes and brain perceive — or don’t perceive — reality.

“The interest for magic has been there for a long time,” says Gustav Kuhn, a neuroscientist at Durham University in England and former performing magician. “What is new is that we have all these techniques to get a better idea of the inner workings of these principles.”

A recent brain imaging study by Kuhn and his colleagues revealed which regions of the brain are active when people watch a magician do something impossible, such as make a coin disappear. Another research group’s work on monkeys suggests that two separate kinds of brain cells are critical to visual attention. One group of cells enhances focus on what a person is paying attention to, and the other actively represses interest in everything else. A magician’s real trick, then, may lie in coaxing the suppressing brain cells so that a spectator ignores the performer’s actions precisely when and where required.

Using magic to understand attention and consciousness could have applications in education and medicine, including work on attention impairments.

Imaging the impossible

Kuhn and his collaborators performed brain scans while subjects watched videos of real magicians performing tricks, including coins that disappear and cigarettes that are torn and miraculously put back together. Volunteers in a control group watched videos in which no magic happened (the cigarette remained torn), or in which something surprising, but not magical, took place (the magician used the cigarette to comb his hair). Including the surprise condition allows researchers to separate the effects of witnessing a magic trick from those of the unexpected.

In terms of brain activity patterns, watching a magic trick was clearly different from watching a surprising event. Researchers saw a “striking” level of activity in the left hemisphere only when participants watched a magic trick, Kuhn says. Such a clear hemisphere separation is unusual, he adds, and may represent the brain’s attempt to reconcile the conflict between what is witnessed and what is thought possible. The two brain regions activated in the left hemisphere — the dorsolateral prefrontal cortex and the anterior cingulate cortex — are thought to be important for both detecting and resolving these types of conflicts.

Masters of suppression

Exactly how the brain attends to one thing and ignores another has been mysterious. Jose-Manuel Alonso of the SUNY State College of Optometry in New York City thinks that the answer may lie in brain cells that actively suppress information deemed irrelevant by the brain. These cells are just as important, if not more so, than cells that enhance attention on a particular thing, says Alonso. “And that is a very new idea . . . . When you focus your attention very hard at a certain point to detect something, two things happen: Your attention to that thing increases, and your attention to everything else decreases.”

Alonso and his colleagues recently identified a select group of brain cells in monkeys that cause the brain to “freeze the world” by blocking out all irrelevant signals and allowing the brain to focus on one paramount task. Counter to what others had predicted, the team found that the brain cells that enhance attention are distinct from those that suppress attention. Published in the August 2008 Nature Neuroscience, the study showed that these brain cells can’t switch jobs depending on where the focus is — a finding Alonso calls “a total surprise.”

The work also shows that as a task gets more difficult, both the enhancement of essential information and suppression of nonessential information intensify. As a monkey tried to detect quicker, more subtle changes in the color of an object, both types of cells grew more active.

Alonso says magicians can “attract your attention with something very powerful, and create a huge suppression in regions to make you blind.” In the magic world, “the more interest [magicians] manage to draw, the stronger the suppression that they will get.”

Looking but not seeing

In the French Drop trick [see video below], a magician holds a coin in the left hand and pretends to pass the coin to the right hand, which remains empty. “What’s critical is that the magician looks at the empty hand. He pays riveted attention to the hand that is empty,” researcher Stephen Macknik says.

Several experiments have now shown that people can stare directly at something and not see it. For a study published in Current Biology in 2006, Kuhn and his colleagues tracked where people gazed as they watched a magician throw a ball into the air several times. On the last throw, the magician only pretended to toss the ball. Still, spectators claimed to have seen the ball launch and then miraculously disappear in midair. But here’s the trick: In most cases, subjects kept their eyes on the magician’s face. Only when the ball was actually at the top part of the screen did participants look there. Yet the brain perceived the ball in the air, overriding the actual visual information.

Daniel Simons of the University of Illinois at Urbana-Champaign and his colleagues asked whether more perceptive people succumb less easily to inattentional blindness, which is when a person doesn’t perceive something because the mind, not the eyes, wanders. In a paper in the April Psychonomic Bulletin & Review, the researchers report that people who are very good at paying attention had no advantage in performing a visual task that required noticing something unexpected. Task difficulty was what mattered. Few participants could spot a more subtle change, while most could spot an easy one. The results suggest that magicians may be tapping in to some universal property of the human brain.

“We’re good at focusing attention,” says Simons. “It’s what the visual system was built to do.” Inattentional blindness, he says, is a by-product, a necessary consequence, of our visual system allowing us to focus intently on a scene.

Magical experiments

Martinez-Conde and Macknik plan to study the effects of laughter on attention. Magicians have the audience in stitches throughout a performance. When the audience is laughing, the magician has the opportunity to act unnoticed. Understanding how emotional states can affect perception and attention may lead to more effective ways to treat people who have attention problems. “Scientifically, that can tell us a lot about the interaction between emotion and attention, of both the normally functioning brain and what happens in a diseased state,” says Martinez-Conde.

Macknik expects that the study of consciousness and the mind will benefit enormously from teaming up with magicians. “We’re just at the beginning,” he says. “It’s been very gratifying so far, but it’s only going to get better.”

From Situationist friend and situationist legal scholar Andrew Perlman, we received the following message:

* * *

“In case you haven’t seen it, this video of a talent show contestant in Britain has become a world-wide phenomenon. The reason is simple — situational cues prepare us for a stunningly bad performance, and we end up getting quite the opposite. You can find the video here.

* * *

It’s really quite moving. Among other reasons, I think we intuitively realize how much appearance matters to us when we assess other people.”

* * *

For an examination of the connection between situationism and music, see Jon Hanson and Michael McCann’s “Busker or Virtuoso? Depends on the Situation.” In their post, Hanson and McCann explore how the situation in which persons listen to acclaimed violinist Joshua Bell–either while he is disguised as a subway peddler or while performing normally at a symphony–enormously influences how they regard his music. For another post exploring how our taste in music is situationally contingent, see “The Situation of Music.”

In the mid-1970s, Situationist contributor Timothy Wilson, together with Richard Nisbett, conducted one of the best-known social psychology experiments of all time. It was strikingly simple and involved asking subjects to assess the quality of hosiery. Situationist contributors Jon Hanson and David Yosifon have described the experiment this way:

Subjects were asked in a bargain store to judge which one of four nylon stocking pantyhose was the best quality. The subjects were not told that the stockings were in fact identical. Wilson and Nisbett presented the stockings to the subjects hanging on racks spaced equal distances apart. As situation would have it, the position of the stockings had a significant effect on the subjects’ quality judgments. In particular, moving from left to right, 12% of the subjects judged the first stockings as being the best quality, 17% of the subjects chose the second pair of stockings, 31% of the subjects chose the third pair of stockings, and 40% of the subjects chose the fourth—the most recently viewed pair of stockings. When asked about their respective judgments, most of the subjects attributed their decision to the knit, weave, sheerness, elasticity, or workmanship of the stockings that they chose to be of the best quality. Dispositional qualities of the stocking, if you will. Subjects provided a total of eighty different reasons for their choices. Not one, however, mentioned the position of the stockings, or the relative recency with which the pairs were viewed. None, that is, saw the situation. In fact, when asked whether the position of the stockings could have influenced their judgments, only one subject admitted that position could have been influential. Thus, Wilson and Nisbett conclude that “[w]hat matters . . . is not why the [position] effect occurs but that it occurs and that subjects do not report it or recognize it when it is pointed out to them.”

One of the core messages of more recent brain research is that most mental activity happens in the automatic or unconscious region of the brain. The unconscious mind is the platform for a set of mental activities that the brain has relegated beyond awareness for efficiency’s sake, so the conscious mind can focus on other things. In his book, “Strangers to Ourselves,” Timothy Wilson notes that the brain can absorb about 11 million pieces of information a second, of which it can process about 40 consciously. The unconscious brain handles the rest.
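To put Wilson's estimates in proportion (the 11 million and 40 are his figures, not ours), the consciously processed share works out to a vanishingly small fraction:

```python
# Wilson's estimates: sensory intake vs. conscious processing per second
total_pieces_per_second = 11_000_000   # information the brain can absorb
conscious_pieces_per_second = 40       # information processed consciously

conscious_fraction = conscious_pieces_per_second / total_pieces_per_second
print(f"Conscious share: {conscious_fraction:.6%}")
# roughly 0.00036%, i.e. about 1 in every 275,000 pieces of information
```

On these numbers, consciousness handles well under a hundred-thousandth of the incoming stream; everything else is left to the automatic mind described below.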

The automatic mind generally takes care of things like muscle control. But it also does more ethereal things. It recognizes patterns and construes situations, searching for danger, opportunities or the unexpected. It also shoves certain memories, thoughts, anxieties and emotions up into consciousness.

Much has been learned about how wrong we can be about what moves us, and about how things we don't know we know can influence us. Psychologist Susan Courtney has an absolutely terrific article in Scientific American titled “Not So Deliberate: The decisive power of what you don’t know you know.” We excerpt portions of her article below.

* * *

When we choose between two courses of action, are we aware of all the things that influence that decision? Particularly when deliberation leads us to take a less familiar or more difficult course, scientists often refer to a decision as an act of “cognitive control.” Such calculated decisions were once assumed to be influenced only by consciously perceived information, especially when the decision involved preparation for some action. But a recent paper by Hakwan Lau and Richard Passingham, “Unconscious Activation of the Cognitive Control System in the Human Prefrontal Cortex,” demonstrates that the influences we are not aware of can hold greater sway than those we can consciously reject.

Biased competition

We make countless “decisions” each day without conscious deliberation. For example, when we gaze at an unfamiliar scene, we cannot take in all the information at once. Objects in the scene compete for our attention. If we’re looking around with no particular goal in mind, we tend to focus on the objects most visually different from their surrounding background (for example, a bright bird against a dark backdrop) or those that experience or evolution have taught us are the most important, such as sudden movement or facial features — particularly threatening or fearful expressions. If we do have a goal, then our attention will be drawn to objects related to it, such as when we attend to anything red or striped in a “Where’s Waldo” picture. Stimulus-driven and goal-driven influences alike, then, bias the outcome of the competition for our attention among a scene’s many aspects.

The idea of such biased competition (a term coined in 1995 by Robert Desimone and John Duncan) also applies to situations in which we decide among many possible actions, thoughts or plans. What might create an unconscious bias affecting these types of competition?

For starters, previous experience in a situation can make some neural connections stronger than others, tipping the scales in favor of a previously performed action. The best-known examples of this kind of bias are habitual actions (as examined in a seminal 1995 article by Salmon and Butters) and what is known as priming.

Habitual actions are what they sound like — driving your kids to school, you turn right on Elm because that’s how you get there every day. No conscious decision is involved. In fact, it takes considerable effort to remember to instead turn left if your goal is to go somewhere else.

Priming works a bit differently; it’s less a well-worn route than a prior suggestion that steers you a certain way. If I ask you today to tell me the first word that comes to mind that starts with the letters mot and you answer mother, you’ll probably answer the same way if I ask you the same thing again four months from now, even if you have no explicit recollection of my asking the question. The prior experience primes you to repeat your performance. Other potentially unconscious influences are generally emotional or motivational.

Of course, consciously processed information can override these emotional and experience-driven biases if we devote enough time and attention to the decision. Preparing to perform a cognitive action (“task set”) has traditionally been considered a deliberate act of control and part of this reflective, evaluative neural system. (See, for example, the 2002 review by Rees, Kreiman and Koch — pdf download.) As such, it was thought that task-set preparation was largely immune to subconscious influences.

We generally accept it as okay that some of our actions and emotional or motivational states are influenced by neural processes that happen without our awareness. For example, it aids my survival if subliminally processed stimuli increase my state of vigilance — if, for example, I jump out of the way before I am consciously aware that the thing at my feet is a snake. But we tend to think of more conscious decisions differently. If I have time to recognize an instruction, remember what that means I’m supposed to do and prepare to make a particular kind of judgment on the next thing I see, then the assumption is that this preparation must be based entirely on what I think I saw — not what I wasn’t even aware of.

Yet Lau and Passingham have found precisely the opposite in their study — that information we’re not aware of can influence even the most deliberative, non-emotional sort of decision more strongly than does information we are aware of.

Confusing cues

Lau and Passingham had their subjects perform one of two tasks: when shown a word on a screen, the subjects had to decide either a) whether or not the word referred to a concrete object or b) whether or not the word had two syllables. A cue given just before each word — the appearance of either a square or a diamond — indicated whether to perform the concrete judgment task or the syllables task. These instruction cues were in turn preceded by smaller squares or diamonds that the subjects were told were irrelevant. A variation in timing between the first and second cues determined whether the participants were aware of seeing both cues or only the second.

As you would expect, the task was more difficult when the cues were not the same — that is, when a diamond preceded a square or a square a diamond. The surprising finding was that this confusion effect was greater when the timing between the cues was so close that the participants didn’t consciously notice the first cue. When the cues were mixed but the subjects were consciously aware of only the second instruction, their responses — and their brain activity as measured by functional magnetic resonance imaging (fMRI) — indicated that the “invisible” conflicting cue had made them more likely to prepare to do the “wrong” task. Although similar effects have been shown on tasks that involved making a decision about the appearance of the image immediately following the “invisible” image, this is the first time this effect has been demonstrated for complex task preparation.

It may not be surprising that we juggle multiple influences when we make decisions, including many of which we are not aware — particularly when the decisions involve emotional issues. Lau and Passingham, however, show us that even seemingly rational, straightforward, conscious decisions about arbitrary matters can easily be biased by inputs coming in below our radar of awareness. Although it wasn’t directly tested in this study, the results suggest that being aware of a misleading cue may allow us to inhibit its influence. And the study makes clear that influences we are not aware of (including, but not limited to, those brought in by experience and emotion) can sneak into our decisions unchecked.

The kind of storytelling my grandmother did after a series of strokes . . . [n]eurologists call . . . confabulation. It isn’t fibbing, as there is no intent to deceive and people seem to believe what they are saying. Until fairly recently it was seen simply as a neurological deficiency – a sign of something gone wrong. Now, however, it has become apparent that healthy people confabulate too.

Confabulation is clearly far more than a result of a deficit in our memory, says William Hirstein, a neurologist and philosopher at Elmhurst College in Chicago and author of a book on the subject entitled Brain Fiction . . . . Children and many adults confabulate when pressed to talk about something they have no knowledge of, and people do it during and after hypnosis. . . . In fact, we may all confabulate routinely as we try to rationalise decisions or justify opinions. Why do you love me? Why did you buy that outfit? Why did you choose that career? At the extreme, some experts argue that we can never be sure about what is actually real and so must confabulate all the time to try to make sense of the world around us.

Confabulation was first mentioned in the medical literature in the late 1880s, applied to patients of the Russian psychiatrist Sergei Korsakoff. He described a distinctive type of memory deficit in people who had abused alcohol for many years. These people had no recollection of recent events, yet filled in the blanks spontaneously with sometimes fantastical and impossible stories.

Neurologist Oliver Sacks of the Albert Einstein College of Medicine in New York wrote about a man with Korsakoff’s syndrome in his 1985 book The Man Who Mistook His Wife for a Hat. Mr Thompson had no memory from moment to moment about where he was or why, or to whom he was speaking, but would invent elaborate explanations for the situations he found himself in. If someone entered the room, he might greet them as a customer of the shop he used to own. A doctor wearing a white coat might become the local butcher. To Mr Thompson, these fictions seemed plausible and he never seemed to notice that they kept changing. He behaved as though his improvised world was a perfectly normal and stable place.

* * *

So confabulation can result from an inability to recognise whether or not memories are relevant, real and current. But that’s not the only time people make up stories, says Hirstein. He has found that those with delusions or false beliefs about their illnesses are among the most common confabulators. He thinks these cases reveal how we build up and interpret knowledge about ourselves and other people.

It is surprisingly common for stroke patients with paralysed limbs or even blindness to deny they have anything wrong with them, even if only for a couple of days after the event. They often make up elaborate tales to explain away their problems. One of Hirstein’s patients, for example, had a paralysed arm, but believed it was normal, telling him that the dead arm lying in the bed beside her was not in fact her own. When he pointed out her wedding ring, she said with horror that someone had taken it. When asked to prove her arm was fine, by moving it, she made up an excuse about her arthritis being painful. It seems amazing that she could believe such an impossible story. Yet when Vilayanur Ramachandran of the University of California, San Diego, offered cash to patients with this kind of delusion, promising higher rewards for tasks they couldn’t possibly do – such as clapping or changing a light bulb – and lower rewards for tasks they could, they would always attempt the high pay-off task, as if they genuinely had no idea they would fail.

* * *

What all these conditions have in common is an apparent discrepancy between the patient’s internal knowledge or feelings and the external information they are getting from what they see. In all these cases “confabulation is a knowledge problem”, says Hirstein. Whether it is a lost memory, emotional response or body image, if the knowledge isn’t there, something fills the gap.

Helping to plug that gap may well be a part of the brain called the orbitofrontal cortex, which lies in the frontal lobes behind the eye sockets. The OFC is best known as part of the brain’s reward system, which guides us to do pleasurable things or seek what we need, but Hirstein . . . suggest that the system has an even more basic role. It and other frontal brain regions are busy monitoring all the information generated by our senses, memory and imagination, suppressing what is not needed and sorting out what is real and relevant. According to Morten Kringelbach, a neuroscientist at the University of Oxford who studies pleasure, reward and the role of the OFC, this tracking of ongoing reality allows us to rate everything subjectively to help us work out our priorities and preferences.

* * *

Kringelbach goes even further. He suspects that confabulation is not just something people do when the system goes wrong. We may all do it routinely. Children need little encouragement to make up stories when asked to talk about something they know little about. Adults, too, can be persuaded to confabulate, as [Situationist contributor] Timothy Wilson of the University of Virginia in Charlottesville and his colleague Richard Nisbett have shown. They laid out a display of four identical items of clothing and asked people to pick which they thought was the best quality. It is known that people tend to subconsciously prefer the rightmost object in a sequence if given no other choice criteria, and sure enough about four out of five participants did favour the garment on the right. Yet when asked why they made the choice they did, nobody gave position as a reason. It was always about the fineness of the weave, richer colour or superior texture. This suggests that while we may make our decisions subconsciously, we rationalise them in our consciousness, and the way we do so may be pure fiction, or confabulation.

More recent experiments by philosopher Lars Hall of Lund University in Sweden develop this idea further. People were shown pairs of cards with pictures of faces on them and asked to choose the most attractive. Unbeknown to the subject, the person showing the cards was a magician and routinely swapped the chosen card for the rejected one. The subject was then asked why they picked this face. Often the swap went completely unnoticed, and the subjects came up with elaborate explanations about hair colour, the look of the eyes or the assumed personality of the substituted face. Clearly people routinely confabulate under conditions where they cannot know why they made a particular choice. Might confabulation be as routine in justifying our everyday choices?

* * *

Even when we think we are making rational choices and decisions, this may be illusory too. The intriguing possibility is that we simply do not have access to all of the unconscious information on which we base our decisions, so we create fictions upon which to rationalise them, says Kringelbach. That may well be a good thing, he adds. If we were aware of how we made every choice we would never get anything done – we cannot hold that much information in our consciousness. Wilson backs up this idea with some numbers: he says our senses may take in more than 11 million pieces of information each second, whereas even the most liberal estimates suggest that we are conscious of just 40 of these.

Nevertheless it is an unsettling thought that perhaps all our conscious mind ever does is dream up stories in an attempt to make sense of our world. “The possibility is left open that in the most extreme case all of the people may confabulate all of the time,” says Hall.

. . . . Magicians . . . are masters of exploiting nuances of human perception, attention, and awareness. In light of this, a recent Nature Reviews Neuroscience paper, coauthored by a combination of neuroscientists (Stephen L. Macknik, Susana Martinez-Conde, both at the Barrows Neurological Institute) and magicians (Mac King, James Randi, Apollo Robbins, Teller, John Thompson), describes various ways magicians manipulate our perceptions, and proposes that these methods should inform and aid the neuroscientific study of attention and awareness.
Magicians’ Secrets Revealed

The underlying concept of using quirks in human perception to learn about how the mind works is an old one. Visual, auditory and multisensory illusions, in which people’s perceptions contradict the physical properties of the stimuli, have long been used by psychologists to study the mechanisms of sensory processing. Magicians use such sensory illusions in their tricks, but they also heavily use cognitive illusions, manipulating people’s attention, trains of logic and even memory. Although magicians probably haven’t studied these phenomena with the scientific method—they don’t do controlled experiments—their techniques have been tested over time, perfected by practice and performed under conditions of high scrutiny by skeptical audiences looking to spot the trick.

An example of a visual illusion used by magicians is spoon bending, in which a rigid horizontal spoon appears flexible when shaken up and down at a certain rate. This effect occurs because of how different parts of objects (in this case, the spoon) are represented in the brain. Certain neurons are responsive to the ends/corners of the object, whereas others respond to the bars/edges; the end-responsive neurons respond differently to motion than do the bar-responsive neurons, such that the ends and the center of the spoon seem misaligned when in motion.

Attention can greatly affect what we see—this fact has been demonstrated in psychological studies of inattentional blindness. To misdirect people’s attention and create this effect, magicians have an arsenal of methods ranging from grand gestures (such as releasing a dove in the theater to distract attention), to more subtle techniques (for instance, using social miscues). An example of the latter can be found in the Vanishing Ball Illusion . . . .

At the last toss, the magician does not actually release the ball from his or her hand. Crucially, however, the magician’s gaze follows the trajectory the ball would have made had it been tossed. The magician’s eye and head movement serves as a subtle social cue that (falsely) suggests a trajectory the audience then also expects. A recent study examining what factors produced this effect suggests that the miscuing of the attentional spotlight is the primary factor, and not the motion of the eyes. In fact, the eyes aren’t fooled by this trick—they don’t follow the illusory trajectory! Interestingly, comedy is also an important tool used by magicians to manipulate attention in time. In addition to adding to the entertainment value of the show, bouts of laughter can diffuse attention at critical time points.

* * *

Magic’s Role in Neuroscience

Cognitive neuroscience can explain many magic techniques; this article proposes, however, that neuroscientists should use magicians’ knowledge to inform their research. . . .

More concretely, the use of cognitive illusions—for example, during brain imaging—could serve to identify neural circuits underlying specific cognitive processes. They could also be used to map neural correlates of consciousness (the areas of the brain that are active when we are processing a given aspect of consciousness) by dissociating activity corresponding to processing of actual physical events from the activity corresponding to the conscious processing.