JREF Swift Blog

In August, I reported the details of a class that I was planning to teach at Arizona State University, examining psychology through the lens of magic and illusion (see The Culture of Psychology and Magic). I am pleased to report that the class was a major success. Fifty upper-division students from a variety of majors spent the semester immersing themselves in the psychology of magic. They were treated to guest lectures by many of the movers and shakers in the emerging “science of magic” field, including James “The Amaz!ng” Randi himself and D.J. Grothe, president of the JREF. Please enjoy this brief montage of some of the goings-on from the previous semester.

The course was such a rousing success that I have been asked to mount a reprise this semester. Once again, 50 students have enrolled to experience this unique window on brain and behavior. While the course wasn’t specifically designed to promote skepticism, its content almost automatically encouraged it by highlighting the fallibility of human perceptual and memory systems. The course fosters an environment where students can easily discover skepticism for themselves. As students learn about the mechanisms of perception, they also stumble upon the inescapable realization that our memories and biases prevent us from interpreting our senses objectively, creating a perfect storm for superstition and magical thinking to develop. Magicians exploit the same set of biases that allow superstition to develop.

One of the most important themes that emerged over the course of the semester was how our awareness of the world results from the interplay between knowledge derived from prior experiences (top-down information) and current sensory information sampled from our surroundings (bottom-up information). This theme first emerged in a discussion of the neural mechanisms of perception. Effective perception correctly matches a pattern of sensory stimulation to an internal representation of the world that is stored in memory. Illusions (especially magical illusions) often occur when too much weight is given to expectations relative to bottom-up sensory processes.
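One common way to make this weighting concrete (my own gloss, not part of the course material) is the precision-weighted averaging used in Bayesian models of perception: the percept is a compromise between a prior expectation and the current sensory sample, with each weighted by how reliable it is. A sketch, with illustrative numbers:

```python
# A minimal sketch of precision-weighted cue combination (an assumption
# borrowed from Bayesian models of perception, not from the post itself).
# With a Gaussian prior N(mu_prior, var_prior) and a sensory sample
# N(x_sense, var_sense), the percept (posterior mean) is a weighted
# average; a confident (low-variance) prior pulls the percept toward
# expectation -- the "too much weight on top-down processing" case.

def combine(mu_prior, var_prior, x_sense, var_sense):
    # Weight each source by its precision (inverse variance).
    w_prior = (1 / var_prior) / (1 / var_prior + 1 / var_sense)
    return w_prior * mu_prior + (1 - w_prior) * x_sense

# Confident prior, noisy senses: the percept hugs the expectation (≈ 9.0).
print(combine(10.0, 1.0, 0.0, 9.0))
# Weak prior, reliable senses: the sensory sample dominates (≈ 1.0).
print(combine(10.0, 9.0, 0.0, 1.0))
```

On this view, a magical illusion is the first case: the magician builds a strong, low-variance expectation, so sparse or ambiguous sensory input gets resolved in the expectation’s favor.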

As I noted in the video above, this conceptualization of perception fits with Richard Gregory’s (1980) analogy of perception to hypothesis testing. If too much weight is given to top-down processes in perception, an individual requires very little sensory information to come to a perceptual conclusion (i.e., to accept a perceptual hypothesis as correct), which is apt to be inaccurate. This is akin to increasing the chances of a false positive in statistical analyses: detecting an effect that isn’t really there. Alternatively, if very little weight is given to top-down processes, an individual will be more aware of the ambiguity inherent in the sensory stream and will require far more sensory information to validate a perceptual hypothesis. In statistical terms, this strategy would increase the odds of a false negative (i.e., failing to detect a real signal). Researchers must use agreed-upon statistical techniques to find a “sweet spot” that balances these odds, but for the human brain operating in the real world, striking an even balance may not be the most adaptive solution. In fact, the brain seems to have settled on something akin to Pascal’s Wager when it comes to pattern detection. The benefits of detecting a real but weak signal are greater than the costs associated with detecting an illusory signal, so we tend to make inferences based on very little evidence.
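The wager can be spelled out as a toy expected-cost calculation (my illustration, with made-up numbers): when missing a real signal costs far more than a false alarm, a hair-trigger detector beats a cautious one even though it cries wolf constantly.

```python
# A toy expected-cost comparison of a "liberal" vs. a "conservative"
# detector. All numbers are illustrative assumptions, not data.

def expected_cost(p_signal, hit_rate, false_alarm_rate,
                  miss_cost, false_alarm_cost):
    # Expected cost = P(miss) * cost of a miss + P(false alarm) * its cost.
    p_miss = p_signal * (1 - hit_rate)
    p_fa = (1 - p_signal) * false_alarm_rate
    return p_miss * miss_cost + p_fa * false_alarm_cost

P_SIGNAL = 0.1      # real signals are rare
MISS_COST = 100.0   # e.g., failing to notice a predator in the grass
FA_COST = 1.0       # e.g., needlessly startling at a rustle of wind

# Hair-trigger detector: nearly always notices, but often cries wolf.
liberal = expected_cost(P_SIGNAL, hit_rate=0.95, false_alarm_rate=0.50,
                        miss_cost=MISS_COST, false_alarm_cost=FA_COST)
# Cautious detector: rarely fooled, but misses half the real signals.
conservative = expected_cost(P_SIGNAL, hit_rate=0.50, false_alarm_rate=0.05,
                             miss_cost=MISS_COST, false_alarm_cost=FA_COST)

print(liberal, conservative)  # 0.95 vs. 5.045 -- the hair trigger wins
```

With these (hypothetical) payoffs, the liberal detector’s many false alarms are cheap insurance against rare but catastrophic misses, which is exactly the asymmetry that leaves us prone to seeing patterns that aren’t there.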

An interesting brain imaging study meant to highlight the mechanisms of top-down perceptual processing was carried out by Dolan et al. (1997) using degraded, ambiguous images like the now-famous Dalmatian demonstration (Time Inc., 1965). In images of this kind, it’s initially quite difficult to detect the signal within the noise. In essence, no perceptual hypothesis is able to win out. However, once the signal has been detected, there’s no going back; memory intervenes, driving later perception, and the viewer can no longer experience the absence of a perceptual hypothesis for the image. This quality of the images allowed Dolan et al. to compare neural activity in the absence of successful perception (composed mostly of bottom-up signals associated with sensory processing) to neural activity after the object had been detected. The change in activity would point to the neural structures that contribute to top-down processing. They found that brain regions implicated in categorical object perception interacted with regions associated with visual imagery to facilitate the disambiguation of the bottom-up signal.

This isn’t entirely surprising. However, what would happen if there were no signal in the noise to begin with? Might memory still intervene in perception, leading to the detection of illusory forms? And are there situations that encourage this sort of heavily weighted top-down processing? Indeed, research shows that our relative weighting of top-down and bottom-up processes can differ across individuals and change as a consequence of environmental factors. As you might expect, culture can be a substantial determinant of how ambiguous sensory information is resolved. For example, whereas this person is likely to see the face of Jesus in her kitchen cabinet, I’m more apt to see the face of the Creature from the Black Lagoon.

Perhaps more interestingly, Whitson and Galinsky (2008) showed that people in situations where they feel as though they lack control are more apt to perceive illusory visual figures in noise. The finding isn’t limited to visual perception, however. The same holds true for pattern recognition processes in general, including those used to infer causal relationships between events. A perceived lack of control breeds superstitious thinking and promotes the detection of illusory patterns, including those underlying the acceptance of conspiracy theories and “psychic” readings. Thus, as it turns out, people are most susceptible to the wiles of a pseudo-psychic at precisely the times they’re most apt to visit one: when an event has occurred that reduces their feeling of control over their surroundings (such as the unexpected death of a loved one). In the case of many superstitions, the behaviors associated with them (e.g., “knock on wood”) can renew the feeling of control in the absence of real control.

Discussion of the interplay of top-down and bottom-up mechanisms re-emerged later in the semester when we discussed the reconstructive nature of memory. Recently, Simons and Chabris (2011) polled the American public on their beliefs about how memory works. Almost two-thirds of their respondents expressed a belief that memory acts like a video camera, accurately recording our experiences in the world. However, ample evidence suggests that the formation and recollection of memories is highly influenced by prior experience and often fails to accurately reflect reality. Magicians count on this fact. Magical performances regularly contain language that encourages audience members to encode faulty memories of what they have experienced. Magic theorist Dariel Fitzkee (1945) believed that magicians should never let their audiences interpret sensory information without guidance, stating, “Convincingly interpreting, to the spectator, what the senses bring to him in such a way that the magician’s objectives are accomplished, is the true skill of the skilled magician” (p. 33).

Perhaps my favorite example of how memories can be manipulated by seemingly irrelevant information came from an early experiment carried out by Carmichael, Hogan, and Walter (1932) wherein participants were asked to memorize a series of ambiguous line drawings. The researchers found that the mere labeling of each line drawing caused memories for each image to be distorted in a way that resonated more strongly with the label applied to the image. That is, when participants were later asked to reproduce the drawings from memory, the subsequent drawings looked more like the items they were labeled as than the original images (see Figure 1).

While mere suggestion can shape the way a memory is encoded, it can also distort the way memories are retrieved. The influence of external cues on memory retrieval is exemplified in a famous experiment on eyewitness memory carried out by Elizabeth Loftus and John Palmer (1974). Participants in their study watched a video of two cars colliding. Afterwards, they were asked to estimate how fast one car was going when it hit the other. However, the verb contained in the probe question changed across conditions. Some participants were asked how fast the car was going when it “contacted” the other car. Others were asked how fast it was going when it “bumped” the other car, “hit” the other car, “collided” with the other car, or “smashed” the other car. Each successive verb implies greater damage in the accident. Amazingly, as the verbs became more violent, participants’ estimates of the car’s speed increased, suggesting that they were reconstructing their memory for the event and that the question itself shaped this process of reconstruction.

Thus, the brain neither processes visual stimulation verbatim like a video camera nor stores perceptual information in an unalterable form like video tape. As Sahlins (1985) and others have said, “There is no such thing as an immaculate perception” (p. 146). These processes are malleable and apt to err. We see what we expect to see and remember that which fits with our conception of the world. My hope is that an increased awareness of these cracks in the system will lead students to respond with skepticism when confronted with bold claims.