Category: Cognition

One of the hallmarks of Autism Spectrum Disorder (ASD) is an impairment in social cognitive skills. This manifests in individuals with ASD having trouble orienting their attention towards people. Accordingly, they also show deficits in orienting their attention in response to social cues from others, such as eye gaze, head turns and pointing gestures.

Understanding the social cognitive impairments associated with ASD has been challenging because studies set in naturalistic settings often reveal the deficit while computer-based lab experiments don’t.

For example, some naturalistic studies have looked at home movies of infants and found that those later diagnosed with ASD showed less social orienting and were less responsive to cues from others to orient to objects. If their mom was in the room, they would look at her a lot less, and they’d also be less likely to respond when she tried to direct their attention to a toy in the room by looking or pointing at it.

However, people with ASD have been shown to respond to non-naturalistic social cues in the lab. Social orienting has frequently been tested using a variation on Michael Posner’s spatial cueing paradigm. This works as follows:

1. Participants are seated in front of a computer
2. A stimulus – a pair of eyes gazing to either side (or straight ahead) or arrows pointing to either side or neither – appears on the screen
3. Shortly after, the target object appears on one side or the other, either on the side the eyes or arrows indicated or on the opposite side.
4. Participants have to indicate which side the target object appeared on by pressing either a right or left button.
5. Performance on the task is assessed by measuring the amount of time it takes participants to press the button indicating on which side the target appeared. Most participants, including ASD patients, are as quick with the gaze cue (the eyes) as with the arrow cue.

(The left side of the above figure shows a single trial (with “directional eyes”), in which participants first see a fixation cross, then one of four directional/non-directional stimuli, after which the target appears either on the same side indicated by the cue or the opposite side. Participants need to indicate which side a target stimulus appeared on by pushing a button. The right side shows the three other trial types (from top to bottom): neutral arrow, directional arrow, neutral eyes)

Past studies have shown that people orient faster to cued (as in the left side of the above figure) versus noncued locations, known as the facilitation effect. Previous studies using this task have produced inconsistent results, but most of them have shown ASD populations performing comparably to non-ASD populations.
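The facilitation effect described above is just the mean reaction-time difference between validly cued and uncued trials, computed separately for each cue type. A minimal sketch, with entirely made-up reaction times (the trial values below are invented for illustration, not from the study):

```python
# Hypothetical trial data: (cue_type, validly_cued, reaction_time_ms).
# A positive facilitation score means participants were faster when the
# target appeared at the cued location.
trials = [
    ("gaze", True, 310), ("gaze", True, 295), ("gaze", False, 340),
    ("gaze", False, 355), ("arrow", True, 305), ("arrow", True, 300),
    ("arrow", False, 345), ("arrow", False, 350),
]

def mean(xs):
    return sum(xs) / len(xs)

def facilitation(trials, cue):
    # Facilitation = mean RT on uncued trials minus mean RT on cued trials.
    cued = [rt for c, valid, rt in trials if c == cue and valid]
    uncued = [rt for c, valid, rt in trials if c == cue and not valid]
    return mean(uncued) - mean(cued)

for cue in ("gaze", "arrow"):
    print(cue, facilitation(trials, cue))
```

With these invented numbers, both cue types show the same facilitation, which is the pattern the lab studies report for both ASD and control groups.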

In this study, researchers used the above-described cueing task to examine the neural mechanisms underlying social orienting in ASD, with the hope that if there were no behavioral differences, neural activity might reveal that ASD individuals are performing the task differently. Other studies have shown that non-ASD populations treat social and non-social cue stimuli differently. It was hoped that neural activity revealed in this study would shed light on the discrepancies in behavioral results for ASD populations in naturalistic versus lab settings.

Results
In terms of behavior, both the control and the ASD group showed quicker responses for gaze and arrow cues with no between group difference, which is consistent with previous lab studies.

However, neural activation patterns showed significant group differences. The control group showed greater activation for social vs. nonsocial cues in many different brain regions, with gaze (eyeball) cues eliciting increased activity in many frontoparietal areas, supporting the idea that neurotypical brains treat social stimuli differently from non-social stimuli. The ASD group, on the other hand, showed much less difference in neural activation between social and non-social cues. Although these differences are too numerous to cover here, one region of interest, the superior temporal sulcus (STS), stood out. The STS has been shown to be associated with the perception of eye gaze, and other work has suggested the region may be involved in understanding the intentions and mental states of others. In this study, ASD individuals showed decreased STS activity in the gaze cue condition (versus controls). These data suggest that the STS may not be sensitive to the social significance of eye gaze in ASD individuals.

Implications
The authors point out that although ASD individuals don’t seem to rely on the same neural circuitry to perceive social cues such as eye gaze, they have found a way to use the low-level perceptual information available in social cues to adopt a strategy that allows them to discern that gaze direction conveys meaning about the environment. That being said, ASD individuals mostly don’t do this very well in more naturalistic environments. So, although this strategy might work in a scanner with “cartoon” eyes and no environmental distractions, it’s unlikely that ASD individuals could apply it in a naturalistic environment. Alternatively, one could frame these results from the perspective of the ASD individual: given the non-naturalistic environment of the scanner, and the fact that the task demands were very simple and not dependent on social cognitive processing, why should non-ASD individuals treat the gaze vs. arrow stimuli differently? Why not just rely on low-level information and thus expend less cognitive energy? It’s a good example of the automaticity of social cognitive processes. Give humans a set of cartoon eyeballs to look at and they can’t help but process these as distinct from something non-social.

An additional takeaway from this paper is that even when one finds no behavioral differences between groups, there might be some interesting differences in neural activity worth exploring via fMRI or EEG.

The ability to dance to music comes naturally to most members of the human species, and even exists in some species of bird, most famously a cockatoo and YouTube celebrity named Snowball.

But it doesn’t come naturally to everyone.

Researchers from McGill University and the University of Montreal (Phillips-Silver, 2011) have recently published a case study of a student named Mathieu, who not only can’t dance to the beat, but also can’t tell when someone else is dancing asynchronously, although he can dance in time if he is able to watch someone else doing it.

“Mathieu was discovered through a recruitment of subjects
who felt they could not keep the beat in music, such as in clapping
in time at a concert or dancing in a club. Mathieu was the
only clear-cut case among volunteers who reported these problems.
Despite a lifelong love of music and dancing, and musical
training including lessons over several years in various instruments,
voice, dance and choreography, Mathieu complained that
he was unable to find the beat in music. Participation in music
and dance activities, while pleasurable, had been difficult for
him.”

Experimenters put Mathieu and a group of control subjects through a series of tests in which they danced to various types of music. Measurements were gathered by way of a Wii controller (which contains an accelerometer) that was strapped to the trunk of each subject’s body and was able to track and quantify their movements. They also had participants tap their hands to the beat, while not dancing. Finally, participants watched videos of someone else dancing (increasingly out of sync) to some merengue music and were asked to identify whether the person dancing in the videos was in sync with the music or not.
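One simple way to quantify how well someone’s movements lock to the beat is to measure the asynchrony between each beat onset and the nearest movement (or tap) event. A minimal sketch, with invented times; the actual study analyzed accelerometer traces, not the toy tap times used here:

```python
# Hypothetical illustration: beat synchronization as mean absolute
# asynchrony between tap times and beat onsets. All times in seconds
# and made up for illustration.

def mean_abs_asynchrony(taps, beats):
    # For each beat, find the nearest tap and record the offset.
    offsets = [min(abs(t - b) for t in taps) for b in beats]
    return sum(offsets) / len(offsets)

beats = [0.0, 0.5, 1.0, 1.5, 2.0]               # 120 BPM metronome
synced_taps = [0.02, 0.51, 0.98, 1.52, 2.01]    # close to the beat
drifting_taps = [0.10, 0.70, 1.25, 1.85, 2.40]  # progressively off the beat

print(mean_abs_asynchrony(synced_taps, beats))    # small value
print(mean_abs_asynchrony(drifting_taps, beats))  # larger value
```

A synchronized dancer produces small, stable offsets; someone who can’t find the beat produces large or drifting ones, which is the pattern the accelerometer data revealed for Mathieu.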

Mathieu couldn’t tap a beat in time, and the style of music didn’t seem to matter; across numerous styles of music, he couldn’t dance in sync with the groove.*

*He was able to sync himself somewhat to a techno beat, which is basically a glorified metronome but nonetheless slightly more complex.

However, he had no problem locking his movements to the beat of a metronome and could bounce with a consistent tempo without music, while showing normal levels of pitch and tonal perception. He demonstrated normal intelligence, presented no history of neurological or psychiatric disorders and showed no signs of obvious cognitive deficits. It seems Mathieu’s deficit is specific to perceiving the underlying pulse in a piece of music and moving his body to it. In other words, he’s got beat (rhythm) deafness.

Scientists have been aware of the condition for quite a while.

In an Australian Medical Journal from 1890, a surgeon from the Victorian Eye and Ear Hospital in Melbourne described a case of rhythm deafness in a 27-yr. old farmer named W.M.:

(Unlike Mathieu, the farmer’s deficit was much less selective; he also suffered from tone deafness and had severely reduced pain sensitivity)

More recently, Oliver Sacks touched upon rhythm deafness in his book Musicophilia:

Google and PubMed searches find numerous casual references to “rhythm deafness”, but this does seem to be the first well-documented case in the scientific literature. So, if it’s been talked about for so long but documented so infrequently, how rare is it?

Lead author Jessica Phillips-Silver suggested that it might be as rare as tone deafness, which affects about 4 to 5% of the population. If that’s the case, it could be a real challenge locating enough participants to conduct an fMRI study, which would help reveal the neural regions implicated in the condition. But the research team is confident, in part due to ample press coverage of the paper, that they’ll find more subjects.

So, what might an fMRI study reveal about the condition?

A 2005 study (Brown) examining the neural substrates of dance points to one possibility. In this study, subjects lay in a PET scanner and danced a tango with their legs only, both accompanied by music and free form (without music).
Participants in the dancing-to-music condition showed activation patterns suggesting that audio-motor entrainment might be mediated through a connection between subcortical auditory areas and the cerebellum. This would make sense given that one of the primary functions of the cerebellum is to coordinate motor actions, particularly with precision and accurate timing, by receiving input from the sensory system and integrating those incoming signals to execute fine-tuned motor activity.

The authors suggest that the deficit might be primarily perceptual, pointing to the fact that he failed on a task which did not require body movement and that he has no basic motor impairments. They also suggest that basal ganglia connections between auditory and motor cortices could play a role, particularly the dorsal auditory pathway leading to the dorsal premotor cortex. Phillips-Silver and colleagues already have some neuroimaging work underway with Mathieu.

As for future directions, Phillips-Silver said that her group will be looking at exactly what level of musical complexity is required for Mathieu’s beat deafness to emerge. They’re also interested in exploring whether there is any sign of entrainment occurring on a neuronal level, even in the face of the behavioral deficit. In other words, maybe his neurons are dancing to the beat even if he’s not.

As many a former smoker will probably attest, quitting cigarettes ranks high in the hard-to-kick category. I made several unsuccessful attempts before finally kicking the habit after a 10-year pack-a-day run. Ultimately what worked for me was to go cold turkey, but there were other alternatives I might have tried. In a paper from Nature Neuroscience, researchers from the University of Michigan provided participants with interventions involving individually tailored messages* designed to encourage quitting and found that participants’ brain activity while listening to the messages predicted how likely they would be to successfully quit smoking.

*Tailored messages are statements about an individual’s issues and thoughts about quitting smoking, derived from pre-screen interviews. e.g., “You are worried that when angry or frustrated, you may light up”.

Here’s the premise: anti-smoking messages custom-made for an individual can be more effective than generic ones, but only if said individual processes those messages in a self-directed manner. Past research has shown a specific set of neural regions – primarily the mPFC and precuneus/posterior cingulate – to be associated with self-referential thinking. Therefore, researchers hypothesized, activity in these brain regions while processing tailored anti-smoking messages might predict the likelihood of quitting.

The Study
The experiment was carried out over three days with a follow-up visit four months later.

Day 1: 91 participants completed a health assessment, demographic questionnaire and a psychosocial characteristics scale related to quitting smoking. Responses were then used to create smoking cessation messages tailored to each individual.

Day 2: Participants went into the scanner and performed 2 fMRI tasks. The first task had participants listen to anti-smoking messages of three different types: personally tailored anti-smoking, non-tailored anti-smoking and neutral.

Here are some examples of what they heard:

Tailored messages
A concern you have is being tempted to smoke when around other smokers.
Something else that you feel will tempt you after you quit is because of a craving.
You are worried that when angry or frustrated, you may light up.

Untailored messages
Some people are tempted to smoke to control their weight or hunger.
Smokers also light up when they need to concentrate.
Certain moods or feelings, places, and things you do can make you want to smoke.

Neutral messages
Oil was formed from the remains of animals and plants that lived millions of years ago.
Sighted in the Pacific Ocean, the world’s tallest sea wave was 112 feet.
Wind is simple air in motion. It is caused by the uneven heating of the earth’s surface by the sun.

Then, participants completed a self-appraisal task to identify brain regions active during self-relevant thought processes. In this task, participants saw adjectives appear on the screen and had to either rate how much the adjective described them or whether the adjective was positive or negative.

Day 3: Participants completed a web-based smoking cessation program and were instructed to quit smoking. (They were given a supply of nicotine patches to get themselves started)

Results

Behavioral
Experimenters checked in with subjects four months later to see if they were abstaining from smoking. Out of 87 who participated in the smoking cessation program, 45 were not smoking, while 42 were still (or had quit briefly and restarted) smoking.

Subjects were given a surprise memory test for the anti-smoking messages they’d received four months prior and remembered self-relevant, tailored messages best. However, their memory performance was not related to whether they successfully quit smoking.

fMRI
As for the fMRI data, experimenters used a mask of tailored vs. untailored message conditions AND self-appraisal to identify the region common to both processes. This seems like a mild case of double dipping, no? That is, finding a brain region that responds to the condition of interest (in this case, voxels more active in tailored vs. untailored conditions) and then using the same data to test the hypothesis. Ideally, the ROI would be obtained independently of the main task.

A blow-by-blow on the different contrasts of interest:

1. Researchers looked at brain regions more active during tailored vs. untailored messages and found differential activation in the regions below.

2. The localizer task (used to isolate neural areas involved in self-appraisal) had participants process adjectives either by relating them to the self or by judging their affective value. This leaves open an alternative explanation for the categorical contrast: it may not be specific to the self per se, but rather to thinking about people vs. non-people, since only the self condition involves a person at all. A more widely used version of this task has participants process adjectives with regard to the self or an other. As a further control, a third condition is often included in which participants identify whether words are in upper case or lower case. The contrast applied is (self – control) – (other – control). It’s not clear why the researchers chose the task they did, which seems significantly noisier.
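The double-subtraction contrast above can be made concrete with a toy calculation. A minimal sketch using made-up mean activation estimates (arbitrary units) for a single region of interest; the numbers are invented, not from either study:

```python
# Hypothetical mean activation estimates for one ROI under each
# judgment condition (arbitrary units, made up for illustration).
betas = {"self": 1.8, "other": 1.2, "control": 0.4}

def self_specific_contrast(b):
    # (self - control) - (other - control): activity attributable to
    # self-referential processing over and above other-referential
    # processing, each measured relative to the same low-level baseline.
    return (b["self"] - b["control"]) - (b["other"] - b["control"])

print(round(self_specific_contrast(betas), 3))  # 0.6
```

Note that the control term cancels algebraically; its value in practice is that it anchors both judgment conditions to a shared baseline before the group-level statistics are run.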

Here’s the contrast from the present study:

And here’s a contrast from another study (Jenkins 2010) that looked at three different types of self-referential processing.

Although roughly similar, the cortical midline activation in the current study seems to be much more dorsal than that found in Jenkins (2010). Using an ROI derived from this localizer task to correlate neural activity in tailored vs. untailored statements with quitting led to a non-significant result (from supplementary materials). This could explain why the researchers used the composite mask to define the ROI.

3. Again, the primary ROI was defined as a composite of overlapping regions between the self reference task AND the tailored vs. untailored statements task, which was used to compare neural activity with quitting behavior. They found that activity in these regions – which included dmPFC, precuneus and angular gyrus – during tailored smoking cessation messages predicted the likelihood of successfully abstaining from smoking. dmPFC and precuneus activation also individually predicted smoking cessation success, although angular gyrus did not.

Perhaps few findings in the cognitive sciences have received more press in recent years than the discovery by Rizzolatti and colleagues of mirror neurons in macaque monkeys; that is, neurons that preferentially activate both when a monkey performs some action and when observing someone else perform the same action. There is evidence that these neurons exist in humans, although it’s indirect (however, see Keysers 2010). They’ve quite captivated the public’s attention, these crafty little neurons.

The mirror neuron system is thought to help primates, non-human and human, understand what others are doing by simulating the motor plan of an observed action and also allowing for prediction of the most likely outcome of an observed action. In other words, mirror neurons are sensitive both to actions and outcomes, and to some extent, inferring the why behind the what. Many have suggested that they play a significant role in comprehending mental states and empathic processes. But it’s in regards to these latter claims where the evidence is not as clear.

So, how does the brain intuit others’ inherently unobservable mental states in the absence of biological action? Much of the research evidence points to the mentalizing system, also known as the theory-of-mind network, as the neural network tasked to the job (see meta-analysis by Van Overwalle and Baetens, 2009). Anatomically speaking, these networks are distinct, with the mirror neurons located primarily in the inferior parietal cortex, superior temporal sulcus and the prefrontal cortex, while the mentalizing system constitutes a distinct set of brain regions that lie along the cortical midline and in the temporal lobes, including the mPFC, TPJ, temporal poles, PCC and posterior STS.

One of the big challenges in this area of research is in designing tasks that are able to effectively disentangle processing of motor action from mentalizing. This is quite a challenge because it’s difficult to know what kind of mental process participants are applying to any given set of social stimuli. Do participants engage in higher-order abstract mentalizing automatically, and even when the stimuli might not necessarily demand it? How can we know what mental process subjects are engaging in? In other words, how might one capture the distinction between perceiving what others are doing vs. obtaining a more abstract representation of why they might be doing it?

UCLA’s Bob Spunt and colleagues (2011) designed a study that would attempt to do just that. They had participants observe short video clips of a human performing an action and directed the participants, in the scanner, to covertly describe each video clip in terms of (1) what an actor was doing, (2) why he was doing it, (3) how he was doing it or (4) to just passively view the video. They were to start the process of covert description once the video started playing, begin their description with the word “he” (e.g. he is reading) and to press a button once they were done.

(Thanks to the researchers for providing the video)

In the above example, participants might have covertly described that the man is reading (WHAT), that he wants to learn or is bored (WHY), or that he is flipping pages or gripping the book (HOW).

This had the effect of creating three levels of mentalizing “depth” while holding the action component constant. If the mirror neuron network were involved in mentalizing, one would expect activation in the neural regions previously suggested to contain mirror neurons to increase along with participants’ presumed mentalizing about the actor.

Results
In support of the position that mirror neurons don’t play a significant role in mentalizing, the researchers found no increase in mirror neuron network activity in response to increases in mentalizing. But they did find increased activation in brain regions associated with mentalizing, including dorsal and ventral medial pFC, posterior cingulate cortex, and the temporal poles.

Conclusion
The study does provide another piece of support to the position that although the mirror neuron system might be necessary in understanding actions of the body, it’s not sufficient to explain the cognitive processes required to infer unobservable mental states.

An essay by June Carbone considers the role of neuroscience in determining punishment for adolescents who commit crimes such as murder. Focused on a recent US Supreme Court decision on the juvenile death penalty, the piece points out some of the limitations of applying neuroscientific findings to issues of jurisprudence.

Amongst the forests of trees cut down in support of self-help books on dating, some not inconsequential percentage of lumber has probably been exclusively dedicated to variants on the theme that one must “play hard to get” to find love. If you’ve never heard this expression, I would be curious to know what solar system you’ve just arrived from. And for you, my alien friend, I say welcome to our planet, and please allow me to explain. As it’s been passed down through the ages, the story goes that if a person wants to attract the interest of another toward whom they have romantic inclinations, they should be aloof and slightly stand-offish, so as to gain the attention and interest of their beloved. Popular self-help manifestos such as “Why men love bitches” are based on the idea of making oneself more attractive by decreasing one’s availability (to put it mildly). Going back several hundred years, even Juliet, from Shakespeare’s most famous play, knew as much when she told Romeo that “…if you think it’s too easy and quick to win my heart, I’ll frown and play hard-to-get, as long as that will make you try to win me.” I remember hearing some variant on this theme (“Don’t be too eager”) from older boys when I was a scrawny young fellow just becoming interested in girls for the first time.

Erin Whitchurch and colleagues from Harvard University were interested in how this seeming truism (the Uncertainty Principle) conflicted with another observation from the social psychological literature, which is that people tend to like those who like them back (known as the Reciprocity Principle). Interested in testing these principles against each other, they designed a study whereby 47 women viewed Facebook profiles of a set of men who, they were told, had previously viewed their profiles and rated how much “they would get along with each woman if they got to know her better.” A subject was told that a given man either liked her a lot, liked her about average, or that it was uncertain whether he liked her or not. Then she would rate each man on a number of different dimensions including how much she might like him as “a potential boyfriend” and how much she would be interested in “hooking up” with him.

The results showed that women liked men who liked them a lot more than men who liked them a little, supporting the Reciprocity Principle; that is, we like those who like us back. But the study also supported the Uncertainty Principle (that we like those who we are uncertain about) by showing that women liked men about whose feelings they were uncertain more than men who said they liked them a lot. They also reported thinking a lot more about these men. The authors mention that although this advice has often appeared in the popular press, social psychological research has never confirmed it. The authors also point out important constraints and limitations that bear on the ecological validity of the effect. Among these are the fact that participants didn’t know anything about the men and it’s not clear that this effect would hold after meeting someone and/or beginning a relationship. Perhaps, as they suggest, this finding would be most applicable in the context of online dating, in which people don’t know very much about the other person initially. Also, only females participated and there may be gender differences.

One important point to reiterate is that participants mentioned thinking about the men in the “uncertain” condition much more than in the other conditions. Although I’m not overly familiar with research on dating, romantic attraction, relationships, etc., the reason I was inspired to write about this study is because I’m quite interested in the effect of uncertainty on cognitive and attentional processes and decision making, topics that a handful of recent studies have addressed. One 2010 study from researchers at Cornell found that people were more distracted by hearing one person on a cell phone nearby vs. two people having a conversation, the suggestion being that it was the unpredictable nature of the one-sided conversation that led to participants’ increased distraction. Zachary Tormala at Stanford has performed studies in which he has found that expert advice expressed with low certainty can be more persuasive than that expressed with high certainty. Tormala says that when an expert, say a restaurant critic, is unsure of themselves in a review of a restaurant, this is surprising to people. And “…surprise increases readers’ interest in and involvement with the review, which is essentially a persuasive message, and this promotes persuasion,” says Tormala. “Experts … get more attention and can have more impact when they express uncertainty.” In standard human fear conditioning experiments, the strongest fear conditioning is generally achieved when you shock participants only about a third of the time. That is, the strongest fear responses are generated when participants are maximally uncertain as to when they’re going to be shocked. That’s when you’ve really got their attention.

So, over a variety of different domains and outcomes, uncertainty is the variable common to all of the above situations. From an adaptive perspective it makes sense that we would direct extra attention to unsolved vs. solved problems. As organisms whose very survival is dependent upon our ability to learn about the world around us, we often have no choice but to devote an inordinate amount of attentional energy to the unknown. But from a mechanistic point of view, what motivates this orientation?

There is a rather large scientific literature discussing the role of reward signals in learning and uncertainty. And while it’s well-established that learning is largely dependent on reward systems, primarily the dopamine system, it’s also been shown that uncertainty alone is subserved by the same system. Going back to the study at hand, it doesn’t seem outrageous to imagine that participants conflated the reward signal associated with uncertainty about being liked with the reward signal associated with liking someone.
This wouldn’t be surprising as the same dopamine neurons that preferentially report subjectively pleasant events also seem to signal attention-inducing ones. Dopamine neurons fire much more strongly to unexpected rewards and they may also fire strongly when presented even with the prospect of such a possibility, a kind of second-order reward effect. So, the effect could be framed as a kind of neural parlor trick, a technique whereby one can hijack the reward system of another person, causing that person to experience feelings that they then misinterpret.

I’m concerned with how science is presented to the public and how misunderstood scientific findings often become cultural “memes” that permeate the culture and plant incorrect ideas about human nature in people’s heads. Whitchurch’s study has already generated just these kinds of sensationalistic and over-simplified headlines:

It’s important to keep in mind that the more rewarding your dream girl or guy finds uncertainty, the more likely it is that this “technique” will work on them. And there is evidence that individuals vary in how sensitive they are to uncertainty, so this may not work on everyone. Furthermore, if their attraction to you is driven largely by uncertainty, then what’s going to happen once the uncertainty is no longer there?

(a page from an imagined self-help book):

“You shouldn’t play hard to get. Why? Because, as Harvard researchers have pointed out, it will attract (wo)men to you, but it’s an illusion. They might not be attracted to you because they like you, but because you’ve increased their attention to you through manipulating uncertainty as to how you feel about them! While this might work for as long as you maintain that uncertainty, remember this is not the same thing as someone liking or loving you. While human beings might be wired to pursue certain situations and people because of this, if you’re a healthy, well-adjusted person, you don’t want that kind of person in your life, because they’ll always be interested in chasing the unknown and once they’ve “figured you out,” they’ll be on to the next mystery.”

Imagine the following scenario: you’re sharing a table with a stranger at a coffee shop. You’ve exchanged a few pleasantries with the person but not much else. You need to go to the bathroom and would like to leave your laptop and bag at the table while you’re gone. Can you trust your new, and perhaps only temporary, acquaintance not to walk off with your stuff? Most people would base such a decision on “a gut feeling.” But upon what basis? Back in 2006, researchers from Rice University examined a factor that might play a role in whether you might feel comfortable sashaying to the john sans laptop: the person’s attractiveness. Basically, is she/he hot or not?

Past research has shown that people exhibit considerable levels of trust for strangers and that this trust is often made via a snap judgment based on minimal information. Psychologists at Rice were curious to know (1) if others’ attractiveness might serve as a basis for these snap judgments, (2) if these judgments were accurate and (3) if attractive people are the beneficiaries of others’ heightened trust for them.

Upon arriving for their experimental session, participants posed for four photos, of which they picked one to be used in a series of trust games. The trust game worked as follows: A participant was given $10. Seated in front of a computer, he was then shown pictures of other students, one at a time, to whom he was to give part, or all, of the $10. The recipient would receive triple whatever the participant chose to give, and would then return as much as he wanted back to the participant. For example, if the participant gave $10, then the recipient would get $30. If he wanted to be fair, the recipient could give $15 back to the participant, leaving them both with $15 (the best and most equitable solution). Conversely, the recipient could return nothing to the participant, leaving him with $0. The amount given by the participant really depends on how much he trusts the recipient to return an equitable amount to him. Trust can then be measured by the amount the participant chooses to give to the recipient. After playing the trust games, participants rated all of the photos of other students on a number of different traits, including attractiveness.
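The payoff arithmetic above can be sketched in a few lines. A minimal illustration: the $10 endowment and the tripling of transfers come from the paragraph; the example transfers are made up.

```python
# Trust game payoffs: the participant sends part of a $10 endowment,
# the recipient receives triple the transfer and decides how much to
# return. Numbers per the description above; transfers are examples.
ENDOWMENT = 10
MULTIPLIER = 3

def payoffs(sent, returned):
    """Final (participant, recipient) payoffs for a single game."""
    pot = sent * MULTIPLIER            # recipient receives triple the transfer
    assert 0 <= sent <= ENDOWMENT
    assert 0 <= returned <= pot
    participant = ENDOWMENT - sent + returned
    recipient = pot - returned
    return participant, recipient

print(payoffs(10, 15))  # (15, 15): the equitable split from the text
print(payoffs(10, 0))   # (0, 30): recipient keeps everything
```

The participant’s transfer is the trust measure; the recipient’s return is the trustworthiness measure, which is what the attractiveness ratings were later compared against.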

So, did participants trust good-looking people more? The beauties made out, receiving more from participants, on average, than their less good-looking peers. But were participants correct to trust good-looking people more? Yes, they were. Attractive people tended to reciprocate with higher amounts of money compared to those less attractive. But there was an interesting twist. The more attractive the participant, the higher the recipients’ expectations were, such that if they didn’t receive what they expected from an “attractive” participant, they would enforce a “beauty penalty” by returning less.

These results aren’t particularly surprising, given similar research showing the multitude of ways in which attractiveness can positively modulate people’s perception of others. Given the above findings, one might be prudent to surmise that the more physically attractive candidate in a political race, all else being equal, should be more likely to win than lose. However, experimental results are mixed. On the one hand, Budesheim et al. (1994) found that physical attractiveness influenced candidate evaluation despite the provision of information about the candidate’s policy stances and personality characteristics. But, on the other hand, Rosenberg et al. (1991) found no relationship between physical attractiveness and beliefs that a candidate would make a reasonable political leader. Similarly, Sigelman and colleagues (1987) found no relationship between physical attractiveness and vote choice. And Riggle et al. (1992) found that physical attractiveness had an effect when no other candidate information was present, but failed to have an effect when policy information about the candidate was provided.