Month: July 2010

People love pictures of brains. And, as a result, companies have been trying hard to find ways to incorporate MRI data into their sales pitches and business plans. One such company, Johnson O’Connor Research Foundation, has jumped on the bandwagon in a big way, having recently added a brain scan to the standard occupational aptitude test they offer to job seekers (they charge around $700 for the assessment):

The Johnson O’Connor Research Foundation is a nonprofit scientific research and educational organization with two primary commitments: to study human abilities and to provide people with a knowledge of their aptitudes that will help them in making decisions about school and work. Since 1922, hundreds of thousands of people have used our aptitude testing service to learn more about themselves and to derive more satisfaction from their lives.

See the Neurocritic for a spot-on criticism of the “study” upon which their new marketing pitch is based.

Bad neuroscience seems to be appearing in the public media space with increasing frequency. From misleading articles in the mainstream press to the poorly conducted studies that often form the basis for one misconceived business plan or another, fMRI research runs the danger of being victimized by its own success. Part of the problem stems from the general public’s inability to properly interpret neuroscientific data in the context of human psychology studies. Not that they should be blamed. Neuropsychology is a complicated discipline, and there isn’t any reason to believe that someone lacking an understanding of the basic principles of neural science, or psychology, or both, should be able to parse such data correctly. The problem, however, is that the average reader isn’t neutral toward such data, but tends to be more satisfied by psychological explanations that include neuroscientific data, regardless of whether that data adds value to the explanation or not. The mere mention of something vaguely neuroscientific seems to increase the average reader’s satisfaction with a psychological finding, legitimizing it. Even worse, it’s the bad studies that benefit the most from this so-called “neurophilia,” the love of brain pictures. That’s according to a study from a research team led by Jeremy Gray at Yale University.

Participants read a series of summaries of psychological findings drawn from one of four categories: either a good or a bad explanation, with or without a meaningless reference to neuroscience. After reading each explanation, participants rated how satisfying they found it. The experiment was run on three different groups: random undergraduates, undergrads who had taken an intermediate-level cognitive neuroscience course, and a slightly older group who had either already earned PhDs in neuroscience or were in, or about to enter, graduate neuroscience programs.

The first group of regular undergrads was able to distinguish between good and bad explanations without neuroscience, but was much more satisfied by bad explanations that included a reference to neural data (the y-axis on the following figures stands for self-rated satisfaction):

Nor were the cognitive neuroscience students any more discerning. If anything, they were a bit worse than the other undergrads, in that they found good explanations with meaningless neuroscience more satisfying than good ones without:

But the neuroscience PhDs showed the benefits of their training. Not only did they fail to find bad explanations more satisfying when meaningless neuroscience was added, they found good explanations with meaningless neuroscience to be less satisfying.

As to why non-experts might have been fooled? The authors suggest that non-experts could be falling prey to the “seductive details effect,” whereby “related but logically irrelevant details presented as part of an argument, tend to make it more difficult for subjects to encode and later recall the main argument of a text.” In other words, it might not be the neuroscience per se that leads to the increased satisfaction, but some more general property of the neuroscience information. As to what that property might be, it could be that people are biased towards arguments that possess a reductionist structure. That is, in science, “higher level” arguments that refer to macroscopic phenomena often rest on “lower level” explanations that invoke microscopic detail. Neuroscientific explanations fit the bill here, seeming to provide hard, low-level data in support of higher-level behavioral phenomena. The mere mention of lower-level data – albeit meaningless data – might have made it seem as if the “bad” higher-level explanation was connected to some “larger explanatory system” and was therefore more valid or meaningful. It could simply be that bad explanations – those involving neuroscience or otherwise – are bolstered by the allure of complex, multilevel explanatory structures. Or it could be that people are easily seduced by fancy jargon like “ventral medial prefrontal connectivity” and “NMDA-type glutamate receptor regions.”

Whatever the proximal mechanisms of the “neurophilia” effect, the public infatuation with all things neural probably won’t be fading any time soon. As such, it’s imperative that scientists, journalists, and others who communicate with the public about brain science be on the lookout for bad neuroscience (and for good neuroscience presented incorrectly), and be quick to issue correctives when it appears.
Imagine the following scenario: you’re sharing a table with a stranger at a coffee shop. You’ve exchanged a few pleasantries with the person but not much else. You need to go to the bathroom and would like to leave your laptop and bag at the table while you’re gone. Can you trust your new, and perhaps only temporary, acquaintance not to walk off with your stuff? Most people would base such a decision on “a gut feeling.” But on what basis? Back in 2006, researchers from Rice University examined a factor that might play a role in whether you might feel comfortable sashaying to the john sans laptop: the person’s attractiveness. Basically, is she/he hot or not?

Past research has shown that people exhibit considerable levels of trust toward strangers, and that this trust is often extended via snap judgments based on minimal information. Psychologists at Rice were curious to know (1) whether others’ attractiveness might serve as a basis for these snap judgments, (2) whether these judgments were accurate, and (3) whether attractive people are the beneficiaries of others’ heightened trust.

Upon arriving for their experimental session, participants posed for four photos, of which they picked one to be used in a series of trust games. The trust game worked as follows: A participant was given $10. Seated in front of a computer, he was then shown pictures of other students, one at a time, to whom he was to give part, or all, of the $10. The recipient would receive triple whatever the participant chose to give, and would then return as much as he wanted back to the participant. For example, if the participant gave $10, then the recipient would get $30. If he wanted to be fair, the recipient could give $15 back to the participant, leaving them both with $15 (the best and most equitable solution). Conversely, the recipient could return nothing to the participant, leaving him with $0. The amount given by the participant thus depends on how much he trusts the recipient to return an equitable amount. Trust can then be measured by the amount the participant chooses to give to the recipient. After playing the trust games, participants rated all of the photos of other students on a number of different traits, including attractiveness.
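The payoff arithmetic of the trust game can be sketched in a few lines of code. This is just an illustration of the dollar amounts described above (the $10 endowment, the tripling rule, and the two example outcomes), not the study’s actual procedure:

```python
def trust_game(endowment, amount_sent, amount_returned):
    """One round of the trust game: the participant sends part of his
    endowment, the recipient receives triple that amount and returns
    whatever portion he chooses. Returns (participant_total, recipient_total)."""
    assert 0 <= amount_sent <= endowment, "can't send more than the endowment"
    tripled = 3 * amount_sent  # the recipient gets triple the gift
    assert 0 <= amount_returned <= tripled, "can't return more than was received"
    participant_total = endowment - amount_sent + amount_returned
    recipient_total = tripled - amount_returned
    return participant_total, recipient_total

# The equitable case from the text: send all $10, recipient returns $15.
print(trust_game(10, 10, 15))  # -> (15, 15)

# The worst case for the participant: recipient keeps everything.
print(trust_game(10, 10, 0))   # -> (0, 30)
```

Note that sending everything and splitting the proceeds evenly leaves both players better off than the participant keeping his $10, which is why the amount sent is a clean measure of trust.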

So, did participants trust good looking people more? The beauties made out, receiving more from participants, on average, than their less good looking peers. But were participants correct to trust good looking people more? Yes, they were. Attractive people tended to reciprocate with higher amounts of money compared to less attractive ones. But there was an interesting twist. The more attractive the participant, the higher the recipients’ expectations were, such that if they didn’t receive what they expected from an “attractive” participant, they would enforce a “beauty penalty” by returning less.

These results aren’t particularly surprising, given similar research showing the multitude of ways in which attractiveness can positively modulate people’s perception of others. Given the above findings, one might reasonably surmise that the more physically attractive candidate in a political race, all else being equal, should be more likely to win than to lose. However, experimental results are mixed. On the one hand, Budesheim et al. (1994) found that physical attractiveness influenced candidate evaluation despite the provision of information about the candidate’s policy stances and personality characteristics. But, on the other hand, Rosenberg et al. (1991) found no relationship between physical attractiveness and beliefs that a candidate would make a reasonable political leader. Similarly, Sigelman and colleagues (1987) found no relationship between physical attractiveness and vote choice. And Riggle et al. (1992) found that physical attractiveness had an effect when no other candidate information was present, but failed to have an effect when policy information about the candidate was provided.