Friday, 26 March 2010

I recently had to write a mini essay on how the mainstream press might misrepresent data from neuroimaging studies, such as functional magnetic resonance imaging (fMRI). I thought that, given the nature of the question, it might be a good thing to put up on my blog, so here it is, with a few tweaks and extra explanations of some of the terminology thrown in along the way.

How might fMRI data be misrepresented in news articles targeted to the general public?

Like many complex disciplines, cognitive neuroscience is plagued by misrepresentation in the media. Most journalists lack the skills, or the integrity, to identify reputable sources of research, and there is a tendency to sensationalise and exaggerate the significance of findings, falsely presenting them as a series of epoch-defining breakthroughs, a fundamental misunderstanding of the cumulative nature of scientific advancement. Additionally, the mainstream media reports research in isolation, detached from the context of any theoretical framework. The journalist may cite a few key comments from the author, falsely presenting scientists as infallible authority figures. Often, in an attempt to provide balance, a dissenting voice from an opposing scholar is also presented, making science appear to the public as a series of contradictory, diametrically opposed irrelevancies.

These are generic flaws of science journalism as a whole, but cognitive neuroscience is specifically vulnerable to misrepresentation. This is because it is a field in which the public has a great interest, but very little knowledge. This enthusiasm for neuroimaging is understandable, as cognitive neuroscience examines intrinsically human topics, such as consciousness, personality, and emotion. Because of the perceived impenetrability of the brain in the public consciousness, there is a tacit belief that this research is utterly incomprehensible, leading to ready acceptance of press reports without critical appraisal. For example, research has shown that presenting an image of a brain in an article makes the scientific credibility of the research appear greater to the reader (McCabe & Castel, 2008). A similar effect has been demonstrated when neuroscientific terminology is inserted into an article, even when it bears little relevance to the discussion (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008). Clearly, this mystification of brain function amongst the public is open to abuse, deliberate or unintentional.

Reports of fMRI in the press tend to be overly optimistic about its potential, fail to explain its limitations, and are not sufficiently critical of the methodology (Racine, Bar-Ilan, & Illes, 2005). This problem is exacerbated by the contention that much fMRI data has already been distorted and exaggerated by the researchers themselves, knowingly or otherwise (Vul, Harris, Winkielman, & Pashler, 2009). This means that the general public, who are untrained in how to critically appraise neuroscientific research, are often presented with news reports which bear little resemblance to the true results.

(At this point, I will interject to remind you about the difference between forward and reverse inferences. A forward inference is when the researcher plots in advance what brain activation they expect to see when a person does a given task in the scanner. This might be as simple as "the participant will do a language task, so we expect to see the left posterior section of the superior temporal gyrus activated, because this is where we process and understand language". This kind of inference is not perfect, but it is better than the alternative, because we are making and testing a prediction.

The alternative, the reverse inference, is severely frowned upon in cognitive neuroscience. Imagine we did the experiment I just described, and as well as activation in the superior temporal gyrus we also saw activation in several other areas we were not expecting. We could not then legitimately speculate on why those other areas were activated, because brain areas often have more than one function, so this kind of guesswork is considered very bad science indeed. With that little aside out of the way, you should all be able to follow my next point!)

Additionally, the fallacy of reverse inference, avoided by credible neuroscientists, still appears viable to laypeople, especially given the popular myth that we humans only use 10% of our brain (this is not true - every part of the brain has a known function). This myth masks the complex and multipurpose nature of brain function, supporting the notion that brain activity is easily attributable to specific cognitive functions.

Racine, Bar-Ilan, and Illes (2005) examined the portrayal of fMRI research in the media, and identified three key trends, “‘neuro-realism’, ‘neuro-essentialism’ and ‘neuro-policy’” (p. 2). Neuro-realism, they claim, is the phenomenon whereby subjective findings falsely appear as objective fact when viewed within the context of a neuroimaging study. The second, neuro-essentialism, describes the tendency to attribute a self or personality to the brain itself, almost to the point where the brain is depicted as being self-aware, absolving the individual of any degree of control. Finally, neuro-policy is the politicisation of neuroscientific research to fit and reinforce a social or political agenda. Taken as a whole, these trends misrepresent the findings and usage of fMRI in cognitive neuroscience, ignoring its true purpose as a scientific tool to devise and test hypotheses, and to detail the true workings of the mind and brain.

Tuesday, 23 March 2010

When I said after lectures finish it was just the research project to work on, I neglected to mention:

A 1,500 word critical analysis of a neuroimaging paper due on Friday.
A 1,500 word essay due on April 9th.
A 2,000 - 3,000 word essay also due on April 9th.
A 2,000 - 3,000 word essay due on April 23rd.
Another 2,000 - 3,000 word essay due on May 7th.

Tuesday, 16 March 2010

Somewhat unbelievably, we are now in the last 2 weeks of formal lectures, although as usual there is still an array of optional and supplementary talks going on at the various affiliated institutions. However, the taught aspect of the MSc is now all but over. This leaves only the prospect of the research project looming, lumbering into sight like some gigantic beast from a 1950s B-movie, pulverising anything that dare get in its way.

Tomorrow I meet with my supervisor/collaborator to really get the process started, and we will draw up exactly what will happen, when, and who will be responsible for what.

A few weeks ago I posted here a simplified explanation of our research proposal. I will now attempt to explain, in as accessible a form as possible, what this research has got to do with the real world. But first, a very quick recap of the basics.

The field is auditory psychophysics. At first the name alone was enough to make me want to run for the hills, but it's not half as scary as it sounds. As is so often the case in neuroscience the intimidating nomenclature masks a surprisingly simple concept. Psychophysics, it turns out, is just the study of how the brain processes the information we get from the senses, in this case, sound. Therefore the grand old title 'auditory psychophysics' really just means 'what the brain does with everything you hear'.

There is now a wealth of evidence to suggest that the structure of the brain, i.e. which cells connect with which and how strong the connections are, is constantly changing, and that this change is driven by the importance of the sensory information we receive.

In the case of sound, incoming information is processed mainly in the primary auditory cortex, or A1. I have explained before the concept of tonotopic organisation, but as a little refresher, imagine that a little part of the surface of the brain is like the keys of a piano. When you hear a high-pitched sound the brain cells at the top of the piano are activated, and as the pitch gets deeper the activity moves further down the keyboard.

As a result of this type of organisation each cell in A1 has what is termed a best frequency, or BF. This is the frequency to which the cell responds most strongly, and as the frequency of a sound moves away from the BF the response gets smaller, until there is no response at all.
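This bell-shaped fall-off around the BF is often modelled as a Gaussian tuning curve. Here is a toy sketch in Python; the bandwidth and firing-rate numbers are invented for illustration, not taken from any real neuron:

```python
import math

def response(freq_hz, best_freq_hz, bandwidth_hz=500.0, max_rate=100.0):
    """Toy Gaussian tuning curve: the firing rate peaks at the cell's
    best frequency (BF) and shrinks as the stimulus moves away from it."""
    return max_rate * math.exp(-((freq_hz - best_freq_hz) ** 2)
                               / (2 * bandwidth_hz ** 2))

# A cell with a BF of 2000 Hz responds maximally at 2000 Hz,
# more weakly at 2500 Hz, and barely at all at 5000 Hz.
print(response(2000, 2000))  # 100.0
print(response(2500, 2000))
print(response(5000, 2000))
```

The exact shape and width of real tuning curves vary from cell to cell, but the qualitative picture is the same: a peak at the BF with responses tailing off on either side.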

So, what happens if a certain frequency suddenly becomes very important to your behaviour? For example, consider the sound of a screeching predator, which would be a very good indicator that you should make yourself scarce as soon as possible. It would be very helpful if you processed these behaviourally relevant sounds more quickly than irrelevant background sounds.

Well, when a sound like this suddenly becomes very important we find that more of the auditory neurons will change their BF to the frequency in question, making the animal more sensitive to that sound.
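This retuning can be caricatured as a simple update rule, in which each neuron's BF drifts some fraction of the way towards the newly important frequency. This is only an illustrative sketch (the learning rate and the starting population are made up), not a model of real cortical plasticity:

```python
def retune(best_freqs, important_hz, rate=0.2):
    """Toy plasticity rule: every BF in the population moves a fraction
    of the distance towards a behaviourally important frequency."""
    return [bf + rate * (important_hz - bf) for bf in best_freqs]

population = [500.0, 1000.0, 2000.0, 4000.0, 8000.0]
for _ in range(10):  # repeated exposure to an important 2000 Hz sound
    population = retune(population, 2000.0)

# After "training", the population has crowded in around 2000 Hz,
# so more cells now respond strongly to that frequency.
print([round(bf) for bf in population])
```

In reality only a subset of neurons shift, and by varying amounts, but the sketch captures the core idea: representation in A1 is reallocated towards what matters.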

It sounds fairly straightforward, but we are talking about a small cluster of cells amongst tens of billions, a great many of which show similar adaptability in the domain to which they are specialised. So this will be going on not just for sound frequency, but also for other properties of sound, such as volume. Additionally, plasticity has been demonstrated in other senses, such as vision, touch and smell. And that is just the senses; our own internal states are also constantly being monitored in a similar fashion.

The bigger picture is one of a brain that is constantly adapting so that it performs at its peak in whatever environment it is placed.

This plasticity is greatest in infancy. Babies are born with far more connections between brain cells than are present in adults, perhaps as many as double. This is because most of our adaptation to our environment happens in the first few years of life. Once the infant is adapted to its environment, the irrelevant brain connections are pruned away, remaining, if not dead, then largely dormant.

This extreme early adaptability has a few intriguing implications. For example, if a human baby is exposed to enough monkey faces early in development it will be able to distinguish monkey faces just as well as human faces (presumably into adulthood), although for an adult this would be almost impossible to learn. Another example of this early adaptability and pruning is seen in language, with babies able to learn all the different speech sounds heard in languages around the world, even sounds that are almost indistinguishable to Western adults, such as the click consonants of certain African languages. This potential does not last long, and beyond the first couple of years of life we become locked into the sound system and grammatical constraints of our first language (which, incidentally, is why native Japanese speakers find it so hard to distinguish between R and L, a distinction not present in Japanese).

However, as I said, the connections that are pruned after infancy remain dormant rather than dead, and plasticity experiments suggest that with appropriate training they can be revived to some degree.

Plasticity, therefore, is like Darwinism happening in real time. It takes many generations for a species to physically adapt to its environment, but the clever old brain can do it in a matter of hours.