Saturday, 28 August 2010

At about 4pm on Friday the 20th of August, if you were anywhere in the vicinity of central London, you may have noticed an audible collective gasp; a long and satisfied expulsion of anxious tension. This was the moment that my master's drew to a close, as each and every student on the course handed over their thesis, the fruits of nine months' hard slog, before promptly retiring either to the pub or to bed. In my case, the latter was my chosen option, having spent the entire previous evening, night, morning and afternoon tweaking, redrafting, and desperately trying to get the darn thing printed in time. I did, with five minutes to spare.

In the end, I think I turned in a really good piece of work. I hope so. It has certainly had a lot of praise from the two individuals who will be marking my work, which can only be a good sign. As an extra bonus, it looks likely to be published in a peer-reviewed academic journal, which will be a huge boost to my fledgling career.

Essentially, our project set out to examine what happens when a specific sound becomes behaviourally important. Numerous studies on animals have shown that when a target frequency is paired with an electric shock (to make it behaviourally significant) the area of the brain which ‘looks out’ for that sound gets bigger. What isn’t understood is how this affects the ability to perceive that tone.

We paired a target frequency with a shock, like the animal studies, and participants had to discriminate between the target frequency and other frequencies, some very close and some much further from the target. If, as the animal studies suggest, this leads to an expansion of the target representation on the cortex, will the participants get better at telling the target frequency from tones that are very very close to the target?

The answer is yes. When subjects were being conditioned with the shock, they became much better at telling apart tones that were very close in frequency. This effect happened rapidly, and did not occur when participants were not being conditioned.

The neuroimaging results also indicated that there was greater brain activity in response to the frequency that was paired with the shock, compared to all other tones. This would fit with the expanded representation demonstrated in the animal studies.

So what? What does all this mean? Well, firstly, we have demonstrated that the human brain begins to adapt and change to our environment within minutes, something that would have been inconceivable a few years ago. Secondly, studies like ours help us to understand the basics of more complex mechanisms, which future studies will elucidate further. How does early musical training produce a child genius? How are our sensory memories stored, and what does this tell us about memory as a whole? What are the limits of the brain's ability to change itself, and how can we use this information to treat brain damage or stroke? All these bigger questions will need a basic foundation to expand upon, and studies like ours, which may in isolation appear trivial, can provide the basis for these foundations.

Sunday, 16 May 2010

Lectures - finished. Coursework - completed.

By pure chance, it turns out that my research project supervisor is out of the country this week, the week immediately after my final coursework assignment was handed in, meaning that I have had an entire week with absolutely nothing to do. It has been total bliss.

Starting on Tuesday we should hopefully be in a position to start the research. Initially we will pilot the equipment on ourselves, in order to check everything is in working order. Then we go live and begin testing our participants, and we are hoping to complete the bulk of the research in two or three weeks, provided we can keep a steady stream of participants coming in and out of the lab.

Obviously I shall keep you up to date with our progress!

Tuesday, 27 April 2010

I have noticed a worrying trend in my academic career - they always leave things on a downer.

My last lecture on the MSc was on the subject of depression.

When I did my undergraduate degree, our final module was on the psychology of ageing, which was structured in a chronological way so that we ended up covering the cheery topic "bereavement, loneliness and dying".

My final essay topic for the MSc? "Has research into the biological basis of depression had any impact on its treatment?"

Hmm, am I subtly being set up for a lifetime of professional disappointment?

Tuesday, 20 April 2010

I am currently attempting to steer myself through the penultimate essay on my MSc course, which unfortunately is proving to be without a doubt the hardest essay I have ever had to write.

It's about 'the functional role of brain oscillations', and is quite interesting at the same time as being obscenely dull. Essentially, I am writing about how there has always been a bias toward examining where in the brain things are processed, but now there is loads of evidence to suggest that observing when the brain works its magic is just as essential.

It turns out that brain cells oscillate - the electrical activity they display is not random, it has a rhythm. Nothing particularly remarkable about that, but huge numbers of brain cells actually oscillate together - they become synchronised in their activity, and this synchrony is extremely precise (to under a millisecond).

To complicate matters, the frequency at which they oscillate, the speed of the rhythm, varies across different parts of the brain, and also varies for different activities. So we have clusters of cells all 'wobbling' together, at a certain frequency, and other clusters all doing the same at a different frequency.

These oscillations are the brain waves you may have seen on TV, when people are made to do experiments wearing funny electrodes on their head, which produces squiggly lines on a monitor.
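This synchrony idea is easy to sketch numerically (not part of the original post - the 10 Hz rhythm, cell count and noise levels below are all illustrative choices, not real data). If we model each cell's activity as the same sine wave buried in its own noise, averaging across the population cancels the noise but keeps the shared rhythm - roughly the logic behind those squiggly EEG lines.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                     # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)   # one second of simulated "recording"
freq = 10                     # a shared 10 Hz (alpha-band) rhythm

# 500 toy cells: each follows the same rhythm, swamped by its own noise
cells = np.sin(2 * np.pi * freq * t) + rng.normal(0, 2, (500, t.size))

# A distant electrode effectively sums over the population
population = cells.mean(axis=0)

# The strongest frequency in the population signal is the shared rhythm
spectrum = np.abs(np.fft.rfft(population))
peak_hz = np.fft.rfftfreq(t.size, 1 / fs)[spectrum[1:].argmax() + 1]
print(peak_hz)  # ~10 Hz: the rhythm survives, the noise averages away
```

Any single simulated cell here looks like pure noise; only the synchronised population reveals the oscillation, which is why scalp electrodes can pick up rhythms at all.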

So, as I said - the timing of brain activity is just as valid a field of study as the physical layout of the brain. Neuroscientists are easily seduced by fancy-looking brain images, and this may go some way to explaining the bias towards the where, but it has meant our knowledge of the when is now lagging behind.

Anyway, it's horrifically complicated in places, and the aforementioned bias in the research means it's patchy and inconsistent, and this essay is proving to be a real challenge. But then, I knowingly chose what I thought was the hardest question, as it is potentially the one that may be of most use to me in my future research, so I think I made the right decision in picking this essay.

Still, another 1,500 words to write by Friday, and I stupidly signed up for a two day course starting on Thursday...

Friday, 26 March 2010

I recently had to write a mini essay on how the mainstream press might misrepresent data from neuroimaging studies, such as functional magnetic resonance imaging (fMRI). I thought that, given the nature of the question, it might be a good thing to put up on my blog, so here it is, with a few tweaks and extra explanations of some of the terminology thrown in along the way.

How might fMRI data be misrepresented in news articles targeted at the general public?

Like many complex disciplines, cognitive neuroscience is plagued by misrepresentation in the media. Most journalists lack the skills, or the integrity, to identify reputable sources of research, and there is a tendency to sensationalise and exaggerate the significance of research, falsely presenting findings as a series of epoch-defining breakthroughs - a fundamental misunderstanding of the cumulative nature of scientific advancement. Additionally, the mainstream media reports research in isolation, detached from the context of any theoretical framework. The journalist may cite a few key comments from the author, falsely presenting scientists as infallible authority figures. Often, in an attempt to provide balance, a dissenting voice from an opposing scholar is also presented, making science appear to the public as a series of contradictory, diametrically opposed irrelevancies.

These are generic flaws of popular science journalism, but cognitive neuroscience is specifically vulnerable to misrepresentation. This is because it is a field in which the public has a great interest, but very little knowledge. This enthusiasm for neuroimaging is understandable, as cognitive neuroscience examines intrinsically human topics, such as consciousness, personality, and emotion. Because of the perceived impenetrability of the brain in the public consciousness there is a tacit belief that this research is utterly incomprehensible, leading to ready acceptance of press reports without critical appraisal. For example, research has shown that presenting an image of a brain in an article makes the scientific credibility of the research appear greater to the reader (McCabe & Castel, 2008). A similar effect has been demonstrated when neuroscientific terminology is inserted into an article, even when it bears little relevance to the discussion (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008). Clearly, this mystification of brain function amongst the public is open to abuse, deliberate or unintentional.

Reports of fMRI in the press tend to be overly optimistic about its potential, fail to explain its limitations, and are not sufficiently critical of the methodology (Racine, Bar-Ilan, & Illes, 2005). This problem is exacerbated by the contention that much fMRI data has already been distorted and exaggerated by the researcher, knowingly or otherwise (Vul, Harris, Winkielman, & Pashler, 2009). This means that the general public, who are untrained in how to critically appraise neuroscientific research, are often presented with news reports which bear little resemblance to the true results.

(At this point, I will interject to remind you about the difference between forward and reverse inferences. A forward inference is when the researcher predicts in advance what brain activation they expect to see when a person does a given task in the scanner. This might be as simple as "the participant will do a language task, so we expect to see the left posterior section of the superior temporal gyrus activated, because this is where we process and understand language". This kind of inference is not perfect, but it is better than the alternative, because we are making and testing a prediction.

The alternative, the reverse inference, is severely frowned upon in cognitive neuroscience. Imagine we did the experiment I just described, and as well as activation in the superior temporal gyrus we also saw activation in several other areas which we were not expecting to be activated. We should not then start to speculate on why these other areas were activated, because brain areas often have more than one function, so such guesswork is considered very bad science indeed. With that little aside out of the way, you should all be able to follow my next point!)

Additionally, the fallacy of reverse inference, avoided by credible neuroscientists, still appears viable to laypeople, especially given the popular myth that we humans only use 10% of our brain (this is not true - every part of the brain has a known function). This myth masks the complex and multipurpose nature of brain function, supporting the notion that brain activity is easily attributable to specific cognitive functions.

Racine, Bar-Ilan, and Illes (2005) examined the portrayal of fMRI research in the media, and identified three key trends, “‘neuro-realism’, ‘neuro-essentialism’ and ‘neuro-policy’” (p. 2). Neuro-realism, they claim, is the phenomenon whereby subjective findings falsely appear as objective fact when viewed within the context of a neuroimaging study. The second, neuro-essentialism, describes the tendency to attribute a self or personality to the brain itself, almost to the point where the brain is depicted as being self-aware, absolving the individual of any degree of control. Finally, neuro-policy is the politicisation of neuroscientific research to fit and reinforce a social or political agenda. Taken as a whole, these trends misrepresent the findings and usage of fMRI in cognitive neuroscience, ignoring its true purpose as a scientific tool to devise and test hypotheses, and to detail the true workings of the mind and brain.

Tuesday, 23 March 2010

When I said after lectures finish it was just the research project to work on, I neglected to mention:

A 1,500 word critical analysis of a neuroimaging paper due on Friday,
A 1,500 word essay due on April 9th.
A 2,000 - 3,000 word essay also due on April 9th.
A 2,000 - 3,000 word essay due on April 23rd.
Another 2,000 - 3,000 word essay due on May 7th.

Tuesday, 16 March 2010

Somewhat unbelievably, we are now in the last 2 weeks of formal lectures, although as usual there is still an array of optional and supplementary talks going on at the various affiliated institutions. However, the taught aspect of the MSc is now all but over. This leaves only the prospect of the research project looming, lumbering into sight like some gigantic beast from a 1950s B-movie, pulverising anything that dare get in its way.

Tomorrow I meet with my supervisor/collaborator to really get the process started, and we will draw up exactly what will happen, when, and who will be responsible for what.

A few weeks ago I posted here a simplified explanation of our research proposal. I will now attempt to explain, in as accessible a form as possible, what this research has got to do with the real world. But first, a very quick recap of the basics.

The field is auditory psychophysics. At first the name alone was enough to make me want to run for the hills, but it's not half as scary as it sounds. As is so often the case in neuroscience the intimidating nomenclature masks a surprisingly simple concept. Psychophysics, it turns out, is just the study of how the brain processes the information we get from the senses, in this case, sound. Therefore the grand old title 'auditory psychophysics' really just means 'what the brain does with everything you hear'.

There is now a wealth of evidence to suggest that the structure of the brain, i.e. which cells connect with which and how strong the connections are, is constantly changing, and that this change is driven by the importance of the sensory information we receive.

In the case of sound, incoming information is processed mainly in the primary auditory cortex, or A1. I have explained before the concept of tonotopic organisation, but as a little refresher imagine that a little part of the surface of the brain is like the keys of a piano. When you hear a high pitched sound the brain cells at the top of the piano are activated, and as the pitch gets deeper the activity moves further down the keyboard.

As a result of this type of organisation each cell in A1 has what is termed a best frequency, or BF. This is the frequency to which the cell responds the strongest, and as the frequency moves away from the BF the response gets smaller, until there is no response at all.
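A tuning curve like this can be sketched in a few lines of Python (this is my own illustration, not from the studies discussed - the bell-curve shape and the half-octave bandwidth are arbitrary choices made purely to show the idea of a response that peaks at the BF and falls away either side).

```python
import numpy as np

def response(freq_hz, bf_hz, bandwidth_octaves=0.5):
    """Toy tuning curve: firing strength falls off as a bell curve
    the further (in octaves) a tone is from the cell's best frequency.
    The Gaussian shape and half-octave width are illustrative only."""
    octaves_from_bf = np.log2(freq_hz / bf_hz)
    return np.exp(-0.5 * (octaves_from_bf / bandwidth_octaves) ** 2)

bf = 1000  # a cell whose best frequency is 1 kHz
print(response(1000, bf))  # 1.0   -> strongest response, right at the BF
print(response(2000, bf))  # ~0.14 -> one octave away, much weaker
print(response(8000, bf))  # ~0.0  -> three octaves away, essentially silent
```

Working in octaves rather than raw hertz mirrors the roughly logarithmic frequency layout of A1 - the piano-keyboard picture above.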

So, what happens if a certain frequency suddenly becomes very important to your behaviour? For example, consider the sound of a screeching predator, which would be a very good indicator that you should make yourself scarce as soon as possible. It would be very helpful if you processed these behaviourally relevant sounds more quickly than irrelevant background sounds.

Well, when a sound like this suddenly becomes very important, we find that more of the auditory neurons change their BF to the frequency in question, making the animal more sensitive to that sound.

It sounds fairly straightforward, but we are talking about a small cluster of cells amongst tens of billions, a great many of which show a similar adaptability for the area to which they are specialised. So this will be going on not just for sound frequency, but also for the other properties of sound, such as volume. Additionally, plasticity has been shown in other domains such as vision, touch and smell. And that is just the senses, our own internal states are also constantly being monitored in a similar fashion.

The bigger picture is one of a brain that is constantly adapting to perform at its peak in whatever environment it is placed.

This plasticity is greatest in infancy. Babies are born with far more connections between brain cells than are present in adults, perhaps as many as double. This is because most of our adaptation to our environment happens in the first few years of life. Once the infant is adapted to its environment, the irrelevant brain connections are pruned away, remaining if not dead then largely dormant.

This extreme early adaptability has a few intriguing applications. For example, if a human baby is exposed to enough monkey faces early in development it will be able to distinguish monkey faces just as well as human faces (presumably into adulthood), although for an adult this would be almost impossible to learn. Another example of this early adaptability and pruning is seen in language, with babies able to learn all the different speech sounds found in languages around the world, even sounds that are almost indistinguishable to Western adults, such as the click consonants of some southern African languages. This potential does not last long, and beyond the first couple of years of life we become locked into the sound system of our first language (which, incidentally, is the reason that native Japanese speakers find it so hard to distinguish between R and L, a distinction that is not present in Japanese).

However, as I said, the connections that are pruned after infancy remain dormant rather than dead, and plasticity experiments suggest that with appropriate training they can be revived to some degree.

Plasticity, therefore, is like Darwinism happening in real time. It takes many generations for a species to physically adapt to its environment, but the clever old brain can do it in a matter of hours.

Saturday, 27 February 2010

Gosh, it has been quite some time since I last posted an update.

Everyone I talk to on the MSc keeps saying how tired they are feeling, and I am no exception. I find it very hard to concentrate on getting work done, as I am distracted by the prospect of just finishing this course and getting out into the world, finding a decent job and (hopefully) earning a respectable wage for the first time in my life.

Still, I am struggling to stay grounded, to get the work done and my mind on the tasks in hand. I am about to throw myself into this research project. I have hundreds and hundreds of pages I need to read and understand in the coming weeks, and at the moment this task is quite daunting. Ho hum, it must be done.

Oh yeah, and we are just about to be set our final batch of coursework, and I should hopefully get my marks for that exam I had in January next week. Keep your fingers crossed for me!

Monday, 15 February 2010

How might historical thinking about relationships between brain and language relate to current theories and interpretation of behavioural and neuroimaging data?

Introduction

There is no single canonical historical perspective on the relationship between brain and language; anecdotal reports of prototypical aphasic conditions were detailed by the ancient Egyptians, and many contradictory discoveries of dubious veracity emerged over the subsequent millennia. Therefore we shall consider the generally accepted model of the brain and language immediately prior to the twentieth century, shortly before the emergence of neuroscience as a formalised empirical pursuit, as the zenith of historical understanding of the subject, which we shall term the classical model. We then examine how this model relates to the predominant twentieth century assumptions of modularity and functional localisation, and in turn our current understanding of the brain. Finally, we consider what modern neuroscience has taught us that is beyond the technological capabilities of the historical pioneers of language.

The classical model of language

The two main contributors to the classical model of language were Paul Broca and Carl Wernicke, who pioneered our understanding of language production and comprehension respectively (Binder et al., 1997). Broca expanded on a hypothesis posited by his contemporary Jean-Baptiste Bouillaud to propose that the posterior inferior frontal gyrus of the left hemisphere was the region of the brain responsible for language production, based on lesion evidence gathered from his own patients. Wernicke identified that damage to the posterior part of the superior temporal gyrus in the left hemisphere resulted in a receptive aphasia with retained speech production, and postulated that this region was responsible for the memories of words and in turn the comprehension of the speech of others. These discoveries were historically significant in several respects; firstly they identified the left hemisphere as being the predominant hemisphere in language production and processing. Secondly, whilst these discoveries were a major step in the neurological understanding of language, they were even more significant for neuropsychology as a whole, as for the first time they elevated the concept of functional localisation above the pseudoscience of phrenology. Whilst both Broca and Wernicke undoubtedly made a substantial contribution to our understanding of language, the classical model of language can be challenged in several respects. For example, lesions to the areas described by Broca and Wernicke do not always result in the deficits they described. Furthermore, aphasia of both production and comprehension of language can occur without lesions to the regions outlined in the classical model (Caplan et al., 2007; Willmes & Poeck, 1993). These discrepancies suggest that the neural basis of language is far more complex than can be accounted for by the historical model, meaning it must be either expanded or replaced. As a result, language is still an intensely scrutinised field in modern neuroscience.

Language in the age of neuroscience

Early imaging studies somewhat supported the classical model of language (Cabeza & Nyberg, 2000), and more recently Broca’s area and the surrounding region have been shown to be active when translating phonemic information into articulatory information in preparation for speech (Papoutsi et al., 2009). Transcranial magnetic stimulation (TMS), when applied over Broca’s area in normal subjects, can temporarily prevent articulate speech, and almost paradoxically, therapeutic TMS applied over the right-hemisphere homologue of Broca’s area has been shown in some cases to improve picture naming in subjects with Broca’s aphasia (Naeser et al., 2005). This behavioural data implicates Broca’s area in speech production, as the classical model dictates. Broca was not without his critics, even in his own time, and shortly after Broca’s death the French neurologist Pierre Marie noted that expressive aphasia was not exclusively associated with lesions to Broca’s area, but also with regions such as the insula and the basal ganglia (Marie, 1906, as cited in Dronkers, Plaisant, Iba-Zizen, & Cabanis, 2007). Modern neuroscientists have also identified additional linguistic brain areas, including the thalamus, insula and basal ganglia (Damasio & Geschwind, 1984; Mazzocchi & Vignolo, 1979; Naeser & Helm-Estabrooks, 1985). This posthumous vindication of Marie’s remarkably prescient observation is also somewhat ironic, as modern MRI studies have since shown significant damage to the insula and the basal ganglia in the preserved brains of Broca’s own patients (Dronkers et al., 2007). The region commonly known as Wernicke’s area, the posterior superior temporal gyrus, has also been scrutinised by modern neuroscientists, and many have found evidence to support Wernicke’s hypothesis that this region plays a key role in language comprehension (Friederici, Makuuchi, & Bahlmann, 2009). An fMRI study by Grodzinsky and Friederici (2006) found this area to be involved in the integration of syntactic and lexical information, crucial in decoding the speech of others. Increased activation of the posterior superior temporal gyrus has also been shown to correlate with violations of grammatical structures, such as word order (Bornkessel, Zysset, Friederici, von Cramon, & Schlesewsky, 2005; Stowe et al., 1998) and various other factors associated with the complexity of language comprehension tasks (Friederici, Makuuchi, & Bahlmann, 2009). This appears to support Wernicke’s hypothesis that this region is implicated in syntactic and lexical comprehension.

The modularity debate

The classical model of language was formed largely on the assumption of functional localisation. This theory suggested that specific cognitive faculties are functionally compartmentalised into distinct structures within the brain, and would later become entrenched in cognitive psychology by the likes of Fodor (1983), although many others subscribed to the concept of an integrated brain (Farah, 1994; Uttal, 2003). The modular doctrine has been challenged in light of advanced neuroimaging techniques, and some have speculated that the modular account of language must be drastically reconsidered, if not abandoned, in the light of modern imaging studies (Bates & Dick, 2000). Contemporary neuroscientists have now identified a wide variety of structures within the brain that appear to be implicated in language production and processing (Fedorenko & Kanwisher, 2009). Furthermore, and perhaps uncomfortably for proponents of localisation, almost every part of the brain has at some point been ascribed a linguistic function by modern neuroscience (Bates & Dick, 2000). Whilst some of these studies support the classical model, many also flatly contradict it. For example, certain linguistic comprehension processes have been observed in regions independent of Wernicke’s area, including in Broca’s area (Ben-Shachar, Hendler, Kahn, Ben-Bashat, & Grodzinsky, 2003; Bornkessel-Schlesewsky, Schlesewsky, & von Cramon, 2009), which would surely cause both of the behemoths of the classical model some distress. However, clear correlations have been observed between left hemispheric lesions and specific patterns of deficits (Cooper, Eichhorn, & Rodnitzky, 2008); therefore one might conclude that, while it may be a gross oversimplification, there is almost certainly some specialisation for language in this region, and neither historical nor contemporary science has yet been able to fully explain how these regions interact with the brain as a whole.

The contributions of modern neuroscience

Contemporary scientific methods have helped to elucidate the linguistic brain in ways that the early pioneers, who could only examine the brain post mortem, could not. Broca’s decision not to dissect his subjects’ brains meant he grossly underestimated the extent of their subcortical damage; the lesions had engulfed a far greater area than he had described (Dronkers et al., 2007), including additional parts of the frontal lobe and the lateral and medial prefrontal cortex, regions now known to also have important linguistic functions (Binder et al., 1997). This also explains why modern patients with lesions confined to Broca’s area often show rapid signs of recovery (Bakheit, Shaw, Carrington, & Griffiths, 2007), whilst Broca’s own patients’ symptoms continued to deteriorate until death. However, the recovery of modern aphasics also serves to illuminate a phenomenon which none of the historical researchers could have foreseen: neuroplasticity. Modern neuroimaging has revealed the ability of the brain to compensate for damage to many areas, including linguistic regions. For example, in patients with left hemispheric lesions the right hemisphere can often adopt some of the language functions normally associated with the left, and if the patient begins to recover from the aphasia the left hemisphere can begin to regain the lost language functions (Knopman, Rubens, Selnes, Klassen, & Meyer, 1984). In infants, congenital damage to the left hemisphere does not typically result in aphasic conditions (Bates & Roe, 2001). Damage to the left hemisphere can lead to some developmental delays in language compared to damage to the right hemisphere, but after infancy there is no significant difference in language deficit between those with congenital left or right damage (Bates et al., 1997).
Imaging studies have suggested that in cases such as these the right hemisphere can take on the language faculties normally observed in the left hemisphere (Feldman, 2005). This strongly supports the assertion that whilst the left hemisphere is by default the predominant language centre of the brain, this is not a fixed configuration and can be developmentally subverted where necessary. Additionally, some linguistic functions have recently been identified in the right hemisphere in the normal population, such as the right homologue of Wernicke’s area, which is thought to process words with ambiguous meaning (Harpaz, Levkovitz, & Lavidor, 2009). This has implications for the historical model of language, as it challenges the assertion that the left hemisphere is the categorical language centre of the brain, and suggests that the functional areas described by Broca and Wernicke can be circumvented. Modern neuroscience has also suggested that there is a strong association between Broca’s area and motor control (Jang, 2009), which is perhaps unsurprising given its close proximity to the primary motor regions of the brain. This may suggest that Broca’s aphasia is at least partially a motor deficit rather than a strict language deficit. Although this seems to contradict the classical model, it was proposed by Broca’s contemporary, John Hughlings Jackson, as long ago as 1868 (Lorch, 2008). A motor association with Broca’s area may indicate that verbal language ability evolved from motor expressions, such as hand gestures, that preceded it.

The limitations of comparing classical and contemporary models

The concept of language as defined by Broca, Wernicke and their contemporaries was arguably rather crude by the standards of modern neurolinguistics, as it consisted almost solely of expression and reception. The current interpretation of language is somewhat more nuanced, and encompasses concepts such as the difference between spoken and written language, semantic meaning, visual imagery, phonetics, working memory and non-verbal cues. For example, Fedorenko, Gibson, and Rohde (2006) examined the role of working memory in sentence comprehension, and Just (2008) observed that when we hear spoken sentences that prime mental imagery, activation occurs in the intraparietal sulcus. This level of scrutiny highlights the failure of the historical model to appropriately consider the complicated subcomponents of language. Therefore one could argue that historical and contemporary thinkers are not necessarily describing the same phenomena when they refer to language. In addition, the region referred to as Broca’s area varies considerably between studies and over time (Dronkers et al., 2007), meaning many studies of language may not even be describing the same neural region as Broca discovered. As a note of caution, the techniques employed by both classical and contemporary neuroscientists are to some extent methodologically flawed. One must use caution when using a lesioned brain to make inferences about normal functioning, as we cannot guarantee that the unlesioned areas are functioning normally (Rorden & Karnath, 2004), and we now know that many neural regions can adapt their function to compensate for damage elsewhere (Robertson & Murre, 1999), such as the right hemisphere adopting some of the language functions lost to lesions in the left hemisphere (Knopman, Rubens, Selnes, Klassen, & Meyer, 1984). Similarly, the interpretation of modern imaging techniques can be scientifically dubious. For example, the association between activation and cognitive processes can be ambiguous; it is unclear whether observed neural activity reflects excitation or inhibition of a particular function; and the arguments raised in the modularity debate may well confound any activity observed, as most imaging studies assume a brain modular in function (Poldrack, 2006). For these reasons many imaging studies of language may well be methodologically unsound.

Conclusion

Although Broca and Wernicke introduced some of the key neural regions associated with language, these represent only a fraction of the brain regions that now seem to be involved in language, and the historical model was unable to speculate on the complex circuitry by which the many areas involved in production and comprehension interact. Neither Broca nor Wernicke was wrong per se, but the discoveries of each continue to be refined and re-evaluated in the light of technological and methodological advances. This is not to diminish their discoveries; the cumulative way in which science progresses means that the pioneering research conducted by the likes of Broca and Wernicke, and their unabashed willingness to disregard the received wisdom of the age, were a vital bedrock for the rapid acceleration in what we know today about language in the brain.

Thursday, 11 February 2010

I just submitted 2 major pieces of coursework I have been working on for some time. What a relief.

Over the last week I have become somewhat obsessed. On Monday, I took part in a neuroimaging experiment which involved going in an MRI machine. The lady who conducted the study was nice enough to let me keep the data file produced by my scan.

Which means I can now do all sorts of fancy things on my laptop, not just make images like this:

But also make fun videos like this:

And, I just discovered, I can render the scan in 3D, which looks a bit like this:

Monday, 1 February 2010

On Sunday I got my first glimpse of some serious neuroimaging machinery.

My supervisor invited me along to watch him testing a participant in an experiment that is very similar to the one he and I will be conducting in a few months time.

The participant was required to undergo an MEG, or magnetoencephalogram.

Just like this one...

Although MEG is not strictly an imaging tool, it does allow us to view brain waves in different parts of the cerebral cortex. It works by recording, at over 270 sensors arranged around the head, the tiny magnetic fields produced by electrical activity in the brain. Once this is done you can examine the data to see what activity correlated with the task the participant was doing. Clever stuff.
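The basic idea of that analysis, cutting the continuous recording into windows around each stimulus and averaging them to reveal the evoked response, can be sketched in a few lines of Python. This is just a toy illustration with simulated data (the sensor count, sampling rate and event timings are my own made-up numbers, not those of the real study); actual MEG work would use a dedicated package such as MNE.

```python
import numpy as np

def epoch_and_average(data, event_samples, pre, post):
    """Cut fixed-length windows around each event and average them.

    data: (n_sensors, n_samples) continuous recording
    event_samples: sample indices at which stimuli occurred
    pre, post: samples to keep before/after each event
    Returns the average evoked response, shape (n_sensors, pre + post).
    """
    epochs = []
    for ev in event_samples:
        if ev - pre >= 0 and ev + post <= data.shape[1]:
            epochs.append(data[:, ev - pre:ev + post])
    return np.mean(epochs, axis=0)

# Simulated recording: 270 sensors, 10 s at 1000 Hz, pure noise plus a
# small deflection starting 100 ms after each of 20 "tone" events.
rng = np.random.default_rng(0)
fs = 1000
data = rng.normal(0.0, 1.0, size=(270, 10 * fs))
events = np.arange(500, 10 * fs - 500, 450)[:20]
for ev in events:
    data[:, ev + 100:ev + 150] += 5.0  # the evoked deflection

evoked = epoch_and_average(data, events, pre=200, post=400)
print(evoked.shape)  # (270, 600)
```

Averaging across the 20 epochs shrinks the random noise while the stimulus-locked deflection survives, which is exactly why the correlation with the task becomes visible.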

The participant sat in the machine and was played a series of tones at two different pitches, some of which were doubled in length. He had to press a button every time he heard a longer tone.

Halfway through the test one of the tones became associated with an electric shock to the forearm.
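Out of curiosity, here is a toy sketch of how a trial list for this kind of design might be put together: two pitches, a proportion of doubled-length "target" tones, and the shock paired with one pitch only in the second half of the session. The specific numbers (440/880 Hz, 25% long tones, 200 trials) are my own assumptions, not the parameters of Christian's study.

```python
import random

def make_trials(n_trials=200, pitches=(440, 880), p_long=0.25,
                shocked_pitch=440, seed=0):
    """Build a randomised trial list; the shocked pitch is only
    paired with a shock in the second half of the session."""
    rng = random.Random(seed)
    trials = []
    for i in range(n_trials):
        pitch = rng.choice(pitches)
        is_long = rng.random() < p_long          # doubled-length target tone
        in_second_half = i >= n_trials // 2
        shock = in_second_half and pitch == shocked_pitch
        trials.append({"pitch": pitch, "long": is_long, "shock": shock})
    return trials

trials = make_trials()
# By construction, no shocks occur before the halfway point:
assert not any(t["shock"] for t in trials[:100])
```

Fixing the random seed makes the sequence reproducible, which matters if you ever want to present the same trial order to a second participant or re-analyse the run.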

Christian didn't explain to me the exact nature of this study, but it is very similar in nature to the one we proposed (a few entries back in this very blog, if you want to read about it).

Oh, and I've volunteered to take part myself next week, and in addition I might be taking part in an fMRI experiment too, which I will tell you all about if it comes to pass.

Monday, 18 January 2010

As hard as it may be to believe, I am now in the second of the 'taught' terms of my MSc. This means I have lectures until the end of March, and after that I will concentrate solely on my research project.

But before the second term could start there was the small issue of the exam for Marty Sereno's module, 'structure and measurement of the human brain'. This was the module I had been struggling with a little bit, as had many of my cohort, as it straddled cellular biology, the molecular physics of magnetic resonance imaging, and topics such as chemistry and embryonic development. This was a bit tough for poor old me, who had not done any science or maths since my GCSEs, and had always been stronger in the humanities.

Anyway, it wasn't half as bad as many of us had feared, and frankly as long as I pass (of which I am pretty confident) then I'm happy. Plus there is the added bonus of knowing that this will probably be the last exam I ever take.

Now the exam is out of the way the pressure is off for a little while, but I do have a couple of essays to be getting on with, as well as getting on top of my research project. This quiet moment won't last long, so I intend to try and get a head start on the assignments for this term as soon as possible.