Month: July 2007

In 1992 Rizzolatti and his colleagues found a special kind of neuron in the premotor cortex of monkeys (Di Pellegrino et al., 1992).

These neurons, which respond both when a monkey performs an action and when it observes another monkey (or a person) performing the same action, are called mirror neurons.

Many neuroscientists, such as V. S. Ramachandran, have seized upon mirror neurons as a potential explanatory 'holy grail' of human capabilities such as imitation, empathy, and language. However, to date there are no adequate models explaining exactly how such neurons would provide such amazing capabilities.

Perhaps related to the lack of any clear functional model, mirror neurons have another major problem: Their functional definition is too broad.

Typically, mirror neurons are defined as cells that respond selectively to an action both when the subject performs it and when that subject observes another performing it. A basic assumption is that any such neuron reflects a correspondence between self and other, and that such a correspondence can turn an observation into imitation (or empathy, or language).

However, there are several other reasons a neuron might respond both when an action is performed and observed.

First, the neuron may encode an abstract concept (e.g., an open hand) that is involved in, but not necessary for, the action, the observation of the action, or any potential imitation of the action.

Next, the neuron may carry a purely sensory representation (e.g., of hands or objects opening) that is engaged regardless of whether any agent is acting.

Finally, a neuron may respond to another subject's action not because it is performing a mapping between self and other but because the other's action is a cue to load up the same action plan. In this case the 'mirror' mapping is performed by another set of neurons, and this neuron is simply reflecting the action plan, regardless of where the idea to load that plan originated. For instance, a tasty piece of food may cause that neuron to fire because the same motor plan is loaded in anticipation of grasping it.

It is clear that mirror neurons, of the type first described by Rizzolatti et al., exist (how else could imitation occur?). However, the practical definition of these neurons is too broad.

How might we improve the definition of mirror neurons? Possibly by verifying that a given cell (or population of cells) responds only while observing a given action and while carrying out that same action.
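To make this stricter criterion concrete, here is a toy sketch of how one might screen recorded cells against it. The condition names, firing rates, and threshold are all hypothetical; this is an illustration of the logic, not a standard analysis.

```python
def is_strict_mirror(rates, action, baseline=5.0, threshold=2.0):
    """A cell qualifies only if it fires well above baseline when the
    subject executes `action` AND when it observes `action`, while
    staying near baseline in every other condition."""
    matched = {f"execute_{action}", f"observe_{action}"}
    cutoff = threshold * baseline
    for condition, rate in rates.items():
        if condition in matched:
            if rate < cutoff:
                return False  # too weak in a matched condition
        elif rate >= cutoff:
            return False      # responds to an unrelated condition
    return True

# Hypothetical firing rates (spikes/s) for one cell across conditions,
# including controls for the confounds discussed above:
cell = {
    "execute_grasp": 42.0,
    "observe_grasp": 38.0,
    "observe_open_hand": 6.0,  # abstract-concept / sensory control
    "see_food_alone": 7.0,     # cue-to-load-a-plan control
}
print(is_strict_mirror(cell, "grasp"))  # → True
```

A cell that fired at the mere sight of food, or at the sight of any opening hand, would fail this test even though it would pass the usual "fires during both execution and observation" definition.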

Alternatively, subtractive methods may be more effective at defining mirror neurons than response properties. For instance, removing a mirror neuron population should make imitation less accurate or impossible. Using this kind of method avoids the possibility that a neuron could respond like a mirror neuron but not actually contribute to behavior thought to depend on mirror neurons.

Of course, the best approach would involve both observing response properties and using controlled lesions. Even better would be to do this with human mirror neurons using less invasive techniques (e.g., fMRI, MEG, TMS), since we are ultimately interested in how mirror neurons contribute to higher-level behaviors most developed in homo sapiens, such as imitation, empathy, and language.

Everyday (spoken) language use involves the production and perception of sounds at a very fast rate. One of my favorite quotes on this subject comes from "The Language Instinct" by Steven Pinker, page 157.

"Even with heroic training [on a task], people could not recognize the sounds at a rate faster than good Morse code operators, about three units a second. Real speech, somehow, is perceived an order of magnitude faster: ten to fifteen phonemes per second for casual speech, twenty to thirty per second for the man in the late-night Veg-O-Matic ads […]. Given how the human auditory system works, this is almost unblievable. […P]honemes cannot possibly be consecutive bits of sound."

One thing to point out is that there is a lot of context in language. At a high level, there is context from meaning which is constantly anticipated by the listener: meaning imposes restrictions on the possibilities of the upcoming words. At a lower level there's context from phonetics and co-articulation; for example, it turns out that the "l" in "led" sounds different from the "l" in "let", and this may give the listener a good idea of what's coming next.
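A minimal illustration of the high-level part of this idea, using a made-up miniature corpus: once some context is in hand, only a handful of continuations remain plausible, so the listener's search space is drastically reduced before the next word even arrives.

```python
from collections import defaultdict

# A tiny invented corpus; any real model would use vastly more data.
corpus = ("the dog chased the cat the dog ate the food "
          "the cat chased the mouse").split()

# Count bigram continuations: which words follow each word, and how often.
following = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def candidates(prev_word):
    """Words observed to follow `prev_word`, most frequent first."""
    counts = following[prev_word]
    return sorted(counts, key=counts.get, reverse=True)

# After hearing "dog", only a couple of continuations are possible here,
# far fewer than the full vocabulary:
print(candidates("dog"))
```

Real comprehension presumably combines many such constraints at once (semantic, syntactic, phonetic), but even this crude word-level statistic shows how context prunes the possibilities.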

Although this notion of context at multiple levels may sound difficult to implement in a computer program, the brain is fundamentally different from a computer. It's important to remember that the brain is a massively parallel processing machine, with millions upon millions of signal processing units (neurons).
(I think this concept of context and prediction is lost on more traditional linguists. On the following page of his book, Pinker misrepresents the computer program Dragon NaturallySpeaking by saying that you have to speak haltingly, one word at a time, to get it to recognize words. This is absolutely not the case: the software works by taking context into account, and performs best if you speak at a normal, continuous rate. Reading the software's instructions often leads to better results.)

Given that the brain is a massively parallel computer, it's really not difficult to imagine that predictions on several different timescales are taken into account during language comprehension. Various experiments from experimental psychology have indicated that this is, in fact, the case.

The study of the brain and how neural systems process language will be fundamental to advancing the field of theoretical linguistics — which thus far seems to be stuck in old ideas from early computer science.

Experiments?

Because language operates on such a rapid timescale and involves so many different brain areas, multiple recording techniques, non-invasive and possibly invasive alike, will be needed to get at how language is perceived and produced: ERP, MEG, fMRI, and microelectrodes.

In addition to recording from the brain, real-time measurements of behavior are important in assessing language perception. Two candidate behaviors come to mind: eye movements and changes in hand movements.

Eye movements are a really good candidate for tracking real-time language perception because they are so quick: you can move your eyes before a word has been completely said. Also, there has been some fascinating work done with continuous mouse movements towards various targets to measure participants' on-line predictions of what is about to be said. These kinds of experimental approaches promise to provide insight into how continuous speech signals are perceived.
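One common way to quantify those continuous mouse movements is to measure how far a trajectory bows away from the straight line to the chosen target; larger deviations are often read as attraction toward a competing interpretation of the unfolding speech. The sketch below uses an invented path, purely to show the geometry of the measure.

```python
def max_deviation(trajectory):
    """Maximum perpendicular distance of the path from the straight
    line connecting its start and end points."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx ** 2 + dy ** 2) ** 0.5
    best = 0.0
    for x, y in trajectory:
        # point-to-line distance via the cross-product formula
        dist = abs(dy * (x - x0) - dx * (y - y0)) / length
        best = max(best, dist)
    return best

# Hypothetical path from the start button toward a target at (10, 10),
# bowing sideways mid-flight as if drawn toward a competitor:
path = [(0, 0), (1, 3), (3, 6), (6, 8), (10, 10)]
print(round(max_deviation(path), 2))  # → 2.12
```

Comparing this deviation across conditions (e.g., words with vs. without a phonetic competitor on screen) is one way such experiments index on-line prediction.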

After a bit of a hiatus, I'm back with the last three installments of "Grand Challenges in Neuroscience".

Topic 4: Time

Cognitive Science programs typically require students to take courses in Linguistics (as well as in the philosophy of language). Besides the obvious application of studying how the mind creates and uses language, an important reason for taking these courses is to realize the effects of using words to describe the mental, cognitive states of the mind.

In fact, after having taken courses on language and thought, it seems it would be a remarkable coincidence if the words of any particular language mapped directly onto mental states or brain areas. (As an example, consider that the amygdala is popularly referred to as the "fear center".)

It seems more likely that mental states are translated on the fly into language, which only approximates their true nature. In this respect, I think it's important to realize that time may be composed of several distinct subcomponents, or time may play very different roles in distinct cognitive processes.

Time. As much as it is important to have an objective measure of time, it is equally important to have an understanding of our subjective experience of time. A number of experimental results have confirmed what has been known to humanity for some time: Time flies while you're having fun, but a watched pot never boils.
Time perception relates strongly to cognition, attention, and reward. The NSF committee proposed that understanding time will require an integrative approach, involving brain regions whose function is still not understood at a "systems" level, such as the cerebellum, basal ganglia, and association cortex.

Experiments?

The NSF committee calls for the development of new paradigms for the study of time. I agree that this is critical. To me, one of the most important issues is the dissociation of reward from time (e.g., "time flies when you're having fun"): most tasks involving time perception in both human and non-human primates involve rewarding the participants.

In order to get a clearer read on the neurobiology of time perception and action, we need to observe neural representations that are not colored by the anticipation of reward.