Friday, October 30, 2015

Ten years ago, the action understanding interpretation of monkey mirror neurons was the only game in town. There really was no other viable account, so even if there were problems with the theory (e.g., 8 in particular), it was the best we had. Now there are alternative explanations. Cecilia Heyes has argued that they reflect learned sensorimotor associations (which don't support understanding); Michael Arbib and, separately, James Kilner have recently argued that they fundamentally serve a motor control function that is fruitfully used to augment perceptual function via predictive coding; and I have argued for something of a hybrid between Heyes and Arbib/Kilner: MNs reflect learned sensorimotor associations that are critical for motor control (action selection specifically) and may modulate perception a tiny bit under rather rare circumstances.

This is great progress because it means we are now in a position to evaluate the various theories against existing data and see which one does a better job of explaining the facts.

I have argued extensively that the action understanding theory does not hold up well to lesion data. Disruption of the mirror system by stroke, sodium amytal, degenerative disease, or developmental disease does not impair action understanding in the way that the Parma story should predict. Add to this an impressive new large-N study on gesture comprehension, and the evidence against the action understanding theory in humans is overwhelming.

But what about monkey mirror neurons? They still look like they are coding some sort of action understanding, right? Not if you actually look at the data rather than reading the headlines.

I discussed this issue in my debate with Gallese. You can watch the whole thing here, but to put the argument into condensed form, I reiterate it below.

First, what got everyone so excited about mirror neurons in the first place is that some of them showed a fairly strict congruence in their response preference for executed and observed actions: cells that responded to, say, whole hand grasping in both execution and observation. There are other, more broadly congruent mirror neurons too, but these took a theoretical back seat in the 1990s. But there was a problem: strictly congruent mirror neurons aren't that useful for understanding because they can't recognize that grasping with a whole hand grip and grasping with a pincer grip are both instances of grasping. They are simply too specific. So the bulk of the theoretical work with monkey mirror neurons has shifted to broadly congruent mirror neurons, which are in fact more common anyway (see below). Here are some quotes from this paper by the Parma group to prove that I'm not making this stuff up:

How is understanding achieved?

“The similarity between the motor representation generated in observation and that generated during motor behavior allows the observer to understand others’ actions, without the necessity for inferential processing.”

What counts as similar?

“neurons in F5 code the goal of the motor act [grasping, holding, tearing], regardless of how it is achieved.”

“The defining characteristic of F5 mirror neurons is that they fire in response to the presentation of a motor act, which is congruent with the one coded motorically by the same neuron.”

“Thus, like the visual system, where, as postulated by Shepard (1984), resonating elements (neurons or neuronal assemblies) respond maximally to a set of stimuli, but are also able to respond to similar stimuli when they are incomplete or corrupt, a set of mirror neurons (broadly congruent) appears to resonate to all visual stimuli that have sufficient critical features to describe the goal of a given motor act.”

So what do the data show? Most of the relevant data come from the first major mirror neuron study, in which a range of actions was examined. After that initial study, research on monkey mirror neurons has focused almost exclusively on one type of action: grasping. (We should probably worry about that.) So let's look at that first, more thorough study.

Here is the distribution of cell types:

1. Strictly congruent: 31.5%

Same goal (e.g., grasping), same motor act (e.g., precision grip).

Can't capture the similarity in goal between grasping with a precision grip vs. a whole hand grip, as pointed out above.

2. Broadly congruent: 60.9%

One subtype captures the similarity between precision and whole hand grasping, but “interprets” them as one or the other specific type of grasping and doesn't capture the similarity between grasping with the hand and grasping with the mouth, for example. These therefore have a problem similar to that of the strictly congruent MNs.

Another subtype responds to goals! This is the useful subtype for action understanding. But only 3 cells out of the 92 mirror neurons fell in this category, and only one goal (grasping) was represented. If you want to maintain an action understanding theory, this is what you have to hang your hat on.

3. Non-congruent: 7.6%

No obvious relation between execution and observation preferences. Not useful for understanding.

There are more problems, and they apply even to the 3/92 cells that have the right response properties for understanding, making their suitability for understanding questionable: mirror neurons are sensitive to all sorts of features that have nothing to do with action understanding, such as the value of the grasped object (more on this below).

And indeed the Parma group has acknowledged this, claiming that the mirror neuron system "contributes to choosing appropriate behavioral responses to those actions" (Caggiano et al. 2009).

Notice that all of these response properties make sense if this system is simply coding relations between a range of observed actions and a range of possible action responses. For example, Type 2 (broadly congruent) mirror neurons, by far the most common, take multiple possible observed actions and map them onto a single executed response. This is useful for motor selection if a single motor response is appropriate to multiple cue types, but not useful for understanding. For another example, the value of the grasped object should modulate response selection (do I want to grasp that object?) but should not play a role in action understanding.

The evidence is overwhelming: monkey mirror neurons have response properties that do not fit the action understanding theory and instead fit an action selection account.

Thursday, October 29, 2015

Observing someone else being touched seems to activate one's own somatosensory cortex (e.g., this report). It has been claimed that this effect contributes to action understanding via embodied simulation. Some view this as an example of the "mirror mechanism" by which we understand others by mirroring their experience in our own bodies (or something like that).

First, note that this touch-based "mirror mechanism" is quite different from so-called motor mirroring. The motor claim is non-trivial: perceptual understanding is not achieved by perceptual systems alone but must involve (or at least can benefit from) the motor system.

What about perceptual mirroring? At the most abstract level, the claim is this: perceptual understanding is based on perceptual processes. Not so insightful, is it? Perhaps it's even vacuous. But maybe this is too harsh an analysis. One could presumably understand the concept of someone being touched on the arm without involving an actual somatosensory representation. So maybe it is non-trivial, insightful even, that we do activate our touch cortex when observing touch. In fact, for the sake of argument, let's grant that the empirical observation is true and that it does contribute to our understanding.

What might it add to understanding? Or put differently, how much does that somatosensory "simulation" add to our understanding of an observed touch? Consider the following narrative scenarios.

Scenario #1: After he expressed his affection during the romantic dinner, the man reached out and touched the girl gently on the arm.

Scenario #2: After subduing his victim during the home invasion, the man reached out and touched the girl gently on the arm.

How much of our understanding of the meaning of that touch action is encoded in the somatosensory experience? Almost none of it. The "meaning" of the action is determined for the most part by the context as it interacts with the observed action. The touch wouldn't even have to actually happen, or it could occur on a different body part (all very different experiences from a somatosensory standpoint!), and it wouldn't alter our understanding of the event. Yes, it's true that simulating the actual touch might add something, i.e., having a sense of what the actual gentle touch felt like on the arm, but what drives real understanding is the interpretation of that touch in its context, not the somatotopically specific touch sensation itself.

Conceptualized in these terms, to say that somatosensory simulation contributes to understanding of others' touch experiences is like saying that "acoustic simulation" of the voiceless labiodental fricative in the experience of hearing "fuck you" contributes to the understanding of that phrase. Yes, I suppose the /f/ plays a role, but how it combines with "uck you" and more importantly who said it to whom and under what circumstances is where the meat of the understanding will be found.

It's interesting and worthwhile to understand all the cognitive and neural bits and pieces that contribute to understanding. Lowish-level embodied "simulation," whether motor or sensory, may have a role to play. But it is important to understand these effects in the broader context. Don't for a second think that we've cracked the cognitive code for understanding just because M1 or S1 activates when we see someone do something.

Tuesday, October 13, 2015

Typical embodied cognition experiments ask whether low-level sensory or motor information affects performance on this task or that. The journals are filled with these kinds of experiments. Some of these effects might even be real. Assuming some of these effects are indeed real, let's now move on to the next questions: How much of the variance in performance does embodied cognition explain? And can embodied models improve on standard models?

I've pointed out previously that embodied effects are small at best. Here's an example--a statistically significant crossover interaction--from a rather high-profile TMS study that investigated the role of motor cortex in the recognition of lip- versus hand-related movements during stimulation of lip versus hand motor areas:

Effect size = ~1-2%. This is typical of these sorts of studies and begs for a theory of the remaining 98-99% of the variance.

A Challenge

So, let me throw out a challenge to the embodied cognition crowd in the context of well worked out non-embodied models of speech production. Let's take a common set of data, build our embodied and non-embodied computational models and see how much of the data is accounted for by the standard versus the embodied model (or more likely, the embodied component of a more standard model).

Here is a database that contains naming data from a large sample of aphasic individuals. The aim is to build a model that accounts for the distribution of naming errors.

Here is a standard, non-embodied model that we have called SLAM for Semantic-Lexical-Auditory-Motor. (No, the "auditory-motor" part isn't embodied in the sense implied by embodied theorists, i.e., the level of representation in this part of the network is phonological and abstract.) Here's a picture of the structure of the model:

This model accounts for about 98% of the variance in patient naming error-type distributions. Here is an example fit for a single patient (figure from Walker & Hickok, in press, PB&R), which shows the percent response for various categories of response types (correct, semantic error, formal error etc) for the patient (dotted line) and the model (solid line):
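To make the "variance accounted for" figure concrete, here is a minimal sketch of how a model fit of this kind can be quantified: compare the observed distribution of response types for a patient against the model-predicted distribution and compute the proportion of variance explained. The numbers and category set below are purely illustrative assumptions, not actual data from the database or the SLAM paper, and the R² formula here is a generic measure, not necessarily the exact fit statistic used in that work.

```python
import numpy as np

# Illustrative (made-up) response-type proportions for one patient.
# Categories: correct, semantic, formal, mixed, unrelated, nonword.
observed = np.array([0.80, 0.05, 0.06, 0.02, 0.01, 0.06])

# Illustrative model-predicted proportions for the same categories.
predicted = np.array([0.78, 0.06, 0.07, 0.02, 0.01, 0.06])

# Variance accounted for: 1 - SS_residual / SS_total, computed
# over the response-type categories.
ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

With these toy numbers the fit comes out well above 0.99; the point of the challenge below is that an embodied model (or the embodied component of a hybrid model) would have to be scored on exactly this kind of quantitative yardstick.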

Incidentally, Matt Goldrick argued in a forthcoming reply to the SLAM model paper that this fit represents a complete model failure due to the fact that the patient had zero semantic errors whereas the model predicted some. This is an interesting claim that we had to take seriously and evaluate quantitatively, which we did. But I digress.

The point is that if you believe that embodied cognition is the new paradigm, you need to start comparing embodied models to non-embodied models to test your claim. Here we have an ideal testing ground: established models that use abstract linguistic representations to account for a large dataset.

Thursday, October 8, 2015

The Language, Cognition and Brain Sciences (LCBS) laboratory at Queensland University of Technology (QUT) is seeking a motivated and enthusiastic Postdoctoral Research Fellow to contribute to a range of research projects investigating the neurobiology of language in both healthy and language-impaired individuals. Applicants should have completed a PhD or have submitted a PhD for qualification in psychology, linguistics, cognitive neuroscience, speech pathology or an equivalent field, and have proven technical ability with a demonstrated publication track record in diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI). Appointment may be made at Level A or B, depending on the qualifications and experience of the successful applicant.

Work conducted within the lab focuses on investigating the neural and cognitive mechanisms responsible for language processing in healthy individuals, how these mechanisms are affected by brain tumours and stroke, and how language recovery can be facilitated by various treatments. It is anticipated that the appointee will work across a range of projects involving neuroimaging, brain stimulation, genetic and psycholinguistic methods. There will also be opportunity for the appointee to develop new projects and obtain competitive funding based on their own research interests, in alignment with the goals and interests of the lab.

The position will entail conducting research at the Herston Imaging Research Facility (HIRF), a purpose-built, state-of-the-art imaging centre. The HIRF is a joint initiative between the Queensland University of Technology, Metro North Hospital and Health Service, the QIMR Berghofer Medical Research Institute, the University of Queensland, and industry partner Siemens. The primary focus of HIRF is on research, with a 3 Tesla Siemens Prisma MRI equipped for cognitive neuroscience research (including a 64-channel BrainProducts MR-compatible EEG system), in addition to Siemens PET-MR and PET-CT systems.

Monday, October 5, 2015

The Program in Language Science (http://linguistics.uci.edu) at the University of California, Irvine (UCI) is seeking applicants for a tenure-track assistant professor faculty position. We seek candidates who combine a strong background in theoretical linguistics and a research focus in one of its sub-areas with computational, psycholinguistic, neurolinguistic, or logical approaches.

The successful candidate will interact with a dynamic and growing community in language, speech, and hearing sciences within the Program, the Center for Language Science, the Department of Cognitive Sciences, the Department of Logic and the Philosophy of Science, the Center for the Advancement of Logic, its Philosophy, History, and Applications, the Center for Cognitive Neuroscience & Engineering, and the Center for Hearing Research. Individuals whose interests mesh with those of the current faculty and who will contribute to the university's active role in interdisciplinary research and teaching initiatives will be given preference.

Interested candidates should apply online at https://recruit.ap.uci.edu/apply/JPF03107 with a cover letter indicating primary research and teaching interests, CV, three recent publications, three letters of recommendation, and a statement on past and/or potential contributions to diversity, equity and inclusion.

Application review will commence on November 20, 2015, and continue until the position is filled.

The University of California, Irvine is an Equal Opportunity/Affirmative Action Employer advancing inclusive excellence. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, protected veteran status, or other protected categories covered by the UC nondiscrimination policy.

The Department of Psychology, Rutgers University – New Brunswick, NJ, invites applications for a tenure-track faculty position at the Assistant Professor level in Cognitive/Computational Neuroscience. We will consider applicants with a Ph.D. in Psychology, Cognitive Science, Cognitive Neuroscience or a related field, and a track record of excellence in any sub-area of cognitive/computational neuroscience (such as language, development, perception, decision-making, learning). We particularly welcome applicants whose research combines multiple techniques, e.g. studies of atypical populations, computational modeling, and/or neural measurements and manipulations. We also encourage applications from scientists whose research programs are synergistic with research being conducted by faculty in the Psychology Department and affiliated units such as the Rutgers University Center for Cognitive Science and the Brain Health Institute.

Candidates must have a Ph.D. by September 1, 2016, with a strong record of research and publication and potential for extramural funding. The successful candidate will be expected to develop and maintain an active, extramurally funded research program and also to teach undergraduate and graduate psychology courses. Rutgers University-New Brunswick is located in central New Jersey between New York City, Princeton, NJ, and Philadelphia. Questions regarding this position may be addressed to recruit4@rci.rutgers.edu.

Applicants should submit a CV, personal statement of research aims and teaching philosophy, representative reprints and 3 letters of recommendation to apply.interfolio.com/31131. Review of applications will begin on Oct. 12 and continue until the position is filled.

The Department of Cognitive Science at Johns Hopkins University seeks to make an open rank, tenure-track faculty appointment in the area of psycholinguistics. The ideal candidate will have an exceptionally strong record of conducting and directing psycholinguistic research that makes substantive contact with linguistic theory and integrates experimental and computational methods. Research areas of interest include, but are not limited to, the processing of syntax, semantics or pragmatics. Candidates should carry out integrative work in the target area and have the ability to conduct effective teaching, student supervision, and collaboration in a formally-oriented, highly interdisciplinary cognitive science department. The search committee and Johns Hopkins University are committed to hiring candidates who, through their research, teaching, and/or service, will contribute to the diversity and excellence of the academic community. The University is an Affirmative Action/Equal Opportunity Employer of women, minorities, protected veterans and individuals with disabilities and encourages applications from these and other protected group members. Consistent with the University's goals of achieving excellence in all areas, we will assess the comprehensive qualifications of each applicant. Please upload a CV, statements of research and teaching interests, and up to three articles to http://apply.interfolio.com/31295. Applicants should also send requests for three letters of recommendation from their Interfolio account. For questions about Interfolio, call (877) 997-8807 or email help@interfolio.com. Review of applications will begin immediately, with a deadline of November 1, 2015.

Subscribe to Talking Brains

Blog Moderators

Greg Hickok is Professor of Cognitive Sciences at UC Irvine, Editor-in-Chief of Psychonomic Bulletin & Review, and author of The Myth of Mirror Neurons. David Poeppel, after several years as Professor of Linguistics and Biology at the University of Maryland, College Park, is now Professor of Psychology at NYU. Hickok and Poeppel first crossed paths in 1991 at MIT in the McDonnell-Pew Center for Cognitive Neuroscience, where Hickok was a postdoc and Poeppel a grad student. Meeting up again a few years later at a Cognitive Neuroscience Society Meeting in San Francisco, they began a collaboration aimed at developing an integrated model of the functional anatomy of language. Research in both the Hickok and Poeppel labs is supported by NIDCD.