Friday, December 19, 2008

Several observations suggest to me a connection between conduction aphasia and disruption to our proposed sensory-motor integration area Spt.

1. Spt is located in the posterior planum temporale region. The lesion distribution in conduction aphasia seems to be centered on this same location (Baldo et al., 2008).

2. Spt activity is modulated by word length (Okada et al., 2003) and frequency, and has been implicated in accessing lexical phonology (Graves et al., 2008). Conduction aphasics commit predominantly phonemic errors in their output, and these errors increase with longer, less frequent words.

3. Spt is not speech specific in that tonal/melodic tasks also activate this region (Hickok et al., 2003). Conduction aphasics appear to have deficits that also affect tonal processing (Strub & Gardner, 1974).

The idea is that this sensory-motor circuit is critical in supporting sensory guidance of speech output, and that such guidance is most critical for phonemically complicated words/phrases, for low-frequency words, or for items with little or no semantic constraint (e.g., non-words, phrases like "no ifs, ands, or buts"). If a word is short or used frequently, the claim goes, its motor representation can be activated as a chunk rather than programmed syllable by syllable.

One problem, raised by Alfonso Caramazza in the form of a question after a talk I gave, is that sometimes conduction aphasics get stuck on the simplest of words. Case in point: in my talk, I showed an example of such an aphasic who was trying to come up with the word cup. He showed the typical conduite d'approche: "it's a tup, no it isn't... it's a top... no..." etc. Alfonso justifiably noted that conduction aphasics shouldn't have trouble with such simple words if the damaged sensory-motor circuit isn't critically needed in these cases.

So here is a sketch of a possible explanation. I'd love to hear your thoughts. There is a difference between repetition and naming: repetition shows the typical length/frequency effects, whereas naming doesn't. Here's why:

In repetition, a common word like cup can be recognized/understood, and then semantic representations can drive the activation of the motor speech pattern. As the word gets more phonologically complicated or less semantically constrained, this route becomes less and less reliable and the sensory-motor system is required. This is the classic explanation invoked to explain why conduction aphasics sometimes paraphrase in their repetition, a view that has gained some recent support (Baldo et al., 2008).

In naming, the main hang-up in conduction aphasia is in trying to access the phonological word form. Since the lesion in conduction aphasia typically involves the STG, systems involved in representing word forms are likely partially compromised, leading to more frequent access failures. Further, during lexical-phonological access, simple, high-frequency forms that share many neighbors (cup, pup, cut, cop, cope ...) will actually lead to more difficulty because of the increased competition.
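The neighborhood-competition point can be made concrete with a quick sketch. Here "neighbors" are approximated orthographically as words one edit (substitution, deletion, or insertion) away; the toy lexicon is hypothetical, and a real analysis would of course use phonemic transcriptions rather than spellings:

```python
def neighbors(word, lexicon):
    """Return words in the lexicon that differ from `word` by a single
    substitution, deletion, or insertion (edit distance 1)."""
    def edit1(w):
        alphabet = "abcdefghijklmnopqrstuvwxyz"
        splits = [(w[:i], w[i:]) for i in range(len(w) + 1)]
        subs = {a + c + b[1:] for a, b in splits if b for c in alphabet}
        dels = {a + b[1:] for a, b in splits if b}
        ins  = {a + c + b for a, b in splits for c in alphabet}
        return (subs | dels | ins) - {w}
    return sorted(edit1(word) & set(lexicon))

# Toy lexicon for illustration only.
lexicon = {"cup", "pup", "cut", "cop", "cap", "up", "cusp", "elephant"}
print(neighbors("cup", lexicon))
# → ['cap', 'cop', 'cusp', 'cut', 'pup', 'up']
```

By this count cup sits in a dense neighborhood while a long word like elephant has no neighbors at all, which is exactly the asymmetry driving the competition argument.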

References

Baldo JV, Klostermann EC, and Dronkers NF. It's either a cook or a baker: patients with conduction aphasia get the gist but lose the trace. Brain Lang 105: 134-140, 2008.

Wednesday, December 10, 2008

This is a follow up to my previous post on the (reduced) effect of delayed auditory feedback (DAF) in conduction aphasia. Here we consider the possible relation between anatomical abnormalities in the planum temporale and DAF in stutterers.

Paradoxically, DAF can improve fluency in people who stutter (it decreases fluency in control subjects). Some stutterers also have an anatomically atypical planum temporale. A study published in Neurology by Foundas et al. (2004) sought to determine whether there was a relation between the paradoxical DAF effect and planum temporale anatomy. There was: stutterers with atypical planum temporale asymmetries (R>L) showed the paradoxical DAF effect, whereas stutterers with typical planum asymmetries did not show the paradoxical DAF effect.

This line of investigation provides a further bit of evidence linking an auditory-motor integration system to the planum temporale. Our functionally defined area Spt (e.g., Hickok et al., 2003), which we believe supports auditory-motor integration, is located in the posterior portion of the left planum temporale. I suspect that it is this region that is somehow implicated in stuttering. Why the symptoms of conduction aphasia and developmental stuttering are different is an important question (assuming that some aspect of the same system is involved)...

Other disorders have been linked to planum temporale (dys)function, including dyslexia, schizophrenia, and autism. I seriously doubt that dysfunction of the auditory-motor integration system involving the planum is going to explain the speech/auditory symptoms of all these disorders, as there are probably lots of ways to disrupt speech/auditory functions. Following the example in the Foundas et al. study, I wonder if planum temporale atypicalities plus DAF effects might be used in combination to better characterize what might be going on in these disorders.

Tuesday, December 9, 2008

Here's an interesting nugget of information: conduction aphasics appear to be less susceptible to the disruptive effect of delayed auditory feedback. Why is this interesting? Because it is more evidence for a link between systems supporting auditory-motor interaction and the deficit in conduction aphasia. Here are the details...

Delayed auditory feedback (DAF) disrupts speech production. You can prove this to yourself either by trying to talk on a microphone in a large stadium (where your echo is delayed) or, if you don't regularly speak in large stadiums, you can simply talk to yourself on two cell phones: call one phone with the other, hold them both to your ears and start talking; there is a slight delay in transmission leading to delayed auditory feedback, and so speaking becomes difficult. DAF is strong evidence that auditory speech information interacts with speech production systems.
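Mechanically, a DAF rig is nothing more than a delay line between microphone and headphones: each sample re-emerges N samples later, where N = delay in ms × sample rate / 1000. Here is a minimal sketch of that buffer (pure Python; the parameter values are illustrative, and a real setup would wire this into live audio I/O):

```python
from collections import deque

class DelayLine:
    """Fixed-lag audio buffer: each sample pushed in re-emerges
    `delay` pushes later (zeros/silence until the line fills)."""
    def __init__(self, delay_ms, sample_rate=16000):
        delay = int(delay_ms * sample_rate / 1000)
        self.buf = deque([0.0] * delay, maxlen=delay)

    def process(self, sample):
        out = self.buf[0]        # oldest sample leaves the line...
        self.buf.append(sample)  # ...as the newest one enters
        return out

# 200 ms of delay at a toy 1 kHz sample rate = 200 samples of lag.
line = DelayLine(delay_ms=200, sample_rate=1000)
signal = [float(i + 1) for i in range(300)]
delayed = [line.process(s) for s in signal]
print(delayed[200] == signal[0])  # → True: first input, 200 samples late
```

Delays in the disruptive range for normal speakers (on the order of 200 ms) are just a longer buffer; the cell-phone trick above works because network transmission supplies that lag for free.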

While the classic view is that conduction aphasia is a disconnection syndrome resulting from damage to the arcuate fasciculus, this view is no longer tenable. I have been promoting the view that the syndrome results from damage to our favorite brain region, Spt, which we believe is a critical node in a network that supports auditory-motor interaction (e.g., see Hickok et al., 2000). This, we claim, explains why conduction aphasics make phonemic errors in production (because speech planning is guided to some degree by auditory speech systems) and why they have trouble with verbatim repetition under conditions of high phonological load such as with multisyllabic words, unfamiliar phrases, or non-words (because these kinds of stimuli maximally rely on sensory speech guidance). One prediction of this view is that conduction aphasics should exhibit other "symptoms" of a disrupted auditory-motor integration system.

So I was digging through some old papers on conduction aphasia and came across two, both published by Francois Boller in 1978, that suggested that conduction aphasics are less susceptible to DAF than controls and patients with other aphasia types. One was a group study that found that conduction aphasics were the least affected by DAF of the groups studied (Boller, Vrtunski, Kim, & Mack, 1978), and the other was a case study showing no effect of DAF (and even some improvement!) on the repetition of speech in a conduction aphasic (Boller & Marcie, 1978). This decreased DAF effect in conduction aphasia makes sense if the system that supports auditory-motor interaction is disrupted in that syndrome.

Monday, December 8, 2008

This is the title of a new paper in J. Neuroscience by Alexander Leff and company (Jennifer Crinion, Karl Friston, and Cathy Price among others) at the Wellcome Trust Centre, University College London. The report is beautifully straightforward and fills an important gap in our understanding of the pathways that support the processing of meaningful speech.

They set out to test two competing hypotheses regarding information flow in the temporal and frontal lobes during the processing of intelligible speech. One hypothesis, put forward by Sophie Scott and Richard Wise, suggests that the pathway for intelligible speech projects anteriorly into the temporal lobe from primary auditory cortex. The other hypothesis, recently promoted by us (Hickok & Poeppel, 2000, 2004, 2007), but by no means unique to us (it is a rather conventional view), holds that the posterior STS is an important projection target for acoustic speech information on its way to being comprehended.

Leff et al. used fMRI to identify a network of brain regions active during the perception of intelligible speech, defined as regions that responded more to word pairs than to time-reversed versions of those word pairs. Here is a summary map of the regions activated by this contrast:

They didn't see much bilateral activation (must be something in the London water because we have just finished a similar experiment and see TONS of activation on both sides -- more on this in the future), but that's not the point of the paper. Notice that there are foci of activation in the posterior as well as the anterior STS, along with an inferior frontal focus that falls within BA47, outside of Broca's region.

They then used dynamic causal modeling and Bayesian parameter estimation to determine the model of information flow among these three nodes that best fit their data. Of 216 models tested -- all possible combinations of input (squares with arrows) and interactions between ROIs (dotted lines), diagram on left -- the winning model (right diagram) had sensory input entering the network only via pSTS and projecting in separate pathways to aSTS on the one hand and IFG on the other.

In other words, information flow is not exclusively anterior from primary auditory cortex, nor is it flowing in parallel from A1 to aSTS and pSTS, but rather projects first posteriorly and then anteriorly within the temporal lobe; i.e., the ventral stream runs through the pSTS.

In proposing an exclusively anterior-going pathway from primary auditory cortex, Scott and Wise were particularly persuaded by three observations. (i) monkey data suggested anterior projections from the auditory core, (ii) their own imaging data suggested an anterior focus of activity for intelligible versus unintelligible speech, and (iii) semantic dementia involves word level semantic deficits and has anterior temporal degeneration as a hallmark feature. Their proposal was quite reasonable in light of these facts, but it just didn't seem to pan out: (i) monkey data is useful as a guide, but may not generalize to humans especially when language systems are involved, (ii) subsequent experiments looking at intelligible speech, such as the present one, clearly identified posterior activation foci, and (iii) it seems that the deficit in semantic dementia is to some extent supramodal, i.e., may be well beyond the linguistic computations that appear to be supported by the pSTS, and lesion (stroke) evidence implicates posterior temporal regions in word-level semantic deficits.

To be fair, we didn't completely predict the findings of the Leff et al. study either. Specifically, we posited no direct projection from pSTS to aSTS, and discussed the function of the anterior temporal region in the context of grammatical-type processes only. Neither did we discuss a direct influence of pSTS on the IFG (BA47) within the ventral stream. (Notice that this link does not, presumably, reflect the dorsal stream, which involves more posterior portions of the IFG and should not be a dominant node in a network supporting language comprehension.)

Now that we know a bit more about the nature of information flow in this network, it's time to try to figure out exactly what these different regions might be doing. Our suggestion regarding the posterior STS is that it supports phonological processing of some sort. This still makes sense, I think. But what is the anterior STS doing?

Wednesday, December 3, 2008

The Dual Stream model of speech/language processing holds that there are two functionally distinct computational/neural networks that process speech/language information, one that interfaces sensory/phonological networks with conceptual-semantic systems, and one that interfaces sensory/phonological networks with motor-articulatory systems (Hickok & Poeppel, 2000, 2004, 2007). We have laid out our current best guess as to the neural architecture of these systems in our 2007 paper:

It is worth pointing out that under reasonable assumptions some version of a dual stream model has to be right. If we accept (i) that sensory/phonological representations make contact both with conceptual systems and with motor systems, and (ii) that conceptual systems and motor-speech systems are not the same thing, then it follows that there must be two processing streams, one leading to conceptual systems, the other leading to motor systems. This is not a new idea, of course. It has obvious parallels to research in the primate visual system, and (well before the visual folks came up with the idea) it was a central feature of Wernicke's model of the functional anatomy of language. In other words, not only does the model make sense for speech/language processing, it appears to be a "general principle of sensory system organization" (Hickok & Poeppel, 2007, p. 401) and it has stood the test of time.

So, all that remains is to work out the details of these networks. A new paper in PNAS by Saur et al. may provide some of these details. In an fMRI experiment, they used two tasks, one that they argued tapped the dorsal stream pathway (pseudoword repetition), and the other the ventral stream pathway (sentence comprehension). The details of the use of these tasks leave something to be desired in my view, but they did seem to highlight some differences, so I'm not going to quibble for now. Here are the activation maps (repetition in blue, comprehension in red):

Notice the more ventral involvement along the length of the temporal lobe (STS, MTG, AG) for comprehension relative to repetition, as well as the more posterior involvement in the frontal lobe for repetition.

They then used peaks in these activations as seeds for a tractography analysis using DTI. Here is a summary figure showing the distinction between the two pathways (red = ventral, blue = dorsal).

The authors localize the white matter tract of the dorsal pathway as being part of the arcuate/superior longitudinal fasciculi and the tract of the ventral pathway as part of the extreme capsule (not the uncinate).

I haven't looked closely at the details of the analysis (I would love to hear comments!), but this sort of study seems just the ticket to getting us closer to delineating the functional anatomical details of the speech/language system.

Blog Moderators

Greg Hickok is Professor of Cognitive Sciences at UC Irvine, Editor-in-Chief of Psychonomic Bulletin & Review, and author of The Myth of Mirror Neurons. David Poeppel, after several years as Professor of Linguistics and Biology at the University of Maryland, College Park, is now Professor of Psychology at NYU. Hickok and Poeppel first crossed paths in 1991 at MIT in the McDonnell-Pew Center for Cognitive Neuroscience, where Hickok was a postdoc and Poeppel a grad student. Meeting up again a few years later at a Cognitive Neuroscience Society meeting in San Francisco, they began a collaboration aimed at developing an integrated model of the functional anatomy of language. Research in both the Hickok and Poeppel labs is supported by NIDCD.