Specificity Versus Patterning Theory: Continuing the Debate

Does the brain process and interpret innocuous and noxious stimuli by “reading” a pattern of activity across multimodal lines of activity, or are there specific, labeled lines that carry functionally distinct modalities from the periphery to the spinal cord and then rostrally in the neuraxis?

The specificity camp has its origins in the nineteenth-century studies of German neurophysiologists who concluded that there are modality-specific spots on the skin (e.g., touch, cold, etc.), and that a percept is generated by activation of specific neuronal pathways in the periphery and CNS (for reviews, see Craig, 2003; Ma, 2010; and Perl, 2007). This concept was originally articulated by Müller (Müller, 1840), who proposed that the stimulus did not even determine the perception, but rather that connectivity of the afferent and its ultimate projection site in the brain were critical.

The pattern theory, by contrast, proposed that afferent fibers respond to a host of stimulus modalities, and that the ultimate perception depends on the brain’s deciphering and interpretation of the patterns of activity across the different nerve fibers. The late Patrick Wall, almost 30 years after coauthoring the Gate Control Theory of Pain in 1965 with Ronald Melzack (Melzack and Wall, 1965), wrote that “…specificity theory has failed to generate any explanation for clinical pains. Worse yet…it has encouraged ineffective, often counterproductive, surgical attempts to destroy the cells or their axons” (Wall, 1996). The poster boy/dartboard for the specificity camp, of course, is Descartes’ little boy whose “pain pathway” runs from his foot to a pain center in his brain.

It is of interest that the focus of Melzack and Wall’s seminal 1962 article “On the nature of cutaneous sensory mechanisms,” which preceded the Gate Control Theory paper by three years, was on the processing of non-noxious thermal and mechanical stimuli, as compared to pain-producing stimuli. The paper dealt much less, if at all, with the question of the processing of different modalities of noxious/painful stimuli (i.e., heat, cold, mechanical, and chemical pain) (Melzack and Wall, 1962). Pat Wall was, of course, strongly influenced by his discovery, with Lorne Mendell, of the wide dynamic range (WDR) dorsal horn neuron (Mendell and Wall, 1965). The WDR neuron clearly responds to both innocuous and noxious stimulation, which argued against a specific Cartesian “pain” pathway. However, subsequent demonstrations by Ed Perl and colleagues of primary afferents (nociceptors) and lamina I dorsal horn neurons that respond only to noxious stimulation, at least in uninjured animals (Bessou et al., 1971; Christensen and Perl, 1970), provided fodder for the labeled line advocates.

So where are we today? As a student of both Melzack and Wall, I grew up with a firm belief in pattern theory. However, results from our recent studies have encouraged me to reexamine this basic tenet. Just as the ability to record from single fibers made possible the identification of afferents that respond rather exclusively to noxious stimulation, so the molecular revolution has revealed a detailed subclassification of the nociceptors. In fact, nociceptors are remarkably heterogeneous; different subsets express channels that are responsive to different noxious stimulus modalities (e.g., heat [TRPV1], cold [TRPM8], mustard oil [TRPA1], etc.) (Basbaum et al., 2009; Julius and Basbaum, 2001). Of course, there is some overlap of these channels, and there is no question that many of the nociceptors are polymodal, responding to thermal as well as mechanical stimulation. Nevertheless, our studies argue strongly for a modality-specific contribution of subsets of primary afferents to the presumptive pain-generated behaviors evoked by the particular modality (e.g., heat, cold, mechanical) (Cavanaugh et al., 2009; Scherrer et al., 2009).

Although I have written (in a syllabus for an IASP Refresher course) that “there are no labeled lines…,” I now believe that there is behaviorally relevant specificity, at least at the level of the primary afferent. To what extent the behaviorally relevant (pain, and likely itch) information generated by primary afferents is also manifest at the level of circuits in the spinal cord and at higher levels of the neuraxis is the critical unanswered question. Yes, there are nociceptive-specific neurons in the dorsal horn, but the WDR neuron cannot be ignored (Price et al., 2003). Indeed, the relative contribution of these two neurons is worthy of continued discussion in this forum.

In their 1962 article, Melzack and Wall proposed that “A satisfactory theory of somesthesis must be able to provide answers to two essential problems: 1) What is the nature of the information that is sent to the central nervous system when the skin is stimulated, and 2) How do the central cells select or abstract from this information to provide the many different qualities of our sensory experience?” We have come a long way toward answering the first question, but clearly we need more information on the specificity versus patterning question at the level of CNS circuitry before answers to the second question can be generated.

These questions are not merely of interest to the basic scientist, but are relevant to the development of approaches to the clinical management of pain. Are there clinical pain conditions that arise from activity in subsets of nociceptors, and if so, can drugs be developed to block selectively the contribution of those afferents? For example, TRPV1 is generally associated with noxious heat transduction and heat hypersensitivity, but clearly that is not its function in a visceral afferent that innervates the pancreas. Thus, a drug that blocks TRPV1-expressing afferents selectively may have great utility, beyond regulating heat-pain sensibility (which is clearly not a major clinical concern).

The question of specificity is also relevant to ablative procedures. Because attempts are now being made in animals to chemically ablate subsets of spinal cord neurons (e.g., with substance P conjugated to saporin; Nichols et al., 1999), the controversy unquestionably has significant clinical implications. Does the preclinical development of these techniques mean that we are poised to go beyond what Pat Wall, as noted above, referred to as “ineffective, often counterproductive, surgical attempts to destroy the cells or their axons” (epitomized by anterolateral cordotomy), or are the new approaches built upon a misunderstanding of the way pain is generated? Should we be concerned that reducing information flow in a specificity-based “pain transmission network” can contribute to the development of central pain syndromes (as can occur post-stroke)?

Finally, and very importantly, this author certainly appreciates that this contemporary perspective on the question of specificity versus patterning relates more to the processing of nociceptive messages, and much less so to the sensory experience/perception of pain. The latter is clearly influenced and in some cases dominated by emotional and cognitive factors. To what extent specificity or patterning or some hybrid model integrates with these factors is not at all clear, and will not be determined by studies directed only at the primary afferent or spinal cord circuits. Studies that integrate an analysis of the pathways through which inputs are transmitted from the cord to the brain, with others directed at identifying where and how emotional/affective and cognitive factors are processed, will require combining psychophysical studies with novel imaging methods that can monitor the process of information transfer from the spinal cord to the brain.

Remember that the original discussion of specificity versus patterning was not a neurophysiological one. Rather, it was a perceptual one: How does the brain generate a pain percept? It is certainly possible that convergence of “specific” inputs at the level of the spinal cord or higher in the neuraxis generates an integrated pattern of activity that is read by the brain, the product of which is the ultimate percept.

Comments

Transcending Specificity
Although many philosophical questions may ultimately be answered by neurobiology—the nature of self-knowledge and free will, for example—generally the two disciplines live in parallel separate universes. My first exposure in the late 1970s to the Specificity versus Pattern Theory debate, repeated endlessly over too many pints of ale at the Jeremy Bentham pub near University College London, was from Pat Wall. He was passionate in his hostility to Specificity, based, I think, on his philosophical certainty that the Pattern theory was much more intellectually challenging and interesting than “mere fixed telephone lines,” and therefore it just had to be correct, a question of neuroaesthetics. However, in the end, facts are our currency, and the accumulated data from the last 30 years demonstrate quite conclusively that at the primary afferent level, there is clear specificity. Sensory neuron subtypes display exquisite selectivity in deconvoluting the external world into defined neural representations by virtue of their specialized transduction processes, extracting information about particular aspects of the sensory environment: its intensity, nature, place, and time. Individual afferents capture enough information from defined stimuli to tempt us to conflate their firing patterns with the sensation generated, leading us into calling them warm, tickle, itch, or pain fibers. However, as Basbaum points out in describing his conversion from the Pattern Theory to the Specificity School, the way in which the CNS represents the sensory information at a systems level and the way in which it is read from neural circuitry as a sensation remains almost a closed book. Is the exquisite specificity engineered in the periphery maintained in the CNS, or is perception the consequence of an interrogation of complex multidimensional information across space and time, the translation of a pattern? Here, due to our lack of data, we can remain, for the moment, philosophical.
However, the capacity to silence or activate designated subsets of central neurons, to record activity simultaneously from many neurons in vivo, and the new field of connectomics that will reveal the synaptic wiring diagram of the brain, means that it is only a matter of time before the answer will be revealed. Take your bets. I know mine.

This discussion by Alan Basbaum and Clifford Woolf deserves more attention than it has received, at least as judged by the number of commentaries. Conceptual models of the pain mechanism strongly influence the research agenda and our approach to diagnosis and treatment. The concept of a labeled line from the periphery to a labeled perceptual apparatus in the brain is very attractive because we can understand it and because it continues to justify invasive and usually expensive treatments directed at the (presumably) identified pathway. In a few instances, some of these procedures provide at least some relief, but the medical literature is full of reports of long-term failure and often serious complications from many of these interventions. The slow and difficult progress on the pharmacological front has been discussed in this Forum by Jeff Mogil and colleagues and is due in large part to the inconvenient number and distribution of genes, receptors, and neurotransmitters. Nonetheless, the evidence for peripheral receptors and afferent fibers selectively responsive to noxious stimuli or inflammation is well established, and there is little doubt that interrupting the centripetal flow of this information will block or attenuate pain when the peripheral source is clearly identified; this evidence and experience encourage acceptance of the labeled line model. The problem arises, as Basbaum and Woolf acknowledge, when considering the function of the central nervous system (CNS).

For over 40 years, the pain research and clinical communities have acknowledged that pain requires the participation of supraspinal circuits mediating both affective experience and the recognition of uniquely noxious stimulation (nociception) [1,3]. Animal and human lesion studies and human functional brain imaging experiments have consistently supported this concept, which is embedded in the IASP definition of pain and which is consistent with the widely accepted operation of parallel distributed networks in the brain. In addition, there has long been evidence that central pathways activated by nociceptors initiate centrifugal control of nociceptive input, supporting Basbaum’s concern about central lesions of “pain pathways” resulting in central pain. Thus, the peripheral label is not lost; but it is distributed among anatomically distinct but interacting circuits mediating nociception, affect, and the modulation of each… to say nothing of cognitive functions. This arrangement is complex, but it is not disorderly and it does not suggest that the CNS creates specific sensations out of some chaotic pattern. Perhaps, as Woolf suggests, we will someday identify, anatomically and biochemically, the unique circuit (read connectome) for pain; it is theoretically possible, but by then we will probably have learned how to recognize each unique experience through the interpretation of the dynamic, ultra-high resolution functional brain images of the future.

The “pattern” and “specificity” theories are ghosts of the past. Basbaum and Woolf have evolved to their present conceptual models, as I have, in the face of contemporary evidence. The same is true of premier specificity champions of the past. Ed Perl writes: “Considering the evidence, it seems reasonable to propose pain to be both a specific sensation and an emotion, initiated by activity in particular peripheral and central neurons… The mechanisms of pain include specialized receptive organs, selective and convergent pathways, plasticity of responsiveness and interactive modulation” ([2], p. 78). Yet, the ghosts remain an unwelcome, and sometimes dangerous, presence in our research, clinical practice, and teaching. We still have much work to do.

I thank Allan, Clifford and Ken for shining light on this important issue and re-opening the question of labeled line versus pattern analysis in the steps leading to the subjective experience of pain. My opinion is that rather than an either/or debate, it is a question of if-and-when. The full-blown labeled line concept is that activity in a specific class of primary afferent nociceptor is necessary and sufficient to produce a specific, reproducible sensation. It seems likely that this can happen -- perhaps something like the pain one feels when immersing a sunburned hand in water that would be hot but not painful in normal skin. On the other hand, there are circumstances in which pain is a compound sensation produced by simultaneous activation of two or more classes of primary afferent. Our understanding of this goes back to Henry Head, who observed that, following transection of a small cutaneous nerve, there was what he called a transitional zone (between the insensitive region and the region of normal innervation). In this transitional zone, innocuous stimuli were experienced as painful. Later, Landau and Bishop showed that selective block of myelinated primary afferents innervating a skin region created a situation in which light mechanical stimulation, or innocuous warm or cold, gave rise to a burning pain. This was interpreted to mean that pain was a compound sensation that depended upon pro-pain nociceptor activity and anti-pain activity in non-nociceptive primary afferents. These observations demonstrate that the labeled line idea has, at best, limited generality for understanding how primary afferents produce pain sensation. These observations were critical antecedents to the Gate Control Hypothesis.

The other important observation is that primary afferents that respond maximally to innocuous stimulation can produce pain provided there is also nociceptor input. This was most clearly demonstrated by Torebjork and colleagues (J. Physiol. 1992), who used low intensity electrical stimulation of a peripheral cutaneous nerve to activate low threshold mechanoreceptive primary afferents. In normal skin, the stimulation produced innocuous tingling. However, following injection of skin with capsaicin to selectively activate primary afferent nociceptors, an area of secondary hyperalgesia developed that included the area innervated by the nerve they were stimulating electrically. Following the development of hyperalgesia, electrical stimulation at the intensity that produced innocuous tingling prior to capsaicin now produced burning pain. This clearly shows a modality switch, such that a single set of primary afferents can produce two distinct sensations, depending upon what other primary afferents are firing at the time. This observation points to the critical role of central processing -- whether you want to call this pattern theory is a matter of semantics. While there is no doubt that labeled lines are the rule in the periphery, the subjective experience of pain typically involves input from two or more labeled lines, i.e., it is a compound sensation.

I too had thought that the debate between specificity and pattern theory was one that was buried with the passing of its main proponents. I too have been privy to heated discussions that seem to repeat themselves from one scientific meeting to the next: one group arguing that specific neurons are all that are needed for pain perception and the second that non-specific neurons are really where pain perception occurs. I also remember scratching my head, with the nagging thought that the evidence for both camps is certainly adequate and wondering “What am I missing?” Here, I want to address what, I think, has been and continues to be missing in this debate.

Let me begin with a discussion of information coding in the visual system. In visual neuroscience, where knowledge of brain mechanisms is perhaps a century ahead of that in pain research, there is little doubt that, at least in the primary visual cortex, the outside world is coded by neurons with receptive fields comprised of lines and/or edges, in complete compliance with the pattern theory. Moreover, the bulk of the evidence now indicates that CONSCIOUS visual perception requires primary visual cortex [4]. On the other hand, the so-called grandmother neurons [3], that is, neurons that preferentially or only specifically respond to the face of one’s grandmother (or to Marilyn Monroe, or Clinton, or Lewinsky), first hypothesized to exist in the cortex back in the 1940s, are now known to exist in higher visual cortical regions. The important point is that even in a system where (at the earliest levels) information is coded in a pattern, this information is then recombined to generate neurons with very specific response properties.

A second important point that vision neuroscience has elegantly demonstrated is the role of individual neurons in perception. Newsome and colleagues in the 1990s [2] showed that in awake monkeys, manipulating the firing rate of a single neuron in the cortex changes the animal’s behavioral responses to the visual input. This result is striking given that about one-third of the primate brain is dedicated to vision and, in any visual task, one-third to one-half of the brain seems to be involved. Thus, even when 3 billion neurons (assuming that the primate brain has 10 billion neurons) are involved in a task, the activity of a single one within that pool can still influence perception. Let me remind the reader that the equivalent has been repeatedly demonstrated in the somatosensory system for peripheral afferents in humans. Namely, microneurographic studies [6] have repeatedly demonstrated that electrical activation of a single peripheral axon with a single pulse can give rise to touch or pain perception (not true for some specific types of tactile afferents). Yet, again, brain imaging studies indicate that such perception involves activation of some 10-20% of the neocortical mantle, and human neuroimaging studies have yet to demonstrate the existence of a single cortical voxel that shows nociceptive specificity. The point I am trying to make is that even though the brain is a highly interconnected network comprised of a huge number of elements, the common saying that you use only a small proportion of your neurons in life is simply false; we now know that every neuron has the potential to contribute to perception, behavior, and/or other brain functions.

With this viewpoint of the role of single neurons in perception, and given the ample evidence that nociceptive neurons come in a huge variety, some showing specificity to unique stimulus kinds, others showing highly convergent responses to many types of energy applied to tissue, we need a redefinition of the debate from one of existence to one of functional relevance. Let me first register a general complaint about the pain research community. Unfortunately, the field has been, and continues to remain, primarily empirical and observational, and continues to bank on the idea that better and more detailed observations will clarify these issues. Clifford Woolf’s solution to this debate offers such an example; I paraphrase him: “All we are missing is the connectome; with it, the issue would be resolved.” I suspect, instead, that the connectome too will show evidence for a myriad of types of nociceptive-specific and non-specific neurons contributing to varying extents to a long list of pathways.

I contend instead that what we are missing is a theoretical construct within which the specificity vs. pattern theory of pain can be framed (Wall’s gate control theory remains the only serious theoretical construct in the field, and that was published in 1965!). With a simplistic view that the brain is a highly interconnected self-organizing network with the intention of promoting the survival of the DNA, and with the idea that sensory systems extract specific and optimal types of information to promote this survival, I can assert that computational constructs are relevant to nociceptive information processing just as they are for all other sensory systems. In fact, the properties of computational models of simulated networks shed light on the topic. Any simulated parallel distributed processing network comprised of multiple layers and composed of Hebbian learning rule-based connectivity exhibits hidden layers composed of both specific and non-specific receptive fields [5]. Moreover, depending on the learning paradigm used, one observes shifts in the relative prominence of the two types of hidden layer neurons. I contend that such simulations exhibit the constraints of the problem and exhibit what, in signal detection theory, would be the equivalent of the sensitivity and specificity of the solution, based on the training paradigm. With this construct, one can pose the notion that specific and non-specific nociceptors respectively contribute to the specificity and sensitivity at which pain will be experienced, or equivalently set the accuracy and dynamic range of the perception.
From such a theoretical view, one can further speculate that shifts in nociceptive response properties, for example following injury or change of state, as has been amply demonstrated over the last 20 years by Clifford Woolf’s studies [7] as well as by many others (state-dependent shifts of nociceptive neurons from specificity to non-specificity were first demonstrated by Casey in 1966 [1]), correspond to a new computational state with unique signal detection, or perception, characteristics. I can even envision extending this construct to the unique reorganization that the brain exhibits in different chronic pain conditions.
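The claim that Hebbian networks develop a mixture of narrowly and broadly tuned hidden units can be illustrated with a toy simulation. The sketch below is purely illustrative and not drawn from any of the studies cited above: it assumes four hypothetical "afferent" input channels (some stimuli modality-specific, some polymodal), trains a small hidden layer with Sanger's generalized Hebbian rule (a stabilized variant of plain Hebbian learning), and then scores each unit with a simple selectivity index (share of absolute weight on the dominant input: values near 1 are labeled-line-like, values near 1/4 indicate broad, pattern-like tuning).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 "afferent" channels (e.g., heat, cold,
# mechanical, chemical) projecting to 4 hidden-layer units.
n_in, n_hidden = 4, 4

# Training stimuli: some modality-specific (one-hot), some polymodal.
stimuli = np.array([
    [1, 0, 0, 0],   # pure heat
    [0, 1, 0, 0],   # pure cold
    [0, 0, 1, 0],   # pure mechanical
    [1, 0, 1, 0],   # polymodal: heat + mechanical
    [0, 1, 1, 1],   # polymodal: cold + mechanical + chemical
], dtype=float)

W = rng.normal(scale=0.1, size=(n_hidden, n_in))
eta = 0.05  # learning rate

# Sanger's rule (generalized Hebbian algorithm):
# dW = eta * (y x^T - lower_triangular(y y^T) W).
# The decorrelating term keeps the Hebbian update stable and pushes
# different hidden units toward distinct components of the input statistics.
for _ in range(500):
    x = stimuli[rng.integers(len(stimuli))]
    y = W @ x                                   # hidden-layer responses
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Selectivity index per hidden unit: fraction of total absolute weight
# carried by the unit's dominant input channel.
w_abs = np.abs(W)
selectivity = w_abs.max(axis=1) / (w_abs.sum(axis=1) + 1e-12)
print(np.round(selectivity, 2))
```

The point of the sketch is not the particular learning rule but the qualitative outcome the commentary describes: even with one shared update rule, the units end up distributed along a continuum from selective to convergent, and changing the stimulus statistics (the "training paradigm") shifts where along that continuum the population sits.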