What Are Numbers, Really? A Cerebral Basis For Number Sense

Stan Dehaene is a thirty-two-year-old mathematician turned cognitive neuropsychologist who studies the cognitive neuropsychology of language and number processing in the human brain. He was awarded a master's degree in applied mathematics and computer science from the University of Paris in 1985 and then earned a doctoral degree in cognitive psychology in 1989 at the Ecole des Hautes Etudes en Sciences Sociales in Paris. He is at present a researcher at the Institut National de la Sante in Paris.

Dehaene claims that number is very much like color. "Because we live in a world full of discrete and movable objects, it is very useful for us to be able to extract number. This can help us to track predators or to select the best foraging grounds, to mention only very obvious examples. This is why evolution has endowed our brains and those of many animal species with simple numerical mechanisms. In animals, these mechanisms are very limited, as we shall see below: they are approximate, their representation becomes coarser for increasingly large numbers, and they involve only the simplest arithmetic operations (addition and subtraction). We, humans, have also had the remarkable good fortune to develop abilities for language and for symbolic notation. This has enabled us to develop exact mental representations for large numbers, as well as algorithms for precise calculations. I believe that mathematics, or at least arithmetic and number theory, is a pyramid of increasingly more abstract mental constructions based solely on (1) our ability for symbolic notation, and (2) our nonverbal ability to represent and understand numerical quantities."

He argues that many of the difficulties that children face when learning math and which may turn into full-blown adult "innumeracy" stem from the architecture of our primate brain, which has not evolved for the purpose of doing mathematics.

It is his view that the human brain does not work like a computer and that the physical world is not based on mathematics -- rather math evolved to explain the physical world the way that the eye evolved to provide sight.

In a recent book, as well as in a heated discussion at the EDGE forum, the mathematician Reuben Hersh has asked, "What is mathematics, really?" This is an age-old issue that was already discussed in Ancient Greece and that still puzzled Einstein twenty-three centuries later. I personally doubt that philosophical inquiry alone will ever provide a satisfactory answer (we don't even seem to be able to agree on what the question actually means!). However, if we want to use a scientific approach, we can address more focused questions such as where specific mathematical objects like sets, numbers, or functions come from, who invented them, to what uses they were originally put, how they have evolved historically, how they are acquired by children, and so on. In this way, we can start to define the nature of mathematics in a much more concrete way that is open to scientific investigation using historical research, psychology, or even neuroscience.

This is precisely what a small group of cognitive neuropsychologists in various countries and I have been seeking to do in a very simple area of mathematics, perhaps the most basic of all: the domain of the natural integers 1, 2, 3, 4, etc. Our results, which are now based on literally hundreds of experiments, are quite surprising: our brain seems to be equipped from birth with a number sense. Elementary arithmetic appears to be a basic, biologically determined ability inherent in our species (and not just our own, since we share it with many animals). Furthermore, it has a specific cerebral substrate, a set of neuronal networks that are similarly localized in all of us and that hold knowledge of numbers and their relations. In brief, perceiving numbers in our surroundings is as basic to us as echolocation is to bats or birdsong is to songbirds.

It is clear that this theory has important, immediate consequences for the nature of mathematics. Obviously, the amazing level of mathematical development that we have now reached is a uniquely human achievement, specific to our language-gifted species, and largely dependent on cultural accumulation. But the claim is that basic concepts that are at the foundation of mathematics, such as numbers, sets, space, distance, and so on arise from the very architecture of our brain.

In this sense, numbers are like colors. You know that there are no colors in the physical world. Light comes in various wavelengths, but wavelength is not what we call color (a banana still looks yellow under different lighting conditions, where the wavelengths it reflects are completely changed). Color is an attribute created by the V4 area of our brain. This area computes the relative amount of light at various wavelengths across our retina, and uses it to compute the reflectance of objects (how they reflect the incoming light) in various spectral bands. This is what we call color, but it is purely a subjective quality constructed by the brain. It is, nonetheless, very useful for recognizing objects in the external world, because their color tends to remain constant across different lighting conditions, and that's presumably why the color perception ability of the brain has evolved in the way it has.

My claim is that number is very much like color. Because we live in a world full of discrete and movable objects, it is very useful for us to be able to extract number. This can help us to track predators or to select the best foraging grounds, to mention only very obvious examples. This is why evolution has endowed our brains and those of many animal species with simple numerical mechanisms. In animals, these mechanisms are very limited, as we shall see below: they are approximate, their representation becomes coarser for increasingly large numbers, and they involve only the simplest arithmetic operations (addition and subtraction). We, humans, have also had the remarkable good fortune to develop abilities for language and for symbolic notation. This has enabled us to develop exact mental representations for large numbers, as well as algorithms for precise calculations. I believe that mathematics, or at least arithmetic and number theory, is a pyramid of increasingly more abstract mental constructions based solely on (1) our ability for symbolic notation, and (2) our nonverbal ability to represent and understand numerical quantities.

So much for the philosophy; what is the actual evidence for these claims? Psychologists are beginning to realize that much of our mental life rests on the operation of dedicated, biologically determined mental modules that are specifically attuned to restricted domains of knowledge, and that have been laid down in our brains by evolution (cf. Steve Pinker's How the Mind Works). For instance, we seem to have domain-specific knowledge of animals, food, people, faces, emotions, and many other things. In each case -- and number is no exception -- psychologists demonstrate the existence of a domain-specific system of knowledge using the following four arguments:
1. One should prove that possessing prior knowledge of the domain confers an evolutionary advantage. In the case of elementary arithmetic, this is quite obvious.
2. There should be precursors of the ability in other animal species. Thus, some animals should be shown to have rudimentary arithmetic abilities, and there should be systematic parallels between their abilities and those that are found in humans.
3. The ability should emerge spontaneously in young children or even infants, independently of other abilities such as language. It should not be acquired by slow, domain-general mechanisms of learning.
4. The ability should be shown to have a distinct neural substrate.

My book The Number Sense is dedicated to proving these four points, as well as to exploring their consequences for education and for the philosophy of mathematics. In fact, solid experimental evidence supports the above claims, making the number domain one of the areas in which the demonstration of a biologically determined, domain-specific system of knowledge is the strongest. Here, I can only provide a few examples of experiments.
Animals have elementary numerical abilities. Rats, pigeons, parrots, dolphins, and of course primates can discriminate visual patterns or auditory sequences based on number alone (every other physical parameter being carefully controlled). For instance, rats can learn to press one lever for two events and another for four events, regardless of their nature, duration, and spacing, and regardless of whether they are auditory or visual. Animals also have elementary addition and subtraction abilities. These basic abilities are found in the wild, and not just in laboratory-trained animals. Years of training, however, are needed if one wants to inculcate number symbols into chimpanzees. Thus, approximate manipulations of numerosity are within the normal repertoire of many species, but exact symbolic manipulation of numbers isn't -- it is a specifically human ability, or at least one which reaches its full-blown development in humans alone.
There are systematic parallels between humans and animals. Animals' numerical behavior becomes increasingly imprecise for increasingly large numbers (the number size effect). The same is true for humans, even when manipulating Arabic numerals: we are systematically slower to compute, say, 4+5 than 2+3. Animals also have difficulties discriminating two close quantities such as 7 and 8. We do too: when comparing Arabic digits, it takes us longer to decide that 9 is larger than 8 than to make the same decision for 9 vs. 2 (and we make more errors, too).
Preverbal human infants have elementary numerical abilities, too. These are very similar to those of animals: infants can discriminate two patterns based solely on their number, and they can make simple additions and subtractions. For instance, at 5 months of age, when one object is hidden behind a screen, and then another is added, infants expect to see two objects when the screen drops. We know this because careful measurements of their looking times show that they look longer when a trick makes a different number of objects appear. Greater looking time indicates that they are surprised when they see impossible events such as 1+1=1, 1+1=3, or 2-1=2. [Please, even if you are skeptical, don't dismiss these data out of hand, as I was dismayed to discover Martin Gardner doing in a recent review of my book for The Los Angeles Times. Sure enough, "measuring and averaging such times is not easy", but it is now done under very tightly controlled conditions, with double-blind video tape scoring. I urge you to read the original reports, for instance Wynn, 1992, Nature, vol. 358, pp. 749-750 -- you'll be amazed at the level of detail and experimental control that is brought to such experiments.]

Like animals and adults, infants are especially precise with small numbers, but they can also compute more approximately with larger numbers. In passing, note that these experiments, which are very reproducible, invalidate Piaget's notion that infants start out in life without any knowledge of numerical invariance. In my book, I show why Piaget's famous conservation experiments are biased and fail to tell us about the genuine arithmetical competence of young children.
Brain lesions can impair number sense. My colleagues and I have seen many patients at the hospital who have suffered cerebral lesions and, as a consequence, have become unable to process numbers. Some of these deficits are peripheral and concern the ability to identify words or digits or to produce them aloud. Others, however, indicate a genuine loss of number sense. Lesions to the left inferior parietal lobe can result in a patient remaining able to read and write Arabic numerals to dictation while failing to understand them. One of our patients couldn't do 3 minus 1, or decide which number fell between 2 and 4! He didn't have any problem telling us what month fell between February and April, however, or what day came just before Wednesday. Hence the deficit was completely confined to numbers. The lesion site that yields such a number-sense deficit is highly reproducible across cultures throughout the world.
Brain imaging during number processing tasks reveals a highly specific activation of the inferior parietal lobe, the very same region that, when lesioned, causes numerical deficits. We have now seen this activation using most of the imaging methods currently available. PET scanning and fMRI pinpoint it anatomically to the left and right intraparietal sulci. Electrical recordings also tell us that this region is active during operations such as multiplication or comparison, and that it activates about 200 ms following the presentation of a digit on a screen. There are even recordings of single neurons in the human parietal lobe (in the very special case of patients with intractable epilepsy) that show specific increases in activity during calculation.

The fact that we have such a biologically determined representation of number in our brain has many important consequences that I have tried to address in the book. The most crucial one is, of course, the issue of how mathematical education modifies this representation, and why some children develop a talent for arithmetic and mathematics while others (many of us!) remain innumerate. Assuming that we all start out in life with an approximate representation of number, one that is precise only for small numbers and that is not sufficient to distinguish 7 from 8, how do we ever move beyond that "animal" stage? I think that the acquisition of a language for numbers is crucial, and it is at that stage that cultural and educational differences appear. For instance, Chinese children have an edge in learning to count, simply because their number syntax is so much simpler. Whereas we say "seventeen, eighteen, nineteen, twenty, twenty-one, etc.", they say much more simply: "ten-seven, ten-eight, ten-nine, two-tens, two-tens-one, etc."; hence they have to learn fewer words and a simpler syntax. Evidence indicates that the greater simplicity of their number words speeds up learning the counting sequence by about one year! But, I hasten to say, so does better organization in Asian classrooms, as shown by UCLA psychologist Jim Stigler. As children move on to higher mathematics, there is considerable evidence that moving beyond approximation to learn exact calculation is very difficult for children and quite taxing even for the adult brain, and that strategies and education have a crucial impact.
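The syntactic contrast can be made concrete with a toy gloss. The sketch below is illustrative only: the hyphenated forms approximate the regular base-ten structure of Chinese number words rather than transcribing them, and the function name is my own invention.

```python
# Illustrative sketch: English teens use irregular words ("seventeen"),
# while Chinese-style counting is fully regular ("ten-seven", "two-tens-one").
DIGITS = ["zero", "one", "two", "three", "four", "five",
          "six", "seven", "eight", "nine"]

def chinese_style(n):
    """Gloss a number from 1 to 99 in regular base-ten counting syntax."""
    tens, ones = divmod(n, 10)
    parts = []
    if tens == 1:
        parts.append("ten")            # 10-19: "ten", "ten-one", ...
    elif tens > 1:
        parts.append(DIGITS[tens] + "-tens")  # 20, 30, ...: "two-tens", ...
    if ones:
        parts.append(DIGITS[ones])
    return "-".join(parts)

print(chinese_style(17))  # ten-seven
print(chinese_style(21))  # two-tens-one
```

Every number word from 11 to 99 is generated from the same ten digit names plus one rule, which is precisely the economy the paragraph above describes.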

Why, for instance, do we experience so much difficulty in remembering our multiplication tables? Probably because our brain never evolved to learn multiplication facts in the first place, so we have to tinker with brain circuits that are ill-adapted for this purpose (our associative memory causes us to confuse eight times three with eight times four, as well as with eight plus three). Sadly enough, innumeracy may be our normal human condition, and it takes considerable effort to become numerate. Indeed, a lot can be explained about the failure of some children at school, and about the extraordinary success of some idiot savants in calculation, by appealing to differences in the amount of investment and in the affective state they are in when they learn mathematics. Having reviewed much of the evidence for innate differences in mathematical abilities, including gender differences, I don't believe that much of our individual differences in math are the result of innate differences in "talent". Education is the key, and positive affect is the engine behind success in math.

The existence of mathematical prodigies might seem to go against this view. Their performance seems so otherworldly that they appear to have a different brain from our own. Not so, I claim -- or at the very least, not so at the beginning of their lives: they start in life with the same endowment as the rest of us, a basic number sense, an intuition about numerical relations. Whatever is different in their adult brains is the result of successful education, strategies, and memorization. Indeed, all of their feats, from root extraction to multidigit multiplication, can be explained by simple tricks that any human brain can learn, if one is willing to make the effort.

Here is one example: the famous anecdote about Ramanujan and Hardy's taxi number. The prodigious Indian mathematician Ramanujan was slowly dying of tuberculosis when his colleague Hardy came to visit him and, not knowing what to say, made the following reflection: "The taxi that I hired to come here bore the number 1729. It seemed a rather dull number". "Oh no, Hardy", Ramanujan replied, "it is a captivating one. It is the smallest number that can be expressed in two different ways as a sum of two cubes."

At first sight, the instantaneous realization of this fact on a hospital bed seems incredible, too brilliant to be humanly possible. But in fact a minute of reflection suggests a simple way in which the Indian mathematician could have recognized it. Having worked for decades with numbers, Ramanujan evidently had memorized scores of facts, including the following list of cubes:

1x1x1 = 1
2x2x2 = 8
3x3x3 = 27
4x4x4 = 64
5x5x5 = 125
6x6x6 = 216
7x7x7 = 343
8x8x8 = 512
9x9x9 = 729
10x10x10 = 1000
11x11x11 = 1331
12x12x12 = 1728

Now if you look at this list you see that (a) 1728 is a cube; (b) 1728 is one unit off 1729, and 1 is also a cube; (c) 729 is also a cube; and (d) 1000 is also a cube. Hence, it is absolutely OBVIOUS to someone with Ramanujan's training that 1729 is the sum of two cubes in two different ways, namely 1728+1 and 1000+729. Finding out that it is the smallest such number is trickier, but can be done by trial and error. Eventually, the magic of this anecdote totally dissolves when one learns that Ramanujan had written this computation in his notebooks as an adolescent, and hence did not compute it on the spur of the moment in his hospital bed: he already knew it!
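The trial-and-error step is easy to mechanize. The following sketch (an illustration of the reasoning, not anything Ramanujan needed) enumerates sums of two positive cubes and confirms that 1729 is the smallest one that arises in two different ways:

```python
from itertools import combinations_with_replacement

def sums_of_two_cubes(limit):
    """Map each a^3 + b^3 (1 <= a <= b <= limit) to its decompositions."""
    sums = {}
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        sums.setdefault(a**3 + b**3, []).append((a, b))
    return sums

def smallest_two_way_sum(limit=20):
    """Smallest number that is a sum of two positive cubes in two ways."""
    sums = sums_of_two_cubes(limit)
    return min(n for n, ways in sums.items() if len(ways) >= 2)

print(smallest_two_way_sum())         # 1729
print(sums_of_two_cubes(20)[1729])    # [(1, 12), (9, 10)]
```

A limit of 20 is more than enough here: any two-way sum smaller than 1729 would have to use cubes of numbers no larger than 12, all of which the search covers.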

Would it be far-fetched to suggest that we could all match Ramanujan's feat with sufficient training? Perhaps that suggestion would seem less absurd if you consider that any high school student, even one who is not considered particularly bright, knows at least as much mathematics as the most advanced mathematical scholars of the Middle Ages. We all start out in life with very similar brains, all endowed with an elementary number sense which has some innate structure, but also a degree of plasticity that allows it to be shaped by culture.

Back to the philosophy of mathematics, then. What are numbers, really? If we grant that we are all born with a rudimentary number sense that is engraved in the very architecture of our brains by evolution, then clearly numbers should be viewed as a construction of our brains. However, contrary to many social constructs such as art and religion, number and arithmetic are not arbitrary mental constructions. Rather, they are tightly adapted to the external world. Whence this adaptation? The puzzle about the adequacy of our mathematical constructions for the external world loses some of its mystery when one considers two facts.
First, the basic elements on which our mathematical constructions are based, such as numbers, sets, space, and so on, have been rooted in the architecture of our brains by a long evolutionary process. Evolution has incorporated in our minds/brains structures that are essential to survival and hence to veridical perception of the external world. At the scale we live in, number is essential because we live in a world made of movable, denumerable objects. Things might have been very different if we lived in a purely fluid world, or at an atomic scale -- and hence I concur with a few other mathematicians such as Henri Poincare, Max Delbruck, or Reuben Hersh in thinking that other life forms could have had mathematics very different from our own.
Second, our mathematics has seen another evolution, a much faster one: a cultural evolution. Mathematical objects have been generated at will in the minds of mathematicians of the past thirty centuries (this is what we call "pure mathematics"). But then they have been selected for their usefulness in solving real world problems, for instance in physics. Hence, many of our current mathematical tools are well adapted to the outside world, precisely because they were selected as a function of this fit.
Many mathematicians are Platonists. They think that the Universe is made of mathematical stuff, and that the job of mathematicians is merely to discover it. I strongly deny this point of view. This does not mean, however, that I am a "social constructivist", as Martin Gardner would like to call me. I agree with Gardner, and against many social constructivists, that mathematical constructions transcend specific human cultures. In my view, however, this is because all human cultures have the same brain architecture that "resonates" to the same mathematical tunes. The value of pi, thank God, does not change with culture! (cf. the Sokal affair). Furthermore, I am in no way denying that the external world provides a lot of structure, which gets incorporated into our mathematics. I only object to calling the structure of the Universe "mathematical". We develop mathematical models of the world, but these are only models, and they are never fully adequate. Planets do not move in ellipses -- elliptic trajectories are a good, but far from perfect, approximation. Matter is not made of atoms, electrons, or quarks -- all these are good models (indeed, very good ones), but ones that are bound to require revision some day. A lot of conceptual difficulties could be clarified if mathematicians and theoretical physicists paid more attention to the basic distinction between model and reality, a concept familiar to biologists.

Reality Club Discussion

I have been waiting anxiously for Dehaene's book to reach the local bookstores here. I am, however, familiar with his previous work and applaud it. I assume his current book is based on his earlier work and takes the case further. This research, and earlier research on subitizing in animals, has made it clear that our capacity for number has evolved and that the very notion of number is shaped by specific neural systems in our brains.

Dehaene is also right in comparing mathematics to color. Color categories and the internal structures of such categories arise from our bodies and brains. Just as color categories and color qualia are not just "out there" in the world, so mathematics is not a feature of the universe in itself. As Dehaene rightly points out, we understand the world through our cognitive models and those models are not mirrors of the world, but arise from the detailed peculiarities of our brains. This is a view that I argued extensively in Women, Fire, and Dangerous Things back in 1987 and which has characterized Cognitive Linguistics as a field for two decades.

Rafael Nunez and I are now in the midst of writing a book on our research on the cognitive structure of mathematics. We have concluded, as has Dehaene, that mathematics arises out of human brains and bodies. But our work is complementary to Dehaene's. We are concerned not just about the small positive numbers that occur in subitizing and simple cases of arithmetic. We are interested in how people project from simple numbers to more complex and "abstract" aspects of mathematics.

Our answer, which we have discussed in previous work and will spell out in our book, is that other embodied aspects of mind are involved. These include two particular types of cognitive structures that appear generally in conceptual structure and language:

1. Image-schemas, that is, universal primitives of spatial relations, such as containment, contact, center-periphery, paths, and so on. Terry Regier (in The Human Semantic Potential, MIT Press) models many of these in terms of structured connectionist neural networks using models of such visual cortex structures as topographic maps of the visual field, orientation-sensitive cell assemblies, and so on.

2. Conceptual metaphors, which cognitively are cross-domain mappings preserving inferential structures. Srini Narayanan, in his dissertation, models these (also in a structured connectionist model) using neural connections from sensory-motor areas to other areas. Narayanan's startling result is that the same neural network structures that can carry out high-level motor programs can also carry out abstract inferences about event structure under metaphorical projections. Since metaphorical projections preserve inferential structure, they are a natural mechanism for expanding upon our inborn numericizing abilities.

Nunez and I have found that metaphorical projections are implicated in two types of metaphorical conceptualization. First, there are grounding metaphors that allow us to expand on simple numeration using the structure of everyday experiences, such as forming collections, taking steps in a given direction, and so on. We find, not surprisingly, that basic arithmetic operations are metaphorically conceptualized in those terms: adding is putting things in a pile; subtracting is taking away. Second, there are linking metaphors, which allow us to link distinct conceptual domains in mathematics. For example, we metaphorically conceptualize numbers as points on a line. In set-theoretical treatments, numbers are metaphorized as sets. Sets are, in turn, metaphorically conceptualized as containers - except in non-well-founded set theory, where sets are metaphorized as nodes in graphs. Such a "set" metaphorized as a node in a graph can "contain itself" when the node in the graph points to itself. Such sets have been used to provide models for classical paradoxes (e.g., the barber paradox).

We have looked in detail at the conceptual structure of cartesian coordinates, exponentials and logarithms, trigonometry, infinitesimals (the Robinson hyperreals), imaginary numbers, and fractals. We have worked out the conceptual structure of e to the power pi times i. It is NOT e multiplied by itself pi times and the result multiplied by itself i times -- whatever that could mean! Rather it is a complex composition of basic mathematical metaphors.
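For concreteness, the value itself is easy to check numerically: by Euler's identity, e to the power i times pi equals -1. A minimal sketch using Python's standard cmath module confirms this (up to floating-point rounding in the imaginary part):

```python
import cmath

# Evaluate e^(i*pi). The exponential of an imaginary number is a rotation
# in the complex plane, not any kind of "repeated multiplication pi times";
# rotating by pi radians lands exactly on -1.
z = cmath.exp(1j * cmath.pi)
print(z)  # -1, up to a tiny floating-point residue in the imaginary part
```

The point of the paragraph above stands: nothing in this computation is "multiplying e by itself pi times"; the meaning of the expression comes from how exponentiation is extended to complex arguments.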

Our conclusion builds on Dehaene's, but extends it from simple numbers to very complex classical mathematics. Simple numeration is expanded to "abstract" mathematics by metaphorical projections from our sensory-motor experience. We do not just have mathematical brains; we have mathematical bodies! Our everyday functioning in the world with our brains and bodies gives rise to forms of mathematics. Mathematics is not "abstract", but rather metaphorical, based on projections from sensory-motor areas that make use of "inferences" performed in those areas. The metaphors are not arbitrary, but based on common experiences: putting things into piles, taking steps, turning around, coming close to objects so they appear larger, and so on.

Simple numeration appears, as Dehaene claims, to be located in a confined region of the brain. But mathematics - all of it, from set theory to analytic geometry to topology to fractals to probability theory to recursive function theory - goes well beyond simple numeration. Mathematics as a whole engages many parts of our brains and grows out of a wide variety of experiences in the world. What Nunez and I have found is that mathematics uses conceptual mechanisms from our everyday conceptual systems and language, especially image-schemas and conceptual metaphorical mappings that span distinct conceptual domains. When you are thinking of points inside a circle, or elements in a group, or members of a set, you are using the same image-schema of containment that you use in thinking of the chairs in a room.

There appears to be a part of the brain that is relatively small and localized for numeration. Given the subitizing capacity of animals, this would appear to be genetically based. But the same cannot be said for mathematics as a whole. There are no genes for cartesian coordinates or imaginary numbers or fractional dimensions. These are imaginative constructions of human beings. And if Nunez and I are right in our analyses, they involve a complex composition of metaphors and conceptual blends (of the sort described in the recent work of Gilles Fauconnier and Mark Turner).

Dehaene is right that this requires a nonplatonic philosophy of mathematics that is also not socially constructivist. Indeed, what is required is a special case of experientialist philosophy (or "embodied realism"), as outlined by Mark Johnson and myself beginning in Metaphors We Live By (1980), continuing in my Women, Fire, and Dangerous Things (1987) and Johnson's The Body In The Mind (1987), and described and justified in much greater detail in our forthcoming Philosophy In The Flesh.

Such a philosophy of mathematics is not relativist or socially constructivist, since it is embodied, that is, based on the shared characteristics of human brains and bodies as well as the shared aspects of our physical and interpersonal environments. As Dehaene said, pi is not an arbitrary social construction that could have been constructed in some other way. Neither is e, despite the argument that Nunez and I give that our understanding of e requires quite a bit of metaphorical structure. The metaphors are not arbitrary; they too are based on the characteristics of human bodies and brains.

On the other hand, such a philosophy of mathematics is not platonic or objectivist. Consider two simple examples. First, can sets contain themselves or not? This cannot be answered by looking at the mathematical universe. You can have it either way, choosing either the container metaphor or the graph metaphor, depending on your interests.

Or take a second well-known example. Are the points on a line real numbers? Well, Robinson's hyperreals can also be mapped onto the line. When they are, the real numbers take up hardly any room at all on the line compared to the hyperreals. There are two forms of mathematics here, both real mathematics. Moreover, as Leon Henkin proved, given any standard axiom system for the real numbers and a model for it containing the reals, there exists another model of those axioms containing the hyperreals. The reals can be mapped onto the line. So can the hyperreals.

So given an arbitrarily chosen line L, does every point on L correspond to a real number? Or does every point on L correspond to a hyperreal number? (If the answer is yes to the latter question, it cannot be yes to the former question - not with respect to the same correspondence.) This is not a question that can be determined by looking at the universe. You have a choice of metaphor, a choice as to whether you want to conceptualize the line as being constituted by the reals or the hyperreals. There is valid mathematics corresponding to each choice. But it is not a matter of arbitrariness. The same choice is not open for, say, the integers, which cannot be mapped onto every point of a line.

Mathematics is not platonist or objectivist. As Dehaene says, it is not a feature of the universe. But this has drastic consequences outside the philosophy of mathematics itself. If Dehaene is right about this -- and if Reuben Hersh and Rafael Nunez and I are right about it -- then Anglo-American analytic philosophy is in big trouble. The reason is that the correspondence theory of truth does not work for mathematics. Mathematical truth is not a matter of matching up symbols with the external world. Mathematical truth comes out of us, out of the physical structures of our brains and bodies, out of our metaphorical capacity to link up domains of our minds (and brains) so as to preserve inference, and out of the nonarbitrary way we have adapted to the external world. If you seriously believe in the correspondence theory of truth, Dehaene's work should make you worry, and worry big time. Mathematics has been, after all, the paradigm example of objectivist truth.

Dehaene's work is also very bad news for the theory of mind defended in Pinker's How The Mind Works (pp. 24-25), namely, functionalism, or the Computer Program Theory of Mind. Functionalism, first formulated by philosopher Hilary Putnam and since repudiated by him, is the theory that all aspects of mind can be characterized adequately without looking at the brain, as if the mind worked via the manipulation of abstract formal symbols -- like a computer program designed independently of any particular hardware, but which happens to be capable of running on the brain's wetware. This computer-program mind is not shaped by the details of the brain.

But if Dehaene is right, the brain shapes and defines the concept of number in the most fundamental way. This is the opposite of what is claimed by the Computer Program Theory of Mind, namely, that the concept of number is part of a computer program that is not shaped or determined by the peculiarities of the physical brain at all and that we can know everything about number without knowing anything about the brain.

Challenging the Computer Program Theory of Mind is not a small matter. Pinker calls it "one of the great ideas in intellectual history" and "indispensable" to an understanding of mind. Any time you hear someone talking about "the mind's software" that can be run on "the brain's hardware," you are in the presence of the Computer Program Theory.

Dehaene is by no means alone in his implicit rejection of the Computer Program Theory. Distinguished figures in neuroscience have rejected it (e.g., Antonio Damasio, Gerald Edelman, Patricia Churchland). Even among computer scientists, connectionism presents a contrasting view. In our lab at the International Computer Science Institute at Berkeley, Jerome Feldman, I, and our co-workers on a neural theory of language have found, in the course of our work, results suggesting that the program-mind is not even a remotely good approximation to a real mind. Among these are the results by Regier and Narayanan, mentioned above, indicating that conceptual structures for spatial relations concepts and event structure concepts are created and shaped by specific types of neural structures in the visual system and the motor system.

Dehaene's work is important. It lies at the center of some of the deepest and most important issues in philosophy and in our understanding of what the mind is and, hence, what a human being is. What is at stake in Dehaene's work? (1) Platonism: the objective existence of mathematics external to all beings and part of the structure not only of this universe but of any possible universe. (2) The correspondence theory of truth, and with it all of Anglo-American analytic philosophy. If the correspondence theory falls, the whole house of cards falls. And if it fails for the paradigm case of mathematics, the fall is all the more dramatic. (3) Functionalism, or the Computer Program Theory of Mind as essentially brain-free.

Having worked on the problem of numerical representation in animals, I would like to raise a few quick points. First, as Paul Bloom and a few others have articulated, one of the central issues that we must grapple with is whether the combinatorial power underlying number and language comes from separate systems with separate origins or from one shared system, and whether that combinatorial power evolved for language first or for number first. In this sense, animals contribute in a fundamental way, assuming of course that we do not wish to grant them the symbolic power of human language. When Stanislas says that animals lack the symbolic power of human number systems, this is an assumption. There is no proof of it yet, because the only data we have come from massive training programs. And yet some of the non-training studies that we have begun to run (not yet published) suggest that rhesus monkeys and tamarins may go beyond the low-level numerical discriminations that Stanislas has in mind.

This raises yet another question. Of course language contributes in some way to our numerical sense, but in precisely what way? I have only just dipped into the new book, so I have no sense of whether Stanislas speaks to this issue, but it is critical, not only for our general interest in the organization of domains of knowledge, but for the kinds of selection pressure that ultimately allowed us to leave animals in the dust. For example, it is conceivable that the key pressure for fine-level discrimination of number (i.e., beyond more versus less) arose only when our system of social exchange emerged, when making change for a $1 bill was critical. In most animal interactions, there is never really a situation where a more-versus-less distinction fails. In cases where it does fail, animals clearly have the capacity to solve the problem: these are small-number situations, for example, fights between two allies against a third, an assessment of fruit in a pile, or individuals in one group versus a second. What ecological or social pressure created the push for more powerful symbolic systems, dedicated to number juggling?

1) One of my formative experiences in understanding the psychology of mathematics occurred in grad school. I noticed that students learning new material would complain about how hard the professor pushed them, how little time they had. But other students who had mastered the same material would show no sympathy at all. "Trivial, trivial, once you've seen it", they would mutter. (Visual metaphors seem to be the most common when mathematicians explain their insights.) I began to wonder if all mathematicians were assholes. Then I realized that these students were simply reporting their experiences honestly, if not courteously. Mathematical ideas are easier in hindsight. The language of mathematics is awkward and seems ill suited to learning. But mathematical ideas can seem simple once internalized. Mathematicians often report that they struggle to "see" their ideas in the right way, so that they become simple.

We have here a case of trains passing in the night. Dehaene highlights developmental links between numeracy and a variety of things: language, abstraction, logic skills, real-world problem-solving activities, money. He suggests that these might be vital to fighting innumeracy. Yet many educators trying to reach math-phobic kids have been moving in the direction of visual, somatic, and musical representations of math. (For a great visual/kinetic example, see Jim Blinn's work.) Many kids seem to be allergic to abstractions and to real-world problems (the dreaded "word problems"), but can relate to math ideas presented in a more experiential modality. There is not necessarily a core disagreement here. It could be the case that a connection between innate numeracy and language, logic, and/or problem solving was critical in evolution, but that other modalities are nonetheless useful in education.

Historically, the good counters of Western civilization (the Mesopotamians) arose quite distinctly from the good logicians (the Greeks, who were poor counters). This seems to me to be a bit of anecdotal evidence that the two skills are not as innately connected as Dehaene suggests, but were connected together by cultural development.

2) Let's not deify money just yet. There are other things we do that animals don't. Music is an intriguing alternative. There seem to be a disproportionate number of musical mathematicians, and music has a numerical quality. All human societies make music, even though no definitive purpose for it has been identified. There are, however, human societies without money. Mathematics seems more likely to have sprung from music than from money. (Of course it need not have sprung from any one thing at all.)

3) As I commented earlier in response to Hersh's presentation, finding a non-platonic basis for the ability to count doesn't make math any less objective. In this respect numbers are different from colors. While certainly numbers and colors evolved similarly, as "local variables" in the brain, what we must do with them at this time is quite different. We don't have any reason to treat the experience of "yellow" as undesirable in the conduct of our lives. Instead we must design user interfaces for our computers and the rest of the human-made world to work as well as possible with colors as we see them. While there is nothing universally right about yellow, there is also nothing wrong with it. Indeed we celebrate our yellows; Van Gogh's studio comes to mind. (Yellow is not, however, isolatable and abstract in its meaning. Peter Warshall has done some wonderful research on cross-species functions of colors in the natural world.) On the other hand, we have no reason to retain or celebrate our innate/naive sense of numbers. We find no occasion when it is desirable, or even acceptable, to confuse 1006 with 1007, or pi with 3.

4) Dehaene's extrapolations of his work might at times be a little too simple and linear. Perhaps there is some hidden benefit to awkward number names like "seventeen", since they force the young brain to do a tougher, more painstaking job of learning. Japanese kids can be said to be "behind" English speaking kids in language skills at certain ages because of the difficulties of the Japanese language, but that does not mean that less is being learned, or that the kids will turn into less able adults. I am thinking of Einstein, who commented that he was rather slow as a child and considered that to be an advantage, in that he developed basic skills and intuitions in a more considered way at an older than usual age.

A related point: Perhaps Einstein started an alternate neural numerical scratch pad. Might it not be possible that people develop additional representations of numbers in their brains? Why must the inborn representation remain the only one?

Professor, Department of Cognitive Science, University of California, San Diego

Commentary on S. Dehaene's La Bosse des Maths.

Several months ago I had the chance to read the original French version of S. Dehaene's book La Bosse des Maths. Not only did I deeply enjoy the manner in which he presents an amazing amount of experimental findings, but I also learned quite a bit about neuroscience, methodological issues, and even some interesting anecdotes. The book is certainly a rich contribution to the study of the cognitive origins of numbers and their philosophical implications.

The truth is that much of what I would like to say about Dehaene's work has already been said by George Lakoff in his comment (this issue). This is not surprising, since Lakoff explicitly refers to the work he and I have been doing on the cognitive science of mathematics, and to the book we are currently writing -- tentatively titled The Mathematical Body. So I will not repeat how much we consider Dehaene's work important. I would like to insist, however, that both Dehaene's work and ours, in a complementary way, show that (1) Platonism and transcendental objectivism, (2) the correspondence theory of truth, and (3) functionalism are untenable when it comes to explaining the mind.

But despite the fact that I endorse most of Dehaene's claims, and agree with most of his conclusions, I would still like to make a few comments. I have to confess that his book left me with the sensation that very relevant things were said, but that important parts were missing from the account of "la bosse des maths." These are some of them:

1. Mathematics is really not just numbers. I am sure Dehaene knows this very well.

This is not a mystery. After all, there are journals of mathematics where the only numbers you see are those of the pages. However, in his book (as in his article) Dehaene draws too many conclusions about "mathematical objects," "mathematical talents," "mathematics education," "mathematical knowledge," "mathematical thinking," and so on. Based mainly on work done in the neuroscience of quantity discrimination, counting, and simple arithmetic operations with small numbers, he draws conclusions about mathematics as a whole. After reading the book I didn't know, for instance, what to do with the mathematical cognition of those very good mathematicians who are quite bad at dealing with everyday numbers and arithmetic operations. This, of course, does not invalidate his work. But it is an unfortunately fast extrapolation that has important theoretical consequences. In jumping too fast to conclusions about mathematics as a whole, one overlooks fundamental cognitive phenomena underlying what mathematics is: phenomena that do not have anything to do with quantities (or numbers) as such (and that may have conferred evolutionary advantages as well). The universe of bodily grounded experiences is much richer than that of quantities, and, as Lakoff and I have found, some of these experiences are essential in providing grounding for what mathematics is (see Lakoff's comment). The task is to find the neural and bodily basis of this grounding as well.

2. What is REALLY "..." or "etc."?

Even if one doesn't intend to draw conclusions about mathematics as a whole, and stays in the domain of small everyday "numbers," one should be very careful about making fast extrapolations. When we refer to the "natural numbers" or to "1, 2, 3, 4, etc.," we are doing something that is cognitively much more complex than what is shown in Dehaene's book. It is true that we may be building on the kind of phenomena Dehaene describes very well, but the "etc." hides a very complex cognitive universe that is missing from Dehaene's account. With the "etc." we implicitly invoke a whole universe of entities that have never been explicitly thought or written by anybody in the whole history of humankind (e.g., really huge integers)! Still, some cognitive mechanisms allow us to know that these entities preserve certain properties, such as order or induction, and that they share a kind of inferential structure such that we put them in the same cognitive category, and so on. All of that is hidden when we say "integers" or write "..." or "etc." after 1, 2, 3. This is an extraordinary cognitive accomplishment that is not in Dehaene's account of the "number" sense. That cognitive activity requires other fundamental mechanisms that are not intrinsically related to quantity and that can't be overlooked.

3. Brains in living bodies.

After building up the "number sense," Dehaene is right to refer to language and culture in order to explain more abstract and sophisticated mathematics. This move is crucial, since if one simply believed that everything happens in the brain, one would expect Euclid or Archimedes to be theorizing about the Cantor set or some strange diffeomorphisms. It is clear that there is a historical process with semantic change that we need to account for. But in Dehaene's account it is not clear how this is grounded in the brain he claims to be essential. Marc Hauser has pointed it out in his comment: "of course language contributes in some way to our numerical sense, but in precisely what way?" Well, it is here that genuine embodiment plays a vital role. Brains, although extremely relevant in giving accounts of cognitive phenomena, evolved in and with bodies, not in isolation. Moreover, these human brain-bodies have been in permanent, ongoing interaction throughout evolution. Language and culture emerged from these processes and are manifestations of non-arbitrary extensions of shared bodily grounded phenomena. This means that big cultural differences don't necessarily mean dramatic changes in brains. When looking at mathematics, one should look at these embodied extensions. As Lakoff says in his comment, an important part of these extensions are conceptual metaphorical mappings, that is, cross-domain mappings that preserve inference. When one studies these extensions empirically it is possible to observe, for instance, that the different concepts of continuity before and after the nineteenth century do not correspond to a dramatic change in the brains of mathematicians, but rather to different extensions of bodily grounded primitives.
In the case of continuity, Lakoff and I have postulated that before the nineteenth century, mathematicians such as Euler, Fourier, Newton, and Leibniz thought of continuity using the cognitive mechanisms we use today when, in everyday activity, we think of something as being continuous (Euler referred to continuity as "freely leading the hand"). That is, the idea was realized through simple bodily grounding mechanisms having to do with motion, wholeness, and source-path-goal schemata. But since Weierstrass's work in the nineteenth century, continuity has been conceived through radically different cognitive primitives: static entities, discreteness, and container schemata. Does this mean that the brains of mathematicians changed dramatically at the end of the last century? No. Weierstrass-continuity requires, among other things, mappings that allow us to understand a line as being composed of points (sets of points), and a static preservation of closeness as the dynamic approach to a limit. It is only with these cognitive extensions of other bodily grounded mechanisms that one can do post-Weierstrass mathematics, a kind of mathematics that Euler, although he had a relatively similar brain, couldn't do. The point is that language and culture are also brain-bodily grounded, and there are techniques for studying these phenomena empirically without forcing ourselves to remain at the level of isolated brains, but rather extending from it.
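To make the contrast concrete, here is the static Weierstrass definition of continuity in standard notation. Nothing in it moves; every clause is a closeness condition on static entities:

```latex
% Weierstrass's epsilon-delta definition: f is continuous at a
% when closeness of inputs preserves closeness of outputs.
f \text{ is continuous at } a \iff
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x :\;
|x - a| < \delta \;\Rightarrow\; |f(x) - f(a)| < \varepsilon
```

Compare this with Euler's "freely leading the hand": the motion-based idea and the static definition pick out largely the same functions, but through entirely different cognitive primitives.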

In any case, after all this has been said, I am sure that I will enjoy the English version of La Bosse des Maths, which I hope to have in my hands soon.

I read with interest the recent piece by Stanislas Dehaene on what numbers are. The metaphysical and ontological status of mathematics is a subject of great concern to me -- largely because of its relevance to the question of the ontological status of the mathematical relations we find in the physical world, i.e., the so-called "laws of nature."

While I found Dehaene's piece interesting, I would like to draw the group's attention to a radically different approach to the same issue. The philosopher of mathematics Brian Rotman has written a superb book on just this question: Ad Infinitum (Stanford University Press). Here Rotman (who was a mathematician for 20 years before turning to the philosophy of mathematics and computing) puts forward what he calls a "post-Euclidean philosophy of number." Essentially he repudiates Platonism and presents instead a semiotic model of the ontology of number: Rotman argues that numbers emerge semiotically from the act of counting. His careful and intricate treatment of this subject is a truly superb example of intellectual integration between philosophy, culture, and science, and I would strongly recommend that all those interested in this subject take a look at this work. It is not an easy read -- but then it is not an easy subject. FYI: there will be a piece about Rotman's work in a forthcoming issue of The Sciences.

I would also like to make a comment on the recent issue raised by Natalie Angier and Carl Djerassi regarding the under-representation of women in the EDGE Salon. I agree with the poster who suggested that women scientists are perhaps too busy to participate in such "extra-curricular" activities -- being tied up with consolidating their positions as scientists. This is indeed a very real issue for many female scientists; at the same time, I suggest there are other factors that ought also to be considered.

First, I would ask whether anyone has actually tried to solicit participation by many women. I notice that in the list of EDGE contributors well over 90% are men. Someone presumably approached many of them and asked them to participate. It is my belief that the under-representation of women scientists in the public sphere ultimately does science great harm -- perhaps the EDGE group could take a pro-active stance towards seeking out more women. Many potential candidates may not even know that the Salon exists.

A final point regarding the intersection of science and culture and how to get such discussions "out there" in the wider world. In fact I have been writing about this subject for over ten years, and I lecture about it at universities all over the country. In my experience, there is a great desire "out there," both among students and among the public at large, to have such discussions -- the hard question is how to integrate them into existing curriculums and subject categories. Many people are in fact desperate for interdisciplinary dialog about science -- but what they also want is for the culture part of the equation to be taken seriously as well. In other words, what we need is not just a discussion by scientists but one that really is a dialog between the sciences and the humanities. That dialog is what drives my own work, and I'd be very interested to participate in any projects along those lines generated by the EDGE group.

I admire Stan Dehaene's very clear exposition of recent findings about the number sense. Of special value is his excellent synthesis of findings from psychology, brain research, computational models, and cross-cultural investigations. Our understanding of human cognitive achievements is most likely to be advanced by efforts to integrate the sometimes disparate data from the several disciplines.

I take issue with Dehaene's argument that prodigies are like the rest of us -- or that, at the very least, they have the same initial endowment. Having reviewed the literature on prodigies in various domains (e.g., music, mathematics, chess) I have reached a quite different conclusion.

As Ellen Winner argues in Gifted Children (Basic Books, 1996), prodigies can be distinguished from an early age from their peers. Prodigies show a fascination (bordering on obsession) with a certain content (e.g., numbers, visual patterns, auditory musical patterns) and they have a rage to master the domains that deal with such specific content. While they may have parental support, this support is reactive rather than initiating. Moreover, prodigies -- unlike the rest of us -- do not simply follow the conventional educational pattern. They pose new questions, and they often solve domain issues wholly or largely on their own. Philosopher Saul Kripke conjectures that if algebra had not existed when he was in elementary school, he would have invented it; and this kind of comment (whatever its truth value in the specific case) captures quite accurately the mental attitudes and powers of prodigies.

No one understands the origins of prodigies. We simply have to generate satisfying ways of thinking about them. I find it useful to think of prodigies as having the same strategies and parameters with reference to their chosen content domain that all normal individuals have with respect to the mastery of one natural language. (In other words, we are all linguistic prodigies, while prodigies in other domains are rare.) The prodigy seems "pretuned" to discover patterns in the domain, including ones that have eluded others. Perhaps, if it is to result in achievements that are valued by the adult society, this gift has eventually to be wedded to strong motivation (to succeed, to master) and to creativity (to step out in new directions); and, if it is to be distinguished from the mechanistic ability of the savant, it has eventually to be linked to wider problems, including issues from other domains. Dean Keith Simonton has written interestingly about the possibility that genius involves the very occasional concatenation of these disparate human proclivities and talents.

I think that one is far more likely to understand Mozart, Bobby Fischer, or Ramanujan if one assumes that they differ in fundamental ways from the rest of the population than if one has to gerrymander an explanation that simply builds on the general abilities of the general public. Whether Ramanujan may have recalled an earlier feat of calculation, and whether the rest of us could also recognize the special features of the number 1729, is beside the point. Ramanujan is honored because he covered several hundred years of mathematics on his own in India and then made original contributions to number theory after he joined G. H. Hardy in Cambridge.

My disagreement with Dehaene is of more than academic interest. If parents believe that they can convert their children into the next Carl Gauss, John Stuart Mill, Garry Kasparov, or Felix Mendelssohn, they are likely to subject them to training regimes that are inappropriate and even cruel. Recognition that all of us can become numerate and that many of us can eventually master the calculus is an achievement in itself -- one that Dehaene has done much to foster; it is neither necessary nor desirable to imply that the average individual can exhibit prodigious achievements.

I like the experimental research being done by Stanislas Dehaene. Some questions for Stanislas:

* What is the relation between the intuitive notion of number and number representations? For example, the Anasazi of Chaco Canyon must have had an intuitive notion of number to engage in trade, do astronomy, and build villages like Pueblo Bonito. As far as I know, they did not have a written language or number representation. (Archaeologists who read this, please correct me if I'm wrong.)

* What determines whether a civilization achieves a written language or number representation? Again, the Anasazi provide an example of a complex civilization which didn't have those achievements.

George Lakoff and I share an admiration for Dehaene's excellent book, and we also agree in many ways about the cognitive basis for mathematical reasoning, as I point out in How the Mind Works. Lakoff claims to disagree with one of the foundations of my book, the computational theory of mind, which he repeatedly misidentifies as "the computer program theory of the mind." The misidentification does not rest with the name, however; Lakoff describes the theory in a way that I explicitly disavow:

"all aspects of mind can be characterized adequately without looking at the brain."

"This computer program mind is not shaped by the details of the brain."

"The concept of number is part of a computer program that is not shaped or determined by the peculiarities of the physical brain at all and that we can know everything about number without knowing anything about the brain."

Nowhere do I espouse these mad beliefs. I do believe that thinking is a form of computation -- as does Pat Churchland, co-author of a book entitled The Computational Brain. I also believe that to understand the mind, one must look at what the mind computes, at how it computes those things, and at the brain structures that carry out the computations. The gratuitous bits about "not looking at the brain," "not shaped by the details of the brain," and "not knowing anything about the brain" are inventions by Lakoff.

Of course I enjoyed the interview with Dehaene and I am also looking forward to the forthcoming Hungarian (or English?) translation. In the spirit of brainstorming I will make some comments on the comments.

I am very much in agreement with Prof. Gardner on prodigies: my belief is also that they use a special encoding for a domain, an encoding most similar to the natural-language sense that every normal child has. It seems even possible to me that prodigies subvert their language organ for certain language-like, that is, abstract, domains: chess, music, mathematics. The quality of "fluency" is an excellent description of how prodigies navigate in the domain -- it is easy to imagine Mozart as the native speaker of music making fun of Salieri, the earnest foreign student.

I have a friend, Mr. A, in his sixties who is a language phenomenon: he has the ability to learn languages to native level as far as grammar, fluency, and pronunciation are concerned -- his vocabulary and cultural information are still proportional to the time he spends with the language. Proof: real natives regarded him as a retarded compatriot rather than a skilled foreigner. I had many discussions with him about his methods. My point is that studying adults who are particularly successful with languages might give us insight into unusually effective encodings which could be -- maybe -- the same ones that the language organ employs. Mr. A believes, for example, in "basis vectors," that is, random prototype sentences which he fancies, learns by heart, and repeats frequently, with great precision and great fluency ("You should have come by the way of the Opera square"), and which he then subverts to specific purposes ("You should have counted the change").

Number sense is not very language-like, as many comments noted. Perhaps this is why numerical prodigies are rare and are idiot savants: they have to subvert their language organ beyond what is harmless.

As to the Ramanujan number 1729: has anybody noticed the same number appearing in Feynman's book ("Surely You're Joking...") when he competes using mental arithmetic against a Japanese abacus master? After losing on simple arithmetic, the problem given was to calculate the cube root of 1729. Feynman apparently had not heard the Ramanujan story, because he does not mention it at all in the book: his insight was to remember from his wartime engineering days that one cubic foot equals 1728 cubic inches! So as the Japanese master sweated over the abacus, he slowly emitted the obvious digits 1 2 . 0 while feverishly trying to approximate a little bit of the rest using a power series. Also, the octal representation of this number, 3301, was for a long time the secret password to the central computer of Xerox PARC. Maybe this will spur someone on to more insights.
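The arithmetic behind these anecdotes is easy to verify; here is a quick sketch in Python (my own illustration, not from either book):

```python
# 1729 is expressible as a sum of two positive cubes in two ways
# (Ramanujan's famous observation about Hardy's taxicab number).
ways = [(a, b) for a in range(1, 13) for b in range(a, 13)
        if a**3 + b**3 == 1729]
print(ways)  # [(1, 12), (9, 10)]

# Feynman's shortcut: a cubic foot is 12**3 = 1728 cubic inches,
# so the cube root of 1729 is just a hair over 12.
print(round(1729 ** (1 / 3), 3))  # 12.002

# And the password: 1729 written in octal is indeed 3301.
print(oct(1729))  # 0o3301
```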

On Steve Pinker's response to George Lakoff: I thought I was a believer in Steve's "computational theory of mind," but when I originally read them I did not find Lakoff's exaggerations shocking; I thought they were explanatory. Without agreeing with Lakoff's conclusions, is it not the case, Steve, that "all aspects of the mind can be characterized adequately without looking at the brain" if we take this statement as a test that a future goal will have been reached (the goal of very strong AI), rather than as a description of an absurd ("mad") process, which I do not believe was intended? I also interpreted "looking at" as "referenced in any way, except for the fact that none of this would have existed without the prior existence of many brains both looking and looked at" -- that is, "looking at the time that the test is successful and very strong AI has been reached." By very strong AI I mean consciousness without brain/body.

Neuroscientist; Collège de France, Paris; Author, Consciousness and the Brain

Many thanks to George Lakoff, Marc Hauser, Jaron Lanier, Rafael Nunez, Margaret Wertheim, Howard Gardner, Joseph Traub, Steve Pinker, and Charles Simonyi for taking the time to write careful commentaries and addenda to my "number sense" paper. Their comments often address additional points that I did not have room to comment on in Edge, but which are discussed at greater length in my book. Unfortunately, the commentaries themselves are now two to three times as long as my original article! In my reply, I will confine myself to a few generic remarks and rejoinders.

Higher mathematics (comments by George Lakoff and Rafael Nuñez)

I am delighted to hear that George Lakoff and Rafael Nunez think that they can now address the cognitive structure of higher mathematics using an approach somewhat similar to the one I was using for number, and I'm looking forward to reading their forthcoming book. There is no doubt that arithmetic is but a very small aspect of mathematics. If I am right, however, much of the mathematical edifice will be found to have the same structure and origin as arithmetic. Namely, mathematics is a construction on the basis of raw intuitions or primary cerebral representations that have been engraved in our brains through evolution. Lakoff and Nunez suggest that, besides number, spatial relations such as containment, contact, etc. are also basic foundations of mathematics. That sounds very likely. They also suggest that the mathematical edifice is built on "conceptual metaphors," so that structures from one domain (e.g., space) can be mapped metaphorically to another domain (e.g., number), giving rise to novel mathematics in the process. This is a genuinely novel and interesting idea which, if formalized, could flesh out the notion of a "construction of mathematics."

Number processing and the brain-computer metaphor (comments by George Lakoff and Steve Pinker)

My own view stands in between the contrasting or even clashing views held by George Lakoff and Steve Pinker. I would grant, with Steve, that computational models of the mind/brain are key to our current rapid progress in cognitive science. I also think, however, that the computational metaphor for brain function, which is currently the best metaphor we have, has led some to an extreme form of functionalism according to which studying the brain has absolutely nothing to do with how the mind works. This extreme form of functionalism, which is not the one held by Steve Pinker, can be found in the writings of Jerry Fodor or Philip Johnson-Laird (e.g., "The physical nature [of the brain] places no constraints on the pattern of thought"). This is, I think, a rather absurd statement.

I concur with George Lakoff in thinking that by looking at the organization of the brain, we will be able to discover novel ways of doing computations that are radically different from those in digital computers. My own research refutes a simple-minded brain-computer metaphor, and in fact the last chapter of my book is largely dedicated to this point. As far as number crunching is concerned, the brain is unlike any digital computer that we know of - it is very poor at exact symbolic processing, and seems to process numerical quantities in an analog rather than a digital fashion.

To answer Joseph Traub's specific query, I do not have information about the Anasazi representation of numbers. It is quite possible, however, to have no written or spoken symbols for number and yet to develop accurate arithmetic systems. Many tribes in New Guinea use a body-pointing system to represent integers and to calculate. The abacus is also an excellent, non-arbitrary way of representing and computing with numbers. The exact relation to the infant and animal "number sense" is unclear. One idea, which was defended by UCLA psychologists Rochel Gelman and Randy Gallistel, is that the preverbal quantity representation provides strong principles that guide the acquisition of verbal counting and of symbolic notation.

Marc Hauser states that "in most animal interactions, there is never really a situation where a more versus less distinction fails". Is that really true? There certainly are many experiments in which animals perform barely above chance when dealing with two close numbers such as 6 and 7. It might be argued that such failures, even if minor, provide at least some evolutionary pressure for the development of exact number systems. It seems to me equally plausible, however, that human language and symbolic abilities emerged through entirely non-numerical evolutionary pressures, and that the number system merely exploited and pre-empted those systems later on. Indeed, the connection between verbal numerals and preverbal numerical quantities is one that is very difficult for children to acquire. In my book, I suggest that this divide, and the ensuing blind execution of symbolic algorithms without consideration of their quantitative meaning, is the main reason for "innumeracy".
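The approximate, ratio-dependent character of this animal "number sense" can be sketched in a toy simulation (my own illustrative model, not one taken from the book or the commentaries): suppose each quantity is encoded as a noisy analog magnitude, with noise that grows in proportion to the quantity's size. Discriminating 6 from 7 then becomes much harder than discriminating 2 from 4, whatever the absolute sizes involved.

```python
import random

def discriminate(a, b, trials=10_000, weber=0.15, seed=0):
    """Fraction of trials on which the larger of two quantities is
    correctly judged larger, under a noisy analog encoding in which
    the noise grows with the magnitude (scalar variability)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        pa = rng.gauss(a, weber * a)   # noisy internal magnitude for a
        pb = rng.gauss(b, weber * b)   # noisy internal magnitude for b
        if (pa > pb) == (a > b):
            correct += 1
    return correct / trials

# Performance depends on the ratio of the two numbers, not their
# absolute difference: 2 vs 4 is easy, 6 vs 7 is much harder.
print(discriminate(2, 4))
print(discriminate(6, 7))
```

Under these assumptions, discriminability is governed by the ratio of the two numbers (Weber's law), which is the signature behavior reported in animal and infant studies.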

I disagree with Jaron Lanier, who thinks that "we find no occasion when it is desirable or even acceptable to confuse 1006 with 1007 or pi with 3". Of course we do - approximation is basic to the way we talk about and work with numbers. All languages throughout the world are full of expressions for approximation (e.g. I have ten or twenty books). Our ability to approximate calculations is certainly what distinguishes us most clearly from current digital computers, which can compute pi to the umpteenth decimal and yet have no idea what the result means. We know what it means, I think, partly because we can relate it to other quantities using our sense of approximation.

Certainly, our adult brain eventually must comprise multiple representations of numbers (quantities, dates, number line, arithmetic facts, etc.). Indeed, in some brain-lesioned patients these various representations may dissociate. Exactly how many of these representations are available to an adult is, I think, highly variable from culture to culture and even from individual to individual. We all start with a core "number line" for approximate numerical quantities, but our end state almost certainly is very different depending on our level of mathematical achievement. This need not mean that the basic architecture of the brain is transformed (here I concur with Rafael Nunez). Rather, we seem to have a limited ability to recycle our pre-existing cerebral networks for a different use than the one they were initially selected for. For instance, we seem to use a cortico-subcortical circuit initially dedicated to the storage of routine motor behaviors in order to store rote multiplication facts. Perhaps George Lakoff's notion of a "conceptual metaphor" will help us understand this recycling process (which does not mean that the brain is fully "plastic").

It is interesting to speculate on the role of music in the acquisition of higher mathematics (thanks to Jaron Lanier for pointing this out). It is often claimed that mathematicians are "gifted" for music. I believe that the causal relation may well be the converse. Children who take music courses are exposed to what amounts to highly abstract mathematical concepts very early in life, far earlier than they would be in a regular school curriculum. For instance, fractions and powers of two are fundamental to musical notation. Thus, early musical training may provide early mathematics training as a bonus.

Great mathematicians and prodigies (comments by Howard Gardner and Jaron Lanier)

One final thought about prodigies. I concur with Howard Gardner in thinking that "No one understands the origins of prodigies". However, I wanted to draw attention to the fact that there is really very little evidence that prodigies start in life with distinct neural hardware. Perhaps the most crucial difference lies in the amount of attention that they are willing to dedicate to a narrow domain such as numbers. Indeed, this is how some severely impaired autistic children eventually manage to learn about arithmetic or the calendar, after thousands of hours of training themselves. My belief is that they have little or no special talent for numbers or the calendar to begin with - indeed, how could neural networks be pretuned to the structure of the calendar, which is a very recent cultural invention? Whatever little evidence there is suggests that training alone can turn normal subjects into apparent "prodigies", if one is willing to dedicate enough time to it. I also think it is important to debunk the idea that prodigies use radically different algorithms from our own - they simply don't. Rather, they use simple variations or shortcuts that anyone can learn.

"Perhaps Einstein started an alternate neural numerical scratch pad", says Lanier. Who knows? There is a real chicken-and-egg problem here. Research has seemingly shown that Einstein's brain had an abnormally high concentration of glial cells in the inferior parietal region. Is this because he was born with this "malformation", however, or because he developed it through putting that brain region to frequent use? Understanding the neural bases of talent is bound to be among the most difficult questions that cognitive neuroscience will have to address.

One final word on Charles Simonyi's remarks

I would like to thank Charles Simonyi for drawing my attention to new anecdotes surrounding the number 1729. The story about Richard Feynman remembering that 1728 is 12x12x12 is particularly interesting, because it strengthens the idea that, indeed, 1728 is a very recognizable number, and that many people - including at least Ramanujan, Feynman, Simonyi, myself, and surely scores of others - have it stored in their mental number lexicon. This does not in the least diminish my admiration for Ramanujan's feats, but it does make them seem more human and understandable.

Charles Simonyi's notion that prodigies "navigate fluently" in their domain of predilection is very similar to the metaphor I use in my book - I talk about a landscape of numbers (or more abstract mathematical objects) within which prodigies and top mathematicians can move freely. These people claim to experience numbers in a phenomenal way, often within a spatial setting, and they claim that numbers and their properties immediately pop to mind. Furthermore, many claim to experience strong pleasure associated with this - some go as far as to prefer the company of numbers to that of other fellow humans! ("Numbers are friends to me, more or less," says Wim Klein. "It doesn't mean the same for you, does it, 3844? For you it's just a three and an eight and a four and a four. But I say: 'Hi, 62 squared!'").

It seems plausible, indeed, as Charles Simonyi suggests, that these people have "subverted" several of their mental organs to mathematical purposes - certainly their numerosity system, but also perhaps part of their spatial navigation system, their language system, and their internal reward system. Perhaps someday we shall be able to see this "recycling" of brain areas for a different cultural purpose using brain imaging. There are already a few examples of this in non-mathematical domains, for instance the recycling of visual cortex for Braille processing in blind subjects, or the recycling of part of the infero-temporal object recognition system for word identification in literate people.

My own view of prodigies is that their brain structure is initially not very different from the rest of us mortals. What is special is their ability to focus an amazing amount of attention on a narrow domain (such as arithmetic) and to get strong pleasure out of it. This gets them started, very early on in life, on a learning curve where they acquire more and more memories and automatisms about a domain, possibly subverting several initially unrelated brain systems to their passion, and eventually turning into experts. In a nutshell: passion breeds talent.

I am absolutely delighted to hear that Steve Pinker believes that the Computer Program Theory of Mind is "mad." I agree with him completely. It is mad.

However, the discussion I cited from "How The Mind Works" might lead other readers to interpret Pinker as saying something that he does not believe. If I misread Pinker (as I hope I have), other readers may misread him too. This is a fine opportunity to set the record straight.

The issue needs a bit of elaboration. One possible source of confusion is that there is not one "Computational Theory of Mind" but two, with variations on each. Those two principal computational theories are at odds with one another and the disagreement defines one of the major divisions within contemporary cognitive science. Here are the two computational theories of mind:

1. The Neural Computational Theory of Mind.

The neural structure of the brain is conceptualized as "circuitry," with axons and dendrites seen as "connections", with activation and inhibition as positive and negative numerical values. Neural cell bodies are conceptualized as "units" that can do basic numerical computations such as adding, multiplying, etc. Synapses are seen as points of contact between connections and units. Chemical action at the synapses determines a "synaptic weight" -- a multiplicative factor. Learning is modeled as change in these synaptic weights. Neural "firing" is modeled in terms of a "threshold", a number indicating the amount of charge required for the "neural unit" to fire. The computations are all numerical.
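The numerical picture just described - units summing weighted inputs and firing when a threshold is crossed - can be made concrete with a deliberately minimal sketch (my own illustration, not any particular modeler's circuit):

```python
def neural_unit(inputs, weights, threshold):
    """A toy 'unit': sum each input scaled by its synaptic weight,
    and fire (output 1) when the total activation reaches threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With two excitatory connections of weight 0.6 and a threshold of 1.0,
# the unit fires only when both inputs are active - a conjunction detector.
# "Learning" in this framework is nothing but a change in the weights:
# raising both weights to 1.1 turns the same unit into a disjunction detector.
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, neural_unit(pattern, weights=(0.6, 0.6), threshold=1.0))
```

The computations are all numerical, as the theory requires; structured connectionist models wire many such units into special-purpose circuits.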

The Neural Computational Theory comes in a number of flavors, each reflecting research programs that focus on modeling different kinds of phenomena: (1) highly structured special-purpose neural circuits that describe low-level functions, e.g., topographic maps of the visual field or assemblies of center-surround structures that form line detectors; (2) highly structured, sparsely connected, special-purpose neural circuits that model higher-level functions, e.g., high-level motor control, spatial relations, abstract reasoning, language, etc. (the so-called "structured connectionist models"); (3) layered, densely connected neural circuits for modeling general learning mechanisms (the so-called "PDP connectionist models"). These are not necessarily mutually exclusive approaches. Given the complexity of the brain, it would not be surprising if each were used in different regions for different purposes.

The fundamental claim is that "higher level" rational functions like language and thought are carried out in the same way as "lower-level" descriptions of the details of the visual system, of motor synergies, etc.

The Neural Computational Theory of Mind states that the mind is constituted by neural computations carried out by the brain, and that those neural computations are the ONLY computations involved in the characterization of mind. The result is a Brain-Mind, a single entity characterized by (1) the specific detailed neural architecture of the brain, (2) the neural connections between the brain and the rest of the body, and (3) neural computation.

The connections between the brain and the rest of the body are crucial to all this. The brain, after all, is structured to function in combination with a body. Its specific neural architectures, which are central to the neural computational theory, are there to perform bodily functions - movement, vision, audition, olfaction, and so on, and their structures have evolved to work with the bodies we have and with the kinds of systems that neurochemistry allows (e.g., topographic maps). Thus, the Neural Computational Theory is inherently an embodied theory.

Patricia Churchland and Terry Sejnowski's wonderful book, The Computational Brain, is about the Neural Computational Theory of Mind.

2. The Computer Program Theory of Mind (aka "The Symbolic Computational Theory of Mind")

In the Symbolic Computational Theory, a "mind" is characterized via the manipulation of uninterpreted symbols, as in a computer program. The "symbols" are arbitrary: they could be strings of zeroes and ones, or letters of some alphabet, or any other symbols as long as they can be distinguished from one another. Nothing the symbols "mean" can enter into the computations. Computations are performed by strictly stated formal rules that convert one sequence of symbols into another.

The symbols and the computations are abstract mathematical entities. In the general case, this kind of symbolic computational "mind" is disembodied, and nothing about real human bodies or brains is needed to define what symbols or rules can be. A mind is conceptualized as a large computer program, written in symbols. It is an abstract, disembodied entity with a computational structure of its own.

The symbol system becomes physical when it is "implemented" in some physical system like a physical computer made of silicon chips, or (it is often claimed) a human brain. The manner of "implementation" doesn't matter to the characterization of mind. Only the symbolic program does. This kind of "mind" thus has an existence independent of how it is implemented. It is in the form of software that can be run on any suitable hardware.
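To make the contrast vivid, here is a toy instance of the symbolic picture (an illustrative sketch of mine, not any theorist's actual proposal): addition performed by formal rules that match and replace symbol shapes. Nothing about what the symbols "mean" enters the computation, and nothing ties the procedure to any particular hardware.

```python
def rewrite(s, rules):
    """Repeatedly apply the first formal rule whose pattern occurs in the
    string, replacing one occurrence, until no rule applies. The rules see
    only symbol shapes, never meanings, and could be 'run' on any substrate
    capable of matching and replacing tokens."""
    while True:
        for pattern, replacement in rules:
            if pattern in s:
                s = s.replace(pattern, replacement, 1)
                break
        else:
            return s

# Addition over unary numerals as pure symbol manipulation:
# first shuttle every tally across the '+', then erase the '+'.
ADD = [("|+", "+|"), ("+", "")]
print(rewrite("|||+||", ADD))  # "|||+||" (3 + 2) rewrites to "|||||"
```

The same rule list yields the same result however the matching and replacing is physically realized, which is precisely the implementation-independence the Symbolic Computational Theory claims for minds.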

Of course, it is possible to MODEL a neural computational model of brain structure using such a general symbol system, and it is done all the time: one simply imposes severe limitations, modeling only neural units, connections, levels of activation, weights, thresholds, delays, firing frequencies, etc., and computing only the numerical functions that the neural units compute. The result is a model of a very specific physical system, the brain.

But this fact is not really germane to the Symbolic Computational Theory of Mind. Such models of how the physical BRAIN computes are not what the Symbolic Computational Theory claims a MIND is. Minds are to be characterized by symbolic computations that are supposed to characterize reasoning, for example, the kind of "reasoning" carried out by the pure manipulations of symbols in symbolic logic or in "problem solving" programs.

A special case of the Computer Program Theory of Mind is obtained by adding a constraint, namely, that the program be implementable by a human brain. Let us call this the Brain-Implementable version of the Computer Program Theory. In a Brain-implementable Computer Program Theory, the program is LIMITED by what a brain could implement, but nothing in it is DETERMINED by the structure of the brain. Its computations are not brain computations - they are still computer software that can presumably be "run on brain hardware." Naturally, such a brain-implementable computer program theory would allow the program to also be implementable on all kinds of hardware other than a brain. The "mind" defined by the computations of the program would be unaffected by how the program was implemented.

There is in addition a Two Minds theory, in which the mind is separated into two parts: one part of the mind works by the Neural Computational Theory and the other part of the mind works by the Symbolic Computational (or Computer Program) theory. The Two Minds Theory separates mind and body: it posits a form of faculty psychology in which there is a rational faculty governing thought and a language faculty governing language, which are autonomous and distinct from bodily faculties governing perception, motor activity, emotions, and all other bodily activities. In the Two Minds Theory, the Neural Computational Theory is reserved for the bodily functions: low-level vision, motor synergies, the governing of heartbeat rate, and so on are left to neural computation alone. But the "higher" faculties of mind and language are characterized by the Brain-implementable version of the Computer Program Theory, which works by symbolic computation. The Computer Program parts of the mind in this theory - the rational faculties and language - are characterized in a disembodied way, with no structure imposed by the brain, and can be implemented on either brain or nonbrain hardware.

Reading Pinker, I was (I hope mistakenly) led to believe that he had accepted the Computer Program Theory in the Brain-implementable version for rational functions and language. Here are some passages from both The Language Instinct and How The Mind Works that led me to the conclusion that he held such a theory.

In The Language Instinct, there is a chapter called "Mentalese." The title is from Jerry Fodor's Language of Thought theory of mind, which is a version of the computer program theory. On pages 73-77, Pinker describes a Turing machine, an instance of the Computer Program Theory of Mind, as "intelligent" (p. 76). On p. 77, he describes how the abstract symbolic representations might be implemented neurally. At this point he adds: "Or the whole thing might be done in silicon chips. . . Add an eye that might detect certain contours in the world and turn on representations that symbolize them, and muscles that can act on the world whenever certain representations symbolizing goals are turned on, and you have a behaving organism (or add a TV camera and a set of levers and wheels, and you have a robot)."

"This, in a nutshell, is the theory of thinking called "the physical symbol system hypothesis" or the "computational" or "representational" theory of mind. It is as fundamental to cognitive science as the cell doctrine is to biology. . . The representations that one posits in the mind have to be arrangements of the symbols."

There are also passages in How The Mind Works that sound as if Pinker is advocating a version of the Computer Program Theory. On page 24, Pinker says,

"This book is about the brain, but I will not say much about neurons . . . The brain's special status comes from a special thing the brain does . . . information processing, or computation."

One might think that here Pinker was leading up to the Neural Computational Theory of Mind, but then he says:

"Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them." He describes how a message might be carried by neurons, and continues, "Likewise a given program can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons . . . The computational theory of mind . . . says that beliefs and desires are information, incarnated as configurations of symbols. The symbols are physical states of bits of matter, like chips in the computer or neurons in the brain."

This sure sounds as though Pinker is accepting the Computer Program Theory of Mind in its Brain-implementable version. Other readers may not have been as badly misled on this matter as I was, but it will be useful to hear from Pinker why these passages are not versions of the Computer Program Theory of Mind (aka The Symbolic Computational Theory) in its brain-implementable version.

Indeed, later in the book (p. 112), Pinker seems to be advocating the Two Minds Theory:

"Where do the rules and representations in mentalese leave off and the neural networks begin? Most cognitive scientists agree on the extremes. At the highest level of cognition, where we consciously plod through steps and invoke rules we learned in school or discovered ourselves, the mind is something like a production system, with symbolic inscriptions in memory and demons that carry out procedures. At a lower level, the inscriptions and rules are implemented in something like neural networks, which respond to familiar patterns and associate them with other patterns. But the boundary is in dispute. Do simple neural networks handle the bulk of everyday thought, leaving only the products of book learning to be handled by explicit rules and propositions? ... The other view-which I favor- is that those neural networks alone cannot do the job. It is the structuring of networks into programs for manipulating symbols that explains much of human intelligence. That's not all of cognition, but it is a lot of it; it's everything we can talk about to ourselves and others."

This sure sounds like the Two Minds Theory with the Computer Program Theory of Mind applying to rational thought and language - "everything we can talk about to ourselves and others."

At this point, the Dehaene book becomes relevant. Since mathematics is part of rational thought, part of "everything we can talk about to ourselves and others," it would seem that Pinker is implicitly claiming that mathematical cognition too is to be characterized not by the Neural Computational Theory of Mind, but by the Computer Program (or Symbolic Computational) Theory. If so, this would seem to directly contradict Dehaene, who claims that very elementary arithmetic is characterized by neural circuitry in the brain, not by a symbol-manipulation system. Again, I may be misreading Pinker, and he can explain the apparent disparity. Dehaene's research seems to contradict what Pinker takes to be among his basic beliefs.

The issue, of course, is not just who advocates what position, but what the evidence is. What kind of evidence could separate out the Neural Computational Theory from the Two Minds Theory in which concepts, reason, and language are all characterized by the Computer Program theory (aka Symbolic Computation) in its Brain-instantiable version, while the bodily functions are characterized by the Neural Computation Theory? There is such evidence, and it comes down on the side of the pure Neural Computational Theory.

The argument hinges on the Two Minds Theory's use of faculty psychology, in which visual perception, mental imagery, motor activity, and so on are NOT part of the rational/linguistic faculty (or faculties). Neither Pinker nor anyone else these days proposes that human visual and motor systems work by symbolic rather than neural computation. So, if we can assume that the visual and motor systems work according to the Neural Computational Theory of Mind, can we show that the conceptual system, including human reason and language, makes use of aspects of the motor and visual system that use neural computation not symbolic computation?

The first evidence for such a view came in the mid-1970s, when Eleanor Rosch showed that basic-level categories in the conceptual system - categories like Car and Chair - made essential use of mental imagery, gestalt perception, and motor programs. (For discussion, see my Women, Fire, and Dangerous Things, pp. 46-52.) Similarly, research on the neuroscience of color vision indicated that the linguistic and conceptual properties of color concepts were consequences of the neural structure of color vision. More recently, contemporary neuroscience research has shown that visual and motor areas are active during linguistic activity.

Recent neural modeling research also supports the idea that the sensory-motor system enters into CONCEPTS and LANGUAGE. Terry Regier has argued that models of topographic maps of the visual field, orientation-sensitive cell assemblies, and center-surround receptive fields are necessary to characterize and learn spatial relations CONCEPTS and linguistic expressions. (See discussion in Regier's The Human Semantic Potential, MIT Press, 1995, especially Chapter 5, pp. 81-120.) In the past year, David Bailey and Srini Narayanan in their Berkeley dissertations have provided further arguments. Bailey demonstrated that verbs of hand motion in various of the world's languages, and hand-motion CONCEPTS, can be defined and learned on the basis of the motor characteristics of the hand - neural motor schemas and motor synergies. Narayanan, even more dramatically, showed that the semantics of aspect (event structure) in the world's languages, and its logic, arise from motor control systems, and that the same neural control system involved in moving your body can perform abstract reasoning about the structure of events. (For details, the dissertations can be found on the website of the Neural Theory of Language group at the International Computer Science Institute at Berkeley, www.icsi.berkeley.edu/NTL.)

These results should not be surprising. Our spatial relations concepts are about space, and it is not surprising that our neural systems for vision and for negotiating space should shape those CONCEPTS, their LOGIC, and the LANGUAGE that expresses them. Nor should it be surprising that our CONCEPTS about bodily movement, and their LOGIC and LANGUAGE, should be shaped by our actual motor schemas and motor parameters. And one should not be surprised to learn that our aspectual concepts - that is, our conceptual system for structuring, reasoning about, and talking about actions and events in general - are shaped by the most important actions we perform, moving our bodies, and that general neural motor control schemas should be used for structuring and reasoning about events in general. Furthermore, given that conceptual metaphor maps body-based concepts onto abstract concepts, preserving their logic and often their language, it should be no surprise that the Neural Computational Theory governing the detailed structures of our sensory-motor system ought to apply not only to sensory-motor concepts, but to abstract concepts based on them. This is exactly what has been confirmed in studies over two decades.

Dehaene's book presents an important piece of that evidence: that the rational activity of basic arithmetic is neural in character and is to be characterized by the Neural Computational Theory of Mind. Dehaene's work fits perfectly with recent work on conceptual systems and language in cognitive linguistics and structured neural modeling. The research that Dehaene cites - by himself, Changeux, and others - seems to disconfirm the Two Minds Theory and the idea from faculty psychology that there is an autonomous faculty of reason that humans have entirely and that animals have none of, with mathematics as an example of that faculty of reason.

If these results about basic arithmetic and body-based concepts are correct, as they seem to be, then the assumed faculty psychology is wrong. There are no separate faculties of reason and language that are fully autonomous and independent of the visual and motor systems. Instead, CONCEPTS, REASONING, and LANGUAGE make use of parts of the visual and motor systems. Since these must be characterized using the Neural Computational Theory, it follows that the Neural Computational Theory must be used in concepts, reasoning, and language. If Dehaene is right, as he seems to be, then the Neural Computational Theory needed to characterize the structure of basic arithmetic is also used in REASONING about basic arithmetic, which is a rational capacity. For this reason, the rational and language capacities cannot be characterized purely in terms of the Symbolic Computational Theory. Therefore, it would appear that the evidence falls on the side of the pure Neural Computational Theory of Mind. The Two Minds Theory does not work. What makes the Symbolic Theory of Mind for reason and language a "mad theory" (in Pinker's terminology) is that it does not fit the facts. Read the sources and make up your own minds.

Despite Pinker's writings advocating the Symbolic Computational Theory for reason and language, Pinker really ought to like the version of the Neural Computational Theory of Mind coming out of the Berkeley group, Regier's Chicago group, and other groups. In that version of the theory, neural modeling is done by highly STRUCTURED connectionist models (rather than PDP connectionist models). We agree with Pinker that conceptual structure, reasoning, and language require structure, and that is just what structured neural models of the sort we and others have been developing over the past couple of decades provide.

The field of neural modeling is evolving very quickly. At the time Pinker was writing How The Mind Works, Regier's book had not yet been published and some of the more important recent research on structured neural models of mind and language had not been completed. Perhaps Pinker was under the impression that the Neural Computational Theory could not characterize the kinds of conceptual and linguistic structures we now know it can characterize. Perhaps Pinker, correctly seeing that important parts of the structure in thought and language cannot be characterized by PDP connectionist models, and not being aware of structured neural models, was driven to what he saw as the only alternative: the "mad" Two Minds Theory, with Symbolic Computation providing the structure to thought and language.

I am encouraged by Pinker's present dismissal of the Computer Program Theory of Mind, even though he previously espoused it in his books. With the field developing this rapidly, changes in position are natural. There is no reason for us to disagree on this matter, given that we both recognize the need for conceptual and linguistic structuring, and given that structured neural modeling provides that structure in a biologically responsible way. I hope Pinker's dismissal of the Computer Program Theory means that he has given up on the Two Minds Theory and has adopted the sensible alternative that also best fits the facts - the Neural Computational Theory of Mind. The evidence warrants it.