British biologist Charles Darwin (1809–1882) laid the foundations of the theory of evolution and transformed the way we think about the natural world. Few books have influenced human thought more than his On the Origin of Species. Published in 1859, it expounded his theory of natural selection, shocking society and revolutionizing science.

What Kinds of Creatures are We?: Noam Chomsky on the Scope and Limits of Human Knowledge

Noam Chomsky is a contemporary linguist, cognitive scientist, and political activist, known both for his theory of innate grammar and for his outspoken political commentary.

Simply Charly: The notion that certain mental structures are inborn and not derivable from our environments is one that goes back to the ancient Greeks, particularly Plato, who first raised this question in his works Meno and Phaedo. Much later, 17th-century rationalists such as René Descartes and Gottfried Leibniz further developed this area of inquiry. In fact, you have referred to your enterprise as Cartesian Linguistics. But while Descartes discussed the creative use of language, it’s unclear if he thought that language was itself an innate trait like mathematics or the idea of God. Can you elaborate?

Noam Chomsky: Just to clarify, I never referred to what I do as “Cartesian Linguistics,” and in fact explained at the beginning of my book Cartesian Linguistics (CL) that I am using the term as a cover for a set of ideas that are related to (and sometimes influenced by) Cartesian thought but are often developed by critics of Descartes. There is, strictly speaking, no Cartesian Linguistics. As discussed in CL, though Descartes’s usage of the term “idea” is not fully clear, he appears to use it broadly enough so that linguistic expressions generally, not just the individual concepts of which they are constituted, are within the range of ideas. If so, then language would fall under his dispositional concept of innateness—and it’s hard to see how the properties he alludes to in his scanty references to language could be acquired otherwise.

SC: In his book Human Knowledge: Its Scope and Limits, philosopher Bertrand Russell wrote: “How comes it that human beings, whose contacts with the world are brief and personal and limited, are nevertheless able to know as much as they do know?” You have referred to this quote with approval in your own work. Can you tell us how you see the relationship between the “scope” of human knowledge and the “limits” of human knowledge?

NC: I assume that humans are biological organisms, hence with scope and limits determined by their innate endowment. The genetic endowment that yields arms and a mammalian visual system bars wings and an insect visual system; etc. Scope and limits are intrinsically related, for organic creatures. Correspondingly, humans can acquire certain kinds of languages in the normal way, thanks to their genetic endowment, but not the communication system of bees (or, for that matter, other kinds of symbolic systems that we can invent). There is every reason to suppose that the observation extends to cognitive capacities quite generally. And as I’ve discussed in publications over the years on what I’ve called “problems” (which fall within our cognitive reach, in principle) and “mysteries” (which do not), there seem to be significant examples in intellectual/scientific history. I’ve discussed the matter most recently in my book What Kinds of Creatures are We?

SC: If we compare humans with other animals, we see two striking differences: humans have language, and humans are vastly more intelligent. Are these two differences separate, in your view? And if they are separate, which of them is the more basic one? Are we more intelligent because we have language, or do we have language because we are more intelligent?

NC: Darwin was only one of the many great figures who regarded language as the crucial property that distinguishes humans from other animals, for good reasons. Of course, there are many other distinguishing features of a general cognitive nature. As noted, humans cannot communicate as bees do, or navigate with the remarkable capacities of bees and ants. Or perform the astonishing memory feats of certain corvids—who, in turn, lack the “theory of mind” (TOM) that humans develop normally at about ages 3-4 (there is debate as to whether apes have TOM in some rudimentary form). More generally, the term “intelligence” covers a wide (and loosely characterized) range of capacities, and animals don’t line up simply in some Great Chain of Being in this regard.

SC: Speaking of Darwin, he found it hard to accept that mental phenomena can be studied scientifically: he once wrote that “thought is only known subjectively … the brain only objectively.” This is a widely held view. You don’t agree: can you tell us why?

NC: By now I think the question has been answered. There are scientific studies of mental phenomena, with many significant results. The mental computations that lead to semantically interpreted expressions and their externalization are one example, but not the only one.

SC: In your earlier work, you wrote a lot about Universal Grammar and relatively little about the (phylogenetic) evolution of language. In your recent book Why Only Us?, co-authored with Bob Berwick, you say a great deal about evolution and biology, and (relatively) less about language. You seem to have more confidence now in linking your work in linguistics with research in evolutionary biology. Why is this? Have the two fields converged promisingly, in your view, in recent times?

NC: There is some discussion of this matter in our book. In brief, concern with evolution of language arose along with the program of generative grammar in the ‘50s. In Eric Lenneberg’s 1967 classic Biological Foundations of Language, which founded modern biology of language, the topic is insightfully analyzed, and the topic regularly arose in conferences, classes and seminars, and other discussions. Little was published because there was little to say. That changed about 25 years ago when progress in the relevant fields made it possible to pose the basic questions in a relatively clear way: in linguistics, with simplification of basic principles that led to the formulation of the “minimalist program.” I should make clear, incidentally, that our book develops a point of view that is quite different from the mainstream of work in the field that is misleadingly called “evolution of language”—misleadingly, because languages do not evolve (nor does “language”), but rather the capacity for language, “universal grammar” (UG) in the terminology of the past half century, apparently a true species property: unique to humans in essentials and uniform among humans, pathology apart.

SC: In the recent book, Berwick and you say that “the Basic Property of language [is]: that a language is a finite computational system yielding an infinity of expressions” (p. 1). To a non-linguist, “a computational system” sounds like a system for doing mathematics, or something like Microsoft Windows. Can you explain what you mean by “computational” here?

NC: The most elementary property of the human language faculty is discrete infinity: there are 6-word sentences and 7-word sentences, and so on without limit, but no 6-and-a-half-word sentences. Rather like the natural numbers. The brain, in which the faculty is stored, is plainly finite. A finite system with infinite yield is based on rules and principles that apply to their own outputs. Thus rules of English yield the structure “John thinks that S,” where S could be “it’s raining,” and the latter sentence could fill the role of S in “Bill knows that S” (= “Bill knows that John thinks it’s raining”). And so on, in innumerable ways. A system of this sort is, by definition, a computational system. Such systems were given a firm foundation in the work of Alan Turing and other great mathematicians in the early part of the 20th century, making it possible, for the first time, to address a challenge that was posed at the outset of the modern scientific revolution by Galileo and the scientist-philosophers of Port Royal, who were awed by the fact that with a few symbols, humans are able to form an unlimited array of thoughts, and can externalize them to allow others to have access to the inner workings of their minds.
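The “rules that apply to their own outputs” that Chomsky describes can be sketched as a toy program (an editorial illustration, not part of the interview; the function name `embed` and the choice of embedding verbs are assumptions made for exposition):

```python
# A toy illustration of a finite system with infinite yield:
# one rule, applied repeatedly to its own output, generates
# sentences of unbounded length from a fixed, finite vocabulary.

def embed(sentence: str, depth: int) -> str:
    """Wrap a sentence in 'that'-clauses, depth times."""
    frames = ["John thinks that", "Bill knows that"]
    for i in range(depth):
        sentence = f"{frames[i % len(frames)]} {sentence}"
    return sentence

print(embed("it's raining", 0))  # it's raining
print(embed("it's raining", 1))  # John thinks that it's raining
print(embed("it's raining", 2))  # Bill knows that John thinks that it's raining
```

The finite program never changes, yet for every n it yields a grammatical sentence of n clauses and never a sentence of “six and a half” words, which is the discrete infinity the answer describes.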

SC: You also say that if a Martian scientist were to look at planet Earth, she would decide that there was basically only one multicellular animal (and similarly, only one human language). Please, can you expand on these claims, which many people will find astonishing given the diversity of animals and languages on planet Earth?

NC: Sixty years ago it was widely assumed that languages could vary almost arbitrarily and that the same was true of the diversity of organisms. So it often appears when systems are not understood. By now, biologists recognize that there are very narrow constraints on life forms, so much so that the thesis that there might be a universal genome has been seriously advanced and contemplated. I think that progress in linguistics has led in the same direction. To mention only one famous case, 40 years ago it appeared that many languages allowed virtually free word order with no hierarchy, in particular, Australian aboriginal languages studied extensively by the late Kenneth Hale, at MIT; Warlpiri was the most striking example. In more recent years, deeper analysis by Hale and his student Julie Legate revealed that in fact, Warlpiri shares the hierarchical structures of familiar languages, conforming to general principles of UG, despite what appears to be the case on the surface. There are a great many such examples. Many open questions remain, of course, but there seems to be good reason to anticipate further progress along the lines of what we have seen in biology in recent years.

SC: Many people in your field believe that language evolved as a tool for communication, that it was selected for this particular purpose. However, you’ve downplayed this aspect of language as ancillary, as an empty notion. Why?

NC: I haven’t suggested that it is an empty notion. It might have been the case, as is apparently true of animal systems. But this conventional doctrine appears to be false. As Berwick and I review in our book, there is mounting evidence from the fundamental design of language (and for other reasons) that language evolved as an instrument of thought, with externalization in one or another sensory modality (typically, but not necessarily, speech) an ancillary development, hence such particular uses of language as communication. These are not questions of doctrine but of fact.

SC: You suggest that by some “abrupt genomic/phenotypic shift,” possibly about 200,000 years ago, humans acquired a simple but powerful mental operation called “Merge,” which operates on “word-like elements” that you informally call “the lexicon.” I can see that animals probably lack “Merge,” but don’t some animals also have word-like elements?

NC: As we mention in the book, and as I have discussed much more extensively elsewhere, even the simplest word-like “atoms” of human language, those that are used to refer (“river,” “tree,” “person,” “city,” etc.), appear to be radically different from the elements of animal symbolic systems.
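The operation Merge raised in the question above can be caricatured in a few lines of code (my own illustrative sketch, assuming only the minimal characterization of Merge as binary, order-free combination of word-like atoms; the use of Python frozensets to model unordered sets is an expository choice, not anything from the book):

```python
# Merge as binary combination: take two syntactic objects (atoms or
# previously merged structures) and form a new unordered set from them.
# Frozensets model the idea that Merge builds hierarchy without
# imposing linear order.

def merge(x, y):
    """Combine two syntactic objects into a new, unordered one."""
    return frozenset([x, y])

# Word-like atoms drawn from the lexicon
np = merge("the", "river")       # {the, river}
vp = merge("crossed", np)        # {crossed, {the, river}}: hierarchy, no order
print(vp)
```

Because the output of `merge` is itself a legitimate input to `merge`, the single operation suffices to build structures of unbounded depth, which is why the question treats it as both simple and powerful.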

SC: You call your approach to language “Biolinguistics.” Can you pick out for us some exciting current work in Biolinguistics that non-specialists could look at? Or at least, can you name some of the interesting research questions that people are investigating?

NC: The term “biolinguistics” was suggested in 1974 by Massimo Piattelli-Palmarini as the name for the work in generative grammar that had been developing since about 1950. This work regarded the language capacity, and each individual language that is an instantiation of it, as a biological property of humans, roughly on a par with the human visual system and other components of the organism. That is the perspective adopted, for example, in the classic text of Eric Lenneberg—a close friend, who was one of the handful of Harvard graduate students who began developing these ideas from the early ‘50s. So all questions of the nature of language, its acquisition and use, its neural basis, its evolution, fall within biolinguistics, insofar as they are investigated within this general framework.