Psychological Nativism

In the field of psychology, nativism is the view that certain skills or abilities are ‘native’ or hard-wired into the brain at birth. This is in contrast to empiricism, the ‘blank slate’ or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs. Some nativists believe that specific beliefs or preferences are hard-wired. For example, one might argue that some moral intuitions are innate or that color preferences are innate.

A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism (which argued that the content of consciousness can be explained by the association and reassociation of irreducible sensory and perceptual elements).

Nativism has a history in philosophy, particularly as a reaction to the straightforwardly empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate. The philosopher Immanuel Kant (1724–1804) argued in his ‘Critique of Pure Reason’ that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one – causality (a way to describe how different events relate to one another) – which presupposes the others.

Nativism is most associated with the work of Jerry Fodor, Noam Chomsky, and Steven Pinker, who argue that we are born with certain cognitive modules (specialized genetically inherited psychological abilities) that allow us to learn and acquire certain skills (such as language). For example, children demonstrate a facility for acquiring spoken language but require intense training to learn to read and write. In ‘The Blank Slate,’ Pinker cites this as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition). A number of other theorists have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.

The concept of Universal Grammar is the linguistic link to nativism. It consists primarily of principles and parameters. Principles typically take the form ‘if a language has feature X, it will also have feature Y.’ Parameters, in addition, describe certain switches that can be observed across languages: ‘a given language has either attribute X or attribute Y, but not both, and not neither.’ Work done with creoles (stable natural languages that develop from the mixing of parent languages), which show linguistic attributes not seen in any of the parent languages, and the poverty of the stimulus (the assertion that natural language grammar is unlearnable given the relatively limited data available to children learning a language, and that this knowledge must therefore be supplemented by some sort of innate linguistic capacity), which holds that children learn language from incomplete input, both contribute to the idea that there is linguistic knowledge hard-wired into the brain.
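The principles-and-parameters structure described above can be sketched as a toy model. This is purely illustrative: the feature names below are hypothetical placeholders, not items from any actual linguistic inventory, and the checks simply encode the two logical forms mentioned in the text (implication, and exclusive-or ‘switches’).

```python
# Toy sketch of principles and parameters. The feature names are
# hypothetical, invented only to make the two logical forms concrete.

def check_principle(language, x, y):
    """Implicational principle: 'if a language has X, it also has Y'."""
    return (not language.get(x, False)) or language.get(y, False)

def check_parameter(language, x, y):
    """Binary parameter: the language has X or Y, but not both, not neither."""
    return language.get(x, False) != language.get(y, False)

# A made-up language description, keyed by (hypothetical) feature name.
toy_language = {
    "verb_object_order": True,    # one setting of a word-order switch
    "object_verb_order": False,   # the mutually exclusive alternative
    "wh_movement": True,
    "overt_wh_marking": True,     # feature implied by wh_movement here
}

print(check_principle(toy_language, "wh_movement", "overt_wh_marking"))      # True
print(check_parameter(toy_language, "verb_object_order", "object_verb_order"))  # True
```

The point of the sketch is only that both rule types reduce to simple Boolean constraints over a language's feature set, which is why they are often described as innate ‘switches’ that exposure to a particular language flips one way or the other.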

Nativism is sometimes perceived as being too vague to be falsifiable, as there is no fixed definition of when an ability is supposed to be judged ‘innate.’ (As Jeffrey Elman and colleagues pointed out in ‘Rethinking Innateness,’ it is unclear exactly how the supposedly innate information might actually be coded for in the genes.) Further, modern nativist theory makes little in the way of specific testable (and falsifiable) predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of ‘psychological creationism.’ As influential psychologist Henry L. Roediger III remarked, ‘Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him.’

Some researchers argue that the premises of linguistic nativism were motivated by outdated considerations and need reconsidering. For example, nativism was at least partially motivated by the perception that statistical inferences made from experience were insufficient to account for the complex languages humans develop. In part, this was a reaction to the failure of behaviorism and behaviorist models of the era to easily account for how something as complex and sophisticated as a full-blown language could ever be learned. Indeed, several nativist arguments were inspired by Chomsky’s assertion that children could not learn complicated grammar based on the linguistic input they typically receive, and must therefore have an innate language-learning module, or language acquisition device.

Over the last several decades, with the advent of more complex and sophisticated brands of mathematics such as complexity theory and game theory, it has become increasingly apparent that extremely complicated systems can evolve from agents with few (if any) pre-programmed rules. Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success. Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (distributional hypothesis: the theory that words that occur in the same contexts tend to have similar meanings).
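The distributional hypothesis mentioned above can be illustrated with a minimal sketch: words are represented by counts of the words that co-occur near them, and similarity is measured between those count vectors. The tiny corpus, the window size, and the specific words compared are all assumptions made up for this example, not data from any real study.

```python
# Minimal sketch of similarity-based generalization under the
# distributional hypothesis: words used in similar contexts end up
# with similar co-occurrence vectors. Corpus and window are toy choices.
from collections import Counter
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug the cat ran".split()

def cooccurrence_vector(word, tokens, window=2):
    """Count tokens appearing within `window` positions of `word`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == word:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cat = cooccurrence_vector("cat", corpus)
dog = cooccurrence_vector("dog", corpus)
print(cosine(cat, dog))  # fairly high: 'cat' and 'dog' share contexts here
```

On this view, a child who has never heard a new word used in a given construction could still generalize its usage from words with similar distributions, without any grammar-specific innate machinery.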