How intelligent are intelligence tests?: Whitehead responds

Dear readers. Dr. Charles Whitehead wrote a long and thoughtful response to my earlier post on the Flynn Effect, but I worried that comments might not be read as often (or as carefully) as the main posts, so I’m taking the liberty of giving Dr. Whitehead his own post. For more about Charles Whitehead’s work and his online activities, see Charles Whitehead: Social Mirrors here at Neuroanthropology.

From an anthropological point of view cognitive scientists are being less than rational when they treat intelligence scales as though they are measuring something fundamental and innate in human beings. No doubt innate abilities are used by people when they tackle IQ tests, but it is unlikely that such abilities evolved under selection pressure for this kind of problem solving.

Intelligence scales are culturally embedded artifacts designed to meet the idiosyncratic needs of postindustrial western societies, and reflect the equally idiosyncratic assumptions found in the west – such as our habit of referring to someone as “brainy” when we mean “intelligent”, and the widely held assumption that brains got bigger during human evolution because of selection pressure for “intelligence” (and/or language: e.g. Deacon 1992). The idea that human intelligence is the ultimate pinnacle of biological evolution may be little more than colonialist propaganda, suggesting that “scientific” societies are the ultimate pinnacle of cultural evolution – and hence morally entitled to dominate others who formerly managed perfectly well without the blessings of “modernity”.

Sir Francis Galton devised the first intelligence test in the late 19th century, and this was followed by the scale developed by Alfred Binet and Théophile Simon between 1905 and 1911 (Atkinson et al., 1993: 457-8). As early as 1884 Galton examined more than 9,000 visitors to the London exhibition and found to his chagrin that eminent British scientists could not be distinguished from ordinary citizens on the basis of head size (ibid: 458). Ever since, the kinds of assumptions Galton made have continued to pervade scientific thinking with little or no empirical encouragement.

A more recent and curious example is the spate of papers attempting to correlate brain size with “intelligence” as assessed by (notably) the Wechsler Adult Intelligence Scale (Andreasen et al., 1993; Egan et al., 1994, 1995; Peters, 1995; Flashman et al., 1998; Rushton & Ankney, 1995, 1996, 2000; Vernon et al., 2000; Thompson et al., 2001; MacLullich et al., 2002; Staff, 2002; Drachman, 2002). Some of this research has provoked controversy over issues of anthropological concern, including ethnocentrism, sexism, and racism (Peters, 1995; Rushton & Ankney, 1995, 1996). Researchers did indeed find a positive correlation, and this has been acclaimed as a vindication of the studies and the underlying ideology of “big-brained people are smarter” (McDaniel, 2005). In point of fact, a meta-analysis of 37 studies, involving 1,530 people, yielded a best estimate for the population correlation (r) of 0.33 (McDaniel, 2005), suggesting that the intelligence factors measured share only about 11% (r2) of their variance with brain volume, and cannot account for the bulk of brain expansion during the last 2.5 million years.
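The 11% figure is simply the coefficient of determination (r-squared) computed from the correlation quoted above; a minimal check, using only the numbers cited in the text:

```python
# McDaniel's (2005) best estimate of the population correlation between
# measured IQ and brain volume, as quoted in the text above.
r = 0.33

# The coefficient of determination (r-squared) gives the proportion of
# variance in one variable statistically shared with the other.
shared_variance = r ** 2

print(f"r = {r}, shared variance = {shared_variance:.1%}")
# prints "r = 0.33, shared variance = 10.9%" -- i.e. roughly 11%
```

Note that r-squared describes shared variance between the two measures, not a causal claim in either direction.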

Why the scientists concerned should feel they have achieved something useful becomes all the more mysterious when you realize that many components of the scales used assess culturally acquired skills which were invented in historic times – particularly numeracy and written language (other questions address institutionalized factors such as money and banking, and none can be claimed with confidence to be free from cultural conditioning). Even today, some preliterate societies lack numbers higher than two, and others count no higher than five. Numeracy and literacy originated with the bureaucratic needs of the first civilizations along the river valleys of the Nile, Tigris, Euphrates, Indus, Ganges, and the Yellow River in China. These of course post-date the agricultural revolution (around 10,000 years ago).

An analysis of 217 fossil crania (De Miguel & Henneberg, 2001) – the largest sample we have to date – suggests that average human cranial capacity just prior to the agricultural revolution was around 1,500 cm3, which is about 12% larger than the average human capacity today (1,340 cm3). In other words, intelligence scales measure abilities that developed at a time when brains were most probably getting smaller, and any correlation between such abilities and brain size is less than informative (since it serves only to reinforce current biases in cognitive science). All these studies would seem to be a prodigal waste of research funding and resources – a waste that could easily have been avoided with a little anthropological input.
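The “about 12% larger” claim follows directly from the two averages cited from De Miguel & Henneberg (2001); a quick check of the arithmetic:

```python
# Average cranial capacities as cited in the text above (cm3).
pre_agricultural = 1500  # just prior to the agricultural revolution
modern = 1340            # average human capacity today

# How much larger pre-agricultural brains were, relative to modern ones.
difference = (pre_agricultural - modern) / modern

print(f"Pre-agricultural average is {difference:.0%} larger than today's")
# prints "Pre-agricultural average is 12% larger than today's"
```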

Currently the dominant theory of brain expansion in primates is the social or Machiavellian intelligence hypothesis, which holds that social intelligence makes greater cognitive demands than object intelligence. So why did all these researchers choose individualistic instruments such as the Wechsler scale (1939) rather than, say, Gardner’s (1983) measures of multiple intelligences? Gardner argued that social, musical, artistic, and “bodily-kinaesthetic” (including dance and sports) skills have been important since the “dawn of civilization” whereas logical scientific thought only came to the fore after the European Renaissance (Atkinson et al., 1993: 476). But the very term we use to define our species – Homo sapiens – presupposes an evolutionary trajectory ultimately directed towards the production of scientists.

The idea of a “general” (as opposed to social) intelligence is at best dubious. The point can be illustrated by a brain scanning study which contrasted “theory of mind” (ToM) with “non-ToM” stories and cartoons (Gallagher et al., 2000). The investigators assumed that brain structures activated by non-ToM stories and cartoons were “general reasoning” areas, and only those uniquely activated by ToM stories and cartoons were “ToM” (i.e. social reasoning) areas. They concluded that ToM involves a rather small area in ventromedial prefrontal cortex. However, the “general reasoning” areas were much more strongly activated during ToM than non-ToM tasks. It would seem more reasonable to infer that “general reasoning” involves a subset of social reasoning areas, and that “general intelligence” is a spin-off benefit of social intelligence. Animals that score most highly in laboratory studies of intelligence and language are invariably highly social – such as chimpanzees, dolphins, and Congo grey parrots.

The discovery of the Flynn effect should have alerted us by now to the culturally conditioned limitations of western intelligence scales. These tests may predict academic performance in western institutions, but they cannot provide reliable information about innate functions of the human brain. More plausible views of the social brain and human brain expansion (in my opinion, of course) can be found at http://www.socialmirrors.org. The Human Evolution page is not yet up, but relevant papers are referenced on the Social Brain page and my own papers can be downloaded from here and from my CV (see About Charles Whitehead: Publications).

Welcome

Neuroanthropology is a collaborative weblog created to encourage exchanges among anthropology, philosophy, social theory, and the brain sciences.
We especially hope to explore the implications of new findings in the neurosciences for our understanding of culture, human development, and behaviour.
If you would like more information, please contact Greg Downey at Macquarie University greg.downey (at) mq.edu.au (remove spaces).