Intelligence testing

Intelligence testing began in an organized way in the early 20th century with the Stanford-Binet test. The purpose was to form an estimate of a person's aptitude before investing time and money in their education, in the belief that aptitude or intelligence was a fixed quantity.

At the time these tests were developed, medical science and psychology were
in their infancy. Ideas such as eugenics still had currency, and intelligence tests were used to weed out so-called "defectives".

Testers classified people as idiots, imbeciles, morons, normal, above average, and geniuses on a numerical scale based on mental age. For children, dividing the mental age by the chronological age and multiplying by 100 produces the intelligence quotient (IQ), which was used for placement in slow or rapid classes.
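The classical ratio method described above can be sketched as a short calculation. This is an illustrative example of the historical formula only; modern tests use deviation scoring based on population norms instead.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> int:
    """Classical Stanford-Binet ratio IQ: (mental age / chronological age) * 100."""
    if chronological_age <= 0:
        raise ValueError("chronological age must be positive")
    return round(mental_age / chronological_age * 100)

# A 10-year-old performing at the level of a typical 12-year-old
# scores above 100; one performing at an 8-year-old level scores below it.
print(ratio_iq(12, 10))  # -> 120
print(ratio_iq(8, 10))   # -> 80
```

A score of 100 thus means, by construction, that mental age matches chronological age.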

A study based on survey data covering 702 Nordic respondents concluded that a typical neuropsychologist used 9 tests in a standard assessment, and 25 tests overall in their practice. The selection of tests was influenced by nationality, competence level, practice profile, and attitude toward test selection. Testing patients with psychiatric disorders was associated with the use of more tests.[1] The average IQ scores of many populations have been rising at an average rate of three points per decade since the early 20th century (the Flynn effect). It is not clear whether "people are getting more and more clever" or whether the rise merely reflects differences between past and current testing.[2][3]

In the United States, intelligence tests were used to classify recruits and draftees for service in World War II. The more technical jobs were assigned to men with higher scores. Sociologists also noted racial differences in IQ scores, and some supposed that these differences were inherent. A controversy over this matter has raged ever since.

Open questions for psychologists and sociologists focus on what produces differences in intelligence, whether intelligence can be developed or stunted by upbringing and education, and why different races or cultural groups have significantly different scores.

The two main explanations offered are:

that there may be inherent differences between the races

that upbringing, education and social factors can skew the results


Influence of environment

After nearly a century of IQ testing and educational reforms, some researchers have begun to assert that IQ can be influenced by the child's environment by as much as 30 points.[4] See Educational games.

A 1976 study that covered 258 children (IQ between 80 and 119) concluded that the major correlating factor is "a favorable parental social and educational background".[5] A more recent 2016 study also showed that measured IQ is significantly lower in developing countries[6] and increases as national conditions improve ([6] references Nigeria and Kenya).

Test results can also be influenced by other factors. Claude Steele and Joshua Aronson found that when they gave a group of Stanford undergraduates a standardized test and told them that it was a measure of their intellectual ability, the white students did much better than their black counterparts. But when the same test was presented simply as an abstract laboratory tool, with no relevance to ability, the scores of blacks and whites were virtually identical.[7] See Pressure and failure.

Liberal bias

The field of intelligence testing has been acrimonious and contentious. Researchers such as Arthur Jensen and Charles Murray have been pilloried in the mass media, accused of holding views that differ from, or are even the opposite of, their actual views.[8]

One professor had to come to Jensen's defense, after Stephen Jay Gould attacked him in print:

I object to Gould's tendency to visit the alleged sins of early investigators on present day investigators. If Goddard, Brigham, and others once tended to view various human races as relatively superior or inferior in intelligence and therefore relatively worthy or unworthy, this does not mean that present-day investigators, like Jensen (1980) or Rushton (1995), are necessarily guilty of such views. In fact, from my personal acquaintance with Jensen and his publications, I can attest that he does not view the African race (if one accepts that it is a race) as in any way less worthy than other so-called races. [5]

As soon as anyone argues that racial differences in intelligence are authentic, not an artifact of biased tests, everyone decodes that as saying the differences are grounded in genes. It is a non sequitur, but an invariable one in my experience. America's intellectual elites are hysterical about the possibility of black-white genetic differences in IQ.

As you know, The Bell Curve actually took a mild, agnostic stand on the subject. Dick Herrnstein and I said that nobody yet knows what the mix between environmental and genetic causes might be, and it makes no practical difference anyway. The only policy implication of the black-white difference, whatever its sources, is that the U.S. should return forthwith to its old ideal of treating people as individuals.

But how many people know this? No one who hasn't read the book. Everyone went nuts about genes, so much so that most people now believe that race and genes is the main topic of our book. [6]

The Bell Curve is not the only controversial study of IQ among different races or groups of people. Danish professor Helmuth Nyborg has conducted studies of IQ which indicate politically incorrect results such as: on average, white people have higher IQs than black people, and men have higher IQs than women. His latest research finds that on average, atheists have IQs about 5.8 points higher than people of faith, prompting similar outrage among his critics, who claim that such results might be due to cultural biases.[7]

Evolution advocate Gould portrays Murray as saying intelligence can be measured with a single number; Murray, of course, makes it clear in his book (as well as in a follow-up retort to Gould) that intelligence is too complex to be measured with a single number.