Monday, 29 June 2015

Among the many outrages in this troubled world, cheating in sport may not be considered a matter of the greatest importance. During an international football match a week ago a Chilean player patted the backside of Uruguayan player Edinson Cavani, and then stuck a finger up his bottom. Cavani responded with a flick of the back of his hand, at which point the Chilean fell to the floor, writhing in agony, and the referee then sent off not the offending Chilean but the provoked Cavani. Since I know my readers to be earnest seekers after the truth, they will be horrified by this outrageous provocation, and by the gullibility of the arbiter. On the other hand, if by chance any passing readers are of a more Machiavellian personality, and eschew rules in sports, then they will approve of the adroitness of the anal molestation and the subsequent play-acting by the perpetrator, which led to the diminished Uruguayan team losing 0-1.

What are the characteristics of cheaters in sport, that athletic activity in which fair play should be paramount? A description of psychopathic personality includes general poverty of affect (emotion), defective insight, absence of nervousness, lack of remorse or shame, superficial charm, pathological lying, egocentricity, inability to love, failure to establish close or intimate relationships, irresponsibility, impulsive antisocial acts, failure to learn from experience, reckless behaviour under the influence of alcohol, and a lack of long-term goals. Is any of this relevant to sports cheats?

The authors have looked at American Football (National Football League: NFL) players; American Basketball (National Basketball Association: NBA); and English Premier League Football (Soccer) players. They have studied two kinds of cheating, namely the use of performance enhancing drugs and breaking the rules of the game. In a black/white comparison they have looked at the proportions of each group in each sport, using a visual and biographical analysis of each of the sportsmen in the games to identify their race. They then calculated the percentages of blacks and whites identified as cheaters compared with their percentages among the players.

In the NFL black players in 2010 were over-represented in suspensions for drug use, for suspensions for more than 4 matches, for indefinite or entire season suspensions, and for being suspended more than once. A similar pattern was found for 2013.

In the NBA, black players in 2013-2014 were somewhat more likely to be fined or suspended, but the differences were not statistically significant.

In the English Premier League Football (Soccer) the authors studied the number of red cards handed out from 2006 to 2013. Interestingly, many players receive prior warning in the form of a yellow card for a minor offence, and thus know that they must be on their best behaviour.

However, these are not overwhelming differences. There is an over-representation of black players, but only in the 2006-7 season.

I will not pretend that I know anything about the first two sports, and have only a layman’s understanding of football, so I am absolutely open to correction on any of these matters. Nonetheless, I would not be persuaded by these findings that there is a significant effect overall. I turn to those who are more engaged in sports to either point out other studies or to suggest other data sets which could be examined for patterns of cheating.

So there you have it. Now back to the important matter. One devious finger up an innocent bottom, and Uruguay is unfairly cast out of the Copa America. I hope you will join me in demanding a replay. I am aware that one notable Uruguayan player has bitten opponents (and been suspended for his crimes) but if one cannot kick a football about of a Wednesday afternoon without unwarranted fundamental intrusion, what is the world coming to?

I have always had a warm spot for cold Finland. The people are much friendlier than their prices, which tend to be either high or higher. The Finns have a habit of singing about their landscape whilst drinking sahti, but no one is perfect. I spent some happy midsummer days in Vaasa, near the Arctic Circle, as the guest of Per Fortelius and family, meeting his friends, photographing the local architecture and doing some arctic-temperature wind-surfing.

Finland is the sort of place where they do things thoroughly, things like testing the intelligence of a total population cohort of Finnish males born in 1987 and following up the results. Gold dust.

They found that lower levels of intelligence are associated with greater levels of offending, that the IQ-offending association is mostly linear, with some curvilinear aspects at the very lowest and the very highest levels, and that the pattern is consistent across multiple measures of intelligence and offending. Criminal offending was measured with nine different indicators from official records, and intelligence was measured using three subscales (verbal, mathematical, and spatial reasoning) as well as a composite measure. In some ways this is exactly as predicted and already observed, since the available literature shows that individuals with lower IQ are more likely to engage in criminal behaviour.

However, the advantage of these data is that they deal with an entire birth cohort, so there are no distorting effects caused by the loss of a few miscreants who might account for lots of crimes. The population is restricted to males (n = 21,513) because only males in Finland do military service and sit the intelligence tests. Offending is judged from real documentary data, not from fallible self-report, which is even more fallible when painful memories are involved. Lastly, they have verbal, mathematical and spatial IQ measures, so they can investigate whether verbal intelligence has a particular effect, as some have argued.

Here are the results, for general intelligence, and all crime:

Note that violent crime is an order of magnitude higher in the bottom 20% of the population by ability than the top 20% of population by ability. The pattern is generally a linear one. The subscales of intelligence show the same pattern, though perhaps the spatial scores show a slightly less pronounced differential effect.

So, why do dull minds carry out criminal acts? The main effect is driven by general intelligence, so that raises a number of possibilities, in that highly g-loaded factors such as deficits in executive functions, including inhibition, processing speed, and attention, are potentially linked to criminal behaviour. People with higher levels of intelligence are more dependable (Deary et al., 2008b) and conscientious (Luciano, Wainwright, Wright, & Martin, 2006), suggesting that they are more likely to think about the moral consequences of their actions compared to individuals with lower levels of intelligence. People with lower intelligence have been found to act more impulsively (de Wit et al., 2007; Funder and Block, 1989). People with lower levels of impulse control and related constructs, such as low self-control, have also been found to be significantly more likely to engage in various forms of criminal and antisocial behaviour (Gottfredson and Hirschi, 1990; Moffitt et al., 2011; Pratt and Cullen, 2000). While only preliminary, current research suggests that lower levels of intelligence reduce the ability to weigh the costs and benefits of individual action, resulting in a greater propensity to make impulsive decisions, which in some cases involve illegal behaviour.

It is a minor finding, but the dullest are not quite the most criminal, an honour reserved for those in the 2nd decile of ability. It may be that those in the 1st decile are slightly restricted in their behaviours by their very low ability, and may be under supervision from care givers.

Equally minor, there is a slight uptick for criminality in the most intelligent, though hardly the torrent of criminal master-minds beloved of popular entertainments.

The authors say: “low intelligence is a strong and consistent correlate of criminal offending. For example, the risk of acquiring a felony conviction by age 21 is nearly four times (3.6) higher among those in the three lowest categories (1–3) of total intelligence as compared to those scoring in the top three categories (7–9). We observed differences of similar magnitude across each indicator of criminal offending and regardless of the measure of intelligence. We found no evidence for the hypothesis that deficits in verbal intelligence are more salient to criminal offending than deficits in other dimensions of cognitive ability.”

The authors mildly point out that, strictly speaking, these results may be confined to Finland. However, an easy test comes to mind: have a look at crime statistics in your country, and work out, for example, whether the crime rate for those below the 30th percentile rank is higher than those above the 70th percentile rank, and how much higher. Or, look at the crime rate in your country for those whose ability is equivalent to Finnish 30th percentile or below, which would be Greenwich IQ of 92 or below.

For group differences within nations use Emil’s calculator on tail effects for group distributions:

For example, if IQ 92 is the point below which criminality increases considerably, then 30% of the blue group are at risk, and 68% of the red group. In this way one can model what levels of crime would be expected if IQ were the main cause. A hypothesis worth testing.
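The tail-effect logic can be sketched with a simple normal-distribution calculation. This is a minimal illustration, assuming both groups are normally distributed with SD 15, and taking hypothetical means of 100 and 85 for the blue and red groups respectively (those means are my assumption, chosen to reproduce the 30% and 68% figures):

```python
from math import erf, sqrt

def fraction_below(cutoff, mean, sd=15.0):
    """Fraction of a normal(mean, sd) distribution falling below cutoff,
    using the standard normal CDF expressed via the error function."""
    z = (cutoff - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

# Share of each group below the hypothesised criminality threshold of IQ 92
print(round(fraction_below(92, 100), 2))  # ≈ 0.30 for a group with mean 100
print(round(fraction_below(92, 85), 2))   # ≈ 0.68 for a group with mean 85
```

A 15-point difference in group means thus more than doubles the proportion falling below the threshold, which is the crux of tail-effect arguments.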

Monday, 22 June 2015

On the day following the Solstice, which should have ushered in high summer, the English weather instead served up an imitation of winter, punctuated by uncertain rain, cold winds and brief glimpses of ironic sunshine. Probably the ancient spirits of Stonehenge have been disturbed by the presence of druidic revellers, and their clouds of stupefying substances have upset the very astronomical events they had gathered to celebrate. Cowering under this disordered weather I was searching for a distraction when an esteemed person of my acquaintance peremptorily instructed me from her nearby study that I had to watch the Royal Society Croonian Lecture 2015 given by Professor Nicholas Davies FRS on “Cuckoos and their victims: An evolutionary arms race”.

Prof Davies lectures well. He understands that a general audience responds best to clear and uncluttered language, and he strives to explain rather than to impress or confuse: a noble but dying tradition. His findings are displayed in simple formats which require no explanation, and there is only one mention of the word “correlation” which is then given as “good” and without a tedious number. Proper statistics. Even better, he works within a well-demonstrated evolutionary framework, so he has a good theory to rely on, and can use experimentation to weigh up different explanations, and map out the battle between host and parasite in exquisite detail.

Seen from the viewpoint of the hosts upon whom the cuckoo predates, they are horrible, lethal parasites. With arriviste insouciance the fledgling cuckoo begins life by expertly throwing the host chicks and the clutch of un-hatched eggs out of the nest. It is born to be genocidal, and having usurped the nest is then fed to massive proportions by the deluded parents, their devotion like that of concentration camp prisoners slavishly handing food to their guards. One wants to shout to the deceived birds: “Look right in front of you: your blood line is dying out”.

Prof Davies shows each stage in the evolutionary arms race: female cuckoos are able to remove a host egg from an unguarded host nest, lay their own egg in its place in 10 seconds, then fly away without a backward glance. Talk about absent mothers farming out their brood. Their eggs look like those of the host bird, and thus are often accepted. Host birds have in turn evolved techniques to spot the imposter eggs, and older host mothers learn to look for signs of foreignness, such as egg size and colour differences, and are thus better at rejecting the invader DNA. Better still, host mothers have evolved to “sign” their eggs with individual signals of colouration in complicated designs, the better to recognise their very own eggs and thus detect the odd one out. Cuckoos in turn have evolved to fake those signatures, and lay their forgeries in all nests, accepting a high rate of loss in return for high rates of acceptance in the few nests where they happen to achieve a close match.

Davies shows that the optimal acceptance/rejection rate for host birds depends on the parasite infestation level, and can show that when birds are moved from cuckoo-infested West Africa to a cuckoo-free Caribbean island the need for egg “signing” diminishes over many generations, and dies out. In places where the move to cuckoo free areas is more recent, the drift towards more similar eggs is less advanced.

Of course, birds are very different from primates, so we should allow ourselves some monkey snootiness, but evolution applies to all species. Did you, like me, identify with the host birds, and hate the parasites? Davies, quite understandably, speaks of Cuckoos and their “victims” but these are very human interpretations. In the light of evolution the cuckoo is as opportunist as any organism should be, if they want to flourish. The cuckoo is the itinerant chancer, the picaresque swindler living off the stupidity of the locals, who deserve to die out for their misplaced altruism, ripe for the plucking. Cuckoos outwit the trusting locals, are better at deception, and keep ahead by recruiting them into acquiescing in their destruction, outbred to death. From an evolutionary point of view, they are fit, very fit, until the hosts learn how to retaliate.

Host birds mob intruding cuckoos, which can scare them off, and certainly raises a hue and cry which warns other local birds that there is an interloper on the genetic prowl.

At that point the cuckoos have evolved new strategies, making their plumage ape that of acceptable birds. And so it goes on, an endless battle in which, as Lucretius says (in De rerum natura):

Some nations increase, others diminish, and in a short space the generations of living creatures are changed and like runners pass on the torch of life.

Friday, 19 June 2015

The “metric shift” illusion is a common ploy, particularly when someone has an axe to grind. “If everyone were to switch off just one light bulb, we could close a power station”. (Marvellous, but how many power stations are there? A small reduction in power consumption will lead to a small reduction in power generation, and not a kilowatt more than that). “Just one penny of tax will raise X million of money for good causes”. (Bless, but removing an additional penny from each pound of income will take a large amount of earnings from every citizen, and they probably regard their own choices as better than bureaucrats’ choices). “An enormous number of citizens are diagnosed every year with Horrible Disorder X”. (My sincere commiserations, but either give me the total numbers for all other disorders, or just give me the rate per 100,000 so I can put all disorders in all nations onto a common metric. Also, “diagnosed” is not equivalent to “about to die from”).

So, seeking to impress readers and recruit more of them, should my boast be that Psychological Comments has achieved 500,000 readers or that it now stands at Half a Million readers? I assume that the word “million” has a dramatic impact, but “half” does not have quite the same ring to it. It clearly indicates incompleteness, with much left to be achieved. True. Sticking to the bare numbers gives the appearance of due modesty and, as everyone across the world knows, the English have much to be modest about. I will restrict myself to simply 500,000.

Even conceding that we are talking about the common metric of page views, not the more important and elusive metric of actual readers, there seems to be some quickening of pace:

Page views    Date                Interval
0             23 November 2012    –
100,000       12 January 2014     415 days
200,000       4 July 2014         174 days
300,000       6 November 2014     126 days
400,000       17 March 2015       131 days
500,000       20 June 2015        96 days

The notable performer in this last period was “Gone with the Wind” a meta-analysis of twin research, which took second place in blog history in a matter of a week or two, displacing some of my well-established popular posts. The other big performer was the post on income, brain and race, which was boosted by a mention from Steven Pinker. Poorer children have smaller brains, but very probably not because they are poorer.

A word about blogging’s noisy younger brother: Twitter. To my shame, when I had an early adopter’s first fling with blogging and tweeting in 2009, I abandoned both within two weeks. There seemed to be no audience, and I certainly did not see any reason to tweet about my blog, because all the academics I knew were on email, not on Twitter.

Now Twitter has become a familiar gateway to my blog and also a terse conversation in its own right. This is not because of the number of followers, which even by a psychologist’s standard is a lowly 1,100, but because of their loyalty and impact. They re-tweet quickly, comment, and direct me to new work. The standard Twitter Analytics records that in the last 28 days my 289 tweets have been seen 210,000 times, garnering 452 re-tweets and triggering 6,121 visits to have a look at my profile. My ten or so tweets per day get 16 re-tweets and 23 favourites.

Of more interest is tweets ranked by re-tweets. In pole position is a graph of effect sizes for early childhood interventions: 58 RTs. The last three (truncated at the bottom of this screen grab) are 21, 21 and 18 RTs respectively.

Gratifyingly, 77% of my blog readers are below 35.

According to more restrained observers, page views are an inflated measure, counting both robots and visitors who leave after 10 seconds, never to return. On the contrary, perhaps page views are a much better measure of what has been read than “number of readers”, and certainly better than the number of books on a shelf. Glancing even at my diminished store of books, there are several I have not read, and others I have read only partially. It would be interesting if every book recorded and displayed exactly how many of its pages had been read. So many bookcases could be cleared of their surplus content, the virginal tomes discarded like spinsters, remaindered in ignominy.

I digress. The milestone is history, already surpassed after having been recorded. I really ought to go back to three papers which are half-read and awaiting comment. But the sun is shining, so I will consider the dilemma while having a coffee in the garden.

Thursday, 18 June 2015

The Rachel Dolezal story seemed too silly to comment on, but silliness thrives if left unchallenged.

A white woman in the US has pretended to be a black woman, lied on her application forms, and risen to represent a local chapter of the National Association for the Advancement of Colored People. She has mendaciously presented an unrelated older black man as her father and a younger black man as her son. She says she “self-identifies as black” and when asked precisely when she began to deceive people about her ancestry she replied: “I do take exception to that because it’s a little more complex than me identifying as black, or answering a question of ‘Are you black or white?’” She also described herself as “transracial” and said: “Well, I definitely am not white. Nothing about being white describes who I am.” She was in fact born to white parents, married a black man, had a child with him and later divorced, and had attended an almost all-black college where, reportedly, she complained that as a white woman she was not treated fairly.

In her mitigation, her parents had adopted 4 black children, so she must have concluded that either her parents were very kind people in the public and general sense of that word, or very unkind parents in a very personal sense, and apparently she eventually came to the latter conclusion, and they are estranged. Her parents, Christian missionaries, certainly kept making a point. A purely psychological interpretation (I occasionally indulge in those, so please bear with me) is that she became convinced that her parents loved black children four times more than her, and thus wanted to become black to regain their love. A less convoluted interpretation is that after a racially perplexing childhood which favoured Black Americans she just exploited a particular set of historic circumstances in the US in which her assertion that she was black was accepted “at face value” though inspection of her face showed she was not. Whatever her confusions, she appears to have played the system, choosing whichever self-identification seemed convenient at the time: either the aggrieved white person who had chosen a black college, only to be let down by perceived black racism; or the noble black person championing black causes, and being subjected to perceived white racist threats, for which the police could not find supportive evidence. Her parents, who shopped her to the Press for her deceit, might have done well to have kept quiet, and to have thought more carefully about their own contribution to her confused reactions. (Please note: I have given you the environmentalist/cultural explanation, which comes easiest to me. The genetic explanation is that she is the biological daughter of missionaries, and is imbued with missionary zeal, and as she ages she is becoming more and more like them in rescuing fallen Africans and battling for social justice).

Enough about this poor lady. If there is any doubt about who her real parents are, one good-quality DNA test will sort out her ancestry in exquisite detail, going back as far as anyone is interested. No such test result has been provided so far, and she doubts her parents are her biological parents, birth certificate notwithstanding.

Now for the real oceans of silliness: public figures and journalists in the UK have sought to excuse the deceit, not on the basis of the confusion very probably engendered by her parents’ adoption strategy, but on the basis of “race doesn’t exist because we are all confused about it, just as she is, and who can say what race they are anyway?”

In the Sunday Times a past leader of the Race Relations authorities followed this line, discussing discrepancies in census classifications but also using reference to past African slaves in London to explain the occurrence of sickle cell anaemia in white British populations and, by implication, their being somewhat African in racial terms. In fact, as regards racial self-classification, most people have no difficulty deciding from which genetic group the majority of their ancestors come. They look at themselves in the mirror, look at people round them or in books and films, and do a match. However, the “what is race anyway?” commentators have the megaphone, and broadcast their obfuscations as the new, fashionable position, thus: in certain situations race exists, as in race crimes (some people noticing the race of others, drawing unwarranted conclusions, and treating them badly) and in other situations race does not exist (some people pretending to be another race, and drawing unwarranted benefits). Race becomes a “now you see it, now you don’t” classification, a free pass to whichever charmed circle is desired.

Does one really need to spell out the difference between an emotional affiliation and the genetic code? People can support the cause of the Palestinians or the Israelis without claiming to share their DNA. A white person can be a supporter of black causes. Patently, this lady’s white parents did not pretend to be black when they adopted black children. Of course, individuals of mixed race can choose to favour one set of ancestry over another when they describe themselves socially, but they cannot change their genome. Accepting the occasional quirky racial self-description may be a courtesy socially, but it ceases to be credible when it is being used to deny ancestry and gain advantage.

The availability of genomic analysis will very probably lead to a much better classification of race. “Hispanic” needs revision, and perhaps just a scintilla of more specification should be applied to “Other”. The original classifications of race were a good match at a time when generation after generation had lived in relative geographic isolation. Now that about 1% of the global population are on the move, updating is required. Eventually the genetic code may substitute for census categories, even though self description usually matches the genetic facts pretty well. Race exists as a fact in the genome, whereas the classification of boundaries involves social choices, but so does the evaluation of poverty and inequality, and few of the social commentators want to abandon those latter concepts.

To me the main surprise is that we are truly living in an age in which a person can say “white is black and black is white” and be confident that no-one will have the courage to challenge them.

The kindest thing one can say about the silly obfuscators is that they are mired in the past. The genome is our ultimate birth certificate. It traces the history of the pairings that gave rise to us, and gives the lie to fanciful stories and evasions. Look at your genetic code, find your individual dot among the branch of your close relatives in the scattered family tree of 7 billion, and learn to live with it.

Monday, 15 June 2015

Peru did not come off well after being visited by Spaniards. That painful confrontation is the stuff of legend. W.H. Prescott’s History of the Conquest of Peru (1847) and, more recently, John Hemming’s The Conquest of the Incas (1970) are the books to read, the latter the best.

Peru is 45% Amerindian, 37% mestizo (mixed Amerindian and white), 15% white (European background) and 3% other (e.g., black, Japanese, Chinese). Geographically, the country is divided into three regions: the Coast (53% of inhabitants), the Andean mountains (38%), and the jungle (9%). Although domes and arches, iron smelting and wheeled vehicles were unknown during the Inca Empire, the Incas were brilliant stonemasons and goldsmiths, and used the quipu, a base-10 coding system of knots on strings, indicating that some sophisticated mental abilities were present in that ancient population. Subsequent history has not been happy, and in a 1979 visit my conversations with farmers in the countryside were about hard times, political stalemate and their shame at national backwardness in economic development. Things are much better now: life expectancy of 75 years; 74% of the population live with improved sanitation facilities; 78% live in urban areas; the under-5 mortality rate (per 1000 live births) is 18.2; and 77% are enrolled in a secondary school. However, Peru is bottom of the list on PISA exams, and has no universities rated among the top 500. The discrepancy between Peruvian low school performance and reasonable cognitive performance (mean IQ in Lima 96, Andean samples 78) requires explanation, which the authors seek to supply.

Their sample was 1097 children (46.5% male), with a mean age of 11.6 years (SD = 0.4), from 18 randomly selected schools in Lima (58.9% public, 41.1% private). They gave Raven's Standard Progressive Matrices and a local measure of socio-economic status and parental education based on a national housing quality assessment.

The first finding was that the results were skewed towards higher scores. The SPM is tilted towards upper-average performers, and the somewhat under-average are less well represented. (Good to see some data plotted: just the sort of statistics I can understand). Overall, the results are IQ 97, but the Flynn adjustment brings it down to IQ 91. From a philosophical point of view it is strange to apply the same adjustment to different cultures. I suppose one can argue that Peru has had three decades of improvement, but it is still far short of European living standards. Perhaps lifespan is the best measure, and the correction valid after all. The authors guesstimate the Andean IQ at 66-78 and the Amazonian IQ in the same range. They say:

Weighing the distributions of inhabitants who live on the Coast (53%), in the Andean mountains (38%) and in the Amazonia region (9%), the mean IQ for the entire country could be 84 [(91 ∗ 53 + 78 ∗ 38 + 66 ∗ 9) / 100], almost the same IQ previously estimated by Lynn and Vanhanen (2012) for Peru.

The authors then look at the implied IQs of the 573 recent arrivals out of the 1097 mothers, according to the region (coast, Andes, Amazon) in which they were born before emigrating to Lima. This is a little difficult, because we do not know whether these internal emigrants are brighter than the people they leave behind, which is usually the case. Also, although there were data on fathers' education, these are not utilized in this particular comparison, as far as I can see. (There is a supplementary file which may have them). The authors claim that there is an interaction effect which vitiates a biological interpretation of the basic differences between the three regional groups, but I am not sure about their argument. Here is what they say:

As expected, children whose mothers were born in the Coastal region had a higher mean IQ. The lowest mean IQ was obtained by children whose mothers were born in the Amazonian region. These results would favor the biological hypothesis (e.g. positive correlation between IQ mother and IQ offspring). However, the results indicated an interaction between genetics and environment. For instance, the mean IQ of children of Andean origin studying in Lima was higher than the estimated IQ of children who lived in the Andean zone (IQ between 78 and 66; Majluf, 1993; Raven et al., 2000). Effectively, our results indicated the following: a mean IQ of 84 (adjusted by the Flynn effect) for children whose mothers were born in the Amazonian region (N = 28; mean SPM score = 36); a mean IQ of 85 for children whose mothers were born in the Andean mountains (N = 147; mean SPM score = 37); and a mean IQ of 94 for children whose mothers were born in the Coastal region (N = 398; mean SPM score = 41). When weighing child distribution (in percentage) according to the place of birth of their mothers, the mean IQ for the partial sample (N = 573) was 91 [((5 ∗ 84) + (26 ∗ 85) + (69 ∗ 94)) / 100], the same IQ obtained when the total sample was used (N = 1121; IQ = 91).
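The weighted-mean arithmetic in the quoted passage can be checked directly. A minimal sketch, using only the figures the authors quote (regional IQs and population or sample percentages):

```python
def weighted_mean(values, weights):
    """Weighted mean of values, with weights given as percentages."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Whole-country estimate: Coast 91, Andes 78, Amazonia 66,
# weighted by population shares of 53%, 38% and 9%
print(round(weighted_mean([91, 78, 66], [53, 38, 9])))  # 84

# Partial sample: children weighted by mothers' region of birth
# (Amazonia 84 at 5%, Andes 85 at 26%, Coast 94 at 69%)
print(round(weighted_mean([84, 85, 94], [5, 26, 69])))  # 91
```

Both figures reproduce the authors' stated results of 84 and 91.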

If emigrating mothers marry coastal Lima fathers, a likely possibility, this would be sufficient to explain the partial uplift in the intellects of the resultant children. Selective migration, as described above, might also be involved.
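The mid-parent reasoning can be sketched with the standard regression-to-the-mean formula: the expected child score is the population mean plus the mid-parent deviation shrunk by the heritability. The parental IQs and the heritability value below are purely illustrative assumptions on my part, not figures from the paper:

```python
def expected_child_iq(mother_iq, father_iq, pop_mean=91.0, h2=0.6):
    """Expected child IQ: regress the mid-parent value toward the
    population mean by an assumed (hypothetical) heritability h2."""
    midparent = (mother_iq + father_iq) / 2
    return pop_mean + h2 * (midparent - pop_mean)

# Hypothetical pairing: Andean-origin mother (78) with a coastal father (91),
# in a Lima population with an assumed mean of 91
print(round(expected_child_iq(78, 91), 1))  # 87.1
```

An expectation in the high 80s, close to the observed 85 for children of Andean-origin mothers, shows that such pairings alone could account for much of the uplift.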

Fathers are included in the analysis of the education/SES effects, with the apparent finding that educational differences have an effect in the lower classes but not in the highest class. The authors say: “It should be noted that the size of relationship between parents' educational level and SES was not high (r = .387), due to the imperfect meritocratic social structure that exists in developing countries, such as Peru.”

However, the very well developed United Kingdom shows exactly the same pattern, as described by Daniel Nettle (2003). Note that correlations are higher for lower social classes, consistent with higher intelligence being a way out of those occupations.

The authors say: “countries like Peru need to improve and expand [ ] all the means relevant for educational and cognitive development. This begins [with] aggressive gains in health care and nutrition and concludes with university education that includes average students all the way to top ability levels. Finally, improvement in education will lead to significant advances in the cognitive condition for the next generation.”

I do not argue with improving education anywhere, but one notable omission in the paper is any analysis of results by racial composition. On this purely biological hypothesis the authors are silent. Nonetheless, this is a very useful study, carefully done, and makes a good contribution to the literature on intelligence in Peru. A re-analysis looking at paternal and maternal educational and genetic backgrounds would strengthen the interpretation of the results.

Final disclosure: I am not a descendant of Inca Garcilaso de la Vega, the first high-born mestizo whose bones, sent back by Spain, I saw interred in Cuzco cathedral in 1979 (preceded by an excellent declamation by the Town Mayor that being mestizo was no cause for shame), but when I saw his portrait it struck me as a stirring image of a chronicler, to which any scribbler could feel resemblance.

CCACE are pleased to host Professor Nick Fox for our last seminar before the summer break.

Professor Nick Fox from University College London is the Director of the Dementia Research Centre in London and is also a Consultant Neurologist in a cognitive disorders clinic. His research interests are in improving diagnosis in dementia and in using biomarkers to accelerate the search for effective therapies. Please find details of his talk below.

Talk title: "Imaging the onset and progression of neurodegeneration: Prospects for prevention?"

Abstract: There is now consistent evidence to suggest there is a long and detectable preclinical period to a number of neurodegenerative diseases including Alzheimer’s disease. Although the exact sequence and time course of biomarker and imaging changes in these diseases are unclear, in Alzheimer’s disease, cerebral amyloid deposition appears to predate neuro-degeneration and clinical decline by more than a decade. Hippocampal and brain atrophy rates become abnormal much closer to symptoms with pathological rates of loss evident around five years before clinical diagnosis.

As a result of this, and motivated by recent failures of Phase III trials in mild to moderate Alzheimer’s disease, there is increased interest in undertaking trials at a much earlier stage in the disease – when less irreversible neuronal loss has taken place and there is more to save – perhaps even before individuals have any cognitive symptoms. The first trials in presymptomatic familial and sporadic AD are underway and further studies are planned. Similarly there are now a number of initiatives in other neurodegenerative diseases such as frontotemporal dementia or Huntington’s disease where treatments will be trialed in presymptomatic or very early disease. Designing such “prevention” trials raises a number of challenges including how best to identify subjects for inclusion, how to assess how near to symptoms they are and how to assess progression. Imaging and biomarkers will have important roles to play in meeting these challenges.

This seminar is open to all and a wine reception will follow the talk. Please do send this email across your own mailing lists as this talk has a wide appeal to those interested in both pathological and nonpathological ageing. I look forward to seeing you there tomorrow. Many thanks, Beverly

Sunday, 14 June 2015

There are few territories more blessed with natural treasures, agreeable climates and regional cuisines than the Grand Hexagonal, that gorgeous chunk of Europe called France. By every environmental theory this part of the planet should breed a race of super-folk: Asterix the Gaul on natural geographical steroids, a fraternal band of clever Gallic communards. Instead, if we are to believe French self-perceptions, they have fallen into a morose morass of despondency. They have lost their joie de vivre, esprit d'aventure, esprit de corps, sens de la nationalité, and in a mangling of Bonnie Tyler’s song, they are truly Perdu en France.

What ails our French cousins? One of their problems seems to be that they are cursed with a reverse Flynn Effect, a degeneration of the national intellect. So argue Edward Dutton and Richard Lynn in “A negative Flynn Effect in France, 1999 to 2008–9” Intelligence 51 (2015) 67–70.

They have looked at a small, probably representative sample used by the Wechsler team in their 2011 French standardisation of the adult form of their general intelligence test. They say: The results of the French WAIS III (1999) and the French WAIS IV (2008–9) are compared based on a sample of 79 subjects aged between 30 years and 63 years who took both tests in 2008–2009. It is shown that between 1999 and 2008–9 the French Full Scale IQ declined by 3.8 points.

Please draw hard on your Gauloise, then bite the bar of Menier chocolate in your baguette, sip your Carte Noire coffee, and have a look at the results.

The Wechsler Adult Intelligence Scale III (WAIS III) was standardized in France in 1999 (Wechsler, 2000) and the Wechsler Adult Intelligence Scale IV (WAIS IV) was standardized in France in 2008–9 (Wechsler, 2011). The two tests were administered to 79 subjects (a separate sample from the 876 subjects who composed the broader French WAIS IV) who were aged between 30 years and 63 years (mean age 45 years), approximately half of whom took the WAIS IV first and half took the WAIS III first, in order to control for practice effects. The time between the administration of the two tests varied from between 6 and 76 days, with an average of 27 days' gap. The manual does not state whether there were significant differences in the test spacing between the two groups. However, the sample of 79 was a means of comparing the norms yielded by the two standardized samples. As such, if there were significant differences in test spacing between the two groups this would substantially undermine the purpose of administering the tests in this way.

There are a couple of problems. As the authors themselves make clear, 79 subjects is a rather small sample from which to draw any conclusions about the Fall of the Fifth Republic. The Wechsler team haven’t said much about the standardisation sample, not even the proportion that are recent immigrants. More important, in my view, is the whole business of giving contemporary subjects a new and an old test, and then making judgments about intellectual levels then and now. “Then” has been digested and is part of current collective knowledge, “Now” is still inchoate, fugitive and capable of surprising.

Giving the same subtests isn’t even a totally straight comparison: the constituent items can vary, and the actual responses (in terms of number correct per subtest) have to be converted to standardised scores using a table which attempts to bundle the raw results into ranges which are then expressed as a single standardised subtest score. For example, if a person gets a raw score of 12 out of 28 on a test, then the examiner looks up the conversion table to establish what the scaled score should be (mean score always 10, standard deviation always 3). Depending on the particular subtest, a range of raw scores may share the same standardised score. For example, raw scores of 12 or 13 might both have the same standardised score of, say, 9. The standardisation procedure loses fine detail, and the Flynn effect should be based on the most accurate measures possible.
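To illustrate the loss of fine detail, here is a sketch with an entirely invented conversion table (the real cut-points vary by subtest and test edition):

```python
# Hypothetical raw-to-scaled conversion table for one subtest.
# Scaled scores have mean 10 and sd 3; ranges of raw scores
# collapse onto a single scaled value, discarding fine detail.
raw_to_scaled = {range(0, 8): 7, range(8, 12): 8, range(12, 14): 9, range(14, 17): 10}

def scaled(raw):
    for raw_range, score in raw_to_scaled.items():
        if raw in raw_range:
            return score
    raise ValueError("raw score outside table")

print(scaled(12), scaled(13))  # both map to 9: a one-point raw difference vanishes
```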

The comparison of subtests is shown above. If subjects do better on the more recent test than the one designed a decade before, this suggests that the early test was based on norms for brighter persons, and that the current test has been dumbed down to a new, lower, average. Sure enough, save for the rather dull and mechanical Symbol Search information-processing task (find a particular letter in an array of letters) all the subtests are now a bit harder for contemporary test-takers. Their vocabularies have shrunk considerably. Either that, or Wechsler screwed up the choice of new words (the fall is impressive only if the words were the same). If even one or two are wrong in terms of contemporary word frequency, the large change is explicable. Digit span (which could have been shown in raw scores, because it is a real ratio measure) is unchanged. Arithmetic, which follows standard, eternal rules, is effectively unchanged. Comprehension and Information, which are hard to craft precisely in terms of difficulty, show the biggest apparent changes.

All that said, there is no particular reason that Matrix Reasoning should have dropped a bit, nor Picture Completion, nor Block Design. I would need to see both versions to see what, if anything, had been changed in all the subtests. In UK standardisations there are always perturbations in the new tests when compared with the old. Often the newer ones are less good in clinical practice.

The authors do well to draw this unremarked finding to our attention. They consider various explanations, but do not find any strong candidates. While it would be good to know how many recent immigrants were in the sample they judge it unlikely to have been so many as to influence the results greatly, and although it might be due to dysgenic fertility, drops of this magnitude have not been found in other countries. As a corollary, one cannot help but conclude that if indeed national intellect is leaking away so quickly, it may explain why French intellectuals are revered in their home country, and less admired elsewhere.

All this remains a French puzzle, best not mentioned whilst on holiday in that estate, lest it deepen the all-encompassing Gallic gloom.

Friday, 12 June 2015

Statistician AE Maxwell used to say, as I put my head cautiously past his open door and then sat in front of his desk “Have you plotted the data?” His doctoral thesis consisted of one factor analysis, done by hand, which reportedly took him almost three years. By that time, he had got to know his data.

Brian Everitt, in the room next to Maxwell in the Biometrics Department at the Institute of Psychiatry, used to add: “It is a big limitation of statistics that when you ask a question, you are given a number in reply. You should be given an answer to your question.”

With these paragons in mind it is a delight to be guided to Emil Kirkegaard’s site, where he plots the data and answers questions. Yes, there are some numbers, but they are closely linked to the plotted data, which aids understanding.

I know that my esteemed readers might regard all this as old hat, but I think it has great utility.

Restriction of Range

Psychology samples tend to be drawn from college students, and although it may be hard to believe sometimes, they are of above average intelligence. Even if one excludes only those of below average intelligence (try it with the slider set at a Z value of zero) that restriction reduces the variance by 63%. In standard present day university samples where IQ 115 is the minimum required, variance will be reduced by 80%. In proper, old style universities where IQ 130 is the entry requirement, the reduction in variance is 88%. I think this is very important, particularly when some researchers make claims about multiple intelligences based on Ivy League and Oxbridge students showing that some particular skill, say gastro-intestinal intelligence, is unrelated to g because the correlation is only 0.18, which in fact means that the general population correlation is very probably a much larger 0.50.
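Those variance-reduction figures follow from the mathematics of the truncated normal distribution. A minimal sketch, assuming perfectly normal scores and a sharp selection threshold, using only the standard library:

```python
import math

def phi(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):  # standard normal cumulative distribution
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def variance_reduction(z_cut):
    """Percentage of variance lost when sampling only above z_cut."""
    lam = phi(z_cut) / (1 - Phi(z_cut))      # inverse Mills ratio
    truncated_var = 1 - lam * (lam - z_cut)  # variance of the truncated normal
    return 100 * (1 - truncated_var)

for z, iq in [(0, 100), (1, 115), (2, 130)]:
    print(f"cut at IQ {iq}: variance reduced by {variance_reduction(z):.0f}%")
# roughly 64%, 80% and 89% — close to the 63/80/88 figures above
```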

Tail effects

“Small differences in means are great at the extremes”

Having repeated the quip, I should have added to it: “and small differences in standard deviations cause large perturbations”. Here it is again, ready for a tweet:

“Small differences in means are great at the extremes and small differences in standard deviations cause large perturbations.”

In this example Emil introduces us to the Blues and the Reds. These two tribes differ by one standard deviation on a score which is very similar to intelligence. That means that at a threshold of IQ 130 (old style good university) the proportions of Blue to Red students will be about 17 to 1. That is to say, if entry to such a university is based only on ability, that will be the ratio. If in addition the standard deviation of Red intelligence is a bit narrower (say only 14, and not the usual Blue sd of 15) then the ratio of Blue to Red will be 35 to 1 on intelligence alone. Please stick to Blue and Red, because that makes the concept easier for many people to understand.
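The tail ratios can be checked directly from normal tail probabilities; a sketch assuming both distributions are exactly normal:

```python
import math

def p_above(iq, mean, sd):
    """Upper-tail probability of a normal distribution."""
    z = (iq - mean) / sd
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

blue = p_above(130, mean=100, sd=15)       # one sd above the Blue mean of 100
red_same_sd = p_above(130, mean=85, sd=15)  # Red mean one sd lower
red_narrow = p_above(130, mean=85, sd=14)   # Red also slightly less variable

print(f"equal sds: {blue / red_same_sd:.0f} to 1")  # about 17 to 1
print(f"Red sd 14: {blue / red_narrow:.0f} to 1")   # about 35 to 1
```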

Regression towards the mean

This has been explained many times, but plotting the data helps. “Regression” implies a process which takes time: some magical shrinking or reversion to a primitive ancestral state. Partly this is due to psychoanalytic notions about childhood, partly due to an analogy with the loss of function which is part of ageing. Engaging ideas, but not what is being discussed here. I think I am in favour of the more general title of “errors in repeated measurements”. The simplest verbal explanation is to say that the more often you test someone the less their overall results will be affected by flukes, and if you select people on the basis of extreme scores at first testing, those individuals are unlikely to be so extreme at second testing, just because of testing un-reliabilities. Flukes get lost, because they are flukes.

I put in a test-retest reliability figure of 0.8 which corresponds to that observed for Wechsler intelligence subtests. Even in those subtests there will be an apparent regression caused by measurement error. As Emil notes, this may falsely create the impression that a group with low scores has been raised to a higher standard by some educational intervention carried out before they are re-tested. Ideally, one would re-test half the group who had obtained low scores first time round without giving them any educational intervention, in order to find out how much of the “improvement” was mere measurement error.
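A quick simulation makes the point: select low scorers on a first test of reliability 0.8 and, with no intervention whatsoever, their retest mean drifts back towards the average. The scores, sample size and selection threshold here are invented for illustration.

```python
import random

random.seed(1)

# Each observed score is a stable "true" component plus independent
# occasion noise, chosen so that observed sd = 1 and test-retest r = 0.8.
REL = 0.8
true_sd, noise_sd = REL ** 0.5, (1 - REL) ** 0.5

true = [random.gauss(0, true_sd) for _ in range(100_000)]
test1 = [t + random.gauss(0, noise_sd) for t in true]
test2 = [t + random.gauss(0, noise_sd) for t in true]

# Select extreme low scorers on the first occasion, with no intervention at all.
low = [(a, b) for a, b in zip(test1, test2) if a < -1.5]
mean1 = sum(a for a, _ in low) / len(low)
mean2 = sum(b for _, b in low) / len(low)
print(f"first test {mean1:.2f}, retest {mean2:.2f}")
# the retest mean is roughly 0.8 × the first-test mean: flukes get lost
```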

Even when you set test-retest reliability at 0.93 (true of Wechsler Full Scale IQ with 6 months between test sessions) then there is still a small regression slope of –0.07 and there will be quite a few outliers with large apparent changes in ability levels.

In conclusion, having these interactive visualising tools handy could help you make critical comments when reading 98% of psychology papers.

Thursday, 11 June 2015

You will be well aware that ever since brain imaging became cheaper neuro-bollocks has become more prevalent, since sticking a few persons in a scanner reliably leads to a publication. The best approach would be to get scanner-researcher-publishers into a dark basement and not let them out till they had agreed upon a) standard ways of conducting a scan b) standard ways of analysing a scan c) a few standard cognitive tasks to be done in the scanner d) a few cognitive assessments to be done outside the scanner and e) increasing the sample sizes and representativeness. Until that happy day, the best that can happen is that scholarly souls wade through the heterogeneous hodgepodge to elucidate some general features.

So, let us begin with the caveats: The original studies available for the meta-analyses show heterogeneity with respect to (a) the assessment of intelligence, (b) the cognitive challenges used during the measurement of brain activation, and (c) the consideration of potential moderator variables like sex or age. [We included only] studies that used measures from established tests of intelligence.

The tasks target different cognitive functions such as working memory, reasoning, mental rotation, and set shifting. We argue that despite this heterogeneity in task paradigms, summarising the corresponding studies in meta-analyses is well justified by the fact that many of these diverse cognitive demands are known to trigger very similar patterns of activation in the brain (Duncan & Owen, 2000). At the very least, this approach provides us with a conservative estimate of where in the brain intelligence makes a difference with respect to the strength of activation that is required for successful cognitive performance.

Let me add a bigger caveat: all these analyses show areas of activation which are then averaged. As Rich Haier has pointed out, individual brains show patterns of activation in a dynamic flux, the enchanted loom in action. Finding a way to categorise this choreography of thought is another way into the system, and might give more clues as to what is happening, about which we have no useful functional theories.

I know most of these researchers, and they are at the very serious end of the business, paying great attention to the reliability of measures, and with above average sample sizes, so they give the best chance of finding signals among the noise.

So, what meta-conclusions can be drawn from all these publications?

We found substantial convergence across studies as well as overlap with theoretical models of a fronto-parietal basis for intelligence. Our meta-analyses of structural grey matter correlates of intelligence identified widespread clusters of convergence across the brain. Notably, there was no overlap between brain regions identified as relevant for intelligence in the functional and structural meta-analyses, respectively. We propose an updated neurocognitive model for the brain bases of intelligence that includes insular cortex, posterior cingulate cortex and subcortical structures in addition to the previously considered frontal, parietal, temporal, and occipital brain lobes, and that explicitly distinguishes between structural and functional brain correlates of intelligence.

In sum, the P-FIT (2007) model is supported, but has needed to be extended. To my mild irritation, the results do not clearly support the finding that brighter brains show less activation because they have higher neural efficiency. Tired of hearing how people were being trained “to use more of their brain” I kept pointing out that the cool thing was to be able to solve problems by using less of your brain. The picture which emerged from these studies was mixed. On a brighter note, they make a good analytic point about the measurement of efficiency:

smart brains do not generally show weaker activation. Importantly, an interpretation of individual differences in brain activation in terms of differences in neural efficiency must take into account the associated behavioural performance. Only if behavioural performance (= effect) is equal across subjects, can efficiency be inferred directly from brain activation (= neural effort).

The authors say that 4 improvements are required:

(i) Continue the trend of studying larger samples. This is particularly important for individual differences analyses of brain imaging data to result in reliable results at the level of the original studies.

(ii) Test for an association between intelligence and brain characteristics across the whole brain — as opposed to using a priori defined regions of interest and thus excluding parts of the brain from the analyses.

(iii) Choose intelligence tests that are comparable across studies and that cover a broad range of cognitive demands that allow deriving measures of ability corresponding to different factors at the different levels of the hierarchy in intelligence structure.

(iv) Systematically test for moderator effects of sex and age on the association between intelligence and brain characteristics.

Here is an offer: I am willing to provide a cellar in central London, disguised as a small lecture theatre, with suitable supplies of refreshments, to let the international scanner gangs assemble and sort out their differences, merge their different territories, agree their enforcement techniques, and then distribute the nectar of neuro-intelligence to a public thirsting for knowledge. Get your people to talk to my people.

Monday, 8 June 2015

If there is one watchmaker who can claim to have mastered time it is Patek Phillipe.

I must explain that I do not own so much as an infinitesimal sliver of this Geneva based family company, nor am I in their pay, nor do I own any of their watches, though I am able to press my nose against their shop windows and hope to own one some day. It is simply that I can recognise good work when I see it, and though I have not been paid for this advertisement, the London representative of the company can send me their simplest watch as modest recompense. They do not need the publicity, but an optimistic frame of mind is probably an advantage in these matters.

Clocks are a commonplace now, but their invention and development did more than record the passage of time: they began to change the conception and pace of time in everyday life, taking us away from the sun and moon, which had seemed time enough to our ancestors, down into the inner workings of a secret world, linked to the stars but closer to our heartbeat, the spinning oscillation of a fluttering balance wheel, the watchmaker’s version of the pendulum.

There is an intricacy to clockwork best seen under a stereoscopic microscope, particularly when there are several layers of workings stacked one upon another. The company’s craftsmen and craftswomen were on duty when the big corporate exhibition came to the Saatchi Gallery in London (which ended yesterday). It takes craft workers about 10 years to get the required skills, and the guy from Copenhagen to whom I spoke was an engaging teacher, taking the attentive crowd through the power chain, and following the lines of force through each bearing down to the controlling spinning balance wheel and then back up again to the watch face.

The simplest watch (one of those, please) was launched during the First World War, and it remains a classic, because it does the very basic task elegantly, showing the time with roman numerals, and the seconds in a sub dial. We think of this arrangement as the essentials of a watch now, but it was an innovation then, when the larger fob watch was more usual. Taking a fob watch out of a waistcoat pocket and prising open the gold cover lid was a little time consuming when one had to check the time so as to blow a whistle at a precise moment, and go over the top to get one’s head blown precisely off. A flick of the wrist was faster, thus giving rise to a gesture we make many times a day, without noticing, and usually without being in peril of dying in the trenches.

Wealthier clients want complications: mechanical contraptions to display perpetual calendars, the moon and the stars, or even melodious miniature gongs which repeat the most recent passing of the hours in sonorous recapitulations. Perhaps I complain about these contrivances too much, but to my taste they seem to be trying too hard, cramming more and more movements into larger cases, till even such a gem as the 1933 Graves super-complication belongs in a library case, not on a wrist. I cannot restrain myself when the craftsman finishes his tour of the intense mechanical world, to wickedly ask if they themselves wear a simple Casio quartz watch. They give me a wan smile, but the doubting enquiry is apposite: quartz is more accurate, and what watchmakers regard as complications the modern quartz timepiece regards as simply one of many features: stop watches, lap counters, phases of moons and sun, the time in other countries, the tides and even, at any moment but usually at night, quiet communication with atomic clocks of scary precision, so that the humble wristwatch now out-measures the observable universe. The equation of time (drift of clock face time with respect to solar time) has now been superseded.

Worse, mechanical clocks cannot easily make the watch I most want, one which shows day length, sun rise and set, and moon phase and rise and set for any place on earth. Digital clocks can do that with relative ease.

This carping misses the point. The death of the mechanical watch, much trumpeted, and which almost happened during the quartz invasion 30 years ago, has been postponed once again. Collectors see in these ticking mechanisms the antiques of the future, more likely to survive than corruptible electronic circuits, able to surmount most calamities and vagaries of fashion, to give pleasure always. Timepieces make time visible in a way that electric circuits never can, showing what clever monkeys we are, and making every ticking cog reveal the spinning world on which we live, time suspended, time measured, time lived and time enjoyed.

Is that enough? An officer’s watch, as made for the London Exhibition. No need to gift wrap, I will take it out on my wrist.

Saturday, 6 June 2015

There has been a furore in England about an exam question which made people think. Many students did not take kindly to it, and complained it was unfair. They felt angry and frustrated, and wanted the pass mark to be lowered. The examining body Edexcel explained that they had designed it to test “the full range of abilities” (which may count as a rare public admission that students vary in ability). In more precise terms one could say that the examiners were trying to distinguish between students who could solve individual mathematical problems one operation at a time, and those (a minority) who, without any specific guidance, could integrate the individual steps into a full solution to a novel problem.

Readers of this blog will know that this sounds very familiar: it is the shift at around IQ 115-120 from being restricted to specific examples and written instructions, to being able to gather information and make inferences: the level at which students have to think for themselves.

In the former case each step has to be rehearsed, but tends to be seen as a task on its own. In the latter case students realise that they are being given tools they can apply in novel circumstances: they learn general principles, and apply them generally. In a nutshell, in designing this question the examiners were searching for A and A* students, or in my terms, Tribe 5.

Readers of this blog will also know that I try to confess my errors as quickly as possible, so you should know that I did not get the problem right. I will give you the problem and ask you to solve it, just for your own fun. I know that my readers are persons of distinction, and will have a go and keep their workings, and not immediately jump to the BBC link provided to find the school answer. Take a separate piece of paper, which you mark with the date and time, and then draw a line underneath at the end of your solution. Use Registrar’s ink, which darkens with age, and lasts at least 500 years.

I then turn to my main task: I want to simplify the problem so that it contains no algebra, and can thus be used as a puzzle for the general public. I regard myself as a disciple of Gerd Gigerenzer, and so this is intended to be a pace or two in his footsteps.

There are n sweets in a bag.

6 of the sweets are orange.

The rest of the sweets are yellow.

Hannah takes at random a sweet from the bag.

She eats the sweet.

Hannah then takes at random another sweet from the bag.

She eats the sweet.

The probability that Hannah eats two orange sweets is 1/3

(a) Show that n^2 - n - 90 = 0

(b) Solve n^2 - n - 90 = 0 to find the value of n

Once you have solved that, look at the slightly simpler version:

There are an unknown number of sweets in a bag.

6 of the sweets are orange.

The rest of the sweets are yellow.

Without looking in the bag, Hannah takes a sweet from the bag.

She eats the sweet.

Without looking in the bag, Hannah then takes another sweet from the bag.

She eats the sweet.

The probability that Hannah, just by chance, got out two orange sweets from the bag (took one orange sweet out which wasn’t put back in the bag, then took another orange sweet out which wasn’t put back in the bag) is 1 in 3.

(a) Show what you think are the number of orange and yellow sweets before Hannah eats any sweets, and then after she eats the first sweet and the second sweet.

(b) Work out how many sweets were in the bag to begin with.

My reworking of the problem is to remove the algebra. I know that this is a maths exam, and that knowing algebra is part of maths, and that it might help you to solve this problem, and many other problems. However, I want to avoid people being frightened off by algebraic notation and thus not realising that they might be able to solve it by using natural frequencies. In that vein, I also got rid of “random” and explained it instead. I use “1 in 3” because people might find that easier than a fraction, which some adults do not understand. In a major simplification, I am also reminding people that the sweets, once eaten, do not go back into the bag (which I forgot in the second phase when attempting a quick and lazy calculation myself).

What I am attempting to do is to remove any of the “surface” distractors and put the problem into its most practical and essential form. This allows us to judge whether the difficulty lies in the format of the question, or in the irreducible complexity of the arguments to be considered. Pace Gigerenzer, most doctors fail probability questions when they are couched in percentages and symbolic logic, but solve them easily when they are presented in natural frequencies, say as patients out of 1000 people in the population.
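In the same natural-frequencies spirit, the bag size can be found by brute force, checking each candidate number of sweets in turn. Be warned that running this gives the answer away:

```python
from fractions import Fraction

# With 6 orange sweets and two draws without replacement, search bag
# sizes n for which the chance of two orange draws is exactly 1 in 3.
# Exact rational arithmetic avoids any floating-point fuzziness.
solutions = [n for n in range(7, 50)
             if Fraction(6, n) * Fraction(5, n - 1) == Fraction(1, 3)]
print(solutions)  # → [10]: the unique bag size satisfying the condition
```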

Here are some explanations as to why some question forms mislead, but do not necessarily tell us much about underlying complexity.

I notice that the official accounts dare not mention the notion of intelligence, which is a pity. The OECD-funded Pisa study also avoids the topic. Some critics argue that children are not being taught properly, and that maths education needs to be improved. Of course, that may well be true, but some ideas are hard to grasp, even when we make them as clear as we possibly can. An individual’s intelligence level is found at the point where problem complexity defeats problem solution. Something has to give, and it is usually the problem solver.

Friday, 5 June 2015

It may seem sacrilegious to suggest otherwise, but perhaps we make too much of early childhood, and harbour illusions which are, quite frankly, blank-slate-ist. Blank-slate-ists, as you already know, are deluded persons who imagine that children’s minds are a blank slate upon which care-givers can impose their will. Leave aside the fact that from birth onwards it is evident that neonates differ in their reactions and behaviours, notably whether they are calm and easy to care for, or fractious and unable to sleep through the night (easy, difficult or slow to warm, as Alexander Thomas and Stella Chess described them in the New York Longitudinal Study on temperament and personality, started in 1956). Blank-slate-ists ignore or downplay temperamental differences, and believe that the waking moments of early childhood have sacrosanct status, and are ripe for heroic interventions. I should confess at this juncture that, for most of my professional life, I have been a big fan of early childhood intervention studies, so I am struggling with my prior assumptions at this point. Onwards, rational beings, onwards.

A converse view is that kids will be pretty much OK so long as they are given the basics that most families provide: a reasonable life, though nothing fancy, which is how most humans were raised for most of history. Indeed, prior to 1840, getting too attached to children before the age of 7 did not make much sense, because one third of them died before that age. John Graunt’s 1662 comments on the Bills of Mortality show that in England only 64 in 100 persons survived to age 6. In fact, there was little point getting too attached to teenagers, since another third of them had died by 16. By the way, the 1993 figures are ancient history, because now people live for ever (almost).

(The bottom of the slide shows that in 1662 only 1 in 100 made it to 76 years of age; in 1993, 70 in 100 reached that age.)

I digress. Even when children have a 99+% chance of surviving into adulthood, it is still an open question whether extra education should be given to all children prior to age 5, or at age 11, or 13, 14, 15 or only at precisely the age when an individual child encounters a problem. Early intervention studies can only follow general risk guidelines, like betting that poor and dull mothers will have children who, from an environmental rather than primarily a genetic point of view, will benefit from an early childhood compensatory education program. Waiting until the child is older, while providing standard subsidised education from 5 or 6 or 7 onwards will give good results for most children, and extra help can be targeted on those who really need it precisely when and how they need it.

In his paper to the London Conference, Andrew Sabisky argued in favour of targeted later interventions. Most striking, to my mind, was a meta-analysis of the early childhood intervention studies, the key figure from which I reproduce below.

Sabisky has gathered a lot of data for his presentation, so you should look at the slides first, then at the comments which accompany them, and then at the review paper.

My impression is that he has made a good case for abandoning the view that early childhood is a special period for learning (which the Danes have known for ages), and that you might be better off letting kids enjoy family life and the usual experiences of childhood until their brains are capable of beginning academic learning, in the age range 5 to 7 according to ability and interest.

Even better than his publication list, Ritchie has an
open-minded and optimistic attitude to research. Better still, he has infinite
patience with clever sillies, and answers their expostulations and obfuscations
with gentle reference to good quality contemporary research.

These characteristics stand him in good stead in taking up
the unenviable task of bringing Ian Deary’s “Intelligence: A Very Short
Introduction” up to date. Although it is a hard act to follow, Ritchie proves a
very worthy successor, capable of explaining without condescending, and
confident enough to develop his own prose style, which is clear, direct, uncluttered
and at the service of his intended aim: to give an up to date summary of the
modern science of intelligence.

The book clears up confusions briskly, with a good
understanding of the essentials. The almost compulsory and off-putting walk
through the graveyard of past researchers is dumped in favour of simply picking
up the important themes and amplifying them. Edinburgh is making history in
intelligence research, so why dwell on anything but the best ideas?

The content sticks to the brief: all that matters. The
concept of intelligence is introduced, testing explained, why intelligence
matters explained, the biology laid out (virtually all of this brand new research),
the “boost your IQ” meme dissected (stay at school longer?) and the IQ controversies
patiently listed and their varying claims teased out and answered. In no other
scholarly topic have the protestors and hecklers against intelligence been
given such an easy ride in popular culture, so it is good to see them made to
wait their turn at the end, rather than poison the well of research before
anyone can drink from it.

I think that it is unwise to summarize a book which is
itself a summary of contemporary research and debate, so I will make only a few
detailed comments. You will be better off reading the book than reading any
further exposition about what is already a concise book.

Quibble: I am not as convinced as Ritchie that the Norwegian
“experiment” in increasing the years spent at school really boosted IQ by 3.7
points a year. I think this claim is
based on the “difference-in-differences” statistic, which I find problematical.
The matter is statistically complex, so I would not argue my view over Ritchie’s
with any great confidence, but it seems an unlikely result.
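For readers unfamiliar with the statistic in question, a difference-in-differences estimate compares the change over time in a group exposed to a reform with the change in an unexposed group, attributing the gap between the two changes to the reform. The sketch below is purely illustrative: the function name and all the numbers are my own hypothetical examples, and are not taken from the Norwegian study or from Ritchie’s book.

```python
# Illustrative difference-in-differences (DiD) sketch.
# All numbers are hypothetical and for exposition only; they are NOT
# results from the Norwegian schooling study discussed above.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """DiD estimate: (change in treated group) - (change in control group)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical mean test scores before and after a reform that added
# extra schooling for the treated cohort only.
effect = diff_in_diff(treated_before=100.0, treated_after=104.5,
                      control_before=100.0, control_after=101.0)
print(effect)  # 3.5 points attributed to the reform
```

The whole estimate rests on the “parallel trends” assumption: that, absent the reform, the treated group would have changed by the same amount as the control group. Scepticism about that assumption is one reason the statistic can be found problematical.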

Quibble: On the contentious matter of racial differences in
intelligence, although I agree with Ritchie that the data are not good enough
to resolve the matter beyond reasonable doubt, I think that on balance of
probabilities about half of racial intelligence differences are probably due to
genetics. However, I can see that it is hard to get very far with this topic in
a brief introduction.

In my opinion “Intelligence: All That Matters” is the best available
short introduction to intelligence, and word for word the most effective. It
shows a keen understanding of the misconceptions which bedevil the public
understanding of the subject, and the scholarly benefits of replying to these
wild imaginings with cool evidence. Indeed, as many intelligence researchers
know to their cost, so much baggage has been heaped upon the word
“intelligence” that there is a temptation to retreat to euphemisms: cognitive
abilities, learning styles, executive functions, and learning readiness. Anyone
who reads this book will understand that intelligence is real, and has
important consequences. This is intelligence re-claimed.

I hope the fact that Stuart Ritchie knows a great deal about
his subject will not stop his book from being sold in very large quantities. So
many other books about intelligence are effusions of babbling, evasion and
misapprehension that a work of education imparted by a real researcher deserves
a very wide readership. As Ritchie concludes: the intelligent way forward is
that which helps us uncover the science of what makes us differ in this most
human of attributes.