Friday, 7 November 2008

The short answer is: because the selection and training process ruthlessly weeds-out any interesting people

Scientists are, as a group, dull and getting-duller – duller both in the sense of being less intelligent and in the sense of being more boring. And the science they produce is increasingly dull – although its tediousness is often concealed by shamelessly dishonest hype and spin.

This dullness is not accidental but a product of the fact that scientists are not even trying to do interesting research, funders are not prepared to fund interesting research (because it has a high risk of failing to deliver) and most journals are not keen to publish interesting research (because it is more likely to be wrong).

The premier scientific and medical journals have almost abandoned science reporting in favour of political advocacy or politically-correct moralizing. I hear that Nature has plans to recognize reality and rename itself Journal of the Theology of Climate Change; the British Medical Journal is to become Newsletter of the National Health Service Bureaucracy and the Lancet will soon be Acta AntiWar Propagandica…

No, I’m kidding, but it is almost believable.

Why are scientists duller than journalists?

The point is the editors and journalists running even the premier journals – those having the pick of modern science – themselves find science too dull to bother writing about. And they are too often correct.

The science journalists are themselves a clue. We need to ask why the smart and interesting people who nowadays run the premier science journals (and the many similarly-talented folk who work in the media generally, including bloggers) are functioning as pundits instead of doing science themselves.

The answer is obvious enough: being a modern scientist is too dull. In particular the requirement for around ten to fifteen years of postgraduate training before even having a shot at doing some independent research of one’s own choosing (but more likely with the prospect of functioning as a cog in somebody else’s research machine) is enough to deter almost anyone with a spark of vitality or self-respect.

And the whole process and texture of doing science has slowed-up. Read the memoirs of scientists up to the middle 1960s – doing science was nimble, fast-moving. Many experiments could be set-up and done in days. For the individuals concerned there was a palpable sense of progress, a crackling excitement.

Now there is an always-expanding need for advanced planning, committee permissions, and logistical organization; combined with a proliferation of mindless and damaging bureaucracy. The timescale of scientific action and discourse has gone up from days and weeks to months and years.

What a contrast with journalism! Where is the equivalent of journalism’s hourly and daily stimulus in the life of a scientist? The kind of person attracted to modern science is (I presume) somebody who likes long-term project management, especially form-filling; and who can persevere through difficulties without wavering in determination or changing tack (especially not deviating to explore unexpected leads or insights).

The filtering-out of intelligence and creativity

The kind of individual who can plough through endless years of coursework, a PhD, and cycles of postdoctoral training; and can stay out of trouble with their peers until they – eventually – get a long-term or tenured position; is on average going to be characterized by personality attributes of conscientiousness and agreeableness. The modern scientist who has passed these tests of character is not likely to be the kind of awkward, abrasive and somewhat wildly-creative personality which characterized many of the greatest scientists of the past.

Nor are the modern scientists likely to be as intelligent as in the old days, because IQ and the personality trait of conscientiousness are only slightly (or, some people suggest, inversely!) correlated. This means that greatly increasing the demand for perseverance in a training program will inevitably tend to depress the IQ of successful trainees.

The addition of 5-10 years to science training over the past 40 years means that those who now survive to apply for permanent positions are indeed more conscientious than the scientists of yore. But since the most intelligent people are not always the most conscientious, this enhancement in perseverance has been achieved at the serious cost of filtering-out some of the highest-IQ scientists. When appointing independent scientists in the fourth decade of their lives, we are scraping the barrel for the attributes of high intelligence (and creativity).
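The statistical point can be illustrated with a toy simulation (the correlation of 0.1 between IQ and conscientiousness is an assumption for illustration only, as are the selection threshold and sample size): if training weeds out all but the most conscientious, and conscientiousness is only weakly correlated with IQ, then the mean IQ of the survivors ends up barely above the population average.

```python
import random

random.seed(1)
r = 0.1          # assumed (weak) IQ-conscientiousness correlation, for illustration
n = 100_000      # simulated population

people = []
for _ in range(n):
    c = random.gauss(0, 1)  # conscientiousness, standardised
    # IQ correlated r with conscientiousness; mean 100, SD 15
    iq = 100 + 15 * (r * c + (1 - r**2) ** 0.5 * random.gauss(0, 1))
    people.append((c, iq))

# Training 'weeds out' all but the top 5% on conscientiousness
survivors = sorted(people, reverse=True)[: n // 20]
mean_iq = sum(iq for _, iq in survivors) / len(survivors)
print(round(mean_iq, 1))  # barely above 100: selecting hard on perseverance does little for IQ
```

Under these assumptions the survivors average only a few IQ points above 100 – whereas selecting the top 5% directly on IQ would yield a mean around 131.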

We can only conclude that science is dull mainly because its requirements for long-term plodding perseverance and social inoffensiveness have the effect of ruthlessly weeding-out too many smart and interesting people.

The smart and interesting people instead gravitate to fast-moving fields like journalism (or finance, or management, or entrepreneurship of many types) where they get hourly or daily stimulus, and have a chance of following their own inclinations and making their mark before reaching their mid-forties.

Since the people who nowadays eventually emerge from the lengthening pipeline of scientific training are quite different from the scientists of 50 years ago, they naturally tend to move science further in the direction which created their own success. So modern scientific leaders often elevate the requirements for very long periods of tedious make-work, and judge scientists mainly by their capacity for steady and reliable production.

Needed: more clever crazies

At the same time, high level journalism in science and medicine is full of very high IQ people who are virtuosically able to manipulate words and concepts (and, sometimes, numbers); but who often lack the common sense of a new-born kitten and indeed frequently propagate world views which are near-psychotic in their detachment from social reality.

These clever crazies should be working as scientists, not journalists! Science is the activity that really benefits from this kind of brilliant unorthodoxy, puts it to use in generating, critiquing and testing new ideas, and passes it through the evaluative social mechanisms of science which tend to filter-out the mistaken craziness and leave-behind the correct craziness.

Instead, these idiots savants are going into journalism after graduating from the best universities, where they infuse their naïve and lunatic perspectives into the realms of public policy discourse.

On the whole, I believe that these brilliant fools usually do a lot more social harm than good as journalists - but either way, their personal contributions are invariably ephemeral. They have sacrificed long-term creative and constructive satisfaction for short-term stimulation and mischief-making. It is hard to blame them for making this choice – but this situation is neither optimal for the individuals nor for society at large.

What should be done? How can science be reformed and re-structured to enable the kind of people who now work in journalism and punditry to become the kind of people who work as scientists?

Can science again become a career that attracts and rewards the most intelligent and most creative individuals (even, or especially, when they are serious oddballs)?

One thing is for sure: the answer is not going to come from within science.

One of C.S. Lewis’s most famous arguments in support of Christianity is that the instinctive but otherworldly yearning emotion of ‘joy’ (in German, Sehnsucht) implies that there exists some means of satisfying this urge; otherwise humans would not experience it.

This is sometimes termed the ‘argument from desire’. In brief, it states that because humans profoundly and spontaneously desire something not of this world, the experience suggests the reality of the supernatural. Lewis used the argument in many of his best known Christian writings. In Mere Christianity, he argues that ‘[i]f I find in myself a desire which no experience in this world can satisfy, the most probable explanation is that I was made for another world’. In ‘The Weight of Glory’, he notes that ‘we remain conscious of a desire which no natural happiness will satisfy’. And in the autobiographical Surprised by Joy, he comments that ‘[i]n a sense, the central story of [his] life is about nothing else’.

But Lewis is not the only one among his friends to formulate an argument from desire. Perhaps the idea’s most powerful and compelling exposition can be found in a little-known and recently-published (1993) story by Lewis’s great friend J.R.R. Tolkien; a tale which was written in about 1959 and appears in the middle of Volume X of The History of Middle-earth, edited by Christopher Tolkien and published in twelve volumes between 1983 and 1996 [1]. Since The History of Middle-earth is read only by Tolkien scholars and enthusiasts, this wonderful dialogue is at present little known or discussed.

It is, of course, no coincidence that both Lewis and Tolkien should write of the argument from desire, since Lewis’s own conversion to Christianity was shaped by this argument: both Tolkien and Hugo Dyson used it in the famous late-night conversation of September 1931 on Addison’s Walk in Magdalen College – an event which was recorded by both Lewis and Tolkien. Tolkien’s epistolary poem ‘Mythopoeia’ (addressed to Lewis) outflanks the counter-argument that this is mere wishful thinking or day-dreaming by asking the question: ‘Whence came the wish, and whence the power to dream?’ And Tolkien used the argument again in a letter to his son Christopher dated 30 January 1945, in reference to the human yearning for the Garden of Eden:

…certainly there was an Eden on this very unhappy earth. We all long for it, and we are constantly glimpsing it: our whole nature at its best and least corrupted, its gentlest and most humane, is still soaked with the sense of ‘exile’ [2].

But in ‘The Marring of Men’, Tolkien makes the argument from desire the basis of a fiction – and, as so often, Tolkien’s most personal concerns are most powerfully expressed in the terms of the mythic ‘secondary world’ he created.

‘The Marring of Men’

Tolkien’s story was never formally named – but probably the most compelling of its alternative titles was ‘The Marring of Men’, which I have adopted here. In The History of Middle-earth, the story is given its Elven name, ‘Athrabeth Finrod ah Andreth’, translated as ‘The Debate of Finrod and Andreth’. The text of J.R.R. Tolkien’s story is about twenty pages long, with a further forty pages of notes and supplementary material compiled from other writings by J.R.R. Tolkien and notes by Christopher Tolkien.

‘The Marring of Men’ is part of the Silmarillion body of texts, which were composed over many decades, from Tolkien’s young adulthood during World War I right up until his death in 1973. This body of texts is sometimes referred to in its totality as Tolkien’s ‘Legendarium’, to distinguish it from the single volume Silmarillion selected by J.R.R. Tolkien’s son Christopher, and published in 1977.

The situation in ‘The Marring of Men’ is that of a conversation between Andreth, a mortal human woman, and Finrod Felagund, an immortal Noldo, a ‘High’ Elf. The explicit subject of their conversation is the nature and meaning of mortality, and its implications for the human condition – a subject which is probably the most fundamental of all religious topics, and which is certainly the single main interest and underlying theme of most of Tolkien’s fiction, including The Lord of the Rings. The implicit subject of the conversation is original sin and the fallen nature of Man – which is why the title ‘The Marring of Men’ seems appropriate.

But the conversation between Andreth and Finrod is not simply an abstract philosophical debate: It is fuelled both by world events and by personal experiences. The protagonists are aware of the imminent prospect of Middle-earth being irrevocably overrun and permanently destroyed by Morgoth. (The selfishness and assertive pride of Morgoth, the corrupt Vala or ‘fallen angel’ analogous to the Christian devil, are the primary origin of evil in Tolkien’s world.)

The personal element comes from the fact that the now middle-aged woman Andreth had fallen mutually in love with Finrod’s brother Aegnor in her youth, and had wished to marry the immortal Elf; but she was ultimately rejected by the Elf, who left to follow the call of duty and fight in the (believed hopeless) wars against Morgoth. It emerges during the conversation that Aegnor’s most compelling reason for rejecting Andreth was that he did not want love to turn to pity at her advancing age, infirmity and ultimate mortality – but (in Elven fashion) wished to preserve a memory of perfect love unstained by pity.

The ‘marring’ referred to in the title is mortality. The first question is whether Men were created mortal, or whether Men were originally immortal but lapsed into mortality due to some event analogous to original sin.

Immortal Elves and Mortal Men

While mortality is a universal feature of the human condition as we know it in the primary world, the Elven presence in Tolkien’s secondary world brings to this debate a contrast unavailable in human history. Tolkien asks in which ways the issue of mortality would be sharpened and made inescapable if mortal Men found themselves living alongside immortal Elves – creatures who, while they can be killed, do not die of age or sickness, and, if killed, can be reincarnated or remain as spirits within the world.

Tolkien’s Elves are fundamentally the same species as Men – both are human in the biological sense that Men and Elves can intermarry and reproduce to have viable offspring (who are then offered the choice whether to become immortal Elves or mortal Men). Elves are also religious kin to Men in that both are ‘children’ of the one God (Elves having been created first). But Elves seem, at the time of this story, to be superior to Men, in that Elves are immortal in the sense defined above. Elves do not suffer illness; they are more intelligent (‘wise’) than Men, more beautiful, more knowledgeable and more artistic; Elves also have a much more vivid, lasting and accurate memory than Men.

The question arises in the secondary world: If Elves are immortal and generally superior in abilities, what is the function of Men? Why did Eru (the ‘One’ God) create mortal Men at all, when he had already created immortal Elves? Implicitly, Tolkien is also asking the primary world question why God created mortal and imperfect Men when he could have created more perfect humans – like the immortal Elves?

Tolkien’s answer is subtle and indirect, but seems to be related to the single key area in which the greatest mortal Men are superior to Elves: courage. Most of the ‘heroes’ in Tolkien’s world, those who have changed the direction of history, are mortal Men (or indeed Hobbits, who are close kin to mortal Men); and there seems to be a kind of courage possible for mortals which is either impossible for, or at least much rarer among, Elves. Elves have (especially as they grow older) a tendency to despondency, detachment and the avoidance of confrontation. On a related note, Tolkien hints that Men are free in a way in which Elves are not, and that this freedom is integral to the ultimate purpose of Men in Tolkien’s world – and by implication also in the real world.

C.S. Lewis once stated (albeit from the pen of a fictional devil!) that courage was the fundamental human virtue, because it underpinned all other virtues: Without courage other virtues would be abandoned as soon as this became expedient:

"Courage is not simply one of the virtues, but the form of every virtue at the testing point, which means, at the point of highest reality. A chastity or honesty, or mercy, which yields to danger will be chaste or honest or merciful only on conditions. Pilate was merciful till it became risky." [3]

At any rate, courage is one virtue in which the best of Tolkien’s mortal Men seem to excel.

The Fall of Men

The first question is whether Tolkien’s One God ‘Eru’ originally created immortal Men, who had been ‘marred’ and made mortal by the time of Andreth (and, by implication, our time). This is Andreth’s first view – the mortal woman suspects that Men were meant to be immortal but have been punished with mortality:

‘[T]he Wise among men say: “We were not made for death, nor born ever to die. Death was imposed upon us.” And behold! the fear of it is with us always, and we flee from it ever as the hart from the hunter’ [4].

‘We may have been mortal when first we met the Elves far away, or maybe we were not.... But already we had our lore, and needed none from the Elves: we knew that in our beginning we had been born never to die. And by that, my lord, we meant: born to life everlasting, without any shadow of an end’ [5].

Naturally, this prompts the Christian reader to think of parallels with the Fall of Man and original sin; and this analogy is clearly intended by Tolkien.

Andreth talks of a rumour she has heard from the wise men and women among her ancestors, that perhaps in the past Men committed a terrible but undefined act which was the cause of this marring. The implication, never made fully clear, is that Men in their freedom may have deviated from their original role as conceived by ‘the One’, and been corrupted or intimidated into worshipping Morgoth, or at least into doing his will and in some way serving his purposes. This, it is suggested, may be the cause of Men’s mortality as such, along with a progressive shortening of their lifespan and a permanent dissatisfaction and alienation from the world they inhabit and even their own bodies. In the dialogue, Finrod asks:

‘[W]hat did ye do, ye men, long ago in the dark? How did ye anger Eru?... Will you not say what you know or have heard?’

‘I will not’, said Andreth. ‘We do not speak of this to those of other race. But indeed the Wise are uncertain and speak with contrary voices; for whatever happened long ago, we have fled from it; we have tried to forget, and so long we have tried that now we cannot remember any time when we were not as we are’ [6].

Men’s Lifespan

By contrast to their uncertainty about the origin of mortality, the decline in mortal lifespan caused by Morgoth’s corruption of the world seems certain to both Andreth and Finrod. Later in Tolkien’s history, those Men who help defeat Morgoth are rewarded with a lifespan of about three times Men’s usual maximum, i.e. about 300 years; greater strength, intelligence and height; and a safe island off the coast of Middle-earth on which to dwell (Numenor, Tolkien’s Atlantis).

It seems possible that the enhancements of ‘Numenorean’ Men are simply a restoration of the original condition of Men. Or it may be that these enhancements are an infusion of Elvenness, rendering Men more Elven (though still mortal), perhaps with the ultimate aim of a unification of Elves and Men. At any rate, the majority of Numenoreans eventually succumb to corruption and evil, and are destroyed by Eru in a massive reshaping of the world, which drowns the island and the vast Numenorean navy then landing on the shores of the undying lands.

For Tolkien, it is a characteristic sin of Men to cling to life, and it is this clinging which corrupts the mortal but long-lived Numenoreans who try to invade the undying lands – either in the mistaken belief that they will become immortal by dwelling there, or with the intention to compel the Valar to grant them immortal life.

While Men are characteristically tempted to elude mortality – to stop change in themselves – the almost-unchanging Elves are tempted to try to stop change in the world – to embalm beauty in perfection. This Elven sin is related to the first tragedy of the Silmarillion, when ultimate beauty – the light of the primordial trees – is captured in three jewels; and it later leads to the creation of the Rings of Power, which are able to slow time almost to a stop, and thereby to arrest the pollution and wearing-down of Middle-earth.

As well as having an increased lifespan, Numenoreans surrender their lives voluntarily at the appropriate time, and before suffering the extreme degenerative changes of age. This voluntary death (or transition) at the end of a long life is described in the most moving of the appendices to The Lord of the Rings, when Aragorn (the last true Numenorean) yields his life at will to move on to another world. His wife Arwen pleads with him to hold on to life for a while longer to keep her company in this world; however Aragorn kindly but firmly refuses her request:

‘Let us not be overthrown at the final test, who of old renounced the Shadow and the Ring. In sorrow we must go, but not in despair. Behold! we are not bound forever to the circles of the world, and beyond them is more than memory. Farewell!’ [7]

Arwen’s fate is tragic, because she is one of the ‘half-elven’ who may choose whether to become Man or Elf; she chooses to become mortal in order to marry Aragorn and share his fate. However, her resolve to accept mortality at the proper time is undermined by her ‘lack of faith’ in Man’s destiny of life after death. In the appendix, she is portrayed as regretting becoming a mortal instead of an Elf; and as having succumbed to the sin of clinging to mortal life rather than accepting mortality and trusting that there is life after death.

"…and the light of her eyes was quenched, and it seemed to her people that she had become cold and grey as nightfall in winter that comes without a star." [8]

The half-elven Arwen has failed to embrace the mortal need for courage to underpin all other virtues; and one possible interpretation of this passage is that this has consequences for her fate in the next world.

At Home in the World, or Exiled?

For Tolkien (and Lewis), the sense of exile is a ‘desire’ which implies the possibility of its gratification; in other words, it reflects the fact that Men have indeed been ‘exiled’ from somewhere other than this world.

Finrod makes clear that Elves, by contrast, feel fully at home in the world to which they are tied:

‘Each of our kindreds perceives Arda differently, and appraises its beauties in different mode and degree. How shall I say it? To me the difference seems like that between one who visits a strange country, and abides there a while (but need not), and one who has lived in that land always (and must)’. [9]

‘Were you and I to go together to your ancient homes east away I should recognize the things there as part of my home, but I should see in your eyes the same wonder and comparison as I see in the eyes of Men in Beleriand who were born here’. [10]

Elves therefore care for the world more than Men, and do not exploit nature as Men do, but nurture and enhance the world. And indeed Elves are not truly immortal, since when the world eventually ends, they will die; and to Finrod it seems likely that this death will mean utter annihilation:

"You see us...still in the first ages of being, and the end is far off.... But the end will come. That we all know. And then we must die; we must perish utterly, it seems, because we belong to Arda (in [body] and in [spirit]). And beyond that what? The going out to no return, as you say; the uttermost end, the irremediable loss?" [11]

Partly because of this prospect, the almost-unchanging Elves become increasingly grieved by the ravages of time upon the world, and cumulatively overcome by weariness with their extended lives. Hence the characteristically Elven temptation to try to stop time, to arrest change.

By contrast, Men seem to Finrod like ‘guests’, always comparing the actual world of Middle-earth to some other situation. This opens up the question of Tolkien’s version of ‘the argument from desire’. Finrod thinks that Men have an inborn, instinctive knowledge of another and better world. Hence, he thinks that they never were immortal, but have always known death as a transition to another, more perfect world – not as the prospect of annihilation which Elves face. Thus, he considers the possibility that Men’s ‘mortality’ is ultimately preferable to Elven ‘immortality’.

But even in this world Finrod suspects that the destiny of Men may eventually be higher than that of Elves. He acknowledges that at the time of his debate with Andreth the Elves are the superior race in most respects; but he can envisage a time when mortal Men will attain leadership, and the Elves will be valued mainly for the scholarly and artistic abilities fostered by their more accurate and vivid memories. This projected role of Men will be related to the healing of the world from the evil with which Morgoth permeated it.

One possible interpretation of this is that Elves cannot heal the marred world because they are tied to, part of, that world; but that mortal Men may be able to heal it because, although they themselves share the marring of the world, they are ultimately free from that world through death.

Tolkien’s Vision of Heaven

Building on hints by Andreth, Finrod intuits that if things had gone according to Eru’s original plan, there would have been no need for Men. The first-born, immortal Elves would have been the best inhabitants and custodians of an unmarred world, because their very existence was tied to it.

But since the demiurgic Morgoth infused creation with evil at a very early stage, Eru made a second race of mortals – Men – who lived in the world for a while, then passed on to another condition. Because mortals were not tied to the world, they had the freedom to act upon the world in a way that Elves did not. This freedom of Men could be misused to exploit the world short-sightedly; but it could also be used to heal the world, to the benefit of both mortals and immortals alike.

[Finrod]: ‘This then, I propound, was the errand of Men, not the followers but the heirs and fulfillers of all: to heal the marring of Arda’.

Indeed, Finrod perceives that to clarify this insight may be the main reason for their discussion: so that Andreth may learn the meaning of mortality from Finrod, and pass this knowledge on to other Men, to save them from despair and encourage them in hope.

[Finrod]: ‘Maybe it was ordained that we [Elves], and you [Men], ere the world grows old, should meet and bring news to one another, and so we should learn of the Hope from you; ordained, indeed, that thou and I, Andreth, should sit here and speak together, across the gulf that divides our kindreds’. [12]

Andreth suggests that Eru himself may intervene to fulfil this hope.

[Andreth]: ‘How or when shall healing come?…To such questions only those of the Old Hope (as they call themselves) have any guess of an answer.… [T]hey say that the One will himself enter into Arda, and heal Men and all the Marring from the beginning to the end’. [13]

Finrod cannot at first understand how this could be, and Andreth herself seems to regard it as highly implausible – a wishful dream. But on reflection, Finrod argues:

‘Eru will surely not suffer [Morgoth] to turn the world to his own will and to triumph in the end. Yet there is no power conceivable greater than [Morgoth] save Eru only. Therefore, Eru, if he will not relinquish His work to [Morgoth], who must else proceed to mastery, then Eru must come in to conquer him’. [14]

The Christian parallels are obvious. Indeed, ‘The Marring of Men’ can be seen as part of Tolkien’s lifelong endeavour to make his legendarium (originally conceptualized as a ‘mythology for England’) broadly compatible with known human history, particularly Christian history [15].

Andreth’s hints inspire Finrod to a vision which offers ultimate hope to the immortal but finite Elves as well as to mortal Men:

‘Suddenly I beheld a vision of Arda Remade; and there the [High Elves] completed but not ended could abide in the present forever, and there could walk, maybe, with the Children of Men, their deliverers, and sing to them such songs as, even in the Bliss beyond bliss, should make the green valleys ring and the everlasting mountain-tops to throb like harps’.

‘We should tell you tales of the Past and of Arda that was Before, of the perils and great deeds and the making of the Silmarils. We were the lordly ones then! But ye, ye would then be at home, looking at all things intently, as your own. Ye would be the lordly ones’. [16]

This, then, is Tolkien’s vision of Heaven, pictured in the context of Arda, his sub-created world.

Myth and reality

The conversation of Andreth and Finrod occurs during a lull before the storm of war breaks upon Middle-earth; and Finrod foresees that the next stage of war will claim the life of his brother Elf Aegnor, whom the mortal woman Andreth loved in her youth and loves still. The fragment ends with Finrod bidding Andreth farewell by reaffirming, ‘you are not for Arda. Whither you go you may find light. Await us there, my brother – and me’. Andreth’s destiny lies beyond the world, and Finrod dares to hope that this is true for the Elves also.

In Tolkien’s legendarium, loss or transmission of knowledge is always a matter of concern. The message we take away from ‘The Marring of Men’ is hopeful. We are called to infer that this conversation has ‘come down’ to us today: that it was remembered, recorded, and has survived the vicissitudes of history, possibly because we modern readers need or are meant to know this.

Just as Morgoth’s marring of the World and of Men is analogous to the Christian account of the Fall of Satan and of original sin, Finrod and Andreth’s intuitions and hopes, Tolkien implies, were vindicated in real history by the coming of Jesus Christ. And Tolkien’s sub-creative vision of heaven, as explicated by Finrod, is meant to be taken seriously as an image of true heaven, in which Tolkien believed as a Christian. It is entirely characteristic that Tolkien’s heaven should have a place for Elves as well as for Men.

Tolkien’s story ‘The Marring of Men’ – though so brief a tale – seems to me one of his most beautiful and profound: a product of deep thought and visionary inspiration. It encapsulates nothing less than Tolkien’s mature understanding of the human condition and the meaning of life. Scholars and admirers of C.S. Lewis who are unfamiliar with Tolkien’s legendarium may find a way into his magnificent fantasy by reading it as complementary to Lewis’s great idea of ‘joy’ and his characteristic ‘argument from desire’: Tolkien was engaged in developing and completing themes which underpin much of his old friend’s best and most serious work.

Thursday, 22 May 2008

Social class differences in IQ: implications for the government’s ‘fair access’ political agenda

A feature for the Times Higher Education - 23 May 2008

Also at: http://www.timeshighereducation.co.uk/Journals/THE/THE/22_May_2008/attachments/Times%20Higher%20IQ%20Social%20Class.doc

Bruce G Charlton

Since ‘the Laura Spence Affair’ in 2000, the UK government has spent a great deal of time and effort in asserting that universities, especially Oxford and Cambridge, are unfairly excluding people from low social class backgrounds and privileging those from higher social classes. Evidence to support the allegation of systematic unfairness has never been presented; nevertheless, the accusation has been used to fuel a populist ‘class war’ agenda.

Yet in all this debate a simple and vital fact has been missed: higher social classes have a significantly higher average IQ than lower social classes.

The exact size of the measured IQ difference varies according to the precision with which social class is defined – but in all the studies I have seen, the measured social class IQ difference is substantial, and of direct relevance to the issue of university admissions.

The existence of substantial class differences in average IQ has been uncontroversial, and widely accepted for many decades among those who have studied the scientific literature. And IQ is highly predictive of a wide range of positive outcomes in terms of educational duration and attainment, attained income level, and social status (see Deary – Intelligence, 2001).

This means that in a meritocratic university admissions system there will be a greater proportion of higher class students than lower class students admitted to university.

What is less widely understood is that – on simple mathematical grounds – it is inevitable that the differential between upper and lower classes admitted to university will become greater the more selective the university.

***

There have been numerous studies of IQ according to occupational social class, stretching back over many decades. In the UK, average IQ is 100 and the standard deviation is 15 with a normal distribution curve.

Social class is not an absolute measure, and the size of the differences between social classes in biological variables (such as health or life expectancy) varies according to how socio-economic status is defined (eg. by job, income or education) and also by how precisely it is measured – for example, the number of categories of class and the exactness of the measurement method, so that years of education or annual salary will generate bigger differentials than cruder measures such as job allocation, postcode deprivation ratings or state versus private education.

In general, the more precise the definition of social class, the larger will be the measured social class differences in IQ and other biological variables.

Typically, the average IQ of the highest occupational Social Class (SC) - mainly professional and senior managerial workers such as professors, doctors and bank managers - is 115 or more when social class is measured precisely, and about 110 when social class is measured less precisely (eg. mixing-in lower status groups such as teachers and middle managers).

By comparison, the average IQ of the lowest social class of unskilled workers is about 90 when measured precisely, or about 95 when measured less precisely (eg. mixing-in higher social classes such as foremen and supervisors or jobs requiring some significant formal qualification or training).

The non-symmetrical distribution of high and low social class around the average of 100 is probably due to the fact that some of the highest IQ people can be found doing unskilled jobs (such as catering or labouring) but the lowest IQ people are very unlikely to be found doing selective-education-type professional jobs (such as medicine, architecture, science or law).

In round numbers, there are differences of nearly two standard deviations (or 25 IQ points) between the highest and lowest occupational social classes when class is measured precisely; and about one standard deviation (or 15 IQ points) difference when SC is measured less precisely.

I will use these measured social class IQ differences of either one or nearly two standard deviations to give upper and lower bounds to estimates of the differential or ratio of upper and lower social classes we would expect to see at universities of varying degrees of selectivity.

We can assume that there are three types of universities of differing selectivity roughly corresponding to some post-1992 ex-polytechnic universities; some of the pre-1992 Redbrick or Plateglass universities (eg. the less selective members of the Russell Group and 1994 Group), and Oxbridge.

The ‘ex-poly’ university has a threshold minimum IQ of 100 for admissions (ie. the top half of the age cohort of 18 year olds in the population – given that about half the UK population now attend a higher education institution), the ‘Redbrick’ university has a minimum IQ of 115 (ie. the top 16 percent of the age cohort); while ‘Oxbridge’ is assumed to have a minimum IQ of about 130 (ie. the top 2 percent of the age cohort).
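These cohort shares follow directly from the stated population distribution (mean 100, SD 15). A minimal sketch in Python's standard library confirms the rounded figures:

```python
from statistics import NormalDist

# UK population IQ: mean 100, standard deviation 15 (as stated in the article)
population = NormalDist(mu=100, sigma=15)

# Fraction of the age cohort at or above each university's assumed IQ threshold
for university, threshold in [("ex-poly", 100), ("Redbrick", 115), ("Oxbridge", 130)]:
    share = 1 - population.cdf(threshold)
    print(f"{university}: top {share:.1%} of the cohort")
```

This prints 50.0%, 15.9% and 2.3% respectively – consistent with the article's rounded ‘half’, ‘16 percent’ and ‘2 percent’.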

***

Table 1: Precise measurement of Social Class (SC) – Approx proportion of 18 year old students eligible for admission to three universities of differing minimum IQ selectivity

Ex-poly – IQ 100; Redbrick – IQ 115; Oxbridge – IQ 130

Highest SC – av. IQ 115: 84 percent; 50 percent; 16 percent

Lowest SC – av. IQ 90: 25 percent; 5 percent; ½ percent

Expected SC differential: 3.3-fold; 10-fold; 32-fold

Table 2: Imprecise measurement of Social Class (SC) – Approx proportion of 18 year old students eligible for admission to three universities of differing minimum IQ selectivity

Ex-poly – IQ 100; Redbrick – IQ 115; Oxbridge – IQ 130

Highest SC – av. IQ 110: 75 percent; 37 percent; 9 percent

Lowest SC – av. IQ 95: 37 percent; 9 percent; 1 percent

Expected SC differential: 2-fold; 4-fold; 9-fold
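Tables 1 and 2 can be reproduced in a few lines of Python, on the assumption (implicit in the article's round numbers) that IQ within each social class is itself normally distributed with the same standard deviation of 15:

```python
from statistics import NormalDist

WITHIN_CLASS_SD = 15  # assumption: spread of IQ within each class equals the population SD

def eligible(class_mean_iq, threshold):
    # Fraction of a class at or above a university's minimum-IQ admission threshold
    return 1 - NormalDist(mu=class_mean_iq, sigma=WITHIN_CLASS_SD).cdf(threshold)

thresholds = {"Ex-poly": 100, "Redbrick": 115, "Oxbridge": 130}

for label, high_mean, low_mean in [("Table 1 (precise SC)", 115, 90),
                                   ("Table 2 (imprecise SC)", 110, 95)]:
    print(label)
    for university, t in thresholds.items():
        hi, lo = eligible(high_mean, t), eligible(low_mean, t)
        print(f"  {university}: highest SC {hi:.1%}, lowest SC {lo:.1%}, "
              f"differential {hi / lo:.1f}-fold")
```

The exact tail areas match the rounded table entries throughout; the one visible divergence is the Oxbridge column of Table 1, where rounding the lowest-SC tail (about 0.38 percent) up to ½ percent gives the article's 32-fold rather than the exact figure of roughly 41-fold.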

***

When social class is measured precisely, it can be seen that the expected Highest-SC to Lowest-SC differential increases from about three-fold at relatively unselective universities (comparing the percentages at university with the proportions in the national population) to more than thirty-fold at highly selective universities.

In other words, if this social class IQ difference is accurate, the average child from the highest social class is approximately thirty times more likely to qualify for admission to a highly selective university than the average child from the lowest social class.

When using a more conservative assumption of just one standard deviation in average IQ between upper (IQ 110) and lower (IQ 95) social classes there will be significant differentials between Highest and Lowest social classes, increasing from two-fold at the ‘ex-poly’ through four-fold at the ‘Redbrick’ university to ninefold at ‘Oxbridge’.

Naturally, this simple analysis is based on several assumptions, each of which could be challenged and adjusted; and further factors could be introduced. However, the take-home-message is simple. When admissions are assumed to be absolutely meritocratic, social class IQ differences of plausible magnitude lead to highly significant effects on the social class ratios of students at university when compared with the general population.

Furthermore, the social class differentials inevitably become highly amplified at the most selective universities such as Oxbridge.

Indeed, it can be predicted that around half of a random selection of kids whose parents are among the IQ 130 ‘cognitive elite’ (eg. with both parents and all grandparents successful in professions requiring high levels of highly selective education) would probably be eligible for admission to the most-selective universities or the most selective professional courses such as medicine, law and veterinary medicine; but only about one in two hundred of kids from the lowest social stratum would be eligible for admission on meritocratic grounds.

In other words, with a fully-meritocratic admissions policy we should expect to see a differential in favour of the highest social classes relative to the lowest social classes at all universities, and this differential would become very large at a highly-selective university such as Oxford or Cambridge.

The highly unequal class distributions seen in elite universities compared to the general population are unlikely to be due to prejudice or corruption in the admissions process. On the contrary, the observed pattern is a natural outcome of meritocracy. Indeed, anything other than very unequal outcomes would need to be a consequence of non-merit-based selection methods.

Selected references for social class and IQ:

Argyle, M. The psychology of social class. London: Routledge, 1994. (Page 153 contains tabulated summaries of several studies, with social class I IQs estimated at 115-132 and lowest social class IQs at 94-97).

Nettle D. 2003. Intelligence and class mobility in the British population. British Journal of Psychology. 94: 551-561. (Estimates approx one standard deviation between lowest and highest social classes).

Validity of IQ – See Deary IJ. Intelligence – A very short introduction. Oxford University Press 2001.

Note - It is very likely that IQ is _mostly_ heritable (I would favour the upper bound of the heritability estimates, with a correlation of around 0.8), but because IQ is not _fully_ heritable there is a 'regression towards the mean' such that the children of high IQ parents will on average have a lower IQ than their parents (and vice versa). But the degree of this regression will vary according to the genetic population from which the people are drawn - so that high IQ individuals from a high IQ population will exhibit less regression towards the mean, because the ancestral population mean IQ is higher. Because reproduction in modern societies is 'assortative' with respect to IQ (i.e. people tend to have children with other people of similar IQ), and because this assortative mating has been going on for several generations, the expected regression towards the mean will differ according to specific ancestry. Due to this complexity, I have omitted any discussion of regression to the mean from parents to children in the above journalistic article, which had a non-scientific target audience.

Friday, 9 May 2008

What has happened to class sizes in Russell Group universities? The need for national data.

Oxford Magazine - 2007

Bruce Charlton

In a decade my final-year class size has gone from around 16-24 to 84 and 123. First and second year classes are around 200 students. In other words, aside from a handful of tutorials and supervisions of dissertations or projects, it seems as if students now go through the whole degree in very large classes.

What I would like to know is whether this massive decline in teaching quality is typical of the top 50 (ie. roughly pre-1992) UK universities in general, and of the large Russell Group research universities in particular.

Anecdotally, the answer would seem to be yes, such increases in class size are typical. In the past, introductory lectures were big, but as students progressed groups became smaller. But the remarkable fact is that no one really knows what's going on, because information on university class sizes is not collected nationally – or if it is, it is not publicized or distributed.

In particular, although the national university "teaching inspectorate", the Quality Assurance Agency (QAA), examined a great deal of paperwork (a whole roomful of box files in the case of my department), and indirectly generated vastly more, it failed to measure class size.

Just think about that for a moment. The QAA neglected to measure the single most important, best understood, most widely-discussed measure of teaching quality: class size.

It is no mystery why class sizes have expanded. Over 25 years, funding per student has declined by more than half, and the average number of students per member of staff has increased from about 8 to 18. In the face of long-term cuts, a decline in teaching quality was inevitable. Indeed, it was anticipated: the QAA was created in order to monitor and control this decline.

But is class size important? Of course it is! In the first place, the public regard class size as the single most significant measure of teaching quality. Every parent with a child at school knows their class size. Parents who pay for their children to attend private schools are often explicitly paying for smaller classes.

It is not just in schools that size matters. US universities publish class size statistics that are closely scrutinised by applicants. For instance, US News gives data on the percentage of classes with fewer than 20 students and the percentage with 50 or more. Around 70 per cent of classes at top research universities such as Princeton currently have fewer than 20 students, and fewer than 15 per cent have more than 50. The expensive and prestigious undergraduate liberal arts colleges offer not only about the same proportion of small classes and an even smaller proportion of large ones, but also guarantee that classes are always taught by professors (rather than teaching assistants).

A way of measuring the importance of class size is to see what people are prepared to pay. In a small comparison of public and private universities in America, Peter Andras and I found that students at the private institutions paid on average 80 per cent more in tuition fees, for which they got 80 per cent more time in small classes.

Given the usefulness of a valid and objective measure of university teaching quality, and the overwhelming evidence of public demand for small classes, the case for publishing national data on university class sizes seems unanswerable. I would guess that class size data is already available within the central administration of many UK universities, because they record the number of students registered for each course for their own internal administrative purposes. It is just a matter of collecting and summarizing the information, and publishing it nationally.

However, I doubt that universities will publish class size data unless they are made to do so. University bosses probably feel too embarrassed to admit the real situation: nobody wants to be first above the parapet with shocking statistics. Alternatively, if or when the three thousand pound cap is taken off university fees, some universities with small classes may start to publish this data in order to justify charging higher fees, and eventually all universities may be forced to follow suit.

But why wasn’t the QAA interested in collecting and publishing data on UK university class sizes? I cannot think of any good educational reason. It has managed to spend well over £53m on data collection and auditing since being set up, plus many times that amount in opportunity costs incurred by UK universities, but has amazingly failed to provide a valid measure of teaching quality.

Incompetence and inefficiency on this scale would beggar belief if the QAA really was concerned with teaching quality - but of course it never has been.

There were about 50 UK universities pre-1992 (when the former polytechnics were re-christened). The current ‘elite’ of these pre-1992 institutions are usually considered to be the thirty-eight research-orientated universities that are members of either the Russell Group (larger institutions) or the 1994 Group (smaller institutions).

But how many UK universities are elite? Are all of the Russell and 1994 Group universities elite, or just the Sunday Times top-20, or more, or fewer? The answer depends on how terms are defined.

Defining the cognitive elite of students

I will define elite universities as those recruiting mostly from the top 10 percent of the population in terms of IQ. Since IQ in the UK has an average of 100 with a standard deviation of 15, the top 10 percent of the UK population have an IQ of about 120 plus.
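As a quick check on the stated cutoff, the 90th percentile of a normal distribution with mean 100 and SD 15 can be computed directly in Python's standard library:

```python
from statistics import NormalDist

# IQ cutoff for the top 10 per cent of a population with mean 100, SD 15
cutoff = NormalDist(mu=100, sigma=15).inv_cdf(0.90)
print(round(cutoff, 1))  # prints 119.2 -- i.e. roughly IQ 120
```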

IQ mainly measures rapidity of learning and ability at abstract logical thinking. It is highly predictive of a wide range of successful outcomes in modern societies such as educational attainment, salary, life expectancy and social class. But IQ does not measure all valuable attributes – for example a ‘conscientious’ personality capable of sustained and methodical work also predicts success in many domains. (For a clear and balanced discussion of IQ see Intelligence: a very short introduction, by Ian J Deary from OUP.)

My definition of the cognitive elite derives from the work of IQ scholars such as Linda Gottfredson or Richard Herrnstein and Charles Murray (authors of The Bell Curve). They note that US data suggest that a relatively small set of ‘high-IQ’ professions has an average entry standard of 120-plus, and that these professions absorb about half the cognitive elite.

These professions include accountants, architects, scientists, computer scientists, social scientists, university teachers, mathematicians, engineers, lawyers, dentists and physicians. Leading Chief Executives and senior managers make up the other main high-IQ group.

The suggestion is that the great majority of the national elite in societies such as the US and the UK are drawn from the top ten percent of people with an IQ of at least 120; since in modern developed societies (although less-so in less-complex societies) almost all leadership positions require a high level of those attributes such as rapid learning and abstract thinking which are measured by IQ.

Defining an elite university – a majority of elite students

Using the IQ 120 threshold, I will define an elite university as an institution that has a majority of students in the top ten percent, with an IQ at or above 120.

There are currently approximately 800,000 eighteen year olds in the UK population in any given year. This means there are about 80,000 potential undergraduates per year in the cognitive elite group having an IQ above 120 (ignoring undergraduates from abroad).

I roughly estimated the numbers of first year undergraduates in the Sunday Times guide top-20 most selective UK universities by looking at the number of undergraduates listed in Wikipedia and dividing the number by three (this will somewhat overestimate the number of first years because some undergraduate degrees last for longer than three years – for example MAs in the Scottish universities and also several professional and vocational degrees).

In round numbers it turns out that there are around 80,000 undergraduate first year places at the top-20 most selective UK universities – i.e. about the same number of first year places at top-20 universities as there are 120-IQ 18 year olds. I will assume that virtually all of the top ten percent of 18 year olds by IQ will go into higher education – and this seems to be largely correct.

So, if there were a perfect assortment of students by IQ, there would be enough 120-IQ students to fill the top twenty universities completely with none left over, or else to provide between 20 and 40 universities with a slim majority of cognitive-elite students.

However, this cannot be the case; because in practice cognitive elite students are spread across a much larger number of institutions. This happens due to personal choice (students who choose to attend a less-selective institution than their qualifications would allow), constraints on personal mobility (eg. students’ need to attend a local institution), centres of excellence located in lower-ranked and less-selective institutions (such as medical schools and law schools – which may be attracting 120-IQ students to institutions that are considerably less selective than this on average) – and of course the inevitable imperfections of national examinations and institutional selection procedures.

My guesstimate, therefore, is that less than half of the age-cohort elite of 80,000 – not more than 35,000 students per year – will find their way into the top 20 most-selective UK universities.

It is worth focussing on this number for a moment. My proposition is that there are at-most just 35,000 IQ-120 university students for whom all the best universities are competing. It does not take very many universities to absorb 35,000 UK students per year.

This analysis implies that at most twenty UK universities can be regarded as truly elite in the defined sense that it is possible for them to have a majority of students from the top 10 percent by IQ.

Fewer than twenty elite UK universities

However, twenty elite UK universities is an upper limit, and in practice the number of elite universities must be lower than twenty.

A further down-grading of this estimate is required because there will be large differences in the proportion of the cognitive elite even among elite universities defined in this fashion.

If US data on the Ivy League are taken as a guide, a university such as Oxford or Cambridge will probably have students with an average IQ more like 145; which is three standard deviations above average – roughly the top 0.1 percent, or top thousandth, of the UK population. So we should assume that virtually all Oxbridge students will have an IQ above 120. This would mean that more than six thousand of the best of the top ten percent of students in each year cohort go to Oxbridge alone.
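The two numerical claims here can be checked with a short sketch; the within-intake spread of 10 IQ points in the second step is my own illustrative assumption, not a figure from the article:

```python
from statistics import NormalDist

population = NormalDist(mu=100, sigma=15)

# IQ 145 is three standard deviations above the population mean:
top_fraction = 1 - population.cdf(145)
print(f"{top_fraction:.2%}")  # prints 0.13% -- roughly the top thousandth

# If an intake really averaged IQ 145, with an (assumed) within-intake spread
# of 10 points, almost everyone in it would clear the IQ-120 line:
intake = NormalDist(mu=145, sigma=10)
print(f"{1 - intake.cdf(120):.1%}")  # prints 99.4%
```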

Recall that there are only about 35,000 potential elite undergraduates. If the top-two universities pretty-much fill-up with elite students, then the same applies – to a decreasing extent – as we descend the selectivity league table. Each decrement in university selectivity will take a lower proportion of the elite among their n-thousand first year entrants; nonetheless the threshold at which there is less than a majority of IQ-120 undergraduates in an institution will be reached considerably before the twentieth university.

The conclusion is that there are currently somewhere between ten and fifteen elite universities in the UK.

But if we go back forty-something years, the average intake of a UK university was less than half – more like a third – of what it is today. In those days, even the largest of the most selective universities took just a few thousand new undergraduates per year, and some took fewer than a thousand. Inevitably this meant that the cognitive elite was spread thickly across a much larger number of institutions.

My hunch is that forty years ago, instead of about ten to fifteen elite universities there would have been more like thirty to forty elite universities. In other words, a couple of generations ago most UK institutions with the title of ‘university’ could legitimately have been considered ‘elite’.

This means that twenty-something previously elite UK universities have declined to non-elite status over a fairly short period of time – mostly during the past twenty or so years of rapid university expansion.

Who are the current elite among UK universities?

This analysis suggests that there has been a rapid decline from elite status in more than half of the less-selective pre-1992 universities as the most-selective universities have expanded their intake; because relatively few top universities can now hoover-up almost all of the top ten percent of students available for selection.

My point is that a major but neglected cause of the decline in average student cognitive ability, which has been noticed in many of the UK’s most prestigious universities, must surely be the several-fold expansion in the size of the most selective universities. As the annual undergraduate intake of the top UK universities doubled, then trebled, in size, they became able to mop-up almost all of the limited supply of circa 35,000 students per year who constitute the UK cognitive elite.

There must therefore have been a very-significant decline in average cognitive ability of undergraduate students at most (but not all) of the Russell and 1994 Group universities – especially a decline of IQ-related abilities such as rapidity of learning and capacity for abstract logical thinking.

The outcome is that the student intake at the minority of most-selective Russell/1994 Group universities is bigger in numbers but has largely retained the same high average IQ as before the massive UK university expansion; while among the lower-ranked majority of Russell/1994 universities the post-expansion intakes are bigger in numbers but significantly lower in average IQ. As a result, most of the Russell and 1994 Group universities are now non-elite.

In conclusion, I suggest that there are now likely to be only between ten and fifteen elite universities in the UK; where an elite university is defined as one in which the majority of the undergraduates have an IQ in the top ten percent of the population.

Assuming that the Sunday Times data are correct, my tentative suggestion is that the only current elite UK undergraduate universities are: Oxford, Cambridge, LSE, Imperial, Warwick, St Andrews, UCL, York, Bristol, Edinburgh, Bath and Durham.

***

Second thoughts: 18 March 2009

I would now consider that in the modern educational system, the personality trait of Conscientiousness counts for as much, or more than, IQ in determining examination results.

Thursday, 17 January 2008

What has the RAE ever done for Oxford University? Improvement in normal science – but decline in revolutionary science.

Bruce G Charlton

Oxford Magazine - 2008; 271: 3-5

Oxford has done well in the past UK national Research Assessment Exercises, and will probably do well in the current and any future RAEs. But what good has the RAE done Oxford, and the UK university system?

I think the answer is that the RAE has done some good and also some harm to UK research universities, with perhaps the good outweighing the harm in most places – but in Oxford (and also Cambridge) the damage from the RAE outweighs the benefit.

This is because the RAE has probably increased overall science production at Oxford and elsewhere; and this is good. But the RAE has also measurably reduced the production of the very highest quality science; and this is bad for Oxford. Oxford (and Cambridge) - more than other UK universities – used-to specialize in producing the highest quality of ‘revolutionary’ science. Not any more…

Revolutionary science compared with normal science.

Oxford and Cambridge are bigger and more productive than other UK universities – each has a science output about twice as big as that of the third and fourth most productive universities, Imperial College and UCL (Oxford Magazine 2006; 254: 19-20). However, Oxbridge is no longer qualitatively superior to other UK universities in the way that it used to be. True, Oxbridge still produces more research than other places; but it is now hard to see any objective evidence that Oxbridge produces better-quality science than elsewhere.

Taking the terminology of Thomas Kuhn: ‘revolutionary’ science, which changes the direction of science, can be distinguished from ‘normal’ science, which is an incremental extrapolation of established science. I suggest that Oxford has declined as a centre of revolutionary science at the same time as it has improved its production rate of normal science – and that the RAE has something to do with this.

In terms of its total number of publications and the citations to those publications, Oxford has been catching-up with the best US universities over the past ten to fifteen years. Indeed, this has been the pattern for all the best UK research universities. With Peter Andras (Medical Hypotheses, forthcoming) I have demonstrated that the average UK top-twenty university has improved its Web of Science ranking in terms of annual total citations from 83rd to 65th from 1990-94 to 2000-4.

This marks a change of trend, which was probably triggered by the RAE. Looking back over 30 years, the UK was somewhat declining relative to the USA up until about 1990, but since then has been catching-up. Overall, the pattern of data is consistent with the RAE having progressively increased the number both of publications and also the citations to those publications.

Sir David King, then the government’s Chief Scientific Adviser, calculated that from 1993-7 to 1997-2001 the UK’s percentage share of total world publications was second only to the USA’s, and increased from 9.29 to 9.43, while its share of citations increased from 10.87 to 11.39 (Nature 2004; 430: 311-16). Over the same period, the US percentage shares were declining for both publications (37.46 down to 34.86) and citations (52.3 down to 49.43). So it is plausible that the RAE should be credited for helping increase the total production of UK science more rapidly than it has increased in the USA.

In other words, the RAE has probably led to an improvement in UK normal science.

The Oxford (and UK) decline in revolutionary science

Nobel prize data

However, even as it improved production of normal science, I suspect that the RAE has simultaneously led to a decline in UK revolutionary science. My measures of revolutionary science come from two sources: science Nobel prizes and the number of ISI Highly Cited (HiCi) academics.

Nobel prizes are usually awarded for revolutionary science. I have looked at the trends in science Nobel prizes over the past 60 years (data available at http://medicalhypotheses.blogspot.com/2007/07/nobel-prize-trends-19472006.html) and it makes depressing reading for a UK scientist.

The UK has dropped from a clear second place to the USA in total number of laureates (with, per capita, more Nobel prizes than the USA) to the present situation, in which the UK won only 9 prizes from 1987-2006 compared with 126 for the USA. Five UK-born laureates had emigrated to the USA before winning their prizes – evidence of the UK’s inability to hold on to its very best scientists (by contrast, in 60 years no US-born laureate has ever moved to the UK).

Oxford won 3 Nobel prizes 1947-66, another 3 Nobel prizes 1967-86 – but none since. In the past 20 years Cambridge managed 2 prizes. But MIT won 11 prizes, Stanford won 9, Columbia and Chicago 7, Princeton 6, Harvard 5… In the US public universities Berkeley got 4; UCSF, Irvine and Santa Barbara 3 each; Colorado at Boulder 4 and Washington (Seattle) 3. The message seems clear - the US university system is now totally out-performing Oxbridge in terms of science Nobel prizes – and (I suggest) probably therefore in revolutionary science in general.

Highly-cited (HiCi) academics

However, there is a maximum of only 12 science Nobel laureates each year (including economics), so Nobel prize data are statistically imprecise as a measure of revolutionary science. But the Nobel data can be amplified using ISI Highly Cited (HiCi) academics. HiCi academics are identified by Thomson Scientific from the Web of Science database as the researchers who have the greatest impact on their specific fields, in terms of being cited by other researchers in the field. Most Nobel prize-winners are drawn from the ranks of HiCi academics, so it seems reasonable to use HiCi status as a measure of revolutionary science. However, some HiCi academics are probably more accurately regarded as very productive and highly-respected ‘normal scientists’.

Universities can be studied in terms of the numbers of HiCi academics on their faculty. My assumption is that HiCi faculty are a measure of the amount of revolutionary science at a university.

Using the current Thomson Scientific database (http://hcr3.isiknowledge.com/formBrowse.cgi) it can be seen that Oxford and Cambridge top the UK rankings again with 45 HiCi at Oxford and 52 at Cambridge. For comparison UCL has 24, Imperial College has 31, Bristol and Manchester 15 each. The number of HiCi academics among these UK universities broadly correlates with the total volume of annual publications and citations in Web of Science.

But when we look at the US universities it brings Oxbridge’s performance into a startling perspective. It is easy to lose count – but I think Harvard has about 185 HiCi. Even leaving aside the US private sector universities, Berkeley has 86 HiCi, and the University of California San Diego 60 HiCi, and University of Washington at Seattle 52.

Oxford equals Minnesota

So if we were to make an objective comparison of Oxford’s research performance – and tried to place Oxford among US universities using scientometric data such as total citations, the number of Highly Cited academics and Nobel prizes – what would be the nearest matching university?

Somewhat shockingly, the closest match to Oxford in research terms seems to be the University of Minnesota.

Oxford and Minnesota have the same number of citations for 2000-2004 (others at this level of production are the University of Pittsburgh and Duke); Oxford currently has 45 HiCi academics compared with 47 at Minnesota, again an almost exact match (Pittsburgh manages 31 HiCi and Duke 40). And neither Oxford nor Minnesota is winning significant numbers of Nobel prizes in science (Minnesota got its first one, for Economics, in 2007, but Oxford’s most recent laureate was in 1973). So, both Oxford and Minnesota are big and successful universities mainly specializing in normal science.

While Oxford is justifiably proud of its distinguished history and current international name recognition, in terms of objective scientometric comparisons Oxford is currently equivalent to the University of Minnesota. Does this matter? Well… on the one hand Minnesota is a very good university by international standards. On the other hand it is probably only in the third tier of US universities, while Oxford is generally considered to be one of the elite universities in the world. The fact is that Oxford nowadays is not one of the elite universities of the world in terms of scientific research. It used to be, but not any more.

Naturally the all-round academic strength of Oxford extends beyond the mostly scientific subjects which are measured using scientometrics. In particular Oxford is very strong in the arts and humanities, and probably publishes more in this area than any other university in the world (according to the Web of Science database – Oxford Magazine 2006; 256: 25-6). Furthermore, Oxford probably provides a much better undergraduate education than most other big research universities.

However, in a global system the status of universities will increasingly depend on their objectively measurable scientific research performance. Arts and humanities are culture-specific, while undergraduate teaching is notoriously hard to compare (is Oxford really better than Chicago, or than small liberal arts colleges like Amherst and Wellesley?). Since a university can only prove its superiority in science, science is what is used to measure status. And in terms of science Oxford is no longer especially distinguished – neither in volume nor in quality. There is no room for complacency. Europe is littered with once-great but now merely-moderate universities – Halle, Göttingen, Berlin, Heidelberg, the Sorbonne, Bologna, Pisa, Salamanca… I suggest that Oxford may be on this path.

The pressure to down-shift from revolutionary to normal science

So, in terms of normal science, Oxford is improving – but unfortunately this improvement seems to have come at the expense of revolutionary science.

My explanation for these trends in opposite directions is that current incentives favour ‘down-shifting’ among the very best academics. Oxford is a highly-attractive and very selective institution; so it seems reasonable to assume that Oxford contains scores of individual scientists who are capable (with luck) of doing top quality revolutionary science (at the kind of level that might potentially win a Nobel prize). Why are these best scientists at Oxford no longer producing revolutionary science of the highest quality?

The matter is complex, but my hunch is that these top-notch Oxford scientists are not being encouraged to do the best work of which they are capable. They are not being encouraged to tackle the biggest scientific problems which they have a chance of solving.

Indeed, it is worse than mere lack of encouragement to do top quality scientific work; it is a matter of positive pressure to down-shift to second-class work. Maybe they are not even being allowed to take the risks entailed by aiming high?

Instead of being urged to take on tough problems where there is a significant chance of failure, the best young Oxford scientists are being pressurized (implicitly for sure, but probably sometimes explicitly) to do easier, more predictable, and more short-term research. Why? Because this is the kind of research which has a higher probability of getting funding and leading to large numbers of well-cited papers – and all in a timeframe so as to be ready for the next upcoming RAE, with the extra money this brings in.

Oxford’s success in expanding normal science and performing so well in the RAE may therefore have been achieved by sacrificing its performance in revolutionary science. To put it bluntly: Oxford looks as if it is taking some of the best young scientists in the world – scientists who have the potential to make revolutionary contributions to their subject – and Oxford is converting them into normal scientists – albeit highly-professional and unusually productive normal scientists. After they have down-shifted their ambitions they become superbly effective at winning big research grants and generating a large volume of well-respected papers, and this work is both necessary and valuable; but it looks like nowadays most of Oxford’s scientists are not even trying to solve big problems.

The RAE is almost certain to make this down-shifting more prevalent and more severe – because if a scientist chooses to aim high, this entails an inevitable sacrifice of short-term productivity and funding, as well as a much greater chance of failure. A truly ambitious scientist may be prepared to risk his or her career in trying to solve a really important but tough problem – but anyone who does so may feel selfish, or be perceived as such.

Helping the cost centre achieve a high RAE score could mean hundreds of thousands of pounds of extra money per year. Failing to optimise short-term productivity implies the possible loss of hundreds of thousands per year. Scientific idealism then entails financial penalties – with results that may include sacrificing the department, colleagues, and research teams’ jobs – all for the (perhaps delusional) hope of a paradigm-shattering scientific breakthrough. How selfish of them.

Most universities do not have many top-quality scientists capable of doing work at the highest level – but Oxford does. However, it looks like Oxford is either failing to bring out their potential or (worse) actually thwarting it.

Conclusion

The government is prone to self-congratulation regarding the RAE, and I think it only fair to acknowledge that the RAE deserves credit for making UK science measurably more competitive and driving-up total production relative to the USA. Perhaps, on the whole, the RAE may even have benefited most of the researchers in the most research-active universities.

But equally, I think that supporters of the RAE need to acknowledge that there is objective evidence of significant decline in UK revolutionary science, and that this can also plausibly be linked to the RAE. For most universities this does not much matter, because they never did have much first class research going-on. But for Oxbridge this down-shifting has been lethal.

All this points to the surprising conclusion that – despite their unmatched success in winning RAE-linked funding – the RAE has probably damaged Oxford and Cambridge more than it has helped them, since the RAE has presided over Oxbridge’s decline from their status as centres of elite revolutionary science.

Of course, it would be simplistic to blame the RAE alone for the decline of UK and Oxford revolutionary science. The RAE is merely the tip of an iceberg of UK government regulation which has made universities ever-more short-termist. And even assuming the RAE is indeed a major contributor to this decline, the RAE would never have been introduced in the first place nor would it have survived if there had been a strong culture of revolutionary science, or strong incentives to encourage revolutionary science. And of course the government must take the primary blame (as well as credit) for the consequences of the RAE.

A major (probably unintended) effect of the RAE on Oxford has been to make Oxford more like other UK universities in terms of the type and quality of research. Scientifically-speaking, Oxford looks like a bigger, more-productive version of the other Russell Group universities like UCL, Manchester and Bristol – different in production volume but not different in kind.

For Oxford again to become a major centre of revolutionary science would be a big ask. It would require a significant reining-in of the influence of the RAE. But it would also require more.

Oxford would need to recognize the primacy of science in international university comparisons, and also recognise the primacy of revolutionary science within science. Increased quantity does not compensate for decline at the level of highest quality. Oxford needs to lose a certain smugness in relation to its currently undeserved international reputation – because even anciently-established international reputations will eventually adjust down to the level of actual performance.

As a corrective to wishful thinking may I suggest that the all-too-common Oxford daydream of being as-good-as Harvard, Stanford and Berkeley should be tempered with the sober reality of actually being as-good-as the University of Minnesota; and cheerful daydreams of ‘closing the gap’ with the US elite need to be supplemented by ‘nightmares’ about the distinct possibility of Oxford becoming as-mediocre-as Berlin, Bologna or the Sorbonne.

Bruce G Charlton teaches at Newcastle University, UK; and is the editor of Medical Hypotheses.