For decades, we’ve known from twin studies that psychological traits like intelligence and personality are influenced by genes. That’s why identical twins (who share all their genes) are not just more physically similar to each other than non-identical twins (who share half their genes), but also more similar in terms of their psychological traits. But what twin studies can’t tell us is which particular genes are involved. Frustratingly, this has always left an ‘in’ for the incorrigible critics of twin studies: they’ve been able to say “you’re telling me these traits are genetic, but you can’t tell me any of the specific genes!” But not any more.

With a new method, “polygenic scoring”, behaviour geneticists can now check whether people carry specific genetic variants and, on that basis, make some impressively accurate predictions about how they will behave in the future. As two new studies show, this development is also drawing researchers into a whole new set of controversies.

The method is relatively simple. Imagine you’d found in a study that smarter people are much more likely to have one particular version of a gene than people who are less intelligent. You could then, in an entirely new sample, see if people with this “intelligence gene” were doing any better than those without it. The polygenic scoring method is just this, writ large – instead of looking at whether people have one specific genetic variant that’s linked to a particular psychological measure, geneticists look across hundreds of thousands of points on the genome, totting up who has more of the genetic variants that make them susceptible to the trait in question.
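The tallying step described above can be sketched in a few lines of code. All variant names, weights, and genotypes below are invented for illustration; in real studies the weights come from effect sizes estimated for hundreds of thousands of variants in a prior genome-wide association study.

```python
# A minimal sketch of polygenic scoring. The variants and numbers are
# hypothetical, not taken from any real study.

# Per-allele weights, as estimated in an earlier gene-finding study.
effect_sizes = {"snp_1": 0.020, "snp_2": -0.013, "snp_3": 0.008}

# One person's genotype: how many copies (0, 1, or 2) of each
# trait-associated allele they carry.
genotype = {"snp_1": 2, "snp_2": 0, "snp_3": 1}

def polygenic_score(genotype, effect_sizes):
    """Tot up allele counts, each weighted by its estimated effect."""
    return sum(effect_sizes[snp] * count for snp, count in genotype.items())

print(polygenic_score(genotype, effect_sizes))
```

A higher score simply means the person carries more of the trait-associated variants, weighted by how strongly each was associated in the earlier study.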

Two new papers have applied this technique in the context of educational success – in this case, they built their polygenic scores on the basis of previous gene-hunting studies that had identified the specific gene variants found more often in people who stay in school for longer.

In the first paper in Psychological Science, with the not-at-all-provocative title “The Genetics of Success”, Daniel Belsky and colleagues aimed the education polygenic score at a whole host of life outcomes in the thousand-strong Dunedin Study, a cohort of New Zealanders who have been followed from birth up to age 38. The results are comprehensive, and pretty stunning.

The children with more education-linked genes (that is, higher polygenic scores) learned to read faster. They did better on intelligence tests. They were more likely to go to university, and less likely to have financial problems. They were more likely to leave New Zealand to find job opportunities abroad, and more likely to choose a partner of higher social status. The idea is that the education-linked genes make people smarter, harder-working, and more socially successful—traits that help you lead a more “successful” life. Importantly, all this was predictable from a score that, in theory, could have been calculated on (or even before) the day the participants were born.

All of the above predictions were pretty weak though, in the sense that the polygenic score generally only explained one or two percent of the variance in each of the above traits and outcomes. This is where the second paper published in Molecular Psychiatry comes in. Using an even newer polygenic education estimate from a more recent gene-finding study (published in Nature this year), Saskia Selzam and colleagues found that their polygenic score explained a remarkable 9.1 per cent of the variance in age-16 GCSE results in a sample of 4,300 British teenagers. Because of this impressive effect size, Selzam and colleagues said their study “…represents a turning point in the social and behavioural sciences”. This is a little over the top: I see it more as an incremental (yet vital) step in a long march of studies that are changing the way we think about education.
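To put those effect sizes in context: for a simple linear predictor, variance explained is the square of the correlation between score and outcome, so the figures quoted above translate into correlations as follows (a back-of-the-envelope calculation, not a number from either paper):

```python
import math

# R^2 (variance explained) equals the squared correlation r for a
# simple linear predictor, so r = sqrt(R^2).
for r2 in (0.02, 0.091):  # ~2% per outcome in Belsky et al.; 9.1% in Selzam et al.
    print(f"R^2 = {r2:.3f} -> correlation r = {math.sqrt(r2):.2f}")
```

So even the “remarkable” 9.1 per cent corresponds to a correlation of about 0.30 — useful at the group level, but far from deterministic for any individual.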

The polygenic scores are already pretty good predictors: in Selzam’s study, they have just about half of the predictive value of asking about the parents’ socio-economic status, or testing the child’s IQ at age 7 (and the scores are based on DNA variants that are unchanged since birth and can be measured with a simple saliva or blood test). As we discover more about the genetics of education, the predictions will become more and more powerful. Then what? Do these studies usher in a new era of genetic testing to select children for different types of education? Not quite.

First, these results ‘explain variance’ at the group level, and we can’t easily translate this to individual prediction – yet. To do that, we’d need a representative reference panel with which to compare individuals, like the standardization samples used in IQ research. It’s only a matter of time before this becomes a reality for many traits. Second, as the authors of these new polygenic studies note, these results don’t necessarily have clear policy implications: some might argue that we should use the scores to predict who will do best in school and select them into higher-quality education; others may argue we should use the scores to identify those who are likely to struggle. Some will suggest both, some neither.

These varying interpretations – and the quickly advancing science – are why it’s crucial to begin a proper, fact-based debate about the potential uses of these genetic predictors. Is it acceptable to predict people’s educational attainment from a genetic score in practical settings, or should these methods only be used in research? Is it just like giving children an aptitude test, like the 11+, early in life, and if so do the same caveats apply? What if the long-term consequences of this genetic research involve screening and selecting embryos for higher polygenic education scores, as has previously been suggested? The only way to answer these thorny ethical questions is to understand the details and limitations of the genetic research: ignorance and denial are no longer an option.

Post written by Stuart J. Ritchie for the BPS Research Digest. Stuart is a Research Fellow in the Centre for Cognitive Ageing and Cognitive Epidemiology at the University of Edinburgh. His new book, Intelligence: All That Matters, is available now. Follow him on Twitter: @StuartJRitchie

Great article – thanks. It surprises me how researchers still completely ignore the role of the microbiome in gene expression. There may be gaps in the detail of microbial influence but the evidence the influence exists is irrefutable.

You should have kept reading, Shawn, as you clearly need to learn more about psychology. Your article’s fact-free discussion of autism is evidence of that. I would strongly encourage you to take a psychology degree (a BPS-accredited one if you’re based in the UK) if you’re interested in these issues.

The microbiome is itself heritable. Even if microbial factors caused intelligence and other behavioral traits (which actually isn’t completely preposterous), the traits would still be just as heritable as if they were “directly” caused by genes.

Of course, there is little by way of evidence for the causal role microbial factors play in intelligence or behavioral traits. For all we know, any link we see could mean that they’re just along for the ride.

In any case, all human behavioral traits are heritable (indeed, this is the First Law of behavioral genetics, verbatim), including intelligence. See:

Sorry Jayman, your hypothesis is outdated, autistic and silly. First off, the human brain is not intelligence, it’s a survival mechanism optimized for selecting immune and other compatibilities. In other words, cognition is an illusion: the human brain is just a feedback algorithm, a simulator based on the feedback. It’s quite eloquent if you understand feedback loops. Even better if you understand the algorithmic pattern programming from the genetic levels, the biological differential stability network.

There’s a mutant gene that’s responsible for the growth of Europe, as the military capital of the world… it is a virus hosting human gene that compensates for the autistic damage. It spread throughout Europe thanks to Rome’s conquests and interbreeding with natives. It matters when a particular gene makes it possible for the photographic memory to enhance the autistic dying brain… it prevents it from over pruning by overgrowing the synapses and cells. The viruses specifically target the PKR pathway to increase their replication rates and the byproduct of this dysregulation of the protein replication rates that affect the neural transmitters and synapse formation rates (why memories encode too many details and trigger too easily, creating repeating behaviors). Same thing shows up in PTSD patients…. By the time the black plague comes around, it wipes out most of the people with the gene that allows them to be the virus host, except for the few that are also immune to the plague and hosts to a virus. This slows down the development but the gene is partially everywhere by the end of the 20th century. It recombines in about 1/1000 people but the virus is virtually everywhere by the late 1900s.

The gene isn’t for innovation; innovation is more of a problem solving technique applied to available resources and usually that is just time and energy. You can create anything with time and energy. In general the gene for autism failures in healthy individuals result in more inhuman aggression (detached, no empathy) and genocide. When the virus switches the brain to become OCD/autistic, there’s no way to disengage the resulting “will to reproduce” and the brain becomes a slave to go after wealth and power and is pushed into wanting more and more power, whatever that is to the person (learning ever more, getting famous, political power, becoming rich, having power over other people; businessmen/bosses who like to boss people around, policemen, domination of people for sexual pleasure, and numerous other sources of a feeling of power). The higher you go in the ranks or the hyper intelligent people, those are the ones you see it in most often.

The virus host gene is a knockout gene, meaning usually it only serves its purpose to disrupt and take revenge for harming the situation… that’s what a photographic memory does along with the anxiety induced chronic virus condition, “something is wrong” mindset. So in other words, you have the activation of “genome” code that is normally suppressed but is more than likely being blocked by an active virus that is attempting to block how the immune system is attacking it but it is helping out one of the most ancient viruses that affected our human ancestors. Again, the genome mutations can alter how the immune system suppresses parts of the genome from generation to generation resulting in this embedded virus being more active in some people as soon as their immune system is slightly compromised vs others require multiple compromising viruses/health issues. Mostly it is the severity of the brain alterations that cause the “disorder” to be diagnosed. The long term impact of how the viruses alter the brain can be huge, i.e. sensory level delusional activation due to regional overactivation and non-coherent patterns across the cerebral cortex, which show up when too many different regions of the brain activate simultaneously.

The truth is that if you get a particularly nasty virus when you are young, then you are a prime suspect for developing the overactive brain that is edging towards the schizophrenia brain patterns depending on which proteins are over or under produced once the virus starts operating its control functions. So in other words, not all human behavioral traits are heritability based.

Interesting, but lots of generalisations here about the relative impact of basic Nature vs Nurture. Yep, we are born with a genetic blueprint that limits or enhances our future potential. So, first stop: look at the ‘wolf boy’ studies and the like. You can be born with all the genetic potential in the world, but you’ll never realise it if you aren’t exposed to key stimuli (human language, culture etc.) at key stages in babyhood and childhood. So, no exposure to humans, no language or pretty much anything else in cognition and socialisation.

Adolescence is another wild melting pot, where more of our intellect and behaviours are further reshaped and reformed. Permanently. I was blessed with identical twins shortly after getting my Psych. degree, fresh from all the experimental Obs. of staff and student children. As a natural ‘monitor/evaluator’, I was keen to validate the main assumptions, or not…

My identical twins were equally sporty and bright, but, then, they had the same upbringing and went to the same schools, so no real surprise. More spookily, they also demonstrated some telepathy in controlled tests. Their aptitudes and attainment overall seemed very similar.

Personality: some likenesses and some differences, not conclusive. In terms of ‘who’ they are in essence (emotions, warmth, sociability), however: massive differences.

Even at playschool the other children ‘saw’ each child as a totally different ‘person’. One made them feel warm, the other was more ‘spiky’. Even I couldn’t tell them apart to look at, but each attracted very different friends.

“More spookily, they also demonstrated some telepathy in controlled tests.” Really? What did you spend the million dollars on? You know, from winning the James Randi Educational Foundation prize for demonstrating paranormal abilities under controlled scientific conditions. How did you feel when you got the Nobel Prizes for fundamentally shaking the foundations of both physics and medicine? More prosaically, could you give us the reference information for the peer-reviewed journal article that details your method and results?

You can not be serious, Grimbeard! Randi ‘always has an out’, simply blocking applications he doesn’t like — see for example https://weilerpsiblog.wordpress.com/randis-million-dollar-challenge/ to see the process in action.
“To get on a blog and tell people you are psychic is to have skeptics immediately invite you to take magician James Randi’s million dollar challenge. … I will demonstrate here that there is no reason to take this challenge seriously.”

whenever one of these things makes it to the popular press it makes me sad. the whole thing is meaningless.

for instance, since when is 9% R^2 “pretty good”? polygenic scores can be slightly interesting from the point of view of animal/plant breeding, where they might actually be usable/measurable. but human GWAS is too much of a hot mess for these things to be even slightly interpretable. what conceivable use does a measure have when it maybe sometimes can explain 1% of the variance? recall that polygenic scores are, basically, linear combinations of dozens of features–when there is no plausible reason to believe that these effects are statistically independent–so you’ve basically overfit the crap out of what little signal there is, and gee whiz, turns out the prediction is crap also.

if this is as good as anyone can do, maybe it’s time to think about something other than the genetic basis of these traits, and more importantly something that can actually be fixed with known large effects (nutrition, poverty, teacher quality etc.). the worst part is that we’ve known this for >40 years: http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC1762622&blobtype=pdf

until we solve the basic problem that heritability measures are near-hopelessly unstable, this kind of study is just more noise harvesting; genetics of behavior is a much worse reproducibility fiasco than other issues in e.g. psychology, because in genomics there might as well not be a statistical significance filter. if you test a gazillion hypotheses, a bunch will pass whatever threshold you might happen to set. some might even be real signal! but if your odds ratio is 1.01, there is no perceptible reason to care.

Many of these genes are likely to be correlated with visible differences. There is extensive and robust evidence that individuals, and society at large, favour people based on “attractiveness”, height, and, of course, gender and ethnicity. This discrimination ranges from the subtle and unconscious, through to overt and systemic. It is inevitable, therefore, that the level of “success” associated with these genetic factors will be due, at least in part, to the way society treats people with different traits. Even where a genetic condition (e.g. Down’s syndrome) may have a direct effect on ability, the extent to which this impacts on “success” depends hugely on how society supports, adapts to and even celebrates such difference.

I wonder if Stuart Ritchie fully understands what he’s subscribing to. Polygenic scores may be all the rage, but they are a fudge. A decade’s efforts to identify individual bricks that build the house have failed: let’s just throw a few thousand suspicious bits of material together to see if they at least make a start. In a host of assumptions like additivity and linearity of effects, it entails a highly naïve – indeed outmoded – view of the gene and biological systems (see https://thepsychologist.bps.org.uk/so-what-gene).

But there is a deeper irony. Educational attainment at age sixteen – however measured – is a promotion ticket to higher status but a very poor predictor of success either in future education or in the world of work. School SATs (or, in the USA, SAT or Grade Point Averages) are poor predictors of college performance. That applies even to A levels. A 2012 review notes that “in U.K. data, a small correlation was observed between A level points and university GPA (r = 0.25), again reflecting previous findings.” As the authors say, that could be due to non-cognitive factors: “self-efficacy was the strongest correlate (of 50 measures)”(1). Elizabeth Kolbert wrote in the New Yorker that “the SAT measures those skills – and really only those skills – necessary for the SATs” (2).

And, like IQ, school grades have little if any statistical relationship with job performance (3). In a chapter in the Encyclopedia of the Sciences of Learning (2011), J. Scott Armstrong has claimed that the relationship between grades and job performance is low and is becoming lower in recent years. Trudy Steinfeld, executive director of the Wasserman Center for Career Development at New York University, has deplored the focus on grade scores as longer-term predictors of job performance: “Nobody even cares about GPA after a few years,” she says (4).

It is ironic that just as psychologists are claiming to have found genes for predicting success the wider world is eschewing their criteria of success as useless. In an interview with the New York Times (June 13, 2013) Laszlo Bock, a vice-president of human resources at Google said, “One of the things we’ve seen from all our data crunching is that GPAs [Grade Point Averages] are worthless as a criteria for hiring, and test scores are worthless – no correlation at all except for brand-new college grads, where there’s a slight correlation”. In the UK, as reported by the BBC (January 18, 2016), major employers are abandoning reliance on educational attainments because there is no clear link between holding a degree and performance on a job.

So whence that “genetic association” Plomin and colleagues are now proclaiming? The method entails a formidable battery of assumptions, data corrections, and statistical maneuvers. But the most fatal assumption is that human societies can be treated as random breeding populations in randomly distributed environments with equally random distributions of genes.

On the contrary, human populations reflect continuous emigration and immigration. Immigrants with related genetic backgrounds tend not to disperse randomly in the target society. In their flow to jobs they concentrate in different social strata. This creates (entirely coincidental, non-causal) correlation between social class and genetic background persisting across many generations. For example, the Wellcome Trust’s “genetic map of Britain” shows strikingly different genetic admixtures among residents of different geographic regions of the United Kingdom (5).

This is what is called “population structure.” As Evan Charney notes, it is “omnipresent in all populations and it wreaks havoc with assumptions about ‘relatedness’ and ‘unrelatedness’ that cannot be ‘corrected for’ by the statistical methods [devised]” (5). (For further critique of statistical maneuvers see ref 6). The correlation is simply a sophisticated re-description of the class structure of British society, its deeper history and all its corruptions including differential effects on childhood preparedness for schooling (7).

In his recent Edge interview (Sep 24, 2016), Robert Plomin says, “I’m frustrated at having so little success in convincing people in education of the possibility of genetic influence. It is ignorance as much as it is antagonism.”

On the contrary, it could be judicious circumspection. In 2013 Plomin gave evidence to the Government’s Select Committee on Education. The Prime Minister has just announced the return of grammar schools and selection at eleven years. Is there a connection? I doubt Plomin would even wish such a thing. But there are unintended consequences. History shows that anyone committed to a “genes as destiny” narrative, and a mythological meritocracy, based on nothing but mountains of correlations, needs to tread very cautiously.

6. A.V. Rubanovich, N.N. Khromov-Borisov Genetic Risk Assessment of the Joint Effect of Several Genes: Critical Appraisal. Russian Journal of Genetics, 2016, Vol. 52, No. 7, pp. 757–769.
7. SES at some distant time from current sampling, by the way, is a rather poor descriptor of social class (see ref 3).

(Ken Richardson’s new book Genes, Brains and Human Potential: the Science and Ideology of Intelligence, will be published in early 2017 by Columbia University Press).

They have found zero genes. Those studies are not genetic studies; they are simply associations based on assumption. They assume effect sizes and then squeeze out as much as they can get with all kinds of formulas based on more assumptions. They provide zero genetic evidence; they are basically allele counting and making unsubstantiated claims. That’s why they keep asking for larger and larger populations. None of that 9% (in that sample) is actually proven to do anything; it might as well be zero per cent.

Read up on Evan Charney’s post-genomics research. The entire genetic model these studies are based on is being proven wrong by modern molecular genetic studies, e.g. epigenetic/gene-expression and environment studies.