Are you saying intelligence test scores are the most important thing about a person?

No. No person can be reduced meaningfully to a test score. I am saying that, like it or not, the differences among people in their general ability to solve problems and learn complex material are important aspects of life success. Intelligence test scores estimate this general ability and the scores predict many things. But test scores are not perfect predictors because there are many things that influence any measure of success. The predictions made by a test are best thought about as probabilities. Intelligence by itself is one of many attributes that contribute to the way a person navigates through life. Intelligence without judgment or character, of course, may not serve a person well. Intelligence does not guarantee happiness, health or likeability. Nonetheless, intelligence is a key to being human and we should understand where it comes from and how it develops. Intelligence tests are necessary tools for researching these questions.

What does an IQ point measure?

IQ points and scores on all intelligence tests are indirect estimates of reasoning ability. There is no direct measure of intelligence like the direct measures of distance or weight. Four feet is twice the distance of two feet and 10 pounds is twice the weight of five pounds. A person with an IQ score of 140 is not twice as “smart” as a person with a score of 70. This inherent measurement problem is a limitation for intelligence research but test scores do have meaning relative to other people. That’s why test scores typically are referenced as percentiles. An IQ score of 130, for example, puts a person statistically in the top 2% of people. Ranking people on IQ scores is what predicts things like academic success or income. For example, the top percentiles of people on IQ test scores are also in the top percentiles of income. There are many individual exceptions, but generally there is a relationship between intelligence and income. This should not be surprising since jobs and professions that pay more often require more complex thinking. Intelligence correlations with other variables like longevity are perhaps more surprising but the message is that intelligence test scores are meaningful despite measurement issues.
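
As a concrete illustration of scores having meaning only relative to other people, the percentile for a given IQ score can be sketched from the normal distribution that modern tests are normed to. The norming convention here (mean 100, standard deviation 15) is a standard assumption of this sketch, not something stated in the passage above:

```python
from statistics import NormalDist

# Most modern IQ tests are normed to mean 100, standard deviation 15
# (an assumption of this sketch, not a law of nature).
iq = NormalDist(mu=100, sigma=15)

score = 130
percentile = iq.cdf(score)   # fraction of the population scoring below this
top_share = 1 - percentile   # fraction scoring at or above it

print(f"IQ {score} is at the {percentile:.1%} percentile "
      f"(top {top_share:.1%} of scorers)")
```

Running this places an IQ of 130 near the 97.7th percentile, i.e. roughly the "top 2%" cited above.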

Aren’t IQ tests biased against some groups?

There is no research evidence that standard intelligence tests developed with sophisticated statistical methods (called psychometrics) are biased for or against any group. If there were bias against a group, individuals with low scores might consistently get excellent school grades, or persons with high scores might consistently get bad grades. Both these combinations happen in individual cases, so we all can think of such examples. Nonetheless, these generally are exceptions; that’s why IQ scores are not perfect predictors in any individual case. The data show that IQ scores predict academic success, for example, equally well for all groups, indicating that the tests themselves are not biased. Note that a difference in an average measurement between two groups does not necessarily mean the measure is biased. For example, on average men are taller than women; no one would conclude this result comes from a bias in tape measures against women. However, since we do not have the equivalent of a tape measure for intelligence, the anti-bias argument is not so obvious.

Are computers that beat humans playing chess, Go, or Jeopardy smarter than people?

As machine software becomes capable of learning from mistakes and improving performance, it becomes more difficult to answer this question, especially with respect to general intelligence that is used across many situations outside of games with prescribed rules. The answer to this question will become even more complex as computer hardware can be designed based on the way the brain actually works. At some point “artificial” intelligence in machines might be replaced by “real” intelligence.

If intelligence differences among people are mostly genetic, should we waste time trying to increase intelligence?

Like test scores, genes are best thought of as probabilistic rather than deterministic. Genetic influences on complex characteristics like intelligence are themselves quite complex. Some genes are deterministic, meaning that if you have the “bad” gene, you get the characteristic. This is the case with some diseases, and in the 21st century such examples are also examples of hope for discovering ways to correct the “bad” genes. But for intelligence, the data indicate many genes are involved, and until we identify specific genes in this large set we won’t know which genes are sensitive to environmental influences and what combinations of genes are most important. Once these things are understood, there likely will be methods to manipulate the salient genes to increase general intelligence and, perhaps, even specific mental abilities like music or math. Meanwhile, there is nothing wrong with trying to maximize the use of a person’s intelligence through education and supportive environments. In my view, neuroscience doesn’t yet have much to help parents and educators accomplish this worthy goal. However, the more intelligence is influenced by genes, the more likely it is that someday we will know how to manipulate those genes to increase intelligence, perhaps dramatically.

Are you saying that family and early environment don’t influence IQ?

One of the most surprising findings from behavioral genetic studies of intelligence is that the influences of family and other environments are relatively small compared to genetic influences. Environmental influences on intelligence are stronger in children but almost disappear by the teen years. This is not a popular finding, but it might make sense from an evolutionary perspective given that the environments of early humans were mostly harsh and unpredictable. However, since genetic potential unfolds within an environment, research on gene/environment interactions (epigenetics) is an important but nascent focus in human neuroscience studies. Ironically, progress on understanding environmental influences may accelerate once specific genes for intelligence are identified.

Are you actually suggesting that poverty and economic disadvantages are brain or genetic problems?

It’s hardly popular to suggest that some individuals have limited potential for education and economic success due to genetic influences on intelligence. To the extent that intelligence is a major factor in education and economic success, and not the other way around, it’s time to consider that some persistent social problems result, in part, because many individuals lack the requisite mental abilities to succeed on their own, even modestly, in the modern world. About 51 million Americans have IQ scores below 85. To the extent that intelligence has major genetic inputs, we are faced with the uncomfortable possibility that some part of poverty and low socioeconomic status (SES) is driven indirectly by genetics. I call this piece of the problem “neuro-poverty.” It’s a hard-edged concept, and the natural reaction among many fair-minded people is to reject it in favor of more obvious and possibly more malleable environmental drivers. My interpretation of the data may be incorrect, but I stand by the need to examine the concept of “neuro-poverty” and its implications. For me, the implications lead directly to a strong role for government programs that support people in need, through no fault of their own, both materially and with dignity. I am optimistic that in the long run, an understanding of the neuroscience basis of intelligence might alleviate some aspects of persistent social problems.
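
The 51 million figure can be checked with back-of-the-envelope arithmetic, assuming IQ scores are normally distributed with mean 100 and standard deviation 15, and a US population of roughly 320 million (both figures are assumptions of this sketch, not claims from the passage):

```python
from statistics import NormalDist

# Standard norming assumption: IQ ~ Normal(100, 15).
iq = NormalDist(mu=100, sigma=15)

# Rough US population at the time of writing (an assumption for illustration).
us_population = 320_000_000

share_below_85 = iq.cdf(85)                    # fraction scoring below 85
people_below_85 = share_below_85 * us_population

print(f"Share of distribution below IQ 85: {share_below_85:.1%}")
print(f"Estimated count: {people_below_85 / 1e6:.0f} million")
```

About 15.9% of a normal distribution falls more than one standard deviation below the mean, which on a population of 320 million comes to roughly 51 million people, consistent with the figure above.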

What is the relationship between intelligence and education?

The pace of learning complex material and the amount of material learned are related to general intelligence. Bright students typically learn more material and learn it faster. It would be quite surprising if intelligence and learning were unrelated. Given this basic relationship, here’s a mystery: why is the word “intelligence” absent from virtually every discussion about education? Every teacher knows that each student comes to class with a unique combination of mental-ability strengths and weaknesses. Educators try to maximize how each student applies these abilities. Shouldn’t what we know about intelligence be part of the discussion about how best to maximize learning for individual students? Many of the problems with the Common Core program could have been avoided by paying attention to robust findings from intelligence research. For example, holding all children to a college-ready standard is not realistic and results in poorer performance overall.

If intelligence is so important for success, why do smart people do dumb things?

Humans don’t rely solely on intelligence for making decisions. Remember that Star Trek’s mega-rational Mr. Spock is fictional (and half alien), and arguably not a fun guy. Emotions usually play at least some role, even if unconscious (intuition). Neuroimaging suggests largely separate neural networks for emotion and intelligence. Perhaps there is more or less overlap in these networks across individuals, or perhaps emotional decisions have some priority in many situations based on our evolutionary history: better to run immediately when afraid rather than think about what might be causing the fear. The fact that smart people do dumb things does not negate the important role of intelligence in everyday life, but it does underscore that intelligence is not the only important thing. If stupidity were regarded as a disease, we might have a National Stupidity Institute to find a cure by funding neuroscience studies of intelligence to address this question.

Isn’t there anything I can do to increase intelligence for my children or me?

In my opinion, the weight of evidence doesn’t support any claims about increasing intelligence by any means. If there were a way, I’d be the first in line. I believe that dramatic increases may be possible once we understand the basic neuroscience of intelligence. This is a formidable goal, but imagine what it would be like to learn more, learn faster, and see complex relationships more clearly. Not everyone may dream about this possibility, but having the ability to increase intelligence really would change everything.

Proof That Tetris Makes You Smarter
http://www.cambridgeblog.org/2016/10/proof-that-tetris-makes-you-smarter/
Wed, 26 Oct 2016

Why is neuroscientist Richard Haier in the Guinness World Records Gamer’s Edition 2008? The surprising reason is his neuroimaging study of Tetris. It was the first to show how the brain worked when learning a computer game. Dr. Haier discussed the Tetris study at the World Science Festival (WSF) in NYC in 2015. The “Tetris” video shows what he had to say about how the brain became more efficient after 50 days of practice.

The longer video from the WSF shows the full context of Dr. Haier’s Tetris remarks as he discussed whether IQ could be increased by listening to Mozart or by memory training. The problem is demonstrated with a memorable video clip of a chimp performing a memory test. Watch the full World Science Festival 2015 here

In the last video, Dr. Haier introduces his new book, The Neuroscience of Intelligence (Cambridge University Press, 2017) in which he updates the latest remarkable research on increasing intelligence, predicting IQ from neuro-images, and the hunt for intelligence genes.

How much smarter will we be in 100 years?
http://www.cambridgeblog.org/2016/06/how-much-smarter-will-we-be-in-100-years/
Wed, 22 Jun 2016

Participants:

James R. Flynn, University of Otago, New Zealand

Richard Haier, University of California, Irvine

Robert Sternberg, Cornell University, New York

What does the future hold in the research of intelligence? How much smarter will we be in 100 years’ time?

James Flynn:

The triggers modernity uses to raise IQs have limits: more years of formal schooling; more adults than children in the home (there are now more solo-parents); more cognitively demanding jobs (trends are toward more service work); more leisure time spent in cognitively demanding pursuits.

Instead of IQ, we should focus on how to realize the potential modern minds already possess.

The modern world asks us to take the hypothetical seriously and use logic to analyze principles. When Martin Luther King marched in 1955, young men had dialogues with their parents. “What if you woke up tomorrow and had turned black?” Reply: “That is just dumb; who do you know that turned black overnight?” Question: “If a war killed so many foreigners to save 3,000 Americans, would you get off the boat at 10,000 or 100,000 or one million?” Reply: “That is not my concern; their government protects them and our government protects us.” My father simply did not take the hypothetical seriously, but today we do.

We have better moral reasoning, but need more than “good hearts”. Forty-nine per cent of US high school seniors (2001) read little or nothing for pleasure; for university seniors (2005) the figure was sixty-three per cent. We send troops to the Middle East without ever having read a history of the area or its literature (which tells us something about the people who live there). We are bombarded with internet “information” but lack the skill to make sense of it: the simple concepts that lie behind understanding elementary economics, what goes on in international politics, and what makes a bad moral argument. My test (Flynn’s Index of Social Criticism) showed that the graduating class of a distinguished US university had no critical understanding outside their vocational field. We should start measuring these things and do something about them.

Robert Sternberg:

Smarter? The Flynn effect shows that IQs have increased over time. But have people actually become smarter? The challenges of terrorism, climate change, war, hunger, diverging incomes, immigration, and the like have not gone away over time. If anything, they have gotten worse. So we have a “smart” world on the verge of destroying itself. Will the world still be here in 100 years’ time? When I lived in Oklahoma, the main worry was tornadoes. I experienced no earthquakes. Now, as a result of fracking, Oklahoma has more earthquakes than California. In many Asian cities, people barely can breathe the air. Many countries that 20 years ago did not have dictatorships now do. We are making a mess of the world for lack of wisdom, in part because we are so stupid as to rely on IQ tests and their surrogates (ACT, SAT, etc.) to make decisions about who should have educational and leadership opportunities.

The laugh is on us: We have been so stupidly fixated on IQ that we have ended up with a bunch of leaders who have gone to prestigious universities and had high test scores—but who are leading the world to destruction.

If we don’t really get smart, perhaps we will leave our inheritance to some other species. Some people ask why extraterrestrials have not visited the Earth. Perhaps it is because as they got smarter, they became more foolish, and ended up blowing themselves up before they had interstellar space travel. We certainly are on the road, at the same time we commend ourselves for our rising IQs. We have become the blind leading the blind, and that includes many of the scholars in the field of intelligence who have glorified IQ while watching those with the most of it making the world an ever-worse place in which to live.

Richard Haier:

I believe we are entering a Golden Age of intelligence research that will lead to ways to dramatically increase intelligence based on a neuroscience understanding of the brain mechanisms that are relevant for general intelligence and for specific mental abilities like mathematics and music, to name only two. In the near future, neuroscience may help reduce the widely acknowledged education achievement gaps much more dramatically than any social efforts tried so far. In a sure-to-be controversial part of my book I introduce the concept of neuro-poverty to emphasize the relationship between intelligence and economic success in the modern world, and the role genes clearly play in the development of intelligence. A hundred years from now, methods to create all kinds of geniuses might be commonplace if the methods are available to everyone fairly. This is no small public policy problem. It is why an understanding of the neuroscience of intelligence is not arcane. There are important educational, social and national implications that cannot wait a hundred years. The pace of progress is so fast that we may see profound implications in a decade. If there is one message in my book, it is that neuroscience research on intelligence is advancing rapidly. It’s time to think about where it’s going and what it means now.

How can current research inform the development of new methods to assess intelligence?

James Flynn:

If we mean the kind of intelligence that IQ tests at present measure, the Wechsler tests plus Sternberg, I doubt there will be any new breakthroughs in measuring intelligence on the psychological level, at least in fully modern societies. Measurements on the level of brain physiology are dependent on IQ test results to map what areas of the brain are active in various problem-solving tasks. One suggestion should be set aside: that we use measurements of things like reaction times (how quickly a person can press a button when perceiving a light or hearing a sound) as a substitute for IQ tests. They are subject to differences in temperament between people, stop increasing far too young to capture the maturation of intelligence, and are highly subject to practice effects.

I do not know enough about creating tests for pre-industrial societies to comment. However, even the use of “our” tests there can be illuminating. In the Sudan, there was a large gain on Object Assembly and Coding, subtests responsive to modernity’s emphasis on spatial skills and speedy information processing. There were moderate gains on Picture Arrangement and Picture Completion, subtests responsive to modernity’s visual culture. As for the “new ways of thinking” subtests, Block Design and Similarities, they actually showed a loss. On the “school-basics” subtests of Information, Arithmetic, and Vocabulary, there was only a slight gain. Diagnosis: no real progress toward modernity. They still have traditional formal schooling based on the Koran, and have not learned to use logic on abstractions and to classify. Their entry into the modern world is superficial: just access to radio, TV, and the internet. However, the profile of other nations (Turkey, Brazil) is more promising. If they continue to develop economically, their average IQs will equal those of the West.

Robert Sternberg:

We have developed what we believe to be better tests that measure not only the analytical aspect of intelligence but also the creative, practical, and wisdom-based ones. For example, an analytical item might ask an individual to write an essay on why her favourite book is her favourite book, or perhaps comparing the messages of two books. A creative essay might ask what the world would be like today if the American Revolution had never taken place, or if computers had never been invented, or if weapons were made illegal throughout the entire world. Another creative item might ask people to draw something creative, or to design a scientific experiment, or to write a creative story. A practical item might ask an individual how he persuaded someone else of an idea he had that the other person initially reacted to sceptically. Or it might ask the individual to say how he would solve a practical problem, such as how to move a large bed to a second floor in a house with a winding staircase. A wisdom-based item might ask a person how, in the future, she might make the world a better place; or an item might ask her to resolve a conflict between two neighbours, such as over noise issues.

We have found that, through these tests, it is possible clearly to separate out distinct analytical, creative, and practical factors. These tests increase prediction not only of academic achievement (compared with traditional analytical tests), but also of extracurricular success. Moreover, they substantially reduce ethnic/racial group differences. And students actually like to take the tests, something that cannot be said for traditional tests.

Richard Haier:

There is research oriented toward measuring intelligence using brain speed, as indexed by reaction time in solving mental test items.

There are major advances in using neuroimaging to predict IQ scores from structural and functional connections in the brain. Just after I finished writing my book detailing these advances and noting that none were yet successful, a new study found a way to create a brain fingerprint based on imaging brain connections. The researchers reported that these brain fingerprints were unique and stable within a person. Amazingly, they also found these brain fingerprints predicted IQ scores: truly a landmark study. Fortunately, I was able to add it to my book in time. One implication of this kind of research is that intelligence can be measured by brain imaging. Interestingly, a brain image now costs less than an IQ test. If a brain imaging method to assess intelligence also turns out to predict academic success (as it should), an MRI scan might replace the SATs for much less than the cost of an SAT prep course (and you can sleep during the MRI).

James Flynn:

Media advances must play some role, but probably a modest one in the context of a comprehensive explanation of cognitive gains over time: (1) Ultimate causes are the industrial revolution and the trend toward modernity; (2) Intermediate causes are the effects of industrialization on society, more education, emancipation of women, smaller families (with a better adult to child ratio), more cognitively demanding jobs, more cognitively demanding leisure, and finally – a new pictorial world from television and the internet; (3) Proximate causes have to do with how people’s minds altered, so that in the test room they could do better when taking IQ tests (for example, better at classifying and induction).

Greenfield argues that popular electronic games and computer applications require enhanced problem solving in visual and symbolic contexts. Johnson points to the cognitive demands of video games, for example, the spatial geometry of Tetris, the engineering riddles of Myst, and the mapping of Grand Theft Auto. He shows convincingly that today’s popular TV programs make unprecedented cognitive demands. The popular shows of a generation ago, such as “I Love Lucy”, “Dragnet”, and “Starsky and Hutch”, were simplistic, requiring virtually no concentration to follow. Beginning in 1981 with “Hill Street Blues”, single-episode drama began to be replaced with dramas that wove together as many as 10 threads into the plot line. An episode of the hit drama “24” connected the lives of 21 characters, each with a distinct story.

But does the content of TV act as cause or effect? Its level of cognitive complexity has risen, but is that because other factors have fashioned people who are more prepared for it, or is it a cause in its own right?

Robert Sternberg:

Media forces are changing what intelligence is.

If you look at intelligence tests from the mid-twentieth century, some of them (such as those based on Louis Thurstone’s theory) used arithmetic computation problems to measure number ability. But today, such a test would seem dated, as people can do computations on a calculator or a computer. Similarly, tests of spelling would seem dated because of the prevalence of spellchecks. Memory used to be viewed as central to intelligence, and for many, still is. But the skills needed for adaptation today are often not in remembering information; rather they are in effectively retrieving information. With the Internet, most of the information one needs is available but may not be easily accessible—the challenge is to find it and then evaluate its validity.

Media are having other effects on intelligence, some of which may be pernicious. For example, I believe people are having more difficulty concentrating these days and sticking to one task. Rather, they have become multi-taskers, trying to do many things at once. But research shows that people often are not very good at multi-tasking. Often, they think they are better at it than they really are. Further, many television shows are presented at a really low level of intellect and encourage the worst in us. In the US, there is now an election where the same kind of trash talk that many of us have abhorred in television programs is leading to a rather successful presidential political campaign. The kind of trash talk, lack of logic, fluidity of position, and lack of substance that can lead to success today probably would not have worked 50 years ago. Frighteningly, IQs have gone up since 50 years ago (the Flynn effect), meaning that whatever IQ tells us, it’s not about people’s skill in analysing real-world information.

Richard Haier:

Jim Flynn has written about this in the context of his original observation that IQ scores have been slowly going up for the last several decades. While there is some debate about whether the increase is a g-factor effect or not, there is general agreement that the increases are driven at least in part by factors made possible by technology advances (think Sesame Street and TV in general). I believe such innovations might help maximize a person’s natural (god-given, genetic) intelligence. Nothing is wrong with this idea, but the “Flynn Effect” is a generational effect, not necessarily a potent effect for any individual. There are countless claims about using computer games and memory training to enhance intelligence for individuals. My book details the independent research on these claims. In my view, the evidence does not support any of the claims. Nonetheless, the Flynn Effect is an important mystery, and research on solving it speaks to my view that the goal of all intelligence research is to enhance intelligence. Jim’s newest book is about intelligence and the role family may play in its development. From my perspective, any family, technology, or environmental effect on intelligence must work through biology that influences the brain, so I see this question as central to a neuroscience perspective.

What role do IQ tests play in measuring intelligence?

Richard Haier:

There is a broad recognition among psychologists that intelligence testing is one of the great accomplishments in the field. This is not to say there are not problems. There is a history of misuse and a history of misunderstanding what an IQ test measures. It is critically important to understand that all intelligence tests estimate but do not measure intelligence in the same way that a yardstick measures distance or a scale measures weight. Intelligence is not like distance or weight and IQ points are not the same kind of measurement as an inch or a pound. Ten pounds is literally twice as heavy as five pounds but someone with an IQ of 140 is not literally twice as smart as someone with an IQ of 70. IQ points have meaning only relative to other people (norms). This is a fundamental problem even though intelligence test scores predict many things quite well. I believe we are moving closer to having a measurement of intelligence that is more like measures of distance or weight. I’m talking about quantifiable measures of brain variables like processing speed or glucose metabolic rate or gray matter volume that might be translated into measures of intelligence. This is the next step beyond the limits of psychometric approaches (paper and pencil tests that compare an individual’s score to a normative group). A new generation of intelligence researchers is moving toward this goal and if combined with neuroscience approaches, there is every reason to expect great advancements in our understanding of intelligence and why some people are smarter than others.

James Flynn:

Within Western culture, higher IQ scores over time chart a fascinating progression from the people of 1900 to ourselves.

Today’s adults really do have larger vocabularies and can read more widely and converse more “intelligently”. We really can better perform the tasks modern schooling and jobs (the professions, computer programming) demand. A whole new world forces us to use logic on symbols far removed from the concrete world. Pre-modern people see fish as having nothing in common with crows. You can eat one and not the other; one swims, the other flies. We divide creatures into categories that are non-observable but offer understanding: whales are more akin to land animals than fish; the tiny hyrax is more akin to the huge elephant than to the rodents it resembles. Our whole picture of the universe (and even our approach to explaining human behaviour) is based on logic and abstractions. No one has ever observed the “x” of algebra.

Are we more “intelligent” than our ancestors? I say: we can attack a wider range of cognitive problems, but they were equally capable of solving the problems of their time; our brains would look different at autopsy because we have exercised them differently, but look no different at conception. That is all you need to know and adding the label “more or less intelligent” adds nothing.

Robert Sternberg:

IQ tests are somewhat useful for measuring the analytical aspect of intelligence. I say “somewhat” useful because they will tell you different things depending on the kind of environment in which a person grows up. Urban children generally have an advantage over rural ones; children who are tested in a second language are at a disadvantage, as are children whose parents are uneducated. The problem is not with the IQ tests, per se, but rather with society’s tendency to overinterpret and often misinterpret the results.

First, they have been used to predict achievement, whether grades in school or scores on achievement tests. But the best predictor of future grades in school is past grades in school and the best predictor of future achievement test scores is past achievement test scores. We don’t need IQ tests to predict achievement. One might argue that the idea is to predict achievement from ability rather than from achievement, but IQ tests are achievement tests, albeit disguised ones. They measure the achievements one was supposed to have acquired earlier in one’s life.

Second, they have been used to assess learning disabilities. But you do not need IQ tests to assess learning disabilities. Research has suggested that children who have a weakness in a particular area need remediation in that area, regardless of IQ. If someone is a poor reader, the person needs remediation in reading, regardless of IQ.

Third, they have been used to identify students for gifted programs. But we don’t need IQ tests for that. If we care about gifted achievement, we should identify students on the basis of what they have achieved, not on the basis of a supposed ability test.

What role does neuroscience play in understanding intelligence and our capacity to learn?

Robert Sternberg:

Neuroscientific investigation of intelligence has revealed many insights about the nature of intelligence. When I was younger, I made the mistake of thinking that some approaches to intelligence are intrinsically better than others—or that some approaches give more insight and others less. This leads to a kind of imperialism or reductionism that fails to acknowledge the benefits of multiple approaches.

I later realized that approaches are not intrinsically better or worse; rather, they answer different questions. For example, psychometric approaches address questions about the “geography” of intelligence—its abstract structure. Cognitive approaches address questions about how information is represented in the mind and about the mental processes that act on those representations. Developmental approaches address questions about how intelligence develops. Neuroscientific approaches address questions about where different kinds of mental activities go on in the brain. Anthropological approaches address questions about the interaction of culture and intelligence. And so forth. The approaches are complementary, not competing. There are some questions the neuroscientific approach can answer really well, for example, in what parts of the brain people solve Raven matrix problems. But the approach will not tell us whether the Raven test is as valid or useful a measure of intelligence for upper-middle-class students at the Bronx High School of Science in New York (a prestigious school) as for children at a small school in rural Kenya who grow up in an agrarian environment with people who are just barely educated in a traditional Western sense.

Richard Haier:

Intelligence includes the ability to learn. People with low IQ, for example, have difficulties learning complex material. People with higher IQs often learn complex material faster. Every chapter of my book assumes that intelligence is 100% a biological phenomenon, genetic or not, influenced by the environment or not, and that the relevant biology takes place in the brain. That is why there is a neuroscience of intelligence to study. Perhaps my most controversial view is that I believe:

the ultimate goal of all intelligence research is to understand how to increase intelligence

I don’t mean maximize it for any person—that’s the goal of education. I’m talking about increasing intelligence beyond whatever potential a person has naturally (politicians call this god-given talent; others call it genetic endowment). It is my belief that understanding the neurobiology of intelligence—that is how brain mechanisms produce effective problem solving—will give us insights into how to manipulate those mechanisms to enhance intelligence, perhaps dramatically. After decades of failing to find environmental interventions that increase intelligence, I’m more interested in the potential for neuroscience-based interventions. This might mean manipulating genes, brain growth factors, neurotransmitters, the amount of gray matter, the efficiency of white matter, or other salient brain variables with drugs or by other means like electrical or photo stimulation. These are exciting possibilities that are supported by the latest research that blurs the boundary between science fiction and what’s scientifically possible. My book details these developments.

James R. Flynn:

We want to map the areas/networks of the brain that are activated when people perform various cognitive skills and observe differences that rank people’s performance. – James R. Flynn

Brain physiology should also illuminate cognitive trends from one generation to another. If doing the kind of inductive reasoning schooling requires provides exercise that enlarges the prefrontal lobes, this should show up at autopsy. More mapping exercise should mean an enlarged hippocampus. We cannot autopsy the people of 1900, but we could compare drivers versus non-drivers (perhaps the Amish).

The US National Institutes of Health has launched a 10-year project. Advances in electrical, optical, acoustic, and genetic techniques will inform us about molecules, cells, circuits, systems, and cognitive behaviour. Technologies would include implantable devices with combined recording and stimulation capabilities. The project will seek: the neural circuits that underlie the ability to represent information symbolically (as in language) and use that information in novel situations; the neural circuits that enable mental mathematical calculations; and the patterns of neural activity that correspond to human emotional states. The Human Brain Project is similar, this time funded by the European Union.

Physiology cannot replace psychology and sociology: we will still need causal explanations in all these areas. Physiology may predict who will be the best basketball player, but we still need to know why someone is doing something as trivial as running down a court to throw a ball through a hoop, and why basketball became more popular after World War II, so that greater participation triggered a huge rise in performance. Brain physiology may chart how the demands of society on various cognitive abilities have altered over time, but we will still need to know how the industrial revolution made the demands of schooling more prominent over the last 150 years.

Can We Define Intelligence?

James R. Flynn:

Jensen rejected the concept of intelligence because it attracted no consensus and could not be directly measured. He was mistaken: we have to define intelligence on two levels. Scientific theories do need mathematically measured concepts so we can verify whether IQ scores predict school achievement, job eligibility, and so forth. Competing theories (like Sternberg’s) offer test scores that may make better predictions by including items about practical intelligence (how to write a reference) and creativity (write an essay on “the octopus’s sneakers”).

Over and above these scientific measures of intelligence stands a general concept whose role is not to make predictions but to put all intelligence tests into context. My definition of intelligence on that level runs thus: determine the hierarchy of cognitive problems that a particular time and place wants you to solve, in order of priority; then see which person learns to solve those problems better or faster given equal opportunity. For example, Australian Aborigines put the sort of logical analysis we use in schools well down the hierarchy compared to map reading (needed to avoid dying of thirst). Americans in 1900 (who had little schooling) put it below the practical intelligence needed to run a farm or do a factory job. Any test must measure these abilities in order of priority, so none would bridge cultural divides.

Europeans tried to produce a culture-reduced test to compare all cultures (Raven’s Progressive Matrices). My research (on massive IQ gains over time) showed that it was more culturally sensitive than any other test because it tests school-type logic. In Holland, the average Raven’s score was 80 in 1952 compared to 100 in 1982. This did not mean that the average Dutchman of 1952 was close to mental retardation. Over 30 years, Holland had re-prioritized the cognitive problems considered significant.
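The arithmetic behind the Dutch Raven data is worth making explicit. A minimal sketch, using only the figures Flynn gives above (the per-decade rate is derived, not stated in the text):

```python
# Flynn-effect arithmetic, using the Dutch Raven figures quoted above.
# The 1952 cohort, scored against 1982 norms, averaged 80 where the
# 1982 cohort by definition averaged 100.
old_mean_on_new_norms = 80
new_mean = 100
years = 1982 - 1952

total_gain = new_mean - old_mean_on_new_norms        # 20 IQ points
gain_per_decade = total_gain / years * 10            # ~6.7 points per decade
print(f"gain: {total_gain} points over {years} years, "
      f"about {gain_per_decade:.1f} per decade")
```

A gain of roughly seven points per decade illustrates Flynn's point: the change is far too large and too fast to reflect genetic change, so it must reflect a re-prioritization of which cognitive problems a society exercises.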

Intelligence is the ability to think analytically, creatively, practically, and wisely so as to learn from experience and adapt to, shape, and select environments. – Robert Sternberg

Robert Sternberg:

Analytical thinking is what you use when you analyse, compare and contrast, critique, judge, or evaluate. Creative thinking is what you use when you create, invent, discover, imagine, or suppose. Practical thinking is what you use when you put into practice, apply, use, utilize, or contextualise. For example, when you try to convince someone else that an idea you have is a good one, you use creative skills to come up with the idea, analytical skills to make sure the idea is indeed a good one, practical skills to put the idea into practice, and wisdom-based skills to ensure the ideas help to achieve some kind of good, over the long-term as well as the short-term, through the mediation of positive ethical values.

Adaptation occurs when a person changes him or herself to fit the environment. When that does not work, people often move to shaping, which involves changing the environment better to suit oneself; and if that still does not work, one may choose to select a new environment.

In my own theory of successful intelligence, I emphasize the unique nature of each person’s intelligence. Intelligence involves formulating a plan for one’s life that fits oneself and the environment in which one does or can live; executing that plan; and then evaluating how well it is working and changing the plan as needed. A smart person, on this view, is someone who creates the best possible life for him or herself, given the constraints of the environment. The person recognizes his or her strengths and weaknesses, and then capitalizes on the strengths and compensates for or corrects the weaknesses. People do not have complete control over their lives, but they need to use what control they have to create the best possible life—that’s what intelligence really is about.

Richard Haier:

Intelligence is the opposite of stupidity. If stupidity were a designated disease, we might have a National Institute of Stupidity to fund research on a cause and a cure. This would fund intelligence research. Most intelligence researchers define intelligence as a set of mental abilities (factors) that includes a general ability for problem solving. This is called the general factor of intelligence (g), and it is strongly related to another factor called fluid intelligence. The g-factor accounts for at least half the differences among people on intelligence tests, and it is the focus of most intelligence research. However, there are other important intelligence factors like verbal ability, numerical ability, and spatial ability. Every person has their own pattern of mental ability strengths and weaknesses, but the g-factor is the most predictive of academic and life success indicators like GPA or income. Some researchers, like my friend Bob Sternberg, question whether g is in fact the most important factor or best predictor of real-world variables, and this is a good debate. Other researchers study how g might develop and how malleable it might be. However, debates about these questions do not mean there is no agreement on how to define intelligence for scientific study. There is agreement enough for over a hundred years of research progress. The definition evolves as more empirical findings are discovered. This is what happens in all scientific fields, and it is why the definition of an “atom” or a “gene” has changed dramatically over time. In my view, we may have a more precise definition of intelligence as neuroscience studies of mental abilities advance. That’s a theme of my book.
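Haier's claim that g "accounts for at least half the differences among people on intelligence tests" comes from factor analysis of correlated test batteries. Here is a generic statistical sketch of that idea, not taken from Haier's book: we simulate test scores driven by a single latent ability (the factor loadings are invented for illustration) and check how much variance the first principal component captures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1000 people taking 6 tests (verbal, numerical, spatial, etc.).
# Each score loads partly on a shared latent ability ("g") plus
# test-specific noise; the loadings below are illustrative, not empirical.
n_people, n_tests = 1000, 6
g = rng.standard_normal(n_people)                      # latent general ability
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])  # hypothetical g-loadings
noise = rng.standard_normal((n_people, n_tests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

# Eigen-decompose the correlation matrix: the first eigenvalue's share of
# the total approximates the variance a single general factor accounts for.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]               # sorted descending
share = eigvals[0] / eigvals.sum()
print(f"first factor explains about {share:.0%} of total variance")
```

With loadings in this range the first factor captures roughly half to three-fifths of the total variance, which is the kind of result behind the "at least half" figure; real test batteries require proper factor-analytic models rather than this PCA shortcut.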