WHAT SCIENCE AND SUPER-ACHIEVERS TEACH US ABOUT HUMAN POTENTIAL

The author

David Shenk is the national bestselling author of five previous books, including The Forgetting ("remarkable" - Los Angeles Times), Data Smog ("indispensable" - New York Times), and The Immortal Game ("superb" - Wall Street Journal). He is a correspondent for TheAtlantic.com, and has contributed to National Geographic, Slate, The New York Times, Gourmet, Harper's, The New Yorker, NPR, and PBS.

February 01, 2010

I was honored to be part of a discussion panel at The Franklin Institute this past weekend to kick off this year's EduCon conference. The conference is an offshoot of the Science Leadership Academy, an amazing new Philadelphia public high school, and of its visionary founder Chris Lehmann. The open-ended question posed to the panel was: "What is Smart?" Here are my slightly edited opening remarks:

What is smart? This is a really exciting time to ask that question. For a century, we've been living under the oppressive yoke of innate-IQism, the idea championed by Francis Galton, Charles Spearman, and Lewis Terman, among others, that intelligence was something you were endowed with--whatever you got, you got.

This was not the attitude of Alfred Binet and Theodore Simon, who invented the IQ test in 1905 in order to identify the French schoolchildren most in need of attention. The Binet-Simon test aimed to lift students up rather than assign them a permanent ranking. Binet said:

"[Some] assert that an individual's intelligence is a fixed quantity which cannot be increased. We must protest and react against this brutal pessimism...With practice, training, and above all method, we manage to increase our attention, our memory, our judgment, and literally to become more intelligent than we were before."

But when the IQ test was adapted by the Stanford psychologist Lewis Terman, Binet's approach was replaced by a very different idea. Terman and his successors proclaimed that intelligence was a pre-loaded thing, and they packaged IQ tests in such a way that they seemed to prove that notion. In the last twenty years, that message has been reinforced by the very misleading idea of "heritability," which comes from twin studies and has been interpreted by many as saying that intelligence is 50-60 percent inherited and pre-ordained by our individual genetic codes.

Now we know better, for two reasons.

First, we've learned a lot more about the relationship of biology to ability. The idea that genes contain instructions for a fixed intelligence doesn't wash anymore. Genes don't issue fixed instructions for anything. Rather, genes interact with their environments. The process is totally dynamic and "interactionist." McGill University's Michael Meaney expresses it this way: "There are no genetic factors that can be studied independently of the environment, and there are no environmental factors that function independently of the genome. [A trait] emerges only from the interaction of gene and environment."

So it's not just the brain that is "plastic." This is also happening on a cellular level throughout our bodies. We call this "genetic expression"--our genes are constantly being turned on and off by our environment.

This is kind of a mind-blower of an idea, and takes some getting used to, but the bottom line is that all complex traits in human beings are the result of a dynamic process--and we can and do influence that process with our culture, our parenting, our teaching, and our desires and actions as individuals.

That's the first point.

Second, we now know from Betty Hart, Todd Risley, Robert Sternberg, Anders Ericsson, Carol Dweck, James Flynn, and many other researchers that intelligence is, as Sternberg says, "a set of competencies in development."

In other words, intelligence is also a process. It is malleable. Getting kids to understand that malleability is vitally important. Carol Dweck's work powerfully reinforces that notion. Having the I-can-improve mindset rather than the some-people-are-just-gifted-and-others-aren't mindset is critical to achievement.

We need to talk about achievements and abilities as matters of development rather than innate ability. That doesn't mean we pretend that we or our kids have total control over our lives--many influences come into play. But we should imbue them with the wonder of what is possible.

August 05, 2009

I appreciate the intense reactions to this blog so far, and respect the lingering skepticism. (Some of the nastiness I could do without, but it wouldn't be the Internet without some tasty pot-shots.) I certainly didn't expect to win over the entire crowd with a handful of short overview pieces containing little evidence and no depth. I get that smart Atlantic readers are going to scrutinize this stuff.

After three years of discussing it among friends, I also understand that this is an issue that often provokes a visceral response. We all have strong opinions about how we became who we are. We need to have these opinions--it's a part of forming an identity. After a century of genetic dogma, terms like "innate" and "gifted" are baked right into our language and thinking. I don't mean to suggest that we all believe that genes control everything. Instead, most of us believe in "nature" followed by "nurture": genes dispense various design instructions as our body is formed in utero, priming us with a certain level of intellectual, creative, and athletic potential; following this, environmental influences develop that potential to some extent or another. This is what we know to be true, and it makes perfect sense.

Well, it turns out not to work that way. But no one familiar with the new science of development expects these old beliefs to simply wash away in a few weeks or months just because a few smart-ass writers come along and say they know better.

As we try to present this stuff, there are a hundred sand traps of understandable confusion. When I argue that "innate" doesn't really exist, it may seem like I'm making the blank slate argument -- which I'm not. When I argue that talent is a process, it may seem like I'm arguing that anyone can do anything, which I'm not. When I argue that we can't really see our individual potential until we've applied extraordinary resources over many years, it may seem like I'm arguing that genetic differences don't matter -- which I'm not. When I criticize The Bell Curve, it may look like I'm an agent of the left pushing a liberal egalitarian agenda, which I'm not.*

What I am pushing is the consideration of a whole new paradigm. In doing so, I am of course just a conduit.

While the evidence assembled by the University of Iowa researchers behind a new journal article is quite complex, their renewed argument is simple: "nature vs. nurture" doesn't adequately explain how we become who we are. That notion needs to be replaced.

Lead author John Spencer:

The nature-nurture debate has a pervasive influence on our lives, affecting the framework of research in child development, biology, neuroscience, personality and dozens of other fields. People have tried for centuries to shift the debate one way or the other, and it's just been a pendulum swinging back and forth. We're taking the radical position that the smarter thing is to just say 'neither' -- to throw out the debate as it has been historically framed and embrace the alternative perspective provided by developmental systems theory.**

"Developmental systems theory" is a vague mouthful, and the scientists behind these observations readily admit that they haven't yet found the most compelling new language to present their ideas to the public. But the basic idea, as I've written in previous posts, is that genes are not static; they are dynamic. Genes interact with the environment to form traits. The more closely scientists look at claims of so-called "hard-wired" behavior and abilities, the more they turn up evidence that actions and talents are formed in conjunction with the culture around them.

John Spencer again:

Researchers sometimes claim we're hard-wired for things, but when you peel through the layers of the experiments, the details matter and suddenly the evidence doesn't seem so compelling...When people say there's an innate constraint, they're making suppositions about what came before the behavior in question. Instead of acknowledging that at 12 months a lot of development has already happened and we don't exactly know what came before this particular behavior, researchers take the easy way out and conclude that there must be inborn constraints. That's the predicament scientists have gotten themselves into.

Imprinting is one of many examples reviewed by the Iowa researchers. In 1935, Viennese zoologist Konrad Lorenz famously discovered that newborn chicks whose eggs were incubated in isolation would still correctly pick the call of their mother over that of another animal. It seemed the perfect little proof of innate ability.

But in 1997, Gilbert Gottlieb discovered the flaw in that assumption. It turned out that when fetal chicks were deprived of the ability to make vocal sounds inside their own eggs -- that is, the ability to teach themselves what their species sounded like -- they were unable to pick the correct maternal sound from various animals.

Another famously innate quality is "dead reckoning," the ability of fish, birds, and mammals (including humans) to establish their current location based on past locations and movement history. How could young geese know how to fly home from 100 meters away without trial and error? Because the mystery had no apparent answer, the word "innate" was again used as a catch-all explanation. Then it became clear that mother geese train their goslings' navigational skills through daily walks.

How could baby chicks find their way back to a mother without clear sight of her? It turned out that they simply reversed the directions they had taken when getting lost.

One by one, the Iowa researchers show, scientists have declared basic abilities explainable only by hard-wiring, only to have closer inspection and better tools later reveal a slow learning process. The consistent refrain: abilities form in conjunction with development, community, and context. Genes matter, but actual results require genetic expression in conjunction with the environment.

(One big problem with this new paradigm, explains John Spencer, "is that it's much more complicated to explain why the evidence is on shaky ground, and often the one-liner wins out over the 10-minute explanation.")

The Iowa paper also delves deeply into claims of human language innateness, including what is known as "shape-bias." "Shape bias," the authors write, "simplifies the word learning situation and thereby aids vocabulary development, but it is not innate. Rather, it is the emergent product of a step-by-step cascade."

What does all of this have to do with Einstein's genius or your piano playing? Developmental systems theory tells us that, while genetic differences do matter, they cannot, on their own, determine what we become. From there, the whole idea of innate talent falls apart.

As this blog continues, you'll meet more of the scientists who are documenting and shaping these ideas. One of the things I'd like to do is bring them together as a community and give their umbrella notion a more accessible name. "Developmental genetics" is one possibility. "Environmental genetics" is another.

Suggestions are welcome.

_____________

[Thanks to Mark Blumberg, one of the University of Iowa authors and editor-in-chief of Behavioral Neuroscience.]

_____________

Notes

* I am guilty of being a liberal on most issues, and there are elements of this new paradigm that gel nicely with a liberal sensibility; but there are also some very uncomfortable moral implications to come to terms with. Every writer has biases to be sure, but self-respecting journalists don't ignore or cherry-pick information because they like its political ramifications. I didn't write Data Smog because I wanted to bring down the Internet; I didn't offer some sanguine views on new surveillance technologies because I desire a police state, and I haven't been picking and choosing genetics and intelligence studies to prop up the Obama administration.

** These John Spencer quotes are taken from a University of Iowa press release about the journal article.

August 03, 2009

In providing an overview for this new blog's approach, I've so far touched on genetics and intelligence; now it's onto studies of talent and expertise that provide the third key puzzle piece. Taken together, they suggest -- to me at least -- a whole new way to think about high achievement.

Many of you have already read about some of the key research -- the famous 10,000-hours-to-greatness observation of Anders Ericsson and others, described in several recent smart books, including Geoff Colvin's Talent Is Overrated, Malcolm Gladwell's Outliers and Daniel Coyle's The Talent Code.*

These studies are important, not because they put a specific hour-number on what it takes to be a champion, but because of the big idea behind that number. The breathtaking insight that comes through in the work of Ericsson and colleagues is this: talent is not a thing, but a process -- a very slow, largely invisible process that, up till now, has been nearly impossible to document and therefore very easy to misread. As long as this slow accretion of skills went unseen and unarticulated, the mature skills themselves seemed almost magic. For many centuries, greatness appeared to be god-given; later, in the 20th century, it was understood as gene-given. All along, these ideas were reinforced by astounding child prodigy stories that seemed to be explainable only by unusual innate "gifts."**

Now, Ericsson and colleagues -- there are many, with hundreds of studies already published -- are making the invisible visible.*** They are showing how all abilities are based in process. They are exploding the myth of "giftedness."

Their work also dovetails with genetic-environment interaction, and with research showing how extraordinarily plastic the human brain is -- how we constantly change its structure with our moment-to-moment actions.

A new understanding thus emerges: the limits we think we see in ourselves and our kids are really more like obstacles, difficult but not impossible to overcome. What appear to be innate/genetic brick walls are actually just very steep hills to climb. According to this view, the real marvel of our genes is how their dynamic properties allow us to expand and expand and expand our abilities -- if we push hard enough and have the right resources. (These are big ifs.)

Which brings us back to the public fixation with innateness. Given what we've all been told about genes, it's perfectly understandable when we look at a clumsy 8-year-old boy and surmise: "He's got no athletic talent. He just doesn't have the genes for it." But the new science of talent suggests a very different conclusion:

• His clumsiness was developed, not inborn. He became clumsy over time in response to many gene-environment interactions.

• His development continues, and nothing is set in stone. While the odds are of course against him, no one can say for certain whether this clumsy boy has professional sports in his future.

We simply don't know his ultimate potential, and neither will he until he marshals all of his resources to get there.

Genes will play a huge role, of course, and will ultimately limit him in some way. But we don't know precisely how.

Discovering our own potential is part of the marvel of being alive.

______

Notes

* I began writing (and blogging) about this stuff in 2007, long before any of these books were published. My book will arrive after these books, and will probably be dismissed by some as a Johnny-come-lately. But I think mine has much to add, and hope it will be seen as a complement to them. The reality is, all of these books (including mine) were written concurrently; I, for one, did not read any of them before finishing mine.

** I'll tackle the issue of child prodigies in future posts, and in my book.

*** Here's a tiny sampling of the studies from Ericsson and colleagues:

Salthouse, T. A. (1984). Effects of age and skill in typing. Journal of Experimental Psychology: General.

July 28, 2009

"[Some] assert than an individual's intelligence is a fixed quantity which cannot be increased. We must protest and react against this brutal pessimism."

- Alfred Binet, inventor of the original IQ test, 1909

Last week, I argued that our 21st century understanding of genetics invalidates the idea of fixed, innate abilities. Genes influence everything but determine almost nothing on their own.

What, then, is IQ? Conventional wisdom says that IQ scores reveal our native intelligence. According to this view, IQ tests are different from school grades, different from SAT scores, different from any other test you will ever take, because they somehow reveal the core, innate abilities of each person's brain: your clock speed, your RAM, your absolute limit.

That's what Stanford psychologist Lewis Terman wanted us to believe when he introduced the American version of the IQ test in 1916. (This was quite the opposite of the intention of the test's original co-inventor, Alfred Binet. But that's a history lesson we'll return to another time.)

What Terman had actually come up with was a deceptively simple system for ranking academic progress. His Stanford-Binet tests measured many different skills, and then scored the results so that the median was always 100. If you had an IQ score of 100, it simply meant that half of the test-takers your age had done better and half had done worse.

These tests were impressively stable, which meant that, over time, most people ended up in roughly the same place in the pack. If you had tested in the 60th percentile at age 10, chances were pretty good that you'd test close to the 60th percentile at age 12 and age 14.

But did this stability prove that the tests revealed innate intelligence?

Far from it. The reality is that students performing at the top of the class in 4th grade tend to be the same students performing at the top of the class in 12th grade, due to many factors that tend to remain stable in students' lives: family, lifestyle, resources, etc.

Being branded with a low IQ at a young age, in other words, is like being born poor. Due to family circumstances and the mechanisms of society, most people born poor will remain poor throughout their lives. But that doesn't mean anyone is *innately* poor or destined to be poor; there is always potential for any poor person to become rich.

The happy reality is that IQ scores:

A) measure developed skills, not native intelligence.

B) can change dramatically.

C) don't say anything about a person's intellectual limits.

More details below.

Coming next in this blog: Should kids know their own IQs?

____________________

AN IQ FAQ

What is IQ?

IQ (short for "intelligence quotient") is a score derived from a collection of tests which rank academic achievement within a particular age group.

What do IQ tests measure?

IQ tests measure current academic abilities -- not any sort of fixed, innate intelligence. More specifically, the best-known IQ battery, "Stanford-Binet 5," measures Fluid Reasoning, Knowledge, Quantitative Reasoning, Visual-Spatial Processing, and Working Memory. Collectively, these skills are known as "symbolic logic." Among other things, IQ tests do not measure creativity;[i] they do not measure "practical intelligence" (otherwise known as "street smarts");[ii] and they do not measure what some psychologists call "emotional intelligence."

Harvard's Howard Gardner:

"The tasks
featured in the IQ test are decidedly microscopic, are often unrelated to one
another, and . . . are remote, in many cases, from everyday life. They rely
heavily upon language and upon a person's skill in defining words, in knowing
facts about the world, in finding connections (and differences) among verbal
concepts . . . . Moreover, the intelligence test reveals little about an
indivdual's potential for further growth."[iii]

Tufts' Robert Sternberg:

IQ problems tend to be "clearly defined, come with all the information needed to solve them, have only a single right answer, which can be reached by only a single method, [and are] disembodied from ordinary experience . . . . Practical problems, in contrast, tend to require problem recognition and formulation . . . require information seeking, have various acceptable solutions, be embedded in and require prior everyday experience, and require motivation and personal involvement."[iv]

How are IQ scores determined?

Raw individual test scores are converted so that they correlate perfectly to a bell curve representing the entire population of same-age students. The average score is always 100. (A short illustrative sketch follows the list below.)

- An IQ score of 100 means that 50% of the people in your age group scored better, and 50% scored worse.

- An IQ score of 85 means that 84.13% of the people in your age group scored better, and 15.87% scored worse.

- An IQ score of 130 means that 2.28% of the people in your age group scored better, and 97.72% scored worse.
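
For the curious, here is a minimal sketch of how that conversion plays out, assuming the common modern convention of a bell curve with mean 100 and a standard deviation of 15 (the percentages above correspond to scores exactly one and two standard deviations from the mean; this is an illustration, not the scoring code of any actual test):

    from statistics import NormalDist

    # IQ scores are normed to a bell curve: mean 100 and, on most
    # modern tests, a standard deviation of 15.
    iq_curve = NormalDist(mu=100, sigma=15)

    def share_scoring_worse(iq_score: float) -> float:
        """Fraction of same-age test-takers scoring below this IQ."""
        return iq_curve.cdf(iq_score)

    for score in (85, 100, 130):
        worse = share_scoring_worse(score)
        print(f"IQ {score}: {1 - worse:.2%} scored better, {worse:.2%} scored worse")

Run it, and the three lines it prints reproduce the three bullet points above.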

If IQ scores can change over time, why do most people's IQ scores stay reasonably stable?

What any individual can achieve with the right combination of assets and gumption is entirely different from what most people actually do achieve. Most people settle into a particular academic standing early in life and do not substantially deviate from that standing. That's the inertia of life and human circumstance.

So IQ scores don't imply any sort of fixed or innate intelligence?

Quite the contrary. We know that the abilities IQ measures are skills, and we know that people can learn these skills. "Intelligence," Robert Sternberg has declared, "represents a set of competencies in development." There is plenty of evidence, for example, that schooling raises overall academic intelligence.[vii] There is also evidence that most human beings are not reaching their cognitive or academic potential.[viii] Better schools and higher standards can raise the level of learning for nearly all students.

Don't genes limit our intelligence? Isn't intelligence "heritable"?

No, and no. Very sloppy science and journalism have led us to believe that what scientists call "heritability" (derived from twin studies) is the same thing as what we ordinary folk call "heredity." In fact, they are not even remotely the same thing. Genes certainly do have an impact on intelligence, and everyone has their own theoretical limits, but every indication is that most of us don't come close to our true intellectual potential. More on this here.

FOOTNOTES

[i] IQ scores do not identify the most successful and creative artists or scientists:

July 06, 2009

In his column in today's Times, Ross Douthat argues that Sarah Palin and Barack Obama represent two different American ideals of success:

Our president represents the meritocratic ideal — that anyone, from any background, can grow up to attend Columbia and Harvard Law School and become a great American success story. But Sarah Palin represents the democratic ideal — that anyone can grow up to be a great success story without graduating from Columbia and Harvard.

It's always great to do well in school and go to a good college, but does getting your act together a few years earlier than others represent a completely different American success paradigm? To me, it's all the same American meritocratic ideal, represented by Obama, Palin, Warren Buffett (University of Nebraska), Arnold Schwarzenegger (University of Wisconsin-Superior), and many others. There are plenty of stumbling blocks out there, but anyone from any background can grow up to succeed enormously. This includes people who get a later start in their ambitions.

Douthat goes on to suggest that the central lesson of Palin's quick flame-out (if that's what we're all witnessing here) is that her gender and social class made for vicious double-standards that few could withstand.

Sarah Palin is beloved by millions because her rise suggested, however temporarily, that the old American aphorism about how anyone can grow up to be president might actually be true.

But her unhappy sojourn on the national stage has had a different moral: Don't even think about it.

I see it differently. I think the great moral here is: "Do your homework." If you aspire to be a great national leader, lead -- not with empty platitudes, but with vision and serious plans. Agree or disagree with Obama, few would argue that he's not a serious man for serious times. He's very young, yes, and came to the campaign with a relatively thin resume, but made up for it with intellectual firepower, extraordinary team-building, detailed plans, a sweeping vision, and a refined temperament. He did his homework, and he did it better than anyone else running for president. He won.

By contrast, Sarah Palin also sought a quick rise and was tactically adroit but did little to accrue substance along the way. She burned through allies, demonstrated petty vindictiveness, and most of all, simply didn't prepare for the national stage. There's no question she was also treated harshly -- but were others not? Do we forget the lies about Obama's religion and the smears of association with terrorists?

The best part of all about America's meritocracy is that it is full of second, third, and fourth chances. Sarah Palin could decide tomorrow to become a serious contender, and it wouldn't take but a few years for her to emerge as a truly formidable force. Her future is still in her hands. Like all of us, her successes and failures will belong to her.

July 03, 2009

A large number of websites and even quite a few books will tell you that Wolfgang Amadeus Mozart's IQ was 165. They'll also reveal that Benjamin Franklin's IQ was 160, Charles Dickens' was 180, Isaac Newton's was 190, and Blaise Pascal's was 195.

There's only one small problem with this data: The IQ test was invented in the early 20th Century -- long after all of these people were dead and buried.

Here's how this lunacy came about: The IQ test was first invented in France by Alfred Binet in the early 20th century as a way to measure academic skills and pick out the students who were not learning as fast as they could and should. It was not designed to separate innately-smart people from less innately-smart people. Binet, in fact, did not believe intelligence was innate. He saw intelligence not as a thing, but as a process of acquiring certain thinking skills. (He turned out to be quite correct.)

Then along came Lewis Terman, a Stanford psychologist in the early 20th century who preferred Francis Galton's idea of intelligence: a certain innate quality that each person is born with. Terman reinvented the IQ test and sold it to American intellectuals and policymakers as a way to separate the intellectual wheat from the idiotic chaff. Terman also began an epic study on geniuses entitled Genetic Studies of Genius.

Mind you, he had no proof that intelligence was gene-based. (We still don't have any such proof, contrary to what you might read elsewhere. See my post on heritability.)

Terman was well-funded and well-staffed. In 1926, he assigned one of his protégés, Catharine Cox, to somehow adapt their new IQ test to estimate the IQs of 301 well-known historical figures.

Here's the rub: Even in Terman's context, this made no sense. It was pure intellectual foolishness, even if you completely accepted his argument that IQ detected innate intelligence. That's not just because none of these people actually took an IQ test, but also because IQ tests only measure people's academic skills against other people their same age. The actual score is not a tally of right vs. wrong answers, but a weighted score comparing every test-taker's performance with every other same-age test-taker's performance in that particular year. 100 is always the median. A score of 100 means that 50% of the same-age students scored higher than you, and 50% scored lower.

So how could anyone possibly hope to go back in time, look at the work of dead people, and deduce their IQ scores? It was impossible.

In her report[i], Cox acknowledged: "The correction attempted in the present report is a crude approximation . . ."

Cox and Terman assigned a score of 200 to their hero Francis Galton. That would make him one of the great geniuses of all time.

Thanks partly to this study, IQ has become one of the great myths of our time. It's going to take us another century to replace it with a more sensible understanding of intelligence.

__________________

[i] "The Early Mental Traits of Three Hundred Geniuses," by Catharine M. Cox, from Genetic Studies of Genius, edited by Lewis M. Terman. Stanford University Press, 1926.

June 15, 2009

For a few decades now, we science writers have been unwitting victims of a scientific muddle called "heritability." Now we have a chance to wipe the slime off and do our jobs.

The popular confusion started in 1979, when University of Minnesota psychologist Thomas Bouchard became fascinated with a particular pair of long-separated identical twins, and adopted what he thought was a method to distinguish genetic influences from environmental influences -- to statistically separate nature from nurture. The approach was to compare the ratio of similarities/differences in separated identical twins with the same ratio in separated fraternal twins. Since identical twins were thought to share 100% of their DNA and fraternal twins share, on average, 50% of their genetic material (like any ordinary siblings), comparing these two unusual groups allowed for a very tidy statistical calculation.
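
To make the arithmetic concrete, here is the textbook version of that tidy calculation (a sketch of the general approach, not necessarily the exact method used in these studies). With r_{MZ} the similarity correlation among identical (monozygotic) twins and r_{DZ} the correlation among fraternal (dizygotic) twins, heritability is estimated as

    h^2 = 2(r_{MZ} - r_{DZ})

So if identical twins' scores correlate at, say, 0.80 and fraternal twins' at 0.50 (illustrative numbers, not data from any study), the estimate is h^2 = 2(0.80 - 0.50) = 0.60, and the trait gets reported as "60% heritable."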

Bouchard and colleagues used the words "heritable" and "heritability" to describe their results.

There were just two problems with this approach. First, these terms were possibly the most misleading in scientific history. Second, it turns out that genetic influence cannot be separated from environmental influences. Nature is inextricably intertwined with nurture.

***

Strangely, "heritability"
and "heritable" were actually never intended by behavior geneticists
to mean what they sound like -- "inherited." What they called
"heritability" was defined as "that portion of trait variation
caused by genes." In a quick glance, that might seem awfully similar to
"the portion of a trait caused by genes." But the difference is as
great as Mt. Everest and the anthill in front your home.
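
In symbols (a standard quantitative-genetics formulation, offered here as a sketch rather than language from any particular paper), heritability is a ratio of variances:

    h^2 = V_G / V_P = V_G / (V_G + V_E)

where V_G is the variation in a trait across a population attributable to genetic differences, V_E the variation attributable to environmental differences, and V_P the total variation. Every term describes the spread within a group, not the makeup of any individual -- which is why the number cannot tell you how much of your intelligence "comes from" your genes.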

This led to quite the muddle when Bouchard and others published twin-study data that seemed to demonstrate that intelligence was 60%-70% "heritable." What was that actually supposed to mean?

It did not mean that 60-70% of every person's intelligence comes from genes.

Nor did it mean that 30-40% of every person's intelligence comes from the environment.

Nor did it mean that 60-70% of every person's intelligence is fixed, while only 30-40% can be shaped.

What Bouchard et al intended it to mean was this (read v e r y slowly): on average, the detectable portion of genetic influence on the variation in -- not the cause of -- intelligence among specific groups of people at fixed moments in time was around 60-70%.

If that sounds confusing, that's because you are a human being. "Heritability" is so confusing that most of the people who use it professionally don't really understand it. Let's pick it apart:

On average.

Heritability, explains author Matt Ridley in his book Nature via Nurture, "is a population average, meaningless for any individual person: you cannot say that Hermia has more heritable intelligence than Helena. When somebody says that heritability of height is 90 percent, he does not and cannot mean that 90 percent of my inches come from genes and 10 percent from my food. He means that variation in a particular sample is attributable to 90 percent genes and 10 percent environment. There is no heritability in height for the individual."

"Cause
of variation" is not remotely the same as "cause of trait."

In discussing "heritability" in the media, scientists have allowed the public to confuse "causes of variation" with "causes of traits." Heritability studies do not, and cannot, measure causes of traits. They can only attempt to measure causes of differences (or variation) in traits.

So, for example, a heritability study cannot even attempt to measure the cause of plant height. It cannot purport to tell you that some percent of plant height is caused by genes.

What it can attempt to do is measure the percentage influence that genes have on the differences in height in a particular group of plants. But the percentage would only apply to that particular group.
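
Here's a toy simulation of that point (all numbers are invented for illustration; "genetic value" and "environmental boost" are made-up quantities, not a model of real botany). The same genetic spread yields a different "heritability of height" depending on how variable the particular group's growing conditions are:

    import random

    random.seed(42)

    def heritability_of_height(env_spread: float, n: int = 10_000) -> float:
        """Estimate h^2 = V_G / (V_G + V_E) for one simulated group of plants."""
        genetic = [random.gauss(50, 5) for _ in range(n)]              # cm
        environmental = [random.gauss(0, env_spread) for _ in range(n)]
        heights = [g + e for g, e in zip(genetic, environmental)]

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        return var(genetic) / var(heights)

    # Same genes in both groups; only the environments differ.
    print(f"uniform greenhouse: h^2 of height ~ {heritability_of_height(1):.2f}")
    print(f"variable hillside:  h^2 of height ~ {heritability_of_height(10):.2f}")

The greenhouse group reports a heritability near 1.0; the hillside group reports something far lower -- with identical genes. The number belongs to the group and its circumstances, not to the plants' biology.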

Fixed moments.

Heritability derives from a fixed moment in time. It can only report on how life is, at that moment, for the specific group studied. It cannot offer any guidance whatever about the extent to which a trait can be modified over time, or project how life can be for any other group or individual enjoying different resources or values.

This means that these studies don't even pretend to say anything about individual capability, or potential.

Finally, many scientists now think that twin-study heritability estimates are sorely compromised by a basic flawed supposition. "[They] rest on the extraordinary assumption that genetic and environmental influences are independent of one another and do not interact," explains Cambridge biologist Patrick Bateson. "That assumption is clearly wrong."

Now you'll have a sense of how much salt to ingest the next time you come across silly heritability phrases in the news.

In the end, heritability estimates parrot a strict "nature vs. nurture" sensibility; they are statistical phantoms, purporting to represent something in populations that simply does not exist in actual biology. It's as if someone tried to determine what percentage of the brilliance of "King Lear" comes from adjectives. Just because there are fancy methods available for determining distinct numbers doesn't mean that those numbers actually have any meaning.

May 28, 2009

IQTest.com claims that 5.42 million people have taken their online IQ test, which sends a small shudder through my spine.

"Previously
offered only to corporations, schools, and in certified professional
applications," they declare, "it is now available to you."
Later, they quietly acknowledge that their test is "not intended for
professional use [but rather] for personal entertainment purposes."

Far be it from me to tell people how to entertain themselves; if anyone is taking this test as an alternative to online porn or fart jokes, I guess I can't complain.

But presumably some of these 5.42 million people are taking the test to see how smart they are, to know what their potential is. And at that point the entertainment goes from comedy to tragedy. For starters, the IQTest.com test is not even a gross simplification of the Stanford-Binet Intelligence Scales. It tests just a few logical thinking skills in the narrowest possible way.

More importantly, even authentic IQ tests do not reveal any sort of innate intelligence (as discussed in my IQ FAQ). They cannot reveal your true potential. They only reveal the current state of your academic skills, in comparison to others of the same age. That's not an insignificant thing, but it is very far indeed from the portrayal of IQ as a revelation of one's inner-smarts.

Some people are actually plunking down cash to get the full report from IQTest.com. That, to me, is like a bad Jeff Foxworthy joke. You know your IQ ain't that high if you're paying money for an online IQ test.

At least porn is honest. These tests are furtively exploiting and perpetuating a person's anxieties -- for cash. In fact, anyone taking an online IQ test seriously is getting caught up in the century-long but now-disproven "entity" theory of intelligence. We now understand that intelligence is not a *thing* one has a certain amount of, but a process of acquiring skills.

May 14, 2009

What is IQ?

IQ is a score derived from a battery of tests that measure basic academic skills and are scored according to a pre-set curve.

What do IQ tests
measure?

IQ tests measure current academic abilities -- not any sort of fixed, innate
intelligence. More specifically, the best-known IQ battery,
"Stanford-Binet 5," measures Fluid Reasoning, Knowledge, Quantitative
Reasoning, Visual-Spatial Processing, and Working Memory -- skills known collectively as "symbolic logic." IQ tests do
not measure creativity;[i] they
do not measure "practical intelligence" (otherwise known as
"street smarts");[ii]
and they do not measure what some psychologists call "emotional
intelligence."

Harvard's Howard Gardner:

"The tasks
featured in the IQ test are decidedly microscopic, are often unrelated to one
another, and . . . are remote, in many cases, from everyday life. They rely
heavily upon language and upon a person's skill in defining words, in knowing
facts about the world, in finding connections (and differences) among verbal
concepts . . . . Moreover, the intelligence test reveals little about an
indivdual's potential for further growth."[iii]

Tufts' Robert Sternberg:

IQ problems tend to be "clearly defined, come with all the information needed to solve them, have only a single right answer, which can be reached by only a single method, [and are] disembodied from ordinary experience . . . . Practical problems, in contrast, tend to require problem recognition and formulation . . . require information seeking, have various acceptable solutions, be embedded in and require prior everyday experience, and require motivation and personal involvement."[iv]

How are IQ scores determined?

Raw individual test scores are converted so that they correlate perfectly to a bell curve representing the entire population of same-age students. The average score is always 100.

- An IQ score of 100 means that 50% of the people in your age group scored better, and 50% scored worse.

- An IQ score of 85 means that 84.13% of the people in your age group scored better, and 15.87% scored worse.

- An IQ score of 130 means that 2.28% of the people in your age group scored better, and 97.72% scored worse.

If IQ scores can change over time, why do most people's IQ scores stay reasonably stable?

What any individual can achieve with the right combination of assets and gumption is entirely different from what most people actually do achieve. Most people settle into a particular academic standing early in life and do not substantially deviate from that standing. That's the inertia of life and human circumstance; the students performing at the top of the class in 4th grade tend to be the same students performing at the top of the class in 12th grade.[vi] That's because the factors that enabled them to do well in fourth grade usually stay in place throughout their school lives: same parents, same community, same economic and cultural resources, etc.

Being branded with a low IQ at a young age, in other words, is like being born poor. Due to personal circumstances and the mechanisms of society, most people born poor will remain poor throughout their lives. But that sure doesn't mean anyone is innately poor or destined to be poor; there is always potential for any poor person to become rich.

So IQ scores don't imply any sort of fixed or innate intelligence?

Quite the contrary. We know that the abilities IQ measures are skills, and we know that people can learn these skills. "Intelligence," Robert Sternberg has declared, "represents a set of competencies in development." There is plenty of evidence, for example, that schooling raises overall academic intelligence.[vii] There is also evidence that most human beings are not reaching their cognitive or academic potential.[viii] Better schools and higher standards can raise the level of learning for nearly all students.

Genes do have a substantial impact on many aspects of our physiology, including intelligence. But, sadly, very sloppy science and journalism have led us to believe that intelligence is essentially innate. It isn't. Rather, intelligence is fluid, and is a function of many dynamic components. So while genes play a role in limiting our potential, every indication is that most of us don't come close to even grazing such limits, meaning that -- from a practical perspective -- gene-based limits do not hold us back.

Who invented IQ and why have we all been taught that it reveals our innate intelligence?

It's a long story (which I expand on in my book), but the short answer is that the modern IQ test was invented by Stanford psychologist Lewis Terman, a prominent eugenicist, early in the 20th century. Terman himself was absolutely convinced that IQ scores revealed innate intelligence. "Psychological methods of measuring intelligence [have] furnished conclusive proof that native differences in endowment are a universal phenomenon," he wrote in 1925. But the whole concept of innate intelligence turns out to be a faulty one.

Terman also bizarrely assigned a protégé, Catharine Cox, to determine the IQs of long-dead geniuses -- a laughable farce considering how IQ is normally measured and what it is conventionally said to reveal. They assigned a score of 200 to Terman's hero Francis Galton -- the father of innate intelligence.[ix]

- David Shenk

FOOTNOTES

[i] IQ scores do not identify the most successful and creative artists or scientists:

[vi] From the 1995 APA report: "It is important to understand [that] a child whose IQ score remains the same from age 6 to age 18 does not exhibit the same performance throughout that period. On the contrary, steady gains in general knowledge, vocabulary, reasoning ability, etc. will be apparent. What does not change is his or her score in comparison to that of other individuals of the same age."

December 10, 2007

A nice piece in yesterday's NYT by psychologist Richard E. Nisbett on the race/intelligence controversy sparked by The Bell Curve and reignited recently by James Watson. (Thanks to Michael Bowerman for pointing it out).

Nisbett cogently and concisely deflates the claims that intelligence has been proven 60-80% "heritable" by twin and adoption population studies. By the end of the piece, the reader is left with the strong impression that those studies are flawed and misleading. They don't stand up, even on their own terms.

What Nisbett doesn't do -- understandably, because it's a lot trickier -- is explain how those population studies also fly in the face of our modern understanding of genetics. It's not biologically possible for someone to directly inherit, via genes, a certain level of intelligence. Genes don't work that way. Everything about our genes is mediated through interaction with the environment. The dichotomy of "nature vs. nurture" actually does not exist.

Our popular discussion of genetics, intelligence, talent, etc., is stuck in a very strange place. We use terms, concepts, and metaphors that lead us astray. In order to get unstuck, we're going to need a whole new way to frame the discussion. That's what I'm working on (struggling with) in my book.