James Flynn has defined "shorthand abstractions" (or "SHA's") as concepts drawn from science that have become part of the language and make people smarter by providing widely applicable templates ("market", "placebo", "random sample," "naturalistic fallacy," are a few of his examples). His idea is that the abstraction is available as a single cognitive chunk which can be used as an element in thinking and debate.

The Edge Question 2011

WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?

The term "scientific" is to be understood in a broad sense as the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great people in history, or the structure of DNA. A "scientific concept" may come from philosophy, logic, economics, jurisprudence, or other analytic enterprises, as long as it is a rigorous conceptual tool that may be summed up succinctly (or "in a phrase") but has broad application to understanding the world.

[Thanks to Steven Pinker for suggesting this year's Edge Question and to Daniel Kahneman for advice on its presentation.]

Go deeper than the news. Tell me something I don't know. This is not a purely scientific question: this is a question about our culture and ourselves. The ideas we present on Edge can offer readers and the wider public a new set of metaphors to describe ourselves, our minds, the way we think, the world, and all of the things we know in it.

Be imaginative, exciting, compelling, inspiring. Tell a great story. Make an argument that makes a difference. Amaze and delight. Surprise us!

TO THE EDGE PRESS LIST

Last year's Edge Question ("How Is The Internet Changing The Way You Think?") generated 172 essays (132,000 words). We expect at least as many contributions this year. Please feel free to use up to 1,500 words of text (gratis) without further permission, provided that:

(a) Edge and its URL (www.edge.org) are mentioned in the first paragraph of your print and online piece; and

(b) a hyperlink to the Edge home page (http://www.edge.org) is provided in the first paragraph of your online edition.

"Deliciously creative...the variety astonishes...intellectual skyrockets of stunning brilliance. Nobody in the world is doing what Edge is doing." Arts & Letters Daily • "Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4 • "Big, deep and ambitious questions... breathtaking in scope." New Scientist • "Brilliant ... captivating ... overwhelming." Seed • "Bold, often thrilling, sometimes chilling, answers." News-Observer • "The fascinating breadth of their visions of the future is revealed today by the discussion website edge.org, which has asked some of the world's finest minds the question: 'What will change everything?'" The Times • "A stellar cast of intellectuals ... a stunning array of responses." New Scientist • "Edge: brilliant, essential and addictive. It interprets, it interrogates, it provokes." Publico (Lisbon) Cover Story, Sunday Magazine • "The world's finest minds have responded with some of the most insightful, humbling, fascinating confessions and anecdotes, an intellectual treasure trove. ... Best three or four hours of intense, enlightening reading you can do for the new year. Read it now." San Francisco Chronicle • "The splendidly enlightened Edge website (www.edge.org) has rounded off each year of inter-disciplinary debate by asking its heavy-hitting contributors to answer one question. I strongly recommend a visit." The Independent • "A great event in the Anglo-Saxon culture." El Mundo • "As fascinating and weighty as one would imagine." The Independent • "They are the intellectual elite, the brains the rest of us rely on to make sense of the universe and answer the big questions." The Guardian • "Praised by everyone from the Guardian, Prospect, Wired, The New York Times and BBC Radio 4, Edge is an online collective of deep thinkers. Their contributors aren't on the frontier, they are the frontier." The Scotsman • "A selection of the most explosive ideas of our age." Sunday Herald • "Uplifting ...enthralling." The Mail on Sunday • "If you think the web is full of trivial rubbish, you will find the intellectual badinage of edge.org to be a blessed counterpoint." The Times • "...fascinating and provocative reading." The Guardian • "...reads like an intriguing dinner party conversation among great minds in science." Discover • "Danger: brilliant minds at work... exhilarating, hilarious, and chilling." Evening Standard • "Wonderful reading." The Times • "Strangely addictive." The Telegraph • "The greatest virtual research university in the world." Arts & Letters Daily • "Audacious and stimulating." La Vanguardia • "Brilliant! Stimulating reading for anyone seeking a glimpse into the next decade." The Sunday Times • "A running fire of a provocative and fascinating thesis." La Stampa

That's the question John Brockman, editor of the Web site edge.org, posed to about 160 cutting-edge minds in his 11th annual Edge Question. As in years past, they responded with bold, often thrilling, sometimes chilling, answers.

Dawkins speculates about how a human-chimp hybrid or the discovery of a living Homo erectus would change the way we see the world. — James

Dawkins — author of The Selfish Gene and The God Delusion — muses on the effect of breaking down the barrier between humans and animals, perhaps by the creation of a chimera in a lab or a "successful hybridisation between a human and a chimpanzee".

The predictions range from miracle cures and world peace to economic ruin and nuclear war. If there is a theme to the World Questions 2009, an online survey of some of the world's top thinkers, it would seem to be inconsistency.

The wide range of answers they gave provides a snapshot of the hopes — and fears — that may come to define our times.

JH: Yes, the annual question from John Brockman, the science book impresario. He's got this great site, edge.org, which we've talked about before, and every year he asks this question. He asks this ever-growing stable of people, primarily scientists but a lot of quasi-scientist pundits, to respond to this question. The question this year is "What will change everything?"

[Caption: Ian McEwan muses that we will look back and 'wonder why we ever thought we had a problem when we are bathed in such beneficent radiant energy'. Photograph: Getty]

Flying cars, personal jetpacks, holidays on the moon, the paperless office — the predictions of futurologists are, it seems, doomed to fail. The only thing predictable about the future is its unpredictability.

But that has not stopped edge.org — the online intellectual salon — asking which ideas and inventions will provide humanity's next leap forward.

Most scientists like to dream about what will change the world — even if they understand that their own work is never likely to have quite the impact of a Copernicus or a Darwin.

The fascinating breadth of their visions of the future is revealed today by the discussion Website edge.org, which has asked some of the world's finest minds the question: "What will change everything?"

Every December the online intellectual salon called Edge, presided over by literary agent John Brockman, asks a select (virtual) assembly of scientists to ponder a question, such as what they are optimistic about (2007), what "dangerous" ideas they have (2006) and what they believe is true but cannot prove (2005). As the bell tolls on 2008 and rings in 2009, Edge is unveiling this year's: "What game-changing scientific ideas and developments do you expect to live to see?"

The splendidly enlightened Edge Website (www.edge.org) has rounded off each year of inter-disciplinary debate by asking its heavy-hitting contributors to answer one question. I strongly recommend a visit.

A great event in the Anglo-Saxon culture

As fascinating and weighty as one would imagine

They are the intellectual elite, the brains the rest of us rely on to make sense of the universe and answer the big questions. But in a refreshing show of new year humility, the world's best thinkers have admitted that from time to time even they are forced to change their minds

Even the world's best brains have to admit to being wrong sometimes: here, leading scientists respond to a new year challenge

Provocative ideas put forward today by leading figures

The world's finest minds have responded with some of the most insightful, humbling, fascinating confessions and anecdotes, an intellectual treasure trove. ... Best three or four hours of intense, enlightening reading you can do for the new year. Read it now.

As in the past, these world-class thinkers have responded to impossibly open-ended questions with erudition, imagination and clarity.

A jolt of fresh thinking...The answers address a fabulous array of issues. This is the intellectual equivalent of a New Year's dip in the lake — bracing, possibly shriek-inducing, and bound to wake you up

Answers ring like scientific odes to uncertainty, humility and doubt; passionate pleas for critical thought in a world threatened by blind convictions

For an exceptionally high quotient of interesting ideas to words, this is hard to beat. ...What a feast of egg-head opinionating!

But when the scientific thinkers look beyond their own specializations
to the big picture, they continue to find cause for cheer — foreseeing
an end to war, for example, or the simultaneous solution
of our global warming and energy problems. The most general
grounds for optimism offered by these thinkers, though,
is that big-picture pessimism so often proves to be unfounded.

Global
warming, the war on terror and rampant consumerism getting
you down? Well, lighten up: here, 17 of the world's smartest
scientists and academics share their reasons to be cheerful

Brockman's respondents were forward-looking, describing
cutting-edge research that will help combat global warming
and other looming problems.

One way or another the answers should give you a warm
glow — either because you agree, or because they
make you angry.

Edge's future-themed article is making some news....
From the lips of contributors to the online magazine
Edge to God's ears (one wonders if She or It may be listening):
dozens of scientists and other thinkers have looked ahead
to the future.

a Web site that aims to bridge the gap between scientists
and other thinkers

[E]ven in the face of such threats as global warming and
religious fundamentalism, scientists remain positive about
the future.

People's fascination with religion and superstition will
disappear within a few decades as television and the Internet
make it easier to get information, and scientists get closer
to discovering a final theory of everything, leading thinkers
argue today.

What are you optimistic about? Why? Tons of brilliant thinkers
respond.

(Sydney)
Into the minds of the believers. With
the aim of gathering ideas from the
world's leading thinkers on intellectual,
philosophical, artistic and literary
issues, US writer John Brockman established
The Edge Foundation in 1988.

Royal
Society president Martin Rees said the
most dangerous idea was public concern
that science and technology were running
out of control.

Audacious
Knowledge. What is a dangerous idea?
One not assumed to be false, but possibly
true? What do you believe is true even
though you cannot prove it?

Seductive
power of a hazardous idea. The responses
to Brockman's question do not directly
engage with each other, but they do worry
away at a core set of themes.

Academics
see gene cloning perils, untamed global
warming and personality-changing drugs
as presenting the gravest dangers for
the future of civilization

Risky
ideas; What do scientists currently regard
as the most dangerous thoughts?

Be
Afraid. Edge.org canvassed scientists for
their "most dangerous idea." David
Buss, a psychologist at the University
of Texas, chose "The Evolution of
Evil."

The
most dangerous idea. Brockman's challenge
is noteworthy because his buddies include
many of the world's greatest scientists:
Freeman Dyson, David Gelernter, J. Craig
Venter, Jared Diamond, Brian Greene.

Dangerous
Ideas About Modern Life. Free will does
not exist. We are not always created
equal. Science will never be able to
address our deepest concerns.

Genome
sequencing pioneer Craig Venter suggests
greater understanding of how genes influence
characteristics such as personality,
intelligence and athletic capability
could lead to conflict in society.

The
wilder shores of creativity. He asked
his roster of thinkers [...] to nominate
an idea, not necessarily their own, they
consider dangerous not because it is
false, but because it might be true.

From cloning to predetermination of sex:
the answers of investigators and philosophers
to a question on the online salon Edge.

Who
controls humans? God? The genes? Or the
computer after all? The on-line forum Edge
asked its yearly question — and
the answers raised more questions.

The
117 respondents include Richard Dawkins,
Freeman Dyson, Daniel Dennett, Jared
Diamond — and that's just the
D's! As you might expect, the submissions
are brilliant and very controversial.

Gene
discoveries highlight dangers facing
society. Mankind's increasing understanding
of the way genes influence behaviour
and the issue's potential to cause ethical
and moral dilemmas is one of the biggest
dangers facing society, according to
leading scientists.

Why
it can be a very smart move to start
life with a Jewish momma: There is one
dangerous idea that still trumps them
all: the notion that, as Steven Pinker
describes it, "groups of people
may differ genetically in their average
talents and temperaments". For "groups
of people", read "races."

The
Earth can cope with global warming, schools
should be banned and we should learn
to love bacteria. These are among the
dangerous ideas revealed by a poll of
leading thinkers.

Science
can be a risky game, as Galileo learned
to his cost. Now John Brockman asks over
a hundred thinkers, "What is your
most dangerous idea?"

"Our
brains are constantly subjected to the
demands of multi-tasking and a seemingly
endless cacophony of information from
diverse sources."

Very
complex systems — whether organisms,
brains, the biosphere, or the universe
itself — were not constructed by
design; all have evolved. There is a
new set of metaphors to describe ourselves,
our minds, the universe, and all of the
things we know in it.

Space
Without Time, Time Without Rest: John Brockman's
Question for the Republic of Wisdom — It
can be more thrilling to start the New Year
with a good question than with a good intention.
That's what John Brockman is doing for the
eighth time in a row.

What
do you believe to be true, even though you
can't prove it? John Brockman asked over a
hundred scientists and intellectuals... more» ...
Edge

That's
what online magazine The Edge — the
World Question Center asked over 120 scientists,
futurists, and other interesting minds. Their
answers are sometimes short and to the point

To
celebrate the new year, online magazine Edge asked
some leading thinkers a simple question:
What do you believe but cannot prove? Here
is a selection of their responses...

Scientists
dream too — imagine that

"Fantastically
stimulating ...Once
you start, you can't stop thinking about that
question. It's like the crack cocaine of the thinking world." — BBC Radio 4

Scientists,
increasingly, have become our public intellectuals,
to whom we look for explanations and solutions.
These may be partial and imperfect, but they
are more satisfactory than the alternatives.

Bangladesh — The
cynic and the optimist, the agnostic and
the believer, the rationalist and the obscurantist,
the scientist and the speculative philosopher,
the realist and the idealist, all converge
on a critical point in their thought process
where reasoning loses its power.

"So
now, into the breach comes John Brockman, the literary
agent and gadfly, whose online scientific salon,
Edge.org, has become one of the most interesting
stopping places on the Web. He begins every year
by posing a question to his distinguished roster
of authors and invited guests. Last year he asked
what sort of counsel each would offer George W.
Bush as the nation's top science adviser. This
time the question is "What's your law?"

"John
Brockman, a New York literary agent, writer and
impresario of the online salon Edge, figures it
is time for more scientists to get in on the whole
naming thing...As a New Year's exercise, he asked
scores of leading thinkers in the natural and social
sciences for "some bit of wisdom, some rule
of nature, some law-like pattern, either grand
or small, that you've noticed in the universe that
might as well be named after you."

"John
Brockman has posted an intriguing question on his
Edge Website. Brockman advises his would-be legislators
to stick to the scientific disciplines."

"Everything
answers to the rule of law. Nature. Science. Society.
All of it obeys a set of codes...It's the thinker's
challenge to put words to these unwritten rules.
Do so, and he or she may go down in history. Like
a Newton or, more recently, a Gordon Moore, who
in 1965 coined the most cited theory of the technological
age, an observation on how computers grow exponentially
cheaper and more powerful... Recently, John Brockman
went looking for more laws."

"In
2002, he [Brockman] asked respondents to imagine
that they had been nominated as White House science
adviser and that President Bush had sought their
answer to 'What are the pressing scientific issues
for the nation and the world, and what is your
advice on how I can begin to deal with them?' Here
are excerpts of some of the responses. "

"Edge's
combination of political
engagement and blue-sky thinking
makes stimulating reading
for anyone seeking a glimpse
into the next decade."

"Dear
W: Scientists Offer
President Advice on Policy"

"There are 84 responses, ranging in topic from advanced nanotechnology to the psychology of foreign cultures, and lots of ideas regarding science, technology, politics, and education."

"Brockman's
thinkers of the 'Third Culture,'
whether they, like Dawkins,
study evolutionary biology
at Oxford or, like Alan Alda,
portray scientists on Broadway,
know no taboos. Everything
is permitted, and nothing
is excluded from this intellectual
game."

"The
responses are generally
written in an engaging,
casual style (perhaps encouraged
by the medium of e-mail),
and are often fascinating
and thought-provoking....
These are all wonderful,
intelligent questions..."

"Responses
to this year's question are deliciously creative...
the variety astonishes. Edge continues
to launch intellectual skyrockets of stunning
brilliance. Nobody in the world is doing what Edge is
doing."

"Once
a year, John Brockman of New York, a writer
and literary agent who represents many scientists,
poses a question in his online journal, The
Edge, and invites the thousand or so people
on his mailing list to answer it."

"Don't
assume for a second that Ted Koppel, Charlie
Rose and the editorial high command at the New
York Times have a handle on all the pressing
issues of the day.... a lengthy list of profound,
esoteric and outright entertaining responses."

The
Greatest Inventions of the Past 2,000 Years
Edited
by John
Brockman

"A terrific, thought provoking site."

"The
Power of Big Ideas"

"The
Nominees for Best Invention Of the Last Two Millennia
Are . . ."

"...Thoughtful and often surprising answers
....a fascinating survey of intellectual and
creative wonders of the world ..... Reading
them reminds me of how wondrous our world is." — Bill Gates, New York Times Syndicated
Column

"A
site that has raised electronic discourse on the
Web to a whole new level.... Genuine learning seems
to be going on here."

"To
mark the first anniversary of [Edge],
Brockman posed a question: 'Simply reading the
six million volumes in the Widener Library does
not necessarily lead to a complex and subtle
mind,' he wrote, referring to the Harvard
library. 'How to avoid the anesthesiology
of wisdom?'"

"Home
to often lively, sometimes obscure and almost
always ambitious discussions."

"Open-minded,
free-ranging, intellectually playful
...an unadorned pleasure in curiosity,
a collective expression of wonder
at the living and inanimate world
... an ongoing and thrilling colloquium." — Ian
McEwan, Author of Saturday

"Astounding
reading."

"An
unprecedented roster of brilliant minds,
the sum of which is nothing short of
visionary"

"Fantastically
stimulating...It's like the crack cocaine
of the thinking world.... Once you
start, you can't stop thinking about
that question."

HOWARD GARDNER
Psychologist, Harvard University; Author, Truth, Beauty, and Goodness Reframed: Educating for the Virtues in the 21st Century

"How Would You Disprove Your Viewpoint?!"

Thanks to Karl Popper, we have a simple and powerful tool: the phrase "How Would You Disprove Your Viewpoint?!"

In a democratic and demotic society like ours, the biggest challenge to scientific thinking is the tendency to embrace views on the basis of faith or of ideology. A majority of Americans doubt evolution because it goes against their religious teachings; and at least a sizeable minority are skeptical about global warming — or more precisely, the human contributions to global change — because efforts to counter climate change would tamper with the 'free market'.

Popper popularized the notion that a claim is scientific only to the extent that it can be disproved — and that science works through perpetual efforts to disprove claims.

If American citizens, or, for that matter, citizens anywhere, were motivated to describe the conditions under which they would relinquish their beliefs, they would begin to think scientifically. And if they admitted that empirical evidence would not change their minds, then at least they'd have indicated that their views have a religious or an ideological, rather than a scientific, basis.

BRUCE HOOD
Director of the Bristol Cognitive Development Centre in the Experimental Psychology Department at the University of Bristol; Author, Supersense

Haecceity

Understanding the concept of haecceity would improve everybody's cognitive toolkit because it succinctly captures most people's intuitions about authenticity, which are increasingly threatened by the development of new technologies. Cloning, genetic modification and even digital reproduction are examples of innovations that alarm many members of the public because they appear to violate a belief in the integrity of objects.

Haecceity is originally a metaphysical concept that is both totally obscure and yet very familiar to all of us. It is the psychological attribution of an unobservable property to an object that makes it unique among identical copies. All objects may be categorized into groups on the basis of some shared property, but an object within a category is unique by virtue of its haecceity. It is haecceity that makes your wedding ring authentic and your spouse irreplaceable, even though such things could be copied exactly in a futuristic science fiction world where matter duplication had been solved.

Haecceity also explains why you can gradually replace every atom in an object so that it no longer contains any of the original material and yet, psychologically, we consider it to be the same object. That transformation can be total, but so long as it has been gradual, we consider it to be the same thing. It is haecceity that enables us to accept restoration of valuable works of art and antiquities as a continuous process of rejuvenation. Even when we discover that we replace most of the cellular structures of our bodies every couple of decades, haecceity enables us to consider the continuity of our own unique self.

Haecceity is an intellectually challenging concept attributable to the medieval Scottish philosopher John Duns Scotus, who ironically is also the origin of the term for the intellectually challenged, "dunces." Duns Scotus coined haecceity to address a confusion in Greek metaphysics, distinguishing the invisible property that defines the individual from "quiddity," the unique property that defines the group.

Today, both haecceity and quiddity have been subsumed under the more recognizable term "essentialism." Richard Dawkins has recently called essentialism "the dead hand of Plato," because, as he points out, an intuitive belief in distinct identities is a major impediment to accepting the reality that all diverse life forms have a common biological ancestry. However, drawing the distinction within essentialism is important. For example, it is probably intuitive quiddity that makes some people unhappy about genetic modification, because they see this as a violation of the integrity of the species as a group. On the other hand, it is intuitive haecceity that forms our barrier to cloning, where the authenticity of the individual is compromised.

By reintroducing haecceity as a scientific concept, albeit one that captures a psychological construct, we can avoid the confusion over using the less constrained term of essentialism that is applied to hidden properties that define both the group and the individual identity. It also provides a term for that gut feeling that many of us have when the identity and integrity of objects we value are threatened and we can't find the word for describing our concerns.

The notion of a probability distribution would, I think, be a most useful addition to the intellectual toolkits of most people.

Most quantities of interest, most projections, most numerical assessments are not point estimates. Rather they are rough distributions — not always normal, sometimes bi-modal, sometimes exponential, sometimes something else.

Related ideas of mean, median, and variance are also important, of course, but the simple notion of a distribution implicitly suggests these and weans people from the illusion that certainty and precise numerical answers are always attainable.
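A quick sketch of the point made above. The bimodal mixture and its numbers are invented for illustration, not drawn from the essay; it shows how a single point estimate (the mean) can describe a place where the quantity almost never actually lands:

```python
import random
import statistics

random.seed(0)

# A bimodal quantity: a 50/50 mixture of two bell curves centred at 10 and 20.
samples = [random.gauss(10, 1) if random.random() < 0.5 else random.gauss(20, 1)
           for _ in range(10_000)]

mean = statistics.mean(samples)      # close to 15...
stdev = statistics.stdev(samples)    # large, because the two peaks sit far apart

# ...yet hardly any individual outcome falls anywhere near the mean.
near_mean = sum(abs(x - mean) < 1 for x in samples) / len(samples)

print(f"mean={mean:.1f} stdev={stdev:.1f}")
print(f"fraction of samples within 1 of the mean: {near_mean:.3f}")
```

Reporting "about 15" for this quantity would be technically defensible and practically misleading; the distribution itself says "about 10 or about 20, with equal odds," which is exactly the information a point estimate throws away.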

One of the most widely-useful (but not widely-understood) scientific concepts is that of a possibility space. This is a way of thinking precisely about complex situations. Possibility spaces can be difficult to get your head around, but once you learn how to use them, they are a very powerful way to reason, because they allow you to sidestep thinking about causes and effects.

As an example of how a possibility space can help answer questions, I will use "the Monty Hall problem," which many people find confusing using our normal tools of thought. Here is the setup: A game-show host presents a guest with a choice of items hidden behind three curtains. Behind one is a valuable prize; behind the other two are disappointing duds. After the guest has made an initial choice, the host reveals what is behind one of the un-chosen curtains, showing that it would have been a dud. The guest is then offered the opportunity to change their mind. Should they change or stick with their original decision?

Plausible-sounding arguments can be made for different answers. For instance, one might argue that it does not matter whether the guest switches or not, since nothing has changed the probability that the original choice is correct. Such arguments can be very convincing, even when they are wrong. The possibility space approach, on the other hand, allows us to skip reasoning about complex ideas like changing probabilities and what causes change. Instead, we use a kind of systematic bookkeeping that leads us directly to the answer. The trick is just to be careful to keep track of all of the possibilities.

One of the best ways to generate all the possibilities is to find a set of independent pieces of information that tell you everything you could possibly need to know about what could happen. For example, in the case of the Monty Hall problem, it would be sufficient to know what choice the guest is going to make, whether the host will reveal the leftmost or rightmost dud, and where the prize is located. Knowing these three pieces of information would allow you to predict exactly what is going to happen. It is also important that these three pieces of information are completely independent, in the sense that nothing can be learned about one of them by knowing the others. The possibility space can be constructed by creating every possible combination of these three unknowns (or any other set of unknowns that is predictive and independent).

In this case, the possibility space is three-dimensional, because there are three unknowns. Since there are three possible initial choices for the guest, two dud options for the host, and three possible locations for the prize, there are initially 3x2x3=18 possibilities in the space. (One might reasonably ask why we don't just call this a possibility table. In this simple case, we could. But, scientists generally work with possibility spaces that contain an infinity of possibilities in a multidimensional continuum, more like a physical space.) This particular possibility space starts out as three-dimensional, but once the guest makes their initial choice, twelve of the possibilities become impossible and it collapses to two dimensions.

Let's assume that the guest already knows what initial choice they are going to make. In that case they could model the situation as a two-dimensional possibility space, one dimension representing the location of the prize, the other whether the host will reveal the rightmost or leftmost dud. In this case, the first dimension indicates which curtain hides the prize (1, 2 or 3), and the second represents the arbitrary choice of the host (left dud or right dud), so there are six points in the space, representing the six possibilities of reality. Another way to say this is that the guest can deduce that they may be living in one of six equally-possible worlds. By listing them all, they will see that in four of these six, it is to their advantage to switch from their initial choice.

                     Host reveals left dud          Host reveals right dud
Prize is behind 1    2 revealed, better to stick    3 revealed, better to stick
Prize is behind 2    3 revealed, better to switch   3 revealed, better to switch
Prize is behind 3    2 revealed, better to switch   2 revealed, better to switch

Example of a two-dimensional possibility space, when the guest's initial choice is 1

After the host makes his revelation, half of these possibilities become impossible, and the space collapses to three possibilities. It will still be true that in two out of three of these possible worlds it is to the guest's advantage to switch. (In fact, this was even true of the original three-dimensional possibility space, before the guest made their initial choice.)

This is a particularly simple example of a possibility space where it is practical to list all the possibilities in a table, but the concept is far more general. In fact one way of looking at quantum mechanics is that reality actually consists of a possibility space, with Schrödinger's equation assigning a probability to each possibility. This allows quantum mechanics to explain phenomena that are impossible to account for in terms of causes and effects. Even in normal life, possibility spaces give us a reliable way to solve problems when our normal methods of reasoning seem to give contradictory or paradoxical answers. As Sherlock Holmes would say, "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth."
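The bookkeeping described above is mechanical enough to hand to a few lines of code. This sketch (illustrative, not part of the original essay) enumerates the six-point possibility space for a guest whose initial choice is curtain 1 and counts the worlds in which switching wins:

```python
from itertools import product

GUEST = 1  # the guest's fixed initial choice

# Two independent unknowns: where the prize is, and whether the host
# prefers the leftmost or rightmost dud when he has a choice.
possibilities = list(product([1, 2, 3], ["left", "right"]))

switch_wins = 0
for prize, preference in possibilities:
    duds = [c for c in (1, 2, 3) if c != GUEST and c != prize]
    revealed = duds[0] if preference == "left" else duds[-1]
    # Switching means taking the one curtain that is neither chosen nor revealed.
    remaining = next(c for c in (1, 2, 3) if c not in (GUEST, revealed))
    if remaining == prize:
        switch_wins += 1

print(f"switching wins in {switch_wins} of {len(possibilities)} possible worlds")
```

Running it reports that switching wins in four of the six possible worlds, matching the table: no reasoning about changing probabilities is needed, only careful enumeration.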

HAIM HARARI
Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

The Edge of the Circle

My concept is important, useful, scientific and very appropriate for Edge, but it does not exist. It is: The Edge of the Circle.

We know that a circle has no edge, and we also know that, when you travel on a circle far enough to the right or to the left, you reach the same place. Today's world is gradually moving towards extremism in almost every area: Politics, law, religion, economics, education, ethics, you name it. This is probably due to the brevity of messages, the huge amounts of information flooding us, the time pressure to respond before you think and the electronic means (Twitter, text messages) which impose superficiality. Only extremist messages can be fully conveyed in one sentence.

In this world, it often appears that there are two corners of extremism: Atheism and religious fanaticism; Far right and far left in politics; Suffocatingly detailed bureaucratic regulation or complete laissez-faire; No ethical restrictions in biology research and absolute restrictions imposed by religion; one can continue with dozens of examples.

But, in reality, the extremists at the two edges always end up in the same place. Hitler and Stalin both murdered millions, and signed a friendship pact. Far-left secular atheist demonstrators in the western world, including gays and feminists, support Islamic religious fanatics who treat women and gays as low animals. It has always been known that no income tax and 100% income tax yield the same result: no tax collected at all, as shown by the famous Laffer curve. This is the ultimate meeting point of the extremist supporters of tax increase and tax reduction.

Societies preaching absolute equality among their citizens always end up with the largest economic gaps. Fanatic extremist proponents of developing only renewable energy sources, with no nuclear power, delay or prevent acceptable interim solutions to global energy issues, just as much as the oil producers do. Misuse of animals in biology research is as damaging as the objections of fanatic animal-rights groups. One can go on and on with illustrations, which are more visible now than they were a decade or two ago. We live on the verge of an age of extremism.

So, the edge of the circle is the place where all of these extremists meet, live and preach. The military doctor who refuses to obey orders "because Obama was born in Africa" and the army doctor who murdered 12 people in Texas are both at the edge of the circle.

If you are a sensible moderate thinking person, open any newspaper and see how many times you will read news items or editorials, which will lead you to say: "Wow, these people are really at the edge of the circle" ...

With the discovery of mirror neurons and similar systems in humans, neuroscience has shown us that when we see the actions, sensations and emotions of others, we activate brain regions as if we were doing similar actions, were touched in similar ways or made similar facial expressions. In short, our brain mirrors the states of the people we observe. Intuitively, we have the impression that while we mirror, we feel what is going on in the person we observe. We empathize with him or her.

When the person we see has the exact same body and brain as we do, mirroring would tell us what the other feels. Whenever the other person is different in some relevant way, however, mirroring will mislead us. Imagine a masochist receiving a whiplash. Your mirror system might make you feel his pain — because you would feel pain in his stead. What he actually feels though is pleasure. You committed the mirror fallacy of incorrectly feeling that he would have felt what you would have felt — not what he actually felt.

The world is full of such fallacies: we feel dolphins are happy just because their faces resemble ours when we smile, or we attribute pain to robots in sci-fi movies. We feel an audience in Japan failed to like a presentation we gave because their poise would be our boredom. Labeling these mirror fallacies, and realizing that the way we interpret the social world is through projection, might help us reappraise such situations and beware.

The scientific concept of the "multiverse" has already entered the popular imagination. But the full implications of the idea that every possible universe has been and will be actualised have yet to sink in. One of these, which could do more to change our view of things than anything else, is that we are all destined to be immortal.

This welcome news (if indeed it is welcome) follows on two quite different grounds. First, death normally occurs to human bodies in due time either as the result of some kind of macro-accident — for example a car crash, or a homicide; or a micro one — a heart attack, a stroke; or, if those don't get us, a nano one — accidental errors in cell division, cancer, old age. Yet, in the multiverse, where every alternative is realised, the wonderful truth is that there has to be at least one particular universe in which by sheer luck each of us as individuals has escaped any and all of these blows.

Second, we live in a world where scientists are, in any case, actively searching for ways of combatting all such accidents: seat belts to protect us in the crash, aspirin to prevent stroke, red wine antioxidants to counter heart attacks, antibiotics against disease. And in one or more of the possible universes to come these measures will surely have succeeded in making continuing life rather than death the natural thing.

Taking these possibilities — nay certainties — together, we can reasonably conclude that there will surely be at least one universe in which I — and you — will still find ourselves living in a thousand years, or a million years' time.

Then, when we get there, should we, the ultimate survivors, the one in a trillion chancers, mourn our alter-egos who never made it? No, probably no more than we do now. We are already, as individuals, statistically so improbable as to be a seeming miracle. Having made it so far, shouldn't we look forward to more of the same?

GEORGE LAKOFF
Cognitive Scientist and Linguist; Richard and Rhoda Goldman Distinguished Professor of Cognitive Science and Linguistics, UC Berkeley; Author, The Political Mind

Conceptual Metaphor

Conceptual Metaphor is at the center of a complex theory of how the brain gives rise to thought and language, and how cognition is embodied. All concepts are physical brain circuits deriving their meaning via neural cascades that terminate in linkage to the body. That is how embodied cognition arises.

Primary metaphors are brain mappings linking disparate brain regions, each tied to the body in a different way. For example, More Is Up (as in "prices rose") links a region coordinating quantity to another coordinating verticality. The neural mappings are directional, linking frame structures in each region. The directionality is determined by first-spike synaptic strengthening. Primary metaphors are learned automatically and unconsciously by the hundreds prior to metaphoric language, just by living in the world and having disparate brain regions activated together when different experiences repeatedly co-occur.

Complex conceptual metaphors arise via neural bindings, both across metaphors and from a given metaphor to a conceptual frame circuit. Metaphorical reasoning arises when source domain inference structures are used for target domain reasoning via neural mappings. Linguistic metaphors occur when words for source domain concepts are used for target domain concepts via neural metaphoric mappings.

Because conceptual metaphors unconsciously structure the brain's conceptual system, much of normal everyday thought is metaphoric, with different conceptual metaphors used to think with on different occasions or by different people. A central consequence is that the huge range of concepts that use metaphor cannot be defined relative to the outside world, but are instead embodied via interactions of the body and brain with the world.

There are consequences in virtually every area of life. Marriage, for example, is understood in many ways, as a journey, a partnership, a means for growth, a refuge, a bond, a joining together, and so on. What counts as a difficulty in the marriage is defined by the metaphor used. Since it is rare for spouses to have the same metaphors for their marriage, and since the metaphors are fixed in the brain but unconscious, it is not surprising that so many marriages encounter difficulties.

In politics, conservatives and progressives have ideologies defined by different metaphors. Various concepts of morality around the world are constituted by different metaphors. Even mathematical concepts are understood via metaphor, depending on the branch of mathematics. Emotions are conceptualized via metaphors that are tied to the physiology of emotion. In set theory, numbers are sets of a certain structure. On the number line, numbers are points on a line. "Real" numbers are defined via the metaphor that infinity is a thing; an infinite decimal like pi goes on forever, yet it is a single entity — an infinite thing.

Though conceptual metaphors have been researched extensively in the fields of cognitive linguistics and neural computation for decades, experimental psychologists have been confirming their existence by showing that, as physical circuitry in the brain, they can influence behavior in the laboratory. The metaphors guide the experimenters, showing them what to look for. Confirming the conceptual metaphors "The Future Is Ahead" and "The Past Is Behind," experimenters found that subjects thinking about the future lean slightly forward, while those thinking about the past lean slightly backwards. Subjects asked to do immoral acts in experiments tended to wash or wipe their hands afterwards, confirming the conceptual metaphor "Morality Is Purity".

Subjects moving marbles upward tended to tell happy stories, while those moving marbles downward tended to tell sad stories, confirming "Happy Is Up;" "Sad is Down". Similar results are coming in by the dozens. The new experimental results on embodied cognition are mostly in the realm of conceptual metaphor.

Perhaps most remarkable, there appear to be brain structures that we are born with that provide pathways ready for metaphor circuitry. Edward Hubbard has observed that critical brain regions coordinating space and time measurement are adjacent in the brain, making it easy for the universal metaphors for understanding time in terms of space to develop (as in "Christmas is coming" or "We're coming up on Christmas"). Mirror neuron pathways linking brain regions coordinating vision and hand actions provide a natural pathway for the conceptual metaphor that "Seeing Is Touching" (as in "Their eyes met").

Though metaphor has been discussed in literature for over 2500 years, it is only within the last 30 years that conceptual metaphor has been found scientifically to be central to our mental life.

MILFORD H. WOLPOFF
Professor of Anthropology and Adjunct Associate Research Scientist, Museum of Anthropology at the University of Michigan; Author, Race and Human Evolution

GIGO

A shorthand abstraction I find to be particularly useful in my own cognitive toolkit comes from the world of computer science, and applies broadly in my experience to science and scientists. GIGO means "garbage in, garbage out." Its application in the computer world is straightforward and easy to understand, but I have found much broader applications throughout my career in paleoanthropology.

In computer work, garbage results can arise from bad data or from poorly conceived algorithms applied to analysis; I don't expect that combining the two yields a different order of garbage, because bad is bad enough. The science I am used to practicing has far too many examples of mistaken, occasionally fraudulent data and of inappropriate, even illogical analysis, and it is all too often impossible to separate conclusions from assumptions.

I don't mean to denigrate paleoanthropology, which I expect is quite like other sciences in these respects, and wherein most work is superbly executed and cannot be described this way. The value of GIGO is to sharpen the skeptical sense and the critical facility because the truth behind GIGO is simple: science is a human activity.

Imagine you need to find the midpoint of a stick. You can measure its length, using a ruler (or making a ruler, using any available increment) and digitally compute the midpoint. Or, you can use a piece of string as an analog computer, matching the length of the stick to the string, and then finding the middle of the string by doubling it back upon itself. This will correspond, without any loss of accuracy due to rounding off to the nearest increment, to the midpoint of the stick. If you are willing to assume that mass scales linearly with length, you can use the stick itself as an analog computer, finding its midpoint by balancing it against the Earth's gravitational field.
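The rounding loss the paragraph attributes to digital measurement can be made concrete. In this small Python sketch (the stick length and ruler increment are invented for illustration), the "digital" ruler quantizes the stick's length to the nearest whole increment before halving, while the "analog" string halves the true length directly:

```python
# Illustrative sketch: digital midpoint (ruler quantized to whole units)
# versus analog midpoint (string matched to the true length).
increment = 1.0          # the ruler is marked only in whole units
true_length = 33.7       # the stick's actual length

# Digital: measure to the nearest increment, then halve the measurement.
digital_midpoint = round(true_length / increment) * increment / 2

# Analog: double the string back on itself; no quantization step.
analog_midpoint = true_length / 2

print(digital_midpoint, analog_midpoint)  # 17.0 16.85
```

The digital answer is off by the rounding of the initial measurement; the analog one is limited only by the physical fidelity of the string, which is the essay's point.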

There is no precise distinction between analog and digital computing, but, in general, digital computing deals with integers, binary sequences, and time that is idealized into discrete increments, while analog computing deals with real numbers and continuous variables, including time as it appears to exist in the real world. The past sixty years have brought such advances in digital computing that it may seem anachronistic to view analog computing as an important scientific concept, but, more than ever, it is.

Analog computing, once believed to be as extinct as the differential analyzer, has returned. Not for performing arithmetic — a task at which even a pocket calculator outperforms an analog computer — but for problems at which analog computing can do a better job not only of computing the answer, but of asking the questions and communicating the results. Who is friends with whom? For a small high school, you could construct a database to keep track of this, and update it every night to keep track of changes to the lists. If you want to answer this question, updated in real time, for 500 million people, your only hope is to build an analog computer. Sure, you may use digital components, but at a certain point the analog computing being performed by the system far exceeds the complexity of the digital code with which it is built. That's the genius that powers Facebook and its ilk. Your model of the social graph becomes the social graph, and updates itself.

In the age of all things digital, "Web 2.0" is our code word for the analog increasingly supervening upon the digital — reversing how digital logic was embedded in analog components, sixty years ago. The fastest-growing computers of 2010 — Facebook and Google — are analog computers in a big, new, and important way. Instead of meaningful information being encoded as unambiguous (and fault-intolerant) digital sequences referenced by precise numerical addressing, meaningful information is increasingly being encoded (and operated upon) as continuous (and noise-tolerant) variables such as frequencies (of connection or occurrence) and the topology of what connects where, with location being increasingly defined by fault-tolerant template rather than by unforgiving numerical address.

Complex networks — of molecules, people, or ideas — constitute their own simplest behavioral descriptions. This behavior can be more easily and accurately approximated by continuous, analog networks than it can be defined by digital, algorithmic codes. These analog networks may be composed of digital processors, but it is in the analog domain that the interesting computation is being performed.

Analog is back, and here to stay.

ROGER SCHANK
Psychologist & Computer Scientist; Engines for Education Inc.; Author, Making Minds Less Well Educated Than Our Own

Experimentation

Some scientific concepts have been so ruined by our education system that it is necessary to explain the ones that everyone thinks they know about when they really don't.

We learn about experimentation in school. What we learn is that scientists conduct experiments and that if we copy exactly what they did in our high school labs we will get the results they got. We learn about the experiments that scientists do, usually about the physical and chemical properties of things, and we learn that they report their results in scientific journals. So, in effect, we learn that experimentation is boring, is something done by scientists, and has nothing to do with our daily lives.

And, this is a problem. Experimentation is something done by everyone all the time. Babies experiment with what might be good to put in their mouths. Toddlers experiment with various behaviors to see what they can get away with. Teenagers experiment with sex, drugs, and rock and roll. But because people don't really see these things as experiments or as ways of collecting evidence in support or refutation of hypotheses, they don't learn to think about experimentation as something they constantly do and thus will need to learn to do better.

Every time we take a prescription drug we are conducting an experiment. But, we don't carefully record the results after each dose, and we don't run controls, and we mix up the variables by not changing only one behavior at a time, so that when we suffer from side effects we can't figure out what might have been the true cause. We do the same thing with personal relationships. When they go wrong, we can't figure out why because the conditions are different in each one.

Now, while it is difficult if not impossible to conduct controlled experiments in most aspects of our own lives, it is possible to come to understand that we are indeed conducting an experiment when we take a new job, or try a new tactic in a game we are playing, or when we pick a school to attend, or when we try and figure out how someone is feeling, or when we wonder why we ourselves feel the way we do.

Every aspect of life is an experiment that can be better understood if it is perceived in that way. But because we don't recognize this we fail to understand that we need to reason logically from evidence we gather, and that we need to carefully consider the conditions under which our experiments have been conducted, and that we need to decide when and how we might run the experiment again with better results.

In other words, the scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained as the result of an experiment. But people who don't see their actions as experiments, and those who don't know how to reason carefully from data, will continue to learn less well from their own experiences than those who do.

Since most of us have learned the word "experiment" in the context of a boring ninth grade science class, most people have long since learned to discount science and experimentation as being relevant to their lives.

If school taught basic cognitive concepts such as experimentation in the context of everyday experience, and taught people how to carefully conduct experiments in their own lives instead of concentrating on using algebra as a way of teaching people how to reason, then people would be much more effective at thinking about politics, child raising, personal relationships, business, and every other aspect of daily life.

ROBERT R. PROVINE
Psychologist and Neuroscientist, University of Maryland; Author, Laughter

TANSTAAFL

TANSTAAFL is the acronym for "There ain't no such thing as a free lunch," a universal truth having broad and deep explanatory power in science and daily life.

The expression originated from the practice of saloons offering "free lunch" if you buy their overpriced drinks. Science fiction master Robert Heinlein introduced me to TANSTAAFL in The Moon is a Harsh Mistress, his 1966 classic, in which a character warns of the hidden cost of a free lunch.

The universality of the fact that you can't get something for nothing has found application in sciences as diverse as physics (the Laws of Thermodynamics) and economics, where Milton Friedman used a grammatically upgraded variant as the title of his 1975 book There's No Such Thing as a Free Lunch. Physicists are clearly on board with TANSTAAFL; less so many political economists in their smoke-and-mirrors world.

My students hear a lot about TANSTAAFL, from the biological costs of the peacock's tail, to our nervous system that distorts physical reality to emphasize changes in time and space. When the final tally is made, peahens cast their ballot for the sexually exquisite plumage of the peacock and its associated vigor, and it is more adaptive for humans to detect critical sensory events than to be high fidelity light and sound meters. In such cases, lunch is not free but comes at reasonable cost as determined by the grim but honest accounting of natural selection, a process without hand-waving and incantation.

GERALD HOLTON
Mallinckrodt Professor of Physics and Professor of the History of Science, Emeritus, at Harvard University; Coeditor, Einstein for the 21st Century: His Legacy in Science, Art, and Modern Culture

Skeptical Empiricism

In politics and society at large, important decisions are all too often based on deeply held presuppositions, ideology or dogma — or, on the other hand, on headlong pragmatism without study of long-range consequences.

Therefore I suggest the adoption of Skeptical Empiricism, the kind that is exemplified by the carefully thought-out and tested research in science at its best. It differs from plain empiricism of the sort that characterized the writings of the scientist/philosopher Ernst Mach, who refused to believe in the existence of atoms because one could not "see" them.

To be sure, in politics and daily life, on some topics decisions have to be made very rapidly, on few or conflicting data. Yet, precisely for that reason, it will be wise also to launch a more considered program of skeptical empiricism on the same topic, if only to be better prepared for the consequences, intended or not, that follow from the quick decision.

MARTIN SELIGMAN
Zellerbach Family Professor of Psychology; Director of the Positive Psychology Center, University of Pennsylvania; Author, Flourish

PERMA

Is global well being possible?

Scientists commonly predict dystopias: nuclear war, overpopulation, energy shortage, dysgenic selection, widespread soundbite mentality, and the like. You don't get much attention predicting that the human future will work out. I am not, however, going to predict that a positive human future will in fact occur, but it becomes more likely if we think about it systematically. We can begin by laying out the measurable elements of well being and then asking how those elements might be achieved. I address only measurement.

Well being is about what individuals and societies choose for its own sake, that which is north of indifference. The elements of well being must be exclusive, measurable independently of each other, and, ideally, exhaustive. I believe there are five such elements, and they have a handy acronym, PERMA:

P Positive Emotion

E Engagement

R Positive Relationships

M Meaning and Purpose

A Accomplishment

There has been forward movement in the measurement of these over the last decade. Taken together, PERMA forms a more comprehensive index of well being than "life satisfaction", and it allows for the combining of objective and subjective indicators. PERMA can index the well being of individuals, of corporations, and of cities. The United Kingdom has now undertaken the measurement of well being for the nation, as one criterion — in addition to Gross Domestic Product — of the success of its public policy.

PERMA is a shorthand abstraction for the enabling conditions of life.

How do the disabling conditions, such as poverty, disease, depression, aggression, and ignorance, relate to PERMA? The disabling conditions of life obstruct PERMA, but they do not obviate it. Importantly, the correlation of depression with happiness is not minus 1.00 but only about minus 0.35, and the effect of income on life satisfaction is markedly curvilinear, with increased income producing less and less life satisfaction the further above the safety net you are.

Science and public policy have traditionally been focused solely on remediating the disabling conditions, but PERMA suggests that this is insufficient. If we want global well being, we should also measure and try to build PERMA. The very same principle seems to be true in your own life: if you wish to flourish personally, getting rid of depression, anxiety, and anger and getting rich is not enough; you also need to build PERMA directly.

What is known about how PERMA can be built?

Perhaps the Edge Question for 2012 will be "How Can Science Contribute To Building Global Well Being"?

A zero-sum game is an interaction in which one party's gain equals the other party's loss — the sum of their gains and losses is zero. (More accurately, it is constant across all combinations of their courses of action.) Sports matches are quintessential examples of zero-sum games: winning isn't everything, it's the only thing, and nice guys finish last. A nonzero-sum game is an interaction in which some combinations of actions provide a net gain (positive-sum) or loss (negative sum) to the two of them. The trading of surpluses, as when herders and farmers exchange wool and milk for grain and fruit, is a quintessential example, as is the trading of favors, as when people take turns baby-sitting each others' children.

In a zero-sum game, a rational actor seeking the greatest gain for himself or herself will necessarily be seeking the maximum loss for the other actor. In a positive-sum game, a rational, self-interested actor may benefit the other guy with the same choice that benefits himself or herself. More colloquially, positive-sum games are called win-win situations, and are captured in the cliché "Everybody wins."
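The distinction can be stated in a few lines of code. The payoff tables below are hypothetical, but they show the defining test: a game is zero-sum (more precisely, constant-sum) exactly when every combination of the players' actions yields the same joint total, and nonzero-sum when the total varies with what the players choose:

```python
# Hypothetical payoff tables: keys are (A's action, B's action),
# values are (A's payoff, B's payoff).
zero_sum = {                       # a sports match: one side's gain
    ("win", "lose"): (1, -1),      # is exactly the other's loss
    ("lose", "win"): (-1, 1),
}
trade = {                          # herders and farmers trading surpluses
    ("trade", "trade"): (3, 3),    # both gain: the positive-sum outcome
    ("trade", "hoard"): (0, 1),
    ("hoard", "trade"): (1, 0),
    ("hoard", "hoard"): (1, 1),
}

def is_constant_sum(game):
    """A game is constant-sum iff the joint payoff is the same in every cell."""
    joint_totals = {a + b for a, b in game.values()}
    return len(joint_totals) == 1

print(is_constant_sum(zero_sum))   # True
print(is_constant_sum(trade))      # False: the sum depends on the choices
```

In the trading game, both players choosing "trade" yields a joint total of 6 rather than the 2 of mutual hoarding, which is exactly what makes the interaction a candidate for "everybody wins."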

This family of concepts (zero-sum, nonzero-sum, positive-sum, negative-sum, constant-sum, and variable-sum games) was introduced by John von Neumann and Oskar Morgenstern when they invented the mathematical theory of games in 1944. The Google Books Ngram tool shows that the terms saw a steady increase in popularity beginning in the 1950s, and their colloquial relative "win-win" began a similar ascent in the 1970s.

Once people are thrown together in an interaction, their choices don't determine whether they are in a zero- or nonzero-sum game; the game is a part of the world they live in. But people, by neglecting some of the options on the table, may perceive that they are in a zero-sum game when in fact they are in a nonzero-sum game. Moreover, they can change the world to make their interaction nonzero-sum. For these reasons, when people become consciously aware of the game-theoretic structure of their interaction (that is, whether it is positive-, negative-, or zero-sum), they can make choices that bring them valuable outcomes — like safety, harmony, and prosperity — without their having to become more virtuous, noble, or pure.

Some examples. Squabbling colleagues or relatives agree to swallow their pride, take their losses, or lump it to enjoy the resulting comity rather than absorbing the costs of continuous bickering in hopes of prevailing in a battle of wills. Two parties in a negotiation split the difference in their initial bargaining positions to "get to yes." A divorcing couple realizes that they can reframe their negotiations from each trying to get the better of the other while enriching their lawyers to trying to keep as much money for the two of them and out of the billable hours of Dewey, Cheatham, and Howe as possible. Populaces recognize that economic middlemen (particularly ethnic minorities who specialize in that niche such as Jews, Armenians, overseas Chinese, and expatriate Indians) are not social parasites whose prosperity comes at the expense of their hosts but positive-sum-game-creators who enrich everyone at once. Countries recognize that international trade doesn't benefit their trading partner to their own detriment but benefits them both, and turn away from beggar-thy-neighbor protectionism to open economies which (as classical economists noted) make everyone richer and which (as political scientists have recently shown) discourage war and genocide. Warring countries lay down their arms and split the peace dividend rather than pursuing Pyrrhic victories.

Granted, some human interactions really are zero-sum — competition for mates is a biologically salient example. And even in positive-sum games a party may pursue an individual advantage at the expense of joint welfare. But a full realization of the risks and costs of the game-theoretic structure of an interaction (particularly if it is repeated, so that the temptation to pursue an advantage in one round may be penalized when roles reverse in the next) can militate against various forms of short-sighted exploitation.

Has an increasing awareness of the zero- or nonzero-sumness of interactions in the decades since 1950 (whether referred to in those terms or not) actually led to increased peace and prosperity in the world? It's not implausible. International trade and membership in international organizations have soared in the decades that game-theoretic thinking has infiltrated popular discourse. And perhaps not coincidentally, the developed world has seen both spectacular economic growth and a historically unprecedented decline in several forms of institutionalized violence, such as war between great powers, war between wealthy states, genocides, and deadly ethnic riots. Since the 1990s these gifts have started to accrue to the developing world as well, in part because they have switched their foundational ideologies from ones that glorify zero-sum class and national struggle to ones that glorify positive-sum market cooperation. (All these claims can be documented from the literature in international studies.)

The enriching and pacifying effects of participation in positive-sum games long antedate the contemporary awareness of the concept. The biologists John Maynard Smith and Eörs Szathmáry have argued that an evolutionary dynamic which creates positive-sum games drove the major transitions in the history of life: the emergence of genes, chromosomes, bacteria, cells with nuclei, organisms, sexually reproducing organisms, and animal societies. In each transition, biological agents entered into larger wholes in which they specialized, exchanged benefits, and developed safeguards to prevent one from exploiting the rest to the detriment of the whole. An explicit recognition among literate people of the shorthand abstraction "positive-sum game" and its relatives may be extending a process in the world of human choices that has been operating in the natural world for billions of years.

It is not hard to identify the discipline in which to look for the scientific concept that would most improve everybody's cognitive toolkit; it has to be economics. No other field of study contains so many ideas ignored by so many people at such great cost to themselves and the world. The hard task is picking just one of the many such ideas that economists have developed.

On reflection, I plumped for the law of comparative advantage, which explains how trade can be beneficial for both parties even when one of them is more productive than the other in every way. At a time of growing protectionism, it is more important than ever to reassert the value of free trade. Since trade in labor is roughly the same as trade in goods, the law of comparative advantage also explains why immigration is almost always a good thing — a point which also needs emphasizing at a time when xenophobia is on the rise.
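A small numeric sketch can show the law at work (the countries, goods, and productivity figures below are invented for illustration): Home out-produces Away at both goods, yet when each side leans toward the good it is relatively better at, the world ends up with more cloth and no less wine.

```python
# Output per worker-day. Home has an ABSOLUTE advantage in both goods,
# but a COMPARATIVE advantage only in cloth (4:1 vs 2:1 over Away).
home = {"cloth": 4, "wine": 2}
away = {"cloth": 1, "wine": 1}
days = 100  # worker-days available to each country

# Autarky (no trade): each country splits its labor evenly.
autarky_cloth = days / 2 * home["cloth"] + days / 2 * away["cloth"]  # 250
autarky_wine = days / 2 * home["wine"] + days / 2 * away["wine"]     # 150

# With trade: Away specializes entirely in wine; Home makes just enough
# wine to hold the world total at its autarky level, and spends the rest
# of its labor on cloth, where its relative edge lies.
away_wine = days * away["wine"]                               # 100
home_wine_days = (autarky_wine - away_wine) / home["wine"]    # 25 days
trade_wine = away_wine + home_wine_days * home["wine"]        # 150
trade_cloth = (days - home_wine_days) * home["cloth"]         # 300

print(autarky_cloth, trade_cloth)  # 250.0 300.0: more cloth
print(autarky_wine, trade_wine)    # 150.0 150.0: same wine
```

Fifty extra units of cloth appear from the same labor, to be divided between the two countries by trade; that surplus is the mutual gain the law of comparative advantage predicts even when one party is more productive at everything.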

In the face of well-meaning but ultimately misguided opposition to globalization, we must celebrate the remarkable benefits which international trade has brought us, and fight for a more integrated world.

Creativity is a fragile flower, but perhaps it can be fertilized with systematic doses of serendipity. Sarnoff Mednick showed decades ago that some people are better than others at detecting the associations that connect seemingly random concepts: Asked to name a fourth idea that links "wheel," "electric," and "high," people who score high on other measures of creativity will promptly answer "chair."

More recently, research in Mark Jung-Beeman's lab at Northwestern has found that sudden bursts of insight — the Aha! or Eureka! moment — come when brain activity abruptly shifts its focus. The almost ecstatic sense that makes us cry "I see!" appears to come when the brain is able to shunt aside immediate or familiar visual inputs.

That may explain why so many of us close our eyes (often unwittingly) just before we exclaim "I see!" It also suggests, at least to me, that creativity can be enhanced deliberately through environmental variation. Two techniques seem promising: varying what you learn and varying where you learn it. I try each week to read a scientific paper in a field that is new to me — and to read it in a different place.

New associations often leap out of the air at me this way; more intriguingly, others seem to form covertly and then to lie in wait for the opportune moment when they can click into place. I do not try to force these associations out into the open; they are like shrinking mimosa plants that crumple if you touch them but bloom if you leave them alone.

Robert Merton argued that many of the greatest discoveries of science have sprung from serendipity. As a layman and an amateur, all I hope to accomplish by throwing myself in serendipity's path is to pick up new ideas, and combine old ones, in ways that haven't quite occurred to other people yet. So I let my curiosity lead me wherever it seems to want to go, like that heart-shaped piece of wood that floats across a Ouija board.

I do this remote-reading exercise on my own time, since it would be hard to justify to newspaper editors during the work day. But my happiest moments this autumn came as I reported an investigative article on how elderly investors are increasingly being scammed by elderly con artists. I later realized, to my secret delight, that the article had been enriched by a series of papers I had been reading on altruistic behavior among fish (Labroides dimidiatus).

If I do my job right, my regular readers will never realize that I spend a fair amount of my leisure time reading Current Biology, the Journal of Neuroscience, and Organizational Behavior and Human Decision Processes. If that reading helps me find new ways to understand the financial world, as I suspect it does, my readers will indirectly be smarter for it. If not, the only harm done is my own spare time wasted.

In my view, we should each invest a few hours a week in reading research that ostensibly has nothing to do with our day jobs, in a setting that has nothing in common with our regular workspaces. This kind of structured serendipity just might help us become more creative, and I doubt that it can hurt.

GINO SEGRE
Professor of Physics and Astronomy at the University of Pennsylvania; Author, Ordinary Geniuses

Gedankenexperiment

The notion of gedankenexperiment, or thought experiment, has been integral to theoretical physics' toolkit ever since the discipline came into existence. It involves setting up an imagined piece of apparatus and then running a simple experiment with it in your mind for the purpose of proving or disproving a hypothesis. In many cases a gedankenexperiment is the only approach. An actual experiment to examine retrieval of information falling into a black hole cannot be carried out.

The notion was particularly important during the development of quantum mechanics, when legendary gedankenexperiments were conducted by the likes of Bohr and Einstein to test such novel ideas as the Uncertainty Principle and wave-particle duality. Examples, like that of Schrödinger's Cat, have even come into the popular lexicon. Is the cat simultaneously dead and alive? Others, particularly the classic double slit through which a particle/wave passes, were part of the first attempts to understand quantum mechanics and have remained tools for understanding its meaning.

However, the subject need not be an esoteric one for a gedankenexperiment to be fruitful. My own favorite is Galileo's proof that, contrary to Aristotle's view, more and less massive objects fall in a vacuum with the same acceleration. One might think that a real experiment needs to be conducted to test the hypothesis. But Galileo simply said: consider a large and a small stone tied together by a very light string. If Aristotle was right and they fell at different rates, the large stone should drag the smaller one along while the smaller one retarded the larger, so the pair would fall at some intermediate rate. However, if you let the string length approach zero, you have a single object with a mass equal to the sum of their masses, and hence it should fall faster than either stone alone. This is nonsensical. The conclusion is that all objects fall in a vacuum at the same rate.

Consciously or unconsciously we carry out gedankenexperiments of one sort or another in our everyday lives and are even trained to perform them in a variety of disciplines, but I do think it would be useful to have a greater awareness of how they are conducted and how they could be positively applied. We could ask, when confronted with a puzzling situation: how can I set up a gedankenexperiment to resolve the issue? Perhaps our financial, political and military experts would benefit from following such a tactic and arrive at happier outcomes.

SEAN CARROLL
Theoretical Physicist, Caltech; Author, From Eternity to Here: The Quest for the Ultimate Theory of Time

Dysteleological Physicalism

The world consists of things, which obey rules. A simple idea, but not an obvious one, and it carries profound consequences.

Physicalism holds that all that really exists are physical things. Our notion of what constitutes a "physical thing" can change as our understanding of physics improves; these days our best conception of what really exists is a set of interacting quantum fields described by a wave function. What doesn't exist, in this doctrine, is anything strictly outside the physical realm — no spirits, deities, or souls independent of bodies. It is often convenient to describe the world in other than purely physical terms, but that is a matter of practical usefulness rather than fundamental necessity.

Most modern scientists and philosophers are physicalists, but the idea is far from obvious, and it is not as widely accepted in the larger community as it could be. When someone dies, it seems apparent that something is gone — a spirit or soul that previously animated the body. The idea that a person is a complex chemical reaction, and that their consciousness emerges directly from the chemical interplay of the atoms of which they are made, can be a difficult one to accept. But it is the inescapable conclusion from everything science has learned about the world.

If the world is made of things, why do they act the way they do? A plausible answer to this question, elaborated by Aristotle and part of many people's intuitive picture of how things work, is that these things want to be a certain way: they have a goal, or at least a natural state of being. Water wants to run downhill; fire wants to rise to the sky. Humans exist to be rational, or caring, or to glorify God; marriages are meant to be between a man and a woman.

This teleological, goal-driven, view of the world is reasonable on its face, but unsupported by science. When Avicenna and Galileo and others suggested that motion does not require a continuous impulse — that objects left to themselves simply keep moving without any outside help — they began the arduous process of undermining the teleological worldview. At a basic level, all any object ever does is obey rules — the laws of physics. These rules take a definite form: given the state of the object and its environment now, we can predict its state in the future. (Quantum mechanics introduces a stochastic component to the prediction, but the underlying idea remains the same.) The "reason" something happens is because it was the inevitable outcome of the state of the universe at an earlier time.

Ernst Haeckel coined the term "dysteleology" to describe the idea that the universe has no ultimate goal or purpose. His primary concern was with biological evolution, but the conception goes deeper. Google returns no hits for the phrase "dysteleological physicalism" (until now, I suppose). But it is arguably the most fundamental insight that science has given us about the ultimate nature of reality. The world consists of things, which obey rules. Everything else derives from that.

None of which is to say that life is devoid of purpose and meaning. Only that these are things we create, not things we discover out there in the fundamental architecture of the world. The world keeps happening, in accordance with its rules; it's up to us to make sense of it.

The media cast about for the proximate causes of life's windfalls and disasters. The public demands blocks against the bad and pipelines to the good. Legislators propose new regulations, fruitlessly dousing last year's fires, forever betting on yesterday's winning horses.

A little-known truth: Every aspect of the world is fundamentally unpredictable. Computer scientists have long since proved this.

How so? To predict an event is to know a shortcut for foreseeing the outcome in advance. A simple counting argument shows there aren't enough shortcuts to go around: there are far more possible processes than there are compact descriptions of them. Therefore most processes aren't predictable. A deeper argument plays on the fact that, if you could predict your actions, you could deliberately violate your predictions, which means the predictions were wrong after all.
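The counting argument here is the classic incompressibility count. A toy version in Python, under the assumption that "shortcuts" are themselves binary strings shorter than the n-bit process they describe: even before we pair descriptions with processes, the descriptions fall one short.

```python
# There are 2**n binary strings of length n, but only 2**n - 1 binary
# descriptions shorter than n bits (one of length 0, two of length 1, ...,
# 2**(n-1) of length n-1). So not every string can have a shortcut.
n = 20
strings = 2 ** n
shorter_descriptions = sum(2 ** k for k in range(n))
print(strings, shorter_descriptions)  # the descriptions always fall one short
```

The same pigeonhole count, carried further, shows that most strings of every length have no description much shorter than themselves.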

We often suppose that unpredictability is caused by random inputs from higher spirits or from low-down quantum foam. But chaos theory and computer science tell us that non-random systems produce surprises on their own. The unexpected tornado, the cartoon safe that lands on Uncle George, the winning pull on a slot machine: odd things pop out of a computation. The world can simultaneously be deterministic and unpredictable.

In the physical world, the only way to learn tomorrow's weather in detail is to wait twenty-four hours and see, even if nothing is random at all. The universe is computing tomorrow's weather as rapidly and as efficiently as possible; any smaller model is inaccurate, and the smallest error is amplified into large effects.

At a personal level, even if the world is as deterministic as a computer program, you still can't predict what you're going to do. This is because your prediction method would involve a mental simulation of you that produces its results slower than you. You can't think faster than you think. You can't stand on your own shoulders.

It's a waste to chase the pipedream of a magical tiny theory that allows us to make quick and detailed calculations about the future. We can't predict and we can't control. To accept this can be a source of liberation and inner peace. We're part of the unfolding world, surfing the chaotic waves.

Our very brains revolt at the idea of randomness. We have evolved as a species to become exquisite pattern-finders — long before the advent of science, we figured out that a salmon-colored sky heralds a dangerous storm, or that a baby's flushed face likely means a difficult night ahead. Our minds automatically try to place data in a framework that allows us to make sense of our observations and use them to understand events and predict them.

Randomness is so difficult to grasp because it works against our pattern-finding instincts. It tells us that sometimes there is no pattern to be found. As a result, randomness is a fundamental limit to our intuition; it says that there are processes we can't predict fully. It's a concept that we have a hard time accepting even though it is an essential part of the way the cosmos works. Without an understanding of randomness, we are stuck in a perfectly predictable universe that simply doesn't exist outside of our own heads.

I would argue that only once we understand three dicta — three laws of randomness — can we break out of our primitive insistence on predictability and appreciate the universe for what it is rather than what we want it to be.

The First Law of Randomness: There is such a thing as randomness.

We use all kinds of mechanisms to avoid confronting randomness. We talk about karma, a cosmic equalization that ties seemingly unconnected events together. We believe in runs of luck, both good and ill, and that bad things happen in threes. We argue that we are influenced by the stars, by the phases of the moon, and by the motion of the planets in the heavens. When we get cancer, we automatically assume that something — or someone — is to blame.

But many events are not fully predictable or explicable. Disasters happen randomly, to good people as well as to bad ones, to star-crossed individuals as well as those who have a favorable planetary alignment. Sometimes you can make a good guess about the future, but randomness can confound even the most solid predictions — don't be surprised when you're outlived by the overweight, cigar-smoking, speed-fiend motorcyclist down the block.

What's more, random events can mimic non-random ones. Even the most sophisticated scientists can have difficulty telling the difference between a real effect and a random fluke. Randomness can make placebos seem like miracle cures, harmless compounds appear to be deadly poisons, and can even create subatomic particles out of nothing.

The Second Law of Randomness: Some events are impossible to predict.

If you walk into a Las Vegas casino and observe the crowd gathered around the craps table, you'll probably see someone who thinks he's on a lucky streak. Because he's won several rolls in a row, his brain tells him that he's going to keep winning, so he keeps gambling. You'll probably also see someone who's been losing. The loser's brain, like the winner's, tells him to keep gambling. Since he's been losing for so long, he thinks he's due for a stroke of luck; he won't walk away from the table for fear of missing out.

Contrary to what our brains are telling us, there's no mystical force that imbues a winner with a streak of luck, nor is there a cosmic sense of justice that ensures that a loser's luck will turn around. The universe doesn't care one whit whether you've been winning or losing; each roll of the dice is just like every other.

No matter how much effort you put into observing how the dice have been behaving or how meticulously you have been watching for people who seem to have luck on their side, you get absolutely no information about what the next roll of a fair die will be. The outcome of a die roll is entirely independent of its history. And, as a result, any scheme to gain some sort of advantage by observing the table will be doomed to fail. Events like these — independent, purely random events — defy any attempts to find a pattern because there is none to be found.

Randomness provides an absolute block against human ingenuity; it means that our logic, our science, our capacity for reason can penetrate only so far in predicting the behavior of the cosmos. Whatever methods you try, whatever theory you create, whatever logic you use to predict the next roll of a fair die, there's always a 5/6 chance you are wrong. Always.
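The claim that a die's history carries no information can be checked by brute force. The sketch below (a hypothetical simulation in plain Python; the trial count and streak length are arbitrary choices) rolls a fair die a few hundred thousand times and tabulates the roll that follows every streak of three sixes: the next roll stays uniform.

```python
import random

def next_roll_after_streak(trials=200_000, streak=3, target=6, seed=1):
    """Roll a fair die `trials` times; tabulate the roll that follows
    every run of `streak` consecutive `target`s."""
    rng = random.Random(seed)
    run = 0
    counts = {face: 0 for face in range(1, 7)}
    for _ in range(trials):
        roll = rng.randint(1, 6)
        if run >= streak:          # this roll comes right after a streak
            counts[roll] += 1
        run = run + 1 if roll == target else 0
    return counts

counts = next_roll_after_streak()
total = sum(counts.values())
# If streaks carried information, some face would be over-represented;
# instead each face shows up in roughly 1/6 of the post-streak rolls.
print({face: round(c / total, 3) for face, c in counts.items()})
```

Neither the "hot hand" nor the "due for a win" reading survives the tally: conditioning on a lucky run leaves the next roll's distribution flat.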

The Third Law of Randomness: Random events behave predictably in aggregate even if they're not predictable individually.

Randomness is daunting; it sets limits where even the most sophisticated theories cannot go, shielding elements of nature from even our most determined inquiries. Nevertheless, to say that something is random is not equivalent to saying that we can't understand it. Far from it.

Randomness follows its own set of rules — rules that make the behavior of a random process understandable and predictable.

These rules state that even though a single random event might be completely unpredictable, a collection of independent random events is extremely predictable — and the larger the number of events, the more predictable they become. The law of large numbers is a mathematical theorem that dictates that repeated, independent random events converge with pinpoint accuracy upon a predictable average behavior. Another powerful mathematical tool, the central limit theorem, tells you exactly how far off that average a given collection of events is likely to be. With these tools, no matter how chaotic, how strange a random behavior might be in the short run, we can turn that behavior into stable, accurate predictions in the long run.
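Both theorems can be watched in action with a short standard-library sketch; the sample sizes are arbitrary illustrative choices. The law of large numbers pulls the sample mean of die rolls toward 3.5, and the central limit theorem predicts how far a sample mean of n rolls typically strays from it.

```python
import math
import random
import statistics

rng = random.Random(42)

def mean_of_rolls(n):
    """Average of n fair-die rolls."""
    return sum(rng.randint(1, 6) for _ in range(n)) / n

# Law of large numbers: as n grows, the sample mean homes in on 3.5.
for n in (10, 1_000, 100_000):
    print(n, mean_of_rolls(n))

# Central limit theorem: across many samples of size n, the sample means
# cluster around 3.5 with spread sigma / sqrt(n), where sigma (about 1.708)
# is the standard deviation of a single roll.
n, samples = 100, 2_000
means = [mean_of_rolls(n) for _ in range(samples)]
predicted = statistics.pstdev(range(1, 7)) / math.sqrt(n)
print(round(statistics.pstdev(means), 4), "predicted:", round(predicted, 4))
```

The observed spread of the sample means lands on the theorem's prediction: individually wild rolls, collectively tame averages.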

The rules of randomness are so powerful that they have given physics some of its most sacrosanct and immutable laws. Though the atoms in a box full of gas are moving at random, their collective behavior is described by a simple set of deterministic equations. Even the laws of thermodynamics derive their power from the predictability of large numbers of random events; they are indisputable only because the rules of randomness are so absolute.

Paradoxically, the unpredictable behavior of random events has given us the predictions that we are most confident in.

CLIFFORD PICKOVER
Author, The Math Book: From Pythagoras to the 57th Dimension; Jews in Hyperspace

Kaleidoscopic Discovery Engine

The famous Canadian physician William Osler once wrote, "In science the credit goes to the man who convinced the world, not to the man to whom the idea first occurs." When we examine discoveries in science and mathematics, in hindsight we often find that if one scientist did not make a particular discovery, some other individual would have done so within a few months or years of the discovery. Most scientists, as Newton said, stood on the shoulders of giants to see the world just a bit further along the horizon. Often, more than one individual creates essentially the same device or discovers the same scientific law at about the same time, but for various reasons, including sheer luck, history sometimes remembers only the more famous discoverer.

In 1858 the German mathematician August Möbius simultaneously and independently discovered the Möbius strip along with a contemporary scholar, the German mathematician Johann Benedict Listing. Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus at roughly the same time. British naturalists Charles Darwin and Alfred Wallace both developed the theory of evolution independently and simultaneously. Similarly, Hungarian mathematician János Bolyai and Russian mathematician Nikolai Lobachevsky seemed to have developed hyperbolic geometry independently and at the same time.

The history of materials science is replete with simultaneous discoveries. For example, in 1886, the electrolytic process for refining aluminum, using the mineral cryolite, was discovered simultaneously and independently by American Charles Martin Hall and Frenchman Paul Héroult. Their inexpensive method for isolating pure aluminum from compounds had an enormous effect on industry. The time was "ripe" for such discoveries, given humanity's accumulated knowledge at the time the discoveries were made. On the other hand, mystics have suggested that a deeper meaning exists to such coincidences. Austrian biologist Paul Kammerer wrote, "We thus arrive at the image of a world-mosaic or cosmic kaleidoscope, which, in spite of constant shufflings and rearrangements, also takes care of bringing like and like together." He compared events in our world to the tops of ocean waves that seem isolated and unrelated. According to his controversial theory, we notice the tops of the waves, but beneath the surface there may be some kind of synchronistic mechanism that mysteriously connects events in our world and causes them to cluster.

We are reluctant to believe that great discoveries are part of a discovery kaleidoscope and mirrored in numerous individuals at once. However, as further examples, there were several independent discoveries of sunspots in 1611, even though Galileo gets most of the credit today. Halley's Comet, named after English astronomer Edmond Halley, was not first discovered by Halley; it had actually been seen by countless observers even before the time of Jesus. However, Halley's useful calculations enabled earlier references to the comet's appearance to be found in the historical record. Alexander Graham Bell and Elisha Gray filed their own patents on telephone technologies on the same day. As sociologist of science Robert Merton remarked, "The genius is not a unique source of insight; he is merely an efficient source of insight."

Robert Merton suggested that "all scientific discoveries are in principle 'multiples'." In other words, when a scientific discovery is made, it is made by more than one person. Sometimes a discovery is named after the person who develops the discovery rather than the original discoverer.

The world is full of difficulties in assigning credit for discoveries. Some of us have personally seen this in the realm of patent law, in business ideas, and in our daily lives. Fully appreciating the concept of the kaleidoscopic discovery engine adds to our cognitive toolkits because the kaleidoscope succinctly captures the nature of innovation and the future of ideas. If schools taught more about kaleidoscopic discovery, even in the context of everyday experience, then innovators might enjoy the fruits of their labor and still become "great" without a debilitating concern to be first or to crush rivals. The great anatomist William Hunter frequently quarreled with his brother about who was first in making a discovery. But even Hunter admitted, "If a man has not such a degree of enthusiasm and love of the art, as will make him impatient of unreasonable opposition, and of encroachment upon his discoveries and his reputation, he will hardly become considerable in anatomy, or in any other branch of natural knowledge."

When Mark Twain was asked to explain why so many inventions were invented independently, he said "When it's steamboat time, you steam."

I'm alone in my home, working in my study, when I hear the click of the front door, the sound of footsteps making their way toward me. Do I panic? That depends on what I — my attention instantaneously appropriated to the task and cogitating at high speed—infer as the best explanation for those sounds. My husband returning home, the house cleaners, a miscreant breaking and entering, the noises of our old building settling, a supernatural manifestation? Additional details could make any one of these explanations, excepting the last, the best explanation for the circumstances. Why not the last? As Charles Sanders Peirce, who first drew attention to this type of reasoning, pointed out: "Facts cannot be explained by a hypothesis more extraordinary than these facts themselves; and of various hypotheses the least extraordinary must be adopted."

"Inference to the best explanation" is ubiquitously pursued, which doesn't mean that it is ubiquitously pursued well. The phrase, coined by the Princeton philosopher Gilbert Harman as a substitute for Peirce's term "abduction," should be in everybody's toolkit, if only because it forces one to think about what makes for a good explanation. There is that judgmental phrase, the best, sitting out in the open, shamelessly invoking standards. Not all explanations are created equal; some are objectively better than others. And the phrase also highlights another important fact: the best means the one that wins out over the alternatives, of which there are always many. Evidence calling for an explanation summons a great plurality (in fact an infinity) of possible explanations, the great mass of which can be eliminated on the grounds of violating Peirce's maxim. We decide among the remainder using such criteria as: which is the simpler, which does less violence to established beliefs, which is less ad hoc, which explains the most, which is the loveliest. There are times when these criteria clash with one another. Inference to the best explanation is certainly not as rule-bound as logical deduction, nor even as enumerative induction, which takes us from observed cases of all a's being b's to the probability that unobserved cases of a's are also b's. But inference to the best explanation also gets us a great deal more than either deduction or enumerative induction does.

It's inference to the best explanation that gives science the power to expand our ontology, giving us reasons to believe in things that we can't directly observe, from sub-atomic particles — or maybe strings — to the dark matter and dark energy of cosmology. It's inference to the best explanation that allows us to know something of what it's like to be other people on the basis of their behavior. I see the hand drawing too near the fire and then quickly pull away, tears starting in the eyes while an impolite word is uttered, and I know something of what that person is feeling. It's on the basis of inference to the best explanation that I can learn things from what authorities say and write, my inferring that the best explanation for their doing so is that they believe what they say or write. (Sometimes that's not the best explanation.) In fact, I'd argue that my right to believe in a world outside of my own solipsistic universe, confined to the awful narrowness of my own immediate experience, is based on inference to the best explanation. What best explains the vivacity and predictability of some of my representations of material bodies, and not others, if not the hypothesis of actual material bodies? Inference to the best explanation defeats mental-sapping skepticism.

Many of our most rancorous scientific debates — say, over string theory or foundations of quantum mechanics — have been over which competing criteria for judging explanations the best ought to prevail. So, too, have debates that many of us have been having over scientific versus religious explanations. These debates could be sharpened by bringing to bear on them the rationality-steeped notion of inference to the best explanation, its invocation of the sorts of standards that make some explanations objectively better than others, beginning with Peirce's injunction that extraordinary hypotheses be ranked far away from the best.

NASSIM N. TALEB
Distinguished Professor of Risk Engineering, New York University; Author, The Black Swan; The Bed of Procrustes

Antifragility, or The Property of Disorder-Loving Systems

I

Antifragility

Just as a package sent by mail can bear a stamp "fragile", "breakable" or "handle with care", consider the exact opposite: a package that has stamped on it "please mishandle" or "please handle carelessly". The contents of such a package are not just unbreakable, impervious to shocks; they have something more than that, as they tend to benefit from shocks. This is beyond robustness.

So let us coin the appellation "antifragile" for anything that, on average (i.e., in expectation), benefits from variability. Alas, I found no simple, noncompound word in any of the main language families that expresses the point of such fragility in reverse. To see how alien the concept is to our minds, ask around what the antonym of fragile is. The likely answer will be: robust, unbreakable, solid, well-built, resilient, strong, something-proof (say waterproof, windproof, rustproof), etc. Wrong — and it is not just individuals but branches of knowledge that are confused by it; this is a mistake made in every dictionary. Ask the same person the opposite of destruction, and they will answer construction or creation. Ask for the opposite of concavity, and they will answer convexity.

A verbal definition of convexity: something convex benefits more than it loses from variations; concavity is its opposite. This is key: when I tried to give a mathematical expression of fragility (using sums of path-dependent payoffs), I found that "fragile" could be described in terms of concavity to a source of variation (random or nonrandom), over a certain range of variations. So the opposite of that is convexity — tout simplement. [More technically, all concave exposures are fragile, but the reverse is not necessarily true.]

A grandmother's health is fragile, hence concave, with respect to variations in temperature: you find it preferable to have her spend two hours at 70°F rather than one hour at 0°F and another at 140°F, even though the average is exactly 70°F. (A concave function of a combination is higher than the combination of the function's values: f(½x1 + ½x2) > ½f(x1) + ½f(x2).)
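The temperature example is concavity in miniature. A minimal sketch, using a hypothetical quadratic "health" curve peaked at 70°F (any concave curve would make the same point):

```python
def health(temp_f):
    # Hypothetical concave response: best at 70°F, falling off
    # quadratically on either side. The curve is an illustration,
    # not a physiological model.
    return -(temp_f - 70) ** 2

steady = health(70)                           # two hours at 70°F
swings = 0.5 * health(0) + 0.5 * health(140)  # one hour at 0°F, one at 140°F
print(steady, swings)   # same average temperature, very different outcomes
assert steady > swings  # concavity: f(½x1 + ½x2) > ½f(x1) + ½f(x2)
```

The average temperature is identical in both regimes; only the variability differs, and for a concave exposure the variability is pure harm.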

Further, one could be fragile to certain events but not others: A portfolio can be slightly concave to a small fall in the market but not to extremely large deviations (Black Swans).

Evolution is convex (up to a point) with respect to variations since the DNA benefits from disparity among the offspring. Organisms benefit, up to a point, from a spate of stressors. Trial and error is convex since errors cost little, gains can be large.

Now consider the Triad in the Table. Its elements are those for which I was able to find general concavities and convexities and catalogue accordingly.

The larger the corporation, the more concave it is to some squeezes (although on the surface companies claim to benefit from economies of scale, the record shows mortality from disproportionate fragility to Black Swan events). Same with government projects: big government induces fragilities. So does overspecialization (think of the Irish potato famine). In general most top-down systems become fragile (as can be shown with a simple test of concavity to variations).

Worst of all, an optimized system quickly becomes concave to variations, by construction: think of the effect of the absence of redundancies and spare parts. So just about everything behind the mathematical economics revolution can be shown to fragilize.

Further, we can look at the unknown, just like model error, in terms of antifragility (that is, payoff): is what you are missing from a model, or what you don't know in real life, going to help you more than hurt you? In other words, are you antifragile to such uncertainty (physical or epistemic)? Is the utility of your payoff convex or concave? Pascal was first to express decisions in terms of these convex payoffs. And economic theories produce models that fragilize (with rare exceptions), which explains why using their models is vastly worse than doing nothing. For instance, financial models based on "risk measurements" of rare events are a joke. The smaller the probability, the more convex it becomes to computational error (and the more concave the payoff): a 25% error in the estimation of the standard deviation for a Gaussian can increase the expected shortfall from remote events by a billion (sic) times! (Missing this simple point has destroyed the banking system.)
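The order of magnitude is easy to sanity-check. The sketch below looks at tail probability rather than expected shortfall (a simpler, related quantity) and asks how badly a 25% underestimate of a Gaussian's standard deviation distorts the odds of a remote event; the "11-sigma" threshold is my own illustrative choice.

```python
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

k = 11.0                      # an "11-sigma" event under the assumed sigma
assumed = gaussian_tail(k)
# A 25% underestimate of sigma means the true sigma is 1.25 times larger,
# so the same threshold really sits at only 11 / 1.25 = 8.8 true sigmas.
actual = gaussian_tail(k / 1.25)
print(f"tail probability understated by a factor of {actual / assumed:.1e}")
```

A modest input error is amplified multiplicatively in the tail: at this threshold the understatement exceeds a factor of a billion, which is the convexity of remote probabilities to estimation error in action.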

II

Jensen's Inequality as the Hidden Engine of History

Now the central point. By a simple mathematical property, one can show why, under a model of uncertainty, items in the right column are likely to benefit in the long run, and thrive, more than appears on the surface, while items in the left column are doomed to perish. Over the past decade managers of companies earned, in the aggregate, trillions while retirees lost trillions (the fact that executives get the upside but not the downside gives them a convex payoff, a "free option"). And aggressive tinkering fares vastly better than directed research. How?

Jensen's inequality says the following: for a convex payoff, the expected value of the function is higher than the function of the expected value. For a concave one, the opposite holds (grandmother's health is worse if the temperature averages 70 through wide swings than if it holds steady at 70).

Squaring is a convex function. Take a six-sided die and consider a payoff equal to the number it lands on. You expect 3½, and the square of that expected payoff is 12¼. Now assume instead that the payoff is the square of the number the die lands on: its expected value is (1 + 4 + 9 + 16 + 25 + 36)/6 = 15.1666…. So the average of the square payoff is higher than the square of the average payoff. The difference is the convexity bias.
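The die arithmetic can be verified exactly with rational arithmetic; the gap between the two quantities is the convexity bias (and, for squaring, it equals the variance of the roll).

```python
from fractions import Fraction

faces = range(1, 7)
expected = Fraction(sum(faces), 6)                           # E[X] = 7/2
square_of_expected = expected ** 2                           # (E[X])^2 = 49/4
expected_of_square = Fraction(sum(f * f for f in faces), 6)  # E[X^2] = 91/6
convexity_bias = expected_of_square - square_of_expected

print(float(square_of_expected))  # 12.25
print(float(expected_of_square))  # 15.1666...
print(float(convexity_bias))      # 2.9166..., which is also Var[X]
```

Jensen's inequality guarantees the bias is positive for any convex payoff; squaring just makes it easy to compute by hand.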

The implications can be striking, as this second-order effect explains so much of what is hidden in history. In expectation, anything that loves Black Swans will be present in the future. Anything that fears them will eventually be gone, to the extent of its concavity.

Einstein ranks extremely high not only among the all-time practitioners of science but also among the producers of aphorisms that place science in its real-world context. One of my favourites is "If we knew what we were doing, it wouldn't be called research." This disarming comment, like so many of the best quotes by experts in any field, embodies a subtle mix of sympathy and disdain for the difficulties that the great unwashed find in appreciating what those experts do.

One of the foremost challenges that face scientists today is to communicate the management of uncertainty. The public know that experts are, well, expert — that they know more than anyone else about the issue at hand. What is evidently far harder for most people to grasp is that "more than anyone else" does not mean "everything" — and especially that, given the possession of only partial knowledge, experts must also be expert at figuring out what is the best course of action. Moreover, those actions must be well judged whether in the lab, the newsroom or the policy-maker's office.

It must not, of course, be neglected that many experts are decidedly inexpert at communicating their work in lay terms. This remains a major issue largely because virtually all experts are called upon to engage in general-audience communication only very rarely, hence do not see it as a priority to gain such skills. Training and advice are available, often from university press offices, but even when experts take advantage of such opportunities it is generally too little and too late.

However, in my view that is a secondary issue. As a scientist with the luxury of communicating with the general public very frequently, I can report with confidence that experience only helps up to a point. A fundamental obstacle remains: that non-scientists harbour deep-seated instincts concerning the management of uncertainty in their everyday lives, which exist because they generally work, but which profoundly differ from the optimal strategy in science and technology. And of course it is technology that matters here, because technology is where the rubber hits the road — where science and the real world meet and must communicate effectively.

Examples of failure in this regard abound — so much so that they are hardly worthy of enumeration. Whether it be swine flu, bird flu, GM crops, stem cells: the public debate departs so starkly from the scientist's comfort zone that it is hard not to sympathise with the errors that scientists make, such as letting nuclear transfer be called "cloning", which end up holding critical research fields back for years.

One particular aspect of this problem stands out in its potential for public self-harm, however: risk-aversion. When uncertainty revolves around such areas as ethics (as with nuclear transfer) or economic policy (as with flu vaccination), the issues are potentially avoidable by appropriate forward planning. This is not the case when it comes to the public attitude to risk. The immense fall in uptake of vaccinations for major childhood diseases following a single, contentious study linking them to autism is a prime example. Another is the suspension of essentially all clinical trials of gene therapy for at least a year in response to the death of one person in a trial: a decision taken by regulatory bodies, yes, but one that was in line with public opinion.

These responses to the risk-benefit ratio of cutting-edge technologies are examples of fear of the unknown — of an irrationally conservative prioritisation of the risks of change over the benefits, with unequivocally deleterious consequences in terms of quality and quantity of life in the future. Fear of the unknown is not remotely irrational in principle, when "fear of" is understood as a synonym for "caution about" — but it can be, and generally is, overdone. If the public could be brought to a greater understanding of how to evaluate the risks inherent in exploring future technology, and the merits of accepting some short-term risk in the interests of overwhelmingly greater expected long-term benefit, progress in all areas of technology — especially biomedical technology — would be greatly accelerated.

We have discovered useful metrics for material objects — length, temperature, pressure, volume, kinetic energy. Pragmamorphism is a good word for the attempt to assign such one-dimensional thing-metrics to the mental qualities of humans.

It may be pragmamorphic to make use of a utility function in economics. It's clear that people have preferences. But is it clear that there is a function that describes their preferences?

NICHOLAS CARR
Author, The Shallows: What the Internet Is Doing to Our Brains

Cognitive Load

You're sprawled on the couch in your living room, watching a new episode of Justified on the tube, when you think of something you need to do in the kitchen. You get up, take ten quick steps across the carpet, and then, just as you reach the kitchen door — poof! — you realize you've already forgotten what it was you got up to do. You stand befuddled for a moment, then shrug your shoulders and head back to the couch.

Such memory lapses happen so often that we don't pay them much heed. We write them off as "absentmindedness" or, if we're getting older, "senior moments." But the incidents reveal a fundamental limitation of our minds: the tiny capacity of our working memory. Working memory is what brain scientists call the short-term store of information where we hold the contents of our consciousness at any given moment — all the impressions and thoughts that flow into our mind as we go through a day. In the 1950s, Princeton psychologist George Miller famously argued that our brains can hold only about seven pieces of information simultaneously. Even that figure may be too high. Some brain researchers now believe that working memory has a maximum capacity of just three or four elements.

The amount of information entering our consciousness at any instant is referred to as our cognitive load. When our cognitive load exceeds the capacity of our working memory, our intellectual abilities take a hit. Information zips into and out of our mind so quickly that we never gain a good mental grip on it. (Which is why you can't remember what you went to the kitchen to do.) The information vanishes before we've had an opportunity to transfer it into our long-term memory and weave it into knowledge. We remember less, and our ability to think critically and conceptually weakens. An overloaded working memory also tends to increase our distractedness. After all, as the neuroscientist Torkel Klingberg has pointed out, "we have to remember what it is we are to concentrate on." Lose your hold on that, and you'll find "distractions more distracting."

Developmental psychologists and educational researchers have long used the concept of cognitive load in designing and evaluating pedagogical techniques. When you give a student too much information too quickly, they know, comprehension degrades and learning suffers. But now that all of us — thanks to the incredible speed and volume of modern digital communication networks and gadgets — are inundated with more bits and pieces of information than ever before, everyone would benefit from having an understanding of cognitive load and how it influences memory and thinking. The more aware we are of how small and fragile our working memory is, the more we'll be able to monitor and manage our cognitive load. We'll become more adept at controlling the flow of the information coming at us.

There are times when you want to be awash in messages and other info-bits. The resulting sense of connectedness and stimulation can be exciting and pleasurable. But it's important to remember that, when it comes to the way your brain works, information overload is not just a metaphor; it's a physical state. When you're engaged in a particularly important or complicated intellectual task, or when you simply want to savor an experience or a conversation, it's best to turn the information faucet down to a trickle.

HANS ULRICH OBRIST
Curator, Serpentine Gallery, London; Editor: A Brief History of Curating; Formulas for Now

To Curate

Lately, the word "curate" seems to be used in a greater variety of contexts than ever before, in reference to everything from exhibitions of prints by Old Masters to the contents of a concept store. The risk, of course, is that the definition may expand beyond functional usability. But I believe "curate" finds ever-wider application because of a feature of modern life that is impossible to ignore: the incredible proliferation of ideas, information, images, disciplinary knowledge, and material products that we are all witnessing today. Such proliferation makes the activities of filtering, enabling, synthesizing, framing, and remembering more and more important as basic navigational tools for 21st-century life. These are the tasks of the curator, who is no longer understood as simply the person who fills a space with objects but as the person who brings different cultural spheres into contact, invents new display features, and makes junctions that allow unexpected encounters and results.

Michel Foucault once wrote that he hoped his writings would be used by others as a theoretical toolbox, a source of concepts and models for understanding the world. For me, the author, poet, and theoretician Édouard Glissant has become this kind of toolbox. Very early he noted that in our phase of globalization — which is not the first one — there is a danger of homogenization, but at the same time there is a counter-movement to globalization, the retreat into one's own culture. Against both dangers he proposes the idea of mondialité — a global dialogue that augments difference. This inspired me to handle exhibitions in a new way. There is a lot of pressure on curators not only to do shows in one place but to send them around the world, simply packing them into boxes in one city and unpacking them in the next; this is a homogenizing sort of globalization. Using Glissant's idea as a tool means developing exhibitions that always build a relation to their place, that change permanently with their different local conditions, that create a changing, dynamic, complex system with feedback loops.

To curate, in this sense, is to refuse static arrangements and permanent alignments and instead to enable conversations and relations. Generating these kinds of links is an essential part of what it means to curate, as is disseminating new knowledge, new thinking, and new artworks in a way that can seed future cross-disciplinary inspirations. But there is another case for curating as a vanguard activity for the 21st century.

As the artist Tino Sehgal has pointed out, modern human societies find themselves today in an unprecedented situation: the problem of lack, or scarcity, which has been the primary factor motivating scientific and technological innovation, is now being joined and even superseded by the problem of the global effects of overproduction and resource use. Thus moving beyond the object as the locus of meaning has a further relevance. Selection, presentation, and conversation are ways for human beings to create and exchange real value, without dependence on older, unsustainable processes. Curating can take the lead in pointing us towards this crucial importance of choosing.

SAMUEL BARONDES
Director of the Center for Neurobiology and Psychiatry at the University of California, San Francisco; Author, Better than Prozac

Each Of Us Is Ordinary, And Yet One Of A Kind

Each of us is ordinary, and yet one of a kind.

Each of us is standard-issue, conceived by the union of two germ cells, nurtured in a womb, and equipped with a developmental program that guides our further maturation and eventual decline.

Each of us is also unique, the possessor of a particular selection of gene variants from the collective human genome, and immersed in a particular family, culture, era, and peer group. With inborn tools for adaptation to the circumstances of our personal world we keep building our own ways of being and the sense of who we are.

This dual view of each of us, as both run-of-the-mill and special, has been so well established by biologists and behavioral scientists that it may now seem self-evident. But it still deserves conscious attention as a specific cognitive chunk because it has such important implications. Recognizing how much we share with others promotes compassion, humility, respect, and brotherhood. Recognizing that we are each unique promotes pride, self-development, creativity, and achievement.

Embracing these two aspects of our personal reality can enrich our daily experience. It allows us to simultaneously enjoy the comfort of being ordinary and the excitement of being one of a kind.

RICHARD NISBETT
Social Psychologist, Co-Director, Culture and Cognition Program, University of Michigan; Author, Intelligence and How to Get It

"Graceful" SHA's

1. A university needs to replace its aging hospital. Cost estimates indicate that it would be equally expensive to remodel the old hospital or to demolish it and build a new one from scratch. The main argument offered by the proponents of remodeling is that the original hospital had been very expensive to build and it would be wasteful to simply demolish it. The main argument offered by the proponents of a new hospital is that a new hospital would inevitably be more modern than a remodeled one. Which seems wiser to you — remodel or build a new hospital?

2. David L., a high school senior, is choosing between two colleges, equal in prestige, cost and distance from home. David has friends at both colleges. Those at College A like it from both intellectual and personal standpoints. Those at College B are generally disappointed on both grounds. But David visits each college for a day and his impressions are quite different from those of his friends. He meets several students at College A who seem neither particularly interesting nor particularly pleasant, and a couple of professors give him the brushoff. He meets several bright and pleasant students at College B and two different professors take a personal interest in him. Which college do you think David should go to?

3. Which of the cards below should you turn over to determine whether the following rule has been violated? "If there is a vowel on the front of the card then there is an odd number on the back."
U

K

3

8

Some considerations about each of these questions

Question 1: If you said that the university should remodel on the grounds that it had been expensive to build the old hospital you have fallen into the "sunk cost trap" SHA identified by economists. The money spent on the hospital is irrelevant — it's sunk — and has no bearing on the present choice. Amos Tversky and Daniel Kahneman pointed out that people's ability to avoid such traps might be helped by a couple of thought experiments like the following:

"Imagine that you have two tickets to tonight's NBA game in your city and that the arena is 40 miles away. But it's begun to snow and you've found out that your team's star has been injured and won't be playing. Should you go or just throw away the money and skip it?" To answer that question as an economist would, ask yourself the following question: Suppose you didn't have tickets to the game and a friend were to call you up and say that he has two tickets to tonight's game which he can't use and asks if you would like to have them. If the answer is "you've got to be kidding, it's snowing and the star isn't playing," then the answer is you shouldn't go. That answer shows you that the fact that you paid good money for the tickets you have is irrelevant — their cost is sunk and can't be retrieved by doing something you don't want to do anyway. Avoidance of sunk cost traps is a religion for economists, but I find that a single college course in economics actually does little to make people aware of the sunk cost trap. It turns out that exposure to a few basketball-type anecdotes does a lot.

Question 2: If you said that "David is not his friends; he should go to the place he likes," then the SHA of "the law of large numbers" has not been sufficiently salient to you. David has one day's worth of experiences at each; his friends have hundreds. Unless David thinks his friends have kinky tastes he should ignore his own impressions and go to College A. A single college course in statistics increases the likelihood of invoking LLN. Several courses in statistics make LLN considerations almost inevitable.
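The law of large numbers behind this advice can be simulated. A sketch with entirely hypothetical numbers (the "true" quality of the college and the day-to-day noise are assumptions for illustration):

```python
import random

random.seed(2011)

TRUE_QUALITY = 0.7   # hypothetical true enjoyability of College A

def average_impression(days):
    """Average of `days` noisy daily impressions of the college."""
    return sum(random.gauss(TRUE_QUALITY, 0.5) for _ in range(days)) / days

# 1,000 hypothetical visitors who each spent one day,
# versus 1,000 who each accumulated 200 days of experience
one_day = [average_impression(1) for _ in range(1000)]
many_days = [average_impression(200) for _ in range(1000)]

spread = lambda xs: max(xs) - min(xs)
print(round(spread(one_day), 2), round(spread(many_days), 2))
```

Single-day impressions scatter widely around the true quality, while 200-day averages cluster tightly near it — which is why David should weight his friends' hundreds of days over his own single visit.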

Question 3: If you said anything other than "turn over the U and turn over the 8," psychologists Wason and Johnson-Laird have shown that you would be in the company of 90% of Oxford students. Unfortunately, you — and they — are wrong. The SHA of the logic of the conditional has not guided your answer. "If P then Q" is satisfied by showing that P is associated with a Q and that not-Q is not associated with a P. A course in logic actually does nothing to make people better able to answer questions such as number 3. Indeed, a Ph.D. in philosophy does nothing to make people better able to apply the logic of the conditional to simple problems like Question 3 or to meatier problems of the kind one encounters in everyday life.
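The conditional's logic can be made mechanical. A small sketch (representing each visible face as a string) that asks, for each card, whether some hidden face could violate the rule "if vowel, then odd number":

```python
VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def must_turn(face):
    """A card must be turned iff some hidden face could violate the rule."""
    if is_vowel(face):
        # P showing: the hidden side might be an even number (not-Q)
        return True
    if face.isdigit() and not is_odd_number(face):
        # not-Q showing: the hidden side might be a vowel (P)
        return True
    # K (not-P) and 3 (Q) can never falsify "if P then Q"
    return False

print([f for f in ["U", "K", "3", "8"] if must_turn(f)])  # ['U', '8']
```

Only the P card and the not-Q card are informative; the K and the 3 are irrelevant no matter what is on their hidden sides.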

Some SHAs apparently are "graceful" in that they are easily inserted into the cognitive toolbox. Others appear to be clunky and don't readily fit. If educators want to improve people's ability to think, they need to know which SHAs are graceful and teachable and which are clunky and hard to teach. An assumption of educators for centuries has been that formal logic improves thinking skills — meaning that it makes people more intelligent in their everyday lives. But this belief may be mistaken. (Bertrand Russell said, almost surely correctly, that the syllogisms studied by the monks of medieval Europe were as sterile as the monks themselves.) Yet it seems likely that many crucially important SHAs, undoubtedly including some which have been proposed by this year's Edge contributors, are readily taught. Few questions are more important for educators to study than which SHAs are teachable and how they can be taught most easily.

When I go about doing what I do, frequently I affect you as an incidental side effect. In many such cases, I don't have to pay you to compensate for any inadvertent harm done; symmetrically, you frequently don't have to pay me for any inadvertent benefits I've bestowed upon you. The term — externalities — refers to these cases, and they are pervasive and important because, especially in the modern, interconnected world, when I go about pursuing my own goals, I wind up affecting you in any number of different ways.

Externalities can be small or large, negative and positive. When I lived in Santa Barbara, many people with no goal other than working on their tans generated (small, true) positive externalities for passersby, who benefitted from the enhanced scenery. These onlookers didn't have to pay for this improvement to the landscape but, on the same beach, rollerbladers, traveling at high speed and distracted by this particular positive externality, occasionally produced a negative one in the form of a risk of collision for pedestrians trying to enjoy the footpath.

Externalities loom large in the present era, when actions in one place can potentially affect others half a world away. When I manufacture widgets for you to buy, to make them I might, as a side effect of the process, produce waste that makes the people around my factory — and maybe around the world — worse off. As long as I don't have to compensate anyone for polluting their water and air, it's unlikely I'll make much of an effort to stop doing it.

At a smaller, more personal scale, we all impose externalities on one another as we go through our daily lives. I drive to work, increasing the amount of traffic you face. You feel the strange compulsion that infects people in theaters these days to check your text messages on your cell phone during the film, and the bright glow peeking over your shoulder reduces my enjoyment of the movie.

The concept of externalities is useful because it directs our attention to unintended side effects. If you weren't focused on externalities, you might think that the way to reduce traffic congestion was to build more roads. That might work, but another way, and a potentially more efficient way, is to implement policies that force drivers to pay the cost of their negative externalities by charging a price to use roads, particularly at peak times. Congestion charges, such as those implemented in London and Singapore, are designed to do exactly this. If I have to pay to go into town during rush hour, I might stay home unless my need is pressing.

Keeping externalities firmly in mind also reminds us to be vigilant about the fact that in complex, integrated systems, simple interventions designed to bring about a particular desirable effect will potentially have many more consequences, both positive and negative. Consider, as an example, the history of DDT. When first used, it had its intended effect, which was to reduce the spread of malaria through the control of mosquito populations. However, its use also had two unintended consequences. First, it poisoned a number of animals (including humans) and, second, it selected for resistance among mosquitoes. Subsequently, policies to reduce the use of DDT probably were effective in their goals of preventing these two negative consequences. However, while there is some debate about the details, these policies might themselves have had an important side effect, increasing rates of malaria, carried by the mosquitoes no longer suppressed by DDT.

The key point is that the notion of externalities forces us to think about unintended (positive and negative) effects of actions, an issue that looms larger as the world gets smaller. It highlights the need to balance not only the intended costs and benefits of a given candidate policy, but also the unintended effects of the policy. Further, it helps focus attention on one type of solution to the problems of unintended harms, which is to think about using prices to provide incentives for people and firms to produce more positive externalities and fewer negative ones.

Considering externalities in our daily lives directs our attention to ways in which we harm, albeit inadvertently, the other people around us, and can be used to guide our own decision making, including waiting until after the credits have rolled to check our messages.

Most of us have a good reputation with ourselves. That's the gist of a sometimes amusing and frequently perilous phenomenon that social psychologists call self-serving bias.

Accepting more responsibility for success than failure, for good deeds than bad.

In experiments, people readily accept credit when told they have succeeded (attributing such to their ability and effort). Yet they attribute failure to external factors such as bad luck or the problem's "impossibility." When we win at Scrabble it's because of our verbal dexterity. When we lose it's because "I was stuck with a Q but no U."

Self-serving attributions have been observed with athletes (after victory or defeat), students (after high or low exam grades), drivers (after accidents), and managers (after profits and losses). The question, "What have I done to deserve this?" is one we ask of our troubles, not our successes.

The better-than-average phenomenon: How do I love me? Let me count the ways.

It's not just in Lake Wobegon that all the children are above average. In one College Board survey of 829,000 high school seniors, zero percent rated themselves below average in "ability to get along with others," 60 percent rated themselves in the top 10 percent, and 25 percent rated themselves in the top 1 percent. Compared to our average peer, most of us fancy ourselves as more intelligent, better looking, less prejudiced, more ethical, healthier, and likely to live longer — a phenomenon recognized in Freud's joke about the man who told his wife, "If one of us should die, I shall move to Paris."

In everyday life, more than 9 in 10 drivers are above average drivers, or so they presume. In surveys of college faculty, 90 percent or more have rated themselves as superior to their average colleague (which naturally leads to some envy and disgruntlement when one's talents are underappreciated). When husbands and wives estimate what percent of the housework they contribute, or when work team members estimate their contributions, their self-estimates routinely sum to more than 100 percent.

Studies of self-serving bias and its cousins — illusory optimism, self-justification, and ingroup bias — remind us of what literature and religion have taught: pride often goes before a fall. Perceiving ourselves and our groups favorably protects us against depression, buffers stress, and sustains our hopes. But it does so at the cost of marital discord, bargaining impasses, condescending prejudice, national hubris, and war. Being mindful of self-serving bias beckons us not to false modesty, but to a humility that affirms our genuine talents and virtues, and likewise those of others.

JAMES O'DONNELL
Classicist; Provost, Georgetown University; Author, The Ruin of the Roman Empire

Everything Is In Motion

Nothing is more wonderful about human beings than their ability to abstract, infer, calculate, and produce rules, algorithms, and tables that enable them to work marvels. We are the only species that could even imagine taking on mother nature in a fight for control of the world. We may well lose that fight, but it's an amazing spectacle nonetheless.

But nothing is less wonderful about human beings than their ability to refuse to learn from their own discoveries. The edge to the Edge question this year is the implication that we are brilliant and stupid at the same time, capable of inventing wonders and still capable of forgetting what we've done and blundering stupidly on. Our poor cognitive toolkits are always missing a screwdriver when we need it and we're always trying to get a bolt off that wheel with our teeth when a perfectly serviceable wrench is in the kit over there unused.

So as a classicist, I'll make my pitch for what is arguably the oldest of our "SHA" concepts, the one that goes back to the senior pre-Socratic philosopher, Heraclitus. "You can't step in the same river twice," he said; putting it another way, his mantra was "Everything flows." Remembering that everything is in motion — feverish, ceaseless, unbelievably rapid motion — is always hard for us. Vast galaxies dash apart at speeds that seem faster than is physically possible, while the subatomic particles of which we are composed beggar our ability to comprehend large numbers when we try to understand their motion — and at the same time, I lie here, sluglike, inert, trying to muster the energy to change channels, convinced that one day is just like another, reflecting on the deep truth that my idiot cousin will never change, and wondering why my favorite cupcake store has lost its magic touch.

Because we think and move at human scale in time and space, we can deceive ourselves. Pre-Copernican astronomies depended on the self-evident fact that the "fixed stars" orbited around the Earth in a slow annual dance; and it was an advance in science to declare that "atoms" (in Greek, literally "indivisibles") were the changeless building blocks of matter — until we split them. Edward Gibbon could be puzzled by the fall of the Roman Empire without realizing that its most amazing feature was that it lasted so long. Scientists discover magic disease-fighting compounds only to find that the disease changes faster than they can keep up.

Take it from Heraclitus and put it in your toolkit: change is the law, stability and consistency are illusions, temporary in any case, a heroic achievement of human will and persistence at best. When we want things to stay the same, we'll always wind up playing catch-up. Better to go with the flow.

DOUGLAS T. KENRICK
Professor of Psychology, Arizona State University; Author, Sex, Murder, and the Meaning of Life; Editor, Evolution and Social Psychology

Subselves and the Modular Mind

Although it seems obvious that there is a single "you" inside your head, research from several subdisciplines of psychology suggests that this is an illusion. The "you" who makes a seemingly rational and "self-interested" decision to discontinue a relationship with a friend who fails to return your phone calls, borrows thousands of dollars he doesn't pay back, and lets you pick up the tab in the restaurant is not the same "you" who makes very different calculations about a son, about a lover, or about a business partner.

Three decades ago cognitive scientist Colin Martindale advanced the idea that each of us has several subselves, and he connected his idea to emerging ideas in cognitive science. Central to Martindale's thesis were a few fairly simple ideas, such as selective attention, lateral inhibition, state-dependent memory, and cognitive dissociation. Although all the neurons in our brains are firing all the time, we'd never be able to put one foot in front of the other if we were unable to consciously ignore almost all of that hyperabundant parallel processing going on in the background. When you walk down the street there are thousands of stimuli to stimulate your already overtaxed brain — hundreds of different people of different ages with different accents, different hair colors, different clothes, different ways of walking and gesturing, not to mention all the flashing advertisements, curbs to avoid tripping over, and automobiles running yellow lights as you try to cross at the intersection. Hence, attention is highly selective. The nervous system accomplishes some of that selectiveness by relying on the powerful principle of lateral inhibition — in which one group of neurons suppresses the activity of other neurons that might interfere with an important message getting up to the next level of processing. In the eye, lateral inhibition helps us notice potentially dangerous holes in the ground, as the retinal cells stimulated by light areas send messages suppressing the activity of neighboring neurons, producing a perceived bump in brightness and valley of darkness near any edge. Several of these local "edge detector" style mechanisms combine at a higher level to produce "shape detectors" — allowing us to discriminate a "b" from a "d" and a "p." 
Higher up in the nervous system, several shape detectors combine to allow us to discriminate words, and at a higher level, to discriminate sentences, and at a still higher level, place those sentences in context (thereby discriminating whether the statement "Hi, how are you today?" is a romantic pass or a prelude to a sales pitch).
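The lateral-inhibition mechanism described above can be caricatured in a few lines. This is a toy sketch, not a model of real retinal circuitry: the inhibition fraction `k` and the step stimulus are assumptions for illustration. Each "cell" subtracts a fraction of its neighbors' input, producing the bright overshoot and dark valley at an edge:

```python
# A light/dark step: four dim cells, then four bright ones
light = [10, 10, 10, 10, 50, 50, 50, 50]

def inhibited(signal, k=0.2):
    """Each cell's response is its own input minus k times each neighbor's.

    Edge cells treat their missing neighbor as a copy of themselves.
    """
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - k * (left + right))
    return out

print(inhibited(light))
```

In the output, the cell just before the edge is suppressed below every other dim cell (a valley of darkness), and the cell just after it is pushed above every other bright cell (a bump of brightness) — the exaggerated contour that makes edges, and holes in the ground, pop out.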

State dependent memory helps sort out all that incoming information for later use, by categorizing new info according to context — if you learn a stranger's name after drinking a doppio espresso at the local java house, it will be easier to remember that name if you meet again at Starbucks than if the next encounter is at a local pub after a martini. For several months after I returned from Italy, I would start speaking Italian and making expansive hand gestures every time I drank a glass of wine.

At the highest level, Martindale argued that all of those processes of inhibition and dissociation lead us to suffer from an everyday version of dissociative disorder. In other words, we all have a number of executive subselves, and the only way we manage to accomplish anything in life is to allow only one subself to take the conscious driver's seat at any given time.

Martindale developed his notion of executive subselves before modern evolutionary approaches to psychology had become prominent, but the idea becomes especially powerful if you combine Martindale's cognitive model with the idea of functional modularity. Building on findings that animals and humans use multiple — and very different — mental processes to learn different things, evolutionarily informed psychologists have suggested that there is not a single information-processing organ inside our heads, but instead multiple systems dedicated to solving different adaptive problems. Thus, instead of having a random and idiosyncratic assortment of subselves inside my head, different from the assortment inside your head, each of us has a set of functional subselves — one dedicated to getting along with our friends, one dedicated to self-protection (protecting us from the bad guys), one dedicated to winning status, another to finding mates, a distinct one for keeping mates (which is a very different set of problems, as some of us have learned), and yet another to caring for our offspring.

Thinking of the mind as composed of several functionally independent adaptive subselves helps us understand many seeming inconsistencies and irrationalities in human behavior, such as why a decision that seems "rational" when it involves one's son can seem eminently irrational when it involves a friend or a lover.

The scientist Nicolaus Copernicus recognized that Earth is not in any particularly privileged position in the solar system. This elegant fact can be extended to encompass a powerful idea, known as the Copernican Principle, that we are not in a special or favorable place of any sort. By looking at the world through the eyes of this principle, we can remove certain blinders and preconceptions about ourselves and re-examine our relationship with the universe.

The Copernican Principle can be used in the traditional spatial sense, providing awareness of our mediocre place in the galaxy, and our galaxy's place in the universe. We now recognize that our solar system, once thought to be the center of the galaxy, is actually in the suburban portions of the Milky Way. And the Copernican Principle helps guide our understanding of the expanding universe, allowing us to see that an observer anywhere in the cosmos would view other galaxies moving away at rapid speeds, just as we see here on Earth. We are not anywhere special.

The Copernican Principle has also been extended to our temporal position by astrophysicist J. Richard Gott to help provide estimates for lifetimes of events, independent of additional information. As Gott elaborated, other than the fact that we are intelligent observers, there is no reason to believe we are in any way specially located in time. It allows us to quantify our uncertainty and recognize that we are often neither at the beginning of things, nor at the end. This allowed Gott to estimate correctly when the Berlin Wall would fall, and has even provided meaningful numbers on the lifetime of humanity.
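Gott's "delta-t" argument reduces to a one-line formula: if we observe something at a random moment in its lifetime, then with 95% confidence its future duration lies between 1/39 and 39 times its past duration. A minimal sketch (the function name is mine):

```python
def gott_interval(t_past, confidence=0.95):
    # If the observation falls at a uniformly random point in the total
    # lifetime, the elapsed fraction r lies in [(1-c)/2, (1+c)/2] with
    # confidence c.  The remaining lifetime is t_past * (1/r - 1), so:
    c = confidence
    return t_past * (1 - c) / (1 + c), t_past * (1 + c) / (1 - c)

# The Berlin Wall was 8 years old when Gott saw it in 1969:
low, high = gott_interval(8)
# low ~ 0.2 years (about 2.5 months), high ~ 8 * 39 = 312 years;
# the Wall fell 20 years later, well inside the interval
```

Note what the formula does not use: any knowledge of walls, politics, or history. That is the Copernican point, that the bare fact of observing at an unprivileged moment already constrains the future.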

This principle can even anchor our location within the many orders of magnitude of our world: we are far smaller than most of the cosmos, far larger than most chemistry, far slower than much that occurs at subatomic scales, and far faster than geological and evolutionary processes. Through this principle, we are compelled to study successively larger and smaller orders of magnitude of our world, because we need not assume that everything interesting is at the same scale as ourselves.

And yet, despite this regimented approach to our mediocrity, we need not have cause for despair: as far as we know, we're the only species that can actually recognize its place in the universe. The paradox of the Copernican Principle is that, by properly understanding our place, even if it be rather humbling, we can only then truly understand our surroundings. And by being able to do that, we don't seem so small or insignificant after all.

One of the most general shorthand abstractions that if adopted would improve the cognitive toolkit of humanity is to think bottom up, not top down. Almost everything important that happens in both nature and in society happens from the bottom up, not the top down. Water is a bottom up, self-organized emergent property of hydrogen and oxygen. Life is a bottom up, self-organized emergent property of organic molecules that coalesced into protein chains through nothing more than the input of energy into the system of Earth's early environment. The complex eukaryotic cells of which we are made are themselves the product of much simpler prokaryotic cells that merged together from the bottom up in a process of symbiosis that happens naturally when genomes are merged between two organisms. Evolution itself is a bottom up process of organisms just trying to make a living and get their genes into the next generation; out of that simple process emerges the diverse array of complex life we see today.

Analogously, an economy is a self-organized bottom up emergent process of people just trying to make a living and get their genes into the next generation, and out of that simple process emerges the diverse array of products and services available to us today. Likewise, democracy is a bottom up emergent political system specifically designed to displace top down kingdoms, theocracies, and dictatorships. Economic and political systems are the result of human action, not human design.

Most people, however, see the world from the top down instead of the bottom up. The reason is that our brains evolved to find design in the world, and our experience with designed objects is that they have a designer (us), whom we consider to be intelligent. So most people intuitively sense that anything in nature that looks designed must be designed from the top down, not the bottom up. Bottom up reasoning is counterintuitive. This is why so many people believe that life was designed from the top down, and why so many think that economies must be designed and that countries should be ruled from the top down.

One way to get people to adopt the bottom up shorthand abstraction as a cognitive tool is to find examples that we know evolved from the bottom up and were not designed from the top down. Language is an example. No one designed English to look and sound like it does today (in which teenagers use the word "like" in every sentence). From Chaucer's time forward our language has evolved from the bottom up by native speakers adopting their own nuanced styles to fit their unique lives and cultures. Likewise, the history of knowledge production has been one long trajectory from top down to bottom up. From ancient priests and medieval scholars to academic professors and university publishers, the democratization of knowledge has struggled alongside the democratization of societies to free itself from the bondage of top down control. Compare the magisterial multi-volume encyclopedias of centuries past that held sway as the final authority for reliable knowledge, now displaced by individual encyclopedists employing wiki tools and making everyone their own expert.

Which is why the Internet is the ultimate bottom up self-organized emergent property of millions of computer users exchanging data across servers, and although there are some top-down controls involved—just as there are some in mostly bottom-up economic and political systems—the strength of digital freedom derives from the fact that no one is in charge. For the past 500 years humanity has gradually but ineluctably transitioned from top down to bottom up systems, for the simple reason that both information and people want to be free.

Fixed-Action Patterns: Using The Study Of Animal Instinct As A Metaphor For Human Behavior

The concept comes from early ethologists, scientists such as Oskar Heinroth and Konrad Lorenz, who defined it as an instinctive response — usually a series of predictable behavior patterns — that would occur reliably in the presence of a specific bit of input, often called a "releaser". FAPs, as they were known, were thought to be devoid of cognitive processing. As it turned out, FAPs were not nearly as fixed as the ethologists believed, but the concept has remained as part of the historical literature, as a way of scientifically describing what in the vernacular might be called "knee-jerk responses". The concept of a FAP, despite its simplicity, might prove quite valuable as a metaphorical means to study and change human behavior.

If we look into the literature on FAPs, we see that many such instinctive responses were actually learned, based on the most elementary of signals. For example, the newly-hatched herring gull chick's supposed FAP of hitting the red spot on its parent's beak for food was far more complex: Hailman demonstrated that what was innate was only a tendency to peck at an oscillating object in the field of view. The ability to target the beak, and the red spot on the beak, though a pattern that developed steadily and fairly quickly, was acquired experientially. Clearly, certain sensitivities must be innate, but the specifics of their development into various behavioral acts likely depend on how the organism interacts with its surroundings and what feedback it receives. The system need not, especially for humans, be simply a matter of conditioning Response R to Stimulus S, but rather of evaluating as much input as possible.

The relevance is that, if we wish to understand why as humans we often act in certain predictable ways, and particularly if there is a desire or need to change these behavioral responses, we can remember our animal heritage and look for the possible releasers that appear to stimulate our FAPs. Might the FAP actually be a response learned over time, initially with respect to something even more basic than we expect? The consequences could affect aspects of our lives from our social interactions to what we see as our quick decision-making processes in our professional lives. Given an understanding of our FAPs, and those of the other individuals with whom we interact, we — as humans with cognitive processing powers — could begin to re-think our behavior patterns.

An important part of my scientific toolkit is how to think about things in the world over a wide range of magnitudes and time scales. This involves first understanding powers of ten; second, visualizing data over a wide range of magnitudes on graphs using logarithmic scales; and third, appreciating the meaning of magnitude scales such as the decibel (dB) scale for the loudness of sounds and the Richter scale for the strengths of earthquakes.

This toolkit ought to be a part of everyone's thinking, but sadly I have found that even well-educated nonscientists are flummoxed by log scales and can only vaguely grasp the difference between an earthquake of Richter magnitude 6 and one of magnitude 8 (a thousand times more energy released). Thinking in powers of 10 is such a basic skill that it ought to be taught along with integers in elementary school.
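The Richter comparison can be checked directly. The Gutenberg-Richter energy relation gives log10(E) ≈ 1.5M + const, so each whole magnitude step is a factor of about 31.6 in energy, and two steps is a factor of 1000. A quick sketch (the function name is mine):

```python
def energy_ratio(m_big, m_small):
    # Gutenberg-Richter energy relation: log10(E) = 1.5 * M + const.
    # The constant cancels when comparing two earthquakes, leaving
    # a pure power of ten.
    return 10 ** (1.5 * (m_big - m_small))

# One magnitude step is a factor of ~31.6 in energy; two steps, ~1000:
# energy_ratio(8, 6) -> 1000.0
```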

Scaling laws are found throughout nature. Galileo pointed out in 1638 that large animals have disproportionately thicker leg bones than small animals, to support the animal's weight. The heavier the animal, the stouter its legs need to be. This leads to a prediction that the thickness of the leg bone should scale with the 3/2 power of its length.
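Galileo's prediction follows from two scaling facts: an animal's weight grows with the cube of its length (volume), while a bone's strength grows with the square of its thickness (cross-sectional area). Requiring strength to keep up with weight gives thickness ∝ length^(3/2). A small illustration (the function name is mine):

```python
def bone_thickness_factor(length_factor):
    # Weight scales as length**3 (volume); a bone's load-bearing strength
    # scales as thickness**2 (cross-sectional area).  Setting
    # thickness**2 proportional to length**3 gives thickness ~ length**1.5.
    return length_factor ** 1.5

# An animal twice as long needs leg bones about 2.8x as thick;
# one four times as long needs them 8x as thick.
```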

Another interesting scaling law is that between the volume of the cortical white matter, corresponding to the long-distance wires between cortical areas, and the gray matter, where the computing takes place. For mammals ranging over 5 orders of magnitude in weight from a pygmy shrew to an elephant, the white matter scales as the 5/4 power of the gray matter. This means that the bigger the brain, the disproportionately larger the fraction of the volume taken up by cortical wiring used for communication compared to the computing machinery.
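This is exactly the kind of relationship a log-log plot makes visible: a power law becomes a straight line whose slope is the exponent. A sketch with synthetic (not measured) data, assuming the 5/4 exponent and an arbitrary prefactor:

```python
import math

# A power law W = k * G**p is a straight line of slope p on log-log axes.
# Synthetic gray-matter volumes spanning 5 orders of magnitude:
gray = [10.0 ** k for k in range(6)]         # arbitrary units
white = [0.3 * g ** 1.25 for g in gray]      # assumed 5/4 power law

# The slope between the endpoints on log-log axes recovers the exponent:
slope = (math.log10(white[-1]) - math.log10(white[0])) / (
    math.log10(gray[-1]) - math.log10(gray[0])
)
# slope == 1.25; the prefactor 0.3 drops out entirely
```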

I am concerned that students I teach have lost the art of estimating with powers of 10. When I was a student I used a slide rule to compute, but students now use calculators. A slide rule lets you carry out a long series of multiplications and divisions by adding and subtracting the logs of numbers; but at the end you need to figure out the powers of 10 by making a rough estimate. A calculator keeps track of this for you, but if you make a mistake in keying in a number you can be off by 10 orders of magnitude, which happens to students who don't have a feeling for orders of magnitude.

A final reason why familiarity with powers of 10 would improve everyone's cognitive toolkit is that it helps us comprehend our life and the world in which we live:

How many seconds are there in a lifetime? 10^9 sec

A second is an arbitrary time unit, but one that is based on our experience. Our visual system is bombarded by snapshots at a rate of around 3 per second caused by rapid eye movements called saccades. Athletes often win or lose a race by a fraction of a second. If you earned a dollar for every second in your life you would be a billionaire. However, a second can feel like a minute in front of an audience and a quiet weekend can disappear in a flash. As a child, a summer seemed to last forever, but as an adult, summer is over almost before it begins. William James speculated that subjective time was measured in novel experiences, which become rarer as you get older. Perhaps life is lived on a logarithmic time scale, compressed toward the end.
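The 10^9 estimate is easy to verify with a quick order-of-magnitude sketch:

```python
import math

seconds_per_year = 60 * 60 * 24 * 365      # 31,536,000
lifetime_seconds = 80 * seconds_per_year   # ~2.5 * 10**9 for an 80-year life

order_of_magnitude = math.floor(math.log10(lifetime_seconds))
# order_of_magnitude == 9, and a dollar per second would indeed
# make you a (multi-)billionaire
```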

What is the GDP of the world? $10^14

A billion dollars was once worth a lot, but there is now a long list of multibillionaires. The US government recently stimulated the world economy by loaning several trillion dollars to banks. It is difficult to grasp how much a trillion dollars ($10^12) represents, but several clever videos on YouTube (key words: trillion dollars) illustrate this with physical comparisons (a giant pile of $100 bills) and what you can buy with it (10 years of US response to 9/11). When you start thinking about the world economy, the trillions of dollars add up. A trillion here, a trillion there, pretty soon you're talking about real money. But so far there aren't any trillionaires.

How many synapses are there in the brain? 10^15

Two neurons can communicate with each other at a synapse, which is the computational unit in the brain. The typical cortical synapse is less than a micron in diameter (10^-6 meter), near the resolution limit of the light microscope. If the economy of the world is a stretch for us to contemplate, thinking about all the synapses in your head is mind boggling. If I had a dollar for every synapse in your brain I could support the current economy of the world for 10 years. Cortical neurons on average fire once a second, which implies a bandwidth of around 10^15 bits per second, greater than the total bandwidth of the internet backbone.
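The arithmetic behind these figures can be laid out explicitly. The neuron and synapse counts below are conventional rough estimates, not from the text, but they reproduce its totals:

```python
neurons = 10 ** 11               # ~100 billion neurons (rough estimate)
synapses_per_neuron = 10 ** 4    # ~10,000 synapses each (rough estimate)
synapses = neurons * synapses_per_neuron   # 10**15, matching the essay

firing_rate_hz = 1               # average cortical rate, ~1 spike/second
bandwidth_bits_per_sec = synapses * firing_rate_hz
# ~10**15 bits/second, if each synaptic event carries on the order of a bit
```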

How many seconds will the sun shine? 10^17 sec

Our sun has shined for billions of years and will continue to shine for billions more. The universe seems to be standing still during our lifetime, but on longer time scales the universe is filled with events of enormous violence. The spatial scales are also immense. Our space-time trajectory is a very tiny part of the universe, but we can at least attach powers of 10 to it and put it into perspective.

JUAN ENRIQUEZ
Managing Director of Excel Venture Management; authored As the Future Catches You and co-authored Homo Evolutis: A Tour of Our New Species.

Life Code

Everyone is familiar with Digital Code, or the shorthand IT. Soon all may be discoursing about Life Code…

It took a while to learn how to read life code; Mendel's initial cookbook was largely ignored. Darwin knew but refused, for decades, to publish such controversial material. Even a term that now lies within every cheesy PR description of a firm, on jeans, and pop psych books…DNA… was largely ignored after its 1953 discovery. For close to a decade very few cited Watson and Crick. They were not even nominated, by anyone, for a Nobel till after 1960, despite the discovery of how life code is written.

First ignorance, then controversy, continued dogging life code as humanity moved from reading it to copying it. Tadpoles were cloned in 1952, but few focused on the feat until Dolly the sheep begat wonder, consternation, and fear. Much the same occurred with in vitro fertilization and Louise Brown, a breakthrough that got the Nobel in 2010, a mere 32 years after the first birth. Copying genes, and whole animals of dozens of species, leading to hundreds of thousands of almost identical animals, is now commonplace. The latest controversy is no longer how we deal with rare clones but whether we should eat them.

Much has occurred as we learned to read and copy life code, but there is still little understanding of what has happened recently. It is this third stage of life code, writing and re-writing, that is by far the most important and profound change.

Few realize, so far, that life code is spreading across industries, economies, countries, and cultures. As we begin to rewrite existing life, strange things evolve. Bacteria can be programmed to solve Sudoku puzzles. Viruses begin to create electronic circuits. As we write life from scratch, Venter, Smith et al. partner with Exxon to try to change the world's energy markets. Designer genes introduced by retroviruses, organs built from scratch, and the first synthetic cells are further examples of massive change.

We see more and more products, derived from life code, changing fields as diverse as energy, textiles, chemicals, IT, vaccines, medicines, space exploration, agriculture, fashion, finance, and real estate. And gradually, "life code", a concept that got only 559 Google hits in 2000 and fewer than 50,000 in 2009, becomes a part of everyday public discourse.

Many of the Fortune 500 within the next decade will be companies based on the understanding and application of life code, much as has occurred over the past decades with digital code leading to the likes of Digital, Lotus, HP, IBM, Microsoft, Amazon, Google, and Facebook.

But this is just the beginning. The real change will become apparent as we re-write life code to morph the human species. We are already transitioning from a humanoid that is shaped by and shapes its own environment into a Homo evolutis, a species that directly and deliberately designs and shapes its own evolution and that of other species…

CARLO ROVELLI
Physicist, University of Aix-Marseille, France; Author, The First Scientist: Anaximander and the Nature of Science

The Uselessness of Certainty

There is a widely used notion that does plenty of damage: the notion of "scientifically proven". Nearly an oxymoron. The very foundation of science is to keep the door open to doubt. Precisely because we keep questioning everything, especially our own premises, we are always ready to improve our knowledge. Therefore a good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability.

Failure to appreciate the value of the lack of certainty is at the origin of much silliness in our society. Are we sure that the Earth is going to keep heating up if we do not do anything? Are we sure of the details of the current theory of evolution? Are we sure that modern medicine is always a better strategy than traditional ones? No, we are not, in any of these cases. But if from this lack of certainty we jump to the conviction that we had better not care about global heating, that there is no evolution and the world was created six thousand years ago, or that traditional medicine must be more effective than modern medicine, well, we are simply stupid. Still, many people make these silly inferences, because the lack of certainty is perceived as a sign of weakness instead of being what it is: the first source of our knowledge.

All knowledge, even the most solid, carries a margin of uncertainty. (I am very sure about my own name ... but what if I just hit my head and got momentarily confused?) Knowledge itself is probabilistic in nature, a notion emphasized by some currents of philosophical pragmatism. A better understanding of the meaning of probability, and especially the realization that we never have, nor need, 'scientifically proven' facts, but only a sufficiently high degree of probability in order to make decisions and act, would improve everybody's conceptual toolkit.

The concept of constraint satisfaction is crucial for understanding and improving human reasoning and decision making. A "constraint" is a condition that must be taken into account when solving a problem or making a decision, and "constraint satisfaction" is the process of meeting the relevant constraints. The key idea is that often there are only a few ways to satisfy a full set of constraints simultaneously.

For example, when moving into a new house, my wife and I had to decide how to arrange the furniture in the bedroom. We had an old headboard, which was so rickety that it had to be leaned against a wall. This requirement was a constraint on the positioning of the headboard. The other pieces of furniture also had requirements (constraints) on where they could be placed. Specifically, we had two small end tables that had to be next to either side of the headboard; a chair that needed to be somewhere in the room; a reading lamp that needed to be next to the chair; and an old sofa that was missing one of its rear legs, and hence rested on a couple of books — and we wanted to position it so that people couldn't see the books. Here was the remarkable fact about our exercises in interior design: Virtually always, as soon as we selected the wall for the headboard, bang! The entire configuration of the room was determined. There was only one other wall large enough for the sofa, which in turn left only one space for the chair and lamp.

In general, the more constraints, the fewer the possible ways of satisfying them simultaneously. And this is especially the case when there are many "strong" constraints. A strong constraint is like the locations of the end tables: there are very few ways to satisfy them. In contrast, a "weak" constraint, such as the location of the headboard, can be satisfied in many ways (many positions along different walls would work).
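The bedroom anecdote can be written as a tiny constraint-satisfaction search. The walls and size limits below are invented for illustration, but the effect is the one described: each added constraint prunes the space, and the strong constraints leave almost nothing to decide. A sketch:

```python
from itertools import product

# Toy bedroom problem: assign each piece of furniture to a wall so that
# all constraints hold.  Walls and "large wall" limits are invented.
walls = ["north", "south", "east", "west"]
large_walls = {"north", "south"}   # the only walls long enough for big pieces
pieces = ("headboard", "sofa", "chair")

def ok(assign):
    hb, sofa, chair = assign["headboard"], assign["sofa"], assign["chair"]
    return (
        hb in large_walls               # rickety headboard leans on a big wall
        and sofa in large_walls         # sofa also needs a big wall
        and sofa != hb                  # they can't share one wall
        and chair not in (hb, sofa)     # chair (with its lamp) gets its own wall
    )

# Brute-force search over all 4**3 = 64 possible assignments:
solutions = [
    dict(zip(pieces, combo))
    for combo in product(walls, repeat=3)
    if ok(dict(zip(pieces, combo)))
]
# Only 4 of the 64 assignments survive; fix the headboard's wall and
# the sofa's position is forced, leaving just the chair's side to pick.
```

This mirrors the "bang!" moment in the anecdote: the strong constraints (the two large-wall requirements) do nearly all the work, while the weak one (which side the chair goes on) barely narrows anything.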

What happens when some constraints are incompatible with others? For instance, say that you live far from a gas station and so you want to buy an electric automobile — but you don't have enough money to buy one. Not all constraints are equal in importance, and as long as the most important ones are satisfied "well enough," you may have reached a satisfactory solution. For example, although an optimal solution to your transportation needs might have been an electric car, a hybrid that gets excellent gas mileage might be good enough.

In addition, once you begin the constraint satisfaction process, you can make it more effective by seeking out additional constraints. For example, when you are deciding what car to buy, you might start with the constraints of (a) your budget and (b) your desire to avoid going to a filling station. You then might consider the size of car needed for your purposes, length of the warranty, and styling. You may be willing to make tradeoffs, for example, by satisfying some constraints very well (such as mileage) but just barely satisfying others (e.g., styling). Even so, the mere fact of including additional constraints at all could be the deciding factor.

Constraint satisfaction is pervasive. For example:

• This is how detectives — from Sherlock Holmes to the Mentalist — crack their cases, treating each clue as a constraint and looking for a solution that satisfies them all.

• This is what dating services strive to do — find the clients' constraints, identify which constraints are most important to him or her, and then see which of the available candidates best satisfies the constraints.

• This is what you go through when finding a new place to live, weighing the relative importance of constraints such as the size, price, location, and type of neighborhood.

• And this is what you do when you get dressed in the morning: you choose clothes that "go with each other" (both in color and style).

Constraint satisfaction is pervasive in part because it does not require "perfect" solutions. It's up to you to decide what the most important constraints are, and just how many of the constraints in general must be satisfied (and how well they must be satisfied). Moreover, constraint satisfaction need not be linear: You can appreciate the entire set of constraints at the same time, throwing them into your "mental stewpot" and letting them simmer. And this process need not be conscious. "Mulling it over" seems to consist of engaging in unconscious constraint satisfaction.

Finally, much creativity emerges from constraint satisfaction. Many new recipes were created when chefs discovered that only specific ingredients were available — and they thus were either forced to substitute different ingredients or to come up with a new "solution" (dish) to be satisfied. Creativity can also emerge when you decide to change, exclude, or add a constraint. For example, Einstein had one of his major breakthroughs when he realized that time need not pass at a constant rate. Perhaps paradoxically, adding constraints can actually enhance creativity — if a task is too open or unstructured, it may be so unconstrained that it is difficult to devise any solution.

Everybody knows about the familiar large-scale cycles of nature: day follows night follows day; summer-fall-winter-spring, summer-fall-winter-spring; the water cycle of evaporation and precipitation that refills our lakes, scours our rivers, and restores the water supply of every living thing on the planet. But not everybody appreciates how cycles — at every spatial and temporal scale, from the atomic to the astronomic — are quite literally the hidden spinning motors that power all the wonderful phenomena of nature.

Nikolaus Otto built and sold the first internal combustion gasoline engine in 1861, and Rudolf Diesel built his engine in 1897, two brilliant inventions that changed the world. Each exploits a cycle, the four-stroke Otto cycle or the two-stroke Diesel cycle, that accomplishes some work and then restores the system to the original position so that it is ready to accomplish some more work. The details of these cycles are ingenious, and they have been discovered and optimized by an R & D cycle of invention that is several centuries old. An even more elegant, micro-miniaturized engine is the Krebs cycle, discovered in 1937 by Hans Krebs, but invented over millions of years of evolution at the dawn of life. It is the eight-stroke chemical reaction that turns fuel — food — into energy in the process of metabolism that is essential to all life, from bacteria to redwood trees.

Biochemical cycles like the Krebs cycle are responsible for all the motion, growth, self-repair, and reproduction in the living world, wheels within wheels within wheels, a clockwork with trillions of moving parts, and each clock has to be rewound, restored to step one so that it can do its duty again. All of these have been optimized by the grand Darwinian cycle of reproduction, generation after generation, picking up fortuitous improvements over the eons.

At a completely different scale, our ancestors discovered the efficacy of cycles in one of the great advances of human prehistory: the role of repetition in manufacture. Take a stick and rub it with a stone and almost nothing happens — a few scratches are the only visible sign of change. Rub it a hundred times and there is still nothing much to see. But rub it just so, for a few thousand times, and you can turn it into an uncannily straight arrow shaft. By the accumulation of imperceptible increments, the cyclical process creates something altogether new. The foresight and self-control required for such projects was itself a novelty, a vast improvement over the repetitive but largely instinctual and mindless building and shaping processes of other animals. And that novelty was, of course, itself a product of the Darwinian cycle, enhanced eventually by the swifter cycle of cultural evolution, in which the reproduction of the technique wasn't passed on to offspring through the genes but transmitted among non-kin conspecifics who picked up the trick of imitation.

The first ancestor who polished a stone into a handsomely symmetrical hand axe must have looked pretty stupid in the process. There he sat, rubbing away for hours on end, to no apparent effect. But hidden in the interstices of all the mindless repetition was a process of gradual refinement that was well-nigh invisible to the naked eye, which was designed by evolution to detect changes occurring at a much faster tempo. The same appearance of futility has occasionally misled sophisticated biologists. In his elegant book, Wetware, the molecular and cell biologist Dennis Bray describes cycles in the nervous system:

In a typical signaling pathway, proteins are continually being modified and demodified. Kinases and phosphatases work ceaselessly like ants in a nest, adding phosphate groups to proteins and removing them again. It seems a pointless exercise, especially when you consider that each cycle of addition and removal costs the cell one molecule of ATP — one unit of precious energy. Indeed, cyclic reactions of this kind were initially labeled "futile." But the adjective is misleading. The addition of phosphate groups to proteins is the single most common reaction in cells and underpins a large proportion of the computations they perform. Far from being futile, this cyclic reaction provides the cell with an essential resource: a flexible and rapidly tunable device.

The word "computations" is aptly chosen, for it turns out that all the "magic" of cognition depends, just as life itself does, on cycles within cycles of recurrent, re-entrant, reflexive information-transformation processes from the biochemical scale within the neuron to the whole brain sleep cycle, waves of cerebral activity and recovery revealed by EEGs. Computer programmers have been exploring the space of possible computations for less than a century, but their harvest of invention and discovery so far includes millions of loops within loops within loops. The secret ingredient of improvement is always the same: practice, practice, practice.

It is useful to remember that Darwinian evolution is just one kind of accumulative, refining cycle. There are plenty of others. The problem of the origin of life can be made to look insoluble ("irreducibly complex") if one argues, as Intelligent Design advocates have done, that "since evolution by natural selection depends on reproduction," there cannot be a Darwinian solution to the problem of how the first living, reproducing thing came to exist. It was surely breathtakingly complicated, beautifully designed — must have been a miracle.

If we lapse into thinking of the pre-biotic, pre-reproductive world as a sort of featureless chaos of chemicals (like the scattered parts of the notorious jetliner assembled by a windstorm), the problem does look daunting and worse, but if we remind ourselves that the key process in evolution is cyclical repetition (of which genetic replication is just one highly refined and optimized instance), we can begin to see our way to turning the mystery into a puzzle: How did all those seasonal cycles, water cycles, geological cycles, and chemical cycles, spinning for millions of years, gradually accumulate the preconditions for giving birth to the biological cycles? Probably the first thousand "tries" were futile, near misses. But as Cole Porter says in his most sensual song, see what happens if you "do it again, and again, and again."

A good rule of thumb, then, when confronting the apparent magic of the world of life and mind is: look for the cycles that are doing all the hard work.

When it comes to common resources, a failure to cooperate is a failure to control consumption. In Hardin's classic tragedy, everyone overconsumes and equally contributes to the detriment of the commons. But a relative few can also ruin a resource for the rest of us.

Biologists are familiar with the term 'keystone species', coined in 1969 after Bob Paine's intertidal exclusion experiments. Paine found that by removing the few five-limbed carnivores, Pisaster ochraceus, from the seashore, he could cause an overabundance of its prey, mussels, and a sharp decline in diversity. Without seastars, mussels outcompeted sponges. No sponges, no nudibranchs. Anemones were also starved out, because they eat what the seastars dislodge. Pisaster was the keystone that kept the intertidal community together. Without it, there were only mussels, mussels, mussels. The term keystone species, inspired by the purple seastar, refers to a species that has a disproportionate effect relative to its abundance.

In human ecology, I imagine diseases and parasites play a role similar to Pisaster's in Paine's experiment. Remove disease (and increase food) and Homo sapiens takes over. Humans inevitably restructure their environment. But not all human beings consume equally. While a keystone species is a single species that structures an ecosystem, I consider keystone consumers to be a specific group of humans who structure the market for a particular resource. Intense demand by a few individuals can bring flora and fauna to the brink.

There are keystone consumers in the markets for caviar, slipper orchids, tiger penises, plutonium, pet primates, diamonds, antibiotics, Hummers, and seahorses. Niche markets for frog legs in pockets of the U.S., Europe, and Asia are depleting frog populations in Indonesia, Ecuador, and Brazil. Seafood lovers in high-end restaurants are causing stocks of long-lived fish species like orange roughy and Antarctic toothfish to crash. The demand for shark fin soup among wealthy Chinese consumers has led to the collapse of several shark species.

One in every four mammals (1,141 of the 5,487 mammals on Earth) is threatened with extinction. At least 76 mammals have become extinct since the 16th century, many, like the Tasmanian tiger and the Steller sea cow (and, among birds, the great auk), due to hunting by a relatively small group. It is possible for a small minority of humans to precipitate the disappearance of an entire species.

The consumption of non-living resources is also imbalanced. The 15% of the world's population that lives in North America, Western Europe, Japan and Australia consumes 32 times more resources, like fossil fuels and metals, and produces 32 times more pollution than the developing world, where the remaining 85% of humans live. City-dwellers consume more than people living in the countryside. A recent study determined that the ecological footprint of an average resident of Vancouver, British Columbia was 13 times higher than that of a suburban or rural counterpart.

Developed nations, urbanites, ivory collectors: the keystone consumer depends on the resource in question. In the case of water, agriculture accounts for 80% of use in the U.S., i.e. large-scale farms are the keystone consumers. So why do many conservation efforts focus on households rather than water efficiency on farms? The keystone consumer concept helps focus conservation efforts where returns on investments are highest.

Like keystone species, keystone consumers also have a disproportionate impact relative to their abundance. Biologists identify keystone species as conservation priorities because their disappearance could cause the loss of many other species. In the marketplace, keystone consumers should be priorities because their disappearance could lead to the recovery of the resource. Humans should protect keystone species and curb keystone consumption. The lives of others depend on it.

JARON LANIER
Musician, Computer Scientist; Pioneer of Virtual Reality; Author, You Are Not A Gadget: A Manifesto

Cumulative Error

It is the stuff of children's games. In the game of "telephone," a secret message is whispered from child to child until it is announced out loud by the final recipient. To the delight of all, the message is typically transformed into something new and bizarre, no matter the sincerity and care given to each retelling.

Humor seems to be the brain's way of motivating itself — through pleasure — to notice disparities and cleavages in its sense of the world. In the telephone game we find glee in the violation of expectation; what we think should be fixed turns out to be fluid.

When brains get something wrong commonly enough that noticing the failure becomes the fulcrum of a simple child's game, then you know there's a hitch in human cognition worth worrying about. Somehow, we expect information to be Platonic and faithful to its origin, no matter what history might have corrupted it.

The illusion of Platonic information is confounding because it can easily defeat our natural skeptical impulses. If a child in the sequence sniffs that the message seems too weird to be authentic, she can compare notes most easily with the children closest to her, who received the message just before she did. She might discover some small variation, but mostly the information will appear to be confirmed, and she will find an apparent verification of a falsity.
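The neighbor-check illusion can be made concrete with a toy simulation (my own illustration; the function names and the 5% per-retelling error rate are arbitrary assumptions): each child corrupts a few characters at random, so neighboring versions stay nearly identical even as the end of the chain drifts far from the original.

```python
import random

def telephone(message, n_children, error_rate=0.05, seed=1):
    """Pass a message down a chain; each retelling corrupts a few characters."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    versions = [message]
    for _ in range(n_children):
        versions.append("".join(
            rng.choice(alphabet) if rng.random() < error_rate else ch
            for ch in versions[-1]))
    return versions

def differs(a, b):
    """Number of character positions where two versions disagree."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

versions = telephone("the quick brown fox jumps over the lazy dog", 20)
# The last child's "neighbor check" finds only the final hop's errors...
print(differs(versions[-2], versions[-1]))
# ...while the cumulative drift from the original is far larger.
print(differs(versions[0], versions[-1]))
```

The local comparison reports one retelling's worth of error; the comparison against the origin reports twenty retellings' worth. The skeptical child's check comes back clean, and a falsity is verified.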

Another delightful pastime is over-transforming an information artifact through digital algorithms that are useful if used sparingly, until it turns into something quite strange. For instance, you can use one of the online machine translation services to translate a phrase through a ring of languages back to the original and see what you get.

The phrase, "The edge of knowledge motivates intriguing online discussions" transforms into "Online discussions in order to stimulate an attractive national knowledge" in four steps on Google's current translator. (English->German->Hebrew->Simplified Chinese->English)

We find this sort of thing funny, just like children playing "telephone," as well we should, because it sparks our recollection that our brains have unrealistic expectations of information transformation.

While information technology can reveal truths, it can also create stronger illusions than we are used to. For instance, sensors all over the world, connected through cloud computing, can reveal urgent patterns of change in climate data. But endless chains of online retelling also create an illusion for masses of people that the original data is a hoax.

The illusion of Platonic information plagues finance. Financial instruments are becoming multilevel derivatives of the real actions on the ground that finance is ultimately supposed to motivate and optimize. The reason to finance the purchasing of a house ought to be at least in part to get the house purchased. But an empire of specialists and giant growths of cloud computers showed, in the run up to the Great Recession, that it is possible for sufficiently complex financial instruments to become completely disconnected from their ultimate purpose.

In the case of complex financial instruments, the role of each child in the telephone game does not correspond to a horizontal series of stations that relay a message, but a vertical series of transformations that are no more reliable. Transactions are stacked on top of each other. Each transaction is based on a formula that transforms the data of the transactions beneath it on the stack. A transaction might be based on the possibility that a prediction of a prediction will have been improperly predicted.

The illusion of Platonic information reappears as a belief that a higher-level representation must always be better. Each time a transaction is gauged to an assessment of the risk of another transaction, however, even if it is in a vertical structure, at least a little bit of error and artifact is injected. By the time a few layers have been compounded, the information becomes bizarre.
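The compounding can be sketched numerically (a toy model of my own, not Lanier's; the 5% per-layer model error is an arbitrary assumption): each layer re-prices the layer beneath it with a small independent error, and the average drift from the ground-level value grows with the depth of the stack.

```python
import random

def relative_error_at_depth(depth, noise=0.05, trials=2000, seed=7):
    """Average |relative error| after `depth` layers of re-pricing,
    each layer injecting an independent error of up to +/- `noise`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        value = 1.0                                  # the real event on the ground
        for _ in range(depth):
            value *= 1 + rng.uniform(-noise, noise)  # one layer's slightly-off formula
        total += abs(value - 1.0)
    return total / trials

# Drift after one layer of derivation versus ten stacked layers:
print(relative_error_at_depth(1))
print(relative_error_at_depth(10))
```

Each layer's error is individually tiny, yet the ten-deep stack strays several times further from the underlying value than a single derivative does.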

Unfortunately, the feedback loop that determines whether a transaction is a success or not is based only on its interactions with its immediate neighbors in the phantasmagorical abstract playground of finance. So a transaction can make money based on how it interacted with the other transactions it referenced directly, while having no relationship to the real events on the ground that all the transactions are ultimately rooted in. This is just like the child trying to figure out if a message has been corrupted only by talking to her neighbors.

In principle, the Internet can make it possible to connect people directly to information sources, to avoid the illusions of the game of telephone. Indeed this happens. Millions of people had a remarkable direct experience of the Mars rovers.

The economy of the Internet as it has evolved incentivizes aggregators, however. Thus we all take seats in a new game of telephone, in which you tell the blogger who tells the aggregator of blogs, who tells the social network, who tells the advertiser, who tells the political action committee, and so on. Each station along the way finds that it is making sense, because it has the limited scope of the skeptical girl in the circle, and yet the whole system becomes infused with a degree of nonsense.

A joke isn't funny anymore if it's repeated too much. It is urgent for the cognitive fallacy of Platonic information to be universally acknowledged, and for information systems to be designed to reduce cumulative error.

When I first took up the piano, merely hitting each note required my concentrated attention. With practice, however, I began to work in phrases and chords. Eventually I was able to produce much better music with much less conscious effort.

Evidently, something powerful had happened in my brain.

That sort of experience is very common, of course. Something similar occurs whenever we learn a new language, master a new game, or get comfortable in a new environment. It seems very likely that a common mechanism is involved. I think it's possible to identify, in broad terms, what that mechanism is: We create hidden layers.

The scientific concept of a hidden layer arose from the study of neural networks.

Picture a network in which the flow of information runs from top to bottom. Sensory neurons at the top take input from the external world and encode it into a convenient form (which is typically electrical pulse trains for biological neurons, and numerical data for the computer "neurons" of artificial neural networks). They distribute this encoded information to other neurons, in the next layer below. Effector neurons at the bottom send their signals to output devices (which are typically muscles for biological neurons, and computer terminals for artificial neurons). In between are neurons that neither see nor act upon the outside world directly. These inter-neurons communicate only with other neurons. They are the hidden layers.

The earliest artificial neural networks lacked hidden layers. Their output was, therefore, a relatively simple function of their input. Those two-layer, input-output "perceptrons" had crippling limitations. For example, there is no way to design a perceptron that, faced with pictures of a few black circles on a white background, counts the number of circles. It took until the 1980s, decades after the pioneering work, for people to realize that including even one or two hidden layers could vastly enhance the capabilities of their neural networks. Nowadays such multilayer networks are used, for example, to distill patterns from the explosions of particles that emerge from high-energy collisions at the Large Hadron Collider. They do it much faster and more reliably than humans possibly could.
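The jump in power a hidden layer buys can be seen in miniature with the classic XOR function, which, like the circle-counting task, no two-layer perceptron can compute. Below is a sketch with hand-wired (not learned) weights; the names and thresholds are my own illustration, not from the essay:

```python
def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def perceptron(x1, x2, w1, w2, bias):
    """A two-layer input->output perceptron: no hidden layer."""
    return step(w1 * x1 + w2 * x2 + bias)

# A perceptron handles linearly separable functions like AND...
assert perceptron(1, 1, 1, 1, -1.5) == 1
assert perceptron(0, 1, 1, 1, -1.5) == 0

# ...but XOR requires one hidden layer of two units (an OR unit and a
# NAND unit), whose outputs a final unit ANDs together.
def xor_with_hidden_layer(x1, x2):
    h_or = step(x1 + x2 - 0.5)        # fires if either input is on
    h_nand = step(1.5 - x1 - x2)      # fires unless both inputs are on
    return step(h_or + h_nand - 1.5)  # fires only if both hidden units fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_with_hidden_layer(a, b))
```

The two hidden units each define an intermediate concept ("at least one" and "not both"), and only their combination yields XOR, which is precisely the emergent-template role the essay ascribes to hidden-layer neurons.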

David Hubel and Torsten Wiesel were awarded the 1981 Nobel Prize in physiology or medicine for figuring out what neurons in the visual cortex are doing. They showed that successive hidden layers first extract features of visual scenes that are likely to be meaningful (for example, sharp changes in brightness or color, indicating the boundaries of objects), and then assemble them into meaningful wholes (the underlying objects).

In every moment of our adult waking life, we translate raw patterns of photons impacting our retinas — photons arriving every which way from a jumble of unsorted sources, and projected onto a two-dimensional surface — into the orderly, three-dimensional visual world we experience. Because it involves no conscious effort, we tend to take that everyday miracle for granted. But when engineers tried to duplicate it, in robotic vision, they got a hard lesson in humility. Robotic vision remains today, by human standards, primitive. Hubel and Wiesel exhibited the architecture of Nature's solution. It is the architecture of hidden layers.

Hidden layers embody, in a concrete physical form, the fashionable but rather vague and abstract idea of emergence. Each hidden layer neuron has a template. It becomes activated, and sends signals of its own to the next layer, precisely when the pattern of information it's receiving from the preceding layer matches (within some tolerance) that template. But this is just to say, in precision-enabling jargon, that the neuron defines, and thus creates, a new emergent concept.

In thinking about hidden layers, it's important to distinguish between the routine efficiency and power of a good network, once that network has been set up, and the difficult issue of how to set it up in the first place. That difference is reflected in the difference between playing the piano, say, or riding a bicycle, or swimming, once you've learned (easy), and learning to do those things in the first place (hard). Understanding exactly how new hidden layers get laid down in neural circuitry is a great unsolved problem of science. I'm tempted to say it's the greatest.

Liberated from its origin in neural networks, the concept of hidden layers becomes a versatile metaphor, with genuine explanatory power. For example, in my own work in physics I've noticed many times the impact of inventing names for things. When Murray Gell-Mann invented "quarks", he was giving a name to a paradoxical pattern of facts. Once that pattern was recognized, physicists faced the challenge of refining it into something mathematically precise and consistent; but identifying the problem was the crucial step toward solving it! Similarly, when I invented "anyons" I knew I had put my finger on a coherent set of ideas, but I hardly anticipated how wonderfully those ideas would evolve and be embodied in reality. In cases like this, names create new nodes in hidden layers of thought.

I'm convinced that the general concept of hidden layers captures deep aspects of the way minds — whether human, animal, or alien; past, present, or future — do their work. Minds mobilize useful concepts by embodying them in a specific way, namely as features recognized by hidden layers. And isn't it pretty that "hidden layers" is itself a most useful concept, worthy to be included in hidden layers everywhere?

The word "science" itself might be the best answer to this year's Edge question. The idea that we can systematically understand certain aspects of the world and make predictions based on what we've learned, while appreciating and categorizing the extent and limitations of what we know, plays a big role in how we think. Many words that summarize the nature of science, such as "cause and effect," "predictions," and "experiments," as well as words that describe probabilistic results, such as "mean," "median," "standard deviation," and the notion of "probability" itself, help us understand more specifically what this means and how to interpret the world and behavior within it.

"Effective theory" is one of the more important notions within and outside of science. The idea is to determine what you can actually measure and decide — given the precision and accuracy of your measuring tools — and to find a theory appropriate to those measurable quantities. The theory that works might not be the ultimate truth—but it's as close an approximation to the truth as you need and is also the limit to what you can test at any given time. People can reasonably disagree on what lies beyond the effective theory, but in a domain where we have tested and confirmed it, we understand the theory to the degree that it's been tested.

An example is Newton's Laws, which work as well as we will ever need when they describe what happens to a ball when we throw it. Even though we now know quantum mechanics is ultimately at play, it has no visible consequences on the trajectory of the ball. Newton's Laws are part of an effective theory that is ultimately subsumed into quantum mechanics. Yet Newton's Laws remain practical and true in their domain of validity. It's similar to the logic you apply when you look at a map. You decide the scale appropriate to your journey — are you traveling across the country, going upstate, or looking for the nearest grocery store — and use the map scale appropriate to your question.

Terms that refer to specific scientific results can be efficient at times but they can also be misleading when taken out of context and not supported by true scientific investigation. But the scientific methods for seeking, testing, and identifying answers and understanding the limitations of what we have investigated will always be reliable ways of acquiring knowledge. A better understanding of the robustness and limitations of what science establishes, as well as probabilistic results and predictions, could make the world a better place.

People like to think of technologies and media as neutral and that only their use or content determines their impact. Guns don't kill people, after all, people kill people. But guns are much more biased toward killing people than, say, pillows — even though many a pillow has been utilized to smother an aging relative or adulterous spouse.

Our widespread inability to recognize or even acknowledge the biases of the technologies we use renders us incapable of gaining any real agency through them. We accept our iPads, Facebook accounts and automobiles at face value — as pre-existing conditions — rather than tools with embedded biases.

Marshall McLuhan exhorted us to recognize that our media have impacts on us beyond whatever content is being transmitted through them. And while his message was itself garbled by the media through which he expressed it (the medium is the what?) it is true enough to be generalized to all technology. We are free to use any car we like to get to work — gasoline, diesel, electric, or hydrogen — and this sense of choice blinds us to the fundamental bias of the automobile towards distance, commuting, suburbs, and energy consumption.

Likewise, soft technologies from central currency to psychotherapy are biased in their construction as much as their implementation. No matter how we spend US dollars, we are nonetheless fortifying banking and the centralization of capital. Put a psychotherapist on his own couch and a patient in the chair, and the therapist will begin to exhibit treatable pathologies. It's set up that way, just as Facebook is set up to make us think of ourselves in terms of our "likes" and an iPad is set up to make us start paying for media and stop producing it ourselves.

If the concept that technologies have biases were to become common knowledge, we would put ourselves in a position to implement them consciously and purposefully. If we don't bring this concept into general awareness, our technologies and their effects will continue to threaten and confound us.

The ever-cumulating dispersion, not only of information, but also of population, across the globe, is the great social phenomenon of this age. Regrettably, cultures are being homogenized, but cultural differences are also being demystified, and intermarriage is escalating, across ethnic groups within states and between ethnicities across the world. The effects are potentially beneficial for the improvement of cognitive skills, from two perspectives. We can call these "the expanding in-group" and the "hybrid vigor" effects.

The in-group versus out-group double standard, which had and has such catastrophic consequences, could in theory be eliminated if everyone alive were to be considered to be in everyone else's in-group. This Utopian prospect is remote, but an expansion of the conceptual in-group would expand the range of friendly, supportive and altruistic behavior. This effect may already be in evidence in the increase in charitable activities in support of foreign populations that are confronted by natural disasters. Donors identifying to a greater extent with recipients make this possible. The rise in frequency of international adoptions also indicates that the barriers set up by discriminatory and nationalistic prejudice are becoming porous.

The other potential benefit is genetic. The phenomenon of hybrid vigor in offspring, which is also called heterozygote advantage, derives from a cross between dissimilar parents. It is well established experimentally, and the benefits of mingling disparate gene pools are seen not only in improved physical but also in improved mental development. Intermarriage therefore promises cognitive benefits. Indeed, it may already have contributed to the Flynn effect, the well known worldwide rise in average measured intelligence, by as much as three I.Q. points per decade, over successive decades since the early twentieth century.

Every major change is liable to unintended consequences. These can be beneficial, detrimental or both. The social and cognitive benefits of the intermingling of people and populations are no exception, and there is no knowing whether the benefits are counterweighed or even outweighed by as yet unknown drawbacks. Nonetheless, unintended though they might be, the social benefits of the overall greater probability of in-group status, and the cognitive benefits of increasing frequency of intermarriage entailed by globalization may already be making themselves felt.

JONATHAN HAIDT
Psychologist, University of Virginia; Author, The Happiness Hypothesis

Contingent Superorganism

Humans are the giraffes of altruism. We're freaks of nature, able (at our best) to achieve ant-like levels of service to the group. We readily join together to create superorganisms, but unlike the eusocial insects, we do it with blatant disregard for kinship, and we do it temporarily, and contingent upon special circumstances (particularly intergroup conflict, as is found in war, sports, and business).

Ever since the publication of G. C. Williams' 1966 classic Adaptation and Natural Selection, biologists have joined with social scientists to form an altruism debunkery society. Any human or animal act that appears altruistic has been explained away as selfishness in disguise, linked ultimately to kin selection (genes help copies of themselves), or reciprocal altruism (agents help only to the extent that they can expect a positive return, including to their reputations).

But in the last few years there's been a growing acceptance of the fact that "Life is a self-replicating hierarchy of levels," and natural selection operates on multiple levels simultaneously, as Bert Hölldobler and E. O. Wilson put it in their recent book, The Superorganism. Whenever the free-rider problem is solved at one level of the hierarchy, such that individual agents can link their fortunes and live or die as a group, a superorganism is formed. Such "major transitions" are rare in the history of life, but when they have happened, the resulting superorganisms have been wildly successful. (Eukaryotic cells, multicelled organisms, and ant colonies are all examples of such transitions).

Building on Hölldobler and Wilson's work on insect societies, we can define a "contingent superorganism" as a group of people that form a functional unit in which each is willing to sacrifice for the good of the group in order to surmount a challenge or threat, usually from another contingent superorganism. It is the most noble and the most terrifying human ability. It is the secret of successful hive-like organizations, from the hierarchical corporations of the 1950s to the more fluid dot-coms of today. It is the purpose of basic training in the military. It is the reward that makes people want to join fraternities, fire departments, and rock bands. It is the dream of fascism.

Having the term "contingent superorganism" in our cognitive toolkit may help people to overcome 40 years of biological reductionism and gain a more accurate view of human nature, human altruism, and human potential. It can explain our otherwise freakish love of melding ourselves (temporarily, contingently) into something larger than ourselves.

The most common misunderstanding about science is that scientists seek and find truth. They don't — they make and test models.

Kepler's packing of Platonic solids to explain the observed motion of the planets made pretty good predictions, which were improved by his laws of planetary motion, which were improved by Newton's laws of motion, which were improved by Einstein's general relativity. Kepler didn't become wrong because Newton was right, just as Newton didn't then become wrong because Einstein was right; this succession of models differed in their assumptions, accuracy, and applicability, not their truth.

This is entirely unlike the polarizing battles that define so many areas of life: either my political party, or religion, or lifestyle is right, or yours is, and I believe in mine. The only thing that's shared is the certainty of infallibility.

Building models is very different from proclaiming truths. It's a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don't know, not a weakness to avoid. Bugs are features — violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom.

These are familiar aspects of the work of any scientist, or baby: it's not possible to learn to talk or walk without babbling or toddling to experiment with language and balance. Babies who keep babbling turn into scientists who formulate and test theories for a living. But it doesn't require professional training to make mental models — we're born with those skills. What's needed is not displacing them with the certainty of absolute truths that inhibit the exploration of ideas. Making sense of anything means making models that can predict outcomes and accommodate observations. Truth is a model.

The idea that the brain is basically an engine of prediction is one that will, I believe, turn out to be very valuable not just within its current home (computational cognitive neuroscience) but across the board: for the arts, for the humanities, and for our own personal understanding of what it is to be a human being in contact with the world.

The term 'predictive coding' is currently used in many ways, across a variety of disciplines. The usage I recommend for the Everyday Cognitive Toolkit is, however, more restricted in scope. It concerns the way the brain exploits prediction and anticipation in making sense of incoming signals and using them to guide perception, thought, and action. Used in this way 'predictive coding' names a technically rich body of computational and neuroscientific research (key theorists include Dana Ballard, Tobias Egner, Paul Fletcher, Karl Friston, David Mumford, and Rajesh Rao). This corpus of research uses mathematical principles and models that explore in detail the ways that this form of coding might underlie perception, and inform belief, choice, and reasoning.

The basic idea is simple. It is that to perceive the world is to successfully predict our own sensory states. The brain uses stored knowledge about the structure of the world and the probabilities of one state or event following another to generate a prediction of what the current state is likely to be, given the previous one and this body of knowledge. Mismatches between the prediction and the received signal generate error signals that nuance the prediction or (in more extreme cases) drive learning and plasticity.

We may contrast this with older models in which perception is a 'bottom-up' process, in which incoming information is progressively built (via some kind of evidence accumulation process, starting with simple features and working up) into a high-level model of the world. According to the predictive coding alternative, the reverse is the case. For the most part, we determine the low-level features by applying a cascade of predictions that begin at the very top; with our most general expectations about the nature and state of the world providing constraints on our successively more detailed (fine grain) predictions.
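The core loop (predict, compare, let only the mismatch update the model) can be sketched in a few lines. This is a deliberately minimal delta-rule toy of my own; real predictive-coding models are hierarchical and probabilistic, and the learning rate here is an arbitrary assumption:

```python
def predictive_estimate(signals, learning_rate=0.3, prior=0.0):
    """Track a sensory stream by prediction: at each step the current
    prediction is compared with the incoming signal, and only the
    prediction *error* nudges the internal model."""
    estimate = prior
    errors = []
    for signal in signals:
        error = signal - estimate          # mismatch with expectation
        estimate += learning_rate * error  # error, not the raw signal, drives learning
        errors.append(abs(error))
    return estimate, errors

# In a steady world, errors shrink as predictions converge...
_, errors = predictive_estimate([10.0] * 12)
print(errors[0], errors[-1])
# ...until the world changes and a large error signals new learning.
_, errors = predictive_estimate([10.0] * 12 + [20.0])
print(errors[-1])
```

Once the prediction matches the world, the error signal (and hence the work the system must do) falls nearly to zero; a surprising input produces a large error that drives plasticity, exactly the division of labor the predictive-coding story describes.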

This inversion has some quite profound implications.

First, the notion of good ('veridical') sensory contact with the world becomes a matter of applying the right expectations to the incoming signal. Subtract such expectations and the best we can hope for are prediction errors that elicit plasticity and learning. This means, in effect, that all perception is some form of 'expert perception', and that the idea of accessing some kind of unvarnished sensory truth is untenable (unless that merely names another kind of trained, expert perception!).

Second, the time course of perception becomes critical. Predictive coding models suggest that what emerges first is the general gist (including the general affective feel) of the scene, with the details becoming progressively filled in as the brain uses that larger context — time and task allowing — to generate finer and finer predictions of detail. There is a very real sense in which we properly perceive the forest before the trees.

Third, the line between perception and cognition becomes blurred. What we perceive (or think we perceive) is heavily determined by what we know, and what we know (or think we know) is constantly conditioned on what we perceive (or think we perceive). This turns out to offer a powerful window on various pathologies of thought and action, explaining the way hallucinations and false beliefs go hand-in-hand in schizophrenia, as well as other more familiar states such as 'confirmation bias' (our tendency to 'spot' confirming evidence more readily than disconfirming evidence).

Fourth, if we now consider that prediction errors can be suppressed not just by changing predictions but by changing the things predicted, we have a simple and powerful explanation for behavior and the way we manipulate and sample our environment. In this view, action is there to make predictions come true; this provides a nice account of phenomena that range from homeostasis to the maintenance of our emotional and interpersonal status quo.

Understanding perception as prediction thus offers, it seems to me, a powerful tool for appreciating both the power and the potential pitfalls of our primary way of being in contact with the world. Our primary contact with the world, all this suggests, is via our expectations about what we are about to see or experience. The notion of predictive coding, by offering a concise and technically rich way of gesturing at this fact, provides a cognitive tool that will more than earn its keep in science, law, ethics, and the understanding of our own daily experience.

You see the pattern everywhere: the top 1% of the population controls 35% of the wealth. On Twitter, the top 2% of users send 60% of the messages. In the health care system, the treatment of the most expensive fifth of patients creates four-fifths of the overall cost. These figures are always reported as shocking, as if the normal order of things has been disrupted, as if the appearance of anything other than a completely linear distribution of money, or messages, or effort, is a surprise of the highest order.

It's not. Or rather, it shouldn't be.

The Italian economist Vilfredo Pareto undertook a study of market economies a century ago, and discovered that no matter what the country, the richest quintile of the population controlled most of the wealth. The effects of this Pareto Distribution go by many names — the 80/20 Rule, Zipf's Law, the Power Law distribution, Winner-Take-All — but the basic shape of the underlying distribution is always the same: the richest or busiest or most connected participants in a system will account for much much more wealth, or activity, or connectedness than average.

Furthermore, this pattern is recursive. Within the top 20% of a system that exhibits a Pareto distribution, the top 20% of that slice will also account for disproportionately more of whatever is being measured, and so on. The most highly ranked element of such a system will be much more highly weighted than even the #2 item in the same chart. (The word "the" is not only the commonest word in English, it appears twice as often as the second most common, "of".)

This pattern was so common that Pareto called it a "predictable imbalance"; despite this bit of century-old optimism, however, we are still failing to predict it, even though it is everywhere.

Part of our failure to expect the expected is that we have been taught that the paradigmatic distribution of large systems is the Gaussian distribution, commonly known as the bell curve. In a bell curve distribution like height, say, the average and the median (the middle point in the system) are the same — the average height of a hundred American women selected at random will be about 5'4", and the height of the 50th woman, ranked in height order, will also be 5'4".

Pareto distributions are nothing like that — the recursive 80/20 weighting means that the average is far from the middle. This in turn means that in such systems most people (or whatever is being measured) are below average, a pattern encapsulated in the old economics joke: "Bill Gates walks into a bar and makes everybody a millionaire, on average."
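The Bill Gates joke can be checked with a few lines of code. The incomes below are hypothetical, drawn from a Pareto distribution with an invented shape and scale, purely to show the mean drifting far above the median:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical incomes: Pareto-distributed, scaled to a plausible range.
# Shape (1.5) and scale (30,000) are invented for illustration.
incomes = (rng.pareto(1.5, 100_000) + 1) * 30_000

mean, median = incomes.mean(), np.median(incomes)
below_average = (incomes < mean).mean()

print(mean > median)     # True: the average sits far above the middle
print(below_average)     # roughly 0.8: most people earn "below average"
```

In a Gaussian world, half the sample would sit below the mean; here, around four-fifths does, because a small number of enormous incomes drag the average upward without moving the median.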

The Pareto distribution shows up in a remarkably wide array of complex systems. Together, "the" and "of" account for 10% of all words used in English. The most volatile day in the history of a stock market will typically be twice as volatile as the second-most volatile day, and ten times as volatile as the tenth. Tag frequency on Flickr photos obeys a Pareto distribution, as does the magnitude of earthquakes, the popularity of books, the size of asteroids, and the social connectedness of your friends. The Pareto Principle is so basic to the sciences that special graph paper that shows Pareto distributions as straight lines rather than as steep curves is manufactured by the ream.

And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world. We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming.

This doesn't mean that such distributions are beyond our ability to affect. A Pareto curve's decline from head to tail can be more or less dramatic, and in some cases, political or social intervention can affect that slope — tax policy can raise or lower the share of income of the top 1% of a population, just as there are ways to constrain the overall volatility of markets, or to reduce the band in which health care costs can fluctuate.

However, until we assume such systems are Pareto distributions, and will remain so even after any such intervention, we haven't even started thinking about them in the right way; in all likelihood, we're trying to put a Pareto peg in a Gaussian hole. A hundred years after the discovery of this predictable imbalance, we should finish the job and actually start expecting it.

We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced. A great graphic designer will generate lots of ideas knowing that most will be aborted. A great dancer realizes most new moves will not succeed. Ditto for any architect, electrical engineer, sculptor, marathoner, startup maven, or microbiologist. What is science, after all, but a way to learn from things that don't work rather than just those that do? What this tool suggests is that you should aim for success while being prepared to learn from a series of failures. Moreover, you should carefully but deliberately press your successful investigations or accomplishments to the point that they break, flop, stall, crash, or fail.

Failure was not always so noble. In fact, in much of the world today failure is still not embraced as a virtue. It is a sign of weakness, and often a stigma that prohibits second chances. Children in many parts of the world are taught that failure brings disgrace, and that one should do everything in one's power to succeed without failure. The rise of the West is in many respects due to the rise in tolerating failure. Indeed many immigrants trained in a failure-intolerant culture may blossom out of stagnancy once moved into a failure-tolerant culture. Failure liberates success.

The chief innovation that science brought to the state of defeat is a way to manage mishaps. Blunders are kept small, manageable, constant, and trackable. Flops are not quite deliberate, but they are channeled so that something is learned each time things fail. It becomes a matter of failing forward.

Science itself is learning how to better exploit negative results. Due to the cost of distribution, most negative results have not been shared, limiting their potential to speed learning for others. But increasingly, published negative results (which include experiments that succeed in showing no effects) are becoming another essential tool in the scientific method.

Wrapped up in the idea of embracing failure is the related notion of breaking things to make them better, particularly complex things. Often the only way to improve a complex system is to probe its limits by forcing it to fail in various ways. Software, among the most complex things we make, is usually tested for quality by employing engineers to systematically find ways to crash it. Similarly, one way to troubleshoot a complicated device that is broken is to deliberately force negative results (temporary breaks) in its multiple functions in order to locate the actual dysfunction. Great engineers have a respect for breaking things that sometimes surprises non-engineers, just as scientists have a patience with failures that often perplexes outsiders. But the habit of embracing negative results is one of the most essential tricks to gaining success.

One of the greatest scientific insights of the twentieth century was that most psychological processes are not conscious. But the "unconscious" that made it into the popular imagination was Freud's irrational unconscious — the unconscious as a roiling, passionate id, barely held in check by conscious reason and reflection. This picture is still widespread even though Freud has been largely discredited scientifically.

The "unconscious" that has actually led to the greatest scientific and technological advances might be called Turing's rational unconscious. If the vision of the "unconscious" you see in movies like Inception were scientifically accurate, it would include phalanxes of nerds with slide rules, instead of women in negligees wielding revolvers amid Daliesque landscapes. At least that might lead the audience to develop a more useful view of the mind, if not, admittedly, to buy more tickets.

Earlier thinkers like Locke and Hume anticipated many of the discoveries of psychological science but thought that the fundamental building blocks of the mind were conscious "ideas". Alan Turing, the father of the modern computer, began by thinking about the highly conscious and deliberate step-by-step calculations performed by human "computers" like the women decoding German ciphers at Bletchley Park. His first great insight was that the same processes could be instantiated in an entirely unconscious machine with the same results. A machine could rationally decode the German ciphers using the same steps that the conscious "computers" went through. And the unconscious relay and vacuum tube computers could get to the right answers in the same way that the flesh and blood ones could.

Turing's second great insight was that we could understand much of the human mind and brain as an unconscious computer too. The women at Bletchley Park brilliantly performed conscious computations in their day jobs, but they were unconsciously performing equally powerful and accurate computations every time they spoke a word or looked across the room. Discovering the hidden messages about three-dimensional objects in the confusing mess of retinal images is just as difficult and important as discovering the hidden messages about submarines in the incomprehensible Nazi telegrams, and the mind turns out to solve both mysteries in a similar way.

More recently, cognitive scientists have added the idea of probability into the mix, so that we can describe an unconscious mind, and design a computer, that can perform feats of inductive as well as deductive inference. Using this sort of probabilistic logic a system can accurately learn about the world in a gradual, probabilistic way, raising the probability of some hypotheses and lowering that of others, and revising hypotheses in the light of new evidence. This work relies on a kind of reverse engineering. First work out how any rational system could best infer the truth from the evidence it has. Often enough, it will turn out that the unconscious human mind does just that.
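The gradual raising and lowering of hypothesis probabilities described above is Bayesian updating, and it can be sketched in a few lines. Everything below is invented for illustration: two hypotheses about a coin (fair, or biased toward heads) and a short run of observed flips.

```python
def bayes_update(priors, likelihoods):
    """Apply Bayes' rule: posterior is proportional to prior times likelihood."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: j / total for h, j in joint.items()}

# Hypothetical hypotheses: a fair coin (P(heads)=0.5) vs. a biased one (0.8).
p_heads = {"fair": 0.5, "biased": 0.8}
beliefs = {"fair": 0.5, "biased": 0.5}   # start undecided

for flip in "HHTHHHHT":                  # invented evidence, one flip at a time
    lik = {h: p_heads[h] if flip == "H" else 1 - p_heads[h] for h in beliefs}
    beliefs = bayes_update(beliefs, lik)

print(beliefs)   # "biased" has gained probability from the run of heads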

Some of the greatest advances in cognitive science have been the result of this strategy. But they've been largely invisible in popular culture, which has been understandably preoccupied with the sex and violence of much evolutionary psychology (like Freud, it makes for a better movie). Vision science studies how we are able to transform the chaos of stimulation at our retinas into a coherent and accurate perception of the outside world. It is, arguably, the most scientifically successful branch of both cognitive science and neuroscience. It takes off from the idea that our visual system is, entirely unconsciously, making rational inferences from retinal data to figure out what objects are like. Vision scientists began by figuring out the best way to solve the problem of vision, and then discovered, in detail, just how the brain performs those computations.

The idea of the rational unconscious has also transformed our scientific understanding of creatures who have traditionally been denied rationality, such as young children and animals. It should transform our everyday understanding too. The Freudian picture identifies infants with that fantasizing, irrational unconscious, and even on the classic Piagetian view young children are profoundly illogical. But contemporary research shows the enormous gap between what young children say, and presumably what they experience, and their spectacularly accurate if unconscious feats of learning, induction and reasoning. The rational unconscious gives us a way of understanding how babies can learn so much when they consciously seem to understand so little.

Another way the rational unconscious could inform everyday thinking is by acting as a bridge between conscious experience and the few pounds of grey goo in our skulls. The gap between our experience and our brains is so great that people ping-pong between amazement and incredulity at every study that shows that knowledge or love or goodness is "really in the brain" (though where else would it be?). There is important work linking the rational unconscious to both conscious experience and neurology.

Intuitively, we feel that we know our own minds — that our conscious experience is a direct reflection of what goes on underneath. But much of the most interesting work in social and cognitive psychology demonstrates the gulf between our rationally unconscious minds and our conscious experience. Our conscious understanding of probability, for example, is truly awful, in spite of the fact that we unconsciously make subtle probabilistic judgments all the time. The scientific study of consciousness has made us realize just how complex, unpredictable and subtle the relation is between our minds and our experience.

At the same time, to be genuinely explanatory neuroscience has to go beyond "the new phrenology" of simply locating psychological functions in particular brain regions. The rational unconscious lets us understand the how and why of the brain and not just the where. Again, vision science has led the way, with elegant empirical studies showing just how specific networks of neurons can act as computers rationally solving the problem of vision.

Of course, the rational unconscious has its limits. Visual illusions demonstrate that our brilliantly accurate visual system does sometimes get it wrong. Conscious reflection may be misleading sometimes, but it can also provide cognitive prostheses, the intellectual equivalent of glasses with corrective lenses, to help compensate for the limitations of the rational unconscious. The institutions of science do just that.

The greatest advantage of understanding the rational unconscious would be to demonstrate that rational discovery isn't a specialized abstruse privilege of the few we call "scientists", but is instead the evolutionary birthright of us all. Really tapping into our inner vision and inner child might not make us happier or more well-adjusted, but it might make us appreciate just how smart we really are.

NICHOLAS A. CHRISTAKIS
Physician and Social Scientist, Harvard University; Coauthor, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

Synergy and Holism

Some people like to build sand castles, and some like to tear them apart. There can be much joy in the latter. But it is the former that interests me. You can take a bunch of minute silica crystals, pounded for thousands of years by the waves, use your hands, and make an ornate tower. Tiny physical forces govern how each particle interacts with its neighbors, keeping the castle together, at least until the force majeure of a foot appears.

But, having built the castle, this is the part that I like the most: you step back and look at it. Across the expanse of beach, here is something new, something not present before among the endless sand grains, something risen from the ground, something that reflects the scientific principle of holism.

Holism is colloquially summarized as "the whole is greater than the sum of its parts." What is interesting to me, however, are not the artificial instantiations of this principle — when we deliberately form sand into ornate castles or metal into airborne planes or ourselves into corporations — but rather the natural instantiations. The examples are widespread and stunning. Perhaps the most impressive one is that carbon, hydrogen, oxygen, nitrogen, sulfur, phosphorus, iron, and a few other elements, when mixed in just the right way, yield life. And life has emergent properties not present in — nor predictable from — these constituent parts. There is a kind of awesome synergy between the parts.

Hence, I think that the scientific concept that would improve everyone's cognitive toolkit is holism: the abiding recognition that wholes have properties not present in the parts and not reducible to the study of the parts.

For example, carbon atoms have particular, knowable physical and chemical properties. But the atoms can be combined in different ways to make, say, graphite or diamond. The properties of those substances — properties such as darkness and softness and clearness and hardness — are not properties of the carbon atoms, but rather properties of the collection of carbon atoms. Moreover, which particular properties the collection of atoms has depends entirely on how they are assembled — into sheets or pyramids. The properties arise because of the connections between the parts. I think grasping this insight is crucial for a proper scientific perspective on the world. You could know everything about isolated neurons and not be able to say how memory works, or where desire originates.

It is also the case that the whole has a complexity that rises faster than the number of its parts. Consider social networks as a simple illustration. If we have ten people in a group, there is a maximum of 10 × 9 / 2 = 45 possible connections between them. If we increase the number of people to 1,000, the number of possible ties increases to 1,000 × 999 / 2 = 499,500. So, while the number of people has increased 100-fold (from 10 to 1,000), the number of possible ties (and hence this one measure of the complexity of the system) has increased by over 10,000-fold.
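The arithmetic above fits in a one-line function: the number of possible pairwise ties among n people is n(n-1)/2.

```python
def possible_ties(n):
    """Maximum number of pairwise connections among n people."""
    return n * (n - 1) // 2

print(possible_ties(10))                          # 45
print(possible_ties(1_000))                       # 499500
print(possible_ties(1_000) // possible_ties(10))  # 11100: over 10,000-fold
```

Because the count grows quadratically, a 100-fold increase in people always yields roughly a 10,000-fold increase in possible ties.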

Holism does not come naturally. It is an appreciation not of the simple, but of the complex, or at least of the simplicity and coherence in complex things. Moreover, unlike curiosity or empiricism, say, holism takes a while to acquire and to appreciate. It is a very grown-up disposition. Indeed, for the last few centuries, the Cartesian project in science has been to break matter down into ever smaller bits, in the pursuit of understanding. And this works, to some extent. We can understand matter by breaking it down to atoms, then protons and electrons and neutrons, then quarks, then gluons, and so on. We can understand organisms by breaking them down into organs, then tissues, then cells, then organelles, then proteins, then DNA, and so on.

But putting things back together in order to understand them is harder, and typically comes later in the development of a scientist or in the development of science. Think of the difficulties in understanding how all the cells in our bodies work together, as compared with the study of the cells themselves. Whole new fields of neuroscience and systems biology and network science are arising to accomplish just this. And these fields are arising just now, after centuries of stomping on castles in order to figure them out.

WILLIAM CALVIN
Neuroscientist; Professor Emeritus, University of Washington in Seattle; Author, Global Fever: How to Treat Climate Change

Find That Frame

An automatic stage of "Compare and contrast" would improve most cognitive functions, not just the grade on an essay. You set up a comparison — say, that the interwoven melodies of Rock 'n' Roll are like how you must twist when dancing on a boat when the bow is rocking up and down in a different rhythm than the deck is rolling from side to side.

Comparison is an important part of trying ideas on for size, for finding related memories, and exercising constructive skepticism. Without it, you can become trapped in someone else's framing of a problem. You often need to know where someone is coming from — and while Compare 'n' Contrast is your best friend, you may also need to search for the cognitive framing. What has been cropped out of the frame can lead the unwary to an incorrect inference, as when they assume that what is left out is unimportant. For example, "We should reach a 2°C (3.6°F) fever in the year 2049" always makes me want to interject "Unless another abrupt climate shift gets us there next year."

Global warming's ramp-up in temperature is the aspect of climate change that climate scientists can currently calculate — that's where they are coming from. And while this can produce really important insights — even big emission reductions only delay the 2°C fever for 19 years — it leaves out all of those abrupt climate shifts observed since 1976, as when the world's drought acreage doubled in 1982 and jumped from double to triple in 1997, then back to double in 2005. That's like stairs, not a ramp.

Even if we thoroughly understood the mechanism for an abrupt climate shift — likely a rearrangement of the winds that produce Deluge 'n' Drought by delivering ocean moisture elsewhere, though burning down the Amazon rain forest should also trigger a big one — chaos theory's "butterfly effect" says we still could not predict when a big shift will occur or what size it would be. That makes a climate surprise like a heart attack. You can't predict when. You can't say whether it will be minor or catastrophic. But you can often prevent it — in the case of climate, by cleaning up the excess CO2.

Drawing down the CO2 is also typically excluded from the current climate framing. Mere emissions reduction now resembles locking the barn door after the horse is gone — worthwhile, but not exactly recovery either. Politicians usually love locking barn doors as it gives the appearance of taking action cheaply. Emissions reduction only slows the rate at which things get worse, as the CO2 accumulation still keeps growing. (People confuse annual emissions with the accumulation that causes the trouble.) On the other hand, cleaning up the CO2 actually cools things, reverses ocean acidification, and even reverses the thermal expansion portion of rising sea level.

Recently I heard a biologist complaining about models for insect social behavior: "All of the difficult stuff is not mentioned. Only the easy stuff is calculated." Scientists first do what they already know how to do. But their quantitative results are no substitute for a full qualitative account. When something is left out because it is computationally intractable (sudden shifts) or would just be a guess (cleanup), they often don't bother to mention it at all. "Everybody [in our field] knows that" just won't do when people outside the field are hanging on your every word.

So find that frame and ask about what was left out. Like abrupt climate shifts or a CO2 cleanup, it may be the most important consideration of all.

The notion of uncertainty is perhaps the least well understood concept in science. In the public parlance, uncertainty is a bad thing, implying a lack of rigor and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time.

In fact, however, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning.

One of the things that makes uncertainty difficult for members of the public to appreciate is that the significance of uncertainty is relative. Take, for example, the distance between the earth and sun, 1.49597 × 10^8 km. This seems relatively precise: after all, using six significant figures means I know the distance to an accuracy of one part in a million or so. However, if the next digit is uncertain, that means the uncertainty in knowing the precise earth-sun distance is larger than the distance between New York and Chicago!

Whether or not the quoted number is 'precise' therefore depends upon what I am intending to do with it. If I only care about what minute the Sun will rise tomorrow then the number quoted above is fine. If I want to send a satellite to orbit just above the Sun, however, then I would need to know distances more accurately.
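The same point can be put as a back-of-envelope calculation. All figures below are rough approximations, and the two tolerances are hypothetical numbers chosen only to show that "precise enough" depends on the task:

```python
earth_sun_km = 149_597_000   # the distance, quoted to six significant figures
next_digit_km = 1_000        # step of the last quoted digit (the thousands place)
relative_uncertainty = next_digit_km / earth_sun_km

print(f"{relative_uncertainty:.1e}")   # a few parts per million

# Hypothetical tolerances, invented for illustration:
sunrise_tolerance = 1e-4   # good enough to predict sunrise to the minute
probe_tolerance = 1e-9     # needed to thread a satellite close past the Sun

print(relative_uncertainty < sunrise_tolerance)   # True: precise enough
print(relative_uncertainty < probe_tolerance)     # False: nowhere near enough
```

The number itself never changes; only the task does, and with it the verdict on whether the quoted precision suffices.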

This is why uncertainty is so important. Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So too in the public sphere. Public policy made in the absence of an understanding of quantitative uncertainties, or of the difficulty of obtaining reliable estimates of them, usually means bad public policy.

A self-model is the inner representation some information-processing systems have of themselves as a whole. A representation is phenomenally transparent, if it a) is conscious and b) cannot be experienced as a representation. Therefore, transparent representations create the phenomenology of naïve realism, the robust and irrevocable sense that you are directly and immediately perceiving something which must be real. Now apply the second concept to the first: A "transparent self-model", necessarily, creates the realistic conscious experience of selfhood, of being directly and immediately in touch with oneself as a whole.

This concept is important, because it shows how, in a certain class of information-processing systems, the robust phenomenology of being a self would inevitably appear — although they never were, or had, anything like a self. It is empirically plausible that we might just be such systems.

One very old and pervasive habit of thought is to imagine that the true answer to whatever question we are wondering about lies out there in some eternal domain of "timeless truths." The aim of re-search is then to "discover" the answer or solution in that already existing timeless domain. For example, physicists often speak as if the final theory of everything already exists in a vast timeless Platonic space of mathematical objects. This is thinking outside of time.

Scientists are thinking in time when we conceive of our task as the invention of genuinely novel ideas to describe newly discovered phenomena, and novel mathematical structures to express them. If we think outside of time, we believe these ideas somehow "existed" before we invented them. If we think in time we see no reason to presume that.

The contrast between thinking in time and thinking outside of time can be seen in many domains of human thought and action. We are thinking outside of time when, faced with a technological or social problem to solve, we assume the possible approaches are already determined by a set of absolute pre-existing categories. We are thinking in time when we understand that progress in technology, society and science happens by the invention of genuinely novel ideas, strategies, and novel forms of social organization.

The idea that truth is timeless and resides outside the universe was the essence of Plato's philosophy, exemplified in the parable of the slave boy that was meant to argue that discovery is merely remembering. This is reflected in the philosophy of mathematics called Platonism, which is the belief that there are two ways of existing. Regular physical things exist in the universe and are subject to time and change, while mathematical objects exist in a timeless realm. The division of the world into a time-drenched Earthly realm of life, death, change and decay, surrounded by a heavenly sphere of perfect eternal truth, framed both ancient science and Christian religion.

If we imagine that the task of physics is the discovery of a timeless mathematical object that is isomorphic to the history of the world, then we imagine that the truth to the universe lies outside the universe. This is such a familiar habit of thought that we fail to see its absurdity: if the universe is all that exists then how can something exist outside of it for it to be isomorphic to?

On the other hand, if we take the reality of time as evident, then there can be no mathematical object that is perfectly isomorphic to the world, because one property of the real world that is not shared by any mathematical object is that it is always some moment. Indeed, as Charles Sanders Peirce first observed, the hypothesis that the laws of physics evolved through the history of the world is necessary if we are to have a rational understanding of why one particular set of laws hold, rather than others.

Thinking outside of time often implies the existence of an imagined realm outside the universe where the truth lies. This is a religious idea, because it means that explanations and justifications ultimately refer to something outside of the world we experience ourselves to be a part of. If we insist there is nothing outside the universe, not even abstract ideas or mathematical objects, we are forced to find the causes of phenomena entirely within our universe. So thinking in time is also thinking within the one universe of phenomena our observations show us to inhabit.

Among contemporary cosmologists and physicists, proponents of eternal inflation and timeless quantum cosmology are thinking outside of time. Proponents of evolutionary and cyclic cosmological scenarios are thinking in time. If you think in time you worry about time ending at space-time singularities. If you think outside of time this is an ignorable problem because you believe reality is the whole history of the world at once.

Darwinian evolutionary biology is the prototype for thinking in time because at its heart is the realization that natural processes developing in time can lead to the creation of genuinely novel structures. Even novel laws can emerge when the structures to which they apply come to exist. Evolutionary dynamics has no need of abstract and vast spaces like all the possible viable animals, DNA sequences, sets of proteins, or biological laws. Exaptations are too unpredictable and too dependent on the whole suite of living creatures to be analyzed and coded into properties of DNA sequences. Better, as Stuart Kauffman proposes, to think of evolutionary dynamics as the exploration, in time, by the biosphere, of the adjacent possible.

The same goes for the evolution of technologies, economies and societies. The poverty of the conception that economic markets tend to unique equilibria, independent of their histories, shows the danger of thinking outside of time. Meanwhile the path dependence that Brian Arthur and others show is necessary to understand real markets illustrates the kind of insights that are gotten by thinking in time.

Thinking in time is not relativism, it is a form of relationalism. Truth can be both time bound and objective, when it is about objects that only exist once they are invented, by evolution or human thought.

When we think in time we recognize the human capacity to invent genuinely novel constructions and solutions to problems. When we think about the organizations and societies we live and work in outside of time we unquestioningly accept their strictures, and seek to manipulate the levers of bureaucracy as if they had an absolute reason to be there. When we think about organizations in time we recognize that every feature of them is a result of their history and everything about them is negotiable and subject to improvement by the invention of novel ways of doing things.

My reference point (as a playwright, not a scientist) was Keats's notion of negative capability (from his letters). Being able to exist with lucidity and calm amidst uncertainty, mystery and doubt, without "irritable (and always premature) reaching out" after fact and reason.

This toolkit notion of negative capability is a profound therapy for all manner of ills — intellectual, psychological, spiritual and political. I reflect it (amplify it) with Emerson's notion that "Art (any intellectual activity?) is (best thought of as but) the path of the creator to his work."

Bumpy, twisting roads. (New York City is about to repave my cobblestone street with smooth asphalt. Evil bureaucrats and tunnel-visioned "scientists" — fast cars and more tacky up-scale stores in Soho.)

Wow! I'll bet my contribution is shorter than anyone else's. Is this my inadequacy or an important toolkit item heretofore overlooked?

Our perceptions are neither true nor false. Instead, our perceptions of space and time and objects, the fragrance of a rose, the tartness of a lemon, are all a part of our "sensory desktop," which functions much like a computer desktop.

Graphical desktops for personal computers have existed for about three decades. Yet they are now such an integral part of daily life that we might easily overlook a useful concept that they embody. A graphical desktop is a guide to adaptive behavior. Computers are notoriously complex devices, more complex than most of us care to learn. The colors, shapes and locations of icons on a desktop shield us from the computer's complexity, and yet they allow us to harness its power by appropriately informing our behaviors, such as mouse movements and button clicks, that open, delete and otherwise manipulate files. In this way, a graphical desktop is a guide to adaptive behavior.

Graphical desktops make it easier to grasp the idea that guiding adaptive behavior is different than reporting truth. A red icon on a desktop does not report the true color of the file it represents. Indeed, a file has no color. Instead, the red color guides adaptive behavior, perhaps by signaling the relative importance or recent updating of the file. The graphical desktop guides useful behavior, and hides what is true but not useful. The complex truth about the computer's logic gates and magnetic fields is, for the purposes of most users, of no use.

Graphical desktops thus make it easier to grasp the nontrivial difference between utility and truth. Utility drives evolution by natural selection. Grasping the distinction between utility and truth is therefore critical to understanding a major force that shapes our bodies, minds and sensory experiences.

Consider, for instance, facial attractiveness. When we glance at a face we get an immediate feeling of its attractiveness, a feeling that usually falls somewhere between hot and not. That feeling can inspire poetry, evoke disgust, or launch a thousand ships. It certainly influences dating and mating. Research in evolutionary psychology suggests that this feeling of attractiveness is a guide to adaptive behavior. The behavior is mating, and the initial feeling of attractiveness towards a person is an adaptive guide because it correlates with the likelihood that mating with that person will lead to successful offspring.

Just as red does not report the true color of a file, so hotness does not report the true feeling of attractiveness of a face: Files have no intrinsic color, faces have no intrinsic feeling of attractiveness. The color of an icon is an artificial convention to represent aspects of the utility of a colorless file. The initial feeling of attractiveness is an artificial convention to represent mate utility.

The phenomenon of synesthesia can help to understand the conventional nature of our sensory experiences. In many cases of synesthesia, a stimulus that is normally experienced in one way, say as a sound, is also automatically experienced in another way, say as a color. Someone with sound-color synesthesia sees colors and simple shapes whenever they hear a sound. The same sound always occurs with the same colors and shapes. Someone with taste-touch synesthesia feels touch sensations in their hands every time they taste something with their mouth. The same taste always occurs with the same feeling of touch in their hands. The particular connections between sound and color that one sound-color synesthete experiences typically differ from the connections experienced by another such synesthete. In this sense, the connections are an arbitrary convention. Now imagine a sound-color synesthete who no longer has sound experiences to acoustic stimuli, and instead has only their synesthetic color experiences. Then this synesthete would only experience as colors what the rest of us experience as sounds. In principle they could get all the acoustic information the rest of us get, only in a color format rather than a sound format.

This leads to the concept of a sensory desktop. Our sensory experiences, such as vision, sound, taste and touch, can all be thought of as sensory desktops that have evolved to guide adaptive behavior, not to report objective truths. As a result, we should take our sensory experiences seriously. If something tastes putrid, we probably shouldn't eat it. If it sounds like a rattlesnake, we probably should avoid it. Our sensory experiences have been shaped by natural selection to guide such adaptive behaviors.

We must take our sensory experiences seriously, but not literally. This is one place where the concept of a sensory desktop is helpful. We take the icons on a graphical desktop seriously; we won't, for instance, carelessly drag an icon to the trash, for fear of losing a valuable file. But we don't take the colors, shapes or locations of the icons literally. They are not there to resemble the truth. They are there to facilitate useful behaviors.

Sensory desktops differ across species. A face that could launch a thousand ships probably has no attraction to a macaque monkey. The rotting carrion that tastes putrid to me might taste like a delicacy to a vulture. My taste experience guides behaviors appropriate for me: Eating rotten carrion could kill me. The vulture's taste experience guides behaviors appropriate to it: Carrion is its primary food source.

Much of evolution by natural selection can be understood as an arms race between competing sensory desktops. Mimicry and camouflage exploit limitations in the sensory desktops of predators and prey. A mutation that alters a sensory desktop to reduce such exploitation conveys a selective advantage. This cycle of exploiting and revising sensory desktops is a creative engine of evolution.

On a personal level, the concept of a sensory desktop can enhance our cognitive toolkit by refining our attitude towards our own perceptions. It is common to assume that the way I see the world is, at least in part, the way it really is. Because, for instance, I experience a world of space and time and objects, it is common to assume that these experiences are, or at least resemble, objective truths. The concept of a sensory desktop reframes all this. It loosens the grip of sensory experiences on the imagination. Space, time and objects might just be aspects of a sensory desktop that is specific to Homo sapiens. They might not be deep insights into objective truths, just convenient conventions that have evolved to allow us to survive in our niche. Our desktop is just a desktop.

Do you know the PDF of your shampoo? A "PDF" refers to a "partially diminished fraction of an ecosystem," and if your shampoo contains palm oil cultivated on clearcut jungle in Borneo, say, that value will be high. How about your shampoo's DALY? This measure comes from public health: "disability adjusted life years," the amount of one's life that will be lost to a disabling disease because of, say, a lifetime's cumulative exposure to a given industrial chemical. So if your favorite shampoo contains either of two common ingredients, the carcinogen 1,4-dioxane or the endocrine disrupter BHA, its DALY will be higher.

PDFs and DALYs are among myriad metrics for Anthropocene thinking, which views how human systems impact the global systems that sustain life. This way of perceiving interactions between the built and the natural worlds comes from the geological sciences. If adopted more widely this lens might usefully inform how we find solutions to the singular peril our species faces: the extinction of our ecological niche.

Beginning with cultivation and accelerating with the Industrial Revolution, our planet left the Holocene Age and entered what geologists call the Anthropocene Age, in which human systems erode the natural systems that support life. Through the Anthropocene lens, the daily workings of the energy grid, transportation, industry and commerce inexorably deteriorate global biogeochemical systems like the carbon, phosphorus and water cycles. The most troubling data suggests that since the 1950s, the human enterprise has led to an explosive acceleration that will reach criticality within the next few decades as different systems reach a point-of-no-return tipping point. For instance, about half the total rise in atmospheric CO2 concentration has occurred in just the last 30 years — and of all the global life-support systems, the carbon cycle is closest to no-return. While such "inconvenient truths" about the carbon cycle have been the poster child for our species' slow motion suicide, that's just part of a much larger picture, with all eight global life-support systems under attack by our daily habits.

Anthropocene thinking tells us the problem is not necessarily inherent in the systems like commerce and energy that degrade nature; hopefully these can be modified to become self-sustaining with innovative advances and entrepreneurial energy. The real root of the Anthropocene dilemma lies in our neural architecture.

We approach the Anthropocene threat with brains shaped in evolution to survive the previous geological epoch, the Holocene, when dangers were signaled by growls and rustles in the bushes, and it served one well to reflexively abhor spiders and snakes. Our neural alarm systems still attune to this largely antiquated range of danger.

Add to that misattunement to threats our built-in perceptual blindspot: we have no direct neural register for the dangers of the Anthropocene age, which are too macro or micro for our sensory apparatus. We are oblivious to, say, our body burden, the lifetime build-up of damaging industrial chemicals in our tissues.

To be sure, we have methods for assessing CO2 buildups or blood levels of BHA. But for the vast majority of people those numbers have little to no emotional impact. Our amygdala shrugs.

Finding ways to counter the forces that feed the Anthropocene effect should count high in prioritizing scientific efforts. The earth sciences of course embrace the issue — but do not deal with the root of the problem, human behavior. The sciences that have most to offer have done the least Anthropocene thinking.

The fields that hold keys to solutions include economics, neuroscience, social psychology and cognitive science — and their various hybrids. With a focus on Anthropocene theory and practice they might well contribute species-saving insights. But first they have to engage this challenge, which for the most part has remained off their agenda.

When, for example, will neuroeconomics tackle the brain's perplexing indifference to the news about planetary meltdown, let alone how that neural blindspot might be patched? Might cognitive neuroscience one day offer some insight that might change our collective decision-making away from a lemmings' march to oblivion? Could any of the computer, behavioral or brain sciences come up with an information prosthetic that might reverse our course?

Paul Crutzen, the Dutch atmospheric chemist who won a Nobel for his work on ozone depletion, coined the term 'Anthropocene' ten years ago. As a meme, 'Anthropocene' has as yet little traction in scientific circles beyond geology and environmental science, let alone the wider culture: A Google check on 'anthropocene' shows 78,700 references (mainly in geoscience), while by contrast 'placebo', a once-esoteric medical term now well-established as a meme, has more than 18 million (and even the freshly coined 'vuvuzela' has 3,650,000).

There is a well-known saying: dividing the universe into things that are linear and those that are non-linear is very much like dividing the universe into things that are bananas and things that are not. Many things are not bananas.

Non-linearity is a hallmark of the real world. It occurs any time the outputs of a system cannot be expressed as a sum of its inputs, each multiplied by a simple constant — a rare occurrence in the grand scheme of things. Non-linearity does not necessarily imply complexity, just as linearity does not exclude it, but most real systems do exhibit some non-linear feature that results in complex behaviour. Some, like the turbulent stream from a water tap, hide deep non-linearity under domestic simplicity, while others, weather for example, are evidently non-linear to the most distracted of observers. Non-linear complex dynamics are all around us: unpredictable variability, tipping points, sudden changes in behaviour and hysteresis are frequent symptoms of a non-linear world.
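These symptoms can be seen in miniature in the logistic map, a standard one-line example of non-linear dynamics. The sketch below is my illustration, not the author's; the parameter values are conventional textbook choices:

```python
# The logistic map x -> r*x*(1-x) is about the simplest non-linear system.
# At r = 2.8 it settles quietly onto a fixed point; by r = 3.9 it is
# chaotic, and trajectories starting almost identically soon diverge --
# a one-line illustration of "sudden changes in behaviour".

def logistic_trajectory(r, x0=0.2, steps=1000):
    """Iterate the logistic map `steps` times and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

calm = logistic_trajectory(2.8)                # converges to 1 - 1/r
wild_a = logistic_trajectory(3.9, 0.2)         # chaotic regime
wild_b = logistic_trajectory(3.9, 0.2000001)   # a tiny change in input
# ends far from wild_a
```

Changing `r` by barely a third moves the same formula from total predictability to effective unpredictability, which is why no general-purpose linear toolbox exists for such systems.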

Non-linear complexity also has the unfortunate characteristic of being difficult to manage, high-speed computing notwithstanding, because it tends to lack the generality of linear solutions. As a result we have a tendency to try and view the world in terms of linear models — for much the same reason that looking for lost keys under a lamppost might make sense: that is where the light is. Understanding — of the kind that "rests in the mind" — seems to require simplification, in which complexity is reduced where possible and only the most material parts of the problem are preserved.

One of the most robust bridges between the linear and the non-linear, the simple and the complex, is scale analysis, the dimensional analysis of physical systems. It is through scale analysis that we can often make sense of complex non-linear phenomena in terms of simpler models. At its core reside two questions. The first asks what quantities matter most to the problem at hand (which tends to be less obvious than one would like). The second asks what the expected magnitude and — importantly — dimensions of such quantities are. This second question is particularly important, as it captures the simple yet fundamental point that physical behaviour should be invariant to the units in which we measure quantities. It may sound like an abstraction but, without jargon, you could really call scale analysis "focusing systematically only on what matters most at a given time and place".

There are some subtle facts about scale analysis that make it more powerful than simply comparing orders of magnitude. A most remarkable example is that scale analysis can be applied, through a systematic use of dimensions, even when the precise equations governing the dynamics of a system are not known. The great physicist G.I. Taylor, a character whose prolific legacy haunts any aspiring scientist, gave a famous demonstration of this deceptively simple approach. In the 1950s, back when the detonating power of the nuclear bomb was a carefully guarded secret, the US Government incautiously released some unclassified photographs of a nuclear explosion. Taylor realized that, while its details would be complex, the fundamentals of the problem would be governed by a few parameters. From dimensional arguments, he posited that there ought to be a scale-invariant number linking the radius of the blast, the time from detonation, the energy released in the explosion and the density of the surrounding air. From the photographs, he was able to estimate the radius and timing of the blast, inferring a remarkably accurate — and embarrassingly public — estimate of the energy of the explosion.
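Taylor's argument can be retraced in a few lines. The only combination of energy E, air density ρ and time t with the dimensions of length is (E·t²/ρ)^(1/5), so E ≈ ρR⁵/t² up to a constant of order one. The figures below (a fireball of roughly 140 m at 0.025 s after detonation, often quoted in accounts of the episode) are used purely to illustrate the method, not taken from the essay:

```python
# Dimensional argument: [E] = kg m^2 s^-2, [rho] = kg m^-3, [t] = s.
# The only length built from these is R ~ (E * t**2 / rho)**(1/5),
# hence E ~ rho * R**5 / t**2 (dimensionless constant of order 1).

RHO_AIR = 1.2               # kg/m^3, sea-level air density
JOULES_PER_KILOTON = 4.184e12

def blast_energy(radius_m, time_s, rho=RHO_AIR):
    """Estimate blast energy from fireball radius and time since detonation."""
    return rho * radius_m ** 5 / time_s ** 2

# Often-quoted values from the released photographs (illustrative):
energy_j = blast_energy(radius_m=140.0, time_s=0.025)
energy_kt = energy_j / JOULES_PER_KILOTON   # on the order of tens of kilotons
```

With no knowledge of the governing equations, the estimate lands within a factor of order one of the actual yield, which is exactly the kind of result scale analysis promises: the right magnitude from the right dimensions.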

Taylor's capacity for insight was no doubt uncommon: scale analysis seldom generates such elegant results. Nevertheless, it has a surprisingly wide range of applications and an illustrious history of guiding research in applied sciences, from structural engineering to turbulence theory.

But what of its broader application? The analysis of scales and dimensions can help understand many complex problems, and should be part of everybody's toolkit. In business planning and financial analysis for example, the use of ratios and benchmarks is a first step towards scale analysis. It is certainly not a coincidence that they became common management tools at the height of Taylorism — a different Taylor, F.W. Taylor the father of modern management theory — when "scientific management" and its derivatives made their first mark. The analogy is not without problems and would require more detail than there is space for here — for example, on the use of dimensions to infer relations between quantities. But inventory turnover, profit margin, debt and equity ratios, and labour and capital productivity are dimensional parameters that could tell us a great deal about the basic dynamics of business economics, even without detailed market knowledge or the day-to-day dynamics of individual transactions.

In fact, scale analysis in its simplest form can be applied to almost every quantitative aspect of daily life, from the fundamental timescales governing our expectations on returns on investments, to the energy intensity of our lives. Ultimately, scale analysis is a particular form of numeracy — one where the relative magnitude, as well as the dimensions of things that surround us, guide our understanding of their meaning and evolution. It almost has the universality and coherence of Warburg's Mnemosyne Atlas: a unifying system of classification, where distant relations between seemingly disparate objects can continuously generate new ways of looking at problems and, through simile and dimension, can often reveal unexpected avenues of investigation.

Of course, anytime a complicated system is translated into a simpler one, information is lost. Scale analysis is a tool that will only be as insightful as the person using it. By itself, it does not provide answers and is no substitute for deeper analysis. But it offers a powerful lens through which to view reality and to understand "the order of things".

"I am large, I contain multitudes" wrote Walt Whitman. I have never met two people who were alike. I am an identical twin, and even we are not alike. Every individual has a distinct personality, a different cluster of thoughts and feelings that color all their actions. But there are patterns to personality: people express different styles of thinking and behaving — what psychologists call "temperament dimensions." I offer this concept of temperament dimensions as a useful new member of our cognitive tool kit.

Personality is composed of two fundamentally different types of traits: those of "character" and those of "temperament." Your character traits stem from your experiences. Your childhood games; your family's interests and values; how people in your community express love and hate; what relatives and friends regard as courteous or perilous; how those around you worship; what they sing; when they laugh; how they make a living and relax: innumerable cultural forces build your unique set of character traits. The balance of your personality is your temperament, all the biologically based tendencies that contribute to your consistent patterns of feeling, thinking and behaving. As the Spanish philosopher José Ortega y Gasset put it, "I am, plus my circumstances." Temperament is the "I am," the foundation of who you are.

Some 40% to 60% of the observed variance in personality is due to traits of temperament. They are heritable, relatively stable across the life course, and linked to specific gene pathways and/or hormone or neurotransmitter systems. Moreover, our temperament traits congregate in constellations, each aggregation associated with one of four broad, interrelated yet distinct brain systems: those associated with dopamine, serotonin, testosterone and estrogen/oxytocin. Each constellation of temperament traits constitutes a distinct temperament dimension.

For example, specific alleles in the dopamine system have been linked with exploratory behavior, thrill-, experience- and adventure-seeking, susceptibility to boredom and lack of inhibition. Enthusiasm has been coupled with variations in the dopamine system, as have lack of introspection, increased energy and motivation, physical and intellectual exploration, cognitive flexibility, curiosity, idea generation and verbal and non-linguistic creativity.

The suite of traits associated with the serotonin system includes sociability, lower levels of anxiety, higher scores on scales of extroversion, and lower scores on a scale of "No Close Friends," as well as positive mood, religiosity, conformity, orderliness, conscientiousness, concrete thinking, self-control, sustained attention, low novelty seeking, and figural and numeric creativity.

Heightened attention to detail, intensified focus, and narrow interests are some of the traits linked with prenatal testosterone expression. But testosterone activity is also associated with emotional containment, emotional flooding (particularly rage), social dominance and aggressiveness, less social sensitivity, and heightened spatial and mathematical acuity.

Last, the constellation of traits associated with the estrogen and related oxytocin system include verbal fluency and other language skills, empathy, nurturing, the drive to make social attachments and other prosocial aptitudes, contextual thinking, imagination, and mental flexibility.

We are each a different mix of these four broad temperament dimensions. But we do have distinct personalities. People are malleable, of course; but we are not blank slates upon which the environment inscribes personality. A curious child tends to remain curious, although what he or she is curious about changes with maturity. Stubborn people remain obstinate; orderly people remain punctilious; and agreeable men and women tend to remain amenable.

We are capable of acting "out of character," but doing so is tiring. People are biologically inclined to think and act in specific patterns — temperament dimensions. But why would this concept of temperament dimensions be useful in our human cognitive tool kit? Because we are social creatures, and a deeper understanding of who we (and others) are can provide a valuable tool for understanding, pleasing, cajoling, reprimanding, rewarding and loving others — from friends and relatives to world leaders. It's also practical.

Take hiring. Those expressive of the novelty-seeking temperament dimension are unlikely to do their best in a job requiring rigid routines and schedules. Biologically cautious individuals are not likely to be comfortable in high-risk posts. Decisive, tough minded high testosterone types are not well suited to work with those who can't get to the point and decide quickly. And those predominantly of the compassionate, nurturing high estrogen temperament dimension are not likely to excel at occupations that require them to be ruthless.

Managers might form corporate boards containing all four broad types. Colleges might place freshmen with roommates of a similar temperament, rather than similarity of background. Perhaps business teams, sports teams, political teams and teacher-student teams would operate more effectively if they were either more "like-minded" or more varied in their cognitive skills. And certainly we could communicate with our children, lovers, colleagues and friends more effectively. We are not puppets on a string of DNA. Those biologically susceptible to alcoholism, for example, often give up drinking. The more we come to understand our biology, the more we will appreciate how culture molds our biology.

ARISE, or Adaptive Regression In the Service of the Ego, is a psychoanalytic concept recognized for decades, but little appreciated today. It is one of the ego functions which, depending on who you ask, may number anywhere from a handful to several dozen. They include reality testing, stimulus regulation, defensive function and synthetic integration. For simplicity, we can equate the ego with the self (though ARISS doesn't quite roll off the tongue).

In most fields, including psychiatry, regression is not considered a good thing. Regression implies a return to an earlier and inferior state of being and functioning. But the key here is not the regression, but rather whether the regression is maladaptive or adaptive.

There are numerous vital experiences that cannot be achieved without adaptive regression: The creation and appreciation of art, music, literature and food; the ability to sleep; sexual fulfillment; falling in love; and, yes, the ability to free associate and tolerate psychoanalysis or psychodynamic therapy without getting worse. Perhaps the most important element in adaptive regression is the ability to fantasize, to daydream. The person who has access to their unconscious processes and can mine them, without getting mired in them, can try new approaches, can begin to see things in new ways and, perhaps, can achieve mastery of their pursuits.

In a word: Relax.

It was ARISE that allowed Friedrich August Kekulé to use a daydream about a snake eating its tail as inspiration for his formulation of the structure of the benzene ring. It's what allowed Richard Feynman to simply drop an O-ring into a glass of ice water, show that when cold the ring loses its resilience, and thereby explain the cause of the Space Shuttle Challenger disaster. Sometimes it takes a genius to see that a fifth-grade science experiment is all that is needed to solve a problem.

In another word: Play.

Sometimes in order to progress you need to regress. Sometimes you just have to let go and ARISE.

The second law of thermodynamics, the so-called "arrow of time", popularly associated with entropy (and by association death), is the most widely misunderstood shorthand abstraction in human society today. We need to fix this.

The second law states that, over time, closed systems become more uniform, eventually reaching systemic equilibrium. It is not a question of if a system will reach equilibrium; it is only a question of when.

Living on a single planet, we are all participants in a single physical system which has only one direction — towards systemic equilibrium. The logical consequences are obvious: our environmental, industrial and political systems (even our intellectual and theological systems) will become more homogenous over time. It's already started. The physical resources available to every person on earth, including air, food and water, have already been significantly degraded by the high burn rate of industrialization, just as the intellectual resources available to every person on earth have already been significantly increased by the high distribution rate of globalization.

Human societies are already far more similar than ever before (does anyone really miss dynastic worship?) and it would be very tempting to imagine that a modern democracy based on equal rights and opportunities is the system in equilibrium. That seems unlikely, given our current energy footprint. More likely, if the total system energy is depleted too fast, modern democracies will be compromised as the system crashes to its lowest equilibrium too quickly for socially equitable evolution.

Our one real opportunity is to use the certain knowledge of ever increasing systemic equilibrium to build a model for an equitable and sustainable future. The mass distribution of knowledge and access to information through the world wide web is our civilization's signal achievement. Societies that adopt innovative, predictive and adaptive models designed around a significant, on-going redistribution of global resources will be most likely to survive in the future.

But since we are biologically and socially programmed to avoid discussing entropy (death), we reflexively avoid the subject of systemic changes to our way of life, both as a society and individuals. We think it's a bummer. Instead of examining the real problems, we consume apocalyptic fantasies as "entertainment" and deride our leaders for their impotence. We really need to fix this.

Unfortunately, even this basic concept faces an uphill battle today. In earlier, expansionist phases of society, various metaphorical engines such as "progress" and "destiny" allowed the metaphorical "arrow" to supplant the previously (admittedly spirit-crushing) "wheel" of time. Intellectual positions that supported scientific experimentation and causality were tolerated, even endorsed, as long as they contributed to the arrow's cultural momentum. But in a more crowded and contested world, the limits of projected national power and consumption control have become more obvious. Resurgent strands of populism, radicalism and magical thinking have found mass appeal in their rejection of many rational concepts. But perhaps most significant is the rejection of undisputed physical laws.

The practical effect of this denial on the relationship between the global economy and the climate change debate (for example) is obvious. Advocates propose continuous "good" (green) growth, while denialists propose continuous "bad" (brown) growth. Both sides are more interested in backing winners and losers in a future economic environment predicated on the continuation of today's systems, than accepting the physical inevitability of increasing systemic equilibrium in any scenario.

Of course, any system can temporarily cheat entropy. Hotter particles (or societies) can "steal" the stored energy of colder (or weaker) ones, for a while. But in the end, the rate at which the total energy is burned and redistributed will still determine the speed at which the planetary system will reach its true systemic equilibrium. Whether we extend the lifetime of our local "heat" through war, or improved window insulation, is the stuff of politics. But even if in reality we can't beat the house, it's worth a try, isn't it?

Barbara McClintock was ignored and ridiculed by the scientific community for thirty-two years before winning a Nobel Prize in 1983 for discovering "jumping genes." During the years of hostile treatment by her peers, McClintock didn't publish, preferring to avoid the rejection of the scientific community. Stanley Prusiner faced significant criticism from his colleagues until his prion theory, first proposed in 1982, was confirmed. He, too, went on to win a Nobel Prize, in 1997.

Barry Marshall challenged the medical "fact" that stomach ulcers were caused by acid and stress, and presented evidence that the bacterium H. pylori is the cause. Marshall is quoted as saying, "Everyone was against me."

Progress in medicine was delayed while these "projective thinkers" persisted, albeit on a slower and lonelier course.

Projective thinking is a term coined by Edward de Bono to describe generative rather than reactive thinking. McClintock, Prusiner, and Marshall offered projective thinking; suspending their disbelief regarding accepted scientific views at the time.

Articulate, intelligent individuals can skillfully construct a convincing case to argue almost any point of view. This critical, reactive use of intelligence narrows our vision. In contrast, projective thinking is expansive, "open-ended" and speculative, requiring the thinker to create the context, concepts, and the objectives.

Twenty years of studying maize created a context within which McClintock could speculate. With her extensive knowledge and keen powers of observation, she deduced the significance of the changing color patterns of maize seed. This led her to propose the concept of gene regulation, which challenged the theory of the genome as a static set of instructions passed from one generation to the next.

The work McClintock first reported in 1950, the result of projective thinking, extensive research, persistence, and a willingness to suspend disbelief, wasn't understood or accepted until many years later.

Everything we know, our strongly held beliefs, and, in some cases, even what we consider to be "factual," creates the lens through which we see and experience the world, and can contribute to a critical, reactive orientation. This can serve us well: Fire is hot; it can burn if touched. It can also compromise our ability to observe and to think in an expansive, generative way.

When we cling rigidly to our constructs, as McClintock's peers did, we can be blinded to what's right in front of us. Can we support a scientific rigor that embraces generative thinking and suspension of disbelief? Sometimes science fiction does become scientific discovery.

A structure is recursive if the shape of the whole recurs in the shape of the parts: for example, a circle formed of welded links that are circles themselves. Each circular link might itself be made of smaller circles, and in principle you could have an unbounded nest of circles made of circles made of circles.

The idea of recursive structure came into its own with the advent of computer science (that is, software science) in the 1950s. The hardest problem in software is controlling the tendency of software systems to grow incomprehensibly complex. Recursive structure helps convert impenetrable software rainforests into French gardens — still (potentially) vast and complicated, but much easier to traverse and understand than a jungle.

Benoit Mandelbrot famously recognized that some parts of nature show recursive structure of a sort: a typical coastline shows the same shape or pattern whether you look from six inches or sixty feet or six miles away.

But it also happens that recursive structure is fundamental to the history of architecture, especially to the gothic, renaissance and baroque architecture of Europe — covering roughly the 500 years between the 13th and 18th centuries. The strange case of "recursive architecture" shows us the damage one missing idea can create. It suggests also how hard it is to talk across the cultural Berlin Wall that separates science and art. And the recurrence of this phenomenon in art and nature underlines an important aspect of the human sense of beauty.

The re-use of one basic shape on several scales is fundamental to medieval architecture. But, lacking the idea (and the term) "recursive structure," art historians are forced to improvise ad hoc descriptions each time they need one. This hodgepodge of improvised descriptions makes it hard, in turn, to grasp how widespread recursive structure really is. And naturally, historians of post-medieval art invent their own descriptions—thus obfuscating a fascinating connection between two mutually alien aesthetic worlds.

For example: One of the most important aspects of mature gothic design is tracery — the thin, curvy, carved stone partitions that divide one window into many smaller panes. Recursion is basic to the art of tracery.

Tracery was invented at the cathedral of Reims circa 1220, and used soon after at the cathedral of Amiens. (Along with Chartres, these two spectacular and profound buildings define the High Gothic style.) To move from the characteristic tracery design of Reims to that of Amiens, just add recursion. At Reims, the basic design is a pointed arch with a circle inside; the circle is supported on two smaller arches. At Amiens, the basic design is the same — except that now, the window recurs in miniature inside each smaller arch. (Inside each smaller arch is a still-smaller circle supported on still-smaller arches.)

In the great east window at Lincoln Cathedral, the recursive nest goes one step deeper. This window is a pointed arch with a circle inside; the circle is supported on two smaller arches — much like Amiens. Within each smaller arch is a circle supported on two still-smaller arches. Within each still-smaller arch, a circle is supported on even-smaller arches.
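The nesting described above, from Reims through Amiens to Lincoln, is exactly what a programmer means by a recursive definition. Here is a minimal sketch in Python; the function names and the dictionary encoding are my own illustrative choices, not anything from the architectural literature. Depth 0 gives the Reims pattern, depth 1 Amiens, depth 2 Lincoln.

```python
def tracery(depth):
    """Describe a gothic window as a recursive structure.

    The basic unit is a pointed arch holding a circle supported on two
    smaller arches. If depth > 0, each supporting arch contains the same
    pattern in miniature.
    """
    if depth == 0:
        # Reims: the supporting arches are plain, with nothing inside.
        return {"arch": "pointed", "circle": True, "supports": ["arch", "arch"]}
    # Amiens / Lincoln: each support is the whole window, one level smaller.
    return {"arch": "pointed", "circle": True,
            "supports": [tracery(depth - 1), tracery(depth - 1)]}


def count_arches(window):
    """Total arches in the nest: the outer arch plus those in each support."""
    total = 1
    for support in window["supports"]:
        if isinstance(support, dict):
            total += count_arches(support)
        else:
            total += 1  # a plain arch with nothing inside
    return total
```

Each extra level of depth roughly doubles the number of arches (3 at Reims, 7 at Amiens, 15 at Lincoln), which is why the deeper windows feel so much richer while remaining instantly legible: it is the same shape, applied to itself.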

There are other recursive structures throughout medieval art.

Jean Bony and Erwin Panofsky were two eminent 20th-century art historians. Naturally they both noticed recursive structure. But neither man understood the idea in itself. And so, instead of writing that the windows of Saint-Denis show recursive structure, Bony said that they are "composed of a series of similar forms progressively subdivided in increasing numbers and decreasing sizes." Describing the same phenomenon in a different building, Panofsky writes of the "principle of progressive divisibility (or, to look at it the other way, multiplicability)." Panofsky's "principle of progressive divisibility" is a fuzzy, roundabout way of saying "recursive structure."

Louis Grodecki noticed the same phenomenon—a chapel containing a display-platform shaped like the chapel in miniature, holding a shrine shaped like the chapel in extra-miniature. And he wrote that "This is a common principle of Gothic art." But he doesn't say what the principle is; he doesn't describe it in general or give it a name. William Worringer, too, had noticed recursive structure. He described gothic design as "a world which repeats in miniature, but with the same means, the expression of the whole."

So each historian makes up his own name and description for the same basic idea—which makes it hard to notice that all four descriptions actually describe the same thing. Recursive structure is a basic principle of medieval design; but this simple statement is hard to say or even think if we don't know what "recursive structure" is.

If the literature makes it hard to grasp the importance of recursive structure in medieval art, it's even harder to notice that exactly the same principle recurs in the radically different world of Italian Renaissance design.

George Hersey wrote astutely of Bramante's design (ca 1500) for St Peter's in the Vatican that it consists of "a single macrochapel…, four sets of what I will call maxichapels, sixteen minichapels, and thirty-two microchapels." "The principle [he explains] is that of Chinese boxes — or, for that matter, fractals."

If he had only been able to say that "recursive structure is fundamental to Bramante's thought," the whole discussion would have been simpler and clearer — and an intriguing connection between medieval and renaissance design would have been obvious.

Using instead of ignoring the idea of recursive structure would have had other advantages too.

It helps us understand the connections between art and technology; helps us see the aesthetic principles that guide the best engineers and technologists, and the ideas of clarity and elegance that underlie every kind of successful design. These ideas have practical implications. For one, technologists must study and understand elegance and beauty as design goals; any serious technology education must include art history. And we reflect, also, on the connection between great art and great technology on the one hand and natural science on the other.

But without the right intellectual tool for the job, new instances of recursive structure make the world more complicated instead of simpler and more beautiful.

Given recent research about brain plasticity and the dangers of cognitive load, the most powerful tool in our cognitive arsenal may well be design.
Specifically, we can use design principles and discipline to shape our minds. This is different from learning and acquiring knowledge. It's about designing how each of us thinks, remembers and communicates — appropriately and effectively for the digital age.

Today's popular handwringing about digital technology's effects on cognition has some merit. But rather than predicting a dire future, perhaps we should be trying to achieve a new one.

New neuroscience discoveries give hope. We know that brains are malleable and can change depending on how they are used. The well-known study of London taxi drivers showed that a region of the brain involved in memory formation was physically larger than in non-taxi-driving individuals of similar age. This effect did not extend to London bus drivers, supporting the conclusion that the requirement that London's taxi drivers memorize the multitude of London streets drove structural changes in the hippocampus.

Results from studies like these support the notion that even among adults, the persistent, concentrated use of one neighborhood of the brain really can increase its size, and presumably also its capacity. Not only does intense use change adult brain structure and function region by region, but temporary training, and perhaps even mere mental rehearsal, seems to have an effect as well. A series of studies showed that tactile (Braille character) discrimination can be improved in sighted people who are temporarily blindfolded. Brain scans revealed that participants' visual cortex became more responsive to auditory and tactile input after only five days of blindfolding for over an hour at a time.

The existence of lifelong neuroplasticity is no longer in doubt. The brain runs on a "use it or lose it" motto. So could we "use it to build it right?" Why don't we use the demands of our information-rich, multi-stimuli, fast-paced, multi-tasking, digital existence to expand our cognitive capability? Psychiatrist Dr. Stan Kutcher, an expert on adolescent mental health who has studied the effect of digital technology on brain development, says we probably can: "There is emerging evidence suggesting that exposure to new technologies may push the Net Generation [teenagers and young adults] brain past conventional capacity limitations."

When the straight A student is doing her homework at the same time as five other things online, she is not actually multi-tasking. Instead, she has developed better active working memory and better switching abilities. I can't read my email and listen to iTunes at the same time, but she can. Her brain has been wired to handle the demands of the digital age.

How could we use design thinking to change the way we think? Good design typically begins with some principles and functional objectives. You might aspire to have a strong capacity to perceive and absorb information effectively, concentrate, remember, infer meaning, be creative, write, speak and communicate well, and to enjoy important collaborations and human relationships. How could you design your use (or abstinence) of media to achieve these goals?

Something as old-school as a speed-reading course could increase your input capacity without undermining comprehension. If it made sense in Evelyn Wood's day, it is doubly important now, and we've learned a lot since then about how to read effectively.

Feeling distracted? The simple discipline of reading a few full articles per day rather than just the headlines and summaries could strengthen attention.

Want to be a surgeon? Become a gamer, or rehearse while on the subway. Rehearsal can produce changes in the motor cortex as big as those induced by physical movement. In one study, one group of participants was asked to play a simple five-finger exercise on the piano while another group was asked to think about playing the same "song" in their heads, using the same finger movements, one note at a time. Both groups showed a change in their motor cortex, with the changes in the group who mentally rehearsed the song as great as in the group who physically played the piano.

Losing retention? Decide how far you want to adopt Albert Einstein's law of memory. When asked why he went to the phone book to get his own number, he replied that he only memorizes things he can't look up. There is a lot to remember these days. Between the dawn of civilization and 2003, 5 exabytes of data were collected (an exabyte equals 1 quintillion bytes). Today 5 exabytes of data are collected every two days! Soon it will be 5 exabytes every few minutes. Humans have a finite memory capacity. Can you develop criteria for what to keep inboard and what to leave outboard?
Or want to strengthen your working memory and capability to multitask? Try reverse mentoring — learning with your teenager. This is the first time in history when children are authorities about something important, and the successful ones are pioneers of a new paradigm in thinking. Extensive research shows that people can improve cognitive function and brain efficiency through simple lifestyle changes, such as incorporating memory exercises into their daily routine.

Why don't schools and universities teach design thinking for thinking? We teach physical fitness. But rather than brain fitness we emphasize cramming young heads with information and testing their recall. Why not courses that emphasize designing a great brain?

Does this modest proposal raise the specter of "designer minds?" I don't think so. The design industry is something done to us. I'm proposing we each become designers. But I suppose "I love the way she thinks" could take on new meaning.

ANDRIAN KREYE
Editor, The Feuilleton (Arts and Essays), of the German Daily Newspaper, Sueddeutsche Zeitung, Munich

Free Jazz

It's always worth taking a few cues from the mid-20th-century avant-garde. So when it comes to improving your cognitive toolkit, Free Jazz is perfect. It is a highly evolved new take on an art that has (at least in the West) been framed by a strict set of twelve notes played in exact fractions of bars. It is also the pinnacle of a genre that had begun with the Blues just a half century before Ornette Coleman assembled his infamous double quartet in the A&R Studio in New York City one December day in 1960. In scientific terms, that would mean an evolutionary leap from elementary school math to game theory and fuzzy logic in a mere fifty years.

If you really want to appreciate the mental prowess of Free Jazz players and composers, you should start just one step behind. Half a year before Ornette Coleman's Free Jazz session let loose the improvisational genius of eight of the best musicians of their time, John Coltrane recorded what is still considered the most sophisticated Jazz solo ever — his tour de force through the rapid chord progressions of his composition "Giant Steps".

The film student Daniel Cohen recently animated the notation for Coltrane's solo in a YouTube video. You don't have to be able to read music to grasp Coltrane's intellectual firepower. After the deceptively simple main theme, the notes start to race up and down the five lines of the stave at dizzying speeds and in dizzying patterns. If you also consider that Coltrane used to record unrehearsed music to keep it fresh, you know that he was endowed with a cognitive toolkit way beyond normal.

Now take those 4:43 minutes, multiply Coltrane's firepower by eight, stretch it to 37 minutes, and subtract all traditional musical structures like chord progressions or time. The session that gave the genre its name in the first place did more than foreshadow the radical freedom the album's title implied. It was a precursor to a form of communication that has left linear conventions behind and entered the realm of multiple parallel interactions.

It is admittedly still hard to listen to the album "Free Jazz: A Collective Improvisation by the Ornette Coleman Double Quartet". It is equally taxing to listen to recordings of Cecil Taylor, Pharoah Sanders, Sun Ra, Anthony Braxton or Gunter Hampel. It has always been easier to understand the communication processes of this music in a live setting. One thing is a given — it is never anarchy, and was never meant to be.

If you're able to play music and you manage to get yourself invited to a Free Jazz session, there is an incredible moment when all the musicians find what is considered "The Pulse". It is a collective climax of creativity and communication that can leap to the audience and create an electrifying experience. It's hard to describe, but it might be comparable to the moment when a surfboard becomes the catalyst that brings together the motor skills of the surfer's body and the forces of the ocean's swell in a few seconds of synergy on top of a wave. It is a fusion of musical elements, though, that defies common music theory.

Of course there is a lot of Free Jazz that merely confirms prejudice. Or as the vibraphonist and composer Gunter Hampel phrased it: "At one point it was just about being the loudest on stage." But all the musicians mentioned above have found new forms and structures, Ornette Coleman's music theory called Harmolodics being just one of them. In the perceived cacophony of their music there is a multilayered clarity to discover that can serve as a model for a cognitive toolkit for the 21st century. The ability to find cognitive, intellectual and communication skills that work in parallel contexts rather than linear forms will be crucial. Just as Free Jazz abandoned harmonic structures to find new forms in polyrhythmic settings, one might just have to enable oneself to work beyond proven cognitive patterns.

MATT RIDLEY
Science Writer; Founding chairman of the International Centre for Life; Author, Francis Crick: Discoverer of the Genetic Code

Collective intelligence

Brilliant people, be they anthropologists, psychologists or economists, assume that brilliance is the key to human achievement. They vote for the cleverest people to run governments, they ask the cleverest experts to devise plans for the economy, they credit the cleverest scientists with discoveries, and they speculate on how human intelligence evolved in the first place.

They are all barking up the wrong tree. The key to human achievement is not individual intelligence at all. The reason human beings dominate the planet is not because they have big brains: Neanderthals had big brains but were just another kind of predatory ape. Evolving a 1200-cc brain and a lot of fancy software like language was necessary but not sufficient for civilization. The reason some economies work better than others is certainly not because they have cleverer people in charge, and the reason some places make great discoveries is not because they have smarter people.

Human achievement is entirely a networking phenomenon. It is by putting brains together through the division of labor — through trade and specialisation — that human society stumbled upon a way to raise the living standards, carrying capacity, technological virtuosity and knowledge base of the species. We can see this in all sorts of phenomena: the correlation between technology and connected population size in Pacific islands; the collapse of technology in people who became isolated, like native Tasmanians; the success of trading city states in Greece, Italy, Holland and south-east Asia; the creative consequences of trade.

Human achievement is based on collective intelligence — the nodes in the human neural network are people themselves. By each doing one thing and getting good at it, then sharing and combining the results through exchange, people become capable of doing things they do not even understand. As the economist Leonard Read observed in his essay "I, Pencil" (which I'd like everybody to read), no single person knows how to make even a pencil — the knowledge is distributed in society among many thousands of graphite miners, lumberjacks, designers and factory workers.

That's why, as Friedrich Hayek observed, central planning never worked: the cleverest person is no match for the collective brain at working out how to distribute consumer goods. The idea of bottom-up collective intelligence, which Adam Smith understood and Charles Darwin echoed, and which Hayek expounded in his remarkable essay "The use of knowledge in society", is one idea I wish everybody had in their cognitive toolkit.

GERD GIGERENZER
Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings

Risk Literacy

Literacy — the ability to read and write — is the precondition for an informed citizenship in a participatory democracy. But knowing how to read and write is no longer enough. The breakneck speed of technological innovation has made risk literacy as indispensable in the 21st century as reading and writing were in the 20th century. Risk literacy is the ability to deal with uncertainties in an informed way.

Without it, people jeopardize their health and money and can be manipulated into experiencing unwarranted, even damaging hopes and fears. Yet when considering how to deal with modern threats, policy makers rarely invoke the concept of risk literacy in the general public. To reduce the chances of another financial crisis, proposals have called for stricter laws, smaller banks, reduced bonuses, lower leverage ratios, less short-termism, and other measures.

But one crucial idea was missing: helping the public better understand financial risk. For instance, many of the "NINJAs" (no income, no job, no assets) who lost everything but the shirts on their backs in the subprime crisis didn't realize that their mortgages were variable, not fixed-rate. Another serious problem that risk literacy can help solve is the exploding cost of health care. Tax hikes or rationed care are often presented as the only viable alternatives. Yet by promoting health literacy in patients, better care can be had for less money.

For instance, many parents are unaware that one million U.S. children have unnecessary CT scans annually and that a full body scan can deliver one thousand times the radiation dose of a mammogram, resulting in an estimated 29,000 cancers per year.

I believe that the answer to modern crises is not simply more laws, more bureaucracy, or more money, but, first and foremost, more citizens who are risk literate. This can be achieved by cultivating statistical thinking.

Simply stated, statistical thinking is the ability to understand and critically evaluate uncertainties and risks. Yet 76 percent of U.S. adults and 54 percent of Germans do not know how to express a 1 in 1,000 chance as a percentage (0.1%). Schools spend most of their time teaching children the mathematics of certainty — geometry, trigonometry — and spend little if any time on the mathematics of uncertainty. If taught at all, it is mostly in the form of coin and dice problems that tend to bore young students to death. But statistical thinking could be taught as the art of real-world problem solving, e.g., the risks of drinking, AIDS, pregnancy, horseback riding, and other dangerous things. Out of all mathematical disciplines, statistical thinking connects most directly to a teenager's world.

Even at the university level, law and medical students are rarely taught statistical thinking — even though they are pursuing professions whose very nature it is to deal with matters of uncertainty. U.S. judges and lawyers have been confused by DNA statistics and fallen prey to the prosecutor's fallacy; their British colleagues drew incorrect conclusions about the probability of recurring sudden infant death. Many doctors worldwide misunderstand the likelihood that a patient has cancer after a positive screening test and can't critically evaluate new evidence presented in medical journals. Experts without risk literacy skills are part of the problem rather than the solution.

Unlike basic literacy, risk literacy requires emotional re-wiring: rejecting comforting paternalism and illusions of certainty, and learning to take responsibility and to live with uncertainty. Daring to know. But there is still a long way to go. Studies indicate that most patients want to believe in their doctors' omniscience and don't dare to ask for backing evidence, yet nevertheless feel well-informed after consultations. Similarly, even after the banking crisis, many customers still blindly trust their financial advisors, jeopardizing their fortunes in a consultation that takes less time than they'd spend watching a football game. Many people cling to the belief that others can predict the future and pay fortune tellers for illusory certainty. Every fall, renowned financial institutions forecast next year's Dow and dollar exchange rate, even though their track record is hardly better than chance. We pay $200 billion yearly to a forecasting industry that delivers mostly erroneous future predictions.

Educators and politicians alike should realize that risk literacy is a vital topic for the 21st century. Rather than being nudged into doing what experts believe is right, people should be encouraged and equipped to make informed decisions for themselves. Risk literacy should be taught beginning in elementary school. Let's dare to know — risks and responsibilities are chances to be taken, not avoided.

KEITH DEVLIN
Executive Director, H-STAR Institute, Stanford University; Author, The Unfinished Game: Pascal, Fermat, and the Seventeenth-Century Letter that Made the World Modern

Base rate

The recent controversy about the potential dangers to health of the back-scatter radiation devices being introduced at the nation's airports and the intrusive pat-downs offered as the only alternative by the TSA might well have been avoided had citizens been aware of, and understood, the probabilistic notion of base rate.

Whenever a statistician wants to predict the likelihood of some event based on the available evidence, there are two main sources of information that have to be taken into account:

1. The evidence itself, for which a reliability figure has to be calculated;

2. The likelihood of the event calculated purely in terms of relative incidence.

The second figure here is the base rate. Since it is just a number, obtained by the seemingly dull process of counting, it frequently gets overlooked when there is new information, particularly if that new information is obtained by "clever experts" using expensive equipment. In cases where the event is dramatic and scary, like a terrorist attack on an airplane, failure to take account of the base rate can result in wasting massive amounts of effort and money trying to prevent something that is very unlikely.

For example, suppose that you undergo a medical test for a relatively rare cancer. The cancer has an incidence of 1% among the general population. (That is the base rate.) Extensive trials have shown that the reliability of the test is 79%. More precisely, although the test does not fail to detect the cancer when it is present, it gives a positive result in 21% of the cases where no cancer is present — what is known as a "false positive." When you are tested, the test produces a positive diagnosis. The question is: What is the probability that you have the cancer?

If you are like most people, you will assume that if the test has a reliability rate of nearly 80%, and you test positive, then the likelihood that you do indeed have the cancer is about 80% (i.e., the probability is approximately 0.8). Are you right?

The answer is no. You have focused on the test and its reliability, and overlooked the base rate. Given the scenario just described, the likelihood that you have the cancer is a mere 4.6% (i.e., the probability is 0.046). That's right, there is a less than 5% chance that you have the cancer. Still a worrying possibility, of course. But hardly the scary 80% you thought at first.
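The arithmetic behind that 4.6% is a one-line application of Bayes' rule. Here is a minimal sketch in Python using the numbers from the scenario above (the function name is my own):

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    """P(disease | positive test) by Bayes' rule.

    base_rate: incidence of the disease in the population
    sensitivity: P(positive | disease); the scenario assumes no missed cases, i.e. 1.0
    false_positive_rate: P(positive | no disease)
    """
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# 1% incidence, perfect detection, 21% false positives:
print(round(posterior(0.01, 1.0, 0.21) * 100, 1))  # 4.6
```

The intuition: out of 1,000 people tested, about 10 have the cancer and all of them test positive, but roughly 208 of the 990 healthy people also test positive. The 10 true positives are swamped by the 208 false ones, so a positive result means only about a 1-in-22 chance of cancer.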

In the case of the back-scatter radiation devices at the airports, the base rate for dying in a terrorist attack is lower than many other things we do every day without hesitation. In fact, according to some reports, it is about the same as the likelihood of getting cancer as a result of going through the device.

Findex (n): The degree to which a desired piece of information can be found online.

We are the first humans in history to be able to form just about any question in our minds and know that very likely the answer can be before us in minutes, if not seconds. This omnipresent information abundance is a cognitive toolkit entirely in itself. The actuality of this continues to astonish me.

Although some have written about information overload, data smog, and the like, my view has always been: the more information online, the better, so long as good search tools are available. Sometimes this information is found by directed search using a web search engine, sometimes by serendipity, following links, and sometimes by asking hundreds of people in our social network or hundreds of thousands of people on a question-answering website such as Answers.com, Quora, or Yahoo Answers.

I do not actually know of a real findability index, but tools in the field of information retrieval could be applied to develop one. One of the unsolved problems in the field is how to help the searcher to determine if the information simply is not available.

An Assertion Is Often An Empirical Question, Settled By Collecting Evidence

The most important scientific concept is that an assertion is often an empirical question, settled by collecting evidence. The plural of anecdote is not data, and the plural of opinion is not facts. Quality, peer-reviewed scientific evidence accumulates into knowledge. People's stories are stories, and fiction keeps us going. But science should settle policy.

The archenemy of scientific thinking is conversation. As in typical human conversational discourse, much of which is BS. Personally I have become rather fed up with talking to people. Seriously, it is something of a problem. Fact is, folks are prone to getting pet opinions into their heads and then actually thinking they are true, to the point of obstinacy, even when they have little or no idea of what they are talking about in the first place. We all do it. It is part of how the sloppy, mind-generating piece of meat between our ears that we call the human brain is prone to work. Humans may be the most rational beings on the planet these days — but that's not saying much, considering that the next most rational are chimpanzees.

Take creationism. Along with the global climate issue and parental fear of vaccination, the fact that a big chunk of the American body politic denies evolutionary and paleontological science and actually thinks a god created humans in near-historical times has scientists wondering just what is wrong with the thinking of so many people; mass creationism has been used as a classic example of mass anti-scientific thinking by others responding to this question. But I am not going to focus so much on the usual problem of why creationism is popular as on what many who promote science over creationism think they know about those who deny the reality of Darwin's theory.

A few years back an anti-creationist documentary came out, A Flock of Dodos. Nicely done in many regards, it scored some points against the anti-evolution crowd, but when it came to trying to explain why many Americans are repelled by evolution it was way off base. The reason it was so wrong is that the creator of the film, Randy Olson, went to the wrong people to find out where the problem lies. A (seeming) highlight of the picture featured a bunch of poker-playing Harvard evolutionary scientists gathered around a table to converse and opine on why the yahoos don't like the results of their research. This was a very bad mistake, for the simple reason that evolutionary scientists are truly knowledgeable only about their area of expertise, evolutionary science.

If you really want to know why regular folk think the way they do, you go to the experts on that subject: sociologists. Because A Flock of Dodos never does that, its viewers never find out why creationism thrives in the age of science, or what needs to be done to tame the pseudoscientific beast.

This is not an idle problem. In the last decade big strides have been made in understanding the psychosociology of popular creationism — basically, it flourishes only in seriously dysfunctional societies, and the one sure way to suppress the errant belief is to run countries well enough that the religion creationism depends upon withers to minority status, dragging creationism down with it.

In other words, better societies result in mass acceptance of evolution. Yet getting the word out is proving disturbingly difficult. So the chatty pet theories about why creationism is a problem and what to do about it continue to dominate the national conversation, and pro-creationist opinion remains rock steady (although the share who favor evolution without a God is rising along with the general increase of nonbelievers).

It's not just evolution. A classic example of conversational thinking by a scientist causing trouble was Linus Pauling's obsession with vitamin C. Many ordinary citizens are skeptical of scientists in general. When researchers offer up poorly sustained opinions on matters outside their firm knowledge base it does not help the general situation.

So what can be done? In principle it is simple enough. Scientists should be scientists. We should know better than to cough up committed but dubious opinion on subjects outside our expertise. This does not mean a given scientist has to limit their observations solely to their official field of research. Say a scientist is also a self-taught authority on baseball. By all means ardently discuss that subject the way Stephen Jay Gould used to.

I have long had an intense interest in the myths of World War II, and can offer an excellent discourse on why the atom bombing of Hiroshima and Nagasaki had pretty much nothing to do with ending the war, in case you are interested (it was the Soviet attack on Japan that forced Hirohito to surrender, to save his war criminal's neck and keep Japan from being split into occupation zones like Germany and Korea). But if scientists find themselves being asked about something they do not know a lot about, they should either decline to opine or qualify their observations by stating that the opinion is tentative and nonexpert.

In practical terms the problem is, of course, that scientists are human beings like everyone else. So I am not holding my breath waiting for us to achieve a level of factual discourse that will spread enlightenment to the masses. It's too bad but very human. I have tried to cut down on throwing out idle commentary without qualifying its questionable reality, while being ardent about my statements only when I know I can back them up. Methinks I am fairly successful in this endeavor, and it does seem to keep me out of trouble.

French for handyman or do-it-yourselfer, this word has migrated into art and philosophy recently, and savants would do well to toss it into their cognitive toolbox. A Bricoleur is a talented tinkerer, the sort who can build anything out of anything: whack off a left-over drain pipe, fasten a loop of tin roofing, dab some paint, and presto, a mailbox. If one peers closely all the parts are still there, still a piece of roofing, a piece of pipe, but now the assembly exceeds the sum of the parts and is useful in a different way. In letters a Bricoleur is viewed as an intellectual MacGyver, tacking bits of his heritage to sub-cultures about him for a new meaning-producing pastiche.

A Bricoleur is not a new thing, but it has become a new way of understanding old things: Epistemology, the Counter-Enlightenment, and the endless parade of "isms" of the 19th and 20th Centuries: Marxism, Modernism, Socialism, Surrealism, Abstract Expressionism, Minimalism — the list is endless, and often exclusive, each insisting that the other cannot be. The exegesis of these grand theories by deconstruction — substituting trace for presence — and similar activities during the past century shows these worldviews not as discoveries but as assemblies, by creative Bricoleurs who had been working in the background, stapling together meaning-producing scenarios from textual bric-a-brac lying about.

Presently, encompassing worldviews in philosophy have been shelved, and the master art movements of style and conclusion folded alongside them; no more "isms" are being run up the flagpole, because no one is saluting. Pluralism and modest descriptions of the world have become the activity of fine arts and letters, personalization and private worlds the Zeitgeist. The common prediction was that the loss of grand narrative would result in a descent into end-of-history purposelessness; instead, everywhere the Bricoleurs are busy manufacturing meaning-eliciting metaphor.

Motion Graphics, Bio-art, Information Art, Net Art, Systems Art, Glitch Art, Hacktivism, Robotic Art, Relational Aesthetics and others — all current art movements tossed up by contemporary Bricoleurs in an endless salad. Revisit 19th Century Hudson River landscape painting? Why not. Neo-Rodin, Post-New Media? A Mormon dabbling with the Frankfurt School? Next month. With the quest for universal validity suspended there is a pronounced freedom to assemble lives filled with meaning from the nearby and at-hand; one just needs a Bricoleur.

The exponential explosion of information and our ability to access it make our ability to validate its truthfulness not only more important but also more difficult. Information has importance in proportion to its relevance and meaning. Its ultimate value is how we use it to make decisions and put it in a framework of knowledge.

Our perceptions are crucial in appreciating truth. However, we do not apprehend objective reality. Perception is based on recognition and interpretation of sensory stimuli derived from patterns of electrical impulses. From this data, the brain creates analogues and models that simulate tangible, concrete objects in the real world. Experience, though, colors and influences all of our perceptions by anticipating and predicting everything we encounter. It is the reason Goethe advised that "one must ask children and birds how cherries and strawberries taste." This preferential set of intuitions, feelings, and ideas, less poetically characterized by the term bias, poses a challenge to the ability to weigh evidence accurately to arrive at truth. Bias is the non-dispassionate thumb which experience puts on the scale.

Our brains evolved having to make the right bet with limited information. Fortune, it has been said, favors the prepared mind. Bias in the form of expectation, inclination and anticipatory hunches helped load the dice in our favor and for that reason is hardwired into our thinking.

Bias is an intuition, sensitivity, receptiveness which acts as a lens or filter on all our perceptions. "If the doors of perception were cleansed," Blake said, "everything would appear to man as it is, infinite." But without our biases to focus our attention, we would be lost in that endless and limitless expanse. We have at our disposal an immeasurable assortment of biases and their combination in each of us is as unique as a fingerprint. These biases mediate between our intellect and emotions to help congeal perception into opinion, judgment, category, metaphor, analogy, theory, and ideology which frame how we see the world.

Bias is tentative. Bias adjusts as the facts change. Bias is a provisional hypothesis. Bias is normal.

Although bias is normal in the sense that it is a product of how we select and perceive information, its influence on our thinking cannot be ignored. Medical science has long been aware of the inherent bias that occurs in collecting and analyzing clinical data. The double-blind, randomized controlled study, the gold standard of clinical design, was developed in an attempt to nullify its influence.

We live in the world, however, not in a laboratory, and bias cannot be eliminated. Critically utilized, bias sharpens the collection of data by suggesting when to look, where to look, and how to look. It is fundamental to both inductive and deductive reasoning. Darwin did not collect the data behind the theory of evolution randomly or disinterestedly. Bias is the nose for the story.

Truth needs continually to be validated against all evidence that challenges it fairly and honestly. Science, with its formal methodology of experimentation and the reproducibility of its findings, is available to anyone who plays by its rules. No ideology, religion, culture or civilization is awarded special privileges or rights. The truth that survives this ordeal has another burden to bear. Like the words in a multi-dimensional crossword puzzle, it has to fit together with all the other pieces already in place. The better and more elaborate the fit, the more certain the truth. Science permits no exceptions. It is inexorably revisionary, learning from its mistakes, erasing and rewriting even its most sacred texts, until the puzzle is complete.

THOMAS A. BASS
Professor of English at the University at Albany; Author, The Spy Who Loved Us

Open Systems

This year, Edge is asking us to identify a scientific concept that "would improve everybody's cognitive toolkit." Not clever enough to invent a concept of my own, I am voting for a winning candidate. It might be called the Swiss Army knife of scientific concepts, a term containing a remarkable number of useful tools for exploring cognitive conundrums. I am thinking of open systems, an idea that passes through thermodynamics and physics before heading into anthropology, linguistics, history, philosophy, and sociology, until arriving, finally, in the world of computers, where it branches into other ideas such as open source and open standards.

Open standards allow knowledgeable outsiders access to the design of computer systems, to improve, interact with, or otherwise extend them. These standards are public, transparent, widely accessible, and royalty-free for developers and users. Open standards have driven innovation on the Web and allowed it to flourish as both a creative and commercial space.

Unfortunately, the ideal of an open web is not embraced by companies that prefer walled gardens, silos, proprietary systems, apps, tiered levels of access, and other metered methods for turning citizens into consumers. Their happy-face web contains tracking systems useful for making money, but these systems are also appreciated by the police states of the world, for they, too, have a vested interest in surveillance and closed systems.

Now that the Web has frothed through twenty years of chaotic inventiveness, we have to push back against the forces that would close it down. A similar push should be applied to other systems veering toward closure. "Citoyens, citoyennes, arm yourselves with the concept of openness."

Most people tend to think of science in one of two ways. It is a body of knowledge and understanding about the world: gravity, photosynthesis and evolution. Or it is the technology that has emerged from the fruits of that knowledge: vaccines, computers and cars. Science is both of these things, yet as Carl Sagan so memorably explained in The Demon-Haunted World, it is something else besides. It is a way of thinking, the best approach yet devised (if still an imperfect one) to discovering progressively better approximations of how things really are.

Science is provisional, always open to revision in light of new evidence. It is anti-authoritarian: anybody can contribute, and anybody can be wrong. It seeks actively to test its propositions. And it is comfortable with uncertainty. These qualities give the scientific method unparalleled strength as a way of finding things out. Its power, however, is too often confined to an intellectual ghetto: those disciplines that have historically been considered "scientific".

Science as a method has great things to contribute to all sorts of pursuits beyond the laboratory. Yet it remains missing in action from far too much of public life. Politicians and civil servants too seldom appreciate how tools drawn from both the natural and social sciences can be used to design more effective policies, and even to win votes.

In education and criminal justice, for example, interventions are regularly undertaken without being subjected to proper evaluation. Both fields are perfectly amenable to one of science's most potent techniques — the randomised controlled trial — yet such trials are seldom required before new initiatives are put into place. Pilots are often derisory in nature, failing even to collect useful evidence that could be used to evaluate a policy's success.

Sheila Bird of the Medical Research Council, for instance, has criticised the UK's introduction of a new community sentence called the Drug Treatment and Testing Order, following pilots designed so poorly as to be worthless. They included too few subjects; they were not randomised; they did not properly compare the orders with alternatives; and judges were not even asked to record how they would otherwise have sentenced offenders.

The culture of public service could also learn from the self-critical culture of science. As Jonathan Shepherd, of Cardiff University, has pointed out, policing, social care and education lack the cadre of practitioner-academics that has served medicine so well. There are those who do, and there are those who research: too rarely are they the same people. Police officers, teachers and social workers are simply not encouraged to examine their own methods in the same way as doctors, engineers and bench scientists. How many police stations run the equivalent of a journal club?

The scientific method and the approach to critical thinking it promotes are too useful to be kept back for "science" alone. If it can help us to understand the first microseconds of creation and the structure of the ribosome, it can surely improve understanding of how best to tackle the pressing social questions of our time.

When John Cabot came to the Grand Banks off Newfoundland in 1497 he was astonished at what he saw. Fish, so many fish – fish in numbers he could hardly comprehend. According to Farley Mowat, Cabot wrote that the waters were so "swarming with fish [that they] could be taken not only with a net but in baskets let down and [weighted] with a stone."

The fisheries boomed for five hundred years, but by 1992 it was all over. The Grand Banks cod fishery was destroyed, and the Canadian government was forced to close it entirely, putting 30,000 fishers out of work. It has never recovered.

What went wrong? Many things, from factory fishing to inadequate oversight, but much of it was aided and abetted by treating each step toward disaster as normal. The entire path, from plenitude to collapse, was taken as the status quo, right up until the fishery was essentially wiped out.

In 1995 fisheries scientist Daniel Pauly coined a phrase for this troubling ecological obliviousness – he called it "shifting baseline syndrome". Here is how Pauly first described the syndrome: "Each generation of fisheries scientist accepts as baseline the stock situation that occurred at the beginning of their careers, and uses this to evaluate changes. When the next generation starts its career, the stocks have further declined, but it is the stocks at that time that serve as a new baseline. The result obviously is a gradual shift of the baseline, a gradual accommodation of the creeping disappearance of resource species…"

It is blindness, stupidity, intergenerational data obliviousness. Most scientific disciplines have long timelines of data, but many ecological disciplines don't. We are forced to rely on second-hand and anecdotal information – we don't have enough data to know what is normal, so we convince ourselves that this is normal.

But it often isn't normal. Instead, it is a steadily and insidiously shifting baseline, no different than convincing ourselves that winters have always been this warm, or this snowy. Or convincing ourselves that there have always been this many deer in the forests of eastern North America. Or that current levels of energy consumption per capita in the developed world are normal. All of these are shifting baselines, where our data inadequacy, whether personal or scientific, provides dangerous cover for missing important longer-term changes in the world around us.
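The arithmetic of the syndrome is easy to sketch. In the following toy simulation (the numbers are hypothetical, chosen purely for illustration), a fish stock halves over each generation's career, yet because every generation measures change against the stock it saw at the start of its own career, no single generation ever observes more than a 50% decline:

```python
def shifting_baselines(initial_stock=1000.0, decline_per_generation=0.5,
                       generations=5):
    """Return (perceived_decline, true_decline) for each generation."""
    stock = initial_stock
    history = []
    for _ in range(generations):
        baseline = stock                        # "normal" for this generation
        stock *= (1 - decline_per_generation)   # decline over one career
        perceived = (baseline - stock) / baseline
        true_decline = (initial_stock - stock) / initial_stock
        history.append((perceived, true_decline))
    return history

for gen, (perceived, total) in enumerate(shifting_baselines(), 1):
    print(f"Generation {gen}: perceived decline {perceived:.0%}, "
          f"decline from the original baseline {total:.0%}")
```

After five such generations the stock is down nearly 97% from the original baseline, while each generation's own records show only a familiar, apparently manageable halving.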

When you understand shifting baseline syndrome it forces you to continually ask what is normal. Is this? Was that? And, at least as importantly, it asks how we "know" that it's normal. Because, if it isn't, we need to stop shifting the baselines and do something about it before it's too late.

Modern societies waste billions on protective measures whose real aim is to reassure rather than to reduce risk. Those of us who work in security engineering refer to this as "security theatre", and there are examples all around us. We're searched going into buildings that no terrorist would attack. Social network operators create the pretence of a small intimate group of "friends" in order to inveigle users into disclosing personal information that can be sold to advertisers; the users don't get privacy but privacy theatre. Environmental policy is a third example: cutting carbon emissions would cost lots of money and votes, so governments go for gesture policies that are highly visible even if their effect is negligible. Specialists know that most of the actions that governments claim will protect the security of the planet are just theatre.

Theatre thrives on uncertainty. Wherever risks are hard to measure, or their consequences hard to predict, appearance can be easier to manage than reality. Reducing uncertainty and exposing gaps between appearance and reality are among the main missions of science.

Our traditional approach was the painstaking accumulation of knowledge that enables people to understand risks, options and consequences. But theatre is a deliberate construct rather than just an accidental side-effect of ignorance, so perhaps we need to become more sophisticated about theatrical mechanisms too. Science communicators need to become adept at disrupting the show, illuminating the dark corners of the stage and making the masks visible for what they are.

JOHN MCWHORTER
Linguist; cultural commentator; Senior Fellow, Manhattan Institute; Author, That Being Said

In an ideal world all people would spontaneously understand that what political scientists call path dependence explains much more of how the world works than is apparent. Path dependence refers to the fact that often, something that seems normal or inevitable today began with a choice that made sense at a particular time in the past, but survived despite the eclipse of the justification for that choice, because once established, external factors discouraged going into reverse to try other alternatives.

The paradigm example is the seemingly illogical arrangement of letters on typewriter keyboards. Why not just have the letters in alphabetical order, or arrange them so that the most frequently occurring ones are under the strongest fingers? In fact, the first typewriter tended to jam when typed on too quickly, so its inventor deliberately concocted an arrangement that put A under the ungainly little finger. In addition, the first row was provided with all of the letters in the word "typewriter" so that salesmen, new to typing, could wangle typing the word using just one row.

Quickly, however, mechanical improvements made faster typing possible, and new keyboards placing letters according to frequency were presented. But it was too late: there was no going back. By the 1890s typists across America were used to QWERTY keyboards, having learned to zip away on new versions of them that did not stick so easily, and retraining them would have been expensive and, ultimately, unnecessary. So QWERTY was passed down the generations, and even today we use the queer QWERTY configuration on computer keyboards where jamming is a mechanical impossibility.

The basic concept is simple, but in general estimation tends to be processed as the province of "cute" stories like the QWERTY one, rather than explaining a massive weight of scientific and historical processes. Instead, the natural tendency is to seek explanations for modern phenomena in present-day conditions.

One may assume that cats cover their waste out of fastidiousness, when the same creature will happily consume its own vomit and then jump on your lap. Cats do the burying as an instinct from their wild days when the burial helped avoid attracting predators, and there is no reason for them to evolve out of the trait now (to pet owners' relief). I have often wished there were a spontaneous impulse among more people to assume that path dependence-style explanations are as likely as jerry-rigged present-oriented ones.

For one, that the present is based on a dynamic mixture of extant and ancient conditions is simply more interesting than assuming that the present is (mostly) all there is, with history as merely "the past," interesting only for seeing whether something that happened then could now happen again, which is different from path dependence.

For example, path dependence explains a great deal about language which is otherwise attributed to assorted just-so explanations. Much of the public embrace of the idea that one's language channels how one thinks is based on this kind of thing. Robert McCrum celebrates English as "efficient" in its paucity of suffixes of the kind that complexify most European languages. The idea is that this is rooted in something in its speakers' spirit, which would have propelled them to lead the world via exploration and the Industrial Revolution.

But English lost its suffixes starting in the eighth century A.D., when Vikings invaded Britain and so many of them learned the language incompletely that children started speaking it that way. After that, you can't create gender and conjugation out of thin air – there's no going back until gradual morphing recreates such things over eons of time. That is, English's current streamlined syntax has nothing to do with any present-day condition of the spirit, nor with any even four centuries ago. The culprit is path dependence, as are most things about how a language is structured.

Or, we hear much lately about a crisis in general writing skills, supposedly due to email and texting. But there is a circularity here – why, precisely, could people not write emails and texts with the same "writerly" style that people used to couch letters in? Or, we hear of a vaguely defined effect of television, despite the fact that kids were curled up endlessly in front of the tube starting in the fifties, long before the eighties, when outcries of this kind first took on their current level of alarm in the report A Nation at Risk.

Once again, the presentist explanation does not cohere, whereas one based on an earlier historical development that there is no turning back from does. Public American English began a rapid shift from cosseted to less formal "spoken" style in the sixties, in the wake of cultural changes amidst the counterculture. This sentiment directly affected how language arts textbooks were composed, the extent to which any young person was exposed to an old-fashioned formal "speech," and attitudes towards the English language heritage in general. The result: a linguistic culture stressing the terse, demotic, and spontaneous. After just one generation minted in this context, there was no way to go back. Anyone who decided to communicate in the grandiloquent phraseology of yore would sound absurd and be denied influence or exposure. Path dependence, then, identifies this cultural shift as the cause of what dismays, delights, or just interests us in how English is currently used, and reveals television, email and other technologies as merely epiphenomenal.

Most of life looks path dependent to me. If I could create a national educational curriculum from scratch, I would include the concept as one taught to young people as early as possible.

Contributors include STEVEN PINKER on how the mind adapts to new technologies • NASSIM N TALEB on the destruction of precise knowledge • RICHARD DAWKINS on the consequences of infinite information • NICHOLAS CARR on the future of deep thought • HELEN FISHER on finding love and romance through the Net • Wikipedia cofounder LARRY SANGER on the promise and pitfalls of the "hive mind" • SAM HARRIS on the wired brain • BRIAN ENO on finding authenticity in a world of endless reproduction

"Edge, the high-minded ideas and tech site." (New York Times Week In Review)

"The answers are remarkable." (Sueddeutsche Zeitung)

"Edge is an organization of deep, visionary thinkers on science and culture." (The Atlantic Wire)

"The German Internet debate is stuck in the nineties. Brockman's question this year sets the chord for questions that take us beyond this set of attitudes." (Frank Schirrmacher, Feuilleton Editor & Co-Publisher, Frankfurter Allgemeine Zeitung)

"If you have more time and think your attention span is up to it, we recommend you enjoy the whole scope of their length and diversity by visiting edge.org." (Ana Gershenfeld, Publico [Lisbon] Weekend Magazine Cover Story)

(* Based on the Edge Question 2010: "How Is The Internet Changing The Way You Think?")

Contributors include: RICHARD DAWKINS on cross-species breeding; IAN McEWAN on the remote frontiers of solar energy; FREEMAN DYSON on radiotelepathy; STEVEN PINKER on the perils and potential of direct-to-consumer genomics; SAM HARRIS on mind-reading technology; NASSIM NICHOLAS TALEB on the end of precise knowledge; CHRIS ANDERSON on how the Internet will revolutionize education; IRENE PEPPERBERG on unlocking the secrets of the brain; LISA RANDALL on the power of instantaneous information; BRIAN ENO on the battle between hope and fear; J. CRAIG VENTER on rewriting DNA; FRANK WILCZEK on mastering matter through quantum physics.

"a provocative, demanding clutch of essays covering everything from gene splicing to global warming to intelligence, both artificial and human, to immortality... the way Brockman interlaces essays about research on the frontiers of science with ones on artistic vision, education, psychology and economics is sure to buzz any brain." (Chicago Sun-Times)

"11 books you must read —
Curl up with these reads on days when you just don't want to do anything else: 5. John Brockman's This Will Change Everything: Ideas That Will Shape the Future" (Forbes India)

Contributors include: STEVEN PINKER on the future of human evolution • RICHARD DAWKINS on the mysteries of courtship • SAM HARRIS on why Mother Nature is not our friend • NASSIM NICHOLAS TALEB on the irrelevance of probability • ALUN ANDERSON on the reality of global warming • ALAN ALDA considers, reconsiders, and re-reconsiders God • LISA RANDALL on the secrets of the Sun • RAY KURZWEIL on the possibility of extraterrestrial life • BRIAN ENO on what it means to be a "revolutionary" • HELEN FISHER on love, fidelity, and the viability of marriage…and many others.

"The splendidly enlightened Edge Website (www.edge.org) has rounded off each year of inter-disciplinary debate by asking its heavy-hitting contributors to answer one question. I strongly recommend a visit." The Independent

"A great event in the Anglo-Saxon culture." El Mundo

"As fascinating and weighty as one would imagine." The Independent

"They are the intellectual elite, the brains the rest of us rely on to make sense of the universe and answer the big questions. But in a refreshing show of new year humility, the world's best thinkers have admitted that from time to time even they are forced to change their minds." The Guardian

"Even the world's best brains have to admit to being wrong sometimes: here, leading scientists respond to a new year challenge." The Times

"The world's finest minds have responded with some of the most insightful, humbling, fascinating confessions and anecdotes, an intellectual treasure trove. ... Best three or four hours of intense, enlightening reading you can do for the new year. Read it now." San Francisco Chronicle

"As in the past, these world-class thinkers have responded to impossibly open-ended questions with erudition, imagination and clarity." The News & Observer

"A jolt of fresh thinking...The answers address a fabulous array of issues. This is the intellectual equivalent of a New Year's dip in the lake — bracing, possibly shriek-inducing, and bound to wake you up." The Globe and Mail

"Answers ring like scientific odes to uncertainty, humility and doubt; passionate pleas for critical thought in a world threatened by blind convictions." The Toronto Star

"For an exceptionally high quotient of interesting ideas to words, this is hard to beat. ...What a feast of egg-head opinionating!" National Review Online

"Whether or not we believe proof or prove belief, understanding belief itself becomes essential in a time when so many people in the world are ardent believers." LA Times

"Belief appears to motivate even the most rigorously scientific minds. It stimulates and challenges, it tricks us into holding things to be true against our better judgment, and, like scepticism – its opposite – it serves a function in science that is playful as well as thought-provoking." The Times

"John Brockman is the PT Barnum of popular science. He has always been a great huckster of ideas." The Observer

"An unprecedented roster of brilliant minds, the sum of which is nothing short of an oracle — a book to be dog-eared and debated." Seed

"Scientific pipedreams at their very best." The Guardian

"Makes for some astounding reading." Boston Globe

"Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4