Sleep Is the Interest We Have to Pay on the Capital Which is Called In at Death

Sleep is the interest we have to pay on the capital which is called in at death; and the higher the rate
of interest and the more regularly it is paid, the further the date of redemption is postponed.

So wrote Arthur Schopenhauer, comparing life to finance in a universe that must keep its books balanced. At birth you receive a loan, consciousness and light borrowed from the void, leaving a hole in the emptiness. The hole will grow bigger each day. Nightly, by yielding temporarily to the darkness of sleep, you restore some of the emptiness and keep the hole from growing limitlessly. In the end you must pay back the principal, complete the void, and return the life originally lent you.

By focusing on the common periodic nature of sleep and interest payments, Schopenhauer extends the metaphor of borrowing to life itself. Life and consciousness are the principal, death is the final repayment, and sleep is la petite mort, the periodic little death that renews.

What's my favorite elegant idea? The elucidation of DNA's structure is surely the most obvious, but it bears repeating. I'll argue that the same strategy used to crack the genetic code might prove successful in cracking the "neural code" of consciousness and self. It's a long shot, but worth considering.

The ability to grasp analogies, and to see the difference between deep and superficial ones, is a hallmark of many great scientists; Francis Crick and James Watson were no exception. Crick himself cautioned against the pursuit of elegance in biology, given that evolution proceeds happenstantially—"God is a hacker," he famously said, adding (according to my colleague Don Hoffman), "Many a young biologist has slit his own throat with Ockham's razor." Yet his own solution to the riddle of heredity ranks with natural selection as biology's most elegant discovery. Will a solution of similar elegance emerge for the problem of consciousness?

It is well known that Crick and Watson unraveled the double helical structure of the DNA molecule: two twisting complementary strands of nucleotides. Less well known is the chain of events culminating in this discovery.

First, Mendel's laws dictated that genes are particulate (a first approximation still held to be accurate). Then Thomas Hunt Morgan showed that fruit flies zapped with X-rays became mutants with punctate changes in their chromosomes, yielding the clear conclusion that the chromosomes are where the action is. Chromosomes are composed of histones and DNA; as early as 1928, the British bacteriologist Fred Griffith showed that a harmless species of bacterium, upon incubation with a heat-killed virulent species, actually changes into the virulent species! This was almost as startling as a pig walking into a room with a sheep and two sheep emerging. Later, Oswald Avery showed that DNA was the transforming principle here. In biology, knowledge of structure often leads to knowledge of function—one need look no further than the whole of medical history. Inspired by Griffith and Avery, Crick and Watson realized that the answer to the problem of heredity lay in the structure of DNA. Localization was critical, as, indeed, it may prove to be for brain function.

Crick and Watson didn't just describe DNA's structure, they explained its significance. They saw the analogy between the complementarity of molecular strands and the complementarity of parent and offspring—why pigs beget pigs and not sheep. At that moment modern biology was born.

I believe there are similar correlations between brain structure and mind function, between neurons and consciousness. I am stating the obvious here only because there are some philosophers, called "new mysterians," who believe the opposite. The erudite Colin McGinn has written, for instance, "The brain is only tangentially relevant to consciousness." (There are many philosophers who would disagree, e.g., Churchland, Dennett, and Searle.)

After his triumph with heredity, Crick turned to what he called the "second great riddle" in biology—consciousness. There were many skeptics. I remember a seminar Crick was giving on consciousness at the Salk Institute here in La Jolla. He'd barely started when a gentleman in attendance raised a hand and said, "But Doctor Crick, you haven't even bothered to define the word consciousness before embarking on this." Crick's response was memorable: "I'd remind you that there was never a time in the history of biology when a bunch of us sat around the table and said, 'Let's first define what we mean by life.' We just went out there and discovered what it was—a double helix. We leave matters of semantic hygiene to you philosophers."

Crick did not, in my opinion, succeed in solving consciousness (whatever that might mean). Nonetheless, I believe he was headed in the right direction. He had been richly rewarded earlier in his career for grasping the analogy between biological complementarities, the notion that the structural logic of the molecule dictates the functional logic of heredity. Given his phenomenal success using the strategy of structure-function analogy, it is hardly surprising that he imported the same style of thinking to study consciousness. He and his colleague Christof Koch did so by focusing on a relatively obscure structure called the claustrum.

The claustrum is a thin sheet of cells underlying the insular cortex of the brain, one on each hemisphere. It is histologically more homogeneous than most brain structures, and intriguingly, unlike most brain structures (which send and receive signals to and from a small subset of other structures), the claustrum is reciprocally connected with almost every cortical region. The structural and functional streamlining might ensure that, when waves of information come through the claustrum, its neurons will be exquisitely sensitive to the timing of the inputs.

What does this have to do with consciousness? Instead of focusing on pedantic philosophical issues, Crick and Koch began with their naïve intuitions. "Consciousness" has many attributes—continuity in time, a sense of agency or free will, recursiveness or "self-awareness," etc. But one attribute that stands out is subjective unity: you experience all your diverse sense impressions, thoughts, willed actions and memories as being a unity—not jittery or fragmented. This attribute of consciousness, with the accompanying sense of the immediate "present" or "here and now," is so obvious that we don't usually think about it; we regard it as axiomatic.

So a central feature of consciousness is its unity—and here is a brain structure that sends and receives signals to and from practically all other brain structures, including the right parietal (involved in polysensory convergence and embodiment) and anterior cingulate (involved in the experience of "free will"). Thus the claustrum seems to unify everything anatomically, and consciousness does so mentally. Crick and Koch recognized that this may not be a coincidence: the claustrum may be central to consciousness; indeed it may embody the idea of the "Cartesian theater" that's taboo among philosophers—or is at least the conductor of the orchestra. It is this kind of childlike reasoning that often leads to great discoveries. Obviously, such analogies don't replace rigorous science, but they're a good place to start. Crick and Koch may be right or wrong, but their idea is elegant. If they're right, they've paved the way to solving one of the great mysteries of biology. Even if they're wrong, students entering the field would do well to emulate their style. Crick has been right too often to ignore.

I visited him at his home in La Jolla in July of 2004. He saw me to the door as I was leaving and as we parted, gave me a sly, conspiratorial wink: "I think it's the claustrum, Rama; it's where the secret is." A week later he passed away.

Humans are a storytelling species. Throughout history we have told stories to each other, and to ourselves, as one of the ways to understand the world around us. Every culture has its creation myth for how the universe came to be, but the stories do not stop at the big-picture view; other stories discuss every aspect of the world around us. We humans are chatterboxes, and we can't resist telling a story about just about everything.

However compelling and entertaining these stories may be, they fall short of being explanations because in the end all they are is stories. For every story you can tell a different variation, or a different ending, without any reason to choose between them. If you are skeptical, or try to test the veracity of these stories, you'll typically find most of them wanting. One approach to this is to forbid skeptical inquiry, branding it as heresy. This meme is so compelling that it was independently developed by cultures around the globe; it is the origin of religion—a set of stories about the world that must be accepted on faith, and never questioned.

Somewhere along the line a very different meme got started. Instead of forbidding inquiry into stories about the world, people tried the other extreme of encouraging continual questioning. Stories about aspects of the world can be questioned skeptically, and tested with observations and experiments. If a story survives the tests, then, provisionally at least, one can accept it as something more than a mere story; it is a theory that has real explanatory power. It will never be more than a provisional explanation—we can never let down our skeptical guard—but these provisional explanations can be very useful. We call this process of making and vetting stories the scientific method.

For me, the scientific method is the ultimate elegant explanation. Indeed it is the ultimate foundation for anything worthy of the name "explanation". It makes no sense to talk about explanations without having a process for deciding which are right and which are wrong, and in a broad sense that is what the scientific method is about. All of the other wonderful explanations celebrated here owe their origin and credibility to the process by which they are verified—the scientific method.

This seems quite obvious to us now, but it took many thousands of years for people to develop the scientific method to a point where they could use it to build useful theories about the world. It was not, a priori, obvious that such a method would work. At one extreme, creation myths discuss the origin of the universe, and for thousands of years one could take the position that this will never be more than a story—how can humans ever figure out something that complicated and distant in space and time? It would be a bold bet to say that people reasoning with the scientific method could solve that puzzle.

Well, it has taken us a while, but by now enormous amounts are known about the composition of stars and galaxies and how the universe came to be. There are still gaps in our knowledge (and our skepticism will never stop), but we've made a lot of progress on cosmology and many other problems. Indeed, we know more about the composition of distant stars than about many things here on Earth. The scientific method has not conquered every great question; some issues remain elusive. But the spirit of the scientific method is that one does not shrink from the unknown. It is OK to say that we do not yet have a useful story for everything we are curious about, and we comfort ourselves that at some point in the future new explanations will fill the gaps in our current knowledge, even as they raise new questions that highlight new gaps.

It's hard to overestimate the importance of the scientific method. Human culture contains much more than science—but science is the part that actually works—the rest is just stories. The rationally based inquiry the scientific method enables is what has given us science and technology, and lifestyles vastly different from those of our hunter-gatherer ancestors. In some sense it is analogous to evolution. The sum of millions of small mutations separates us from single-celled organisms like blue-green algae. Each mutation had to survive the test of selection, working better than the previous state in the sense of biological fitness. Human knowledge is the accumulation of millions of stories-that-work, each of which had to survive the test of the scientific method, matching observation and experiment better than its predecessors. Both evolution and science have taken us a long way, but looking forward it is clear that science will take us much farther.

A fine example of how a great deal of amazing insight can be gained from some very simple considerations is the explanation of atomic forces by the 18th-century Jesuit polymath Roger Boscovich, who was born in Dubrovnik.

One of the great philosophical arguments at the time took place between the adherents of Descartes, who—following Aristotle—thought that forces can only be the result of immediate contact, and those who followed Newton and believed in his concept of force acting at a distance. Newton was the revolutionary here, but his opponents argued—with some justification—that "action at a distance" brought back into physics "occult" explanations that do not follow from the "clear and distinct" understanding that Descartes demanded. (In the following I am paraphrasing reference works.) Boscovich, a forceful advocate of the Newtonian point of view, turned the question around: what exactly happens during the interaction that we would call immediate contact?

His arguments are very easy to understand and extremely convincing. Let's imagine two bodies, one traveling at a speed of, say, 6 units, the other at a speed of 12, with the faster body catching up with the slower one along the same straight path. Now imagine what transpires when the two bodies collide. By conservation of the "quantity of motion," both bodies should continue after the collision along the same path, each with a speed of 9 units (permanently, in the case of an inelastic collision, or for a brief period right after impact, in the case of an elastic collision).

But how did the velocity of the faster body come to be reduced from 12 to 9, and that of the slower body increased from 6 to 9? Clearly, the time interval for the change in velocities cannot be zero, for then, argued Boscovich, the instantaneous change in speed would violate the law of continuity. Furthermore, we would have to say that at the moment of impact, the speed of one body is simultaneously 12 and 9, which is patently absurd.

It is therefore necessary for the change in speed to take place in a small, yet finite, amount of time. But with this assumption, we arrive at yet another contradiction. Suppose, for example, that after a small interval of time, the speed of the faster body is 11, and that of the slower body is 7. But this would mean that they are not moving at the same velocity, and the front surface of the faster body would advance through the rear surface of the slower body, which is impossible because we have assumed that the bodies are impenetrable. It therefore becomes apparent that the interaction must take place immediately before the impact of the two bodies and that this interaction can only be a repulsive one because it is expressed in the slowing down of one body and the speeding up of the other.
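Boscovich's thought experiment is easy to check numerically. The sketch below is my own illustration, not his calculation: it assumes an inverse-square repulsion (Boscovich does not specify a force law at this point in the argument) between two equal-mass bodies with speeds 12 and 6. The velocities change continuously, both pass through the common value of 9 at closest approach, and the bodies never interpenetrate.

```python
# Two equal-mass bodies on the same line, interacting through an assumed
# inverse-square repulsion instead of instantaneous contact.
dt = 1e-4
x1, v1 = 0.0, 12.0   # faster body, approaching from behind
x2, v2 = 5.0, 6.0    # slower body, ahead on the same path
k = 10.0             # repulsion strength (arbitrary units)

min_gap = x2 - x1
v_at_closest = (v1, v2)
for _ in range(200_000):
    gap = x2 - x1
    if gap < min_gap:
        min_gap, v_at_closest = gap, (v1, v2)
    f = k / gap**2   # repulsive force, growing without bound as gap -> 0
    v1 -= f * dt     # the faster body slows down...
    v2 += f * dt     # ...while the slower one speeds up, continuously
    x1 += v1 * dt
    x2 += v2 * dt

print(round(min_gap, 3))                                     # stays above 0
print(round(v_at_closest[0], 2), round(v_at_closest[1], 2))  # both near 9
print(round(v1 + v2, 6))                                     # "quantity of motion" conserved
```

With any force that grows without bound at contact, the exchange of motion happens entirely before the surfaces could ever touch, which is exactly Boscovich's point.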

Moreover, this argument is valid for arbitrary speeds, so one can no longer speak of definite dimensions for the particles that were until now thought of as impenetrable, namely, for the atoms. An atom should rather be viewed as a point source of force, with the force emanating from it acting in some complicated fashion that depends on distance.

According to Boscovich, when bodies are far apart, they act on each other through a force corresponding to the gravitational force, which is inversely proportional to the square of the distance. But with decreasing distance, this law must be modified because, in accordance with the above considerations, the force changes sign and must become a repulsive force. Boscovich even plotted fanciful traces of how the force should vary with distance, in which the force changed sign several times, hinting at the existence of minima in the potential and of stable bonds between the particles—the atoms.
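Boscovich's plotted curves were qualitative, but a modern descendant of the idea is easy to write down. As an illustration (using the 20th-century Lennard-Jones interaction, not any formula of Boscovich's), here is a force between two neutral atoms that is repulsive at short range, attractive at long range, and zero at a stable bond length where the potential has its minimum:

```python
# Lennard-Jones potential and force in reduced units (epsilon = sigma = 1).
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """F(r) = -dU/dr; positive means repulsive, negative attractive."""
    return 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

print(lj_force(0.9) > 0)   # short range: repulsion
print(lj_force(2.0) < 0)   # long range: attraction
r_bond = 2 ** (1 / 6)      # where the force changes sign: a stable bond
print(abs(lj_force(r_bond)) < 1e-9)
print(abs(lj_potential(r_bond) - (-1.0)) < 1e-9)  # the potential minimum
```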

With this idea Boscovich not only offered a new picture for interactions in place of the Aristotelian-Cartesian theory based on immediate contact, but also presaged our understanding of the structure of matter, especially that of solid bodies.

Father of Behavioral Economics; Director, Center for Decision Research, University of Chicago Graduate School of Business; Co-Author, Nudge

Commitment

It is a fundamental principle of economics that a person is always better off if they have more alternatives to choose from. But this principle is wrong. There are cases when I can make myself better off by restricting my future choices and committing myself to a specific course of action.

The idea of commitment as a strategy is an ancient one. Odysseus famously had his crew tie him to the mast so he could listen to the Sirens' songs without falling into the temptation to steer the ship into the rocks. And he committed his crew to not listening by filling their ears with wax. Another classic is Cortés's decision to burn his ships upon arriving in Mexico, thus removing retreat as one of the options his crew could consider. But although the idea is an old one, we did not begin to understand its nuances until Nobel laureate Thomas Schelling wrote his 1956 masterpiece, "An Essay on Bargaining."

It is well known that thorny games such as the prisoner's dilemma can be solved if both players can credibly commit themselves to cooperating, but how can I convince you that I will cooperate when it is a dominant strategy for me to defect? (And, if you and I are game theorists, you know that I know that you know that I know that defecting is a dominant strategy.)
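The dominance logic can be made concrete with a payoff matrix (the numbers below are a standard textbook choice, not anything from Schelling's essay):

```python
# payoff[(my_move, their_move)] gives my payoff; 'C' = cooperate, 'D' = defect.
payoff = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

# Whatever the other player does, defecting pays strictly more:
for their_move in ('C', 'D'):
    assert payoff[('D', their_move)] > payoff[('C', their_move)]

# ...and yet both players prefer mutual cooperation to the mutual-defection
# outcome that the dominance logic drives them toward.
assert payoff[('C', 'C')] > payoff[('D', 'D')]
print("defection dominates, but mutual cooperation beats mutual defection")
```

This is why a credible commitment device is valuable: it removes the dominant move from the table.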

Schelling gives many examples of how this can be done, but here is my favorite. A Denver rehabilitation clinic whose clientele consisted of wealthy cocaine addicts offered a "self-blackmail" strategy. Patients were offered the opportunity to write a self-incriminating letter that would be delivered if and only if the patient, who was tested on a random schedule, was found to have used cocaine. Most cocaine addicts will probably have no trouble thinking of something to write about, and will now have a very strong incentive to stay off drugs. They are committed.

Many of society's thorniest problems, from climate change to Middle East peace, could be solved if the relevant parties could only find a way to commit themselves to some future course of action. They would be well advised to study Tom Schelling in order to figure out how to make that commitment.

"The most incomprehensible thing about the world is that it is comprehensible." This is one of the most famous quotes from Albert Einstein. "The fact that it is comprehensible is a miracle." Similarly, Eugene Wigner said that the unreasonable efficiency of mathematics is "a wonderful gift which we neither understand nor deserve." Thus we have a problem that may seem too metaphysical to be addressed in a meaningful way: Why do we live in a comprehensible universe with certain rules, which can be efficiently used for predicting our future?

One could always respond that God created the universe and made it simple enough so that we can comprehend it. This would match the words about a miracle and an undeserved gift. But shall we give up so easily? Let us consider several other questions of a similar type. Why is our universe so large? Why do parallel lines not intersect? Why do different parts of the universe look so similar? For a long time such questions looked too metaphysical to be considered seriously. Now we know that inflationary cosmology provides a possible answer to all of these questions. Let us see whether it might help us again.

To understand the issue, consider some examples of an incomprehensible universe where mathematics would be inefficient. Here is the first one: Suppose the universe is in a state with the Planck density ρ ~ 10^94 g/cm^3. Quantum fluctuations of space-time in this regime are so large that all rulers are rapidly bending and shrinking in an unpredictable way. This happens faster than one could measure distance. All clocks are destroyed faster than one could measure time. All records about the previous events become erased, so one cannot remember anything and predict the future. The universe is incomprehensible for anybody living there, and the laws of mathematics cannot be efficiently used.

If the huge density example looks a bit extreme, rest assured that it is not. There are three basic types of universes: closed, open and flat. A typical closed universe created in the hot Big Bang would collapse in about 10^-43 seconds, in a state with the Planck density. A typical open universe would grow so fast that formation of galaxies would be impossible, and our body would be instantly torn apart. Nobody would be able to live and comprehend the universe in either of these two cases. We can enjoy life in a flat or nearly flat universe, but this requires fine-tuning of initial conditions at the moment of the Big Bang with an incredible accuracy of about 10^-60.

Recent developments in string theory, which is the most popular (though extremely complicated) candidate for the role of the theory of everything, reveal an even broader spectrum of possible but incomprehensible universes. According to the latest developments in string theory, we may have about 10^500 (or more) choices of the possible state of the world surrounding us. All of these choices follow from the same string theory. However, the universes corresponding to each of these choices would look as if they were governed by different laws of physics; their common roots would be well hidden. Since there are so many different choices, some of them may describe the universe we live in. But most of these choices would lead to a universe where we would be unable to live and efficiently use mathematics and physics to predict the future.

At the time when Einstein and Wigner were trying to understand why our universe is comprehensible, everybody assumed that the universe is uniform and the laws of physics are the same everywhere. In this context, recent developments would only sharpen the formulation of the problem: We must be incredibly lucky to live in the universe where life is possible and the universe is comprehensible. This would indeed look like a miracle, like a "gift that we neither understand nor deserve." Can we do anything better than praying for a miracle?

During the last 30 years the way we think about our world changed profoundly. We found that inflation, the stage of an exponentially rapid expansion of the early universe, makes our part of the universe flat and extremely homogeneous. However, simultaneously with explaining why the observable part of the universe is so uniform, we also found that on a very large scale, well beyond the present visibility horizon of about 10^10 light years, the universe becomes 100% non-uniform due to quantum effects amplified by inflation.

This means that instead of looking like an expanding spherically symmetric ball, our world looks like a multiverse, a collection of an incredibly large number of exponentially large bubbles. For (almost) all practical purposes, each of these bubbles looks like a separate universe. Different laws of low-energy physics operate inside each of these universes.

In some of these universes, quantum fluctuations are so large that any computations are impossible. Mathematics there is inefficient because predictions cannot be memorized and used. The lifetime of some of these universes is too short. Others are long-lived, but the laws of physics there do not allow the existence of anybody who could live long enough to learn physics and mathematics.

Fortunately, among all possible parts of the multiverse there should be some exponentially large parts where we may live. But our life is possible only if the laws of physics operating in our part of the multiverse allow the formation of stable, long-lived structures capable of making computations. This implies the existence of stable (mathematical) relations that can be used for long-term predictions. The rapid development of the human race was possible only because we live in the part of the multiverse where long-term predictions are so useful and efficient that they allow us to survive in a hostile environment and win the competition with other species.

To summarize, the inflationary multiverse consists of myriads of 'universes' with all possible laws of physics and mathematics operating in each of them. We can only live in those universes where the laws of physics allow our existence, which requires making reliable predictions. In other words, mathematicians and physicists can only live in those universes which are comprehensible and where the laws of mathematics are efficient.

One can easily dismiss everything that I just said as a wild speculation. It seems very intriguing, however, that in the context of the new cosmological paradigm, which was developed during the last 30 years, we might be able, for the first time, to approach one of the most complicated and mysterious problems which bothered some of the best scientists of the 20th century.

Neuroscientist, Baylor College of Medicine; Author, Incognito: The Secret Lives of the Brain

Ceaseless Reinvention Leads To Overlapping Solutions

The elegance of the brain lies in its inelegance.

For centuries, neuroscience attempted to neatly assign labels to the various parts of the brain: this is the area for language, this one for morality, this for tool use, color detection, face recognition, and so on. This search for an orderly brain map started off as a viable endeavor, but turned out to be misguided.

The deep and beautiful trick of the brain is more interesting: it possesses multiple, overlapping ways of dealing with the world. It is a machine built of conflicting parts. It is a representative democracy that functions by competition among parties who all believe they know the right way to solve the problem.

As a result, we can get mad at ourselves, argue with ourselves, curse at ourselves and contract with ourselves. We can feel conflicted. These sorts of neural battles lie behind marital infidelity, relapses into addiction, cheating on diets, breaking of New Year's resolutions—all situations in which some parts of a person want one thing and other parts another.

These are things which modern machines simply do not do. Your car cannot be conflicted about which way to turn: it has one steering wheel commanded by only one driver, and it follows directions without complaint. Brains, on the other hand, can be of two minds, and often many more. We don't know whether to turn toward the cake or away from it, because there are several sets of hands on the steering wheel of behavior.

Take memory. Under normal circumstances, memories of daily events are consolidated by an area of the brain called the hippocampus. But in frightening situations—such as a car accident or a robbery—another area, the amygdala, also lays down memories along an independent, secondary memory track. Amygdala memories have a different quality to them: they are difficult to erase and they can return in "flash-bulb" fashion—as commonly described by rape victims and war veterans. In other words, there is more than one way to lay down memory. We're not talking about memories of different events, but different memories of the same event. The unfolding story appears to be that there may be even more than two factions involved, all writing down information and later competing to tell the story. The unity of memory is an illusion.

And consider the different systems involved in decision making: some are fast, automatic and below the surface of conscious awareness; others are slow, cognitive, and conscious. And there's no reason to assume there are only two systems; there may well be a spectrum. Some networks in the brain are implicated in long-term decisions, others in short-term impulses (and there may be a fleet of medium-term biases as well).

Attention, too, has recently come to be understood as the end result of multiple, competing networks, some for focused, dedicated attention to a specific task, and others for monitoring broadly (vigilance). They are always locked in competition to steer the actions of the organism.

Even basic sensory functions—like the detection of motion—appear now to have been reinvented multiple times by evolution. This provides the perfect substrate for a neural democracy.

On a larger anatomical scale, the two hemispheres of the brain, left and right, can be understood as overlapping systems that compete. We know this from patients whose hemispheres are disconnected: they essentially function with two independent brains. For example, put a pencil in each hand, and they can simultaneously draw incompatible figures such as a circle and a triangle. The two hemispheres function differently in the domains of language, abstract thinking, story construction, inference, memory, gambling strategies, and so on. The two halves constitute a team of rivals: agents with the same goals but slightly different ways of going about it.

To my mind, this elegant solution to the mysteries of the brain should change the goal for aspiring neuroscientists. Instead of spending years advocating for one's favorite solution, the mission should evolve into elucidating the different overlapping solutions: how they compete, how the union is held together, and what happens when things fall apart.

Part of the importance of discovering elegant solutions is capitalizing on them. The neural democracy model may be just the thing to dislodge artificial intelligence. We human programmers still approach a problem by assuming there's a best way to solve it, or that there's a way it should be solved. But evolution does not solve a problem and then check it off the list. Instead, it ceaselessly reinvents programs, each with overlapping and competing approaches. The lesson is to abandon the question "what's the most clever way to solve that problem?" in favor of "are there multiple, overlapping ways to solve that problem?" This will be the starting point in ushering in a fruitful new age of elegantly inelegant computational devices.
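As a toy illustration of this programming lesson (entirely my own sketch, not an algorithm from the essay), here are three crude, overlapping classifiers for the question "is x greater than 50?", each with its own blind spot; their majority vote outperforms any single one:

```python
# A toy "team of rivals": overlapping, individually flawed solutions
# combined by majority vote.
def make_voter(blind_spot):
    # each rival subsystem is wrong on its own blind spot and right elsewhere
    def vote(x):
        truth = x > 50
        return (not truth) if x in blind_spot else truth
    return vote

voters = [make_voter(range(10, 20)),
          make_voter(range(40, 50)),
          make_voter(range(70, 80))]

def ensemble(x):
    # the "neural democracy": majority vote among the rivals
    return sum(v(x) for v in voters) >= 2

single_errors = [sum(v(x) != (x > 50) for x in range(100)) for v in voters]
ensemble_errors = sum(ensemble(x) != (x > 50) for x in range(100))
print(single_errors)     # each rival fails on 10 of the 100 inputs
print(ensemble_errors)   # the majority vote fails on none
```

Because the blind spots do not overlap, no single failure can outvote the other two; the redundancy, not the cleverness of any one part, is what makes the system reliable.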

How The Availability Of Some Plants And Animals Can Explain Thousands Of Years Of Human History

One of the most elegant explanations I have encountered in the social sciences comes courtesy of Jared Diamond, and is outlined in his wonderful book "Guns, Germs, and Steel." Diamond attempts to answer an enormously complex and historically controversial question—why certain societies achieved such historical dominance over others—by appealing to a set of very basic differences in the physical environments from which these societies emerged (such as differences in the availability of plants and animals suitable for domestication).

These differences, Diamond argues, gave rise to a number of specific advantages (such as greater immunity to disease) that were directly responsible for the historical success of some societies.

I'm not an expert in this domain, and I accept that Diamond's explanation might be completely misguided. Yet the appeal to such basic mechanisms in order to explain such a wide set of complex observations is so deeply satisfying that I hope he is right.

Explanations that are extraordinary, both analytically and aesthetically, share, among others, these properties: (a) they are often simpler than the received wisdom, (b) they point to the true cause as lying some place quite removed from the phenomenon, and (c) they make you wish so much that you had come upon the explanation yourself.

Those of us who attempt to understand the mind have a unique limitation to confront: the object that is the knower is also the known. The mind is the thing doing the explaining; the mind is also the thing to be explained. Distance from one's own mind, distance from attachments to the specialness of one's species or tribe, getting away from introspection and intuition (not as hypothesis generators but as answers and explanations) are all especially hard to achieve when what we seek to do is explain our own minds and those of others of our kind.

For this reason, my candidate for the most deeply satisfying explanation of recent decades is the idea of bounded rationality. The idea that human beings are smart by comparison with other species, but not smart enough by their own standards (including the standard of behaving in line with basic axioms of rationality), is now a well-honed observation resting on a deep empirical foundation of supporting discoveries.

Herbert Simon put one stake in the ground through the study of information processing and AI, showing that both people and organizations follow principles of behavior such as "satisficing" that constrain them to decent but not optimal decisions. The second stake was placed by Kahneman and Tversky, who showed the stunning ways in which even experts are error-prone, with consequences not only for their own health and happiness but for that of their societies more broadly.

Together, the view of human nature that has evolved over the past four decades has systematically changed the explanation for who we are and why we do what we do. We are error-prone in the unique ways in which we are, the explanation goes, not because we have malign intent, but because of the evolutionary basis of our mental architecture, the manner in which we remember and learn information, the way in which we are affected by those around us, and so on. We are boundedly rational because the information space in which we must do our work is large compared to our capacities, which include severe limits on conscious awareness, on the ability to control behavior, and on the ability to act in line even with our own intentions.

From these bounds on rationality generally, we can look also at the compromise of ethical standards. Again the story is the same: it is not the intention to harm that is the problem. Rather, the explanation lies in such sources as the disproportionate role some information plays in decision making, the tendency to generalize or overgeneralize, and the commonness of wrongdoing that typifies daily life. These are the more potent causes of the ethical failures of individuals and institutions.

The idea that bad outcomes result from limited minds that cannot store, compute, and adapt to the demands of the environment is a radically different explanation of our capacities and thereby our nature. Its elegance and beauty come from placing the emphasis on the ordinary and the invisible rather than on specialness and malign motives. This seems not so dissimilar from another shift in explanation, from God to natural selection, and it is likely to be equally resisted.

Associate Professor of Psychology and Neuroscience; Stanford University

Expected Value (and beyond)

To make the best choices, we face the impossible task of evaluating the future. Until the invention of "expected value," people lacked a simple way to quantify the value of an uncertain future event. Expected value was famously hit upon in a 1654 correspondence between polymaths Blaise Pascal and Pierre de Fermat. Pascal had enlisted Fermat to help find a mathematical solution to the "problem of points": namely, how can a jackpot be divided between two gamblers when their game is interrupted before they learn of its final outcome?

A gamble's value obviously depends upon how much one can win. But Pascal and Fermat further concluded that a gamble's value also should be weighted by the likelihood of a win. Thus, expected value is computed as a potential event's magnitude multiplied by its probability (thus, in the case of a single gamble "x," E(x) = x*p). This formula is now so common that it is taken for granted. But I remember a fundamental shift in my worldview after my first encounter with expected value—as if an impending fork in the road transformed into a broad landscape of potentials, whose hills and valleys were defined by goodness and likelihood. This open view of all possible outcomes implies optimal choice—to maximize expected value, simply head for the highest hill. Thus, expected value is both elegant in its computation and deep in its implications for choice.
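Pascal and Fermat's rule generalizes directly from a single gamble to any set of mutually exclusive outcomes: weight each outcome's magnitude by its probability and sum. A minimal sketch (the interrupted-game stakes below are hypothetical, chosen only for illustration):

```python
def expected_value(outcomes):
    """Expected value of a gamble: sum of magnitude * probability
    over a list of (magnitude, probability) outcome pairs."""
    return sum(x * p for x, p in outcomes)

# Single gamble, E(x) = x * p: win 100 with probability 0.25.
print(expected_value([(100, 0.25)]))  # 25.0

# A "problem of points" flavor (hypothetical stakes): a player wins
# a 64-unit pot unless both remaining fair-coin rounds go against
# her, so she wins with probability 3/4.
print(expected_value([(64, 0.75), (0, 0.25)]))  # 48.0
```

The fair division of the interrupted pot is then just each player's expected value under the remaining rounds.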

Even today, expected value forms the backbone of dominant theories of choice in fields including economics and psychology. More recent replacements have mainly tweaked the key ingredients of expected value—adding a curve to the magnitude component (in the case of Expected Utility), or flattening the probability component (in the case of Prospect Theory). But beyond its longevity, what amazes me most about this seventeenth century innovation is that the brain may faithfully represent something like it. Specifically, not only does activity in mesolimbic circuits appear to correlate with expected value before the outcome of a gamble is revealed, but this activity can be used to predict diverse choices—ranging from what to buy, to which investment to make, to whom to trust.

Thus, expected value is beautiful in its simplicity and utility—and almost true. Like any good scientific theory, expected value is not only quantifiable, but also falsifiable. As it turns out, people don't always maximize expected value. Sometimes they let potential losses overshadow gains or disregard probability (as highlighted by Prospect Theory). These quirks of choice suggest that while expected value may prescribe how people should choose, it does not always describe what people do choose. On the neuroimaging front, emerging evidence suggests that while subcortical regions of the mesolimbic circuit are more sensitive to magnitude, cortical regions (i.e., the medial prefrontal cortex) more heavily weight probability. By implication, people who have suffered prefrontal damage (e.g., due to injury, illness, or age) may be more seduced by attractive but unlikely offers (e.g., lottery jackpots).

Indeed, thinking about probability seems more complex and effortful than thinking about magnitude—requiring one not only to consider the next best thing, but also the one after that, and after that, and so on. Neuroimaging findings suggest that more recently evolved parts of the prefrontal cortex allow us not to "be here now"—but instead to transport ourselves into the uncertain future. Mental and neural evidence for differentiating magnitude and probability suggest a limit on the explanatory power of expected value. To some, this limit paradoxically makes expected value all the more intriguing. Scientists often love explanations more for the questions they raise than the questions they answer.

In 1661 or 1662, in his Pensées, philosopher and mathematician Blaise Pascal articulated what would come to be known as Pascal's Wager, the question of whether or not to believe in God, in the face of the failure of reason and science to provide a definitive answer.

"You must wager. It is not optional. You are embarked. Which will you choose then?...You have two things to lose, the true and the good; and two things to stake, your reason and your will, your knowledge and your happiness; and your nature has two things to shun, error and misery. Your reason is no more shocked in choosing one rather than the other, since you must of necessity choose. This is one point settled. But your happiness? Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation that He is."

While this proposition of Pascal's is clothed in obscure religious language and on a religious topic, it is a significant and early expression of decision theory. And, stripped of its particulars, it provides a simple and effective way to reason about contemporary problems like climate change.

We don't need to be 100% sure that the worst fears of climate scientists are correct in order to act. All we need to think about are the consequences of being wrong.

Let's assume for a moment that there is no human-caused climate change, or that the consequences are not dire, and we've made big investments to avert it. What's the worst that happens? In order to deal with climate change:

1. We've made major investments in renewable energy. This is an urgent issue even in the absence of global warming, as the IEA has now revised the date of "peak oil" to 2020, only 11 years from now.

2. We've mitigated the enormous "off the books" economic losses from pollution. (China recently estimated these losses as 10% of GDP.) We currently subsidize fossil fuels in dozens of ways, by allowing power companies, auto companies, and others to keep environmental costs "off the books," by funding the infrastructure for autos at public expense while demanding that railroads build their own infrastructure, and so on.

3. We've renewed our industrial base, investing in new industries rather than propping up old ones. Climate critics like Bjorn Lomborg like to cite the cost of dealing with global warming. But the costs are similar to the "costs" incurred by record companies in the switch to digital music distribution, or the costs to newspapers implicit in the rise of the web. That is, they are costs to existing industries, but ignore the opportunities for new industries that exploit the new technology. I have yet to see a convincing case made that the costs of dealing with climate change aren't principally the costs of protecting old industries.

By contrast, let's assume that the climate skeptics are wrong. We face the displacement of millions of people, droughts, floods and other extreme weather, species loss, and economic harm that will make us long for the good old days of the current financial industry meltdown.

Climate change really is a modern version of Pascal's wager. On one side, the worst outcome is that we've built a more robust economy. On the other side, the worst outcome really is hell. In short, we do better if we believe in climate change and act on that belief, even if we turned out to be wrong.
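Stripped to its decision-theoretic skeleton, the argument is a two-by-two payoff matrix: pick the action whose worst case is least bad. A sketch with hypothetical payoffs; the numbers encode only the ordering argued above, not real estimates:

```python
# Rows: our action; columns: whether severe climate change is real.
# Payoffs are hypothetical, chosen only to reflect the argument:
# acting costs something either way, but inaction when the threat
# is real is catastrophic ("the worst outcome really is hell").
payoffs = {
    "act":    {"real": -10,   "not_real": -1},  # robust economy either way
    "ignore": {"real": -1000, "not_real": 0},   # catastrophe if wrong
}

def maximin(payoffs):
    """Choose the action with the best worst-case payoff."""
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

print(maximin(payoffs))  # act
```

As in Pascal's original wager, the conclusion holds for any payoffs with this ordering: the downside of acting is bounded, the downside of not acting is not.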

But I digress. The illustration has become the entire argument. Pascal's wager is not just for mathematicians, nor for the religiously inclined. It is a useful tool for any thinking person.

Editor, The Feuilleton (Arts and Essays), of the German Daily Newspaper, Sueddeutsche Zeitung, Munich

Subjective Environment

Explanations tend to be at their most elegant when science distills the meanderings of philosophy into fact. I was looking for explanations for an observation when I came across the theory of "Umwelt" versus "Umfeld" (vaguely, environment versus surroundings) by the Estonian biologist and forefather of biosemiotics Jakob von Uexküll. According to his definition, "Umwelt" is the subjective environment as perceived and impacted by an organism, while "Umfeld" is the objective environment which encompasses and impacts all organisms in its realm.

My observation had been a mere notion of the major difference between my native Europe and America, my adopted continent for a couple of decades. In Europe the present is perceived as the end point of history. In America the present is perceived as the beginning of the future. Philosophy or history, I hoped, would have an explanation for such a fundamental yet simple difference. Both can deliver parts of an explanation of course. The different paths the histories of ideas and the histories of the countries have taken just in the past 200 years are astounding.

Uexküll's definition of the subjective environment, as published in his book Umwelt und Innenwelt der Tiere (Environments and Inner Worlds of Animals, published in 1909 in the language of his German exile), puts both philosophy and history into perspective and context, though. Distrusting theories, he always wanted ideas to persist in nature, and he put his idea of the subjective environment to the test in the Indian Ocean, the Atlantic, and the Mediterranean, where he observed simple creatures like sea anemones, sea urchins, and crustaceans. His most famous example to explain his theory, though, was the tick. Here he found a creature whose perception and actions could each be defined by three parameters. Ticks perceive their surroundings by the directions of up and down, by warm and cold, and by the presence or absence of butyric acid. Their actions to survive and procreate are crawling, waiting, and gripping.

This model led him not only to define environment as a subjective notion. He also found the perception of time for any organism to be as subjective as the perception of space, defined by the very perceptions and actions that create the organism's subjective environment. If subjective time is defined by the experiences and actions of an organism, the context of a continent's history with its myriads of parameters turns philosophy and history into mere factors in a complex environment of collective perception. Now there was an elegant explanation for a rather simple observation. Making it even more elegant is the notion that in the context of a continent's evolution, geography, climate, food, and culture will weigh in as factors of the perception of the subjective environment and time as well, making it impossible to prove or disprove the explanation scientifically. Having rendered philosophy just one of many parameters, it thus reduces philosophy's efforts to discredit Jakob von Uexküll's definition of the subjective environment to mere meanderings.

"We Are Dreaming Machines That Construct Virtual Models Of The Real World"

The most beautiful and elegant explanation should be as strong and overwhelming as a brick smashing your head; it should break your life in two. For instance, as a result of that explanation, you should realize that even while you are dreaming your brain is active, doing what it does best: creating models of reality or, in fact, creating the reality you live in.

Descartes was aware of this fact, and that's why he concluded "I think, therefore I am", cogito ergo sum. You can think of yourself as walking in a park, but this could be just a vivid dream. Therefore, it's not possible to conclude anything about your existence based on the apparent fact of walking. However, whether you are really walking in a park or only dreaming it, you are thinking, and therefore existing. Dreaming is so similar to waking that you can't trust any sensory information as proof of your existence. You can only trust the fact of thinking or, in contemporary words, the fact that your brain is active.

Dreaming and waking are similar cognitive states, as Rodolfo Llinás says in his masterpiece "I of the Vortex". The only difference is that while dreaming, your brain is not perceiving or representing the external reality; it is emulating it and providing self-generated inputs.

The explanation is also shocking in its consequence. While waking we are also dreaming, concludes Llinás: "The waking state is a dreamlike state (…) guided and shaped by the senses, whereas regular dreaming does not involve the senses at all".

In both cases our brain generates models of reality.

With this explanation very few entities—the brain and the matter of reality—are enough to remind us how we create what is usually defined as "reality": "The only reality that exists for us is already a virtual one (…). We are basically dreaming machines that construct virtual models of the real world", says Llinás.

This is not only a beautiful explanation because of the poetic fact that reality is self-generated while dreaming, and partially generated while waking. Is there anything more beautiful than understanding how to create reality?

This is not only an elegant explanation because it shows our minuscule and entirely representative place in the ontological and physical reality, in the huge amount of matter defined as universe.

This explanation is overwhelming in practical terms because, as a philosopher and social scientist, I cannot explain the physical or the social reality without considering that we live and move in a model of reality. Including the representational, creative, and even ontological role of the brain is a naturalization project usually omitted as a result of hyper-positivism and scientific fragmentation. From Descartes to Llinás, from the understanding of galaxies to the understanding of crime, this explanation should be relevant in most scientific enterprises.

Neuroscientist; Professor of Psychiatry & Biobehavioral Sciences, David Geffen School of Medicine, UCLA; Author, Mirroring People

Like Attracts Like

The beauty of this explanation is twofold. First, it accounts for the complex organization of the cerebral cortex (the most recent evolutionary component of the brain) using a very simple rule. Second, it deals with scaling issues very well, and indeed it also accounts for a specific phenomenon in a widespread human behavior, imitation. It explains how neurons packed themselves in the cerebral cortex and how humans relate to each other. Not a small feat.

Let's start from the brain. The idea that neurons with similar properties cluster together is theoretically appealing, because it minimizes costs associated with transmission of information. This idea is also supported by empirical evidence (it does not always happen that a theoretically appealing idea is supported by empirical data, sadly). Indeed, more than a century of a variety of brain mapping techniques demonstrated the existence of 'visual cortex' (here we find neurons that respond to visual stimuli), 'auditory cortex' (here we find neurons that respond to sounds), 'somatosensory cortex' (here we find neurons that respond to touch), and so forth. When we zoom in and look in detail at each type of cortex, we also find that the 'like attracts like' principle works well. The brain forms topographic maps. For instance, let's look at the 'motor cortex' (here we find neurons that send signals to our muscles so that we can move our body, walk, grasp things, move the eyes and explore the space surrounding us, speak, and obviously type on a keyboard, as I am doing now). In the motor cortex there is a map of the body, with neurons sending signals to hand muscles clustering together and being separate from neurons sending signals to feet or face muscles. So far, so good.

In the motor cortex, however, we also find multiple maps for the same body part (for instance, the hand). Furthermore, these multiple maps are not adjacent. What is going on here? It turns out that body parts are only one of the variables that are mapped by the motor cortex. Other important variables are, for instance, different types of coordinated actions and the space sector in which the action ends. The coordinated actions that are mapped by the motor cortex belong to a number of categories, most notably defensive actions (that is, actions to defend one's own body), hand-to-mouth actions (important to eat and drink!), and manipulative actions (using skilled finger movements to manipulate objects). The problem here is that there are multiple dimensions that must be mapped onto a two-dimensional entity (we can flatten the cerebral cortex and visualize it as a surface area). This problem needs to be solved with a process of dimensionality reduction. Computational studies have shown that algorithms that do dimensionality reduction while optimizing the similarity of neighboring points (our 'like attracts like' principle) produce maps that reproduce well the complex, somewhat fractured maps described by empirical studies of the motor cortex. Thus, the principle of 'like attracts like' seems to work well even when multiple dimensions must be mapped onto a two-dimensional entity (our cerebral cortex).
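A toy version of such an algorithm is Kohonen's self-organizing map: each training step pulls the best-matching unit and its neighbors toward the current input, so neighboring units come to represent similar inputs. A minimal one-dimensional sketch (the unit count, step count, and learning-rate schedule are arbitrary choices for illustration, not taken from the studies discussed above):

```python
import random

def train_som(n_units=10, steps=2000, seed=0):
    """1-D self-organizing map over scalar inputs in [0, 1].
    The neighborhood update is the 'like attracts like' step:
    units adjacent to the winning unit move toward the same input."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for step in range(steps):
        x = rng.random()  # random training input
        # Best-matching unit: the one closest to the input.
        bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
        lr = 0.5 * (1 - step / steps)  # decaying learning rate
        for i in range(n_units):
            if abs(i - bmu) <= 1:  # winner and its immediate neighbors
                weights[i] += lr * (x - weights[i])
    return weights

w = train_som()
print(all(0.0 <= v <= 1.0 for v in w))  # True: units span the input range
```

After training, adjacent units tend to carry similar weights, the one-dimensional analogue of the smooth-but-fractured cortical maps the computational studies describe.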

Let's move to human behavior. Imitation in humans is widespread and often automatic. It is important for learning and transmission of culture. We tend to align our movements (and even words!) during social interactions without even realizing it. However, we don't imitate other people in an equal way. Perhaps not surprisingly, we tend to imitate more those people who are like us. Soon after birth, infants prefer faces of their own race and respond more receptively to strangers of their own race. Adults make education and even career choices that are influenced by models of their own race. This is a phenomenon called the self-similarity bias. Since imitation increases liking, the self-similarity bias most likely influences our social preferences too. We tend to imitate others who are like us, and by doing that, we tend to like those people even more. From neurons to people, the very simple principle of 'like attracts like' has a remarkable explanatory power. This is what an elegant scientific explanation is supposed to do: explain a lot in a simple way.

Professor of Psychology, University of Texas, Austin; Coauthor: Why Women Have Sex; Author, The Dangerous Passion

Sexual Conflict Theory

A fascinating parallel has occurred in the history of the traditionally separate disciplines of evolutionary biology and psychology. Biologists historically viewed reproduction as an inherently cooperative venture. A male and female would couple for the shared goal of reproduction of mutual offspring. In psychology, romantic harmony was presumed to be the normal state. Major conflicts within romantic couples were and still are typically seen as signs of dysfunction. A radical reformulation embodied by sexual conflict theory changes these views.

Sexual conflict occurs whenever the reproductive interests of an individual male and individual female diverge, or more precisely when the "interests" of genes inhabiting individual male and female interactants diverge. Sexual conflict theory defines the many circumstances in which discord is predictable and entirely expected.

Consider deception on the mating market. If a man is pursuing a short-term mating strategy and the woman for whom he has sexual interest is pursuing a long-term mating strategy, conflict between these interactants is virtually inevitable. Men are known to feign long-term commitment, interest, or emotional involvement for the goal of casual sex, interfering with women's long-term mating strategy. Men have evolved sophisticated strategies of sexual exploitation. Conversely, women sometimes present themselves as costless sexual opportunities, and then intercalate themselves into a man's mating mind to such a profound degree that he wakes up one morning and suddenly realizes that he can't live without her—one version of the 'bait and switch' tactic in women's evolved arsenal.

Once coupled in a long-term romantic union, a man and a woman often still diverge in their evolutionary interests. A sexual infidelity by the woman might benefit her by securing superior genes for her progeny, an event that comes with catastrophic costs to her hapless partner who unknowingly devotes resources to a rival's child. From a woman's perspective, a man's infidelity risks the diversion of precious resources to rival women and their children. It poses the danger of losing the man's commitment entirely. Sexual infidelity, emotional infidelity, and resource infidelity are such common sources of sexual conflict that theorists have coined distinct phrases for each.

But all is not lost. As evolutionist Helena Cronin has eloquently noted, sexual conflict arises in the context of sexual cooperation. The evolutionary conditions for sexual cooperation are well-specified: When relationships are entirely monogamous; when there is zero probability of infidelity or defection; when the couple produces offspring together, the shared vehicles of their genetic cargo; and when joint resources cannot be differentially channeled, such as to one set of in-laws versus another.

These conditions are sometimes met, leading to great love and harmony between a man and a woman. Yet the prevalence of deception, sexual coercion, stalking, intimate partner violence, murder, and the many forms of infidelity reveals that conflict between the sexes is ubiquitous. Sexual conflict theory, a logical consequence of modern evolutionary genetics, provides the most beautiful theoretical explanation for these darker sides of human sexual interaction.

Research in fundamental particle physics has culminated in our current Standard Model of elementary particles. Using ever larger machines, we have been able to identify and determine the properties of a whole zoo of elementary particles. These properties present many interesting patterns. All the matter we see around us is composed of electrons and up and down quarks, interacting differently with photons of electromagnetism, W and Z bosons of the weak force, gluons of the strong force, and gravity, according to their different values and kinds of charges.

Additionally, an interaction between a W and an electron produces an electron neutrino, and these neutrinos are now known to permeate space—flying through us in great quantities, interacting only weakly. A neutrino passing through the earth probably wouldn't even notice it was there. Together, the electron, electron neutrino, and up and down quarks constitute what is called the first generation of fermions.

Using high energy particle colliders, physicists have been able to see even more particles. It turns out the first generation fermions have second and third generation partners, with identical charges to the first but larger masses. And nobody knows why. The second generation partner to the electron is called the muon, and the third generation partner is called the tau. Similarly, the down quark is partnered with the strange and bottom quarks, and the up quark has partners called the charm and top, with the top discovered in 1995. Last and least, the electron neutrinos are partnered with muon and tau neutrinos. All of these fermions have different masses, arising from their interaction with a theorized background Higgs field. Once again, nobody knows why there are three generations, or why these particles have the masses they do. The Standard Model, our best current description of fundamental physics, lacks a good explanation.

The dominant research program in high energy theoretical physics, string theory, has effectively given up on finding an explanation for why the particle masses are what they are. The current non-explanation is that they arise by accident, from the infinite landscape of theoretical possibilities. This is a cop-out. If a theory can't provide a satisfying explanation of an important pattern in nature, it's time to consider a different theory. Of course, it is possible that the pattern of particle masses arose by chance, or some complicated evolution, as did the orbital distances of our solar system's planets. But, as experimental data accumulates, patterns either fade or sharpen, and in the newest data on particle masses an intriguing pattern is sharpening. The answer may come from the shy neutrino.

The masses of the three generations of fermions are described by their interaction with the Higgs field. In more detail, this is described by "mixing matrices," involving a collection of angles and phases. There is no clear, a priori reason why these angles and phases should take particular values, but they are of great consequence. In fact, a small difference in these phases determines the prevalence of matter over antimatter in our universe. Now, in the mixing matrix for the quarks, the three angles and one phase are all quite small, with no discernible pattern. But for neutrinos this is not the case. Before the turn of the 21st century it was not even clear that neutrinos mixed. Too few electron neutrinos seemed to be coming from the sun, but scientists weren't sure why. In the past few years our knowledge has improved immensely. From the combined effort of many experimental teams we now know that, to a remarkable degree of precision, the three angles for neutrinos have sin squared equal to 1/2, 1/3, and 0. We do need to consider the possibility of coincidence, but as random numbers go, these do not seem very random. In fact, this mixing corresponds to a "tribimaximal" matrix, related to the geometric symmetry group of the tetrahedron.
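The tribimaximal pattern can be checked directly. In one common sign convention (the Harrison-Perkins-Scott form), the matrix entries are built from 1/√2, 1/√3, and 1/√6, and the three standard mixing angles can be read off from its first row and third column; a sketch:

```python
from math import sqrt, isclose

# Tribimaximal mixing matrix, one common sign convention.
# Rows = flavor states (e, mu, tau); columns = mass states 1..3.
U = [
    [sqrt(2 / 3),   1 / sqrt(3),  0],
    [-1 / sqrt(6),  1 / sqrt(3),  1 / sqrt(2)],
    [1 / sqrt(6),  -1 / sqrt(3),  1 / sqrt(2)],
]

# Standard-parametrization angles from the matrix elements:
sin2_theta13 = U[0][2] ** 2                       # |U_e3|^2
sin2_theta12 = U[0][1] ** 2 / (1 - U[0][2] ** 2)  # |U_e2|^2 / (1 - |U_e3|^2)
sin2_theta23 = U[1][2] ** 2 / (1 - U[0][2] ** 2)  # |U_mu3|^2 / (1 - |U_e3|^2)

print(sin2_theta13, sin2_theta12, sin2_theta23)  # approximately 0, 1/3, 1/2

# Sanity check: each row is a unit vector, as unitarity requires.
print(all(isclose(sum(x * x for x in row), 1.0) for row in U))  # True
```

The three sin-squared values recovered here are exactly the 0, 1/3, and 1/2 that the experiments converged on.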

What is tetrahedral symmetry doing in the masses of neutrinos?! Nobody knows. But you can bet there will be a good explanation. It is likely that this explanation will come from mathematicians and physicists working closely with Lie groups. The most important lesson from the great success of Einstein's theory of General Relativity is that our universe is fundamentally geometric, and this idea has extended to the geometric description of known forces and particles using group theory. It seems natural that a complete explanation of the Standard Model, including why there are three generations of fermions and why they have the masses they do, will come from the geometry of group theory. This explanation does not yet exist, but when it does it will be deep, elegant, and beautiful—and it will be my favorite.

My favorite elegant explanations will already have been picked by others who turned in their homework early. Although I am a theoretical physicist, my choice could easily be Darwin. Closer to my area of expertise, there is General Relativity: Einstein's realization that free-fall is a property of space-time itself, which readily resolved a great mystery (why gravity acts in the same way on all bodies). So, in the interest of diversity, I will add a modifier and discuss my favorite annoying elegant explanation: quantum theory.

As explanations go, few are broader in applicability than the revolutionary framework of Quantum Mechanics, which was assembled in the first quarter of the 20th century. Why are atoms stable? Why do hot things glow? Why can I move my hand through air but not through a wall? What powers the sun? The strange workings of Quantum Mechanics are at the core of our remarkably precise and quantitative understanding of these and many other phenomena.

And strange they certainly are. An electron takes all paths between the two points at which it is observed, and it is meaningless to ask which path it actually took. We must accept that its momentum and position cannot both be known with arbitrary precision. For a while, we were even expected to believe that there are two different laws for time evolution: Schrödinger's equation governs unobserved systems, but the mysterious "collapse of the wave function" kicks in when a measurement is performed. The latter, with its unsettling implication that conscious observers might play a role in fundamental theory, has been supplanted, belatedly, by the notion of decoherence. The air and light in a room, which in classical theory would have little effect on a measuring apparatus, fundamentally alter the quantum-mechanical description of any object that is not carefully insulated from its environment. This, too, is strange. But do the calculation, and you will find that what we used to call "wave function collapse" need not be postulated as a separate phenomenon. Rather, it emerges from Schrödinger's equation, once we take the role of the environment into account.

Just because Quantum Mechanics is strange doesn't mean that it is wrong. The arbiter is Nature, and experiments have confirmed many of the most bizarre properties of this theory. Nor does Quantum Mechanics lack elegance: it is a rather simple framework with enormous explanatory power. What annoys me is this: we do not know for sure that Quantum Mechanics is wrong.

Many great theories in physics carry within them a seed of their demise. This seed is a beautiful thing. It hints at profound discoveries and conceptual revolutions still to come. One day, the beautiful explanation that has just transformed our view of the Universe will be supplanted by another, even deeper insight. Quantitatively, the new theory must reproduce all the experimental successes of the old one. But qualitatively, it is likely to rest on novel concepts, allowing for hitherto unimaginable questions to be asked and knowledge to be gained.

Newton, for instance, was troubled by the fact that his theory of gravitation allowed for instant communication across arbitrarily large distances. Einstein's theory of General Relativity fixed this problem, and as a byproduct, gave us dynamical spacetime, black holes, and an expanding universe that probably had a beginning.

General Relativity, in turn, is only a classical theory. It rests on a demonstrably false premise: that position and momentum can be known simultaneously. This may be a good approximation for apples, planets, and galaxies: large objects, for which gravitational interactions tend to be much more important than for the tiny particles of the quantum world. But as a matter of principle, the theory is wrong. The seed is there. General Relativity cannot be the final word; it can only be an approximation to a more general Quantum Theory of Gravity.

But what about Quantum Mechanics itself? Where is its seed of destruction? Amazingly, it is not obvious that there is one. The very name of the great quest of theoretical physics—"quantizing General Relativity"—betrays an expectation that quantum theory will remain untouched by the unification we seek. String theory—in my view, by far the most successful, if incomplete, result of this quest—is strictly quantum mechanical, with no modifications whatsoever to the framework that was completed by Heisenberg, Schrödinger, and Dirac. In fact, the mathematical rigidity of Quantum Mechanics makes it difficult to conceive of any modifications, whether or not they are called for by observation.

Yet, there are subtle hints that Quantum Mechanics, too, will suffer the fate of its predecessors. The most intriguing, in my mind, is the role of time. In Quantum Mechanics, time is an essential evolution parameter. But in General Relativity, time is just one aspect of spacetime, a concept that we know breaks down at singularities deep inside black holes. Where time no longer makes sense, it is hard to see how Quantum Mechanics could still reign. As Quantum Mechanics surely spells trouble for General Relativity, the existence of singularities suggests that General Relativity may also spell trouble for Quantum Mechanics. It will be fascinating to watch this battle play out.

Journalist; Author, A Planet of Viruses; Science Ink: Tattoos of the Science Obsessed

A Hot Young Earth: Unquestionably Beautiful and Stunningly Wrong

Around 4.567 billion years ago, a giant cloud of dust collapsed in on itself. At the center of the cloud our Sun began to burn, while the outlying dust grains began to stick together as they orbited the new star. Within a million years, those clumps of dust had become protoplanets. Within about 50 million years, our own planet had already reached about half its current size. As more protoplanets crashed into Earth, it continued to grow. All told, it may have taken another 50 million years to reach its full size—a time during which a Mars-sized planet crashed into it, leaving behind a token of its visit: our Moon.

The formation of the Earth commands our greatest powers of imagination. It is primordially magnificent. But elegant is not the word I'd use to describe the explanation I just sketched out. Scientists did not derive it from first principles. There is no equivalent of E=mc² that predicts how the complex violence of the early Solar System produced a watery planet that could support life.

In fact, the only reason that we now know so much about how the Earth formed is because geologists freed themselves from a seductively elegant explanation that was foisted on them 150 years ago. It was unquestionably beautiful, and stunningly wrong.

The explanation was the work of one of the greatest physicists of the nineteenth century, William Thomson (a.k.a. Lord Kelvin). Kelvin's accomplishments ranged from the concrete (figuring out how to lay a telegraph cable from Europe to America) to the abstract (the first and second laws of thermodynamics). Kelvin spent much of his career writing equations that could let him calculate how fast hot things got cold. Kelvin realized that he could use these equations to estimate how old the Earth is. "The mathematical theory on which these estimates are founded is very simple," Kelvin declared when he unveiled it in 1862.

At the time, scientists generally agreed that the Earth had started out as a ball of molten rock and had been cooling ever since. Such a birth would explain why rocks are hot at the bottom of mine shafts: the surface of the Earth was the first part to cool, and ever since, the remaining heat inside the planet has been flowing out into space. Kelvin reasoned that over time, the planet should steadily grow cooler. He used his equations to calculate how long it should take for a molten sphere of rock to cool to Earth's current temperature, with its observed rate of heat flow. His verdict was a brief 98 million years.
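
Kelvin's kind of calculation can be sketched in a few lines. For rock cooling by conduction alone from a uniform initial temperature, the surface temperature gradient decays with time in a known way, so measuring today's gradient fixes the age. The numbers below are illustrative stand-ins for the sort of values Kelvin worked with, not his exact inputs:

```python
import math

def conductive_age_years(t_initial, gradient, diffusivity):
    """Age of a conductively cooling half-space of rock.

    Rock cooling by conduction from an initially uniform excess
    temperature t_initial (kelvin) shows a surface gradient of
    t_initial / sqrt(pi * diffusivity * t) at time t; solving for t
    gives the age. gradient is in K/m, diffusivity in m^2/s; the
    result is in years.
    """
    age_seconds = t_initial**2 / (math.pi * diffusivity * gradient**2)
    return age_seconds / 3.156e7  # seconds in a year

# An initially molten Earth ~3900 K hotter than its surface, a present
# geothermal gradient of roughly 1 degree F per 50 feet (~0.037 K/m),
# and a typical rock diffusivity of ~1.2e-6 m^2/s:
age = conductive_age_years(3900.0, 0.037, 1.2e-6)
print(f"about {age / 1e6:.0f} million years")
```

With these inputs the sketch lands within shouting distance of Kelvin's 98 million years. The equations are fine, and the answer is still wrong, because the model assumes the Earth cools by conduction alone.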

Geologists howled in protest. They didn't know how old the Earth was, but they thought in billions of years, not millions. Charles Darwin—who was a geologist before he was a biologist—estimated that it had taken 300 million years for a valley in England to erode into its current shape. The Earth itself, Darwin argued, was far older. And later, when Darwin published his theory of evolution, he took it for granted that the Earth was inconceivably old. That luxury of time provided room for evolution to work slowly and imperceptibly.

Kelvin didn't care. His explanation was so elegant, so beautiful, so simple that it had to be right. It didn't matter how much trouble it caused for other scientists; to reject it, they would have to ignore thermodynamics. In fact, Kelvin made even more trouble for geologists when he took another look at his equations. He decided his first estimate had been too generous. The Earth might be only 10 million years old.

It turned out that Kelvin was wrong, but not because his equations were ugly or inelegant. They were flawless. The problem lay in the model of the Earth to which Kelvin applied his equations.

The story of Kelvin's refutation got a bit garbled in later years. Many people (myself included) have mistakenly claimed that his error stemmed from his ignorance of radioactivity, which was only discovered in 1896, as physicists began to probe the structure of the atom. The physicist Ernest Rutherford declared that the heat released as radioactive atoms broke down inside the Earth kept it warmer than it would be otherwise. Thus a hot Earth did not have to be a young Earth.

It's true that radioactivity does give off heat, but there isn't enough inside the planet to account for the heat flowing out of it. Instead, Kelvin's real mistake was assuming that the Earth was just a solid ball of rock. In reality, the rock flows like syrup, its heat lifting it up towards the crust, where it cools and then sinks back into the depths once more. This stirring of the Earth is what causes earthquakes, drives old crust down into the depths of the planet, and creates fresh crust at ocean ridges. It also drives heat up into the crust at a much greater rate than Kelvin envisioned.

That's not to say that radioactivity didn't have its own part to play in showing that Kelvin was wrong. Physicists realized that the tick-tock of radioactive decay created a clock that they could use to estimate the age of rocks with exquisite precision. Thus we can now say that the Earth is not just billions of years old, but 4.567 billion.
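
That clock is simple enough to state as code. A minimal sketch of the standard decay-clock formula, using uranium-238's half-life; the mineral and ratio in the example are illustrative, not a real measurement:

```python
import math

def radiometric_age_years(daughter_parent_ratio, half_life_years):
    """Age from the tick-tock of radioactive decay.

    Surviving parent atoms follow N = N0 * 2**(-t / half_life), so the
    daughter/parent ratio D/P equals 2**(t / half_life) - 1, and
    t = half_life * log2(1 + D/P).
    """
    return half_life_years * math.log2(1 + daughter_parent_ratio)

U238_HALF_LIFE = 4.468e9  # years

# A mineral in which decay products now equal the surviving uranium
# (D/P = 1) is exactly one half-life old:
age = radiometric_age_years(1.0, U238_HALF_LIFE)
print(f"{age / 1e9:.3f} billion years")
```

Real dating work corrects for initial daughter atoms and uses several decay systems at once, but the principle is just this logarithm.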

Elegance unquestionably plays a big part in the advancement of science. The mathematical simplicity of quantum physics is lovely to behold. But in the hands of geologists, quantum physics has brought to light the glorious, messy, and very inelegant history of our planet.

Co-Director of LSE's Centre for Philosophy of Natural and Social Science; Author, The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today

In The Beginning Is The Theory

Let's eavesdrop on an exchange between Charles Darwin and Karl Popper. Darwin, exasperated at the crass philosophy of science peddled by his critics, exclaims: "How odd it is that anyone should not see that all observation must be for or against some view if it is to be of any service!" And, when the conversation turns to evolution, Popper observes: "All life is problem-solving … from the amoeba to Einstein, the growth of knowledge is always the same".

There is a confluence in their thinking. Though travelling by different pathways, they have arrived at the same insight. It is to do with the primacy and fundamental role of theories—of ideas, hypotheses, perspectives, views, dispositions and the like—in the acquisition and growth of knowledge. Darwin was right to stress that such primacy is needed 'if the observation is to be of any service'. But the role of a 'view' also goes far deeper. As Darwin knew, it is impossible to observe at all without some kind of view. If you are unconvinced, try this demonstration, one that Popper liked to use in lectures. "Observe!" Have you managed that? No. Because, of course, you need to know "Observe what?" All observation is in the light of some theory; all observation must be in the light of some theory. So all observation is theory-laden—not sometimes, not contingently, but always and necessarily.

This is not to depreciate observation, data, facts. On the contrary, it gives them their proper due. Only in the light of a theory, a problem, the quest for a solution, can they speak to us in revealing ways.

Thus the insight is immensely simple. But it has wide relevance and great potency. Hence its elegance and beauty.

Here are two examples, first from Darwin's realm then from Popper's.

• Consider the tedious but tenacious argument 'genes versus environment'. I'll take a well-studied case. Indigo buntings migrate annually over long distances. To solve the problem of navigation, natural selection equipped them with the ability to construct a mental compass by studying the stars in the night sky, boy-scout fashion, during their first few months of life. The fount of this spectacular adaptation is a rich source of information that natural selection, over evolutionary time, has packed into the birds' genes—in particular, information about the rotation of the stellar constellations. Thus buntings that migrate today can use the same instincts and the same environmental regularities to fashion the same precision-built instrument as did their long-dead ancestors.

And all adaptations work in this way. By providing the organism with innate information about the world, they open up resources for the organism to meet its own distinctive adaptive needs; thus natural selection creates the organism's own tailor-made, species-specific environment. And different adaptive problems therefore give rise to different environments; so different species, for example, have different environments.

Thus what constitutes an environment depends on the organism's adaptations. Without innate information, carried by the genes, specifying what constitutes an environment, no environments would exist. And thus environments, far from being separate from biology, autonomous and independent, are themselves in part fashioned by biology. Environment is therefore a biological issue, an issue that necessarily begins with biologically-stored information.

But aren't we anyway all interactionists now? No longer genes versus environment but gene—environment interaction? Yes, of course; interaction is what natural selection designed genes to do. Bunting genes are freighted with information about how to learn from stars because stars are as vital a part of a bunting's environment as is the egg in which it develops or the water that it drinks; buntings without stars are destined to be buntings without descendants. But interaction is not parity; the information must come first. Just try this parity test. Try specifying 'an' environment without first specifying whether it is the environment of a bunting or a human, a male or a female, an adaptation for bird navigation or for human language. The task is of course impossible; the specification must start from the information that is stored in adaptations.

And here's another challenge to parity. Genes use environments for a purpose—self-replication. Environments, however, have no purposes; so they do not use genes. Thus bunting-genes are machines for converting stars into more bunting-genes; but stars are not machines for converting bunting-genes into more stars.

• The second example is to do with the notion of objectivity in science. Listen further to Darwin's complaint about misunderstandings over scientific observation: "How profoundly ignorant … [this critic] must be of the very soul of observation! About thirty years ago there was much talk that geologists ought only to observe and not theorize; and I well remember some one saying that at this rate a man might as well go into a gravel-pit and count the pebbles and describe the colours".

Nearly two hundred years later, variants of that thinking still stalk science. Consider the laudable, but now somewhat tarnished, initiative to establish evidence-based policy-making. What went wrong? All too often, objective evidence was taken to be data uncontaminated by the bias of a prior theory. But without 'the very soul' of a theory as guidance, what constitutes evidence? Objectivity isn't to do with stripping out all presuppositions. Indeed, the more that's considered to be possible or desirable, the more the undetected, un-criticised presuppositions and the less the objectivity. At worst, a desired but un-stated goal can be smuggled in at the outset. And the upshot? This well-meant approach is often justifiably derided as 'policy-based evidence-making'.

An egregious example from my own recent experience, which still has me reeling with dismay, was from a researcher on 'gender diversity' whose concern was discrimination against women in the professions. He proudly claimed that his research was absolutely free of any prior assumptions about male/female differences and that it was therefore entirely neutral and unbiased. If any patterns of differences emerged from the data, his neutral, unbiased assumption would be that they were the result of discrimination. So might he accept that evolved sex differences exist? Yes; if it were proven. And what might such a proof look like? Here he fell silent, at a loss—unsurprisingly, given that his 'neutral' hypotheses had comprehensively precluded such differences at the start. What irony that, in the purported interests of scientific objectivity, he ostensibly felt justified in clearing the decks of the entire wealth of current scientific findings.

The Darwin-Popper insight, in spite of its beauty, has yet to attract the admirers it deserves.

Nature is lazy. Scientific paradigms and "ultimate" visions of the universe come and go, but the idea of "least action" has remained remarkably unperturbed. From Newton's classical mechanics to Einstein's general relativity to Schrödinger's quantum mechanics, every major theory has been reformulated with this single principle, by Euler, Hilbert, and Feynman, respectively. The reliability of this framework suggests a form for whatever the next major paradigm shift may be.

Action is a strange quantity, in the units of energy multiplied by time. A principle of least action does not explicitly specify what will happen, like an equation of motion does, but simply asserts that the action will be the least of any conceivable actions. In some sense, the universe is maximally efficient. To be precise, the action integrated over any interval of time is always minimal. Euler and Lagrange discovered that not only is this principle true, but one can derive all of Newtonian physics from it. The Newtonian worldview was often characterized as "clockwork," both because clockwork was an apt contemporary technology, and because of the crucially absolute measurement of time.
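
The classical statement can be written compactly. This is the standard textbook form of the Euler-Lagrange result, given here for concreteness:

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot q)\, dt, \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0
```

For a particle with Lagrangian L = ½m(dq/dt)² − V(q), the right-hand equation reduces to m(d²q/dt²) = −V′(q), which is exactly Newton's second law.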

In Einstein's relativity, absolute time was no longer possible, and totally new equations of motion had to be written. Now we have to deal with four-dimensional spacetime, rather than the familiar three-dimensional space and the special dimension of time. At speeds much less than the speed of light, a first-order approximation can transform Einstein's equations into Newton's, yet the resemblance is hardly obvious. However, the principle of least action remains much the same, but with a difference that intuitively connects to the essence of Einstein's insight: instead of just integrating over time, we must integrate over space too, with the speed of light serving as a constant exchange rate between spatial and temporal units. The essence of relativity is not its well-known consequences—time dilation, length contraction, or even E=mc². Rather, it is the more intuitive idea that space and time are simply different ways of looking at the same thing. Much more complicated mathematics is needed to derive Einstein's equations from this principle, but the legendary mathematician David Hilbert was able to do it. Maxwell's theory of electromagnetism, too, can be derived from the least action principle by a generalization of operators. Even more remarkably, combining the least-action tweaks that lead to Einstein's and Maxwell's equations respectively produces modern relativistic electromagnetism.

By this point you may be imagining that practically any physical theory can be formulated using the principle of least action. But in fact, many cannot—for instance, an early attempt at quantum electrodynamics, put forth by Paul Dirac. Such theories tend to have other issues that preclude their practical use; under many conditions, Dirac's theory prescribed infinite energies (clearly a dramatic difference from experiment). Quantum electrodynamics was later "fixed" by Feynman, a feat for which he won the Nobel Prize. In his Nobel lecture, he mentioned that the initial confirmation he was on the right track was that his version, unlike Dirac's, could be formulated as a principle of least action.

I believe it's reasonable to expect it will be possible to explain the next major physical theory using the least action framework, whatever it may be. Perhaps it will benefit us as scientists to explore our theories within this framework, rather than attempting to guess at once the explicit equations, and leaving the inevitable least action derivation as an exercise for some enterprising mathematician to work out.

The essential idea of least action transcends even the deepest of theoretical physics, and enters the domain of metaphysics. Claude Shannon derived a formula to quantify uncertainty, which von Neumann pointed out was identical to the formula used in thermodynamics to compute entropy. Edwin Jaynes put forth an interpretation of thermodynamics, in which entropy simply is uncertainty, nothing more and nothing less. Although the formal mathematical underpinnings remain controversial, I find it very worthwhile, at least as an intuitive explanation. Jaynes' followers propose a profound connection between action and information, such that the principle of least action and the laws of thermodynamics both derive from basic symmetries of logic itself. We need only accept that all conceivable universes are equally likely, a principle of least information. Under this assumption, we can imagine a smooth spectrum from metaphysics to physics, from the omniverse to the multiverse to the universe, where the fundamental axis is information, and the only fundamental law is that you can never assume more than you know.
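
The formal identity von Neumann pointed out can be stated in one line. These are the standard Shannon and Gibbs formulas, shown side by side for concreteness:

```latex
H = -\sum_i p_i \log_2 p_i
\qquad \text{vs.} \qquad
S = -k_B \sum_i p_i \ln p_i
```

The two expressions differ only by the base of the logarithm and the constant k_B: thermodynamic entropy is, formally, Shannon uncertainty measured in different units.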

Starting from nothingness, or the absence of information, there is a flowering of possible refinements toward specifying a universe, one of which leads to our Big Bang, to particles, fields and spacetime, and ultimately to intelligent life. The reason that we had a long period of stellar, planetary and biological evolution is that this is the path to intelligent life, which required the least action. Imagine how much action it would take to create intelligence directly from nothing! Universes without intelligent life might require even less action, but there is nobody in those universes to wonder where they came from.

At least for me, the least action perspective explains all known physics as well as the origin of our universe, and that sure is deep and beautiful.

The first time I saw a fitness landscape cartoon (in Garrett Hardin's Man And Nature, 1969), I knew it was giving me advice on how not to get stuck over-adapted—hence overspecialized—on some local peak of fitness, when whole mountain ranges of opportunity could be glimpsed in the distance, but getting to them involved venturing "downhill" into regions of lower fitness. I learned to distrust optimality.

Fitness landscapes (sometimes called "adaptive landscapes") keep turning up when people try to figure out how evolution or innovation works in a complex world. An important critique by Marvin Minsky and Seymour Papert of early optimism about artificial intelligence warned that seemingly intelligent agents would dumbly "hill climb" to local peaks of illusory optimality and get stuck there. Complexity theorist Stuart Kauffman used fitness landscapes to visualize his ideas about the "adjacent possible" in 1993 and 2000, and that led in turn to Steven Johnson's celebration of how the "adjacent possible" works for innovation in Where Good Ideas Come From.
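
The trap Minsky and Papert warned about is easy to reproduce. Below, a greedy hill climber on a made-up one-dimensional fitness landscape stops on a low local peak even though a peak twice as high sits nearby; the landscape, step size, and starting point are arbitrary choices for illustration:

```python
import math

def hill_climb(fitness, x, step=0.01, max_steps=10_000):
    """Greedy local search: move one step left or right only if that
    strictly improves fitness; stop at the first local peak."""
    for _ in range(max_steps):
        candidates = [x - step, x, x + step]
        best = max(candidates, key=fitness)
        if best == x:
            return x  # both neighbors are downhill: stuck on a peak
        x = best
    return x

# A toy landscape: a low peak near x = 1 and a peak twice as high
# near x = 4, separated by a valley of lower fitness.
def fitness(x):
    return math.exp(-(x - 1)**2) + 2 * math.exp(-(x - 4)**2)

# Started at x = 0.5, the climber settles on the low peak near 1 and
# never crosses the valley to the higher peak near 4.
peak = hill_climb(fitness, 0.5)
print(peak)
```

Escaping such traps is exactly what Wright's mechanisms (drift, subdivided populations) and modern search heuristics (random restarts, simulated annealing) are for.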

The man behind the genius of fitness landscapes was the founding theorist of population genetics, Sewall Wright (1889-1988). In 1932 he came up with the landscape as a way to visualize and explain how biological populations escape the potential trap of a local peak by imagining what might drive their evolutionary "path" downhill from the peak toward other possibilities. Consider these six diagrams of his:

The first two illustrate how low selection pressure or a high rate of mutation (which comes with small populations) can broaden the range of a species whereas intense selection pressure or a low mutation rate can severely limit a species to the very peak of local fitness. The third diagram shows what happens when the landscape itself shifts, and the population has to evolve to shift with it.

The bottom row explores how small populations respond to inbreeding by wandering ineffectively. The best mode of exploration Wright deemed the final diagram, showing how a species can divide into an array of races that interact with one another. That jostling crowd explores well, and it can respond to opportunity.

Fitness landscapes express so much so economically. There's no better way, for example, to show the different modes of evolution of a remote oceanic island and a continental jungle. The jungle is dense and "rugged" with steep peaks and valleys, isolating countless species on their tiny peaks of high specialization. The island, with its few species, is like a rolling landscape of gentle hills with species casually wandering over them, evolving into a whole array of Darwin's finches, say. The island creatures and plants "lazily" become defenseless against invaders from the mainland.

You realize that for each species, its landscape consists almost entirely of other species, all of them busy evolving right back. That's co-evolution. We are all each other's fitness landscapes.

The deepest, most elegant, and most beautiful explanations are the ones we find so overwhelmingly compelling that we don't even realize they're there. It can take years of philosophical training to recognize their presence and to evaluate their merits.

Consider the following three examples:

REALISM. We explain the success of our scientific theories by appeal to what philosophers call realism—the idea that they are more or less true. In other words, chemistry "works" because atoms actually exist, and hand washing prevents disease because there really are loitering pathogens.

OTHER MINDS. We explain why people act the way they do by positing that they have minds more or less like our own. We assume that they have feelings, beliefs, and desires, and that they are not (for instance) zombie automata that convincingly act as if they have minds. This requires an intuitive leap that engages the so-called "problem of other minds."

CAUSATION. We explain the predictable relationship between some events we call causes and others we call effects by appeal to a mysterious power called causation. Yet, as noted by 18th century philosopher David Hume, we never "discover anything but one event following another," and never directly observe "a force or power by which the cause operates, or any connexion between it and its supposed effect."

These explanations are at the core of humans' understanding of the world—of our intuitive metaphysics. They also illustrate the hallmarks of a satisfying explanation: they unify many disparate phenomena by appealing to a small number of core principles. In other words, they are broad but simple. Realism can explain the success of chemistry, but also of physics, zoology, and deep-sea ecology. A belief in other minds can help someone understand politics, their family, and Middlemarch. And assuming a world governed by orderly, causal relationships helps explain the predictable associations between the moon and the tides as well as that between caffeine consumption and sleeplessness.

Nonetheless, each explanation has come under serious attack at one point or another. Take realism, for example. While many of our current scientific theories are admittedly impressive, they come at the end of a long succession of failures: every past theory has been wrong. Ptolemy's astronomy had a good run, but then came the Copernican Revolution. Newtonian mechanics is truly impressive, but it was ultimately superseded by contemporary physics. Modesty and common sense suggest that like their predecessors, our current theories will eventually be overturned. But if they aren't true, why are they so effective? Intuitive realism is at best a metaphysical half-truth, albeit a pretty harmless one.

From these examples I draw two important lessons. First, the depth, elegance, and beauty of our intuitive metaphysical explanations can be a liability. These explanations are so broad and so simple that we let them operate in the background, constantly invoked but rarely scrutinized. As a result, most of us can't defend them and don't revise them. Metaphysical half-truths find a safe and happy home in most human minds.

Second, the depth, elegance, and beauty of our intuitive metaphysical explanations can make us appreciate them less rather than more. Like a constant hum, we forget that they are there. It follows that the explanations most often celebrated for their virtues—explanations such as natural selection and relativity—are importantly different from those that form the bedrock of intuitive beliefs. Celebrated explanations have the characteristics of the solution to a good murder-mystery. Where intuitive metaphysical explanations are easy to generate but hard to evaluate, scientific superstars like evolution are typically the reverse: hard to generate but easy to evaluate. We need philosophers like Hume to nudge us from complacency in the first case, and scientists like Darwin to advance science in the second.

Is there a single explanation that can account for all of human behavior? Of course not. But, I think there is one that does darn well. Human beings are motivated to see themselves in a positive light. We want, and need, to see ourselves as good, worthwhile, capable people. And fulfilling this motive can come at the expense of our being "rational actors." The motive to see oneself in a positive light is powerful, pervasive, and automatic. It can blind us to truths that would otherwise be obvious. For example, while we can readily recognize who among our friends and neighbors are bad drivers, and who among us is occasionally sexist or racist, most of us are deluded about the quality of our own driving and about our own susceptibility to sexist or racist behavior.

The motive to see oneself in a positive light can have profound effects. The work of Claude Steele and others shows that this motive can lead children who underperform in school to decide that academics are unimportant and not worth the effort, a conclusion that protects self-esteem but at a heavy price for the individual and society. More generally, when people fail to achieve on a certain dimension, they often disidentify from it in order to preserve a positive sense of self. That response can come at the expense of meeting one's rational best interest. It can cause some to drop out of school (after deciding that there are better things to do than "be a nerd"), and it can cause others to ignore morbid obesity (after deciding that other things are more important than "being skinny").

Another serious consequence of this motive involves prejudice and discrimination. A wide array of experiments in social psychology have demonstrated how members of different ethnic groups, different races, and even different bunks at summer camp see their "own kind" as better and more deserving than "outsiders" who belong to other groups—a perception that leads not only to ingroup favoritism but also to blatant discrimination against members of other groups. And, people are especially likely to discriminate when their own self-esteem has been threatened. For example, one study found that college students were especially likely to discriminate against a Jewish job applicant after they themselves had suffered a blow to their self-esteem; notably, their self-esteem recovered fully after the discrimination.

The motive to see oneself in a positive light is so fundamental to human psychology that it is a hallmark of mental health. Shelley Taylor and others have noted that mentally healthy people are "deluded" by positive illusions of themselves (and depressed people are sometimes more "realistic"). But, how many of us truly believe that this motive drives us? It is difficult to spot in ourselves because it operates quickly and automatically, covering its tracks before we detect it. As soon as we miss a shot in tennis, we almost instantly generate a self-serving thought about the sun having been in our eyes. The automatic nature of this motive is perhaps best captured by the fact that we unconsciously prefer things that start with the same letter as our first initial (so people named Paul are more likely than people named Harry to prefer pizza, whereas Harrys are more likely to prefer hamburgers). Herein, though, lies the rub. I know a Lee who hates lettuce, and a Wendy who will not eat wheat. Both of them are better at tennis than they realize, and both take responsibility for a bad serve. Simple and elegant explanations only go so far when it comes to the complex and messy problem of human behavior.

Physicist, University of Vienna; Scientific Director, Institute of Quantum Optics and Quantum Information; President, Austrian Academy of Sciences; Author, Dance of the Photons: From Einstein to Quantum Teleportation

Einstein's Photons

My favorite deep, elegant and beautiful explanation is Albert Einstein's 1905 proposal that light consists of energy quanta, today called photons. Actually, how Einstein arrived at this idea is little known, even among physicists, and the story is extremely interesting. It is often said that Einstein invented the concept to explain the photoelectric effect. Certainly, that is part of Einstein's 1905 publication, but only towards its end. The idea itself is much deeper, more elegant and, yes, more beautiful.

Imagine a closed container whose walls are at some temperature. The walls are glowing, and as they emit radiation, they also absorb radiation. After some time, there will be some sort of equilibrium distribution of radiation inside the container. This was already well known before Einstein. Max Planck had introduced the idea of quantization that explained the energy distribution of the radiation inside such a volume. Einstein went a step further. He studied how orderly the radiation is distributed inside such a container. For physicists, entropy is a measure of disorder.

To consider a simple example, it is much more probable that books, notes, pencils, photos, pens, etc. are cluttered all over my desk than that they are well ordered, forming a beautiful stack. Or, if we consider a million atoms inside a container, it is much more probable that they are more or less equally distributed all over the volume of the container than that they are all collected in one corner. In both cases, the first state is less orderly: atoms spread through the larger volume have a higher entropy than atoms gathered in one corner. The Austrian physicist Ludwig Boltzmann had shown that the entropy of a system is a measure of how probable its state is.

Einstein then realized in his 1905 paper that the entropy of radiation (including light) changes with volume in the same mathematical way as it does for atoms. In both cases, the entropy increases with the logarithm of that volume. For Einstein, this could not be mere coincidence. Since we can understand the entropy of a gas because it consists of atoms, radiation, he concluded, must also consist of particles, which he called energy quanta.
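Einstein's volume argument can be made concrete in a few lines. A rough numerical sketch (the formulas are the standard Boltzmann and Wien-regime expressions, but the frequency and particle numbers chosen are purely illustrative): the entropy change of radiation squeezed into a fraction of its volume has exactly the same form as that of a gas of E/hν independent particles.

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

def entropy_change_gas(n_atoms, volume_ratio):
    """Boltzmann: the probability that n independent atoms all sit in a
    sub-volume V of V0 is (V/V0)**n, so S = k ln W gives
    dS = n * k * ln(V/V0)."""
    return n_atoms * k * math.log(volume_ratio)

def entropy_change_radiation(energy, frequency, volume_ratio):
    """Einstein 1905 (valid in the Wien regime):
    dS = (E / (h*nu)) * k * ln(V/V0) -- the same form as a gas made of
    E/(h*nu) independent quanta, each carrying energy h*nu."""
    return (energy / (h * frequency)) * k * math.log(volume_ratio)

# Radiation of total energy E at frequency nu behaves like a gas of
# n = E/(h*nu) particles:
nu = 5e14        # a visible-light frequency, Hz (illustrative)
n = 1_000_000    # one million quanta (illustrative)
E = n * h * nu
assert math.isclose(entropy_change_radiation(E, nu, 0.5),
                    entropy_change_gas(n, 0.5))
```

The coincidence of the two logarithmic laws is exactly what led Einstein to read a particle count out of the radiation formula.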

Einstein immediately applied his idea, most famously to the photoelectric effect. But he also very soon recognized a fundamental conflict between the idea of energy quanta and the well-studied and observed phenomenon of interference.

The problem is simply how to understand the two-slit interference pattern. This is the phenomenon that, according to Richard Feynman, contains "the only mystery" of quantum physics. The challenge is very simple. When we have both slits open, we obtain bright and dark stripes on an observation screen, the interference fringes. When we have only one slit open, we get no stripes, no fringes, but a broad distribution of particles. This can easily be understood on the basis of the wave picture. Through each of the two slits, a wave passes, and the two waves cancel each other at some places on the observation screen and reinforce each other at others. That way, we obtain dark and bright fringes.

But what to expect if the intensity is so low that only one particle at a time passes through the apparatus? Following Einstein's realist position, it would be natural to assume that the particle has to pass through either slit. We can still do the experiment by putting a photographic plate at the observation screen and sending many photons in, one at a time. After a long enough time, we look at the photographic plate. According to Einstein, if the particle passes through either slit, no fringes should appear, because, simply speaking, how should the individual particle know whether the other slit, the one it does not pass through, is open or not. This was indeed Einstein's opinion, and he suggested that the fringes only appear if many particles go through at the same time, and somehow interact with each other such that they make up the interference pattern.

Today, we know that the pattern arises even at intensities so low that only one photon per second passes through the whole apparatus. If we wait long enough and look at the distribution of all of them, we get the interference pattern. The modern explanation is that the interference pattern only arises if there is no information present anywhere in the Universe about which slit the particle passes through. But even though Einstein was wrong here, his idea of energy quanta of light, today called photons, pointed far into the future.
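The single-photon experiment is easy to mimic numerically. A toy simulation (the cos² fringe profile and the fringe spacing are illustrative stand-ins for the real slit geometry): photons are detected one at a time, yet the accumulated hits still trace out bright and dark stripes.

```python
import math
import random

random.seed(0)  # reproducible toy run

def detect_photon(fringe_spacing=1.0, half_width=3.0):
    """One photon at a time: draw a landing position whose probability
    density follows the two-slit pattern cos^2(pi * x / spacing).
    (Rejection sampling; the slit geometry is folded into the
    illustrative 'fringe_spacing' parameter.)"""
    while True:
        x = random.uniform(-half_width, half_width)
        if random.random() < math.cos(math.pi * x / fringe_spacing) ** 2:
            return x

# Send 20,000 photons through, one at a time, and tally where they land.
hits = [detect_photon() for _ in range(20000)]

# Fringes emerge in the accumulated counts: many hits near the maxima
# (integer positions), almost none near the minima (half-integer positions).
def frac(x):
    return x - math.floor(x)

near_bright = sum(1 for x in hits if frac(x) < 0.1 or frac(x) > 0.9)
near_dark = sum(1 for x in hits if 0.4 < frac(x) < 0.6)
assert near_bright > 5 * near_dark
```

No photon "interacts" with any other here; the pattern lives entirely in the single-particle probability distribution, which is the point Einstein's particles-interacting picture missed.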

In a letter to his friend Habicht in that same year of 1905, the miraculous year in which he also wrote his Special Theory of Relativity, he called the paper proposing particles of light "revolutionary". As far as is known, this was the only work of his that he ever called revolutionary. And therefore it is quite fitting that the Nobel Prize of 1921 was given to him for the discovery of particles of light. That the situation was not so clear a few years earlier is witnessed by a famous letter, signed by Planck, Nernst, Rubens and Warburg, suggesting Einstein for membership in the Prussian Academy of Sciences in 1913. They wrote: "the fact that he (Einstein) occasionally went too far should not be held too strongly against him. Not even in the exact natural sciences can there be progress without occasional speculation." Einstein's deep, elegant and beautiful explanation of the entropy of radiation, proposing particles of light in 1905, is a strong case in point for the usefulness of occasional speculation.

It may sound odd, but as much as I loathe airport security lines, I must admit that while I'm standing there, stripped down and denuded of metal, waiting to go through the doorway, part of my mind wanders to oceans that likely exist on distant worlds in our solar system.

These oceans exist today and are sheltered beneath the icy shells that cover worlds like Europa, Ganymede, and Callisto (moons of Jupiter), and Enceladus and Titan (moons of Saturn). The oceans within these worlds are liquid water (H2O), just as we know and love it here on Earth, and they have likely been in existence for much of the history of the solar system (about 4.6 billion years). The total volume of liquid water contained within these oceans is at least 20 times that found here on Earth.

From the standpoint of our search for life beyond Earth, these oceans are prime real estate for a second origin of life and the evolution of extraterrestrial ecosystems.

But how do we know these oceans exist? The moons are covered in ice and thus we can't just look down with a spacecraft and see the liquid water.

That's where airport security comes into play. You see, when you walk through an airport security door, you're walking through a rapidly changing magnetic field. The laws of physics dictate that if you put a conducting material in a changing magnetic field, electric currents will arise, and those electric currents will in turn create a secondary magnetic field. This secondary field is often referred to as the induced magnetic field, because it is induced by the primary field of the doorway. Also contained within the doorway are detectors that can sense when an induced field is present. When these sensors detect an induced field, the alarm goes off, and you get whisked over to the 'special' search line.
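The doorway's physics reduces to Faraday's law of induction. A back-of-the-envelope sketch (the field strength, frequency, and loop area below are invented illustrative numbers, not actual detector specifications):

```python
import math

def induced_emf_peak(b0_tesla, freq_hz, area_m2):
    """Faraday's law for a conducting loop in an oscillating field
    B(t) = B0 sin(2*pi*f*t): EMF = -dPhi/dt, whose peak magnitude is
    B0 * 2*pi*f * A. The current this drives in any metal you carry is
    what generates the secondary, 'induced' field the doorway senses."""
    return b0_tesla * 2 * math.pi * freq_hz * area_m2

# Illustrative numbers only: a 1-microtesla field oscillating at 10 kHz
# through a 10 cm^2 metal loop (think of a belt buckle).
peak_volts = induced_emf_peak(1e-6, 10e3, 1e-3)
assert math.isclose(peak_volts, 2 * math.pi * 1e-5)
# No conductor in the doorway: no current, no induced field, no alarm.
```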

The same basic principle, the same fundamental physics, is largely responsible for our knowledge of oceans on some of these distant worlds. Jupiter's moon Europa provides a good example. Back in the late 1990s, NASA's Galileo spacecraft made several flybys of Europa, and the magnetic field sensors on the spacecraft detected that Europa does not have a strong internal field of its own; instead, it has an induced magnetic field created in response to Jupiter's strong background magnetic field. In other words, the alarm went off.

But in order for the alarm to go off, there needed to be a conductor. And for Europa the data indicated that the conducting layer must be near the surface. Other lines of evidence had already shown that the outer ~150 km of Europa was water, but those datasets could not distinguish between solid ice and liquid water. With the magnetic field data, however, ice doesn't work—it's not a good conductor. Liquid water with salts dissolved in it, similar to our ocean, does work. A salty ocean is needed to explain the data. The best fits to the data indicate that Europa has an outer ice shell about 10 km thick, beneath which lies a global ocean about 100 km deep. Beneath that is a rocky seafloor that may be teeming with hydrothermal vents and bizarre, other-worldly organisms.

So, the next time you're in airport security and get frustrated by that disorganized person in front of you who can't seem to get it through their head that their belt, wallet, and watch will all set off the alarm, just take a deep breath and think of the possibly habitable distant oceans we now know of, thanks to the same beautiful physics that's driving you nuts as you try to reach your departing plane.

A few years ago, I heard it said that only old-fashioned folk wear watches. But I thought I would always wear a watch. Today I don't wear a watch.

How do I find the time? Either I do without or I keep my eyes fixed on a screen that has the time in the upper-right corner. It's gotten so that I resent that reality doesn't display the time in the upper-right corner.

An elegant and beautiful explanation is, to me, one that corrals a herd of seemingly unrelated facts within a single unifying concept. In our explorations of the worlds, including our own, that orbit the Sun, and in our attempts to find from these efforts what is special and what is commonplace about our own planet, I can think of two examples of this.

The first is an idea that was originally offered in 1912 but met with such extreme hostility from the scientific establishment—not an unusual response, by the way, to an original idea—that it wasn't generally accepted until 50 years later. By that time, the sheer weight of evidence supporting it had become so overwhelming that the notion was rendered irrefutable. And that notion was plate tectonics.

It could be said that the first indications of plate motions, though of course not recognized as such at the time, came from the observations of the early explorers, like Magellan, who noticed the puzzle-like fit of the continents, Africa and South America, for instance, on their maps. Fast forward to the early 20th century, when Alfred Wegener, a German geophysicist, proposed movement of the continents (continental drift) to explain this hand-in-glove fit. Having no explanation, however, for how the continents could actually move, he was laughed out of the room.

But the evidence continued to mount: fossils, rock types, and ancient climates were shown to be similar within widely separated geographical regions, like the east coast of South America and the west coast of Africa. Studies of magnetized rocks, which, if stationary, will always indicate a consistent direction to the north magnetic pole regardless of where on the globe they form, indicated that either the north pole's location varied throughout time or the rocks themselves were not formed where they are found today. Finally, by the 1960s, it was clear that many of the Earth's presently active geological phenomena, such as the strongest earthquakes and volcanoes, were found within distinct, sinuous belts that wrapped around the planet and carved the Earth's surface into distinct bounded regions. Furthermore, studies of rocks on the floors of the Earth's oceans revealed an alternating north-up/north-down magnetic striping pattern that could only be explained by the upwelling of molten lava from below, creating new oceanic floor, and the consequent spreading of the old floor, pushing the continents farther apart with time. We now know that the tectonic forces driving the motions of the Earth's crustal plates arise from the convective upwelling and downwelling currents of molten rock in the Earth's mantle, which drag around the solid plates sitting atop them.

In the end, the notion that bits of the Earth's surface can drift over time is a glorious example of a simple, efficient and even elegant idea that was eventually proven correct yet so radical for its time, it was scorned.

The second is more or less an extraterrestrial version of the same. In an historic mission not unlike Homer's Odyssey, two identical spacecraft—Voyager I and II—spent the 1980s touring the planetary systems of Jupiter, Saturn, Uranus and Neptune. And the images they returned provided humanity its first detailed views of these planets and the moons and rings surrounding them.

Jupiter was the gateway planet, the first of the four encountered, and it was there that we learned just how complex and presently active other planetary bodies could be. Along with the stunningly active moon, Io, which sported at the time about 9 large volcanic eruptions, Voyager imaged the surface of Jupiter's icy moon, Europa. Just a bit smaller than our own Moon, Europa's surface was clearly young, rather free of craters, and scored with a complex pattern of cracks and fractures that were cycloidal in shape and continuous, with many 'loops', like the scales on a fish. From these discoveries and others, it was inferred that Europa might have a thin crust overlying either warm, soft ice or perhaps even liquid water, though how the fracture pattern came to look the way it does was a mystery. The idea of a sub-surface ocean was enticing for the implicit possibility of a habitable zone for extraterrestrial life.

A follow-on spacecraft, Galileo, arrived at Jupiter in 1995 and before too long got an even better look at Europa's cracked ice shell and its cycloidal fractures. It became clear to researchers at the University of Arizona's Lunar and Planetary Lab that the cycloidal fractures, and even their detailed characteristics, like the shapes of the cycloidal segments, and the existence of, the distance between, and orientations of the cusps, could all be explained by the stresses across the moon's thin ice shell created by the tides raised on it by Jupiter. Europa's distance from Jupiter varies over the course of its orbit because of gravitational resonances with the other Jovian moons. And that varying separation causes the magnitude and direction of the tidal stresses on its surface to change. Under these conditions, if a crack in the thin ice shell is initiated at any location by these stresses, then that crack will propagate across the surface over the course of a Europan day and will take the shape of a cycloid. This will continue, day in and day out, scoring the surface of Europa in the manner that we find it today. Furthermore, tidal stresses would be inadequate to effect these kinds of changes to the moon's surface if its ice shell did not overlie a liquid ocean…an exciting possibility by anyone's measure.
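The curve itself takes only a few lines to generate. A toy sketch of the geometry (this is the textbook parametric cycloid, not the actual tidal-stress propagation model):

```python
import math

def cycloid(r=1.0, n_arches=3, steps_per_arch=100):
    """Textbook parametric cycloid x = r(t - sin t), y = r(1 - cos t):
    a chain of arches meeting at sharp cusps, the curve family that
    Europa's tidally driven cracks trace out on its surface.
    (Illustrative geometry only.)"""
    pts = []
    total_steps = n_arches * steps_per_arch
    for i in range(total_steps + 1):
        t = 2 * math.pi * n_arches * i / total_steps
        pts.append((r * (t - math.sin(t)), r * (1 - math.cos(t))))
    return pts

pts = cycloid()
ys = [y for _, y in pts]
# The curve starts at a cusp, and each arch rises to a height of 2r
# before returning to the cusp line -- the repeating 'scales' pattern.
assert pts[0] == (0.0, 0.0)
assert min(ys) >= 0 and math.isclose(max(ys), 2.0)
```

Each arch of the curve corresponds to one Europan day of crack propagation, with a cusp marking where the rotating stress field pauses and redirects the crack.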

And so, a whole array of features on the surface of one of Jupiter's most fascinating moons, the enormous complexity of the patterns they form, and the implication of a subterranean liquid water ocean in which extraterrestrial life might have taken hold, were explained and supported with one very simple, very easily demonstrated, and very elegant idea…an idea which itself, like that of plate tectonics, exemplifies the great beauty and economy derivable from logical induction, one of humankind's most demonstrably powerful intellectual devices.

I play this game with my kids. It's a 'guess-who' game: Think of an animal, person, object and then try to describe it to another person without giving away the real identity. The other person has to guess what/who you are. You have to get in character and tell a story: What do you do, how do you feel, what do you think and want?

Let's have a go. Read the character scenarios below and see if you can guess who/what they are.

"It's just not fair! Mum says I'm getting in the way, I'm a lay-about and she can't afford for me to stay with her any more. But I like being in a big family, and I don't want to leave. Mum says that if I am to stay home, we'd need some kind of 'glue' to keep us from drifting apart. Glue is costly and she says she hasn't the energy to make it since she's busy making babies. But then I had this brilliant idea: how aboutImake the glue using a bit of cell wall (mum won't mind), add some glycoproteins (they're a bit sticky, so I have to promise mum I'll wash my hands afterwards) and bingo! Job done: we've got ourselves a nice cosy extracellular matrix! I'm happy doing the bulk of the work, so long as mum keeps giving me more siblings. I suggested this to mum last night, and guess what? She said yes! But she also said I'm out the door if I don't keep up my side of the bargain: no free-riders…."

Who am I?

"I am a uni-cell becoming multicellular. If I group with my relatives then someone needs to pay the cost of keeping us together—the extracellular matrix. I don't mind paying that cost if I benefit from the replication of my own genes through my relatives."

Ok, that was a tough one. Try this one:

"I'm probably what you'd call the 'maternal type'. I like having babies, and I seem to be pretty good at it. I love them all equally, obviously. Damn hard work though, especially since their father didn't stick around. I can't see my latest babies surviving unless I get some help around the place. So I said to my oldest the other day, fancy helping your old Ma out? Here's the deal: you go find some food whilst I squeeze out a few more siblings for you. Remember, kid, I'm doing this for you—all these siblings will pay off in the long run. One day, some of them will be Mas just like me, and you'll be reaping in the benefits from them long after you and I are gone. This way you don't ever have to worry about sex, men or any of that sperm stuff. Your old Ma's got everything you need, right here. All you have to do is feed us, and clear out the mess!"

Who am I?

"I am an insect becoming a society. If I nest alone I have to find food which means leaving my young unprotected. If some of my grown-up children stay home and help me, they can go out foraging whilst I stay home to protect the young. I can have even more babies this way, which my children love as this means more and more of their genes are passed on through their siblings. Anyway, it's a pretty tough world out there right now for youngsters; it's much less risky to stay at home."

"I could also be a gene becoming a genome, or a prokaryote becoming an eukaryote. I am part of the same, fundamental event in evolution's playground. I am the evolution of helping and cooperation. I am the major transition that shapes all levels of biological complexity. The reason I happen is because I help others like me, and we settle on a division of labour. I don't help because, paradoxically, I benefit. My secret? I'm selective: I like to help relatives because they end up also helping me, by passing on our shared genes. I've embraced the transition from autonomy to cooperation. And it feels good!"

The evolution of cooperation and helping behaviour is a beautiful and simple explanation of how nature got complex, diverse and wonderful. It's not restricted to the charismatic Meerkats, or fluffy bumble-bees. It is a general phenomenon which generates the biological hierarchies that characterise the natural world. Groups of individuals (genes, prokaryotes, single-celled and multicellular organisms) that could previously replicate independently, form a new, collective individual that can only replicate as a whole.

Hamilton's 1964 inclusive fitness theory is an elegant and simple explanation of why sociality evolves. It was more recently formalised as a unified framework to explain the evolution of major transitions to biological complexity in general (e.g. Bourke's 2011 Principles of Social Evolution). Entities cooperate because it increases their fitness—their chance of passing on genes to the next generation. Beneficiaries get enhanced personal reproduction; helpers benefit from the propagation of the genes they share with the relatives they help. But the conditions need to be right: the benefits must outweigh the costs, and this balance is affected by the options available to independently replicating entities before they commit to their higher-level collective. Ecology and environment play a role, as well as kinship. The resulting division of labour is the fundamental basis of societal living, uniting genes into genomes, mitochondria with prokaryotes to produce eukaryotes, unicellular organisms into multicellular ones, and solitary animals into eusocieties. This satisfyingly simple explanation makes the complexities of the world less mysterious, but no less wonderful.
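The heart of the theory fits into a single inequality, Hamilton's rule: helping is favored when r·b > c. A minimal sketch (the benefit and cost figures below are illustrative, not measured values):

```python
def helping_favored(r, b, c):
    """Hamilton's (1964) rule: a gene for helping spreads when the
    relatedness-weighted benefit to recipients exceeds the helper's
    cost, i.e. when r * b > c."""
    return r * b > c

# A worker rearing full siblings (relatedness r = 0.5): helping pays
# only if the siblings gain more than twice what the helper gives up.
assert helping_favored(r=0.5, b=3.0, c=1.0)        # b/c = 3 > 2: favored
assert not helping_favored(r=0.5, b=1.5, c=1.0)    # b/c = 1.5 < 2: not favored
# Helping an unrelated stranger (r = 0) is never favored, however cheap.
assert not helping_favored(r=0.0, b=100.0, c=0.01)
```

The same three-term balance, reweighted by ecology and kin structure, is what decides whether cells stay glued together or insects stay home to help Ma.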

If only adults indulged a bit more in children's games, perhaps we'd stumble across simple explanations for the complexities of life more often.

"I think, therefore I am." Cogito ergo sum. Remember this elegant and deep idea from René Descartes' Principles of Philosophy? The fact that a person is contemplating whether she exists, Descartes argued, is proof that she, indeed, actually does exist. With this single statement, Descartes knit together two central ideas of Western philosophy: 1) thinking is powerful, and 2) individuals play a big role in creating their own I's—that is, their psyches, minds, souls, or selves.

Most of us learn "the cogito" at some point during our formal education. Yet far fewer of us study an equally deep and elegant idea from social psychology: Other people's thinking likewise powerfully shapes the I's that we are. Indeed, in many situations, other people's thinking has a bigger impact on our own thoughts, feelings, and actions than do the thoughts we conjure while philosophizing alone.

In other words, much of the time, "You think, therefore I am." For better and for worse.

An everyday instance of how your thinking affects other people's being is the Pygmalion effect. Psychologists Robert Rosenthal and Lenore Jacobson captured this effect in a classic study from the 1960s. After giving an IQ test to elementary school students, the researchers told the teachers which students would be "academic spurters" because of their allegedly high IQs. In reality, these students' IQs were no higher than those of the "normal" students. At the end of the school year, the researchers found that the "spurters" had attained better grades and higher IQs than the "normals." The reason? Teachers had expected more from the spurters, and thus given them more time, attention, and care. And the conclusion? Expect more from students, and get better results.

A less sanguine example of how much our thoughts affect other people's I's is stereotype threat. Stereotypes are clouds of attitudes, beliefs, and expectations that follow around a group of people. A stereotype in the air over African Americans is that they are bad at school. Women labor under the stereotype that they suck at math.

As social psychologist Claude Steele and others have demonstrated in hundreds of studies, when researchers conjure these stereotypes—even subtly, by, say, asking people to write down their race or gender before taking a test—students from the stereotyped groups score lower than the stereotype-free group. But when researchers do not mention other people's negative views, the stereotyped groups meet or even exceed their competition. The researchers show that students under stereotype threat are so anxious about confirming the stereotype that they choke on the test. With repeated failures, they seek their fortunes in other domains. In this tragic way, other people's thoughts deform the I's of promising students.

As the planet gets smaller and hotter, knowing that "You think, therefore I am" could help us more readily understand how we affect our neighbours and how our neighbours affect us. Not acknowledging how much we impact each other, in contrast, could lead us to repeat the same mistakes.

Eratosthenes (276-195 BCE), the head of the famous Library of Alexandria in Ptolemaic Egypt, made ground-breaking contributions to mathematics, astronomy, geography, and history. He also argued against dividing humankind into Greeks and 'Barbarians'. What he is remembered for, however, is having provided the first correct measurement of the circumference of the Earth (a story well told in Nicholas Nicastro's recent book, Circumference). How did he do it?

Eratosthenes had heard that, every year, on a single day at noon, the Sun shone directly to the bottom of an open well in the town of Syene (now Aswan). This meant that the Sun was then at the zenith. For that, Syene had to be on the Tropic of Cancer and the day had to be the Summer solstice (our June 21). He knew how long it took caravans to travel from Alexandria to Syene and, on that basis, estimated the distance between the two cities to be 5014 stades. He assumed that Syene was due south on the same meridian as Alexandria. Actually, in this he was slightly mistaken—Syene is somewhat to the east of Alexandria—and also in assuming that Syene was right on the Tropic; but, serendipitously, the effects of these two mistakes cancelled one another. He understood that the Sun was far enough away to treat the rays that reach the Earth as parallel. When the Sun was at the zenith in Syene, it had to be south of the zenith in the more northern Alexandria. By how much? He measured the length of the shadow cast by an obelisk located in front of the Library (says the story—or cast by some other, more convenient vertical object), and, even without trigonometry, which had yet to be developed, he could determine that the Sun stood at an angle of 7.2 degrees south of the zenith. That very angle, he understood, measured the curvature of the Earth between Alexandria and Syene (see the figure). Since 7.2 degrees is a fiftieth of 360 degrees, Eratosthenes could then, by multiplying the distance between Alexandria and Syene by 50, calculate the circumference of the Earth. The result, 252,000 stades, is within 1% of the modern measurement of 40,008 km.
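The entire computation fits in a few lines. A sketch of Eratosthenes' arithmetic, using the figures above (7.2 degrees and 5014 stades):

```python
import math

def circumference_from_shadow(angle_deg, distance_stades):
    """Eratosthenes' reasoning: the shadow angle at Alexandria equals
    the arc of the Earth's circle between Alexandria and Syene, so the
    full circumference is distance * (360 / angle)."""
    return distance_stades * 360.0 / angle_deg

# 7.2 degrees is exactly one fiftieth of a full circle...
assert round(360 / 7.2) == 50
# ...so fifty times the caravan distance gives the circumference.
estimate = circumference_from_shadow(7.2, 5014)
assert math.isclose(estimate, 50 * 5014)  # 250,700 stades
# Eratosthenes reported the rounded figure of 252,000 stades,
# within about 1% of the modern value of 40,008 km.
```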

Eratosthenes brought together apparently unrelated pieces of evidence—the pace of caravans, the Sun shining to the bottom of a well, the length of the shadow of an obelisk—, assumptions—the sphericity of the Earth, its distance from the Sun—, and mathematical tools to measure a circumference that he could only imagine but neither see nor survey. His result is simple and compelling: the way he reached it epitomizes human intelligence at its best.

Was Eratosthenes thinking concretely about the circumference of the earth (in the way he might have been thinking concretely about the distance from the Library to the Palace in Alexandria)? I believe not. He was thinking rather about a challenge posed by the quite different estimates of the circumference of the Earth that had been offered by other scholars at the time. He was thinking about various mathematical principles and tools that could be brought to bear on the issue. He was thinking of the evidential use that could be made of sundry observations and reports. He was aiming at finding a clear and compelling solution, a convincing argument. In other terms, he was thinking about representations—theories, conjectures, reports—, and looking for a novel and insightful way to put them together. In doing so, he was inspired by others, and aiming at others. His intellectual feat only makes sense as a particularly remarkable link in a social-cultural chain of mental and public events. To me, it is a stunning illustration not just of human individual intelligence but also and above all of the powers of socially and culturally extended minds.

Science Writer; Founding chairman of the International Centre for Life; Author, The Rational Optimist

Life Is a Digital Code

It's hard now to recall just how mysterious life was on the morning of 28 February 1953, and just how much that had changed by lunchtime. Look back at all the answers to the question "what is life?" from before that date and you get a taste of just how we, as a species, floundered. Life consisted of three-dimensional objects of specificity and complexity (mainly proteins). And it copied itself with accuracy. How? How do you set about making a copy of a three-dimensional object? How do you grow it and develop it in a predictable way? This is the one scientific question where absolutely nobody came close to guessing the answer. Erwin Schrödinger had a stab, but fell back on quantum mechanics, which was irrelevant. True, he used the phrase "aperiodic crystal", and if you are generous you can see that as a prediction of a linear code, but I think that's stretching generosity.

Indeed, the problem had just got even more baffling thanks to the realization that DNA played a crucial role—and DNA was monotonously simple. All the explanations of life before 28 Feb 1953 are hand-waving waffle and might as well speak of protoplasm and vital sparks for all the insights they gave.

Then came the double helix and the immediate understanding that, as Crick wrote to his son a few weeks later, "some sort of code"—digital, linear, two-dimensional, combinatorially infinite and instantly self-replicating—was all the explanation you needed. Here's part of Francis Crick's letter, dated 17 March 1953:

"My Dear Michael,

"Jim Watson and I have probably made a most important discovery...Now we believe that the DNA is a code. That is, the order of the bases (the letters) makes one gene different from another gene (just as one page of print is different from another). You can see how Nature makes copies of the genes. Because if the two chains unwind into two separate chains, and if each chain makes another chain come together on it, then because A always goes with T, and G with C, we shall get two copies where we had one before. In other words, we think we have found the basic copying mechanism by which life comes from life...You can understand we are excited."

Never has a mystery seemed more baffling in the morning and an explanation more obvious in the afternoon.
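The copying mechanism Crick describes can be sketched as a toy program (string operations standing in, of course, for polymerases; the sample sequence is arbitrary): each strand is a template whose base pairing dictates its partner, so two rounds of templating regenerate the original.

```python
# Base pairing: A goes with T, and G with C, exactly as in Crick's letter.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Build the partner strand dictated by base pairing."""
    return "".join(PAIR[base] for base in strand)

gene = "ATGGTGCACCTG"  # an arbitrary illustrative sequence
# Unwind, template, and template again: two copies where we had one.
assert complement(gene) == "TACCACGTGGAC"
assert complement(complement(gene)) == gene
```

That the copying rule is this short is precisely why the explanation seemed so obvious by the afternoon.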

Professor of Biological Sciences, Physics, Astronomy, University of Calgary; Author, Reinventing the Sacred

Demonstration That Cell Types Are Dynamical Attractors

The human body has, by histological criteria, some 285 cell types. Familiar examples are skin cells, liver cells, nerve cells, and muscle cells. But the human embryo starts as a single fertilized egg, the zygote, and in the process called ontogeny the zygote divides about 50 times to create the newborn baby, with not only 285 cell types but the intricate morphology of the human. Developmental biology is the study of these processes.

The process by which the zygote gives rise to different cell types is called cell differentiation.

By the 1950s, using protein separation techniques such as gel electrophoresis, it became clear that different cell types manufactured their own specific set of proteins. For example, only red blood cells make the protein hemoglobin. By then genetic work had convinced all biologists that genes were like beads lined up single file on chromosomes. The emerging hypothesis of the day was "one gene makes one protein". If so, different cells had different active subsets of the total complement of human genes on the diverse chromosomes. The active genes would be making their specific proteins.

By 1953 Watson and Crick had established the structure of DNA, and the race was on to discover the genetic code, soon worked out, by which the sequence of nucleotide bases in a gene encodes a corresponding messenger RNA, which is then translated into protein by a host of protein enzymes and the RNA ribosome.

These brilliant results, the core of molecular biology, left hanging the deep question: How do different cell types manage to have different subsets of the full set of human genes active in the corresponding cell type?

Two superb French microbiologists, F. Jacob and J. Monod, made the first experimental breakthrough in 1961 using the bacterium E. coli. They showed that, adjacent to the gene encoding a protein called beta-galactosidase, a small "operator" DNA sequence, "O", bound a "repressor protein", "R". When R was bound to O, the adjacent gene for beta-galactosidase could not be copied into its messenger RNA. In short, genes and their products could turn one another "on" and "off".

By 1963 these two authors had cracked, it seemed, the central problem of how 285 cell types with the same genes could possibly have different patterns of gene activity. Jacob and Monod proposed that two genes, A and B, which turned one another "off"—i.e., "repressed" one another—could have two steady states of gene activity: 1) A "on", B "off"; 2) A "off", B "on".

In brief, by imagining a very simple "genetic circuit" where A and B repress one another, the genetic circuit, like an electronic circuit, could have two different steady state patterns of gene activity, each corresponding to one of two cell types, the first making A protein, the second making B protein.
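The Jacob-Monod circuit can be written down as a two-line Boolean update, a deliberately minimal abstraction of the molecular details:

```python
def step(a, b):
    """One update of the Jacob-Monod toggle circuit: each gene is 'on'
    exactly when the other gene's repressor protein is absent."""
    return (not b), (not a)

# Both patterns proposed by Jacob and Monod are steady states of the
# very same circuit -- the same genes, two heritable 'cell types':
assert step(True, False) == (True, False)   # A on, B off: stable
assert step(False, True) == (False, True)   # A off, B on: stable
```

The circuit behaves like an electronic flip-flop: whichever gene gets ahead shuts the other off and then holds that state indefinitely.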

In principle, Jacob and Monod had cracked the problem of cell differentiation: how cells with the same genes could exhibit diverse and unique patterns of gene activities.
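Jacob and Monod's two-gene circuit is simple enough to run. Below is a minimal Python sketch (my illustration, not code from their work) of two genes that repress one another, updated synchronously:

```python
# Each gene is "on" (1) only if its repressor is "off" (0).

def step(a, b):
    """Synchronous update of the mutual-repression circuit."""
    return (0 if b else 1, 0 if a else 1)

def settle(state, max_steps=10):
    """Iterate until a state repeats, i.e., a steady state or cycle is reached."""
    seen = []
    while state not in seen and len(seen) < max_steps:
        seen.append(state)
        state = step(*state)
    return state

# The two steady states Jacob and Monod proposed:
print(settle((1, 0)))  # (1, 0): A "on", B "off" persists
print(settle((0, 1)))  # (0, 1): A "off", B "on" persists
```

The same rules, started in different states, hold two different stable patterns of gene activity: two "cell types" from one genome.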

I was entering biology at that time, with a background studying neural circuits and logic. It was clear to most biologists that some huge genetic network among the 100,000 genes then thought to comprise the human genome somehow controlled cell differentiation in ontogeny. My question was an odd one: did evolution have to struggle hard to evolve very specific genetic networks to support ontogeny, or, as I hoped, were there huge classes, or "ensembles", of networks that, as a total class, behaved with sufficient self-organized order to account for major features of ontogeny?

To explore this, I invented "random Boolean networks": a set of N "light bulbs", each receiving regulatory inputs from K other light bulbs and turning on and off according to some logical, or Boolean, rule. Such networks exhibit a generalization of the Jacob and Monod property of A "on", B "off" versus A "off", B "on". These two patterns are called "attractors", for reasons I will make clear in a moment.

I studied networks with up to 10,000 light-bulb model genes, each with K = 2 inputs, and sampled the class, or ensemble, of possible N = 10,000, K = 2 networks by assigning the 2 inputs to each model gene at random, and the logical, or Boolean, rule for that gene at random from the 16 logical functions of two inputs. It turned out that such networks follow a sequence of states, like a stream, and settle down to a cycle of states, like a stream reaching a lake. Many streams, or trajectories, typically flow into each "state cycle", called an "attractor" because the state-cycle lake attracts a set of trajectories to flow into it. Moreover, each network had many such state-cycle attractors.

The obvious hypothesis was that each attractor corresponded to a cell type.

On this hypothesis, differentiation was a process in which a signal, or chemical noise, induced a cell to leave one attractor and flow along a trajectory into another attractor, that is, another cell type.
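The ensemble idea can be sampled directly. The sketch below (an illustration at a toy scale of N = 10 rather than 10,000; the wiring, rules, and start states are randomly generated, not data from any study) builds a random K = 2 Boolean network and measures the state cycles its trajectories flow into:

```python
import random

def make_network(n, k=2, seed=0):
    """Wire each of n model genes to k random inputs with a random Boolean rule."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    rules = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, rules

def step(state, inputs, rules):
    """Synchronously update every gene from the current values of its inputs."""
    new = []
    for gene in range(len(state)):
        idx = 0
        for inp in inputs[gene]:
            idx = (idx << 1) | state[inp]
        new.append(rules[gene][idx])
    return tuple(new)

def attractor_length(state, inputs, rules):
    """Follow the trajectory until a state repeats; return the cycle length."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, rules)
        t += 1
    return t - seen[state]

inputs, rules = make_network(10)
lengths = {attractor_length(tuple(random.Random(s).choices([0, 1], k=10)),
                            inputs, rules) for s in range(20)}
print(lengths)  # cycle lengths of the attractors reached from 20 random start states
```

Many start states ("streams") typically fall into a small number of short state cycles ("lakes"), which is the behavior that makes the attractor-as-cell-type hypothesis plausible.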

This remained a mere hypothesis until, a decade ago, a brilliant biologist, Dr. Sui Huang, tested it using a mild leukemic cell line, HL60. He used two different chemical perturbations, all-trans retinoic acid and DMSO, to treat two sets of HL60 cells. At regular intervals, Huang used the technique of gene arrays to sample the activities of 12,000 genes. He showed, wonderfully, that under the two chemical treatments, HL60 followed two divergent, then convergent, "stream-trajectory" pathways in the patterns of activity of the 12,000 genes, both of which ended up at the same pattern, corresponding to a normal polymorphonuclear neutrophil blood cell.

Huang's powerful result remains the best evidence that cell types are indeed attractors. If this is true, it should be possible to perturb the gene activities of cell types to control their differentiation into desired cell types.

There is no denying that the central concept of modern biology is evolution, but I'm afraid I was a victim of the American public school system, and I went through twelve years of education without once hearing any mention of the 'controversial' E word. We dissected cats, we memorized globs of taxonomy, we regurgitated extremely elementary fragments of biochemistry on exams, but we were not given any framework to make sense of it all (one reason I care very much about science education now is that mine was so poor).

The situation wasn't much better in college. There, evolution was universally assumed, but there was no remedial introduction to the topic—it was sink or swim, and determined not to drown, I sought out context, anything that would help me understand all these facts my instructors expected me to know. I found it in a used bookstore, a book that I selected because it wasn't too thick and daunting, and because when I skimmed it, it was clearly written, unlike all the massive dense and opaque reference books my classes foisted on me. It was John Tyler Bonner's On Development: The Biology of Form, and it blew my mind, and also warped me permanently to see biology through the lens of development.

The first thing the book taught me wasn't an explanation, which was something of a relief; my classes were already full of explanations. Bonner's book is about questions, good questions, some of which had answers while others were left hanging there, ripe. For instance, how is biological form defined by genetics? It's the implicit question in the title, yet the book simply refined the questions we need to answer in order to explain the problem! Or maybe that is itself an important explanation at a different level: science isn't the body of facts archived in our books and papers, it's the path we follow to acquire new knowledge.

Bonner also led me to D'Arcy Wentworth Thompson and his classic book, On Growth and Form, which provided my favorite aphorism for a scientific view of the universe, "Everything is the way it is because it got that way"—it's a subtle way of emphasizing the importance of process and history in understanding why everything is the way it is. You simply cannot grasp the concepts of science if your approach is to dissect the details in a static snapshot of its current state; your only hope is to understand the underlying mechanisms that generate that state, and how it came to be. The necessity of that understanding is implicit in developmental biology, where all we do is study the process of change in the developing embryo, but I also found it essential in genetics, comparative physiology, anatomy, biochemistry...and of course, it is paramount in evolutionary biology.

So my most fundamental explanation is a mode of thinking: to understand how something works, you must first understand how it got that way.

Phylogenetically, and in terms of their ecological niche and morphology, apes are less similar to small monkeys than to paleolithic humans. Therefore, absent evidence to the contrary, we should expect that we can predict ape behavior better by looking at paleolithic human behavior than by looking at monkey behavior. This is a testable claim. Naive subjects can try to predict ape behavior by using different pools of evidence, information about either monkeys or paleolithic humans. The amount that one can know about a by observing b is correlated to the amount one can know about b by observing a. This consideration really should narrow the range of hypotheses we consider when speculating upon our innate behavior. If all other apes and monkeys possess a feature, including 'innate behavior' in so far as 'innate behavior' is a valid conceptual construct, we probably also possess that feature.

It may not be much, as a theory of psychology, but it's a start.

While making some correct predictions, this model certainly has its failures. Some of these failures seem relatively unthreatening. For instance, because we eat far less fruit than most monkeys or apes do, we tend to have much lower concentrations of vitamin C in our blood than do other primates. We can explain this difference easily because we can conceive of a clear difference between the concepts 'innate biochemistry' and 'biochemistry'. Other theoretical failures are more perplexing. One might speculate that paleolithic humans lack thick body hair because they use fire. Some day, differences between preserved Habilis and Erectus tissues might even partially confirm this, but one then has to ask why they and Sapiens but no other apes can use fire. Habitat is surely somewhat relevant, as most apes live in very heavily wooded locations, which present difficulties in the use of fire. Other apes also possess generally inferior tool-using abilities when compared to Habilis, Erectus or Sapiens. Most distressing, paleolithic humans tend to produce a much larger range of vocalizations than do other primates, a behavior that seems more typically birdlike.

I take comfort in the fact that there are two human moments that seem to be doled out equally and democratically within the human condition—and that there is no satisfying ultimate explanation for either. One is coincidence, the other is déjà vu. It doesn't matter if you're Queen Elizabeth, one of the thirty-three miners rescued in Chile, a South Korean housewife or a migrant herder in Zimbabwe—in the span of 365 days you will pretty much have two déjà vus as well as one coincidence that makes you stop and say, "Wow, that was a coincidence."

The thing about coincidence is that when you imagine the umpteen trillions of coincidences that could happen at any given moment, the fact is, in practice, coincidences almost never do occur. Coincidences are actually so rare that when they do occur they are, in fact, memorable. This suggests to me that the universe is designed to ward off coincidence whenever possible—the universe hates coincidence—I don't know why—it just seems to be true. So when a coincidence happens, that coincidence had to work awfully hard to escape the system. There's a message there. What is it? Look. Look harder. Mathematicians perhaps have a theorem for this, and if they do, it might, by default, be a theorem for something larger than what they think it is.

What's both eerie and interesting to me about déjà vus is that they occur almost like metronomes throughout our lives, about one every six months, a poetic timekeeping device that, at the very least, reminds us we are alive. I can safely assume that my thirteen-year-old niece, Stephen Hawking and someone working in a Beijing luggage-making factory each experience two déjà vus a year. Not one. Not three. Two.

The underlying biodynamics of déjà vu is probably ascribable to some sort of tingling neurons in a certain part of the brain, yet this doesn't tell us why it exists. Déjà vus seem to me to be a signal from a larger point of view that wants to remind us that our lives are distinct, that they have meaning, and that they occur throughout a span of time. We are important, and what makes us valuable to the universe is our sentience and our curse and blessing of perpetual self-awareness.

Professor & Director, Institute of Philosophy, School of Advanced Study, University of London

Lemons are Fast

When asked to put lemons on a scale between fast and slow almost everyone says 'fast', and we have no idea why. Maybe human brains are just built to respond that way. Probably. But how does that help? It's an explanation of sorts but it seems to be a stopping point when we wanted to know more. This leads us to ask what we want from an explanation: one that's right, or one that satisfies us? Things that were once self-evident are now known to be false. A straight line is obviously the shortest distance between two points until we think that space is curved. What satisfies our way of thinking need not reflect reality. Why expect a simple theory of a complex world?

Wittgenstein had interesting things to say about what we want from explanations and he knew different things could serve. Sometimes we just need more information; sometimes, we need to examine a mechanism, like a valve, or a pulley, to understand how it works; while sometimes what we need is a way of seeing something familiar in a new light, to see it as it really is. He also knew there were times when explanations won't do. 'To the man who has lost in love', he asks, 'what will help him? An explanation?' The question clearly invites the answer, no.

So what of the near universal response to the seemingly meaningless question of whether lemons are fast or slow? To be told that our brains are simply built to respond that way doesn't satisfy us. But it's precisely when an explanation leaves us short that it spurs us to greater effort.

It's the start of the story, not the end. For the obvious next question to ask is why are human brains built this way? What purpose could it serve? And here the phenomenon of automatic associations may give us a deep clue about the way the mind works because it's symptomatic of what we call cross-modal correspondences: non-arbitrary associations between features in one sense modality with features in another.

There are cross-modal correspondences between taste and shape, between sound and vision, between hearing and smell, many of which are being investigated by neuroscientist Charles Spence and philosopher Ophelia Deroy. These unexpected connections are reliable and shared, unlike cases of synaesthesia, which are idiosyncratic—though individually consistent. And the reason we make these connections in the brain is to give us multiple fixes on objects in the environment that we can both hear and see. It also allows us to communicate elusive aspects of our experience.

We often say that tastes are hard to describe, but when we realise that we can change vocabulary and talk about a taste as round or sharp new possibilities open up. Musical notes are high or low; sour tastes are high, and bitter notes low. Smells can have low notes and high notes. You can feel low, or be incredibly high. This switching of vocabularies allows us to utilise well-understood sensory modalities to map various possibilities of experience.

Advertisers know this intuitively and exploit cross-modal correspondences between abstract shapes and particular products, or between sounds and sights. Angular shapes conjure up carbonated water not still, while an ice cream called 'Frisch' would be thought creamier than one called 'Frosch'. Notice, too, how many successful companies have names starting with the /k/ sound, and how few with /s/. These associations set up expectations in the mind that not only help us perceive but may shape our experiences.

And it is not just vocabularies that we use. In his nineteenth-century tract on the psychology of architecture, Heinrich Wölfflin tells us that it's because we have bodies, and are subject to gravity, bending and balance, that we can appreciate the shape of buildings, and columns, by feeling an empathy for their weight and strain. Physical forms possess a character only because we possess a body.

This idea has led to recent insights into aesthetic appreciation in the work of Chris McManus at UCL. Like all good explanations it spawns more explanations and further insights. It's another example of how we use the interaction of sensory information to shape our perceptions and help us to understand and respond to the world around us. So the fact that we all think that lemons are fast may be a big part of the reason why we are so smart.

Associate Professor of Journalism, American University; Author, The Wikipedia Revolution

"Information Is The Resolution Of Uncertainty"

Nearly everything we enjoy in the digital age hinges on this one idea, yet few people know about its originator or the foundations of this simple, elegant theory of information.

Einstein is well rooted in popular culture as the developer of the theory of relativity. Watson and Crick are associated with the visual spectacle of DNA's double helix structure.

How many know that the information age was not the creation of Gates or Jobs but of Claude Shannon in 1948?

The brilliant mathematician, geneticist and cryptanalyst formulated what would become information theory in the aftermath of World War II, when it was apparent it was not just a war of steel and bullets.

If World War I was the first mechanized war, the second war could be considered the first struggle based around communication technologies. Combat in the Pacific and Atlantic theaters was as much a battle of information as it was about guns, ships and planes.

Consider the advances of the era that transformed the way wars were fought.

Unlike previous conflicts, there was heavy utilization of radio communication among military forces. This rapid remote coordination quickly pushed the war to all corners of the globe. Because of this, the field of cryptography advanced rapidly in order to keep messages secret and hidden from adversaries. Also, for the first time in combat, radar was used to strategically detect and track aircraft, surpassing conventional visual capabilities that ended at the horizon.

One researcher, Claude Shannon, was working on the problem of anti-aircraft targeting, designing fire-control systems to work directly with radar. How could you determine the current and future position of an enemy aircraft's flight path, so you could properly time artillery fire to shoot it down? The radar information about plane position was a breakthrough, but "noisy": it provided an approximation of the plane's location, though not precisely enough to be immediately useful.

After the war, this inspired Shannon and many others to think about the nature of filtering and propagating information, whether it was radar signals, voice for a phone call, or video for television.

He knew that noise was the enemy of communication, so any way to store and transmit information that rejected noise was of particular interest to his employer, Bell Laboratories, the research arm of the mid-century American telephone monopoly.

Shannon considered communication "the most mathematical of the engineering sciences," and turned his intellectual sights towards this problem. Having worked on the intricacies of Vannevar Bush's differential analyzer analog computer in his early days at MIT, and with a mathematics-heavy Ph.D. thesis on the "Algebra for Theoretical Genetics," Shannon was particularly well-suited to understanding the fundamentals of handling information using knowledge from a variety of disciplines.

By 1948 he had formed his central, simple and powerful thesis:

Information is the resolution of uncertainty.

As long as something can be relayed that resolves uncertainty, that is the fundamental nature of information. While this sounds surprisingly obvious, it was an important point, given how many different languages people speak and how one utterance could be meaningful to one person, and unintelligible to another. Until Shannon's theory was formulated, it was not known how to compensate for these types of "psychological factors" appropriately. Shannon built on the work of fellow researchers Ralph Hartley and Harry Nyquist to reveal that coding and symbols were the key to resolving whether two sides of a communication had a common understanding of the uncertainty being resolved.

Shannon then considered: what was the simplest resolution of uncertainty?

To him, it was the flip of a coin—heads or tails, yes or no—an event with only two outcomes. Shannon concluded that any type of information, then, could be encoded as a series of fundamental yes or no answers. Today, we know these answers as bits of digital information—ones and zeroes—that represent everything from email text and digital photos to compact disc music and high-definition video.
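Shannon made this quantitative: the information gained by resolving an uncertain event is measured in bits by his entropy formula, H = -sum(p * log2(p)). A quick check in Python (my illustration of the formula, not code from Shannon):

```python
from math import log2

def entropy(probs):
    """Bits of information gained when an event with these probabilities is resolved."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0: a fair coin flip resolves exactly one bit
print(entropy([0.25] * 4))   # 2.0: four equally likely outcomes take two yes/no answers
print(entropy([1.0]) == 0)   # True: a certain outcome carries no information
```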

That any and all information could be represented and coded in discrete bits, not just approximately but perfectly, without noise or error, was a breakthrough which astonished even his brilliant peers at academic institutions and Bell Laboratories, who had previously thought a simple universal theory of information unthinkable.

The compact disc, the first ubiquitous digital encoding system for the average consumer, showed the legacy of Shannon's work to the masses in 1982. It provides perfect reproduction of sound by dividing each second of musical audio waves into 44,100 slices (samples), and recording the height of each slice in digital numbers (quantization). Higher sampling rates and finer quantization raise the quality of the sound. Converting this digital stream back to audible analog sound using modern circuitry allowed for consistent high fidelity versus the generation loss people were accustomed to in analog systems, such as compact cassette.
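The scheme is easy to mimic. The sketch below samples one second of a 440 Hz tone at the CD rate and quantizes each sample to a 16-bit integer (the 16-bit depth is a detail of the CD standard, not stated above):

```python
import math

RATE = 44100               # samples per second, as on a compact disc
MAX_LEVEL = 2 ** 15 - 1    # 32767: the ceiling of a signed 16-bit sample

def sample_and_quantize(freq_hz, seconds=1.0):
    """Slice the waveform into RATE samples per second and round each height."""
    n = int(RATE * seconds)
    return [round(MAX_LEVEL * math.sin(2 * math.pi * freq_hz * t / RATE))
            for t in range(n)]

samples = sample_and_quantize(440.0)
print(len(samples))   # 44100 slices for one second of audio
print(max(samples))   # the peak sits at (or just below) the 16-bit ceiling
```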

Similar digital approaches have been used for images and video, so that today we enjoy a universe of MP3, DVDs, HDTV, AVCHD multimedia files that can be stored, transmitted and copied without any loss of quality.

Shannon became a professor at MIT, and over the years students of Shannon went on to be the builders of many major breakthroughs of the information age, including digital modems, computer graphics, data compression, artificial intelligence and digital wireless communication.

Shannon was a humble man with an intellectual wanderlust, who shunned public speaking and rarely granted interviews. He once remarked, "After I had found answers, it was always painful to publish, which is where you get the acclaim. Many things I have done and never written up at all. Too lazy, I guess."

Shannon was perhaps lazy to publish, but he was not a lazy thinker. Later in life, he occupied himself with puzzles he found personally interesting—designing a Rubik's cube solving device and modeling the mathematics of juggling.

The 20th century was a remarkable scientific age, where the fundamental building blocks of matter, life and information were all revealed in interconnected areas of research centered around mathematics. The ability to manipulate atomic and genetic structures found in nature have provided breakthroughs in the fields of energy and medical research. Less expected was discovering the fundamental nature of communication. Information theory as a novel, original and "unthinkable" discovery has completely transformed nearly every aspect of our lives to digital, from how we work, live, love and socialize.

Structural realism—in its metaphysical version, championed by the philosopher of science James Ladyman—is the deepest explanation I know, because it serves as a kind of meta-explanation, one that explains the nature of reality and the nature of scientific explanations.

The idea behind structural realism is pretty simple: the world isn't made of things, it's made of mathematical relationships, or structure. A mathematical structure is a set of isomorphic elements, each of which can be perfectly mapped onto the next. To give a trivial example, the numbers 25 and 52 share the same mathematical structure.

When the philosopher John Worrall first introduced structural realism (though he attributes it to physicist Henri Poincaré), he was trying to explain something puzzling: how was it possible that a scientific theory that would later turn out to be wrong could still manage to make accurate predictions? Take Newtonian gravity. Newton said that gravity was a force that masses exert on one another from a distance. That idea was overthrown by Einstein, who showed that gravity was the curvature of spacetime. Given how wrong Newton was about gravity, it seems almost miraculous that he was able to accurately predict the motions of the planets.

Thankfully, we don't have to resort to miracles. Newton may have gotten the physical interpretation of gravity wrong, but he got a piece of the math right. That's why, in the limit of weak gravitational fields and low velocities, Einstein's equations reduce to Newton's. The problem, Worrall pointed out, was that we mistook an interpretation of the theory for the theory itself. The fact is, in physics, theories are sets of equations, and nothing more. "Quantum field theory" is a group of mathematical structures. "Electrons" are little stories we tell ourselves.

These days, believing in the reality of objects—of physical things like particles, fields, forces, even spacetime geometries—can quickly lead to profound existential crises.

Quantum theory, for instance, strips particles of any sense of "thingness". One electron is not merely similar to another; all electrons are exactly the same. Electrons have no inherent identity—a fact that makes quantum statistics drastically different from the classical kind. Anyone who believes that an electron is a "thing" in its own right is bound to lose big in a quantum casino.

Meanwhile, all of nature's fundamental forces, including electromagnetism and the nuclear forces that operate deep in the cores of atoms, are described by gauge theory, which shows that forces aren't physical things in the world, but discrepancies in different descriptions of the world, in different observers' points of view. Gravity is a gauge force too, which means you can make it blink out of existence just by changing your frame of reference. In fact, that was Einstein's "happiest thought": a person in freefall can't feel their weight. It's often said that you can't disobey the law of gravity, but the truth is you can take it out with a simple coordinate change.

Recent advances in theoretical physics have only made the situation worse. The holographic principle tells us that our four-dimensional spacetime and everything in it is exactly equivalent to physics taking place on the two-dimensional boundary of the universe. Neither description is more "real" than the other—one can be perfectly mapped onto the other with no loss of information. When we try to believe that spacetime is really four-dimensional or really has a particular geometry, the holographic principle pulls the rug out from under us.

The physical nature of reality has been further eroded by M-theory, the theory that many physicists believe can unite general relativity and quantum mechanics. M-theory encompasses five versions of string theory (plus one non-stringy theory known as supergravity) all of which are related by mathematical maps called dualities. What looks like a strong interaction in one theory looks like a weak interaction in another. What look like eleven dimensions in one theory look like ten in another. Big can look like small, strings can look like particles. Virtually any object you can think of will be transformed into something totally different as you move from one theory to the next—and yet, somehow, all of the theories are equally true.

This reality crisis has grown so dire that Stephen Hawking has called for a kind of philosophical surrender, a white flag he terms "model-dependent realism", which basically says that while our theoretical models offer possible descriptions of the world, we'll simply never know the true reality that lies beneath. Perhaps there is no reality at all.

But structural realism offers a way out. An explanation. A reality. The only catch is that it's not made of physical objects. Then again, our theories never said it was. Electrons aren't real, but the mathematical structure of quantum field theory is. Gauge forces aren't real, but the symmetry groups that describe them are. The dimensions, geometries and even strings described by any given string theory aren't real—what's real are the mathematical maps that transform one string theory into another.

Of course, it's only human to want to interpret mathematical structure. There's a reason that "42" is hardly a satisfying answer to life, the universe and everything. We want to know what the world is really like, but we want it in a form that fits our intuitions. A form that means something. And for our narrative-driven brains, meaning comes in the form of stories, stories about things. I doubt we'll ever stop telling stories about how the universe works, and I, for one, am glad. We just have to remember not to mistake the stories for reality.

Structural realism forces us to radically revise the way we think about the universe. But it also provides a powerful explanation for some of the most mystifying aspects of physics. Without it, we'd have to give up on the notion that scientific theories can ever tell us how the world really is. And that, in my humble opinion, makes it a pretty beautiful explanation.

Academic; Newspaper Columnist; Vice-President, Wolfson College; Author, From Gutenberg to Zuckerberg: what you really need to know about the Internet

Flocking Behaviour In Birds

My favourite explanation is Craig Reynolds's suggestion (first published in 1986, I think) that flocking behaviour in birds can be explained by assuming that each bird follows three simple rules—separation (don't crowd your neighbours), alignment (steer towards the average heading of your neighbours) and cohesion (steer towards the average position of your neighbours). The idea that such complex behaviour can be accounted for in such a breathtakingly simple way is, well, just beautiful.
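For concreteness, here is a toy Python version of one bird's steering update; the three rules are Reynolds's, but the 2-D representation and the weights are illustrative placeholders of my own:

```python
def steer(my_pos, my_vel, neighbours):
    """One velocity update from the three rules; neighbours is a list of (pos, vel)."""
    n = len(neighbours)
    # Cohesion: steer towards the average position of your neighbours.
    coh = [sum(p[i] for p, _ in neighbours) / n - my_pos[i] for i in (0, 1)]
    # Alignment: steer towards the average heading of your neighbours.
    ali = [sum(v[i] for _, v in neighbours) / n - my_vel[i] for i in (0, 1)]
    # Separation: steer away from neighbours to avoid crowding them.
    sep = [sum(my_pos[i] - p[i] for p, _ in neighbours) for i in (0, 1)]
    w_coh, w_ali, w_sep = 0.01, 0.125, 0.05   # illustrative weights
    return tuple(my_vel[i] + w_coh * coh[i] + w_ali * ali[i] + w_sep * sep[i]
                 for i in (0, 1))
```

Applied to every bird at every time step, these three local rules are enough to produce swirling, coherent flocks, with no leader and no global plan.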

I admire this explanation of cultural relativity, by the anthropologist Mary Douglas, for its clean lines and tidiness. I like its beautiful simplicity, the way it illuminates dark corners of misreading, how it limelights the counter-conventional. Poking about in the dirt is exciting, and irreverent. It is about taking what is out of bounds and making it relevant. Douglas's explanation of 'dirt' makes us question the very boundaries we are pushing.

We sometimes tend to think that ideas and feelings arising from our intuitions are intrinsically superior to those achieved by reason and logic. Intuition—the 'gut'—becomes deified as the Noble Savage of the mind, fearlessly cutting through the pedantry of reason. Artists, working from intuition much of the time, are especially prone to this belief. A couple of experiences have made me more sceptical.

The first is a question that Wittgenstein used to pose to his students. It goes like this: you have a ribbon which you want to tie round the centre of the Earth (let's assume it to be a perfect sphere). Unfortunately you've tied the ribbon a bit too loose: it's a meter too long. The question is this: if you could distribute the resulting slack—the extra meter—evenly round the planet so the ribbon hovered just above the surface, how far above the surface would it be?

Most people's intuitions lead them to an answer in the region of a minute fraction of a millimeter. The actual answer is almost 16 cm. In my experience only two sorts of people intuitively get close to this: mathematicians and dressmakers. I still find it rather astonishing: in fact when I heard it as an art student I spent most of one evening calculating and recalculating it because my intuition was screaming incredulity.
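The arithmetic is disarmingly simple once written down: circumference is 2πr, so adding one meter of slack to the circumference raises the radius by 1/(2π) meters, no matter how big the sphere is.

```python
from math import pi

def ribbon_height(slack_m):
    """Extra radius gained when the circumference grows by slack_m metres."""
    return slack_m / (2 * pi)

print(round(ribbon_height(1.0) * 100, 1))  # 15.9 (cm), for the Earth or for a tennis ball
```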

Not many years later, at the Exploratorium in San Francisco, I had another shock-to-the-intuition. I saw for the first time a computer demonstration of John Conway's Life. For those of you who don't know it, it's a simple grid with dots that are acted on according to an equally simple, and totally deterministic, set of rules. The rules decide which dots will live, die or be born in the next step. There are no tricks, no creative stuff, just the rules. The whole system is so transparent that there should be no surprises at all, but in fact there are plenty: the complexity and 'organic-ness' of the evolution of the dot-patterns completely beggars prediction. You change the position of one dot at the start, and the whole story turns out wildly differently. You tweak one of the rules a tiny bit, and there's an explosion of growth or instant armageddon. You just have no (intuitive) way of guessing which it's going to be.
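The rules really are as simple as described; a complete implementation fits in a few lines (a sketch with live cells kept as a set of coordinates, using the standard survive-with-2-or-3, born-with-3 formulation):

```python
from itertools import product

def neighbours(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One deterministic step: born with 3 live neighbours, survive with 2 or 3."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# A "blinker" of three dots flips between a row and a column, forever:
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)  # True: period two
```

Change one starting dot and rerun, and the whole story can turn out wildly differently, which is exactly the unpredictability described above.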

These two examples elegantly demonstrate the following to me:

a) 'Deterministic' doesn't mean 'predictable',
b) we aren't good at intuiting the interaction of simple rules with initial conditions (and the bigger point here is that the human brain may be intrinsically limited in its ability to intuit certain things—like quantum physics and probability, for example), and
c) intuition is not a quasi-mystical voice from outside ourselves speaking through us, but a sort of quick-and-dirty processing of our prior experience (which is why dressmakers get it when the rest of us don't). That processing tool sometimes produces incredibly impressive results at astonishing speed, but it's worth reminding ourselves now and again that it can also be totally wrong.

Director of the Bristol Cognitive Development Centre in the Experimental Psychology Department at the University of Bristol; Author, The Self-Illusion

Complexity Out of Simplicity

As a scientist dealing with complex behavioral and cognitive processes, my deep and elegant explanation comes not from psychology (which is rarely elegant) but from the mathematics of physics. For my money, Fourier's theorem has all the simplicity of, and yet more power than, other familiar explanations in science. Stated simply: any complex pattern, whether in time or space, can be described as a series of overlapping sine waves of multiple frequencies and various amplitudes.

I first encountered Fourier's theorem when I was a Ph.D. student in Cambridge working on visual development. There, I met Fergus Campbell, who in the 1960s had demonstrated that Fourier's theorem was not only an elegant way of analyzing complex visual patterns but also biologically plausible. This insight was later to become a cornerstone of various computational models of vision. But why restrict the analysis to vision?

In effect, any complex physical event can be reduced to the mathematical simplicity of sine waves. It doesn't matter whether it is Van Gogh's Starry Night, Mozart's Requiem, Chanel's No. 5, Rodin's Thinker or a Waldorf salad. Any complex pattern in the environment can be translated into neural patterns that, in turn, can be decomposed into the multitude of sine-wave activity arising from the output of populations of neurons.
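Fourier's claim can be watched at work in a few lines. As a sketch (pure Python, no signal-processing library), here is the textbook synthesis of a square wave from its odd sine harmonics; the more terms in the series, the closer the sum of smooth sine waves hugs the sharp-cornered square.

```python
import math

def square_approx(t, n_terms):
    """Partial Fourier series of a unit square wave:
    (4/pi) * sum of sin((2k+1)*t) / (2k+1) over the first n_terms odd harmonics.
    """
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )

# The ideal square wave equals +1 for 0 < t < pi; watch the error shrink
# as more overlapping sine waves are added.
for n in (1, 10, 1000):
    print(n, abs(square_approx(math.pi / 2, n) - 1.0))
```

The same decomposition runs in the other direction: a sufficiently well-behaved signal can be taken apart into such sine components, which is the kind of analysis Campbell showed to be biologically plausible for vision.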

Maybe I have some physics envy, but to quote Lord Kelvin, "Fourier's theorem is not only one of the most beautiful results of modern analysis, but it may be said to furnish an indispensable instrument in the treatment of nearly every recondite question in modern physics." You don't get much higher praise than that.

To me, epigenetics is the most monumental explanation to emerge in the social and biological sciences since Darwin proposed his theories of Natural Selection and Sexual Selection. Over 2,500 articles, many scientific meetings, the formation of the San Diego Epigenome Center as well as other institutes, a five-year Epigenomics Program launched in 2008 by the National Institutes of Health, and many other institutions, academic forums and people are now devoted to this new field. Although epigenetics has been defined in several ways, all are based on the central concept that environmental forces can affect gene behavior, either turning genes on or off. As an anthropologist untrained in advanced genetics, I won't attempt to explain the processes involved, although two basic mechanisms are known: one involves molecules known as methyl groups that latch on to DNA to suppress and silence gene expression; the other involves molecules known as acetyl groups, which activate and enhance gene expression.

The consequences of epigenetic mechanisms are likely to be phenomenal. Scientists now hypothesize that epigenetic factors play a role in the etiology of many diseases, conditions and human variations—from cancers, to clinical depression and mental illnesses, to human behavioral and cultural variations.

Take the Moroccan Amazighs or Berbers, people with highly similar genetic profiles who now reside in three different environments: some roam the deserts as nomads; some farm the mountain slopes; some live in the towns and cities along the Moroccan coast. And depending on where they live, up to one-third of their genes are differentially expressed, reports researcher Youssef Idaghdour.

For example, among the urbanites, some genes in the respiratory system are switched on—perhaps, Idaghdour suggests, to counteract their new vulnerability to asthma and bronchitis in these smoggy surroundings. Idaghdour and his colleague Greg Gibson propose that epigenetic mechanisms have altered the expression of many genes in these three Berber populations, producing their population differences.

Psychiatrists, psychologists and therapists have long been preoccupied with our childhood experiences, specifically how these sculpt our adult attitudes and behaviors. Yet they have focused on how the brain integrates and remembers these occurrences. Epigenetic studies provide a different explanation.

As an example, mother rats that spend more time licking and grooming their young during the first week after birth produce infants who later become better adjusted adults. And researcher Moshe Szyf proposes that this behavioral adjustment occurs because epigenetic mechanisms are triggered during this "critical period," producing a more active version of a gene that encodes a specific protein. Then this protein, via complex pathways, sets up a feedback loop in the hippocampus of the brain—enabling these rats to cope more efficiently with stress.

These behavioral modifications remain stable through adulthood. However, Szyf notes that when specific chemicals were injected into the adult rat's brain to alter these epigenetic processes and suppress this gene expression, well-adjusted rats became anxious and frightened. And when different chemicals are injected to trigger epigenetic processes that enhance the expression of this gene instead, fearful adult rats (that had received little maternal care in infancy) became more relaxed.

Genes hold the instructions; epigenetic factors direct how those instructions are carried out. And as we age, scientists report, these epigenetic processes continue to modify and build who we are. Fifty-year-old twins, for example, show three times more epigenetic modifications than do three-year-old twins; and twins reared apart show more epigenetic alterations than those who grow up together. Epigenetic investigations are proving that genes are not destiny; but neither is the environment—even in people.

Shelley Taylor has shown this. Studying an allele (genetic variant) in the serotonin system, she and colleagues were able to demonstrate that the symptoms of depression are visible only when this allele is expressed in combination with specific environmental conditions. Moreover, Taylor maintains that individuals growing up in unstable households are likely to suffer all their lives with depression, anxiety, specific cancers, heart disease, diabetes or obesity. Epigenetics at work? Probably.

Even more remarkable, some epigenetic instructions are passed from one generation to the next. Trans-generational epigenetic modifications are now documented in plants and fungi, and have been suggested in mice. Genes are like the keys on a piano; epigenetic processes direct how these keys are played—modifying the tune, even passing these modifications to future generations. Indeed, in 2010, scientists wrote in Science magazine that epigenetic systems are now regarded as "heritable, self-perpetuating and reversible."

If epigenetic mechanisms can not only modulate our intellectual and physical capacities, but also pass these alterations to our descendants, epigenetics has immense and profound implications for the origin, evolution and future of life on earth. In coming decades scientists studying epigenetics may come to understand how myriad environmental forces impact our health and longevity in specific ways, find cures for many human diseases and conditions, and explain intricate variations in human personality.

The 18th-century philosopher John Locke was convinced that the human mind is an empty slate upon which the environment inscribes personality. With equal self-assurance, others have been convinced that genes orchestrate our development, illnesses and life styles. Yet social scientists had failed for decades to explain the mechanisms governing behavioral variations between twins, family members and culture groups. And biological scientists had failed to pinpoint the genetic foundations of many mental illnesses and complex diseases. The central mechanism to explain these complex issues has been found.

I am hardly the first to hail this new field of biology as revolutionary—the fundamental process by which nature and nurture interact. But to me as an anthropologist long trying to take a middle road in a scientific discipline intractably immersed in nature-versus-nurture warfare, epigenetics is the missing link.

"Caron non ti crucciare: Vuolsi così colà dove si puote ciò che si vuole, e più non dimandare" ("Charon, do not torment yourself: It is so willed where will and power are one, and ask no more") says Virgil to old Charon, justifiably alarmed by the mortal Dante and his inflated sense of entitlement in attempting to cross the Acheron. And so with that explanation they go on, Virgil and his literary pupil in tow. Or rather they would if Dante didn't decide to faint right there and then, as so often happens during his journey through the great poem.

Explanations are seldom as effective as Virgil's, although arguably they can be somewhat more scientific. I suppose I could have chosen any number of elegant scientific theories, but in truth it is a practitioner's explanation, rather than a theorist's, that I find most compelling. My preferred explanation is actually this: that our own habits tend to distort our perception of our own body and its movement.

Have you ever recorded your own voice, listened to it, and wondered how on earth you never realized you sounded like that? Well, the same is true for movement. Often we don't move the way we think we do.

Start by asking someone some basic questions about their anatomy. For example, ask them where the joint is that attaches their arm to the rest of their skeleton. They will likely point to their shoulder. But, functionally, the clavicle—the "collar bone"—is part of the arm and can move with it. In that sense the arm attaches to the rest of the skeleton at the top of the sternum. Ask them to point to the top of their spine, where their head sits. They will likely indicate some point in the middle of their neck. But the top of the first vertebra, the atlas, actually sits more or less at the height of the tip of the nose or the middle of the ear.

The problem is not just having the wrong mental picture of one's anatomy: the problem is being in the habit of moving as if that were the right one. As it turns out, most of the time we operate our bodies much in the same way that someone without a driving license would operate a car: with difficulty, and often hurting ourselves in the process.

One of the people who famously made this observation was a somewhat eccentric Tasmanian actor by the name of F. M. Alexander, around the turn of the last century. Alexander was essentially an empiricist. The story goes that he lost his voice while reciting Shakespeare. After visiting a number of specialists, who could not figure out what in the world was wrong with him, he eventually concluded that the problem must be something he was doing.

What followed was an impressive exercise in solipsistic sublimation: Alexander spent three years observing himself in a three-way mirror, trying to find out what was wrong. Finally he noticed something: When he would declaim verse, he had an almost imperceptible habit of tensing the back of his neck. Could that small tension, unnoticed up to then, possibly explain the loss of voice? It turned out that it did. Alexander continued to observe and explore, and through this process learnt to recognize and retrain his own movement and the use of his body. His voice returned.

To be fair, the study of the body and its movement has a long and illustrious history, from Muybridge onwards. But the realization that our own perception of it may be inaccurate has practical implications that run deep into our capacity to express ourselves.

This point is familiar to performing artists. Playing a piano key requires applying a pressure equivalent to about seventy grams, an almost insignificant weight for a human body a thousand times that. Yet most pianists know well the physical exhaustion that can come with playing. Not only does their movement fail to achieve efficiency or effectiveness, it often does not even reflect what they believe themselves to be doing. Not surprisingly, a substantial part of modern piano training consists of inhibiting habitual behavior, in the hope of achieving Rubinstein's famous effortlessness.

Musicians are not the only ones for whom the idea of retraining one's proprioception is important. For example, our posture and movement communicate plenty of information to those who see us. When wanting to "stand tall", we push our chests out and lift our chin and face. But we fail to notice that this results in a contraction of the neck muscles and therefore a significant shortening of the back, achieving exactly the opposite result. Actors know this well, and have a long tradition of studying the use of the body, recognizing and inhibiting such habits: you cannot control what you communicate through your posture and movement if you are already busy communicating existing habits. This insight is, alas, true for most of us.

We are fascinated by the natural world as conceptualized in our elegant explanations, yet the single thing we spend the most time doing—using our body—is rarely the subject of extensive analytical consideration. Simple insights of practitioners like Alexander have depth and meaning because they remind us that analytical observation can tell us a lot, not just about the extraordinary, but also about the ordinary in our daily lives.

Independent Researcher; Author, The Princeton Field Guide to Dinosaurs

Birds Are The Direct Descendants Of Dinosaurs

The most graceful example of an elegant scientific idea in one of my fields of expertise is that dinosaurs were tachyenergetic: endotherms with the high internal energy production and high aerobic exercise capacity, typical of birds and mammals, that can sustain long periods of intense activity. Although it is not dependent upon it, the hypothesis of high-powered dinosaurs blends in with the hypothesis that birds are the direct descendants of dinosaurs, that birds literally are flying dinosaurs, much as bats are flying mammals.

It cannot be overemphasized how much sense the above makes, and how it has revolutionized a big chunk of our understanding of evolution and 230 million years of earth history relative to what was thought from the mid-1800s to the 1960s. Until then it was generally presumed that dinosaurs were a dead-end collection of bradyenergetic reptiles that could achieve high levels of activity only in brief bursts; even walking at 5 mph requires a respiratory capacity beyond that of reptiles, which must plod along at a mile per hour or so if they are moving a long distance. Birds were seen as a distinct and feathery group in which energetically expensive endothermy evolved in order to power flight.

Although the latter hypothesis was not inherently illogical, it was divergent from the evolution of bats in which high aerobic capacity was already present in their furry ancestors.

I first learned of "warm-blooded" dinosaurs in my senior year of high school via a blurb in Smithsonian Magazine about Robert Bakker's article in Nature in the summer of 1972. As soon as I read it, it just clicked. I had been illustrating dinosaurs in accord with the reptilian consensus, but it was a bad fit, because dinosaurs are so obviously constructed like birds and mammals, not crocs and lizards. About the same time, John Ostrom, who also had a hand in discovering dinosaur endothermy, was presenting the evidence that birds are aerial versions of avepod dinosaurs—a concept so obvious that it should have become the dominant thesis back in the 1800s.

For a quarter century the hypotheses were highly controversial—the one regarding dinosaur metabolics especially so—and some of the first justifications were flawed. But the evidence has piled up. Growth rings in dinosaur bones show they grew at a fast pace not achievable by reptiles, their trackways show they walked at steady speeds too high for bradyaerobes, many small dinosaurs were feathery, and polar dinosaurs, birds and mammals were living through blizzardy Mesozoic winters that excluded ectotherms.

Because of the dinorevolution, our understanding of the evolution of the animals that dominated the continents is far closer to the truth than it was. Energy-efficient amphibians and reptiles dominated the continents for only 70 million years in the later portion of the Paleozoic, the era that had begun with trilobites and nothing on land. For the last 270 million years, higher-power albeit less energy-efficient tachyenergy has reigned supreme on land, starting with protomammalian therapsids near the end of the Paleozoic. When therapsids went belly up early in the Mesozoic (the survivors of the group being the then all-small mammals), they were not replaced by lower-power dinosaurs for the next 150 million years, but by dinosaurs that quickly took aerobic exercise capacity to even greater levels.

The unusual avian respiratory complex is so effective that some birds fly as high as airliners, yet the system did not evolve for flight: the skeletal apparatus for operating air-sac-ventilated lungs first developed in flightless avepod dinosaurs for terrestrial purposes (some, but by no means all, researchers offer low global oxygen levels as the selective factor). So the basics of avian energetics appeared in predacious dinosaurs and only later were used to achieve powered flight—rather like how internal combustion engines happened to make powered human flight practical, rather than being developed to do so.

Columnist, The New York Times Magazine; Editorial Director, West Studios; Author, Magic and Loss

Back To The Beginning, The Mere Word

Richard Rorty's transformation of "survival of the fittest" to "whatever survives survives" is not in itself my favorite explanation, though I find it immensely satisfying, but it enacts a wholesome return to language away from the most rococo fantasies of science. In this case, a statement that looks to describe history and biology—controversially, no less—drops into a foundational locution, a tautology. Back to the beginning, the mere Word. Anytime an explanation, for anything, returns us to language, and its dynamics, I'm satisfied. Exhilarated.

Brooks and Suzanne Ragen Professor of Psychology, Yale University; Author, Just Babies: The Origins of Good and Evil

Everything Is The Way It Is Because It Got That Way

This aphorism is attributed to the biologist and classicist D'Arcy Thompson, and it's an elegant summary of how Thompson sought to explain the shapes of things, from jellyfish to sand dunes to elephant tusks. I saw this quoted first in an Edge discussion by Daniel Dennett, who made the point that this insight applies to explanation more generally—all sciences are, to at least some extent, historical sciences.

I think it's a perfect motto for my own field of developmental psychology. Every adult mind has two histories. There is evolution. Few would doubt that some of the most elegant and persuasive explanations in psychology appeal to the constructive process of natural selection. And there is development—how our minds unfold over time, the processes of maturation and learning.

While evolutionary explanations work best for explaining what humans share, development can sometimes capture how we differ. This can be obvious: Nobody is surprised to hear that adults who are fluent in Korean have usually been exposed to Korean when they were children or that adults who practice Judaism have usually been raised as Jews. But other developmental explanations are rather interesting.

There is evidence that an adult's inability to see in stereo is due to poor vision during a critical period in childhood. Some have argued that the self-confidence of adult males is influenced by how young they were when they reached puberty (because of the boost in status caused by being bigger, even if temporarily, than their peers). It's been claimed that smarter adults are more likely to be firstborns (because later children find themselves in environments that are, on average, less intellectually sophisticated). Creative adults are more likely to be later-borns (because they were forced to find their own distinctive niches). Romantic attachments in adults are influenced by their relationships as children with their parents. A man's pain-sensitivity later in life is influenced by whether or not he was circumcised as a baby.

With the exception of the stereo-vision example, I don't know if any of these explanations are true. But they are elegant and non-obvious, and some of them verge on beautiful.

These three terms are all interconnected in my head. When I was about to quit grad school in theoretical physics, I stumbled across a quantum field theory book on a classmate's desk which, once I had read the introduction, enticed me to finish up. Let me paraphrase the beginning of the book's intro: "The new paradigm of physics describes the totality of nature as an ensemble of vibrating fields that interact with each other… in that sense the universe is like an orchestra and we are a result of eons of harmonies, rhythm and improvisation" (I added in the improvisation part). It helped that I was also a student of jazz theory at the time.

As I have continued to study quantum field theory over the years (and now teach the subject), I have been amazed at the concise parallels between the quantum field paradigm of nature, music and improvisation. Connected to this theme is one of the coolest things I've learned: an idea pioneered by a true master of quantum field theory, Leonard Parker. The basic idea is that when we combine Einstein's discovery that the gravitational force arises from the curving of space-time with the field paradigm of matter, we get a very neat physical effect—one which underlies Stephen Hawking's information-loss paradox and the emergence of matter from an early universe devoid of stars and galaxies.

Last month I had the pleasure of finally meeting Parker at the University of Wisconsin–Milwaukee. After a seminar on gravitational-wave physics, Leonard took me to his office and revealed the pioneering calculations in his Ph.D. dissertation, which established the study of quantum fields in curved space-time—I'll refer to the effect as the Parker Process.

In quantum field theory, we can think of all matter as a field (similar to the electromagnetic field) that permeates a large region of space. A useful picture is to imagine a smooth blanket of magnetic fields that fills our entire galaxy (which is actually true). Likewise the electron also has a field that can be distributed across regions of space. At this stage, the field is "classical", since it is a smooth, continuous distribution.

To speak of a quantized field means that if the field vibrates, only discrete bundles (quanta) of vibration are allowed, like an individual musical note on a guitar. The quanta of the field are identified with the creation (or annihilation) of a particle. Quantum field theory has new features that classical field theories lack; perhaps the most important is the notion of the vacuum. The vacuum is a situation in which no particles exist, but one can "disturb" the vacuum and create particles by "exciting" the field (usually with an interaction). It is important to know that the vacuum depends on the space-time location of the observer who measures no particles.

On the other hand, we know from general relativity that space-time is curved, and that in general observers at different places do not see the same curvature. What Parker realized was that in space-times of cosmological interest, such as the expanding universe that Hubble discovered, a field that existed in a state of zero particles would create particles at a later time, due to the very expansion of the space. We can think of this effect as occurring because of the wave-like nature of particles (a quantum effect).

The quantum matter fields that live in the vacuum also interact with the expanding space-time field (the gravitational field). The expansion acts on the vacuum in a manner that "squeezes" particle quanta out of the vacuum. It is this quantum-field effect that is used to explain the seeds of the stars and galaxies that now exist in the universe. Similarly, when black holes evaporate, the space-time also becomes time-dependent and particles are created, but this time as a thermal bath of matter and radiation. This physical feature of quantum fields in curved spaces raises philosophical questions about the observer-dependence of particles, or the lack of them.

As you read these words, don't try to imagine some strange observer in some far region of the universe that will swear that you don't exist; and don't blame it on Leonard.

Professor of Astronomy, Harvard University; Director, Harvard Origins of Life Initiative; Author, The Life of Super-Earths

Frames of Reference

Deep and elegant explanations relate to natural or social phenomena and the observer often has no place in them. As a young student I was fascinated to understand how frames of reference work, i.e., to learn what it means to be an observer.

The reference-frame concept is central in physics and astronomy. For example, the study of flows relies most often on two basic frames: one in which the flow is described as it moves through space, called an Eulerian frame, and another, called a Lagrangian frame, which moves with the flow, stretching and bending as it goes. The equations of motion in the Eulerian frame seemed intuitively obvious to me, but I felt exhilarated when I understood the same flow described by equations in the Lagrangian frame.

It is beautiful: think of a flow of water—a winding river. You are perched on a hill by the riverbank, observing the water flow marked by a multitude of floating tree leaves. The banks of the river and the details of the surroundings provide a natural coordinate system, just as they would on a geographical map—you could almost create a mental image of fixed crisscrossing lines: your frame of reference. The flow of water moves through that fixed map: you are able to describe the twists and turns of the currents and their changing speed, all thanks to this fixed Eulerian frame of reference, named after Leonhard Euler (1707-1783).

It turns out that you could describe the flow with equal success if, instead of standing safely on the top of the hill, you plunged into the river and floated downstream, observing the whirling motions of the tree leaves all around you. Your frame of reference—the one named after Joseph-Louis Lagrange (1736-1813), is no longer fixed; instead you are describing all motions as relative to you and to each other. Your description of the flow will match exactly the description you achieved by observing from the hill, although the mathematical equations appear unrecognizably different.
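The two views can be sketched side by side. This is an illustrative toy, not anything from the essay: assume a one-dimensional flow whose velocity field is u(x) = A·x (a made-up field chosen because the Lagrangian path then has the exact closed form x₀·e^(A·t)). The Eulerian description is just the function u at fixed positions; the Lagrangian description drifts with the particle by integrating dx/dt = u(x).

```python
import math

A = 0.5  # assumed flow parameter for the toy velocity field

def u(x):
    """Eulerian description: velocity measured at a fixed position x."""
    return A * x

def follow(x0, t_end, steps):
    """Lagrangian description: float with the particle, stepping dx = u(x) dt."""
    x, dt = x0, t_end / steps
    for _ in range(steps):
        x += u(x) * dt
    return x

# For this field the exact Lagrangian path is x0 * exp(A * t): the two
# descriptions of the same flow agree, as they must.
numeric = follow(1.0, 2.0, 100_000)
exact = math.exp(A * 2.0)
```

The equations look unrecognizably different (a field u(x) versus a trajectory x(t)), yet they describe one and the same river, which is the magic of the transformation between the frames.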

To my younger self, back then, the transformation between the two frames looked like magic. It was not deep, perhaps, but it was elegant, and extremely helpful. However, it was also just the easy start of a journey—a journey that would pull the old frames of reference out from under me. It started with the naïve unmoving Earth as the absolute frame of Aristotle, soon to be rejected and replaced by Galileo with a frame of reference in which motion is not absolute (oh, how I loved floating with Lagrange down Euler's river!), only to be unsettled again by the special relativity of Einstein and the struggle to comprehend the loss of simultaneity. And a loss it was.

A fundamental shift in our frame of reference, especially the one that defines our place in the world, affects deeply each and every one of us personally. We live and learn, the next generation is born into the new with no attachment to the old. In science it is easy. Human frames of reference go beyond mathematics, physics, and astronomy. Do we know how to transform between human frames of reference successfully? Are they more often than not "Lagrangian" and relative? Perhaps we could take a cue from science and find an elegant solution. Or at least—an elegant explanation.

I would like to propose not only a particular explanation, but also a particular exposition and exponent: Richard Feynman's lectures on quantum electrodynamics (QED) delivered at the University of Auckland in 1979. These are surely among the very best ever delivered in the history of science.

For a start, the theory is genuinely profound, having to do with the behaviour and interactions of those (apparently) most fundamental of particles, photons and electrons. And yet it explains a huge range of phenomena, from the reflection, refraction and diffraction of light to the structure and behaviour of electrons in atoms and their resultant chemistry. Feynman may have been exaggerating when he claimed that QED explains all of the phenomena in the world "except for radioactivity and gravity", but only slightly.

Let me give a brief example. Everyone knows that light travels in straight lines—except when it doesn't, such as when it hits glass or water at anything other than a right angle. Why? Feynman explains that light always takes the path of least time from point to point and uses the analogy of a lifeguard racing along a beach to save a drowning swimmer. (This being Feynman, the latter is, of course, a beautiful girl.) The lifeguard could run straight to the water's edge and then swim diagonally along the coast and out to sea, but this would result in a long time spent swimming, which is slower than running on the beach. Alternatively, he could run to the water's edge at the point nearest to the swimmer, and dive in there. But this makes the total distance covered longer than it needs to be. The optimum, if his aim is to reach the girl as quickly as possible, is somewhere in between these two extremes. Light, too, takes such a path of least time from point to point, which is why it bends when passing between different materials.
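The lifeguard's optimisation is easy to reproduce numerically. A minimal sketch, with made-up speeds and distances (not Feynman's numbers): minimise the run-plus-swim time over the entry point, then check that the optimum obeys Snell's law, sin(run angle)/run speed = sin(swim angle)/swim speed, which is exactly the bending rule light follows between two media.

```python
import math

V_RUN, V_SWIM = 5.0, 1.5             # assumed speeds (m/s) on sand, in water
H_SAND, H_SEA, D = 20.0, 10.0, 30.0  # assumed geometry (metres)

def travel_time(x):
    """Run from (0, H_SAND) to the waterline at (x, 0), swim to (D, -H_SEA)."""
    return math.hypot(x, H_SAND) / V_RUN + math.hypot(D - x, H_SEA) / V_SWIM

def best_entry(lo=0.0, hi=D, iters=200):
    """Ternary search: travel_time is unimodal in x, so this finds its minimum."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        lo, hi = (lo, m2) if travel_time(m1) < travel_time(m2) else (m1, hi)
    return (lo + hi) / 2

x = best_entry()
sin_run = x / math.hypot(x, H_SAND)
sin_swim = (D - x) / math.hypot(D - x, H_SEA)
# At the fastest path, sin_run / V_RUN equals sin_swim / V_SWIM (Snell's law).
```

The optimum sits, as the essay says, between the two extreme strategies: the path bends at the waterline because time spent in the slow medium is traded against distance in the fast one.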

He goes on to reveal that this is actually an incomplete view. Using the so-called 'path integral formulation' (though he avoids that ugly term), Feynman explains that light actually takes every conceivable path from one point to another, but most of these cancel each other out, and the net result is that it appears to follow only the single path of least time. This also happens to explain why uninterrupted light (along with everything else) travels in straight lines—so fundamental a phenomenon that surely very few people even consider it to be in need of an explanation. While at first sight such a theory may seem preposterously profligate, it achieves the welcome result of minimising that most scientifically unsatisfactory of all attributes: arbitrariness.

My amateurish attempts at compressing and conveying this explanation have perhaps made it sound arcane. But on the contrary, a second reason to marvel is that it is almost unbelievably simple and intuitive. Even I, an innumerate former biologist, came away not merely with a vague appreciation that some experts somewhere had found something novel, but with the sense that I was able to share directly in this new conception of reality. Such an experience is all too rare in science generally, and in the abstract, abstruse world of quantum physics it is all but unknown. The main reason for this perspicuity was the adoption of a visual grammar (those famous 'Feynman diagrams') and an almost complete eschewal of hardcore mathematics (the fact that the spinning vectors central to the theory actually represent complex numbers seems almost incidental). Though the world it introduces is as unfamiliar as can be, it makes complete sense in its own bizarre terms.

Information Scientist and Professor of Electrical Engineering and Law, the University of Southern California; Author, Noise

Why The Sun Still Shines

One of the deepest explanations has to be why the sun still shines—and thus why the sun has not long since burned out as do the fires of everyday life. That had to worry some of the sun gazers of old as they watched campfires and forest fires burn through their life cycles. It worried the nineteenth-century scientists who knew that gravity alone could not account for the likely long life of the sun.

It sure worried me when I first thought about it as a child.

The explanation of hydrogen atoms fusing into helium was little comfort. It came at the height of the duck-and-cover cold-war paranoia in the early 1960s after my father had built part of the basement of our new house into a nuclear bomb shelter. The one-room shelter came complete with reinforced concrete and metal windows and a deep freeze packed with homemade TV dinners.

The sun burned so long and so brightly because there were in effect so many mushroom-cloud-producing thermonuclear hydrogen-bomb explosions going off inside it and because there was so much hydrogen-bomb-making material in the sun. The explosions were just like the hydrogen-bomb explosions that could scorch the earth and that could even incinerate the little bomb shelter if they went off close enough.

The logic of the explanation went well beyond explaining the strategic equilibrium of a nuclear Mexican standoff on a global scale. The good news that the sun would not burn out anytime soon came with the bad news that the sun would certainly burn out in a few billion years. But first it would engulf the molten earth in its red-giant phase.

The same explanation said further that in cosmic due course all the stars would burn out or blow up. There is no free lunch in the heat and light that result when simpler atoms fuse into slightly more complex atoms and when mass transforms into energy. There would not even be stars for long. The universe will go dark and get ever closer to absolute-zero cold. The result will be a faint white noise of sparse energy and matter. Even the black holes will over eons burn out or leak out into the near nothingness of an almost perfect faint white noise. That steady-state white noise will have effectively zero information content. It will be the last few steps in a staggeringly long sequence of irreversible nonlinear steps or processes that make up the evolution of the universe. So there will be no way to figure out the lives and worlds that preceded it even if something arose that could figure.

The explanation of why the sun still shines is as deep as it gets. It explains doomsday.

The modern theory of the quantum has only recently come to be understood to be even more exquisitely geometric than Einstein's General Relativity. How this realization unfolded over the last 40 years is a fascinating story that has, to the best of my knowledge, never been fully told as it is not particularly popular with some of the very people responsible for this stunning achievement.

To set the stage, recall that fundamental physics can be divided into two sectors with separate but maddeningly incompatible advantages. The gravitational force has, since Einstein's theory of general relativity, been admired for its four-dimensional geometric elegance. The quantum, on the other hand, encompasses the remaining phenomena and is lauded instead for its unparalleled precision and infinite-dimensional analytic depth.

The story of the geometric quantum begins at some point around 1973-1974, when our consensus picture of fundamental particle theory stopped advancing. This stasis, known as the 'Standard Model', seemed initially like little more than a temporary resting spot on the relentless path towards progress in fundamental physics, and theorists of the era wasted little time proposing new theories in the expectation that they would be quickly confirmed by experimentalists looking for novel phenomena. But that expected entry into the promised land of new physics turned into a 40-year period of half-mad tribal wandering in an arid desert, all but devoid of new phenomena.

Yet just as particle theory was failing to advance in the mid 1970s, something amazing was quietly happening over lunch at the State University of New York at Stony Brook. There, Nobel physics laureate C. N. Yang and geometer (and soon-to-be billionaire) Jim Simons had started an informal seminar to understand what, if anything, modern geometry had to do with quantum field theory. The shocking discovery that emerged from these talks was that geometers and quantum theorists had each independently gotten hold of different collections of insights into a common underlying structure. A Rosetta stone of sorts called the Wu-Yang dictionary was quickly assembled by the physicists, and Isadore Singer of MIT took these results from Stony Brook to his collaborator Michael Atiyah in Oxford, where their research with Nigel Hitchin began a renaissance in physics-inspired geometry that continues to this day.

While the Stony Brook history may be less discussed by some of today's younger mathematicians and physicists, it is not a point of contention between the various members of the community. The more controversial part of this story, however, is that a hoped-for golden era of theoretical physics did not emerge in the aftermath to produce a new consensus theory of elementary particles. Instead the interaction highlighted the strange idea that, just possibly, quantum theory was actually a natural and elegant self-assembling body of pure geometry that had fallen into an abysmal state of pedagogy, putting it beyond mathematical recognition. By this reasoning, the mathematical basket case of quantum field theory was able to cling to life and survive numerous near-death experiences in its confrontations with mathematical rigor only because it was being underpinned by a natural infinite-dimensional geometry, which is to this day still only partially understood.

In short, most physicists were trying and failing to quantize Einstein's geometric theory of gravity when they were instead meant to go in the opposite and less glamorous direction of geometrizing the quantum. Unfortunately for physics, mathematicians had somewhat dropped the ball by not sufficiently developing the geometry of infinite-dimensional systems (such as the Standard Model), which would have been analogous to the four-dimensional Riemannian geometry appropriated from mathematics by Einstein.

This reversal could well be thought of as Einstein's revenge upon the excesses of quantum triumphalism, served ice cold decades after his death: the more researchers dreamed of becoming the Nobel-winning physicists to quantize gravity, the more they were rewarded only as mathematicians for what some saw as the relatively remedial task of geometrizing the quantum. The more they claimed that the 'power and glory' of string theory (a failed piece of 1970s sub-atomic physics which has mysteriously lingered into the 21st century) was the 'only game in town', the more it appeared that the string-theory-based unification claims were themselves, in the absence of testable predictions, sinking with a glug to the bottom of the sea.

What we learned from this episode was profound. Increasingly, the structure of quantum field theory appears to be a purely mathematical input-output machine, where our physical world is but one of many natural inputs that the machine is able to unpack from initial data. In much the way that a simple one-celled human embryo self-assembles into a trillion-celled infant of inconceivable elegance, the humble act of putting a function (called an 'action' by physicists) on a space of geometric waves appears to trigger a self-assembling mathematical Rube Goldberg process that recovers the seemingly intricate features of the formidable quantum as it inexorably unfolds. It also appears that the more geometric the input given to the machine, the more the unpacking process conspires to steer clear of the pathologies that famously afflict less grounded quantum theories. It is even conceivable that sufficiently natural geometric input could ultimately reveal the recent emphasis on 'quantizing gravity' as an extravagant mathematical misadventure distracting from Einstein's dream of a unified physical field. Like genius itself, with the right natural physical input, the new geometric quantum now appears to many mathematicians and physicists to be the proverbial fire that lights itself.

Yet, if the physicists of this era failed to advance the standard model, it was only in their own terms that they went down to defeat. Just as in an earlier era in which physicists retooled to become the first generation of molecular biologists, their viewpoints came to dominate much of modern geometry in the last four decades, scoring numerous mathematical successes that will stand the tests of time. Likewise their quest to quantize gravity may well have backfired, but only in the most romantic and elegant way possible by instead geometrizing the venerable quantum as a positive externality.

But the most important lesson is that, at a minimum, Einstein's minor dream of a world of pure geometry has largely been realized as the result of a large group effort. All known physical phenomena can now be recognized as fashioned from the pure, if still heterogeneous, marble of geometry through the efforts of a new pantheon of giants. Their achievements, while still incomplete, explain in advance of unification that the source code of the universe is overwhelmingly likely to determine a purely geometric operating system written in a uniform programming language. While that leaves Einstein's greater quest for the unifying physics unfinished, and the marble something of a disappointing patchwork of motley colors, it suggests that the leaders during the years of the Standard Model stasis have put this period to good use for the benefit of those who hope to follow.

Consultant; Adaptive Optics and Adjunct Professor of Anthropology, University of Utah; Co-author, The Ten Thousand Year Explosion

Germs Cause Disease

The germ theory of disease has been very successful, particularly if you care about practical payoffs, like staying alive. It explains how disease can rapidly spread to large numbers of people (exponential growth), why there are so many different diseases (distinct pathogen species), and why some kind of contact (sometimes indirect) is required for disease transmission.
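The exponential-growth point is simple arithmetic. A toy sketch, with an invented reproduction number:

```python
# Toy illustration of epidemic exponential growth (the numbers are
# invented, not from the essay): each case infects R0 new people per
# generation of transmission, so case counts multiply geometrically.
R0 = 2.5            # assumed reproduction number
cases = 1.0
history = [cases]
for generation in range(10):
    cases *= R0
    history.append(cases)

# After 10 generations a single case has become 2.5**10, roughly
# nine and a half thousand cases.
print(history)
```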

In modern language, most disease syndromes turn out to be caused by tiny self-replicating machines whose genetic interests are not closely aligned with ours.

In fact, germ theory has been so successful that it almost seems uninteresting. Once we understood the causes of cholera and pneumonia and syphilis, we got rid of them, at least in the wealthier countries. Now we're at the point where people resist the means of victory—vaccination for example—because they no longer remember the threat.

It is still worth studying—not just to fight the next plague, but also because it has been a major factor in human history and human evolution. You can't really understand Cortez without smallpox or Keats without tuberculosis. The past is another country—don't drink the water.

It may well explain patterns that we aren't even supposed to see, let alone understand. For example, human intelligence was, until very recently, ineffective at addressing problems caused by microparasites, as William McNeill pointed out in Plagues and Peoples. Those invisible enemies played a major role in determining human biological fitness—more so in some places than others. Consider the implications.

Lastly, when you leaf through a massively illustrated book on tropical diseases and gaze upon an advanced case of elephantiasis, or someone with crusted scabies, you realize that any theory that explains that much ugliness just has to be true.

Professor of Philosophy at Birkbeck College, University of London, and a Supernumerary Fellow of St Anne's College, Oxford; Author, The God Argument

Russell's Theory of Descriptions

My favourite example of an elegant and inspirational theory in philosophy is Russell's Theory of Descriptions. It did not prove definitive, but it prompted richly insightful trains of enquiry into the structure of language and thought.

In essence Russell's theory turns on the idea that there is logical structure beneath the surface forms of language, which analysis brings to light; and when this structure is revealed we see what we are actually saying, what beliefs we are committing ourselves to, and what conditions have to be satisfied for the truth or falsity of what is thus said and believed.

One example Russell used to illustrate the idea is the assertion 'the present King of France is bald,' said when there is no King of France. Is this assertion true or false? One response might be to say that it is neither, since there is no King of France at present. But Russell wished to find an explanation for the falsity of the assertion which did not dispense with bivalence in logic, that is, the exclusive alternative of truth and falsity as the only two truth-values.

He postulated that the underlying form of the assertion consists in the conjunction of three logically more basic assertions: (a) there is something that has the property of being King of France, (b) there is only one such thing (this takes care of the implication of the definite article 'the'), and (c) that thing has the further property of being bald. In the symbolism of first-order predicate calculus, which Russell took to be the properly unambiguous rendering of the assertion's logical form (I omit strictly correct bracketing so as not to clutter):

(Ex)Kx & [(y)(Ky → y=x)] & Bx

which is pronounced 'there is an x such that x is K; and for anything y, if y is K then y and x are identical (this deals logically with 'the' which implies uniqueness); and x is B,' where K stands for 'has the property of being King of France' and B stands for 'has the property of being bald.' 'E' is the existential quantifier 'there is...' or 'there is at least one...' and '(y)' stands for the universal quantifier 'for all' or 'any.'

One can now see that there are two ways in which the assertion can be false; one is if there is no x such that x is K, and the other is if there is an x but x is not bald. By preserving bivalence and stripping the assertion to its logical bones Russell has provided what Frank Ramsey wonderfully called 'a paradigm of philosophy.'
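Russell's three-clause analysis is mechanical enough to run. Here is a minimal sketch in code (the domain and the names in it are hypothetical, chosen only for illustration):

```python
# Evaluate "The K is B" Russell-style over a finite domain: true iff
# (a) something is K, (b) nothing else is K, and (c) that thing is B.
def the_K_is_B(domain, K, B):
    ks = [x for x in domain if K(x)]
    return len(ks) == 1 and B(ks[0])    # clauses (a)+(b), then (c)

# 1905: nobody in the domain is King of France, so clause (a) fails
# and the assertion comes out false. No third truth-value is needed.
people = ["Loubet", "Fallieres", "Jaures"]   # hypothetical domain
print(the_K_is_B(people, lambda x: False, lambda x: True))          # False

# The second route to falsity: a unique King who is not bald.
print(the_K_is_B(["Edward VII"], lambda x: True, lambda x: False))  # False
```

Bivalence is preserved: every way of filling in the domain and predicates yields exactly `True` or `False`.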

To the irredeemable sceptic about philosophy all this doubtless looks like 'drowning in two inches of water', as the Lebanese say; but in fact it is an exemplary instance of philosophical analysis, and it has been very fruitful as the ancestor of work in fields ranging from the contributions of Wittgenstein and W. V. Quine to research in philosophy of language, linguistics, psychology, cognitive science, computing and artificial intelligence.

Falling Into Place: Entropy, Galileo's Frames of Reference, and the Desperate Ingenuity Of Life

The hardest choice I had to make in my early scientific life was whether to give up the beautiful puzzles of quantum mechanics, nonlocality, and cosmology for something equally arresting: To work instead on reverse engineering the code that natural selection had built into the programs that made up our species' circuit architecture. In 1970, the surrounding cultural frenzy and geopolitics made first steps toward a nonideological and computational understanding of our evolved design, "human nature", seem urgent; the recent rise of computer science and cybernetics made it seem possible; the almost complete avoidance of and hostility to evolutionary biology by behavioral and social scientists had nearly neutered those fields, and so made it seem necessary.

What finally pulled me over was that the theory of natural selection was itself such an extraordinarily beautiful and elegant inference engine. Wearing its theoretical lenses was a permanent revelation, populating the mind with chains of deductions that raced like crystal lattices through supersaturated solutions. Even better, it starts from first principles (such as set theory and physics), so much of it is nonoptional.

But still, from the vantage point of physics, beneath natural selection there remained a deep problem in search of an explanation: The world given to us by physics is unrelievedly bleak. It blasts us when it is not burning us or invisibly grinding our cells and macromolecules until we are dead. It wipes out planets, habitats, labors, those we love, ourselves. Gamma-ray bursts wipe out entire galactic regions; supernovae, asteroid impacts, supervolcanoes, and ice ages devastate ecosystems and end species. Epidemics, strokes, blunt force trauma, oxidative damage, protein cross-linking, thermal-noise-scrambled DNA—all are random movements away from the narrowly organized set of states that we value, into increasing disorder or greater entropy. The second law of thermodynamics is the recognition that physical systems tend to move toward more probable states, and in so doing, they tend to move away from less probable states (organization) on their blind toboggan ride toward maximum disorder.

Entropy, then, poses the problem: How are living things at all compatible with a physical world governed by entropy, and, given entropy, how can natural selection lead over the long run to the increasing accumulation of functional organization in living things? Living things stand out as an extraordinary departure from the physically normal (e.g., the earth's metal core, lunar craters, or the solar wind). What sets all organisms—from blackthorn and alder to egrets and otters—apart from everything else in the universe is that woven through their designs are staggeringly unlikely arrays of highly tuned interrelationships—a high order that is highly functional. Yet as highly ordered physical systems, organisms should tend to slide rapidly back toward a state of maximum disorder or maximum probability. As the physicist Erwin Schrödinger put it, "It is by avoiding the rapid decay into the inert state that an organism appears so enigmatic."

The quick answer normally palmed off on creationists is true as far as it goes, but it is far from complete: The earth is not a closed system; organisms are not closed systems, so entropy still increases globally (consistent with the second law of thermodynamics) while (sometimes) decreasing locally in organisms. This permits but does not explain the high levels of organization found in life. Natural selection, however, can (correctly) be invoked to explain order in organisms, including the entropy-delaying adaptations that keep us from oxidizing immediately into a puff of ash.

Natural selection is the only known counterweight to the tendency of physical systems to lose rather than grow functional organization—the only natural physical process that pushes populations of organisms uphill (sometimes) into higher degrees of functional order. But how could this work, exactly?

It is here that, along with entropy and natural selection, the third of our trio of truly elegant scientific ideas can be adapted to the problem: Galileo's brilliant concept of frames of reference, which he used to clarify the physics of motion.

The concept of entropy was originally developed for the study of heat and energy, and if the only kind of real entropy (order/disorder) were the thermodynamic entropy of energy dispersal, then we (life) wouldn't be possible. But with Galileo's contribution one can consider multiple kinds of order (improbable physical arrangements), each being defined with respect to a distinct frame of reference.

There can be as many kinds of entropy (order/disorder) as there are meaningful frames of reference. Organisms are defined as self-replicating physical systems. This creates a frame of reference that defines its kind of order in terms of causal interrelationships that promote the replication of the system (replicative rather than thermodynamic order). Indeed, organisms must be physically designed to capture undispersed energy, and like hydroelectric dams using waterfalls to drive turbines, they use this thermodynamic entropic flow to fuel their replication, spreading multiple copies of themselves across the landscape.

Entropy sometimes introduces copying errors into replication, but injected disorder in replicative systems is self-correcting. By definition the less well-organized are worse at replicating themselves, and so are removed from the population. In contrast, copying errors that increase functional order (replicative ability) become more common. This inevitable ratchet effect in replicators is natural selection.
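That ratchet can be watched in a minimal simulation. This is a sketch under invented parameters, not a biological model: reproduction is weighted by replicative ability, copying errors are symmetric, and yet the mean ability climbs.

```python
import random

# Minimal replicator ratchet: copies are made in proportion to
# "replicative ability"; copying introduces small, symmetric errors.
# Downward errors are culled by the copying process itself while
# upward ones spread, so mean ability rises despite unbiased mutation.
random.seed(0)                           # deterministic for illustration
POP, GENERATIONS, ERROR_SD = 200, 100, 0.05
abilities = [1.0] * POP

for _ in range(GENERATIONS):
    # reproduction weighted by ability (the selection step)
    parents = random.choices(abilities, weights=abilities, k=POP)
    # copying errors, symmetric around zero (the mutation step)
    abilities = [max(0.01, a + random.gauss(0.0, ERROR_SD)) for a in parents]

mean_ability = sum(abilities) / POP
print(mean_ability)   # climbs above the starting value of 1.0
```

Removing the `weights=abilities` argument removes selection, and the mean then merely drifts: the directional push comes entirely from differential replication.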

Organisms exploit the trick of deploying different entropic frames of reference in many diverse and subtle ways, but the underlying point is that what is naturally increasing disorder (moving toward maximally probable states) for one frame of reference inside one physical domain can be harnessed to decrease disorder with respect to another frame of reference. Natural selection picks out and links different entropic domains (e.g., cells, organs, membranes) that each impose their own proprietary entropic frames of reference locally.

When the right ones are associated with each other, they do replicative work by harnessing various types of increasing entropy to decrease other kinds of entropy in ways that are useful for the organism. For example, oxygen diffusion from the lungs to the bloodstream to the cells is driven by the entropy of chemical mixing—falling toward more probable, higher-entropy states, but increasing order from the perspective of replication-promotion.

Entropy makes things fall, but life ingeniously rigs the game so that when they do they often fall into place.

What caused our Big Bang? My favorite deep explanation is that our baby universe grew like a baby human—literally. Right after your conception, each of your cells doubled roughly daily, causing your total number of cells to increase day by day as 1, 2, 4, 8, 16, etc. Repeated doubling is a powerful process, so your Mom would have been in trouble if you'd kept doubling your weight every day until you were born: after nine months (about 274 doublings), you would have weighed more than all the matter in our observable universe combined.
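A back-of-envelope check of that claim, using rough order-of-magnitude figures that are my assumptions rather than the essay's:

```python
# If a ~microgram zygote doubled its mass daily for ~274 days, the
# final mass would dwarf that of the observable universe.
zygote_mass_kg = 1e-9           # assumed order of magnitude (~1 microgram)
doublings = 274
final_mass_kg = zygote_mass_kg * 2 ** doublings

observable_universe_kg = 1e53   # rough conventional order of magnitude
print(final_mass_kg / observable_universe_kg)   # > 1e20: vastly heavier
```

Repeated doubling overwhelms any reasonable starting figure: 2^274 is about 10^82, so the 44 orders of magnitude between a microgram and a kilogram-scale universe estimate barely matter.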

Crazy as it sounds, this is exactly what our baby universe did according to the inflation theory pioneered by Alan Guth and others: starting out with a speck much smaller and lighter than an atom, it repeatedly doubled its size until it was more massive than our entire observable universe, expanding at dizzying speed. And it doubled not daily but almost instantly. In other words, inflation created our mighty Big Bang out of almost nothing, in a tiny fraction of a second. By the time you reached about 10 centimeters in size, your expansion had transitioned from accelerating to decelerating. In the simplest inflation models, our baby universe did the same when it was about 10 centimeters in size, its exponential growth spurt slowing to a more leisurely expansion where hot plasma diluted and cooled and its constituent particles gradually coalesced into nuclei, atoms, molecules, stars and galaxies.

Inflation is like a great magic show—my gut reaction is: "This can't possibly obey the laws of physics!"

Yet under close enough scrutiny, it does. For example, how can one gram of inflating matter turn into two grams when it expands? Surely, mass can't just be created from nothing? Interestingly, Einstein offered us a loophole through his special relativity theory, which says that energy E and mass m are related according to the famous formula E=mc², where c is the speed of light.

This means that you can increase the mass of something by adding energy to it. For example, you can make a rubber band slightly heavier by stretching it: you need to apply energy to stretch it, and this energy goes into the rubber band and increases its mass. A rubber band has negative pressure because you need to do work to expand it. Similarly, the inflating substance has to have negative pressure in order to obey the laws of physics, and this negative pressure has to be so huge that the energy required to expand it to twice its volume is exactly enough to double its mass. Remarkably, Einstein's theory of General Relativity says that this negative pressure causes a negative gravitational force. This in turn causes the repeated doubling, ultimately creating everything we can observe from almost nothing.
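The rubber-band bookkeeping is easy to check with E=mc²; in this sketch the joule of stretching work is an assumed figure:

```python
# Mass gained by a rubber band per joule of stretching work, via
# m = E / c^2. The increase is real but hopelessly small to weigh.
c = 299_792_458.0        # speed of light in m/s
stretch_energy_j = 1.0   # assume roughly one joule of work done stretching
delta_mass_kg = stretch_energy_j / c ** 2
print(delta_mass_kg)     # about 1.1e-17 kg
```

The same accounting, run in reverse at cosmic scale, is what lets each doubling of the inflating volume pay for its own doubled mass.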

To me, the hallmark of a deep explanation is that it answers more than you ask. And inflation has proven to be the gift that keeps on giving, churning out answer after answer. It explained why space is so flat, which we've measured to about 1% accuracy. It explained why on average, our distant universe looks the same in all directions, with only 0.002% fluctuations from place to place. It explained the origins of these 0.002% fluctuations as quantum fluctuations stretched by inflation from microscopic to macroscopic scales, then amplified by gravity into today's galaxies and cosmic large-scale structure. It even explained the cosmic acceleration that nabbed the 2011 physics Nobel Prize as inflation restarting, in slow motion, doubling the size of our universe not every split second but every 8 billion years, transforming the debate from whether inflation happened or not to whether it happened once or twice.

It's now becoming clear that inflation is an explanation that doesn't stop—inflating or explaining.

Just as cell division didn't make merely one baby and stop, but a huge and diverse population of humans, it looks like inflation didn't make merely one universe and stop, but a huge and diverse population of parallel universes, perhaps realizing all possible options for what we used to think of as physical constants. Which would explain yet another mystery: the fact that many constants in our own universe are so fine-tuned for life that if they changed by small amounts, life as we know it would be impossible—there would be no galaxies or atoms, say. Even though most of the parallel universes created by inflation are stillborn, there will be some where conditions are just right for life, and it's not surprising that this is where we find ourselves.

Inflation has given us an embarrassment of riches—and embarrassing it is... Because this infinity of universes has brought about the so-called measure problem, which I view as the greatest crisis facing modern physics. Physics is all about predicting the future from the past, but inflation seems to sabotage this. Our physical world is clearly teeming with patterns and regularities, yet when we try quantifying them to predict the probability that something particular will happen, inflation always gives the same useless answer: infinity divided by infinity.

The problem is that whatever experiment you make, inflation predicts that there will be infinite copies of you obtaining each physically possible outcome in an infinite number of parallel universes, and despite years of tooth-grinding in the cosmology community, no consensus has emerged on how to extract sensible answers from these infinities. So strictly speaking, we physicists are no longer able to predict anything at all! Our baby universe has grown into an unpredictable teenager.

This is so bad that I think a radical new idea is needed. Perhaps we need to somehow get rid of the infinite. Perhaps, like a rubber band, space can't be expanded ad infinitum without undergoing a big snap? Perhaps those infinite parallel universes get destroyed by some yet undiscovered process, or perchance they're for some reason mere mirages? The very deepest explanations don't just provide answers, but also questions. I think inflation still has some explaining left to do!

Did you ever notice that the "vein" you are told, for some reason, to remove from shrimp before eating them doesn't seem to ooze anything you'd be inclined to call blood? Doesn't the slime seem more like some sort of alimentary waste? That's because it is. In shrimp, you can get at the digestive system right through its back because that's where it is. The heart's up there too, and this is the way it is in arthropods, the animal phylum that includes crustaceans and insects. Meanwhile, if you were interested in finding the shrimp's main nerve highway, you'd find it running down along its bottom side.

That feels backwards to us, because we're chordates, another big animal phylum. Chordates have the spinal nerve running down the back, with the gut and heart up in front. It's as if our body plans were mirror images of arthropods', and this is a microcosm of a general split between larger classes: arthropods are among the protostomes, with the guts on the back, as opposed to the deuterostomes that we chordates are among, with the guts up front.

Biologists have noticed this since auld lang syne, with naturalist Étienne Geoffroy Saint-Hilaire famously turning a dissected lobster upside down and showing that as such, its innards' arrangement resembled ours. The question was how things got this way, especially as Darwin's natural selection theory became accepted. How could one get step-by-step from guts on the back and the spinal cord up front to the reverse situation? More to the point, why would this be evolutionarily advantageous, which is the only reason we assume it would happen at all?

Short of imagining that the nerve cord glommed upward and took over the gut and a new gut spontaneously developed down below because it was "needed"—this was actually entertained for a while by one venturesome thinker—the best biologists could do for a long time was suppose that the arthropod plan and the chordate plan were alternative pathways of evolution from some primordial creature. It must have just been a matter of the roll of the dice coming out differently one time than the next, they thought.

Not only was this boring—the problem was that molecular biology started making it ever clearer that arthropods and chordates trace back to the same basic body plan in a good amount of detail. The shrimp's little segments are generated by the same basic genes that create our vertebral column, and so on. Which leads to the old question again—how do you get from a lobster to a cat? Biologists are converging upon an answer that combines elegance with a touch of mystery while occasioning a scintilla of humility in the bargain.

Namely, what is increasingly thought to have happened is that some early worm-like aquatic creature with the arthropod-style body plan started swimming upside-down. Creatures can do that: brine shrimp, today, for example (remember those "sea monkeys"?). Often it's because a creature's coloring is different on the top than the bottom, and having the top color face down makes them harder for predators to see. That is, there would have been evolutionary advantage to such a creature gradually turning upside down forever.

But what this would mean is that in this creature, the spinal cord was up and the guts were down. In itself, that's perhaps cute, maybe a little sad, but little more. But—suppose this little worm then evolved into today's chordates? It's hardly a stretch, given that the most primitive chordates actually are wormish, only vaguely piscine things called lancelets. And of course, if you were moved to rip one open you'd see that nerve cord on the back, not the front.

Molecular biology is quickly showing exactly how developing organisms can be signaled to develop either a shrimp-like or a cat-like body plan along these lines. There even seems to be a "missing link"—there are rather vile, smelly bottom-feeding critters called acorn worms that have nerve cords on both the back and the front, and guts that seem on their way to moving on down.

So—the reason we humans have a backbone is not because it's somehow better to have a spinal column to break a fall backwards or anything of the sort. Roll the dice again and we could be bipedals with spinal columns running down our fronts like zippers and the guts carried in the back (it actually doesn't sound half bad). And beyond this, this explanation of what's called dorsoventral inversion is yet more evidence of how under natural selection, such awesome variety can emerge in unbroken fashion from such humble beginnings. And finally, it's hard not to be heartened by a scientific explanation that early adopters, like Geoffroy Saint-Hilaire, were ridiculed for espousing.

Quite often when preparing shrimp, tearing open a lobster, contemplating what it would be like to be forced to dissect an acorn worm, patting my cat on the belly, or giving someone a hug, I think a bit about the fact that all of those bodies are built on the same plan, except that the cats' and the huggees' bodies are the legacy of, of all things, some worm swimming the wrong way up in a Precambrian ocean over 550 million years ago. It has always struck me as rather gorgeous.

Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, The First Scientist: Anaximander and His Legacy

How Apparent Finality Can Emerge

Darwin, no doubt. The beauty and the simplicity of his explanation are astonishing. I am sure that others have pointed to Darwin as their favorite deep, elegant, beautiful explanation, but I still want to emphasize the general reach of Darwin's central intuition, which goes well beyond the already monumental result of having clarified that we share the same ancestors with all living beings on Earth, and is directly relevant to the very core of the entire scientific enterprise.

Shortly after the ancient Greek "physicists" started to develop naturalistic explanations of Nature, a general objection came forward. The objection is well articulated in Plato, for instance in the Phaedo, and especially in Aristotle's discussion of the theory of "causes". Naturalistic explanations rely on what Aristotle called the "efficient cause", namely past phenomena producing effects. But the world appears to be dominated by phenomena that can be understood in terms of "final causes", namely an "aim" or a "purpose". These are evident in the kingdom of life. We have a mouth "so we can" eat. The importance of this objection cannot be overestimated. It is this objection that brought down ancient naturalism, and in the minds of many it is still today the principal source of psychological resistance against a naturalistic understanding of the world.

Darwin discovered the spectacularly simple mechanism by which efficient causes produce phenomena that appear to be governed by final causes: anytime we have phenomena that can reproduce, the actual phenomena we observe are those that keep reproducing, and therefore are necessarily those better at reproducing, and so we can read them in terms of final causes. In other words, a final cause can be effective for understanding the world because it is a shortcut for accounting for the past history of a continuing phenomenon.

To be sure, the idea had appeared before. Empedocles discussed the idea that the apparent finality in the living kingdom could be the result of selected randomness, and Aristotle himself in his "Physics" mentions a version of this idea for species ("seeds"). But the times were not yet ripe, and the suggestion was lost in the following religious ages. I think that the resistance against Darwin is not just the difficulty of seeing the power of a spectacularly beautiful explanation: it is the fear of realizing the extraordinary power such an explanation has in shattering the remnants of old worldviews.

Professor and Chair of Geography; Professor of Earth, Planetary, and Space Sciences at UCLA; Author, The World in 2050

Continuity

For me, the answer to this year's Edge question is clear: The Continuity Equations.

These are already familiar to you, at least in anecdotal form. Most everyone has heard of the law of "Conservation of Mass" (sometimes using the word "matter" instead of mass) and probably its partner "Conservation of Energy" too. These laws tell us that for practical, real-world (i.e. non-quantum, non-general relativity) phenomena, matter and energy can never be created or destroyed, only shuffled around. That concept has origins tracing at least as far back as the ancient Greeks, was formally articulated in the 18th century (a major advance for modern chemistry), and today underpins virtually every aspect of the physical, life, and natural sciences. Conservation of Mass (matter) is what finally quashed the alchemists' quest to transform lead into gold; Conservation of Energy is what consigns the awesome power of a wizard's staff to the imaginations of legions of Lord of the Rings fans.

The Continuity Equations take these laws an important step further, by providing explicit mathematical formulations that track the storage and/or transfers of mass (Mass Continuity) and energy (Energy Continuity) from one compartment or state to another. As such, they are not really a single pair of equations but are instead written in a variety of forms, ranging from the very simple to the very complex, in order to best represent the physical, life-science, or natural-world phenomenon they are supposed to describe. The most elegant forms, adored by mathematicians and physicists, have exquisite detail and are therefore the most complex. A classic example is the set of Navier-Stokes equations used to understand the movements and accelerations of fluids (the related Saint-Venant equations are a simplified, shallow-water form). The beauty of Navier-Stokes lies in their explicit partitioning and tracking of mass, energy and momentum through space and time. However, in practice such detail also makes them difficult to solve, requiring either hefty computing power or simplifying assumptions to be made to the equations themselves.
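For the curious, the differential form of Mass Continuity—the statement that underlies the fluid equations mentioned above—can be written in one line (this is the standard textbook form, not tied to any particular application):

```latex
% Mass continuity: the density inside a volume changes only
% through the flux of material across the volume's boundary.
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \, \mathbf{u}) = 0
```

Here \(\rho\) is the fluid's density and \(\mathbf{u}\) its velocity field; the zero on the right-hand side is precisely the claim that mass is neither created nor destroyed, only shuffled around.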

But the power of the Continuity Equations is not limited to complex forms comprehensible solely to mathematicians and physicists. A forest manager, for example, might use a simple, so-called "mass balance" form of a mass continuity equation to study her forest: by adding up the number, size, and density of trees and the rate at which seedlings establish, then subtracting the trees' mortality rate and the number of truckloads of timber removed, she can learn whether the forest's total wood content (biomass) is increasing, decreasing, or stable. Automotive engineers routinely apply simple "energy balance" equations when, for example, designing a hybrid electric car to recapture kinetic energy from its braking system. None of the energy is truly created or destroyed, just recaptured (e.g. from a combustion engine, which got it from breaking apart ancient chemical bonds, which got it from photosynthetic reactions, which got it from the Sun). Any remaining energy not recaptured from the brakes is not really "lost", of course, but instead transferred to the atmosphere as low-grade heat.
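The forest manager's bookkeeping can be sketched in a few lines of code. The function name, variable names, and all figures below are invented for illustration; the point is only the shape of a mass-balance calculation: change in storage equals sources minus sinks.

```python
# A minimal "mass balance" sketch of the forest example above.
# All numbers are hypothetical; units are tonnes of biomass per year.

def forest_biomass_change(growth, seedling_recruitment, mortality, harvest):
    """Net annual change in stored biomass.

    Mass continuity in 'balance' form:
    storage change = (sources) - (sinks).
    """
    return (growth + seedling_recruitment) - (mortality + harvest)

# Invented example values:
net = forest_biomass_change(growth=120.0, seedling_recruitment=15.0,
                            mortality=40.0, harvest=80.0)
print(net)  # positive: this hypothetical forest's biomass is increasing
```

A negative result would mean the harvest and mortality sinks exceed the growth sources, i.e. the forest's stored mass is being drawn down.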

The cardinal assumption behind these laws and equations is that mass and energy are conserved (constant) within a closed system. In principle, the hybrid electric car only satisfies energy continuity if its consumption is tracked from start (the Sun) to finish (dissipation of heat into the atmosphere). This is a bit cumbersome, so it is usually treated as an open system. The metals used in the car's manufacture satisfy mass continuity only if tracked from their source (ores) to landfill. This is more feasible, and such "cradle-to-grave" resource accounting—a high priority for many environmentalists—is thus more compatible with natural laws than our current economic model, which tends to externalize (i.e., assume an open system) such resource flows.

Like the car, our planet, from a practical standpoint, is an open system with respect to energy and a closed system with respect to mass (while it's true that Earth is still being bombarded by meteorites, that input is now small enough to be neglected). The former is what makes life possible: Without the Sun's steady infusion of fresh, external energy, life as we know it would quickly end. An external source is required because although energy cannot be destroyed, it is constantly degraded into weaker, less useful forms in accordance with the 2nd law of thermodynamics (consider again the hybrid-electric car's brake pads—their dissipated heat is not of much use to anyone). The openness of this system is two-way, because Earth also streams thermal infrared energy back out to space. Its radiation is invisible to us, but to satellites with "vision" in this range of the electromagnetic spectrum the Earth is a brightly glowing orb, much like the Sun.

Interestingly, this closed/open dichotomy is yet another reason why the physics of climate change are unassailable. By burning fossil fuels, we shuffle carbon (mass) out of the subsurface—where it has virtually no interaction with the planet's energy balance—to the atmosphere, where it does. It is well understood that carbon in the atmosphere alters the planet's energy balance (the physics of this have been known since 1893) and without carbon-based and other greenhouse gases our planet would be a moribund, ice-covered rock. Greenhouse gases prevent this by selectively altering the Earth's energy balance in the troposphere (the lowest few miles of the atmosphere, where the vast majority of its gases reside), thus raising the amount of thermal infrared radiation that it emits. Because some of this energy streams back down to Earth (as well as out to space) the lower troposphere warms, to achieve energy balance. Continuity of Energy commands this.

Our planet's carbon atoms, however, are stuck here with us forever—Continuity of Mass commands that too. The real question is what choices we will make about how much, and how fast, to shuffle them out of the ground. The physics of natural resource stocks, climate change, and other problems can often be reduced to simple, elegant equations—if only we had such masterful tools to dictate their solutions.

Director, Cambridge Embodied Cognition and Emotion Laboratory; University Senior Lecturer and Fellow of Jesus College, University of Cambridge

Embodied Metaphors Unify Perception, Cognition and Action

Philosophers and psychologists have grappled with a fundamental question for quite some time: How does the brain derive meaning? If thoughts consist of the manipulation of abstract symbols, just as computers process 0s and 1s, then how are such abstract symbols translated into meaningful cognitive representations? This so-called "symbol grounding problem" has now been largely overcome, because many findings from cognitive science suggest that the brain does not really translate incoming information into abstract symbols in the first place. Instead, sensory and perceptual inputs from everyday experience are taken in their modality-specific form, and they provide the building blocks of thoughts.

British empiricists such as Locke and Berkeley long ago recognized that cognition is inherently perceptual. But following the cognitive revolution in the 1950s, psychology treated the computer as the most appropriate model to study the mind. Now we know that a brain does not work like a computer. Its job is not to store or process information; instead, its job is to drive and control the actions of the brain's large appendage, the body. A new revolution is taking shape, considered by some to be bringing an end to cognitivism and giving way to a transformed kind of cognitive science, namely an embodied cognitive science.

The basic claim is that the mind thinks in embodied metaphors. Early proponents of this idea were linguists such as George Lakoff, and in recent years social psychologists have been conducting the relevant experiments, providing compelling evidence. But it does not stop here; there is also a reverse pathway: Because thinking is for doing, many bodily processes feed back into the mind to drive action.

Consider the following recent findings that relate to the very basic spatial concept of verticality. Because moving around in space is a common physical experience, concepts such as "up" or "down" are immediately meaningful relative to one's own body. The concrete experience of verticality serves as a perfect scaffold for comprehending abstract concepts, such as morality: Virtue is up, whereas depravity is down. Good people are "high minded" and "upstanding" citizens, whereas bad people are "underhanded" and the "low life" of society. Recent research by Brian Meier, Martin Sellbom and Dustin Wygant illustrated that research participants are faster to categorize moral words when they are presented in an up location, and immoral words when they are presented in a down location. Thus, people intuitively relate the moral domain to verticality; however, Meier and colleagues also found that people who do not recognize moral norms, namely psychopaths, do not show this effect.

People not only think of all things good and moral as up, but they also think of God as up and the Devil as down. Further, those in power are conceptualized as being high up relative to those down below, over whom they hover and exert control, as shown by Thomas Schubert. All the empirical evidence suggests that there is indeed a conceptual dimension that leads up, both literally and metaphorically. This vertical dimension that pulls the mind up to considering what higher power there might be is deeply rooted in the very basic physical experience of verticality.

However, verticality not only influences people's representation of what is good, moral and divine; movement through space along the vertical dimension can even change their moral actions. Larry Sanna, Edward Chang, Paul Miceli and Kristjen Lundberg recently demonstrated that manipulating people's location along the vertical dimension can actually turn them into more "high minded" and "upstanding" citizens. They found that people in a shopping mall who had just moved up an escalator were more likely to contribute to a charity donation box than people who had moved down on the escalator. Similarly, research participants who had watched a film depicting a view from high above, namely flying over clouds as seen from an airplane window, subsequently showed more cooperative behaviour than participants who had watched a more ordinary, and less "elevating", view from a car window. Thus, being physically elevated induced people to act on "higher" moral values.

The growing recognition that embodied metaphors provide one common language of the mind has led to fundamentally different ways of studying how people think. For example, under the assumption that the mind functions like a computer, psychologists hoped to figure out how people think by observing how they play chess or memorize lists of random words. From an embodied perspective it is evident that such scientific attempts were doomed to fail. Instead, it is increasingly clear that the cognitive operations of any creature, including humans, have to solve certain adaptive challenges of the physical environment. In the process, embodied metaphors are the building blocks of perception, cognition, and action. It doesn't get much simpler or more elegant than that.

The core elegant explanation at the heart of Computer Science is known as the Church-Turing thesis: that all models of computing are equivalent to each other, in that they are able to run all the same programs. The lemmas are that a fixed piece of hardware can change its personality with different code, and that there really is no fundamental difference between a Mac and a PC, despite the actors who portray them on TV.

One thing we didn't warn you about was that, in the limit, the general purpose computer is God, whose destructive wrath, following Sodom, Gomorrah, and Katrina, is finally dawning upon us.

The computer is the destroyer of occupations. Every job has become equivalent: sitting in front of a monitor and keyboard, entering or editing information. Doctor, Dentist, CEO, Banker, Broker, Teller, Lawyer, Engineer: Today, they are all sitting in front of their console, with ever more decisions becoming automated.

The computer has destroyed the rewards of art, digitizing all copyrighted work into underground networks of anonymous file-sharing.

The computer has destroyed all gadgets. I remember owning and loving my calculators, radios, walkie-talkies, voice recorders, phones, text pagers, calendars, cameras, walkmen, gameboys, remote controls, and GPS systems. I bought the first pocket computer in Japan in 1984 with a 12 character screen and 4k of memory. Today's pocket computers, misnamed as smartphones, are so powerful that they have destroyed the consumer electronics business by absorbing every unique product as just another app.

The computer is on a path to destroy money. What started as gold and coin has evolved beyond paper and plastic into pure bits. The only essential difference in wealth between the 1% and the 99% is what is recorded in institutional databases, which unfortunately can be easily manipulated by insiders.

Finally—and those in AI have held this thought for a long time—that which makes us uniquely human is also just computable. While the advent of true AI is still far off, as general purpose computers continue to shrink, and the biological interface is perfected, we will become one with our new god. Choose your new religion wisely—Transhumanist, Singularitarian, or the new file-sharing cult recognized in Sweden, Kopimism.

In 1980 David Collingridge, an obscure academic at the University of Aston in the UK, published an important book called The Social Control of Technology, which set the tone of many subsequent debates about technology assessment.

In it, he articulated what has become known as "The Collingridge dilemma"—the idea that there is always a trade-off between knowing the impact of a given technology and the ease of influencing its social, political, and innovation trajectories.

Collingridge's basic insight was that we can successfully regulate a given technology when it's still young and unpopular and thus probably still hiding its unanticipated and undesirable consequences—or we can wait and see what those consequences are but then risk losing control over its regulation.

Or as Collingridge himself so eloquently put it: "When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming."

It's called a "dilemma" for good reasons—Collingridge didn't like this state of affairs and wanted to solve it (mostly by urging us to exert more control over the social life of a given technology).

Still, even unsolved, the Collingridge dilemma is one of the most elegant ways to explain many of the complex ethical and technological quandaries—think drones or automated facial recognition—that plague our globalized world today.

What explains the extraordinary complexity of the observed universe, on all scales from quarks to the accelerating universe? My favorite explanation (which I certainly did not invent) is that the fundamental laws of physics produce natural instability, energy flows, and chaos. Some call the result the Life Force, some note that the Earth is a living system itself (Gaia, a "tough bitch" according to Margulis), and some conclude that the observed complexity requires a supernatural explanation (of which we have many). But my dad was a statistician (of dairy cows) and he told me about cells and genes and evolution and chance when I was very small. So a scientist must look for the explanation of how nature's laws and statistics brought us into conscious existence. And how is it that seemingly improbable events are actually happening all the time?

Well, the physicists have countless examples of natural instability, in which energy is released to power change from simplicity to complexity. One of the most common to see is that cooling water vapor below the freezing point produces snowflakes, no two alike, and all complex and beautiful. We see it often so we are not amazed. But physicists have observed so many kinds of these changes from one structure to another (we call them phase transitions) that the Nobel Prize in 1992 could be awarded for understanding the mathematics of their common features.

Now for a few examples of how the laws of nature produce the instabilities that lead to our own existence. First, the Big Bang (what an insufficient name!) apparently came from an instability, in which the "false vacuum" eventually decayed into the ordinary vacuum we have today, plus the most fundamental particles we know, the quarks and leptons. So the universe as a whole started with an instability. Then, a great expansion and cooling happened, and the loose quarks, finding themselves unstable too, bound themselves together into today's less elementary particles like protons and neutrons, liberating a little energy and creating complexity. Then, the expanding universe cooled some more, and neutrons and protons, no longer kept apart by immense temperatures, found themselves unstable and formed helium nuclei. Then, a little more cooling, and atomic nuclei and electrons were no longer kept apart, and the universe became transparent. Then a little more cooling, and the next instability began: gravitation pulled matter together across cosmic distances to form stars and galaxies. This instability is described as a "negative heat capacity" in which extracting energy from a gravitating system makes it hotter—clearly the 2nd law of thermodynamics does not apply here! (This is the physicist's part of the answer to e e cummings' question: what is the wonder that's keeping the stars apart?) Then, the next instability is that hydrogen and helium nuclei can fuse together to release energy and make stars burn for billions of years. And then at the end of the fuel source, stars become unstable and explode and liberate the chemical elements back into space. And because of that, on planets like Earth, sustained energy flows support the development of additional instabilities and all kinds of complex patterns. 
Gravitational instability pulls the densest materials into the core of the Earth, leaving a thin skin of water and air, and makes the interior churn incessantly as heat flows outwards. And the heat from the Sun, received mostly near the equator and flowing towards the poles, supports the complex atmospheric and oceanic circulations.

And because of that, the physical Earth is full of natural chemical laboratories, concentrating elements here, mixing them there, raising and lowering temperatures, ceaselessly experimenting with uncountable events where new instabilities can arise. At least one of them was the new experiment called Life. Now that we know that there are at least as many planets as there are stars, it is hard to imagine that nature's ceaseless experimentation would not be able to produce Life elsewhere—but we don't know for sure.

And Life went on to cause new instabilities, constantly evolving, with living things in an extraordinary range of environments, changing the global environment, with boom-and-bust cycles, with predators for every kind of prey, with criminals for every possible crime, with governments to prevent them, and instabilities of the governments themselves.

One of the instabilities is that humans demand new weapons and new products of all sorts, leading to serious investments in science and technology. So the natural/human world of competition and combat is structured to lead to advanced weaponry and cell phones. So here we are in 2012, with people writing essays and wondering whether their descendants will be artificial life forms travelling back into space. And, pondering what are the origins of those forces of nature that give rise to everything. Verlinde has argued that gravitation, the one force that has so far resisted our efforts at a quantum description, is not even a fundamental force, but is itself a statistical force, like osmosis.

What an amazing turn of events! But after all I've just said, I should not be surprised a bit.

Plate Tectonics is a breathtakingly elegant explanation of a beautiful theory, continental drift. Both puzzle and answer were hiding in plain sight right under our feet. Generations of globe-twirling school children have noticed that South America's bulge seems to fit in the gulf of Africa, and that Baja California looks like it was cut out of the Mexican mainland. These and other more subtle clues led Alfred Wegener to propose to the German Geological Society in 1912 that the continents had once formed a single landmass. His beautiful theory was greeted with catcalls and scientific brickbats.

The problem was that Wegener's beautiful theory lacked a mechanism. Critics sneeringly pronounced that the lightweight continents could not possibly plow through a dense and unyielding oceanic crust. No one, including Wegener, could imagine a force that could cause the continents to move. It didn't help that Wegener was an astronomer poaching in geophysical territory. He would die on an Arctic expedition in 1931, his theory out of favor and all but forgotten.

Meanwhile, hints of a mechanism were everywhere, but at once too small and too vast to see with biased eyes. Like ants crawling on a globe, puny humans missed the obvious. It would take the slow arrival of powerful new scientific tools to reveal the hidden forensics of continental drift. Sonar traced mysterious linear ridges running zipper-like along ocean floors. Magnetometers towed over the seabed painted symmetrical zebra-striped patterns of magnetic reversals. Earthquakes betrayed plate boundaries to listening seismographs. And radiometric dating laid out a scale reaching into deep time.

Three decades after Wegener's death, the mechanism of plate tectonics emerged with breathtaking clarity. The continents weren't plowing through anything—they were rafting atop the moving plates like marshmallows stuck in a sheet of cooling chocolate. And the oceanic crust was moving like a conveyor, with new crust created in mid-ocean spreading centers and old crust subducted, destroyed, or crumpled upwards into vast mountain ranges at the boundaries where plates met.

Elegant explanations are the Kuhnian solvent that leaches the glue from old paradigms, making space for new theories to take hold. Plate tectonics became established beyond a doubt in the mid-1960s. Contradictions suddenly made sense, and ends so loose no one thought they were remotely connected came together. Continents were seen for the wanderers they were, the Himalaya were recognized as the result of a pushy Indian plate smashing into its Eurasian neighbor, and it became obvious that an ocean was being born in Africa's Great Rift Valley. Mysteries fell like dominoes before the predictive power of a beautiful theory and its elegant explanation. The skeptics were silenced and Wegener was posthumously vindicated.

The obvious answer should be the double helix. With the incomparably laconic "It has not escaped our notice….," it explained the very mechanism of inheritance. But the double helix doesn't do it for me. By the time I got around to high school biology, the double helix was ancient history, like pepper moths evolving or mitochondria as the power houses of the cell. Watson and Crick—as comforting but as taken for granted as Baskin and Robbins.

Then there's the work of Hubel and Wiesel, which showed that the cortex processes sensations with a hierarchy of feature extraction. In the visual cortex, for example, neurons in the initial layer each receive inputs from a single photoreceptor in the retina. Thus, when one photoreceptor is stimulated, so is "its" neuron in the primary visual cortex. Stimulate the adjacent photoreceptor, and the adjacent neuron activates. Basically, each of these neurons "knows" one thing, namely how to recognize a particular dot of light. Groups of I-know-a-dot neurons then project onto single neurons in the second cortical layer. Stimulate a particular array of adjacent neurons in that first cortical layer and a single second-layer neuron activates. Thus, a second-layer neuron knows one thing, which is how to recognize, say, a line of light oriented at a 45-degree angle. Then groups of I-know-a-line neurons send projections on to the next layer.

Beautiful, explains everything—just keep going, cortical layer upon layer of feature extraction, dot to line to curve to collection of curves to ... until there'd be the top layer, where a neuron would know one complex, specialized thing only, like how to recognize your grandmother. And it would be the same in the auditory cortex—first-layer neurons knowing particular single notes, second layer knowing pairs of notes ... some neuron at the top that would recognize the sound of your grandmother singing along with Lawrence Welk.

It turned out, though, that things didn't quite work this way. There are few "grandmother neurons" in the cortex (although a 2005 Nature paper reported someone with a Jennifer Aniston neuron). The cortex can't rely too much on grandmother neurons, because that would require a gazillion more neurons to accommodate such inefficiency and overspecialization. Moreover, a world of nothing but grandmother neurons on top precludes making multi-modal associations (e.g., where seeing a particular Monet reminds you of croissants and Debussy's music and the disastrous date you had at an Impressionism show at the Met). Instead, we've entered the world of neural networks.

Switching to a more mundane level of beauty, consider the gastrointestinal tract. In addition to teaching neuroscience, I've been asked to be a good departmental citizen and fill in some teaching holes in our core survey course. Choices: photosynthesis, renal filtration or the gastrointestinal tract. I picked the GI tract, despite knowing nothing about it, since the first two subjects terrified me. Gut physiology turns out to be beautiful and elegant amid a huge number of multi-syllabic hormones and enzymes. As the Gentle Reader knows, the GI tract is essentially a tube starting at the mouth and ending at the anus. When a glop of food distends the tube, the distended area secretes some chemical messenger that causes that part of the tube to start doing something (e.g., contracting rhythmically to pulverize food). But the messenger also causes the part of the tract just behind to stop doing its now-completed task, and causes the area just ahead to prepare for its job. Like shuttling ships through the Panama Canal's locks, all the way to the bathroom.

Beautiful. But even bowelophiles wouldn't argue that this is deep on a fundamental, universe-explaining level. Which brings me to my selection, which is emergence and complexity, as represented by "swarm intelligence."

Observe a single ant, and it doesn't make much sense, walking in one direction, suddenly careening in another for no obvious reason, doubling back on itself. Thoroughly unpredictable.

The same happens with two ants, a handful of ants. But a colony of ants makes fantastic sense. Specialized jobs, efficient means of exploiting new food sources, complex underground nests with temperature regulated within a few degrees. And critically, there's no blueprint or central source of command—each individual ant has algorithms for its behavior. But this is not the wisdom of the crowd, where a bunch of reasonably informed individuals outperform a single expert. The ants aren't reasonably informed about the big picture. Instead, the behavior algorithms of each ant consist of a few simple rules for interacting with the local environment and local ants. And out of this emerges a highly efficient colony.

Ant colonies excel at generating trails that connect locations in the shortest possible way, accomplished with simple rules about when to lay down a pheromone trail and what to do when encountering someone else's trail—approximations of optimal solutions to the Traveling Salesman problem. This has useful applications. In "ant-based routing," simulations using virtual ants with similar rules can generate optimal ways of connecting the nodes in a network, something of great interest to telecommunications companies. It applies to the developing brain, which must wire up vast numbers of neurons with vaster numbers of connections without constructing millions of miles of connecting axons. And migrating fetal neurons generate an efficient solution with a different version of ant-based routing.
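The trail-laying logic described above can be sketched in a toy simulation. Everything here—the little four-node graph, the parameter values, the function names—is invented for illustration; real ant-colony-optimization algorithms are considerably more elaborate, but the two rules are the same: virtual ants choose edges with probability weighted by pheromone, and shorter completed paths receive stronger pheromone deposits while all trails slowly evaporate.

```python
import random

# Toy graph: two routes from A to D (no dead ends, so every ant arrives).
GRAPH = {  # undirected edge lengths (all values invented)
    ("A", "B"): 1.0, ("B", "D"): 1.0,   # short route A-B-D, length 2
    ("A", "C"): 2.0, ("C", "D"): 4.0,   # long route  A-C-D, length 6
}

def edge(a, b):
    """Return the edge key regardless of direction."""
    return (a, b) if (a, b) in GRAPH else (b, a)

def neighbors(node):
    return ([y for x, y in GRAPH if x == node] +
            [x for x, y in GRAPH if y == node])

def ant_routing(start="A", goal="D", n_ants=20, n_iters=30, rho=0.5, q=1.0):
    pheromone = {e: 1.0 for e in GRAPH}     # uniform trail to start
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        walks = []
        for _ in range(n_ants):
            path, node = [start], start
            while node != goal:
                options = [n for n in neighbors(node) if n not in path]
                # local rule: prefer strong pheromone and short edges
                weights = [pheromone[edge(node, n)] / GRAPH[edge(node, n)]
                           for n in options]
                node = random.choices(options, weights)[0]
                path.append(node)
            length = sum(GRAPH[edge(a, b)] for a, b in zip(path, path[1:]))
            walks.append((path, length))
            if length < best_len:
                best_path, best_len = path, length
        # evaporate everywhere, then deposit inversely to path length
        for e in pheromone:
            pheromone[e] *= (1.0 - rho)
        for path, length in walks:
            for a, b in zip(path, path[1:]):
                pheromone[edge(a, b)] += q / length
    return best_path, best_len

random.seed(0)
path, length = ant_routing()
print(path, length)
```

No ant knows the whole graph; each applies only the local weighted-choice rule, yet the reinforcement loop concentrates pheromone on the short route, so the colony's best discovered path converges on A-B-D.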

A wonderful example is how local rules about attraction and repulsion (i.e., positive and negative charges) allow simple molecules in an organic soup to occasionally form more complex ones. Life may have originated this way, without requiring bolts of lightning to catalyze the formation of complex molecules.

And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don't require a blueprint, they don't require a blueprint maker. If they don't require lightning bolts, they don't require Someone hurling lightning bolts.

I find most beautiful not a particular equation or explanation, but the astounding fact that we have beauty and precision in science at all. That exactness comes from using mathematics to measure, check and even predict events. The deepest question is, why does this splendor work?

Beauty is everywhere in science. Physics abounds in symmetries and lovely curves, like the parabola we see in the path of a thrown ball. Equations like Euler's identity, e^(iπ) + 1 = 0, show that there is exquisite order in mathematics, too.
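Euler's identity can even be checked numerically. A two-line verification using only Python's standard library (my own aside, added for illustration):

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 = 0, checked to floating-point precision.
residual = cmath.exp(1j * math.pi) + 1
print(abs(residual))  # tiny, on the order of 1e-16
```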

Why does such beauty exist? That, too, has a beautiful explanation. This may be the most beautiful fact in science.

In 1960, Eugene Wigner published a classic article on the philosophy of physics and mathematics, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." Wigner asked, why does mathematics work so well in describing our world? He was unsure.

We use Hilbert spaces in quantum mechanics, differential geometry in general relativity, and in biology difference equations and complex statistics. The role mathematics plays in these theories is also varied. Math both helps with empirical predictions and gives us elegant, economical statements of theories. I can't imagine how we could ever invent quantum mechanics or general relativity without it.

But why is this true? For beautiful reasons? I think so.

Darwin stated his theory of natural selection without mathematics at all, but it can explain why math works for us. It has always seemed to me that evolutionary mechanisms should select for living forms that respond to nature's underlying simplicities. Of course, it is difficult to know in general just what simple patterns the universe has. In a sense they may be like Plato's perfect forms, the geometric constructions such as the circle and polygons. Supposedly we see their abstract perfection with our mind's eye, but the actual world only approximately realizes them. Thinking further in like fashion, we can sense simple, elegant ways of viewing dynamical systems. Here's why that matters.

Imagine a primate ancestor who saw the flight of a stone, thrown after fleeing prey, as a complicated matter, hard to predict. It could try a hunting strategy using stones or even spears, but with limited success, because complicated curves are hard to understand. A cousin who saw in the stone's flight a simple and graceful parabola would have a better chance of predicting where it would fall. The cousin would eat more often and presumably reproduce more as well. Neural wiring could reinforce this behavior by instilling a sense of genuine pleasure at the sight of an artful parabola.
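The cousin's predictive advantage is concrete: for an ideal parabola, the landing point follows from launch speed and angle alone. A small sketch (the numbers are invented for illustration):

```python
import math

# Where will the stone land? For an ideal, drag-free parabola the range
# is R = v^2 * sin(2*theta) / g. Launch speed and angle are made-up values.
g = 9.81                  # gravitational acceleration, m/s^2
v = 20.0                  # launch speed, m/s
theta = math.radians(40)  # launch angle

flight_time = 2 * v * math.sin(theta) / g
landing_x = v * math.cos(theta) * flight_time  # equals v**2 * sin(2*theta) / g
print(f"lands {landing_x:.1f} m away after {flight_time:.1f} s")
```

Seeing the flight as "a parabola" reduces the whole trajectory to two numbers, which is exactly the kind of compression that makes prediction cheap.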

There's a further selection at work, too. To hit running prey, it's no good to ponder the problem for long. Speed drove selection: that primate had to see the beauty fast. This drove cognitive capacities all the harder. Plus, the pleasure of a full belly.

We descend from that appreciative cousin. Baseball outfielders learn to sense a ball's deviations from its parabolic descent, due to air friction and wind, because they are building on mental processing machinery finely tuned to the parabola problem. Other appreciations of natural geometric ordering could emerge from hunting maneuvers on flat plains, from the clever design of simple tools, and the like. We all share an appreciation for the beauty of simplicity, a sense emerging from our origins. Simplicity is evolution's way of saying, this works.

Evolution has primed humans to think mathematically because they struggled to make sense of their world for selective advantage. Those who didn't aren't in our genome.

Many things in nature, inanimate and living, show bilateral, radial, concentric and other mathematically based symmetries. Our rectangular houses, football fields and books spring from engineering constraints, their beauty arising from necessity. We appreciate the curve of a suspension bridge, intuitively sensing the urgencies of gravity and tension.

Our cultures show this. Radial symmetry appears in the mandala patterns of almost every society, from Gothic stoneworks to Chinese rugs. Maybe they echo the sun's glare flattened into two dimensions. In all cultures, small flaws in strict symmetries express artful creativity. So do symmetry breaking particle theories.

Philosophers have three views of the issue: mathematics is objective and real; it arises from our preconceptions; or it is social.

Physicist Max Tegmark argues the first view, that math so well describes the physical world because reality really is completely mathematical. This radical Platonism says that reality is isomorphic to a mathematical structure. We're just uncovering this bit by bit. I hold the second view: we evolved mathematics because it describes the world and promotes survival. I differ from Tegmark because I don't think mathematics somehow generated reality; as Hawking asked, what breathes fire into the equations and makes them construct reality?

Social determinists, the third view, think math emerges by consensus. This is true in that we're social animals, but this view also seems to ignore biology, which brought about humans themselves through evolution. Biology generates society, after all.

But how general were our adaptations to our world?

R. Lemarchand and Jon Lomberg have argued in detail that symmetries and other aesthetic principles should be truly universal, because they arise from fundamental physical properties. Aliens orbiting distant stars will still spring from evolutionary forces that reward a deep, automatic understanding of the laws of mechanics. The universe itself began with a Big Bang that can be envisioned as a four-dimensional symmetric expansion; yet without some flaws, so-called anisotropies, in the symmetry of the Big Bang, galaxies and stars would never happen.

Strategies for the Search for Extra-Terrestrial Intelligence, SETI, have assumed this since their beginnings in the early 1960s. Many supposed that mathematically interesting properties such as the prime numbers, which do not appear in nature, would figure in schemes to send messages by radio. Primes come from thinking about our mathematical constructions of the world, not directly from that world. So they're evidence for a high culture based on studying mathematics.

A case for the universality of mathematics is in turn an argument for the universality of aesthetic principles: evolution should shape all of us to the general contours of physical reality. The specifics will differ enormously, of course, as a glance at the odd creatures in our fossil record shows.

Einstein once remarked, "How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" But it isn't independent—and that's beautiful.

The following deep, elegant, and beautiful explanation of the true rotational symmetry of space comes from the late Sidney Coleman, as presented to his graduate physics class at Harvard. This explanation takes the form of a physical act that you will perform yourself. Although elegant, this explanation is verbally awkward to explain, and physically awkward to perform. It may need to be practised a few times. So limber up and get ready: you are about to experience in a deep and personal way the true rotational symmetry of space!

At bottom, the laws of physics are based on symmetries, and the rotational symmetry of space is one of the most profound of these symmetries. The most rotationally symmetric object is a sphere. So take a sphere, such as a soccer ball or basketball, that has a mark, logo, or unique lettering at some spot. Rotate the sphere about any axis: the rotational symmetry of space implies that the shape of the sphere is invariant under rotation. In addition, if there is a mark on the sphere, then when you rotate the sphere by three hundred and sixty degrees, the mark returns to its initial position. Go ahead. Try it. Hold the ball in both hands and rotate it by three hundred and sixty degrees until the mark returns.

That's not so awkward, you may say. But that's because you have not yet demonstrated the true rotational symmetry of space. To demonstrate this symmetry requires fancier moves. Now hold the ball cupped in one hand, palm facing up. Your goal is to rotate the sphere while always keeping your palm facing up. This is trickier, but if Michael Jordan can do it, so can you.

Keep on rotating in the same direction, palm facing up. At one hundred and eighty degrees—half a rotation—your arm sticks out in back of your body to keep the ball cupped in your palm.

As you keep rotating to two hundred and seventy degrees—three quarters of a rotation—in order to maintain your palm facing up, your arm sticks awkwardly out to the side, ball precariously perched on top.

At this point, you may feel that it is impossible to rotate the last ninety degrees to complete one full rotation. If you try, however, you will find that you can continue rotating the ball keeping your palm up by raising your upper arm and bending your elbow so that your forearm sticks straight forward. The ball has now rotated by three hundred and sixty degrees—one full rotation. If you've done everything right, however, your arm should be crooked in a maximally painful and awkward position.

To relieve the pain, continue rotating by an additional ninety degrees to one and a quarter turns, palm up all the time. The ball should now be hovering over your head, and the painful tension in your shoulder should be somewhat lessened.

Finally, like a waiter presenting a tray containing the pièce de résistance, continue the motion for the final three quarters of a turn, ending with the ball and your arm—what a relief—back in their original position.

If you have managed to perform these steps correctly and without personal damage, you will find that the trajectory of the ball has traced out a kind of twisty figure eight or infinity sign in space, and has rotated around not once but twice. The true symmetry of space is not rotation by three hundred and sixty degrees, but by seven hundred and twenty degrees!

Although this exercise might seem no more than some fancy and painful basketball move, the fact that the true symmetry of space is rotation not once but twice has profound consequences for the nature of the physical world at its most microscopic level. It implies that 'balls' such as electrons, attached to a distant point by a flexible and deformable 'string,' such as magnetic field lines, must be rotated around twice to return to their original configuration. Digging deeper, the two-fold rotational nature of spherical symmetry implies that two electrons, both spinning in the same direction, cannot be placed in the same place at the same time. This exclusion principle in turn underlies the stability of matter. If the true symmetry of space were rotating around only once, then all the atoms of your body would collapse into nothingness in a tiny fraction of a second. Fortunately, however, the true symmetry of space consists of rotating around twice, and your atoms are stable, a fact that should console you as you ice your shoulder.
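The 720-degree symmetry has a compact mathematical form, which the essay doesn't spell out: spin-1/2 objects transform under the group SU(2), where a rotation by angle θ about the z-axis acts as a diagonal matrix with entries e^(∓iθ/2). A quick numerical check in standard-library Python:

```python
import cmath
import math

# In SU(2), a rotation by theta about the z-axis acts on a spin-1/2 state
# through the diagonal matrix diag(e^{-i*theta/2}, e^{+i*theta/2}).
def rotation_z(theta):
    return (cmath.exp(-1j * theta / 2), cmath.exp(1j * theta / 2))

one_turn = rotation_z(2 * math.pi)   # 360 degrees: both entries are -1;
                                     # the state picks up an overall minus sign
two_turns = rotation_z(4 * math.pi)  # 720 degrees: both entries are +1;
                                     # only now is the state truly restored
print(one_turn)
print(two_turns)
```

The minus sign after one full turn is the algebraic shadow of the twisted arm: the ball is back, but the "string" is not.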

The beauty of science—in the long run—is its lack of subjectivity. So answering the question "what is your favorite deep, beautiful, or elegant explanation" can be a little disturbing to a scientist, since the only objective words in the question are "what", "is", "or", and (in an ideal scientific world) "explanation." Beauty and elegance do play a part in science but are not the arbiters of truth. But I will admit that simplicity, which is often confused with elegance, can be a useful guide to maximizing explanatory power.

As for the question, I'll stick to an explanation that I think is extremely nice, relatively simple (though subtle), and which might even be verified within the year. That is the Higgs mechanism, named after the physicist Peter Higgs, who developed it. The Higgs mechanism is probably responsible for the masses of elementary particles like the electron. If the electron had zero mass (like the photon), it wouldn't be bound into atoms. If that were the case, none of the structure of our Universe (or of life) would be present. That is clearly not a description of our world.

In any case, experiments have measured the masses of elementary particles and they don't vanish. We know they exist. The problem is that these masses violate the underlying symmetry structure we know to be present in the physical description of these particles. More concretely, if elementary particles had mass from the get-go, the theory would make ridiculous predictions about very energetic particles. For example, it would predict interaction probabilities greater than one.

So there is a significant puzzle. How can particles have masses that have physical consequences and can be measured at low energies but act as if they don't have masses at high energies, when predictions would become nonsensical? That is what the Higgs mechanism tells us. We don't yet know for certain that it is indeed responsible for the origin of elementary particle masses but no one has found an alternative satisfactory explanation.

One way to understand the Higgs mechanism is in terms of what is known as "spontaneous symmetry breaking," which I'd say is itself a beautiful idea. A spontaneously broken symmetry is broken by the actual state of nature but not by the physical laws. For example, if you sit at a dinner table and use your glass on the right, so will everyone else. The dinner table is symmetrical—you have a glass on your right and also to your left. Yet everyone chooses the glass on the right and thereby spontaneously breaks the left-right symmetry that would otherwise be present.

Nature does something similar. The physical laws describing an object called a Higgs field respect the symmetry of nature. Yet the actual state of the Higgs field breaks the symmetry. At low energy, it takes a particular value. This nonvanishing Higgs field is somewhat akin to a charge spread throughout the vacuum (the state of the universe with no actual particles). Particles acquire their masses by interacting with these "charges." Because this value appears only at low energies, particles effectively have masses only at these energies, and the apparent obstacle to elementary particle masses is resolved.
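The standard textbook picture, not spelled out in this essay, is the "Mexican hat" potential: the laws are symmetric, but the lowest-energy state is not. With both parameters μ² and λ positive,

```latex
V(\phi) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2
```

is minimized not at φ = 0 but at φ†φ = μ²/2λ, so the field settles into a nonvanishing vacuum value v = μ/√λ; a particle that couples to the field with strength y then acquires a mass proportional to yv. The symmetric point φ = 0 sits at the top of the hat, and nature "chooses a glass," falling into one of the symmetry-breaking minima.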

Keep in mind that the Standard Model has worked extremely well, even without yet knowing for sure if the Higgs mechanism is correct. We don't need to know about the Higgs mechanism to know particles have masses and to make many successful predictions with the so-called Standard Model of particle physics. But the Higgs mechanism is essential to explaining how those masses can arise in a sensible theory. So it is rather significant.

The Standard Model's success nonetheless illustrates another beautiful idea essential to all of physics, which is the concept of an "effective theory." The idea is simply that you can focus on measurable quantities when making predictions and leave understanding the source of those quantities to later research when you have better precision.

Fortunately that time has now come for the Higgs mechanism, or at least for its simplest implementation, which involves a particle called the Higgs boson. The Large Hadron Collider at CERN near Geneva should have a definitive result on whether this particle exists within the coming year. The Higgs boson is one possible (and many think the most likely) consequence of the Higgs mechanism. Evidence last December pointed to a possible discovery, though more data is needed to know for sure. If confirmed, it will demonstrate that the Higgs mechanism is correct and furthermore tell us the underlying structure responsible for spontaneous symmetry breaking and for spreading "charge" throughout the vacuum. The Higgs boson would furthermore be a new type of particle (a fundamental boson, for those versed in physics terminology) and would be in some sense a new type of force. Admittedly, this is all pretty subtle and esoteric. Yet I (and much of the theoretical physics community) find it beautiful, deep, and elegant.

Symmetry is great. But so is symmetry breaking. Over the years many aspects of particle physics were first considered ugly and then considered elegant. Subjectivity in science goes beyond communities to individual scientists. And even those scientists change their minds over time. That's why experiments are critical. As difficult as they are, results are much easier to pin down than the nature of beauty. A discovery of the Higgs boson will tell us how particles acquire their masses.

The most revolutionary, beautiful, elegant, and important idea to be advanced in the past two centuries is the idea that reality is made up of more than one universe. By an infinity of parallel universes, in fact. By "parallel universe" I mean universes exactly like ours, containing individuals exactly like each and every one of us. There are an infinity of Frank Tiplers, individuals exactly like me, each of whom has written an essay entitled "Parallel Universes" each of which is word-for-word identical to the essay you are now reading, and each of these essays is now being read by individuals who are exactly identical to you, the reader. And more: there are other universes which are almost identical to ours, but differ in minor ways: for example, universes in which you the reader (and I the writer!) really did marry that high school sweetheart—and universes in which you didn't if you did in this universe.

A truly mind-boggling idea, because were it to be true, it would infinitely expand reality. It would expand reality infinitely more than the Copernican Revolution ever did, because at most, all that Copernicus did was increase the size of this single universe to infinity. The parallel universes concept proposes to multiply that single infinite Copernican universe an infinite number of times. Actually, an uncountable infinity of times.

Several physicists in the early to mid twentieth century independently came up with the parallel universes idea—for instance, the Nobel-Prize-winning physicists Erwin Schrödinger and Murray Gell-Mann—but only a Princeton graduate student named Hugh Everett had the guts to publish, in 1957, the mathematical fact that parallel universes were an automatic consequence of quantum mechanics. That is, if you accept quantum mechanics—and more than a century of experimental evidence says you have to—then you have to accept the existence of the parallel universes.

Like the Copernican Revolution, the Everettian Revolution will take decades before it is accepted by all educated people, and it will take even longer for the full implications of the existence of an infinite number of parallel universes to be worked out. The quantum computer, invented by the Everettian physicist David Deutsch, is one of the first results of parallel universe thinking. The idea of the quantum computer is simple: since the analogues of ourselves in the parallel universes are interested in computing the same thing at the same time, why not share the computation between the universes? Let one of us do part of the calculation, another do another part, and so on with the final result being shared between us all.

Quantum mechanics is only mysterious if one ignores the other universes. For example, the Heisenberg Uncertainty Relations, which in the old days were claimed to be an expression of a breakdown in determinism, are nothing of the kind. The inability to predict the future state of our particular universe is not due to a lack of determinism in Nature, but rather due to the interaction of the other parallel universes with our own universe. The mathematics of Everett shows that if one attempts to measure a particle's position, the interaction of the particle with its analogues in the other universes will make its momentum vary enormously. (This shows, by the way, that the parallel universes are real and detectable: they interact with our own universe.) If one leaves out most of reality when trying to predict the future, then of course one's predictions are going to be incorrect.

In fact, quantum mechanics is actually more deterministic than classical mechanics! It is possible to derive quantum mechanics mathematically from classical mechanics by requiring that classical mechanics be always deterministic—and also be composed of parallel universes. So adding the parallel universes ensures the validity of Albert Einstein's dictum: "God does not play dice with the universe."

Remarkably, the other great scientist of the past two hundred years, Charles Darwin, took the opposite point of view. God, Darwin insisted, does play dice with the universe. In the last chapter of his Variation of Animals and Plants Under Domestication, Darwin correctly pointed out that anyone who truly believes in determinism will not accept his theory of evolution by natural selection acting on "random" mutations. Obviously, because there are no "random" events of any sort. All events—and mutations—are determined. In particular, if it was determined in the beginning of time that I would be here 15 billion years later writing these words, then all previous evolutionary events leading to me, like the evolution of Homo sapiens, necessarily had to occur when they did occur.

So the Everettian Revolution means that we will have to choose between Einstein and Darwin.

Many leading evolutionary biologists have recognized that there is a problem with standard Darwinian theory. For example, Lynn Margulis and Dorion Sagan, in their book Acquiring Genomes, discuss these difficulties, but their own proposed replacement does not quite eliminate the difficulties (as the great evolutionist Ernst Mayr points out in the book's Foreword), because they still accept the idea that there is randomness at the microlevel. If one gives up randomness and accepts determinism, there is no reason why speciation must be gradual. It could occur in a single generation. A Homo erectus mother could give birth to a Homo sapiens male-female pair of twins. In his early work on punctuated equilibrium, the famous Harvard evolutionist Stephen Jay Gould was attracted to the idea of speciation in a single generation, but he could not imagine a mechanism that would make it work. The determinism that is an implication of the Everettian Revolution provides such a mechanism, and, more, shows that this mechanism necessarily is in operation.

The existence of the parallel universes means that we shall have to rethink everything. Which is why I have called this idea the Everettian Revolution.

My example of a deep, elegant, and beautiful explanation in science is John Maynard Smith's concept of an evolutionarily stable strategy (ESS). Not only does this wonderfully straightforward idea explain a whole host of biological phenomena, it also provides a very useful heuristic tool to test the plausibility of various types of claims in evolutionary biology, allowing us, for example, to quickly dismiss group-selectionist misconceptions such as the idea that altruistic acts by individuals can be explained by the benefits that accrue to the species as a whole from these acts. Indeed, the idea is so powerful that it explains things I didn't even realize needed explaining until I was given the explanation! I will now present one such explanation to illustrate the power of ESS. I should note that while Maynard Smith developed ESS using the mathematics of game theory (along with collaborators G. R. Price and G. A. Parker), I will attempt to explain the main idea using almost no math.

So, here is a question: think of common animal species like cats, or dogs, or humans, or golden eagles; why do all of them have (nearly) equal numbers of males and females? Why are there not sometimes 30% males in a species and 70% females? Or the other way? Or some other ratio altogether? Why are sex ratios almost exactly 50/50? I, at least, never even considered the question until I read the incredibly elegant explanation.

Let us consider walruses: they exist in the normal 50/50 sex ratio, yet most walrus males will die virgins, while almost all females will mate. Only a few dominant walrus males monopolize most of the females (in mating terms). So what's the point of having all those extra males around? They take up food and resources, but by the only measure that matters to evolution they are useless, because they do not reproduce. From a species point of view, it would be better and more efficient if only a small proportion of walruses were males and the rest were females; such a species of walrus would make much more efficient use of its resources and would, according to the logic of group-selectionists, soon wipe out the actual existing species of walrus with its inefficient 50/50 ratio of males to females. So why don't they?

Here's why: because a population of walruses (of course, you can substitute any of the other animals I have mentioned, including humans, for the walruses in this example) with, say, 10% males and 90% females (or any other non-50/50 ratio) would not be stable over a large number of generations. Why not? Remember that, given the 10% males and 90% females of this example, each male is producing about 9 times as many children as any female (by successfully mating with, on average, close to 9 females). Imagine such a population. If you were a male in this kind of population, it would be to your evolutionary advantage to produce more sons than daughters because each son could be expected to produce roughly 9 times as many offspring as any of your daughters. Let me run through some numbers to make it more clear: suppose that the average male walrus fathers 90 children (only 9 of which will be males and 81 females, on average), and the average female walrus mothers 10 baby walruses (only 1 of which will be a male and 9 will be females). Okay?

Here's the crux of the matter: suppose a mutation arose in one of the male walruses (as it well might over a large number of generations) that gave this particular male walrus more Y (male-producing) sperm than X (female-producing) sperm. In other words, suppose he produced sperm that would result in more male offspring than female ones. This gene would spread like wildfire through the described population. Within a few generations, more and more male walruses would have the gene that makes them have more male offspring than female ones, and soon you would get to the 50/50 ratio that we see in the real world.

The same argument applies for females: any mutation in a female of our example population that caused her to produce more male offspring than female ones (though sex is determined by the sperm, not the egg, there are other mechanisms a female might employ to affect the sex ratio) would spread quickly, changing the ratio from 10/90 closer to 50/50 with each subsequent generation until it actually reaches the 50/50 mark. In fact, any significant deviation from the 50/50 ratio will not be evolutionarily stable for this reason, and will, through mutation and selection, soon revert to the 50/50 sex ratio.

One can, of course, use a mirror image of this argument to show that a population of 90% males and 10% females would also soon revert to the 50/50 ratio. So having children with any sex ratio other than 50/50 is not an evolutionarily stable strategy for either males or females. And this is just one example of the explanatory power of ESS.
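The invasion argument can be watched in a toy simulation. This is entirely my own construction, not from the essay: the population size, brood size, and the mutant's bias are invented numbers. A rare heritable tendency to overproduce sons is introduced into a 10/90 population, and because the few males each sire many offspring, the son-producing trait spreads until the male fraction climbs toward one half:

```python
import random

# Each individual is (sex, p_son), where p_son is a heritable probability
# of producing a son. Every female mates (with a randomly drawn male, since
# a few males can each mate many times); each child inherits p_son from one
# parent at random, and each female has two offspring.
def next_generation(pop, cap=1000):
    males = [p for sex, p in pop if sex == "M"]
    mothers = [p for sex, p in pop if sex == "F"]
    offspring = []
    for mother_p in mothers:
        father_p = random.choice(males)
        child_p = random.choice([mother_p, father_p])
        for _ in range(2):
            sex = "M" if random.random() < child_p else "F"
            offspring.append((sex, child_p))
    # Resources cap the population at a fixed size.
    return random.sample(offspring, min(len(offspring), cap))

random.seed(1)
# Start at 10% males. Most individuals carry a daughter-biased trait
# (p_son = 0.1); fifty mutant females overproduce sons (p_son = 0.9).
pop = [("M", 0.1)] * 100 + [("F", 0.1)] * 850 + [("F", 0.9)] * 50
for generation in range(60):
    pop = next_generation(pop)

male_frac = sum(1 for sex, _ in pop if sex == "M") / len(pop)
print(f"male fraction after 60 generations: {male_frac:.2f}")
```

The biased starting ratio is not an ESS: the son-biased trait keeps spreading only while males are the rarer, and hence more rewarding, sex to produce, so the population settles near 50/50 rather than at either extreme.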

What makes an explanation beautiful? Many elegant explanations in science have been fully vetted, but there are just as many beautiful, wildly popular explanations whose beauty is only skin deep. I want to give two examples from the field of brain health.

When preliminary mouse studies showed that an ingredient in dietary curry spice may have anti-Alzheimer effects, I suspect every vindaloo lover thought that was a beautiful explanation for why India had a low rate of Alzheimer's. But does India really have a low Alzheimer's rate after adjusting for life span and genetic differences? No one really knows.

Likewise when an observational study in the 1990s reported wine drinkers in Bordeaux had lower rates of Alzheimer's, there was a collective "I knew it" from oenophiles.

The latest observational findings now link coffee drinking with lower risk for Alzheimer's, much to the delight of the millions of caffeine addicts.

In reality, neither coffee nor wine nor curry spice has been proven in controlled trials to have any benefits against Alzheimer's. Regardless, the cognitive resonance these "remedies" find with the reader far exceeds the available evidence. One can find similar examples in virtually every field of medicine and science.

I would like to suggest two conditions that might render an explanation unusually beautiful: 1) a ring of truth, and 2) confirmation bias. We all favor explanations, and test them, in a manner that confirms our own beliefs (confirmation bias). A small amount of factual data can be magnified into a beautiful, fully proven explanation in one's mind if the right circumstances exist—thus, beauty is in the eye of the beholder. This may occur less often in one's own specialized field, but we are all vulnerable in fields in which we are less expert.

Given how often leading scientific explanations are proven wrong in subsequent years, one would do well to bear in mind Santayana's quote that "almost every wise saying has an opposite one, no less wise, to balance it". As for me, I love my curry, coffee and wine but am not yet counting on them to stop Alzheimer's.

Visiting Professor, Center for Maritime Systems; Author, The Power of the Sea: Tsunamis, Storm Surges, and Our Quest to Predict Disasters

"It Just Is?"

The concept of an indivisible component of matter, something that cannot be divided further, has been around for at least two and a half millennia, first proposed by early Greek and Indian philosophers. Democritus called the smallest indivisible particle of matter "átomos," meaning "uncuttable." Atoms were also simple, eternal, and unalterable. But in Greek thinking (and generally for about 2,000 years after), atoms lost out to the four basic elements of Empedocles—fire, air, water, earth—which were also simple, eternal, and unalterable but not made up of little particles, Aristotle believing those four elements to be infinitely continuous.

Further progress in our understanding of the world based on the concept of atoms had to wait until the 18th century. By that time the four elements of Aristotle had been replaced by the 33 elements of Lavoisier, based on chemical analysis. Dalton then used the concept of atoms to explain why elements always react in ratios of whole numbers, proposing that each element is made up of atoms of a single type and that these atoms can combine to form chemical compounds. Of course, by the early 20th century (through the work of Thomson, Rutherford, Bohr, and many others) it was realized that atoms were not indivisible and thus not the basic units of matter. All atoms were made up of protons, neutrons, and electrons, which took over the title of being the indivisible components (basic building blocks) of matter.

Perhaps because the Rutherford-Bohr model of the atom is now considered transitional to more elaborate models based on quantum mechanics, or perhaps because it evolved over time from the work of many people (and wasn't a single beautiful proposed law), we have forgotten how much about the world can be explained by the concept of protons, neutrons, and electrons—probably more than any other theory ever proposed. With only three basic particles one could explain the properties of 118 atoms/elements and the properties of thousands upon thousands of compounds chemically combined from those elements. A rather amazing feat, and certainly making the Rutherford-Bohr model worthy of being considered a favorite deep, elegant, and beautiful explanation.

Since that great simplification, further developments in our understanding of the physical universe have gotten more complicated, not less. To explain the properties of our three basic particles of matter, we went looking for even-more-basic particles. We ended up needing 12 fermions (6 quarks, 6 leptons) to "explain" the properties of the 3 previously thought-to-be basic particles (as well as the properties of some other particles that were not known to us until we built high-energy colliders). And we added 4 other particles, force-carrier particles, to "explain" the 4 basic force fields (electromagnetism, gravitation, strong interaction, and weak interaction) that affect those 3 previously thought-to-be basic particles. Of these 16 now thought-to-be basic particles, most are not independently observable (at least at low energies).

Even if the present Standard Model of particle physics turns out to be true, the question can be asked: "What next?" Every particle (whatever its level in the hierarchy of particles) will have certain properties or characteristics. When asked "why" quarks have a particular electric charge, color charge, spin, or mass, do we simply say "they just do"? Or do we try to find even-more-basic particles which seem to explain the properties of quarks, and of leptons and bosons? And if so, does this continue on to still-even-more-basic particles? Could that go on forever? Or at some point, when asked the question "why does this particle have these properties", would we simply say "it just does"? At some point would we have to say that there is no "why" to the universe? "It just is."

At what level of our hierarchy of understanding would we resort to saying, "it just is"? The highest (and least explanatory) level is religious—the gods of Mount Olympus each responsible for some worldly phenomenon, or the all-knowing monotheistic god creating the world and making everything work by means truly unknowable to humans. In their theories about how the world worked, Aristotle and other Greek philosophers incorporated the gods of Mount Olympus (earth, water, fire, and air were all assigned to particular gods), but Democritus and other philosophers were deterministic and materialistic, and they looked for predictable patterns and simple building blocks that might create the complex world they saw around them. Throughout the growth and evolution of scientific thinking there have been various "it just is" moments, where an explanation/theory seems to hit a wall and one might say "it just is", only to have someone else come along, say "maybe not", and go on to further advance our understanding. But as we get to the most basic questions about our universe (and our existence), the "it just is" answer becomes more likely. One very basic scientific question is whether there will ever be found truly indivisible particles of nature. The accompanying philosophical question is whether there can be truly indivisible particles of nature.

At some level the next group of mathematically derived "particles" may so obviously appear not to be observable/"real", that we will describe them instead as simply entities in a mathematical model that appears to accurately describe the properties of the observable particles in the level above. At which point the answer to the question of why these particles act as described by this mathematical model would be "they just do". How far down we go with such models will probably depend on how much a new level in the model allows us to explain previously unexplainable observed phenomena or to correctly predict new phenomena. (Or perhaps we might be stopped by the model becoming too complex.)

For determinists still unsettled by the probabilities inherent in quantum mechanics or the philosophical question about what would have come before a Big Bang, it is just one more step toward recognizing the true unsolvable mystery of our universe—recognizing it, but maybe still not accepting it; a new much better model could still come along.

Philosopher, Novelist; Author, Betraying Spinoza; 36 Arguments for the Existence of God: A Work of Fiction

An Unresolved (And, Therefore, Unbeautiful) Reaction To The Edge Question

This year's Edge question sits uneasily on a deeper question: Where do we get the idea—a fantastic idea if you stop and think about it—that the beauty of an explanation has anything to do with the likelihood of its being true? What do beauty and truth have to do with each other? Is there any good explanation of why the central notion of aesthetics (fluffy) should be inserted into the central notion of science (rigorous)?

You might think that, rather than being a criterion for assessing explanations, the sense of beauty is a phenomenon to be explained away. Take, for example, our impression that symmetrical faces and bodies are beautiful. Symmetry, it turns out, is a good indicator of health and, consequently, of mate-worthiness. It's a significant challenge for an organism to coordinate the production of its billions of cells so that its two sides proceed to develop as perfect matches, warding off disease and escaping injury, mutation and malnutrition. Symmetrical female breasts, for example, are a good predictor of fertility. As our own lustful genes know, the achievement of symmetry is a sign of genetic robustness, and we find lopsidedness a turnoff. So, too, in regard to other components of human beauty—radiant skin, shining eyes, neoteny (at least in women). The upshot is that we don't want to mate with people because they're beautiful, but rather they're beautiful because we want to mate with them, and we want to mate with them because our genes are betting on them as replicators.

So, too, you might think that beauty of every sort is to be similarly explained away, an attention-grabbing epiphenomenon with no substance of its own. Which brings me to the Edge question concerning beautiful explanations. Is there anything to this notion of explanatory beauty, a guide to choosing between explanatory alternatives, or is it just that any explanation that's satisfactory will, for that very reason and no other, strike us as beautiful, beautifully explanatory, so that the reference to beauty is once again without any substance? That would be an explanation for the mysterious injection of aesthetics into science. The upshot would be that explanations aren't satisfying because they're beautiful, but rather they're beautiful because they're satisfying: they strip the phenomenon bare of all mystery, and maybe, as a bonus, pull in further phenomena which can be rendered non-mysterious using the same sort of explanation. Can explanatory beauty be explained away, summarily dismissed by way of an eliminative explanation? (Eliminative explanations are beautiful.)

I'd like to stop here, with a beautiful explanation for explaining away explanatory beauty, but somebody is whispering in my ear. It's that damned Plato. Plato is going on about how there is more in the idea of explanatory beauty than is acknowledged in the eliminative explanation. In particular, he's insisting, as he does in his Timaeus, that the beauty of symmetry, especially as it's expressed in the mathematics of physical laws, cannot be explained away with the legerdemain of the preceding paragraph. He's reproaching the eliminative explanation of explanatory beauty with ignoring the many examples in history when the insistence on the beauty of symmetry led to substantive scientific progress. What was it that led James Clerk Maxwell to his four equations of electromagnetism but his trying to impose mathematical symmetry on the domains of electricity and magnetism? What was it that led Einstein to his equations of gravity but an insistence on beautiful mathematics?

Eliminative explanations are beautiful, but only when they truly and thoroughly explain. So instead of offering an answer to this year's Edge question I offer instead an unresolved (and, therefore, unbeautiful) reaction to the deep question on which it rests.

From the earliest days of computer programming up through the present, we are faced with the unfortunate reality that the field does not know how to design error-free programs.

Why can't we tame the writing of computer programs to emulate the successes of other areas of engineering? Perhaps the most lyrical thinker to address this question is Fred Brooks, author most famously of "The Mythical Man-Month". (If one bears in mind that this unfortunately titled book was first published in 1975, it is a bit easier to ignore the sexist language that litters this otherwise fine work; the points Brooks made more than 35 years ago remain almost all accurate today, except the assumption that all programmers are "he".)

When espousing the joys of programming, Brooks writes:

"The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. ... Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time."

But this magic comes with the bite of its flip side:

"In many creative activities the medium of execution is intractable. Lumber splits; paints smear; electrical circuits ring. These physical limitations of the medium constrain the ideas that may be expressed, and they also create unexpected difficulties in the implementation.

"... Computer programming, however, creates with an exceedingly tractable medium. The programmer builds from pure thought-stuff: concepts and very flexible representations thereof. Because the medium is tractable, we expect few difficulties in implementation; hence our pervasive optimism. Because our ideas are faulty, we have bugs; hence our optimism is unjustified."

Just as there is an arbitrarily large number of ways to arrange the words in an essay, a staggering variety of different programs can be written to perform the same function. The universe of possibility is too wide open, too unconstrained, to permit elimination of errors.

There are additional compelling causes of programming errors, most importantly the complexity of autonomously interacting independent systems with unpredictable inputs, often driven by even more unpredictable human actions, all interconnected on a worldwide network. But in my view the beautiful explanation is the one about unfettered thought-stuff.

Elegance is more than an aesthetic quality, or some ephemeral sort of uplifting feeling we experience in deeper forms of intuitive understanding. Elegance is formal beauty. And formal beauty as a philosophical principle is one of the most dangerous, subversive ideas humanity has discovered: it is the virtue of theoretical simplicity. Its destructive force is greater than Darwin's algorithm or that of any other single scientific explanation, because it shows us what the depth of an explanation is.

Elegance as theoretical simplicity comes in many different forms. Everybody knows Occam's razor, the ontological principle of parsimony: Entities are not to be multiplied beyond necessity. William of Occam gave us a metaphysical principle for choosing between competing theories: All other things being equal, it is rational to always prefer the theory that makes fewer ontological assumptions about the kinds of entities that really exist (souls, life forces, abstract objects, or an absolute frame of reference like electromagnetic ether). We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances—Isaac Newton formulated this as the First Rule of Reasoning in Philosophy, in his Principia Mathematica. Throw out everything that is explanatorily idle, and then shift the burden of proof to the proponent of a less simple theory. In Albert Einstein's words: The grand aim of all science … is to cover the greatest possible number of empirical facts by logical deductions from the smallest possible number of hypotheses or axioms.

Of course, in today's technical debates new questions have emerged: Why do metaphysics at all? Isn't the number of free, adjustable parameters in competing hypotheses simply what we should measure? Is it not syntactic simplicity that captures elegance best—say, the number of fundamental abstractions and guiding principles a theory makes use of? Or will the true criterion for elegance ultimately be found in statistics, in selecting the best model for a set of data points while optimally balancing parsimony with the "goodness of fit" of a suitable curve? And, of course, for Occam-style ontological simplicity the BIG question always remains: Why should a parsimonious theory be more likely true? Ultimately, isn't all of this rooted in a deeply hidden belief that God must have created a beautiful universe?

I find it fascinating to see how the original insight has kept its force over the centuries. The very idea of simplicity itself, applied as a metatheoretical principle, has demonstrated great power—the subversive power of reason and reductive explanation. The formal beauty of theoretical simplicity is deadly and creative at the same time. It destroys superfluous assumptions whose falsity we just cannot bring ourselves to believe, whereas truly elegant explanations always give birth to an entirely new way of looking at the world. What I would really like to know is this: Can the fundamental insight—the destructive, creative virtue of simplicity—be transposed from the realm of scientific explanation into culture or onto the level of conscious experience? What kind of formal simplicity would make our culture a deeper, more beautiful culture? And what is an elegant mind?

Certain facts in mathematics feel as though they contain a kind of compressed power—they look innocuous and mild-mannered when you first meet them, but they're dazzling when you see them in action. One of the most compelling examples of such a fact is the Pigeonhole Principle.

Here's what the Pigeonhole Principle says. Suppose a flock of pigeons lands in a group of trees, and there are more pigeons than trees. Then after all the pigeons have landed, at least one of the trees contains more than one pigeon.

This fact sounds obvious, and it is: there are simply too many pigeons, and so they can't each get their own tree. Indeed, if this were the end of the story, it wouldn't be clear why this is a fact that even deserves to be named or noted down. But to really appreciate the Pigeonhole Principle, you have to see some of the things you can do with it.

So let's move on to a fact that doesn't look nearly as straightforward. The statement itself is intriguing, but what's more intriguing is the effortless way it will turn out to follow from the Pigeonhole Principle. Here's the fact: Sometime in the past 4000 years, there have been two people in your family tree—call them A and B—with the property that A was an ancestor of B's mother and also an ancestor of B's father. Your family tree has a "loop", where two branches growing upward from B come back together at A—in other words, there's a set of parents in your ancestry who are blood relatives of each other, thanks to this relatively recent shared ancestor A.

It's worth mentioning a couple of things here. First, the "you" in the previous paragraph is genuinely you, the reader. Indeed, one of the intriguing features of this fact is that I can blithely make such assertions about you and your ancestors, despite the fact that I don't even know who you are. Second, the statement doesn't rely on any assumptions about the evolution of the human race, or the geographic sweep of human history. Here, in particular, are the only assumptions I'll need. (1) Everyone has two biological parents. (2) No one has children after the age of 100. (3) The human race is at least 4000 years old. (4) At most a trillion human beings have lived in the past 4000 years. (Scientists' actual best estimate for (4) is that roughly a hundred billion human beings have ever lived in all of human history; I'm bumping this up to a trillion just to be safe.) All four assumptions are designed to be as uncontroversial as possible; and even then, a few exceptions to the first two assumptions and an even larger estimate in the fourth would only necessitate some minor tweaking to the argument.

Now back to you and your ancestors. Let's start by building your family tree going back 40 generations: you, your parents, their parents, and so on, 40 steps back. Since each generation lasts at most 100 years, the last 40 generations of your family tree all take place within the past 4000 years. (In fact, they almost surely take place within just the past 1000 or 1200 years, but remember that we're trying to be uncontroversial.)

We can view a drawing of your family tree as a kind of "org chart", listing a set of jobs or roles that need to be filled by people. That is, someone needs to be your mother, someone needs to be your father, someone needs to be your mother's father, and so forth, going back up the tree. We'll call each of these an "ancestor role"—it's a job that exists in your ancestry, and we can talk about this job regardless of who actually filled it. The first generation back in your family tree contains two ancestor roles, for your two parents. The second contains four ancestor roles, for your grandparents; the third contains eight roles, for your great-grandparents. Each level you go back doubles the number of ancestor roles that need to be filled, so if you work out the arithmetic, you find that 40 generations in the past, you have more than a trillion ancestor roles that need to be filled.
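The doubling arithmetic in the paragraph above is easy to check; here is a quick Python sketch (an illustration of mine, not part of the essay) comparing forty doublings against a trillion:

```python
# Ancestor roles double with each generation back: 2, 4, 8, ..., 2**40.
generations = 40
roles_at_generation_40 = 2 ** generations
one_trillion = 10 ** 12

print(roles_at_generation_40)                 # 1099511627776
print(roles_at_generation_40 > one_trillion)  # True: more roles than people
```

So generation forty alone already holds about 1.1 trillion roles, more than the trillion people assumed to have lived in the past 4000 years.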

At this point it's time for the Pigeonhole Principle to make its appearance. The most recent 40 generations of your family tree all took place within the past 4000 years, and we decided that at most a trillion people ever lived during this time. So there are more ancestor roles (over a trillion) than there are people to fill these roles (at most a trillion). This brings us to the crucial point: at least two roles in your ancestry must have been filled by the same person. Let's call this person A.

Now that we've identified A, we're basically done. Starting from two different roles that A filled in your ancestry, let's walk back down the family tree toward you. These two walks downward from A have to first meet each other at some ancestor role lower down in the tree, filled by a person B. Since the two walks are meeting for the first time at B, one walk arrived via B's mother, and the other arrived via B's father. In other words, A is an ancestor of B's mother, and also an ancestor of B's father, just as we wanted to conclude.

Once you step back and absorb how the argument works, you can appreciate a few things. First, in a way, it's more a fact about simple mathematical structures than it is about people. We're taking a giant family tree—yours—and trying to stuff it into the past 4000 years of human history. It's too big to fit, and so certain people have to occupy more than one position in it.

Second, the argument has what mathematicians like to call a "non-constructive" aspect. It never really gave you a recipe for finding A and B in your family tree; it convinced you that they must be there, but very little more.

And finally, I like to think of it as a typical episode in the lives of the Pigeonhole Principle and all the other quietly powerful statements that dot the mathematical landscape—a band of understated little facts that seem to frequently show up at just the right time and, without any visible effort, clean up an otherwise messy situation.

In contributing to this volume, I decided to go back to the basics, literally to the beginnings of Western scientific thought. For already in Ancient Greece we find the striving for ideas with beauty and elegance that remains so important to our culture. As we will see, their influence is more than merely historical.

So, back to the late pre-Socratic philosophers we go, to around 450 BCE. (Thus, not quite "pre" Socrates, as he was born c. 469 BCE.) At the time, there were two warring views of reality, which had been developed and refined for some 200 years. On the one hand, the Ionians—Thales of Miletus being the first—claimed that what was essential in Nature was change: nothing was permanent, everything was in flux. "You cannot step in the same river twice," proclaimed Heraclitus of Ephesus (although not in so direct a manner). Later on, Aristotle commented on this view of perpetual change in his Physics: "some say…that all things are in motion all the time, but that this escapes our attention." This Ionian philosophy is known as a philosophy of "becoming," focusing on transformation and the transient nature of natural phenomena.

On the other hand, the Eleatics—Parmenides of Elea being the first—claimed the exact opposite: what is essential is that which doesn't change. So, to find the true nature of things we look for what is permanent. Among the Eleatics we find Zeno, whose famous paradoxes aimed at proving that motion was an illusion. This was a philosophy of "being," focusing on the unchangeable.

If you were an ambitious young philosopher starting out around 450 BCE, what were you to do? Two schools (let's leave the Pythagoreans out), two opposite views of reality. It is here that Leucippus came in, a man who, like Thales, was probably also from Miletus. He and his prolific pupil Democritus came up with a beautifully simple solution to the change vs. no-change dilemma. What if, they reasoned, everything was made from tiny bits of matter, like pieces of a Lego set? The bits are indestructible and indivisible—the eternal atoms, and thus give material existence to the Eleatic notion of "being". On the other hand, the bits can combine in myriad ways, giving rise to the changing shapes and forms of all objects in Nature. So, objects of being combine to forge the changing nature of reality: being and becoming are unified!

Fast forward to the present. Atoms are now very different entities: not indivisible, but made of yet smaller bits; not uncountable, but numbering 94 that occur naturally, plus a few others made artificially in labs. Notwithstanding the differences between ancient and modern atoms, the core notion that all objects are made of smaller bits, and that the properties of composite objects can be understood by studying the properties of those bits—the essence of reductionism—has served science extremely well.

Yet, as science marched on to describe the properties of the elementary bits of matter, the elementary particles, a new notion came to replace that of small bits: the concept of the "field". Nowadays, particles are seen as excitations of underlying fields: electrons are excitations of the electron field, quarks of the quark field, and so on. The fields are fundamental, not the particles. Furthermore, many scientists today express their discontent with reductionism, stating that a more holistic approach to science may open new avenues of understanding. There is much truth to this, since it's impracticable to think that we can understand the behavior of, say, a DNA molecule—a huge entity with hundreds of billions of atoms—by integrating the behavior of each one of its atoms. Matter organizes in different ways at different levels of complexity, and new laws are needed to describe each of these different levels.

Are we then done with Atomism's intellectual inheritance? Not if we look at its essence, as an attempt to reconcile change and no-change, which necessarily coexist. Our modern view of physical reality remains a construction built upon these twin concepts: on the one hand, the material world, made of changing fields, their excitations, and their interactions. On the other, we know that these interactions are ruled by certain laws which, by their very nature, are unchangeable: the laws of nature. Thus, we still understand the world based on the twin pillars of being and becoming, as pre-Socratic philosophers did some two and a half millennia ago. The tools have changed, the rules have changed, but the beauty and elegant simplicity of the idea that change and no-change coexist remains as vivid today as it was then.

There is a deep fascination I have been carrying with me for decades now, ever since earliest childhood: the interplay between simplicity and complexity.

Unable to express it verbally at the time, I see it clearly in hindsight: it is all about the ultimate question of what life is and how this world came into existence.

In many stages and phases I discovered a multitude of ideas that are exactly what is called for here: deep, elegant and beautiful explanations of the principles of nature.

Simplicity is embodied in a reductionist form in the YinYang symbol: being black or white.

In other familiar words: To be or not to be.

Those basic elements combined: that is the process spawning diversity, in myriads of forms.

As a youngster I was totally immersed in 'Lego' blocks. There are a handful of basic shapes (I never liked the 'special' ones and clamored instead for a bigger box of basics), and you could put them together in arrangements that become houses, ships, bridges... entire towns that grew up the sides of my little room to the tops of the wardrobes. And I sensed it then: there is something deep about this.

A bit later I got into a mechanical typewriter (what a relief to be able to type clearly; my handwriting had always been horrid, the hand never able to keep up with the thinking) and relished the ability to put together words, sentences, paragraphs. Freezing a thought in a material fashion, putting it on paper to recall later. What's more: to let someone else follow your thinking! I sensed: this is a thing of beauty.

Then I took up playing the piano. The embryonic roots of the software designer of later decades probably shuddered at the interface: 88 unlabeled keys! Irregular intervals of black ones interspersed... and almost the exact opposite of today's "we need to learn this in one minute, and no, we never ever look at manuals" attitude. It took months to make any sense of it, but despite the frustrations it was deeply fascinating. String together a few notes with mysterious, un-definable skill and out comes... deeply moving emotion?

So the plot thickens: a few Lego blocks, a bunch of lettershapes or a dozen musical notes... and you take that simplicity of utterly lame elements, put them together...and out pops complexity, meaning, beauty.

Later in the early 70s I delved into the very first generation of large synthesizers and dealt specifically with complex natural sounds being generated from simple unnatural ingredients and processes. By 1977—now in California—it was computer graphics that became the new frontier—and again: seemingly innocent little pixels combine to make ... any image—as in: anything one can imagine. Deep.

In those days I also began playing chess, and carom billiards—simple rules, a few pieces, 3 balls...but no game is ever the same. Not even close. The most extreme example of this became another real fascination: the game of GO. Just single moves of black and white stones, on a plain grid of lines with barely a handful of rules—but a huge variety of patterns emerges. Elegant.

The earliest computing, in the first computer store in the world (Dick Heyser's in Santa Monica), had me try something I had read about in Martin Gardner's Scientific American column: Conway's 'Game of Life'. The literal incarnation of the initial premise: simplicity reduced to that YinYang, a cell being on or off, black or white. But there is one more thing added here now: iteration. With just four rules each cell lives or dies, and in each cycle the pattern changes, iteratively. From dead dots on paper, and static pixels on phosphor, it sprang to... life! Not only patterns, but blinkers, gliders, even glider guns, heck, glider gun cannons! Indeed, it is now known to be a true Turing-complete machine. Artificial Life. Needless to say: very deep.
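Those few rules are compact enough to fit in a handful of lines. This minimal Python sketch (a toy illustration of mine, not any historical program) steps a set of live cells and confirms the famous behavior of the glider, which reappears shifted one cell diagonally every four generations:

```python
from itertools import product

def step(live):
    """One Game of Life generation; live is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3; death otherwise.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)

# After four steps the same shape reappears, shifted one cell diagonally.
print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The whole "universe" is just a set of coordinates and a counting rule, yet out of it crawl gliders, guns, and ultimately full computation.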

Another example in that vein is, of course, fractals. Half an inch of a formula, when iterated, yields worlds of unimaginably intricate shapes and patterns. It was a great circle closing when, after 20 years, I came to re-examine this field, now flying through fractals as "frax" on a little iPhone, in realtime and in real awe.
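That half inch of formula is just z → z² + c, iterated. A deliberately crude Python sketch (my own low-resolution illustration, nothing to do with the frax app) already conjures the familiar Mandelbrot silhouette in ASCII:

```python
def mandelbrot_ascii(width=40, height=20, max_iter=30):
    """Crude ASCII Mandelbrot set: iterate z -> z*z + c for each pixel."""
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel to a point c in the complex plane.
            c = complex(-2.0 + 2.8 * i / width, -1.2 + 2.4 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:       # escaped: c lies outside the set
                    row.append(".")
                    break
            else:
                row.append("#")      # never escaped: inside (approximately)
        rows.append("".join(row))
    return "\n".join(rows)

print(mandelbrot_ascii())
```

One line of arithmetic, repeated, and the boundary between "#" and "." is infinitely intricate at every scale.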

The entire concept of the computer embodies the principles of simple on/off binary codes, much like YinYang, being put together to form still-simplistic gates and then flip-flops, counters, all the way to RAM and complex CPUs/GPUs and beyond. And now we have a huge matrix computer with billions of elements networked together (namely 'us', including this charming little side corridor called 'Edge'). Just a little over 70 years after Zuse's Z3 we have reached untold complexity, with no sign of slowing down.

Surely the ultimate example of 'simplexity' is the genetic code: four core elements combined by simple rules to extremely complex effect, the DNA that builds archaea, felis, or homo somewhat sapiens.

Someone once wrote on Edge, "A great analogy is like... a diagonal frog", which embodies the un-definable art of what constitutes a deep, beautiful, or elegant explanation: finding the perfect example! The lifelong encounters with "trivial ingredients turning to true beauty" recited here are in themselves neither terse mathematical proofs nor eloquently worded elucidations (such as one could easily quote from almost any Nobel laureate's prize-worthy insights).

Instead of the grandeur of 'the big formulas', I felt that the potpourri of AHA! moments over six decades may be just as close to that holy grail of scientific thinking: to put all the puzzle pieces together in such a way that a logical conclusion converges, further on, toward... the truth. And I guess one of the pillars of that truth, in my eyes, is the charmingly, disarmingly minuscule insight:

Professor of Ethology, Cambridge University; Co-author, Design for a Life

Subverting Biology

Two years ago I reviewed the evidence on inbreeding in pedigree dogs. Inbreeding can result in reduced fertility (in both litter size and sperm viability), developmental disruption, lower birth rate, higher infant mortality, shorter life span, increased expression of inherited disorders, and reduction of immune system function. The immune system is closely linked to the removal of cancer cells from a healthy body and, indeed, reduction of immune system function increased the risk of full-blown tumours. These well-documented cases in domestic dogs confirm what is known from many wild populations of other species. It comes as no surprise, therefore, that a variety of mechanisms render inbreeding less likely in the natural world. One such is the choice of unfamiliar individuals as sexual partners.

Despite all the evidence, the story is more complicated than it first appears, and this is where the explanation for what happens has a certain beauty. While inbreeding is generally seen as undesirable, the debate has become much more nuanced in recent years. Purging of the genes with seriously damaging effects can carry obvious benefits, and this can happen when a population is inbred. Outcrossing, which is usually perceived as advantageous, does carry the possibility that the benefits of purging are undone by the introduction of new harmful genes into the population. Furthermore, a population adapted to one environment may not do well if crossed with a population adapted to another environment. So a balance is often struck between inbreeding and outbreeding.

When the life history of the species demands careful nurturing of the offspring, the parents may go to a lot of trouble to mate with the best partner possible. A mate should be not too similar to oneself but not too dissimilar either. Thirty years ago I found that Japanese quail of both sexes preferred partners that were first cousins. Subsequent animal studies have suggested that an optimal degree of relatedness is most beneficial to the organism in terms of reproductive success. A study of a human Icelandic population also pointed to the same conclusion. Couples who were third or fourth cousins had a larger number of grandchildren than more closely related or more distantly related partners. Much evidence from humans and non-human animals suggests that the choice of a mate is dependent on experience in early life, with individuals tending to choose partners who are a bit different but not too different from familiar individuals, who are usually but not always close kin.

The role of early experience in determining sexual and social preferences bears on a well-known finding that humans are extremely loyal to members of their own group. They are even prepared to give up their own lives in defence of those with whom they identify. In sharp contrast, they can behave with lethal aggressiveness towards those who are unfamiliar to them. This suggests a hopeful resolution to the racism and intolerance that bedevils many societies. As people from different countries and ethnic backgrounds become better acquainted with each other, they will be more likely to treat one another well, particularly if the familiarity starts at an early age. If familiarity leads to marriage the couples may have fewer grandchildren, but that may be a blessing on an over-populated planet. This optimistic principle, generated by knowledge of how a balance has been struck between inbreeding and outbreeding, subverts biology, but it does hold for me considerable beauty.

In 1953, when James Watson pushed around some two-dimensional cut-outs and was startled to find that an adenine-thymine pair had an isomorphic shape to the guanine-cytosine pair, he solved eight mysteries simultaneously. In that instant he knew the structure of DNA: a helix. He knew how many strands: two. It was a double helix. He knew what carried the information: the nucleic acids in the gene, not the protein. He knew what maintained the attraction: hydrogen bonds. He knew the arrangement: the sugar-phosphate backbone was on the outside and the nucleic acids were on the inside. He knew how the strands match: through the base pairs. He knew the orientation: the two chains ran in opposite directions. And he knew how genes replicated: through a zipper-like process.

The discovery that Watson and Crick made is truly impressive, but I am also interested in what we can learn from the process by which they arrived at their discovery. On the surface, the Watson-Crick story fits in with five popular claims about innovation, as presented below. However, the actual story of their collaboration is more nuanced than these popular claims suggest.

It is important to have clear research goals. Watson and Crick had a clear goal, to describe the structure of DNA, and they succeeded.

But only the first two of their eight discoveries had to do with this goal. The others, arguably the most significant, were unexpected byproducts.

Experience can get in the way of discoveries. Watson and Crick were newcomers to the field and yet they scooped all the established researchers, demonstrating the value of fresh eyes.

However, Watson and Crick as a team actually had more comprehensive expertise than the other research groups. The leading geneticists didn't care about biochemistry; they were just studying the characteristics of genes. The organic chemists who were studying DNA weren't interested in genetics. In contrast Crick had a background in physics, x-ray techniques, protein and gene function. Watson brought to the table biology, phages, and bacterial genetics. Crick was the only crystallographer interested in genes. Watson was the only one coming out of the U.S.-based phage group interested in DNA.

Fixation on theories blinds you to the data. Many of the researchers at the time had been gripped by a flawed belief that proteins carried the genetic information, because DNA seemed too simple with only four bases. That was the handicap that the experienced researchers carried, not their expertise. Watson and Crick, being new to the field, weren't fixated by the protein hypothesis and were excited by new data suggesting that DNA played a central role in genetic information.

On the other hand, excessive reliance on data also carries a penalty because the data can be flawed. Rosalind Franklin was handicapped in her research by earlier results that had mixed dry and wet forms of DNA. She pursued the dry form of DNA, whereas she needed to be studying the wet form. She didn't have the over-arching theory of Watson and Crick that DNA must be a helix, which would have helped her make sense of her own data. She ignored an important photograph for 10 months whereas Watson was struck by its significance as soon as he saw it. As modelers, Watson and Crick benefited from a top-down perspective that helped them judge which kinds of data were important.

Also, Watson and Crick were gripped by a flawed theory of their own. They believed that DNA would be a triple helix, a belief that sent them off in some wrong directions but also provided them with concrete ideas they could test. They were in a "speculate and test" mode rather than trying to keep an open mind.

Pressure for results gets in the way of creativity. No granting agency was sponsoring their research. They didn't have to demonstrate progress in order to get funding renewal.

Actually, the two of them felt enormous pressure to unravel the mystery of DNA, particularly when Linus Pauling showed interest. Unlike Pauling or any of the other research groups, Watson and Crick perceived themselves to be in a frantic race for the prize.

Scientists need to safeguard their reputation for accuracy. Scientific reputations are important. You won't be taken seriously as a scientist if you are seen as doing sloppy research or jumping to unfounded conclusions. For example, Oswald Avery had shown in 1944 that bacterial genes were carried by DNA. But the scientific community thought that Avery's work lacked the necessary controls. He wasn't seen as a careful researcher and his findings weren't given as much credence as they deserved.

However, Watson and Crick weren't highly regarded either. Rosalind Franklin was put off by their eagerness to speculate about questions that would eventually be resolved by carefully gathering data. When Watson and Crick enthusiastically unveiled their triple helix model to her in Cambridge she had little difficulty shooting it down.

I think this last issue is the most important. Too many scientists are very careful not to make errors, not to make claims that later have to be retracted. For many, the ideal is to only announce results that can withstand all criticisms, results that can't possibly be wrong. Unfortunately, the safer the claim, the lower the information value. Watson and Crick embody an opposite tendency, to make the strongest claim that they can defend.

Kepler's planetary-motion ellipses, Bohr's electron shells, and Watson and Crick's double helix are good examples of bringing a bolt of clarity and explanation to a specific scientific problem. Another level of explanatory power comes from ideas that apply more universally to many phenomena, thereby making sense of things at a higher order. Some examples of these ideas include Occam's Razor, the invisible hand, survival of the fittest, the incompleteness theorem, and cellular reprogramming.

Therefore some of the best explanations may be intuitively beautiful and elegant, offer an explanation for the diverse and complicated phenomena found in the natural universe and the human-created world, be universally applicable or at least portable to other contexts, and make sense of things at a higher order. Fields like cosmology, philosophy, and complexity theory have already delivered in this exercise: they encompass many other science fields in their scope and explain a variety of micro- and macro-scale phenomena.

Next node foment is an idea inspired by complexity theory. As large complex adaptive systems move across time and landscape, they periodically cycle between order and chaos, in a dynamic progression of symmetry-attaining and symmetry-breaking. These nodes of symmetry are ephemeral. A moment of symmetry in a dynamic system is unstable because system forces drive progression away from the stuck state of Buridan's ass and back into the search space of chaos towards the next node of symmetry. This is the process of life, of intelligence, of the natural world, and of complex man-made systems. Pressure builds to force innovation in the dynamic process or the system gains entropy and stagnates into a fixed state or death.

A classic example of next node foment can be found in the history of computing paradigms, cycling in and out of symmetry and moving to the next nodes through a process of capacity exhaust, frustration, competition, and innovation. These paradigms have evolved from the electro-mechanical punch card to the relay to the vacuum tube to the transistor to the integrated circuit to whatever is coming next. The threatened end of Moore's law is not a disaster but an invitation for innovation. Creative foment towards the next node is already underway in the areas of block copolymers, DNA nanoelectronics, the biomolecular integration of organic and inorganic materials, 3D circuits, quantum computing, and optical computing.

Another area is energy: as any resource starts to run out (e.g., wood, whale oil, coal, petroleum), innovators develop new ideas to push the transition. For example, the shifts in the automotive industry in the last few years have been significant, driven by both resource depletion (the 'end of oil') and a political emphasis on energy independence. Some of the entrants competing for the next node paradigm are synthetic biofuels, electric cars, hybrids, and hydrogen fuel-cell cars.

Other classic examples of next node foment and symmetry-breaking behavior can be found in the fields of complexity theory and chaos theory. These include the phases of cosmic expansion, the occurrence of neutrinos, and the chiral structure of proteins and lipids. For example, one benefit of non-equilibrium systems is that they transform energy from the environment into an ordered behavior of a new type that is characterized by symmetry breaking.

Information compression is another area of next node foment: the progression from analog to digital and the developing friction for the next era. Analog and digital are modes of modulating information onto the electromagnetic spectrum with increasing efficacy. The next era could be characterized by the even greater effectiveness of electromagnetic spectrum control, particularly moving to multidimensional attribute modulation. Already DNA is a potential alternative encoding system with four and maybe eight combinations instead of the 1s and 0s of the digital era. Terahertz networking and data provenance are early guides in the progress to the next node of information compression.

Part of the beauty of next node foment is that it extends beyond science and technology to a wide range of areas such as philosophy. For example, one of the lesser-known definitions of irony is when individuals experience a sense of dissimulation from a group. This feeling of being dissimulated is that of experiencing an anxious uncanniness about what it means to be a doctor, a Christian, a New Yorker, etc., because the norms of the group no longer hold for the individual. However, it is only by cultivating this anxious uncanniness that the progression to next node can be realized: redefining oneself or the group norms, or starting a new group. As the end of Moore's law is an invitation for innovation, so too is anxious uncanniness an invitation for intellectual growth and cultural evolution.

Next node foment can also be seen in areas of current conflict in scientific theories, where two elegant high-order paradigms with explanatory power are themselves in competition, uncomfortable coexistence, or broken symmetry fomenting towards a larger explanatory paradigm. Some examples include a grand unified theory to unify the general theory of relativity with electromagnetism, mathematical theories that include both power laws and randomness, and a behavioral theory of beyond-human-level intelligence that includes both computronium and aesthetics (e.g., does AI do art, solely compute, or is there no distinction at that level of cosmic navel-gazing?).

Next node foment is a novel and effective explanation for many diverse and complicated phenomena found in the natural universe and human-created world. It has intuitive simplicity, beauty, and elegance, wide and perhaps universal applicability, and the ability to make sense of things at a higher order. Next node foment explains natural world phenomena in cosmology, physics, and biology, and human-derived phenomena in the progression of technology innovation, energy eras, information compression eras, and the evolution of culture.

There are people who want a stable marriage, yet continue to cheat on their wives.

There are people who want a successful career, yet continue to undermine themselves at work.

Aristotle defined Man as a rational animal. Contradictions like these show that we are not.

All people live with the conflicts between what they want and how they live.

For most of human history we had no way to explain this paradox until Freud's discovery of the unconscious resolved it. Before Freud, we were restricted to our conscious awareness when looking for answers regarding what we knew and felt. All we had to explain incompatible thoughts, feelings, and motivations was what we could access in consciousness. We knew what we knew and we felt what we felt. Freud's elegant explanation postulated a conceptual space that is not manifest to us but where irrationality rules. This aspect of the mind is not subject to the constraints of rationality such as logical inference, cause and effect, and linear time. The unconscious explains why presumably rational people live irrational lives.

Critics may take exception as to what Freud believed resides in the unconscious—drives, both sexual and aggressive, defenses, conflicts, fantasies, affects and beliefs—but no one would deny its existence; the unconscious is now a commonplace. How else to explain our stumbling through life, unsure of our motivations, inscrutable to ourselves? I wonder what a behaviorist believes is at play while in the midst of divorcing his third astigmatic redhead.

The universe consists primarily of dark matter. We can't see it, but it has an enormous gravitational force. The conscious mind—much like the visible aspect of the universe—is only a small fraction of the mental world. The dark matter of the mind, the unconscious, has the greatest psychic gravity. Disregard the dark matter of the universe and anomalies appear. Ignore the dark matter of the mind and our irrationality is inexplicable.

Sherrell J. Aston Professor of Psychology at the University of Virginia; Co-author, Social Psychology; Author, Strangers to Ourselves; Redirect

We Are What We Do

My favorite is the idea that people become what they do. This explanation of how people acquire attitudes and traits dates back to the philosopher Gilbert Ryle, but was formalized by the social psychologist Daryl Bem in his self-perception theory. People draw inferences about who they are, Bem suggested, by observing their own behavior.

Self-perception theory turns common wisdom on its head. People act the way they do because of their personality traits and attitudes, right? They return a lost wallet because they are honest, recycle their trash because they care about the environment, and pay $5 for a caramel brulée latte because they like expensive coffee drinks. While it is true that behavior emanates from people's inner dispositions, Bem's insight was to suggest that the reverse also holds. If we return a lost wallet, there is an upward tick on our honesty meter. After we drag the recycling bin to the curb, we infer that we really care about the environment. And after purchasing the latte, we assume that we are coffee connoisseurs.

Hundreds of experiments have confirmed the theory and shown when this self-inference process is most likely to operate (e.g., when people believe they freely chose to behave the way they did, and when they weren't sure at the outset how they felt).

Self-perception theory is elegant in its simplicity. But it is also quite deep, with important implications for the nature of the human mind. Two other powerful ideas follow from it. The first is that we are strangers to ourselves. After all, if we knew our own minds, why would we need to guess what our preferences are from our behavior? If our minds were an open book, we would know exactly how honest we are and how much we like lattes. Instead, we often need to look to our behavior to figure out who we are. Self-perception theory thus anticipated the revolution in psychology in the study of human consciousness, a revolution that revealed the limits of introspection.

But it turns out that we don't just use our behavior to reveal our dispositions—we infer dispositions that weren't there before. Often, our behavior is shaped by subtle pressures around us, but we fail to recognize those pressures. As a result, we mistakenly believe that our behavior emanated from some inner disposition. Perhaps we aren't particularly trustworthy and instead returned the wallet in order to impress the people around us. But, failing to realize that, we infer that we are squeaky clean honest. Maybe we recycle because the city has made it easy to do so (by giving us a bin and picking it up every Tuesday) and our spouse and neighbors would disapprove if we didn't. Instead of recognizing those reasons, though, we assume that we should be nominated for the Green Neighbor of the Month Award. Countless studies have shown that people are quite susceptible to social influence, but rarely recognize the full extent of it, thereby misattributing their compliance to their true wishes and desires—the well-known fundamental attribution error.

Like all good psychological explanations, self-perception theory has practical uses. It is implicit in several versions of psychotherapy, in which clients are encouraged to change their behavior first, with the assumption that changes in their inner dispositions will follow. It has been used to prevent teenage pregnancies, by getting teens to do community service. The volunteer work triggers a change in their self-image, making them feel more a part of their community and less inclined to engage in risky behaviors. In short, we should all heed Kurt Vonnegut's advice: "We are what we pretend to be, so we must be careful about what we pretend to be."

Social Psychologist, University of British Columbia; Co-author, Happy Money: The Science of Smarter Spending

Why We Feel Pressed for Time

Recently, I found myself on the side of the road, picking gravel out of my knee and wondering how I’d ended up there. I had been biking from work to meet a friend at the gym, pedalling frantically to make up for being a few minutes behind schedule. I knew I was going too fast, and when I hit a patch of loose gravel while careening through a turn, my bike slid out from under me. How had I gotten myself in this position? Why was I in such a rush?

I thought I knew the answer. The pace of life is increasing; people are working more and relaxing less than they did 50 years ago. At least that’s the impression I got from the popular media. But as a social psychologist, I wanted to see the data. As it turns out, there is very little evidence that people are now working more and relaxing less than they did in earlier decades. In fact, some of the best studies suggest just the opposite. So, why do people report feeling so pressed for time?

A beautiful explanation for this puzzling phenomenon was recently offered by Sanford DeVoe, at the University of Toronto and Jeffrey Pfeffer, at Stanford. They argue that as time becomes worth more money, time is seen as scarcer. Scarcity and value are perceived as conjoined twins; when a resource—from diamonds to drinking water—is scarce, it is more valuable, and vice versa. So, when our time becomes more valuable, we feel like we have less of it. Indeed, surveys from around the world have shown that people with higher incomes report feeling more pressed for time. But there are lots of plausible reasons for this, including the fact that more affluent people often work longer hours, leaving them with objectively less free time.

DeVoe and Pfeffer proposed, however, that simply perceiving oneself as affluent might be sufficient to generate feelings of time pressure. Going beyond past correlational analyses, they used controlled experiments to put this causal explanation to the test. In one experiment, DeVoe and Pfeffer asked 128 undergraduates to report the total amount of money they had in the bank. All the students answered the question using an 11-point scale, but for half the students, the scale was divided into $50 increments, ranging from $0-$50 (1) to over $500 (11), whereas for the others, the scale was divided into much larger increments, ranging from $0-$500 (1) to over $400,000 (11). When the scale was divided into $50 increments, most undergraduates circled a number near the top of the scale, leaving them with the sense that they were relatively well-off. And this seemingly trivial manipulation led participants to feel that they were rushed, pressed for time, and stressed out. In other words, just feeling affluent led students to experience the same sense of time pressure reported by genuinely affluent individuals. Other studies confirmed that increasing the perceived economic value of time increases its perceived scarcity.

If feelings of time scarcity stem in part from the sense that time is incredibly valuable, then ironically, one of the best things we can do to reduce this sense of pressure may be to give our time away. Indeed, new research suggests that giving time away to help others can actually alleviate feelings of time pressure. Companies like Home Depot provide their employees with opportunities to volunteer their time to help others, potentially reducing feelings of time stress and burnout. And Google encourages employees to use 20% of their time on their own pet project, which may or may not pay off. Although some of these projects have resulted in economically valuable products like Gmail, the greatest value of this program might lie in reducing employees’ sense that their time is scarce.

As well as pointing to innovative solutions to feelings of time pressure, DeVoe and Pfeffer’s work can help to account for important cultural trends. Over the past 50 years, feelings of time pressure have risen dramatically in North America, despite the fact that weekly hours of work have stayed fairly level and weekly hours of leisure have climbed. This apparent paradox may be explained, in no small part, by the fact that incomes have increased substantially during the same period. This causal effect may also help to explain why people walk faster in wealthy cities like Tokyo and Toronto than in cities like Nairobi and Jakarta. And at the level of the individual, this explanation suggests that as incomes grow over the life course, time seems increasingly scarce. Which means that, as my career develops, I might have to force myself to take those turns a little slower.

We have the clear impression that our deliberative mind makes the most important decisions in our life: What work we do, where we live, who we marry. But contrary to this belief the biological evidence points toward a decision process in an ancient brain system called the basal ganglia, brain circuits that consciousness cannot access. Nonetheless, the mind dutifully makes up plausible explanations for the decisions.

The scientific trail that led to this conclusion began with honeybees. Worker bees forage the spring fields for nectar, which they identify with the color, fragrance and shape of a flower. The learning circuit in the bee brain converges on VUMmx1, a single neuron that receives the sensory input and, a bit later, the value of the nectar, and learns to predict the nectar value of that flower the next time the bee encounters it. The delay is important because the key is prediction, rather than a simple association. This is also the central core of temporal-difference (TD) learning, which can learn a sequence of decisions leading to a goal and is particularly effective in uncertain environments like the world we live in.

Buried deep in your midbrain there is a small collection of neurons, found in our earliest vertebrate ancestors, that project throughout the cortical mantle and basal ganglia that are important for decision making. These neurons release a neurotransmitter called dopamine, which has a powerful influence on our behavior. Dopamine has been called a "reward" molecule, but more important than reward itself is the ability of these neurons to predict reward: If I had that job, how happy would I be? Dopamine neurons, which are central to motivation, implement TD learning, just like VUMmx1.

TD learning solves the problem of finding the shortest path to a goal. It is an "online" algorithm because it learns by exploring and discovers the value of intermediate decisions in reaching the goal. It does this by creating an internal value function, which can be used to predict the consequences of actions. Dopamine neurons evaluate the current state of the entire cortex and inform the brain about the best course of action from the current state. In many cases the best course of action is a guess, but because guesses can be improved, over time TD learning creates a value function of oracular powers. Dopamine may be the source of the "gut feeling" you sometimes experience, the stuff that intuition is made from.
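The mechanics of this value-function learning can be made concrete with a small sketch. What follows is my own toy illustration of tabular TD(0) on a classic random-walk prediction task, not the brain's algorithm or anything from the essay; the task, function name, and parameters are all assumptions chosen for clarity:

```python
import random

# Toy TD(0) value prediction (illustrative sketch, not from the essay).
# States 1..5 form a chain; from each, the walker steps left or right
# at random. Reaching state 6 pays reward 1; reaching state 0 pays 0.
# The true value of state s (expected final reward) is s/6.

def td0_random_walk(episodes=5000, alpha=0.05, gamma=1.0, seed=0):
    rng = random.Random(seed)
    V = {s: 0.5 for s in range(1, 6)}    # initial value guesses
    for _ in range(episodes):
        s = 3                             # every walk starts in the middle
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))  # random step left or right
            r = 1.0 if s2 == 6 else 0.0   # reward only at the right terminal
            # TD(0) update: nudge V(s) toward the bootstrapped
            # target r + gamma * V(s'), i.e. learn from a prediction
            # of the next prediction rather than waiting for the end.
            V[s] += alpha * (r + gamma * V.get(s2, 0.0) - V[s])
            s = s2
    return V

values = td0_random_walk()
print({s: round(v, 2) for s, v in sorted(values.items())})
```

The estimates converge toward the true values s/6 even though most steps deliver no reward at all, which is the point the essay makes: the value of an intermediate state is learned from the prediction at the following state.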

When you consider various options, prospective brain circuits evaluate each scenario and the transient level of dopamine registers the predicted value of each decision. The level of dopamine is also related to your level of motivation, so not only will a high level of dopamine indicate a high expected reward, but you will also have a higher level of motivation to pursue it. This is quite literally the case in the motor system, where a higher tonic dopamine level produces faster movements. The addictive power of cocaine and amphetamines is a consequence of increased dopamine activity, hijacking the brain's internal motivation system. Reduced levels of dopamine lead to anhedonia, an inability to experience pleasure, and the loss of dopamine neurons results in Parkinson's Disease, an inability to initiate actions and thoughts.

TD learning is powerful because it combines information about value along many different dimensions, in effect comparing apples and oranges in achieving distant goals. This is important because rational decision-making is very difficult when there are many variables and unknowns, so having an internal system that quickly delivers good guesses is a great advantage, and may make the difference between life and death when a quick decision is needed. TD learning depends on the sum of your life experiences. It extracts what is essential from these experiences long after the details of the individual experiences are no longer remembered.

TD learning also explains many of the experiments that were performed by psychologists who trained rats and pigeons on simple tasks. Reinforcement learning algorithms have traditionally been considered too weak to explain more complex behaviors because the feedback from the environment is sparse and minimal. Nonetheless reinforcement learning is universal among nearly all species and is responsible for some of the most complex forms of sensorimotor coordination, such as piano playing and speech. Reinforcement learning has been honed by hundreds of millions of years of evolution. It has served countless species well, in particular our own.

How complex a problem can TD learning solve? TD gammon is a computer program that learned how to play backgammon by playing itself. The difficulty with this approach is that the reward comes only at the end of the game, so it is not clear which were the good moves that led to the win. TD gammon started out with no knowledge of the game, except for the rules. By playing itself many times and applying TD learning to create a value function to evaluate game positions, TD gammon climbed from beginner to expert level, along the way picking up subtle strategies similar to ones that humans use. After playing itself a million times it reached championship level and was discovering new positional play that astonished human experts. Similar approaches to the game of Go have achieved impressive levels of performance and are on track to reach professional levels.
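The self-play recipe can be sketched at miniature scale. The toy take-away game below stands in for backgammon, and everything about the sketch (the game, the `train` function, the parameters) is my own illustrative assumption rather than a detail of TD gammon itself; only the principle is the same: each player improves a shared value function by playing against itself, with reward arriving only when the game ends.

```python
import random

# Self-play TD learning on a toy game (illustrative assumption, not
# TD gammon): 10 counters, two players alternately remove 1-3, and
# whoever takes the last counter wins. V[(n, p)] estimates the
# probability that player 0 ultimately wins when it is player p's
# turn with n counters remaining.

def train(episodes=30000, alpha=0.05, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    V = {}

    def value(n, p):
        return V.setdefault((n, p), 0.5)   # optimistic-neutral initial guess

    def move_value(n, p, m):
        # Value (for player 0) of removing m counters from state (n, p).
        if n - m == 0:
            return 1.0 if p == 0 else 0.0  # mover takes the last counter and wins
        return value(n - m, 1 - p)

    for _ in range(episodes):
        n, p = 10, 0
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if rng.random() < epsilon:
                m = rng.choice(moves)      # occasional exploration
            else:                          # player 0 maximizes V, player 1 minimizes
                pick = max if p == 0 else min
                m = pick(moves, key=lambda mm: move_value(n, p, mm))
            # TD update: shift V(n, p) toward the value of the chosen successor,
            # so the end-of-game reward gradually propagates back to early moves.
            V[(n, p)] = value(n, p) + alpha * (move_value(n, p, m) - value(n, p))
            n, p = n - m, 1 - p
    return V

V = train()
```

After training, states where the player to move can take the last counters are valued near a win for that player, even though no state except the final one ever receives an explicit reward, which is the credit-assignment trick the essay describes.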

When there is a combinatorial explosion of possible outcomes, selective pruning is helpful. Attention and working memory allow us to focus on the most important parts of a problem. Reinforcement learning is also supercharged by our declarative memory system, which tracks unique objects and events. When large brains evolved in primates, the increased memory capacity greatly enhanced the ability to make more complex decisions, leading to longer sequences of actions to achieve goals. We are the only species to create an educational system and to consign ourselves to years of instruction and tests. Delayed gratification can extend into the distant future, in some cases extending into an imagined afterlife, a tribute to the power of dopamine to control behavior.

At the beginning of the cognitive revolution in the 1960s the brightest minds could not imagine that reinforcement learning could underlie intelligent behavior. Minds are not reliable. Nature is more clever than we are.

Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, Alone Together

Transitional Objects

I was a student in psychology in the mid-1970s at Harvard University. The grand experiment that had been "Social Relations" at Harvard had just crumbled. Its ambition had been to bring together the social sciences in one department, indeed, most in one building, William James Hall. Clinical psychology, experimental psychology, physical and cultural anthropology, and sociology, all of these would be in close quarters and intense conversation.

But now, everyone was back in their own department, on their own floor. From my point of view, what was most difficult was that the people who studied thinking were on one floor and the people who studied feeling were on another.

In this Balkanized world, I took a course with George Goethals in which we learned about the passion in thought and the logical structure behind passion. Goethals, a psychologist who specialized in adolescence, was teaching a graduate seminar in psychoanalysis. Goethals' focus was on a particular school of analytic thought: British object relations theory. This psychoanalytic tradition kept its eye on a deceptively simple question: How do we bring people and what they mean to us "inside" us? How do these internalizations cause us to grow and change? The "objects" of its name were, in fact, people.

Several classes were devoted to the work of Donald Winnicott and his notion of the transitional object. Winnicott called transitional the objects of childhood—the stuffed animals, the bits of silk from a baby blanket, the favorite pillows—that the child experiences as both part of the self and part of external reality. Winnicott writes that such objects mediate between the child's sense of connection to the body of the mother and a growing recognition that he or she is a separate being. The transitional objects of the nursery are all destined to be abandoned. Yet, says Winnicott, they leave traces that will mark the rest of life. Specifically, they influence how easily an individual develops a capacity for joy, aesthetic experience, and creative playfulness. Transitional objects, with their joint allegiance to self and other, demonstrate to the child that objects in the external world can be loved.

Winnicott believes that during all stages of life we continue to search for objects we experience as both within and outside the self. We give up the baby blanket, but we continue to search for the feeling of oneness it provided. We find it in moments of feeling "at one" with the world, what Freud called the "oceanic feeling." We find these moments when we are at one with a piece of art, a vista in nature, a sexual experience.

As a scientific proposition, the theory of the transitional object has its limitations. But as a way of thinking about connection, it provides a powerful tool for thought. Most specifically, it offered me a way to begin to understand the new relationships that people were beginning to form with computers, something I began to study in the late 1970s and early 1980s. From the very beginning, as I began to study the nascent digital culture, I could see that computers were not "just tools." They were intimate machines. People experienced them as part of the self, separate but connected to the self.

A novelist using a word processing program referred to "my ESP with the machine. The words float out. I share the screen with my words." An architect who used the computer to design went even further: "I don't see the building in my mind until I start to play with shapes and forms on the machine. It comes to life in the space between my eyes and the screen."

After studying programming, a thirteen-year-old girl said that, when working with the computer, "there's a little piece of your mind and now it's a little piece of the computer's mind, and you come to see yourself differently." A programmer talked about his "Vulcan mind meld" with the computer.

When, in the late 1970s, I began to study the computer's special evocative power, my time with George Goethals and the small circle of Harvard graduate students immersed in Winnicott came back to me. Computers served as transitional objects. They brought us back to the feelings of being "at one" with the world. Musicians often hear the music in their minds before they play it, experiencing the music from within and without. The computer can similarly be experienced as an object on the border between self and not-self. Just as musical instruments can be extensions of the mind's construction of sound, computers can be extensions of the mind's construction of thought.

This way of thinking about the computer as an evocative object puts us on the inside of a new inside joke. For when psychoanalysts talked about object relations, they had always been talking about people. From the beginning, people saw computers as "almost-alive" or "sort of alive." With the computer, object relations psychoanalysis can be applied to, well, objects. People feel at one with video games, with lines of computer code, with the avatars they play in virtual worlds, with their smartphones. Classical transitional objects are meant to be abandoned, their power recovered in moments of heightened experience. When our current digital devices take on the power of transitional objects, a new psychology comes into play. These digital objects are never meant to be abandoned. We are meant to become cyborg.

Forty-five years ago, some social psychological experiments posed story problems that assessed people's willingness to take risks (for example, what odds of success should a budding writer have in order to forego her sure income and attempt writing a significant novel?). To everyone's amazement, group discussions in various countries led people to advise more risk, setting off a wave of speculation about group risk taking by juries, business boards, and the military.

Alas, some other story problems surfaced for which group deliberation increased caution (should a young married parent with two children gamble his savings on a hot stock tip?).

Out of this befuddlement—does group interaction increase risk, or caution?—there emerged a deeper principle of simple elegance: group interaction tends to amplify people's initial inclinations (as when advising risk to the novelist, and caution in the investing).

This "group polarization" phenomenon was then repeatedly confirmed. In one study, relatively prejudiced and unprejudiced students were grouped separately and asked to respond—before and after discussion—to racial dilemmas, such as a conflict over property rights versus open housing. Discussion with like-minded peers increased the attitude gap between the high- and low-prejudiced groups.

Fast forward to today. Self-segregation with kindred spirits is now rife. With increased mobility, conservative communities attract conservatives and progressive communities attract progressives. As Bill Bishop has documented, the percentage of landslide counties—those voting 60 percent or more for one presidential candidate—nearly doubled between 1976 and 2008. And when neighborhoods become political echo chambers, the consequence is increased polarization, as David Schkade and colleagues demonstrated by assembling small groups of Coloradoans in liberal Boulder and conservative Colorado Springs. The community discussions of climate change, affirmative action, and same-sex unions pushed Boulder participants further leftward and Colorado Springs participants further rightward.

Terrorism is group polarization writ large. Virtually never does it erupt suddenly as a solo personal act. Rather, terrorist impulses arise among people whose shared grievances bring them together. In isolation from moderating influences, group interaction becomes a social amplifier.

The Internet accelerates opportunities for like-minded peacemakers and neo-Nazis, geeks and goths, conspiracy schemers and cancer survivors, to find and influence one another. When socially networked, birds of a feather find their shared interests, attitudes, and suspicions magnified.

Independent Investigator and Theoretician; Author, The Nurture Assumption; No Two Alike: Human Nature and Human Individuality

True or False: Beauty Is Truth

"Beauty is truth, truth beauty," said John Keats. But what did he know? Keats was a poet, not a scientist.

In the world that scientists inhabit, truth is not always beautiful or elegant, though it may be deep. In fact, it's my impression that the deeper an explanation goes, the less likely it is to be beautiful or elegant.

Some years ago, the psychologist B. F. Skinner proposed an elegant explanation of "the behavior of organisms," based on the idea that rewarding a response—he called it reinforcement—increases the probability that the same response will occur again in the future. The theory failed, not because it was false (reinforcement generally does increase the probability of a response) but because it was too simple. It ignored innate components of behavior. It couldn't even handle all learned behavior. Much behavior is acquired or shaped through experience, but not necessarily by means of reinforcement. Organisms learn different things in different ways.

The theory of the modular mind is another way of explaining behavior—in particular, human behavior. The idea is that the human mind is made up of a number of specialized components, often called modules, working more or less independently. These modules collect different kinds of information from the environment and process it in different ways. They issue different commands—occasionally, conflicting commands. It's not an elegant theory; on the contrary, it's the sort of thing that would make Occam whip out his razor. But we shouldn't judge theories by asking them to compete in a beauty pageant. We should ask whether they can explain more, or explain better, than previous theories were able to do.

The modular theory can explain, for example, the curious effects of brain injuries. Some abilities may be lost while others are spared, with the pattern differing from one patient to another.

More to the point, the modular theory can explain some of the puzzles of everyday life. Consider intergroup conflict. The Montagues and the Capulets hated each other; yet Romeo (a Montague) fell in love with Juliet (a Capulet). How can you love a member of a group, yet go on hating that group? The answer is that two separate mental modules are involved. One deals with groupness (identification with one's group and hostility toward other groups), the other specializes in personal relationships. Both modules collect information about people, but they do different things with the data. The groupness module draws category lines and computes averages within categories; the result is called a stereotype. The relationship module collects and stores detailed information about specific individuals. It takes pleasure in collecting this information, which is why we love to gossip, read novels and biographies, and watch political candidates unravel on our TV screens. No one has to give us food or money to get us to do these things, or even administer a pat on the back, because collecting the data is its own reward.

The theory of the modular mind is not beautiful or elegant. But not being a poet, I prize truth above beauty.

Professor of Genomics, The Scripps Translational Science Institute; Author, The Patient Will See You Now

Seeing Is Believing: From Placebos To Movies In Our Brain

Our brain of one hundred billion neurons and a quadrillion synapses, give or take a few billion here or there, has to be considered one of the most complex entities to demystify. And that may be a good thing, since we don't necessarily want others to be able to read our minds; that would not only be regarded as terribly invasive but would also take the recent megatrend of transparency much too far.

But the ability to use functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) to image the brain and construct sophisticated activation maps is fulfilling the "seeing is believing" aphorism for any skeptics. One of the longest controversies in medicine has been whether the placebo effect, a notoriously complex, mind-body end product, has a genuine biological mechanism. That now seems to be resolved with the recognition that the opioid drug pathway—the one that is induced by drugs like morphine and OxyContin—shares the same brain activation pattern as seen with the administration of placebo for pain relief. And just as we have seen neuroimaging evidence of dopamine "squirts" from our Web-based networking and social media engagement, dopamine release from specific regions of the brain has been directly visualized after administering a placebo to patients with Parkinson's disease. Indeed, the upgrading of the placebo effect to having discrete, distinguishable psychobiological mechanisms has even evoked the notion of deliberately administering placebo medications as therapeutics, and Harvard recently set up a dedicated institute, "The Program in Placebo Studies and the Therapeutic Encounter."

The decoding of the placebo effect seems to be just a nascent step along the way to the more ambitious quest of mind reading. This past summer a group at UC Berkeley was able to show, via reconstructing brain imaging activation maps, a reasonable facsimile of short YouTube movies that were shown to individuals. In fact, it is pretty awe-inspiring and downright scary to see the resemblance of the frame-by-frame comparison of the movie shown and what was reconstructed from the brain imaging.

Coupled with the new initiative of developing miniature, eminently portable MRIs, are we on the way to watching our dreams in the morning on our iPad? Or, even more worrisome, to having others see the movies in our brains? I wonder what placebo effect that might have.

Moore's Law originated in a four-page 1965 magazine article written by Gordon Moore, then at Fairchild Semiconductor and later one of the founders of Intel. In it he predicted that the number of components on a single integrated circuit would rise from the then-current number of roughly two to the sixth power, to roughly two to the sixteenth power in the following ten years, i.e., the number of components would double every year. He based this on four empirical data points and one null data point, fitting a straight line on a graph plotting the log of the number of components on a single chip against a linear scale of calendar years. Intel later amended Moore's Law to say that the "number of transistors on a chip roughly doubles every two years."
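Moore's extrapolation is easy to verify by arithmetic. A minimal sketch — the figures of two to the sixth and two to the sixteenth come from the paragraph above; the helper function and its name are purely illustrative:

```python
# Moore's 1965 extrapolation: components per chip double every year.
# Check that yearly doubling carries ~2^6 components in 1965 to ~2^16 in 1975.
start_components = 2 ** 6        # roughly 64 components in 1965
years = 10
doubled = start_components * 2 ** years
assert doubled == 2 ** 16        # roughly 65,536 components predicted for 1975

def components(start, years, doubling_period):
    """Components per chip after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

print(components(64, 10, 1))     # Moore's original one-year doubling: 65536.0
print(components(64, 10, 2))     # Intel's amended two-year doubling: 2048.0
```

The gap between the two printed figures shows how sensitive the law is to the doubling period: stretching it from one year to two cuts a decade's growth from a factor of 1,024 to a factor of 32.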

Moore's law is rightly seen as the fundamental driver of the information technology revolution in our world over the last fifty years as doubling the number of transistors every so often has made our computers twice as powerful for the same price, doubled the amount of data they can store or display, made them twice as fast, made them smaller, made them cheaper and in general improved them in every possible way by a factor of two on a clockwork schedule.

But why does it happen? Automobiles have not obeyed Moore's Law; neither have batteries, nor clothing, nor food production, nor the level of political discourse. All but the last have demonstrably improved due to the influence of Moore's Law, but none have had the same relentless exponential improvements.

The most elegant explanation for what makes Moore's Law possible is that digital logic is all about an abstraction, and in fact a one-bit abstraction, a yes/no answer to a question, and that abstraction is independent of physical bulk.

In a world that consists entirely of piles of red sand and piles of green sand, the size of the piles is irrelevant. A pile is either red or green, and you can take away half the pile, and it is still either a pile of red sand or a pile of green sand. And you can take away another half, and another half, and so on, and still the abstraction is maintained. And repeated halving at a constant rate makes an exponential.

That is why Moore's Law works on digital technology, and doesn't work on technologies that require physical strength, or physical bulk, or must deliver certain amounts of energy. Digital technology uses physics to maintain an abstraction and nothing more.

Some caveats do apply:

1. In his short paper Moore expressed some doubt as to whether his prediction would hold for linear, rather than digital, integrated circuits as he pointed out that by their nature, "such elements require the storage of energy in a volume" and that the volume would necessarily be large.

2. It does matter when you get down to piles of sand with just one grain, and then technology has to shift and you need to use some new physical property to define the abstraction—such technology shifts have happened again and again, in the maintenance of Moore's Law over almost fifty years.

3. It does not explain the sociology of how Moore's Law is implemented and what determines the time constant of a doubling, but it does explain why exponentials are possible in this domain.

Recipient, Nobel Prize in Physiology or Medicine, 2002; Professor of Biochemistry and Molecular Biophysics, Columbia University; Author, The Age of Insight; In Search of Memory

How Psychotherapy Can Be Placed On A Scientific Basis: 5 Easy Lessons

How did psychoanalysis, once a major mode for treating non-psychotic mental disorders, fall so badly in the estimation of the medical community in the United States and in the estimation of the public at large? How could this be reversed? Let me try to address this question by putting it in a bit of historical perspective.

While an undergraduate at Harvard College I was drawn to psychiatry—and specifically to psychoanalysis. During my training, from 1960 to 1965, psychotherapy was the major mode of treating mental illness. This therapy was derived from psychoanalysis and was based on the belief that one needed to understand mental symptoms in terms of their historical roots in childhood. These therapies tended to take years, and neither the outcome nor the mechanisms were studied systematically, because this was thought to be very difficult. Psychotherapy—and, in the limit, psychoanalysis—when successful allowed people to work a bit better and to love a bit better, and these were dimensions that were thought to be difficult to measure.

In the 1960s Aaron Beck changed all that by introducing five major obvious, but nevertheless elegant and beautiful innovations:

First, he introduced instruments for measuring mental illness. Up until the time of Beck's work, psychiatric research was hampered by a dearth of techniques for operationalizing the various disorders and measuring their severity. Beck developed a number of instruments, beginning with a Depression Inventory, a Hopelessness Scale, and a Suicide Intent Scale. These scales helped to objectify research in psychopathology and facilitated the establishment of better clinical outcome trials.

Second, Beck introduced a new short-term, evidence-based therapy he called Cognitive Behavioral Therapy.

Third, Beck manualized the treatments. He wrote a cookbook so the method could be reliably taught to others. You and I could in principle learn to do Cognitive Behavioral Therapy.

Fourth, he carried out, with the help of several colleagues, progressively better-controlled studies which documented that Cognitive Behavioral Therapy worked more effectively than placebo and as effectively as antidepressants in mild and moderate depression. In severe depression it did not act as effectively as an antidepressant but acted synergistically with antidepressants to enhance recovery.

Fifth and finally, Beck's work was picked up by Helen Mayberg, another one of my heroes in psychiatry. She carried out fMRI studies of depressed patients and discovered that Brodmann area 25 was a focus of abnormal activity in depression. She went on to find that if—and only if—a patient responded to cognitive behavior therapy or to SSRI antidepressants (selective serotonin reuptake inhibitors), this abnormality reverted to normal.

What I find so interesting in this recital is the Edge question: What elegant, deep explanation did Aaron Beck bring to his work that differentiated him from the rest of my generation of psychotherapists and allowed him to be so original?

Aaron Beck trained as a psychoanalyst in Philadelphia, but soon became impressed with the radical idea that the central issue in many psychiatric disorders is not unconscious conflict but distorted patterns of thinking. Beck conceived of this novel idea from listening with a critical—and open—mind to his patients with depression. In his early work on depression Aaron set out to test a specific psychoanalytic idea: that depression was due to "introjected anger." Patients with depression, it was argued, experienced deep hostility and anger toward someone they loved. They could not deal with having hostile feelings toward someone they valued and so they would repress their anger and direct it inward toward themselves. Beck tested this idea by comparing the dreams—the royal road to the unconscious—of depressed patients with those of non-depressed patients and found that in their dreams depressed patients showed—if anything—less hostility than non-depressed patients. Instead Beck found that in their dreams as in their waking lives depressed patients have a systematic negative bias in their cognitive style, in the way they thought about themselves and their future. They saw themselves as "losers."

Aaron saw these distorted patterns of thinking not simply as a symptom—a reflection of a conflict lying deep within the psyche—but as a key etiological agent in maintaining the disorders.

This led Beck to develop a systematic psychological treatment for depression that focused on distorted thinking. He found that by increasing the patients' objectivity regarding their misinterpretation of situations or their cognitive distortions and their negative expectancies, the patients experienced substantial shifts in their thinking and subsequently improvements in their affect and behavior.

In the course of his work on depression Beck focused on suicide and provided for the first time a rational basis for the classification and assessment of suicidal behaviors that made it possible to identify high-risk individuals. His prospective study of 9,000 patients led to the formulation of an algorithm for predicting future suicide that has proven to have high predictive power. Of particular importance was his identification of clinical and psychological variables such as hopelessness and helplessness to predict future suicides. These proved to be better predictors of suicide than clinical depression per se. Beck's work on suicide, and that of others such as John Mann at Columbia, demonstrated that a short-term cognitive intervention can significantly reduce subsequent suicide attempts when compared to a control group.

In the 1970s, Beck carried out the randomized controlled trials I referred to earlier. Later, the NIMH did similar trials, and together these established cognitive therapy as the first-ever psychological treatment that could objectively be shown to be effective in clinical depression.

As soon as cognitive therapy had been found to be effective in the treatment of depression, Beck turned to other disorders. In a number of controlled clinical trials he demonstrated that cognitive therapy is effective in panic disorder, posttraumatic stress disorder, and obsessive-compulsive disorder. In fact, even earlier than Helen Mayberg's work on depression, Lewis Baxter at UCLA had imaged patients with obsessive-compulsive disorder and found they had an abnormality in the caudate nucleus that was reversed when patients improved with cognitive behavioral therapy.

Aaron Beck has recently turned his attention to patients with schizophrenia—and has found that cognitive therapy helps improve their cognitive and negative symptoms, particularly their motivational deficits. Another amazing advance.

So—the answer to the decline of psychoanalysis may not simply lie in the limitations of Freud's thought—but perhaps much more so in the lack of a deep, critical scientific attitude in many of the subsequent generation of therapists. I have little doubt that insight therapy is extremely useful, and there are studies that support that contention. But an elegant, deep, and beautiful proof requires putting a set of highly validated approaches together to make the point in a convincing manner, and perhaps even an idea of how the therapeutic result is achieved.

This central idea has shaped our ideas of modern cosmology, given us the image of the expanding universe, and has led to remarkable understandings—such as the apparent presence of a black hole four million times the mass of the sun at the center of our galaxy. It even offers a possible explanation of the origin of our Universe—as quantum tunneling from "nothing."

This idea lies at the heart of Einstein's General Theory of Relativity, which is still our best understanding of gravity, after nearly 100 years. Its essence is embodied in Wheeler's famous words: "matter tells spacetime how to curve, and curved spacetime tells matter how to move." The equations expressing this are even simpler, once one has understood the background math. The theory exudes simple, essential beauty.

But when brought together with quantum mechanics, an epic conflict between the two theories results. Apparently, they cannot both be right. And the lessons of black holes—and Hawking's discovery that they ultimately explode—seem to teach us that quantum mechanics must win, and classical spacetime is doomed.

We do not yet know the full shape of the quantum theory providing a complete accounting for gravity. We do have many clues, from studying the early quantum phase of cosmology, and ultrahigh energy collisions that produce black holes and their subsequent disintegrations into more elementary particles. We have hints that the theory draws on powerful principles of quantum information theory. And, we expect that in the end it has a simple beauty, mirroring the explanation of gravity-as-curvature, from an even more profound depth.

In a thriller novel, the explanation comes at the end. In a newspaper article, it usually comes at the beginning. In an executive summary meant to be read by top management, the explanation comes at the beginning of the memo. And in a scientific paper, a summary, with findings and hypotheses, is presented at the beginning. There is not a single aesthetics of explanations. True, their beauty, depth, and elegance always rely on the beauty, depth, and elegance of the question they answer; but the way the answer is introduced depends on conventional wisdom in different disciplines.

What changes the structure of the questioning-answering conventions?

The major difference is probably in the importance of context.

In entertainment, in a novel or in a movie, the context is the world of meanings created by storytellers. Questions appear as surprising twists in the context description. And, mastering the whole thing, the storyteller lets the reader enjoy an entertaining experience by explaining everything at the end. In science, in the news, in a company, the context is a world of meanings that is already present in the mind of the reader; the storyteller doesn't master the whole thing, everybody feels part of the story, and the correct approach is to explain everything as soon as possible, and then to share all the specific findings to help everybody evaluate the quality of the explanation.

But what happens when a really great scientific or economic breakthrough needs to be proposed and shared? What happens when an important new notion that will change the paradigm of its discipline is to be explained? And what happens when something even changes the world of meanings in which the discipline is accustomed to develop?

When Nicolaus Copernicus wrote his masterpiece, De revolutionibus orbium coelestium, he had to make a choice. After having dedicated his work to Pope Paul III, he started the first book introducing a vision of the universe based on his heliocentric idea. He continued with three books about mathematics, descriptions of stars, and the movements of the Sun and the Moon. Only at the end did he explain his new system and how to calculate the movements of all astronomical objects in a heliocentric model.

That was a deep, elegant, and beautiful explanation of a historic change. It was an explanation that had to create a new vision of everything, a new paradigm.

Professor of Neurobiology and Psychiatry, UCSF; Author, Making Sense of People

Personality Differences: The Importance of Chance

In the golden age of Greek philosophy Theophrastus, Aristotle's successor, posed a question for which he is still remembered: "Why has it come about that, albeit the whole of Greece lies in the same clime, and all Greeks have a like upbringing, we have not the same constitution of character [personality]?" The question is especially noteworthy because it bears on our sense of who each of us is, and we now know enough to offer an answer: each personality reflects the activities of brain circuits that gradually develop under the combined direction of the person's unique set of genes and experiences. What makes the implications of this answer so profound is that they lead to the inescapable conclusion that personality differences are greatly influenced by chance events.

Two types of chance events influence the genetic contribution to personality. The first, and most obvious, is the events that brought together the person's mom and dad. Each of them has a particular collection of gene variants—a personal sample of the variants that have accumulated in the collective human genome—and the two parental genetic repertoires set limits on the possible variants that can be transmitted to their offspring. The second chance event is the hit-or-miss union of the particular egg and sperm that make the offspring, each of which contains a random selection of half of the gene variants of each parent. It is the interactions of the resultant unique mixture of maternal and paternal gene variants that play a major part in the 25-year-long developmental process that builds the person's brain and personality. So two accidents of birth—the parents who conceive us, and the egg-sperm combinations that make us—have decisive influences on the kinds of people we become.

But genes don't act alone. Although there are innate programs of gene expression that continue to unfold through early adulthood to direct the construction of rough drafts of brain circuits, these programs are specifically designed to incorporate information from the person's physical and social world. Some of this adaptation to the person's particular circumstances must come at specific developmental periods, called critical periods. For example, the brain circuits that control the characteristic intonations of a person's native language are only open for environmental input during a limited window of development.

And just as chance influences the particular set of genes we are born with, so does it influence the particular environment we are born into. Just as our genes incline us to be more or less friendly, or confident, or reliable, the worlds we grow up in incline us to adopt particular goals, opportunities and rules of conduct. The most obvious aspects of these worlds are cultural, religious, social, and economic, each transmitted by critical agents: parents, siblings, teachers, and peers. And the specific content of these important influences—the specific era, place, culture etc. we happen to have been born into—is as much a toss of the dice as the specific content of the egg and sperm that formed us.

Of course, chance is not fate. Recognizing that chance events contribute to individual personality differences doesn't mean each life is predetermined or that there is no free will. The personality that arises through biological and socio-cultural accidents of birth can be deliberately modified in many ways, even in maturity. Nevertheless, the chance events that direct brain development in our first few decades leave enduring residues.

When thinking about a particular personality it is, therefore, helpful to be aware of the powerful role that chance played in its construction. Recognizing the importance of chance in our individual differences doesn't just remove some of their mystery. It can also have moral consequences by promoting understanding and compassion for the wide range of people with whom we share our lives.

My favorite explanation in science is the principle of inertia. It explains why the earth moves in spite of the fact that we don't feel any motion, which was perhaps the most counterintuitive revolutionary step taken in all of science. It was first proposed by Galileo and Descartes and has been the core of all the successful explanations in physics in the centuries since.

The principle of inertia is the answer to a very simple question: how would an object that is free, in the sense that no external influences or forces affect its motion, move?

This is a seemingly simple question, but notice that to answer it we have to have in mind a definition of motion. What does it mean for something to move?

The modern conception is that motion has to be described relative to an observer.

Consider an object that is sitting at rest relative to you, say a cat sleeping on your lap, and imagine how it appears to move as seen by other observers. Depending on how the observer is moving the cat can appear to have any motion at all. If the observer spins relative to you, the cat will appear to them to spin.

So to make sense of the question of how free objects move we have to refer to a special class of observers. The answer to the question is the following:

There is a special class of observers, relative to whom all free objects appear to either be at rest or to move in straight lines with constant speeds.

I have just stated the principle of inertia.

The power of this principle is that it is completely general. Once a special observer sees one free object move in a straight line with constant speed, she will observe all other free objects to so move.

Furthermore, suppose you are a special observer. Any observer who moves in a straight line at a constant speed with respect to you will also see the free objects move at a constant speed in a straight line, with respect to them.

So these special observers form a big class, all of which are moving with respect to each other. These special observers are called inertial observers.

An immediate and momentous consequence is that there is no absolute meaning to being at rest. An object may be at rest with respect to one inertial observer, but other inertial observers will see it moving, always in a straight line at constant speed. This can be formulated as a principle:

There is no way, by observing objects in motion, to distinguish observers at rest from other inertial observers.

Thus, any inertial observer has equal rights to say they are at rest and it is the others that are moving.

This is called Galileo's principle of relativity. It explains why the Earth may be moving without us observing gross effects.

To appreciate how revolutionary this was notice that physicists of the 16th Century could disprove Copernicus's claim that the Earth moves by a simple observation. Just drop a ball from the top of a tower. Were the Earth rotating around its axis and revolving around the Sun at the speeds Copernicus required, the ball would land far from the base of the tower. QED. The Earth is at rest.

But this proof assumes that motion is absolute, defined with respect to a special observer at rest, with respect to whom objects with no forces on them come to rest. By altering the definition of motion, Galileo could argue that the very same experiment that previously proved that the Earth is at rest now demonstrates that the Earth could be moving.
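The tower argument can be made concrete with a small numerical sketch (the numbers here are hypothetical and the model is a simple Galilean toy calculation, not an orbital one): in any inertial frame, the dropped ball shares the tower's horizontal velocity, so it lands at the base no matter how fast that frame says the Earth is moving.

```python
# Toy Galilean calculation: a ball dropped from a tower lands at the
# tower's base in every inertial frame, because ball and tower share
# the same horizontal velocity. All numbers are illustrative.
g = 9.8        # m/s^2, downward acceleration
h = 45.0       # m, tower height (hypothetical)
v = 30000.0    # m/s, Earth's speed as judged by some inertial observer

t_fall = (2 * h / g) ** 0.5   # time for the ball to reach the ground

# Tower frame: no horizontal motion at all.
# Moving-observer frame: ball and tower base both drift at speed v.
ball_x = v * t_fall           # ball's horizontal position at landing
base_x = v * t_fall           # tower base's position at the same moment

print(ball_x - base_x)        # 0.0 — the ball lands at the base
```

The separation between ball and landing point is zero in both frames, which is why the experiment cannot distinguish a moving Earth from one at rest.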

The principle of inertia was not just the core of the scientific revolutions of the 17th Century. It contained the seeds of revolutions to come. To see why, notice the qualifier in the statement of the principle of relativity: "by observing objects in motion." For many years it was thought that there would be other kinds of observations that could be used to tell which inertial observers are really moving and which are really at rest. Einstein constructed his theory of special relativity simply by removing this qualifier. Einstein's principle of relativity states:

There is no way to distinguish observers at rest from other inertial observers.

And there was more. A decade after special relativity, the principle of inertia was also the seed for the next revolution—the discovery of general relativity. The principle was generalized by replacing "moving in a straight line with constant speed" with "moving along a geodesic in spacetime." A geodesic is the generalization of a straight line in a curved geometry—it is the shortest distance between two points. So now the principle of inertia reads:

There is a special class of observers, relative to whom all free objects appear to either be at rest or to move along geodesics in spacetime. These are observers who are in free fall in a gravitational field.

And there is a consequent generalization of the principle of relativity:

There is no way to distinguish observers in free fall from each other.

This becomes Einstein's equivalence principle that is the core of his general theory of relativity.

But is the principle of inertia really true? So far it has been tested in circumstances where the energy of motion of a particle is as much as eleven orders of magnitude greater than its mass. This is pretty impressive, but there is still a lot of room for the principle of inertia and its twin, the principle of relativity, to fail. Only experiment can tell us if these principles or their failures will be the core of revolutions in science to come.

But whatever happens, no other explanation in science besides the principle of inertia has survived unscathed for so long, nor proved valid over such a range of scales, nor has any other been the seed of several scientific revolutions separated by centuries.

Professor Emeritus of Psychology at Stanford University; Author, The Lucifer Effect: Understanding How Good People Turn Evil

Time Perspective Theory

I am here to tell you that the most powerful influence on our every decision, one that can lead to significant action outcomes, is something most of us are totally unaware of and, at the same time, the most obvious psychological concept imaginable.

I am talking about our sense of psychological time, more specifically, the way our decisions are framed by the time zones we have learned to prefer and tend to overuse. We all live in multiple time zones, learned from childhood, shaped by education, culture, social class, and experiences with economic and family stability or instability. Most of us develop a biased temporal orientation that favors one time frame over the others, becoming excessively oriented to past, present, or future.

Thus, at decision time for major or minor judgments, some of us are totally influenced by factors in the immediate situation: the stimulus qualities, what others are doing, saying, and urging, and one's biological urges. Others facing the same decision matrix ignore all those present qualities by focusing instead on the past: the similarities between current and prior settings, remembering what was done and its effects. Finally, a third set of decision makers ignores the present and the past by focusing primarily on the future consequences of current actions, calculating costs vs. gains.

To complicate matters, there are subdomains of each of these primary time zones. Some past-oriented people tend to focus on the negatives in their earlier experiences (regret, failure, abuse, trauma), while others are primarily past-positive, focusing instead on the good old days (nostalgia, gratitude, successes). There are two ways to be present-oriented: living in the present-hedonistic domain of seeking pleasure, novelty, and sensation, or being present-fatalistic, living in a default present in the belief that nothing one does can change one's future life. Future-oriented people set goals, plan strategies, and tend to be successful; but another future focus is the transcendental future, in which life begins after the death of the mortal body.

My interest in Time Perspective Theory inspired me to create an inventory that makes it possible to determine exactly the extent to which we fit into each of these six time zones. The Zimbardo Time Perspective Inventory, or ZTPI, correlates scores on these time dimensions with a host of other psychological traits and behaviors. We have demonstrated that Time Perspective has a major impact across a vast domain of human nature. In fact, some of the relationships uncovered reveal correlation coefficients much greater than any seen in traditional personality assessment. For example, Future orientation correlates .70 with the trait of conscientiousness, which in turn predicts longevity. Present Hedonism correlates .70 with sensation seeking and novelty seeking. Those high on Past Negative are most likely to be high on measures of anxiety, depression, and anger, with correlations as robust as .75. Similarly substantial correlations are uncovered between Present Fatalism and these measures of personal distress. I should add that this confirmatory factor analysis was conducted on a sample of functioning college students, so such effects should be cause for alarm among counselors. Beyond mere correlations of scale measures, the ZTPI scales predict a wide range of behaviors: course grades, risk taking, alcohol and drug use and abuse, environmental conservation, medical checkups, creativity, problem solving, and much more.

Finally, one of the most surprising discoveries is the application of Time Perspective Theory to time therapy in "curing" PTSD in veterans, as well as in sexually abused women and civilians suffering from the trauma of motor vehicle fatalities. Dr. Richard Sword and Rosemary Sword have been treating, with remarkably positive outcomes, a number of veterans from all recent US wars as well as civilian clients. The core of the treatment replaces the Past Negative and Present Fatalistic biased time zones common to those suffering from PTSD with a balanced time perspective that highlights the critical role of the hope-filled future, adds in some selected present hedonism, and introduces memories of a Past Positive nature. In a sample of 30 PTSD vets of varying ages and ethnicities, treated with Time Perspective Therapy for relatively few sessions (compared to traditional cognitive behavioral therapies), dramatic positive changes have been found on all standard PTSD assessments, as well as in life-changing social and professional relationships. It is so rewarding to see many of our honored veterans, who have continued to suffer for decades from their severe combat-related traumas, discover a new life rich with opportunities, friends, family, fun, and work by being exposed to this simple, elegant reframing of their mental orientation toward the life of their time.

Kepler's Use Of The Platonic Solids To Explain The Relative Distance Of Planets From The Sun

In 1595 Johannes Kepler proposed a deep, elegant and beautiful solution to the problem of determining the distances from the Sun of the six then-known planets. Nesting each of the five Platonic solids within a sphere, as in the case of Russian dolls, and arranging those solids in the proper order—octahedron, icosahedron, dodecahedron, tetrahedron, cube—he proposed that the succession of spherical radii would have the same relative ratios as the planetary distances. Of course the deep, elegant and beautiful solution was also wrong, but then, as Joe E. Brown famously said in the last line of Some Like It Hot, "Nobody's perfect."

Two thousand years earlier, in a notion that would come to be commonly described as the Harmony of the Spheres, Pythagoras had already sought a solution by relating those distances to the points on a string at which it needed to be plucked in order to produce notes pleasing to the ear. And almost two hundred years after Kepler's suggestion, Johann Bode and Johann Titius offered, without an underlying explanation for its existence, a simple numerical formula that supposedly fit the distances in question. So we see that Kepler's explanation was neither the first nor the last attempt to explain the ratios of planetary orbit radii, but in its linking of dynamics to geometry, it remains for me the deepest as well as the simplest and most elegant.

In a sense, none of the three proposals is strictly wrong. They are instead solutions to a problem that doesn't exist, for we now understand that the location of the planets is purely accidental, a byproduct of how the swirling disk of dust that circled our early Sun evolved under the force of gravity into its present configuration. The realization that there was no problem came as our view expanded from one in which our own planetary system was central to a far greater vision in which it was one of an almost limitless number of such systems scattered throughout the vast numbers of galaxies comprising our Universe.

I have been thinking about this because, together with many of my fellow theoretical physicists, I have spent a good part of my career searching for an explanation of the masses of the so-called elementary particles. But perhaps the reason it has eluded us is a proposal that is increasingly gaining credence, namely that our own visible Universe is only a random example of an essentially infinite number of Universes, all of which contain quarks and leptons with masses taking different values. It just happens that in at least one of those Universes, the values allow for there being at least one star and one planet where creatures that worry about such problems have come to life.

In other words, a problem that we thought was central may once again have ceased to exist as our conception of the Universe has grown, in this case extended to one of many Universes. If this is indeed true, what grand vistas may lie before us in the future? I only hope that our descendants may have a much deeper understanding of these problems than we do and that they will smile at our feeble attempts to provide a deep, elegant and beautiful solution to what they have recognized is a non-existent problem.

In art, the title of a work can often be its first explanation. And in this context I am thinking especially of the titles of Gerhard Richter. In 2006, when I visited Richter in his studio in Cologne, he had just finished a group of six corresponding abstract paintings, to which he gave the title "Cage".

There are many relations between Richter's painting and the compositions of John Cage. In a book about the "Cage" series, Robert Storr has traced them from Richter's attendance at a Cage performance at the "Festum Fluxorum Fluxus" in Düsseldorf in 1963 to analogies in their artistic processes. Cage often applied chance procedures in his compositions, notably with the use of the "I Ching". Richter in his abstract paintings also intentionally allows effects of chance. In these paintings, he applies the oil paint to the canvas by means of a large squeegee. He selects the colors on the squeegee, but the actual trace the paint leaves on the canvas is to a large extent the outcome of chance. The result then forms the basis for Richter's decisions about how to continue with the next layer. In such an inclusion of 'controlled chance' an artistic similarity between Cage and Richter can be found. In addition to the reference to John Cage, Richter's title "Cage" can also be a visual association, as the six paintings have a very hermetic, almost impermeable appearance. The title points to different layers of meaning.

Beyond the abstract paintings, analogies to Cage can also be found in other of Richter's works. His artist book "Patterns" is my favorite book of the year 2011. It shows Richter's experiment of taking an image of his "Abstract Painting [CR: 724-4]" and dividing it vertically into strips: first 2, then 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, up to 4096 strips. This methodology yields 8190 strips in all. Throughout the process the strips become thinner and thinner. The strips are then mirrored and repeated, which leads to a diversity of patterns. The outcomes are 221 patterns, published on 246 double-page images. In "Patterns" Richter set the precise rules but didn't manipulate the outcome, so that the pictures are again an interaction of a defined system and chance.
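The arithmetic of Richter's rule checks out: the successive divisions are powers of two, and summing them gives the 8190 strips the book reports. A quick sketch:

```python
# Successive vertical divisions in Richter's "Patterns": 2, 4, 8, ..., 4096.
divisions = [2 ** k for k in range(1, 13)]

# Sum of a geometric series: 2 + 4 + ... + 4096 = 2**13 - 2.
total_strips = sum(divisions)

print(divisions[-1])   # 4096
print(total_strips)    # 8190
```

The defined system (doubling twelve times) fixes the totals exactly; only the visual content of each strip is left to chance.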

"Patterns" is one of many outstanding artist books Richter has made over the last couple of years, such as "Wald" (2008) or "Ice" (1981), which includes a special layout by the artist with his stunning photos of a trip to the Antarctic. The layout of those books is composed of intervals, with different arrangements of the photos but also blank spaces, like pauses. Richter told me that this layout has to do with music, Cage, and silence.

In 2007 Richter designed a twenty-meter-high arched stained-glass window to fill the south transept of Cologne Cathedral. The "Cologne Cathedral Window" comprises 11,000 hand-blown squares of glass in seventy-two colors derived from the palette of the original Medieval glazing that was destroyed during the Second World War. Half of the squares were allotted by a random generator, the other half then were like a mirror image to them. Control is once more ceded here to some extent, suggesting his interest in Cage's ideas to do with chance and the submission of the individual will to forces beyond one's control. "Coincidences are only useful", Richter has told me, "because they've been worked out—that means either eliminated or allowed or emphasized."

In Halberstadt, Germany, a performance of Cage's piece "ORGAN²/ASLSP" (1987) is taking place at this moment. "ASLSP" stands for "as slow as possible". Cage did not specify the instruction further, so each performance of the score will be different. The current performance will take 639 years to complete. The slowness of Cage's piece is an essential aspect for our time. With globalization and the Internet, all processes are accelerated to a speed at which no time for critical reflection remains. The present "Slow Movement" thus argues for taking time for well-chosen decisions, together with a more locally oriented approach. The idea of slowness is one of the many aspects that continue to make Cage highly relevant for the 21st Century.

Richter's concise title "Cage" can be unfolded into an extensive interpretation of these abstract paintings (and of other works)—but, one can say, the short form already contains everything. The title, like an explanation of a phenomenon, unlocks the works, describing their relation to one of the most important cultural figures of the twentieth century, John Cage, who shares with Richter the great themes of chance and uncertainty.

Doris Duke Chair of Conservation Ecology, Duke University; Author, The World According to Pimm: a Scientist Audits the Earth

Mother Nature's Laws

"Deep, elegant, or beautiful explanation" requires something equally singular to explain. That means laws, by which I simply mean generalities or patterns. Thus, the "law of gravity" is general enough to describe pendulums and planets. Pendulums do not quicken my heart, but living things fascinate me, even if I have yet to express affection for nematodes. Writing from Sarawak, Alfred Wallace nailed the most important law of living things in a crisp 18 words:

Every species has come into existence coincident both in space and time with a pre-existing closely allied species.

With judicious editing, Wallace could have fit his 1855 "laws of evolution" paper into today's word limits of PNAS or Nature. We don't find trilobites scattered in the Devonian, Jurassic, and Eocene with nothing in between. Nor are polar bears only in Greenland, Patagonia, and Tibet. The paper screams for an explanation of these bundled generalities of palaeontology and biogeography.

The scientific community was asleep at the wheel and barely noticed. A few years later, that forced Wallace to send his deep, elegant, and very beautiful explanation to Darwin for moral support. Darwin had the same explanation, of course. The resulting and familiar history is not where I want to go with this.

What other laws has Mother Nature given us for biological diversity?

The average geographical range size of a group of species is very much larger than the median range.

The average of the geographical ranges of 1684 species of mammals in the New World is 1.8 million km², but 50% of those species have ranges smaller than 250,000 km²—a seven-to-one ratio. For the region's three main bird groups, the ratios are five to eight times, and for amphibians, forty times. There are many species with small ranges and few with large ranges.
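The gap between average and median is exactly what a heavily right-skewed distribution produces: many small values and a few enormous ones pull the mean far above the median. A minimal sketch with hypothetical range sizes (illustrative numbers, not the actual mammal data):

```python
# Hypothetical range sizes in thousands of km^2: many small-ranged
# species plus a few very widespread ones, the shape the law describes.
ranges = [50, 80, 120, 150, 200, 250, 300, 400, 9000, 12000]

mean = sum(ranges) / len(ranges)

# Median of an even-length list: average of the two middle values.
ranked = sorted(ranges)
mid = len(ranked) // 2
median = (ranked[mid - 1] + ranked[mid]) / 2

print(mean, median)   # the mean is roughly ten times the median
```

The two widespread species dominate the mean while leaving the median untouched, which is why the average range so badly misrepresents the typical species.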

There are more species in the tropics than temperate regions.

The first explorers to reach the tropics uncovered this law. Rembrandt was painting birds of paradise and marine cone shells in the early 1600s. Wallace went first to the Amazon because collecting novel species was how he earned a living.

Species with small ranges concentrate in places that typically are not where the largest numbers of species live.

This just doesn't make sense. Surely, with more species, one should have more species with large ranges, small ranges, and everything in between. It isn't so. Small-ranged species concentrate in some very special places. About half of all species live in a couple dozen places that together constitute about 10% of the ice-free parts of the planet.

Species with small ranges are rare within those ranges, while those with large ranges are common.

Pardon the language, but Mother Nature is a bitch. You'd think she'd give species with small ranges a break—and make them locally common. Not so. Widespread species tend to be common everywhere, while local ones are rare even where you find them.

What inspired Darwin and Wallace were encounters with places rich in birds and mammals found nowhere else—the Galapagos and the islands in southeast Asia. There are no such places in Europe. Darwin spent most of HMS Beagle's voyage too far south in South America, while Wallace's first trip was to the Amazon. The Amazon is very rich in species, but it is a striking example of the law that such places rarely have many species with small ranges. (I suspect this cost Wallace dearly because his sponsors wanted novelty. He found that on his next trip, to the East.)

Scientists found widespread species first. Darwin and Wallace were among the first explorers to encounter the majority of species—those with small geographical ranges, concentrated in a few places. Even for well-known groups of species, those with the smallest ranges have only been discovered in recent decades.

Given the observed distribution of range sizes, the tropics have to have more species simply because they are in the middle. Sufficiently large ranges must span the middle—that's the only way to fit them in. But they need not reach the ends—which geographically tend to be arctic or temperate places. Yet middles have more species than ends even when the middles are not tropical. There are more species in the middle of Madagascar's wet forests, for example, though it's the northern end (with fewer species) that is closer to the equator.

In addition, warm, wet middles—tropical moist forests—have more species than hotter and drier middles. The correlation of species with warmth and wetness is compelling, but a compelling mechanism is elusive.

Small-ranged species can be anywhere—near middles or near ends. They are not. They tend to be on islands (the Galapagos, southeast Asia) and on "habitat islands"—mountaintops—(the Andes). This fits our ideas on how species form. Alas, they are not on temperate islands and mountains, so Darwin and Wallace had to leave home to be inspired. Except for salamanders: the Appalachians of the eastern USA seem to have different species under every rock, forming a theoretically obstinate temperate centre of endemism that isn't matched by birds, mammals, plants, or indeed other amphibians.

To make matters worse, all this assumes that we know why some species have large ranges and more have small. We do not.

In short, we have correlations, special cases, and some special pleading, but elegance is missing. A deep explanation need not be there, of course. Our ignorance hurts.

Concentrations of local, rare species are where human actions drive species to extinction one hundred to one thousand times faster than the natural rate. Yes, we can map birds and mammals and so know where we need to act to save them. But not butterflies, which people love, let alone nematodes. Without explanations, we cannot tell whether the places where we protect birds will also protect butterflies. Unless we understand Mother Nature's laws and can extend them to the great majority of species still unknown to science, we may never know what we destroyed.

Emeritus Professor of Chemistry, University of Oxford; Author, Reactions: The Private Life of Atoms; On Being: A Scientist's Exploration of the Great Questions of Existence

Why Things Happen

There is a wonderful simplicity in the view that events occur because things get worse. I have in mind the Second Law of thermodynamics and the fact that all natural change is accompanied by an increase in entropy. Although that is in my mind, I understand those words in terms of the tendency of matter and energy to disperse in disorder. Molecules of a gas undergo ceaseless, virtually random motion and spread into the available volume. The chaotic thermal motion of atoms in a block of hot metal jostles neighboring atoms into motion, and as the energy spreads into the surroundings the block cools. All natural change is at root a manifestation of this simple process, that of dispersal in disorder.

The astonishing feature of this perception of natural change is that the dispersal can generate order: through dispersal in disorder structure can emerge. All it needs is a device that can link in to the dispersal, and just as a plunging stream of water can be harnessed and used to drive construction, so the stream of dispersal can be harnessed too. Overall there is an increase in disorder as the world progresses, but locally structures, including cathedrals and brains, dinosaurs and dogs, piety and evil deeds, poetry and diatribes, can be thrown up as local abatements of chaos.

Take, for instance, an internal combustion engine. The spark results in the combustion of the hydrocarbon fuel, with the generation of smaller water and carbon dioxide molecules that tend to disperse and, in so doing, drive down a piston. At the same time the energy released in the combustion spreads into the surroundings. The mechanical design of the engine harnesses these dispersals, and through a string of gears that harnessing can be used to build a cathedral from bricks. Thus, dispersal results in a local structure even though overall the world has sunk a little more into disorder.

The fuel might be our dinner, which as it is metabolised releases molecules and energy that spread. The analog of the gears in a vehicle is the network of biochemical reactions within us, and instead of a pile of bricks molded into a cathedral, amino acids are joined together to generate the intricate structure of a protein. Thus, as we eat, so we grow. We too are local abatements of chaos, driven into being by the generation of disorder elsewhere.

Is it then too fanciful to imagine intellectual creativity, or just plain inconsequential reverie, as being driven likewise? At some kind of notional rest, the brain is a hive of electric and synaptic activity. The metabolic processes driven by the digestion of food can result in the ordering, not now of brick into cathedral, not now of amino acid into protein, but now current into concept, work of artistic expression, foolhardy decision, and scientific understanding.

Even that other great principle, natural selection, can be regarded as an extraordinarily complex reticulated unwinding of the world, with the changes that take place in the biosphere and its evolution driven ultimately by descent into disorder.

Is it then any wonder that I regard the Second Law as a great enlightenment? That from so simple a principle great consequences flow is, in my view, a criterion of the greatness of a scientific principle. No principle, I think, can be simpler than that things get worse, and no consequences greater than the universe of all activity, so surely the Law is the greatest of all?

General relativity is one example of a beautiful explanation of the way nature works. In fact I would argue that it is a truly extraordinary example because it is beautiful in three distinct ways. However, there's an ugly truth that we also have to consider: the concept of beauty is notoriously subjective.

First, general relativity is beautiful in the mathematical sense. Like a great work of art, it is built on strong, basic foundations that are free of superfluous adornments. Einstein used a handful of principles to blend two fundamental concepts that had, until he came along, been thought to be independent: space and time, on the one hand, and matter and motion on the other.

The quest for this kind of beauty carries a distant echo of the Aesthetic Movement's battle cry of 'art for art's sake'. Distant, yes, but also distinct and pervasive. Bertrand Russell once said that mathematics had a cold and austere beauty, "like that of sculpture." In his book, A Mathematician's Apology, G. H. Hardy suggests that a beautiful proof possesses "inevitability", "unexpectedness", and "economy". Others talk of universality, simplicity, and elemental power.

The great Paul Dirac admired general relativity more than any other modern theory (much more than quantum mechanics). He found it as spine-tinglingly inspirational as any great work of music. That is because he valued aesthetic appeal to an extraordinary degree. At a seminar in Moscow in 1956, when asked to summarise his philosophy of physics, Dirac scribbled on the blackboard in capital letters, "Physical laws should have mathematical beauty."

But there's another sense in which theories are beautiful. In his award-winning biography of Dirac, The Strangest Man, Graham Farmelo describes a telling encounter, recorded by the BBC, between Dirac and his friend Werner Heisenberg, in which the latter made the pragmatic, and apparently uncontroversial, remark that beauty is less important than agreement with experimental results.

Dirac countered with the example of another friend, Erwin Schrödinger, whom he greatly admired for his appreciation of mathematical beauty, and whose eponymous equation describes the behaviour of matter in the micro-world of atoms and molecules. Schrödinger had attempted to formulate a version of his equation compatible with special relativity, but gave up when he realised that this equation (now called the Klein-Gordon equation) gave incorrect results when used to calculate the energy levels of hydrogen.

Dirac pointed out that if Schrödinger had shown more faith in beauty, he would have ended up publishing the first relativistic version of quantum theory. Heisenberg conceded there was indeed a value to being aesthetic (Farmelo remarks: "Dirac's face lit up with the broadest of smiles, revealing two rows of rotting teeth.")

But Heisenberg gave up too easily. There is indeed another, important, sense in which a theory is beautiful, as Heisenberg himself knew only too well. There are plenty of 'beautiful' theories out there that lack relevance: tellingly, Einstein had become obsessed by this ultra-pure form of beauty towards the end of his career, when he was much less creative. He had lost sight of something important. The most elegant theories of all also possess the beauty of utility: in this sense, general relativity has incredible allure. It gives a dazzling account of gravity.

I would like to argue that there is yet another dimension to the beauty of utility, which goes beyond giving an immaculate account of how nature works. General relativity is also beautiful because it is a fecund theory. It does more than 'give a great account of gravity' – it has yielded new insights into nature, revealing novel vistas that have enabled theorists to keep well ahead of experimenters for decades. Einstein and the likes of Hawking, Penrose, Chandrasekhar and many more used it to shed light on extraordinary phenomena, from the Big Bang 13.7 billion years ago to gravitational waves to the stability of stars to the properties of black holes.

Beautiful bottom line: the supreme fundamental theories should be elegant at the mathematical level, offer a precise fit with nature and also open up new worlds of intellectual endeavour.

One ugly fact remains, however: subjectivity. Why is it that mathematics provides the most beautiful depictions of nature that we have? We all assume this is true but we don't know for sure. It seems that the greatest engine of cultural change—the scientific world-view—rests on a foundation that, in some respects, is ultimately religious. Today it has become a cliché to say that, as Dirac put it in a Scientific American article in 1963: "God is a mathematician of a very high order." God's alleged aestheticism is an illusion, however. I put my faith in mathematical physicists. They are on the right path to a profound and objective truth, the beautiful details of which have yet to be revealed in full.

It's Darwin's explanation of the origin of coral reefs (and it was Nick Humphrey who told me this story in the early 1980's—in Tahiti!). What I love is that the question is so easy to grasp but so difficult to answer, and that Darwin's solution is so simple and beautiful—he worked it out by sheer power of imaginative reasoning long before anybody had heard of tectonic plates and more than a hundred years before the theory would be proved (by deep borings on Bikini Atoll).

The mystery, much discussed in the 1830's, was that coral polyps can grow and reproduce only in shallow water, and yet the walls of coral reefs and atolls plunge thousands of feet into the deep ocean. The coral couldn't have grown from the bottom upwards, so how then did the reefs and atolls get there?

By the 1830's, the geologist Charles Lyell had decided that atolls must be coral reefs growing on the crater rims of sunken extinct volcanoes. Darwin didn't agree, and it's wonderful to know that Lyell was thrilled when Darwin came back from the Beagle voyage and told him the real answer: the coral organisms thrive in shallow water but the ocean floor is slowly sinking; the polyps die but new ones are growing on top of them, and over an immense amount of time countless billions of tiny calcareous skeletons accumulate to make up a vast structure reaching from the darkness of the ocean bed to the sunlit surface so far above.

Humans alone fluently understand the mental states of others. Humans alone rely on an open-ended system of communication. Humans alone ponder the reasons for their beliefs. For each of these feats, and for others too, humans rely on their most special gift: the ability to represent representations—the ability to form metarepresentations. Hidden behind such mundane thoughts as "Mary believes that Paul believes that it's going to rain" is the explanation of human uniqueness.

There are two ways to represent representations: one immensely powerful, the other rather clumsy. The clumsy way is to create a new representation for every representation that needs to be represented. Using such a device, Mary would have to form a representation "Paul believes that it's going to rain" completely independent of her representation "it's going to rain." She would then have to learn anew all of the inferences that can be drawn from "Paul believes it's going to rain," such as the negative impact on the willingness to go for a jog or the increased probability of fetching an umbrella. This cumbersome process would have to be repeated for each new representation that Mary wishes to attribute, from "Peter thinks the weather looks lovely" to "Ruth is afraid that the Dow Jones is going to crash tomorrow." Such a process could not possibly account for humans' amazing abilities to attribute just about any thought to other people. How can we account for these skills then?

The explanation is that we use our own representations to attribute thoughts to others. When Mary wants to attribute to Paul the belief "it's going to rain," she 'simply' uses her representation "it's going to rain" and embeds it in a metarepresentation: "Paul thinks "it's going to rain."" Because the same representation is used, Mary can take advantage of the inferences that she could draw from "it's going to rain" to draw inferences from "Paul believes that "it's going to rain."" This trick opened for humans the doors to an unparalleled understanding of their social environment.
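The economy of this trick can be sketched in code. The class names and inference strings below are invented for illustration; the point is only that the embedded representation's inferences are reused, not re-learned, when it is attributed to someone else (real mentalizing would also filter out inapplicable inferences).

```python
# Toy sketch of metarepresentation by reuse: a representation carries its
# own inferences, and a metarepresentation ("Paul believes that ...")
# inherits them instead of learning them from scratch.

class Representation:
    def __init__(self, content):
        self.content = content
        self.inferences = []            # consequences drawn from the content

    def infer(self):
        return list(self.inferences)

class Metarepresentation:
    """E.g. 'Paul believes that <representation>'."""
    def __init__(self, agent, representation):
        self.agent = agent
        self.representation = representation

    def infer(self):
        # Reuse the embedded representation's inferences, attributed to the agent.
        return [f"{self.agent} expects: {i}" for i in self.representation.infer()]

rain = Representation("it's going to rain")
rain.inferences += ["no jogging", "fetch an umbrella"]

paul_belief = Metarepresentation("Paul", rain)
print(paul_belief.infer())  # Paul inherits both inferences from Mary's own representation
```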

Most of the beliefs we form about others are derived from communication: people keep telling us what they believe, want, desire, fear, love… Here again, metarepresentations play a crucial role, since understanding language requires going from utterances—"It's going to rain"—to metarepresentations—"Paul means that "it will soon rain here.""

Mentalizing (attributing thoughts to others) and communicating are the most well known uses of metarepresentations, but they are not the only ones. Metarepresentations are also essential for people to be able to think about reasons. Specific metarepresentations are relied on when people produce and evaluate arguments, as in: "Mary thinks "it's going to rain" is a good argument for "we should not go out."" Again, Mary uses her representation "it's going to rain" but, instead of attributing it to someone else, she represents its strength as a reason to accept a given conclusion.

Several other properties of representations can be represented, from their esthetic value to their normative status. The representational richness made possible by recycling our own representations to represent other people's representations, or to represent other attributes of representations, is our most distinctive trait, one of those amazingly brilliant solutions that natural selection stumbles upon. However, even though it is much simpler to rely on this type of metarepresentation than on the cumbersome solution of creating new representations from scratch every time, we still face a complex computational task.

The example of mentalizing makes it apparent that even when we use our own representations to attribute representations to other people, a lot of work remains to be done. It cannot be metarepresentations all the way down: at some point, other inputs—linguistic or behavioral cues—have to be used to attribute representations. Moreover, when a representation is represented, not all of the inferences that can be drawn from it are relevant. When Mary believes that John believes it's going to rain, some of the inferences that she would draw from "it's going to rain" may not be attributable to John—maybe he doesn't mind jogging in the rain, for instance. Other inferences Mary may not spontaneously draw—maybe John will be worried because he has left his book outside. Still, without a baseline—Mary's own representation—the task would jump from merely difficult to utterly intractable.

Probably more than any other cognitive trait, the ability to use our own representations to represent representations is what explains humans' achievements. Without this skill, the complex forms of social cognition that characterize our species would have been all but impossible. It is also critical for us psychologists to understand these ideas if we want to continue our forays into human cognition.

I leave the last word to Dan Sperber who, more than any other cognitive scientist, has made metarepresentations the central explanation of humans' unique cognition: "Humans have the ability to represent representations. I would argue that this meta-representational ability is as distinctive of humans, and as important in understanding their behaviour, as is echolocation for bats."

Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings

Unconscious Inferences

Optical illusions are a pleasure to look at, puzzling, and robust. Even if you know better, you still are caught in the illusion. Why do they exist? Are they merely mental quirks? The physicist and physiologist Hermann von Helmholtz (1821-1894) provided us with a beautiful explanation of the nature of perception, and how it generates perceptual illusions of depth, space, and other properties. Perception requires smart bets called "unconscious inferences."

In Volume III of his Physiological Optics, Helmholtz recalled a childhood experience:

"I can recall when I was a boy going past the garrison chapel in Potsdam, where some people were standing in the belfry. I mistook them for dolls and asked my mother to reach up and get them for me, which I thought she could do. The circumstances were impressed on my memory, because it was by this mistake that I learned to understand the law of foreshortening in perspective."

This childhood experience taught Helmholtz that information available from the retina and other sensory organs is not sufficient to reconstruct the world. Size, distance, and other properties need to be inferred from uncertain cues, which in turn have to be learned by experience. Based on this experience, the brain draws unconscious inferences about what a sensation means. In other words, perception is a kind of bet about what's really out there. But how exactly does this inference work? Helmholtz drew an analogy with probabilistic syllogisms. The major premise is a collection of experiences that are long out of consciousness; the minor premise is the present sensory impression. Consider the "dots illusion" based on the work of V. S. Ramachandran and colleagues.

The dots in the left picture appear concave, receding into the surface away from the observer, while those on the right side appear convex, curved towards the observer. If you turn the page around, the inward dots will pop out and vice versa. In fact, the two pictures are identical, except for being rotated 180 degrees. The illusion of concave and convex dots occurs because our brain makes unconscious inferences.

Major premise: A shade on the upper part of a dot is nearly always associated with a concave shape.
Minor premise: The shade is in the upper part.
Unconscious inference: The shape of the dot is concave.
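Helmholtz's probabilistic syllogism can be restated in modern Bayesian terms. The sketch below is a toy calculation with invented prior and likelihood numbers, not measured data; it shows only how a near-exceptionless major premise yields a near-certain unconscious inference.

```python
# Toy Bayesian reading of the syllogism above: the major premise supplies
# the likelihoods, the minor premise is the observed evidence, and the
# unconscious inference is the posterior. All numbers are illustrative.

def posterior_concave(p_concave, p_shade_top_given_concave, p_shade_top_given_convex):
    """P(concave | shade on top) via Bayes' rule."""
    p_convex = 1 - p_concave
    numerator = p_shade_top_given_concave * p_concave
    evidence = numerator + p_shade_top_given_convex * p_convex
    return numerator / evidence

# Major premise as statistics: shading on top almost always co-occurs with
# concavity (light comes from above, from a single source).
p = posterior_concave(p_concave=0.5,
                      p_shade_top_given_concave=0.95,
                      p_shade_top_given_convex=0.05)
print(round(p, 2))  # 0.95: the brain's bet is almost certain
```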

Our brain assumes a three-dimensional world, and the major premise guesses the third dimension from two ecological structures:

1. Light comes from above, and
2. There is only one source of light.

These two structures dominated most of human and mammalian history, in which the sun and the moon were the only sources of light, and the first also holds approximately for artificial light today. Helmholtz would have favored the view that the major premise is learned from individual experience; others have favored evolutionary learning. In both cases, visual illusions are seen as the product of unconscious inferences based on evidence that is usually reliable, but can be fooled in special circumstances.

The concept of unconscious inference can also explain phenomena from other sensory modalities. A remarkable instance where a major premise suddenly becomes incorrect is the case of a person whose leg has been amputated. Although the major premise ("A stimulation of certain nerves is associated with that toe") no longer holds, patients nevertheless feel pain in toes that are no longer there. The "phantom limb" also illustrates our inability to correct unconscious inferences in spite of better knowledge. Helmholtz's concept of unconscious inferences has given us a new perspective on perception in particular and cognition in general.

1. Cognition is inductive inference. Today, the probabilistic syllogism has been replaced by statistical and heuristic models of inference, inspired by Thomas Bayes and Herbert Simon, respectively.
2. Rational inferences need not be conscious. Gut feelings and intuition work with the same inductive inferences as conscious intelligence.
3. Illusions are a necessary consequence of intelligence.

Cognition requires going beyond the information given, to make bets and therefore to risk errors. Would we be better off without visual illusions? We would in fact be worse off—like a person who never says anything to avoid making any mistakes. A system that makes no errors is not intelligent.

Claudius Ptolemy explained the sky. He was an Egyptian who wrote in Greek in the Roman empire, in the time of emperors like Trajan and Hadrian. His most famous book was called by its Arabic translators the Almagest. He inherited a long ancient tradition of astronomical science going back to Mesopotamia, but he put his name and imprint on the most successful and so far longest-lived mathematical description of the working of the skies.

Ptolemy's geocentric universe is now known mainly as the thing that was rightly abandoned by Copernicus, Kepler, Newton, and Einstein, in progressive waves of the advancement of modern science, but he deserves our deep admiration. Ptolemy's universe actually made sense. He knows the difference between planets and stars and he knows that the planets take some explaining. (The Greek word 'planet' means wanderer, to reflect ancient puzzlement that those bright lights moved according to no pattern that a shepherd or seaman could intuitively predict, unlike the reassuringly confident annual march of Orion or the rotation of the great bears overhead.) So Ptolemy represents the heavenly machine in a complex mathematical system most notorious for its "epicycles"—the orbits within orbits, so to speak, by which the planets, while orbiting the earth, spun off their orbits in smaller circles that explained their seeming forward and backward motion in the night sky.
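The geometry of deferent and epicycle is simple enough to simulate. The sketch below uses made-up radii and angular speeds, not Ptolemy's actual parameters; it shows only that a circle riding on a circle reproduces apparent retrograde motion as seen from a central Earth.

```python
import math

# Minimal deferent-plus-epicycle model: the planet rides a small circle
# (epicycle) whose center rides a large circle (deferent) around the
# Earth at the origin. Radii and speeds are illustrative, not Ptolemy's.

def planet_position(t, R=10.0, r=4.0, w_def=1.0, w_epi=6.0):
    """(x, y) of the planet at time t."""
    cx, cy = R * math.cos(w_def * t), R * math.sin(w_def * t)   # deferent center motion
    x = cx + r * math.cos(w_epi * t)                            # epicycle on top
    y = cy + r * math.sin(w_epi * t)
    return x, y

def bearing(t):
    """Apparent direction of the planet as seen from Earth."""
    x, y = planet_position(t)
    return math.atan2(y, x)

# Scan for an interval where the bearing decreases: apparent retrograde motion.
ts = [i * 0.01 for i in range(700)]
retrograde = any(bearing(t2) < bearing(t1)
                 for t1, t2 in zip(ts, ts[1:])
                 if abs(bearing(t2) - bearing(t1)) < 1.0)  # skip the +/-pi wrap-around
print(retrograde)  # True: the planet appears to move backward for part of each loop
```

The reversal happens when the epicycle carries the planet backward faster than the deferent carries it forward, which is exactly the "seeming forward and backward motion" Ptolemy's system was built to explain.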

We should admire Ptolemy for many reasons, but chief among them is this: he did his job seriously and responsibly with the tools he had. Given what he knew, his system was brilliantly conceived, mathematically sound, and a huge advance over what had gone before. His observations were patient and careful and as complete as could be, his mathematical calculations correct. Moreover, his mathematical system was as complicated as it needed to be and at the same time as simple as it could be, given what he had to work with. He was, in short, a real scientist. He set the standard.

It took a long time and there were some long arguments before astronomy could advance over what he offered—and that's a sign of his achievement. But when advance was possible, Ptolemy had made it impossible for advance to come through wishful thinking, witch doctors, or fantasy. His successors in the great age of modern astronomy had to play by his rules. They needed to observe more carefully, do their math with exacting care, and propose systems at the poise point of complexity and simplicity themselves. Ptolemy challenged the moderns to outdo him—and so they could and did. We owe him a lot.

According to the guide on my walking tour of The Rocks neighborhood in Sydney, Australia, when the plague hit the city around 1900, a bounty was placed on rats to encourage people to kill them, since it was known that rats bore the fleas that communicated the disease to humans. The intent of the bounty was plain enough: reducing the number of rats to reduce the spread of the plague. An unintended consequence, however, was that residents, tempted by the rat bounty, began breeding rats.

The law of unintended consequences is often associated with Robert Merton, though its general spirit appears in various forms, not least in Adam Smith’s notion of the invisible hand. It is somehow delightful in its chaos, as if Nature continually thumbs her nose at our attempts to control her.

The idea is that when people intervene in systems with many moving parts, especially ecologies and economies, the complex interrelationships among the parts ensure that the intervention will have effects beyond those intended, including many that were unforeseen or unforeseeable.

Examples abound. Returning to Australia, one of the best known examples of an unintended consequence is the case of rabbits, brought by the First Fleet as food, released into the wild for hunting, with the unintended consequence that rabbit populations grew to staggering proportions, causing untold ecological devastation, in turn leading to the development of measures to control the rabbits, including an exceptionally long fence, which had the unintended consequence of guiding three young girls home in the 1930s, which in turn had the unintended consequence of inspiring an award-winning motion picture (Rabbit Proof Fence) with a soundtrack by Peter Gabriel.

These chains of consequences occur because making changes to one part of a system with many interacting parts leads to changes in other parts of the system. Because many of the systems that we try to influence are complex but incompletely understood—bodies, habitats, markets—there are bound to be consequences that are difficult to predict.

This is not to say that the consequences will always be undesirable. Recently, certain municipalities changed the laws governing the use of marijuana, making it easier to obtain for medical purposes. The law might or might not have reduced the suffering of glaucoma victims, but data from traffic accidents suggest that the change in the legislation did reduce fatalities on the road by about 9%. (People substituted marijuana for alcohol and apparently drive better stoned than drunk.) Saving drivers’ lives was not the intent of the law, but that was the effect.

Another example, smaller in scale though closer to my heart, was the recent abrupt increase of parking rates by a third in University City, Philadelphia, where I work. The intent of the law was to raise revenue to help fund the city’s schools. An unintended consequence—because students seem disinclined to pay the higher price—is that I can rely on getting a parking spot when I have to drive to school.

Intervention in any sufficiently complicated system is bound to produce unintended effects. We treat patients with antibiotics, and we select for resistant strains of pathogens. We artificially select for wrinkly-faced bulldogs, and less pleasant traits, such as respiratory problems, come along for the ride. We treat morning sickness with thalidomide, and babies with birth defects follow.

In the economic sphere, most policies have various sorts of knock-on effects, with prohibitions and bans providing some of the most profound examples, including, of course, Prohibition in America, which spun off various consequences including, arguably, the rise of organized crime in the United States. Because governments typically only ban things for which people have a taste, when bans do arise, people find ways to satisfy these tastes, either through substitutes or black markets, both of which lead to varied consequences. Ban sodas, give a boost to sports drink sales. Ban the sale of kidneys, spawn an international black market for organs and underground surgeries. Ban hunting mountain lions, endanger local joggers. (Not because joggers are substituted for mountain lions as prey for hunters; the ban increased the mountain lion population, who in turn menaced joggers.)

There is something oddly beautiful about the tendrils of causality in complex systems, holding the same appeal that we find in the deliberate inelegance of Rube Goldberg machines. And none of this is to say that the inevitability of being surprised by our interventions means we must give in to pessimism. Rather, it is a reminder to have caution and humility. As we gradually increase our understanding of large, complicated systems, we will develop new ways to glimpse the unintended consequences of our actions. We already have some guiderails—people will substitute for banned or taxed products; removing one species from an ecology typically penalizes the populations that prey on it and aids the species that compete with it, and so on—so while there will probably always be unintended consequences, these consequences won’t always be completely unanticipated.

Paul Meier—who passed away in 2011—was primarily known for his introduction of the Kaplan-Meier estimator. But Meier was also a seminal figure in the widespread adoption of an invaluable explanatory tool: the randomized experiment. The decided unsexiness of the term masks a truly elegant form that, in the hands of its best practitioners, approaches art. Simply put, experiments offer a unique and powerful means for devising answers to the question that scientists across disciplines seek to answer: How do we know if something "works"?

Take a question that appears anew in the media each year: Is red wine good or bad for us? We learn a great deal about how red wine "works" by asking people about their consumption and health and looking for correlations between the two. To estimate the specific impact of red wine on health, though, we need to ask people a lot of questions—about everything they consume (food, prescription medication, more unsavory forms of medication), their habits (exercise, sleep, sexual activity), their past (their health history, their parents' and grandparents' health histories), and on and on – and then try to control for these factors to isolate the impact of wine on health. Think of the length of the survey…

Randomized experiments completely reengineer how we go about understanding how red wine "works." We take it as a given that people vary in the manifold ways described above (and others), but cope with this variance by randomly assigning people to either drink red wine or not; if people who eat donuts and never exercise are equally likely to be in the "wine treatment" or the "control treatment," then we can do a decent job of assessing the average impact of red wine over and above the likely impact of other factors. It sounds simple because, well, it is—but anytime a simple technique yields so much, elegant is a more apt description.
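A toy simulation makes the logic of randomization concrete. All the numbers below are invented: a hidden confound ("exerciser") influences health, yet a simple difference of group means recovers the true wine effect, because random assignment spreads the confound evenly across treatment and control without our ever measuring it.

```python
import random

# Toy randomized experiment: health depends on an unmeasured confound
# (exercise) plus a true treatment effect. Coin-flip assignment lets the
# raw difference in group means estimate the treatment effect anyway.

random.seed(0)
TRUE_WINE_EFFECT = 2.0

people = [{"exerciser": random.random() < 0.5} for _ in range(100_000)]
for p in people:
    p["treated"] = random.random() < 0.5          # randomized assignment
    p["health"] = (10.0
                   + 5.0 * p["exerciser"]         # confound we never measure
                   + TRUE_WINE_EFFECT * p["treated"]
                   + random.gauss(0, 1))          # individual noise

def mean(xs):
    return sum(xs) / len(xs)

treated = [p["health"] for p in people if p["treated"]]
control = [p["health"] for p in people if not p["treated"]]
estimate = mean(treated) - mean(control)
print(round(estimate, 2))  # close to the true effect of 2.0
```

Without randomization (say, if exercisers self-selected into the wine group), the same difference of means would absorb the confound's 5-point effect and badly overstate the wine effect.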

The rise of experiments in the social sciences that began in the 1950s—including Meier's contributions—has exploded in recent years, with the adoption of randomized experiments in fields ranging from medicine (testing interventions like cognitive behavioral therapy) to political science (running voter turnout experiments) to education (assigning kids to be paid for grades) to economics (encouraging savings behavior). The experimental method has also begun to filter into and impact public policy: President Obama appointed behavioral economist Cass Sunstein to head the Office of Information and Regulatory Affairs, and Prime Minister David Cameron instituted a "Behavioural Insights Team."

Experiments are by no means a perfect tool for explanation. Some important questions simply do not lend themselves to experiments, and the experimental method in the wrong hands can cause harm as in the infamous Tuskegee syphilis experiment. But the increasingly widespread application of experiments speaks to their flexibility in informing human understanding of how things "work"—and why they work that way.

Physician and Social Scientist, Yale University; Coauthor, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

Out of the Mouths of Babes, or, Why is the Sky Blue?

My favorite explanation is one that I sought as a boy. It is the explanation for why the sky is blue. It's a question every toddler asks, but it is also one that most great scientists since the time of Aristotle, including da Vinci, Newton, Kepler, Descartes, Euler, and even Einstein, have asked.

One of the things I like most about this explanation—beyond the simplicity and overtness of the question itself—is how long it took to arrive at correctly, how many centuries of effort, and how many branches of science it involves.

Unlike other everyday phenomena, such as the rising and setting sun, the color of the sky did not elicit much myth-making, even by the Greeks or Chinese. There were few non-scientific explanations for its color. It took a while for the azure sky to be problematized, but, when it was, it kept our (scientific) attention. How could the atmosphere be colored when the air before our faces is not?

Aristotle is the first, so far as we know, to ask why the sky is blue, in the treatise On Colors; his answer is that the air close at hand is clear and the deep air of the sky is blue in the same way that a thin layer of water is clear but a deep well of water looks black. This idea was still being echoed in the 13th century by Roger Bacon. Kepler too reinvented a similar explanation, arguing that the air merely looks colorless because the tint of its color is so faint in a thin layer. But none of them offered an explanation for the blueness of the atmosphere. So the question actually has two related parts: why the sky has any color, and why it has a blue color.

In the Codex Leicester, Leonardo da Vinci, writing after 1508, noted, "I say that the blue which is seen in the atmosphere is not its own color, but is caused by the heated moisture having evaporated into the most minute, imperceptible particle, which beams of the solar rays attract and cause to seem luminous against the deep, intense darkness of the region of fire that forms a covering above them." Alas, Leonardo does not actually say why these particles should be blue either.

Isaac Newton contributed both by asking why the sky was blue and by demonstrating, through his pathbreaking experiments with refraction, that white light could be decomposed into its constituent colors.

Many now-forgotten and many still-remembered scientists since Newton joined in. What might refract more blue light towards our eyes? In 1760, the mathematician Leonhard Euler speculated that the wave theory of light might help explain why the sky is blue. The 19th century saw a flurry of experiments and observations of all sorts, from expeditions to mountaintops for observation to elaborate efforts to recreate the blue sky in a bottle—as chronicled in Peter Pesic's wonderful book, Sky in a Bottle. Countless careful observations of blueness at different locations, different altitudes, and different times were made, including with bespoke devices known as cyanometers. Horace-Benedict de Saussure invented the first cyanometer in 1789. His version had 53 sections with varying shades of blue arranged in a circle. Saussure correctly reasoned that something suspended in the air must be responsible for the blue color.

Indeed, for a very long time, it had been suspected that something in the air modified the light and made it appear blue. Eventually it was realized that it was the air itself that did this, that the very gaseous molecules that make up air itself are essential to making it appear blue. And so, the blueness of the sky is connected to the discovery of the physical reality of atoms. The color of the sky is deeply connected to atomic theory, and even to Avogadro's number! And this in turn attracted Einstein's attention in the period from 1905 to 1910.

So, the sky is blue because the incident light interacts with the gas molecules in the air in such a fashion that more of the light in the blue part of the spectrum is scattered, reaching our eyes on the surface of the planet. All the frequencies of the incident light can be scattered this way, but the high-frequency (short wavelength) blue is scattered more than the lower frequencies in a process known as Rayleigh scattering, described in the 1870's. John William Strutt, Lord Rayleigh, who also won the Nobel Prize in physics in 1904 for the discovery of argon, demonstrated that, when the scattering particles are much smaller than the wavelength of the light, the intensity of scattered light varies inversely with the fourth power of its wavelength. Shorter wavelengths like blue (and violet) are scattered more than longer ones. It's as if all the molecules in the air preferentially glow blue, which is what we then see everywhere around us.
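The fourth-power law is easy to put into numbers. Taking rough, nominal wavelengths for a few colors, the relative scattered intensity falls off steeply across the visible spectrum:

```python
# Rayleigh's lambda^-4 law in numbers: scattered intensity for a few
# visible wavelengths, normalized to red light at 700 nm. The wavelength
# values are nominal choices, not precise color boundaries.

def rayleigh_relative(wavelength_nm, reference_nm=700.0):
    """Scattered intensity relative to the reference wavelength (I ~ 1/lambda^4)."""
    return (reference_nm / wavelength_nm) ** 4

for name, nm in [("violet", 400), ("blue", 450), ("green", 550), ("red", 700)]:
    print(f"{name:>6} ({nm} nm): {rayleigh_relative(nm):.1f}x")
# violet scatters roughly 9x and blue roughly 6x as strongly as red,
# which is why the clear sky is dominated by the short-wavelength end.
```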

Yet, the sky should appear violet, since violet light is scattered even more than blue light. But the sky does not appear violet to us because of the final, biological part of the puzzle, which is the way our eyes are designed: they are more sensitive to blue than to violet light.

The explanation for why the sky is blue involves so much of the natural sciences: the colors within the visual spectrum, the wave nature of light, the angle at which sunlight hits the atmosphere, the mathematics of scattering, the size of nitrogen and oxygen molecules, and even the way human eyes perceive color. It's most of science in a question that a young child can ask.

The Epidemic of Obesity, Diabetes and "Metabolic Syndrome:" Cell Energy Adaptations in a Toxic World?

"Metabolic syndrome" (MetSyn) has been termed the "Epidemic of the 21st century." MetSyn is an accretion of symptoms, including high body mass index (weight-for-height), high blood sugar, high blood pressure (BP), high blood triglycerides, high waist circumference (central/visceral fat deposition), and/or reduced HDL-cholesterol, the so-called "good" cholesterol. Epidemics of obesity and diabetes are intertwined with, and accompany, the meteoric rise in MetSyn.

The prevalent view is that MetSyn is due to a glut of food calories ("energy") consumed, and a dearth of exercise energy expended, spurring weight gain—an "energy surfeit"—with the other features arising in consequence. After all, we have more access to calories, and are more often sedentary, than in times gone by. In turn, MetSyn factors are each linked, in otherwise-healthy young populations, to higher mortality.

But this normative view leaves many questions unanswered: Why do elements of MetSyn correlate? Why are overweight people today more likely to have diabetes than hitherto? Why are elements of MetSyn now emerging in infancy? Why is MetSyn materializing in poor and third-world nations?

The customary "explanation" also creates paradoxes. If MetSyn stems from energy surfeit, why do factors that reduce energy supply, or increase energy demand, promote MetSyn—far from protecting against it?

Why does MetSyn cease to elevate mortality (indeed, sometimes boosts survival) when the group studied has advanced age, heart failure, or severe kidney disease—conditions that all impair cell energy?

Suppose the correct explanation were the complete opposite to the accepted one? Could the features of MetSyn be the adaptive response to inadequate energy? After all, fat depots, glucose, and triglycerides are each accessory energy sources (oxygen is primary), and blood pressure is needed to deliver these to tissues, especially when they are underperfused. Cell energy, central to cell and organism survival, is needed continuously (we live only minutes without oxygen). The stretch is not so great: populations in which prior generations were energy starved have increased obesity/MetSyn now ("thrifty gene" thesis); and low energy supply in utero is understood to foster MetSyn in adulthood.

This explains—as the energy surfeit view does not—why MetSyn exists at all: why elevated glucose, triglycerides, blood pressure (carrying oxygen, glucose, nutrients) and abdominal fat deposition cohere statistically. It explains why other energy supportive adaptations accompany them, like free fatty acids—as well as (metabolically active) ectopic fat: fatty-liver, fatty-pancreas, fatty-kidney—even fatty-streaks in the blood vessels; why MetSyn is linked to fatigue, and increased sleep duration (to conserve energy). Indeed, increased calories eaten, and reduced exercise expended—the usual MetSyn explanation—arise, too, as fellow energetic adaptations: Thus, this view is arguably not antithetical to the canonical one, but in one sense subsumes it. It explains, as the energy surfeit view does not, the populations at risk for MetSyn, such as the elderly (mitochondrial function declines exponentially with age) and those afflicted with sleep apnea—or any cause of recurrent impairment of energy production. And it explains why in studies focused on persons with conditions that blight energy, those with MetSyn features "paradoxically" don’t do worse, or even fare better.

The energy-deficit ("starving cell") hypothesis accounts for scores of facts where the prevailing view provides no insight. Numerous observations deemed "paradoxical" on the standard view emerge seamlessly. It makes testable predictions, for example: other oxidative stress- and mitochondrial disruption-inducing exposures that have not yet been assessed will promote one or more elements of MetSyn. And for factors that relate to MetSyn at both extremes—e.g., short and long sleep duration—the energy-disruptive extreme will prove to cause MetSyn, and the energy-supportive one will prove to be a fellow adaptive consequence.

This reframing addresses a pivotally important problem—some think MetSyn is slated to reverse the gains we have made in longevity—with a perhaps surprising conclusion that should precipitate a revision in our thinking about the causes of MetSyn and, importantly, its solutions.

Consciousness is the fusion of immediate stimuli with memory that combines the simultaneous feeling of being both the observer and the observed into a smooth, enveloping flow of time that is neither truly the past nor the present but somehow inexplicably each of them. It is the ultimate authority and arbiter of our perceptual reality. That consciousness is still an intractable problem for scientists and philosophers to understand is not surprising. Whatever the final answer turns out to be, I suspect it will be an illusion the mind evolved to hide the messy workings of its parallel modular computing.

Neurophysiologists are finding, as they pull ever so slightly at the veil that shrouds the "Wizard of I," that this indispensable, attentive, and observant self-monitor called consciousness depends upon a trick in overseeing our perceptions. Our subjective sense of time does not correspond to reality. Cortical evoked potentials (electrical recordings of the normal brain during routine activity) have been shown to precede by almost a third of a second the awareness of an actual willed movement or a response to sensory stimulation. These potentials indicate that the brain initiates or reacts to what is happening far sooner than the instantaneous perception we experience. On a physiological scale, this is a huge discrepancy, which our mind corrects by falsifying the actual time an action or event occurs, thus enabling our conscious experience to conform to what we perceive.

An even greater blow to our confidence in the reliability of our perceptions comes from studies of the rapid eye movements, called saccades, that are triggered by novel visual stimuli. During these brief, jerky eye movements, visual input to the brain is actively suppressed, such that we are literally blind. Without this involuntary ocular censorship, we would be repeatedly plagued by moments of acute blurred vision that would be unpleasant as well as unsafe. In the calculus of survival, this would be an extremely unfavorable disadvantage, since it would invariably occur with novel stimuli, which by their very nature demand not the worst but the best visual acuity.

The Wizard's solution to this intolerable situation was to exclude those intervals from our stream of awareness and replace them with a vision extrapolated from what had just occurred to what was immediately anticipated. But consciousness, like a former president, had to come up with an accounting for the erased period. Evolution had a much longer epoch to work out the bugs in its necessary deception than the limited time frame available under the gun of a special prosecutor. Instead of trying to hide the existence of the tape, consciousness came up with the far cleverer trick of obscuring the deletion. It did this by falsifying the time, backdating those missing moments so that no gap appears.

This illusion of visual continuity from inference and extrapolation poses an innate vulnerability in the brain's software that any good hacker can exploit. Magicians, card sharks, and three-card monte hustlers have made a nice living working this perceptual flaw. Richard Pryor expressed it best in a comic routine, when caught by his wife with another woman: "Who are you going to believe? Me or your lying eyes?"

It would take about 100 years to try the 100,000 recipes carried on Epicurious, the largest recipe portal in the US. What fascinates me about this number is not how huge it is, but rather how tiny. Indeed, a typical dish has about eight ingredients. Thus the roughly 300 ingredients used in cooking today allow for about 1,000,000,000,000,000 distinct dishes. Add to this your choice of deep freezing, frying, smashing, centrifuging, or blasting your ingredients, and you start to see why cooking is a growth industry. It currently uses only a negligible fraction of its resources—less than one in a trillion dishes that culinary combinatorics permits.
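The back-of-envelope count is easy to check. A quick sketch (the 300-ingredient and eight-ingredient figures come from the essay; everything else is simple combinatorics):

```python
import math

ingredients = 300   # ingredients in use today, per the essay
per_dish = 8        # typical number of ingredients in a dish

# Number of distinct eight-ingredient dishes: "300 choose 8",
# roughly 1.5 quadrillion
distinct_dishes = math.comb(ingredients, per_dish)

# Fraction of that space covered by the ~100,000 recipes on a large portal
explored = 100_000 / distinct_dishes
```

Even before counting cooking techniques, the explored fraction is vanishingly small.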

Don't you like green eggs and ham? Then why leave this vast terra incognita unexplored? Do we simply lack the time to taste our way through this boundless bounty, or is it that most combinations are repugnant anyway? Could there be rules that explain why we like some ingredient combinations and avoid others? The answer appears to be yes, which leads me to my most flavorful explanation to date.

As we search for evidence to support (or refute) any 'laws' that may govern our culinary experiences, we must bear in mind that food sensation is affected by many factors, from color to texture, from temperature to even sound. Yet palatability is largely determined by flavor, an umbrella term for a group of sensations including odors, tastes, freshness, and pungency. And flavor is mainly chemistry: odors are molecules that bind olfactory receptors, tastes are chemicals that stimulate taste buds, and freshness and pungency are driven by chemical irritants in our mouth and throat. Therefore, if we want to understand why we prize some ingredient combinations and loathe others, we have to look at the chemical profile of our recipes.

But how can chemistry tell us which ingredients taste good together? Well, we can formulate two orthogonal hypotheses. First, we may like some ingredients together because their chemistry (and hence their flavor) is complementary—what one lacks, the other provides. The alternative is its polar opposite: taste is like color matching in fashion—we prefer to pair ingredients that already share some flavor compounds, bringing them into chemical harmony with each other. Before you read on, I urge you to stop for a second and ponder which of these you find more plausible.

To this day, it is the first one that makes more sense to me: I put salt in my omelet not because the chemical bouquet of the egg shares the salt's only chemical, NaCl, but precisely because it is missing it. Yet lately, chefs and molecular gastronomers have been betting on the second hypothesis, and they have even given it a name: the food pairing principle. Its consequences are already on your table: some contemporary restaurants serve white chocolate with caviar, as they share trimethylamine and other flavor compounds, or chocolate with blue cheese, which share at least 73 flavor compounds. Yet evidence for food pairing is at best anecdotal, making a scientist like me ask: is this more than a myth?

So whom should I trust: my intuition or the molecular gastronomers? And how can we really test whether two ingredients go well together? Our first instinct was to taste all ingredient pairs under controlled conditions. Yet 300 ingredients offer about 44,850 pairs to sample, forcing us to search for smarter ways to settle the question. Having spent the last decade trying to understand the laws governing networks, from the social network to the intricate web of genes governing our cells, we decided to rely on network science. We therefore compiled the flavor compounds of over 300 ingredients and organized them into a network, linking two ingredients if they share flavor compounds. We then used the collective intelligence accumulated in the existing body of recipes to test what goes with what. If two common ingredients are almost never combined, like garlic and vanilla, there must be a reason for it—those who tried the combination may have found it either uninspired or outright repulsive. If, however, two ingredients appear together more often than we would expect from their individual popularity, we took that as a sign that they must taste good together. Tomato and garlic are in this category, combined in 12 percent of all recipes.
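The scoring idea can be sketched on a toy corpus. The six recipes below are made up purely for illustration (the real study used tens of thousands); a pair scores above 1 when it co-occurs more often than the ingredients' individual popularity would predict, and near 0 when it is avoided:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy corpus: each recipe is a set of ingredients.
recipes = [
    {"tomato", "garlic", "basil"},
    {"tomato", "garlic", "onion"},
    {"tomato", "basil", "mozzarella"},
    {"garlic", "onion", "chicken"},
    {"vanilla", "cream", "sugar"},
    {"vanilla", "sugar", "egg"},
]

n = len(recipes)
ingredient_count = Counter(i for r in recipes for i in r)
pair_count = Counter(frozenset(p) for r in recipes
                     for p in combinations(sorted(r), 2))

def pairing_score(a, b):
    """Observed co-occurrence frequency divided by what the two
    ingredients' independent popularity would predict
    (>1: paired more than chance; <1: avoided)."""
    observed = pair_count[frozenset((a, b))] / n
    expected = (ingredient_count[a] / n) * (ingredient_count[b] / n)
    return observed / expected
```

Even in this tiny corpus, tomato and garlic score above 1, while garlic and vanilla, never combined, score 0.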

The truth is rather Dr. Seussian at the end: we may like some combinations here, but not there. That is, North American and Western European cuisines show a strong tendency to combine ingredients that share chemicals. Hence, color matching dominates much of Western cuisine. If you are here, serve parmesan with papaya and strawberry with beer. Do not try this there, however: East Asian cuisine thrives through avoiding ingredients that share flavor chemicals. So if you hail from Asia, yin yang is your guiding force: seeking harmony through pairing the polar opposites. Do you like soy sauce with honey? Try them and you may, I say.

Neuroscientist; Collège de France, Paris; Author, The Number Sense; Reading In the Brain

The Universal Algorithm For Human Decisions

The ultimate goal of science, as the French physicist Jean Perrin once stated, should be "to substitute an invisible simplicity for the visible complexity". Can human psychology achieve this ambitious goal: the discovery of elegant rules behind the apparent variability of human thought? Many scientists still consider psychology a "soft" science, whose methods and object of study are too fuzzy, too complex, and too suffused with layers of cultural complexity to ever yield elegant mathematical generalizations.

And yet cognitive scientists know that this prejudice is simply wrong. Human behavior obeys rigorous laws of the utmost mathematical beauty and even necessity. I will nominate just one of them: the mathematical law by which we take our decisions.

All of our mental decisions appear to be captured by a simple rule that weaves together some of the most elegant mathematics of the past centuries: Brownian motion, Bayes' rule, and the Turing machine.

Let us start with the simplest of all decisions: how do we decide that 4 is smaller than 5? Psychological investigation reveals many surprises behind this simple feat. First, our performance is very slow: the decision takes us nearly half a second, from the moment the digit 4 appears on a screen to the point when we respond by clicking a button. Second, our response time is highly variable from trial to trial, anywhere from 300 milliseconds to 800 milliseconds, even though we are responding to the very same digital symbol "4". Third, we make errors—it sounds ridiculous, but even when comparing 4 with 5, we sometimes make the wrong decision. Fourth, our performance varies with the meaning of the objects: we are much faster, and make fewer errors, when the numbers are far from each other (such as 1 and 5) than when they are close (such as 4 and 5).

Well, all of the above facts, and many more, can be explained by a single law: our brain takes decisions by accumulating the available statistical evidence and committing to a decision whenever the total exceeds a threshold.

Let me unpack this statement. The problem the brain faces when taking a decision is one of sifting the signal from the noise. The input to any of our decisions is always noisy: photons hit our retina at random times, neurons transmit information with only partial reliability, and spontaneous neural discharges (spikes) are emitted throughout the brain, adding noise to any decision. Even when the input is a digit, neuronal recordings show that the corresponding quantity is encoded by a noisy population of neurons that fires at semi-random times, with some neurons signaling "I think it's 4", others "it's close to 5" or "it's close to 3", and so on. Because the brain's decision system sees only unlabelled spikes, not full-fledged symbols, separating the wheat from the chaff is a genuine problem.

In the presence of noise, how should one take a reliable decision? That problem was first addressed mathematically by Alan Turing while he was cracking the Enigma code at Bletchley Park. Turing found a small glitch in the Enigma machine, which meant that some of the German messages contained small amounts of information—but unfortunately too little to be certain of the underlying code. Turing realized that Bayes' rule could be exploited to combine all of the independent pieces of evidence. Skipping the math, Bayes' rule provides a simple way to sum all of the successive hints, plus whatever prior knowledge we have, in order to obtain a combined statistic that tells us what the total evidence is.

With noisy inputs, this sum fluctuates up and down, as some incoming messages support the conclusion while others merely add noise. The outcome is what mathematicians call a "random walk" or "Brownian motion", a fluctuating march of numbers as a function of time. In our case, however, the numbers have a currency: they represent the likelihood that one hypothesis is true (e.g. the probability that the input digit is smaller than 5). Thus, the rational thing to do is to act as a statistician and wait until the accumulated statistic exceeds a threshold probability value. Setting it to p=0.999 would mean that we have one chance in a thousand of being wrong.

Note that we can set this threshold to any arbitrary value. However, the higher we put it, the longer we have to wait for a decision. There is a speed-accuracy trade-off: we can wait a long time and take a very accurate but conservative decision, or we can hazard a response earlier, but at the cost of making more errors. Whatever our choice, we will always make a few errors.
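The whole accumulate-to-threshold rule fits in a few lines. Here is a minimal simulation (the drift, noise, and threshold values are arbitrary choices for illustration, not fitted to human data):

```python
import random

def decide(drift=0.02, noise=0.3, threshold=3.0):
    """Accumulate noisy evidence for the correct answer until the total
    crosses +threshold (correct choice) or -threshold (error).
    Returns (was_correct, response_time_in_steps)."""
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0, noise)  # signal plus noise
        t += 1
    return evidence > 0, t

random.seed(0)
trials = [decide() for _ in range(2000)]
accuracy = sum(ok for ok, _ in trials) / len(trials)
rts = [t for _, t in trials]
```

Even though every trial receives the identical drift, response times vary widely from trial to trial and occasional errors occur; raising `threshold` buys accuracy at the cost of slower responses, which is exactly the speed-accuracy trade-off.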

Suffice it to say that the decision algorithm I sketched, which simply describes what any rational creature should do in the face of noise, is now considered a fully general mechanism for human decision making. It explains our response times, their variability, and the entire shape of their distribution. It describes why we make errors, how errors relate to response time, and how we set the speed-accuracy trade-off. It applies to all sorts of decisions, from sensory choices (did I see movement or not?) to linguistics (did I hear "dog" or "bog"?) and to higher-level conundrums (should I do this task first or second?). And in more complex cases, such as performing a multi-digit calculation or a series of tasks, the model characterizes our behavior as a sequence of accumulate-and-threshold steps, which turns out to be an excellent description of our serial, effortful Turing-like computations.

Furthermore, this behavioral description of decision-making is now leading to major progress in neuroscience. In the monkey brain, neurons can be recorded whose firing rates index an accumulation of relevant sensory signals. The theoretical distinction between evidence, accumulation and threshold helps parse out the brain into specialized subsystems that "make sense" from a decision-theoretic viewpoint.

As with any elegant scientific law, many complexities are waiting to be discovered. There is probably not just one accumulator but many, as the brain accumulates evidence at each of several successive levels of processing. Indeed, the human brain increasingly fits the bill for a superb Bayesian machine that makes massively parallel inferences and micro-decisions at every stage. Many of us think that our sense of confidence, stability, and even conscious awareness may result from such higher-order cerebral "decisions" and will ultimately fall prey to the same mathematical model. Valuation is also a key ingredient that I skipped, although it demonstrably plays a crucial role in weighing our decisions. Finally, the system is rife with priors, biases, time pressures, and other top-down evaluations that draw it away from strict mathematical optimality.

Nevertheless, as a first approximation, this law stands as one of the most elegant and productive discoveries of twentieth-century psychology: humans act as near-optimal statisticians, and any of our decisions corresponds to an accumulation of the available evidence up to some threshold.

Distinguished Professor of Physiology, Pharmacology, and Neurology, State University of New York Downstate Medical Center

The Elementary Particles of Memory

The most complicated structure in the universe known to us is the human brain, composed of 100 billion neurons. There are two aspects of this complexity. First is the circuit plan of our brains, encoded by the sequence of A, C, G, and T's in DNA, together with its epigenetic changes. This circuitry is our behavioral inheritance as Homo sapiens, endowing us with "vegetative" functions, like sleeping and eating, a basic cognitive toolbox, including perhaps the basics of grammar and counting, and individual tendencies, such as resilience or susceptibility to stress.

But the information encoding this complicated circuitry is minuscule compared with the range and capacity of human thought. An individual thought is due to the firing of a specific population of neurons, which is determined by ongoing sensory input in the present, and the ensembles of neurons within the basic circuitry that have been linked together by experiences in the past — our memories. This linkage, the physical basis of memory, is due to the experience-dependent, persistent strengthening or weakening of synapses, which allows specific groups of neurons to fire together more easily, even if only a few of the neurons in the group are activated by a sensory input. Each neuron has ~10,000 synaptic connections with other neurons. Thus the patterns created by the potential strengthening and weakening of a quadrillion synapses determine the number of potential memories in an individual.

Underlying this complexity, however, is a simple and deep organization to the changes in synaptic strength that keep these populations of neurons together to sustain long-term memories— a handful of physiological processes maintained by a few essential molecules. The first of these physiological processes, called long-term potentiation (LTP), was discovered in 1966 by Terje Lømo, who worked with Tim Bliss to produce the first systematic study of LTP in 1973. LTP is a persistent strengthening of synaptic connections triggered by a brief episode of high-frequency activity of those connections. The second physiological mechanism to store long-term information was discovered by Gary Lynch and Tom Dunwiddie in 1978, and is the inverse of LTP, long-term depression (LTD), a persistent weakening of synapses that is triggered by a different pattern of activity.

There are hundreds of molecules in the synapse that regulate the formation of LTP, but only a few that maintain the persistent strengthening over time. The key molecule maintaining LTP is a persistently active enzyme, called PKMzeta. Together with the molecules maintaining LTD that are still being determined, these elementary molecules store most forms of memory. Without the persistent strengthening of synapses by PKMzeta, the ongoing physiological process of LTP at the synapse collapses, and most long-term memories are erased. The animal returns to a "blank slate," with just its genetic inheritance of behavior.

The molecular mechanism for the formation of PKMzeta likely evolved over 500 million years ago in the Cambrian period. This event was a mutation in a gene critical for the development of polarized structures in cells, such as the wall of an epithelial cell that faces the inside of the gut, or a synapse of a neuron. The gene is for a protein kinase, an enzyme that catalyzes chemical reactions. But PKMzeta is an unusual form of protein kinase. Once made when LTP is triggered, PKMzeta is active all the time, rather than being turned on and off in response to other molecules. When mutations occur in genes for kinases that render them active all the time, they promote uncontrolled growth in cells, leading to cancer. However, the change in the gene that encodes PKMzeta also restricts the formation of the persistently active kinase to neurons. Because mature neurons are tethered to thousands of other neurons through their synapses, they cannot possibly divide and are remarkably resistant to forming cancers (most brain tumors in adults originate from glial cells, which readily divide). By restriction to cells that communicate but cannot divide, a mutation signaling continual growth, potentially deadly to an organism, was used to maintain long-term memory.

Without this accident in the Cambrian, life would have continued, the nervous systems of vertebrates would have evolved, and the behaviors of animals might have become quite complex, but no information could have been stored other than the slow accretion of instinct by natural selection. There would be no record of an animal's experience within its life, and for humans no culture, no knowledge of the world. For an individual, a consciousness that reflects on the narrative of one's life would be inconceivable.

Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, The Greatest Show on Earth, The Magic of Reality

Redundancy Reduction and Pattern Recognition

Deep, elegant, beautiful? Part of what makes a theory elegant is its power to explain much while assuming little. Here, Darwin's natural selection wins hands down. The ratio of the huge amount that it explains (everything about life: its complexity, diversity and illusion of crafted design) divided by the little that it needs to postulate (non-random survival of randomly varying genes through geological time) is gigantic. Never in the field of human comprehension were so many facts explained by assuming so few. Elegant then, and deep—its depths hidden from everybody until as late as the nineteenth century. On the other hand, for some tastes natural selection is too destructive, too wasteful, too cruel to count as beautiful. In any case, coming late to the party as ever, I can count on somebody else choosing Darwin. I'll take his great grandson instead, and come back to Darwin at the end.

Horace Barlow FRS is the youngest grandchild of Sir Horace Darwin, Charles Darwin's youngest child. Now a very active ninety, Barlow is a member of a distinguished lineage of Cambridge neurobiologists. I want to talk about an idea that he published in two papers in 1961, on redundancy reduction and pattern recognition. It is an idea whose ramifications and significance have inspired me throughout my career.

The folklore of neurobiology includes a mythical 'grandmother neurone', which fires only when a very particular image, the face of Jerry Lettvin's grandmother, falls on the retina (Lettvin was a distinguished American neurobiologist who, like Barlow, worked on the frog retina). The point is that Lettvin's grandmother is only one of countless images that a brain is capable of recognising. If there were a specific neurone for everything we can recognise—not just Lettvin's grandmother but lots of other faces, objects, letters of the alphabet, flowers, each one seen from many angles and distances—we would have a combinatorial explosion. If sensory recognition worked on the 'grandmother principle', the number of specific recognition neurones for all possible combinations of nerve impulses would exceed the number of atoms in the universe. Independently, the American psychologist Fred Attneave had calculated that the volume of the brain would have to be measured in cubic light years. Barlow and Attneave independently proposed redundancy reduction as the answer.

Claude Shannon, inventor of Information Theory, coined 'redundancy' as a kind of inverse of information. In English, 'q' is always followed by 'u', so the 'u' can be omitted without loss of information. It is redundant. Wherever redundancy occurs in a message (which is wherever there is nonrandomness), the message can be more economically recoded without loss of information (although with some loss in capacity to correct errors). Barlow suggested that, at every stage in sensory pathways, there are mechanisms tuned to eliminate massive redundancy.
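Shannon's 'q'-then-'u' example can be turned into a two-line codec. A toy sketch (it assumes ordinary English spelling; loanwords such as 'Iraqi' would break the rule, which is exactly the loss of error-correcting slack Shannon warned about):

```python
def squeeze(text):
    """Drop each 'u' that immediately follows a 'q': redundant in
    ordinary English spelling, so no information is lost."""
    out, prev = [], ''
    for ch in text:
        if not (ch == 'u' and prev == 'q'):
            out.append(ch)
        prev = ch
    return ''.join(out)

def unsqueeze(text):
    """Reinsert a 'u' after every 'q' to recover the original."""
    return text.replace('q', 'qu')

msg = "the queen asked a quick question"
assert unsqueeze(squeeze(msg)) == msg   # shorter message, nothing lost
```

The squeezed message is three characters shorter yet decodes perfectly: the non-randomness of English paid for the saving.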

The world at time t is not greatly different from the world at time t-1. Therefore it is not necessary for sensory systems continuously to report the state of the world. They need only signal changes, leaving the brain to assume that everything not reported remains the same. Sensory adaptation is a well-known feature of sensory systems, and it does precisely what Barlow prescribed. If a neurone is signalling temperature, for example, its firing rate is not, as one might naively suppose, proportional to the temperature. Instead, the firing rate increases only when there is a change in temperature, then dies away to a low resting frequency. The same is true of neurones signalling brightness, loudness, pressure and so on. Sensory adaptation achieves huge economies by exploiting the non-randomness in the temporal sequence of states of the world.
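In code, sensory adaptation amounts to delta encoding: transmit a value only when it changes, and let the receiver assume "same as before" for every silent moment. A minimal sketch with a made-up temperature trace:

```python
def encode_changes(readings):
    """Report a (time, value) pair only when the signal changes,
    mimicking sensory adaptation: silence means 'same as before'."""
    events, last = [], None
    for t, value in enumerate(readings):
        if value != last:
            events.append((t, value))
            last = value
    return events

def decode_changes(events, length):
    """The 'brain' reconstructs the full signal from change events."""
    signal, last = [], None
    pending = dict(events)
    for t in range(length):
        if t in pending:
            last = pending[t]
        signal.append(last)
    return signal

temps = [20, 20, 20, 23, 23, 23, 23, 21, 21, 21]
events = encode_changes(temps)   # [(0, 20), (3, 23), (7, 21)]
```

Ten readings shrink to three events, and `decode_changes(events, len(temps))` recovers the original trace exactly; the saving grows with the redundancy of the world.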

What sensory adaptation achieves in the temporal domain, the well-established phenomenon of lateral inhibition does in the spatial domain. If a scene in the world falls on a pixellated screen, such as the back of a digital camera or the retina of an eye, most pixels see the same as their immediate neighbours. The exceptions are those pixels which lie on edges, boundaries. If every retinal cell faithfully reported its light value to the brain, the brain would be bombarded with a massively redundant message. Huge economies can be achieved if most of the impulses reaching the brain come from pixel cells lying along edges in the scene. The brain then assumes uniformity in the spaces between edges.

As Barlow pointed out, this is exactly what lateral inhibition achieves. In the frog retina, for example, every ganglion cell sends signals to the brain, reporting on the light intensity in its particular location on the surface of the retina. But it simultaneously sends inhibitory signals to its immediate neighbours. This means that the only ganglion cells to send strong signals to the brain are those that lie on an edge. Ganglion cells lying in uniform fields of colour (the majority) send few if any impulses to the brain because they, unlike cells on edges, are inhibited by all their neighbours. The spatial redundancy in the signal is eliminated.
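Lateral inhibition is easy to mimic in one dimension: let each 'ganglion cell' subtract the mean of its two neighbours, so that cells inside uniform patches cancel to zero and only cells along an edge keep firing. A toy sketch (the pixel values are invented for illustration):

```python
def lateral_inhibition(pixels):
    """Each cell's output is its own value minus the mean of its two
    neighbours (edge-of-array cells use themselves as the missing
    neighbour). Uniform regions cancel; edges produce strong signals."""
    n = len(pixels)
    out = []
    for i, p in enumerate(pixels):
        left = pixels[i - 1] if i > 0 else p
        right = pixels[i + 1] if i < n - 1 else p
        out.append(p - (left + right) / 2)
    return out

scene = [5, 5, 5, 5, 9, 9, 9, 9]        # a flat field with a single edge
response = lateral_inhibition(scene)     # non-zero only at indices 3 and 4
```

Only the two cells straddling the edge respond; the spatially redundant interior of each patch is silenced, just as in the frog retina.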

The Barlow analysis can be extended to most of what is now known about sensory neurobiology, including Hubel and Wiesel's famous horizontal and vertical line detector neurones in cats (straight lines are redundant, reconstructable from their ends), and the movement ('bug') detectors in the frog retina, discovered by the same Jerry Lettvin and his colleagues. Movement represents a non-redundant change in the frog's world. But even movement is redundant if it persists in the same direction at the same speed. Sure enough, Lettvin and colleagues discovered a 'strangeness' neurone in their frogs, which fires only when a moving object does something unexpected, such as speeding up, slowing down, or changing direction. The strangeness neurone is tuned to filter out redundancy of a very high order.

Barlow pointed out that a survey of the sensory filters of a given animal could, in theory, give us a read-out of the redundancies present in the animal's world. They would constitute a kind of description of the statistical properties of that world. Which reminds me, I said I'd return to Darwin. In Unweaving the Rainbow, I suggested that the gene pool of a species is a 'Genetic Book of the Dead', a coded description of the ancestral worlds in which the genes of the species have survived through geological time. Natural selection is an averaging computer, detecting redundancies—repeat patterns—in successive worlds (successive through millions of generations) in which the species has survived (averaged over all members of the sexually reproducing species). Could we take what Barlow did for neurones in sensory systems, and do a parallel analysis for genes in naturally selected gene pools? Now that would be deep, elegant and beautiful.

You don't have to be human to have a good idea. You can even be a fish.

There is a large fish in shallow Micronesian waters that feeds on little fish. The little fish dwell in holes in the mud but swarm out to feed. The big fish starts to gobble up the little fish, one by one, but when his meal has barely begun, they retreat back into their holes. What to do?

I have put this problem to my classes over the years, and I remember only one student who came up with the big fish's Good Idea. Of course he did it after a little thought, not after millions of years of evolution, but who's counting?

Here is the elegant trick. When the school of little fishes appears, instead of gobbling, he swims low so that his belly rubs over the mud and blocks the escape holes. Then he can dine at leisure.

What do we learn? To have a good idea, stop having a bad one. The trick is to inhibit the easy, obvious, but ineffective attempts, permitting a better solution to come to mind. That worked for the big fish, by some mechanism of mutation and natural selection in fish antiquity. Instead of tinkering with the obvious (obsessing about eating faster, taking bigger bites, and so on), junk plan A, and plan B comes swimming up. For humans: if the second solution still does not work, block that too, and wait. A third floats into awareness, and so on, until the insoluble is solved, even if the most intuitively obvious premises have to be overridden in the process.

To the novice, the Good Idea seems magical, a leap of intellectual lightning. More likely, however, it resulted from an iterative process like the one outlined above, with enough experience behind it to help reject seductive but misleading premises. Thus the extraordinary actually arises step by step out of the ordinary.

Having a good idea is far from rare in the evolution of non-human species. Indeed, many if not most species need to have an idea or trick that works well enough for them to continue to exist. Admittedly, they may not be able to extrapolate its principle from the context in which it emerged, and generalize it as (some) people can, courtesy of their prefrontal cortex.

When the finest minds fail to resolve a classical problem over decades or centuries of trying, they are probably trapped by a premise so culturally "given" that it does not even occur to them to challenge it, or they do not even notice it at all. But cultural context changes, and what seemed totally obvious yesterday becomes dubious at best today or tomorrow. Sooner or later someone who may be no more gifted than his or her predecessors, but who is unshackled from some very basic and very incorrect assumption, may hit upon the solution with relative ease.

Alternatively, one can be a fish, wait a million years or two, and see what comes up.

Associate Professor, Cognitive Science, University of California, San Diego; Author, Louder Than Words: The New Science of How the Mind Makes Meaning

Metaphors are in the mind

To me, the most exciting explanations start by being elegant—they reduce something complex to something simple. But they don't stop there. They're also powerful, in that they extend to phenomena other than the ones that they were originally proposed to account for. And they're useful—they inspire new research to flesh out their fundamental principles and test their predictions. And my latent anti-authoritarian streak leads me to also prefer explanations that question received wisdom. To find all four of these features together is the vanishingly rare scientific equivalent of a ninth inning, two-out, two-strike grand slam that wins the World Series.

I study language, and in my field, there have been a couple game-changing explanations like this over the centuries. One explains how languages change over time. Another explains why all languages share certain characteristics. But my personal favorite is the one that originally got me hooked on language and the mind. It's an explanation of metaphor.

When you look closely at how we use language, you find that a lot of what we say is metaphorical—we talk about certain things as though they were other things. We describe political campaigns as horse races: "Senator Jones has pulled ahead." Morality is cleanliness: "That was a dirty trick." And understanding is seeing: "New finding illuminates the structure of the universe."

People have known about metaphor for a very long time. Until the end of the 20th century, most everyone agreed on one particular explanation, neatly articulated by Aristotle and carried down through the centuries. Metaphor was seen as a strictly linguistic device—a kind of catchy turn of phrase—in which you call one thing by the name of another thing that it's similar to. This is probably the definition of metaphor you learned in high school English. On this view, you can metaphorically say that "Juliet is the sun" if and only if Juliet and the sun are similar—for instance, if they are both particularly luminous.

But in their 1980 book Metaphors We Live By, George Lakoff and Mark Johnson proposed an explanation for metaphorical language that flew in the face of this received wisdom. They reasoned that if metaphor is just a free-floating linguistic device based on similarity, then you should be able to metaphorically describe anything in terms of anything else that it's similar to. But Lakoff and Johnson observed that real metaphorical language as actually used isn't haphazard at all. Instead, it's systematic and coherent.

It's systematic in that you don't just metaphorically describe anything as anything else. Instead, it's mostly abstract things that you describe in terms of concrete things. Morality is more abstract than cleanliness. Understanding is more abstract than seeing. And you can't reverse the metaphors. While you can say "He's clean" to mean he has no criminal record, you can't say "He's moral" to mean that he bathed recently. Metaphor is unidirectional, from concrete to abstract.

Metaphorical expressions are also coherent with one another. Take the example of understanding and seeing. There are lots of relevant metaphorical expressions, for example "I see what you mean," and "Let's shed some light on the issue," and "Put his idea under a microscope and see if it actually makes sense." And so on. While these are totally different metaphorical expressions—they use completely different words—they all coherently cast certain aspects of understanding in terms of specific aspects of seeing. You always describe the understander as the seer, the understood idea as the seen object, the act of understanding as seeing, the understandability of the idea as the visibility of the object, and so on. In other words, the aspects of seeing you use to talk about aspects of understanding stand in a fixed mapping to one another.

These observations led Lakoff and Johnson to propose that there was something going on with metaphor that was deeper than just the words. They argued that the metaphorical expressions in language are really only surface phenomena, organized and generated by mappings in people's minds. For them, the reason metaphorical language exists and the reason why it's systematic and coherent is that people think metaphorically. You don't just talk about understanding as seeing; you think about understanding as seeing. You don't just talk about morality as cleanliness; you think about morality as cleanliness. And it's because you think metaphorically—because you systematically map certain concepts onto others in your mind—that you talk metaphorically. The metaphorical expressions are merely the visible tip of the iceberg.

As explanations go, this one covers all the bases. It's elegant in that it explains messy and complicated phenomena (the various metaphorical expressions we have that describe understanding as seeing, for instance) in terms of something much simpler—a structured mapping between the two conceptual domains in people's minds. It's powerful in that it explains things other than metaphorical language—recent work in cognitive psychology shows that people think metaphorically even in the absence of metaphorical language; affection as warmth, morality as cleanliness. As a result, the conceptual metaphor explanation helps to explain how it is that we understand abstract concepts like affection or morality at all—by metaphorically mapping them onto more concrete ones. In terms of utility, the conceptual metaphor explanation has generated extensive research in a variety of fields; linguists have documented the richness of metaphorical language and explored its diversity across the globe, psychologists have tested its predictions in human behavior, and neuroscientists have searched the brain for its physical underpinnings. And finally, the conceptual metaphor explanation is transformative—it flies in the face of the accepted idea that metaphor is just a linguistic device based on similarity. In an instant, it made us rethink 2000 years of received wisdom.

This isn't to say that the conceptual metaphor explanation doesn't have its weaknesses, or that it's the final word in the study of metaphor. But it's an explanation that casts a huge shadow. So to speak.

My first exposure to true elegance in science was through a short semi-popular book entitled SYMMETRY written by the renowned mathematician Hermann Weyl. I discovered the book in the fourth grade and have returned to reread passages every few years. The book begins with the intuitive aesthetic notion of symmetry for the general reader, drawing interesting examples from art, architecture, biological forms, and ornamental design. By the fourth and final chapter, though, Weyl turns from vagary to precise science as he introduces elements of group theory, the mathematics that transforms symmetry into a powerful tool.

To demonstrate its power, Weyl spends his final chapter outlining how group theory can be used to explain the shapes of crystals. Crystals have fascinated humans throughout history because of the beautiful faceted shapes they form. Most rocks contain an amalgam of different minerals, each of which is crystalline, but which have grown together or crunched together or weathered to the point that facets are unobservable. Occasionally, though, the same minerals form individual large faceted crystals. That is when we find them most aesthetically appealing. "Aluminum oxide" may not sound like something of value, but add a little chromium, give nature sufficient time and you have a ruby worthy of kings.

If you have not done so recently, I urge you to visit the mineral collection in a museum to observe the remarkable variety and beauty of crystal forms. You will discover for yourself a basic mineralogical fact: the crystal facets found in nature meet at only certain angles, corresponding to one of a small set of symmetries. But why does matter take some shapes and not others? What scientific information do the shapes convey? Weyl explains how these questions can be answered by seemingly unrelated abstract mathematics aimed at a different question: namely, what shapes can be used to tessellate a plane or fill space if the shapes are identical, meet edge-to-edge and leave no spaces? Squares, rectangles, triangles, parallelograms and hexagons can do the job. Perhaps you imagine that many other polygons would work as well; try, however, and you will discover there are no more possibilities. Pentagons, heptagons, octagons and all other regular polygons cannot fit together without leaving spaces. Weyl's little book describes the mathematics that allows a full classification of the possibilities, counting as distinct the patterns with decorated tiles and including reflections, glides and screw axes. The final tally is only 17 distinct possibilities in two dimensions (the so-called wallpaper patterns) and 230 distinct symmetry possibilities in three dimensions.
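The claim about regular polygons can be verified with a simple count: identical regular n-gons meeting edge-to-edge at a vertex must fill a full turn, so the interior angle 180(n-2)/n must divide 360 evenly. A small sketch (illustrative only, and restricted to regular polygons, unlike the rectangles and parallelograms mentioned above):

```python
# A regular n-gon can tile the plane edge-to-edge only if copies meeting
# at a vertex fill a full 360 degrees, i.e. its interior angle
# 180*(n-2)/n divides 360.  In exact integer arithmetic, that condition
# is (360 * n) % (180 * (n - 2)) == 0.
tilers = [n for n in range(3, 100) if (360 * n) % (180 * (n - 2)) == 0]
print(tilers)  # [3, 4, 6]: only triangles, squares and hexagons qualify
```

A little algebra shows why the list stops: the condition reduces to (n - 2) dividing 4, which admits only n = 3, 4 and 6.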

The stunning fact about the mathematicians' list was that it precisely matched the list observed for crystal shapes found in nature. The inference is that crystalline matter is like a tessellation made of indivisible, identical building blocks that repeat to make the entire solid. Of course, we know today that these building blocks are clusters of atoms or molecules. However, bear in mind that the connection between the mathematics and real crystals was made in the 19th century, when the atomic theory was still in doubt. It is amazing that an abstract study of tiles and building blocks can lead to a keen insight about the fundamental constituents of matter and a classification of all possible arrangements of them. It is a classic example of what physicist Eugene Wigner referred to as the "unreasonable effectiveness of mathematics in the natural sciences." The story does not end there. With the development of quantum mechanics, group theory and symmetry principles have been used to predict the electronic, magnetic, elastic and other physical properties of solids. Emulating this triumph, physicists have successfully used symmetry principles to explain the fundamental constituents of nuclei and elementary particles, as well as the forces through which they interact.

As a young student first reading Weyl's book, crystallography seemed like the "ideal" of what one should be aiming for in science: elegant mathematics that provides a complete understanding of all physical possibilities. Ironically, many years later, I played a role in showing that my "ideal" was seriously flawed. In 1984, Dan Shechtman, Ilan Blech, Denis Gratias and John Cahn reported the discovery of a puzzling manmade alloy of aluminum and manganese with icosahedral symmetry. Icosahedral symmetry, with its six five-fold symmetry axes, is the most famous forbidden crystal symmetry. As luck would have it, Dov Levine (Technion) and I had been developing a hypothetical idea of a new form of solid that we dubbed quasicrystals, short for quasiperiodic crystals. (A quasiperiodic atomic arrangement means the atomic positions can be described by a sum of oscillatory functions whose frequencies have an irrational ratio.) We were inspired by a two-dimensional tiling invented by Sir Roger Penrose known as the Penrose tiling, composed of two tiles arranged in a five-fold symmetric pattern. We showed that quasicrystals could exist in three dimensions and were not subject to the rules of crystallography. In fact, they could have any of the symmetries forbidden to crystals. Furthermore, we showed that the diffraction patterns predicted for icosahedral quasicrystals matched the Shechtman et al. observations. Since 1984, quasicrystals with other forbidden symmetries have been synthesized in the laboratory. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for his experimental breakthrough that changed our thinking about possible forms of matter. More recently, colleagues and I have found evidence that quasicrystals may have been among the first minerals to have formed in the solar system.

The crystallography I first encountered in Weyl's book, thought to be complete and immutable, turned out to be woefully incomplete, missing literally an uncountable number of possible symmetries for matter. Perhaps there is a lesson to be learned: While elegance and simplicity are often useful criteria for judging theories, they can sometimes mislead us into thinking we are right, when we are actually infinitely wrong.

We all know males and females are different below the neck. There is growing evidence that there are differences above the neck too. Looking into the mind reveals that on average females develop empathy faster and that on average males develop stronger interests in systems, or how things work. These are not necessarily differences in ability, but more differences in cognitive style and patterns of interest. These differences shouldn't stand in the way of achieving equal opportunities in society, or equal representation in all disciplines and fields, but such political aspirations are a separate issue to the scientific observation of cognitive differences.

Looking into the brain also reveals differences: for example, whilst on average males have larger brain volume, even correcting for height and weight, on average females reach their peak volume of grey and white matter at least a year earlier than males. There's also a difference in the number of neurons in the neocortex: on average, males have 23 billion and females have 19 billion, a 16% difference. Looking at regions within the brain also shows sex differences: for example, males on average have a larger amygdala (an emotion area) and females on average have a larger planum temporale (a language area). But all this talk of sex differences is just description. Ultimately what we want to know is what gives rise to these differences, and here is where I at least enjoy some deep, elegant, and beautiful explanations.

My favourite is foetal testosterone, since a few more drops of this special molecule seems to have 'masculinizing' effects on the development of the brain and the mind. The credit for this simple idea must go to Charles Phoenix and colleagues in 1959 (University of Kansas), an idea picked up again by Norman Geschwind in 1985 (Harvard). This is not the only masculinizing mechanism (another is the X chromosome) but it is one that has been elegantly dissected.

How scientists get to see the causal properties of foetal testosterone can however be through unethical animal experiments. Take for example a part of the amygdala called the medial posterodorsal (MePD) nucleus that is larger in male rats than in females. If you castrate the poor male rat (thereby depriving him of the main source of his testosterone) the MePD shrinks to the female volume in just 4 weeks. Or you can do the reverse experiment, giving extra testosterone to a female rat, which makes her MePD grow to the same size as a typical male rat, again in just 4 weeks.

In humans we look for more ethical ways of studying how foetal testosterone does its work! You can measure this special hormone in the amniotic fluid that bathes the foetus in the womb. It gets into the amniotic fluid by being excreted by the foetus, and so is thought to reflect the levels of this hormone in the baby's body and brain. We measured the baby's testosterone in this way and then simply waited for the baby to be born, and then invited them into an MRI brain scanner 10 years later. This allows a test of how individual differences in testosterone levels before birth shape the development of the human brain. In a new paper in the Journal of Neuroscience our group shows for example how the more testosterone there is in the amniotic fluid, the less grey matter in the planum temporale (that language area of the brain).

This fits with a finding we published some 10 years ago: that the more testosterone in the amniotic fluid, the smaller the child's vocabulary size, at the age of 2 years old. This helps make sense of a longstanding puzzle about why girls talk earlier than boys, and why boys are disproportionately represented in clinics for language delays and disorders, since boys in the womb produce at least twice as much testosterone as girls.

It also helps make sense of the puzzle of individual differences in rate of language development in typical children irrespective of their sex: why at 2 years old some children have huge vocabularies (600 words) and other children haven't even started talking. Foetal testosterone is not the only factor involved in language development (social influences matter too, since first-born children develop language faster than later-born children) but it seems to be a key part of the explanation. And foetal testosterone has been shown to be associated with a host of other sex-linked features, from eye contact to empathy, and from attention to detail to autistic traits.

Foetal testosterone is tricky to get your hands on, since the last thing a scientist wants to do is interfere with the delicate homeostasis of the uterine environment. In recent years a proxy for foetal testosterone has been proposed: the ratio between the second and fourth finger digit lengths (or 2D:4D ratio). Males have a lower ratio than females in the population, and this is held to be set during foetal life and remains stable throughout one's life. So scientists no longer have to think of imaginative ways to measure the testosterone levels directly in the womb. They can simply take a xerox of someone's hand, palm down, at any time in their life, to measure a proxy for levels of testosterone in the womb.

[Image credit: Linda Wooldridge and Mathew Clement, in "Resolving the role of prenatal sex steroids in the development of digit ratio", by John T. Manning, PNAS.]

I was skeptical of the 2D:4D measure for a long time, simply because it made little sense that how long your 2nd and 4th fingers were should have anything to do with your hormones prenatally. But just last year, in Proceedings of the National Academy of Sciences, Zheng and Cohn showed how even in mice paws, the density of receptors for testosterone and oestrogen varies in the 2nd and 4th digits, making another beautiful explanation for why your finger ratio length is directly affected by these hormones. That same hormone that masculinizes your brain is at work at your fingertips.

Professor of Journalism, New York University; former journalist, Science Magazine; Author, Proofiness: The Dark Arts of Mathematical Deception

The Power Of One, Two, Three

Sometimes even the simple act of counting can tell you something profound.

One day, back in the late 1990s, when I was a correspondent for New Scientist magazine, I got an e-mail from a flack waxing rhapsodic about an extraordinary piece of software. It was a revolutionary data-compression program so efficient that it would squash every digital file by 95% or more without losing a single bit of data. Wouldn't my magazine jump at the chance to tell the world about the program that would make hard drives hold 20 times more information than ever before?

No, my magazine wouldn't.

No such compression algorithm could possibly exist; it was the algorithmic equivalent of a perpetual motion machine. The software was a fraud.

The reason: the pigeonhole principle.

The pigeonhole principle is a simple counting argument. It says that if you've got N pigeons and manage to stuff them into fewer than N boxes, then at least one box must have more than one pigeon in it. As blindingly obvious as this is, it's a powerful tool.

For example, imagine that the compression software really worked as advertised, and every file is shrunk by a factor of 20 without any loss of fidelity. Every single file 2000 bits long will be squashed down into a mere 100 bits, and then, when the algorithm is reversed, it expands back into its original form, unscathed.

When compressing files, you bump up against the pigeonhole principle. There are many more 2000-bit pigeons (2^2000, to be exact) than 100-bit boxes (2^100). If an algorithm stuffs the former into the latter, at least one box must contain multiple pigeons. Take that box—that 100-bit file—and reverse the algorithm, expanding the file into its original 2000-bit form. You can't! Since there are multiple 2000-bit files which all wind up being squashed into the same 100-bit file, the algorithm has no way of knowing which one was the true original—it can't reverse the compression.

The pigeonhole principle puts an absolute limit on what a compression algorithm can do. It can compress some files—often dramatically—but it can't compress them all, at least if you insist on perfect fidelity.
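The counting argument can be played out in a few lines of code. This toy sketch uses hypothetical miniature file sizes (8 bits down to 4 bits, with simple truncation standing in for any deterministic compressor); the collision it exhibits is exactly the pigeonhole principle at work:

```python
from itertools import product
from collections import Counter

# There are 2**8 = 256 possible 8-bit originals (pigeons) but only
# 2**4 = 16 possible 4-bit compressed outputs (boxes).
pigeons = 2 ** 8
boxes = 2 ** 4
assert pigeons > boxes  # more pigeons than boxes

def compress(bits):
    """Toy 'compressor': keep only the first 4 bits."""
    return bits[:4]

# Run every possible 8-bit input through the compressor and count how
# many originals land in each 4-bit box.
counts = Counter(compress(b) for b in product("01", repeat=8))

# At least one box must hold more than one original, so the mapping
# cannot be inverted: decompression would be ambiguous.
assert max(counts.values()) > 1
print(max(counts.values()))  # 16 originals share every box here
```

Any other deterministic map from 8 bits to 4 bits fares the same way; the specific compressor doesn't matter, only the counts do.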

Counting arguments similar to this one have opened up entire new realms for us to explore. Georg Cantor used a kind of reverse-pigeonhole-principle technique to show that it was impossible to fit the real numbers into boxes labeled by the integers—even though there are an infinite number of integers. The almost unthinkable consequence was that there were different levels of infinity; the infinity of the integers was dwarfed by the infinity of the reals, which, in turn, is dwarfed by yet another infinity, and another infinity on top of that... an infinity of infinities, all unexplored until we learned to count them.
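Cantor's trick, the diagonal argument, can be miniaturized into a few lines. This is a toy finite version (real diagonalization works on infinite sequences), but the move is the same: build a sequence that differs from the k-th listed sequence in its k-th digit, so it can appear nowhere in the list:

```python
def diagonal(listed):
    # Flip the k-th bit of the k-th sequence: the result differs from
    # every sequence in the list in at least one position.
    return "".join("1" if seq[k] == "0" else "0"
                   for k, seq in enumerate(listed))

listed = ["0000", "0101", "1110", "1001"]  # a toy "complete enumeration"
d = diagonal(listed)

# The diagonal sequence disagrees with entry k at position k...
assert all(d[k] != listed[k][k] for k in range(len(listed)))
# ...so no enumeration of this kind can ever contain it.
assert d not in listed
print(d)  # → 1000
```

For infinitely many infinite binary sequences the same construction goes through unchanged, which is why no list indexed by the integers can exhaust the reals.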

Taking the pigeonhole principle into deep space has an even stranger consequence. A principle in physics, the holographic bound, implies that in any finite volume of space, there are only a finite number of possible configurations of matter and energy. If, as cosmologists tend to believe, the universe is infinite, there are an infinite number of visible-universe-sized volumes out there—enormous cosmos-sized bubbles containing matter and energy. And if space is more or less homogeneous, there's nothing particularly special about the cosmos-sized bubble we live in. These assumptions, taken together, lead to a stunning conclusion. An infinite number of universe-sized bubbles with only a finite number of possible configurations of matter and energy means that there's not just an exact copy of our universe—and our Earth—out there; the transfinite version of the pigeonhole principle states that there's an infinite number of copies of every (technically, "almost every," which has a precise mathematical definition) possible universe. Not only are there infinite copies of you on infinite alternate Earths, there are infinite copies of countless variations upon the theme: versions of you with a prehensile tail, versions of you with multiple heads, versions of you that have made a career juggling carnivorous rabbit-like animals in exchange for costume jewelry.

Even something as simple as counting one, two, three can lead to a completely unexpected realm.

In one of his celebrated just-so stories, Rudyard Kipling recounted how the leopard got his spots. But taking this approach to its logical conclusion, we would need distinct stories for every animal's pattern: the leopard's spots, the cow's splotches, the panther's solid colors. And we would have to add even more stories for the complex patterning of everything from molluscs to tropical fish.

But far from these different animals requiring separate and distinct explanations, there is a single underlying explanation that shows how we can get all of these varied and different patterns using a single unified theory.

Beginning in 1952, with Alan Turing's publication of a paper entitled "The Chemical Basis of Morphogenesis", scientists recognized that a simple set of mathematical formulas could dictate how patterns and colorings form in animals. This model is known as a reaction-diffusion model and works in a simple way: imagine you have multiple chemicals, which diffuse over a surface at different rates and can interact. While in most cases, diffusion simply creates a uniformity of a given chemical—think how pouring cream into coffee will eventually spread and dissolve and create a lighter brown—when multiple chemicals diffuse and interact, this can give rise to non-uniformity. Even though this sounds somewhat counterintuitive, not only can it occur, but it can be generated using only a simple set of equations, and in turn explain the exquisite variety of patterns seen in the animal world.

Mathematical biologists have been exploring the properties of reaction-diffusion equations ever since Turing's paper. They've found that varying the parameters can generate the animal patterns we see. Some mathematicians have even examined the ways in which the size and shape of the surface can dictate the patterns that we see. As the size parameter is modified, we can easily go from such patterns as giraffe-like to those seen on Holstein cows.

This elegant model can even yield simple predictions. For example, while a spotted animal can have a striped tail (and very often does) according to the model, a striped animal will never have a spotted tail. And this is exactly what we see! These equations can generate the endless variation seen in Nature, but can also show the limitations inherent in biology. The just-so of Kipling may be safely exchanged for the elegance and generality of reaction-diffusion equations.
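A minimal reaction-diffusion sketch makes the idea concrete. This uses the Gray-Scott model, one common modern choice of reaction-diffusion system; the grid size, diffusion rates, and feed/kill parameters below are illustrative assumptions, not values from Turing's paper:

```python
import numpy as np

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.060, seed=0):
    """Evolve two interacting chemicals U and V on an n-by-n grid."""
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # Perturb a central square so a pattern can nucleate.
    m = n // 2
    U[m-5:m+5, m-5:m+5] = 0.50
    V[m-5:m+5, m-5:m+5] = 0.25
    V += 0.01 * rng.random((n, n))

    def lap(A):
        # 5-point Laplacian with periodic (wrap-around) boundaries.
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)

    for _ in range(steps):
        uvv = U * V * V                      # the reaction term
        U += Du * lap(U) - uvv + f * (1 - U)  # U diffuses fast, is fed in
        V += Dv * lap(V) + uvv - (f + k) * V  # V diffuses slowly, decays
    return U, V

U, V = gray_scott()
# Diffusion alone would flatten V toward uniformity; the reaction term
# instead leaves a spatially structured, non-uniform pattern.
print(float(V.std()) > 0.001)
```

Varying the feed and kill parameters (f, k) shifts the outcome between spot-like and stripe-like regimes, mirroring the way parameter changes in such models move between giraffe-like and Holstein-like coats.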

Mallinckrodt Professor of Physics and Professor of the History of Science, Emeritus, Harvard University; Author, Einstein for the 21st Century: His Legacy in Science, Art, and Modern Culture

The Discontinuity of Science and Culture

From time to time, large sections of humanity find themselves, at short notice, in a different universe. Science, culture and society have undergone a tectonic shift, for better or worse—the rise of a powerful religious or political leader, the Declaration of Independence, the end of slavery—or, on the other hand, the fall of Rome, the Great Plague, the World Wars.

So, too in the world of art. Thus, Virginia Woolf said famously, "In or about December 1910, human character changed", owing, in her view, to the explosive exhibition of post-impressionist canvases in London that year. And after the discovery of the nucleus was announced, Wassily Kandinsky wrote: "The collapse of the atom model was equivalent, in my soul, to the collapse of the whole world. Suddenly, the thickest walls fell...", and he could turn to a new way of painting.

Each such world-view-changing occurrence tends to be deeply puzzling or anguishing. They are sudden fissures in the familiar fabric of history that ask for explanations, with treatises published year after year, each hoping to provide an answer, seeking the cause of the dismay.

I will here focus on one such phenomenon.

In 1611, John Donne published his poem, "The First Anniversary", containing the familiar lines "And new Philosophy has all in doubt,/ the Element of fire is quite put out..." and later, " ...Is crumbled out againe to his Atomies/ 'Tis all in peeces, all cohaerence gone/ All just supply and all Relation." He and many others felt the old order and unity had been displaced by relativism and discontinuity.

The explanation for his anguish was an entirely unexpected event the year before: Galileo's discovery that the moon has mountains, that Jupiter has moons, and that there are immensely more stars than had been known.

Of this happening and its consequent findings, the historian Marjorie Nicolson wrote: "We may perhaps date the beginning of modern thought from the night of January 7, 1610 when Galileo, by means of the instrument he developed [the telescope], thought he perceived new planets and new, expanded worlds."

Indeed, by his work Galileo gave a deep and elegant explanation of how our cosmos is arranged—no matter how painful this may have been to the Aristotelians and poets of his time. At last, the Copernican theory, formulated long before, had more credibility. From this vast step forward, new science and new culture could be born.

I learned electrodynamics from Mark Heald and his concise text on an even more concise set of equations, Maxwell's. In 4 lines, just 31 characters (or less with some notational tricks), Maxwell's equations unified what had appeared to be unrelated phenomena (the dynamics of electric and magnetic fields), predicted new experimental observations, and contained both theoretical advances to come (including the wave solution for light, and special relativity) and technologies to come (including the fiber optics, coaxial cables, and wireless signals that carry the Internet).
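For reference, the four equations can be written, in SI units and differential form, as:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{aligned}
```

The character count quoted above depends on the notation chosen; in relativistic tensor form the same content compresses further still.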

But the explanation that I found to be so memorable was not Maxwell's explanation of electromagnetism, which is well known for its beauty and consequence; it was Mark's, that electric field lines behave like furry rubber bands: they want to be as short as possible (the rubber) but don't want to be near each other (the fur). This is an easily understood qualitative description that has stood me in good stead in device design. And it provides a deeper, quantitative insight into the nature of Maxwell's equations: the local solution for the field geometry can be understood as solving a global optimization.

These sorts of scientific similarities that are predictive as well as descriptive help us reason about regimes that our minds didn't evolve to operate in. Unifying forces is not an everyday occurrence, but explaining them can be. Recognizing that something is precisely like something is a kind of object-oriented thinking that helps build bigger thoughts out of smaller ideas.

I understood Berry's phase for spinors by trying to rotate my hand while holding up a glass; I mastered NMR spin echoes by swinging my arms while I revolved; the alignment of semiconductor Fermi levels at a junction made sense when explained as filling buckets with water. Like furry rubber bands and electric fields, these relationships represent analogies between governing equations. Unlike words, they can be exact, providing explanations that connect unfamiliar formalism with familiar experience.

George Orwell's Big Brother taught us what we need to know about authoritarian forms of social control. Mount perpetual war and call it peace. Debase language. Efface memory. Surround people with screens broadcasting propaganda and, at the same time, use these screens for surveillance.

But what if the screens that surround us are not objects of fear, but things we desire? What if propaganda and surveillance are delivered through the mediated images we crave?

This opposing model of social control—where we are enslaved not by what we fear, but by what we love—was described in 1932 by Aldous Huxley, George Orwell's former teacher at Eton. In Huxley's Brave New World, people delight in celebrity culture, sex tapes, recreational drugs, elaborate games, and pornographic films designed to rev up audiences for their own promiscuous pairings.

Orwell and Huxley recognized the same historical amnesia, illiteracy, and debasement of language. They identified similar hierarchies of social control and propaganda regimes, enforced by surveillance. But one world was ruled by fear, the other by love—or at least the simulated desire that today passes for love. As we delight in our screens and the happy images that bathe us in their networked glow, our pleasures present one of today's greatest perils.

Senior Consultant (& former Ed-in-Chief & Publishing Director) at New Scientist; Author, After the Ice: Life, Death, and Geopolitics in the New Arctic

Deep Time

There is one simple and powerful idea that strikes me as both deep and beautiful in its own right and as the mother of a suite of further elegant theories and explanations. The idea is that of "deep time": that the Earth is extremely old and the life of our species on it has been very short. When that idea first emerged it stood against everything that was then believed and it was to eventually change people's view of themselves as much as the earlier discovery that the Earth revolved around the Sun.

We know when the idea of deep time was born, or at least first vindicated, for a University of Edinburgh professor named John Playfair recorded his reaction in 1788. "The mind seemed to grow giddy", he wrote, "by looking so far into the abyss of time." He had travelled to the Scottish coast with his geologist friend James Hutton, who later put his ideas together in a book called the Theory of the Earth. Hutton was showing him a set of distinct patterns in the rocks that could be most simply explained by assuming that the present land had been laid down in the sea, then lifted, distorted, eroded and once again covered by new sediments at the bottom of a sea. The Earth was not six thousand years old, as then accepted calculations from the Bible decreed; nor had the strata of land precipitated out of a vast flood, as prevailing scientific views, informed by the best chemistry of the time, said.

It was an enormous shift to see the world as Hutton did. Appreciating the vastness of space is easy. When we look up at the stars the immensity of the universe is both obvious and awe inspiring. The immensity of time does not lie within human experience. Nature, observed on a human scale, passes only through the repeated cycle of the seasons, interrupted by occasional catastrophic earthquakes, volcanic eruptions and floods. It is for that reason that creationist and catastrophic theories of the Earth's origins appeared more plausible than those that were slow and gradual. But Hutton had faith in what he saw in the rocks, exhorting others to "open the book of Nature and read in her records".

His thinking about time created fertile ground for other grand theories. With huge spans of time available, then imperceptibly slow processes could shape the natural world. After Hutton came modern geology, then the theory of evolution to explain how new species slowly arose, and eventually a theory of the gradual movement of the continents themselves. All are grounded in deep time.

Hutton's views were a huge challenge to religious orthodoxy too, for when he wrote at the close of his book, "we find no vestige of a beginning--no prospect of an end", he challenged both the idea of a creation and of a judgement day.

The beauty of his idea remains. If we look into the abyss of time, we may not grow giddy, but we can simultaneously feel our own insignificance in the Earth's 4.6 billion year history and the significance of the precise moment in this vast span of time in which we live.

Psychologist; Assistant Professor of Marketing, Stern School of Business, NYU; Author, Drunk Tank Pink: And Other Unexpected Forces that Shape How We Think, Feel, and Behave.

Larger Groups Produce Fewer Responses

The most elegant explanation in social psychology convinced me to pursue a Ph.D. in the field. Every few years, a prominent tragedy attracts plenty of media attention because no one does anything to help. Just before sunrise on an April morning in 2010, a man lay dying on a sidewalk in Queens. The man, a homeless Guatemalan named Hugo Alfredo Tale-Yax, had intervened to help a woman whose male companion began shouting and shaking her violently. When Tale-Yax intervened, the man stabbed him several times in the torso. For ninety minutes, Tale-Yax lay in a growing pool of his own blood as dozens of passers-by ignored him or stared briefly before continuing on their way. By the time firefighters arrived to help, the sun had risen and Tale-Yax had died.

Almost half a century earlier another New Yorker, Kitty Genovese, was attacked and ultimately killed while dozens of onlookers apparently failed to intervene. A New York Times writer decried the callousness of New Yorkers and experts claimed that life in the city had rendered them soulless. Just as commentators said in response to Tale-Yax's death, experts wondered how dozens of people with functioning moral compasses could possibly fail to help someone on the verge of death.

Social psychologists are taught to overcome the natural tendency to blame people for apparently bad behavior, and to look for explanations in the environment instead. Witnessing the vocal response to Genovese's death, social psychologists John Darley and Bibb Latane were convinced that something else about the situation explained why the bystanders failed to intervene. Their elegant insight was that human responses aren't additive in the same way that objects are additive. Whereas four light bulbs illuminate a room more effectively than three light bulbs, and three loudspeakers fill a room with noise more effectively than two loudspeakers, two people aren't always more effective than a single person. People second-guess situations, they stop to make sense of a chain of events before acting, and sometimes pride and the fear of looking foolish prevent them from acting at all.

In a series of brilliant studies, Darley and Latane videotaped students as they sat in a room that slowly filled with smoke. The experimenters pumped smoke into the room with a smoke machine, hidden behind a door in a room nearby, but the effect suggested that the door may have concealed a fire. When the students sat in the room alone, they usually left the room quickly and told the experimenter that something was amiss; but when the students sat in small groups of two, three or four, they often remained seated even as they lost sight of the other students through the pall of smoke. When interviewed later, the students said they chose not to act because they were embarrassed, because they weren't sure whether the smoke truly signaled an emergency, and because they relied on the other impassive students in the room to help them decide that the smoke was benign.

According to Darley and Latane, the patterns of thinking that distinguish us from objects and lower-order animals ultimately undermine our willingness to help when we try to understand those situations alongside other people who are equally confused.

Denumerable Infinities Are The Same Size; No Mind Emerges Through Sorting

My favorites: Cantor's explanation of why all denumerable infinities are the same size—why, e.g., the set of all integers is the same size as the set of all positive integers, or of all even integers—and why some infinities are bigger than others. (The set of all rational numbers is the same size as the set of all integers, but the set of all real numbers—terminating plus non-terminating decimals—is larger.) The set of all positive integers is the same size as the set of all even, positive integers—to see that, just line them up, one by one. 1 is paired with 2 (the first even positive integer), 2 is paired with 4, 3 with 6, 4 with 8 and so on.

You would think there would be more positive integers than even ones; but this pairing-off shows that no positive integer will ever be left without a partner. (And so they all dance happily off & there are no wallflowers.) The other proofs are similar in their stunning simplicity, but much easier to demo on a blackboard than to describe in words.
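The pairing-off is mechanical enough to demonstrate in a few lines of code. This Python sketch (the function name is my own, purely illustrative) lines the two sets up exactly as described:

```python
# Cantor's pairing argument made concrete: line up the positive integers
# with the even positive integers, one partner each, no wallflowers.
def pair_with_evens(n):
    """Pair each of the first n positive integers k with the even number 2k."""
    return [(k, 2 * k) for k in range(1, n + 1)]

pairs = pair_with_evens(4)
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8)]

# The pairing is a bijection on any initial segment: every positive integer
# gets exactly one even partner, and the evens 2, 4, 6, ... are all used.
assert [even for _, even in pair_with_evens(100)] == list(range(2, 202, 2))
```

However far you extend the list, no positive integer is ever left without a partner, which is the whole content of "same size" for denumerable sets.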

Equally favorite: Searle's proof that no digital computer can have mental states (a mental state is, e.g., your state of mind when I say "picture a red rose," and you do); that minds can't be built out of software. A digital computer can only execute trivial arithmetic and logical instructions. You can do them too; you can execute any instruction that a computer can execute. You can also imagine yourself executing lots & lots of trivial instructions, & then ask yourself: can I picture a new mind emerging on the basis of my doing lots & lots & lots of trivial instructions? (No.) Or: imagine yourself sorting a deck of cards.

Sorting is the sort of thing digital computers do. Now imagine sorting a bigger & bigger & bigger deck; can you see consciousness emerging at some point, when you sort a large enough batch? Nope. (And the inevitable answer to the inevitable first objection: but neurons only do simple signal transmission; can you imagine consciousness emerging out of that? An irrelevant question. The fact that lots of neurons make a mind has no bearing on the question of whether lots of anything else make a mind. I can't imagine being a neuron, but I can imagine executing machine instructions. No mind emerges no matter how many of those instructions I carry out.)

Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, The Lightness of Being

Simplicity

We all have an intuitive sense of what "simplicity" means. In science, the word is often used as a term of praise. We expect that simple explanations are more natural, sounder, and more reliable than complicated ones. We abhor epicycles, or long lists of exceptions and special cases.

But can we take a crucial step further, to refine our intuitions about simplicity into precise, scientific concepts? Is there a simple core to "simplicity"? Is simplicity something we can quantify, and measure?

When I think about big philosophical questions, which I probably do more than is good for me, one of my favorite techniques is to try to frame the question in terms that could make sense to a computer. Usually it's a method of destruction: It forces you to be clear, and once you dissipate the fog you discover that very little of your big philosophical question remains. Here, however, in coming to grips with the nature of simplicity, the technique proved creative, for it led me straight toward a (simple) profound idea in the mathematical theory of information, the idea of description length. (The idea goes by several different names in the scientific literature, including algorithmic entropy and Kolmogorov-Chaitin complexity. Naturally, I chose the simplest one.)

Description length is actually a measure of complexity, but for our purposes that's just as good, since we can define simplicity as the opposite—or, numerically, the negative—of complexity. To ask a computer how complex something is, we have to present that "something" in a form the computer can deal with: a data file, i.e. a string of 0s and 1s. That's hardly a crippling constraint: We know that data files can represent movies, for example, so we can ask about the simplicity of anything we can present in a movie; and since our movie might record scientific observations or experiments, we can ask about the simplicity of a scientific explanation.

Interesting data files might be very big, of course. But big files need not be genuinely complex; for example, a file containing trillions of 0s and nothing else isn't genuinely complex. The idea of description length is, simply, that a file is only as complicated as its simplest description. Or, to put it in terms a computer could relate to: A file is as complicated as the shortest program that can produce it from scratch. This defines a precise, widely applicable, numerical measure of simplicity.
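True description length is uncomputable in general, but any general-purpose compressor gives an upper bound on it, which makes the idea easy to see in practice. A minimal Python sketch, using the file sizes as stand-ins for the essay's "trillions of 0s" example:

```python
import os
import zlib

# A compressor's output length is a crude upper bound on description length:
# a file is no more complex than any compact recipe that reproduces it.
trivial = bytes(10**6)       # a million zero bytes: big, but simple
noisy = os.urandom(10**6)    # a million random bytes: big, and complex

len_trivial = len(zlib.compress(trivial))
len_noisy = len(zlib.compress(noisy))

# The all-zeros file has a tiny description ("print a million zeros");
# random data admits essentially no description shorter than itself.
assert len_trivial < 10_000
assert len_noisy > 900_000
```

The two files are the same size on disk, yet their shortest available descriptions differ by roughly a factor of a thousand, which is exactly the distinction between "big" and "genuinely complex."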

An impressive virtue of this notion of simplicity is that it illumines and connects other attractive, successful ideas. Consider, for example, the method of theoretical physics. In theoretical physics, we try to summarize the results of a vast number of observations and experiments in terms of a few powerful laws. We strive, in other words, to produce the shortest possible program that outputs the world. In that precise sense, it's a quest for simplicity.

It's appropriate to add that symmetry, a central feature of the physicist's laws, is a powerful simplicity enabler. For example, if we work with laws that have symmetry under space and time translation—in other words, laws that apply uniformly, everywhere and everywhen—then we don't need to spell out new laws for distant parts of the universe or for different historical epochs, and we can keep our world-program short.

Simplicity leads to depth: For a short program to unfold into rich consequences, it must support long chains of logic and calculation, which are the essence of depth.

Simplicity leads to elegance: The shortest programs will contain nothing gratuitous. Every bit will play a role, for otherwise we could expunge it, and make the program shorter. And the different parts will have to function together smoothly, in order to make a lot from a little. Few processes are more elegant, I think, than the construction, following the program of DNA, of a baby from a fertilized egg.

Simplicity leads to beauty: For it leads, as we've seen, to symmetry, which is an aspect of beauty. As, for that matter, are depth and elegance.

Thus simplicity, properly understood, explains what it is that makes a good explanation deep, elegant, and beautiful.

"What was he thinking!" This is the familiar bewildered cry of parents trying to explain why their teenaged children act the way they do. Developmental psychologists, neuroscientists and clinicians have an interesting and elegant explanation for teenage weirdness. It applies to a wide range of adolescent behavior, from the surprisingly admirable, to the mildly annoying, to the downright pathological. The idea is that there are two different neural and functional systems that interact to turn children into adults. The developmental relationship between those two systems has changed, and that, in turn, has profoundly changed adolescence.

First, there is a motivational and emotional system that is very closely linked to the biological and chemical changes of puberty. This is what turns placid ten-year-olds, safe in the protected immaturity of childhood, into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire, and experience every sensation. And for adolescents, the most important goal of all is to get the respect of your peers. Recent studies show that adolescents aren't reckless because they underestimate risks, but because they overestimate rewards, especially social rewards, or rather that they find rewards more rewarding than adults do—think about the incomparable intensity of first love, the never to be recaptured glory of the high-school basketball championship. (In youth you want things, and then in middle-age you want to want them.)

The second system is a control system that can channel and harness all that seething energy. The prefrontal cortex reaches out to guide and control other parts of the brain. This is the system that inhibits impulses and guides decision-making. This control system depends much more on learning than the motivational system. You get to make better decisions by making not so good decisions and then correcting them. You get to be a good planner by making plans, implementing them and seeing the results again and again. Expertise comes with experience.

In the distant evolutionary past, in fact, even in the recent historical past, these two systems were in sync. Most childhood education involved formal and informal apprenticeships. Children had lots of chances to practice exactly the skills that they would need to accomplish their goals as adults, and so to become expert planners and actors. To become a good gatherer or hunter, cook or caregiver, you would actually practice gathering, hunting, cooking and taking care of children all through middle-childhood and early adolescence—tuning up just the prefrontal wiring you'd need as an adult. But you'd do all that under expert adult supervision and in the protected world of childhood where the impact of your inevitable failures would be blunted. When the motivational juice of puberty kicked in, you'd be ready to go after the real rewards with new intensity and exuberance, but you'd also have the skill and control to do it effectively and reasonably safely.

In contemporary life though, the relationship between these two systems has changed. For reasons that are somewhat mysterious but most likely biological, puberty is kicking in at an earlier and earlier age. (The leading theory points to changes in energy balance as children eat more and move less). The motivational system kicks in with it.

At the same time, contemporary children have very little experience with the kinds of tasks that they'll have to perform as grown-ups. Children have increasingly little chance to even practice such basic skills as cooking and caregiving. In fact, contemporary adolescents and preadolescents often don't do much of anything except go to school. The experience of trying to achieve a real goal in real time in the real world is increasingly delayed, and the development of the control system depends on just those experiences. The developmental psychologist Ron Dahl has a nice metaphor for the result—adolescents develop a gas pedal long before they get steering and brakes.

This doesn't mean that adolescents are stupider than they used to be—in many ways, they are much smarter. In fact, there's even some evidence that delayed frontal development is correlated with higher I.Q. The increasing emphasis on schooling means that children know more about more different subjects than they ever did in the days of apprenticeships. Becoming a really expert cook doesn't tell you about the evolution of tool-use or the composition of sodium chloride—the sorts of things you learn in school. But there are different ways of being smart—knowing history and chemistry is no help with a soufflé. Wide-ranging, flexible and broad-based learning may actually be in tension with the ability to develop finely-honed, controlled, focused expertise in a particular skill.

Of course, the old have always complained about the young. But the explanation does elegantly account for the paradoxes and problems of our particular crop of adolescents. There do seem to be many young adults who are enormously smart and knowledgeable but directionless, who are enthusiastic and exuberant but unable to commit to a particular work or a particular love until well into their twenties or thirties. And there is the graver case of children who are faced with the uncompromising reality of the drive for sex, power and respect, without the expertise and impulse-control it takes to ward off pregnancy or violence.

I like this explanation because it accounts for so many puzzling everyday phenomena. But I also like it because it emphasizes two really important facts about minds and brains that are often overlooked. First, there's the fact that experience shapes the brain. It's truer to say that our experience of controlling our impulses makes the prefrontal cortex develop than it is to say that prefrontal development makes us better at controlling our impulses.

Second, it's increasingly apparent that development plays a crucial role in explaining human nature. The old "evolutionary psychology" picture was that a small set of genes was directly responsible for some particular pattern of adult behavior—a "module". In fact, there is more and more evidence that genes are just the first step in complex developmental sequences, cascades of interactions between organism and environment, and that those developmental processes shape the adult brain. Even small changes in developmental timing can lead to big changes in who we become.

Professor of Life Sciences, Director, Center for Evolution & Medicine, Arizona State University; Coauthor, Why We Get Sick

Natural Selection Is Simple, But The Systems It Shapes Are Unimaginably Complex

The principle of natural selection is exceedingly simple. If some individuals in a population have a heritable trait associated with having more offspring, that trait will (with a few caveats) become more common in the population over the generations.
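The principle really is simple enough to fit in a few lines. This toy model (my own illustration, not from the essay) iterates the textbook replicator equation: a heritable trait whose carriers leave ten percent more offspring becomes the majority within a few dozen generations:

```python
# Deterministic toy model of selection: p is the frequency of a heritable
# trait whose carriers leave `advantage` times as many offspring.
def trait_frequency(p0=0.05, advantage=1.1, generations=40):
    p = p0
    for _ in range(generations):
        fit = p * advantage            # carriers, weighted by fitness
        p = fit / (fit + (1 - p))      # their share of the next generation
    return p

# Starting at 5%, the trait is the majority within 40 generations.
assert trait_frequency() > 0.5
# With no fitness advantage, the frequency never moves.
assert trait_frequency(advantage=1.0) == 0.05
```

Note the contrast the essay goes on to draw: the rule occupies five lines, while the phenotypes it shapes defy any such summary.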

The products of natural selection are vastly complex. They are not merely complicated in the way that machines are complicated, they are organically complex in ways that are fundamentally different from any product of design. This makes them difficult for human minds to fully describe or comprehend. So, we use that grand human gambit for understanding, a metaphor, in this case, the body as machine.

This metaphor makes it easy to portray the systems that mediate cell division, immune responses, glucose regulation, and all the rest, using boxes for the parts and arrows to indicate what causes what. Such diagrams summarize important information in ways we can grasp. Teachers teach them. Students dutifully memorize them. But they fundamentally misrepresent the nature of organic complexity. As Haldane noted in a prescient 1917 book, "a living organism has, in truth, but little resemblance to an ordinary machine." Machines are designed. They have discrete parts with specific functions, and most remain intact when turned off. Individual machines are manufactured following identical copies of a single blueprint. In contrast, organisms are evolved. They have components with indistinct boundaries and multiple functions that interact with myriad other parts and the environment to create self-sustaining, reproducing systems whose survival requires the constant activity and cooperation of thousands of interdependent subsystems. Individual organisms develop from unique combinations of genes interacting with each other and environments to create phenotypes, no two of which are identical.

Thinking about the body as a machine was a grand advance in the 16th century, when it offered an alternative to vitalism and vague notions of the life force. Now it is outmoded. It distorts our view of biological systems by fostering thinking about them as simpler and more sensibly "designed" than they are. Experts know better. They recognize that the mechanisms that regulate blood clotting are represented only crudely by the neat diagrams medical students memorize; most molecules in the clotting system interact with many others. Experts on the amygdala know that it does not have one or two functions; it has many, and they are mediated by scores of pathways to other brain loci. Serotonin exists not mainly to regulate mood and anxiety; it is essential to vascular tone, intestinal motility, and bone deposition. Leptin is not mainly a fat hormone; it has many functions, serving different ones at different times, even in the same cell. The reality of organic systems is vastly untidy. If only their parts were all distinct, with specific functions for each! Alas, they are not like machines. Our human minds have as little intuitive feeling for organic complexity as they do for quantum physics.

Recent progress in genetics confronts the problem. Naming genes according to postulated functions is as natural as defining chairs and boats by their functions. If each gene were a box on a blueprint labeled with its specific function, biology would be so much more tractable! However, it is increasingly clear that most traits are influenced by many genes, and most genes influence many traits. For instance, about 80% of the variation in human height is accounted for by genetic variation. It would seem straightforward to find the responsible genes. But looking for them has revealed that the 180 loci with the largest effects together account for only about 10% of the phenotypic variation. Recent findings in medical genetics are more discouraging. Just a decade ago, hope was high that we would soon find the variations that account for highly heritable diseases, such as schizophrenia and autism. But scanning the entire genome has revealed that there are no common alleles with large effects on these diseases. Some say we should have known. Natural selection would, after all, tend to eliminate alleles that cause disease. But, thinking about the body as a machine aroused unrealistic hopes.

The grand vision for some neuroscientists is to trace every molecule and pathway to characterize all circuits in order to understand how the brain works. Molecules, loci, and pathways do serve differentiated functions; this is real knowledge with great importance for human health. But understanding how the brain works by drawing a diagram that describes all the components and their connections and functions is a dream that may be unfulfillable. The problem is not merely fitting a million items on a page; the problem is that no such diagram can adequately describe the structure of organic systems. They are products of minuscule changes, from diverse mutations, migration, drift, and selection, which develop into systems with incompletely differentiated parts and incomprehensible interconnections that, nonetheless, work very well indeed. Trying to reverse engineer brain systems focuses important attention on functional significance, but it is inherently limited, because brain systems were never engineered in the first place.

Natural selection shapes systems whose complexity is indescribable in terms satisfying to human minds. Some may feel this is nihilistic. It does discourage hopes for finding simple specific descriptions for all biological systems. However, recognizing a quest as hopeless is often the key to progress. As Haldane put it, "We are thus brought face-to-face with the conclusion which to the biologist is just as significant and fundamental, just as true to the facts observed, as the conclusion that mass persists is to the physicist… the structure of a living organism has no real resemblance in its behavior to that of a machine… In the living organism…the 'structure' is only the appearance given by what seems at first to be a constant flow of specific material, beginning and ending in the environment."

If bodies are not like machines, what are they like? They are more like Darwin's "tangled bank" with its "elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner." Lovely. But, can an ecological metaphor replace the metaphor of body as machine? Not likely. Perhaps someday understanding how natural selection shapes organic complexity will be so widely and deeply understood that scientists will be able to say "A body is like…a living body," and everyone will know exactly what that means.

Movies are not smooth. The time between frames is empty. The camera records only twenty-four snapshots of each second of time flow, and discards everything that happens between frames—but we perceive it anyway. We see stills, but we perceive motion. How can we explain this? We can ask the same question about digital movies, videos, and videogames—in fact, all modern digital media—so the explanation is rather important, and one of my favorites.

Hoary old "persistence of vision" can't be the explanation. It's real, but it only explains why you don't see the emptiness between the frames. If an actor or an animated character moves between frames then—by persistence of vision—you should see him in both positions: two Humphrey Bogarts, two Buzz Lightyears. In fact, your retinas do see both, one fading out as the other comes in—each frame is projected long enough to ensure this. It's what your brain does with the retinas' information that determines whether you perceive two Bogarts in two different positions or one Bogart moving.

On its own the brain perceives the motion of an edge, but only if it moves not too far, and not too fast, from the first frame to the second one. Like persistence of vision, this is a real effect, called apparent motion. It's interesting but it's not the explanation I like so much. Classic cel animation—of the old ink on celluloid variety—relies on the apparent-motion phenomenon. The old animators knew intuitively how to keep the successive frames of a movement inside its "not too far, not too fast" boundaries. If they needed to exceed those limits, they had tricks to help us perceive the motion—like actual speed lines and a poof of dust to mark the rapid descent of Wile E. Coyote as he steps unexpectedly off a mesa in hot pursuit of that truly wily Road Runner.

Exceed the apparent motion limits—without those animators' tricks—and the results are ugly. You may have seen old school stop-motion animation—such as Ray Harryhausen's classic sword-fighting skeletons in Jason and the Argonauts—that is plagued by an unpleasant jerking motion of the characters. You're seeing double, at least—several edges of a skeleton at the same time—and correctly interpret it as motion, but painfully so. The edges stutter, or "judder," or "strobe" across the screen—the words reflect the pain inflicted by staccato motion.

Here's what a real movie camera does. The frame it records is not a sample at a single instant, like a Road Runner or a Harryhausen frame. Rather the camera shutter is open for a short while, called the exposure time. A moving object is moving during that short interval, of course, and is thus smeared slightly across the frame during the exposure time. It's like what happens when you try to take a long-exposure still photo of your child throwing a ball and his arm is just a blur. But a bug in a still photograph turns out to be a feature for movies. Without the blur all movies would look as jumpy as Harryhausen's skeletons—unless every motion miraculously stayed within the "not too far, not too fast" limits.

A scientific explanation can become a technological solution. For digital movies—like Toy Story—the solution to avoid strobing was derived from the explanation for live-action: Deliberately smear a moving object across a frame along its path of motion. So a character's swinging arm must be blurred along the arc the arm traces as it pivots around its shoulder joint. And the other arm independently must be blurred along its arc, often in the opposite direction to the first arm. All that had to be done was to figure out how to do what a camera does with a computer—and, importantly, how to do it efficiently. Live-action movies get motion blur for free, but it costs a lot for digital movies. The solution—by the group now known as Pixar—paved the way to the first digital movie. Motion blur was the crucial breakthrough.
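The principle, if not Pixar's actual renderer, can be caricatured in a few lines: instead of sampling a moving object at one instant, average many samples taken across the exposure time. A toy Python sketch on a one-dimensional "screen" (all names and numbers here are my own, purely illustrative):

```python
# Toy motion blur by temporal supersampling: accumulate the positions a
# moving dot occupies while the shutter is open, not one instantaneous hit.
def render_frame(path, width=12, shutter_samples=8):
    """path(t) gives the dot's pixel for shutter time t in [0, 1)."""
    frame = [0.0] * width
    for i in range(shutter_samples):
        t = i / shutter_samples
        frame[path(t)] += 1 / shutter_samples
    return frame

# A dot sweeping from pixel 2 to pixel 5 during the exposure leaves its
# energy smeared evenly along the motion path instead of on a single pixel.
frame = render_frame(lambda t: 2 + int(t * 4))
assert [round(v, 2) for v in frame[2:6]] == [0.25, 0.25, 0.25, 0.25]
```

The expensive part in production rendering is exactly this: each displayed frame costs several full renders' worth of sub-frame samples, which is why motion blur was cheap for film cameras and dear for early digital movies.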

In effect, motion-blur shows your brain the path a movement is taking, and also its magnitude—a longer blur means a faster motion. Instead of discarding the temporal information about motion between frames, we store it spatially in the frames as a blur. A succession of such frames overlapping a bit—because of persistence of vision—thus paints out a motion in a distinctive enough way that the brain can do the full inference.

Pixar throws thousands of computers at a movie—spending sometimes more than thirty hours on a single frame. On the other hand, a videogame—essentially a digital movie restricted to real time—has to deliver a new frame every thirtieth of a second. It was only seventeen years ago that the inexorable increase in computation speed per unit dollar (described by Moore's Law) made motion-blurred digital movies feasible. As of 2012, videogames simply haven't arrived yet. They can't compute fast enough to motion blur. Some give it a feeble try, but the feel of the play lessens so dramatically that gamers turn it off and suffer the judder instead. But Moore's Law still applies, so soon—five years? ten?—even videogames will motion blur properly and fully enter the modern world.

Best of all, motion blur is just one example of a potent general explanation called The Sampling Theorem. It works when the samples are called frames, taken regularly in time to make a movie, or when they are called pixels, taken regularly in space to make an image. It works for digital audio too. In a nutshell, the explanation of smooth motion from unsmooth movies expands to explain the modern media world—why it's even possible. But that would take a longer explanation.
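The theorem's flip side, aliasing, is exactly the judder problem, and it takes only a few lines to see numerically: sampled too slowly, a fast cosine is indistinguishable from a slow one. (The frequencies below are my own arbitrary choices.)

```python
import math

RATE = 4        # samples per second; the Nyquist frequency is RATE / 2 = 2 Hz
F_FAST = 3.0    # above Nyquist
F_ALIAS = RATE - F_FAST  # 1 Hz, the frequency it masquerades as

fast = [math.cos(2 * math.pi * F_FAST * n / RATE) for n in range(32)]
slow = [math.cos(2 * math.pi * F_ALIAS * n / RATE) for n in range(32)]

# Sample for sample, the 3 Hz cosine is identical to the 1 Hz cosine:
# sampled below the Nyquist rate, the two motions cannot be told apart.
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
```

Sample above twice the highest frequency present and this ambiguity vanishes, which is the guarantee the Sampling Theorem provides for frames, pixels, and digital audio alike.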

Writer and Television Producer; Author, Remembering Our Childhood: How Memory Betrays Us

The Oklo Pyramid

New explanations in science are needed when an observation isn't explicable by current theory. The power of the scientific method lies in the extraordinary richness of understanding that can emerge from an attempt to devise a new explanation. It is like an inverted pyramid, with the first observation—often just a slight departure from the norm—as the point and then ever widening layers of inference, each dependent on a lower layer, until the whole pyramid supplies a satisfying and conclusive explanatory whole.

One of my favourite such explanations began with the observation of a small anomaly in a routine sample of uranium ore sent from Oklo, a region near the town of Franceville in the Haut-Ogooué province of the Central African state of Gabon, and analysed in a French laboratory. Rock samples of naturally occurring uranium usually contain two types of uranium atoms, the isotopes U238 and U235. Most of the atoms are U238 but about 0.7% are U235. In fact, to be accurate, the figure is 0.720%, but the sample that arrived in France had 'only' 0.717%, meaning that 0.003% of the expected U235 atoms were missing.

The only place such differences in proportion were known to occur was in the very artificial surroundings of a nuclear reactor, where U235 was bombarded with neutrons in a chain reaction that transformed the atoms and led to the change in the naturally occurring proportions. But the uranium ore had come from mines in the African state of Gabon and at the time there was no nuclear reactor on the whole continent of Africa, so that couldn't be the explanation. Or could it?

Unlike Olbers' Paradox, where science had to wait nearly a hundred years for an explanation of an interesting observation, in the case of Oklo the explanation had already been published. Nearly twenty years before, a scientific paper by three scientists had suggested that somewhere on the Earth the conditions might have existed in the past for a uranium deposit to act like a natural nuclear fission reactor. They suggested three necessary conditions: first, the size of the deposit should be greater than the average length that fission-inducing neutrons travel, which is about 70 centimeters; second, U235 atoms must be present in greater abundance than they are in natural rocks today, as much as 3% instead of 0.720%; third, there must be what in a modern nuclear reactor is called a moderator, a substance that 'blankets' the emitted neutrons and slows them down so that they are more apt to induce other uranium atoms to break apart.

These three conditions were exactly those that had applied to the Oklo deposits two billion years ago. The Oklo deposits were much larger than the minimum predicted size. U235 has a half-life of 704 million years and decays about six times faster than U238, so several half-lives ago, around 2 billion years, there would have been much more U235 in natural deposits; extrapolating backwards, the relative proportions of the two isotopes would have been approximately 97 to 3 rather than 99.3 to 0.7 as they are today, just the sort of abundance that would sustain a chain reaction. And finally, the layers of rock had originally been in contact with natural water, suggesting that what had happened was the following:

A chain reaction would start in rocks surrounded by water, and the atoms would split and generate heat. The heat would turn the water to steam, destroying its ability to moderate the reactions; the neutrons would escape, stopping the chain reaction. The steam would then condense back into water, which would once again blanket the neutrons still being emitted by the uranium; more neutrons would be retained, splitting uranium atoms and restarting the chain reaction.

In explaining a tiny anomaly in the ratio of two types of atom in a piece of rock, the scientific method has led to a description of a series of events that happened in a specific location on Earth billions of years ago. The natural nuclear reactor would produce heat for about half an hour and then shut down for two and a half hours before starting up again. It did this over a period of 150 million years at an average power of 100 kilowatts, the kind of power produced in a typical car engine. Not only is this explanation deep, elegant and beautiful. It's also incontrovertible. It doesn't depend on someone's opinion or bias or desires, unlike many other 'explanations' of how the world works, and that's the power of the best science.
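The back-extrapolation at the heart of the argument takes only a few lines to check, using the two isotopes' half-lives (U235: 704 million years; U238: roughly 4.47 billion years):

```python
# Extrapolate today's U235 abundance (0.720% of atoms) back 2 billion years.
U235_HALF_LIFE = 0.704e9   # years
U238_HALF_LIFE = 4.468e9   # years
t = 2.0e9                  # years before present

u235_now, u238_now = 0.720, 99.280   # percent of uranium atoms today

# Each isotope was more abundant in the past by a factor of 2^(t / half-life).
u235_then = u235_now * 2 ** (t / U235_HALF_LIFE)
u238_then = u238_now * 2 ** (t / U238_HALF_LIFE)

fraction = 100 * u235_then / (u235_then + u238_then)
print(f"U235 fraction 2 billion years ago: {fraction:.1f}%")  # roughly 3.7%
```

The result, a bit under 4%, is just the enrichment the three scientists identified as sufficient for a sustained natural chain reaction.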

Vice President of Research & Collections, Denver Museum of Nature & Science; Dinosaur paleontologist and science communicator; Author, Dinosaur Odyssey: Fossil Threads in the Web of Life

The Gaia Hypothesis

For my money, the deepest, most beautiful scientific explanation is the Gaia hypothesis, the idea that Earth's physical and biological processes are inextricably interwoven to form a self-regulating system. This notion—the 1965 brainchild of chemist James Lovelock, further co-developed with microbiologist Lynn Margulis—proposes that air (atmosphere), water (hydrosphere), earth (geosphere or pedosphere) and life (biosphere) interact to form a single evolving system capable of maintaining environmental conditions consistent with life. Lovelock initially put forth the Gaia hypothesis to explain how life on Earth has persisted for 4 billion years despite a 30% increase in the Sun's intensity over that same interval.

But how does Gaia work? Lacking a conscious command-and-control system, Lovelock and Margulis demonstrated that Gaia uses feedback loops to track and adjust key environmental parameters. Take oxygen, a highly reactive by-product of life, generated and continually replenished by photosynthetic algae and plants. The present-day atmospheric concentration of oxygen is about 21%. A few percentage points lower and air-breathing life forms could not survive. A few percentage points higher and terrestrial ecosystems would become overly combustible, prone to conflagration. According to the Gaia hypothesis, oxygen-producing organisms have used feedback loops to maintain atmospheric oxygen between these narrow limits for hundreds of millions of years.
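The logic of such a regulatory loop can be sketched in a toy model, with all numbers purely illustrative rather than empirical: if oxygen production is inhibited, and oxygen loss (fire, weathering) accelerated, as the concentration rises, the system settles at one stable set point no matter where it starts.

```python
# Toy negative-feedback loop: production falls and loss rises with O2.
# Parameters are illustrative only; they are chosen so the set point is 21%.
def step(o2, dt=0.1):
    production = 4.2 * (1 - o2 / 42.0)   # photosynthesis, inhibited at high O2
    loss = 0.1 * o2                      # combustion/weathering, grows with O2
    return o2 + dt * (production - loss)

for start in (5.0, 35.0):                # very different initial concentrations
    o2 = start
    for _ in range(2000):
        o2 = step(o2)
    print(f"start {start:>4}% -> settles near {o2:.1f}%")
```

Both runs converge on 21%: the hallmark of negative feedback is that the equilibrium, not the starting condition, determines where the system ends up.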

Similar arguments, backed by an ever-growing body of research, can be made for other atmospheric constituents, as well as for global surface temperature, oceanic salinity, and other key environmental metrics. Although the Gaia hypothesis highlights cooperation at the scale of the biosphere, researchers have documented multiple examples showing how cooperation at one level could evolve through competition and natural selection at lower levels. Initially criticized by serious scientists as new-age mumbo-jumbo, Lovelock's radical notion has increasingly been incorporated into scientific orthodoxy, and key elements are now often taught as "Earth Systems Science." One timely lesson resulting at least in part from Gaian research is that food web complexity, including higher species diversity, tends to enhance ecological and climate stability.

So, while Earth may inhabit a "Goldilocks zone," neither too close nor too far from the sun, life's rampant success on this "pale blue dot" cannot be ascribed to luck alone. Life has had a direct hand in ensuring its own persistence.

Science has not yet fully embraced the Gaia hypothesis. And it must be admitted that, as an explanation, this idea remains incomplete. The insights cascading from Gaia are unquestionably deep and beautiful, uniting the whole of the biosphere and Earth's surface processes into a single, emergent, self-regulating system.

Yet this explanation has yet to achieve the third milestone defined in this year's Edge Annual Question—elegance. The Gaia hypothesis currently lacks the mathematical precision of Einstein's E=mc². No unified theory of Earth and Life has been presented to explain why life stabilizes more than it destabilizes.

Evolutionary biologist W. D. Hamilton once compared Lovelock's insights to those of Copernicus, adding that we still await the Newton who will define the laws of this grand, seemingly improbable relationship. Hamilton himself became deeply engrossed in seeking an answer to this question, developing a computer model that seemed to show how stability and productivity could increase in tandem. Were it not for his untimely death, Hamilton might have emerged as that modern-day Newton, becoming, in the words of author Tim Flannery, "the most revered biologist of all time."

The cultural implications of Gaia also continue to be debated. Arguably the most profound implication of Lovelock's idea is that Earth, considered as a whole, possesses many qualities of an organism. But is Gaia actually alive, akin to a single life form, or is it more accurate to think of her as a planet-sized ecosystem? Lynn Margulis argued strongly (and convincingly, to my mind) for the latter view. Margulis, whose work revolutionized evolutionary biology at the smallest and grandest of scales, died recently. Always the hard-nosed scientist, she once said,

"Gaia is a tough bitch—a system that has worked for over three billion years without people. This planet's surface and its atmosphere and environment will continue to evolve long after people and prejudice are gone."

While not disagreeing with this blunt assessment, I find considerably greater inspiration in Gaian thinking. Indeed I would go so far as to suggest that this idea can help shift the human perception of nature. In the modernist perspective, the natural world is little more than a collection of virtually infinite resources available for human exploitation. The Gaian lens encourages us to re-envision Earth-bound nature as an intertwined, finite whole from which we evolved, and in which we remain fully embedded. Here, then, is a deep and beautiful perspective in desperate need of broad dissemination.

One evening in early August 1954, Niels Kaj Jerne had an idea while walking home from work across Copenhagen's beautiful Knippelsbro, the bridge connecting the island of Amager, where his laboratory at the vaccine-producing national serum institute SSI was located, with central Copenhagen. The flash of insight was complete even before he had crossed the bridge, and so was the idea that would forever change our understanding of the immune system—and perhaps one day also perception and cognition.

Niels Kaj Jerne had for a long time been worried about a single piece of evidence in the study of antibodies. Our immune system somehow produces these elegant structures that can bind to and escort out invading foreign substances like viruses or bacteria. The shape of the antibody fits the three-dimensional structure of the invader like a hand in a glove or a key in a lock.

But how was that possible? There is not nearly enough information in the genetic system to specify an antibody for each of the many million foreign entities that could enter the body. A favorite explanation at the time was that the antibody was informed by the alien substance: It took its shape from what was entering the body, so that it could bind to it and usher it out. However, there was a problem that had been known since the 1930s: Small amounts of an antibody are present in the bloodstream of animals even though they have never been exposed to the foreign substance, the antigen, that the antibody is perfectly shaped to bind to. So the antibody could not have learned its shape from the antigen.

The summer evening walk home from work gave Jerne an idea that he immediately saw as "fabulous": The body holds a collection of building blocks for antibodies. Whenever something new enters the body, the building blocks are shuffled into a vast array of antibodies. Each will be produced in one copy. One of them will bind particularly well to the shape of the invader. When the "re-cognition" has taken place, that particular antibody will be mass-produced by the immune system.

This immediately explains why it takes time to fight an infection and why vaccination helps (the shuffling and selection has already been done before the invaders arrive in large numbers).
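Jerne's scheme (a pre-existing random repertoire, selection by fit, then mass production) is algorithmic enough to sketch in code. Here random bit-strings stand in for molecular shapes, an illustration only:

```python
import random

random.seed(1)
SHAPE_BITS = 32

def match(antibody, antigen):
    """Complementarity score: number of matching bits (the key fitting the lock)."""
    return SHAPE_BITS - bin(antibody ^ antigen).count("1")

# The repertoire exists BEFORE any antigen is seen: random shuffling of
# 'building blocks', each antibody initially present in a single copy.
repertoire = [random.getrandbits(SHAPE_BITS) for _ in range(100_000)]

antigen = random.getrandbits(SHAPE_BITS)   # a never-before-seen invader

# Selection: the antibody that happens to bind best...
best = max(repertoire, key=lambda ab: match(ab, antigen))
# ...is then mass-produced (clonal expansion).
clones = [best] * 1_000_000

print(f"best pre-existing match: {match(best, antigen)}/{SHAPE_BITS} bits")
```

No information flows from the antigen into the antibody's design; the antigen merely selects among shapes already generated at random, which is the Darwinian point of the model.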

Niels Kaj Jerne told biochemist Günther Stent about the idea. It took Stent "fifteen, maybe twenty minutes" to become convinced that the idea was right. It took thirty years before Jerne was awarded the Nobel Prize.

In a sense it is Darwinism applied to the immune system. A random production of possible shapes is subject to a selection pressure from an outside universe of shapes. When they fit, the shape from inside is multiplied in large numbers. In Darwinism we are talking organisms and the environment, in Jerne's model we are talking antibodies and the environment.

The elegance of the idea is that there is no transfer of information from the environment into the "design" of the antibody/organism. The environment is only selecting between options produced at random.

Is this not like perception? We actually do not see the world around us, but only our own "simulated visual scenery". When our fantasies are "relevant" to our behavior, they are selected for and reproduced.

Is this not like cognition and thinking: We make up lots of "lego-brick" models of the empirical data, reject most of them, but eventually when something fits, we keep thinking that way?

Is Niels Kaj Jerne's idea not a prime example of itself: After playing around with numerous small toy models in his mind, discarding all of them as hopeless, suddenly the right one popped up in his mind that August evening crossing Knippelsbro?

It's easy to imagine a politician's objecting to federal funds going to study how dogs drool. But failing to support such research would have been very short-sighted indeed in the days of the great Russian physiologist Ivan Pavlov (1849–1936). As part of his Nobel Prize-winning research on digestion, he measured the amount of saliva produced when dogs were given food. In the course of this work, he and his colleagues noticed something unexpected: The dogs started to salivate well before they were fed. In fact, they salivated when they first heard the approaching footsteps of the person coming to feed them. That core observation led to the discovery of classical conditioning.

The key idea behind classical conditioning is that a neutral stimulus (such as the sound of approaching footsteps) comes to be associated with a stimulus (such as food) that reflexively produces a response (such as salivation)—and, after a while, the neutral stimulus comes to elicit the response produced reflexively by the paired stimulus. To be clear about the phenomenon, we'll need to take a few words to explain the jargon. The neutral stimulus becomes "conditioned," and hence is known as the "conditioned stimulus" (CS), whereas the stimulus that reflexively produces the response is known as the unconditioned stimulus (UCS). And the response that is produced by the UCS is called the unconditioned response (UCR). Classical conditioning occurs when the CS is presented right before a UCS, so that after a while the CS by itself produces the response. When this occurs, the response is called a conditioned response (CR). In short, at the outset a UCS (such as food) produces a UCR (such as salivation); when a CS (the sound of the feeder's footsteps) is presented before the UCS, it soon comes to produce the response, a CR (salivation), by itself.
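A later formalization of this process, the Rescorla-Wagner model (not Pavlov's own, and simplified here to a single stimulus), captures acquisition and extinction with one update rule: on each trial, the associative strength V moves a fixed fraction of the way toward the maximum strength the UCS can support. Parameter values below are illustrative.

```python
# Rescorla-Wagner learning rule: dV = rate * (lam - V), where lam is the
# maximum associative strength the UCS supports and the prediction error
# (lam - V), i.e. surprise, drives learning. Values are illustrative.
def condition(v, lam, rate=0.2, trials=20):
    history = []
    for _ in range(trials):
        v += rate * (lam - v)
        history.append(v)
    return v, history

v, _ = condition(0.0, lam=1.0)            # acquisition: CS paired with UCS
print(f"after acquisition: V = {v:.3f}")  # climbs close to 1.0

v, _ = condition(v, lam=0.0)              # extinction: CS presented alone
print(f"after extinction:  V = {v:.3f}")  # decays back toward 0.0
```

The same rule reproduces the negatively accelerated learning curve Pavlov observed: early pairings, when surprise is greatest, produce the largest gains in conditioning.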

This simple process gives rise to a host of elegant and non-intuitive explanations.

For example, consider accidental deaths from drug overdoses. In general, a narcotics user tends to take the drug in a specific setting, such as in his or her bathroom. The setting initially is a neutral stimulus, but after a person takes narcotics in it a few times, the bathroom comes to function as a CS: As soon as the user enters his or her bathroom with narcotics, the user's body responds to the setting by preparing for the ingestion of the drug. Specific physiological reactions allow the body to cope with the drug, and those reactions become conditioned to the bathroom (in other words, the reactions become a CR). To get a sufficient "high," the user must now take enough of the narcotic to overcome the body's preparation. But if the user takes the drug in a different setting, perhaps in a friend's bedroom during a party (and hence the new setting is not a CS), the CR does not occur—the usual physiological preparation for the narcotic does not take place. Thus, the usual amount of the drug functions as if it were a larger dose, and may be more than the user can tolerate without the body's preemptive readiness. Hence, although the process of classical conditioning was formulated to explain very different phenomena, it can be extended directly to explain why drug overdoses sometimes accidentally occur when usual doses are taken in new settings.

By the same token, classical conditioning plays a role in the placebo effect: For those of us who have regularly used analgesics such as ibuprofen or aspirin, such medicines begin to have their effects well before their active ingredients actually have time to take effect. How? From previous experience, the mere act of taking that particular pill has become a CS, which triggers the pain-relieving processes invoked by the medicine itself (and those processes have become a CR).

Classical conditioning also can result from an implanted defibrillator, or "pacemaker." When the heart beats too quickly, this device shocks it and thereby causes it to revert to beating at a normal rate. Until the shock level is properly calibrated, the shock can be very uncomfortable and can function as a UCS that produces fear as a UCR. Because the shock does not occur in a consistent environment, the person associates random aspects of the environment with it—which then function as CSs. And when any of those aspects of the environment are subsequently present, the person can experience severe anxiety, awaiting the possible shock and resulting reaction.

This same process explains why you would find a particular food unappealing if you happen to have eaten it and gotten food poisoning (and thus had significant gastrointestinal problems—the UCR to the UCS of tainted food). That type of food can thus come to function as a CS, and if you eat it—or even think about eating it—you may feel queasy, a CR. You may find yourself avoiding that food, and thus a food aversion is born. In fact, simply pairing pictures of particular types of food (such as French fries) with aversive photographs (such as of a horribly burned body) can change how appealing you find that food.

These examples should be sufficient to give you a sense of how the explanation for anticipatory salivation can be easily and simply extended to a wide range of phenomena. But, that said, we need to point out that Pavlov's original conception of classical conditioning was not quite right; he thought that sensory input was directly connected to specific responses, which led the stimuli to produce the response automatically. We now know that the connection is not so direct; classical conditioning involves many cognitive processes, such as attention and those that underlie the ability to interpret and understand. In fact, classical conditioning is a form of "implicit learning." As such, it allows us to navigate through life with less cognitive effort (and stress) than would otherwise be required. Nevertheless, this sort of conditioning has byproducts that can be powerful, surprising, and even sometimes dangerous.

Most scientific facts are based on things that we cannot see with the naked eye, hear with our ears, or feel with our hands. Many of them are described and guided by mathematical theory. In the end, it becomes difficult to distinguish a mathematical object from objects in nature.

One example is the concept of a sphere. Is the sphere part of nature, or is it a mathematical artifact? That is difficult for a mathematician to say. Perhaps the abstract mathematical concept is actually a part of nature. And it is not surprising that this abstract concept actually describes nature quite accurately.

Professor Emeritus, Stevens Institute of Technology; Former Staff Writer, the New Yorker

Go Small

When confronted with a question like this, the temptation is to "go big" and respond with something, say, from Einstein's theory of relativity. Instead I will go small. When Planck introduced his quantum of action h at the turn of the 20th century, he realized that it allowed for a new set of natural units. For example, the Planck time is the square root of Planck's constant times the gravitational constant divided by the fifth power of the speed of light. It is the smallest unit of time anyone talks about, but is it a "time"? The problem is that these constants are just that: they are the same for a resting observer as for a moving one. But a time interval is not. I posed this as a "divinette" to my "coven", and Freeman Dyson came up with a beautiful answer. He tried to construct a clock that would measure it. Using the quantum uncertainties, he showed that any such clock would be consumed by a black hole of its own making. No measurement is possible. The Planck time ain't a time, or it may be beyond time.
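The quantity itself is quick to compute from the recipe in the essay (taking, as is conventional, the reduced Planck constant ħ), with rounded CODATA values:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

# Planck time: sqrt(hbar * G / c^5)
t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time: {t_planck:.2e} s")   # about 5.39e-44 seconds
```

About 5.39 × 10⁻⁴⁴ seconds: some twenty orders of magnitude shorter than anything a physical clock has ever resolved, which is part of Dyson's point.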

A remarkable discovery about visual perception, and about sensory systems more generally, is that there are deep and elegant principles that explain, sometimes with mathematical precision, a diverse array of apparently unrelated phenomena. Two such principles are the principle of generic views and the principle of satisficing utility.

When we view a painting, such as M.C. Escher's Relativity, why do we see the straight lines in the painting as straight lines in three dimensions? After all there are, in principle, an infinite number of other three-dimensional interpretations. One could, for instance, interpret a straight line in the painting as a circle in three-dimensions that is seen edge on. Or one could interpret it as sinusoidal wiggle, again seen edge on.

Although these other interpretations are logically possible, they are visually implausible. If in fact one were viewing a circle, and the image at one's eye happened to be a straight line, then if one moved ever so slightly the line at the eye would change to an ellipse. If one were viewing a wiggle, and the image was again a straight line, then a slight move would reveal the wiggle. A small change in viewpoint would make a qualitative change in the image one sees. From a generic viewpoint, a circle or a wiggle does not look like a straight line.

However, if in fact one were viewing a straight line in three dimensions, then the image at the eye would remain a straight line from almost every viewpoint, i.e., from a generic view. (The only exception would be viewpoints in which the line happens to look like a point.)
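The asymmetry between the two cases can be checked numerically. For a circle viewed under orthographic projection, the projected ellipse's minor-to-major axis ratio equals the cosine of the angle between the circle's normal and the line of sight, so the circle collapses to a near-line only from the thin sliver of nearly edge-on viewpoints. A sketch with an illustrative flatness threshold:

```python
import math
import random

random.seed(0)

def random_unit_vector():
    # Three Gaussian components, normalized, give a uniform random direction.
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    return x / r, y / r, z / r

trials = 100_000
# Viewing down the z-axis: for a circle with unit normal n, the projected
# ellipse's minor/major ratio is |n_z| (1 when face-on, 0 when edge-on).
line_like = sum(
    1 for _ in range(trials)
    if abs(random_unit_vector()[2]) < 0.01   # flattened to within 1%
)
print(f"circle looks line-like from {100 * line_like / trials:.2f}% of viewpoints")
# A 3D straight line, by contrast, projects to a straight line from
# every viewpoint except the measure-zero set looking along it.
```

Roughly one viewpoint in a hundred (at this tolerance) makes the circle masquerade as a line, while the line's image is a line almost always, which is exactly why vision can safely bet on the generic interpretation.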

Human vision prefers interpretations that imply a generic view. This principle explains many visual phenomena. For instance, when the endpoint of one line appears to touch another line, we assume that they touch in three dimensions. If they did not, then we would be looking at the lines from a non-generic viewpoint in which the endpoint just happened to coincide with the image of the other line. Our commitment to this principle is so strong that when it leads us astray, as it does when we see impossible waterways wending their way through Escher's Waterfall, we are unable to see other interpretations that violate the principle but would avoid the impossibilities.

Why do certain dragonflies prefer pools of oil to pools of water, and pay the ultimate price for their choice? Why do certain male beetles try to mate with beer bottles, and forsake available females? Why do our eyes detect squirrels more easily than handguns? Why is a face with limbal rings around its irises more attractive than the same face without rings? Each is a consequence of the principle of satisficing utility.

Dragonflies need to lay eggs in water. Their visual systems evolved to detect horizontal polarization of light as a way to find bodies of water suitable for oviposition. In the environment in which they evolved, this simple trick worked well. But another species has altered their environment. H. sapiens learned to consume oil, and on occasion leaves slicks or pools of oil, which happen to polarize light more strongly than water. Horvath and Zeil find that the oil is a supernormal stimulus to the dragonfly, tempting it more strongly than water, and leading to the demise of all who follow temptation. Kriska and colleagues report that mayflies face a similar problem with certain asphalt roads that strongly polarize light, leading them to lay eggs where they are doomed to die. Oil slicks and asphalt roads are ecological traps for dragonflies and mayflies. Their visual systems evolved a satisficing solution for finding water which fell far short of guaranteeing a genuine find but which, in a world before oil slicks and asphalt roads, was useful enough to let them survive.

A certain jewel beetle in the Australian desert has wing casings that are dimpled, glossy and brown. The males fly around searching for eligible females, a strategy that has served the species well for thousands of years. That is, until H. sapiens started dumping empty beer bottles in the desert that are bumpy, glossy, and just the right shade of brown. Gwynne and Rentz discovered that male beetles find the bottles far more attractive than real females, and swarm the bottles attempting to mate. Their visual systems evolved a satisficing solution for finding females which fell far short of guaranteeing a genuine find but which, in a world before beer bottles, was useful enough to let them survive.

H. sapiens evolved to detect and monitor objects that could be dangerous. In the environment in which H. sapiens evolved, animate objects were a primary source of potential danger. Today inanimate objects such as guns and cars are also dangerous. But psychophysical studies by New, Cosmides and Tooby show that H. sapiens is faster to detect a squirrel than to detect a gun or car. Our visual systems evolved a satisficing solution for finding dangerous objects which fell short of guaranteeing a genuine find but which, in a world before guns and cars, was useful enough to let us survive.

H. sapiens evolved to be more attracted to conspecifics that are more reproductively fit. In the environment in which H. sapiens evolved, a larger and more pronounced limbal ring around the iris was a good probabilistic indicator of youth and health, and therefore of reproductive fitness. Peshek and colleagues found that human observers find a face with limbal rings around its irises more attractive than the same face without rings. (Pronounced limbal rings can be seen in the famous National Geographic photograph of an Afghan girl.) We have evolved a satisficing solution for finding reproductively fit conspecifics, a solution which worked well in the environment in which it evolved. Now, however, contact lenses are sold which endow the wearer with artificially larger and more pronounced limbal rings, but which nevertheless trick our visual systems and make the wearer look more attractive.

The principle of generic views and the principle of satisficing utility each explain a diverse array of apparently unrelated perceptual phenomena. Perhaps as our understanding of sensory systems advances we will find that the principle of generic views is itself a consequence of the principle of satisficing utility.

The request for a favorite deep, elegant, and beautiful explanation left me a bit cold. "Deep," "elegant," and "beautiful" are aesthetic qualities that I associate more with experience and process than explanation, especially that of the observer observing. Observation is the link between all empirical sciences, and the reason why physicists were among the founders of experimental psychology. The difference between psychology and physics is one of emphasis; both involve the process of observers observing. Physics stresses the observed, psychology the observer. As horrifying as this may be to hyper-empiricists who neglect the observer, physics is necessarily the study of the behavior of physicists, biology the study of biologists, and so on. Decades ago, I discussed this issue with John Wheeler who found it obvious, noting that a major limit on cosmology is the cosmologist. When students in my course on Sensation and Perception hear me say that we are engaging the study of everything, I'm absolutely serious. In many ways, the study of sensation and perception is the most basic and universal of sciences.

My passion for observation is aesthetic as well as scientific. My most memorable observations are of the night sky. For others, they may be the discovery of a T-rex fossil, or the sound of bird song on a perfect spring day. To see better and deeper, I build telescopes, large and small. I like my photons fresh, not collected by CCD or analyzed by computer. I want to encounter the cosmos head-on, letting it wash over my retina. My profession of neuroscience provides its own observational adventures, including the unique opportunity to close the circle by investigating the neurological mechanism through which the observer observes and comes to know the cosmos.

I read about the Faurie-Raymond Hypothesis a long time ago, but it didn't click with me until I fought big Nick. Nick is a national guardsman who trains with me at the local mixed martial arts academy. Technically we were just sparring, not fighting. But Nick is so strong, his punches so sincere, that even when he tries to throw gentle, he makes your consciousness wobble, makes you realize, if you hadn't before, that the goal of boxing is to shut down the brain. The bell rang and we engaged and my fear passed quickly into disorientation. Something wasn't right. Nick is powerful but he's not more skillful than I, and he's not what you would call a graceful mover or a sophisticated striker. Nick plows forward: jab, cross; jab, cross. Nick plows forward: jab, cross; jab, cross, hook. Nick doesn't bob. Nick doesn't weave. Nick plows forward.

So why couldn't I hit him? Why were my punches grazing harmlessly past his temples or glancing off his belly? And why, whenever I tried to slip and counter, was I eating glove leather? I tracked him through the blur of his hands, and all of the angles looked wrong, the planes of his face and body askew. There was nothing solid to hit. And all the while he was hammering me with punches I sensed too late—slow and heavy blows, but maddeningly oblique.

When the bell finally saved me, we embraced (it's a paradox: nothing makes men love each other so much as a good-natured fist fight). I collapsed in one of the folding chairs with my head throbbing and the sweat rolling down, and I said to myself: "That seals it. Faurie-Raymond has to be true."

Nick represents a type that ninety percent of boxers fear and despise on sight. Nick is a lefty, which is, according to my pugilism professor, "an abomination" and "a birth defect." Here, my professor joins other righty authorities in the sweet science (they refer to themselves as "orthodox," as if to point up lefty perversion), who don't seem to be kidding when they say, "All southpaws should be drowned at birth."

My professor's claim that lefties are defective has a surprising grain of truth in it. In a world of scissors and school desks shaped for righties, being a lefty is not just annoying. It seems to be bad for you. According to a number of studies, lefties are at higher risk for disorders like schizophrenia, mental retardation, immune deficiency, epilepsy, learning disability, spinal deformity, hypertension, ADHD, alcoholism, and stuttering.

Which brings me to Charlotte Faurie and Michel Raymond, a pair of French scientists who study the evolution of handedness. Left-handedness is partly heritable and is associated with significant health risks. So why, they wondered, hasn't natural selection trimmed it away? Were the costs of left-handedness cancelled out by hidden fitness benefits?

The scientists noted that lefties have advantages in sports like baseball and fencing where the competition is interactive (but not in sports, like gymnastics or swimming, with no direct interaction). In the elite ranks of cricket, boxing, wrestling, tennis, baseball and more, lefties are massively over-represented. The reason is obvious. Since ninety percent of the world is right-handed, righties usually compete against each other. When they confront lefties, who do everything backwards, their brains reel, and the result can be as lopsided as my mauling by Nick. In contrast, lefties are most used to facing righties; when two lefties face off, any confusion cancels out.
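The logic sketched here is negative frequency-dependent selection: the southpaw's surprise advantage is largest when lefties are rare and evaporates as they become common, so a fixed health cost can be balanced at a small equilibrium frequency. A toy replicator sketch, with payoffs that are purely illustrative:

```python
# Negative frequency-dependent selection: lefties gain a surprise bonus
# against the (1 - p) righties they meet, but pay a fixed health cost.
# All numbers are illustrative, not empirical estimates.
SURPRISE = 0.10   # fighting bonus per encounter with an unprepared righty
COST = 0.09       # fixed fitness cost of left-handedness

p = 0.5           # initial frequency of lefties
for _ in range(5000):
    fit_left = 1 + SURPRISE * (1 - p) - COST
    fit_right = 1.0               # righties mostly meet familiar righties
    mean_fit = p * fit_left + (1 - p) * fit_right
    p = p * fit_left / mean_fit   # replicator update

print(f"equilibrium lefty frequency: {p:.3f}")  # settles at 1 - COST/SURPRISE
```

With these numbers the population settles at 10% left-handers, close to the real-world figure; raising the payoff for surprise relative to the cost (as in a more violent society) pushes the equilibrium higher, which is the pattern Faurie and Raymond went looking for.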

Faurie and Raymond made a mental leap. The lives of ancestral people were typically more violent than our own. Wouldn't the lefty advantage in sports—including combat sports like boxing, wrestling, and fencing—have extended to fighting, whether with fists, clubs, or spears? Could the fitness benefits of fighting southpaw have offset the health costs associated with left-handedness? In 2005 Faurie and Raymond published a paper supporting their prediction of a strong correlation between violence and handedness in preindustrial societies: the more violent the society, the more lefties. The most violent society they sampled, the Eipo of Highland New Guinea, was almost thirty percent southpaw.

What makes a scientific explanation beautiful? General factors like parsimony play a role, but as with any aesthetic question, quirks of personal taste bulk large. Why do I find the Faurie-Raymond Hypothesis attractive? Partly because it was an almost recklessly creative idea, and yet the data seemed to fit. But mainly because the undoubtable truth of it was pounded into my brain by a young soldier sometime last year.

This is not to say, with apologies to Keats, that beauty and truth are synonyms. Sometimes the truth turns out to be dull and flat. Many of the loveliest explanations—the ones we adore with almost parental fondness—turn out dead false. And this is what T. H. Huxley called scientific tragedy: "the slaying of a beautiful hypothesis by an ugly fact."

Many studies have since examined the Faurie-Raymond Hypothesis. Results have been mixed, but facts have surfaced that are, to my taste, quite decidedly ugly. One recent and impressive inquiry found no evidence that lefties are over-represented among the Eipo of Highland New Guinea.

It hurts to surrender a beloved idea—one you just knew was true, one that was stamped into your mind by lived experience, not statistics. And I'm not yet ready to consign this one to the boneyard of lovely—but dead—science. Faurie and Raymond brought in sports data to shore up their main story about fighting. But I think the sports data may actually be the main story. Lefty genes may have survived more through southpaw success in play fights than in real fights—a possibility Faurie and Raymond acknowledge in a later paper. Athletic contests are important across cultures, and we are wrong if we dismiss them as frivolous. Around the world, sport is mainly a male preserve, and winners—from captains of football teams to traditional African wrestlers to Native American runners and lacrosse players—gain more than mere laurels. They elevate their cultural status, winning the admiration of men and the desire of women (research confirms the stereotype: athletic men have more sexual success). This raises a bigger possibility: that our species has been shaped more than we know by the survival of the sportiest.

Actor, Writer, Director; Host of PBS program Brains on Trial; Author, Things I Overheard While Talking to Myself

"There are more things in heaven and earth… than are dreamed of in your philosophy."

It doesn't sound like an explanation, but I take it that way. For me, Hamlet's admonition explains the confusion and uncertainty of the universe (and, lately, the multiverse). It urges us on when, as they always will, our philosophies produce anomalies. It answers the unspoken question, "WTF?" With every door into nature we nudge open, a hundred new doors become visible, each with its own inscrutable combination lock. It's both an explanation and a challenge, because there's always more to know.

I like how it endlessly loops back on itself. Every time you discover a new thing in heaven or earth, it becomes part of your philosophy, which will eventually be challenged by new new things.

Like all explanations, of course, it has its limits. Hamlet says it to Horatio as a way of urging him to accept the possibility of ghosts. It could just as well be used to prompt belief in UFOs, astrology, and even god—as if to say that something is proved to exist by the very fact that you can't disprove its existence.

Still, the phrase can get us places. Not as a taxi to the end of thinking, but as a passport to exploration.

I think these words of Hamlet's are best thought of as a ratchet, a word earthily beautiful in sound and meaning: Keep moving on, but preserve what works. We need Einstein for GPS, but we can still get to the moon with Newton.

The 19th Century Explanation of the Remarkable Connection Between Electricity and Magnetism

No explanation I know of in all of recent scientific history is as beautiful or deep, or ultimately as elegant, as the 19th century explanation of the remarkable connection between two familiar, but seemingly distinct forces in nature, electricity and magnetism. It represents to me all that is the best about science: surprising empirical discoveries combined with a convoluted path to a remarkably simple and elegant mathematical framework, which then explained far more than was ever bargained for, and in the process produced the very technology that now powers modern civilization.

It all began with strange experiments with jumping frogs and electric circuits, capped by the serendipitous discovery, by the self-schooled Michael Faraday, the greatest experimentalist of his time, of a very strange connection between magnets and electric currents. It was by then well known that a moving electric charge, i.e. a current, created a magnetic field around the current that could repel or attract other magnets located nearby.

What remained an open question was whether magnets could produce any electric force on charged objects. Faraday discovered, by accident, that when he turned a switch on or off to start or stop a current, creating a magnetic field that grew or decayed with time, a force would suddenly arise in a nearby wire during the periods when the field was changing, moving the electric charges within it to create a current.

Faraday's law of induction, as it became known, is not only responsible for the basic principle governing all electric generators, from Niagara Falls to nuclear power plants; it also produced a theoretical conundrum that required the mind of the greatest theoretical physicist of his time, James Clerk Maxwell, to set things straight. Maxwell realized that Faraday's result implied that it was the changing magnetic field (a pictorial concept introduced by Faraday himself, because he felt more comfortable with pictures than algebra) that produced an electric field, which would push the charges around the wire, creating a current.

But for the equations governing electric and magnetic fields to be mathematically symmetric, a changing electric field, and not merely moving charges, would also have to produce a magnetic field. This not only produced a set of mathematically consistent equations that every physics student knows, and some love, called Maxwell's equations, which can fit on a T-shirt; it also established the physical reality of what was otherwise a mere figment of Faraday's imagination: a field, some quantity associated with every point in space and time.

But even more than this, Maxwell realized that if a changing electric field produced a magnetic field, then a constantly changing electric field, such as that which occurs when I continue to jiggle a charge up and down, would produce a continuously changing magnetic field. But that in turn would create a continuously changing electric field, which in turn would create a continuously changing magnetic field, and so on. This field 'disturbance' would move out from the original source, the jiggling charge, at a rate that Maxwell could calculate on the basis of his equations. The parameters in these equations came from experiment—measuring the strength of the electric force between two known charges, and the strength of the magnetic force between two known currents.

From these two fundamental properties of nature, Maxwell calculated the speed of the disturbance and found out ... you guessed it: the speed was precisely the speed that light was measured to have! Maxwell thus discovered that light is indeed a wave ... but a wave of electric and magnetic fields that moves through space at a precise speed determined by two fundamental constants of nature, thus laying the basis for Einstein to come along a generation or so later and demonstrate that the constant speed of light required a revision in our notions of space and time.
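Maxwell's calculation can be repeated today in a few lines. This is a sketch using the modern SI values of the two measured constants (today's numbers, not the cruder values available to Maxwell):

```python
import math

# Modern SI values of the two experimentally accessible constants:
# the permittivity and permeability of free space.
EPSILON_0 = 8.8541878128e-12  # F/m, set by the electric force between charges
MU_0 = 1.25663706212e-6       # H/m, set by the magnetic force between currents

# Maxwell's result: the wave disturbance travels at c = 1 / sqrt(mu_0 * eps_0)
c = 1.0 / math.sqrt(MU_0 * EPSILON_0)

print(f"{c:.0f} m/s")  # very close to the measured speed of light, 299,792,458 m/s
```

Plugging in the measured strengths of the electric and magnetic forces, and nothing else, yields the speed of light.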

So, from jumping frogs and differential equations we arrive at one of the most beautiful unifications in all of physics: the unification of electricity and magnetism in a single theory of electromagnetism. That theory explained the existence of the very thing that allows us to observe the universe around us, namely light; its practical implications produced the mechanisms that power all of modern civilization and the principles that govern essentially all modern electronic devices; and the nature of the theory itself produced a series of further puzzles that allowed Einstein to yield new insights into space and time!

Not bad for a set of experiments whose worth was questioned by Gladstone (or Queen Victoria—depending upon which apocryphal story one buys)—who came into Faraday's laboratory and wondered what all the fuss was about, and what use all of this experimentation was for. He (or she) was told, either: "Of what use is a newborn baby?"—or in my favorite version of the story—"Use? Why one day this will be so useful you will tax us for it!" Beauty, elegance, depth, utility, adventure, and excitement! Science at its best!

This year's Edge Question is about one's personal responses to explanation. When I was first excited by physics, the depths of its explanations were compelling to me in very esoteric contexts. For example, the binding of matter, energy and space-time in General Relativity seemed (and indeed is) an extraordinarily elegant and deep explanation.

Nowadays I am even more compelled by powerful explanations that lie behind the things we see around us that are too easily taken for granted. And in thinking about the Question, I find myself drawn to a context that is experienced every day by just about everybody.

How generous is that himself the sun
—arriving truly, faithfully who goes
(never for a moment ceasing to begin
the mystery of day for someone's eyes)

Thus wrote ee cummings in the opening of his short but lyrical celebration of our star. Those words highlight a daily moment—a sunrise—whose associated human sense of (in)significance and mystery may for some be only deepened by appreciating (at least) three great explanations underlying the experience. And each of those explanations has at least one of the qualities of depth, elegance and beauty.

If you care about such things, and (like me) live at a northern middle latitude, you will know the range of the horizon visible from your home, between (roughly) south-east and north-east, across which the point of sunrise shifts back and forth over the year, with sunrise getting later as it moves northward and the days shorten, and the motion reversing at the winter solstice. And beyond that quite complex behaviour is the simple truth of the sun's fidelity—we can indeed trust it to come up somewhere in the East every morning.

Like a great work of art, a great scientific explanation loses none of its power to inspire awe afresh whenever one contemplates it. So it is with the explanation that those daily and annual cycles of sunrises are explicable by a tilted rotating Earth orbiting the Sun, whose average axial direction can be considered fixed relative to the stars thanks to a still-mysterious conservation law.

Unlike my two other chosen explanations, this one encountered scepticism from scientists for decades. The heliocentric view of the solar system, articulated by Copernicus in the mid 16th century, was not widely accepted until well into the 17th century. For me, that triumph over the combination of scientific scepticism and religious hostility only adds to the explanation's appeal.

Another explanation is certainly elegant and lies behind the changing hues of the sky as the sun rises. Lord Rayleigh succeeded James Clerk Maxwell as Cavendish professor of physics at Cambridge. One of his early achievements was to deduce laws of the scattering of light. His first effort reached the right answer on an invalid foundation: the scattering of light in an elastic aether. Although the existence of such an aether wasn't shown to be a fallacy until some years later, he redid his calculations using Maxwell's deeply unifying theory of electromagnetism. 'Rayleigh scattering' is the expression of that theory in contexts where an electromagnetic wave encounters electrically polarised particles much smaller than its wavelength. The amount of scattering, Rayleigh discovered, is inversely proportional to the fourth power of the wavelength. By 1899, he had shown that air molecules themselves were powerful scatterers.

There, in one bound, is the essential explanation of why the sky is blue and why sunrises are reddened. Blue light is scattered much more by air molecules than light of longer wavelengths. The sun's disk is accordingly reddened and all the more so when seen through the long atmospheric path at sunrise and sunset. (To fully account for the effect, you also need to take into account the sun's spectrum and the visual responses of human eyes.) The pink clouds that can add so much to the beauty of a sunrise consist of comparatively large droplets that scatter the wavelengths of reddened sunlight more equally than air molecules—colourwise, what you see is what they get.
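The inverse fourth-power law makes the effect easy to quantify. A minimal sketch, assuming representative wavelengths of 450 nm for blue light and 650 nm for red (illustrative values, not from the text):

```python
# Relative Rayleigh scattering of blue vs. red light.
# Illustrative wavelengths: ~450 nm for blue, ~650 nm for red.
blue_nm, red_nm = 450.0, 650.0

# Scattering strength goes as 1/wavelength^4, so the blue:red ratio is (red/blue)^4.
ratio = (red_nm / blue_nm) ** 4

print(f"Blue light is scattered about {ratio:.1f}x more strongly than red")
```

That factor of roughly four is enough to paint the sky blue and redden the low sun.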

The third explanation behind a sunrise is conceptually and cosmologically the deepest. What is happening in the sun to generate its seemingly eternal light and heat? Understanding the nuclear reactions at the sun's core was just a part of an explanation that (thanks especially to Burbidge, Burbidge, Fowler and Hoyle in 1957) simultaneously allowed us to understand not only the light from many kinds of stars but also how almost all the naturally occurring chemical elements are produced throughout the universe: in chains of reactions occurring within stable and cataclysmically unstable cosmic balls of gas in their various stages of stellar evolution, driven by the shifting influences of all the fundamental forces of nature—gravity, electromagnetism and the strong and weak nuclear forces.

Edge readers know that scientific understanding enhances rather than destroys nature's beauty. All of these explanations for me contribute to the beauty in a sunrise.

Ah, but what is the explanation of beauty? Brain scientists grapple with nuclear magnetic resonance images—a recent meta-analysis indicated that all of our aesthetic judgements seem to involve neural circuits in the right anterior insula, an area of the cerebral cortex typically associated with visceral perception. Perhaps our sense of beauty is a by-product of the evolutionary maintenance of the senses of belonging and of disgust. For what it's worth, as exoplanets pour out of our telescopes, I believe that we will encounter astrochemical evidence for some form of extraterrestrial organism well before we achieve a deep, elegant or beautiful explanation of human aesthetics.

Mathematician, Computer Scientist; CyberPunk Pioneer; Novelist; Author, Lifebox, the Seashell, and the Soul: What Gnarly Computation Taught Me About Ultimate Reality, the Meaning of Life, and How to be Happy

Inverse Power Laws

I'm intrigued by the empirical fact that most aspects of our world and our society are distributed according to so-called inverse power laws. That is, many distributions take the form of a curve that swoops down from a central peak into a long tail that asymptotically hugs the horizontal axis.

Inverse power laws are elegantly simple, deeply mysterious, but more galling than beautiful. Inverse power laws are self-organizing and self-maintaining. For reasons that aren't entirely understood they emerge spontaneously in a wide range of parallel computations, both social and natural.

One of the first social scientists to notice an inverse power law was George Kingsley Zipf, who formulated an observation now known as Zipf's Law. This is the statistical fact that, in most documents, the frequency with which a given word is used is roughly proportional to the reciprocal of the word's popularity rank. Thus the second most popular word is used half as much as the most popular word, the tenth most popular word is used a tenth as much as the most popular word, and so on.
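Zipf's Law as stated above can be put in a few lines of code. This is a minimal sketch of the rank-frequency relation; the vocabulary size of 1000 is an arbitrary illustration:

```python
# Zipf's law: the frequency of the word at rank r is proportional to 1/r.
def zipf_share(rank: int, vocab_size: int) -> float:
    """Fraction of all word occurrences claimed by the word at `rank`."""
    normalizer = sum(1.0 / r for r in range(1, vocab_size + 1))
    return (1.0 / rank) / normalizer

top = zipf_share(1, 1000)
# The second most popular word is used half as much as the most popular...
print(zipf_share(2, 1000) / top)   # 0.5
# ...and the tenth most popular word a tenth as much.
print(zipf_share(10, 1000) / top)  # 0.1
```

The same reciprocal-of-rank shape reappears in the book sales, incomes, and lake sizes discussed below.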

In society, similar kinds of inverse power laws govern society's rewards. Speaking as an author, I've noticed, for instance, that the hundredth most popular author sells a hundredth as many books as the author at the top. If the top writer sells a million copies, someone like me might sell ten thousand.

Disgruntled scribes sometimes fantasize about a utopian marketplace in which the naturally arising inverse power law distribution would be forcibly replaced by a linear distribution, that is, a sales schedule that lies along a smoothly sloping line instead of taking the form of the present bent curve that starts at an impudently high peak and then swoops down to dawdle along the horizontal axis.

But there's no obvious way that the authors' sales curve could be changed. Certainly there's no hope of having some governing group try to force a different distribution. After all, people make their own choices as to what books to read. Society is a parallel computation, and some aspects of it are beyond control.

The inverse-power-law aspects of income distribution are particularly disturbing. Thus the second-wealthiest person in a society might own half as much as the richest, with the tenth richest person possessing only a tenth as much, and—out in the burbs—the thousandth richest person making only one thousandth as much as the person at the top.

Putting the same phenomenon a little more starkly, while a company's chief executive officer might earn a hundred million dollars a year, a software engineer at the same company might earn only a hundred thousand dollars a year, that is, a thousandth as much. And a worker in one of the company's overseas assembly plants might earn only ten thousand dollars a year—a ten-thousandth as much as the top exec.

Power law distributions can also be found in the opening weekend grosses of movies, in the number of hits that web pages get, and in the audience shares for TV shows. Is there some reason why the top ranks do so overly well, and the bottom ranks seem so unfairly penalized?

The short answer is no—there's no real reason. There need be no conspiracy to skew the rewards. Galling as it seems, inverse power law distributions are a fundamental natural law about the behavior of systems. They're ubiquitous.

Inverse power laws aren't limited to societies—they also dominate the statistics of the natural world. The tenth largest lake is likely to be a tenth as large as the biggest one, the hundredth largest tree in a forest may be a hundredth as big as the largest tree, and the thousandth largest stone on a beach is a thousandth the size of the largest one.

Whether or not we like them, inverse power laws are as inevitable as turbulence, entropy, or the law of gravity. This said, we can somewhat moderate them in our social context, and it would be too despairing to say we have no control whatsoever over the disparities between our rich and our poor.

But the basic structures of inverse power law curves will never go away. We can rail at an inverse power law if we like—or we can accept it, perhaps hoping to bend the harsh law towards not so steep a swoop.

I do drug discovery for a living, so my own field doesn't provide as many elegant explanations as I'd like—not yet, anyway. Physics and math, though, are a collection of jewels, since mathematics can't seem to help being the material the world is made from.

And while there are explanations whose depth and power make you have to sit down for a while once you take them in, many of these require some mathematical scaffolding before you can get a good view. But one of my favorites can be explained to children. I know that because I've told it to my own children, and because it's one that I worked out in my head while I was still a child myself.

Watching the Apollo program unfold on television, I kept hearing about "zero gee", a term that still makes people think (wrongly) that someone in orbit has somehow escaped Earth's gravity. But you can explain it this way: imagine throwing a rock across a field. You can picture the big arc it makes as it goes off into the distance—gravity's rainbow, in the phrase. To throw it farther, you have to throw it higher, and harder, making a bigger arc that takes it farther into the distance.

Now imagine really getting some air, using a catapult, a cannon, or whatever you like. The rock arcs out higher and faster, and lands farther and farther away, farther than you can see in the distance: across your town, across your country, across the nearest ocean. Eventually, you launch it so high, so powerfully, that it falls over the distant curving edge of the earth itself. Instead of coming down, it literally misses the ground and falls onward, in a huge whooshing loop around the planet. You have launched your rock into orbit, and instead of talking about zero gravity, you should call it by a better name: free fall.

And this makes the rest of it easier to picture: why rockets go off to the side as they take payloads to orbit, how lower orbits are faster and higher ones slower, how they can eventually spiral in and decay, and why "escape velocity" means exactly what it says. When I think of space and orbital mechanics, I see myself as a boy again, throwing rocks across a field into the Arkansas sky and dreaming about what would happen if one never came down.
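The rest of the picture can be put in numbers. A minimal sketch of circular-orbit and escape speeds, using standard values for Earth's gravitational parameter and radius:

```python
import math

GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # Earth's mean radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at a given altitude: v = sqrt(GM / r)."""
    return math.sqrt(GM_EARTH / (R_EARTH + altitude_m))

def escape_speed(altitude_m: float) -> float:
    """Speed needed to never come back down: v = sqrt(2 GM / r)."""
    return math.sqrt(2 * GM_EARTH / (R_EARTH + altitude_m))

low = circular_orbit_speed(400e3)       # ISS-like altitude: roughly 7.7 km/s
high = circular_orbit_speed(35_786e3)   # geostationary altitude: roughly 3.1 km/s
print(low > high)                       # lower orbits really are faster
print(escape_speed(0) / circular_orbit_speed(0))  # exactly sqrt(2) ~ 1.414
```

The sqrt(2) ratio is why "escape velocity" means exactly what it says: only about 41% more speed than a grazing circular orbit, and the rock never returns.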

Professor of Psychology, University of Michigan; Author, Intelligence and How We Get It

The Elegant Robert Zajonc

The great social psychologist Robert Zajonc came up with elegant explanations for three very important phenomena.

1) The literature on physiological arousal and task performance was a mess, with some people finding performance improvement with greater arousal and some people finding performance decrement. Zajonc showed that, for the simplest tasks, the greater the arousal the greater the improvement—because arousal was amplifying a simple, overlearned response. For more complex tasks, arousal worsened task performance—because arousal was simultaneously multiplying many possible alternative, and competing, responses.

2) The earlier in the sibship a person is born, the higher the IQ on average. This had long been known, but Zajonc offered the most plausible explanation to date. The firstborn comes into a world where the average IQ of the household is (100 [mother] + 100 [father] + 0 [self])/3 ≈ 67. The second child is born into a family with a lower average IQ, because the average 100 IQ of each parent is diluted not only by the second child's own zero IQ but by the first child's IQ, which, unless that child is an adult, is less than 100 on average. The larger the number of children, the worse the average intelligence of the environment in which intellectual development takes place.

3) Zajonc showed that "mere familiarity" with an object of apparently any kind increases its attractiveness. This is true of photographs of people, of snatches of a melody, of Turkish words (for monolingual English speakers) and of Chinese characters (for people ignorant of Chinese languages). Unless the initial attractiveness of a stimulus is negative—and sometimes even then—preference for the stimulus increases with exposure. Zajonc discovered this fact decades ago but never could come up with a plausible explanation for it—nor could anyone else. Toward the end of his life Zajonc came up with an extremely simple explanation in terms of reinforcement theory. Every time we encounter a stimulus that is not followed by punishment of some kind, a minor reward is experienced. The more exposures to the (unpunished) stimulus, the greater the reward associated with it, and hence the greater its attractiveness.
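The averaging logic in point 2 can be sketched in a few lines. The sibling level of 40 below is purely illustrative; the model only requires that older siblings score below the adult level of 100:

```python
def intellectual_environment(parent_iqs, sibling_levels):
    """Zajonc's confluence idea in miniature: a newborn's intellectual
    environment is the average level of everyone in the household,
    with the newborn itself contributing zero."""
    members = list(parent_iqs) + list(sibling_levels) + [0]
    return sum(members) / len(members)

parents = [100, 100]

first = intellectual_environment(parents, [])     # (100 + 100 + 0) / 3, about 67
# For a second child, assume (illustratively) the firstborn has reached
# an absolute level of 40 by the time the sibling arrives.
second = intellectual_environment(parents, [40])  # (100 + 100 + 40 + 0) / 4 = 60
print(first, second)  # the second child enters a poorer average environment
```

Each young sibling added to the household drags the average down further, which is the model's account of the birth-order effect.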

My Edge answer this year has to be evolution by natural selection. Not only does it explain how we all got here and how we are and behave as we do, it can even explain (at least to my fairly critical satisfaction) why many people refuse to accept it and why even more people believe in an all-powerful Deity. But since other Edge respondents are likely to have natural selection as their favorite deep, elegant, and beautiful explanation (it has all three attributes, in addition to wide-ranging explanatory power), I'll have to home in on one particular instance: the explanation of how humans acquired language—by which I mean grammatical structure.

There is evidence to suggest that our ancestors developed effective means to communicate using verbal utterances starting at least 3 million years ago. But grammar is much more recent, perhaps as recent as 75,000 years ago. How did grammar arise?

Anyone who has traveled abroad knows that to communicate basic needs, desires, and intentions to people in your vicinity, concerning objects within sight, a few referring words, together with gestures, suffice. The only grammar required is to occasionally juxtapose two words—"Me Tarzan, you Jane" being the information- (and innuendo-) rich, classic example from Hollywood. Anthropologists refer to such a simple, word-pairing, communicative system as protolanguage.

But to communicate about things not in the immediate here-and-now, you need more. Significantly, effectively planning future joint activities needs pretty well all of grammatical structure, particularly if the planning involves more than two people, with even more demands put upon the grammar if the planned action requires coordination between different groups not all present at the same place or time.

Given the degree to which human survival depends on our ability to plan and coordinate our actions—and to collectively debrief after things go wrong so we avoid repeating our mistakes (though at a national level we seem really bad at doing that)—it is clear that grammatical structure is hugely important to Homo sapiens. Indeed, many argue that it is our defining characteristic. But communication, while arguably the killer app for grammar, clearly cannot be what put it into the gene pool in the first place, and for a very simple reason. Since grammar is required in order for verbal utterances to convey more complex ideas than is possible with protolanguage, it only comes into play when the brain can form such complex ideas. These considerations lead to what I think is accepted (though not without opposition) as the "Standard Explanation" of language acquisition.

In highly simplified terms, the Standard Explanation runs like this.

1. Brains (or the organs that became brains) first evolved to associate motor responses to sensory input stimuli.

2. In some creatures, brains became more complex, performing a mediating role between input stimuli and motor responses.

3. In some of those creatures, the brain became able to over-ride what was previously an automatic stimulus-response sequence.

4. In Homo sapiens, and to a lesser extent in other species, the brain acquired the ability to function "off-line", effectively running simulations of actions without the need for sensory input stimuli and without generating output responses.

Stage 4 is when the brain acquires grammar. What we call grammatical structure is in fact a descriptive/communicative manifestation of a mental structure for modeling the world.

As a mathematician, what I like about this explanation is that it also tells us where the brain got its capacity for mathematical thinking. Namely, mathematical thinking is essentially another manifestation of the brain's simulation capacity, but in quantitative/relational/logical terms rather than descriptive/communicative.

As is usually the case with natural selection arguments, it takes considerable work to flesh out the details of these simplistic explanations, and some days I am less convinced than others about some aspects, but overall they strike me as about right. In particular, the mathematical story explains why doing mathematics carries with it an overpowering Platonistic sense of reasoning not about abstractions but real objects—at least "real" in a Platonic realm. At which point, the lifelong mathematics educator in me says I should leave the proof of that corollary as an exercise for the reader—so I shall.

Assistant Professor of Environmental Studies, NYU; Researching cooperation and the tragedy of the commons

Tit For Tat

Selfishness can sometimes seem like the best strategy. It is the rational response to the prisoner's dilemma, for instance, where each individual in a pair can either cooperate or defect, leading to four potential outcomes. No matter what the other person does, selfish behavior always yields greater return. But if both players defect, both do worse than if they had cooperated. Yet when political scientist Robert Axelrod and his colleagues ran hundreds of rounds of the prisoner's dilemma on a computer, the repetition of the game led to a different result.

Experts from a wide range of disciplines submitted 76 different game strategies for Axelrod to try—some of them very elaborate. At the end of 200 rounds, one strategy was far more successful than the others. It was also the simplest: Tit For Tat, in which the player cooperates on the first move and thereafter does whatever the opponent did on the previous move, was the winner. The importance of cooperation to evolution was conjectured by humans, but simulated and verified with machines.
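The core of the tournament can be sketched in a few lines. The payoff numbers below are the standard illustrative ones (temptation 5, reward 3, punishment 1, sucker's payoff 0), and the round count of 200 matches the text:

```python
# A minimal iterated prisoner's dilemma.
# PAYOFF maps (my move, their move) to (my points, their points).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained mutual cooperation
print(play(always_defect, tit_for_tat))  # (204, 199): defection gains almost nothing
```

Two Tit For Tat players earn 600 points each through unbroken cooperation, while a relentless defector exploits Tit For Tat only once before both sink into the mutual-defection payoff, which is the repetition effect Axelrod's tournament exposed.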

This elegant explanation was then documented in living egoists with an elegant experiment. Evolutionary biologist Manfred Milinski noticed Tit For Tat behavior in his subjects, three-spined sticklebacks (Gasterosteus aculeatus). When he watched a pair of these fish approach a predator, he observed four options: they could swim side by side, one could take the lead while the other followed closely behind (or vice versa), or they could both retreat. These four scenarios satisfied the four inequalities that define the prisoner's dilemma.

For the experiment, Milinski wanted to use pairs of sticklebacks, but they are impossible to train. So he placed in the tank a single stickleback and a set of mirrors that would act like two different types of companions. In the first treatment, a parallel mirror was used to simulate a cooperative companion that swam alongside the subject stickleback. In the second treatment, an oblique mirror system set at a 32-degree angle simulated a defecting partner—that is, as the stickleback approached the predator (a cichlid), the companion appeared to fall increasingly and uncooperatively behind. Depending on the mirror, the stickleback felt he was sharing the risk equally or increasingly going it alone.

When the sticklebacks were partnered with a defector, they preferred the safer half of the tank furthest away from the predator. But in the trials with the cooperating mirror, the sticklebacks were two times more likely to venture into the half of the tank closest to the cichlid. The sticklebacks were more adventurous if they had a sidekick. In nature, cooperative behavior would translate to more food, more space, and therefore greater individual reproductive success. Contrary to predictions that selfish behavior or retreat was optimal, Milinski's observation that sticklebacks most often approached the predator together was in line with Axelrod's conclusion that Tit For Tat was the optimal evolutionary strategy.

Milinski's evidence, published in 1987 in the journal Nature, was the first to demonstrate that cooperation based on reciprocity definitely evolved among egoists, albeit small ones. A large body of research now shows that many biological systems, especially human societies, are organized around various cooperative strategies; the scientific methods continue to become more and more sophisticated, but the original experiments and Tit For Tat strategy are beautifully simple.

Marcel Duchamp's 20th-century Readymades upended art history with the same mental earthquake with which Charles Darwin illuminated biological history. Duchamp's objects moved slowly into the mainstream because they appeared to be some sort of institutional blunder: why was a urinal slotted into the history of art instead of the history of plumbing? Why was the snow shovel inside the museum instead of outside tidying the driveway? Therein lies the essence of his Readymade: an object in the wrong place that questions the aura of objects in the right place. Had the urinal been hung fifty feet away in the restroom we would never have heard of it, but misplaced it was juried, crated, photographed, cataloged, critiqued, insured, donated, dissertated, collected, positioned, lighted, cleaned, and paid for. Had his bottle rack, chocolate grinder, pharmacy bottles or other common objects been stashed away in their proper spaces they would have passed without mention. But by "creating" an artwork from something made for another purpose he exposed the valuation system that produced an appearance of transcendentalism. In one end went butcher scraps and out came fine sausage.

Duchamp came from a family of artists and had mastered painterly cubism before breaking with art tradition by designating common objects as art objects. In doing so he exchanged visual space for mental space, a swap of form and color for context and stratified meaning, making him the father of both Pop Art and Conceptual Art, the dominant forms of art today. His Readymades are technically nominalisms, a stock prop of skepticism traded in letters back and forth since the 11th century. Nominalists hold that there are no eternal verities or universals, only names that pretend there are; no special objects, only ones gathered by words and privileged according to the metaphysics of the time. Hence the demystifying brilliance of the Readymades.

Duchamp pushed the envelope as far as possible with the urinal prank, also known as "Fountain"—what could possibly be less likely to be regarded as art than a piss receptacle? He signed the urinal "R. Mutt," a pun on "I'm a dog," and submitted it anonymously to the 1917 Society of Independent Artists, a New York show that employed Duchamp as a juror. The bylaws required the inclusion of anyone who could pony up the six-buck admission, but the jury pronounced "R. Mutt's" urinal not-art, declined to include it in the catalog, and hid it behind a partition. Over the next nine decades this piss-pot became the most debated event in art theory: an artist-juror rejecting the fake art of his own nom de plume. In a 2004 survey, art professionals voted the urinal-Fountain the world's most influential piece of modern art, ahead of works by Picasso and Matisse. Copies of the urinal-Fountain are now in eleven major collections, including the Centre Pompidou and the Tate Modern.

Duchamp did not leave behind many writings, communicating his thought instead through visual mischief. When he did write it was usually another spoof, such as his essay "Texticles" or his bulletin "Rongwrong," that predicted by decades the high-minded goofiness later known as deconstruction. His thought became popularized as "everything is art," somewhat of a misnomer, as he never spoke those exact words; the documents indicate it would more accurately be "everything can be regarded as art." Duchamp fell in and out of favor over his eighty-one years, but he kept up the ruse, announcing ever more preposterous things as art: "Every breath is an artwork," he explained to an interviewer a year before he died. "I am a breather, I enjoy it tremendously."

Professor of Physics, Institute for Advanced Study; Author, Many Colored Glass; The Scientist as Rebel

Explaining How Two Systems Of The World Can Both Be True

The situation that I am trying to explain is the existence side by side of two apparently incompatible pictures of the universe. One is the classical picture of our world as a collection of things and facts that we can see and feel, dominated by universal gravitation. The other is the quantum picture of atoms and radiation that behave in an unpredictable fashion, dominated by probabilities and uncertainties. Both pictures appear to be true, but the relationship between them is a mystery.

The orthodox view among physicists is that we must find a unified theory that includes both pictures as special cases. The unified theory must include a quantum theory of gravitation, so that particles called gravitons must exist, combining the properties of gravitation with quantum uncertainties.

I am looking for a different explanation of the mystery. I ask the question, whether a graviton, if it exists, could conceivably be observed. I do not know the answer to this question, but I have one piece of evidence that the answer may be no. The evidence is the behavior of one piece of apparatus, the gravitational wave detector called LIGO that is now operating in Louisiana and in Washington State. The way LIGO works is to measure very accurately the distance between two mirrors by bouncing light from one to the other. When a gravitational wave comes by, the distance between the two mirrors will change very slightly. Because of ambient and instrumental noise, the actual LIGO detectors can only detect waves far stronger than a single graviton. But even in a totally quiet universe, I can answer the question, whether an ideal LIGO detector could detect a single graviton. The answer is no. In a quiet universe, the limit to the accuracy of measurement of distance is set by the quantum uncertainties in the positions of the mirrors. To make the quantum uncertainties small, the mirrors must be heavy. A simple calculation, based on the known laws of gravitation and quantum mechanics, leads to a striking result. To detect a single graviton with a LIGO apparatus, the mirrors must be exactly so heavy that they will attract each other with irresistible force and collapse into a black hole. In other words, nature herself forbids us to observe a single graviton with this kind of apparatus.
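The shape of this "simple calculation" can be sketched in rough order-of-magnitude form (a hedged paraphrase of the argument as stated above, not Dyson's exact derivation):

```latex
% A free mirror of mass m, observed for a time \tau, has a standard
% quantum limit on its position uncertainty:
\[
  \Delta x \;\gtrsim\; \sqrt{\frac{\hbar \tau}{m}} .
\]
% To resolve the tiny displacement \delta L produced by a single
% graviton, we need \Delta x < \delta L, i.e. a mirror mass
\[
  m \;\gtrsim\; \frac{\hbar \tau}{(\delta L)^2} .
\]
% As \delta L shrinks toward the single-graviton scale, the required
% mass grows without bound, until the mirror's gravitational radius
\[
  r_s \;=\; \frac{2 G m}{c^2}
\]
% exceeds the separation of the mirrors: the ideal detector collapses
% into a black hole before it can register one graviton.
```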

I propose as a hypothesis, based on this single thought-experiment, that single gravitons may be unobservable by any conceivable apparatus.

If this hypothesis were true, it would imply that theories of quantum gravity are untestable and scientifically meaningless. The classical universe and the quantum universe could then live together in peaceful coexistence. No incompatibility between the two pictures could ever be demonstrated. Both pictures of the universe could be true, and the search for a unified theory could turn out to be an illusion.

Empiricism is the deepest and broadest principle for explaining the most phenomena in both the natural and social worlds. Empiricism is the principle that we should see for ourselves instead of trusting the authority of others. Empiricism is the foundation of science, as the motto of the Royal Society of London—the first scientific institution—attests: Nullius in Verba—take nobody's word for it.

Galileo took nobody's word for it. According to Aristotelian cosmology—the Catholic Church's final and indisputable authority of Truth on matters heavenly—all objects in space must be perfectly round, perfectly smooth, and revolve around Earth in perfectly circular orbits. Yet when Galileo looked for himself through his tiny tube with a refracting lens on one end and an enlarging eyepiece on the other he saw mountains on the moon, spots on the sun, phases of Venus, moons orbiting Jupiter, and a strange object around Saturn. Galileo's eminent astronomer colleague at the University of Padua, Cesare Cremonini, was so committed to Aristotelian cosmology that he refused to even look through the tube, proclaiming: "I don't believe that anyone but he saw them, and besides, looking through glasses would make me dizzy." Those who did look through Galileo's tube could not believe their eyes—literally. One of Galileo's colleagues reported that the instrument worked for terrestrial viewing but not celestial, because "I tested this instrument of Galileo's in a thousand ways, both on things here below and on those above. Below, it works wonderfully; in the sky it deceives one." A professor of mathematics at the Collegio Romano was convinced that Galileo had put the four moons of Jupiter inside the tube. Galileo was apoplectic: "As I wished to show the satellites of Jupiter to the Professors in Florence, they would see neither them nor the telescope. These people believe there is no truth to seek in nature, but only in the comparison of texts."

By looking for themselves Galileo, Kepler, Newton and others launched the Scientific Revolution, which in the Enlightenment led scholars to apply the principle of empiricism to the social as well as the natural world. The great political philosopher Thomas Hobbes, for example, fancied himself as the Galileo and William Harvey of society: "Galileus…was the first that opened to us the gate of natural philosophy universal, which is the knowledge of the nature of motion. … The science of man's body, the most profitable part of natural science, was first discovered with admirable sagacity by our countryman, Doctor Harvey. Natural philosophy is therefore but young; but civil philosophy is yet much younger, as being no older…than my own de Cive."

From the Scientific Revolution through the Enlightenment the principle of empiricism slowly but ineluctably replaced superstition, dogmatism, and religious authority. Instead of divining truth through the authority of an ancient holy book or philosophical treatise, people began to explore the book of nature for themselves.

Instead of looking at illustrations in illuminated botanical books scholars went out into nature to see what was actually growing out of the ground.

Instead of relying on the woodcuts of dissected bodies in old medical texts, physicians opened bodies themselves to see with their own eyes what was there.

Instead of burning witches after considering the spectral evidence as outlined in the Malleus Maleficarum—the authoritative book of witch hunting—jurists began to consider other forms of more reliable evidence before convicting someone of a crime.

Instead of a tiny handful of elites holding most of the political power by keeping their citizens illiterate, uneducated, and unenlightened, through science, literacy, and education people could see for themselves the power and corruption that held them down and began to throw off their chains of bondage and demand rights.

Instead of the divine right of kings people demanded the natural right of democracy. Democratic elections, in this sense, are scientific experiments: every couple of years you carefully alter the variables with an election and observe the results. Many of the founding fathers of the United States, in fact, were scientists who deliberately adapted the method of data gathering, hypothesis testing, and theory formation to their nation building. Their understanding of the provisional nature of findings led them to form a social system wherein empiricism was the centerpiece of a functional polity. The new government was like a scientific laboratory conducting a series of experiments year by year, state by state. The point was not to promote this or that political system, but to set up a system whereby people could experiment to see what works. That is the principle of empiricism applied to the social world.

As Thomas Jefferson wrote in 1804: "No experiment can be more interesting than that we are now trying, and which we trust will end in establishing the fact, that man may be governed by reason and truth."

Thirty years ago, I was having lunch with my colleague Bob Abelson on the lawn at Yale. I complained to him that my wife couldn't ever cook steak rare the way I liked it. I didn't see what was so hard about making steak rare. He responded that 30 years earlier in England he had asked the barber to cut his hair short, which was the style in the U.S. at the time, and the barber wouldn't cut it as short as he wanted it.

This response sounded brain-damaged to me but Bob was a brilliant man and wasn't prone to crazy remarks. So, I thought long and hard about this conversation until I understood what had transpired.

Bob had understood what I said at a high level of abstraction. The two stories are actually identical if one sees what I said as "I once asked someone to do something for me who was capable of doing what I asked, and had agreed to do the task in principle, but who refused to do exactly what I asked, because they thought the request was too extreme."

Had Bob really done all that? Of course he had; there is no other explanation.

People naturally are capable of abstracting in a principled way using a particular kind of language, in order to understand new events in terms of similar old events.

I was floored when I realized this, and wondered if maybe this was just a peculiarity of Bob's. Do people really make abstractions of what they are trying to understand?

Then one day I was giving advice to someone and I suggested they "make hay while the sun shines," an odd thing for a Brooklyn boy who knows nothing about hay to say.

This is why all cultures have proverbs. Proverbs give people a language in which to express these high level abstractions about daily life.

This is what it means to understand intelligently. Not everyone tries to do or is capable of doing this kind of abstraction. But it is something that moderately intelligent people regularly do and a case can be made that to some extent everyone does this much of the time.

What are they doing exactly? They are attempting to determine, when they understand a sentence or a situation, who has what goal, what plan they are using to achieve that goal, what obstacles are preventing the achievement of that goal, and what lesson can be learned from the entire situation.
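One toy way to picture this kind of indexing (the field's actual models are far richer; the field names, and the particular way the two stories are encoded, are purely illustrative):

```python
# Toy illustration: two superficially different stories retrieved by
# the same abstract index (goal, plan, obstacle). The encoding below
# is hypothetical, not a real case-based-reasoning system.

def index(story):
    """Reduce a story to its abstract skeleton."""
    return (story["goal"], story["plan"], story["obstacle"])

steak = {
    "teller": "Roger",
    "goal": "have a service performed to an extreme specification",
    "plan": "ask someone able and willing in principle",
    "obstacle": "the request was judged too extreme",
    "surface": "wife won't cook the steak rare enough",
}
haircut = {
    "teller": "Bob",
    "goal": "have a service performed to an extreme specification",
    "plan": "ask someone able and willing in principle",
    "obstacle": "the request was judged too extreme",
    "surface": "English barber won't cut the hair short enough",
}

# Different surface details, identical abstraction -- which is why
# one story reminds a listener of the other.
print(index(steak) == index(haircut))  # True
```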

Really? Do we do that all the time? Yes we do, unconsciously. We have no idea that we are doing it, but we cannot really process the actions or statements of others without doing this kind of calculation.

Every time you find yourself quoting a clichéd proverb, saying "a stitch in time saves nine" or "the shoemaker's children often go unshod," you have made this kind of abstraction. Finding no story of your own to tell, you tell a culturally known one. When we do find a story of our own to tell, of course, we tell it.

If you found a story of your own to tell, it would have had to have been indexed in your mind with precisely these same abstractions.

All conversation depends on this kind of unconscious abstract analysis. We hear a story and we respond with a story. We don't know how we do it, but we do it effortlessly.

This is what reminding and comprehension look like. Without the ability to be reminded we would all see every day as a completely new experience unrelated to what we have seen before. The abstraction mechanism we use is beautiful and elegant and basically unknown to our conscious minds.

Any first course in physics teaches students that the basic quantities one uses to describe a physical system include energy, momentum, angular momentum and charge. What isn't explained in such a course is the deep, elegant and beautiful reason why these are important quantities to consider, and why they satisfy conservation laws. It turns out that there's a general principle at work: for any symmetry of a physical system, you can define an associated observable quantity that comes with a conservation law:

In classical physics, a piece of mathematics known as Noether's theorem (named after the mathematician Emmy Noether) associates such observable quantities to symmetries. The arguments involved are non-trivial, which is why one doesn't see them in an elementary physics course. Remarkably, in quantum mechanics the analog of Noether's theorem follows immediately from the very definition of what a quantum theory is. This definition is subtle and requires some mathematical sophistication, but once one has it in hand, it is obvious that symmetries are behind the basic observables. Here's an outline of how this works (maybe best skipped if you haven't studied linear algebra...). Quantum mechanics describes the possible states of the world by vectors, and observable quantities by operators that act on these vectors (one can explicitly write these as matrices). A transformation of the state vectors coming from a symmetry of the world has the property of "unitarity": it preserves lengths. Simple linear algebra shows that a matrix with this length-preserving property must be the exponential of i times a matrix with the special property of being "self-adjoint" (equal to its own conjugate transpose). So to any symmetry one can associate a self-adjoint operator, called the "infinitesimal generator" of the symmetry; exponentiating i times it gives back the symmetry transformation.
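The linear-algebra fact at the heart of this outline is easy to check numerically. A small self-contained sketch in plain Python (2×2 complex matrices as nested lists, with the matrix exponential summed as a power series; the particular generator H is an arbitrary illustrative choice):

```python
# Verify numerically that U = exp(iH) is unitary when H is
# self-adjoint (equal to its own conjugate transpose).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def expm(A, terms=30):
    """exp(A) by summing the power series I + A + A^2/2! + ..."""
    result = [[1, 0], [0, 1]]
    power = [[1, 0], [0, 1]]
    for n in range(1, terms):
        power = matmul(power, A)
        power = [[x / n for x in row] for row in power]  # now A^n / n!
        result = [[result[i][j] + power[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# An arbitrary self-adjoint "generator": H equals dagger(H).
H = [[1.0, 0.5 - 0.25j],
     [0.5 + 0.25j, -0.3]]
assert all(abs(H[i][j] - dagger(H)[i][j]) < 1e-12
           for i in range(2) for j in range(2))

U = expm([[1j * x for x in row] for row in H])  # U = exp(iH)
UdU = matmul(dagger(U), U)                      # should be the identity

identity = [[1, 0], [0, 1]]
print(all(abs(UdU[i][j] - identity[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # True
```

Swap in any non-self-adjoint H and the unitarity check fails, which is the whole point: length-preserving symmetries and self-adjoint observables come as a matched pair.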

One of the most mysterious basic aspects of quantum mechanics is that observable quantities correspond precisely to such self-adjoint operators, so these infinitesimal generators are observables. Energy is the operator that infinitesimally generates time translations (this is one way of stating Schrödinger's equation), momentum operators generate spatial translations, angular momentum operators generate rotations, and the charge operator generates phase transformations on the states.

The mathematics at work here is known as "representation theory", a subject that shows up as a unifying principle throughout disparate areas of mathematics, from geometry to number theory. This mysterious coherence between fundamental physics and mathematics is a fascinating phenomenon of great elegance and beauty, the depth of which we have yet to sound.

Why do groups of people behave the same way? Why do they behave differently from other groups living nearby? Why are those behaviors so stable over time? Alas, the obvious answer—cultures are adaptations to their environments—doesn't hold up. Multiple adjacent cultures along the Indus, the Euphrates, the Upper Rhine, have differed in language, dress, and custom, despite existing side-by-side in almost identical environments.

Something happens to keep one group of people behaving in a certain set of ways. In the early 1970s, both E.O. Wilson and Richard Dawkins noticed that the flow of ideas in a culture exhibited patterns similar to the flow of genes in a species: high flow within the group, but sharply reduced flow between groups. Dawkins' response was to posit a hypothetical unit of culture called the meme, though he also made its problems clear. With genetic material, perfect replication is the norm and mutations are rare; with culture, it is the opposite. Events are misremembered and then misdescribed, quotes are mangled, and even jokes (pure meme) vary from telling to telling. The gene/meme comparison remained, for a generation, an evocative idea of not much analytic utility.

Dan Sperber has, to my eye, cracked this problem. In a slim, elegant volume of fifteen years ago with the modest title Explaining Culture, he outlined a theory of culture as the residue of the epidemic spread of ideas. In this model there is no meme, no unit of culture separate from the blooming, buzzing confusion of transactions. Instead, all cultural transmission can be reduced to one of two types: making a mental representation public, or internalizing a mental version of a public representation. As Sperber puts it, "Culture is the precipitate of cognition and communication in a human population."

Sperber's two primitives—externalization of ideas, internalization of expressions—give us a way to think of culture not as a big container people inhabit, but rather as a network whose traces, drawn carefully, let us ask how the behaviors of individuals create larger, longer-lived patterns. Some public representations are consistently learned and then re-expressed and re-learned—Mother Goose rhymes, tartan patterns, and peer review have all survived for centuries. Others move from ubiquitous to marginal in a matter of years—pet rocks, the Piña Colada song. Still others thrive only within a subcultural domain—cosplay, Civil War re-enactment. (Indeed, a sub-culture is simply a network of people who traffic in particular representations, representations that are largely inert in the larger culture.)

With Sperber's network-tracing model, culture is best analyzed as an overlapping set of transactions, rather than as a container or a thing or a force. Given this, we can ask detailed questions about which private ideas are made public where, and we can ask when and how often those public ideas take hold in individual minds.
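Sperber's two primitives lend themselves to a toy simulation (entirely illustrative; the random "mis-remembering" below is a crude stand-in for his notion of imperfect internalization):

```python
import random

# Toy model of Sperber's two primitives: a person EXTERNALIZES a
# mental representation (making it public), and each listener
# INTERNALIZES their own, slightly altered, mental version.
# Alphabet, error rate, and chain length are all illustrative.

random.seed(1)
ALPHABET = "abcd"

def externalize(mental):
    return mental  # the public expression of a mental representation

def internalize(public, error_rate=0.1):
    """Copy a public representation, with occasional mis-remembering."""
    return "".join(random.choice(ALPHABET) if random.random() < error_rate
                   else ch for ch in public)

# One representation passed down a chain of ten people:
rep = "abcabcabc"
chain = [rep]
for _ in range(10):
    rep = internalize(externalize(rep))
    chain.append(rep)

# Unlike genes, perfect replication is the exception, not the rule:
print(chain[0], "->", chain[-1])
```

Tracing which variants survive such chains, and in which sub-networks, is exactly the kind of empirical question the model makes tractable.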

Rather than arguing about whether the sonnet is still a vital part of Western culture, for example, Sperber makes it possible to ask instead "Which people have mental representations of individual sonnets, or of the sonnet as an overall form? How often do they express those representations? How often do others remember those expressions?" Understanding sonnet-knowing becomes a network-analysis project, driven by empirical questions about how widespread, detailed, and coherent the mental representations of sonnets are. Cultural commitment to sonnets and Angry Birds and American exceptionalism and the theory of relativity can all be placed under the same lens.

This is what is so powerful about Sperber's idea: culture is a giant, asynchronous network of replication, ideas turning into expressions which turn into other, related ideas. Sperber also allows us to understand why persistence of public expression can be so powerful. When I sing "Camptown Races" to my son, he internalizes his own (slightly different) version. As he learns to read sheet music, however, he is gaining access to a much larger universe of such representations; Beethoven is not around to sing "Für Elise" to him, but through a set of agreed-on symbols (themselves internalized as mental representations) Beethoven's public representations can be internalized centuries later.

Sperber's idea also suggests that increased access to the public presentation of ideas will increase the dynamic range of culture overall. Some publicly available representations will take hold among the widest group of participants in history, considered both in absolute numbers and as a percentage of the human race. (Consider, for example, the number of people who can now understand the phrase "That's killing two pigs with one bird.") It is this wired-up possibility for global cultural imitation that Mark Pagel worries about when he talks about the internet enabling "infinite stupidity."

At the same time, it has never been easier for members of possible sub-cultures to find each other, and to create their own public representations, at much lower cost, longer life, and greater reach, than ordinary citizens have ever been able to. The January 25th protests in Egypt hijacked the official public representation of that day as National Police Day; this was only possible when dissidents could create alternate public representations at a similar scale as the Egyptian state.

Actual reductionism—the interpretation of a large number of effects using a small number of causes—is rare in the social sciences, but Sperber has provided a framework for dissolving large and vague questions about culture into a series of tractable research programs. Most of the empirical study of the precipitate of cognition and communication is still in the future, but I can't think of another current idea in the social sciences that offers that degree of explanatory heft.

Science Historian; Author, Turing's Cathedral: The Origins of the Digital Universe; Darwin Among the Machines

Alfvén's Cosmos

A hierarchical universe can have an average density of zero, while containing infinite mass.

Hannes Alfvén (1908–1995), who pioneered the field of magnetohydrodynamics, against initial skepticism, to give us a universe permeated by what are now called "Alfvén waves," never relinquished his own skepticism concerning the Big Bang. "They fight against popular creationism, but at the same time they fight fanatically for their own creationism," he argued in 1984, advocating instead for a hierarchical cosmology, whose mathematical characterization he credited to Edmund Edward Fournier d'Albe (1868–1933) and Carl Vilhelm Ludvig Charlier (1861–1932). Hierarchical does not mean isotropic, and observed anisotropy does not rule it out.

Gottfried Wilhelm Leibniz (1646-1716), a lawyer as well as a scientist, believed that our universe was selected, out of an infinity of possible universes, to produce maximum diversity from a minimal set of natural laws. It is hard to imagine a more beautiful set of boundary conditions than zero density and infinite mass. But this same principle of maximum diversity warns us that it may take all the time in the universe to work the details out.
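The epigraph's paradoxical-sounding claim is elementary to check. In a hierarchical distribution where the mass enclosed within radius R grows as R^D with D < 3 (Fournier d'Albe's model corresponds roughly to D = 1), the enclosed mass diverges while the average density falls to zero. A minimal numerical sketch (the constants M0, R0, and D are arbitrary illustrative choices):

```python
import math

# Hierarchical (fractal) mass distribution: M(R) = M0 * (R/R0)**D.
# For D < 3 the enclosed mass grows without bound as R -> infinity,
# while the mean density rho(R) = M(R) / ((4/3) pi R^3) ~ R**(D-3) -> 0.

M0, R0, D = 1.0, 1.0, 1.0  # D = 1: roughly the Fournier d'Albe case

def mass(R):
    return M0 * (R / R0) ** D

def mean_density(R):
    return mass(R) / (4.0 / 3.0 * math.pi * R ** 3)

for R in (1e2, 1e4, 1e6):
    print(f"R = {R:.0e}:  M = {mass(R):.1e},  rho = {mean_density(R):.1e}")

# Mass keeps growing; average density keeps shrinking toward zero.
assert mass(1e6) > mass(1e4) > mass(1e2)
assert mean_density(1e6) < mean_density(1e4) < mean_density(1e2)
```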

Whenever we see a highly ordered phenomenon—a baby, a symphony, a scientific paper, a corporation, a government, a galaxy—we are driven to ask: how does that order arise? One answer, albeit a very abstract one, is that each of these is the product of a variation-selection process. By this I mean any process that begins with many variants and in which most die (or are thrown in the waste-paper basket, or dissipate, or collapse), leaving only the few that are fit (or strong or appealing or stable) enough to survive. The production of organic forms by natural selection is, of course, the most famous example of such a process. It is also now a commonplace that human culture is driven by an analogous process; but, as the above examples suggest, I think that variation-selection processes can be seen everywhere once we know what to look for.

Many others have had this idea: Karl Popper, Donald Campbell, Henry Plotkin, George Dyson are but a few of the names that spring to mind. But none have seen its implications as deeply as George Price, an American living in London who, in 1970, published an equation that describes variation-selection processes of all kinds. The Price equation, as it is now known, is so simple, deep and elegant that it could easily be my candidate explanation. It can be used to describe, inter alia, the tuning of an analogue radio dial, chemical reaction kinetics, the impact of neo-natal mortality on the distribution of human birth weight, or the reason we inhabit this universe out of the multitude that we do not—assuming the others actually exist. But in fact, for me, the real fascination of the Price equation does not just lie in the form that he gave it in 1970, but in an extension that he published two years later.
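In its simplest form the Price equation reads w̄·Δz̄ = Cov(w, z) + E(w·Δz): the change in a population's average trait value splits into a selection term (the covariance of fitness with the trait) and a transmission term. Since it is a mathematical identity, it can be checked on any made-up numbers (the values below are purely illustrative):

```python
# Numerical check of the Price equation:
#   wbar * (zbar' - zbar) = Cov(w, z) + E(w * (z' - z))
# z: parent trait values, w: fitnesses (offspring counts),
# z_prime: mean trait value of each parent's offspring.

z       = [1.0, 2.0, 3.0, 4.0]   # parent traits
w       = [1.0, 2.0, 2.0, 3.0]   # fitnesses
z_prime = [1.1, 1.9, 3.2, 4.3]   # offspring traits (imperfect transmission)

n = len(z)
wbar = sum(w) / n
zbar = sum(z) / n
# Offspring mean trait, weighted by how many offspring each parent had:
zbar_prime = sum(wi * zpi for wi, zpi in zip(w, z_prime)) / sum(w)

lhs = wbar * (zbar_prime - zbar)

cov_wz = sum((wi - wbar) * (zi - zbar) for wi, zi in zip(w, z)) / n
e_wdz  = sum(wi * (zpi - zi) for wi, zi, zpi in zip(w, z, z_prime)) / n
rhs = cov_wz + e_wdz

print(abs(lhs - rhs) < 1e-9)  # True: the two sides agree
```

The covariance term captures selection; the expectation term captures everything that happens in transmission, which is the hook that later let Price stack the equation across levels.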

One of the properties of variation-selection systems is that the selecting can happen at many different levels. Music is clearly the result of a variation-selection process. The composer sits at his piano considering what comes next and chooses one out of the world of possible notes, chords, or phrases that he might play. Look at Beethoven's manuscripts (Op. 47, the Kreutzer Sonata, is a good example)—they are scrawled with his second thoughts. In 1996 Brian Eno wittily made this process explicit when he used SSEYO's Koan software to produce an ever-varying collection of pieces that he called "generative music."

But the music that we actually have on our iPods is, of course, not merely the result of the composer's selective choices, nor even those made by producers, performers and so on, but ours. As individual consumers we, too, are a selective, and hence creative, force. And we do not only act as individuals, but also as members of social groups. Experiments show that if we know what music other people are listening to we are quite ready to subsume (if not totally abandon) our own aesthetic preferences and follow the herd—a phenomenon that explains why it's so hard to predict hits. So composers, consumers and groups of consumers all shape the world of music. Umberto Eco made much the same point as long ago as 1962 (Opera Aperta, "The Open Work"). Of course, as a literary critic Eco could do no more than draw attention to the problem. But George Price solved it.

In 1972 George Price extended his general variation-selection equation to allow for multi-level selection. This form of the equation has been very useful to evolutionary biologists for it has allowed them to see, for example, the relationship between kin and group selection clearly and so put to rest endless controversies stemming from incompatible mathematical formulations. It hasn't yet been applied to cultural evolution, though it surely will. But I think that the extended Price equation is much more important than even that. I think it slices one of the Gordian knots that scientists and philosophers of science have wrestled with forever.

It is the knot of reducibility. Can the behaviour of a system be understood in terms of—that is, be reduced to—the behaviour of its components? This question, in one form or another, pervades science. Systems biologists v. biochemists; cognitive scientists v. neuroscientists; Durkheim v. Bentham; Gould v. Dawkins; Aristotle v. Democritus—the gulf (epistemological, ontological and methodological) between the holist and reductionist stances lies at the root of many of science's greatest disputes. It is also the source of advances, as one stance is abandoned in favour of another. Indeed, holist and reductionist research programmes often exist side by side in uneasy truce (think of any Biology department). But when, as so often, the truce breaks down and open warfare resumes, it is clear that what is needed is a way of rationally partitioning the creative forces operating at different levels.

That is what Price gave. Of course, his equation only applies to variation-selection systems; but, if you think about it, most order-creating systems are variation-selection systems. Returning to our musical world: who really shapes it? Beethoven's epigones tweaking their MIDI files? Adolescents downloading in the solitude of their bedrooms? The massed impulses of the public? This year the UK Christmas No.1 was a ditty, "Wherever You Are," sung by a choir of military wives. How? Why? In 2012? I think that Price's equation can explain. It certainly has some explaining to do.

I hope I will not be drummed out of the corps of Social Science if I confess to the fact that I can't think of an explanation in our field that is both elegant and beautiful. Perhaps deep . . . I guess we are still too young to have explanations of that sort . . . But there is one elegant and deep statement (which, alas, is not quite an "explanation") that comes close to fulfilling the criteria, and that I find very useful as well as beautifully simple.

I refer to the well-known lines Lord Acton wrote in a letter from Naples in 1887, to the effect that "Power tends to corrupt, and absolute power corrupts absolutely." At least one philosopher of science has written that on this sentence an entire science of human beings could be built.

I find that the sentence offers the basis for explaining how a failed painter like Adolf Hitler and a failed seminarian like Joseph Stalin could end up with the blood of millions on their hands; or how the Chinese emperors, the Roman popes, and the French aristocracy failed to resist the allure of power. When a religion or ideology becomes dominant, the lack of controls will result in widening spirals of license, leading to degradation and corruption.

It would be nice if Acton's insight could be developed into a full-fledged explanation before the hegemonies of our time, based on blind faith in science and worship of the Invisible Hand, follow earlier forms of power into the dustbin of history.

In 1969, a Canadian-born educator named Laurence J. Peter pricked the maidenhead of American capitalism. "In a hierarchy," he stated, "every employee tends to rise to his level of incompetence." He called it the Peter Principle, and it appeared in a book of the same name. The little volume, not even 180 pages long, went on to become the year's top seller, with some 200,000 copies going out bookstore doors. It's not hard to see why. Not only did the Peter Principle confirm what everyone suspected—bosses are dolts—but it explained why this had to be so. When a person excels at a job, he gets promoted. And he keeps getting promoted until he attains a job that he's not very good at. Then the promotions stop. He has found his level of incompetence. And there he stays, interminably.

The Peter Principle was a hook with many barbs. It didn't just expose the dunderhead in the corner office. It took the centerpiece of the American dream—the desire to climb the ladder of success—and revealed it to be a recipe for mass mediocrity. Enterprise was an elaborate ruse, a vector through which the incompetent made their affliction universal. But there was more. The principle had, as a New York Times reviewer put it, "cosmic implications." It wasn't long before scientists developed the "Generalized Peter Principle," which went thus: "In evolution, systems tend to develop up to the limit of their adaptive competence." Everything progresses to the point at which it founders. The shape of existence is the shape of failure.
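The mechanism is simple enough to simulate. In the toy model below (entirely illustrative; it bakes in the principle's key assumption that competence in a new job is unrelated to competence in the old one, with threshold and hierarchy depth chosen arbitrarily), promotion-by-merit still strands almost everyone in a job they are bad at:

```python
import random

# Toy Peter Principle simulation. Assumption (the principle's own):
# competence at each level is drawn afresh, independent of the last.
# Employees are promoted while competent; once incompetent, they stay.

random.seed(42)
LEVELS = 5
THRESHOLD = 0.5  # competence below this counts as incompetent

def final_level(levels=LEVELS):
    """Promote an employee until they hit a level they're not good at."""
    for level in range(levels):
        competence = random.random()  # new job, freshly rolled competence
        if competence < THRESHOLD:
            return level, competence  # stuck here, incompetent
    return levels - 1, competence     # competent all the way to the top

employees = [final_level() for _ in range(10_000)]
stuck = [c for lvl, c in employees if c < THRESHOLD]

# Nearly everyone settles at their level of incompetence:
print(f"{len(stuck) / len(employees):.0%} end up in a job "
      f"they're not good at")
```

With five levels and a fifty-fifty chance per job, only about one employee in thirty stays competent to the top, which is the "mass mediocrity" the principle predicts.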

The most memorable explanations strike us as alarmingly obvious. They take commonplace observations—things we've all experienced—and tease the hidden truth out of them. Most of us go through life bumping into trees. It takes a great explainer, like Laurence J. Peter, to tell us we're in a forest.
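The principle's logic is mechanical enough to simulate. Here is a toy model—my illustration, not Peter's; the promotion bar and the assumption that competence at each new job is drawn afresh are both simplifications—showing that nearly every career ends in a job its holder is not good at:

```python
import random

random.seed(42)

PROMOTION_BAR = 0.5   # assumed: you keep getting promoted while you excel
LEVELS = 10           # assumed: height of the hierarchy

def career():
    """Climb until competence at the current level falls below the bar."""
    for level in range(LEVELS):
        competence = random.random()   # competence at each new job is drawn afresh
        if competence < PROMOTION_BAR:
            return level, competence   # promotions stop here, interminably
    return LEVELS, competence          # the rare employee who tops out

finals = [career() for _ in range(10_000)]
avg_final_competence = sum(c for _, c in finals) / len(finals)
print(round(avg_final_competence, 2))   # well below the promotion bar
```

Almost everyone settles at a level where their competence sits below the threshold that would have earned a further promotion—mass mediocrity by construction.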

Psychologist, University of Massachusetts, Amherst; Author, The Cognitive Brain

The Anthropic Principle

Why does the universe appear to us as it does? Why are the observations and measurements of the physical properties of our world compatible with our own existence in space-time? The anthropic principle explains this as a natural consequence of the fact that only in a universe that includes our kind of world could we live to observe, measure, and formulate theories about the universe we live in. But like any good explanation, it highlights other serious questions that are implicit in its own answer. Notice that the anthropic explanation hinges on the fundamental concept of our living existence in space-time. So how are we able to even begin to think about our existence in space-time to pose the anthropic principle? Here I must stop before I violate the stricture of discussing one's own explanation in response to this Edge question.

Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

The Next Level of Fundamental Matter?

A scientific idea may be elegant. It may also be correct. If you must choose, choose correct. But it is always better to have both.

"Elegant" is in the eye of the beholder. "Correct" is decided by the ultimate judge of science: Mother Nature, speaking through the results of experiments. Unlike the standard TV talent contests, neither "Elegant" nor "Correct" can be determined by a vote of the public or by a panel of sneering judges. But the feeling that an idea is elegant often depends on the question that is being asked.

All of matter consists of six types of quarks and six types of leptons, with seemingly random, unexplained mass values spanning more than ten orders of magnitude. No one knows why, within these twelve building blocks, the same pattern repeats itself three times. Some of these objects may also convert into one another, under certain circumstances, at unexplained rates called "mixing angles". The twenty-odd values of these rates and masses seem to be arbitrarily chosen by someone (Nature or God). This is what the standard model of particle physics tells us. Is this elegant? It does not seem so.

But the fact that mountains and snakes, oceans and garbage, people and computers, hamburgers and stars, diamonds and elephants, and everything else in the universe are all made of only a dozen types of fundamental objects is truly mind-boggling. That is exactly what that same standard model says. So is it elegant? Very much so.

My great hope, for the last 32 years (a neat 100,000 years in binary notation), has been that Nature is actually even more elegant. The twelve fundamental quarks and leptons, and their anti-particles, all have electric charges 0, 1/3, 2/3 and 1, or the negative values of the same numbers. Each value repeats exactly three times.

There is no satisfactory explanation for many questions: Why are all charges multiples of 1/3 of the electron charge? Why does each value between 0 and 1 appear on the list, and each the same number of times? Why do they never acquire more than three doses of that quantity? Why does the same entire pattern repeat itself three times? Why do the leptons always have integer charges and the quarks non-integer ones? Why are quark charges and lepton charges related to each other by simple ratios at all?

The fact that mosquitoes, chairs and tomato juice are all electrically neutral results from the unexplained equality of the magnitudes of the electric charges of protons and electrons, which makes atoms neutral. This, in turn, follows from the quark charges standing in precise, simple ratios to the lepton charges. But why doesn't the electron have a charge of, say, 0.8342 times that of the proton? Why do the two have exactly the same charge value?

A very elegant explanation for all of these puzzles would emerge if all quarks and leptons, and therefore all matter in the universe, consisted of only two building blocks, one with an electric charge of 1/3 of the electron charge and one with no electric charge. All combinations of three such objects would then exactly reproduce the known pattern of quarks and leptons, and would neatly answer the questions above. The bizarre list of masses and conversion rates of the quarks and leptons would still remain unexplained, but it would be relegated to a discussion of the dynamical forces binding the two more fundamental objects into a variety of compounds, rather than standing as a God-given or Nature-given list of more than twenty free fundamental parameters.

An elegant explanation? Certainly. Correct? Not necessarily, as far as we now know. But you can never prove that particles are not made of more fundamental objects. Such a structure may always be discovered in the future, without contradicting any currently known data—especially if it is revealed only at smaller distances and higher energies than anything we have probed so far, or if it obeys a strange new set of basic physical rules. Needless to say, such a simple hypothesis must tackle many additional issues; some it handles beautifully, while on others it fails badly. That may be the partly justified reason for the generally negative attitude of most particle physicists toward this simple explanation.

I find the idea of creating the entire universe from just two types of basic building blocks (which I call "Rishons", or primaries) a very elegant and enticing explanation of many observed facts. The book of Genesis starts with a universe that is "formless and void" or, in the original Hebrew, "Tohu Vavohu". What better notation for the two fundamental objects than T (Tohu, formless) and V (Vohu, void)? Each quark or lepton then consists of a different combination of three such Rishons (like TTV or TTT). This may remain forever as a very elegant, but incorrect, idea, or may be revealed one day as the next level of the structure of matter, following the atom, the nucleus, the proton and the quark. Ask Mother Nature. She understands both "Elegant" and "Correct", but she is not yet telling.
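The charge arithmetic can be checked in a few lines. As a sketch—assuming only the assignments given above, a T carrying 1/3 of the electron charge and a V carrying none—every combination of three Rishons reproduces exactly the observed charge pattern of the quarks and leptons:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# The two hypothesized building blocks (in units of the electron charge magnitude):
T = Fraction(1, 3)   # Tohu: charge 1/3
V = Fraction(0)      # Vohu: charge 0

# Every combination of three Rishons, e.g. TTV or TTT
charges = {sum(combo) for combo in combinations_with_replacement([T, V], 3)}
print([str(q) for q in sorted(charges)])       # ['0', '1/3', '2/3', '1']

# Anti-Rishons supply the mirror-image set
anticharges = {-q for q in charges}
print([str(q) for q in sorted(anticharges)])   # ['-1', '-2/3', '-1/3', '0']
```

Four multisets of three Rishons (VVV, TVV, TTV, TTT) give precisely the four charge values 0, 1/3, 2/3 and 1—no more, no less.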

"Inexactness" or the "uncertainty" principle, as formulated by physicist Werner Heisenberg, is an end often seen as the beginning. It reflects T.S. Eliot's observation: "what she gives, gives with such supple confusions that the giving famishes the craving".

In 1927, Heisenberg showed that uncertainty is inherent in quantum mechanics: it is impossible to measure certain pairs of properties—position and momentum—simultaneously with arbitrary precision. In the quantum world, matter can take the form of either particles or waves. Fundamental elements are neither particles nor waves; the two are merely different theoretical ways of picturing the quantum world.

Inexactness marks an end to certainty. Measuring one property more precisely undermines the ability to measure the other. The act of measurement negates elements of our knowledge of the system.
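The tradeoff has a precise quantitative form: the spreads in position and momentum obey σx·σp ≥ ħ/2, with equality for a Gaussian wave packet. As a numerical sketch—my illustration, not Heisenberg's derivation, in units where ħ = 1—one can discretize a Gaussian, Fourier-transform it by brute force, and watch the bound be saturated:

```python
import math, cmath

# Discretize a Gaussian wavepacket on a grid (units with hbar = 1)
N, L = 256, 30.0
dx = L / N
xs = [(n - N // 2) * dx for n in range(N)]

sigma = 1.0
psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]
norm = math.sqrt(sum(p * p for p in psi) * dx)
psi = [p / norm for p in psi]          # normalized wavefunction

# Spread in position (mean is zero by symmetry)
var_x = sum(x * x * p * p for x, p in zip(xs, psi)) * dx
sigma_x = math.sqrt(var_x)

# Momentum-space amplitude by direct Fourier transform
dk = 2 * math.pi / L
ks = [(m - N // 2) * dk for m in range(N)]
phi = [sum(p * cmath.exp(-1j * k * x) for x, p in zip(xs, psi)) * dx
       / math.sqrt(2 * math.pi) for k in ks]

# Spread in momentum
var_p = sum(k * k * abs(c) ** 2 for k, c in zip(ks, phi)) * dk
sigma_p = math.sqrt(var_p)

print(round(sigma_x * sigma_p, 3))   # ~0.5: the Heisenberg bound, saturated
```

Narrowing the packet in position (smaller sigma) fattens it in momentum, and vice versa; the product never drops below 1/2.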

It undermines scientific determinism, implying that human knowledge about the world is always incomplete, uncertain and highly contingent.

Inexactness challenges causality. As Heisenberg observed: "'If we know the present, then we can predict the future', it is not the consequences, but the premise that is false. As a matter of principle we cannot know all determining elements of the present".

Inexactness questions methodology. Experiments can only prove what they are designed to prove. Inexactness is a theory based on the practical constraints of measurement.

Inexactness and quantum mechanics challenge faith as well as concepts of truth and order. They imply a probabilistic world of matter, in which we can know things not with certainty but only as possibilities. They remove the Newtonian absolutes of space and time from any underlying reality. In the quantum world, mechanics is understood in terms of probability, without causal explanation.

Albert Einstein refused to accept that positions in space-time could never be completely known and quantum probabilities did not reflect any underlying causes. He did not reject the theory but the lack of reason for an event. Writing to Max Born, he famously stated: "I, at any rate, am convinced that He [God] does not throw dice." But as Stephen Hawking later remarked in terms that Heisenberg would have recognised: "Not only does God play dice, but…he sometimes throws them where they cannot be seen."

Allusive and subtle, Inexactness owes much of its power to its metaphorical quality, which has allowed it to penetrate fields as diverse as art theory, financial economics and even popular culture.

At one level, Heisenberg's uncertainty principle is taken to mean that the act of measuring something changes what is observed. But at another level, intentionally or unintentionally, Werner Heisenberg is saying something about the nature of the entire system—the absence of absolute truths, the lack of certainty and the limits to our knowledge.

Inexactness is linked with different philosophical constructs. Nineteenth-century Danish philosopher Søren Kierkegaard differentiated between objective truths and subjective truths. Objective truths are filtered and altered by our subjective truths, recalling the interaction between observer and event central to Heisenberg's theorem.

Inexactness is related to linguistic philosophies. In the Tractatus Logico-Philosophicus, Ludwig Wittgenstein anticipates Inexactness, arguing that the structure of language sets the limits of thought and of what can be said meaningfully.

The deep ambiguity of Inexactness manifests itself in other ways: the controversy over the term itself and Heisenberg's personal history.

Heisenberg's principle is variously referred to as Ungenauigkeit (inexactness), Unschärfe (blurredness or lack of clarity) or Unbestimmtheit (indeterminacy). In translation, the ambiguities and differences in meaning are accentuated. The playwright Michael Frayn suggested another term: indeterminability. It was not until the publication of the 1930 English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, that the term uncertainty (Unsicherheit) was used and widely adopted.

In 1941, during the Second World War, Werner Heisenberg met Niels Bohr, the Danish physicist and his former teacher, in occupied Denmark. In Michael Frayn's 1998 play Copenhagen, Margrethe, Bohr's wife, poses the essential question debated in the play: "Why did he [Heisenberg] come to Copenhagen?"

The play repeats their meeting three times, each with different outcomes. As Heisenberg, the character, states: "No one understands my trip to Copenhagen. Time and time again I've explained it. To Bohr himself, and Margrethe. To interrogators and intelligence officers, to journalists and historians. The more I've explained, the deeper the uncertainty has become."

In his 1930 text The Principles of Quantum Mechanics, Paul Dirac, a colleague of Heisenberg, contrasted the Newtonian world and the quantum one: "It has become increasingly evident… that nature works on a different plan. Her fundamental laws do not govern the world as it appears in our mental picture in any direct way, but instead they control a substratum of which we cannot form a mental picture without introducing irrelevancies."

There was a world before Heisenberg and his Inexactness principle. There is a world after Heisenberg. They are the same world but they are different.

Complex life is a product of natural selection, which is driven by competition among replicators. The outcome depends on which replicators best mobilize the energy and materials necessary to copy themselves, and on how rapidly they can make copies which can replicate in turn. The first aspect of the competition may be called survival, metabolism, or somatic effort; the second, replication or reproductive effort. Life at every scale, from RNA and DNA to whole organisms, implements features that execute—and constantly trade off—these two functions.

Among life's tradeoffs is whether to allocate resources (energy, food, risk, time) to pumping out as many offspring as possible and letting them fend for themselves, or to eking out fewer descendants and enhancing the chances of survival and reproduction of each one. The continuum represents the degree of parental investment expended by an organism.

Since parental investment is finite, investing organisms face a second tradeoff, between investing resources in a given offspring and conserving those resources to invest in its existing or potential siblings.

Because of the essential difference between the sexes—females produce fewer but more expensive gametes—the females of most species invest more in offspring than the males, whose investment is often close to zero. Mammalian females in particular have opted for massive investment, starting with internal gestation and lactation. In some species, including Homo sapiens, the males may invest, too, though less than the females.

Natural selection favors the allocation of resources not just from parents to offspring but among genetic kin such as siblings and cousins. Just as a gene that encourages a parent to invest in offspring will be favoring a copy of itself that sits inside those offspring, so a gene that encourages an organism to invest in a brother or cousin will, some proportion of the time, be helping a copy of itself, and will be selected in proportion to the benefits conferred, the costs incurred, and the degree of genetic relatedness.
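That final clause is Hamilton's rule: a gene for helping kin spreads when r × B > C, where r is the degree of relatedness, B the benefit to the relative, and C the cost to the helper. A sketch with illustrative numbers (the benefit and cost values are hypothetical):

```python
# Hamilton's rule: altruism toward kin is selected when r * B > C,
# where r is genetic relatedness, B the benefit to the relative,
# and C the cost to the helper (both in units of expected offspring).

def altruism_favored(r, benefit, cost):
    return r * benefit > cost

relatedness = {"offspring": 0.5, "full sibling": 0.5,
               "half sibling": 0.25, "cousin": 0.125}

# Hypothetical act: costs the helper 1 offspring-equivalent, yields the relative 3
for kin, r in relatedness.items():
    print(kin, altruism_favored(r, benefit=3, cost=1))
```

With these numbers, the same sacrifice that pays off for a child or full sibling (r = 1/2) fails Hamilton's test for a cousin (r = 1/8)—which is why extreme altruism concentrates among close kin.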

I've just reviewed the fundamental features of life on earth (and possibly life everywhere), with the barest mention of contingent facts about our own species: only that we're mammals with male parental investment. I'll add a second: that we're a brainy species that deals with life's conundrums not just with fixed adaptations selected over evolutionary time, but with facultative adaptations (cognition, language, socialization) that we deploy in our lifetimes and whose products we share via culture.

From these deep principles about the nature of the evolutionary process, one can deduce a vast amount about the social life of our species. (Credit where it's due: William Hamilton, George Williams, Robert Trivers, Donald Symons, Richard Alexander, Martin Daly, Margo Wilson.)

• Conflict is a part of the human condition. Notwithstanding religious myths of Eden, romantic images of noble savages, utopian dreams of perfect harmony, and gluey metaphors like attachment, bonding, and cohesion, human life is never free of friction. All societies have some degree of differential prestige and status, inequality of power and wealth, punishment, sexual regulations, sexual jealousy, hostility to other groups, and conflict within the group, including violence, rape, and homicide. Our cognitive and moral obsessions track these conflicts. There are only a small number of plots in the world's fiction, and they are defined by adversaries (often murderous), by tragedies of kinship or love, or both. In the real world, our life stories are largely stories of conflict: the hurts, guilts, and rivalries inflicted by friends, relatives, and competitors.

• The main refuge from this conflict is the family—collections of individuals with an evolutionary interest in one another's flourishing. Thus we find that traditional societies are organized around kinship, and that political leaders, from great emperors to tinpot tyrants, seek to transfer power to their offspring. Extreme forms of altruism, such as donating an organ or making a risky loan, are typically offered to relatives, as are bequests of wealth after death—a major cause of economic inequality. Nepotism constantly threatens social institutions such as religions, governments, and businesses that compete with the instinctive bonds of family.

• Even families are not perfect havens from conflict, because the solidarity from shared genes must contend with competition over parental investment. Parents have to apportion their investment across all their children, born and unborn, with every offspring equally valuable (all else being equal). But while an offspring has an interest in its siblings' welfare, since it shares half its genes with each full sib, it shares all of its genes with itself, so it has a disproportionate interest in its own welfare. The implicit conflict plays itself out throughout the lifespan: in postpartum depression, infanticide, cuteness, weaning, brattiness, tantrums, sibling rivalry, and struggles over inheritance.

• Sex is not entirely a pastime of mutual pleasure between consenting adults. That is because the different minimal parental investment of men and women translates into differences in their ultimate evolutionary interests. Men but not women can multiply their reproductive output with multiple partners. Men are more vulnerable than women to infidelity. Women are more vulnerable than men to desertion. Sex therefore takes place in the shadow of exploitation, illegitimacy, jealousy, spousal abuse, cuckoldry, desertion, harassment, and rape.

• Love is not all you need, and does not make the world go round. Marriage does offer the couple the theoretical possibility of a perfect overlap of genetic interest, and hence an opportunity for the bliss that we associate with romantic love, because their genetic fates are bound together in the same package, namely their children. Unfortunately those interests can diverge because of infidelity, stepchildren, in-laws, or age differences—which are, not coincidentally, major sources of marital strife.

None of this implies that people are robots controlled by their genes, that complex traits are determined by single genes, that people may be morally excused for fighting, raping, or philandering, that people try to have as many babies as possible, or that people are impervious to influences from their culture (to take some of the common misunderstandings of evolutionary explanations). What it does mean is that a large number of recurring forms of human conflict fall out of a small number of features of the process that made life possible.

On April 25, 1953, in a short note to Nature entitled "Molecular structure of nucleic acids", James Watson and Francis Crick announced their deduction of the double-helix structure of DNA. Their remarkable article famously ends with one of the most laconic understatements in all of science: "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." Indeed, the double strands of DNA, each a mirror image of the other, explain how cells can replicate—the "secret of life", as Watson and Crick modestly announced to the startled clientele of The Eagle pub in Cambridge during a break in their labors. But the most remarkable thing about this beautiful and elegant discovery is that it is not remarkable enough to be the topic of this essay.

In fact, their structure held the key to something far more subtle and arguably of even greater fundamental importance than the mechanism for genetic replication. What Watson and Crick had uncovered was a window on the world's oldest fossil, one which sheds light not only on the static structure of modern molecular biology, but also on the dynamical processes leading to the emergence of life on Earth! The unraveling of this story is a fascinating episode in the history of science, because it involves not one, but two beautiful and elegant ideas, each of which is absolutely and unequivocally wrong! And tellingly, each beautiful but flawed idea was conceived by a physicist trying to understand biology as a jigsaw puzzle at the molecular scale. As we proceed into the second decade of this "century of biology", flooded by data but starved of fundamental insight, we will do well to resist the temptation to build biology on naïve, static principles of beauty and elegance that seemingly serve so well in the physical sciences.

Watson and Crick had shown how to pack together the four molecules ("nucleotides"—known concisely by the symbols A,C,G,T) making up DNA into a structure that fitted X-ray measurements of the atomic positions. The precise sequence of the symbols A,C,G,T somehow encoded the composition of proteins, known to be built up from a palette of twenty small molecules known as amino acids. It was the physicist George Gamow who a year later pointed out that the "key-and-lock" relation between the nucleotides of the DNA structure exhibited diamond-shaped holes, and that for geometrical reasons, these diamonds were specified by the three surrounding nucleotides. Gamow enumerated all such holes, and discovered that there were twenty different types of hole—one for each amino acid! Thus was born the idea of a "genetic code": the stupendously complicated biochemical processes going on inside every cell to make proteins could be reduced down to a simple code table, easily able to fit on a tee-shirt, that told you how to read DNA and translate its message into the proteins of living cells.

Gamow's ingenious idea for a genetic code was absolutely wrong, but serendipitously he did get right that triplets of nucleotides ("codons") code for the twenty amino acids of life. But how do they? If you write down all possible triplets of ACGT, you'll find that there are 64 (4x4x4) possible sets of three letters, or codons. Evidently some of these possible codons must code for the same amino acid, or else the cell would use 64 amino acids. If the codons were doublets, there would be only 16 (4x4) possible sets of such letters, and that would not be enough to specify each of the 20 amino acids used by life. So we are stuck with triplets, some of which must code for the same amino acid.

And now there are two mysteries: not only what is the code, but also, why are only 20 amino acids used by life?

These questions were answered in a brilliant article published by Crick, Griffith and Orgel in 1957. Let's follow Crick and friends, as they try to hack the genome, reverse engineering the genetic code by appealing to a logic that one might mischievously call "intelligent design". Let's suppose that we have a section of the message six characters long, some small part of the three billion characters of the human genome:

… ACGGAC …

Knowing about triplet codons, you would parse this as

ACG, GAC,

But, you say, how do we know that the first letter, the A, is actually where the message starts? Maybe the A is the last symbol in the previous codon (the … above), and the message should be parsed as

…A, CGG, AC …

with the last AC the start of the following codon. In other words, if you set out to transmit a code in this way, what would be the smart way to remove this sort of ambiguity? Crick et al. proposed that you construct the code such that it only makes sense when read with the correct starting point. In other words, you make a code in which ACG and GAC stand for meaningful symbols (in this case the amino acids Threonine and Aspartic acid), but CGG would be meaningless. Such a code can be read in only one way, so it requires no punctuation: it is what they called a "comma-less code". The beautiful discovery of Crick et al. is that the maximum number of symbols (i.e., amino acids) that can be encoded in this way with triplets is twenty. And so, they concluded, Nature had been forced to work with only twenty amino acids, and not a much larger set, in order to build proteins, because it was a mathematical impossibility to construct a comma-less genetic code with more amino acids.
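The "magic twenty" rests on a counting argument that is easy to verify mechanically. A minimal sketch—the upper bound only, not Crick, Griffith and Orgel's full derivation, which also shows that twenty is achievable:

```python
from itertools import product

bases = "ACGT"
triplets = [''.join(t) for t in product(bases, repeat=3)]   # all 64 codons

# A homogeneous codon such as AAA is ruled out: in the message AAAAAA,
# a misplaced reading frame still yields AAA, so the code is ambiguous.
homogeneous = [t for t in triplets if len(set(t)) == 1]

# The remaining 60 triplets fall into cyclic classes like {ACG, CGA, GAC}.
# A comma-less code may keep at most one codon per class: if both ACG and
# CGA were codewords, the message ACGACG misread by one position would
# also yield a codeword.
def cyclic_class(t):
    return frozenset({t, t[1:] + t[0], t[2:] + t[:2]})

classes = {cyclic_class(t) for t in triplets if len(set(t)) > 1}
print(len(homogeneous), len(classes))   # 4 and 20: hence at most 20 amino acids
```

Sixty usable triplets, in twenty cyclic classes of three, with at most one codeword per class: the ceiling is exactly twenty.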

From their reasoning, it is not difficult to enumerate the possible comma-less genetic codes. There are 288 of them. But the actual genetic code, conclusively determined during the 1960s, is not one of them!

Beautiful, elegant, and absolutely wrong: key-and-lock mechanisms have been important in understanding snapshots of biological structure, but they are misleading clues in the search to understand the dynamical processes of evolution that have led to the phenomenon of life. Far richer than Gamow and Crick conceived, the genetic code is now thought to have been shaped rapidly over evolutionary time through the cooperative dynamics of early organisms, thus ensuring a code that is remarkably robust to errors in translation and mutations of the genome sequence. Only with such a code could such complex organisms as you and me evolve. And that is the real secret of life.

"What is your favorite, deep, elegant, or beautiful explanation?" That's a tough question for a theoretical physicist; theoretical physics is all about deep, elegant, beautiful explanations; and there are just so many to choose from.

Personally my favorites are explanations that get a lot for a little. In physics that means a simple equation or a very general principle. I have to admit, though, that no equation or principle appeals to me more than Darwinian evolution, with the selfish-gene mechanism thrown in. To me it has what the best physics explanations have: a kind of mathematical inevitability. But there are many people who can explain evolution much better than I, so I will stick to what I know best.

The guiding star for me, as a physicist, has always been Boltzmann's explanation of the second law of thermodynamics: the law that says that entropy never decreases. To the physicists of the late 19th century this was a very serious paradox. Nature is full of irreversible phenomena: things that easily happen but could not possibly happen in reverse order. However, the fundamental laws of physics are completely reversible: any solution of Newton's equations can be run backward and it is still a solution. So if entropy can increase, the laws of physics say it must be able to decrease. But experience says otherwise. For example, if you watch a movie of a nuclear explosion in reverse, you know very well that it's fake. As a rule, things go one way and not the other. Entropy increases.

What Boltzmann realized is that the second law—entropy never decreases—is not a law in the same sense as Newton's law of gravity, or Faraday's law of induction. It's a probabilistic law that has the same status as the following obvious claim: if you flip a coin a million times, you will not get a million heads. It simply won't happen. But is it possible? Yes, it is; it violates no law of physics. Is it likely? Not at all. Boltzmann's formulation of the second law was very similar. Instead of saying entropy does not decrease, he said entropy probably doesn't decrease. But if you wait around long enough in a closed environment, you will eventually see entropy decrease: by accident, particles and dust will come together and form a perfectly assembled bomb. How long? According to Boltzmann's principles the answer is the exponential of the entropy created when the bomb explodes. That is a very long time, a lot longer than the time it takes to flip a million heads in a row.

I'll give you a simple example to see how it is possible for things to be more probable one way than the other, despite both being possible. Imagine a high hill that comes to a narrow point—a needle point—at the top. Now imagine a bowling ball balanced at the top of the hill. A tiny breeze comes along. The ball rolls off the hill and you catch it at the bottom. Next, run it in reverse: the ball leaves your hand, rolls up the hill, and with infinite finesse, comes to the top—and stops! Is it possible? It is. Is it likely? It is not. You would have to have almost perfect precision to get the ball to the top, let alone to have it stop dead-balanced. The same is true with the bomb. If you could reverse every atom and particle with sufficient accuracy, you could make the explosion products reassemble themselves. But a tiny inaccuracy in the motion of just one single particle, and all you would get is more junk.

Here's another example: drop a bit of black ink into a tub of water. The ink spreads out and eventually makes the water grey. Will a tub of grey water ever clear up and produce a small drop of ink? Not impossible, but very unlikely.
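The ink example can be imitated with a toy model—the Ehrenfest urn, my illustration rather than Susskind's: treat the tub as two halves and let one "molecule" hop across the middle at each step. Starting from the concentrated (low-entropy) state, the system relaxes toward the uniform grey state and, in any reasonable time, never wanders back:

```python
import random

random.seed(1)

N = 100      # "ink molecules"
left = N     # start fully concentrated: the low-entropy initial condition
history = []
for step in range(20000):
    # a random molecule hops to the other side of the tub
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    history.append(left)

# The drop relaxes toward the uniform "grey" state...
print(sum(history[-5000:]) / 5000)          # hovers near N/2
# ...and, once grey, it fluctuates but never re-concentrates:
print(min(history[5000:]), max(history[5000:]))
```

Nothing in the hopping rule is irreversible—every move has an equally likely reverse—yet the overwhelming majority of trajectories run from concentrated to mixed, never the other way. The waiting time for a spontaneous return to the all-left state grows exponentially with N, Boltzmann's point in miniature.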

Boltzmann was the first to understand the statistical foundation for the second law, but he was also the first to understand the inadequacy of his own formulation. Suppose that you came upon a tub that had been filled a zillion years ago and had not been disturbed since. You notice the odd fact that it contains a somewhat localized cloud of ink. The first thing you might ask is what will happen next. The answer is that the ink will almost certainly spread out more. But by the same token, if you ask what most likely took place a moment before, the answer would be the same: it was probably more spread out a moment ago than it is now. The most likely explanation would be that the ink-blob is just a momentary fluctuation.

Actually I don't think you would come to that conclusion at all. A much more reasonable explanation is that for reasons unknown, the tub started not-so-long-ago with a concentrated drop of ink, which then spread. Understanding why ink and water go one way becomes a problem of "initial conditions". What set up the concentration of ink in the first place?

The water and ink is an analogy for the question of why entropy increases. It increases because it is most likely that it will increase. But the equations say that it is also most likely that it increases toward the past. To understand why we have this sense of direction, one must ask the same question that Boltzmann did: Why was the entropy very small at the beginning? What created the universe in such a special low-entropy way? That's a cosmological question that we are still very uncertain about.

I began telling you what my favorite explanation is, and I ended up telling you what my favorite unsolved problem is. I apologize for not following the instructions. But that's the way of all good explanations. The better they are, the more questions they raise.

Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Master, Trinity College; Author, From Here to Infinity

Physical Reality Could Be Hugely More Extensive Than the Patch of Space and Time Traditionally Called 'The Universe'

An astonishing concept has entered the mainstream of cosmological thought: physical reality could be hugely more extensive than the patch of space and time traditionally called 'the universe'. A further Copernican demotion may loom ahead. We've learnt that we live in just one planetary system among billions, in one galaxy among billions. But now that's not all. The entire panorama that astronomers can observe could be a tiny part of the aftermath of 'our' big bang, which is itself just one bang among a perhaps-infinite ensemble.

Our cosmic environment could be richly textured, but on scales so vast that our purview is restricted to a tiny fragment; we're not aware of the 'big picture', any more than a plankton whose 'universe' was a liter of water would be aware of the world's topography and biosphere. It is obviously sensible for cosmologists to start off by exploring the simplest models. But there is no more reason to expect simplicity on the grandest scale than in the terrestrial environment—where intricate complexity prevails.

Moreover, string theorists suspect—for reasons quite independent of cosmology—that there may be an immense variety of 'vacuum states'. Were this correct, different 'universes' could be governed by different physics. Some of what we call 'laws of nature' may in this grander perspective be local bylaws, consistent with some overarching theory governing the ensemble, but not uniquely fixed by that theory. More specifically, some aspects may be arbitrary and others not. As an analogy (which I owe to Paul Davies) consider the form of snowflakes. Their ubiquitous six-fold symmetry is a direct consequence of the properties and shape of water molecules. But snowflakes display an immense variety of patterns because each is molded by its distinctive history and micro-environment: how each flake grows is sensitive to the fortuitous temperature and humidity changes during its growth.

If physicists achieved a fundamental theory, it would tell us which aspects of nature were direct consequences of the bedrock theory (just as the symmetrical template of snowflakes is due to the basic structure of a water molecule) and which cosmic numbers are (like the distinctive pattern of a particular snowflake) the outcome of environmental contingencies.

Our domain wouldn't then be just a random one. It would belong to the unusual subset where there was a 'lucky draw' of cosmic numbers conducive to the emergence of complexity and consciousness. Its seemingly designed or fine-tuned features wouldn't be surprising. We may, by the end of this century, be able to say, with confidence, whether we live in a multiverse, and how much variety its constituent 'universes' display. The answer to this question will, I think, determine crucially how we should interpret the 'biofriendly' universe in which we live (and which we share with any aliens with whom we might one day make contact).

It may disappoint some physicists if some of the key numbers they are trying to explain turn out to be mere environmental contingencies—no more 'fundamental' than the parameters of the Earth's orbit round the Sun. But that disappointment would surely be outweighed by the revelation that physical reality was grander and richer than hitherto envisioned.

Of course it has to be Darwin. Nothing else comes close. Evolution by means of natural selection (or indeed any kind of selection—natural or unnatural) provides the most beautiful, elegant explanation in all of science. This three-step algorithm explains, with one simple idea, why we live in a universe full of design. It explains not only why we are here, but why trees, kittens, Urdu, the Bank of England, Chelsea football team and the iPhone are here.

You might wonder why, if this explanation is so simple and powerful, no one thought of it before Darwin and Wallace did, and why even today so many people fail to grasp it. The reason, I think, is that at its heart there seems to be a tautology. It seems as though you are saying nothing when you say that 'things that survive survive' or 'successful ideas are successful'. To turn these tautologies into explanatory power you need to add the context of a limited world in which not everything survives and competition is rife, and also realise that this is an ever-changing world in which the rules of the competition keep shifting.

In that context being successful is fleeting, and now the three-step algorithm can turn tautology into deep and elegant explanation. Copy the survivors many times with slight variations and let them loose in this ever-shifting world, and only those suited to the new conditions will carry on. The world fills with creatures, ideas, institutions, languages, stories, software and machines that have all been designed by the stress of this competition.

This beautiful idea is hard to grasp and I have known many university students who have been taught evolution at school, thought they understood it, but have never really done so. One of the joys of teaching for me was to see that astonished look on a student's face when they suddenly got it. That was heart-warming indeed. But I also call it heart-warming because, unlike some religious folk, when I look out of my window past my computer to the bridge over the river and the trees and cows in the distance, I delight in the simple and elegant competitive process that brought them all into being, and at my own tiny place within it all.

Professor of the Prehistory of Humanity, University of Vienna; Author, The Artificial Ape

Why the Greeks Painted Red People On Black Pots

An explanation of something that seems not to need explaining is good. If it leads to further explanations of things that didn't seem to need explaining, that is better. If it makes a massive stink, as academic vested interests attempt to preserve the status quo in the face of far-reaching implications, it is one of the best. I have chosen Michael Vickers' simple and immensely influential explanation of why the ancient Greeks painted little red figures on their pots.

The 'red figure vase' is an icon of antiquity. The phrase is frequently seen on museum labels, and the question of why the figures were not white, yellow, purple or black—other colours the Greeks could and did produce in pottery slips and glazes—does not seem important. Practically speaking, Greek pottery buyers could mix and match without fear of clashing styles, and the basic scheme allowed the potters to focus on their real passion: narrative storytelling. The black background and red silhouettes make complex scenes—mythological, martial, industrial, domestic, sporting and ambitiously sexual—graphically crisp. Anyone can understand what is going on (for which reason museums often keep their straight, gay, lesbian, group, bestial, and olisbos [dildo-themed] stuff out of public view, in study collections). So clearly there is enough about Greek vases to catch the eye without thinking about the colour scheme.

Michael's brilliance was to take an idea well known to the scholar Vitruvius in the first century BC and apply it in a fresh context. Vitruvius noted that many features of Greek temples that seemed merely decorative were a hangover from earlier practical considerations: little rows of carefully masoned cubes and gaps just under the roof line were in fact a skeuomorph or formal echo of the beam ends and rafters that had projected at that point when the structures were made of wood. Michael argued that Greek pottery was skeuomorphic too, being the cheap substitute for aristocratic precious metal. He argued that the red figures on black imitated gilded figures on silver, while the shapes of the pots, with their sharp carinations and thin, strap-like handles, so easily broken in clay, were direct translations of the silversmith’s craft.

This still seems implausible to many. But to those of us, like myself, working in the wilds of Eastern European Iron Age archaeology, with its ostentatious barbarian grave mounds packed with precious metal luxuries, it made perfect sense. Ancient silver appears black on discovery, and the golden figuration is a strongly contrasting reddish gold. Museums typically used to 'conserve' such vessels, not realising that (as we now know) the sulfidized burnish to the gold was deliberate, and that no Greek would have been seen dead with shiny silver (a style choice of the hated Persians, who flaunted their access to the exotic lemons with which they cleaned it).

For me, an enthusiast from the start, the killer moment was when Michael photographed a set of lekythoi, elegant little cylindrical oil or perfume jars, laid down end to end in decreasing order in an elegant curve. He demonstrated thereby that no lekythos (the only type of major pottery with a white background, and black only for base and lid) had a diameter larger than the largest lamellar cylinder that could be obtained from an elephant tusk. These vessels, he explained, were skeuomorphs of silver-mounted ivory originals.

The implications are not yet settled, but the reputation of ancient Greece as a philosophically-oriented, art-for-art's sake culture can now be contrasted with an image of a world where everyone wanted desperately to emulate the wealthy owners of slave-powered silver mines with their fleets of trade galleys (in my view, the scale of the ancient economy in every dimension—slavery, trade, population levels, social stratification—has been systematically underestimated, and with it the impact of colonialism and emergent social complexity in prehistoric Eurasia).

The irony for the modern art world is that the red figure vases that change hands for vast sums today are not what the Greeks themselves actually valued. Indeed, it is now clear that the illusion that these intrinsically cheap antiquities were the real McCoy was deliberately fostered, through highly selective use of Greek texts by nineteenth century auction houses intent on creating a market.

Life. To be elegant, beautiful, or deep, something must be alive, especially the inanimate or non-organic.

A marble statue may contain more humanity and emotion than any daily tableau because it freezes, focuses, captures, and makes life immortal, as does a great poem, story, or moral. Elegance, beauty, and depth are unintelligible without movement, without life.

Static universes, dead stars, black-on-black paintings with no variation kill the soul. Planets with no orbits, suns lacking spots and flares, galaxies without movement and time may exist, but they are completely alien to a view of the universe based on movement. Our most beautiful equations bend time, describe water flow, move heat and cold, break light into color...

These equations are dreamed up by those who know that food, literature, wine, and art with no spice, no flavor, no layers, no variants kill the heart. Everything we see, do, think, feel, is seen from, and framed in, the context of something being alive. We reflect this fundamental bias in everything we see, do, say, observe, admire, desire, and describe.

Even in chasing absolute zero, we have yet to understand, or see, absolute lifelessness, a world with no movement. It is something completely alien to our experience, our existence. It is something that has never happened in our known history of the universe.

Our fascination and gradual understanding of the quantum world drives us in exactly the opposite direction, towards worlds where different simultaneous forms of movement, life, existence, prevail. We increasingly conceive of, and project, states where beings and objects are multidimensional, multi-layered, simultaneously complex. We are bored with one life, one state, one time, across a few dimensions.

Our primordial need to survive, grow, reproduce, extend is reflected in every aspect of our daily life and our biased understanding of everything. We are transfixed by Live Action on TV, feel far more in a theater than on a screen, implicitly trust the now far more than the edited was. We respect, and give far more rights and privileges to, the living, and the more complex and layered the living is the more interesting and protected it becomes.

As we invent and expand worlds of 1s and 0s, the primary criterion for success and viral replication is whether we can breathe life into their virtual realities. Be it a direct reflection of daily life, à la Facebook, massive multiplayer worlds, or a replicating, self-reinforcing network reproductive logic. The digital world accelerates expansion, survival, virtual life through mutant hyper growth. It becomes a fast-forward caricature of what we have strived to build throughout centuries of breeding and agricultural practices, of building layer upon layer of civilizations in great cities.

Our version of the universe is a story of complexity and concentration of power; our history is one of anthropomorphizing everything we see around us into the context, logic, and structure of life. But this need to perceive and deal with difference, to describe change, movement, and history is so universal, so fundamental, so ingrained in our very being and atoms, that it may be precisely what now prevents us from understanding, conceptualizing, and seeing the majority of the universe: the three-quarters of everything that is made up of dark energy and perhaps powered by dark matter.

Philosopher and Researcher, C.N.R.S. Paris; Author, Text-E: Text in the Age of the Internet

The Turing Machine

"There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy," says Hamlet to his friend Horatio. An elegant way to point to all the unsolvable, intractable questions that haunt our lives... One of the most wonderful demonstrations of all time ends with the same sad conclusion: there are mathematical problems that are simply unsolvable.

In 1936, the British mathematician Alan Turing conceived the simplest and most elegant possible computer ever: A device, as he described it, "with an infinite memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behaviour of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine".
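Turing's device is simple enough to sketch in a few lines of code. The dictionary-based tape and the bit-flipping example machine below are my own illustrative inventions, a minimal sketch of the idea rather than anything from Turing's 1936 paper:

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, scanned_symbol) to (symbol_to_write, move, next_state),
    where move is -1 (left), +1 (right) or 0 (stay). The machine halts when no
    rule applies to the current state and scanned symbol.
    """
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        scanned = cells.get(head, blank)
        if (state, scanned) not in transitions:
            break                   # no rule applies: the machine halts
        write, move, state = transitions[(state, scanned)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A tiny example machine: flip every bit, moving right until the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_turing_machine(flip, "0110"))  # prints "1001"
```

The 'machine' here is just a lookup table plus a head position; Turing's deeper insight was that such a table can itself be written onto the tape and interpreted by another machine, the universal one.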

An abstract machine, conceived by the mind of a genius, to solve an unsolvable problem, the decision problem, that is: for each logical formula in a theory, is it possible to decide, in a finite number of steps, whether the formula is valid in that theory? Well, Turing showed that it is not possible. The decision problem, or Entscheidungsproblem, was well known to mathematicians: posed by David Hilbert and Wilhelm Ackermann in 1928, it grew out of the program of foundational questions that Hilbert had announced in 1900, which set much of the agenda for 20th-century mathematical research.

The decision problem asks whether there is a mechanical process, one realizable in a finite number of steps, to decide whether a formula is valid or not, or whether a function is computable or not. Turing started by asking himself: "What does a mechanical process mean?" and his answer was that a mechanical process is a process that can be realized by a machine. Obvious, isn't it?

He then showed how to design a Turing machine for each possible formula of a first-order logic system, or, equivalently, for each recursive function over the natural numbers, the two notions being linked through Gödel's work on recursive functions. Indeed, with his simple definition, we can describe a function as a string of 0s and 1s on the tape, give the machine a list of elementary instructions (write a symbol, move left, move right, stop), and let it compute the value of the function and then halt.

Turing was also able to design a Universal Turing Machine: a machine that can take as input the description of any other Turing machine and simulate its behaviour. But no machine, not even the universal one, can decide in general whether an arbitrary machine will ever halt: feed a supposed 'halting tester' a suitably prepared description of itself and you arrive at a contradiction. That's it. The mother of all computers, the soul of the Digital Age, was designed to show that not everything can be reduced to a Turing machine. There are more things in heaven and earth.

There are many persuasive arguments from evolutionary biology explaining why various species, notably Homo sapiens, have adopted a lifestyle in which males and females pair up long-term. But my topic here is not one of those explanations. Instead, it is the explanation for why we are close (or so I claim!)—far closer than most people, even most readers of Edge, appreciate—to the greatest societal, as opposed to technological, advance in the history of civilisation.

In 1971, John Rawls coined the term "reflective equilibrium" to denote "a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments". In practical terms, reflective equilibrium is about how we identify and resolve logical inconsistencies in our prevailing moral compass. Examples such as the rejection of slavery and of innumerable "isms" (sexism, ageism, etc.) are quite clear: the arguments that worked best were those highlighting the hypocrisy of maintaining acceptance of existing attitudes in the face of already-established contrasting attitudes in matters that were indisputably analogous.

Reflective equilibrium gets my vote for the most elegant and beautiful explanation, because of its immense breadth of applicability and also its lack of dependence on other controversial positions. Most importantly, it rises above the question of cognitivism: the debate over whether there is any such thing as objective morality. Cognitivists assert that certain acts are inherently good or bad irrespective of the society within which they do or do not occur, very much as the laws of physics are (generally believed to be...) independent of those observing their impact on events. Non-cognitivists claim, by contrast, that no moral position is universal, and that each (hypothetical) society makes its own moral rules unfettered, such that even acts that we would view as unequivocally immoral could be morally unobjectionable in some other culture. But when we make actual decisions concerning whether such-and-such a view is morally acceptable (or morally entailed), reflective equilibrium frees us from the need to take a view on the cognitivism question. In a nutshell, it explains why we don't need to know whether morality is objective.

I highlight monogamy here because, of the many topics to which reflective equilibrium can be usefully applied, Western society's position on monogamy is at the most critical juncture. Monogamy today compares with heterosexuality not too many decades ago, or tolerance of slavery 150 years ago: quite a lot of people depart from it, a much smaller minority actively advocate the acceptance of departure from it, but most people advocate it and disparage the minority view. Why is this the "critical juncture"? Because it is the point at which enlightened thought-leaders can make the greatest difference to the speed with which the transition to the morally inescapable position occurs.

First let me clarify that I refer here specifically to sex, and not (necessarily, anyway) to deeper emotional attachments. Whatever one's views or predilections concerning the acceptability or desirability of having deep emotional attachments with more than one partner, it remains that fulfillment of the responsibilities entailed in such attachments tends to take a significant proportion of the 24 hours to which every person's day is restricted. The complications arising from this inconvenient truth are a topic for another day. In this essay I focus on liaisons casual enough (whether or not repeated) that availability of time is not a major issue.

An argument from reflective equilibrium always begins with identification of the conventional views with which one then makes a parallel. In this case it's all about jealousy and possessiveness. Consider chess, or drinking. These are activities that are rarely solitary pursuits; one does them with someone else; in some such cases (chess more often than drinking!), with just one other person at a time. Now: is it generally considered reasonable for a friend with whom one sometimes plays chess to feel aggrieved when one plays chess with someone else? Indeed, if someone exhibited possessiveness in such a matter—displeasure at one's chess partner having other chess partners—would they not be viewed as unacceptably overbearing and egotistical?

My claim is probably obvious by now. It is simply that there is nothing about sex that morally distinguishes it from other activities that are performed by two (or more) people collectively. In a world no longer driven by reproductive efficiency, and on the presumption that all parties are taking appropriate precautions in relation to pregnancy and disease, sex is overwhelmingly a recreational activity. What, then, can morally distinguish it from other recreational activities? Once we see that nothing does, reflective equilibrium thus forces us to one of two positions: either we start to resent the temerity of our regular chess opponents in playing others, or we cease to resent the equivalent in sex.

My prediction that monogamy's end is extremely nigh arises from my reference to reproductive efficiency above. Every single society in history has seen a precipitous reduction in fertility following its achievement of a level of prosperity that allowed reasonable levels of female education and emancipation. Monogamy is virtually mandated when a woman spends her entire adult life with young children underfoot, because continuous financial support cannot otherwise be ensured. But when it is customary for those of both sexes to be financially independent, this logic collapses. This is especially so for the increasing proportion of men and women who are choosing to delay having any children until middle age (if then).

I appreciate that rapid change in a society's moral compass needs more than the removal of influences maintaining the status quo: it also needs an active impetus. What is that impetus in this case? It is simply the pain and suffering that arises when the possessiveness and jealousy inherent in the monogamous mindset butt heads with the asynchronous shifts of affection and aspiration equally inherent in the response of human beings to their evolving social interactions. Gratuitous suffering is anathema to all people. Thus, the realisation that this particular category of suffering is indeed wholly gratuitous is a development that has not only irresistible moral force (via the principle of reflective equilibrium) but also immense emotional utility.

Having discovered much too late in life that the many things I had taken for granted as pre-existing conditions of the universe were, in fact, creations and ideas of people, I found Baudrillard's "precession of the simulacra" to be an immensely valuable way of understanding just how disconnected from anything to do with reality we can become.

The main idea is that there's the real world, there are the maps we use to describe that world, and then there is all this other activity that occurs on the map—sometimes with little regard for the territory it is supposed to represent. There's the real world, there's the representation of the world, and there's the mistaking of this simulation for reality.

This idea came back into vogue when virtual reality was hitting the scene, and writers called up Baudrillard as if we needed to be warned about escaping into our virtual worlds and leaving the brick and mortar, flesh and blood one behind. But I never saw computer simulations as so very dangerous. If anything, the obvious fakeness of computer simulations—from arcade games to Facebook—not only kept us aware of their simulated nature, but called into question the reality of everything else.

So there's the land—this real stuff we walk around on. Then there's territory—the maps and lines we use to define the land. But then there are wars fought over where those map lines are drawn.

The levels can keep building on one another, bringing people to further abstractions and disconnection from the real world. Land becomes territory; territory then becomes property that is owned. Property itself can be represented by a deed, and the deed can be mortgaged. The mortgage is itself an investment, that can be bet against with a derivative, which can be secured with a credit default swap.

The computer algorithm trading credit default swaps—as well as the programmers trying to follow that algorithm's actions in order to devise competing algorithms—this level of interaction is real. And, financially speaking, it has more influence over who gets to live in your house than almost any other factor. A credit default swap crisis can bankrupt a nation as big as the United States—without changing anything about the real land it refers to.

Or take money: there's the thing of value—the labor, the chicken, the shoe. Then there's the thing we use to represent that value—say gold, grain receipts, or gold certificates. But once we get so used to using those receipts and notes as the equivalent of a thing with value, we can go one step further: the Federal Reserve note, or "fiat" currency, which has no connection to gold, grain, or the labor, chickens and shoes. Three main steps: there's value, there's the representation of value, and then there's the disconnection from what has value.

But that last disconnection is the important one—the sad one, in many respects. Because that's the moment that we forget where things came from—when we forget what they represent. The simulation is put forth as reality. The invented landscape is naturalized, and then mistaken for nature.

And that is when we become particularly vulnerable to illusion, abuse, and fantasy. For once we're living in a world of created symbols and simulations, whoever has control of the map has control of our reality.

Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

A Beautiful Explanation For Why The Human Mind May Seem To Have An Elegant Explanation Even If It Doesn't

On reading "The Origin of Species," Erasmus Darwin wrote to his brother Charles in 1859: "The a priori reasoning is so entirely satisfactory to me that if the facts won't fit in, why so much the worse for the facts." Some of the facts—such as Kelvin's calculation of the age of the Earth—looked awkward for Darwin's theory at the time. But the theory of natural selection was too beautiful to be wrong. His brother was sure the troublesome facts would have to change. And so they did.

But it doesn't always work that way. Elegance can be misleading. Consider a simple mathematical example. Given the sequence 2, 4, 6, 8, what rule would you guess is operating to generate the series? There are several theoretically possible answers. One would be the simple rule: take the previous number, x, and compute x + 2. But equally valid for these data would be the much more complicated rule, take the previous number, x, and compute

-(1/44)x³ + (3/11)x² + 34/11

For the sequence as given so far, the first rule is clearly the more elegant. And if someone, let's call her Tracey, were to maintain that, since both rules work equally well, she was going to make a personal choice of the second, we would surely think she was being deliberately contrarian and anti-elegant. Tracey Emin not Michelangelo.

But suppose now Tracey were to say: "I bet if we look a little further we shall find I was right all along." And suppose, when we do look further, we find to our surprise that the next number in the sequence is not 10, but 8.91 and the next after that not 12 but 8.67, i.e. the sequence we actually discover goes 2, 4, 6, 8, 8.91, 8.67. Then what had previously seemed the better rule would no longer fit the facts at all. Yet—surprise, surprise—the second rule would still fit nicely. In this case we should be forced to concede that Tracey's anti-elegance had won the day.
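Tracey's rule really does generate this teasing sequence. A short Python sketch (my illustration, not part of the original argument) iterates the rule starting from 2:

```python
def tracey_rule(x):
    # Tracey's 'anti-elegant' rule: -(1/44)x^3 + (3/11)x^2 + 34/11
    return -x**3 / 44 + 3 * x**2 / 11 + 34 / 11

x = 2.0
seq = [x]
for _ in range(5):
    x = tracey_rule(x)
    seq.append(x)

print([round(v, 2) for v in seq])  # [2.0, 4.0, 6.0, 8.0, 8.91, 8.67]
```

The cubic agrees with the simple rule x + 2 on the first four terms, then veers off exactly as described: the fifth term is 98/11 ≈ 8.91, not 10.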

How often does the real world tease us by seeming to be simpler than it really is? A famous case is Francis Crick's 1957 theory of how DNA passes on instructions for protein synthesis using a "comma-free code". As Crick wrote many years later, "Naturally [we] were excited by the idea of a comma-free code. It seemed so pretty, almost elegant. You fed in the magic numbers 4 (the 4 bases) and 3 (the triplet) and out came the magic number 20, the number of amino acids." But alas this lovely theory could not be squared with experimental facts. The truth was altogether less elegant.

A tease? I'm not of course suggesting that Nature was deliberately stringing Crick along. As Einstein said, God is subtle but he is not malicious. In this case the failure of the most elegant explanation to be the true one is presumably just a matter of bad luck. And, assuming this doesn't happen often, perhaps in general we can still expect truth and beauty to go together (as no doubt many of the other answers to this Edge Question will prove).

However I believe there is one class of cases where the elegance of an untrue theory may not be luck at all; where indeed complex phenomena have actually been designed to masquerade as simple ones—or at any rate to masquerade as such to human beings. And such cases will arise just when, in the course of evolution, it has been to the biological advantage of humans to see certain things in a particularly simple way. The designer of the pseudo-elegant explananda has not been God, but natural selection.

Here is my favorite example. Individual humans appear to other humans to be controlled by the remarkable structures we call "minds". But the surprising and wonderful thing is that human "minds" are quite easy for others to read. We've all been doing it since we were babies, using the "folk theory" known to psychologists as "Theory of Mind" (or sometimes as "belief-desire psychology"). Theory of Mind is simple and elegant, and can be understood by a two-year-old. There's no question it provides a highly effective way of explaining the way people behave. And this skill at mind-reading has been essential to human survival in social groups. Yet the fact is Theory of Mind could never have worked so well unless natural selection had shaped human brains to be able to read—and to be readable by—one another in this way. Which is where the explanatory sleight of hand comes in. For, as an explanation of how the brain works, Theory of Mind just doesn't add up. It's a purpose-built, over-simplified, deep, elegant myth. (A myth whose inadequacy may not become apparent until those "extra numbers" are added by madness or by brain damage, contingencies which selection has not allowed for.)

Anthropologist, National Center for Scientific Research, Paris; Author, Talking to the Enemy

The Power of Absurdity

The notion of a transcendent force that moves the universe or history or determines what is right and good—and whose existence is fundamentally beyond reason and immune to logical or empirical disproof—is the simplest, most elegant, and most scientifically baffling phenomenon I know of. Its power and absurdity perturb mightily, and merit careful scientific scrutiny. In an age where many of the most volatile and seemingly intractable conflicts stem from sacred causes, scientific understanding of how best to deal with the subject has never been more critical.

Call it love of Group or God, or devotion to an Idea or Cause, it matters little in the end. This is "the privilege of absurdity; to which no living creature is subject, but man only" of which Hobbes wrote in Leviathan. In The Descent of Man, Darwin cast it as the virtue of "morality," with which winning tribes are better endowed in history's spiraling competition for survival and dominance. Unlike other creatures, humans define the groups to which they belong in abstract terms. Often they strive to achieve a lasting intellectual and emotional bonding with anonymous others, and seek to heroically kill and die, not in order to preserve their own lives or those of people they know, but for the sake of an idea—the conception they have formed of themselves, of "who we are."

Sacred and religious ideas are culturally universal, yet content varies markedly across cultures. [Religious conceptions and sacred, or transcendental, values often go together but aren't quite the same: sacred values—like dignity or honor, love of country or racial purity, devotion to jihad or humanity, or even sometimes to science—share with core religious values immunity to cost-benefit considerations and appear to drive actions independently, or all out of proportion, to likely prospects of success]. Sacred values mark the moral boundaries of societies and determine which material transactions are permissible. Material transgressions of the sacred are taboo: we consider people who sell their children or sell out our country to be sociopaths; other societies consider adultery or disregard of the poor equally immoral, but not necessarily selling children or women or denying freedom of expression.

Sacred values usually become terribly relevant only when challenged, much as food takes on overwhelming value in people's lives only when denied. People in one cultural milieu are often unaware of what's sacred for another; or, in becoming aware through conflict, find immoral and absurd the other side's values (pro-life vs. pro-choice). Such conflicts cannot be wholly reduced to secular calculations of interest but must be dealt with on their own terms, a logic different from the marketplace or realpolitik. For example, cross-cultural evidence indicates that prospects of crippling economic burdens and huge numbers of deaths don't necessarily sway people from choosing whether or not to go to war, or to opt for revolution or resistance. As Darwin noted, the virtuous and brave do what is right, regardless of consequences, as a moral imperative. Indeed, we have suggestive neuroimaging evidence that people process sacred values in parts of the brain that are devoted to rule-bound behavior rather than utilitarian calculations (think: "Ten Commandments" or "Bill of Rights").

There is an apparent paradox that underlies the formation of large-scale human societies. The religious and ideological rise of civilizations—of larger and larger agglomerations of genetic strangers, including today's nations, transnational movements, and other "imagined communities" of fictive kin—seems to depend upon what Kierkegaard deemed this "power of the preposterous" (as in Abraham's willingness to slit the throat of his most beloved son to show commitment to an invisible, no-name deity, thus making him the world's greatest culture hero rather than a child abuser, would-be murderer or psychotic). Humankind's strongest social bonds and actions, including the capacity for cooperation and forgiveness, and for killing and allowing oneself to be killed, are born of commitment to causes and courses of action that are "ineffable," that is, fundamentally immune to logical assessment for consistency and to empirical evaluation for costs and consequences. The more materially inexplicable one's devotion and commitment to a sacred cause—that is, the more absurd—the greater the trust others place in it, and the more that trust generates commitment on their part.

To be sure, thinkers of all persuasions have tried to explain the paradox, most of them ideologically motivated and simple-minded: often to show that religion is good or, more usually, that religion is unreasonably bad. If anything, evolution teaches that humans are creatures of passion, and that reason itself is primarily aimed at social victory and political persuasion rather than philosophical or scientific truth. To insist that persistent rationality is the best means and hope for victory over enduring irrationality—that a logical harnessing of facts could someday do away with the sacred and so end conflict—defies all that science teaches about our passion-driven nature. Throughout the history of our species, as in the most intractable conflicts and greatest collective expressions of joy today, utilitarian logic is a pale prospect to replace the sacred.

For Alfred Russel Wallace, moral behavior (along with mathematics, music and art) was evidence that humans had not evolved through natural selection alone:

"The special faculties we have been discussing clearly point to the existence in man of something which he has not derived from his animal progenitors — something which we may best refer to as being of a spiritual essence… beyond all explanation by matter, its laws and forces."

Of course, this didn't sit well with Darwin: "I hope you have not murdered too completely your own and my child," he lamented in a letter to Wallace. But Darwin himself produced no causal account of how humans became moral animals, other than to say that because our ancestors were so physically weak, only group strength could get them through. Religion and the sacred, banned so long from reasoned inquiry by ideological bias of all persuasions—perhaps because the subject is so close to who we want or don't want to be—remain a vast, tangled and largely unexplored domain for science, however simple and elegant they seem to most people everywhere in everyday life.

Singer Johnny Cash explained in his song "Farmer's Almanac" that God gave us the darkness so we could see the stars. One doesn't have to be religious to gaze upward in awe at the incredible lamp of stars, and, in fact, thoughts about the night sky and stars have led many scientists over the centuries to ponder the deep question, "Why is the sky dark at night?" In 1823, the German astronomer Heinrich Wilhelm Olbers presented a paper that discussed this question, and the problem subsequently became known as Olbers' Paradox. Here is the puzzle. If the universe is infinite, as you follow a line of sight in any direction, that line must eventually intercept a star. This appears to imply that the night sky should be dazzlingly bright with starlight. Your first thought might be that the stars are far away and that their light dissipates as it travels such great distances. Starlight does dim as it travels, falling off as the square of the distance from the observer. However, the volume of the universe, and hence the total number of stars, grows as the cube of the distance. Thus, even though the stars become dimmer the farther away they are, this dimming is exactly compensated by the increased number of stars. If we lived in an infinite and static visible universe, the night sky should indeed be very bright.
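
The shell-counting argument above can be checked numerically. In this sketch (illustrative only, with arbitrary units and a uniform star density assumed), each spherical shell of stars contributes the same flux, so the total sky brightness grows without bound as more distant shells are included:

```python
import math

def sky_brightness(max_radius, shell_thickness=1.0, star_density=1.0, luminosity=1.0):
    """Sum the flux received from concentric shells of stars.

    Stars per shell ~ density * 4*pi*r^2 * dr (grows as r^2),
    flux per star  ~ L / (4*pi*r^2)          (falls as 1/r^2),
    so each shell contributes the same flux: density * L * dr.
    """
    total = 0.0
    r = shell_thickness
    while r <= max_radius:
        stars_in_shell = star_density * 4 * math.pi * r**2 * shell_thickness
        flux_per_star = luminosity / (4 * math.pi * r**2)
        total += stars_in_shell * flux_per_star
        r += shell_thickness
    return total

# Doubling the radius of the observed region doubles the brightness:
print(sky_brightness(100))   # ≈ 100 (in these units)
print(sky_brightness(200))   # ≈ 200
```

In an infinite static universe the sum never stops growing, which is exactly the paradox.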

Here's the solution to Olbers' Paradox. We do not live in an infinite and static visible universe. Our visible universe has a finite age and is expanding. According to the Big Bang theory, our universe evolved from an extremely dense and hot state, and space has been expanding ever since. In particular, the Big Bang occurred 13.7 billion years ago, and today most galaxies are still flying apart from one another. Because only 13.7 billion years have elapsed since the Big Bang, we can only observe stars out to a finite distance, which means that the number of stars we can observe is finite. Because light travels at a finite speed, there are portions of the universe we never see; light from very distant stars has not had time to reach the Earth. If this is difficult to visualize, imagine standing in Kansas while two locomotives, which set out from California and New York only an hour ago, race toward you. Obviously you will not yet see them. The locomotives are metaphors for the unseen light beams racing toward us today from far-away stars. Interestingly, one of the first people to suggest this kind of resolution to Olbers' Paradox was the writer Edgar Allan Poe.

Another factor to consider is that the expansion of the universe itself acts to darken the night sky, because starlight spreads into an ever vaster space. Also, the Doppler Effect causes a redshift in the wavelengths of light emitted from the rapidly receding stars, stretching their light toward longer, lower-energy wavelengths and dimming the sky further. This effect, named after the Austrian physicist Christian Doppler, refers to the change in frequency of a wave as its source moves relative to the observer. For example, if a car is moving while its horn is sounding, the frequency of the sound you hear is higher (compared to the actual emitted frequency) as the car approaches you and lower as it moves away. Although we often think of the Doppler Effect with respect to sound, it applies to all waves, including light.
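
The car-horn example can be made concrete with the standard non-relativistic formula for a moving source and a stationary observer (the numbers here are made up for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 °C

def observed_frequency(f_source, source_speed, receding=True):
    """Doppler shift for a moving source heard by a stationary observer."""
    sign = 1.0 if receding else -1.0
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * source_speed)

# A 440 Hz horn on a car moving at 30 m/s:
print(observed_frequency(440.0, 30.0, receding=True))    # ≈ 404.6 Hz, lower pitch
print(observed_frequency(440.0, 30.0, receding=False))   # ≈ 482.2 Hz, higher pitch
```

For light from receding stars the same qualitative effect holds: the received wavelengths are stretched toward the red.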

Life as we know it would not have evolved without this darkness, because the night sky would otherwise have been extremely bright and hot. Olbers was not the first to pose the problem, and the physicist Lord Kelvin provided a satisfactory resolution of the paradox in 1901. Note also that although the sky might be very bright in an infinite visible universe, it would not be infinitely bright: beyond a certain distance, the surfaces of the stars would appear to touch and form a "shield." Finally, the stars themselves have finite lifetimes, which also bears on the resolution of Olbers' Paradox. The next time we gaze up at the night sky, we can be thankful that we are not blinded by the light, and that we live in a universe in which darkness competes with the stars.

Psychologist; President, The Cooper Union for the Advancement of Science and Art

Beauty and Tragedy In the Mathematics of Music

Some things are too good to be true. Others are too good not to be true. The elegant correspondence between musical consonance and simple mathematical ratios is too good not to be true. The infinitely creative system of musical harmony—based on a small set of canonical intervals and chords—arises out of this correspondence.

And yet the system has a flaw—a rip or tear in the musico-mathematical universe.

The most consonant musical interval is the octave, so much so that two tones an octave apart bear the same letter name, for example, A. The next most consonant interval is the fifth (for example, A to E). If you pluck the A string on a guitar and then pluck it again while placing your finger halfway up the fingerboard (allowing only half the string to vibrate and doubling the frequency), the pitch goes up by an octave to A. If instead you place your finger one-third the way up the fingerboard (allowing only the remaining two-thirds of the string to vibrate and increasing the frequency by a factor of 3/2) the pitch goes up by a fifth to E.

On a full-size piano keyboard, if you play the lowest note, A, and then go up by twelve successive fifths, you reach A again—which happens to be seven octaves higher than the lowest A. Calculating by fifths, the highest A is (3/2)^12 times the frequency of the lowest A. Calculating by octaves, the highest A is 2^7 times the frequency of the lowest A.

It should therefore be the case that:

(3/2)^12 = 2^7

Tragically, this is not the case: (3/2)^12 is 129.7463, while 2^7 is 128. Close but unequal. If you tune by octaves, the fifths are out of tune, and vice versa.
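
The mismatch takes only a couple of lines to verify, and the ratio of the two sides is the comma itself:

```python
twelve_fifths = (3/2) ** 12   # stacking twelve perfect fifths
seven_octaves = 2 ** 7        # stacking seven octaves

print(twelve_fifths)                  # 129.746337890625
print(seven_octaves)                  # 128
print(twelve_fifths / seven_octaves)  # ≈ 1.01364, the Pythagorean comma
```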

This discrepancy—a variant of the Pythagorean comma—shatters an edifice that rests on the beautiful principle that ratio simplicity underpins musical consonance. While each interval is most purely tuned using these simple ratios (the just tuning system: 2/1 = octave, 3/2 = perfect fifth, 4/3 = perfect fourth, 5/4 = major third and 6/5 = minor third), the mathematics falls apart when these intervals interact in music.

The Pythagorean comma has haunted string players for centuries, requiring annoying adjustments. On the violin, some notes must be played slightly sharper and others flatter to dodge dissonance. The comma held back the development of keyboard instruments, on which notes cannot be adjusted while playing, thus limiting a composition to the musical key for which the instrument was tuned.

The solution to this problem is to forgo mathematical purity by fudging the tuning. This discovery—which emerged in various forms over a period of centuries—unleashed new creative possibilities and inspired J.S. Bach to write The Well-Tempered Clavier, which traverses all major and minor keys without discriminating on the basis of dissonance.

Today, several tuning systems achieve this end. In equal-tempered tuning (used in some electronic synthesizers), the octave is divided into twelve equal semitones on a logarithmic scale. In this system, each successive semitone is 2^(1/12) times the frequency of the previous one. Thus a fifth is 2^(7/12) or 1.498 instead of 3/2 or 1.5; the perfect fourth is 2^(5/12) or 1.3348 instead of 4/3 or 1.3333; the major third is 2^(4/12) or 1.2599 instead of 5/4 or 1.25; and the minor third is 2^(3/12) or 1.1892 instead of 6/5 or 1.2.
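
The tempered values quoted above can be generated mechanically. This short sketch (plain arithmetic, nothing assumed beyond the ratios named in the text) compares each equal-tempered interval with its just counterpart:

```python
just_ratios = {  # interval name -> (semitones, just ratio)
    "minor third": (3, 6/5),
    "major third": (4, 5/4),
    "perfect fourth": (5, 4/3),
    "perfect fifth": (7, 3/2),
    "octave": (12, 2/1),
}

for name, (semitones, just) in just_ratios.items():
    # Each semitone multiplies frequency by 2^(1/12).
    tempered = 2 ** (semitones / 12)
    print(f"{name:14s}  tempered {tempered:.4f}  just {just:.4f}  error {tempered - just:+.4f}")
```

Only the octave comes out exact (2^(12/12) = 2); all the error is sprinkled among the smaller intervals, which is precisely the fudge described below.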

By anchoring the octaves and sprinkling the error elsewhere, consonance is compromised slightly but never disturbingly. The edifice of musical harmony is preserved by systematically fudging the very mathematical system upon which it is built. An elegant way to fix inelegance in an elegant musico-mathematical system.

Theoretical Physicist, Caltech; Author, The Particle at the End of the Universe and From Eternity to Here: The Quest for the Ultimate Theory of Time

Einstein Explains Why Gravity Is Universal

The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the ground. But a thought experiment by Galileo pointed out a flaw. Imagine tying the piece of paper to the stone. Together, the new system is heavier than either of its components, and so should fall faster. But in reality the piece of paper slows the descent of the stone, so the combined system falls more slowly than the stone alone: a contradiction.

Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren't for the interference of air resistance. Apollo 15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.

Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way, negative charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.

Thinking about this problem led Albert Einstein to what he called "the happiest thought of my life." Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside world. If the ship were far away from any stars or planets, everything inside would be in free fall; there would be no gravitational field to push anything around. But put the ship in orbit around a massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away from any other one. Sticking just to what is observed inside the spaceship, there's no way we could detect the existence of gravity.

Einstein, in his genius, realized the profound implication of this situation: if gravity affects everything equally, it's not right to think of gravity as a "force" at all. Rather, gravity is a feature of spacetime itself, through which all objects move. In particular, gravity is the curvature of spacetime. The space and time through which we move are not fixed and absolute, as Newton would have had it; they bend and stretch due to the influence of matter and energy. In response, objects are pushed in different directions by spacetime's curvature, a phenomenon we call "gravity." Using a combination of intimidating mathematics and unparalleled physical intuition, Einstein was able to explain a puzzle that had been unsolved since Galileo's time.

Many would say the opposite, elegance = simplicity. They have (classical) physics envy—for smooth, linear physics, and describably delicious four-letter words, like "F=ma". But modern science has moved on, embracing the complex. Occam now uses a web-enabled, fractal e-razor. Even in mathematics, stripped of awkward realities of non-ideal gases, turbulence, and non-spherical cows, simple statements about integers like Fermat's a^n + b^n = c^n and wrangling maps with four colors take many years and pages (occasionally computers) to prove.

The question is not "What is our favorite elegant explanation?", but "What should it be?" We are capable of changing not only our mind, but the whole fabric of human nature. As we engineer, we recurse—successively approximating ourselves as an increasingly survivable force of nature. If so, what will we ultimately admire? Our evolutionary baggage served our ancestors well, but could kill our descendants. Faced with modern foods, our frugal metabolisms feed a diabetes epidemic. Our love of "greedy algorithms" leads to exhausted resources. Our too-easy switching from rationality to blind-faith or fear-based decision making can be manipulated politically to drive conspicuous consumption. (Think Easter Island's 163 km² of devastation scaled to Earth Island at 510 million km².) "Humans" someday may be born with bug-fixes for dozens of current cognitive biases, as well as intuitive understanding of, and motivation to manipulate, quantum weirdness, dimensions beyond three, super-rare events, global economics, etc. Agricultural and cultural monocultures are evolutionarily bankrupt. Evolution was only briefly focused on surviving in a sterile world of harsh physics, but ever since has focused on life competing with itself. Elegant explanations are those that predict the future farther and better. Our explanations will help us dodge asteroids, solar red-giant flares, and even close encounters with the Andromeda galaxy. But most of all, we will diversify to deal with our own ever-increasing complexity.

How do we separate fact from fiction? We are frequently struck by seemingly unusual coincidences. Imagine seeing an inscription describing a fish in your morning reading; then at lunch you are served fish and the conversation turns to "April fish" (or April fools). That afternoon a work associate shows you several pictures of fish, and in the evening you are presented with an embroidery of fish-like sea monsters. The next morning a colleague tells you she dreamed of fish. This might be starting to seem spooky. It turns out that we shouldn't find it surprising, and the reason why has a long history, one culminating in the unintuitive insight of building randomness directly into our understanding of nature, through the probability distribution.

Chance as Ignorance

Tolstoy was skeptical of our understanding of chance. He gave an example of a flock of sheep where one had been chosen for slaughter. This one sheep was given extra food separately from the others and Tolstoy imagined that the flock, with no knowledge of what was coming, must find the continually fattening sheep extraordinary—something he thought the sheep would assign to "chance" due to their limited viewpoint. Tolstoy's solution was for the sheep to stop thinking that things happen only for "the attainment of their sheep aims" and realize that there are hidden aims that explain everything perfectly well, and so there is no need to resort to the concept of chance.

Chance as an Unseen Force

Eighty-three years later Carl Jung published a similar idea in his well-known essay "Synchronicity, An Acausal Connecting Principle." He postulated the existence of a hidden force that is responsible for the occurrence of seemingly related events that otherwise appear to have no causal connection. The initial story of the six fish encounters is Jung's, taken from his book. He finds this string of events unusual, too unusual to be ascribable to chance. He thinks something else must be going on—and labels it the acausal connecting principle.

Persi Diaconis, Stanford Professor and former professor of mine, thinks critically about Jung's example: suppose we encounter the concept of fish once a day on average, according to what statisticians call a "Poisson process" (another fish reference!). The Poisson process is a standard mathematical model for counts; radioactive decay, for example, seems to follow a Poisson process. The model presumes a certain fixed rate at which observations appear on average; otherwise they are random. So we can consider a Poisson process for Jung's example with a long-run average rate of one observation per 24 hours and calculate the probability of seeing six or more observations of fish in a 24-hour window. Diaconis finds the chance to be about 22%. Seen from this perspective, Jung shouldn't have been surprised.
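
As a sanity check on the model (this is not a reconstruction of Diaconis's full calculation), the tail probability for a single pre-specified 24-hour window under a Poisson with mean 1 is easy to compute, and it comes out far smaller than 22%. The quoted figure must therefore account for the many chances a cluster has to occur somewhere across an extended stretch of observation:

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# For one fixed day, six or more fish references are very unlikely...
print(poisson_tail(6, 1.0))   # ≈ 0.000594

# ...but over many days there are many overlapping windows in which
# such a cluster could land, which inflates the chance dramatically.
```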

The Statistical Revolution: Chance in Models of Data Generation

Only about two decades after Tolstoy penned his lines about sheep, Karl Pearson brought about a statistical revolution in scientific thinking with a new idea of how observations arose, the same idea used by Diaconis in his probability calculation above. Pearson suggested that nature presents data from an unknown distribution but with some random scatter. His insight was that this is a different concept from measurement error, which adds additional error when the observations are actually recorded.

Before Pearson, science dealt with things that were "real," such as laws describing the movement of the planets or blood flow in horses, to use examples from David Salsburg's book, The Lady Tasting Tea. What Pearson made possible was a probabilistic conception of the world. Planets didn't follow laws with exact precision, even after accounting for measurement error. The exact course of blood flow differed in different horses, but the horse circulatory system wasn't purely random. In estimating distributions rather than the phenomena themselves, we are able to abstract a more accurate picture of the world.

Chance Described by Probability Distributions

That measurements themselves have a probability distribution was a marked shift from confining randomness to the errors in the measurement. Pearson's conceptualization is useful because it permits us to estimate whether what we see is likely or not, under the assumptions of the distribution. This reasoning is now our principal tool for judging whether or not we think an explanation is likely to be true.

We can, for example, quantify the likelihood of drug effectiveness or carry out particle detection in high energy physics. Is the distribution of the mean response difference between drug treatment and control groups centered at zero? If that seems likely, we can be skeptical of the drug's effectiveness. Are candidate signals so far from the distribution for known particles that they must be from a different distribution, suggesting a new particle? Detecting the Higgs boson requires such a probabilistic understanding of the data to differentiate Higgs signals from other events. In all these cases the key is that we want to know the characteristics of the underlying distribution that generated the phenomena of interest.
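
The drug-trial question above (is the distribution of the mean treatment-control difference centered at zero?) can be sketched with a simple permutation test. The responses below are made-up numbers, purely for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical patient responses (invented data for illustration only).
treatment = [5.1, 6.3, 5.8, 6.9, 6.1, 5.7, 6.4, 6.0]
control   = [4.9, 5.2, 4.6, 5.5, 5.0, 4.8, 5.3, 5.1]

observed = statistics.mean(treatment) - statistics.mean(control)

# Under the null hypothesis the group labels are arbitrary, so shuffle
# them and see how often a difference at least this large arises by chance.
pooled = treatment + control
n_extreme, n_trials = 0, 10_000
for _ in range(n_trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_trials
print(f"observed difference {observed:.2f}, permutation p-value ≈ {p_value:.4f}")
```

A small p-value says the observed difference would be unlikely if the underlying distribution were really centered at zero, which is exactly the style of reasoning Pearson's framework makes possible.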

Pearson's incorporation of randomness directly into the probability distribution enables us to think critically about likelihoods and quantify our confidence in particular explanations. We can better evaluate when what we see has special meaning and when it does not, permitting us to better reach our "human aims."

Carl Menger's account of the origin of money is my favourite scientific explanation. It is deeply satisfying because it shows how money can develop from barter without anyone consciously inventing it. As such, it is a great example of Adam Smith's invisible hand, or what scientists now call "emergence."

Menger (1840-1921) founded the Austrian school of economics, a heterodox school of thought which is derided by many mainstream economists. Yet their accounts of the origin of money beg the very question which Menger answered. The typical mainstream economics textbook lists the problems of barter exchange, and then explains how money overcomes these problems. However, that does not really explain how money actually got started, any more than listing the advantages of air travel explains how aeroplanes were invented. As Lawrence White puts it in his book on The Theory of Monetary Institutions (1999), "one is left with the impression that barterers, one morning, suddenly became alert to the benefits of monetary exchange, and, by that afternoon, were busy using some good as money."

That, of course, is ridiculous. In Menger's account, money emerges through a series of small steps, each of which is based on self-interested choices by individual traders with limited knowledge. First, individual barterers realize that, when direct exchange is difficult, they can get what they want by indirect exchange. Rather than finding someone who both has what I want and wants what I have, I need only find someone who wants what I have. I can then trade what I have for his good, even though I don't want to consume it myself, and then trade that for something I do want to consume. In that case, I will have used the intermediate good as a medium of exchange.

Menger notes that not all goods are equally marketable. In other words, some goods are easier to trade than others. It therefore pays a trader to accumulate an inventory of highly marketable items for use as media of exchange. Other alert traders in the market catch on, and eventually the market converges on a single common medium of exchange. This is money.
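
Menger's convergence story can be caricatured in a few lines of simulation. This is a toy model of my own, not Menger's formalism: the goods, the marketability numbers, and the imitation rule are all invented for illustration. Traders abandon a medium of exchange when a trade fails, and a good is accepted either on its own merits or because the partner already uses it:

```python
import random
from collections import Counter

random.seed(1)

GOODS = ["salt", "cattle", "silver", "grain"]
# Intrinsic marketability: the chance a random partner accepts the good
# on its own merits (assumed numbers, for illustration).
MARKETABILITY = {"salt": 0.4, "cattle": 0.2, "silver": 0.6, "grain": 0.3}

N_TRADERS = 200
choices = [random.choice(GOODS) for _ in range(N_TRADERS)]  # scattered start

for _ in range(50):
    counts = Counter(choices)  # start-of-round popularity of each good
    for i in range(N_TRADERS):
        good = choices[i]
        # A partner accepts the good either intrinsically or because the
        # partner already uses it as a medium of exchange.
        p_accept = MARKETABILITY[good] + (1 - MARKETABILITY[good]) * counts[good] / N_TRADERS
        if random.random() > p_accept:
            # Trade failed: copy the medium used by a random other trader.
            choices[i] = choices[random.randrange(N_TRADERS)]

final = Counter(choices)
print(final.most_common())  # typically one good ends up dominating
```

No trader plans the outcome; each merely avoids hard-to-trade goods, yet the positive feedback between marketability and popularity tends to drive the market toward a single medium of exchange.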

Menger's theory does not merely show how money can evolve without any conscious plan; it also shows that money does not depend on legal decrees or central banks. Yet this too is often overlooked by mainstream economists. Take Michael Woodford for example. Woodford is one of the most influential academic monetary economists alive today, yet in his 2003 book Interest and Prices: Foundations of a Theory of Monetary Policy, a central bank somehow becomes part of the economy within less than a page after the initial assumptions are introduced. In other words, Woodford does not even pause for a page to consider what a banking system would look like without a central bank. Yet free banking has a long history; the first such system began in China in about 995 AD, more than 600 years before the first central bank.

Can we account for the emergence of central banking by the same invisible-hand type of explanation that Menger proposed for money? The answer depends, according to Lawrence White, on just what we mean by the term "central banking." If government sponsorship is among the defining features of a central bank, then the answer is no. In other words, the emergence of central banks cannot be accounted for entirely by market forces. At some point, deliberate state action must enter the story. The government's motives for getting involved are not hard to imagine. For one thing, exclusive supply of banknotes gives the government a source of monopoly profits in the form of a zero-interest loan from the public's holding of these non-interest bearing notes.

In these troubled times, when central banks are expanding the stock of high-powered money through massive quantitative easing, Menger's theory is more relevant than ever. It alerts us to the possibility that the response to the current crisis in the eurozone need not be ever more centralization, but could instead consist of a move in the opposite direction, towards a regime in which any bank is free to issue its own banknotes, and market forces—not central banks—control the money supply.

Professor of History, Macquarie University, Sydney; Author, Maps of Time: An Introduction to Big History

The Idea of "Emergence" or "Emergent Properties"

One of the most beautiful and profound ideas I know, and one whose power is not widely enough appreciated, is the idea of "emergence" or "emergent properties".

When created, our Universe was pretty simple. For several hundred million years there were no stars, hardly any atoms more complex than Helium, and of course no planets, no living organisms, no people, no poetry. (The Keck observatory in Hawaii has just found direct evidence of those simple primordial clouds of matter!)

Then, over 13.7 billion years, all these things have appeared, one by one. Each had qualities that had never been seen before. This is 'creativity' in its most basic and mysterious form. Galaxies and Stars were the first large, complex objects. And they had strange new properties. Stars fused Hydrogen atoms into Helium atoms, creating vast amounts of energy and forming hot spots dotted through the Universe. In their death throes, the largest stars created all the elements of the Periodic Table, while the energy they pumped into the cold space around them helped assemble these elements into utterly new forms of matter with entirely new properties. Now it was possible to form planets, bacteria, dinosaurs and us!

Where did all these exotic new things come from? How do new things, new qualities 'emerge'? Were they present in the components from which they were made? The simplest reductive arguments presume that they had to be. But if so, they can be devilishly hard to find. Can you find 'wateriness' in the atoms of hydrogen and oxygen that form water molecules? This is why 'emergence' so often seems magical and mysterious.

But it's not, really. One of the most beautiful explanations of emergence that I know can be found in a Buddhist sutra that was probably composed more than 2,000 years ago: "The Questions of Milinda". (I'm paraphrasing on the basis of an online translation).

Milinda is a great emperor. He was actually a historical figure, the Greco-Bactrian emperor, Menander, who ruled a Central Asian kingdom founded by generals from Alexander the Great's army. In the sutra, Milinda meets with Nagasena, a great Buddhist sage. They probably met in the plains of modern Afghanistan, over 2,000 years ago. Milinda had summoned Nagasena because he was getting interested in Buddhism, but was puzzled because the Buddha seemed to deny the reality of the 'self'. Yet for most of us, the sense of self is the very bedrock of reality. (When Descartes said "I think, therefore I am", I think he meant something like: "The self is the only thing we know that exists for certain.")

So we should imagine Milinda sitting in a royal chariot, followed by a huge retinue of courtiers and soldiers, meeting Nagasena, with his retinue of Buddhist monks for a great debate about the nature of the self, reality and creativity. It's a splendid vision.

Milinda asks Nagasena to explain the Buddha's idea of the 'self'. Nagasena asks: "Sire, how did you come here?" Milinda says: "In a chariot, of course, reverend Sire!" "Sire, if you removed the wheels would it still be a chariot?" "Yes, of course it would," says Milinda with some irritation, wondering where this conversation is going. "And if you removed the framework, or the flag-staff, or the yoke, or the reins, or the goadstick, would it still be a chariot?" Eventually Milinda starts to get it. He admits that at some point his chariot would no longer be a chariot because it would have lost the quality of 'chariotness' and could no longer do what chariots do.

And now, Nagasena cannot resist gloating because Milinda has failed to define in what exact sense his chariot really exists. Then comes the punch line: "Your Majesty has spoken well about the chariot. It is just so with me. … This denomination 'Nagasena,' is a mere name. In ultimate reality this person cannot be apprehended."

Or, in modern language, I, and all the complex things around me, exist only because many things were assembled in a very precise way. The 'emergent' properties are not magical. They are really there and eventually they may start re-arranging the environments that generated them. But they don't exist 'in' the bits and pieces that made them; they emerge from the arrangement of those bits and pieces in very precise ways. And that is also true of the emergent entities known as "you" and "me".

The simple yet profound explanation of why people get themselves into serious epistemological trouble and propose idiotic policies and harebrained ideas is that they ignore the Law of Unintended Consequences. Often enough, when considering a given idea, proposal, option, or policy, people will focus only on its beneficial consequences and ignore the negative or damaging ones, which may be further off in time and harder to foresee. But practically every human action entails a substantial network of consequences, and so a rational assessment of any proposal ought to take into account all of its effects, the obvious and the non-obvious, the intended and the unintended, not just those that immediately leap to the eye.

The core notion was captured most clearly by the French economist Frédéric Bastiat, who in 1850 wrote an essay called "That Which is Seen and That Which is Unseen." In it he describes a little boy's accidentally throwing a ball through a shopkeeper's window. Bystanders console the shopkeeper by telling him that, despite his loss, the accident will be a boon to the glazier, who will replace the window for six francs. (Today this is known as "job creation.")

True enough, the glazier benefits. "But," says Bastiat, "that is only what is seen. It is not seen that as our shopkeeper has spent six francs on one thing, he cannot spend them on another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented."

This so-called "parable of the broken window" has been exploited by free-market economists to oppose many ostensibly "humanitarian" government interventions that, they argue, have negative unintended consequences further down the line. Rent control, for example, while keeping some housing affordable, is said to reduce the livability and availability of apartments that landlords can no longer afford to maintain.

But the lesson of the seen and the unseen is far more encompassing in scope than its applications within the dismal science. For example, species are sometimes introduced into ecological systems with a specific, narrow purpose in mind, whereas their actual introduction is accompanied by ripple effects that go well beyond the original purpose. A well-known example is the importation and breeding of cane toads in Australia in order to control the cane beetle, a pest that ravaged sugar cane crops. Not only did the toads fail to control the cane beetle; they soon proliferated so wildly that they became major pests in their own right, and had the further effect of converting kindly, animal-loving citizens into raging maniacs on public roads and highways, on which drivers, day and night, would swerve right and left in order to run over and flatten (with a loud "pop") as many of the monsters as came into view. (In addition, a group of otherwise sane golfers placed powerful lights on golf courses to attract the toads so that they could hit them with golf clubs and propel these amphibian golf balls across the fairways to their deaths, a scheme that in any case didn't work: "Nine times out of ten the cane toad will get up and hop away," said one unamused critic of the practice.)

More recently, ethanol has been added to gasoline with the intention of reducing our dependence on foreign petroleum. That is what is seen. As for what is unseen, consider the set of ripple effects described by a team of energy specialists (in a 2007 piece actually entitled "The Ripple Effect: Biofuels, Food Security, and the Environment"). Since ethanol is made from corn, the increased demand for corn caused prices to rise (to wit, from $2.60 per bushel in July 2006 to $4.25 per bushel in March 2007). Higher corn prices in turn meant higher prices for corn-fed beef. They also meant that a basic food crop became more expensive for food-insecure people who could least afford a price rise. Further ripple effects were massive increases in farmland values (making new acreage more dear to farmers) and greater use of artificial fertilizers, with increased runoff and negative implications for nitrogen and phosphorus losses to groundwater, surface water, and the atmosphere. And so on.

Moral of the story: When entertaining the merits of some grand new miracle idea, look beyond the obvious.

Since every "explanation" is contingent, limited by its circumstances, and certain to be "superseded" by a better or momentarily more ravishing one, the favorite explanation is really a matter of poetics rather than science or philosophy. That being said, I—like everyone else—fall in "love"—a romantic infatuation that either passes or transforms itself into something else. But it is the repeated momentary ravishment that slowly shapes one, because in a sense one is usually falling in love with the same "type" again and again—and this repetition defines and shapes one's mental character.

When young I was so "shaped and oriented" by what I shall now choose to call my two favorite explanations.

1) I hardly remember the details (being no scientist, of course), but I remember reading about Dirac's theory of the sea of negative energy, out of which pops—as a hole, an absence—the positron that built the world we know. I hope I have this right and haven't made a fool of myself by misrepresenting it—but in a sense it wouldn't matter. Because this image—fueled by this "explanation"—energized my exploration into a new kind of theater in which (evoking a kind of negative theology) I tried—and still try—to pull an audience into the void, rather than feeding it what it already feels about the "real" world and wants confirmed.

2) Shortly thereafter (this was all in the 1950s), I encountered the unjustly neglected philosopher Ortega y Gasset and was sent spinning by his explanation that a human being is not a "whole persona" (in a world where the mantra was to become a "well-rounded person") but, as he famously put it, "I am myself and my circumstances"—i.e., a split creature.

And what Ortegean circumstantial setting "set me up" to receive and be seduced by the Diracian explanation? It had something to do with growing up in privileged Scarsdale—hating it, but hiding the fact that I felt "out of place and awkward" by becoming a secretly alienated, successful high-school achiever. Dirac—as the creator of what was for me a powerful poetic metaphor—allowed me to imagine that the unreachable source (the sea of negative energy) was the real ground on which we all secretly stood, and to take courage from the fact that the world surrounding me was blind to the deeper reality of things and that my "alienation" was in some sense justified.

But the truth is that post high school and college, I have had a new favorite "elegant explanation" every year—sometimes every month—drawn from science and philosophy (include psychoanalysis in whichever of those two categories might be permitted). And this unending succession of theories—all destined to fall, all destined to be replaced—creates a wave pattern of its own, which—perhaps because I cannot accurately describe it (my blindness regarding myself)—is my current (for how long?) favorite explanation of how things are.

Yet blindness as the source of what is valuable is not my personal theory or explanation, but has a long and noble history cutting across most disciplines.

Blindness as the source of what pushes us on and on—either forward or backward. I choose to believe that in a sense this is the "explanation" of my favorites—Freud, Lacan, Parmenides, Nietzsche, Julian Barbour, quantum theory, multiple worlds, etc., etc. All those who hold to the poetry of Niels Bohr, demanding a theory "crazy enough to be true."

Why do we—and all other creatures—behave as we do? No answers really exist. I chose the ethologist and ornithologist Nikolaas Tinbergen's questions for exactly that reason: because sometimes there is no one deep, elegant, and beautiful explanation. Much like a teacher of fishing rather than a giver of fish, Tinbergen did not try to provide any one global explanation, but instead gave us a scaffolding upon which to build our own answers to each individual behavioral pattern we observe—a scaffolding that can be used not only for the ethological paradigms for which he was famous but for all forms of behavior in any domain. Succinctly, Tinbergen asked:

• What is the mechanism? How does it seem to work?
• What is the ontogeny? How do we observe it develop over time?
• What is its function? What are all the possible reasons it is done?
• What is its origin? What are the many ways in which it could have arisen?

In attempting to answer each of these questions, one is forced to think, at the very least, about the interplay of genes and environment; about underlying processes (neuroanatomy, neurophysiology, hormones, etc.); about triggers and timing; and about what advantages and disadvantages are balanced and how these may have changed over time.

Furthermore, unlike most 'favorite' explanations, Tinbergen's questions are enduring. Answers to his questions often reflect a current zeitgeist in the scientific community, but those answers mutate as additional knowledge becomes available. His questions challenge us to rethink our bas