Thursday, March 27, 2014

No one particularly needs me to tell them about the BICEP2 results, given that so many others have already done so very nicely. But here is the way I put it in the latest issue of Prospect, where I wanted to try to put the findings within the broader picture of our unfolding cosmological view over the past century. That’s why I mention dark energy and the cosmological constant, even though one can perfectly well explain inflation without that. I’d contend that, if this work bears up, we’ll see the major landmarks as:
1915/1919: general relativity proposed and ‘confirmed’
1927/29: the Big Bang and cosmic expansion predicted and confirmed
1965: the CMB detected (and a minor landmark with the 1992 COBE results)
1998: the accelerating expansion of the universe
2014: inflation and gravitational waves ‘confirmed’ (?)
Who’s going to put money on Guth and Linde for the Nobel? Probably needs an independent confirmation first, though.

I feel like I spend a fair bit of time these days trying to bring a critical eye to the excesses of science boosterism. So how nice it is to be able for once to relish the sheer joy of how fab science can be. That was an exciting week. And if this piece is a little loose around the edges, forgive me – it had to be knocked out essentially overnight.

______________________________________________________________

The discovery reported on 17 March by a US-led team of scientists will join the small collection of epochal moments that, at a stroke, changed our conception of what the universe is like. It offers evidence that, within an absurdly small fraction of a second after the universe was born in the Big Bang, it underwent a fleeting period of very rapid growth called inflation. This left the fabric of spacetime ringing with “gravitational waves”, which are predicted by Albert Einstein’s theory of general relativity but have never been seen before.

Finding evidence for either inflation or gravitational waves would each be a huge deal on its own. Confirming both together will leave cosmology reeling, and – barring some alternative explanation for the data, which looks unlikely – it is inconceivable that the findings will fail to win a Nobel prize in their own right, and they will probably motivate another for the theories they support. According to astrophysicist Sean Carroll of the California Institute of Technology in Pasadena, the results supply “experimental evidence of something that was happening right when our universe was being born”. That we can find this nearly fourteen billion years after the event is astonishing.

The discovery was made by a team led by John Kovac of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, using the Background Imaging of Cosmic Extragalactic Polarization (BICEP2) telescope located at the South Pole. It’s the kind of milestone in observational cosmology that comes only once every few decades, and fits perfectly into the narrative created by the previous ones.

We might start in 1919, when the British astronomer Arthur Eddington observed, from the island of Principe, the bending of starlight passing by the sun during a total solar eclipse. This confirmed Einstein’s prediction that gravity distorts spacetime, forcing light to trace an apparently curved path. The discovery made Einstein internationally famous.

Because of this effect of gravity, general relativity predicts that violent astrophysical events involving very massive objects – an exploding star (supernova), say, or two black holes colliding – can excite waves in spacetime that travel like ripples in a pond: gravitational waves. Scientists were confident that these waves exist, but detecting them is immensely difficult because the distortions of spacetime are so small, changing the length of a kilometre by a fraction of the radius of an atom. Several gravitational-wave detectors have been built around the world to spot these distortions from a passing gravitational wave via interference effects in laser beams shone along long, straight channels and bounced off mirrors at the end. They haven’t yet revealed anything, but the hope is that gravitational waves might eventually be used just like radio waves or X-rays to detect and study distant astronomical events.

The BICEP2 findings unite gravitational waves and general relativity with the theory of the Big Bang, for which we need to go back to the second cosmological milestone. In 1929 American astronomer Edwin Hubble reported evidence that the universe is expanding: the further away galaxies are, he said, the faster they are receding from us. Hubble’s expanding universe is just what is expected from an origin in a Big Bang. In fact Einstein had already found that general relativity predicts this expansion, but before Hubble most people believed that the universe exists in a static steady state, and so Einstein added a term to his equations to impose that. Yet in 1927 a relatively obscure Belgian physicist, Georges Lemaître, dared to take the theory seriously enough to predict a Big Bang. Hubble’s data confirmed it.

Yet it wasn’t until 1965 that one of the key predictions of the Big Bang theory was verified. Such a violent event should have left an ‘afterglow’: radiation scattered all across the sky, by now dimmed to a haze of microwaves with a temperature of just a little less than three degrees above absolute zero. While setting up a large microwave receiver to conduct radio astronomy, Arno Penzias and Robert Wilson found that they were picking up noise that they couldn’t eliminate. Eventually they realised it was the fundamental noise of the universe itself: the cosmic microwave background (CMB) radiation of the Big Bang. That’s milestone number three.

Number four came in 1998. While observing very distant supernovae, two teams of astronomers discovered that these objects weren’t just receding from us: they were speeding up. That was a real shock, because most cosmologists thought that the gravitational pull of all the matter in the universe would be slowing down its expansion. If, on the contrary, it is speeding up, then some force or principle seems to be opposing gravity. We call it dark energy, but no one knows what it is.

Einstein had already unwittingly provided a formal answer with his balancing act for getting rid of cosmic expansion: he added to his equations a fudge factor now called the cosmological constant. This amounts to saying that the vacuum of empty space itself has an energy – and because this energy increases as space expands, it can in fact produce an acceleration.

BICEP2’s results now look like milestone number five, and they stitch all these ideas together. The telescope has made incredibly detailed measurements of the CMB, spotting temperature differences from place to place in the sky of just a ten-millionth of a degree. Hence the exotic location: the telescope sits at the Amundsen-Scott South Pole station, 2,800 metres up on an ice sheet, where the atmosphere is thin, dry and clear, and free of interference from light and radio signals.

For the fact is that the CMB isn’t simply a uniform glow: some parts of the universe are a tiny bit “hotter” than others. This was confirmed in 1992 by observations with the Cosmic Background Explorer (COBE) satellite, which provided the first map of these “anisotropies” (hot and cool spots) in the CMB – and thereby some of the best evidence for the Big Bang itself. Since then the maps have got considerably more detailed.

Yet the puzzle is not so much why the CMB isn’t entirely smooth but why it isn’t even more uneven. A simple theory of a Big Bang in which the universe expanded from a tiny primeval fireball predicts that it should be much more blotchy, consisting of patches that are receding too fast to affect one another. So space should be far less flat and uniform. In 1980 the American physicist Alan Guth proposed that very early in the Big Bang – about a trillion-trillion-trillionth of a second (10^-36 s) after it began – the universe underwent a burst of extremely rapid expansion, called inflation, which took it from much smaller than an atom to perhaps the size of a tennis ball – an expansion of around 10^60- to 10^80-fold. This would have smoothed away the unevenness. In effect, inflationary theory supposes that there was a short time when the vacuum energy was big enough to boost the universe’s expansion.
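
For a sense of scale, that factor can be recast as doublings or, in cosmologists’ preferred unit, “e-folds” – a quick back-of-envelope sketch (my own arithmetic, not from the BICEP2 paper):

```python
import math

# How many doublings does a 10^60-fold expansion amount to?
factor = 10.0 ** 60
doublings = math.log2(factor)   # number of times the universe doubled in size
efolds = math.log(factor)       # cosmologists count 'e-folds' (factors of e)

print(round(doublings), round(efolds))  # → 199 138
```

Squeezing roughly two hundred doublings into something like 10^-36 seconds is what makes inflation such a startling idea.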

Inflation doesn’t smooth out space completely, though. Quantum mechanics insists on some randomness in the pre-inflation pinprick universe, and these quantum fluctuations would have been frozen into the inflated universe, imprinted for example on the CMB. In turn, those variations seeded the gravitational collapse of gas into stars and galaxies – a staggering idea really, that infinitesimal quantum randomness is now writ large and glowing across the heavens. It’s possible to calculate what pattern these quantum fluctuations ought to give rise to, and observations of the CMB seem to match it.

All the same, there was no direct evidence for inflation – until now. The theory also predicts that the microwave background radiation should be polarized – its electromagnetic oscillations have a preferred orientation – with a characteristic pattern of twists, called the B-mode. This swirly polarization is what BICEP2 has detected, and there’s no obvious explanation for it except inflation. Cue a Nobel nomination for Guth, and other architects of inflationary theory, in October.

What’s all this got to do with gravitational waves? Cosmic inflation was rather like a shock wave that set the universe quaking with primordial gravitational waves. They have now, 13.8 billion years later, died away to undetectable levels. But they’ve left a fingerprint behind, in the form of the polarized swirls of the CMB, just as ocean waves leave ripples in sand. It seems the only way these swirls could have got there was via gravitational waves.

OK, but where does inflation itself come from? Physicists’ usual response to a question they can’t answer is to invent a particle that does the required job, and give it a snazzy name: neutrino, WIMP, graviton, whatever. Carroll, who now proudly records Kovac among his former students, admits that this is what they’ve done here. “We don’t know what field it is that drove inflation”, he says, “so we just call it the inflaton.”

In other words, just as the photon (a ‘particle of light’) is the agent of the force of electromagnetism, and the Higgs boson was initially postulated as the force field that gave some particles their mass, so the inflaton is the alleged particle behind the force that unleashed inflation. It’s just a name, but here’s the point: it’s a particle whose behaviour, like that of all fundamental particles, must be governed by quantum theory.

And that’s where we really hit the exciting stuff. Confirming these two astonishing ideas, inflation and gravitational waves, is terrific. But they always looked a pretty safe bet. It’s what lies behind them that could be truly revolutionary. For gravitational waves are a product of general relativity, the current theory of gravity. But here they get kicked into existence by an effect of quantum mechanics, orchestrated by the quantum inflaton. In other words, we’re looking at an effect that bridges the biggest mystery in contemporary physics: how to reconcile the ‘classical physics’ of relativity with quantum physics, and thus create a quantum theory of gravity. Sure, BICEP2’s results don’t yet show us how to do that. But how many simultaneous revolutions could you cope with?

Wednesday, March 26, 2014

I do enjoy reporting for Nature on the Abel Prize in maths, as I’ve done for the past several years. The Norwegians are friendly and helpful – there’s not the absolute secrecy associated with the Nobels, and this is just as well, because more often than not you need a fair bit of advance warning to get your head around what the prize is being given for. This year it was a little less challenging, though, because I already knew a small amount about the Abel laureate Yakov Sinai, whose work is really about physics, even if it demands the most exacting maths. As Sinai put it in the phone conversation through which he was informed of the award, “mathematics and physics go together like a horse and carriage” – OK, it’s not exactly a catchy quote, but it is very interesting to see physics formulated with such rigour. I remember hearing years ago how mathematicians generally can’t believe what physicists think is a rigorous argument or proof. But I don’t think they feel that way about Sinai’s work. Anyway, here’s the pre-edit of the Nature story.

____________________________________________________________________

Abel Prize laureate has explored physics “with the soul of a mathematician”

The Norwegian Academy of Science and Letters has awarded the 2014 Abel Prize, often regarded as the “maths Nobel”, to Russian-born mathematical physicist Yakov Sinai of Princeton University. The award cites “his fundamental contributions to dynamical systems, ergodic theory, and mathematical physics.”

Jordan Ellenberg, a mathematician at the University of Wisconsin who presented the award address today, says that Sinai has worked on questions relating to real physical systems “with the soul of a mathematician”. He has developed tools that show how systems that look superficially different might have deep similarities, much as Isaac Newton showed that the fall of an apple and the movements of the planets are guided by the same principles.

Sinai’s work has been largely in the field now known as complex dynamical systems, which might be regarded as accommodating ideal mechanical laws to the messy complications of the real world. While Newton’s laws of motion provide an approximate description of how objects move under the influence of forces in some simple cases – the motions of the planets, for example – the principles governing real dynamical behaviour are usually more complicated. That’s the case for the weather system and atmospheric flows, population dynamics, physiological processes such as heartbeat, and much else.

Sometimes these movements are subjected to random influences, such as the jiggling of small particles by thermal noise. These are called stochastic dynamical processes. The perfect predictability of Newton’s laws might also be undermined simply by the presence of too many mutually interacting bodies, as in fluid flow. For even just three bodies, Newton’s deterministic laws may lead to chaotic behaviour, meaning that vanishingly small differences in the initial conditions can lead to widely different outcomes over long times. This kind of chaos is now known to be present in the orbits of planets in the solar system.

Sinai has developed mathematical tools for exploring such behaviour. He has identified quantities that remain the same even if the trajectories of objects in these complex dynamical systems become unpredictable. His interest in these issues began while he was at Moscow State University in the late 1950s as a student of Andrey Kolmogorov, one of the greatest mathematical physicists of the twentieth century, who established some of the foundations of probability theory.

Sinai and Kolmogorov showed that even for dynamical systems whose detailed behaviour is unpredictable – whether because of chaos or randomness – there is a quantity that measures just how ‘complex’ the motion is. Inspired by the work of Claude Shannon in the 1940s, who showed that a stream of information can be assigned an entropy, Sinai and Kolmogorov defined a related entropy that measures the predictability of the dynamics: the higher the Kolmogorov-Sinai (K-S) entropy, the lower the predictability.

Ellenberg says that, whereas many physicists might have expected such a measure to distinguish between deterministic systems (where all the interactions are exactly specified) and stochastic ones, the K-S entropy showed that in fact there are qualitatively different types of purely deterministic system: those with zero entropy, which can be predicted exactly, and those with non-zero entropy, which are not wholly predictable – in particular, chaotic systems.
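
That distinction can be made concrete with a toy example of my own (using the logistic map, not any system Sinai himself analysed): estimate the entropy rate of the 0/1 symbol stream a deterministic system emits, in a periodic regime versus a chaotic one.

```python
import math
from collections import Counter

def symbol_sequence(r, x0=0.123, n=20000, burn=1000):
    """Iterate the logistic map x -> r*x*(1-x) and record 0/1
    symbols (x < 0.5 or not) after discarding a transient."""
    x, out = x0, []
    for i in range(burn + n):
        x = r * x * (1 - x)
        if i >= burn:
            out.append('0' if x < 0.5 else '1')
    return ''.join(out)

def block_entropy_rate(seq, block=6):
    """Shannon entropy of length-`block` words, in bits per symbol.
    For long blocks this estimates the Kolmogorov-Sinai entropy
    of the symbolic dynamics."""
    counts = Counter(seq[i:i + block] for i in range(len(seq) - block))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total)
                for c in counts.values()) / block

periodic = block_entropy_rate(symbol_sequence(3.2))  # settles onto a period-2 orbit
chaotic = block_entropy_rate(symbol_sequence(4.0))   # fully chaotic regime
print(periodic, chaotic)
```

Both systems are strictly deterministic, yet the periodic one yields an entropy rate of essentially zero – perfectly predictable – while the chaotic one approaches one bit per symbol, the maximum possible for a binary alphabet.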

Invariant measures like the K-S entropy are related to how thoroughly such a system explores all the different states that it could possibly adopt. A system that ‘visits’ all these states more or less equally on average is said to be ergodic. One of the most important model systems for studying ergodic behaviour is the Sinai billiard, which Sinai introduced in the 1960s. Here a particle bounces around (without losing any energy) within a square perimeter, in the centre of which there is a circular wall. This was the first dynamical system for which it could be proved, by Sinai himself, that all the particle’s trajectories are ergodic – they pass through all of the available space. They are also chaotic, in the sense that the slightest difference in the particle’s initial trajectory leads rather quickly to motions that don’t look at all alike.
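
To get a feel for that sensitivity, here is a rough time-stepped sketch of such a billiard – my own crude approximation with simple collision handling, not an exact event-driven simulation: two particles launched a millionth of a radian apart soon follow completely different paths.

```python
import math

# Toy Sinai billiard: a unit square with a reflecting disc at the centre.
CX, CY, R = 0.5, 0.5, 0.2

def step(x, y, vx, vy, dt=1e-3):
    x, y = x + vx * dt, y + vy * dt
    if x < 0 or x > 1: vx = -vx          # reflect off vertical walls
    if y < 0 or y > 1: vy = -vy          # reflect off horizontal walls
    dx, dy = x - CX, y - CY
    d = math.hypot(dx, dy)
    if d < R:                            # reflect off the central disc
        nx, ny = dx / d, dy / d          # outward normal at the hit point
        dot = vx * nx + vy * ny
        vx, vy = vx - 2 * dot * nx, vy - 2 * dot * ny
        x, y = CX + nx * R, CY + ny * R  # push back onto the boundary
    return x, y, vx, vy

def endpoint(angle, steps=30000):
    x, y, vx, vy = 0.1, 0.1, math.cos(angle), math.sin(angle)
    for _ in range(steps):
        x, y, vx, vy = step(x, y, vx, vy)
    return x, y

# Two launch angles differing by one part in a million:
ax, ay = endpoint(0.7)
bx, by = endpoint(0.7 + 1e-6)
separation = math.hypot(ax - bx, ay - by)
print(separation)
```

By the end the two particles are typically macroscopically far apart – the initial hair’s-breadth difference has been amplified to the scale of the table itself, which is the chaos Sinai proved.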

In these and other ways, Sinai has laid the groundwork for advances in understanding turbulent fluid flow, the statistical microscopic theory of gases, and chaos in quantum-mechanical systems.

The Abel Prize in mathematics, named after Norwegian mathematician Niels Henrik Abel (1802–29), is modelled on the Nobel prizes and has been awarded every year since 2003. It is worth 6 million Norwegian kroner, or about US$1 million.

“I'm delighted that Sinai, whose scientific and social company I enjoy, has won this prize”, says Michael Berry of Bristol University, who has worked on chaotic quantum billiards and other aspects of complex dynamics.

Ellenberg feels that Sinai’s work has demonstrated how, in maths, “a good definition is as important as a good theorem.” While physicists knew in a loose way what they meant by entropy, he says, Sinai has asked “what are we actually talking about here?” This drive to get the right definition has helped him identify what is truly important and fundamental to the way a system behaves.

Monday, March 17, 2014

Listen, I’m going to be straight with you. Well, that’s what I’d intended, but already language has got in the way – you’re not “listening” at all, and “straight” has so many meanings that you should be unsure what is going to follow. All the same, I doubt if any of you thought this meant I was going to stand to attention or be rigorously heterosexual. Language is ambiguous – and yet we cope with it.

But surely that’s a bit of a design flaw, right? We use language to communicate, so shouldn’t it be geared towards making that communication as clear and precise as possible, so that we don’t have to figure out the meaning from the context, or are forever asking “Say that again?” Imagine a computer language that works like a natural language – would the silicon chips have a hope of catching our drift?

Yet the ambiguity of language isn’t a problem foisted on it by the corrupting contingencies of history and use, according to complex-systems scientists Ricard Solé and Luís Seoane of the Pompeu Fabra University in Barcelona, Spain. They say that it is an essential part of how language works. If real languages were too precise and well defined, so that every word referred to one thing only, they would be almost unusable, the researchers say, and we’d struggle to communicate ideas of any complexity.

That linguistic ambiguity has genuine value isn’t a new idea. Cognitive scientists Ted Gibson and Steven Piantadosi of the Massachusetts Institute of Technology have previously pointed out that a benefit of ambiguity is that it enables economies of language: things that are obvious from the context don’t have to be pedantically belaboured in what is said. What’s more, they argued, words that are easy to say and interpret can be “reused”, so that more complex ones aren’t required.

Now Solé and Seoane show that another role of ambiguity is revealed by the way we associate words together. Words evoke other words, as any exercise in free association will show you. The ways in which they do so are often fairly obvious – for example, through similarity (synonymy) or opposition (antonymy). “High” might make you think “low”, or “sky”, say. Or it might make you think “drugs”, or “royal”, which are semantic links to related concepts.

Solé and Seoane look at the intersecting networks formed from these semantic links between words. There are various ways to plot these out – either by searching laboriously through dictionaries for associations, or by asking people to free-associate. There are already several data sets of semantic networks freely available, such as WordNet, which use fairly well-defined rules to determine the links. It’s possible to find paths through the network from any word to any other, and in general there will be more than one connecting route. Take the case of the words “volcano” and “pain”: on WordNet they can be linked via “pain-ease-relax-vacation-Hawaii-volcano” or “pain-soothe-calm-relax-Hawaii-volcano”.

A previous study found that WordNet’s network has the mathematical property of being “scale-free”. This means that there is no real average number of links per word. Some words have lots of links, most have hardly any, and there is everything in between. There’s a simple mathematical relationship between the probability of a word having k connections (P(k)) and the value of k itself: P(k) is proportional to k raised to a negative power – here the exponent is about 3, so P(k) falls off as 1/k^3. This is called a power law.
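
To see what such a power law implies, here are some illustrative numbers of my own (the exponent 3 is taken from the study; everything else is just arithmetic, not the actual WordNet fit):

```python
# What a 1/k^3 power law implies for the spread of word connectivities.
gamma = 3.0
kmax = 10**4

# Unnormalised P(k) proportional to k^-gamma over k = 1 .. kmax
weights = [k**-gamma for k in range(1, kmax + 1)]
Z = sum(weights)
P = [w / Z for w in weights]

# A word with 10 links is about 1000x rarer than a word with 1 link...
ratio = P[0] / P[9]
print(ratio)  # ≈ 1000

# ...yet hub words with 100+ links still carry real probability mass,
# which an exponential (Poisson-like) tail would utterly suppress.
tail = sum(P[99:])
print(tail)
```

It’s those rare but not vanishingly rare hubs that make the network hang together.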

A network in which the links are apportioned this way has a special feature: it is a “small world”. This means that it’s just about always possible to find shortcuts that will take you from one node of the network (one word) to any other in just a small number of hops. It’s the highly connected, common words that provide these shortcuts. Some social networks seem to have this character too, which is why we speak of the famous “six degrees of separation”: we can be linked to just about anyone on the planet through just six or so acquaintances.

Solé and Seoane now find that the semantic network is only a small world when it includes words that have more than one meaning (in linguistic terms, polysemy). Take away polysemy, the researchers say, and the route between any pair of words chosen at random becomes considerably longer. By having several meanings, polysemous words can connect clusters of concepts that might otherwise remain quite distinct (just as “right” joins words about spatial relations to words about justice). Again, much the same is true of our social networks, which seem “small” because we each have several distinct roles or personas – as professionals, parents, members of a sports team, and so on – so that we act as links between quite different social groups, making the web easy to navigate.
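
The effect can be sketched with a hypothetical mini-network of my own invention – the words and links below are illustrative, not taken from WordNet:

```python
from collections import deque

# Toy semantic network: a spatial cluster and a justice cluster, bridged
# by the polysemous word "right" (spatial sense vs legal sense).
links = {
    "left":    {"right", "up"},
    "up":      {"left", "down"},
    "down":    {"up", "right"},
    "right":   {"left", "down", "law", "justice"},   # the polysemous bridge
    "law":     {"right", "justice", "court"},
    "justice": {"right", "law", "court"},
    "court":   {"law", "justice"},
}

def path_length(graph, a, b):
    """Breadth-first search; returns the hop count from a to b,
    or None if no path exists."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

before = path_length(links, "up", "court")
# Remove the polysemous word entirely:
pruned = {w: {x for x in ns if x != "right"}
          for w, ns in links.items() if w != "right"}
after = path_length(pruned, "up", "court")
print(before, after)
```

In this toy case, deleting the one polysemous word doesn’t merely lengthen the route between the two clusters – it disconnects them altogether, which is the authors’ point in miniature.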

The small-world character of social networks helps to make them efficient at spreading and distributing information. For example, it makes them “searchable”, so that if we want advice on bee-keeping, we might well have a friend who has a bee-keeping friend, rather than having to start from scratch. By the same token Solé and Seoane think that small-world semantic networks make language efficient at enabling communication, because words with multiple meanings make it easier to put our thoughts into words. “We browse through semantic categories as we build up conversations”, Seoane explains. Let’s say we’re talking about animals. “We can quickly retrieve animals from a given category (say reptiles) but the cluster will soon be exhausted”, he says. “Thanks to ambiguous animals that belong to many categories at a time, it is possible to radically switch from one category to another and resume the search in a cluster that has been less explored.”

What’s more, the researchers argue that the level of ambiguity we have in language is at just the right level to make it easy to speak and be understood: it represents an ideal compromise between the needs of the speaker and the needs of the listener. If every single object and concept has its own unique word, then the language is completely unambiguous – but the vocabulary is huge. The listener doesn’t have to do any guessing about what the speaker is saying, but the speaker has to say a lot. (For example, “Come here” might have to be something like “I want you to come to where I am standing.”) At the other extreme, if the same word is used for everything, that makes it easy for the speaker, but the listener can’t tell if she is being told about the weather or a rampaging bear.

Either way, communication is hard. But Solé and Seoane argue that with the right amount of polysemy, and thus ambiguity, the two can find a good trade-off. What’s more, it seems that this compromise also brings the advantage of “collapsing” semantic space into a denser net that allows us to make fertile connections between disparate concepts. We have even arguably turned this small-world nature of ambiguity into an art form – we call it poetry. Or as you might put it,
Words, after speech, reach
Into the silence. Only by the form, the pattern,
Can words or music reach
The stillness.

Sunday, March 16, 2014

I hear that the relaunched Cosmos TV series has included a little hagiography of Giordano Bruno as a martyr to Copernican science, and I sigh. If I was a sensible chap, I would simply accept that this myth is now never going to be squashed, because it seems to be too important to many people as a means of “showing” how the Roman Church was determined to stamp out the kind of independent and anti-dogmatic thought that supposedly gave rise to modern science. In short, Bruno fills the same martyr’s role here as early Christians needed to sustain their own faith.

But I am not a sensible chap, because I persist with this fantasy that one day everyone will be persuaded to go back and look at the history and see that this portrayal of Bruno is a (relatively) modern invention – an aspect of the nineteenth-century Draper-White narrative that pitched science in head-on combat with the Church. I am foolish enough to imagine that what I wrote in my book Curiosity is actually going to be read and heeded:
“The Neapolitan friar Giordano Bruno had an arrogant and argumentative nature that was bound to get him into serious trouble eventually, although if he had not happened to promote Copernican cosmology it is doubtful that he would command any greater fame today than the many other intellectual vagabonds who wandered Europe during the Counter-Reformation. It seems a vain hope that Bruno should ever cease to be the ‘martyr to science’ that modern times have made of him; maybe we must resign ourselves to the words spoken by Brecht’s Galileo: ‘Unhappy the land where heroes are needed.’
The fact is that Bruno’s Copernicanism is not mentioned in the charges levelled against him by the Inquisition in 1576, nor the denunciation of 1592 that led to his imprisonment and lengthy trial. Of the heretical accusations that condemned him to be burnt at the stake in 1600, only two are still recorded, which relate to obscure theological matters. He held many opinions of which the Church disapproved deeply, on such delicate matters as the Incarnation and the Trinity, not to mention having a long history of associating with disreputable types. Bruno’s death stains the Church’s record of tolerance for free thought, but says little about its attitude to science. There is nothing in Bruno’s espousal of a world soul, or his long discourses on demons and other spiritual beings, or his unconventional system of the elements, that makes him so very unusual for his times – but nothing either that qualifies him for canonization in the scientific pantheon.”

But then – praise be! – I see that others have done the job already, and better than I could. Corey Powell at Discover magazine has set the record straight on Bruno, attacking this old Whig view of science history. Meg Rosenburg has posted a nice piece on Bruno too. And best of all, Rebekah Higgitt has written a masterful article in her Guardian blog about why this kind of appropriation of history to serve our modern agenda is invariably false and damaging to the historical record. As she puts it, “Historical figures who lived in a very different world, very differently understood, cannot be turned into heroes who perfectly represent our values and concerns without doing serious damage to the evidence.” And this is really the point, for I’m tired and, I fear, a little cross at scientists who seem to think that being scrupulous with the evidence only applies to science and not to something as wishy-washy as the humanities. So hurrah to all three of you!

And I couldn’t help but be struck by how, at the same time, we have Brendan O’Neill (who I can’t say I always agree with) taking Richard Dawkins to task by pointing out how the Enlightenment was not, as many alleged champions of “Enlightenment values” like to insist today, about attacking religion, but rather about demanding religious tolerance and the freedom to worship as one pleases. But Brendan doesn’t take this point far enough. For the one thing Enlightenment heroes like Voltaire and Rousseau could not abide was atheism. The Enlightenment is as abused an historical notion as Bruno’s “martyrdom” is – by much the same people and for much the same reasons. And so this motivates me to post here what I said about all this at the How The Light Gets In festival at Hay-on-Wye last summer, as part of a debate on optimism, pessimism and the legacy of the Enlightenment. Here it is.

Yes, I’m fool enough to think that this might stop some folk from banging on about “Enlightenment values.” And yes, I know that this is deeply irrational of me.

I’ve been trying to parse the title of this discussion ever since I saw it. The blurb says “The Enlightenment taught us to believe in the optimistic values of humanism, truth and progress” – but of course the title, which sounds a much more pessimistic note, comes from Thomas Hobbes’ Leviathan, and yet Hobbes too is very much a part of the early Enlightenment. You might recall that the phrase comes from Hobbes’ description of life under what he called the State of Nature: the way people live if left to their own devices, without any overarching authority to temper their instincts to exploit one another.

That scenario established the motivation for Hobbes’ attempt to deduce the most reliable way to produce a stable society. And what marks out Hobbes’ book as a key product of the Enlightenment is that he tried to develop his argument not, as previous political philosophies going back to Plato had done, according to preconceptions and prejudices, but according to strict, quasi-mathematical logic. Hobbes’ Commonwealth is a Newtonian one – or rather, to avoid being anachronistic, a Galilean one, because he attempted to generalize his reasoning from Galileo’s law of motion. This was to be a Commonwealth governed by reason. And let me remind you that what this reason led Hobbes to conclude is that the best form of government is a dictatorship.

Now of course, this sort of exercise depends crucially on what you assume about human nature from the outset. If, like Hobbes, you see people as basically selfish and acquisitive, you’re likely to end up concluding that those instincts have to be curbed by drastic measures. If you believe, like John Locke, that humankind’s violent instincts are already curbed by an intrinsic faculty of reason, then it becomes possible to imagine some kind of more liberal, communal form of self-government – although of course Locke then argued that state authority is needed to safeguard the private property that individuals accrue from their efforts.

Perhaps the most perceptive view was that of Rousseau, who argued in effect that there is no need for some inbuilt form of inhibition to prevent people acting anti-socially, because they will see that it is in their best interests to cooperate. That’s why agreeing to abide by a rule of law administered by a government is not, as in Hobbes’ case, an abdication of personal freedom, but something that people will choose freely: it is the citizen’s part of the social contract, while the government is bound by this contract to act with justice and restraint. This is, in effect, precisely the kind of emergence of cooperation that is found in modern game theory.

My point here is that reasoning about governance during the Enlightenment could lead to all kinds of conclusions, depending on your assumptions. That’s just one illustration of the fact that the Enlightenment doesn’t have anything clear to say about what people are like or how communities and nations should be run. In this way and in many others, the Enlightenment has no message for us – it was too diverse, but more importantly, it was much too immersed in the preoccupations of its times, just like any other period of history. This is one reason why I get so frustrated about the way the Enlightenment is used today as a kind of shorthand for a particular vision of humanity and society. What is most annoying of all is that that vision so often has very little connection with the Enlightenment itself, but is a modern construct. Most often, when people today talk about Enlightenment values, they are probably arguing in favour of a secular, tolerant liberal democracy in which scientific reason is afforded a special status in decision-making. I happen to be one of those people who rather likes the idea of a state of that kind, and perhaps it is for this reason that I wish others would stop trying to yoke it to the false idol of some kind of imaginary Enlightenment.

To state the bleedin’ obvious, there were no secular liberal democracies in the modern sense in eighteenth-century Europe. And the heroes of the Enlightenment had no intention of introducing them. Take Voltaire, one of the icons of the Enlightenment. Voltaire had some attractive ideas about religious tolerance and separation of church and state. But he was representative of such thinkers in opposing any idea that reason should become a universal basis for thought. It was grand for the ruling classes, but far too dangerous to advocate for the lower orders, who needed to be kept in ignorance for the sake of the social order. Here’s what he said about that: “the rabble… are not worthy of being enlightened and are apt for every yoke”.

What about religion, then? Let’s first of all dispose of the idea that the Enlightenment was strongly secular. Atheism was very rare, and condemned by almost all philosophers as a danger to social stability. Rousseau calls for religious tolerance, but not for atheists, who should be banished from the state because their lack of fear of divine punishment means that they can’t be trusted to obey the laws. And even people who affirm the religious dogmas of the state but then act as if they don’t believe them should be put to death.

Voltaire has been said to be a deist, which means that he believed in a God whose existence can be deduced by reason rather than revelation, and who made the world according to rational principles. According to deists, God created the world but then left it alone – he wasn’t constantly intervening to produce miracles. It’s sometimes implied that Enlightenment deism was the first step towards secularism. But contrary to common assertions, there wasn’t any widespread deist movement in Europe at that time. And again, even ideas like this had to be confined to the better classes: the message of the church should be kept simple for the lower orders, so that they didn’t get confused. Voltaire said that complex ideas such as deism are suited only “among the well-bred, among those who wish to think.”

Well, enough Enlightenment-bashing, perhaps – but then why do we have this myth of what these people thought? Partly that comes from the source of most of our historical myths, which is Victorian scholarship. The simple idea that the Enlightenment was some great Age of Reason is now rejected by most historians, but the popular conception is still caught up with a polemical view developed in particular by two nineteenth-century Americans, John William Draper and Andrew Dickson White. Draper was a scientist who decided that scientific principles could be applied to history, and his 1862 book History of the Intellectual Development of Europe was a classic example of Whiggish history in which humankind makes a long journey out of ignorance and superstition, through an Age of Faith, into a modern Age of Reason. But where we really enter the battleground is with Draper’s 1874 book History of the Conflict between Religion and Science, in which we get the stereotypical picture of science having to struggle against the blinkered dogmatism of faith – or rather, because Draper’s main target was actually Catholicism, against the views of Rome, because Protestantism was largely exonerated. White, who founded Cornell University, gave much the same story in his 1896 book A History of the Warfare of Science with Theology in Christendom. It’s books like this that gave us the simplistic views on the persecution of Galileo that get endlessly recycled today, as well as myths such as the martyrdom of Giordano Bruno for his belief in the Copernican system. (Bruno was burnt at the stake, but not for that reason.)

The so-called “conflict thesis” of Draper and White has been discredited now, but it still forms a part of the popular view of the Enlightenment as the precursor to secular modernity and to the triumph of science and reason over religious dogma.

But why, if these things are so lacking in historical support, do intelligent people still invoke the Enlightenment trope today whenever they fear that irrational forces are threatening to undermine science? Well, I guess we all know that our critical standards tend to plummet when we encounter ideas that confirm our preconceptions. But it’s more than this. It is one thing to argue for how we would prefer things to be, but far more effective to suggest that things were once like that, and that this wonderful state of affairs is now being undermined by ignorant and barbaric hordes. It’s the powerful image of the Golden Age, and the rhetoric of a call to arms to defend all that is precious to us. What seems so regrettable and ironic is that the casualty here is truth, specifically the historical truth, which of course is always messy and complex and hard to put into service to defend particular ideas.

Should we be optimistic or pessimistic about human nature? Well – big news! – we should be both, and that’s what history really shows us. And if we want to find ways of encouraging the best of our natures and minimizing the worst, we need to start with the here and now, and not by appeal to some imagined set of values that we have chosen to impose on history.

It seems almost tautological to say that for centuries scientists studied light in order to comprehend the visible world. Why are things coloured? What is a rainbow? How do our eyes work? And what is light itself? These questions preoccupied scientists and philosophers from the time of Aristotle onwards, Isaac Newton, Michael Faraday, Thomas Young and James Clerk Maxwell among them.

But in the late nineteenth century all that changed, and it was largely Maxwell’s doing. This was the period in which the whole focus of physics – still emerging as a distinct scientific discipline – shifted from the visible to the invisible. Light itself was instrumental to that change.

Physics has never looked back from that shift. Today its theories and concepts are concerned largely with invisible entities: fields of force, rays outside our visual perception, particles too small to see even in the most advanced microscopes, ideas of unseen parallel worlds, and mysterious entities named for their very invisibility: dark matter and dark energy.

Things that we can’t see or touch once belonged to the realm of the occult. This simply meant that they were hidden, not necessarily that they were supernatural. But the occult became the hiding place for all kinds of imaginary and paranormal phenomena: ghosts, spirits and demons, telepathy and other ‘psychic forces’. These things seem now to be the antithesis of science, but when science first began to fixate on invisible entities, many leading scientists saw no clear distinction between such occult concepts and hard science.

To make sense of the unseen, we have to look for narratives. This means we fall back on old stories, enshrined in myth and folklore. It’s rarely acknowledged or appreciated that scientists still do this when they are confronted by mysteries and gaps in their knowledge. Those myths aren’t banished as science advances, but simply reinvented.

Occult light

What was it about light that impelled this swerve towards the invisible? In the early nineteenth century Faraday introduced the idea of a field – an invisible, pervasive influence – to explain the nature of electricity and magnetism. In the 1860s Maxwell wrote down a set of equations showing how electricity and magnetism are related. Maxwell’s equations implied that disturbances in these coupled fields – electromagnetic waves – would move through space at the speed of light. It was quickly apparent that these waves in fact are light.

But whereas visible light has wavelengths of between about 400 and 700 millionths of a millimetre, Maxwell’s equations showed that there was no obvious limit to the wavelength that electromagnetic waves can have. They may exist beyond both limits of the visible range – where we can’t see them.
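The point can be made concrete with the wave relation implied by Maxwell’s theory, c = wavelength × frequency: the same equation covers radio waves, visible light and X-rays alike, and only the numbers differ. A small illustrative sketch (my own, with round example wavelengths chosen for clarity):

```python
# One relation, every part of the electromagnetic spectrum.
C = 299_792_458  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m):
    """Frequency of an electromagnetic wave of the given wavelength."""
    return C / wavelength_m

# Visible light: roughly 400-700 millionths of a millimetre (400-700 nm).
print(frequency_hz(550e-9))   # green light, about 5.4e14 Hz
# A long radio wave, 300 m:
print(frequency_hz(300))      # about 1e6 Hz (1 MHz)
# A hard X-ray, 0.1 nm:
print(frequency_hz(0.1e-9))   # about 3e18 Hz
```

Twelve orders of magnitude separate the radio and X-ray examples, yet nothing in the equation marks off the narrow visible slice as special – which is exactly why Maxwell’s result pointed physics towards the invisible.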

These predictions were soon confirmed. In 1887, the German scientist Heinrich Hertz showed that oscillations of electrical current could give rise to long-wavelength radiation, which became known as radio waves. It took less than a decade for the Italian Guglielmo Marconi to show that radio waves could be used to transmit messages across vast distances.

It’s hard to appreciate now how revolutionary this was, not just practically but conceptually. Previously, messages beyond shouting range had to be sent either by a physical letter or as pulses of electricity down telegraph wires. The telegraph was already extraordinary enough, but still it required a physical link between sender and receiver. With radio, one could communicate wirelessly through ‘empty space’.

It is no coincidence that these discoveries happened at the height of the Victorian enthusiasm for spiritualism, in which mediums claimed to be able to contact the souls of the dead. If radio waves could transmit invisibly between a broadcasting device and a receiver, was it so hard to imagine that human brains – which are after all quickened by electrical nerve signals – could act as receivers?

But what of the senders? Already scientists, familiar with the concept of invisible fields, had begun to speculate about non-material beings that inhabit an unseen plane of existence. Maxwell’s friends Peter Guthrie Tait and Balfour Stewart, both professors of physics, published The Unseen Universe (1875), in which they presented the ether of Maxwell’s waves as a bridge between the physical and spiritual worlds. Some of the pioneers of the telegraph had already drawn parallels with spiritualism, calling it ‘celestial telegraphy’. Now the wireless spawned a vision of empty space – still thought to be filled with a tenuous fluid called the ether that carried electromagnetic waves – as being alive with voices, the imprint of invisible intelligences. All you had to do was tune in, just as radio enthusiasts would scan the airwaves for crackly, half-heard snatches of messages from Helsinki or Munich. Rudyard Kipling’s short story “Wireless” (1902) described a man who, feverish from tuberculosis, becomes a receiver for fragments of a poem by Keats, while elsewhere in the house a group of amateur radio operators picks up broadcasts from a nearby ship. While the ‘telegraph line’ of the spiritualist medium offered the comfort of words from departed loved ones, now the wireless seemed instead to make the spirit world a source of impersonal, often meaningless fragments adrift in an unheeding universe.

The discovery of X-rays by Wilhelm Röntgen in 1895 stimulated these imaginings yet more. X-rays, it soon became clear, were invisible rays at the other end of the spectrum from radio, with wavelengths much shorter than those of light. But what made X-rays so astonishing and evocative was that they not only were invisible but revealed the invisible – not least, the bones beneath our flesh, in an unnerving presentiment of death. In the late 1890s people flocked to public demonstrations at shows such as the Urania in Berlin or Thomas Edison’s stage spectacles in New York to watch their skeletons appear on fluorescent, X-ray-sensitive screens. X-ray photography seemed a straightforward extension of the ‘spirit photography’ that had become popular in the 1870s and 80s (faked or genuinely inadvertent double exposures), confirming the photographic emulsion as a ‘sensitive medium’ that could render the invisible visible. Others claimed to see evidence of new types of invisible rays recorded in photographs, and even to be able to photograph ‘thought forms’ and souls.

At the fin de siècle, invisible rays were everywhere, and no claim seemed too extravagant. There were cathode rays and anode rays, wholly spurious radiations such as N-rays and ‘black light’ (although ultraviolet light also acquired that name), and most famously, the ‘uranic rays’ that Henri Becquerel discovered coming from uranium in 1896. These streamed in an unchecked and unquenchable flow, suggesting a tremendous hidden source of energy that, through the work of Pierre and Marie Curie, Ernest Rutherford and others, was eventually traced to the nuclear heart of the atom. The Curies renamed these rays ‘radioactivity’.

There was an old cultural preconception that invisible ‘emanations’ could have life-enhancing agency, whether these were the ‘virtues’ ascribed to medicinal herbs in the Middle Ages or the ‘animal magnetism’ or ‘mesmeric force’ of the 18th century German physician Franz Anton Mesmer. We shouldn’t be surprised, therefore, that at first radioactivity too was widely believed to have miraculous healing powers. “Whatever your Ill, write us”, said the Nowata Radium Sanitarium Company in 1905, “Testimonials of Cases cured will be sent you.” ‘Therapeutic’ radium was added to toothpastes and cosmetics, and spa towns proudly advertised the radioactivity (from naturally occurring radon) in their waters. It wasn’t until the 1920s that quite the opposite was found to be the case: too late to save Marie Curie herself, or the Radium Girls – factory workers who had for a decade been licking paintbrushes dipped in radioactive paint for the dials of watches.

Ghost factories

What all these discoveries told us was that the universe we perceive is only a small part of what is ‘out there’. There was a long tradition of ‘spirit worlds’ going back at least to the Middle Ages, when it was common belief that invisible and probably malevolent demons lurked all around us. These beliefs provided the unconscious template for making sense of the ‘invisible universe’, so that leading scientists such as the physicist William Barrett, who co-founded the Society for Psychical Research in 1882, could write a book like On the Threshold of the Unseen (1917) in which he proposed the existence of human-like invisible ‘elementals’. Another physicist, Edmund Fournier d’Albe, put forward the theory that the human soul is composed of invisible particles called ‘psychomeres’ possessing a rudimentary kind of intelligence. He suggested that this hypothesis could account for paranormal phenomena such as ghosts and fairies.

One of the most prominent of these ‘psychical’ scientists was William Crookes. A chemist and entrepreneur who served as the President of the Royal Society between 1913 and 1915, Crookes became famous when he discovered the new chemical element thallium in 1861. Yet he seems to have been particularly credulous of spiritualists’ claims, if not in fact even collusive with them. He was taken in by several mediums, including the famous Florence Cook – like many mediums a striking young woman who found it easy to manipulate the judgement of Victorian gentlemen of more advanced years. Crookes was convinced that “there exist invisible intelligent beings, who profess to be spirits of deceased people” (he evidently took this to be the sceptical view). To investigate the ‘psychic force’ that he thought mediums commanded, Crookes invented a device called the radiometer or ‘light mill’, in which delicate vanes attached to a pivot would rotate when illuminated by light. Although the reason for the rotation was not, as at first thought, due to the ‘pressure’ exerted by light itself, that pressure is a real enough phenomenon, and the radiometer helped to establish it as such. It was thus an instrument motivated by a belief in the paranormal that prompted some genuinely useful scientific work.

The same may be said of Crookes’ ‘radiant matter’, allegedly a “fourth form of matter” somewhere between ordinary material and pure light. In 1879 he claimed that this stuff existed in “the shadowy realm between known and unknown”, and suspected that it, like the ether, might be a bridge to the spirit world.

Radiant matter was another figment of Crookes’ over-active imagination. But this too bore fruit. He invoked radiant matter to explain a mysterious region inside his gas discharge tubes called the ‘dark space’. But it turned out that this dark region was instead caused by cathode rays, and Crookes’ research on this phenomenon led ultimately to the discovery of electrons and X-rays and, coupled with Marconi’s radio broadcasting, to the development of television. Indeed, several of the early pioneers of television were motivated by their paranormal sympathies, whether it was Crookes refining the cathode ray tube, Fournier d’Albe devising his own idiosyncratic televisual technology, or John Logie Baird, usually regarded as the device’s real inventor, who believed he was in spiritualistic contact with the departed spirit of Thomas Edison.

It is tempting to regard all this as a kind of late-Victorian delirium that engulfed dupes like Crookes – not to mention Arthur Conan Doyle, who famously believed in the photographs of the “Cottingley Fairies”, faked by two teenaged girls in Yorkshire. But there was more to it than that.

For one can argue that radio communication was simply representative of all modern media in that they are ghost factories, forever manufacturing what in 1886 the psychic researcher Frederic W. H. Myers called “phantasms of the living”: disembodied replicas of ourselves, ready to speak on our behalf. Radio could conjure the illusion that the prime minister, or a film star, had become manifest, though disembodied like a phantom, in your sitting room.

How much more potent the illusion was, then, when you could see electronic ghosts as well as hear them. It might have seemed natural and harmless enough to refer to the double images of early television sets, caused by poor reception or bad synchronization of the electron beam, as ‘ghosts’ – but this terminology spoke to, and fed, a common suspicion that the figures you saw on the screen might not always correspond to real people. After all, they might already be dead. News reporters flocked to the home of Jerome E. Travers of Long Island in December 1953 to witness the face of an unknown woman who had appeared on the screen and wouldn’t vanish even when the set was unplugged. (The family had turned the screen towards the wall, as if in disgrace.)

By appearing to transmit our presence over impossible reaches of time and space, and preserving our image and voice beyond death, these media subvert the laws that for centuries constrained human interaction by requiring the physical transport of a letter or the person themself. We submit to the illusion that the voice of our beloved issues from the phone, that the Skyped image conjured on the screen by light-emitting diodes is the far-away relative in the flesh.

Who could possibly be surprised, then, that the internet throngs with ghosts – that, as folklore historian Owen Davies says, “cyberspace has become part of the geography of haunting”? Here too the voices and images of the dead may linger indefinitely; here too pseudonymous identities are said to speak from beyond the grave. More even than the telephone and television, the internet, that invisible babble of voices, seems almost designed to house spirits, which after all are no more ethereal than our own cyber-presence.

Hidden worlds

Contemplating the forms attributed to new invisible phenomena a hundred and more years ago should give us pause when we come to the phantasmal worlds of modern science. For we are still generating them, and their manifestations are familiar. It’s undoubtedly true that our everyday perceptions grant us access to only a tiny fraction of reality. Telescopes responding to radio waves, infrared radiation and X-rays have vastly expanded our view of the universe, while electron microscopes and even finer probes of nature’s granularity have populated the unseeably minuscule microworld. However, our theories and imaginations don’t stop there, and each feeds the other in ways we do not always fully appreciate.

Take, for example, the Many Worlds interpretation of quantum mechanics. There’s no agreement about quite how to interpret what quantum theory tells us about the nature of reality, but the Many Worlds interpretation has plenty of influential adherents. It supposes the existence of parallel universes that embody every possible outcome of the many possible solutions to the equations describing a quantum system. According to physicist Max Tegmark of MIT, “it predicts that one classical reality gradually splits into superpositions of many such realities.” The idea is derived from the work of physicist Hugh Everett in the 1950s – but Everett himself never spoke of “many worlds”. At that time, the prevailing view in quantum theory was that, when you make a measurement on a quantum system, this selects just one of the possible outcomes enumerated in the mathematical entity called the wavefunction – a process called “collapsing the wavefunction”. The problem was there was nothing in the theory to cause this collapse – you had to put it in “by hand”. Everett made the apparently innocuous suggestion that perhaps there is in fact no collapse: that all the other possible outcomes also have a real physical existence. He never really addressed the question of where those other states reside, but his successors had no qualms about building up around them an entire universe, identical to our own in every respect except for that one aspect. Every quantum event causes these parallel universes to proliferate, so that “the act of making a decision causes a person to split into multiple copies”, according to Tegmark. (More properly, they have always existed, but it’s just that things evolve differently in each of them.)

The problem is that this idea itself collapses into incoherence when you try to populate it with sentient beings. It’s not (as sometimes implied) that there are alternative versions of us in these many worlds – they are all in some sense us, but there’s no prescription for where to put our apparently unique consciousness. This conundrum arises not (as some adherents insist) as an inevitable result of “taking the math seriously”, but simply because of the impulse, motivated by neither experiment nor theory, to make each formal mathematical expression a ‘world’ of its own, invisible from ‘this one’. That is done not for any scientific reason, but simply because it is what, in the face of the unknown, we have always done.

Much the same consideration applies to the concept of brane worlds. This arises from the most state-of-the-art variants of string theory, which attempt to explain all the known particles and forces in terms of ultra-tiny entities called strings, which can be envisioned as particles extended into little strands that can vibrate. Most versions of the theory call for variables in the equations that seem to have the role of extra dimensions in space, so that string theory posits not four dimensions (of time and space) but eleven. As physicist and writer Jim Baggott points out, “there is no experimental or observational basis for these assumptions” – the “extra dimensions” are just formal aspects of the equations. However, the latest versions of the theory suggest that these extra dimensions can be extremely large, making these so-called extra-dimensional ‘branes’ (short for membranes) potential repositories for alternative universes, separated from our own like the stacked leaves of a book. Inevitably, there is an urge to imagine that these places too might be populated with sentient beings, although that’s optional. But the point is that these brane worlds are nothing more than mathematical entities in speculative equations, incarnated, as it were, as invisible parallel universes.

Dark matter and dark energy are more directly motivated by observations of the real world. Dark matter is apparently needed to account for the gravitational effects that seem to come from parts of space where no ordinary matter is visible, or not enough to produce that much of a tug. For example, rotating galaxies seem to have some additional source of gravitational attraction, beyond the visible stars and gas, that stops them from flying apart. The ‘lensing’ effect by which distant astrophysical objects get distorted by the gravitational warping of spacetime seems also to demand an invisible form of matter. But dark matter does not ‘exist’ in the usual sense, in that it has not been seen, nor are there theories that can convincingly explain or demand its existence. Dark energy too is a kind of ‘stuff’ required to explain the acceleration of the universe’s expansion, something discovered by astronomers observing far-away objects in the mid-1990s. But it is just a name for a puzzle, without even a hint of any direct detection.

It is fitting and instructive that both of these terms seem to come from the age of William Crookes, whose investigations with gas discharge tubes led him to report a mysterious region inside the tubes called the ‘dark space’, which he explained by invoking his radiant matter. It turned out that there was no such thing as radiant matter; the effects that led Crookes to propose it were instead caused by invisible ‘cathode rays’, shown in 1897 to be streams of subatomic particles called electrons. It seems quite possible that dark energy, and perhaps dark matter too, will turn out to be not exactly ‘stuff’ but symptoms of some hitherto unknown physical principle. These connections were exquisitely intuited by Philip Pullman in the His Dark Materials trilogy, where (the title alone gives a clue) a mysterious substance called Dust is an amalgam of dark matter and Fournier d’Albe’s quasi-sentient psychomeres, given a spiritual interpretation by the scientist-priests of Pullman’s alternative steampunk Oxford University who sense its presence using instruments evidently based on Crookes’ radiometer.

It would be wrong to conclude that scientists are just making stuff up here, while leaning on the convenience of its supposed invisibility. Rather, they are using dark matter and dark energy, and (if one is charitable) quantum many worlds and branes and other imperceptible and hypothetical realms, to perform an essential task, which is to plug gaps in our knowledge with concepts that we can grasp. These makeshift repairs and inventions are needed if science is not to be simply derailed or demoralized by its lacunae. But when this happens, it seems inevitable that the inventions will take familiar forms – they will be drawn from old concepts and even myths, they will be “mysterious” particles or rays or even entire imagined worlds. These might turn out to be entirely the wrong concepts, but they make our ignorance concrete and enable us to think about how to explore it. The only danger is if the scientists themselves forget what they are up to and begin to believe in their own constructs. Then they will be like William Crookes and William Barrett, looking for spirits in the void, seduced by their own tales into thinking that they already have the answer.

Friday, March 07, 2014

“Molecular mechanisms that generate biological diversity are rewriting ideas about how evolution proceeds”. I couldn’t help noticing how similar that sounds to what I was saying in my Nature article last spring, “Celebrate the unknowns”. Some people were affronted by that – although other responses, like this one from Adrian Bird, were much more considered. But this is the claim put forward by Susan Rosenberg and Christine Queitsch in an interesting commentary in Science this week. They point out (as I attempted to) that the “modern synthesis” so dear to some is in need of some modification.

“Among the cornerstone assumptions [of the modern synthesis]”, say Rosenberg and Queitsch, “were that mutations are the sole drivers of evolution; mutations occur randomly, constantly, and gradually; and the transmission of genetic information is vertical from parent to offspring, rather than horizontal (infectious) between individuals and species (as is now apparent throughout the tree of life). But discoveries of molecular mechanisms are modifying these assumptions.” Quite so.

This is all no great surprise. Why on earth should we expect that a theory drawn up 80 or so years ago will remain inviolable today? As I am sure Darwin expected, evolution is complex and doesn’t have a single operative principle, although obviously natural selection is a big part of it. (I need to be careful what I say here – one ticking off I got was from a biologist who was unhappy that I had over-stressed natural selection at the molecular level, which I freely confess was a slight failure of nerve – I have found that saying such things can induce apoplexy in folks who see the shadows of creationism everywhere.) My complaint is why this seemingly obvious truth gets so little airplay in popular accounts of genetics and evolution. I’m still puzzled by that.

I realise now that kicking off my piece with ENCODE was something of a tactical error (even though that study was what began to raise these questions in my mind), since the opposition to that project is fervent to the point of crusading in some quarters. (My own suspicion is that the ENCODE team did somewhat overstate their undoubtedly interesting results.) Epigenetics too is now getting the backlash for some initial overselling. I wish I’d now fought harder to keep in my piece the discussion of Susan Lindquist’s work on stress-induced release of phenotypic diversity (S. Lindquist, Cold Spring Harb. Symp. Quant. Biol. 74, 103 (2009)), which is mentioned in the Science piece – but there was no room. In any case, this gives me the impetus to finally put the original, longer version of my Nature article online on my web site – not tonight, but imminently.

Thursday, March 06, 2014

As Prospect has already noted, neuroscience is going to be an ever fiercer battleground for how we should organize our societies. Gender differences, criminal law, political persuasions – we had better be prepared to grasp some thorny questions about whether or not “our brains make us do it.” To judge from some commentaries, the older psychological frameworks we have used to understand behaviour, dysfunction, trauma, intelligence and ethics – whether that is Freudianism, Kleinianism, object relations, transference or whatever – are about to be replaced with the MRI scanner.

Inevitably, one of the bloodiest fields of combat is going to be education. I say inevitably not only because we know the levels of panic and anxiety schooling already provokes in parents but because few areas of social policy have been so susceptible to ideology, fads and dogma. You can be sure that supporters of every educational strategy will be combing the neuroscience literature for “evidence” of their claims.

That’s why a recent report from the Education Endowment Foundation (EEF) looking at the supposed neurological evidence for 18 teaching techniques is so timely. The report distinguishes those that have rather sound neurological support, such as the cognitive value of minimising stress, engaging in physical exercise and pacing out the school day with plenty of breaks, from those, such as genetics-based or personalized teaching approaches, for which the evidence and understanding remain a long way from delivering benefits in the classroom.

The report is also ready to acknowledge that some techniques, such as learning games or using physical actions to “embody cognition” (enacting “action verbs” rather than just reading them, say), warrant serious consideration even though they may not yet be understood well enough to know how best to translate to the classroom. (Seasoned Prospect readers might like to know that claims about the supposed cognitive benefits of cursive writing were apparently not even deemed worthy of consideration.)

These findings, along with earlier studies by specialists of the “neuromyths” that propagate in classrooms, are nicely rounded up in a commentary by Sense About Science, a non-profit organisation that seeks to provide people with the necessary facts to make informed choices about scientific issues.

Sense About Science has already done a great service in debunking the pseudoscientific programme called Brain Gym, which has convinced many schools that it can make children’s brains “work better” through a series of movements and massage exercises. Brain Gym has also run foul of the scourge of “bad science” Ben Goldacre. The EEF report is more politely, but no less firmly, dismissive: “a review of the theoretical foundations of Brain Gym and the associated peer-reviewed research studies fails to support the contentions of its promoters.”

All this is important and useful for cutting through the hype and fuzzy thinking. The EEF report will be valuable reading for teachers, who are often given little opportunity or encouragement to investigate the basis of the methods they are required to use. But we need to be awfully careful about setting up neuroscience as the arbiter of our understanding of the brain and cognition.

It is, after all, still a young science, and our understanding of how those colourful MRI brain scans translate into human experience remains at times rudimentary. As the EEF report acknowledges, neuroscience has in some instances been able to add little so far to what has already been established by well conducted psychological tests. It is a relief to see brain science now undermining simplistic folk beliefs about, for example, “left brain” and “right brain” personalities. But as Raymond Tallis has elegantly explained, neuroscience is sometimes in danger of spawning a spurious dogma of its own.

It’s not just that the science itself might be poorly interpreted or over-extrapolated. The problem is deeper: whether there exists, or can exist, a firm and reliable link between the objective functioning of neural circuits and the subjective experience of people. Psychology is as much about providing a framework for thinking and talking about the latter as it is about pursuing a reductive explanation in terms of the superior frontal gyrus.

It is currently fashionable, for example, to claim that neuroscience has debunked Freudianism. It’s not even clear what this can mean. Freud’s claims that his ideas were scientific are apt to irritate scientists today partly because they don’t recognize how differently that word was used in the late nineteenth century, when novelists like Émile Zola could claim that they were applying the scientific method to literature. More to the point, Freud’s identification of an unconscious world where primitive impulses raged was really of cultural rather than scientific import. One could argue, if one feels so inclined, that the identification of “primitive” instinctive areas of the brain such as the basal ganglia, as well as the modern understanding of how childhood experiences affect the brain’s architecture, in fact offers some scientific validation of Freud. But the broader point is that there was never going to be any real meaning in seeking a neuro-anatomical correlate of the ego or the id. As (admittedly somewhat crude) metaphors for our conflicting impulses and inclinations, they still make sense – as much sense as concepts like love, jealousy and disgust (which are sure to have complex and variable neural mappings).

This consideration arises in the matter of “multiple intelligences”, a concept promoted in the 1980s by the developmental psychologist Howard Gardner and which now underpins the widespread view that education should cater to different “learning styles” such as visual, auditory and kinaesthetic. The Sense About Science commentary suggests that neuroscience now contradicts the idea, since different brain functions all seem to stem from the same anatomical apparatus. But like many ideas in psychology, the multiple-intelligences theory runs into problems only when it hardens along doctrinaire lines – if it insists, for example, that every child must be classified with a particular learning style, or that different styles have wholly distinct neurological pathways. No one who has any experience on the football pitch (a relatively rare situation for academics) will have the slightest doubt that it makes sense to suggest Wayne Rooney possesses a kind of intelligence quite independent of his ability to read beyond the Harry Potter books. To think in those terms is a useful tool for considering human capacities, regardless of whether fledgling neuroscience seems to “permit” it.

In case you think this sounds like special pleading from a particularly flaky corner of science, bear in mind that the so-called hard sciences are perfectly accustomed to heuristic concepts that lack a rigorous foundation but which help to make sense of the behaviour scientists actually observe – witness, for example, the notions of electronegativity and oxidation state in chemistry. These concepts are not arbitrary but have proved their worth over decades of careful study. The task of psychology is surely to distinguish between baby and bathwater, rather than policing its ideas for consistency with the diktats of MRI scans.

A conference held in Karlsruhe in 2011 was perhaps the first to address the topic of molecular aesthetics. To judge from this collection of articles and imagery stemming from that meeting, it must have been an event in equal measure stimulating, entertaining and perplexing.

The editors Peter Weibel and Ljiljana Fruk have taken the wise decision not simply to put together a collection of papers from the meeting but rather to augment those contributions with a wide range of reprints on the topic, along with a very generous selection of images of related artworks. The result is an engrossing 500-page digest which will surely contain something for everyone.

Roald Hoffmann characteristically puts the issue in a nutshell: “By virtue of not being comfortable in the official literature, aesthetic judgements in chemistry, largely oral, acquire the character of folk literature.” A question not quite addressed here is whether this is how things must be or whether it should be resisted.

The book is nothing if not diverse, which means that the quality is bound to fluctuate. The paranoid guerrilla rantings of the Critical Art Ensemble and the opaque semiotic posturing of Eric Allie offer few useful insights. Kenneth Snelson’s model of electronic structure is decidedly “outsider science.” Some of the artworks, although striking, bear little on the issue of molecular aesthetics. But I’m not complaining – such inclusiveness adds to the richness of the stew.

My own view is much in accord with that advanced here by Joachim Schummer: if we really want to talk about molecular aesthetics then we must cease warbling about molecules that are “beautiful” (meaning pleasing) because of their symmetry and instead conduct a serious investigation of what the term could mean – what criteria we should use for thinking about the ways we represent chemistry and molecules visually, conceptually and sensorially, and about the delight