Saturday, April 25, 2009

A circular argument?

[Here’s the pre-edited version of my latest Muse for Nature News. I have a book review of the stimulating reference 8 appearing shortly in Nature Physics.]

A new proposal for the signature of life on alien worlds resurrects an old idea linking light and life.

The difficulty of saying with scientific rigour what constitutes ‘life’ brings to mind Justice Potter Stewart’s famous description of pornography in 1964: it is hard to define, but we know it when we see it.

Yet do we really? Astrobiologists are haunted by the suspicion of terracentricity: we imagine that life on other planets will look like life here, and bias our searches for extraterrestrials accordingly.

Some in this community struggle nobly to free themselves from such prejudice, questioning, for example, the complacent conviction that life depends on water. Others seek very general signatures that make no assumptions about biochemical specifics.

One of the first such proposals, made by James Lovelock in the context of lander explorations of Mars [1], argued that sustained chemical disequilibrium in the planetary environment should be a telltale sign. This proposal had the virtue that it could rely on surveying a planet’s atmosphere alone. On Earth, the high proportion of oxygen, along with the presence of other trace gases, should be a giveaway, since it takes the continual operation of photosynthesis to prevent that oxygen from becoming locked into minerals.

One of the most ingenious ideas is that life affects the topography of a planet, for example by mediating chemical reactions that erode rock, by forming soil and protecting it from erosion, and by dictating climate. Geomorphologists William Dietrich and J. Taylor Perron have argued [2] that the types and distributions of landforms on Earth probably carry an imprint of life’s influence, and that a better understanding of their formation processes might lead to a clear distinction between the contours of planets with and without life.

In 1993, the idea of ‘remote sensing’ of the fingerprints of life from space was explored experimentally when Carl Sagan and coworkers used data from a flyby of the Earth by NASA’s Galileo spacecraft – a gravity-assist manoeuvre en route to Jupiter – to investigate our planet as though it were an unknown world [3]. From the chemistry of Earth’s atmosphere, images of the planetary surface, and detection of radio-wave emissions, the researchers inferred – somewhat reassuringly – that the presence of water-based life, probably of an intelligent kind, was highly likely.

Now a new kind of fingerprint has been proposed by William Sparks of the Space Telescope Science Institute in Baltimore and his coworkers. They suggest we search for a characteristic signature of life in the light scattered from the surfaces of extrasolar planets [4]. They say that living organisms are likely to make the light circularly polarized, meaning that its oscillating electric field is not randomly oriented but rotates as the wave travels, with a characteristic twist either to the left or the right. This shouldn’t happen if the light simply bounces off inorganic surfaces.

Circular polarization is a feature of light scattered from organisms on Earth, where it has its origin in the ‘handedness’ or chirality of the building blocks of biological molecules. All natural proteins are made from amino acids that have a ‘left-handed’ molecular shape – the mirror-image right-handed amino acids can’t be used by cells to build proteins. And all nucleic acids use only ‘right-handed’ sugar molecules in their backbones. This molecular-scale twist means, for example, that light circularly polarized to the left or the right is absorbed to different degrees by the photosynthetic molecular apparatus of bacteria and plants, creating a net circular polarization in the scattered light.
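The effect can be made concrete with a toy calculation. The numbers below are invented purely for illustration; the only standard ingredient is the definition of the degree of circular polarization as the normalized difference between left- and right-circular intensities (the Stokes V/I ratio).

```python
def circular_polarization(i_left, i_right):
    """Degree of circular polarization: (I_L - I_R) / (I_L + I_R)."""
    return (i_left - i_right) / (i_left + i_right)

# Unpolarized incident light carries equal left- and right-circular components.
i_l, i_r = 1.0, 1.0

# A chiral medium (here, hypothetically, photosynthetic pigments) absorbs the
# two handednesses unequally -- these attenuation factors are made up.
i_l *= 0.95   # left-circular component slightly less absorbed
i_r *= 0.90   # right-circular component slightly more absorbed

print(circular_polarization(i_l, i_r))   # small but nonzero net polarization
```

An inorganic, achiral scatterer would attenuate both components equally, leaving the ratio at zero.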

It’s not obvious that this will be evident when the light is measured from afar, however, because the scattering process is complicated. Light rays that are scattered many times tend to have their polarization randomized, and scattering from reflective surfaces reverses the polarization. That’s why the researchers needed to check the light that bounces off cultures of marine photosynthetic bacteria, to make sure that the signature remains evident. It does – as indeed they found also for reflected light from a maple leaf. In contrast, light scattered from inorganic iron oxide shows no significant circular polarization.

Whether this method will work in real exoplanet searches is another matter. It depends on how much surface scattering would come from living organisms as opposed to inorganic substances, and on whether this light can be distinguished clearly enough from that of the parent star. Some of the planned astronomical instruments that might conduct planet searches could have sufficient resolution for this, however – probably not NASA’s Terrestrial Planet Finder (which is currently postponed indefinitely in any case), say Sparks and colleagues, but perhaps the ground-based European Extremely Large Telescope, which might begin operating around 2018. The Hubble Space Telescope has already revealed some of the chemical ingredients of an extrasolar planet [5,6].

But who says life must have a chiral molecular basis? Sparks and colleagues do. ‘Homochirality’, they say, ‘is thought to be generic to all forms of biochemical life as a necessity for self-replication.’ This statement relies on the work of astrobiologist Radu Popa of Portland State University in Oregon [7]. But what Popa offers is a plausibility argument based on the idea that homochirality simplifies polymer structure in a way that promotes the efficiency of copying information. This doesn’t imply that homochirality is essential, but only that it might help. And we know that life does not always do things in the most efficient way.

However, the notion that Sparks and colleagues are invoking actually goes back much further. An intimate association between life, chirality and light polarization was made in the nineteenth century, first by the French scientist Jean-Baptiste Biot and then by Louis Pasteur, who sought Biot’s advice on his seminal discovery of handedness in organic molecules. Biot, a pioneer in the study of optics and polarization, coined the term ‘optical activity’ to describe a substance that rotates the plane of polarized light, and it was no coincidence that this in itself suggested the operation of some vital, ‘active’ agent, rather than lifeless passive matter. Biot came to believe that optical activity was ‘the sole means in man’s possession of confronting the otherwise indefinable limit between life and nonlife on the molecular level’ [8]. Pasteur became a staunch advocate of this view, to the extent that (contrary to the popular view) he developed something of an anti-materialist, vitalist stance on what life is: he felt that optical rotation must result from ‘the play of vital forces’.

We now know, partly through Pasteur’s own work, that he and Biot were wrong. Sparks and colleagues are on sounder ground, but their idea could be seen to support the suspicion that life is everywhere built in our own image.

Friday, April 24, 2009

You daren’t make it up

One of the things I’ve learnt from writing a novel is that it’s not a sufficient excuse to justify unconvincing aspects of a plot by saying that something like that happened in real life. It’s the job of the author to make the implausible sound, if not plausible, then at least not jarring or undermining to the narrative. I was repeatedly reminded of this when I received comments from editors, and later from readers and reviewers, on events and situations in The Sun and Moon Corrupted that I’d taken straight from life. But I would never have dared invent something apropos of the mysterious red mercury, on which parts of the plot hinge, that was as bizarre as this story, brought to my attention by Ivan Vince. Is nothing too strange to be associated with this stuff?

Tuesday, April 07, 2009

A suggestion that the identification of physical laws can be automated raises questions about what it means to do science.

Two decades ago, computer scientist Kemal Ebcioglu at IBM described a computer program that wrote music like J. S. Bach. Now I know what you’re thinking: no one has ever written music like Bach. And Ebcioglu’s algorithm had a somewhat more modest goal: given the bare melody of a Bach chorale, it could fill in the rest (the harmony) in the style of the maestro. The results looked entirely respectable [1], although sadly no ‘blind tasting’ by music experts ever put them to the test.

Ebcioglu’s aim was not to rival Bach, but to explore whether the ‘laws’ governing his composition could be abstracted from the ‘data’. The goal was really no different from that attempted by scientists all the time: to deduce underlying principles from a mass of observations. Writing ‘Bach-like music’, however, highlights the constant dilemma in this approach. Even if the computerized chorales had fooled experts, there would be no guarantee that the algorithm’s rules bore any relation to the mental processes of Johann Sebastian Bach. To put it crudely, we couldn’t know if the model captured the physics of Bach.

That issue has become increasingly acute in recent years, especially in the hazily defined area of science labelled complexity. Computer models can now supply convincing mimics of all manner of complex behaviours, from the flocking of birds to traffic jams to the dynamics of economic markets. And the question repeatedly put to such claims is: do the rules of the model bear any relation to the real world, or are the resemblances coincidental?

This matter is raised by a recent paper in Science that reports on a technique to ‘automate’ the identification of ‘natural laws’ from experimental data [2]. As the authors Michael Schmidt and Hod Lipson of Cornell University point out, this is much more than a question of data-fitting – it examines what it means to think like a physicist, and perhaps even interrogates the issue of what natural laws are.

The basic conundrum is that, as is well known, it’s always possible to find a mathematical equation that will fit any data set to arbitrary precision. But that’s often pointless, since the resulting equations may be capturing contingent noise as well as meaningful physical processes. What’s needed is a law that obeys Einstein’s famous dictum, being as simple as possible but not simpler.
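The point is easily illustrated. In this sketch (my example, not from the Schmidt–Lipson paper), eight noisy samples of the simple law y = 2x can be fitted exactly by a degree-7 polynomial, but that ‘perfect’ fit is just encoding the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + rng.normal(0.0, 0.05, x.size)   # 'true' law y = 2x, plus noise

# A degree-7 polynomial has 8 coefficients, so it can pass through all
# 8 noisy points exactly -- fitting the noise as faithfully as the law.
overfit = np.polynomial.Polynomial.fit(x, y, deg=7)

# A straight line is 'as simple as possible but not simpler' here.
line = np.polynomial.Polynomial.fit(x, y, deg=1)

print(bool(np.allclose(overfit(x), y)))   # True: zero residual on the data
print(line.convert().coef)                # coefficients close to [0, 2]
```

The polynomial scores perfectly on the data it was fitted to, yet its wild oscillations between the sample points make it useless for prediction; the line recovers something close to the underlying law.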

‘Simpler’ means here that you don’t reduce the data to a trivial level. In complex systems, it has become common, even fashionable, to find power laws (y ∝ x^n) that link two variables [3]. But the very ubiquity of such laws in systems ranging from economics to linguistics is now leading to suspicions that power laws might in themselves lack much physical significance. And some alleged power laws might in fact be different mathematical relationships that look similar over small ranges [4].
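That pitfall is easy to demonstrate numerically. The sketch below (invented numbers, standard library only) fits a straight line on log–log axes to a power law with an exponential cutoff, y = x²·e^(−x/100). Over a single decade the fit looks like a clean power law with exponent close to 2, even though the underlying relationship is not a pure power law.

```python
import math

# Over a narrow range, a power law with an exponential cutoff,
# y = x^2 * exp(-x/100), is nearly indistinguishable from a pure power law.
xs = [1.0 + 0.5 * i for i in range(19)]          # x from 1 to 10
ys = [x ** 2 * math.exp(-x / 100.0) for x in xs]

# Least-squares fit of log y = c + n*log x (a straight line on log-log axes).
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(xs)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
print(slope)   # close to 2, though the law is not a pure power law
```

Only over a much wider range of x would the exponential cutoff bend the curve away from a straight line and betray the difference.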

Ideally, the mathematical laws governing a process should reflect the physically meaningful invariants of that process. They might, for example, stem from conservation of energy or of momentum. But it can be terribly hard to distinguish true invariants from trivial patterns. A recent study showed that the similarity across species of various dimensionless parameters from their life histories, such as the ratio of average life span to age at maturity, has no fundamental significance [5].

It’s not always easy to separate the trivial or coincidental from the profound. Isaac Newton showed that Kepler’s laws identifying mathematical regularities in the parameters of planetary orbits have a deep origin in the inverse-square law of gravity. But the notorious Titius-Bode ‘law’ that alleges a mathematical relationship between the semi-major axes and the ranking of planets in the solar system remains contentious and is dismissed by many astronomers as mere numerology.

As Schmidt and Lipson point out, some of the invariants embedded in natural laws aren’t at all intuitive because they don’t actually relate to observable quantities. Newtonian mechanics deals with quantities such as mass, velocity and acceleration, while its more fundamental formulation by Joseph-Louis Lagrange invokes the principle of minimal action – yet ‘action’ is an abstract mathematical quantity, an integral that can be calculated but not really ‘measured’ directly.

And many of the seemingly fundamental constructs of ‘natural law’ – the concept of force, say, or the Schrödinger equation in quantum theory – turn out to be unphysical conveniences or arbitrary (if well motivated) guesses that merely work well. The question of whether one ascribes any physical reality to such things, or just uses them as theoretical conveniences, is often still unresolved.

Schmidt and Lipson present a clever way to narrow down the list of candidate ‘laws’ describing a data set by using additional criteria, such as whether partial derivatives of the equations also fit those of the data. Their approach is Darwinian: the best candidates are selected, on such grounds, from a pool of trial functions, and refined by iteration with mutation until they reach some specified level of predictive ability. Then parsimony pulls out the preferred solution. This process often generates a sharp drop in predictive ability as parsimony crosses some threshold, suggesting that the true physics of the problem disappears at that point.
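The flavour of the scheme can be caricatured in a few lines. The sketch below is a toy stand-in, not Schmidt and Lipson’s algorithm – it omits their partial-derivative criterion, crossover and mutation, and simply samples random expression trees and keeps the fittest under a score that combines prediction error with a parsimony penalty; the ‘hidden law’ and all the constants are invented for the demo.

```python
import random

random.seed(1)

# 'Experimental data' from a hidden law, y = 3x^2 + 1 (invented for the demo).
DATA = [(x / 10.0, 3 * (x / 10.0) ** 2 + 1) for x in range(-20, 21)]

def make_expr(depth=0):
    """Build a random expression tree over x, small constants, + and *."""
    if depth > 2 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(1, 5))])
    op = random.choice(["+", "*"])
    return f"({make_expr(depth + 1)} {op} {make_expr(depth + 1)})"

def score(expr):
    """Mean squared prediction error plus a parsimony penalty on length."""
    try:
        err = sum((eval(expr, {"x": x}) - y) ** 2 for x, y in DATA) / len(DATA)
    except Exception:          # guard in case the operator set is extended
        return float("inf")
    return err + 0.01 * len(expr)

# 'Selection': keep the fittest of a large random pool of candidate laws.
best = min((make_expr() for _ in range(5000)), key=score)
print(best, score(best))
```

The parsimony term is what stops the winner from being a baroque expression that merely memorizes the data points, echoing the Einstein dictum above.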

The key point is that the method seems to work. When used to deduce mathematical laws describing the data from two experiments in mechanics – an oscillator made from two masses linked by springs, and a pendulum with two hinged arms – it came up with precisely the equations of motion that physicists would construct from first principles using Newton’s laws of motion and Lagrangian mechanics. In other words, the solutions encode not just the observed data but the underlying physics.

Their experience with these systems leads Schmidt and Lipson to suggest ‘seeding’ the selection process by drawing on an ‘alphabet’ of physically motivated building blocks. For example, if the algorithm is sent fishing for equations incorporating kinetic energy, it should seek expressions involving the square of velocities (since kinetic energy is proportional to velocity squared). In this way, the system would start to think increasingly like a physicist, giving results that we can interpret intuitively.

But perhaps the arena most in need of a tool like this is not physics but biology. Another paper in Science by researchers at Cambridge University reports a ‘robot scientist’ named Adam that can frame and experimentally test hypotheses about the genomics of yeast [6] (see here). By identifying connections between genes and enzymes, Adam could channel post-docs away from such donkey-work towards more creative endeavours. But the really deep questions, about which we remain largely ignorant, concern what one might call the physics of genomics: whether there are equivalents of Newtonian and Lagrangian principles, and if so, what they are. Despite the current fads for banking vast swathes of biological data, theories of this sort are not going to simply fall out of the numbers. So we need all the help we can get – even from robots.