Saturday, May 25, 2013

Bacteria resistant to antibiotics – popularly called superbugs – are threatening to make routine, minor surgery the kind of life-threatening intervention it used to be a hundred years ago. Some medical experts are warning that standard operations such as hip replacements and organ transplants could become deadly unless we find ways of warding off these infections.

Bacteria might evolve clever ways of evading chemical agents, but they will always struggle to resist the old-fashioned way of killing them: simply heating them up. It takes only a relatively mild warming to kill the bugs without discomfort or harm to tissues. Now a team of scientists in the United States are developing little electric heaters that could be implanted in a wound and powered wirelessly to fry bacteria while healing takes place – and that would then dissolve harmlessly into body fluids once their job is done.

This is just one potential application of the bio-resorbable electronic circuits made by John Rogers of the University of Illinois at Urbana-Champaign and his coworkers. The idea itself is not new: Rogers and others have previously reported flexible circuits and electronic devices that can be safely laid directly onto skin, and that even biodegrade over time. But their success in making wireless devices might prove crucial to many of the possible applications, especially in medicine, where it’s desirable to implant them and control or power them without having to puncture the skin to insert leads.

The idea is that radio waves can be used both for remote control of the circuits – to turn them on and off, say – and to provide the power to run them, so that there’s no need for implanted batteries. This kind of radio-frequency (rf) wireless technology is becoming ever more widespread – for example, for tagging consumer goods, food packaging, livestock, even dustbins.

To make rf circuits, you need semiconductors and metals. Those don’t sound like the kinds of materials our bodies will dissolve, but Rogers and colleagues use layers of non-toxic substances so thin that they’ll just disintegrate in water or body fluids. For the metal parts, the researchers use gossamer films (about 0.5 to 50 thousandths of a millimetre thick) of magnesium, which is not only harmless but in fact an essential nutrient: our bodies typically contain about 25 grams of it already. For semiconductors, they use silicon membranes 300 nanometres (millionths of a millimetre) thick, which will also break up in water. Many devices also require an insulating material, for which the researchers used magnesium oxide.

One of the simplest but key components of an rf circuit is an antenna, which picks up radio waves. Rogers and colleagues made these simply from long strips of magnesium foil, which they deposited onto thin films of silk. Silk is the ideal base for such devices, because it is non-toxic, biodegradable, strong and relatively cheap. These antennae, typically about four inches long, will completely dissolve in water in about two hours. Although more deeply buried implants would be hampered by the fact that body tissues absorb radio waves quite strongly, they should still receive enough signal for the kinds of very-low-power applications the researchers are considering.

They have also made a variety of standard circuit components: capacitors, resistors, and crucially, diodes and transistors that incorporate the silicon membranes. Transistors are rather complex structures, requiring delicately patterned films of silicon doped with other elements and sandwiched with metal electrodes and insulating layers. Yet using this suite of materials, Rogers’ team could make versions that would readily dissolve within hours.

One of the first full circuits that the team has made is a radio-frequency “power scavenger”, which can convert up to 15 percent of the radio waves it absorbs (at a particular resonant frequency) into electrical power. Their prototype, measuring about four inches by less than two, can pick up enough power to run a small commercial light-emitting diode. The team can control the rate at which these devices dissolve by fine-tuning the precise molecular structure of the silk sheets on which they are laid down or between which they are sandwiched. This way, they can make devices that last for a week or two if necessary – about the length of time needed to ward off bacteria from a healing wound.

As well as deterring bacteria, Rogers says that implantable, bio-resorbable rf electronics could be used for stimulating nerves for pain relief, and for electrically stimulating the regrowth of bone, a process long proven to work. Conceivably they could also be used to control the release of drugs from implanted reservoirs, in situations where this release must follow a tightly prescribed sequence.

When Christopher Nolan cast David Bowie as the Serbian inventor Nikola Tesla in his 2006 movie The Prestige, he chose wisely. Despite Bowie’s dodgy moustache (and dodgier accent), few other actors could have supplied the otherworldliness that the role demands: a combination of RadioShack nerd and space alien. Strikingly handsome, Tesla was a celebrity and socialite in the final decades of the 19th century before disappearing into bankruptcy and then legend.

At one time Thomas Edison’s employee, Tesla (1856-1943) became his rival, vying for the crown of Electrical Wizard. Legends still cling to him—that he discovered how to tap cosmic energy, that his plans for death rays were hidden by the U.S. government. As W. Bernard Carlson, a historian of technology at the University of Virginia, points out in his biography Tesla, its subject has been credited with every innovation of the electronic age and dismissed as a madman. The classic photograph of Tesla seated calmly reading a book in his Colorado laboratory while an electrical storm rages across the gigantic coils around him captures brilliantly the sense of a magus commanding wild forces. In fact, it is his image that he commands here: The photo was a faked double exposure.

But if it’s the legendary Tesla you seek, you’ll be disappointed — Mr. Carlson does a good job of debunking the New Agers and conspiracy theorists. Instead you will have to negotiate rather opaque discourses on the merits of alternating-current (AC) versus direct-current (DC) power generation. This is both a strength and a weakness of the book.

Tesla was born in the province of Lika in what is now Croatia but was then a Serbian region on the outskirts of the Austro-Hungarian Empire. The son of an educated but austere priest, he was trying to make flying machines while still at school and was seemingly never destined, as his father hoped, for the priesthood himself. During a rather haphazard training as an engineer in Graz and Prague, Tesla developed a fascination for motors, dynamos and electromagnetism in general. While working in Budapest he was hired by Edison’s branch in Paris and then brought to the Edison Machine Works in New York. “This is a damn good man,” Edison is said to have remarked when they met in 1884, but Tesla quit soon after when he felt that his contribution to the company’s arc-lighting system went unappreciated.

His breakthrough invention was a motor that ran off AC: simpler than DC motors, and free of their sparking contacts. He sold the patent to George Westinghouse, who collaborated with him to develop AC power – easier to transmit over long distances – in America. The construction of Westinghouse’s AC hydroelectric power plant at Niagara Falls in 1895 was arguably Tesla’s greatest and most enduring success. As Mr. Carlson explains: “Tesla’s AC inventions were essential to making electricity a service that could be mass-produced and mass-distributed; his inventions set the stage for the ways in which we produce and consume electricity today.”

Like Edison, Tesla was a showman who actively cultivated an impression of wizardry in an age when electromagnetic phenomena still smelled of magic—an association that probably contributed more to Tesla’s status than Mr. Carlson credits. To demonstrate the safety of AC power, he staged public lectures in which he would pass 250,000 volts through his body, creating a glow of ionized air at his fingertips and the ends of his hairs. Despite being prone to depression and odd behavior, he was also, in his heyday, a socialite who could win over tycoons such as Westinghouse, John Jacob Astor and J.P. Morgan.

So Tesla was ingenious—but was he a genius? Like other great inventors of his age, such as Edison, Bell and Ford, he brimmed with imaginative, sometimes bizarre, plans, supported by obsessive determination and only a cursory understanding of the basic science. He hatched grand schemes that were visionary and unrealistic in equal measure. His desire to transfer electrical power wirelessly has become relevant again in the age of the laptop and cellphone. But dreamers are prone to myopia, and he missed opportunities. He believed that “wireless telegraphy” would have to transmit electromagnetic signals through the earth rather than the atmosphere, and so he lost out to Marconi, breaking both his spirit and his financial backing. He missed a chance to discover X-rays, overlooking those produced by his gas-discharge lighting tubes, and he found no takers for his remote-controlled vehicles.

Yet what dreams he had! As he promises to communicate with Martians by radio or to use electricity to convert atmospheric nitrogen into fertilizer (the chemical method Fritz Haber devised a decade later supports the world’s current population), you have to admire his chutzpah. At his experimental station in Colorado Springs, he wirelessly lighted up distant electric bulbs planted in the desert soil and generated ball lightning in a laboratory that surely inspired the film director James Whale’s electrified Frankenstein. He persuaded Morgan to finance another lab at Wardenclyffe on Long Island, complete with a 187-foot radio transmission tower—which never worked, turning their relationship sour and precipitating Tesla’s slow decline into obscurity.

Mr. Carlson’s account is thorough, but flawed by its lack of psychological insight. Tesla was evidently a strange and troubled man. “I calculated the cubical contents of soup plates, coffee cups and pieces of food,” he wrote in a memoir—“otherwise the meal was unenjoyable.” When he reports that dropping little squares of paper into a dish of liquid made him “always sense a peculiar and awful taste in my mouth,” Mr. Carlson doesn’t wonder why he might be engaged in that activity in the first place. He seems to regard these peculiar habits as inconveniences, noting with unintentional deadpan that “they undoubtedly interfered with his relationships with other people.” Mr. Carlson’s view of inner worlds sounds at times almost Victorian: As a boy, he says, Tesla overcame recurrent nightmares “by developing his willpower.”

Most perplexingly, Mr. Carlson does not examine Tesla’s belief that “I was but an automaton devoid of free will in thought and action” except to cite it as motivating his interest in building radio-controlled robotic devices. By the time Mr. Carlson steels himself to talk about sex, we have already deduced that Tesla was sexually repressed and probably gay. Mr. Carlson touches on Tesla’s possible homosexuality but is content to attribute his apparent celibacy to the asceticism of his Eastern Orthodox background and the solitary demands of inventive genius. The author’s relief in getting back to a discussion of inductance coils is almost palpable.

This is a shame, because Mr. Carlson has some interesting things to say about technological innovation, such as the need to blend imagination and analytical rigor—to “think hard but dream boldly”—and the delicate relationship between inventors and investors. But the omissions are important for that very reason. While one shouldn’t pathologize Tesla too much, his idiosyncrasies raise questions about the extent to which one can generalize from his approach to invention—or even how much of it, as reported by Tesla himself, can be taken at face value. Without a deeper insight into the man, it becomes hard to draw any lessons about the process of invention and innovation in history. For that, the mathematician Norbert Wiener’s little treatise Invention, written in 1954, is still hard to beat. In the meantime, this book provides a good survey of Tesla’s technics, but the man remains an enigma.

This year isn’t only the diamond anniversary of Watson & Crick’s DNA structure but also the centenary of the technique that enabled them to get it: X-ray crystallography, devised by William and Lawrence Bragg. I have been asked by the University of Leeds, where Bragg Snr did that work, to write a piece for their magazine celebrating that achievement. The final piece will look somewhat different to this, since it will include material on contemporary research at Leeds using XRD; but this basically historical view is how it looked at the outset. It owes much to John Jenkin’s biography William and Lawrence Bragg, Father and Son (OUP, 2008).

________________________________________________________________

The invention of X-ray crystallography by William Henry Bragg, working at the University of Leeds, and his son W. Lawrence Bragg in Cambridge, was one of the culminating episodes in perhaps the most extraordinary three decades that the physical sciences have ever experienced. Between 1890 and the end of the First World War, scientists discovered X-rays and radioactivity, devised the theories of relativity and quantum theory, and decoded the inner secrets of atoms. During this period Marconi developed radio telecommunication, the cathode-ray tube that led to the discovery of electrons and X-rays was morphing into the first television, and the Wright brothers made their first flights. These were, in other words, the formative decades of the modern age.

It is not often appreciated how important to that incipient modernity the Braggs’ work was. By showing how X-rays reflected from substances could reveal the arrangements of their atoms, William and Lawrence paved the way to countless scientific and technological breakthroughs. By understanding the crystal structures – the orderly stacking of atoms – of metal alloys, it became possible to develop new and better materials. X-ray crystallography became the chemist’s most reliable tool for deducing the shapes of molecules, and when applied to the molecules of life it ushered in the age of molecular biology and genetics – most notably as the technique that revealed the chemical constitution of DNA to James Watson and Francis Crick in 1953. The Braggs’ early work on liquid crystals illuminated this puzzling state of matter, seemingly poised between the living and the inorganic, and laid the foundations for their use in today’s display technologies. Crystallography now tells us about the nature of the rocks in the deep Earth and the iron at its core. From drugs to earthquakes, microchips to meteorites, the understanding and technological capabilities that X-ray crystallography has provided are surely unmatched by any other scientific technique.

For their achievements, William and Lawrence Bragg were awarded the Nobel prize for physics in 1915, the year that William left Leeds for University College London. Astonishingly, that work had not even begun when the Braggs arrived in England from Australia six years earlier. Such immediate recognition is rare for Nobel prizes, and testifies both to the evident importance of their work and the clarity with which they explained and demonstrated its potential in many areas of science.

William Henry Bragg was born in Cumbria, but after studying at Cambridge University he moved to Australia in 1886 to take up a post in physics at the University of Adelaide. This was an adventurous decision: although Adelaide possessed many modern amenities and some splendid civic buildings, the British colonies there still had something of a raw, outback character. But William thrived, not least by meeting his future wife Gwendoline Todd, daughter of the astronomer and eminent government figure Charles Todd. They had two sons, Robert and William Lawrence (known to the family as Willie), and a daughter Gwendolen (Gwendy).

In the early 1900s William established a solid international reputation for his work on radioactivity and the nature of the new invisible ‘emanations’ from matter: X-rays, gamma rays and alpha particles, the latter two being types of radioactivity. This work brought him into contact with the New Zealander Ernest Rutherford, who had left the Antipodes in 1895 to work at Cambridge. Rutherford was fast emerging as the leading expert on atomic physics and radioactivity, and his supportive correspondence with Bragg led to a close friendship and collaboration. When William Stroud resigned the Cavendish Chair of Physics at Leeds in 1907, Rutherford’s colleague the English chemist Frederick Soddy recommended Bragg as his successor, telling the Dean of Science, Arthur Smithells, that when he visited Adelaide “I was much struck with the spirit he has created around him.” Smithells took the advice, and Bragg was offered the post, which he accepted.

In January 1909 Bragg boarded the coal-fired Waratah in Adelaide for the journey to England, arriving in Plymouth in March. By this time, Rutherford was working at Manchester – doubtless one of the reasons the Leeds offer seemed so attractive – and Bragg wasted no time in visiting him. But he was less delighted with what he and his family found in Leeds: the industrial city was smoky and grimy, and poverty was all around, poor labouring families being housed in cramped, cold terraces. The Braggs were, however, able to rent accommodation in fashionable Headingley, near the university, before eventually settling in a country cottage called Deerstones, near Bolton Abbey 20 miles north of the city. Lawrence now went to study, as his father had, at Trinity College in Cambridge.

Rutherford was evidently as pleased at their proximity as Bragg was, and his booming voice became a regular counterpoint to Bragg’s reserve at Deerstones. “Rutherford was continually turning up at our home with an enthusiastic ‘D’you know, Bragg’”, recalled Gwendy. While Rutherford was clarifying the nature of alpha particles and using them to probe inside the atom, Bragg was interested in the character of X-rays, discovered in 1895 by the German Wilhelm Röntgen, which Bragg believed to consist of particles too: a kind of electron with its negative electrical charge neutralized by “a quantity of positive electricity”. Thus William entered a vigorous debate being conducted in Germany over whether X-rays were ‘corpuscles’ or ‘pulses’ – particles or waves. They are in fact electromagnetic waves, like light, which were then still widely believed to travel through an invisible medium called the ether. But as Albert Einstein argued in 1905, even light can be considered to be like a stream of particles too, called photons: this ‘wave-particle duality’ was one of the first fruits of the nascent quantum theory.

It is no surprise, then, that William was very interested in the news mentioned in a letter from the Norwegian physicist Lars Vegard which he received in June 1912. “Recently”, Vegard wrote, “certain new, curious properties of X-rays have been discovered by Dr Laue in Munich.” Max von Laue was one of the most brilliant students of the physicists Max Planck (who first proposed the quantum hypothesis in 1900) and Arnold Sommerfeld in Munich. Earlier that year he and his colleagues found that when a beam of X-rays was fired at a crystal, the reflected rays imprinted a geometric pattern of bright spots on photographic emulsion placed around the sample.

Vegard explained that Laue “thinks that the effect is due to diffraction of the Röntgen rays by the regular structure of the crystal, which should form a kind of grating, with a grating constant of the order of 10⁻⁸ cm, corresponding to the supposed wavelength of Röntgen rays… he is, however, at present unable to explain the phenomenon in its details.”

What did that mean? Laue was referring to the way waves are reflected from an array of evenly spaced objects (a ‘grating’). At some angles, the peaks and troughs of the reflected waves coincide and reinforce each other, producing an intense beam – an effect called constructive interference. At other angles the waves are perfectly out of step, so that the peaks and troughs cancel one another out and the beam vanishes – destructive interference. The resulting patterns of weak or strong reflected waves depend on the wavelength of the waves, which is why this kind of interference of light bouncing off the pitted ‘grating’ of a CD produces different colours (each with its specific wavelength) at different viewing angles. This interference effect is called diffraction.
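In modern notation (more compact than anything Laue or Vegard spelled out in their letters), the condition for constructive interference from a grating of spacing d, viewed at an angle θ to the incident beam, is that the path difference between waves from neighbouring elements is a whole number of wavelengths:

```latex
d \sin\theta = m\lambda , \qquad m = 0, 1, 2, \ldots
```

The bright directions therefore shift as the wavelength λ changes – which is why the CD shows different colours at different viewing angles.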

The effect only happens when the wavelength of the waves is roughly equal to the size and separation of the objects that make up the grating. That is why Laue talked about the ‘grating constant’, meaning the spacing of elements in the grating. He supposed that this grating is produced by the regular arrangement of atoms in the crystal, which are packed together like oranges on the greengrocer’s stall. At that time, very little was known about the nature of atoms and how they were ordered in crystals, but scientists did at least have a fairly good sense of their size – which, as Laue attested, was around ten billionths of a centimetre, or 10⁻⁸ cm. This was about the same as the wavelength of X-rays, which most researchers (if not Bragg) regarded as ether waves.

On the face of it, then, Laue seemed to have a pretty good explanation for the pattern of bright X-ray spots recorded in the emulsion: it was the result of diffraction of X-rays. But Vegard was right to say that the details were still not clear. By assuming that the diffraction was caused by a simple grid of atoms, Laue could explain why he saw X-ray spots, but he predicted too many of them, and couldn’t account for their elliptical shape. What was it, then, that acted as the diffraction grating: the atomic array, or something else? Many chemists believed that atoms in a crystalline substance such as copper sulphate (used for the first experiments in Munich) were clustered into groups called molecules – in which case, how could they form regular arrays anyway?

Lawrence Bragg, home from Cambridge for the summer, recalled that he and his father discussed Laue’s report intensely “when we were on holiday at Cloughton on the Yorkshire coast”. At that stage, still faithful to William’s ‘corpuscular’ view of X-rays, the Braggs at first sought to interpret Laue’s findings on the basis that X-ray ‘particles’ were being channelled along ‘avenues’ between the rows of atoms, an idea that they described in a paper published that October in the prestigious science journal Nature.

But when that same month Lawrence returned to university, he hit on a better explanation while strolling through the meadows of the Cambridge Backs. “The idea suddenly leapt into my mind”, he later wrote, “that Laue’s spots were due to the reflection of X-ray pulses by sheets of atoms in the crystal.” Sheets? What Lawrence understood was that, if you stack atoms regularly, you end up with series of layers, which become evident only when you look along a certain direction parallel to each set of layers. William explained this in 1915 in the Leeds student magazine The Gryphon with reference to the rows of vines in a vineyard – not the obvious reference for someone surrounded either by the Yorkshire dales or the Cambridgeshire fens, but vines had been cultivated in the Adelaide Hills since the early nineteenth century. As you walk past, every so often the vines line up and you can see them stretching away in parallel formation: a grating.

On this assumption that it was the regularly spaced sheets of atoms that caused X-ray diffraction, Lawrence worked out how the reflection angles at which spots appear depend on the distance between sheets and the wavelength of the X-rays. He included this formula in a paper that he presented to the Cambridge Philosophical Society in November, which was reported in Nature in December. Here he also showed how, by assuming a particular kind of stacking of atoms in crystals of zinc sulphide, he could account perfectly for the X-ray pattern produced from this substance by the Munich group.
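The relation Lawrence derived is what we now call Bragg’s law: for X-rays of wavelength λ glancing off sheets of atoms a distance d apart at an angle θ, strong reflections occur only when

```latex
n\lambda = 2d\sin\theta , \qquad n = 1, 2, 3, \ldots
```

Measure the angles θ at which reflections appear and, knowing λ, you can read off the spacings d of the atomic sheets – the basis of all crystal-structure determination that followed.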

Lawrence carried out this work before he had even graduated from Cambridge, which he did in mid-1912. He was already starting to conduct research in the Cavendish Laboratory at Cambridge, and here he was able to test and verify his diffraction law with experiments on X-ray reflection from the mineral mica. His ideas about the atomic arrangements in zinc sulphide had been influenced by the picture of crystal structures then being developed by William Pope, Professor of Chemistry at Cambridge. At Pope’s suggestion, Lawrence began to carry out diffraction experiments on alkali halide salts, such as common salt, sodium chloride: a particularly fortunate choice, because these substances have very simple crystal structures in which the atoms are arranged at the corners of stacks of cubes. Those experiments, Lawrence wrote to his father at Leeds, “turned out toppingly”.
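As a rough numerical illustration (the figures here are my own, not taken from the Braggs’ papers), Bragg’s relation n λ = 2d sin θ can be inverted to find the glancing angle at which a given set of atomic sheets reflects:

```python
import math

def bragg_angle(d_nm, wavelength_nm, order=1):
    """Glancing angle theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_nm / (2 * d_nm)
    if s > 1:
        raise ValueError("no reflection: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Illustrative values only: atomic sheets roughly 0.28 nm apart (about the
# scale of the cubic stacking in rock salt), probed with X-rays of
# wavelength about 0.15 nm.
theta = bragg_angle(0.28, 0.15)
print(f"first-order reflection at about {theta:.1f} degrees")
```

Higher-order reflections (n = 2, 3, …) appear at steeper angles, which is why a single crystal face yields a whole series of measurable spots.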

William was immensely proud of his son’s achievements, which he mentioned to Rutherford in December. But he was no bystander in all this. The Braggs were constantly discussing the phenomenon of X-ray diffraction, and William was exploring it at Leeds too. Here he had rather better technical support than Lawrence did at the Cavendish, where the ‘sealing wax and string’ approach to experimentation pursued by J. J. Thomson, the discoverer of the electron in 1897, was perpetuated by Rutherford when he became director in 1919. William, in contrast, enjoyed the services of an excellent workshop led by the head mechanic Jenkinson, who built for him an instrument that could be used both to look at how X-rays were absorbed by substances (the technique of X-ray spectrometry, which William studied) and reflected by them (X-ray diffraction).

Lawrence was able to use this instrument for some of his Cambridge studies, and in April 1913 he and his father described their diffraction technique in a joint paper read to the Royal Society in London. Called simply “The reflection of X-rays by crystals”, it was essentially an announcement of the birth of this new discipline. The crucial realisation was that, if the X-ray diffraction pattern recorded in photographic film could be accurately predicted from the crystal structure, then one could also work backwards, deducing from the experimentally measured pattern what the structure of the crystal is. This point was emphasized in a paper by Lawrence read to the Royal Society in June, and was demonstrated most dramatically in a joint paper later that year in which the Braggs reported the structure of diamond.

For that work they needed a particularly fine specimen of the gemstone, which was not easy to get. It came from the mineralogical collection at Cambridge, despite the fact that the Professor of Mineralogy William Lewis had strictly forbidden any loans of the precious specimen. The demonstrator, Arthur Hutchinson, had risked his neck to lend the gem to Lawrence behind Lewis’s back. “I shall never forget Hutchinson’s kindness in organising a black market in minerals to help a callow young student”, Lawrence wrote. “I got all my first specimens and all my first advice from him and I am afraid that Professor Lewis never discovered the source of my supply.”

Although it was Lawrence who had had the crucial insight that diffraction was caused by the atomic planes, and who was doing much of the key experimental work, it was William who was the eminent and senior scientist. And so it was William who was asked to deliver talks on this new science of X-ray crystallography. “It was my father who announced the new results at the British Association, the Solvay Conference, lectures up and down the country, and in America, while I remained at home”, Lawrence recalled ruefully. “It was not altogether an easy time… a young researcher is as jealous of his first scientific discovery as a kitten is with its first mouse… My father more than gave me full credit for my part, but I had some heart-aches.”

The Solvay meeting – a roughly triennial gathering of Europe’s top physical scientists in Brussels – was a particularly prestigious platform, and at the 1913 meeting on “The Structure of Matter” William Bragg discussed his work with Einstein and Marie Curie, along with several scientists, such as Léon Brillouin and Frederick Lindemann, who went on to make important contributions to the understanding of diffraction and crystal structure.

Under these circumstances, it is hardly surprising that many people assumed that X-ray crystallography was William’s discovery. It was from this time that Lawrence began to sign his papers as “W. Lawrence”, forgoing the childhood name of Willie, the better to distinguish himself from his father. He was not being merely filial in his comments about his father’s generosity, however. When the Braggs put the seal on their achievements with their joint 1915 book X-Rays and Crystal Structure, William wrote the preface alone while Lawrence was away at war, and he said: “I am anxious to make one point clear, viz., that my son is responsible for the ‘reflection’ idea which has made it possible to advance, as well as for much the greater portion of the work of unravelling crystal structure to which the advance has led.”

All the same, the difference in status could not but cause tensions. “Father and son never managed to discuss the situation”, Gwendy later admitted, “WHB being very reserved and WL inclined to bottle up his feelings.”

Yet there could be no mistaking the joint effort when the two Braggs shared the Nobel prize in 1915. By that point, William had left Leeds, and the award must have only deepened the sense of loss in the physics department. In late 1914 William received an offer of a professorship at University College London. He declined it in March of 1915, but later accepted an improved offer that Leeds simply could not match. He completed the academic year, and started in London in September. “Professor Bragg has not left us for the honour of going to London”, Smithells was at pains to point out in The Gryphon, “…[he] has left us because he thinks that in London he can do better work.” He added that Bragg “has a perfect right to think so”, which can leave little doubt about what Smithells thought of the matter.

But it was not necessarily the tragedy for Leeds that it might have seemed at the time. William Bragg’s seminal work there inevitably left a legacy, and this was re-ignited when his student William Astbury, who worked with him at UCL and later at the Royal Institution in London, was appointed Lecturer in Textile Physics at Leeds in 1928. He remained in the department until his death in 1961, and during that time Astbury’s studies of the crystal structures of proteins, beginning with the keratin protein of wool, were central to the genesis of structural molecular biology. Astbury’s work on keratin guided the chemist Linus Pauling towards the discovery of the basic structural elements of all proteins, while his work on DNA was an important precursor to the breakthrough of Watson and Crick 60 years ago. In such ways, the influence of William Bragg’s time at Leeds continues to resonate at the forefront of science today.

Thursday, May 23, 2013

BBC Radio 4’s The Science of Music, presented by Robert Winston, has impressed me so far with the level at which it has been able to discuss the issues. This is why radio so often wins out over TV: no silly distractions or actors swanning around in period costume, just serious engagement with the topic. The fact that I’m in both of the first two episodes has, of course, nothing to do with this judgement. It’s much more a function of Robert being frighteningly well informed about music, as I discovered in my long and rather exhausting (but very enjoyable) chat with him.

Friday, May 17, 2013

I’ve written the catalogue notes for an exhibition that has just opened at the Pangolin London gallery at Kings Place, King’s Cross, London. It is well worth a visit (and if any Nature folks are reading this, you have no excuse – it’s next door).

________________________________________________________________

Artists have drawn on themes and concepts in science at least since Leonardo da Vinci blended both art and science in a unified representation of the natural world. In the twentieth century an interest in form, space, growth and motion was shared by artists such as Picasso, Duchamp and Richard Hamilton and scientists including Einstein, D’Arcy Thompson and Roger Penrose. Today’s collaborations and interactions of artists and scientists are often motivated by the emergence of neuroscience and the concomitant exploration of how we perceive the world.

It’s rare to find at this fecund interface any engagement with the chemical and molecular sciences, which are commonly regarded as simultaneously too abstruse (all those ball-and-stick molecules) and too applied (all those stinking flasks and vats) to have much to say about aesthetic experience. Chemistry might seem to offer no grand ideas about the universe and our place within it, but just atoms, crystals, solvents: inert materiality.

Yet that’s not how insiders see it. Chemistry is unique among the fundamental sciences in its reliance on creativity – literally, it is about making stuff. That demands of the creator both something worth saying and the skill and imagination needed to convey it. “Chemistry, like much of art, and arguably unlike the other sciences, is oriented towards transformation, process and the dichotomy between the synthetic and the natural”, says American chemist Tami Spector. “It is also, like art, highly tactile in its experimental and laboratory aspects, yet subtly conceptual in its representations.”

Briony Marshall is such an insider: an exceedingly rare example of (bio)chemist turned artist. Marshall studied biochemistry at Oxford University before following an intuition that took her to the Art Academy in Borough, south London. In 2011 she became Sculptor in Residence at Kings Place, which hosts the Pangolin London gallery in association with the sculpture foundry Pangolin Editions, at which her works were cast during the residency.

The visual language that Marshall deploys in much of her sculpture is one deeply embedded in the tradition of chemical science, and of biochemistry in particular, in which wood, metal, plaster and resin have long been deployed to conjure up representations of the molecular fabrics of the world (including our own). These renditions are necessarily schematic, even symbolic. They can also, like the double-helical framework model of DNA at which James Watson and Francis Crick gaze in that iconic 1953 photograph, be rather beautiful.

2013 is not only the 60th anniversary of Watson and Crick’s discovery of the structure of our genetic material but also the centenary of the invention of the means by which it was disclosed: the technique of X-ray crystallography, devised by William Bragg at the University of Leeds and his son Lawrence at Cambridge. By showing that X-rays diffracted from the layers of atoms in crystals can reveal the arrangement of those atoms, the Braggs opened a window onto the molecular world that enabled scientists to figure out the shapes not only of simple crystals such as diamond (one of the first crystal structures deduced by the Braggs) but of complex biological molecules such as protein enzymes. In this way they revealed how, at the smallest scales, we are put together.

So there it is: the patterns of atoms and molecules do after all connect to the human condition. This is perhaps the guiding metaphor of Marshall’s work. Chemists frequently anthropomorphize their molecules, talking of how they have attractions and aversions, how they ‘prefer’ particular shapes, conformations and environments. In Marshall’s reinvention of the Watson–Crick model, DNA: Helix of Life, this tendency is made explicit. Each atom is a human form, arms and legs reaching out to join with others. The assembly took meticulous planning, not least because atoms can scarcely be expected to respect human anatomy. The components are atomized in Marshall’s Individual Atoms (Kit to Make DNA), where they become dancers and stylized primitive figurines, some poised as if for flight or crucifixion. Life’s fundamental form, perilously fragile, emerges only when these hands link, just as each atomic figure must find its proper place in Marshall’s 2008 work A Dream of Society as Flawless as Diamond, which reproduces the diamond network of carbon atoms first discovered by the Braggs.

Marshall’s DNA double helix represents the culmination of a period of exploration of molecular form, which included also depictions of the more abstract shapes of enzymes. Her latest work displays a shift to a different scale in the architecture of life: the evolving shape of an embryo. In Carnegie Stages, she shows five successive stages of embryonic growth, beginning with the appearance of a central fold called the primitive streak that defines the axis of bilateral symmetry along which the embryo’s limbs and organs will later be arranged. This, then, is in a sense the ‘human form’ at its most basic: now more than just a ball of undifferentiated cells, with the characteristic symmetry of the human body established. The formation of the primitive streak, about 14 days after fertilization, has in fact been made an indicator of personhood: marking the point at which an embryo can no longer develop into twins, its appearance denotes the time after which embryos made by IVF may no longer be legally sustained in vitro.

Yet these embryonic forms show nothing recognizably human – for after all, the same shapes are seen also in the development not only of other mammals but of birds and reptiles too. And Marshall has idealized them: exaggerated the symmetry and made the surfaces smooth and edges sharp, so that the nominally biological forms elide into abstraction. If the result evokes resonances with the abstract sculptures of Henry Moore and Barbara Hepworth, that is wholly appropriate, for Moore and Hepworth were themselves inspired by the emergent, lithely contoured forms of organic nature. They were both associated with the style characterized in 1935 by the critic Geoffrey Grigson as Biomorphism. In part this drew on the vocabulary of macroscopic form evident in natural objects such as bones and weathered stones, but it was also enriched by the visions of amoebae and single-celled organisms that biologists were seeing under the microscope. “When I look at [Moore’s] carvings”, Grigson wrote, “I sometimes have to reflect that so much of our visual experience of the anatomical detail and microscopical forms of life comes to us, not direct, but through the biologist.” The virtuosic exploration of such forms in Scottish zoologist D’Arcy Thompson’s 1917 book On Growth And Form was an explicit influence on Moore (who read it as a student in Leeds) and other representatives of the avant-garde, and it was the inspiration for one of the first major collaborations of artists and scientists in the postwar years, the 1951 exhibition “Growth and Form” at the Institute of Contemporary Arts in London. Moore himself cultivated friendships with scientists, most notably the biologist Julian Huxley, brother of Aldous and grandson of Charles Darwin’s staunch supporter Thomas Henry Huxley.

Marshall made Embryo Spiral while unaware of D’Arcy Thompson’s work, which itself seems to testify to Thompson’s underlying implication in On Growth and Form that there is a fundamental language of natural form which we already intuit. For the so-called logarithmic spiral that she constructs in Embryo Spiral from copper and brass wire, with a tiny but now recognizably human embryo at its focus, has become Thompson’s emblem, shown on the cover of modern editions of his book and engraved on the slate plaque that marks his former residence in St Andrews, Scotland. It is the shape of a snail’s shell, of a ram’s curving horn, of an insect’s flight path towards a light. It encodes mathematical relationships that have been accorded (although not by Thompson) an almost mystical significance, and Thompson showed how it was a natural consequence of simple laws of growth. In Embryo Spiral it suggests the unfolding of life as a kind of hidden regularity beneath the complexities of embryogenesis and biochemistry: a reassurance of rule and order underpinning the dizzying details of biology.
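Those ‘simple laws of growth’ can be stated in a line: if a form grows at a rate proportional to its current size while turning at a constant rate, it traces the logarithmic spiral r(θ) = a·exp(bθ), whose radius multiplies by the same fixed factor with every full turn – which is why the shell keeps its shape as it grows. A minimal numerical check of that self-similarity (the parameters here are arbitrary, chosen purely for illustration):

```python
import math

# Logarithmic spiral r(theta) = a * exp(b * theta). Its defining
# property, the one Thompson derived from uniform growth, is
# self-similarity: every full turn multiplies the radius by the
# same factor exp(2*pi*b), regardless of where on the spiral you start.
def r(theta, a=1.0, b=0.1):
    return a * math.exp(b * theta)

turn = 2 * math.pi
ratios = [r(theta + turn) / r(theta) for theta in (0.0, 3.0, 10.0)]
# all three ratios equal exp(2*pi*0.1), whatever the starting angle
```

The snail’s shell and the ram’s horn are, in this sense, the same object at different magnifications.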

Thursday, May 16, 2013

I recently protested at criticisms, published in the New Yorker by Gary Marcus and Ernie Davis, of a paper in Physical Review Letters that claimed to extract a kind of ‘intelligence’ from a simple rule governing the dynamics of particles. I’d written an unpublished account of this work, which I found interesting.

Well, I may have spoken too hastily. The paper, by Alex Wissner-Gross of Harvard and Cameron Freer of the University of Hawaii, does seem to me to be interesting and quite soberly presented. But it seems that the main target of Gary and Ernie’s criticisms was the extra-curricular claims that Alex was making for the work, especially in a video presentation for his new start-up Entropica. I’ve now taken a look at this, and I do think it seems rather over the top.

One criticism Gary and Ernie make is that the physics of the paper is ‘made up’. The idea is that, if one imposes a constraint on the particle’s dynamics that it maximizes the rate of entropy production over the entire course of its future history – which means giving it an ability to look ahead – then one finds it doing all sorts of interesting things, such as cooperating with other particles or using them like ‘tools’. The objection is that real particles don’t obviously behave this way. But I’d maintain that there is a long and healthy tradition in physics of applying this sort of ‘what if’ thinking: what if the system were governed by this rule rather than that one? That’s interesting for exploring the range of possibilities that a system has access to. It’s a particularly common habit in cosmology and the outer reaches of fundamental physics such as string theory, but happens throughout physics – among other things, it’s a way of exploring what is essential and what is not for the phenomena you’re interested in. I can see that this might look a little odd to other scientists – what’s the point of inventing laws that might not be real? – but it’s a useful way of helping physicists develop intuitions.

Besides, this particular choice of ‘what if’ is well motivated. For one thing, we’re familiar with the idea that the trajectories of photons in quantum electrodynamics are determined by a kind of integration over all possible paths. What’s more, the principle of maximum entropy production – albeit in the moment, not in the future – has been invoked (e.g. by Jaynes) as a criterion for the behaviour of non-equilibrium systems. So this seems an interesting parameter space to explore, and I don’t agree with Ernie that the paper hardly seems to belong in a physics journal.

Ernie says, I think rightly, that “To some extent, I think the difference between your viewpoint and Wissner-Gross' on the one side and Gary's and mine on the other reflects the difference in disciplines. Physicists may be taken with this theory as a parsimonious equation that gives rise to behavior that looks like an elementary approximation of intelligence. As a psychologist and AI researcher, we look for theories at the state of the art in terms of explanatory or computational power, and we care very little about parsimony, which neither useful psychological theories nor useful AI programs generally manifest to any marked degree.”

But then there’s the question of whether this toy system has anything to tell us about ‘real’ intelligence of the sort one sees in the living world. And even if it doesn’t, might the approach be useful in other ways, for example in artificial intelligence?

On both of these issues, the paper itself is modest and largely silent (as it should be) and that is why I felt Gary and Ernie were being harsh. But that Entropica video seems to want to make the analogies direct – comparing the cooperating particles to cooperating animals, say, and claiming that “Entropica is a powerful new kind of artificial intelligence that can reproduce complex human behaviors”. It says that Entropica can “earn money trading stocks, without being told to do so,” and shows it commanding a fleet of ships (though it’s not too clear what they are supposed to be doing). There is an awful lot of “just as… so…” talk here, and once you start showing real animals using tools and cooperating in a task, you’re starting to imply that this is the kind of thing your model explains.

Now, maybe Alex has more concrete results than he is disclosing. But I’m not convinced on the basis of what I’ve seen so far. For example, did Entropica actually “make money”, or just perform OK in some simple simulation of a stock market? Will the ‘tool use’ results really have applications for “agriculture”, and if so, what on earth would those be?

It’s not clear from this video that Alex thinks his ‘causal entropic law’ tells us anything about actual human intelligence or animal behaviour, rather than producing behaviours that just look a bit like it. Gary and Ernie have interpreted some of his comments as making such claims, but I’m not so sure – it seems possible that he is just suggesting he has a simple framework that offers a different way of thinking about the issue, just as some simple biomechanical models can produce something that looks like bipedal walking. But I admit that a comment like “Our hypothesis is that causal entropic forces provide a useful—and remarkably simple—new biophysical model for explaining sophisticated intelligent behavior in human and nonhuman animals” could be interpreted either way, and Alex might need to be careful how he phrases things to avoid a misleading impression.

I don’t know that this is such a big deal. If I were an investor seeing the Entropica video, I’d be unimpressed by the lack of evidence to support grand claims, and indeed the lack of any indication of how Entropica works. It’s a long way from the hype that accompanies some big science projects. Yet I do now understand Gary and Ernie’s scepticism: it does rather look as if Alex is trying to jump much too far ahead too quickly. And perhaps that is part of a broader problem in science, which can no longer be content simply to advertise its own merits but instead has to spawn a start-up right away. In the end, Entropica will of course stand or fall on its ability to address real problems. I’ll be curious to see if it does so. In the meantime, it remains no more and no less than an interesting bit of exploratory physics.

Saturday, May 11, 2013

Tents, as is well known, are designed to fold up into a coil just slightly larger than the bag they came out of. Or so it always seems when the time comes to strike camp. Such frustrations might be eased by a study published in the Proceedings of the Royal Society A [1], which describes a mathematical method for deducing particularly efficient ways of pleating large sheets into compact forms.

It is no surprise that the authors, Nicolas Lee and his doctoral supervisor Sigrid Close of Stanford University in California, work in the department of aeronautics and astronautics, for these are disciplines that have a long history of interest in folding sheets. Packing away parachutes in a form that is compact yet guaranteed to unfold easily and reliably had obvious utility for aeronautics; but there is also a growing demand for sheet-like structures on spacecraft, such as solar panels, telescope mirrors, thermal shields and solar sails. With space at a premium on rocket launches, any way to package these systems more effectively can save money.

To find strategies for collapsing sheets compactly, nature is a good place to look. Leaves and flowers are folded inside buds, and insect wings in the cocoon, in a way that not only minimizes space but enables easy unfurling. The accordion-like pleat is a common solution. In some leaves, such as hornbeam and beech, the pleats are not simply parallel ridges but radiate in V-shaped arrays from a central stem or focus, like the folds of a fan.

Lee and Close first consider the case where a sheet of paper is to be pleated into a strip-like form that can be rolled up – for example, so that a rectangular map might be not just rolled into a long tube but concertinaed into a very short tube. A seemingly simple solution would be to fold it into parallel pleats and then roll up the resulting strip. But that doesn’t work at all well, because the sheet has a finite thickness. Even though this might seem very small, it adds up in a roll until the outer layers try to stretch while the inner ones buckle.

The answer, the researchers say, is to make the pleats themselves slightly curved, in such a way as to precisely balance out the gradually increasing radius of the rolled-up pleats. They calculate what the ideal curvature should be for a rectangular sheet of given dimensions and thickness. One consequence is that the folded-up pleats themselves don’t lie flat: the inner pleats in the roll are slightly shorter than the outer ones. By marking out the curved pleats and then folding by hand, Lee and Close show experimentally that a standard piece of A4 printer paper can be wrapped into a tight spiral just 5 mm thick with a central hole of 1 cm.
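The underlying arithmetic is easy to sketch, though I should stress that these are my own illustrative numbers, not Lee and Close’s: with flat pleats, every turn of the roll forces the layer on the outside of the stack to cover one extra stack-circumference compared with the layer on the inside.

```python
import math

def pleat_roll_mismatch(n_layers, t, n_turns):
    """Extra length the outermost layer of a pleated stack must cover,
    relative to the innermost, when the stack is rolled up n_turns times.
    Flat-pleat approximation: each turn adds a circumference difference
    of 2*pi*(radial separation of the two layers)."""
    return 2 * math.pi * (n_layers - 1) * t * n_turns

# 8 pleated layers of 0.1 mm paper rolled 5 turns: the outer layer must
# travel roughly 22 mm further than the inner one -- so it has to
# stretch, or the inner layers buckle.
extra = pleat_roll_mismatch(n_layers=8, t=0.1, n_turns=5)
```

Curving the pleats so that the inner ones come out slightly shorter is, in effect, a way of pre-paying exactly this difference.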

Some folded structures might need to be opened up while fixed to a central hub, rather like an umbrella. A simple array of corrugated pleats, while efficient for a free-standing sheet like a map, won’t work here, and so a different approach is needed. Leaves attached to a stem are a little like this, and other researchers have shown that one effective design is based on the V-shaped pleats of the hornbeam leaf: these can collapse the sheet into a strip, which might then be rolled or folded up. One drawback of that design, however, is that unfolding involves two distinct processes: unrolling and then opening up the pleats.

An alternative approach, first explored in 2002 by Simon Guest and Davide De Focatiis at the University of Cambridge [2], is to divide up the sheet – a square one, say – into separate sectors (in their case, into four square quarters) and to use the hornbeam fold for each sector separately, resulting in a compact little star-shaped origami form. Lee and Close have now found that, by staggering the corners of the sectors so that they spiral around the central hub, and by slightly curving each V-shaped pleat, such a design can produce a coil that wraps tightly around the hub but can be pulled open in a single gesture. Again, they tried it out – this time with parcel paper, the greater thickness of which poses a more daunting challenge for efficient folding. They found that a square sheet one metre wide could be collapsed into a coil just 8 cm across when wrapped around a central 4-cm-diameter rod. The authors say that, although the curving of the pleats is hardly perceptible, if the pleats are instead perfectly straight then buckling again disrupts the packing.

One particular virtue of the mathematical scheme that the authors have developed to study these folds is that it can be adapted to even more challenging situations, for example where the sheets are made from more than one type of material with different thickness or stiffness, or even for sheets that aren’t perfectly flat, such as bowl-like telescope mirrors made from flexible reflectors – or indeed the curving domes of umbrellas.

My article on DNA, genomics and evolution in Nature has predictably ruffled some feathers, and I’ve been waiting for the dust to settle a bit before broaching it here. The original piece follows below; I have a considerably longer version that I will be putting on my website soon. As for the fallout, I don’t particularly want to get into it, except to say a few things:

1. This was not intended to be a call for some kind of “paradigm shift” in biology; good god, if such a thing is needed then I’m hardly going to be the one to spot it. I’m simply suggesting that there would be some value in making a little more public how complex the picture has become (and how it is not all about DNA), rather than falling back on the obsolete tropes of ‘books of life’.
2. I have always accepted the possibility that epigenetics and ENCODE might be overblown. Or they might not. The debate is ongoing. But they are only a part of the picture anyway (and the full article explains a lot more of it).
3. It would be absurd to suggest that the Central Dogma is “dead” – I don’t even know what that could possibly mean. Of course it expresses something central to how proteins are made. But calling it the ‘central dogma’ was always a bit silly, and even more so now.
4. I accept it is wrong to imply that the Central Dogma is “DNA makes RNA makes protein.” Crick was more careful than that. My error was to rely instead on James Watson and Marshall Nirenberg. Ha, as if they would know!
5. It seems likely that there is going to be a considered, thoughtful response published in Nature soon, which pleases me.

On the 60th anniversary of the double helix, we should admit that we don't fully understand how evolution works at the molecular level, suggests Philip Ball.

This week's diamond jubilee of the discovery of DNA's molecular structure rightly celebrates how Francis Crick, James Watson and their collaborators launched the 'genomic age' by revealing how hereditary information is encoded in the double helix. Yet the conventional narrative — in which their 1953 Nature paper led inexorably to the Human Genome Project and the dawn of personalized medicine — is as misleading as the popular narrative of gene function itself, in which the DNA sequence is translated into proteins and ultimately into an organism's observable characteristics, or phenotype.

Sixty years on, the very definition of 'gene' is hotly debated. We do not know what most of our DNA does, nor how, or to what extent it governs traits. In other words, we do not fully understand how evolution works at the molecular level.

That sounds to me like an extraordinarily exciting state of affairs, comparable perhaps to the disruptive discovery in cosmology in 1998 that the expansion of the Universe is accelerating rather than decelerating, as astronomers had believed since the late 1920s. Yet, while specialists debate what the latest findings mean, the rhetoric of popular discussions of DNA, genomics and evolution remains largely unchanged, and the public continues to be fed assurances that DNA is as solipsistic a blueprint as ever.

The more complex picture now emerging raises difficult questions that this outsider knows he can barely discern. But I can tell that the usual tidy tale of how 'DNA makes RNA makes protein' is sanitized to the point of distortion. Instead of occasional, muted confessions from genomics boosters and popularizers of evolution that the story has turned out to be a little more complex, there should be a bolder admission — indeed a celebration — of the known unknowns.

DNA dispute

A student referring to textbook discussions of genetics and evolution could be forgiven for thinking that the 'central dogma' devised by Crick in the late 1950s — in which information flows in a linear, traceable fashion from DNA sequence to messenger RNA to protein, to manifest finally as phenotype — remains the solid foundation of the genomic revolution. In fact, it is beginning to look more like a casualty of it.

Although it remains beyond serious doubt that Darwinian natural selection drives much, perhaps most, evolutionary change, it is often unclear at which phenotypic level selection operates, and particularly how it plays out at the molecular level.

Take the Encyclopedia of DNA Elements (ENCODE) project, a public research consortium launched by the US National Human Genome Research Institute in Bethesda, Maryland. Starting in 2003, ENCODE researchers set out to map which parts of human chromosomes are transcribed, how transcription is regulated and how the process is affected by the way the DNA is packaged in the cell nucleus. Last year, the group revealed [1] that there is much more to genome function than is encompassed in the roughly 1% of our DNA that contains some 20,000 protein-coding genes — challenging the old idea that much of the genome is junk. At least 80% of the genome is transcribed into RNA.

Some geneticists and evolutionary biologists say that all this extra transcription may simply be noise, irrelevant to function and evolution [2]. But, drawing on the fact that regulatory roles have been pinned to some of the non-coding RNA transcripts discovered in pilot projects, the ENCODE team argues that at least some of this transcription could provide a reservoir of molecules with regulatory functions — in other words, a pool of potentially 'useful' variation. ENCODE researchers even propose, to the consternation of some, that the transcript should be considered the basic unit of inheritance, with 'gene' denoting not a piece of DNA but a higher-order concept pertaining to all the transcripts that contribute to a given phenotypic trait [3].

According to evolutionary biologist Patrick Phillips at the University of Oregon in Eugene, projects such as ENCODE are showing scientists that they don't really understand how genotypes map to phenotypes, or how exactly evolutionary forces shape any given genome.

Complex code

The ENCODE findings join several other discoveries in unsettling old assumptions. For example, epigenetic molecular alterations to DNA, such as the addition of a methyl group, can affect the activity of genes without altering their nucleotide sequences. Many of these regulatory chemical markers are inherited, including some that govern susceptibility to diabetes and cardiovascular disease [4]. Genes can also be regulated by the spatial organization of the chromosomes, in turn affected by epigenetic markers. Although such effects have long been known, their prevalence may be much greater than previously thought [5].

Another source of ambiguity in the genotype–phenotype relationship comes from the way in which many genes operate in complex networks. For example, many differently structured gene networks might result in the same trait or phenotype [6]. Also, new phenotypes that are viable and potentially superior may be more likely to emerge through tweaks to regulatory networks than through more risky alterations to protein-coding sequences [7]. In a sense this is still natural selection pulling out the best from a bunch of random mutations, but not at the level of the DNA sequence itself.

One consequence of this complex genotype–phenotype relationship is that it may impose constraints on natural selection. If the same phenotypes can result from many similarly structured gene networks, it might take a long time for a 'fitter' phenotype to arise [8]. Alternatively, mutations may accumulate, free from selective 'weeding', thanks to the robustness of networks in maintaining a particular phenotype. Such hidden variation might be unmasked by some new environmental stress, enabling fresh adaptations to emerge [9]. These sorts of constraints and opportunities are poorly understood; evolutionary theory does not help biologists to predict what kinds of genetic network they should expect to see in any one context.

Researchers are also still not agreed on whether natural selection is the dominant driver of genetic change at the molecular level. Evolutionary geneticist Michael Lynch of Indiana University Bloomington has shown through modelling that random genetic drift can play a major part in the evolution of genomic features, for example the scattering of non-coding sections, called introns, through protein-coding sequences. He has also shown that rather than enhancing fitness, natural selection can generate a redundant accumulation of molecular 'defences', such as systems that detect folding problems in proteins [10]. At best, this is burdensome. At worst, it can be catastrophic.

In short, the current picture of how and where evolution operates, and how this shapes genomes, is something of a mess. That should not be a criticism, but rather a vote of confidence in the healthy, dynamic state of molecular and evolutionary biology.

A problem shared

Barely a whisper of this vibrant debate reaches the public. Take evolutionary biologist Richard Dawkins' description in Prospect magazine last year of the gene as a replicator with “its own unique status as a unit of Darwinian selection”. It conjures up the decades-old picture of a little, autonomous stretch of DNA intent on getting itself copied, with no hint that selection operates at all levels of the biological hierarchy, including at the supraorganismal level [2], or that the very idea of 'gene' has become problematic.

Why this apparent reluctance to acknowledge the complexity? One roadblock may be sentimentality. Biology is so complicated that it may be deeply painful for some to relinquish the promise of an elegant core mechanism. In cosmology, a single, shattering fact (the Universe's accelerating expansion) cleanly rewrote the narrative. But in molecular evolution, old arguments, for instance about the importance of natural selection and random drift in driving genetic change, are now colliding with questions about non-coding RNA, epigenetics and genomic network theory. It is not yet clear which new story to tell.

Then there is the discomfort of all this uncertainty following the rhetoric surrounding the Human Genome Project, which seemed to promise, among other things, 'the instructions to make a human'. It is one thing to revise our ideas about the cosmos, another to admit that we are not as close to understanding ourselves as we thought.

There may also be anxiety that admitting any uncertainty about the mechanisms of evolution will be exploited by those who seek to undermine it. Certainly, popular accounts of epigenetics and the ENCODE results have been much more coy about the evolutionary implications than the developmental ones. But we are grown-up enough to be told about the doubts, debates and discussions that are leaving the putative 'age of the genome' with more questions than answers. Tidying up the story bowdlerizes the science and creates straw men for its detractors. Simplistic portrayals of evolution encourage equally simplistic demolitions.

When the structure of DNA was first deduced, it seemed to supply the final part of a beautiful puzzle, the solution for which began with Charles Darwin and Gregor Mendel. The simplicity of that picture has proved too alluring. For the jubilee, we should do DNA a favour and lift some of the awesome responsibility for life's complexity from its shoulders.

Thursday, May 09, 2013

Well, here is a curious thing. On this blog I wrote recently about a paper in Physical Review Letters – my piece was originally written for BBC Future, but had to be dropped when the main BBC news team picked up on the same work.

Now psychologist Gary Marcus and computer scientist Ernest Davis have commented on the work in the New Yorker, criticizing it for making overblown and unsupported claims about AI and intelligence. And they cite my piece as evidence of media hype.

I’m flattered, of course, that my humble blog should be awarded such status, as I have always assumed that it is read solely by its 43 faithful followers. I’m even more flattered that Marcus and Davis generously call me ‘well respected’. And I generally enjoy this sort of piece, which punctures the habitual hype of scientific PR and the media’s parroting of it.

But I think they are utterly mistaken in their criticisms. They seem to have misunderstood totally what the paper is saying. They are apparently under the impression that the authors think they have discovered a new law which makes inanimate particles do amazing things in the real world. But “the physics is make-believe”, they complain – “inanimate objects simply do not behave in the way that the theory of causal entropic forces asserts”. So this ‘causal entropic force’ makes a particle stay in the middle of a box – but hey, real gas particles don’t do that, they move randomly! They can’t all go to the centre, because then the gas would condense spontaneously (and incidentally, the second law of thermodynamics would crumble)! So what makes this one particle so special?

Oh lord, where to begin? Wissner-Gross and Freer are not saying that this is something that real particles do, and that no one noticed before. They are saying that if one were to assume this kind of physics, what emerges are weirdly ‘intelligent-looking’ behaviours, which even seem to have something instrumental about them. A genuinely valid complaint would be not “But that’s not how things are!”, but rather, “What’s the point in invoking a law like this, if there’s no good reason to think it is ever manifested?” But that’s to totally miss the interest here, which is that a constraint that seems very dry and abstract (the capacity to integrate over all possible futures, so as to maximize the rate of entropy production over an entire trajectory) produces behaviour that has some very striking characteristics. The point is that one would not guess those outcomes by looking purely at the law that produces them – it is an emergence-like phenomenon. When Marcus and Davis say that “There is no evidence yet that causal entropic processes play a role in the dynamics of individual neurons or muscular motions”, they seem to be under the impression that the authors have claimed otherwise.
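For readers who want to see what “integrating over all possible futures” means formally, the paper’s central construction (as best I can render it here from memory – do check the original PRL article for the exact notation) is a causal path entropy, together with a force proportional to its gradient:

```latex
% Causal path entropy: the Shannon entropy over all microscopic
% trajectories x(t) of duration \tau that start from the present
% macrostate X(0):
S_c(X, \tau) = -k_B \int P\bigl(x(t) \mid x(0)\bigr)\,
               \ln P\bigl(x(t) \mid x(0)\bigr)\, Dx(t)

% The causal entropic force pushes the system up the gradient of
% that path entropy, with a strength set by the parameter T_c:
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau)\,\Big|_{X_0}
```

In words: the force drives the system toward macrostates from which the greatest diversity of future histories remains accessible – which is why the particle in the simulations drifts toward the middle of the box, where it has the most room to manoeuvre in every direction.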

They build another straw man in what they say about AI: “Wissner-Gross’s work promises to single-handedly smite problems that have stymied researchers for decades.” No, it really doesn’t. There’s nothing in the paper about AI, aside from some introductory remarks about how maximum-entropy methods have been used in some approaches.

Where I might have some sympathy with Marcus and Davis is in regard to a fairly loopy piece about the work on the scifi website io9 (“We come from the future”), which says “the theory offers novel prescriptions for how to build an AI — but it also explains how a world-dominating superintelligence might come about.” Here Wissner-Gross does expand on what he has in mind about AI. He is mostly reasonably reserved about that, implying only that their approach might suggest a new angle. But then we get into Terminator territory: “one of the key implications of Wissner-Gross’s paper is that this long-held assumption [that intelligent machines will decide to take over the world] may be completely backwards — that the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.” Huh? Well, you see, Wissner-Gross talks about particles “trying to take control of the world”. By this, I would assume he means that the causal entropic force directs the particle’s behaviour along particular trajectories that may involve a tendency to arrange the immediate environment. But for the io9’ers, “the world” becomes our planet, and “take control” becomes “impose its remorseless robotic mind”. Now, I can’t tell how much of this came from a degree of injudiciousness in Wissner-Gross’s comments to the reporter, and how much was a post hoc arranging of quotes to fit a narrative. But it seems harsh to criticize a scientific paper on the basis of what a sensationalist news account says about it.

The basic problem here seems to be that Marcus and Davis assume that, when Wissner-Gross and Freer talk about “intelligence”, they must be talking about the same thing that psychologists see day to day in humans. So it’s “intelligent” always to maximize your future options, huh? Well, then, what about this? – “Apes prefer grapes to cucumbers. When given a choice between the two, they grab the grapes; they don’t wait in perpetuity in order to hold their options open. Similarly, when you get married, you are deliberately restricting the options available to you; but that does not mean that it is irrational or unintelligent to get married.” It’s a little bit like saying that bacteria don’t show rudimentary cognition in climbing up chemical gradients because, hey, we sometimes decide not to move towards smells that we really like. The authors were not claiming that all “intelligent” behaviour must be governed by the causal entropy principle, but simply that this remarkably simple rule can produce what look like intelligent behaviours.

“What Wissner-Gross has supplied is, at best, a set of mathematical tools, with no real results beyond a handful of toy problems”, they say. And yes, that is really all the paper claims to do. “Toy problems” is here meant to be dismissive – Marcus and Davis don’t seem to know that physicists talk about “toy models” all the time, meaning minimal, obviously too-simple ones that have illustrative, heuristic and suggestive value, rather than ones that are pointless and silly. “There is no reason to take it seriously as a contribution, let alone a revolution, in artificial intelligence”, they continue, “unless and until there is evidence that it is genuinely competitive with the state of the art in A.I. applications.” Can they really believe the authors think they have a way of doing AI that will beat the state of the art (but forgot to mention it in their paper)?

Sure, “it would be grand, indeed, to unify all of physics, intelligence, and A.I. in a single set of equations”, they jeer. To unify all of physics (let alone the rest of it)??! Come on chaps, now you’re really just making it up.

Here’s my most recent piece for BBC Future (although a new one appears today).

_______________________________________________________________

Newborn fish not only can count, but can be taught to count better. This discovery, by a team of psychologists at the University of Padova in Italy [1], might seem bizarre, even frivolous, at first glance, but in fact it bears on a deep and interesting question: how do we make quantitative estimates based on what we see?

There’s a long tradition of psychological and anthropological research on the question of innate counting systems in humans, which underpins familiar popular notions such as the existence of cultures whose number system goes “one, two, three, many.” These simplistic ideas can obscure the fact that even some non-human animals recognize that there are different degrees of “many”: that ten objects are not equivalent to a hundred.

Such distinctions matter in the wild. Animals need, for example, to be able to tell which of two food sources is the larger, or to selectively join the largest group of peers so as to maximize their chance of evading predators.

However, there is a difference between seeing that one group is bigger than another, and actually counting the numbers in each group. One suggestion is that animals use a numerical system for very small numbers – up to about 4, say – but a cruder “this is more than that” ratio-based system for larger numbers. The two methods aren’t easy to tell apart – there’s only so much that a fish will tell you about its reasoning – but it can be done. For example, if an animal makes an assessment based on just the ratio of two quantities, discrimination should get less accurate as the ratio approaches 1: better for 1 vs 4 (ratio 0.25) than for 3 vs 4 (ratio 0.75). But if they’re ‘counting’, the performance should be much the same for all of these pairs.
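To see why this test can separate the two strategies, here is a toy simulation of my own devising (not the Padova team’s actual method): a ratio-based judge perceives each quantity with noise proportional to its size, so its accuracy falls as the two numbers get closer, while a counting judge is exact regardless.

```python
import random

def counting_judge(a, b):
    # Exact counting: always identifies the larger set,
    # no matter how close the two quantities are.
    return a > b

def ratio_judge(a, b, noise=0.2, trials=10000, seed=0):
    # Ratio-based ("approximate") judgement: each quantity is
    # perceived with multiplicative noise, so discrimination gets
    # worse as the ratio of the two numbers approaches 1.
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        perceived_a = a * (1 + rng.gauss(0, noise))
        perceived_b = b * (1 + rng.gauss(0, noise))
        if (perceived_a > perceived_b) == (a > b):
            correct += 1
    return correct / trials

# 1 vs 4 (ratio 0.25) is judged near-perfectly by the noisy system;
# 3 vs 4 (ratio 0.75) produces noticeably more errors.
easy = ratio_judge(4, 1)
hard = ratio_judge(4, 3)
```

Run this and the gap between `easy` and `hard` is exactly the signature the experimenters look for: flat performance across pairs suggests counting, ratio-dependent performance suggests the cruder system.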

On this basis, Laura Piffer and her colleagues at Padova have previously shown that some fish can use purely numerical information, and not just ratios, to distinguish quantities greater than 4 [2]. Even so, fish seem to rely on genuine counting only up to about 4, switching to the ratio method thereafter. An ability to make quantitative distinctions based on ratios is something that develops gradually in humans. Six-month-olds can tell apart a 1:2 ratio (not just one and two objects, but, say, 8 and 16), but not until about 10 months do they distinguish 2:3. Pre-schoolers can handle 3:4 ratios, and 6-year-olds 5:6 [3]. On this reckoning, fish are approaching a preschooler’s numerical literacy, and it’s not unreasonable to suspect that studying fish might shed some light on the mathematical competencies of humans too.

For one thing, fish, like children, learn numeracy as they grow. Newborn guppies can discriminate 1 from 2, and even 3 from 4, but the ability to distinguish 4 from 8 or 12 emerges only between 20 and 40 days later. That’s a bit surprising, given that the adult guppies use ratios to judge the larger numbers: you’d think that telling 3 from 4 would be harder than telling 4 from 12. So the newborns seem innately blessed with the capacity to count to 4, but have to learn the ratio method as they grow.

The latest set of experiments by Piffer and colleagues then asks: do newborn guppies already have the mental capacity to learn ratios, or does that neural hardware develop later? Can they be taught?

But how do you teach maths to a fish? Just offer them food as a reward. The researchers put the fish in a rectangular tank and displayed images of dots at each end, delivering food near the larger quantity. The fishes’ ability to distinguish the two numbers can then be inferred from the length of time they spend near each end: they learn to wait for food by what they think is the larger number of dots.

While newborn guppies couldn’t distinguish 7 dots from 14, after about 20 trials they had learnt to do so. They were then about two weeks old, but still well below the age at which the ratio discrimination system seemed to kick in for untrained guppies. So they seem indeed to be born with this potential ability, and just need to exercise it. Might human babies have such hidden talents too?

Does this mean fish are smarter than we give them credit for? You could choose to see it that way, but in fact it supports a growing recognition that cognitive processes like counting, which we might imagine to be quite complex, can in fact be achieved with a surprisingly small number of neurons [4]. Counting and comparing numbers might not be as hard as we think.