Saturday, August 30, 2014

Here is my latest Crucible column for Chemistry World. Do look out for Jim and Johnjoe’s book Life on the Edge, which very nicely rounds up where quantum biology stands right now – and Jim has just started filming a two-parter on this (for BBC4, I believe).

“Quantum biology” was always going to be a winning formula. What could be more irresistible than the idea that two of the most mysterious subjects in science – quantum physics and the existence of life – are connected? Indeed, you get the third big mystery – consciousness – thrown in for good measure, if you accept the highly controversial suggestion by Roger Penrose and Stuart Hameroff that the quantum behaviour of protein filaments called microtubules is responsible for the computational capability of the human mind [1].

Chemists might sigh that once again those two attention-grabbers, physics and biology, are appropriating what essentially belongs to chemistry. For the fact is that all of the facets of quantum biology that are so far reasonably established, or at least well grounded in experiment and theory, are chemical ones. Arguably the most mundane, but at the same time the least disputable, area in which quantum effects make their presence felt in a biological context is enzyme catalysis, where quantum tunnelling operates during reactions involving proton and electron transfer [2]. It also appears beyond dispute that photosynthesis involves transfer of energy from the excited chromophore to the reaction centre in an excitonic wavefunction that maintains a state of quantum coherence [3,4]. It still seems rather staggering to find in the warm, messy environment of the cell a quantum phenomenon that physicists and engineers are still struggling to harness under cryogenic conditions for quantum computing. The riskier reaches of quantum biology also address chemical problems: the mechanism of olfaction (proposed to happen by sensing of odorant vibrational spectra using electron tunnelling [5]) and of magnetic direction-sensing in birds (which might involve quantum entanglement of electron spins on free radicals [6]).
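Why tunnelling is almost unavoidable for light particles crossing short distances – and so much rarer for heavier ones – falls straight out of the standard WKB estimate for a rectangular barrier, T ≈ exp(−2κd) with κ = √(2mΔE)/ħ. A minimal sketch (the 0.5 eV barrier and 0.5 Å width below are illustrative round numbers, not enzyme data):

```python
import math

HBAR = 1.0545718e-34        # reduced Planck constant, J*s
EV = 1.602176634e-19        # one electronvolt, J
M_ELECTRON = 9.1093837e-31  # electron mass, kg
M_PROTON = 1.67262192e-27   # proton mass, kg

def wkb_transmission(mass, barrier_ev, width_m):
    """WKB tunnelling probability through a rectangular barrier:
    T ~ exp(-2*kappa*d), kappa = sqrt(2*m*dE)/hbar."""
    kappa = math.sqrt(2.0 * mass * barrier_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# Same barrier, different mass: the exponent scales as sqrt(m),
# so the proton's probability collapses by many orders of magnitude.
t_electron = wkb_transmission(M_ELECTRON, 0.5, 0.5e-10)
t_proton = wkb_transmission(M_PROTON, 0.5, 0.5e-10)
print(f"electron: {t_electron:.2e}  proton: {t_proton:.2e}")
```

The √m in the exponent is the whole story: the proton is some 1,800 times heavier than the electron, which is why proton tunnelling only matters over the very short distances typical of enzyme active sites.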

Yet it is no quirk of fate that these phenomena are sold as a union of physics and biology, bypassing chemistry. For as Jim Al-Khalili and Johnjoe McFadden explain in a forthcoming comprehensive overview of the field, Life on the Edge (Doubleday), the first quantum biologists were pioneers of quantum theory: Pascual Jordan, Niels Bohr and Erwin Schrödinger. Bohr was never shy of pushing his view of quantum theory – the Copenhagen interpretation – into fields beyond physics, and his 1932 lecture “Light and Life” seems to have been influential in persuading Max Delbrück to turn from physics to genetics, on which his work later won him a Nobel Prize.

But it is Schrödinger’s contribution that is probably best known, for the notes from his lectures at Trinity College Dublin that he collected into his little 1944 book What Is Life? remain remarkable for their prescience and influence. Most famously, Schrödinger here formulated the idea that life somehow opposes the entropic tendency towards dissolution – it feeds on negative entropy, as he put it – and he also argued that genetic information might be transmitted by an arrangement of atoms that he called an “aperiodic crystal” – a description of DNA, whose structure was decoded nine years later (partly by another former physicist, Francis Crick), that still looks entirely apt.

One of the most puzzling of biological facts for Schrödinger was that genetic mutations, which were fundamentally probabilistic quantum events on a single-atom scale, could become fixed into the genome and effect macroscopic changes of phenotype. By the same token, replication of genes (which was understood before Crick and Watson revealed the mechanism) happened with far greater fidelity than one should expect from the statistical nature of molecular interactions. Schrödinger reconciled these facts by arguing that it was the very discreteness of quantum events that gave them an accuracy and stability not amenable to classical continuum states.

But this doesn’t sound right today. For the fact is that Schrödinger was underestimating biology. Far from being at the mercy of replication errors incurred by thermal fluctuations, cells have proof-reading mechanisms to check for and correct these mistakes.

There is an equal danger that quantum biologists may overestimate biology. For it’s all too tempting, when a quantum effect such as tunnelling is discovered in a biological process, to assume that evolution has put it there, or at least found a way to capitalize on it. Tunnelling is nigh inevitable in proton transfer; but if we want to argue that biology exploits quantum physics here, we need to ask if its occurrence is enhanced by adaptation. The Nobel laureate biochemist Arieh Warshel has rejected that idea, calling it a “red herring” [7].

Similarly in photosynthesis, it’s not yet clear if quantum coherence is adaptive. It does seem to help the efficiency of energy transfer, but that might be a happy accident – Graham Fleming, one of the pioneers in this area, says that it may be simply “a byproduct of the dense packing of chromophores required to optimize solar absorption” [8].

These are the kinds of questions that may determine what becomes of quantum biology. For its appeal lies largely in the implication that biology and quantum physics collaborate, rather than being mere fellow travellers. We have yet to see how far that is true.

Thursday, August 07, 2014

Italo Calvino’s If On a Winter’s Night a Traveller is one of the finest and funniest meditations on writing that I’ve ever read. It also contains a glorious pre-emptive critique on what began as Zipf’s law and is now called culturomics: the statistical mining of vast bodies of text for word frequencies, trends and stylistic features. What is so nice about it (apart from the wit) is that Calvino seems to recognize that this approach is not without validity (and I certainly think it is not), while at the same time commenting on the gulf that separates this clinical enumeration from the true craft of writing – and for that matter, of reading. I am going to quote the passage in full – I don’t know what copyright law might have to say about that, but I am trusting to the fact that anyone familiar with Calvino’s book would be deterred from trying to enforce ownership of the text by the baroque level of irony that would entail.

__________________________________________________________

[From Vintage edition 1998, translated by William Weaver]

I asked Lotaria if she has already read some books of mine that I lent her. She said no, because here she doesn’t have a computer at her disposal.

She explained to me that a suitably programmed computer can read a novel in a few minutes and record the list of all the words contained in the text, in order of frequency. “That way I can have an already completed reading at hand,” Lotaria says, “with an incalculable saving of time. What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings? An electronic reading supplies me with a list of the frequencies, which I have only to glance at to form an idea of the problems the book suggests to my critical study. Naturally, at the highest frequencies the list records countless articles, pronouns, particles, but I don’t pay them any attention. I head straight for the words richest in meaning; they can give me a fairly precise notion of the book.”

“Don’t you already have a clear idea what it’s about?” Lotaria says. “There’s no question: it’s a war novel, all actions, brisk writing, with a certain underlying violence. The narration is entirely on the surface, I would say; but to make sure, it’s always a good idea to take a look at the list of words used only once, though no less important for that. Take this sequence, for example:
“underarm, underbrush, undercover, underdog, underfed, underfoot, undergo, undergraduate, underground, undergrowth, underhand, underprivileged, undershirt, underwear, underweight…”

“No, the book isn’t completely superficial, as it seemed. There must be something hidden; I can direct my research along these lines.”

“What do you think of that? An intimatist narration, subtle feelings, understated, a humble setting, everyday life in the provinces … As a confirmation, we’ll take a sample of words used a single time:
“chilled, deceived, downward, engineer, enlargement, fattening, ingenious, ingenuous, injustice, jealous, kneeling, swallow, swallowed, swallowing…”

“Here I would say we’re dealing with a full-blooded story, violent, everything concrete, a bit brusque, with a direct sensuality, no refinement, popular eroticism. But here again, let’s go on to the list of words with a frequency of one. Look, for example:
“ashamed, shame, shamed, shameful, shameless, shames, shaming, vegetables, verify, vermouth, virgins…”

“You see? A guilt complex, pure and simple! A valuable indication: the critical inquiry can start with that, establish some working hypothesis…What did I tell you? Isn’t this a quick, effective system?”

The idea that Lotaria reads my books in this way creates some problems for me. Now, every time I write a word, I see it spun around by the electronic brain, ranked according to its frequency, next to other words whose identity I cannot know, and so I wonder how many times I have used it, I feel the whole responsibility of writing weigh on those isolated syllables, I try to imagine what conclusions can be drawn from the fact that I have used this word once or fifty times. Maybe it would be better for me to erase it…But whatever other word I try to use seems unable to withstand the test…Perhaps instead of a book I could write lists of words, in alphabetical order, an avalanche of isolated words which expresses that truth I still do not know, and from which the computer, reversing its program, could construct the book, my book.
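Lotaria’s “electronic reading” is, of course, now trivial to perform. A toy sketch of her method – frequency list at the top, then straight to the words used only once (the sample sentence is my own confection, stitched together from Calvino’s word-lists):

```python
from collections import Counter
import re

def lotaria_reading(text, top=5):
    """'Read' a text Lotaria-style: tally word frequencies, then
    extract the hapax legomena (words used exactly once)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    frequent = counts.most_common(top)       # the articles and particles she skips
    singletons = sorted(w for w, n in counts.items() if n == 1)
    return frequent, singletons

sample = ("the wind chilled the underbrush and the underdog "
          "swallowed his shame under the vermouth sky")
frequent, singletons = lotaria_reading(sample)
print(frequent)    # 'the' dominates, as Lotaria predicts
print(singletons)  # the 'words richest in meaning'
```

That the singleton list really does conjure a mood – chill, shame, vermouth – is exactly the joke Calvino is making, and exactly the gulf he identifies between enumeration and reading.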

Here’s my take on Dürer’s Melencolia I on its 500th anniversary, published in Nature this week.

________________________________________________________

Albrecht Dürer’s engraving Melencolia I, produced 500 years ago, seems an open invitation to the cryptologist. Packed with occult symbolism from alchemy, astrology, mathematics and medicine, it promises hidden messages and recondite meanings. What it really tells us, however, is that Dürer was a philosopher-artist of the same stamp as Leonardo da Vinci, immersed in the intellectual currents of his time. In the words of art historian John Gage, Melencolia I is “almost an anthology of alchemical ideas about the structure of matter and the role of time” [1].

Dürer’s brooding angel is surrounded by the instruments of the proto-scientist: a balance, an hourglass, measuring calipers, a crucible on a blazing fire. Here too is numerological symbolism in the “magic square” of the integers 1-16, the rows, columns and main diagonals of which all add up to 34: a common emblem of both folk and philosophical magic. Here is the astrological portent of a comet, streaming across a sky in which an improbable rainbow arches, a symbol of the colour-changing processes of the alchemical route to the philosopher’s stone. And here is the title itself: melancholy, associated in ancient medicine with black bile, the same colour as the material with which the alchemist’s Great Work to make gold was supposed to begin.
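The square itself is easily checked. A few lines of Python (a sketch, not part of the column) confirm the property, using the arrangement as it appears in the engraving:

```python
# Dürer's 4x4 magic square, as engraved in Melencolia I
durer = [
    [16,  3,  2, 13],
    [ 5, 10, 11,  8],
    [ 9,  6,  7, 12],
    [ 4, 15, 14,  1],
]

MAGIC = 34
rows = [sum(row) for row in durer]
cols = [sum(col) for col in zip(*durer)]
diags = [sum(durer[i][i] for i in range(4)),
         sum(durer[i][3 - i] for i in range(4))]

assert all(s == MAGIC for s in rows + cols + diags)
print(rows, cols, diags)
```

As a bonus flourish, the two middle cells of the bottom row – 15 and 14 – record the engraving’s date, 1514.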

But why the tools of the craftsman – the woodworking implements in the foreground, the polygonal block of stone awaiting the sculptor’s hammer and chisel? Why the tormented, introspective eyes of the androgynous angel?

Melencolia I is part of a trio of complex engravings on copper plate that Dürer made in 1513-14. Known as the Master Engravings, they are considered collectively to raise this new art to an unprecedented standard of technical skill and psychological depth. This cluttered, virtuosic image is often said to represent a portrait of Dürer’s own artistic spirit. Melancholy, often considered the least desirable of the four classical humours then believed to govern health and medicine, was traditionally associated with insanity. But during the Renaissance it was ‘reinvented’ as the humour of the artistic temperament, originating the link popularly asserted between madness and creative genius. The German physician and writer Cornelius Agrippa, whose influential Occult Philosophy (widely circulated in manuscript form from 1510) Dürer is almost certain to have read, claimed that “celestial spirits” were apt to possess the melancholy man and imbue him with the imagination required of an “excellent painter”. For it took imagination to be an image-maker – but also to be a magician.

The connection to Agrippa was first made by the art historian Erwin Panofsky, a doyen of symbolism in art, in 1943. He argued that what leaves Dürer’s art-angel so vexed is the artist’s constant sense of failure: an inability to fly, to exceed the bounds of the human imagination and create the truly wondrous. Her tools, in consequence, lie abandoned. Why astronomy, geometry, meteorology and chemistry should have any relation to the artistic temperament is not obvious today, but in the early sixteenth century the connection would have been taken for granted by anyone familiar with the Neoplatonic idea of correspondences in nature. This notion, which pervades Agrippa’s writing, held that all natural phenomena, including the predispositions of humankind, are joined into a web of hidden forces and symbols. Melancholy, for instance, is the humour governed by the planet Saturn, whence “saturnine.” That blend of ideas was still present in Robert Burton’s The Anatomy of Melancholy, published a century later, which called melancholics “dull, sad, sour, lumpish, ill-disposed, solitary, any way moved, or displeased.” A harsh description perhaps, but Burton reminds us that “from these melancholy dispositions no man living is free” – for melancholy is in the end “the character of Mortality.” But some are more prone than others: Agrippa reminded his readers of Aristotle’s opinion “that all men that were excellent in any Science, were for the most part melancholy.”

So there would have been nothing obscure about this picture for its intended audience of intellectual connoisseurs. It was precisely because Dürer mastered and exploited the new technologies of printmaking that he could distribute these works widely, and he indicated in his diaries that he sold many on his travels, as well as giving others as gifts to friends and humanist scholars such as Erasmus of Rotterdam. Unlike a painting, a print could be had for only moderate wealth. Ferdinand Columbus, son of Christopher, collected over 3,000, 390 of which were by Dürer and his workshop [2].

But even if the alchemical imagery of Melencolia I was part of the ‘occult parcel’ that this engraving presents, it would be wrong to imagine that alchemy was, to Dürer and his contemporaries, purely an esoteric art associated with gold-making. As Lawrence Principe has recently argued (The Secrets of Alchemy, University of Chicago Press, 2013), this precursor to chemistry was not just or even primarily about furtive and futile experimentation to make gold from base metals. It was also a practical craft, not least in providing artists with their pigments. For this reason, artists commonly knew something of its techniques; Dürer’s friend, the German artist Lucas Cranach the Elder, ran a pharmacy on the side, which may explain why he was almost unique in Northern Europe in using the rare and poisonous yellow pigment orpiment, an arsenic sulphide. The extent of Dürer’s chemical knowledge is not known, but he was one of the first artists to use acids for etching metal, a technique developed only at the start of the sixteenth century. The process required specialist knowledge: it typically used nitric acid, made from saltpetre, alum and ferrous sulphate, or a mixture of dilute hydrochloric acid and potassium chlorate (“Dutch mordant”).

Humility should perhaps compel us to concur with art historian Keith Moxey that “the significance of Melencolia I is ultimately and necessarily beyond our capacity to define” [3] – we are too removed from it now for its themes to resonate. But what surely endures in this image is a reminder that for the Renaissance artist there was continuity between theories about the world, matter and human nature, the practical skills of the artisan, and the business of making art.

Wednesday, August 06, 2014

The nerd with the safety specs who is always cropping up on TV doing crazy experiments for Jim Al-Khalili or Mark Miodownik or Michael Mosley, while threatening to upstage them with his patter? That’s Andrea Sella of UCL, who has just been awarded the Michael Faraday Prize by the Royal Society. And this is a very splendid thing. With previous recipients including Peter Medawar, Richard Dawkins, David Attenborough, Robert Winston and Brian Cox, it is clear what a prestigious award this is. But whereas those folks have on the whole found themselves celebrated and supported for their science-communication work, Andrea has sometimes been under a lot of pressure to justify doing this stuff instead of concentrating on his research (on lanthanides). I hope very much that this recognition will help to underline the value of what we now call “outreach activities” when conducted by people in regular research positions, rather than just by those who have managed to establish science communication as a central component of their work. Being able to talk about science (and in Andrea’s case, show it in spectacular fashion) is a rare skill, the challenge of which is sometimes under-estimated and under-valued, and so it is very heartening to see it recognized here.

Monday, August 04, 2014

Here’s my Point of View piece from the Guardian Review a week ago. My new book Invisible is now out, and I’m delighted that John Carey seemed to like it (although I’m afraid you can’t fully see why without a subscription).

___________________________________________________________________

H. G. Wells claimed in his autobiography that he and Joseph Conrad had “never really ‘got on’ together”, but you’d never suspect that from the gushing fan letter Conrad sent to Wells, eight years his junior but far more established as a writer, in 1897. Before their friendship soured Conrad was a great admirer of Wells, and in that letter he hailed the author of scientific romances as the “Realist of the Fantastic”. It’s a perceptive formulation of the way Wells blended speculative invention with social realism: tea and cakes and time machines. That aspect is nowhere more evident than in the book that stimulated Conrad to write to his idol: The Invisible Man.

To judge from Wells’ own account of his aims, Conrad had divined them perfectly. “For the writer of fantastic stories to help the reader to play the game properly”, he wrote in 1934, “he must help him in every possible unobtrusive way to domesticate the impossible hypothesis… instead of the usual interview with the devil or a magician, an ingenious use of scientific patter might with advantage be substituted. I simply brought the fetish stuff up to date, and made it as near actual theory as possible.”

In other words, Wells wanted to turn myth into science, or at least something that would pass for it. This is why The Invisible Man is a touchstone for interpreting the claims of modern physicists and engineers to be making what they call “invisibility cloaks”: physical structures that try to hide from sight what lies beneath. The temptation is to suggest that, as with atomic bombs, Wells’ fertile imagination was anticipating what science would later realise. But the light that his invisible man sheds on today’s technological magic is much more revealing.

It’s likely Wells was explicitly updating myth. One of the earliest stories about invisibility appears near the start of Plato’s Republic, a book that had impressed Wells in his youth. Plato’s narrator Glaucon tells of a Lydian shepherd named Gyges who discovered a ring of invisibility in the bowels of the earth. Without further ado, Gyges used the power to seduce the queen, kill the king and establish a new dynasty of Lydian rulers. In a single sentence Plato tells us what many subsequent stories of invisibility would reiterate about the desires that the dream of invisibility feeds: they are about sex, power and death.

Evidently this power corrupts – which is one reason why Tolkien made much more mythologically valid use of invisibility magic than did J. K. Rowling. But Glaucon’s point has nothing to do with invisibility itself; it is about moral responsibility. Given this power to pass unseen, he says, no one “would be so incorruptible that he would stay on the path of justice, when he could with impunity take whatever he wanted from the market, go into houses and have sexual relations with anyone he wanted, kill anyone, and do the other things which would make him like a god among men.” The challenge is how to keep rulers just if they can keep their injustices hidden.

The point about Gyges’ ring is that it doesn’t need to be explained, because it is metaphorical. The same is true of this and other magic effects in fairy tales: they just happen, because they are not about the doing but the consequences. Fairy-tale invisibility often functions as an agent of seduction and voyeurism (see the Grimms’ “The Twelve Dancing Princesses”), or a gateway to Faerie and other liminal realms. It’s precisely because children don’t ask “how is that possible?” that we shouldn’t fret about filling them with false beliefs.

But it seems to be a peculiarity of our age that we focus on the means of making magic and not the motive. The value of The Invisible Man is precisely that it highlights the messy outcome of this collision between science and myth. True, Wells makes some attempt to convince us that his anti-hero Griffin is corrupted by discovering the “secret of invisibility” – but it is one of the central weaknesses of the tale that Griffin scarcely has any distance to fall, since he is thoroughly obnoxious from the outset, driving his poor father to suicide by swindling him out of money he doesn’t possess in order to fund his lone research. If we are meant to laugh at the superstitions of the bucolic villagers of Iping as the invisible Griffin rains blows on them, I for one root for the bumpkins.

No, where the book both impresses and exposes is in its description of how Griffin becomes invisible. A plausible account of that trick had been attempted before, for example in Edward Page Mitchell’s 1881 short story “The Crystal Man”, but Wells had enough scientific nous to make it convincing. While Mitchell’s scientist simply makes his body transparent, Wells knew that it was necessary not just to eliminate pigmentation (which Griffin achieves chemically) but to eliminate refraction too: the bending of light that we see through glass or water. There was no known way of doing that, and Wells was forced to resort to the kind of “jiggery-pokery magic” he had criticized in Mary Shelley’s Frankenstein. He exploited the very recent discovery of X-rays by saying that Griffin had discovered another related form of “ethereal vibration” that gives materials the same refractive strength as air.

Despite this, Griffin finds that invisibility is more a burden than a liberation. He dreams of world domination but, forgetting to vanish his clothes too, has to wander naked in the winter streets of London, bruised by unseeing crowds and frightened that he will be betrayed by the snow that threatens to settle on his body and record his footsteps. His eventual demise has no real tragedy in it but is like the lynching of a common criminal, betrayed by sneezes, sore feet and his digestive tract (in which food visibly lingers for a time). In all this, Wells shows us what it means to domesticate the impossible, and what we should expect when science tries to do magic.

That same gap between principle and practice hangs over today’s “invisibility cloaks”. They work in a different, and technologically marvelous, way: not by transparency, but by guiding light around the object they hide. But when the first of them was unveiled in 2006, it was perplexing: for there it sat, several concentric rings of printed circuits, as visible as you or me. It was, the scientists explained, invisible to microwaves, not to visible light. What had this to do with Gyges, or even with Griffin?

Some scientists argue that, for all their technical brilliance (which is considerable, and improving steadily), these constructs should be regarded as clever optical devices, not as invisibility cloaks. It’s hard to imagine how they could ever conceal a person walking around in daylight. This “magic” is cumbersome and compromised: it is not the way to seduce the queen, kill the king and become a tyrant.

This isn’t to disparage the invention and imagination that today’s “invisibility cloaks” embody. But it’s a reminder that myth is not a technical challenge, not a blueprint for the engineer. It’s about us, with all our desires, flaws, and dreams.

This is my Materials Witness column for the August issue of Nature Materials. I normally figure these columns are a bit too specialized to put up here, but this subject is just lovely: there is evidently so much more to the "sword culture" of the so-called Dark Ages, the Viking era and the early medieval period than a bunch of blokes running amok with big blades. As Snorri Sturluson surely said, you can't beat a good sword.

__________________________________________________________________

There can be few more mythologized ancient materials technologies than sword-making. The common view – that ancient metalsmiths had an extraordinary empirical grasp of how to manipulate alloy microstructure to make the finest-quality blades – contains a fair amount of truth. Perhaps the most remarkable example of this was discovered several years ago: the near-legendary Damascus blades used by Islamic warriors, which were flexible yet strong and hard enough to cleave the armour of Crusaders, contained carbon nanotubes [1]. Formation of the nanotubes was apparently catalysed by impurities such as vanadium in the steel, and these nanostructures assisted the growth of cementite (Fe3C) fibres that thread through the unusually high-carbon steel known as wootz, making it hard without paying the price of brittleness.

Yet it seems that the skill of the swordsmith wasn’t directed purely at making swords mechanically superior. Thiele et al. report that the practice called pattern-welding, well established in swords from the second century AD to the early medieval period, was primarily used for decorative rather than mechanical purposes and, unless used with care, could even have compromised the quality of the blades [2].

Pattern-welding involved the lamination and folding of two materials – high-phosphorus iron and low-phosphorus mild steel or iron – to produce a surface that could be polished and etched to striking decorative effect. After twisting and grinding, the metal surface could acquire striped, chevron and sinuous patterns that were highly prized. A letter to a Germanic tribe in the sixth century AD, complimenting them on the swords they gave to the Ostrogothic king Theodoric, conqueror of Italy, praised the interplay of shadows and colours in the blades, comparing the pattern to tiny snakes.

This and the image above are modern pattern-welded swords made by Patrick Barta using traditional methods.

But was it all about appearance? Surely what mattered most to a warrior was that his sword could be relied on to slice, stab and maim without breaking? It seems not. Thiele et al. commissioned internationally renowned swordsmith Patrick Barta to make pattern-welded rods for them using traditional techniques and re-smelted medieval iron. In these samples the high-phosphorus component was iron and not, as some earlier studies have mistakenly assumed, steel.

They subjected the samples to mechanical tests that probed the stresses typically experienced by a sword: impact, bending and buckling. In no case did the pattern-welded samples perform any better than hardened and tempered steel. This is not so surprising, given that phosphoric iron itself has rather poor toughness, no matter how it is laminated with other materials.

The prettiness of pattern welding didn’t, however, have to compromise the sword’s strength, since – at least in later examples – the patterned section was confined to panels in the central “fuller” of the blade, while the cutting edge was steel. All the same, here’s an example of how materials use may be determined as much by social as by technical and mechanical considerations. From the Early to the High Middle Ages, swords weren’t just or even primarily for killing people with. For the Frankish warrior, the spear and axe were the main weapons; swords were largely symbols of power and status, carried by chieftains, jarls and princes but used only rarely. Judging by the modern reproductions, they looked almost too gorgeous to stain with blood.