Wednesday, June 25, 2014

Well, I thought I should put up here for posterity my comment published in the Guardian online about the Breakthrough Prizes. I realise that one thing I wanted to say in the piece but didn’t have room for is that it hardly seems an apt way to combat the low profile of scientists in our celebrity culture to try to turn them into celebrities too, with all the lucre that entails.

I know I tend to gripe about the feedback on Comment is Free, but I was pleasantly surprised at the thoughtful and apposite nature of many of the comments this time. But there’s always one – this time, the guy who didn’t read beyond the first line because it showed such appalling ignorance. The problem with not reading past the first line is that you can never be sure that the second line isn’t going to say “At least, that’s what we’re told, but the reality is different…”

I have been implored to put out a shout also for the Copley Medal, for which mathematicians are eligible. It is indeed very prestigious, and is apparently the oldest extant science prize. It is also delightfully modest in its financial component.

____________________________________________________________________

The wonderful thing about science is that it’s what gets discovered that matters, not who did the discovering. As Einstein put it, “When a man after long years of searching chances on a thought which discloses something of the beauty of this mysterious universe, he should not therefore be personally celebrated. He is already sufficiently paid by his experience of seeking and finding.” At least, that’s the official line – until it comes to handing out the prizes. Then, who did what gets picked over in forensic detail, not least by some of those in the running for the award or who feel they have been overlooked in the final decision.

This is nothing to be particularly ashamed of or dismayed about. Scientists are only human, and why shouldn’t they get some reward for their efforts? But the disparity between principle and practice is raised afresh with the inaugural awarding of the Breakthrough Prizes to five mathematicians on Monday. Each of them receives $3m – more than twice the value of a Nobel Prize. With stakes like that, it’s worth asking whether prizes help or hinder science.

The Breakthrough Prizes, established by information-technology entrepreneurs Yuri Milner and Mark Zuckerberg (of Facebook fame), are to be given for work in mathematics, fundamental physics and life sciences. The maths prize is the first to be decided, and the selection of five recipients of the full $3m each is unusual: from 2015, there will be only a single prize of this amount in each category, divided among several winners if necessary.

The creators of the prizes say they want to raise the esteem for science in society. “We think scientists should be much better appreciated”, Milner has said. “They should be modern celebrities, alongside athletes and entertainers. We want young people to get more excited. Maybe they will think of choosing a scientific path as opposed to other endeavours if we collectively celebrate them more.”

He has a point – many people could reel off scores of Hollywood and sports stars, but would struggle to name any living physicist besides Stephen Hawking. But the idea that huge cash prizes might attract young people to science seems odd – can there be a single mathematician, in particular, who has chosen their career in the hope that they will get rich and famous? (And if there is, didn’t they study probability theory?)

Yet the curious thing is that maths is hardly starved of prizes already. The four-yearly Fields Medal (about $13,800) and the annual Abel Prize (about $1m) are both commonly described as the “maths Nobels”. In 2000 the privately funded Clay Mathematics Institute announced the Millennium Prizes, which offered $1m to anyone who could solve one of seven problems deemed to be among the most difficult in the subject.

Even researchers have mixed feelings. When Grigori Perelman solved one of the Millennium Problems, the Poincaré conjecture, in 2003, he refused the prize, apparently because he felt it should have recognized the work of another colleague too. Perelman fits the stereotype of the unworldly mathematician who rejects fame and fortune, but he’s not alone in feeling uneasy about the growth of an immensely lucrative “cash prize culture” in maths and science.

One of the concerns is that prizes distort the picture of how science is done, suggesting that it relies on sudden, lone breakthroughs by Great Women and (more usually) Men. Science today is often conducted by vast teams, exemplified by the international effort to find the Higgs boson at CERN, and even though the number of co-laureates for Nobel prizes has been steadily increasing, its arbitrary limit of three no longer accommodates this.

Although the maths Breakthrough prizewinners include some relatively young researchers, and the Fields Medal rewards those under 40, many prizes are seen as something you collect at the end of a career – by which time top researchers are showered with other awards already. Like literary awards, they can become the focus of unhealthy obsession. I have met several great scientists whose thirst for a Nobel is palpable, along with others whose paranoia and jealousies are not assuaged by winning one. Most troublingly, a Nobel in particular can transform a good scientist into an alleged fount of all wisdom, giving some individuals platforms from which to pontificate on subjects they are ill equipped to address, from the origin of AIDS to religion and mysticism. The truth is, of course, that winners of big prizes are simply a representative cross-section: some are delightfully humble, modest and wise, others have been arrogant bullies, nutty, or Nazi.

And prizes aren’t won for good work, but for good work that fits the brief – or perhaps the fashion. Geologists will never get a Nobel, and it seems chemists and engineers will never get a Breakthrough prize.

Yet for all their faults, it’s right that scientists win prizes. We should celebrate achievement in this area just as in any other. But I do wonder why any prize needs to be worth $3m – it’s not surprising that the Breakthrough Prizes have been accused of trying to outbid the Nobels. Even some Nobelists will admit that a few thousand is always welcome but a million is a burden. A little more proportion, please.

Tuesday, June 24, 2014

Fashions change. I just learnt from Louise Levathes’ book When China Ruled the Seas that upper-class men in fifteenth-century Siam made a tinkling noise when they moved. Why? Because at age twenty, they “had a dozen tin or gold beads partly filled with sand inserted into their scrotums.” According to a Chinese translator Ma Huan, this looked like “a cluster of grapes”, which Ma found “curious” but the Siamese considered “beautiful”. Were this to catch on again among the upper classes here, David Cameron’s Cabinet might sound rather more delightful, as long as they kept their mouths shut.

Thursday, June 19, 2014

Here’s my latest piece for BBC Future, pre-editing. I was going to illustrate it with that scene from Trainspotting, but I feared no one would have the stomach to read on.

_________________________________________________________________

“They found him in the toilet covered in white powder and frantically trying to dispose of cocaine by emptying it down the toilet.” But it’s not just during a drugs bust, as in this report of an arrest in 2012, that illicit substances go down the pan. Many drugs break down rather slowly in the body, and so they are a pervasive contaminant in human wastewater systems. The quantities are big enough to raise concerns about effects on ecosystems – but they can also offer a way to monitor the average levels of drug use in communities.

“Sewage epidemiologist” does not, it has to be said, sound like the kind of post that will bring applications flooding in. But it is a rapidly growing field. And one of the primary goals is to figure out how levels of drug use obtained by more conventional means, such as questionnaires and crime statistics, tally with direct evidence from what gets into the water. Over the past six years or so, sewage epidemiology has been shown to agree rather well with these other approaches to quantifying drug abuse: the amounts of substances such as cocaine and amphetamines in wastewater in Europe and the USA more or less reflect estimates of their use deduced by other means.

A new study, however, shows that the figures don’t always match, and that some previous studies of illicit drugs in sewage might have under-estimated their usage. To appreciate why, there’s no option but to dive in, in the manner of the notorious toilet scene from the movie Trainspotting, and embrace the grimy truth: you might not discover as much by looking at drugs carried in urine and dissolved in water as you will by studying the faecal residues in suspended particles and sewage sludge, since some drugs tend to stick more readily to the solids.

While a few studies have looked at illicit drugs in sewage solids in Europe, Bikram Subedi and Kurunthachalam Kannan of the Wadsworth Center of New York State’s Department of Health in Albany have conducted what seems to be the first such study in the USA. They took samples of wastewater and sludge from two sewage treatment plants handling the wastes of many thousands of people in the Albany area, and carried out chemical analysis to search for drug-related compounds. They looked not only for the drugs themselves – such as cocaine, amphetamine, morphine (the active component of heroin) and the hallucinogen methylenedioxyamphetamine (a designer drug known as “Sally” or “Sass”) – but also for some of their common ‘metabolites’, the related compounds into which they can be transformed in the body. The two researchers also measured amounts of common compounds such as nicotine and caffeine, which act as chemical markers of human excretion and so can serve to indicate the total number of, shall we say, contributors to the sewage.

To measure how much of these substances the samples contain, Subedi and Kannan used the technique of electrospray mass spectrometry. This involves converting the molecules into electrically charged ions by sticking hydrogen ions onto them, or knocking such ions off, and then accelerating them through an electromagnetic field to strike a detector. The field deflects the ions from a straight-line course, but the more massive they are the less they are deflected. So the molecules are separated out into a “spectrum” of different masses. For fairly large molecules like cocaine, with a molecular mass of 303, there’s pretty much only one common way atoms such as carbon, oxygen, nitrogen and hydrogen can be assembled to give this mass (formula C17H21NO4). So you can be confident that a spike in the spectrum at mass 303 is due to cocaine.
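For the curious, that arithmetic is easy to check for yourself – here’s a little Python sketch of my own, using the standard average atomic masses:

```python
# Rough check of the mass arithmetic for cocaine (C17H21NO4),
# using standard average atomic masses.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molecular_mass(formula_counts):
    """Sum atomic masses weighted by the atom counts in the formula."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())

cocaine = {"C": 17, "H": 21, "N": 1, "O": 4}
print(round(molecular_mass(cocaine)))  # → 303
```

Sure enough, C17H21NO4 comes out at 303 to the nearest whole unit, which is why a spike at that mass can be pinned on cocaine with some confidence.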

From these measurements, the researchers estimated a per capita consumption of cocaine in the Albany area four times higher than that found in an earlier study of wastewater in the US, and a level of amphetamine abuse about six times that in the same previous study, as well as 3-27 times that reported for Spain, Italy and the UK. It’s still early days, but this suggests that sewage epidemiology would benefit from getting to grips with the solids.
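The back-calculation behind estimates like these is, in outline, straightforward: concentration times sewage flow gives a daily drug load, which is corrected for how much of a dose is excreted and then divided by the population served. The numbers in this sketch are purely illustrative – they are not values from the Albany study:

```python
# Outline of the standard wastewater back-calculation; all numbers
# below are illustrative assumptions, not the Albany figures.
def per_capita_use(conc_ng_per_l, flow_l_per_day, correction, population):
    """Estimated drug use in mg/day per 1000 inhabitants.

    correction folds in the excreted fraction of the dose and, where a
    metabolite is measured, the metabolite-to-parent mass ratio.
    """
    daily_load_mg = conc_ng_per_l * flow_l_per_day / 1e6  # ng -> mg
    return daily_load_mg * correction / population * 1000

# e.g. 500 ng/L of a cocaine metabolite in 40 million litres of sewage
# per day, a correction factor of ~2.3, and 100,000 contributors:
print(round(per_capita_use(500, 40e6, 2.3, 100_000)))  # → 460 mg/day per 1000 people
```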

Subedi and Kannan could also figure out how good the sewage treatment was at removing these ingredients from the water. That varied widely for different substances: about 99 percent of cocaine was removed, but only about 4 percent of the pharmacologically active cocaine metabolite norcocaine. A few drugs, such as methadone, showed apparently “negative removal” – the wastewater treatment was actually converting some related compounds into the drug. The researchers admit that no one really knows yet what effects these illicit substances will have when they reach natural ecosystems – but there’s increasing concern about their possible consequences. It looks as though we might need to start thinking about the possibility of “passive drug abuse” – if not in humans, then at least in the wild.
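The removal figures themselves are just a comparison of what flows into the plant with what flows out, and a toy calculation (with made-up concentrations) shows how a “negative” removal arises:

```python
# Removal efficiency from influent and effluent concentrations.
# A negative value means more of the compound leaves the plant than
# enters it - e.g. treatment converting related compounds into the drug.
def removal_percent(influent, effluent):
    return 100.0 * (influent - effluent) / influent

print(removal_percent(100.0, 1.0))    # ~99% removed (cocaine-like)
print(removal_percent(100.0, 96.0))   # ~4% removed (norcocaine-like)
print(removal_percent(100.0, 130.0))  # negative "removal" (methadone-like)
```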

Tuesday, June 17, 2014

I don’t think of myself as a pope-basher. But there are times when one can’t help being flabbergasted at what the Vatican is capable of saying and doing. That’s how I feel on discovering this report of a speech by Pope Francis from last November.

From a supposedly progressive pope, these comments plunge us straight back to the early Middle Ages. This in particular: “In the Gospel, the Pope underlined, we find ourselves before another spirit, contrary to the wisdom of God: the spirit of curiosity.”

“The spirit of curiosity”, Francis goes on, “distances us from the Spirit of wisdom because all that interests us is the details, the news, the little stories of the day. Oh, how will this come about? It is the how: it is the spirit of the how! And the spirit of curiosity is not a good spirit. It is the spirit of dispersion, of distancing oneself from God, the spirit of talking too much. And Jesus also tells us something interesting: this spirit of curiosity, which is worldly, leads us to confusion.”

Oh, this passion for asking questions! The pope is, of course, scripturally quite correct, for as it says in Ecclesiasticus,
“Do not pry into things too hard for you
Or examine what is beyond your reach…
What the Lord keeps secret is no concern of yours;
do not busy yourself with matters that are beyond you.”

All of which tells us to respect our proper place. But if you want to take this to a more judgemental and condemnatory level, then of course we need to turn to St Augustine. Curiosity is in his view a ‘disease’, one of the vices or lusts at the root of all sin. “It is in divine language called the lust of the eyes”, he wrote in his Confessions. “From the same motive, men proceed to investigate the workings of nature, which is beyond our ken – things which it does no good to know and which men only want to know for the sake of knowing.”

I don’t see much distance between this and the words of Pope Francis. However, let’s at least recognize that this is not a specifically Christian thing. The pope seems to be wanting to return the notion of curiosity to the old sense in which the Greeks used it: a distraction, an idle and aimless seeking after novelty. That is what Aristotle meant by periergia, which is generally taken to be the cognate of the Latin curiositas. Plutarch considers curiositas the vice of those given to snooping and prying into the affairs of others – the kind of busybody known in Greek as a polypragmon. In Aristotle’s view, this kind of impulse had no useful role in philosophy. That’s why the medieval Aristotelians tried to make a distinction between curiosity and genuine enquiry. It’s fine to seek knowledge, said Thomas Aquinas and Albertus Magnus – but as the latter wrote,
“Curiosity is the investigation of matters which have nothing to do with the thing being investigated or which have no significance for us; prudence, on the other hand, relates only to those investigations that pertain to the thing or to us.”

This is how these folks tried to carve out a space for natural philosophy, within which science eventually grew until theology no longer seemed like a good alternative for understanding the physical world.

Should we, then, give Pope Francis the benefit of the doubt and conclude that he wasn’t talking about the curiosity that drives so much (if not all) of science, but rather, the curiosity that Augustine felt led to a fascination with the strange and perverse, with “mangled corpses, magical effects and marvellous spectacles”? After all, the pope seems to echo that sentiment: “The Kingdom of God is among us: do not seek strange things, do not seek novelties with this worldly curiosity.”

Oh yes, we could allow him that excuse. But I don’t think we should. How many people, on hearing “curiosity” today, think of Augustine’s “mangled corpses” or Aristotle’s witless periergia? The meaning of words is what people understand by them. For a schoolchild commended for their curiosity, the words of the pope will carry the opposite message: be quiet, don’t ask, seek not knowledge but only God. This seems to me to be verging on a wicked thing to say.

But worse: the idea of a hierarchy of questions in which some are too trivial, too aimless or ill-motivated, is precisely what needed to be overcome before science could flourish. Early modern science was distinguished from the admirable natural philosophy of Aquinas and Albertus Magnus, of Roger Bacon and Robert Grosseteste, by the fact that no question was any longer irrelevant or irreverent. One could investigate the gnat’s leg, or a smudge on Jupiter, or optical phenomena in pieces of mica. That wasn’t yet in itself science, but the liberation of curiosity was the necessary precondition for it. In which case, Pope Francis’s message is profoundly anti-progressive and anti-intellectual.

Saturday, June 07, 2014

My interest in the merits (or not) of cursive writing prompted me to follow up Ed Yong’s recent tweet about an article on handwriting in the New York Times. It is by Maria Konnikova, and it is interesting. But I am particularly struck by this paragraph:
“Dr. Berninger goes so far as to suggest that cursive writing may train self-control ability in a way that other modes of writing do not, and some researchers argue that it may even be a path to treating dyslexia. A 2012 review suggests that cursive may be particularly effective for individuals with developmental dysgraphia — motor-control difficulties in forming letters — and that it may aid in preventing the reversal and inversion of letters.”

Here at least seems to be a concrete claim of the supposed cognitive benefits of cursive, in comparison to print handwriting. OK, so Dr Berninger’s claim is totally unsubstantiated here, and I’ll have to live with that. And this claim that cursive might be particularly useful for working with dyslexia and dysgraphia is one I’ve heard previously and seems plausible – it could perhaps offer a valid reason to teach cursive handwriting in preference to manuscript from the outset (not sequentially). But I’d like to see what that 2012 review says about this. So I follow the link.

It leads me to what appears to be a book chapter: “The contribution of handwriting and spelling remediation to overcoming dyslexia”, by Diane Montgomery. She is reporting on a study that used an approach called the Cognitive Process Strategies for Spelling (CPSS) to try to help pupils with identified spelling difficulties, who were in general diagnosed as dyslexics. This method involves, among many other things, teaching these children cursive alone. So Montgomery’s work in itself doesn’t offer any evidence for the superiority of cursive over other handwriting styles in this context – cursive is just accepted here as a ‘given’.

But she does finally explain why cursive is a part of CPSS, in a section titled “Why cursive in remedial work is important”. Here the author claims that “Experiments in teaching cursive from the outset have taken place in a number of LEAs [local education authorities] and have proved highly successful in achieving writing targets earlier and for a larger number of children.” Aha. And the evidence? Two studies are cited, one from 1990, one from 1991. One is apparently a local study in Kingston-upon-Thames. Both are in journals that are extremely hard to access – even the British Library doesn’t seem to keep them. So now I’m losing the trail... But let’s remind ourselves where we are on it. The CPSS method for helping dyslexic children uses cursive because advantages for it were claimed in some studies, almost 25 years ago, on non-dyslexic cohorts.

Onward. Montgomery says that other dyslexia programmes base their remediation on cursive. She lists the reasons why that is so, but none of the claims (e.g. “spaces between letters and between words is orderly and automatic”) is backed up with citations showing that these actually confer advantages.

There is, however, one such documented claim in her piece. “Ziviani and Watson-Will (1998) found that cursive script appeared to facilitate writing speed.” Now that’s interesting – this is of course the claim made by many people when they defend cursive, so I was delighted to find an assertion that there’s real evidence for it. Well, I could at least get hold of this paper, and so I checked it out. And you know what? It doesn’t show that at all. This statement is totally, shockingly false. Ziviani and Watson-Will were interested in the effects of the introduction of a new cursive style into Australian schools, replacing “the previous print and cursive styles”. How well do the children taught this way fare in terms of speed and legibility? The authors don’t actually conduct tests that compare a cohort trained the old way with one trained the new way. They are just concerned with how, for those trained the new way, speed affects legibility. So it’s a slightly odd study that doesn’t really address the question it poses at the outset. What it does do is to show that there is a weak inverse correlation between speed and legibility for the kids who learnt the new cursive style. Not at all surprising, of course – but there is not the slightest indication in this paper that cursive (of any kind) improves speed relative to manuscript/print style (of any kind).

There’s another relevant reference in Montgomery’s paper that I can get. She says “The research of Early (1976) advocated the exclusive use of cursive from the beginning.” Hmm, I wonder why? So I look it up. It compares two groups of children from two different American schools, one with 21 pupils, the other with 27. One of them was taught cursive from the outset, the other was taught the traditional way of manuscript first and then cursive. The results suggested, weakly, that exclusive teaching of cursive produced fewer letter reversals (say, b/d) and fewer transpositions (say, “first/frist”). But the authors acknowledged that the sample size was tiny (and no doubt they were mindful also that the experimental and control groups were not “identically prepared”). As a result, they said, “We in no way wish to offer the present data as documenting proof of the superiority of cursive over manuscript writing.” Would you have got that impression from Montgomery?

So now I’m really wondering what I’d find in those elusive 1990/1991 studies. At this point it doesn’t look good.

What, then, is going on here? Montgomery says that “custom and practice or ‘teaching wisdom’ is very hard to change and extremely rigid attitudes are frequently found against cursive.” I agree with the first point entirely – but in my experience so far, the rigid attitudes are in favour of cursive. And on the evidence here, advocacy for cursive seems to be made more on the basis of an existing conviction than out of respect for the evidence.

Ironically perhaps, I suspect that Early is nevertheless right. For most children, it won’t make an awful lot of difference whether they are taught cursive or manuscript – but they will find writing a fair bit easier at first if they are taught only one or the other. There does seem to be some slight indication that cursive might help with some particular spelling/writing problems, such as letter reversals and transpositions, though I’d like to see far better evidence for that. In that case, one could argue that the balance tips slightly in favour of cursive, simply for the sake of children with dyslexia and other dysgraphic problems. And I have the impression that in this case, a cursive-like italic style might be the best, rather than anything too loopy.

But if that were to be done, it would be good to be clear about the reasons. We are not saying that there’s anything in it for normal learners. And we really must drop the pathetic, patronising and pernicious habit of telling children that cursive is “grown-up” writing, infantilizing those who find it hard. If they learnt it from the outset, they would understand that it is just a way of writing – nothing more or less.

There is clearly still a lot of mythology, and propagating of misinformation, in this area. Given its importance to educational development, that’s troubling.

Wednesday, June 04, 2014

Here's how my recent article for IEEE Spectrum started off, with some more references, info and links.

________________________________________________________

If science is to reach beyond a myopic fixation on incremental advances, it may need bold and visionary dreams that border on myth-making. There are plenty of those in the field called programmable matter, which aims to blend micro- and nanotechnology, robotics and computing to produce substances that change shape, appearance and function at our whim.

The dream is arrestingly illustrated in a video produced by a team at Carnegie Mellon University in Pittsburgh. Executives sit around a table watching a sharp-suited sales rep make his pitch. From a vat of grey gloop he pulls a perfectly rendered model of a sports car, and proceeds to reshape it with his fingers. With gestures derived from touchscreen technology, he raises or flattens the car’s profile and adjusts the width of the headlamps. Then he changes the car from silver-grey to red, the “atoms” twinkling in close-up with Disney-movie magic as their color shifts.

This kind of total mastery over matter is not so different from the alchemist’s dream of transmuting metals, or in contemporary terms, the biologist’s dream of making life infinitely malleable through synthetic biology. But does the fantasy – it’s little more at present – bear any relation to what can be done?

Because of its affiliation with robotic engineering and computer science, the idea of programmable matter is often attributed to a paper published in 1991 by computer scientists Tommaso Toffoli and Norman Margolus of the Massachusetts Institute of Technology, who speculated about a collection of tiny computing objects that could sense their neighbors and rearrange themselves rather like cellular automata [1]. But related ideas were developed independently in the early 1990s by the chemistry Nobel laureate Jean-Marie Lehn, who argued that chemistry would become an information science by using the principles of spontaneous self-assembly and self-organization to design molecules that would assemble themselves from the bottom up into complex structures [2]. Lehn’s notion of “informed matter” was really nothing less than programmable matter at the atomic and molecular scale.
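To get a flavour of what Toffoli and Margolus had in mind, here is a minimal cellular automaton of my own devising – nothing from their paper – in which each cell updates purely from the states of itself and its two neighbours, and local rules alone organize the row into coherent blocks:

```python
# A minimal 1D cellular automaton: each cell adopts the majority state
# of itself and its two neighbours (with wraparound at the ends).
# Purely local rules, yet the row settles into coherent blocks - the
# flavour of Toffoli and Margolus's rearranging "computing objects".
def step(cells):
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

row = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
for _ in range(5):
    row = step(row)
print(row)  # the scattered 1s have coalesced into solid runs
```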

Lehn’s own work since the 1960s helped to show that the shapes and chemical structures of molecules could predispose them to unite into large-scale organized arrays that could adapt to their circumstances, for example responding to external signals or having self-healing abilities. Such supramolecular (“beyond the molecule”) self-assembly enables biomolecules to become living cells, which need no external instructions to reconfigure themselves because their components already encode algorithms for doing so. In some ways, then, living organisms already embody the aspirations of programmable matter.

Yet in the information age, it is we who do the programming. While living cells are often said (a little simplistically) to be dancing to the evolutionarily shaped program coded in their genomes, technologies demand that we bring matter under our own direct control. It’s one thing to design molecules that assemble themselves, quite another to design systems made from components that will reconfigure or disassemble at the push of a button. The increasingly haptic character of information technology’s interfaces encourages a vision of programmable matter that is responsive, tactile, even sensual.

Many areas of science and technology have come together to enable this vision. Lehn’s supramolecular chemistry is one, and nanotechnology – with its extreme miniaturization and interest in “bottom-up” self-organizing processes – is another. Macroscopic robotic engineering has largely abandoned the old idea of humanoid robots, and is exploring machines that can change shape and composition according to the task in hand [3]. To coordinate large numbers of robotic or information-processing devices, centralized control can be cumbersome and fragile; instead, distributed computing and swarm robotics rely on the ability of many interacting systems to find their own modes of coordination and organization [4]. Interacting organisms such as bacteria and ants provide an “existence proof” that such coordination is sometimes best achieved through this kind of collective self-organization. Understanding such emergent behaviour is one of the central themes in the science of complex systems, which hopes to harness it to achieve robustness, adaptability and a capacity for learning.

Meanwhile, thanks to the shrinking of power sources and the development of cheap, wireless radio-frequency communications for labelling everything from consumer goods to animals for ecological studies, robotic devices can talk to one another even at very small scales. And making devices that can be moved and controlled without delicate and error-prone moving parts has benefitted immensely from the development of smart materials that can respond to their environment and to external stimuli by, for example, changing their shape, color or electrical conductivity.

In short, the ideas and technologies needed for programmable matter are already here. So what can we do with them?

Seth Goldstein and his team at Carnegie Mellon, in collaboration with Intel Research Pittsburgh, were among the first to explore the idea seriously. “I’ve always had an interest in parallel and distributed systems”, says Goldstein. “I had been working in the area of molecular electronics, and one of the things that drew me into the field was a molecule called a rotaxane that, when subjected to an electric field, would change shape and as a result change its conductivity. In other words, changing the shape of matter was a way of programming a system. I got to thinking about what we could do if we reversed the process: to use programming to change the shape of matter.”

The Carnegie Mellon group envisions a kind of three-dimensional, material equivalent of sound and visual reproduction technologies, in which millions of co-operating robot modules, each perhaps the size of a dust grain, will mimic any other object in terms of shape, movement, visual appearance, and tactile qualities. Ultimately these smart particles – a technology they call Claytronics [5] – will produce a “synthetic reality” that you can touch and experience without any fancy goggles or gloves. From a Claytronics gloop you might summon a coffee cup, a spanner, a scalpel.

“Any form of programmable matter which can pass the ‘Turing test’ for appearance [looking indistinguishable from the real thing] will enable an entire new way of thinking about the world”, says Goldstein. “Applications like injectable surgical instruments, morphable cellphones, 3D interactive life-size TV and so on are just the tip of the iceberg.”

Goldstein and colleagues call the components of this stuff “catoms” – Claytronic atoms, which are in effect tiny spherical robots that move, stick together, communicate and compute their own location in relation to others. Each catom would be equipped with sensors, color-change capability, computation and locomotive agency. That sounds like a tall order, especially if you’re making millions of them, but Goldstein and colleagues think it should be achievable by stripping the requirements down to the bare basics.

The prototype catoms made by the Pittsburgh researchers since the early 2000s were a modest approximation to this ambitious goal: squat cylinders about 44 mm across, their edges lined with rows of electromagnets that allow them to adhere in two-dimensional patterns. By turning the magnets on and off, one catom could ‘crawl’ across another. Using high-resolution photolithography, the Carnegie Mellon team has now managed to shrink the cylindrical catoms to the sub-millimetre scale, while retaining the functions of power transfer, communication and adhesion. These tiny catoms can’t yet move, but they will soon, Goldstein promises.

Prototype catoms

Electromagnetic coupling might ultimately not be the best way to stick the catoms together, however, because it drains power even when the devices are static. Goldstein and colleagues have explored the idea of making sticky patches from carpets of nanofibers, like those on a gecko’s foot, that adhere due to intermolecular forces. But at present Goldstein favors electrostatics as the best force for adhesion and movement. Ideally the catoms will be powered by harvesting energy from the environment – drawing it from an ambient electric field, say – rather than carrying on-board power supplies.

One of the big challenges is figuring out where each catom has to go in order to make the target object. “The key challenge is not in manufacturing the circuits, but in being able to program the massively distributed system that will result from putting all the units together into an ensemble”, says Goldstein. Rather than drawing up a global blueprint, the researchers hope that purely by using local rules, where each catom simply senses the positions of its near neighbors, the ensemble can find the right shape. Living organisms seem to work this way: the single-celled slime mold Dictyostelium discoideum, for example, aggregates under duress into a mushroom-shaped multicellular body without any ‘brain’ to plan it. This strategy means the catoms must communicate with one another. The Carnegie Mellon researchers plan to explore both wireless technologies for remote control, and electrostatic interactions for nearest-neighbour sensing.
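To get a feel for what “purely local rules” means in practice, here is a toy sketch (my own illustration in Python, not the Carnegie Mellon software) in which each module learns where it sits in the ensemble knowing nothing but the values reported by its directly adjacent neighbors:

```python
from collections import deque

def localize(modules, seed):
    """Each module learns its hop distance from a seed module using only
    messages from directly adjacent modules -- no global map is needed.
    `modules` is a set of (x, y) grid positions occupied by catoms."""
    dist = {seed: 0}
    frontier = deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in modules and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1  # neighbor's value plus one
                frontier.append((nx, ny))
    return dist

# A 3-by-2 block of catoms, localized from one corner
ensemble = {(x, y) for x in range(3) for y in range(2)}
d = localize(ensemble, (0, 0))
```

No module ever sees the whole shape; each sets its distance only from an adjacent module’s value, yet the ensemble ends up with a consistent coordinate-like map it could use to decide where to move.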

To be practical, this repositioning needs to be fast. Goldstein and colleagues think that an efficient way to produce shape changes might be to fill the initial catom “blob” with little voids, and then shift them around to achieve the right contours. Small local movements of adjacent catoms are all that’s needed to move holes through the medium, and if they reach the surface and are expelled like bubbles, the overall volume shrinks. Similarly, the material can be expanded by opening up new bubbles at the surface and engulfing them.
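The hole-shifting idea can be sketched in a few lines (again a toy illustration of the principle, not the group’s algorithm): a hole migrates through a column of catoms by purely local swaps of adjacent units until it pops out at the surface, and the volume shrinks:

```python
def step_hole(column, i):
    """Move the hole at index i up by one local swap of adjacent units.
    `column` is a list where 1 = catom, 0 = hole; only two neighboring
    cells change, so the move needs purely local coordination."""
    column[i], column[i + 1] = column[i + 1], column[i]
    return i + 1

# A column of catoms with one internal hole
col = [1, 1, 0, 1, 1]        # hole at index 2
i = 2
while i < len(col) - 1:      # migrate the hole to the surface...
    i = step_hole(col, i)
col.pop()                    # ...where it is expelled like a bubble: the column shrinks
```

Running the reverse sequence – opening a new hole at the surface and swapping it inwards – expands the material, just as the text describes.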

At MIT, computer scientist Daniela Rus and her collaborators have a different view of how smart, sticky ‘grains’ could be formed into an object. Their “smart sand” would be a heap of such grains that, by means of remote messages and magnetic linkages, will stick selectively together so that the target object emerges like a sculpture from a block of stone. The unused grains just fall away. Like Goldstein, Rus and her colleagues have so far explored prototypes on a larger scale and in two dimensions, making little units the size of sugar cubes with built-in microprocessors and electromagnets on four faces. These units can communicate with each other to duplicate a shape inserted into the 2D array. The smart grains that border the master shape recognize that they are at the edge, and send signals to others to replicate this pixellated mould and the object that lies within it [6].
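The edge-recognition step is easy to illustrate (a hypothetical sketch of the idea, not the MIT code): each grain inspects only its four immediate neighbors and flags itself as boundary if any of them is empty:

```python
def boundary_grains(grid):
    """Return the grains that sit on the edge of the solid region.
    Each grain inspects only its four immediate neighbors, mirroring
    the local sensing the smart-sand cubes rely on."""
    edge = set()
    for (x, y) in grid:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in grid:
                edge.add((x, y))
                break
    return edge

# A 3x3 block: every grain is on the boundary except the centre (1, 1)
block = {(x, y) for x in range(3) for y in range(3)}
rim = boundary_grains(block)
```

Once the grains bordering the master shape know they form the rim, they can pass that outline on as the pixellated mould for duplication.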

Rus and her collaborators have hit on an ingenious way to make these ‘grains’ move. They have made larger cubes called M-blocks, several centimeters on each side, which use the momentum of flywheels spinning at up to 20,000 r.p.m. to roll, climb over each other and even leap through the air [7]. When they come into contact, the blocks can be magnetically attached to assemble into arbitrary shapes – at present determined by the experimenters, although their plan is to develop algorithms that let the cubes themselves decide where they need to go.

M-blocks in action

Programmable matter doesn’t have to be made from an army of hard little units. Hod Lipson at Cornell University and his colleagues think that it should be possible to create “soft robots” that can be moulded into arbitrary shapes from flexible smart materials that change their form in response to external signals.

“Soft robotics” is already well established. Shape-memory alloys, which bend or flex when heated or cooled, can provide the ‘muscle’ within the soft padding of a silicone body [8], for example, and polymeric objects can be made to change shape by inflating pneumatic compartments [9]. What made the soft robot designed by Lipson and his colleague Jonathan Hiller particularly neat was that the actuation didn’t require a specific signal, but was built into the structure itself. They used evolutionary computer algorithms to figure out how to arrange tiny blocks of silicone foam rubber so that raising and reducing the air pressure caused the rubber to contract and expand in a way that made the weirdly-shaped assembly crawl across a surface [10].
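The evolutionary search itself is conceptually simple. This toy loop (my own minimal sketch, with a bit-string standing in for a robot body plan and a made-up fitness score standing in for crawling speed) shows the mutate-and-select cycle such algorithms rely on:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40):
    """A toy evolutionary loop of the kind used to discover designs:
    mutate candidate 'body plans', keep the fitter ones, repeat.
    Here a design is just a bit-string scored by the fitness function."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # selection: keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            j = random.randrange(genome_len)   # point mutation: flip one bit
            child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Fitness: how many bits match a target design
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = evolve(lambda g: sum(a == b for a, b in zip(g, target)))
```

In the real work the “genome” encodes the arrangement of foam-rubber blocks and the fitness is measured by simulating how far the assembly crawls, but the mutate-select-repeat skeleton is the same.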

Lipson and his coworkers have also devised algorithms that can mutate and optimize standardized components such as rods and actuators to perform particular tasks, and have coupled this design process to a 3D printer that fabricates the actual physical components, resulting in “machines that make machines.” They have been able to print not just parts but also power sources such as batteries, and Lipson says that his ultimate goal is to make robots that can “walk out of the printer”.

These are top-down approaches to programmable matter, emerging from existing developments in robotic technology. But there are alternatives that start from the bottom up: from nanoscale particles, or even molecules. For example, currently there is intense research on the behavior of so-called self-propelled or “living” colloids: particles perhaps a hundred or so nanometers across that have their own means of propulsion, such as gas released by chemical reactions at their surface. These particles can show complex self-organized behavior, such as crystalline patterns that form, break and explode [11]. Controlling the resulting arrangements is another matter, but researchers have shown they can at least move and control individual nanoparticles using radiofrequency waves and magnetic fields. This has permitted wireless “remote control” of processes in living cells, such as the pairing of DNA strands [12], the triggering of nerve signals [13], and the control of insulin release in mice [14].

Nature programs its cellular matter partly by the instructions inherited in the DNA of the genome. But by exploiting the same chemical language of the genes – two DNA strands will pair up efficiently into a double helix only if their base-pair sequences are complementary – researchers have been able to make DNA itself a kind of programmable material, designed to assemble into specific shapes and patterns. In this way they have woven complex nanoscale DNA shapes such as boxes with switchable lids [15], nanoscale alphabetic letters [16] and even world maps [17]. By supplying and removing ‘fuel strands’ that drive strand pairing and unpairing, it is possible to make molecular-scale machines that move, such as DNA ‘walkers’ that stride along guide strands [18]. Eventually such DNA systems might be given the ability to replicate and evolve.
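The “chemical language” here is just complementary base-pairing, which is simple enough to state as code (a sketch of the pairing rule itself, not of any DNA-design software):

```python
# Watson-Crick pairing: A binds T, G binds C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def can_pair(strand_a, strand_b):
    """True if two strands are fully complementary. The strands run
    antiparallel, so strand_b is read in reverse against strand_a."""
    if len(strand_a) != len(strand_b):
        return False
    return all(PAIR[a] == b for a, b in zip(strand_a, reversed(strand_b)))

print(can_pair("ATGC", "GCAT"))   # True: A-T, T-A, G-C, C-G all match
print(can_pair("ATGC", "ATGC"))   # False: A cannot pair with A
```

Designing a DNA shape amounts to choosing sequences so that only the intended strand pairings satisfy this rule, steering the strands into the target fold.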

DNA origami

In ways like this, programmable matter seems likely to grow from the very small, as well as shrinking from robots the size of dimes. Goldstein says the basic idea can be applied to the building blocks of matter over all scales, from atoms and cells to house bricks. It’s almost a philosophy: a determination to make matter more intelligent, more obedient, more sensitive – in some respects, more alive.

________________________________________________________

Box: What might go wrong?

Isn’t there something a little sinister about this idea of matter that morphs and even mutates? What will the sculptors make? Can they be sure they can control this stuff? Here our fears of “animated matter” are surely shaped by old myths like that of the Jewish golem, a being fashioned from clay that threatened to overwhelm its creator.

The malevolence of matter that is infinitely protean is evident in imagery from popular culture, such as the “liquid metal” T-1000 of Terminator 2. The prospect of creating programmable matter this sophisticated remains so remote, though, that such dangers can’t be meaningfully assessed. In any event, Goldstein insists that “there’s no grey goo scenario here”, referring to a term nanotechnology pioneer Eric Drexler coined in his 1986 book Engines of Creation.

Drexler speculated about the possibility of self-replicating nanobots that would increase exponentially in number as they consumed the raw materials around them. This sparked some early fears that out-of-control nanotechnology could turn the world into a giant mass of self-replicating grey sludge – a theme that appeared repeatedly in later works of science fiction, including Will McCarthy’s 1998 novel Bloom, Michael Crichton’s 2002 thriller Prey, and even, in tongue-in-cheek fashion, a 2011 episode of Futurama.

But the real dangers may be ones associated more generically with pervasive computing, especially when it works over Wi-Fi. What if such a system were hacked? It is one thing to have data manipulated online this way, but when the computing substrate is tangible stuff that reconfigures itself, hackers will gain enormous leverage for creating havoc.

Goldstein thinks, however, that some of the more serious problems might ultimately be of a more sociological nature. Programmable matter is sure to be rather expensive, at least initially, and so the capabilities it offers might only widen the gap between those with access to new technology and those without. What’s more, innovations like this, as with today’s pervasive factory automation, threaten to render jobs in manufacturing and transport obsolete. So they will make more people unemployable, not because they lack the skills but because there will be nothing for them to do.

Of course, powerful new capabilities always carry the potential for abuse. You can see hints of that already in, say, the use of swarm robotics for surveillance, or in the reconfigurable robots that are being designed for warfare. Expect the dangers of programmable matter to be much like those of the Internet: when just about everything is possible, not all of what goes on will be good.

Here's my third column for the Italian science magazine Sapere on music cognition. It mentions one of my favourite spine-tingling moments in music.

__________________________________________________

In my last column I explained that much of the emotional power of music seems to come from violations of expectation. We think the music will do one thing, but it does another. Or perhaps it just delays what we were expecting. We experience the surprise as an inner tension. You might think this would make music hard to listen to, rather than pleasurable. And indeed it’s a delicate game: if our expectations are foiled too often, we will just get confused and frustrated. Contemporary classical music has that effect on many people, although I’ll explain another time why this needn’t mean such music is bad or unlistenable. But if music always does what we expect, it becomes boring, like a nursery rhyme. Children need nursery rhymes to develop their expectations, but eventually they are ready for something more challenging.

A lot of music does meet our expectations for much of the time: it stays in key and in rhythm, and there is lots of repetition (verse-chorus-verse…). But the violations add spice. One way they can do this is to deliver notes or chords other than the ones we anticipate. Western audiences become very accustomed to hearing sequences of chords called cadences, which round off a musical phrase. If Bach or Mozart wrote a piece in, say, C major, you could be sure it would end with a C major chord (the so-called tonic chord): that just seems the “right” place to finish. Usually this final chord is preceded by the so-called “dominant” chord rooted on the fifth note of the scale (here G major). The dominant chord sets us up to expect the closing tonic chord. This pairing is called an authentic or perfect cadence.

Imagine the surprise, then, when you think you’re being given an authentic cadence but you get something else. That’s what happens about two-thirds of the way through Bach’s Prelude in E flat Minor from the first book of The Well Tempered Clavier, one of the most exquisite pieces Bach ever wrote. Here the prelude seems about to end [at 2:56 in the Richter recording here]: there’s an E flat minor chord (the tonic) followed by a dominant chord (B flat), and we think the next chord will be the closing tonic. But it isn’t. Never mind the fancy name of the chord Bach uses – it very definitely doesn’t close the phrase, but leaves it hanging. The effect is gorgeously poignant. This is sometimes known as a deceptive cadence: the musical term already reflects the idea that our expectations are being deceived. The tonic E flat minor does arrive moments later, and then we sigh as we finally get our delayed resolution.

Electrical brain-scanning studies show that we experience this kind of musical deception the same way as we experience a violation of grammatical syntax – such as when a sentence ends this like. There’s an electrical signal in the brain that signifies our “Huh?” response. This is just one of the ways in which the brain seems to process music and language using the same neural circuits – one way music literally ‘speaks’ to us.