Wednesday, June 25, 2008

Birds that boogie

[I reckon this one speaks for itself. It is on Nature News. I just hope Snowball can handle the fame.]

YouTube videos of dancing cockatoos are not flukes but the first genuine evidence of animal dancing

When Snowball, a male sulphur-crested cockatoo, was shown last year in a YouTube video apparently moving in time to pop music, he became an internet sensation. But only now has his performance been subjected to scientific scrutiny. And the conclusion is that Snowball really can dance.

Aniruddh Patel of the Neurosciences Institute in La Jolla, California, and his colleagues say that Snowball’s ability to shake his stuff is much more than a cute curiosity. It could shed light on the biological bases of rhythm perception, and might even hold implications for the use of music in treating neurodegenerative disease.

‘Music with a beat can sometimes help people with Parkinson’s disease to initiate and coordinate walking’, says Patel. ‘But we don’t know why. If nonhuman animals can synchronize to a beat, what we learn from their brains could be relevant for understanding the mechanisms behind the clinical power of rhythmic music in Parkinson’s.’

Anyone watching Snowball can see that his foot-tapping seems to be well synchronized with the musical beat. But it was possible that in the original videos he was using timing cues from people dancing off camera. His previous owner says that he and his children would encourage Snowball’s ‘dancing’ with rhythmic gestures of their own.

Genuine ‘dancing’ – the ability to perceive and move in time with a beat – would also require that Snowball adjust his movements to match different rhythmic speeds (tempi).

To examine this, Patel and his colleagues went to meet Snowball. He had been left by his previous owner at a bird shelter, Birdlovers Only Rescue Service Inc. in Schererville, Indiana, in August 2007, along with a CD containing a song to which his owner said that Snowball liked to dance: ‘Everybody’ by the Backstreet Boys.

Patel and colleagues videoed Snowball ‘dancing’ in one of his favourite spots, on the back of an armchair in the office of Birdlovers Only. They altered the tempi of the music in small steps, and studied whether Snowball stayed in synch.

This wasn’t as easy as it might sound, because Snowball didn’t ‘dance’ continuously during the music, and sometimes he didn’t get into the groove at all. So it was important to check whether the episodes of apparent synchrony could be down to pure chance.

‘On each trial he actually dances at a range of tempi’, says Patel. But the lower end of this range seemed to correlate with the beat of the music. ‘When the music tempo was slow, his tempo range included slow dancing. When the music was fast, his tempo range didn’t include these slower tempi.’

A statistical check on these variations showed that the correlation between the music’s rhythm and Snowball’s slower movements was very unlikely to have happened by chance. ‘To us, this shows that he really does have tempo sensitivity, and is not just “doing his own thing” at some preferred tempo’, says Patel.
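
A permutation test is the standard way to make this kind of chance-level check, and it can be sketched in a few lines. The numbers below are invented purely for illustration (the team’s actual data and statistics are more involved); the idea is simply to shuffle the pairing of music tempo and dance tempo and ask how often chance alone does as well.

```python
import random

# Hypothetical data: music tempo (beats per minute) on each trial, and the
# slow end of the tempo range at which the bird 'danced' on that trial.
# These numbers are invented for illustration only.
music = [98, 103, 108, 111, 116, 121, 126, 130]
dance = [97, 105, 107, 113, 115, 123, 124, 131]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

observed = correlation(music, dance)

# Shuffle the pairing many times; the p-value is the fraction of shuffles
# in which chance alone matches or beats the observed correlation.
random.seed(0)
trials = 10_000
hits = 0
for _ in range(trials):
    shuffled = dance[:]
    random.shuffle(shuffled)
    if correlation(music, shuffled) >= observed:
        hits += 1

p_value = hits / trials
print(f"observed r = {observed:.3f}, permutation p = {p_value:.4f}")
```

A small p-value means the alignment between music tempo and dance tempo is very unlikely to be coincidence, which is the form of the conclusion Patel’s team reached.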

He says that Snowball is unlikely to be unique. Adena Schachner of Harvard University has also found evidence of genuine synchrony in YouTube videos of parrots, and also in studies of perhaps the most celebrated ‘intelligent parrot’, the late Alex, trained by psychologist Irene Pepperberg [1]. Patel [2] and Schachner will both present their findings at the 10th International Conference on Music Perception and Cognition in Sapporo, Japan, in August.

Patel and his colleagues hope to explore whether Snowball’s dance moves are related to the natural sexual-display movements of cockatoos. Has he invented his own moves, or simply adapted those of his instinctive repertoire? Will he dance with a partner, and if so, will that change his style?

But the implications extend beyond the natural proclivities of birds. Patel points out that Snowball’s dancing behaviour is better than that of very young children, who will move to music but without any real synchrony to the beat [3]. ‘Snowball is better than a typical 2-4 year old, but not as good as a human adult’, he says. (Some might say the same of Snowball’s musical tastes.)

This suggests that a capacity for rhythmic synchronization is not a ‘musical’ adaptation, because animals have no genuine ‘music’. The question of whether musicality is biologically innate in humans has been highly controversial – some argue that music has served adaptive functions that create a genetic predisposition for it. But Snowball seems to be showing that an ability to dance to a beat does not stem from a propensity for music-making.

References
1. Pepperberg, I. M. Alex & Me (HarperCollins, 2008).
2. Patel, A. D. et al., Proc. 10th Int. Conf. on Music Perception and Cognition, eds M. Adachi et al. (Causal Productions, Adelaide, in press).
3. Eerola, T. et al., Proc. 9th Int. Conf. on Music Perception and Cognition, eds M. Baroni et al. (2006).

Wednesday, June 18, 2008

Fly me to the moon?

Last Monday I took part in a debate at the Royal Institution on human spaceflight: is it humanity’s boldest endeavour or one of our greatest follies? My opponent was Kevin Fong of UCL, who confirmed all my initial impressions: he is immensely personable, eloquent and charming, and presents the sanest and least hyperbolic case for human spaceflight you’re ever likely to hear. All of which was bad news for my own position, of course, but in truth this was a debate I was never going to win: a show of hands revealed an overwhelming majority in favour of sending humans into space at the outset, and that didn’t change significantly (I was gratified that I seemed to pick up a few of the swing voters). And perhaps rightly so: if Kevin was put in charge of prioritizing and publicizing human spaceflight in the west, I suspect I’d find it pretty unobjectionable too. Sadly, we have instead the likes of the NASA PR machine and the bloomin’ Mars Society. (The only bit of hype I detected from Kevin all evening was about the importance to planetary geology of the moon rocks returned by Apollo – he seemed to accept (understandably, as an anaesthetist) the absurdly overblown claims of Ian Crawford.) In any event, it was very valuable to hear the ‘best case’ argument for human spaceflight, so that I could sharpen my own views on the matter. As I said then, I’m not against it in principle (I’m more of an agnostic) – but my goodness, there’s a lot of nonsense said and done in practice, and it seems even the Royal Astronomical Society bought some of it. Here, for what it is worth, is a slightly augmented version of the talk I gave.

*****

Two weeks ago I watched the documentary In the Shadow of the Moon, and was reminded of how exciting the Apollo missions were. Like most boys growing up in the late 60s, I wanted to be an astronaut. I retain immense respect for the integrity, dedication and courage of those who pioneered human spaceflight.

So it’s not idly that I’ve come to regard human spaceflight today as a monumental waste of money. I’ve been forced to this conclusion by the stark facts of how little it has achieved and might plausibly achieve in the near future, in comparison to what can be done without it.

Having watched those grainy, monochrome pictures in 1969, and having duly built my Airfix lunar modules and moon buggies, as a teenager I then watched Carl Sagan’s TV series Cosmos at the start of the 1980s. Now, Sagan did say ‘The sky calls to us; if we do not destroy ourselves we will one day venture to the stars.’ And I suspect he is right. But he, like me, didn’t seem to be in any great hurry about that. Or rather, I think he felt that we were essentially going there already, because Sagan drew on the results then just arriving from the Voyager spacecraft, launched only a year or so before the series was made and at that time investigating Jupiter and Saturn. He also reaped the bounty of the earlier Mariner missions to Venus and Mars, which offered images that remain stunning even now. The moon landings were a fantastic human achievement, but it was the unmanned missions that I encountered through Cosmos that really opened my eyes to the richness and the strangeness of the universe. Even in Technicolor, the moon is a drab place; but here, thanks to the Mariners and Voyagers, were worlds of swirling colour, of ice sheets and volcanoes and dust storms and molten sulphur. Did I feel short-changed that we weren’t sending humans to these places? On the contrary, I think I sensed even then that humans don’t belong here; they would simply be absurd, insignificant, even unwelcome intruders.

There had been Skylab in the 1970s, of course, in Earth orbit for six years, and that seemed kind of fun but now I recall a nagging sense that I wasn’t sure quite what they were doing up there, beyond a bit of microgravitational goofing around. And then came the space shuttle, and the Challenger disaster of 1986, and I began to wonder, what exactly is the aim of all this tentative astronautics at the edge of space?

And all the while that human spaceflight was losing its way, unmanned missions were offering us jaw-dropping sights. The Magellan mission wowed us on Venus, the Galileo mission gave thrilling views of Jupiter and its moons, and the rovers Opportunity and Spirit continue to wander on Mars sending back breathtaking postcards. And most recently, the Cassini-Huygens mission to Saturn and its moon Titan has shown us images of the strangest world we’ve ever seen, with methane lakes oozing up against shores encrusted with organic material under the drizzle of a methane rain.

This has all made me look again at the arguments put forward for why humans should go into space. And I’ve yet to find one that convinces me of its value, at this stage in our technological evolution.

One of the first arguments we hear is that there are technological spinoffs. We need to be cautious about this from the outset, because if you put a huge amount of money into developing any new technology, you’re bound to get some useful things from it. Of course, it is probably impossible to quantify, and perhaps rather meaningless to ask, what we would have found if we had directed even a small fraction of the money spent on human spaceflight directly into research on the sort of products it has spun off; but the fact remains that if you want a new kind of miniature heart pump or a better alloy for making golf clubs or better thermal insulation – if you really decide that you need these things badly – then sending people into space is a peculiar way of going about it. Whatever you want to say about the ragbag of products that have had some input from human spaceflight technology, I don’t think you can call them cost-effective. We’ve also got to take care to distinguish the spinoffs of human spaceflight from those that have come from unmanned missions.

What’s more, the spinoff argument has been routinely distorted. Ask many people what the major spinoffs from spaceflight are, and they will say ‘Teflon’. So let me tell you: DuPont’s major Teflon plant in Virginia was producing a million pounds of it a year in 1950, and Teflon cookware was in the stores when Yuri Gagarin orbited the earth. Then people might say ‘Velcro’ – no, invented in Switzerland in 1941. Or if they’re American, they might cite the instant fruit drink Tang, which NASA simply bought off the supermarket shelf for their astronauts. When the head of NASA, Mike Griffin, referred to spinoffs in a recent speech defending human spaceflight, the first examples he reached for were these three – even though he then admitted that these didn’t come from the space program at all! You have to wonder why these spinoff myths have been allowed to persist for so long – was there really nothing better to replace them?

Then there’s the argument that you can do great science in space. Here again it is not too strong to say that some advocates routinely peddle false claims. Yes, you can do some neat experiments in space. For example, you can look at the fine details of how crystals grow, undisturbed by the convection currents that stir things up under gravity. And that also means you can grow more perfect crystals. Fine – but have we truly benefited from it, beyond clearing up a few small questions about the basic science of crystal growth? One common claim is that these improved crystals, when made from biomolecules, can offer up a more accurate picture of where all the atoms sit, so that we can design better drugs to interact with them. But I am not aware of any truly significant advance in drug development that has relied in any vital way on crystals grown in space. If I’ve overlooked something, I’d be happy to know of it, although you can’t always rely on what you read to make that judgement. In 1999, for example, it was claimed that research on an anti-flu drug had made vital use of protein crystals grown in a NASA project on board a space shuttle. NASA issued a press release with the headline ‘NASA develops flu drugs in space’. To which one of the people involved in the study replied by saying the following: ‘the crystals used in this project were grown here on Earth. One grown on Mir [the Russian space station, and nothing to do with NASA] was used in the initial stages, but it was not significantly better than the Earth-grown crystals.’

I’m confident of this much: if you ask protein crystallographers which technology has transformed their ability to determine crystal structures with great precision, it won’t cross their minds to mention microgravity. They will almost certainly cite the advent of high-intensity synchrotron X-ray sources here on Earth. Crystals grown in space are different, we’re told. Yes, American physicist Robert Park has replied, they are: ‘They cost more. Three orders of magnitude more.’

What we do learn in space that we can’t easily learn on Earth is the effect of low or zero gravity on human physiology. That’s often cited as a key scientific motivation for space stations. But wait a minute. Isn’t there a bit of circularity in the argument that the reason to put people in space is to find out what happens to them when you put them there?

One of the favourite arguments for human space exploration, particularly of the moon and Mars, is that only humans can truly go exploring. Only we can make expert judgements in an instant based on the blend of logic and intuition that one can’t program into robots. Well, there’s probably some truth in that, but it doesn’t mean that the humans have to physically be there to do it. Remote surgery has demonstrated countless times now that humans can use their skill and judgement in real time to guide robotics. NASA researchers have been calling the shots all along the way with the Mars rovers. This pairing of human intelligence with remote, robust robotics is now becoming recognized as the obvious way to explore extreme environments on Earth, and it surely applies in space too. It’s been estimated that, compared with unmanned missions, the safety requirements for human exploration push up launch costs by at least a factor of ten. We still lose a fair number of unmanned missions, but we can afford to, both in financial and in human terms. Besides, it’s easy to imagine ways in which robots can in fact be far more versatile explorers than humans, for example by deploying swarms of miniature robots to survey large areas. And in view of the current rate of advance in robotics and computer intelligence, who knows what will become feasible within the kind of timescale inevitably needed to even contemplate a human mission to Mars. I accept that even in 50 years’ time there may well be things humans could do on Mars that robots cannot; but I don’t think it is at all clear that those differences will in themselves be so profound as to merit the immense extra cost, effort and risk involved in putting humans there.

And now let’s come to what might soon be called the Hawking justification for human space exploration: we ‘need’ another world if we’re going to survive as a species. At a recent discussion on human exploration, NASA astronaut and former chief scientist John Grunsfeld put it this way: ‘single-planet species don’t survive.’ He admitted that he couldn’t prove it, but this is one of the most unscientific things I’ve heard said about human space exploration. How do you even begin to formulate that opinion? I have an equally unscientific, but to my mind slightly more plausible suggestion: ‘species incapable of living on a single, supremely habitable planet don’t survive.’

Quite aside from these wild speculations, one wonders how some scientists can be quite so blind to what our local planetary environment is like. They seem ready to project visions of Earth onto any other nearby world, just as Edgar Rice Burroughs did in his Mars novels. If you’ve ever flown across Siberia en route to the Far East, you know what it is like down there: there’s not a sign of human habitation for miles upon miles. Humans are incredibly adaptable to harsh environments, but there are places on Earth where we just can’t survive unaided. Well, let me tell you: compared with the Moon and Mars, Siberia is like Bognor Regis. Humans will not live autonomously there within our lifetimes, nor our children’s. It may be that one day we can run a moonbase, much as we have run space stations. But if the Earth goes belly up, the Moon and Mars will not save us, and to suggest otherwise is fantasy that borders on the irresponsible.

I was once offered an interesting justification for human space exploration by American planetary scientist Brian Enke. In response to a critique of mine, he said this: ‘I can’t think of a better way to devastate the space science budget in future years than to kill the goose that lays the golden eggs, the manned space program. We would destroy our greatest justification and base of support in the beltway. Why should Uncle Sam fund space science at its current levels if it gives up on manned space exploration? Our funding depends upon a tenuous mindset – a vision of a progressive future that leads somewhere.’

In other words, we scientists may not be terribly interested in human spaceflight, but it’s what the public loves, and we can’t expect their support if we take that away.

Now, I have some sympathy with this; I can see what Brian means. But I can’t see how a human space program could be honestly justified on these grounds. Scientists surely have a responsibility to explain clearly to the public what they think they can achieve, and why they regard it as worth achieving. The moment we begin to offer false promises or create cosmetic goals, we are in deep trouble. Is there any other area of science in which we divert huge resources to placating public opinion, and even if there was, should we let that happen? In any event, human spaceflight is so hideously expensive that it’s not clear, once we have indulged this act of subterfuge, that we will have much money left to do the real science anyway. That is becoming very evidently an issue for NASA now, with the diversion of funds to fulfil George Bush’s grandiose promise of a human return to the moon by 2020, not to mention the persistent vision of a manned mission to Mars. If we give the ‘beltway’ what they want (or what we think they want), will there be anything left in the pot?

In fact, the more I think, in the light of history, about this notion of assuaging the public demand for ‘vision’, the more unsettling it becomes. Let’s put it this way. In the early 1960s, your lover says ‘Why are you a good-for-nothing layabout? Just look at what the guy next door is building – why can’t you do that?’ And so you say, ‘All right my dear, I’ll build you a rocket to take us to the moon.’ Your lover brightens up instantly, saying ‘Hey, that’s fantastic. I love you after all.’ And so you get to work, and before long your lover is saying ‘Why are you spending all this damned time and money on a space rocket?’ But you say, ‘Trust me, you’ll love it.’ The grumbling doesn’t stop, but you do it, and you go to the moon, and your lover says ‘Honey, you really are fabulous. I’ll love you forever.’ Two years later, the complaining has started again: ‘So you went to the moon. Big deal. Well, you can stop now, I’m not impressed any more.’ So you stop and go back to tinkering in your garage.

The years go by, and suddenly it’s the 1990s, and your lover is discontented again. ‘What have you ever achieved?’ and so on. ‘Oh, but I took us to the moon’, you say. ‘Big deal.’ ‘Well, you could go there again.’ ‘Hmm…’ ‘All right’, you say, exasperated, ‘look, we’ll go the moon again and then to Mars.’ ‘Oh honey, that’s so wonderful, if you do that I’ll love you forever.’ And what’s this? You believe it! You really believe that two years after you’ve been to Mars, they won’t be saying ‘Oh, Mars! Get a life. What else can you do?’ What a sucker. And indeed, what else will you do? Where will you go after that, to keep them happy for a few years longer?

We’re told that space science inspires young people to become scientists. I think this is true. But how do we know that they might not be equally motivated by scientific and technological achievements on Earth? Has anyone ever tried to answer that question? Likewise, how do we compare the motivation that comes from putting people into space with that from the Mars rovers or the Huygens mission to Titan? How would young people feel about being one of the scientists who made these things possible and who were the first to see the images they obtained? Is the allure of astronautics really so much more persuasive than anything else science has to offer young people? Do we know that it is really so uniquely motivating? I don’t believe that has ever been truly put to the test.

I mentioned earlier some remarks by NASA’s head Mike Griffin about human spaceflight. These were made in the context of a speech last year about the so-called ‘real’ reasons we send people into space. Sure, he said, we can justify doing this in hard-nosed cost-benefit terms, by talking about spinoffs, importance for national security, scientific discovery and so on. Now, as I’ve said, I think all those justifications can in fact be questioned, but in any case Griffin argued that they were merely the ‘acceptable’ reasons for space exploration, the kind of arguments used in public policy making. But who, outside of those circles, talks and thinks like that, he asked. The ‘real’ reasons why humans try to fly the Atlantic and climb Everest, he said, have nothing to do with such issues; they are, in Griffin’s words, ‘intuitive and compelling but not immediately logical’, and are summed up in George Mallory’s famous phrase about why we go up mountains: ‘Because it is there’. We want to excel, we want to leave something for future generations. The real reasons, Griffin said, are old-fashioned, they are all about the American pioneer spirit.

This is what the beltway wants to hear! That’s the Columbus ideal! Yes, the real reason many people, in the US at least, will confess to an enthusiasm for human spaceflight is that it speaks of the boldness and vision that has allowed humanity to achieve wonderful things. Part of this is mere hubris – the idea that we’ll have not ‘really’ been to Mars until we’ve stamped our big, dirty feet on the place (and planted our national flag). But part is understandable and valid: science does need vision and ambition. But in terms of space travel, this trades on the illusion that space is just the next frontier, like Antarctica but a bit further away. Well, it’s not. Earth is an oasis in a desert vaster than we can imagine. I can accept the Moon as a valid and clearly viable target, and we’ve been there. I do think that one day humans will go to Mars, and I’m not unhappy about that ultimate prospect, though I see no purpose in trying to do it with our current, fumbling technologies. But what then? Space does not scale like Earth: it has dimensions in time and space that do not fit with our own. Space is not the Wild West; it is far, far stranger and harder than that.

Actually, invoking the Columbus spirit is apt, because of course Columbus’s voyage was essentially a commercial one. And this, it seems, is the direction in which space travel is now going. In 2004 a privately financed spaceplane called SpaceShipOne won the Ansari X Prize, an award of US$10 million offered for the first non-government organization to launch a reusable manned spacecraft into space twice within two weeks. SpaceShipOne was designed by aerospace engineer Burt Rutan and financed by Microsoft co-founder Paul Allen. Rutan is now developing the space vehicle that Richard Branson plans to use for his Virgin Galactic business, which will offer the first commercial space travel. The plan is that Rutan’s SpaceShipTwo will take space tourists 100 kilometres up into suborbital space at a cost of around $200,000 each. Several other companies are planning similar schemes, and space tourism looks set to happen in one way or another. Part of me deplores this notion of space as a playground for the rich. But part of me thinks that perhaps this is how human spaceflight really ought to be done, if we must do it at all: let’s admit its frivolity, marvel at the inventiveness that private enterprise can engender, and let the wasted money come from the pockets of those who want it.

I must confess that I couldn’t quite believe the pathos in one particular phrase from Mike Griffin’s speech: ‘Who can watch people assembling the greatest engineering project in the history of mankind – the International Space Station – and not wonder at the ability of people to conceive and to execute that project?’ I’m hoping Griffin doesn’t truly believe this, but I fear he does. I think most scientists would put it a little differently, something like this: ‘Who can watch people assembling the most misconceived and pointless engineering project in the history of mankind – the International Space Station – and not wonder at the ability of people to burn dollars?’ Scientists disagree about a lot of things, but there’s one hypothesis that will bring near-unanimity: the International Space Station is a waste of space.

Ronald Reagan told the United States in 1984 that the space station would take six years to build and would cost $8 billion. Sixteen years and tens of billions of dollars later, NASA enlisted the help of 15 other nations and promised that the station would be complete by 2005. The latest NASA plans say it will be finished by the end of this decade. And it had better be, because in 2010 the shuttles will be decommissioned.

It is easy to mock the ISS, with its golf-playing astronauts, its Pizza Hut deliveries, its drunken astronauts and countless malfunctions. But you have to ask yourself: why is it so easy to mock it? Perhaps because it really is risible?

Robert Park, the physicist at the University of Maryland who I mentioned earlier and who has consistently been one of the sanest voices on space exploration, summed this up very recently in a remark with which I want to leave you. He said: ‘There is a bold, adventurous NASA that explores the universe. That NASA had a magnificent week. Having traveled 423 million miles since leaving Earth, the Phoenix Mars Lander soft-landed in the Martian arctic. Its eight-foot backhoe will dig into the permafrost subsoil to see if liquid water exists. There is another NASA that goes in circles on the edge of space. That NASA is having a problem with the toilet on the ISS. I need not go into detail to explain what happens when a toilet backs up in zero gravity - it defines ugly.’

Sunday, June 15, 2008

[Here, because it will soon vanish behind a subscriber wall, is my latest Muse for Nature News.]

A new theory suggests a natural basis for our preference for musical consonance. But does such a preference exist at all?

What was avant-garde yesterday is often blandly mainstream today. But this normalization doesn’t seem to have happened to experiments in atonalism in Western music. A century has passed since composer Arnold Schoenberg and his supporters rejected tonal organization, yet Schoenberg’s music is still considered by many to be ‘difficult’ at best, and a cacophony at worst.

Could this be because the dissonances characteristic of Schoenberg’s atonal compositions conflict with some fundamental human preference for consonance, embedded in the very way we perceive musical sound? That’s what his detractors have sometimes implied, and it might be inferred also from a new proposal for the origins of consonance and dissonance advanced in a paper by biomathematicians Inbal Shapira Lots and Lewi Stone of Tel Aviv University in Israel, published in the Journal of the Royal Society Interface [1].

Shapira Lots and Stone suggest that a preference for consonance may be hard-wired into the way we hear music. The reason that we prefer two simultaneous tones separated by a pitch interval of an octave or a fifth (seven semitones — the span from the notes C to G, say) rather than ‘dissonant’ intervals such as a tritone (C to F sharp, for instance) is that in the former cases, the ratio of frequencies of the two tones is a simple one: 1:2 for the octave, 2:3 for the fifth. This, the researchers argue, creates robust, synchronized firing of the neural circuits that register the tones.

One reading of this result (although it is one from which the authors hold back) is that Schoenberg’s programme was doomed from the outset because it contravenes a basic physiological mechanism that makes us crave consonance. The reality, however, is much more complicated, both in ways the authors acknowledge and in ways they do not.

Locked in harmony

Here’s the picture Shapira Lots and Stone propose. At the neural level, our response to different pitches seems to be governed by oscillators — either single neurons or small groups of them — that fire and produce an output signal when stimulated by an oscillatory input signal coming from the ear's cochlea. The frequency of the input is the acoustic frequency of the pitch that excites the cochlea, and firing happens when this matches the neural oscillator’s resonant frequency.

A harmonic interval of two simultaneous notes excites two such oscillators. What if they are coupled so that the activity of one can influence that of the other? By considering a biologically realistic form of coupling in which one oscillator can push the other towards the threshold stimulus needed to trigger firing, the researchers calculate that the two oscillators can become ‘mode-locked’ so that their firing patterns repeat with a fixed ratio of periodicities. When mode-locked, the neural responses reinforce each other, which can be deemed to provoke a stronger response to the acoustic stimulus.

Mode-locked synchronization can occur for any frequency ratios of the input signals, but it is particularly stable – the ratio of output frequencies stays constant over a particularly wide range of input frequencies – when the input signals have ratios close to small numbers, such as 1:1, 1:2, 2:3 or 3:4. These are precisely the frequency ratios of intervals deemed to be consonant: the octave, fifth, fourth (C to F), and so on. In other words, neural synchrony is especially easy to establish for these intervals.
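
Mode-locking of this kind is generic to driven nonlinear oscillators, and the widening of the locked regions at simple ratios can be illustrated with the textbook sine circle map – a stand-in for, not a reproduction of, the authors’ neural model. The ‘winding number’ below plays the role of the output-frequency ratio: it stays pinned to a rational value over a wide range of driving frequencies when that value is a simple fraction, and over a much narrower range otherwise.

```python
import math

def winding_number(omega, k=1.0, n_transient=300, n_iter=1000):
    """Average rotation per step of the sine circle map
    theta -> theta + omega - (k / 2pi) * sin(2 pi theta)."""
    theta = 0.0
    for _ in range(n_transient):   # discard transient behaviour
        theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n_iter):
        theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - start) / n_iter

def plateau_width(target, k=1.0, span=0.2, steps=200, tol=2e-3):
    """Width of the range of driving frequencies omega (scanned around
    'target') over which the winding number stays locked to 'target'."""
    locked = 0
    for i in range(steps):
        omega = target - span / 2 + span * i / (steps - 1)
        if abs(winding_number(omega, k) - target) < tol:
            locked += 1
    return span * locked / steps

# Simpler ratios lock over wider ranges -- 'stability' in the paper's sense.
for p, q in [(1, 2), (2, 3), (3, 4), (4, 7)]:
    print(f"{p}/{q}: locking width ~ {plateau_width(p / q):.4f}")
```

The printed widths shrink as the fractions get more complex, which is the sense in which synchrony is ‘particularly stable’ at consonant frequency ratios.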

In fact, the stability of synchrony, judged this way, mirrors the degree of consonance for all the intervals in the major and minor scales of Western music: the major sixth (C-A), major third (C-E) and minor third (C-E flat) are all slightly less stable than the fourth, and are followed, in decreasing order of stability, by the minor sixth (C-A flat), major second (C-D), major seventh (C-B) and minor seventh (C-B flat). One could interpret this as not only rationalizing conventional Western harmony but also supporting the very choice of note frequency ratios in the Western major and minor scales. Thus, the entire scheme of Western music becomes one with a ‘rational’ basis anchored in the physiology of pitch perception.

Natural music?

This is a very old idea. Pythagoras is credited (on the basis of scant evidence) as being the first to relate musical harmony to mathematics, when he noted that ‘pleasing’ intervals correspond to simple frequency ratios. Galileo echoed this idea when he said that these commensurate ratios are ones that do not “keep the ear drum in perpetual torment”.

However, there were some serious flaws in the tuning scheme derived from Pythagoras’s ratios. For one thing, it generated new notes indefinitely whenever tunes were transposed from one key to another – in essence, Pythagorean tuning assigns a different frequency to sharps and their corresponding flats (F sharp and G flat, say), and the result is a proliferation of finely graded notes. What’s more, the major third interval, which was deemed consonant by Galileo’s time, has a frequency ratio of 64:81, which is not particularly simple at all.

The frequency ratios of the various intervals were simplified in the sixteenth century by the Italian music theorist Gioseffo Zarlino (he defined a major third as having a 4:5 ratio, for example), and the resulting scheme of ‘just intonation’ solved some of the problems with Pythagorean tuning. But the problem of transposition was not fully solved until the introduction of equal temperament, beginning in earnest from around the eighteenth century, which divides the octave into twelve equal pitch steps, called semitones. The differences in frequency ratio between Pythagorean, just and equal-tempered intonation are very small for some intervals, but significant for others (such as the major third). Some people claim that, once you’ve heard the older schemes, equal temperament sounds jarringly off-key.
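The rival major thirds are easiest to compare in cents, the logarithmic unit in which an octave is 1200 and an equal-tempered semitone exactly 100. A quick sketch, writing each ratio as upper over lower frequency:

```python
import math

def cents(ratio):
    """Interval size in cents; 1200 cents = one octave, 100 = one semitone."""
    return 1200 * math.log2(ratio)

pythagorean = 81 / 64        # four pure 3:2 fifths up, two octaves down
just        = 5 / 4          # Zarlino's simple ratio
equal       = 2 ** (4 / 12)  # four equal-tempered semitones

for name, r in [("Pythagorean", pythagorean), ("just", just), ("equal", equal)]:
    print(f"{name:12s} major third: {cents(r):6.1f} cents")
```

The Pythagorean third comes out about 22 cents (roughly a fifth of a semitone) sharper than the just third, with the equal-tempered third in between; hence the audible difference between the schemes.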

In any event, the mathematical and physiological bases of consonance continued to be debated. In the eighteenth century, the French composer Jean-Philippe Rameau rooted musical harmony instead in the ‘harmonic series’ — the series of overtones, with integer multiples of the fundamental frequency, that sound in notes played on any instrument. And the German physiologist Hermann von Helmholtz argued in the nineteenth century that dissonance is the result of ‘beats’: the interference between two acoustic waves of slightly different frequency. If this difference is very small, beats are heard as a periodic rise and fall in the volume of the sound. But as the frequency difference increases, the beating gets faster, and when it exceeds about 20 hertz it instead creates an unpleasant, rattling sensation called roughness. Because real musical notes are complex mixtures of many overtones, there are several potential pairs of slightly detuned tones for any two-note chord. Helmholtz showed that beat-induced roughness is small for consonant intervals of such complex tones, but larger for dissonant intervals.
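Helmholtz’s mechanism is easy to sketch. Two pure tones beat at their difference frequency, and for complex tones one can count the overtone pairs that land in the rough zone. The 20–60 Hz band and the six partials below are my illustrative assumptions, not Helmholtz’s figures:

```python
import math

def beat_rate(f1, f2):
    """Two pure tones interfere to give beats at the difference frequency."""
    return abs(f1 - f2)

def percept(f1, f2):
    """Crude Helmholtz-style classifier: slow beats are heard as a pulsing
    loudness; beyond roughly 20 Hz the pulsing blurs into 'roughness'."""
    b = beat_rate(f1, f2)
    if b == 0:
        return "steady"
    return "beating" if b < 20 else "roughness"

def rough_pairs(f1, f2, n_partials=6, band=(20, 60)):
    """Count pairs of overtones from two complex tones whose difference
    falls in an (assumed) rough band of 20-60 Hz."""
    ps1 = [f1 * k for k in range(1, n_partials + 1)]
    ps2 = [f2 * k for k in range(1, n_partials + 1)]
    return sum(1 for a in ps1 for b in ps2 if band[0] <= abs(a - b) <= band[1])

# 440 Hz against 444 Hz beats slowly; a fifth above 220 Hz (330 Hz) yields
# no rough overtone pairs, while a tritone (220 * sqrt 2) yields some.
print(percept(440, 444), rough_pairs(220, 330), rough_pairs(220, 220 * math.sqrt(2)))
```

Counted this way, the fifth produces no rough overtone pairs among the first six partials while the tritone produces some, in line with Helmholtz’s observation.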

Shapira Lots and Stone argue rightly that their explanation for consonance can explain some aspects that Helmholtz’s cannot. But the reverse is true too: modern versions of Helmholtz’s theory can account for why the perception of roughness depends on absolute as well as relative pitch frequencies, so that even allegedly consonant intervals sound gruff when played in lower registers.

Good vibrations

There are more important reasons why the new work falls short of providing a full account of consonance and dissonance. For one thing, these terms have more than a single meaning. When Shapira Lots and Stone talk of ‘musical dissonance’, they actually mean what is known in music cognition as ‘sensory dissonance’ – the sensation of roughness. Musical dissonance is something else, and a matter of mere convention. As I say, the major third interval that now seems so pleasing to us was not recognized as consonant until the Renaissance, and only the octave was deemed consonant before the ninth century. And sensory dissonance is itself a poor guide to what people will judge to be pleasing. It's not clear, for example, that the fourth is actually perceived as more consonant than the major third [2]. And the music of Ravel and Debussy is full of ‘dissonant’ sixths, major sevenths and ninths that now seem rather lush and soothing.

But fundamentally, it isn’t clear that we really do have an intrinsic systematic preference for consonance. This is commonly regarded as uncontentious, but that’s far from true. It is certainly the case, as Shapira Lots and Stone say, that the musical systems of most cultures are based around the octave, and that intervals of a fifth are widespread too. But it’s hard to generalize beyond this. The slendro scale of Indonesian gamelan music, for instance, divides the octave into five roughly equal and somewhat variable pitch steps, with none of the resulting intervals corresponding to small-number frequency ratios.
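The slendro point is easy to check against an idealized version of the scale (five exactly equal divisions of the octave; real gamelan tunings wander from this, so it is only a caricature):

```python
import math

def cents(ratio):
    """Interval size in cents; 1200 cents = one octave."""
    return 1200 * math.log2(ratio)

# Classic small-number consonances of just intonation (upper/lower frequency)
consonances = [1/1, 6/5, 5/4, 4/3, 3/2, 8/5, 5/3, 2/1]

# Idealized slendro: each step is exactly one fifth of an octave
for k in range(1, 5):
    step = k * 1200 / 5  # interval above the tonic, in cents
    gap = min(abs(step - cents(r)) for r in consonances)
    print(f"step {k}: {step:.0f} cents, {gap:.0f} cents from the nearest consonance")
```

Every step sits at least about 18 cents, an easily audible margin, away from the nearest classic small-number consonance.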

Claims that infants prefer consonant intervals over dissonant ones [3] are complicated by the possibility of cultural conditioning. Babies can hear and respond to sound even in the womb, and they have a phenomenal capacity to assimilate patterns and regularities in their environment. A sceptical reading of experiments on infants and primates might acknowledge some evidence that both the octave and the fifth are privileged, but nothing more [4]. My guess is that the ‘neural synchrony’ argument, of which Shapira Lots and Stone offer the latest instalment, is on to something, but that harmony in Western music will turn out to lean more heavily on nurture than on nature.

References
1. Shapira Lots, I. and Stone, L. J. R. Soc. Interface doi:10.1098/rsif.2008.0143 (2008).
2. Krumhansl, C. L. Cognitive Foundations of Musical Pitch (Oxford University Press, 1990).
3. Schellenberg, E. G. and Trehub, S. E. Psychol. Sci. 7, 272–277 (1996).
4. Patel, A. Music, Language, and the Brain (Oxford University Press, 2008).

Bryan Appleyard has a nice blog about my book Universe of Stone. He says “Ball, in preferring earlier, starker Gothic to the later more decorative variety, teeters on the brink of the fallacy that has dogged architectural criticism of the last hundred years - the idea there is some necessary and rational connection between clearly expressed function and beauty.” I can see what he means, and why he may have got this impression. But my own preferences here are purely aesthetic: I find the profusion of crockets and the excesses of Flamboyant Gothic often mere clutter, as though the builders had lost faith in letting blank stone speak for itself. I do discuss in the book the Platonic notion that links beauty to intelligibility and order, an issue nicely dealt with in Umberto Eco’s book Art and Beauty in the Middle Ages. But I don’t necessarily intend to imply any advocacy of this position.

Bryan’s point is a reminder, however, that I should take care not to get too snobbish and purist about English and Late Gothic vaulting, with its lunatic mosaics of tiercerons and liernes and its fluted fans. These have their own over-enthusiastic charm, and we should just sit back and enjoy them.

Tuesday, June 10, 2008

You’re not a molecule, but sometimes you’re a statistic

The editorial in the latest issue of Nature, written by me, could in its edited form (my original draft is below) seem to present a capitulation to the view of social science advocated by Steve Fuller at Warwick, who has previously been highly critical of the statistical perspective discussed in my book Critical Mass. (My response to Fuller is here.) But that’s not really how it is. The pull quote (“The goal of social science is not simply to understand how people behave in large groups but to understand what motivates individuals to behave the way they do.”) is equally true in reverse, which is what Fuller seems blind to. I’m more than happy to make explicit what statistical ‘laws’ overlook. But to deny that group behaviour matters, or that it can differ from that predicted by linear extrapolation from individuals, is to deny the ‘social’ in social science, which seems to me a far more egregious oversight.

Fair point from the editor, though: it wouldn’t actually be hard at all to improve on Mill’s words, in the sense of leavening the Victorian stodge. But I hope the editorial doesn’t now seem to be implicitly critical of the González et al. paper that motivated it, on the grounds that it focuses on the masses and not the individual. This paper does reveal information about both. That very issue, however, has provoked an absurd level of hysteria in the wake of the news story we ran. It seems some people who haven’t bothered to read the paper are concerned about privacy. Makes you wonder what they have to hide (not that anyone would be finding out in any case, given that the data were rendered anonymous). Do these people ever stop to think what is happening to the data every time they make a purchase on their credit cards?

******

“Events which in their own nature appear most capricious and uncertain and which in any individual case no attainable degree of knowledge would enable us to foresee, occur, when considerable numbers are taken into account, with a degree of regularity approaching to mathematical.” It would be hard to improve on John Stuart Mill’s words to encapsulate the regularities found in human mobility patterns on page 779 of this issue. Who would have thought that something as seemingly capricious as the matter of where we go during our daily lives could yield such lawfulness?

One of the remarkable features of this work is not the results, however, but the methodology. Social scientists have long struggled with a paucity of hard data about human activities – social networks, say, movement patterns. Self-reporting is notoriously unreliable and labour-intensive. The use, in this case, of mobile phone networks to track individuals has supplied a data set of proportions almost unheard of for such a complex aspect of behaviour: over 16 million ‘hops’ for 100,000 people. The resulting statistics show a strikingly small scatter, giving grounds for confidence in the mathematical laws they disclose.

This adds to the examples of information technologies offering tools to the social scientist that provide a degree of quantification and precision comparable to the so-called ‘hard’ sciences. Community network structures can be derived from, say, email transmissions or automated database searches of scientific collaboration. Online schemes can even enable genuinely experimental study of behaviour in large populations, complete with control groups and tunable parameters.

Making sense of these data sets may require a rather different set of skills from the conventional statistical approaches used in the social sciences, which is why it is no surprise that studies like the present one are often conducted by those trained in the physical sciences, where there is a long tradition of investigating ‘complex systems’ of interacting entities. One view might be that this lends some prescience to the suggestion of sociologist George Lundberg in 1939: “It may be that the next great developments in the social sciences will come not from professed social scientists but from people trained in other fields.” Lundberg was a positivist eager for his field to adopt the methods of the natural sciences.

The ‘physicalization’ of the social sciences needs to be regarded with some caution, however. While some social scientists aim to understand the ways people behave in large groups, others insist that ultimately the goal is not to uncover bare statistical laws and regularities but to gain insight into what motivates individuals to behave the way they do. It is not clear that universal scaling functions can offer that: however vast the data set, the inverse problem of deriving the factors that produce it remains as challenging as ever. Statistical regularities may conjure up images of Adolphe Quetelet’s homme moyen, the ‘average man’ who not only tends to deny the richness of human behaviour but even threatens to impose a stifling behavioural norm.

It would be wrong to imply that the interest of these findings is restricted to the conventional boundaries of the social sciences. Epidemiologists, for instance, have traditionally been forced to work with very simple descriptions of dispersal and contact, for example based on diffusive models, for lack of any hard evidence to the contrary. But recent work has made it very clear that the topology and quantitative details of contact networks can have a qualitative impact on the transmission of disease. There is sure also to be commercial interest in information about patterns of usage for portable electronics, while the nature of mass human movement could inform urban planning and the development of transportation networks.

But for the social sciences proper, the latest results suggest both an opportunity and a challenging question: how much of social behaviour do we capture in statistical regularities, and how much do we overlook?

Wednesday, June 04, 2008

Yes, I do read my reviews

‘When the laws of physics defy the science of storytelling’ says the headline. Whoops, I’m in for it. But not completely. Ed Lake’s review of The Sun and Moon Corrupted in the Telegraph last weekend was not a complete stinker; I think it is what one calls ‘mixed’. He calls it a ‘fine piece of pop science’, and says that I ‘manage to deliver a surprising amount of actual science.’ Uh-oh – seems he detects a Djerassi-like agenda to sneak science in through the literary back door. Then we hear about superheroes and the X-Men and a ‘Dan Brown novel with weird science in place of crank Christology’, and I sense I’ve failed to land in the right field here. Can’t exactly blame anyone else for that, but let me just say now: I really don’t care if you learn any science from this book or not. Not a jot.

I’m not about to indecorously defend myself from criticism here. Ed Lake made a considered judgement, and that’s fine. He said some nice things, and some useful things, and he did a good job of conveying the essence of the plot. (I don’t, incidentally, take ‘overripe gothicism’ as a criticism, and I’m not quite sure if he meant it to be – it’s unclear if he wanted less of that, or more.) It’s just an interesting awakening to the world of fiction reviewing, where one unfortunately can’t say ‘this particular criticism was disproved in Physical Review Letters in 1991’. One person’s meat is another person’s demon-haunted brew from the foul swamps of Transylvania.