Monday, February 27, 2012

For Francophones, I have a piece in the February issue of La Recherche on spacetime cloaking, part of a special feature on invisibility. For some reason it’s not included in the online material. But here in any case is how it began in my mother tongue.
_______________________________________________________________

We all have experiences that we’d rather never happened – or perhaps that we just wish no one else had seen. Now researchers have shown how to carry out this kind of editing of history. They use the principles behind invisibility cloaks, which have already been shown to hide objects from light. But instead of hiding objects, we can hide events. In other words, we can apparently carve out a hole in spacetime so that no one on the outside can tell that whatever goes on inside it has ever taken place.

“Such speculations are not fantasy”, insist physicist Martin McCall of Imperial College in London and his colleagues, who came up with the idea last July [1]. They imagine a safe-cracker casting a spacetime cloak over the scene of the crime, so that he can open the safe and remove the contents while a security camera would see just a continuously empty room.

Suppose the cloak was used to conceal someone’s journey from one place to another. Because the device splices together the spacetime on either side of the ‘hole’, it would look as though the person vanished from the starting point and, in the blink of an eye, appeared at her destination. This would then create “the illusion of a Star Trek transporter”, the researchers say.

“It’s definitely a cool idea”, says Ulf Leonhardt, a specialist in invisibility cloaking at the University of St Andrews in Scotland. “Altering the history has been the metier of undemocratic politicians”, he adds, pointing to the way Soviet leaders would doctor photographs to remove individuals who had fallen from favour. “Now altering history has become a subject of physics.”

Lost in spacetime

Conventional invisibility cloaks hide objects by bending light rays around them and then bringing the rays back onto their original trajectory on the far side. That way, it looks to an observer as though the light has passed through an empty space where the hidden object resides. In contrast, the spacetime cloak would manipulate not the path of the rays but their speed. It would be made of materials that slow down light or speed it up. This means that some of the light that would have been scattered by the hidden event is ushered forward to pass before the event happens, while the rest is held back until after it.

These slowed and accelerated rays are then rejoined seamlessly so that there seems to be no gap in spacetime. It’s like bending rays in invisibility cloaks, except that they are bent not in space but in spacetime.

How do you slow down or speed up light? Both have been demonstrated already in some exotic substances such as ultracold gases of alkali metals: light has been both brought to a standstill and speeded up by a factor of 300, so that, bizarrely, a pulse seems to exit the system before it has even arrived. But the spacetime cloak needs to manipulate light in ways that are both simpler and more profound. Light is slowed down in any medium relative to its speed in a vacuum – that is precisely why it bends when it enters water or glass from air, causing the phenomenon of refraction. The amount of slowing down is measured by the refractive index: the bigger this value, the slower the speed relative to a vacuum.
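The relation in question is simply n = c/v: the refractive index is the ratio of light's vacuum speed to its speed in the medium. A minimal sketch in Python (the material indices are standard textbook values, not figures from the article):

```python
# Refractive index n relates the speed of light in a medium to its
# vacuum speed: v = c / n. The bigger n, the slower the light.
C_VACUUM = 299_792_458  # speed of light in vacuum, m/s

def speed_in_medium(n: float) -> float:
    """Phase velocity of light in a medium of refractive index n."""
    return C_VACUUM / n

# Approximate textbook indices for visible light
for material, n in [("vacuum", 1.00), ("water", 1.33), ("glass", 1.50)]:
    v = speed_in_medium(n)
    print(f"{material:>6}: n = {n:.2f}, v ~ {v / 1e6:.0f} million m/s")
```

On this picture, slowing light by a factor of 300 would correspond to an effective index of around 300, far beyond anything an ordinary transparent material provides.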

In a spacetime cloak, the light must simply be slowed or speeded up relative to its speed before it entered the cloak. If the cloak itself is surrounded by some cladding material, then the light must be speeded up or retarded only relative to this – there’s no need for fancy tricks that seem to make light travel faster than its speed in a vacuum.

But to obtain perfect and versatile cloaking demands some sophisticated manipulation of the light, for which you need more than just any old transparent materials. For one thing, you need to alter both the electric and the magnetic components of the electromagnetic wave. Most materials (such as glass), being non-magnetic, don’t affect the latter. What’s more, the effects on the electric and magnetic components must be the same, since otherwise some light will be reflected as it enters the material – in this case, making the cloak itself visible. When the electric and magnetic effects are equalized, the material is said to be “impedance matched”. “For a perfect device, we need to modulate the refractive index while also keeping it impedance matched”, explains Paul Kinsler, McCall’s colleague at Imperial.
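The impedance-matching condition can be made concrete. A medium's wave impedance is Z = sqrt(mu/epsilon), and for light hitting an interface head-on the reflected fraction of the field is r = (Z2 - Z1)/(Z2 + Z1). If permittivity and permeability are changed in step, the refractive index n = sqrt(mu_r * epsilon_r) changes while the reflection stays zero. A sketch with illustrative values (assumed for the example, not taken from the article):

```python
import math

def impedance(eps_r: float, mu_r: float) -> float:
    """Wave impedance relative to vacuum: Z = sqrt(mu_r / eps_r)."""
    return math.sqrt(mu_r / eps_r)

def refractive_index(eps_r: float, mu_r: float) -> float:
    """Refractive index: n = sqrt(mu_r * eps_r)."""
    return math.sqrt(mu_r * eps_r)

def reflection(z1: float, z2: float) -> float:
    """Normal-incidence reflection coefficient at an interface."""
    return (z2 - z1) / (z2 + z1)

z_air = impedance(1.0, 1.0)

# Ordinary glass: eps_r = 2.25, mu_r = 1 -> impedance mismatch, so
# some light is reflected even though the glass is transparent.
z_glass = impedance(2.25, 1.0)
print(f"glass:   n = {refractive_index(2.25, 1.0):.2f}, "
      f"reflection = {reflection(z_air, z_glass):+.2f}")

# Impedance-matched medium: eps_r = mu_r = 1.5 gives the same n = 1.5
# but zero reflection -- the cloak material itself stays invisible.
z_matched = impedance(1.5, 1.5)
print(f"matched: n = {refractive_index(1.5, 1.5):.2f}, "
      f"reflection = {reflection(z_air, z_matched):+.2f}")
```

This is why non-magnetic materials won't do for a perfect device: with mu_r pinned at 1, any change in the index necessarily changes the impedance too.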

Hidden recipe

There aren’t really any ordinary materials that would satisfy all these requirements. But they can be met using the same substances that have been used already to make invisibility shields: so-called metamaterials. These are materials made from individual components that interact with electromagnetic radiation in unusual ways. Invisibility cloaks for microwaves have been built in which the metamaterial ‘atoms’ are little electrical circuits etched into copper film, which can pick up the electromagnetic waves like antennae, resonate with them, and re-radiate the energy. Because the precise response of these circuits can be tailored by altering their size and shape, metamaterials can be designed with a range of curious behaviours. For example, they can be given a negative refractive index, so that light rays are bent the wrong way. “Metamaterials that work by resonance offer a large range of strong responses that allow more design freedom”, says Kinsler. “They are also usually designed to have both electric and magnetic responses, which will in general be different from one another.”

Using a combination of these materials, McCall and colleagues offer a prescription for how to put together a spacetime cloak. It’s a tricky business: to divert light around the spacetime hole, one needs to change the optical properties of the cloaking material over time in a particular sequence, switching each layer of material by the right amount at the right moment. “The exact theory requires a perfectly matched and perfectly timed set of changes to both the electric and magnetic properties of the cloak”, says Kinsler.

The result, however, is a sleight of hand more profound than any that normal invisibility shields can offer. “If you turn an ordinary invisibility cloak on and off, you will see a cloaked object disappear and reappear”, explains Kinsler. “With our concept, you never see anything change at all.” At least, not from one side. The spacetime hole opened up by the cloak is not symmetrical – it operates from one side but not the other (although the cloak itself would be invisible from both directions). So an observer on one side might see an event that an observer on the other side will swear never took place.

Could such a device really be used to hide events in the macroscopic world? Physicist John Pendry, also at Imperial (but not part of McCall’s group) and one of the pioneers of invisibility cloaks, considers that unlikely. But he agrees with McCall and colleagues that there might well be more immediate and more practical applications for the technique. “Possible uses might be in a telecommunications switching station, where several packets of information might be competing for the same channel”, he says. “The time cloak could engineer a seamless flow in all channels” – by cloaking interruptions of one signal by another, it would seem as though all had simultaneously flowed unbroken down the same channel.

There could be some more fundamental implications of the work too. This manipulation of spacetime is analogous to what happens at a black hole. Here, light coming from the region near the hole is effectively brought to a standstill at the event horizon, so that time itself seems to be arrested there: an object falling into the hole seems, to an outside observer, to be stopped forever at the event horizon. The parallel between transformation optics and black-hole physics has been pointed out by Leonhardt and his coworkers, who in 2008 revealed an optical analogue of a black hole made from optical fibres. Leonhardt says that the analogy exists for spacetime cloaks also, and that therefore these systems might be used to create the analogue of Hawking radiation: the radiation predicted by Stephen Hawking to be emitted from black holes as a result of the quantum effects of the distortion of spacetime. Such radiation has yet to be detected in astronomical observations of real black holes, but its production at the edge of a spacetime ‘hole’ made by cloaking would provide strong support for Hawking’s idea.

Unlike black holes, however, a spacetime cloak doesn’t really distort spacetime – it just looks as though it does. “I can certainly imagine a transformation device that gives the illusion that causal relationships are distorted or even reversed – a causality editor, rather than our history editor”, says Kinsler. “But the effects generated are only an illusion.”

In the pipeline

In order to manipulate visible light, the component ‘atoms’ of a metamaterial have to be smaller than the wavelength of the light – less than a micrometre. This means that, while microwave invisibility cloaks have been put together from macroscale components, optical metamaterials are much harder to make.

There’s an easier way, however. Some researchers have realised that another way to perform the necessary light gymnastics is to use transparent substances with unusual optical properties, such as birefringent minerals in which light travels at different speeds in different directions. Objects have been cloaked from visible light in this way using carefully shaped blocks of the mineral calcite (Iceland spar).

In the same spirit, McCall and colleagues realised that sandwiches of existing materials with ‘tunable’ refractive indices might be used to make ‘approximate’ spacetime cloaks. For example, one could use optical fibres whose refractive indices depend on the intensity of the light passing through them. A control beam would manipulate these properties, opening and closing a spacetime cloak for a second beam.

However, as with the ‘simple’ invisibility cloaks made from calcite, the result is that although the object or event can be fully hidden, the cloak itself is not: light is still reflected from it. “Although the event itself can in principle be undetectable, the cloaking process itself isn't”, Kinsler says.

This idea of manipulating the optical properties of optical fibres for spacetime cloaking has already been demonstrated by Moti Fridman and colleagues at Cornell University [2]. Stimulated by the Imperial team’s proposal, they figured out how to put the idea into practice. They use so-called ‘time lenses’, which modify how a light wave propagates not in space, like an ordinary lens, but in time. Just as an ordinary lens redirects light in space, and can thus be used to spread out or focus a beam, so a time lens uses the phenomenon of dispersion (the frequency dependence of the speed at which light travels through a medium) to separate frequencies in time, slowing some of them down relative to others.

Because of this equivalence of space and time in the two types of lens, a two-part ‘split time-lens’ can bend a probe beam around a spacetime hole in the same way as two ordinary lenses could bend a light beam around either side of an object to cloak it in space. In the Cornell experiment, a second split time-lens then restored the probe to its original state. In this way, the researchers could temporarily hide the interaction between the probe beam and a second short light pulse, which would otherwise cause the probe signal to be amplified. Fridman and colleagues presented their findings at a Californian meeting of the Optical Society of America in October. “It's a nice experiment, and achieved results remarkably quickly”, says Kinsler. “We were surprised to see it – we were expecting it might take years to do.”
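The scale of the dispersion a time lens works with is easy to estimate: in optical fibre, the arrival-time spread of a pulse is roughly delta_t = D x L x delta_lambda, where D is the fibre's dispersion parameter. A back-of-envelope sketch (the fibre figures are typical textbook values, not the Cornell experiment's parameters):

```python
def dispersion_delay_ps(D_ps_per_nm_km: float, length_km: float,
                        bandwidth_nm: float) -> float:
    """Temporal spread of a pulse: delta_t = D * L * delta_lambda."""
    return D_ps_per_nm_km * length_km * bandwidth_nm

# Standard single-mode fibre near 1550 nm has D of roughly 17 ps/(nm*km),
# so a pulse 1 nm wide travelling through 1 km of fibre spreads by about
# 17 ps -- the picosecond timescale on which such cloaks operate.
spread = dispersion_delay_ps(17.0, 1.0, 1.0)
print(f"Temporal spread: {spread:.0f} ps")
```

The point of the sketch is just the orders of magnitude: ordinary fibre dispersion naturally delivers picosecond-scale temporal shaping, which is why the first demonstrations open gaps of picoseconds rather than seconds.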

But the spacetime cloaking in this experiment lasts only for a fleeting moment – about 15 picoseconds (trillionths of a second). And Fridman and colleagues admit that the material properties of the optical fibres themselves will make it impossible to extend the gap beyond a little over one millionth of a second. So there’s much work to be done to create a more complete and longer-lasting cloak. In the meantime, McCall and Kinsler have their eye on other possibilities. Perhaps, they say, we could also edit sound this way by applying the same principles to acoustic waves. As well as hiding things you wish you’d never done, might you be able to literally take back things you wish you’d never said?

Friday, February 24, 2012

Well, I'm here and just thought it possible that someone in NYC might see this before tomorrow (25 Feb) is up. I'm taking part in this event, linked to David Rothenberg's excellent new book. It's free (the event, not the book), and promises to be great fun. If you're in Manhattan - see you tomorrow?

Thursday, February 16, 2012

I wrote a leader for this week’s Nature on the forthcoming talks for an international Arms Trade Treaty. Here’s the original version.
________________________________________________________________

Scientists have always been some of the strongest voices among those trying to make the world a safer place. Albert Einstein’s commitment to international peace is well known; Andrei Sakharov and Linus Pauling are among the scientists who have been awarded the Nobel Peace Prize, as is Joseph Rotblat, the subject of a new biography (see Nature 481, 438; 2012), in conjunction with the Pugwash organization that he helped to found. This accords not only with the internationalism of scientific endeavour but with the humanitarian goals that mostly motivate it.

At the same time, the military applications of science and technology are never far from view, and defence funding supports a great deal of research (much of it excellent). There need be no contradiction here. Nations have a right to self-defence, and armed forces are increasingly deployed for peacekeeping rather than aggression. But what constitutes responsible use of military might is delicate and controversial, and peacekeeping is generally necessary only because aggressors have been supplied with military hardware in the first place.

Arms control is a thorny subject for scientists. When, at a session on human rights at a physics conference several years ago, Nature asked if the evident link between the arms trade and human-rights abuses might raise ethical concerns about research on offensive weaponry, the panel shuffled their feet and became tongue-tied.

There are no easy answers to the question of where the ethical boundaries of defence research lie. But all responsible scientists should surely welcome the progress in the United Nations towards an international Arms Trade Treaty (ATT), for which a preparatory meeting in New York next week presages the final negotiations in July. The sale of weapons, from small arms to high-tech missile systems, hinders sustainable development and progress towards the UN’s Millennium Development Goals, and undermines democracy.

Yet there are dangers. Some nations will attempt to have the treaty watered down. That the sole vote against the principle at the UN General Assembly in October 2009 was from Zimbabwe speaks volumes about likely reasons for opposition. But let’s not overlook the fact that in the previous vote a year earlier, Zimbabwe was joined by one other dissenter: the United States, still at that point governed by George W. Bush’s administration. Would any of the current leading US Republican candidates be better disposed towards an ATT?

Paradoxical as it might seem, however, a binding international treaty on the arms trade is not necessarily a step forward anyway. Most of the military technology used for recent human-rights abuses was obtained by legal routes. Such sales from the UK, for example, helped Libya’s former leaders to suppress ‘rebels’ in 2011 and enabled Zimbabwe to launch assaults in the Democratic Republic of Congo in the 1990s.

The British government admits that it anticipates that the Arms Trade Treaty, which it supports, will not reduce arms exports. It says that the criteria for exports “would be based on existing obligations and commitments to prevent human rights abuse” – which have not been notably effective. According to the UK’s Foreign and Commonwealth Office (FCO), the ATT aims “to prevent weapons reaching the hands of terrorists, insurgents and human rights abusers”. But as Libya demonstrated, one person’s insurgents are another’s democratizers, while today’s legitimate rulers can become tomorrow’s human-rights abusers.

The FCO says that the treaty “will be good for business, both manufacturing and export sales.” Indeed, arms manufacturers support it as a way of levelling the market playing field. The ATT could simply legitimize business as usual by more clearly demarcating it from a black market, and will not cover peripheral military hardware such as surveillance and IT systems. Some have argued that the treaty will be a mere distraction from the real problem of preventing arms reaching human-rights violators (D. P. Kopel et al., Penn State Law Rev. 114, 101–163; 2010).

So while there are good reasons to call for a strong ATT, it is no panacea. The real question is what a “responsible” arms trade could look like, if this isn’t merely oxymoronic. That would benefit from some hard research on how existing, ‘above-board’ sales have affected governance, political stability and socioeconomic conditions worldwide. Such quantification is challenging and contentious, but several starts have been made (for example, www.unidir.org and www.prio.no/nisat). We need more.

Tuesday, February 14, 2012

With the shrill cries of new atheists ringing in my ears (you would not believe some of that stuff, but I won’t go there), I read John Gray’s review of Alain de Botton’s book Religion for Atheists in the New Statesman and it is as though someone has opened a window and let in some air – not because of the book, which I’ve not read, but because of what John says. Sadly you can’t get it online: the nearest thing is here.

Sunday, February 12, 2012

This piece in the Guardian has caused a little storm, and I’m not so naive as to be totally surprised by that. There’s much I could say about it, but frankly it never helps. I’m tired of how little productive dialogue ever seems to stem from these things and figure I will just leave the damned business alone (no doubt to the delight of the more rabid detractors). I will say here only a few things about this pre-edited version, which was necessarily slimmed down to fit the slot in the printed paper: (1) to those who thought I was saying “hey, wouldn’t it be a great idea if sociologists studied religion”, note the reference to Durkheim as a shorthand way of acknowledging that this notion goes back a long, long way; (2) note that I’m not against everything Dawkins stands for on this subject – I agree with him on more than just the matter of faith schools mentioned below, although of course I do disagree with other things. The “for us or against us” attitude that one seems to see so much of in online discussions is the kind of infantile stance that I figure we should be leaving to the likes of George W. Bush.
______________________________________________________________

The research reported this week showing that American Christians adjust their concept of Jesus to match their own sociopolitical persuasion will surely surprise nobody. Liberals regard Christ primarily as someone who promoted fellowship and caring, say psychologist Lee Ross of Stanford University in California and his colleagues, while conservatives see him as a firm moralist. In other words, he’s like me, only more so.

Yes, it’s pointing out the blindingly obvious. Yet the work offers a timely reminder of how religious thinking operates, one that has so far been resolutely resisted by some strident “new atheists”.

You might imagine that it’s uncontentious to suggest that religion is essentially a social phenomenon, not least because particular varieties of it – fundamentalist, tolerant, mystical – tend to develop within specific communities united by geography or cultural ties rather than arising at random throughout society. Without entering the speculative debate about whether religiosity has become hardwired by evolution, it seems clear enough that specific types of religious behaviour are as prone to be transmitted through social networks as are, say, obesity and smoking.

Bizarrely, this is ignored by some of the most prominent opponents of religion today. Arguments about science and religion are mostly conducted as if Emile Durkheim had never existed, and all that matters is whether or not religious belief is testable. Many atheists prefer to regard religion as a virus that jumps from one hapless individual to another, or a misdirection of evolutionary instincts – in any case, curable only with a strong shot of reason. These epidemiological and Darwinian models have an elegant simplicity that contamination with broader social and cultural factors would spoil. Yet the result is akin to imagining that Africa’s AIDS crisis can be solved without any attempt to understand African societies.

Thus arch new atheist Sam Harris swatted away my suggestion that we might approach religious belief as a social construct with the contemptuous comment that I was saying something “either trivially true or obscurantist”. I find it equally peculiar that chemist Harry Kroto should insist that “I am not interested in why religion continues” while so devoutly wishing that it would not.

At face value, this apparent lack of interest in how religion actually manifests and propagates in society is odd coming from people who so loudly deplore its prevalence. But I think it may not be so hard to explain.

For one thing, regarding religion as a social phenomenon would force us to see it as something real, like governments or book groups, and not just a self-propagating delusion. It is so much safer and easier to ridicule a literal belief in miracles, virgin births and other supernatural agencies than to consider religion as (among other things) one of the ways that human societies have long chosen to organize their structures of authority and status, for better or worse.

It also means that one might feel compelled to abandon the heroic goal of dislodging God from his status as Creator in favour of asking such questions as whether particular socioeconomic conditions tend to promote intolerant fundamentalism over liberal pluralism. It turns a Manichean conflict between truth and ignorance into a mundane question of why some people are kind or beastly towards others. Yet to suggest that we can relax about some forms of religious belief – that they need offer no obstacle to an acceptance of scientific inquiry and discovery, and will not demand the stoning of infidels – is already, for some new atheists, to have conceded defeat. They will not have been pleased with David Attenborough’s gentle agnosticism on Desert Island Discs, although I doubt that they will dare say so.

The worst of it is that to reject an anthropological approach to religion is, in the end, unscientific. To decide to be uninterested in questions of how and why societies have religion, of why it has the many complexions that it does and how these compete, is a matter of personal taste. But to insist that these are pointless questions is to deny that this important aspect of human behaviour warrants scientific study. Harris’s preference to look to neuroscience – to the individual, not society – will only get you so far, unless you want to argue that brains evolved differently in Kansas (tempting, I admit).

Richard Dawkins is right to worry that faith schools can potentially become training grounds for intolerance, and that daily indoctrination into a particular faith should have no place in education. But I’m sure he’d agree that how people formulate their specific religious beliefs is a much wider question than that. The Stanford research reinforces the fact that a single holy book can provide the basis both for a permissive, enquiring and pro-scientific outlook (think tea and biscuits with Richard Coles) and for apocalyptic, bigoted ignorance (think a Tea Party with Sarah Palin). Might we then, as good scientists alert to the principles of cause and effect, suspect that the real ills of religion originate not in the book itself, but elsewhere?

Friday, February 10, 2012

I have a review of a book about John Dee in the latest issue of Nature. Here's how it started.
_______________________________________________________

The Arch-Conjuror of England: John Dee
by Glyn Parry
Yale University Press, 2011
ISBN 978-0-300-11719-6
335 pages

The late sixteenth-century mathematician and alchemist John Dee exerts a powerful grip on the public imagination. In recent times, he has been the subject of several novels, including The House of Doctor Dee by Peter Ackroyd, and inspired the pop opera Doctor Dee by Damon Albarn of the group Blur. Now, in The Arch-Conjuror of England, historian Glyn Parry gives us probably the most meticulous account of Dee’s career to date.

In some ways, all this attention seems disproportionate. Dee was less important in the philosophy of natural magic than such lesser-known individuals as Giambattista Della Porta and Cornelius Agrippa, and less significant as a transitional figure between magic and science than his contemporaries Della Porta, Bernardino Telesio and Tommaso Campanella (the latter pair both anti-Aristotelian empiricists from Calabria). Dee’s works, such as the notoriously opaque Monas hieroglyphica, in which the unity of the cosmos was represented in a mystical symbol, were widely deemed impenetrable even in his own day.

There’s no doubt that Dee was prominent during the Elizabethan age – he probably provided the model for both Shakespeare’s Prospero and Ben Jonson’s Subtle in the satire The Alchemist. Yet what surely gives Dee his allure more than anything else is the same thing that lends glamour to Walter Raleigh, Francis Drake and Philip Sidney: they all fell within the orbit of Queen Elizabeth herself. Benjamin Woolley’s earlier biography of Dee draws explicitly on this connection, calling him ‘the queen’s conjuror’. Yet in a real sense he was precisely that, on and off, as his fortunes waxed and waned in the fickle, treacherous Elizabethan court.

There is no way to make sense of Dee without embedding him within the magical cult of Elizabeth, just as this holds the key to Spenser’s epic poem The Faerie Queene and to the flights of fancy in A Midsummer Night’s Dream. To the English, the reign of Elizabeth heralded the dawn of a mystical Protestant awakening. In Germany that dream died in the brutal Thirty Years War; in England it spawned an empire. Dee coined the phrase ‘the British Empire’, but his vision was less colonialist than a magical yoking of Elizabeth to the Arthurian legend of Albion.

It is one of the strengths of Glyn Parry’s book that he shows how deeply woven magic and the occult sciences were into the fabric of early modern culture. Elizabeth was particularly knowledgeable about alchemy. After all, why would a monarch who had no reason to doubt the possibility of transmuting base metals into gold pass up this chance to fill the royal coffers? The queen was desperate to lure Dee’s former associate, the slippery Edward Kelley, back to England after he left with Dee for Poland and Prague in 1583, because she believed he could make the philosopher’s stone. The Holy Roman Emperor Rudolf II was equally eager to keep Kelley in Bohemia, making him a baron. Even Dee’s involvement in the failed quest of the adventurer Martin Frobisher to find a northwest passage to the Pacific had an alchemical tint when it was rumoured that Frobisher had found gold-containing ore.

The relationship with Kelley is another element of the popular fascination with Dee. Kelley claimed to be able to converse with angels via Dee’s crystal ball, and Dee’s faith in Kelley’s prophecies and angelic commands never wavered even when the increasingly deranged Kelley told him that the angels had commanded them to swap wives. The inversion of the servant-master relationship as Kelley’s reputation grew in Bohemia makes Dee a pathetic figure towards the end of their ill-fated excursion on the continent – forced on them after Dee blundered in Elizabeth’s court.

He was always doing that. However brilliant his reputation as a magician and mathematician, Dee was hopeless at court politics, regularly backing the wrong horse. He ruined his chances in Prague by passing on Kelley’s angelic reprimand to Rudolf for his errant ways. But Dee can’t be held entirely to blame. Parry makes it clear just how miserable it was for any courtier trying to negotiate the subtle currents of the court, especially in England where the memory of Mary I’s brief and bloody reign still hung in the air along with a lingering fear of papist plots.

Meticulous as Parry’s account is, the details aren’t always given shape. Often the political intrigues become as baffling and Byzantine for the reader as they must have been for Dee. But what I really missed was context. It is hard enough to locate Dee in history without hearing about other contemporary figures who also sought to expand natural philosophy, such as Della Porta and Francis Bacon. Bacon in particular was another intellectual whose grand schemes and attempts to gain the queen’s ear were hampered by court rivalries.

But to truly understand Dee’s significance, we need more than the cradle-to-grave story. For example, although Parry patiently explains the numerological and symbolic mysticism of Dee’s Monas hieroglyphica, its preoccupation with divine and Adamic languages can seem sheer delirium if not linked to, say, the later work of the German Jesuit Athanasius Kircher (the most Dee-like figure of the early Enlightenment) or of John Wilkins, one of the Royal Society’s founders.

Likewise, it would have been easier to evaluate Dee’s mathematics if we had been told that this subject had, even until the mid-seventeenth century, a close association both with witchcraft and with mechanical ingenuity, at which Dee also excelled. Wilkins’ Mathematical Magick (1648) was a direct descendant of Dee’s famed Mathematical Preface to a new English edition of Euclid. We’d never know from this book that Dee influenced the early modern scientific world via the likes of Robert Fludd, Elias Ashmole and Margaret Cavendish, nor that his works were studied by none other than Robert Boyle, and probably by Isaac Newton. Parry has assembled an important contribution to our understanding of how magic became science. It’s a shame he didn’t see it as part of his task to make that connection.

Thursday, February 02, 2012

Here’s my latest Muse for Nature News. But while I’m in that neck of the woods, I very much enjoyed the piece on Dickens in the latest issue. Yes, even Nature is in on that act.
____________________________________________________________

“The people who cast the votes decide nothing”, Josef Stalin is reputed to have said. “The people who count them decide everything.” Little has changed in Russia, if the findings of a new preprint are to be believed. Peter Klimek of the Medical University of Vienna in Austria and his colleagues say that the 2011 election for the Duma (the lower Federal Assembly) in Russia, won by Vladimir Putin’s United Russia party with 49 percent of the votes, shows a clear statistical signature of ballot-rigging [1].

This is not a new accusation. Some have claimed that the Russian statistics show suspicious peaks at multiples of 5 or 10 percent, as though ballot officials simply assigned rounded proportions of votes to meet pre-determined figures. And in December the Wall Street Journal conducted its own analysis of the statistics, which led political scientists at the Universities of Michigan and Chicago to concur that there were signs of fraud.

Naturally, Putin denies this. But if you suspect that neither he nor the Wall Street Journal is exactly the most neutral of sources on Russian politics, Klimek and colleagues offer a welcome alternative. They say that the statistical distribution of votes in the Duma election shows over a hundred times more skew than a normal (bell-curve or gaussian) distribution, the expected outcome of a set of independent choices.

The same is true for the contested Ugandan election of February 2011. Both of these statistical distributions are, even at a glance, profoundly different from those of recent elections in, say, Austria, Switzerland and Spain.

Breaking down the numbers into scatter plots of regional votes lays the problems bare. For both Russia and Uganda these distributions are bimodal. Distortion in the main peak suggests ballot rigging which, for Russia, afflicts about 64 percent of districts.

But the second, smaller peaks reveal much cruder fraud. These correspond to districts showing both 100 percent turnout and 100 percent votes for the winning party. As if.
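The arithmetic behind both fingerprints can be sketched in a few lines. Everything below is synthetic and illustrative: the district counts, the 45 percent baseline and the block of stuffed districts are invented numbers, not Klimek's data.

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: the third standardized moment (near zero for gaussian data)."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

random.seed(0)

# Synthetic 'clean' election: district vote shares scattered around 45 percent.
clean = [min(max(random.gauss(0.45, 0.05), 0.0), 1.0) for _ in range(5000)]

# Synthetic 'rigged' election: the same, plus a block of districts
# reporting 100 percent turnout and 100 percent for the winner.
rigged = clean[:4500] + [1.0] * 500

print(round(skewness(clean), 2))   # near zero for gaussian-like data
print(round(skewness(rigged), 2))  # strongly non-zero: the rigged tail distorts it

# Crude-fraud flag: districts sitting at (or within a hair of) the 100/100 corner.
suspicious = [v for v in rigged if v > 0.99]
print(len(suspicious))  # 500
```

The second, bimodal distribution shows up immediately in the skewness, even before anyone plots a histogram; the 100-percent districts show up as a simple threshold count.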

It’s good to see science expose these corruptions of democracy. Yet science also hints that democracy isn’t quite what it’s popularly sold as anyway. Take the choice of voting system. One of the most celebrated results of the branch of economics known as social choice theory is that there can be no perfectly fair means of deciding the outcome of a democratic vote. Possible voting schemes are manifold, and their relative merits hotly debated: first-past-the-post (the UK), proportional representation (Scandinavia), schemes for ranking candidates rather than simply selecting one, and so on.

But as economics Nobel laureate Kenneth Arrow showed in the 1950s, none of these systems, nor any other, can satisfy all the criteria of fairness and logic one might demand [2]. For example, a system under which candidate A would be elected from A, B and C should ideally also select A if B is the only alternative. What Arrow’s ‘impossibility theorem’ implies is that either we need to accept that democratic majority rule has some undesirable consequences or we need to find alternatives – which no one has.
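The condition just described can fail in practice with first-past-the-post voting. Here is a minimal worked example using a hypothetical nine-voter electorate (the ballots are invented for illustration):

```python
from collections import Counter

# Nine hypothetical voters; each ballot ranks the candidates
# from most to least preferred.
ballots = (
    [("A", "B", "C")] * 4 +   # 4 voters: A > B > C
    [("B", "C", "A")] * 3 +   # 3 voters: B > C > A
    [("C", "B", "A")] * 2     # 2 voters: C > B > A
)

def plurality_winner(ballots, candidates):
    """First-past-the-post: each voter's top choice among the candidates gets one vote."""
    tops = Counter(next(c for c in ballot if c in candidates) for ballot in ballots)
    return tops.most_common(1)[0][0]

print(plurality_winner(ballots, {"A", "B", "C"}))  # A wins the three-way race
print(plurality_winner(ballots, {"A", "B"}))       # but B beats A head-to-head
```

A is elected from the field of three, yet loses outright once C drops out and C's supporters transfer to B: exactly the kind of inconsistency Arrow showed no voting system can rule out entirely.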

Other considerations can undermine the democratic principle too, such as when a two-party vote falls within the margin of statistical error. As the Bush vs Gore US election of 2000 showed, the result is then not democratic but legalistic.

And analysis of voting statistics suggests that, regardless of the voting system, our political choices are not free and independent (as most definitions of democracy pretend) but partly the collective result of peer influence. That is one – although not the only – explanation of why some voting statistics don’t follow a gaussian distribution but instead a relationship called a power law [3,4]. Klimek and colleagues find less extreme but significant deviations from gaussian statistics in their analysis of ‘unrigged’ elections [1], which they assume to result from similar collectivization, or as they put it, voter mobilization.

A key premise of current models of voting and opinion formation [5,6] is that most social consensus arises from mutual influence and the spreading of opinion, not from isolated decisions. On the one hand you could say this is just how democratic societies work. On the other, it makes voting a nonlinear process in which small effects (media bias or party budgets, say) can have disproportionately big consequences. At the very least, it makes voting a more complex and less transparent process than is normally assumed.
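A toy sketch gives the flavour of such peer-influence dynamics: a bare-bones ‘voter model’ in which agents simply imitate one another. The 100-agent population and all-to-all mixing are simplifying assumptions for illustration, not the actual models of refs [5,6].

```python
import random

random.seed(1)

# Voter model sketch: each step, one agent copies the opinion of another
# agent chosen at random. Pure imitation, with no external persuasion,
# drives the population all the way to unanimity.
n = 100
opinions = [random.choice((0, 1)) for _ in range(n)]

steps = 0
while 0 < sum(opinions) < n:   # loop until everyone agrees
    i, j = random.randrange(n), random.randrange(n)
    opinions[i] = opinions[j]  # agent i imitates agent j
    steps += 1

print(sum(opinions) in (0, n))  # True: full consensus reached
```

Which opinion wins is a matter of chance, and small initial imbalances or a few loud early adopters can tip the whole outcome: the nonlinearity the column describes.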

This isn’t to invalidate Churchill’s famous dictum that democracy is the least bad political system. But let’s not fool ourselves about what it entails.