Over the holiday period, the arXivblog is running a selection of the most popular posts from 2008

5 November 2008: Cloaking objects at a distance

One of the disadvantages of invisibility cloaks is that anything placed inside one is automatically blinded, since no light can get in.

Now Yun Lai and colleagues from The Hong Kong University of Science and Technology have come up with a way round this using the remarkable idea of cloaking at a distance. This involves using a “complementary material” to hide an object outside it.

5 August 2008: Quantum communication: when 0 + 0 is not equal to 0

One of the lesser known cornerstones of modern physics is Claude Shannon’s mathematical theory of communication which he published in 1948 while juggling and unicycling his way around Bell Labs.

Shannon’s theory concerns how a message created at one point in space can be reproduced at another point in space. He calls the conduit for such a process a channel and the limits imposed by the universe on this process the channel capacity.
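Channel capacity can be made concrete with the textbook example of a binary symmetric channel, which flips each transmitted bit with probability p. This is a standard illustration, not taken from the post:

```python
import math

def binary_entropy(p):
    """Shannon entropy H(p) in bits for a binary source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel
    with crossover probability p: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))  # noiseless channel: 1 bit per use
print(bsc_capacity(0.5))  # pure noise: 0 bits per use
```

A channel that flips bits half the time carries nothing at all, which is the kind of hard limit Shannon's theory imposes.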

29 August 2008: Do nuclear decay rates depend on our distance from the sun?

Here’s an interesting conundrum involving nuclear decay rates.

We think that the decay rates of elements are constant regardless of the ambient conditions (except in a few special cases where beta decay can be influenced by powerful electric fields).

So that makes it hard to explain the curious periodic variations in the decay rates of silicon-32 and radium-226 observed by groups at the Brookhaven National Laboratory in the US and at the Physikalisch-Technische Bundesanstalt in Germany in the 1980s.

28 April 2008: First superheavy element found in nature

The hunt for superheavy elements has focused on banging various heavy nuclei together and hoping they’ll stick. In this way, physicists have extended the periodic table by manufacturing elements 111, 112, 114, 116 and 118, albeit for vanishingly small instants. Although none of these elements is particularly long lived, their lifetimes don’t fall off progressively, and this is taken as evidence that islands of nuclear stability exist out there and that someday we’ll find stable superheavy elements.

If you’ve ever used speech recognition software, you’ll know how often it fails to work well. Recognition rates are nowhere near what is needed for anything but the simplest applications.

So a new approach for analysing speech by Yuri Andreyev and Maxim Koroteev at the Institute of Radioengineering and Electronics of the Russian Academy of Sciences in Moscow is welcome. Their approach is to treat the production of speech as a chaotic phenomenon.

That’s a significant difference compared with previous approaches which predict the next point in a speech signal by extrapolating from previous points in a linear fashion.
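To make "extrapolating in a linear fashion" concrete, here is a generic least-squares linear predictor, a minimal sketch rather than the specific method any of those earlier approaches used:

```python
import numpy as np

def linear_predict(signal, order=4):
    """Predict each sample as a linear combination of the previous
    `order` samples, with coefficients fit by least squares.
    A minimal sketch of the classical linear-prediction idea."""
    n = len(signal)
    # Matrix of lagged samples and the vector of targets.
    X = np.array([signal[i - order:i] for i in range(order, n)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coeffs  # predictions for samples order..n-1

# A pure sinusoid satisfies an exact linear recurrence,
# so it is perfectly linearly predictable.
t = np.linspace(0, 1, 200)
s = np.sin(2 * np.pi * 5 * t)
pred = linear_predict(s)
print(np.max(np.abs(pred - s[4:])))  # essentially zero
```

The point of the chaotic view is that real speech, unlike the sinusoid above, resists this kind of linear extrapolation.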

That works because the organs that produce speech, the vocal cords, change over a much longer timescale than the sound they produce. So they can be considered essentially stationary for this type of analysis.

Of course, one of the characteristics of chaos is that very small changes in starting conditions can produce large changes in output. So if speech really is chaotic, what kind of chaos are we talking about?

Andreyev and Koroteev answer this question by measuring the frequency and amplitude of the sound a person makes when saying various vowels and consonants. They then use this data to reconstruct the multidimensional phase space in which the chaotic signal is produced.
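The post doesn't spell out the reconstruction procedure. A standard way to rebuild a phase space from a single measured signal is delay-coordinate embedding (Takens' method); the sketch below assumes that technique, and the authors' own reconstruction may differ:

```python
import numpy as np

def delay_embed(signal, dim=3, tau=5):
    """Reconstruct a phase-space trajectory from a scalar signal
    using delay coordinates: each point is
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau]).
    A standard technique; used here as an illustrative assumption."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack(
        [signal[i * tau : i * tau + n] for i in range(dim)]
    )

# Embed a short synthetic quasi-periodic signal standing in
# for a recorded vowel.
t = np.arange(1000)
x = np.sin(0.07 * t) + 0.5 * np.sin(0.21 * t)
points = delay_embed(x, dim=3, tau=5)
print(points.shape)  # (990, 3)
```

Plotting the rows of `points` traces out the attractor, which is the kind of structure a phase portrait makes visible.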

The results are interesting because specific vowels appear to be linked to unique structures in the phase space. Andreyev and Koroteev call these structures phase portraits. The picture above is a phase portrait of the vowel sound ‘a’.

It’s a little harder to identify the shapes associated with consonants, and the researchers haven’t yet tried with other sounds such as diphthongs.

It’s a long step from here to speech recognition, but in principle it could be done by looking for the phase portraits of specific phonemes and using them to spell out words.

The question, of course, is whether this would be easier or harder than current approaches.

The notion of quantum gravity has mystified many physicists, not least because there has never been a prospect of measuring the fabric of the universe on this scale. That looks set to change.

A few years back, a number of physicists suggested that atom interferometry might do the trick. The thinking was that two atoms sent on different routes of equal length through space would then be made to interfere.

If spacetime is smooth and neat, the atoms should produce a certain set of fringes. But if spacetime on the Planck scale were a maelstrom of quantum fluctuations, these would force the atoms to travel slightly different paths, and that would be picked up by the interferometer.

Sadly, it turns out that atom interferometers are nowhere near sensitive enough to detect these fluctuations and unlikely to become sensitive enough any time soon. The reason is that every three orders of magnitude increase in the sensitivity of the interferometer gives you only one order of magnitude increase in your ability to spot the fluctuations.
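The quoted trade-off amounts to resolving power growing only as the cube root of sensitivity. A one-line sanity check, assuming that scaling:

```python
def resolution_gain(sensitivity_gain):
    """Factor gained in resolving the fluctuations for a given factor
    gained in raw sensitivity, under the quoted three-to-one scaling
    (resolution ~ sensitivity ** (1/3))."""
    return sensitivity_gain ** (1.0 / 3.0)

print(resolution_gain(1e3))  # 1000x better sensitivity -> only ~10x
print(resolution_gain(1e9))  # a billion-fold improvement -> ~1000x
```

That cube-root penalty is why brute-force improvements to ordinary atom interferometers look hopeless here.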

Which is why an idea floated by Mark Everitt and pals at the University of Leeds looks interesting. They say that the scaling problem effectively disappears if you use entangled atoms instead of ordinary ones.

And the improvement is such that the effect of quantum gravity should be detectable with current quantum optics technology.

They fall short of making any predictions, so let’s fill in the blanks for them: somebody with a decent quantum optics lab will spot the first evidence of quantum gravity in 2009. Betcha!