May 2008 Archives

What would you choose as the most beautiful science experiment ever performed? Some Physics World readers may remember being asked a similar question by columnist Robert Crease a few years ago. The resulting article inspired US science writer George Johnson’s new book The Ten Most Beautiful Experiments, although interestingly the Physics World winner — the double-slit experiment with electrons — doesn’t make it onto Johnson’s list.

Johnson was here in Bristol last night giving a talk about the book. Beauty is a tricky concept at the best of times, let alone when applied to something abstract like science, and he explained the thought processes behind his list: “I was nostalgic for the time when a single mind could confront the unknown.” For Johnson, then, a beautiful experiment is one that poses a question to nature and gets a “crisp, unambiguous reply.” It also needs to be simple enough that it could conceivably be done by anyone, with a few simple pieces of equipment.

He didn’t have time to go through all ten, but the audience were treated to discussions of Johnson’s favourite three: Newton’s use of prisms to understand colour; Faraday’s magneto-optical experiment, in which he discovered that light could be influenced by a magnetic field; and the Michelson-Morley experiment, which Johnson describes as a “beautiful failure”. Johnson was an eloquent speaker, and his graphic description of Newton inserting a needle behind his eye to make himself see different colours elicited much squirming. The book promises many more such fascinating gems — to see whether or not it lives up to the hype look out for a full review in the August issue of Physics World.

Scientists at the National Institute of Standards and Technology (NIST) in the US have used neutron beams to investigate the magnetic properties of the recently discovered iron-based superconducting materials. They found that, at low temperatures and when undoped, the materials make a transition into an antiferromagnetic state in which magnetic layers are interspersed with non-magnetic layers. But when the materials are doped with fluorine to make them into high-temperature superconductors, this magnetic ordering is suppressed.

This is reminiscent of the behaviour of cuprates — the highest-temperature superconductors known to date. Is this more than a coincidence? We’ll have to wait and see.

Robert Aymar, the director-general of CERN, has said that the Large Hadron Collider (LHC) — the world’s biggest particle physics experiment — will be in “working order” by the end of June, according to the French news agency Agence France-Presse (AFP).

It is not clear what Aymar means by this, given that the last announcement from CERN was for a July start-up. It seems unlikely that LHC has raced ahead of schedule, so it might be that he thinks the cooling of the magnets will be complete by the end of June. However, the status report on the LHC website would indicate otherwise.

I spoke to a press officer at CERN, and she said that the AFP journalists quoted Aymar from a recent meeting they had at the European lab. She said that, as far as she is aware, the beam commissioning is still set to take place in July.

I have not yet spoken to James Gillies, the chief spokesperson for CERN, because he is tied up in meetings all day. When he gets back to me, I will give you an update.

UPDATE 3.15pm: I have just spoken to Gillies and he said that there is no change to the start-up schedule — the plan is still to begin injecting beams towards the end of July. Aymar was indeed referring to the cooling of the magnets, which should be complete by the end of June. Four of the eight sectors have already been cooled to their operating temperature of 1.9 K; the last (sector 4–5) began the cooling process today.

The reason for the gap between the cooling and beam-injection is that there must be a series of electrical tests, which will take around four weeks.

On 23 March 1989 Martin Fleischmann of the University of Southampton, UK, and Stanley Pons of the University of Utah, US, announced that they had observed controlled nuclear fusion in a glass jar at room temperature, and — for around a month — the world was under the impression that its energy woes had been remedied. But, even as other groups claimed to repeat the pair’s results, sceptical reports began to trickle in. An editorial in Nature predicted that cold fusion would prove unfounded. And a US Department of Energy (DOE) report judged that the experiments did “not provide convincing evidence that useful sources of energy will result from cold fusion.”

This hasn’t prevented a handful of scientists persevering with cold-fusion research. They stand on the sidelines, diligently getting on with their experiments and, every so often, they wave their arms frantically when they think they have made some progress.

Nobody notices, though. Why? These days the mainstream science media wouldn’t touch cold-fusion experiments with a barge pole. They have learnt their lesson from 1989, and now treat “cold fusion” as a byword for bad science. Most scientists* agree, and some even go so far as to brand cold fusion a “pathological science” — science that is plagued by falsehood but practiced nonetheless.

[*CORRECTION 29/05/08: It has been brought to my attention that part of this last sentence appears to be unsubstantiated. After searching through past articles I have to admit that, despite it being written frequently, I can find no factual basis that “most scientists” think cold fusion is bad science (although public scepticism is evidently rife). However, there have been surveys to suggest that scientific opinion is more likely divided. According to a 2004 report by the DOE, which you can read here, ten out of 18 scientists thought that the results of cold-fusion experiments to date warranted further investigation.]

There is a reasonable chance that the naysayers are (to some extent) right and that cold fusion experiments in their current form will not amount to anything. But it’s too easy to be drawn in by the crowd and overlook a genuine breakthrough, which is why I’d like to let you know that one of the handful of diligent cold-fusion practitioners has started waving his arms again. His name is Yoshiaki Arata, a retired (now emeritus) physics professor at Osaka University, Japan. Yesterday, Arata performed a demonstration at Osaka of one of his cold-fusion experiments.

Although I couldn’t attend the demonstration (it was in Japanese, anyway), I know that it was based on reports published here and here. Essentially Arata, together with his co-researcher Yue-Chang Zhang, uses pressure to force deuterium (D) gas into an evacuated cell containing a sample of palladium dispersed in zirconium oxide (ZrO2–Pd). He claims the deuterium is absorbed by the sample in large amounts — producing what he calls dense or “pycno” deuterium — so that the deuterium nuclei become close enough together to fuse.

So, did this method work yesterday? Here’s an email I received from Akito Takahashi, a colleague of Arata’s, this morning:

“Arata’s demonstration…was successfully done. There came about 60 people from universities and companies in Japan and few foreign people. Six major newspapers and two TV [stations] (Asahi, Nikkei, Mainichi, NHK, et al.) were there…Demonstrated live data looked just similar to the data they reported in [the] papers…This showed the method highly reproducible. Arata’s lecture and Q&A were also attractive and active.”

I also received a detailed account from Jed Rothwell, who is editor of the US site LENR (Low Energy Nuclear Reactions) and who has long thought that cold-fusion research shows promise. He said that, after Arata had started the injection of gas, the temperature rose to about 70 °C, which according to Arata was due to both chemical and nuclear reactions. When the gas was shut off, the temperature in the centre of the cell remained significantly warmer than the cell wall for 50 hours. This, according to Arata, was due solely to nuclear fusion.

Rothwell also pointed out that Arata performed three other control experiments: hydrogen with the ZrO2–Pd sample (no lasting heat); deuterium with no ZrO2–Pd sample (no heating at all); and hydrogen with no ZrO2–Pd sample (again, no heating). Nevertheless, Rothwell added that Arata neglected to mention certain details, such as the method of calibration. “His lecture was very difficult to follow, even for native speakers, so I may have overlooked something,” he wrote.

It will be interesting to see what other scientists think of Arata’s demonstration. Last week I got in touch with Augustin McEvoy, a retired condensed-matter physicist who has studied Arata’s previous cold-fusion experiments in detail. He said that he has found “no conclusive evidence of excess heat” before, though he would like to know how this demonstration turned out.

I will update you if and when I get any more information about the demonstration (apparently there might be some videos circulating soon). For now, though, you can form your own opinions about the reliability of cold fusion.

You might recall that a while back physicsworld.com reported on a prediction of a peculiar event that takes place on the two equinoxes. On 20 March and 22 September (or thereabouts), at two places on the Earth’s surface, many of the gravitational forces in the Milky Way should cancel out.

Such a quiet time in the turmoil of our galaxy provides an ideal opportunity for a ruthless test of Newton’s laws of motion. Some physicists think that if there were any deviation in the laws at very low accelerations it would mean dark matter — the elusive substance thought to make up most of the matter in the universe and the dream catch of experiments worldwide — does not exist. Instead, all the phenomena associated with dark matter could be explained by a slight alteration in the laws known as modified Newtonian dynamics (MOND).
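In its simplest form, MOND replaces Newton's second law with μ(a/a0)·a = g_N, where g_N is the Newtonian gravitational acceleration and a0 ≈ 1.2×10⁻¹⁰ m/s² is the acceleration scale below which the modification kicks in. A minimal sketch of what that prescription predicts (the "simple" interpolating function and the a0 value are standard choices from the MOND literature, not anything taken from Ignatiev's paper):

```python
import math

A0 = 1.2e-10  # MOND acceleration scale in m/s^2 (standard literature value)

def mond_acceleration(g_newton):
    """Solve mu(a/A0) * a = g_N for a, using the 'simple' interpolating
    function mu(x) = x / (1 + x).  That choice gives a closed form:
    a = g_N/2 * (1 + sqrt(1 + 4*A0/g_N))."""
    return 0.5 * g_newton * (1.0 + math.sqrt(1.0 + 4.0 * A0 / g_newton))

# Deep in the MOND regime (g_N << A0) the predicted acceleration tends to
# sqrt(g_N * A0) -- measurably larger than the Newtonian value, which is
# the kind of deviation a terrestrial test would look for.
g_n = 1e-12  # m/s^2, well below A0
a_mond = mond_acceleration(g_n)

# In the opposite limit (g_N >> A0) the formula recovers plain Newton.
a_newtonian_limit = mond_acceleration(1e-6)
```

The equinoctial cancellation matters precisely because it is the only easy way to reach the g_N ≪ a0 regime on Earth, where the two predictions diverge.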

When Alex Ignatiev from the Theoretical Physics Research Institute in Melbourne, Australia, came up with the idea for the equinoctial experiment, there were a couple of problems with his proposal. First, there was a worry that stray icebergs at high latitudes where one of the experiments would have to be performed might give a false gravitational signal. Second, Ignatiev did not know the exact time that the desired signal would occur.

Now, in a new paper, he has resolved both problems. He has shown that even the biggest icebergs would not produce a signal big enough to confuse the data. And he has also shown how to predict the exact signal times.

One of the referees for Ignatiev’s paper has given a ringing endorsement to the proposal: “MOND is the leading alternative to cosmic dark matter. It has passed a surprising number of astronomical tests and is desperately in need of laboratory tests. The author’s idea for testing MOND in a terrestrial setting is the only viable suggestion I’ve ever heard for such a possibility. This is an incredibly important problem, and deserves to be explored just as much as CDMS and the many other dark matter search experiments.”

The sad saga of the MAPLE nuclear reactors may have finally come to a close with today’s announcement from Atomic Energy of Canada Limited (AECL) that the firm will no longer try to get the pair of reactors licensed to produce medical isotopes.

MAPLE was conceived in the 1980s as a replacement for AECL’s ageing NRX and NRU research reactors at Chalk River, Ontario. “M” stands for “multipurpose”, and MAPLE was intended both for basic research and for the commercial production of radioactive isotopes for medical and other applications.

Two MAPLE reactors were finally built at Chalk River in 2000, but it soon became apparent that they both suffered from serious safety problems associated with shoddy workmanship. As a result the facilities have never been granted full operational licences by the Canadian nuclear regulator.

AECL has also had safety problems with the 50-year-old NRU, which had to be shut down unexpectedly for about a month in 2007, leading to an international shortage of medical isotopes.

In the case of NRU, the Canadian government stepped in to restart the reactor — overruling its own regulator. AECL may be gambling that its move to scrap MAPLE will prompt the government to pressure the regulator into approving the reactors.

Astrophysicists have a better idea of how dust obscures the light from galaxies, according to a paper published in Astrophysical Journal Letters.

It is already well known that dust, which permeates all galaxies, attenuates the light reaching Earth from the cosmos. It absorbs light of most wavelengths and then re-emits it as a blanket of infrared radiation. Now, Simon Driver of St Andrews University in the UK and colleagues have produced the first model that accounts for this absorption.

One of the model’s implications — that dust absorbs just under half the radiation produced by stars — will not be a surprise to astronomers. They already know this, having compared the average magnitude of the infrared radiation in the sky with the magnitude of the radiation from pinpoint sources like stars and galaxies. But what might be of interest is that Driver and colleagues can show how the dust affects the light output of galaxies depending on their orientation.

I spoke with Alastair Edge of Durham University, who is familiar with Driver’s team’s work, and he was pleased that the researchers have managed to model the dust successfully. He followed up our conversation with an email: “The authors have made an important link between the observed properties of the galaxies we see from the light coming directly from their stars to the amount of long wavelength radiation we see coming from the dust within the galaxies. Obtaining a match between the energy absorbed and that re-radiated allows us to understand the global properties of galaxies in a more holistic fashion.”

I’m sorry to say that, having taken a day’s leave on Monday, this snippet of news (above) about ScienceDebate 2008 escaped my attention. According to a poll conducted by Harris Interactive on behalf of ScienceDebate 2008 and Research!America, 85% of US adults agree that the presidential candidates should participate in a debate on science in the run-up to the November election.

(For those of you who have missed the protests of the 37,000 signatories of ScienceDebate 2008, see my last news story on their progress.)

Shawn Otto, CEO of Science Debate 2008, gave the following statement in a press release:

“This topic has been virtually ignored by the candidates, but this poll shows that Americans of all walks know how important science and technology are to our health and way of life. We’ve heard a lot about lapel pins and preachers. But tackling the big science challenges is critical to our children’s future — to the future of the country and the future of the planet. Americans want to know that candidates take these issues seriously, and the candidates have a responsibility to let voters know what they think.”

The poll also shows that:

67% of adults think scientific research has contributed either “a lot” or “a great deal”

67% think that scientific evidence, rather than personal belief, should influence science policy

69% rate alternative energy as one of the most serious long-term issues

I wonder if many other scientists under the wing of the Holy See agree with Jose Gabriel Funes, the head of the Vatican observatory, or whether he’s something of a radical.

In an interview in yesterday’s edition of the Vatican newspaper L’Osservatore Romano, Funes not only admits that he believes in the Big-Bang model of the universe’s creation, he states that humans should be open to the possibility of alien life. “Just as there’s a multiplicity of creatures on earth,” he says, “there can be other beings, even intelligent, created by God.”

To be clear, Funes is in no way dismissing the first two chapters of Genesis. In fact, he sees “no contrast” between the notion of aliens and the Catholic faith. The other beings might also be worshipping God, he says.

Astrophysicists have known for more than three decades that black holes shouldn’t be totally black — they should emit a certain amount of “Hawking radiation” from the production of particle–antiparticle pairs around their event horizons. But detecting Hawking radiation has so far proved tricky, mostly because its temperature would be at least eight orders of magnitude lower than the cosmic microwave background left over from the Big Bang.
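That temperature gap is easy to quantify from Hawking's formula, T = ħc³/(8πGMk_B): the heavier the black hole, the colder its radiation. A quick back-of-envelope calculation (SI constants; the solar-mass example is mine, not from the research discussed here):

```python
import math

# Physical constants in SI units
hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30          # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a Schwarzschild black hole:
    T = hbar * c**3 / (8 * pi * G * M * k_B)."""
    return hbar * c**3 / (8.0 * math.pi * G * mass_kg * k_B)

T_CMB = 2.725  # K, cosmic microwave background temperature

T = hawking_temperature(M_SUN)          # ~6e-8 K for a solar-mass hole
orders_colder = math.log10(T_CMB / T)   # ~7.6 orders of magnitude
```

Even a black hole of just one solar mass radiates nearly eight orders of magnitude colder than the CMB, and astrophysical black holes are heavier (hence colder) still, which is why laboratory analogues with much hotter effective radiation are so attractive.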

One way round this problem, as Ulf Leonhardt and colleagues from the University of St Andrews, UK, demonstrated earlier this year, might be to create systems that are analogous to black holes in the lab in which the temperature of the radiation is much higher. The researchers showed that a pulse of light travelling through a fibre can behave like a black hole, and, although they didn’t actually detect Hawking radiation, they showed that in principle it should be possible.

Now, in a paper published today in the New Journal of Physics, it seems as though Leonhardt’s group are one step closer. Rather than using pulses of light as an analogue of a black hole, they have built a system based on water waves. I confess that I haven’t yet studied this paper carefully enough to describe with any certainty what the researchers have done, suffice it to say that they claim to have observed “negative-frequency” waves, the classical analogue of the antiparticles that are the hallmark of Hawking radiation.

In a brief email conversation last week, Leonhardt told me that they are not yet sure whether this is enough to constitute an observation of a classical analogue of Hawking radiation: “Hawking’s effect is a quantum phenomenon, a spontaneous quantum process, but like all spontaneous processes it can be stimulated. This is what we did, we sent in waves and saw a tiny bit of stimulated negative-frequency waves, but there are quantitative differences between experiment and theory that we do not understand yet.”

Of course, if and when Leonhardt’s group do find negative-frequency waves that agree with theory, there will be a debate as to whether they are “real” Hawking radiation. No doubt you will be seeing more of this on physicsworld.com soon.

Ohanian has posted an eight-page taster of his work on the arXiv preprint server, in which he presents a “critical examination” of how Einstein went about proving his most famous equation E = mc². All of these proofs, claims Ohanian, “suffer from mistakes”.

This is not the first time that Einstein’s proofs have come under scrutiny, with various detractors and supporters arguing since at least 1908 — three years after the equation was first derived.

Elsewhere in the world of Einstein biography, a letter on religion written in 1954 by the physicist to the German philosopher Eric Gutkind has come up for auction in London. “The word God is for me nothing more than the expression and product of human weakness…”, wrote Einstein, who died the following year — and has presumably discovered whether or not this letter was a mistake.

Sokal had written the paper, which was filled with scientific-sounding gibberish, to highlight what he saw as the sloppy thinking of some sociologists of science, particularly those who deem scientific knowledge to be socially constructed, rather than a matter of objective truth.

His paper sparked what became known as the Science Wars, which saw furious debate in scholarly journals and magazines, including Physics World, between physicists and sociologists. Now Sokal is back on the scene with his new book Beyond the Hoax, published by Oxford University Press.

Speaking at Bristol’s Festival of Ideas, Sokal outlined the main themes of Beyond the Hoax. Post-modernist views as espoused by some sociologists still get his goat, but now that some in that camp have, as he puts it, back-tracked from their earlier, more radical stance, Sokal has extended his criticisms to other groups who he thinks also don’t embrace the rational, empirical thinking that is the hallmark of all science.

That basically boils down to four main groups: religious people, pseudoscientists, proponents of homeopathic medicine, and spin doctors and others involved in PR. He saved particular anger for George Bush and Tony Blair for deciding to go to war with Iraq and then retrospectively justifying the decision based on what Sokal saw as weak evidence such as dodgy satellite photos.

Overall, it’s a bigger pool of victims for Sokal’s ire. But by broadening the range of targets, my concern is that his initial fury from 10 years ago has become somewhat diluted.

To his credit, Sokal responded well to the grilling given by his audience, although the majority were, I’d guess, generally supportive of his main themes. A fuller version of his lecture was previously given in London earlier this year. Meanwhile, Physics World has commissioned a review of Beyond the Hoax, to be published later this summer — so keep an eye out online and in print for an authoritative assessment of his new tome.

“So what would you do if string theory is wrong?” asks string theorist Moataz Emam of Clark University, US, in a paper posted on arXiv yesterday. It’s obvious, you might think. String theorists would briefly mourn the 40 years of misspent speculation and leave furtively through the back door, while anti-string theorists would celebrate in light of their vindication.

Not so, says Emam — string theory will continue to prosper, and might even become its own discipline independent of physics and mathematics.

Oddly, the reason Emam gives for this prediction is precisely the same reason why many physicists despise string theory. For example, in reducing the 10 dimensions of string theory to our familiar four, string theorists have to fashion a “landscape” of at least 10^500 solutions. Emam says that such a huge number of solutions — only one of which can describe our universe — may make string theory unattractive, but in studying them physicists are gaining “deep insights into how a physical theory generally works”:

So even if someone shows that the universe cannot be based on string theory, I suspect that people will continue to work on it…The theory would be studied by physicists and mathematicians who might no longer consider themselves either. They will continue to derive beautiful mathematical formulas and feed them to the mathematicians next door. They also might, every once in a while, point out interesting and important properties concerning the nature of a physical theory which might guide the physicists exploring the actual theory of everything over in the next building.

Peter Woit, author of the string-theory polemic Not Even Wrong, notes on his blog that physicists looking to pursue string theory for its beauty should “go and work in a maths department”:

The argument Emam is making reflects in somewhat extreme form a prevalent opinion among string theorists, that the failure of hopes for the theory, even if real, is not something that requires them to change what they are doing. This attitude is all too likely to lead to disaster.

Sabur, 19, will begin teaching physics next month at the Department of Advanced Technology Fusion at Konkuk University, Korea. It will be just another entry on the teenager’s laden CV, which reveals she received a bachelor’s degree at 14 and a master’s in materials science at 17.

Something might be awry here, though. There’s nothing wrong with the media adopting the American English definition of “professor” (i.e. any university teacher) — after all, Sabur was born in New York. But it appears that the previous record holder was Scottish physicist Colin Maclaurin, who was appointed professor of mathematics at the University of Aberdeen when he was a few months over 19 in 1717.

I might have to explain to our international readers that in the UK “professor” is a more distinguished title, reserved for heads of departments and the like. (At least it has been as far back as any of us at Physics World can vouch for.) Sabur, I note, has yet to defend her PhD.

Does this mean the titles of Sabur and Maclaurin are being confused? Does Maclaurin, who is credited with the mathematical “Maclaurin series”, deserve to keep his accolade?

Of course, science was a considerably narrower discipline back in the 18th century, and achieving a professorship might have taken a little less time than it does today (it certainly wouldn’t have required a PhD). But Maclaurin can’t defend his honour, and offhand I don’t know enough about science in the early 1700s to cast a vote either way.

The high-Tc superconductivity community has been abuzz lately with the discovery of a growing number of iron-based materials that remain superconducting at temperatures as high as 55 K.

The first such material (fluorine-doped LaOFeAs) was reported by physicists in Japan earlier this year and has a transition temperature (Tc) of 26 K. Since then, researchers in China replaced the lanthanum (La) with samarium (Sm) and boosted Tc to 55 K. The Japanese team, meanwhile, put their material under pressure and increased Tc to 43 K.

Now, just as physicists are beginning to understand the mechanism behind these iron-based materials, scientists in Russia have come up with a new twist by replacing iron with nickel. They found that fluorine-doped LaONiBi is a superconductor with a Tc of 4 K.

While this Tc is much lower than those of the iron-based materials, the team reports that LaONiBi has very similar structural and electronic properties to its iron-based cousins. This suggests that with a bit of fiddling with doping levels and other properties, the Tc could be boosted considerably.