Tuesday, February 9, 2010

New research is casting doubt on the old adage, “All you need to run is a pair of shoes.” Scientists have found that people who run barefoot, or in minimal footwear, tend to avoid “heel-striking,” and instead land on the ball of the foot or the middle of the foot. In so doing, these runners use the architecture of the foot and leg and some clever Newtonian physics to avoid hurtful and potentially damaging impacts, equivalent to two to three times body weight, that shod heel-strikers repeatedly experience.

“People who don’t wear shoes when they run have an astonishingly different strike,” said Daniel E. Lieberman, professor of human evolutionary biology at Harvard University and co-author of a paper appearing this week in the journal Nature. “By landing on the middle or front of the foot, barefoot runners have almost no impact collision, much less than most shod runners generate when they heel-strike.

“Most people today think barefoot running is dangerous and hurts, but actually you can run barefoot on the world’s hardest surfaces without the slightest discomfort or pain. All you need is a few calluses to avoid roughing up the skin of the foot. Further, it might be less injurious than the way some people run in shoes.”

Working with populations of runners in the United States and Kenya, Lieberman and his colleagues at Harvard, the University of Glasgow, and Moi University in Kenya looked at the running gaits of three groups: those who had always run barefoot, those who had always worn shoes, and those who had converted to barefoot running from shod running. The researchers found a striking pattern.

Most shod runners — more than 75 percent of Americans — heel-strike, experiencing a very large and sudden collision force about 1,000 times per mile run. People who run barefoot, however, tend to land with a springy step toward the middle or front of the foot.

“Heel-striking is painful when barefoot or in minimal shoes because it causes a large collisional force each time a foot lands on the ground,” said co-author Madhusudhan Venkadesan, a postdoctoral researcher in applied mathematics and human evolutionary biology at Harvard. “Barefoot runners point their toes more at landing, avoiding this collision by decreasing the effective mass of the foot that comes to a sudden stop when you land, and by having a more compliant, or springy, leg.”

The differences between shod and unshod running have evolutionary underpinnings. For example, said Lieberman, our early Australopith ancestors had less-developed arches in their feet. Homo sapiens, by contrast, has evolved a strong, large arch that we use as a spring when running.

“Our feet were made in part for running,” Lieberman said. But as he and his co-authors write in Nature: “Humans have engaged in endurance running for millions of years, but the modern running shoe was not invented until the 1970s. For most of human evolutionary history, runners were either barefoot or wore minimal footwear such as sandals or moccasins with smaller heels and little cushioning.”

For modern humans who have grown up wearing shoes, barefoot or minimal shoe running is something to be eased into, warned Lieberman. Modern running shoes are designed to make heel-striking easy and comfortable. The padded heel cushions the force of the impact, making heel-striking less punishing.

“Running barefoot or in minimal shoes is fun but uses different muscles,” said Lieberman. “If you’ve been a heel-striker all your life, you have to transition slowly to build strength in your calf and foot muscles.”

In the future, he hopes, the kind of work done in this paper can not only investigate barefoot running but also provide insight into how to better prevent the repetitive-stress injuries that afflict a high percentage of runners today.

“Our hope is that an evolutionary medicine approach to running and sports injury can help people run better for longer and feel better while they do it,” said Lieberman, who has created a Web site to educate runners about the respective merits of shod and barefoot running.

Internationally coordinated research and field testing on ‘geoengineering’ the planet’s atmosphere to limit the risk of climate change should begin soon, along with building international governance of the technology, say scientists from the University of Calgary and the United States.

Collaborative and government-supported studies on solar-radiation management, a form of geoengineering, would reduce the risk of nations’ unilateral experiments and help identify technologies with the least risk, says U of C scientist David Keith, in an article published today in Nature, the top-ranked science journal. Co-authors of the opinion piece are Edward Parson at the University of Michigan and Granger Morgan at Carnegie Mellon University.

“Solar-radiation management may be the only human response that can fend off rapid and high-consequence climate change impacts. The risks of not doing research outweigh the risks of doing it,” says Keith, director of the Institute for Sustainable Energy, Environment and Economy’s energy and environmental systems group and a professor in the Schulich School of Engineering.

Solar-radiation management (SRM) would involve releasing megatonnes of light-scattering aerosol particles in the upper atmosphere to reduce Earth’s absorption of solar energy, thereby cooling the planet. Another technique would be to release particles of sea salt to make low-altitude clouds reflect more solar energy back into space.
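The scale of the intervention can be sketched with a simple zero-dimensional energy balance. All numbers below are standard textbook values, not figures from the article: a solar constant of about 1361 W/m^2, a planetary albedo of about 0.3, and a radiative forcing of about 3.7 W/m^2 for doubled CO2.

```python
# Rough energy-balance sketch of what SRM would have to offset.
# Textbook values (assumptions, not taken from the article):
S0 = 1361.0         # solar constant, W/m^2
ALBEDO = 0.30       # planetary albedo (fraction of sunlight reflected)
CO2_DOUBLING = 3.7  # radiative forcing of doubled CO2, W/m^2

# Mean absorbed solar flux: incoming flux averaged over the sphere,
# minus the reflected fraction.
absorbed = (S0 / 4) * (1 - ALBEDO)

# Fraction of absorbed sunlight that would need to be scattered away
# to cancel the forcing from a CO2 doubling.
fraction = CO2_DOUBLING / absorbed

print("absorbed solar flux ~= %.0f W/m^2" % absorbed)
print("offsetting doubled CO2 ~= %.1f%% of absorbed sunlight" % (fraction * 100))
```

The answer, roughly 1.5 percent of absorbed sunlight, gives a sense of why megatonne quantities of stratospheric aerosol come up in these discussions.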

SRM should not take the place of making deep cuts in industrial greenhouse gas emissions and taking action to adapt to climate change, Keith and his American colleagues stress. However, they say: “We must develop the capability to do SRM in a manner that complements such cuts, while managing the associated environmental and political risks.”

The scientists propose that governments establish an international research budget for SRM that grows from about $10 million to $1 billion a year between now and the end of 2020. They urge that research results be available to all and risk assessments be as transparent and international as possible to build sound norms of governance for SRM.

Long-established estimates show that SRM could offset this century’s predicted global average temperature rise more than 100 times more cheaply than achieving the same cooling by cutting emissions, Keith notes. “But this low price tag raises the risks of single groups acting alone, and of facile cheerleading that promotes exclusive reliance on SRM.”

SRM would also cool the planet quickly, whereas even a massive program of carbon dioxide emission cuts will take many decades to slow global warming because the CO2 already accumulated in the atmosphere will take many years to naturally break down. The 1991 eruption of Mount Pinatubo, for example, cooled the planet by about 0.5 degrees Celsius in less than a year by injecting sulphur into the stratosphere.

But a world cooled by managing sunlight will present risks, the scientists note. The planet would have less precipitation and less evaporation, and monsoon rains and winds might be weakened. Some areas would be more protected from temperature changes than others, creating local ‘winners’ and ‘losers’.

“If the world relies solely on SRM to limit (global) warming, these problems will eventually pose risks as large as those from uncontrolled emissions,” they warn.

Field tests of SRM are the only way to identify the best technologies and potential risks, Keith says. He and the American scientists propose carefully controlled testing that would involve releasing tonnes – not megatonnes – of aerosols in the stratosphere and low-altitude clouds.

“If SRM proves to be unworkable or poses unacceptable risks, the sooner we know the less moral hazard it poses; if it is effective, we gain a useful additional tool to limit climate damages.”

Responsible management of climate risks requires deep emission cuts and research and assessment of SRM technologies, the scientists say. “The two are not in opposition. We are currently doing neither; action is urgently needed on both.”

For the first time, astronomers have found a supernova explosion with properties similar to a gamma-ray burst, but without seeing any gamma rays from it. The discovery, using the National Science Foundation's Very Large Array (VLA) radio telescope, promises, the scientists say, to point the way toward locating many more examples of these mysterious explosions.

"We think that radio observations will soon be a more powerful tool for finding this kind of supernova in the nearby Universe than gamma-ray satellites," said Alicia Soderberg, of the Harvard-Smithsonian Center for Astrophysics.

The telltale clue came when the radio observations showed material expelled from the supernova explosion, dubbed SN2009bb, at speeds approaching that of light. This characterized the supernova, first seen last March, as the type thought to produce one kind of gamma-ray burst.

"It is remarkable that very low-energy radiation, radio waves, can signal a very high-energy event," said Roger Chevalier of the University of Virginia.

When the nuclear fusion reactions at the cores of very massive stars no longer can provide the energy needed to hold the core up against the weight of the rest of the star, the core collapses catastrophically into a superdense neutron star or black hole. The rest of the star's material is blasted into space in a supernova explosion. For the past decade or so, astronomers have identified one particular type of such a "core-collapse supernova" as the cause of one kind of gamma-ray burst.

Not all supernovae of this type, however, produce gamma-ray bursts. "Only about one out of a hundred do this," according to Soderberg.

In the more-common type of such a supernova, the explosion blasts the star's material outward in a roughly spherical pattern at speeds that, while fast, are only about 3 percent of the speed of light. In the supernovae that produce gamma-ray bursts, some, but not all, of the ejected material is accelerated to nearly the speed of light.

The superfast speeds in these rare blasts, astronomers say, are caused by an "engine" in the center of the supernova explosion that resembles a scaled-down version of a quasar. Material falling toward the core enters a swirling disk surrounding the new neutron star or black hole. This accretion disk produces jets of material boosted at tremendous speeds from the poles of the disk.

"This is the only way we know that a supernova explosion could accelerate material to such speeds," Soderberg said.

Until now, no such "engine-driven" supernova had been found in any way other than by detecting the gamma rays it emitted.

"Discovering such a supernova by observing its radio emission, rather than through gamma rays, is a breakthrough. With the new capabilities of the Expanded VLA coming soon, we believe we'll find more in the future through radio observations than with gamma-ray satellites," Soderberg said.

Why didn't anyone see gamma rays from this explosion? "We know that the gamma-ray emission is beamed in such blasts, and this one may have been pointed away from Earth and thus not seen," Soderberg said. In that case, finding such blasts through radio observations will allow scientists to discover a much larger percentage of them in the future.

"Another possibility," Soderberg adds, "is that the gamma rays were 'smothered' as they tried to escape the star. This is perhaps the more exciting possibility since it implies that we can find and identify engine-driven supernovae that lack detectable gamma rays and thus go unseen by gamma-ray satellites."

One important question the scientists hope to answer is just what causes the difference between the "ordinary" and the "engine-driven" core-collapse supernovae. "There must be some rare physical property that separates the stars that produce the 'engine-driven' blasts from their more-normal cousins," Soderberg said. "We'd like to find out what that property is."

One popular idea is that such stars have an unusually low concentration of elements heavier than hydrogen. However, Soderberg points out, that does not seem to be the case for this supernova.

Adults aged over 70 years who are classified as overweight are less likely to die over a ten-year period than adults who are in the 'normal' weight range, according to a new study published in the Journal of The American Geriatrics Society.

Researchers looked at data taken over a decade among more than 9,200 Australian men and women aged between 70 and 75 at the beginning of the study, who were assessed for their health and lifestyle as part of a study into healthy aging. The paper sheds light on the situation in Australia, which is ranked the third most obese country, behind the United States and the United Kingdom.

Obesity and overweight are most commonly defined according to body mass index (BMI), which is calculated by dividing bodyweight (in kg) by the square of height (in metres). The World Health Organisation (WHO) defines four principal categories: underweight, normal weight, overweight, and obese. The thresholds for these categories were primarily based on evidence from studies of morbidity and mortality risk in younger and middle-aged adults, but it remains unclear whether the overweight and obese cut-points are overly restrictive measures for predicting mortality in older people.
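The BMI arithmetic and the WHO categories can be made concrete with a short sketch. The cut-points of 18.5, 25 and 30 are the standard WHO adult thresholds; the example weight and height are invented:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: bodyweight in kg divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def who_category(bmi_value: float) -> str:
    """The four principal WHO categories, using the standard adult cut-points."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# Invented example: 85 kg at 1.75 m gives a BMI of about 27.8,
# which falls in the "overweight" band the study found protective.
print(round(bmi(85.0, 1.75), 1))       # -> 27.8
print(who_category(bmi(85.0, 1.75)))   # -> overweight
```

The study's question is, in effect, whether these cut-points, derived from younger adults, sit in the right places for people over 70.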

The study began in 1996 and recruited 4,677 men and 4,563 women. The participants were followed for ten years or until their death, whichever came sooner, and factors such as lifestyle, demographics, and health were measured. The research found that mortality risk was lowest for participants with a BMI classified as overweight, with the risk of death reduced by 13% compared with normal-weight participants. The benefit was seen only in the overweight category, not in those who were obese.

"Concerns have been raised about encouraging apparently overweight older people to lose weight and as such the objective of our study was to examine the major unresolved question of, 'what level of BMI is associated with the lowest mortality risk in older people?'" said lead researcher Prof. Leon Flicker, of the University of Western Australia. "These results add evidence to the claims that the WHO BMI thresholds for overweight and obese are overly restrictive for older people. It may be timely to review the BMI classification for older adults."

In those participants who died before the conclusion of the study, the researchers found that the type of disease that caused their death, for example heart disease or cancer, did not affect the level of protection being overweight conferred. To remove any risk of bias from illnesses that cause both weight loss and an increased risk of dying, the researchers compared subjects who were relatively healthy with those who had major chronic diseases or smoked, and found no apparent differences in the BMI–mortality relationship.

While the same benefit in being overweight was true for men and women, being sedentary doubled the risk of death for women, whereas it only increased the risk by a quarter in men.

"Our study suggests that those people who survive to age 70 in reasonable health have a different set of risks and benefits associated with the amount of body fat than younger people do, and these should be reflected in BMI guidelines," concluded Flicker.

The colour of some feathers on dinosaurs and early birds has been identified for the first time, reports a paper published in Nature.

The research found that the theropod dinosaur Sinosauropteryx had simple bristles – precursors of feathers – in alternate orange and white rings down its tail, and that the early bird Confuciusornis had patches of white, black and orange-brown colouring. Future work will allow precise mapping of colours and patterns across the whole bird.

Mike Benton, Professor of Palaeontology at the University of Bristol, said, "Our research provides extraordinary insights into the origin of feathers. In particular, it helps to resolve a long-standing debate about the original function of feathers – whether they were used for flight, insulation, or display. We now know that feathers came before wings, so feathers did not originate as flight structures.

"We therefore suggest that feathers first arose as agents for colour display and only later in their evolutionary history did they become useful for flight and insulation."

The team of palaeontologists from the University of Bristol, UK, the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) in Beijing, University College Dublin and the Open University report two kinds of melanosomes found in the feathers of numerous birds and dinosaurs from the world-famous Jehol beds of NE China.

Melanosomes are colour-bearing organelles buried within the structure of feathers and hair in modern birds and mammals, giving black, grey, and rufous tones such as orange and brown. Because melanosomes are an integral part of the tough protein structure of the feather, they survive when a feather survives, even for hundreds of millions of years.

This is the first report of melanosomes found in the feathers of dinosaurs and early birds. It is also the first report of phaeomelanosomes in fossil feathers, the organelles that provide rufous and brown colours.

These discoveries confirm the substantial body of evidence suggesting that birds evolved through a long line of theropod (flesh-eating) dinosaurs. They also demonstrate that the unique assemblage of characters that makes a modern bird – feathers, wings, lightweight skeleton, enhanced metabolic system, enlarged brain and visual systems – evolved step-by-step over some 50 million years of dinosaur evolution, through the Jurassic and Cretaceous periods.

"These discoveries open up a whole new area of research", said Benton, "allowing us to explore aspects of the life and behaviour of dinosaurs and early birds that lived over 100 million years ago.

"Furthermore, we now know that the simplest feathers in dinosaurs such as Sinosauropteryx were only present over limited parts of its body – for example, as a crest down the midline of the back and round the tail – and so they would have had only a limited function in thermoregulation.

"Feathers are key to the success of birds and we can now dissect their evolutionary history in detail and see how each feather type – and the fine detail of feather structure – was acquired through time. This will link with current work on how the genome controls feather development."

Not every object is food to a Venus flytrap. Like the carnivorous plant, a new material developed at Northwestern University permanently traps only its desired prey, the radioactive ion cesium, and not other harmless ions like sodium.

The synthetic material, made from layers of a gallium, sulfur and antimony compound, is very selective. The Northwestern researchers found it to be extremely successful in removing cesium -- found in nuclear waste but very difficult to clean up -- from a sodium-heavy solution. (The solution had concentrations similar to those in real liquid nuclear waste.)

It is, in fact, cesium itself that triggers a structural change in the material, causing it to snap shut its pores, or windows, and trap the cesium ions within. The material sequesters 100 percent of the cesium ions from the solution while at the same time ignoring all the sodium ions.

The results are published online by the journal Nature Chemistry.

"Ideally we want to concentrate the radioactive material so it can be dealt with properly and the nonradioactive water thrown away," said Mercouri G. Kanatzidis, Charles E. and Emma H. Morrison Professor of Chemistry in the Weinberg College of Arts and Sciences and the paper's senior author. "A new class of materials that takes advantage of the flytrap mechanism could lead to a much-needed breakthrough in nuclear waste remediation."

Capturing only cesium from vast amounts of liquid nuclear waste is like looking for a needle in a haystack, Kanatzidis said. The waste has a much higher concentration of sodium compared to cesium, with ratios as great as 1,000-to-1. This difficult-to-achieve selectivity is why currently there is no good solution for cesium removal.

The Northwestern material is porous with its atoms arranged in an open and layered framework structure with many windows to promote rapid ion exchange. Initially, organic cations reside in the material; when the material comes into contact with the liquid, the cations leave the material by going through the windows, and the cesium ions come in. In the end, the material contains only cesium ions and no organic cations. (The presence of organic cations in the liquid is not an issue as the cations are not radioactive.)

The snap-shut Venus flytrap mechanism occurs because 'soft' materials like to interact with each other. A cesium ion is big and soft, and the metal-sulfide material is soft, too. The cesium ions are attracted to the material, specifically the sulfur atoms, and together form a weak bond. This interaction causes the material to change shape, close its windows and trap the cesium -- like a juicy insect in a flytrap. Sodium, which is clothed in water molecules, can't trigger the response.

Kanatzidis and Nan Ding, then a doctoral student in Kanatzidis' research group and an author of the paper, did not set out to discover the flytrap mechanism. Instead, they were investigating different structures of the material, wondering if they could act as ion exchangers.

"Seeing the windows close was completely unexpected," Kanatzidis said. "We expected ion exchange -- we didn't expect the material to respond dynamically. This gives us a new mechanism to focus on."

The National Nuclear Security Administration (NNSA) announced that scientists at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) have successfully delivered an historic level of laser energy — more than 1 megajoule — to a target in a few billionths of a second and demonstrated the target drive conditions required to achieve fusion ignition.

This is about 30 times the energy ever delivered by any other group of lasers in the world. The peak power of the laser light, delivered within a few billionths of a second, was about 500 times the total power used by the United States at any given time.
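The quoted peak-power figure follows from simple division. A rough sketch, assuming an effective pulse length of 4 nanoseconds and an average U.S. power demand of about 0.5 terawatts; both are assumptions for illustration, not values from the announcement:

```python
# Back-of-envelope check of NIF's quoted peak power.
energy_j = 1.0e6       # delivered laser energy: just over 1 megajoule
pulse_s = 4e-9         # assumed effective pulse duration, "a few billionths of a second"
us_demand_w = 0.5e12   # assumed average U.S. power consumption, ~0.5 TW

peak_power_w = energy_j / pulse_s    # power = energy / time
ratio = peak_power_w / us_demand_w   # how many "United States" of power, briefly

print("peak power ~= %.1e W (%.0f TW)" % (peak_power_w, peak_power_w / 1e12))
print("~= %.0f times assumed U.S. demand" % ratio)
```

With these assumed inputs the pulse works out to roughly 250 terawatts, consistent with the "about 500 times" comparison in the announcement.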

“Breaking the megajoule barrier brings us one step closer to fusion ignition at the National Ignition Facility, and shows the universe of opportunities made possible by one of the largest scientific and engineering challenges of our time,” said NNSA Administrator Thomas D’Agostino. “NIF is a critical component in our stockpile stewardship program to maintain a safe, secure and effective nuclear deterrent without underground nuclear testing. This milestone is an example of how our nation’s investment in nuclear security is producing benefits in other areas, from advances in energy technology to a better understanding of the universe.”

In order to demonstrate fusion, the energy that powers the sun and the stars, NIF focuses the energy of 192 powerful laser beams into a pencil-eraser-sized cylinder containing a tiny spherical target filled with deuterium and tritium, two isotopes of hydrogen. Inside the cylinder, the laser energy is converted to X-rays, which compress the fuel until it reaches temperatures of more than 200 million degrees Fahrenheit and pressures billions of times greater than Earth’s atmospheric pressure. The rapid compression of the fuel capsule forces the hydrogen nuclei to fuse and release many times more energy than the laser energy that was required to initiate the reaction.

This experimental program to achieve fusion ignition is known as the National Ignition Campaign sponsored by NNSA and is a partnership among LLNL, Los Alamos National Laboratory, the Laboratory for Laser Energetics, General Atomics, Sandia National Laboratories, as well as numerous other national laboratories and universities.

The NIF laser system, the only megajoule laser system in the world, began firing all 192 laser beams onto targets in June 2009. In order to characterize the X-ray drive achieved inside the target cylinders as the laser energy is ramped up, these first experiments were conducted at lower laser energies and on smaller targets than will be used for the ignition experiments. These targets used gas-filled capsules that act as substitutes for the fusion fuel capsules that will be used in the 2010 ignition campaign. The 1 MJ shot represents the culmination of these experiments, using an ignition-scale target for the first time.

These early tests have demonstrated that NIF's laser beams can be effectively delivered to the target and are capable of creating sufficient X-ray energy in the target cylinder to drive fuel implosion. The implosions achieved with the surrogate capsules have also been shown to have good symmetry that is adjustable through a variety of techniques. The next step is to move to ignition-like fuel capsules that require the fuel to be in a frozen hydrogen layer (at 425 degrees Fahrenheit below zero) inside the fuel capsule. NIF is currently being made ready to begin experiments with ignition-like fuel capsules in the summer of 2010.

“This accomplishment is a major milestone that demonstrates both the power and the reliability of NIF’s integrated laser system, the precision targets and the integration of the scientific diagnostics needed to begin ignition experiments,” said NIF Director Ed Moses. “NIF has shown that it can consistently deliver the energy required to conduct ignition experiments later this year.”

NIF, the world’s largest laser facility, is the first facility expected to achieve fusion ignition and energy gain in a laboratory setting.

Wearing a crash helmet is essential to a motorcyclist’s safety, but could it actually be harming their health and affecting their riding?

That is what academics from the two Bath universities are investigating in a new research project funded by the Leverhulme Trust.

Leading the study are Dr Michael Carley, from the Department of Mechanical Engineering at the University of Bath, and Dr Nigel Holt from the Department of Psychology at Bath Spa University.

With the help of Dr Ian Walker, from the Department of Psychology at the University of Bath, the team will take on-road measurements to find how noise is transmitted from a helmet and how it affects the rider’s hearing and ability to concentrate.

Dr Carley said: “The noise inside the helmet at the legal speed of 70 mph is higher than the legal limit for noise at work – more than enough to cause serious hearing damage.

“The issue isn’t noisy engines or loud exhausts as you may think. The noise is simply from the airflow over the helmet.

“Ear plugs won’t help much either as the noise is transferred into the inner ear from the rider’s bones. This has been known for 20 years yet little research has been done on the noise and its effects.”
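The scale of the risk can be illustrated with the equal-energy rule used in workplace-noise regulation: permitted exposure time halves for every 3 dB above the action level. The 85 dB(A) reference comes from workplace-noise rules and the 100 dB(A) helmet level is an illustrative figure from published helmet measurements; neither number is taken from this project:

```python
def allowed_exposure_hours(level_dba: float,
                           reference_dba: float = 85.0,
                           reference_hours: float = 8.0) -> float:
    """Equal-energy rule: the 8-hour allowance at the reference level
    halves for every 3 dB(A) above it."""
    return reference_hours / 2 ** ((level_dba - reference_dba) / 3.0)

# At an assumed 100 dB(A) inside the helmet, the 8-hour workplace
# allowance shrinks to a quarter of an hour.
print(allowed_exposure_hours(100.0))  # -> 0.25 (hours, i.e. 15 minutes)
```

By this rule even a modest motorway journey would exceed a full working day's permitted noise dose, which is the point Dr Carley is making.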

The laboratory study will be split into two parts: the first will involve applying low-level vibration to people’s heads to examine how noise is transmitted through the whole system of helmet and head.

Dr Carley, who will be directing this first study, said: “We already know that the noise passes to the ear partly through air and partly through the rider’s bones. To reduce hearing damage we must establish which route is more important and therefore a higher priority for hearing-protection measures.”

The second part includes playing noise back to participants while they do cognitive tests. Riding a motorcycle requires great attention and concentration; anything that reduces performance may lead to more accidents.

Dr Holt said: “It is known that noise can affect perception and cognition but, so far, nobody has tried to examine how noise in motorcycling affects the performance of riders.”

The project starts next month and will run for a year.

Dr Holt added: “This isn’t about putting people off riding or wearing helmets; it’s about finding ways to reduce this damage so that riders can have a better riding experience.

“We hope the research will provide information which can be used in setting standards for helmets and to help improve helmet and motorcycle design.”

New research suggests that animals living at high latitudes grow better than their counterparts closer to the equator because higher-latitude vegetation is more nutritious. The study, published in the February issue of The American Naturalist, presents a novel explanation for Bergmann's Rule, the observation that animals tend to be bigger at higher latitudes.

Ever since Carl Bergmann made his observation about latitude and size in 1847, scientists have been trying to explain it. The traditional explanation is that body temperature is the driving force. Because larger animals have less surface area relative to overall body mass, they don't lose heat as readily as smaller animals. That would give big animals an advantage at high latitudes, where temperatures are generally colder.
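The geometry behind the traditional argument can be sketched with an idealized spherical animal, for which the surface-to-volume ratio works out to 3/r. This is a deliberate simplification; real body shapes only follow the scaling approximately:

```python
import math

def surface_to_volume(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere; algebraically this is 3 / r."""
    area = 4 * math.pi * radius_m ** 2          # surface area, m^2
    volume = (4 / 3) * math.pi * radius_m ** 3  # volume, m^3
    return area / volume

# Doubling the linear size halves the relative surface area, so the
# larger "animal" loses heat more slowly per unit of body mass.
small = surface_to_volume(0.1)  # idealized small animal, r = 10 cm
large = surface_to_volume(0.2)  # idealized large animal, r = 20 cm
print(small / large)  # -> 2.0
```

Ho's contribution is to ask whether this thermal advantage is the whole story, or whether food quality at high latitudes matters too.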

But biologist Chuan-Kai Ho from Texas A&M University wondered if there might be another explanation. Might plants at higher latitudes be more nutritious, enabling the animals that eat those plants to grow bigger?

To answer that question, Ho, along with colleagues Steven Pennings from the University of Houston and Thomas Carefoot from the University of British Columbia, devised a series of lab experiments. They raised several groups of juvenile planthoppers on a diet of cordgrass collected from high to low latitudes. Ho and his team then measured the body sizes of the planthoppers when they reached maturity. They found that the planthoppers fed the high-latitude grass grew larger than those fed low-latitude grass.

The researchers performed similar experiments using two other plant-eating species—grasshoppers and sea snails. "All three species grew better when fed plants from high versus low latitudes," Ho said. "These results showed part of the explanation for Bergmann's rule could be that plants from high latitudes are better food than plants from low latitudes." Although this explanation applies only to herbivores, Ho explained that predators might also grow larger as a consequence of eating larger herbivores.

"We don't think that this is the only explanation for Bergmann's rule," Ho added. "But we do think that studies of Bergmann's rule should consider ecological interactions in addition to mechanisms based on physiological responses to temperature."

It's not known why the higher-latitude plants might be more nutritious. But research in Pennings's lab at the University of Houston offers a clue. Pennings has shown that plants at low latitudes suffer more damage from herbivores than those at higher latitudes. Ho and Pennings suggest that perhaps lower nutrition and increased chemical defenses are a response to higher pressure from herbivores.