Wednesday, April 28, 2010

"Supervolcanoes" have been blamed for multiple mass extinctions in Earth's history, but the cause of their massive eruptions is unknown.

Despite their global impact, the eruptions' origin and triggering mechanisms have remained unexplained. New data obtained during a recent Integrated Ocean Drilling Program (IODP) expedition in the Pacific Ocean may provide clues to unlocking this mystery. To explore the origins of these seafloor giants, scientists drilled into a large, 145 million-year-old underwater volcanic mountain chain off the coast of Japan.

IODP Expedition 324: Shatsky Rise Formation took place onboard the scientific ocean drilling vessel JOIDES Resolution from September 4 to November 4, 2009. Preliminary results of the voyage are emerging.

"'Supervolcanoes' emitted large amounts of gases and particles into the atmosphere, and re-paved the ocean floor," says Rodey Batiza, marine geosciences section head in the National Science Foundation (NSF)'s Division of Ocean Sciences, which co-funded the research.

The result?

"Loss of species, increased greenhouse gases in the atmosphere, and changes in ocean circulation," says Batiza.

In fall 2009, an international team of scientists participating in IODP Expedition 324 drilled five sites in the ocean floor. They studied the origin of the 145 million-year-old Shatsky Rise volcanic mountain chain.

Located 1,500 kilometers (930 miles) east of Japan, Shatsky Rise measures roughly the size of California.

This underwater mountain chain is one of the largest supervolcanoes in the world: the top of Shatsky Rise lies three and a half kilometers (about two miles) below the sea's surface, while its base plunges to nearly six kilometers (four miles) beneath the surface.

Shatsky Rise is composed of layers of hardened lava, with individual lava flows that are up to 23 meters (75 feet) thick.

"Seafloor supervolcanoes are characterized by the eruption of enormous volumes of lava," says William Sager of Texas A&M University, who led the expedition with co-chief scientist Takashi Sano of Japan's National Museum of Nature and Science in Tokyo. "Studying their formation is critical to understanding the processes of volcanism, and the movement of material from Earth's interior to its surface."

About a dozen supervolcanoes exist on Earth; some are on land, while others lie at the bottom of the ocean. Those found on the seafloor are often referred to as large oceanic plateaus.

Current scientific thinking suggests that these supervolcanoes were caused by eruptions over a period of a few million years or less--a rapid pace in geologic time.

Each of these supervolcanoes produced several million cubic kilometers of lava--about three hundred times the volume of all the Great Lakes combined--dwarfing the volume of lava produced by the largest present-day volcanoes in places like Hawaii.
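The comparison can be sanity-checked with a little arithmetic, assuming the commonly cited combined Great Lakes volume of roughly 22,700 cubic kilometers (a figure not given in the text):

```python
# Back-of-envelope check: 300 times the Great Lakes volume should land in the
# "several million cubic kilometers" range quoted for a single supervolcano.
great_lakes_km3 = 22_700                 # assumed combined volume of all five lakes
supervolcano_km3 = 300 * great_lakes_km3
million_km3 = supervolcano_km3 / 1e6     # about 6.8 million cubic kilometers
```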

Since the 1960s, geologists have debated the formation and origin of these large oceanic plateaus. The mystery lies in the origin of the magma, molten rock that forms within the Earth.

A magma source rising from deep within the Earth has a different chemical composition than magma that forms just below Earth's crust. Some large oceanic plateaus show signs of a deep-mantle origin. Others exhibit chemical signatures indicative of magma from a much shallower depth.

The IODP Shatsky Rise expedition focused on deciphering the relationship between supervolcano formation and the boundaries of tectonic plates, crucial to understanding what triggers supervolcano formation.

A widely-accepted explanation for oceanic plateaus is that they form when magma in the form of a "plume head" rises from deep within the Earth to the surface.

An alternative theory suggests that large oceanic plateaus can originate at the intersection of three tectonic plates, known as a "triple junction."

Shatsky Rise could play a key role in this debate, because it formed at a triple junction. However, it also displays characteristics that could be explained by the plume head model.

"Shatsky Rise is one of the best places in the world to study the origin of supervolcanoes," says Sager. "What makes Shatsky Rise unique is that it's the only supervolcano to have formed during a time when Earth's magnetic field reversed frequently."

This process creates "magnetic stripe" patterns in the seafloor. "We can use these magnetic stripes to decipher the timing of the eruption," says Sager, "and the spatial relationship of Shatsky Rise to the surrounding tectonic plates and triple junctions."

Sediments and microfossils collected during the expedition indicate that parts of the Shatsky Rise plateau were at one time at or above sea level, and formed an archipelago during the early Cretaceous period (about 145 million years ago).

Shipboard lab studies show that much of the lava erupted rapidly, and that Shatsky Rise formed at or near the equator.

As analyses continue, data collected during this expedition will help scientists resolve the 50-year-old debate about the origin and nature of large oceanic plateaus.

Like microscopic inchworms, cancer cells slink away from tumors to travel and settle elsewhere in the body. Now, researchers at Weill Cornell Medical College report in today’s online edition of the journal Nature that new anti-cancer agents break down the looping gait these cells use to migrate, stopping them in their tracks.

Mice implanted with cancer cells and treated with the small molecule macroketone lived a full life without any cancer spread, compared with control animals, which all died of metastasis. Even when macroketone was given a week after cancer cells were introduced, it still blocked more than 80 percent of cancer metastasis in mice.

These findings provide a very encouraging direction for development of a new class of anti-cancer agents, the first to specifically stop cancer metastasis, says the study’s lead investigator, Dr. Xin-Yun Huang, a professor in the Department of Physiology and Biophysics at Weill Cornell Medical College.

“More than 90 percent of cancer patients die because their cancer has spread, so we desperately need a way to stop this metastasis,” Dr. Huang says. “This study offers a paradigm shift in thinking and, potentially, a new direction in treatment.”

Dr. Huang and his research team have been working on macroketone since 2003. Their work started after researchers in Japan isolated a natural substance, dubbed migrastatin, secreted by Streptomyces, a genus of bacteria that is the source of many antibiotic drugs. The Japanese researchers noted that migrastatin had a weak inhibitory effect on tumor cell migration.

Dr. Huang and collaborators at the Memorial Sloan-Kettering Cancer Center then proceeded to build analogues of migrastatin — synthetic and molecularly simpler versions.

“After a lot of modifications, we made several versions that were a thousand-fold more potent than the original,” Dr. Huang says.

In 2005, they published a study showing that several of the new versions, including macroketone, stopped cancer cell metastasis in laboratory animals, but they didn’t know how the agent worked.

In the current study, the researchers revealed the mechanism. They found that macroketone targets an actin cytoskeletal protein known as fascin that is critical to cell movement. In order for a cancer cell to leave a primary tumor, fascin bundles actin filaments together like a thick finger. The front edge of this finger creeps forward and pulls along the rear of the cell. Cells crawl away in the same way that an inchworm moves.

Macroketone latches on to individual fascin molecules, preventing the actin fibers from adhering to each other and forming the protruding leading edge, Dr. Huang says. Because individual actin fibers are too soft when they are not bundled together, the cell cannot move. The new animal experiments detailed in the study confirmed the power of macroketone. The agent did not stop the cancer cells implanted into the animals from forming tumors or from growing, but it completely prevented tumor cells from spreading, compared with control animals, he says. Even when macroketone was given after tumors formed, most cancer spread was blocked.

“This suggests to us that an agent like macroketone could be used to both prevent cancer spread and to treat it as well,” Dr. Huang says. “Of course, because it has no effect on the growth of a primary tumor, such a drug would have to be combined with other anti-cancer therapies acting on tumor cell growth.”

Also pleasing was the finding that the mice suffered few side effects from the treatment, according to Dr. Huang. “The beauty of this approach is that fascin is over-expressed in metastatic tumor cells but is only expressed at a very low level in normal epithelial cells, so a treatment that attacks fascin will have comparatively little effect on normal cells — unlike traditional chemotherapy which attacks all dividing cells,” he says.

Dr. Huang and his colleagues reported another key finding in the same Nature paper – X-ray crystal structures of fascin and of the complex of fascin and macroketone. These demonstrated how macroketone blocks the activity of fascin: the images showed precisely how macroketone nestles snugly into a pocket of fascin, affecting the way it regulates actin filament bundling.

Scientists at Newcastle University have developed a pioneering technique which enables them for the first time to successfully transfer DNA between two human eggs. The technique has the potential to help prevent the transmission of serious inherited disorders known as mitochondrial diseases.

The study, led by Dr Mary Herbert and Professor Doug Turnbull, and funded primarily by the Muscular Dystrophy Campaign, the Medical Research Council and the Wellcome Trust, is published in the journal Nature.

Every cell in our body needs energy to function. This energy is provided by mitochondria, often referred to as the cells' 'batteries'. Mitochondria are found in every cell, along with the cell nucleus, which contains the genes that determine our individual characteristics. The information required to create these 'batteries' – the mitochondrial DNA – is passed down the maternal line, from mother to child.

A mother's egg contains a copy of her own DNA – twenty-three chromosomes – as well as DNA for her mitochondria. The amount of genetic material contained in mitochondrial DNA is very small – 13 protein-producing genes, compared to an estimated 23,000 genes that we inherit from our parents – and this information is used solely to generate the energy produced by the 'batteries'.

Like all DNA, the DNA in mitochondria can mutate, and mothers can pass these mutations on to their children. Around one in 200 children is born with mutations which in most cases cause only mild or asymptomatic forms of mitochondrial disease. However, around one in 6,500 children is born with severe mitochondrial disease, which can include muscular weakness, blindness, fatal heart failure, liver failure, learning disability and diabetes, and can lead to death in early infancy.

There are no treatments available to cure these conditions and mothers face the agonising choice of whether to risk having a child who may be affected by such a disease or not to have children at all.

Now, researchers at Newcastle University have developed a technique which allows them to replace these 'batteries'. This is the first time such a technique has been used in fertilised human eggs.

A fertilised egg usually contains two pronuclei – genetic material from the egg and sperm – as well as mitochondria. The technique developed by the Newcastle team involves extracting the pronuclei but leaving behind the mitochondria. The researchers then take a fertilised egg from a donor, remove its pronuclei and replace them with the extracted pronuclei. This new fertilised egg contains the DNA of the father and mother, and the mitochondria from the donor.

"What we've done is like changing the battery on a laptop. The energy supply now works properly, but none of the information on the hard drive has been changed," explains Professor Turnbull. "A child born using this method would have correctly functioning mitochondria, but in every other respect would get all their genetic information from their father and mother."

The Newcastle team used their technique to create a total of eighty zygotes (fertilised eggs). These were cultured for six to eight days in the laboratory to monitor development as far as the blastocyst stage (the stage at which the egg has divided into a group of around one hundred cells), in line with the terms of the licence granted by the Human Fertilisation and Embryology Authority (HFEA) in 2005.

In some cases, a very small amount of the mother's mitochondrial DNA was carried over to the new egg. Since severe disease occurs only when a large proportion of the mitochondrial DNA is mutated, this would be very unlikely to affect a child's health.

The research is a proof of principle that researchers should be able to prevent transmission of mitochondrial diseases, thereby allowing the mother to give birth to a healthy child.

"This is a very exciting development with immense potential to help families at risk from mitochondrial diseases," says Professor Turnbull. "We have no way of curing these diseases at the moment, but this technique could allow us to prevent the diseases occurring in the first place. It is important that we do all we can to help these families and give them the chance to have healthy children, something most of us take for granted."

The Newcastle team used eggs which were unsuitable for IVF; for example, eggs with one or three pronuclei, rather than the normal two. This is common in the IVF process and affects around one in ten fertilised eggs. The eggs were donated by couples attending the Newcastle Fertility Centre at Life. The egg donation programme and the ethical and regulatory aspects of the project are led by Professor Alison Murdoch.

The team is now planning further studies to provide additional evidence of the safety of this procedure. The Human Fertilisation and Embryology (HFE) Act, as amended in 2009, currently prevents fertility treatment using these techniques. However, the Act includes a provision allowing the Secretary of State to permit such treatment in the future.

Scientists here are taking the trial and error out of drug design by using powerful computers to identify molecular structures that have the highest potential to serve as the basis for new medications.

Most drugs are designed to act on proteins that somehow malfunction in ways that lead to damage and disease in the body. The active ingredient in these medicines is typically a single molecule that can interact with a protein to stop its misbehavior.

Finding such a molecule, however, is not easy. It ideally will be shaped and configured in a way that allows it to bind with a protein on what are known as “hot spots” on the protein surface – and the more hot spots it binds to, the more potential it has to be therapeutic.

To accomplish this, many drug molecules are composed of units called fragments that are linked through chemical bonds. An ideal drug molecule for a specific protein disease target should be a combination of fragments that fit into each hot spot in the best possible way.

Previous methods to identify these molecules have emphasized searching for fragments that can attach to one hot spot at a time. Finding structures that attach to all of the required hot spots is tedious, time-consuming and error-prone.

Ohio State University researchers, however, have used computer simulations to identify molecular fragments that attach simultaneously to multiple hot spots on proteins. The technique is a new way to tackle the fragment-based design strategy.

“We use the massive computing power available to us to find only the good fragments and link them together,” said Chenglong Li, assistant professor of medicinal chemistry and pharmacognosy at Ohio State and senior author of a study detailing this work.

Li likens the molecular fragments to birds flying around in space, looking for food on the landscape: the protein surface. With this technique, he creates computer programs that allow these birds – or molecular fragments – to find the prime location for food, or the protein hot spots. The algorithm derives from a computational technique called particle swarm optimization.

“Each bird can see the landscape individually, and it can sense other birds that inform each other about where the foods are,” Li said. “That’s how this method works. Each fragment is like a bird finding food on the landscape. And that’s how we place the fragments and obtain the best fragment combination for specific protein binding sites.”
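The "birds informing each other" description is the core update rule of particle swarm optimization, which can be sketched in a few lines. This is a generic textbook PSO run on a toy landscape, not the researchers' actual code; the scoring function, parameters, and search bounds are all illustrative assumptions.

```python
import random

random.seed(0)  # for a reproducible run

def pso(score, dim=2, n_particles=20, iters=100):
    """Minimize `score` over [-5, 5]^dim with a basic particle swarm."""
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, pull toward own best, pull toward swarm best
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # each "bird's" best spot so far
    pbest_val = [score(p) for p in pos]
    gbest = pbest[pbest_val.index(min(pbest_val))][:]  # best spot any bird has seen
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Each particle blends its own momentum, its memory, and the swarm's.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = score(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < score(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy "binding landscape" with a single hot spot at (1, -2).
best = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2)
```

In the fragment-docking setting, the score would instead be a binding-energy estimate for a fragment pose, and many such swarms would run over the library of fragments.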

Li verified that the technique works by comparing a molecular structure he designed to the molecular base of an existing cancer medication that targets a widely understood protein.

“My method reconstructed what pharmaceutical companies have already done,” he said. “In the future, we’ll apply this technique to protein targets for diseases that remain challenging to treat with currently available therapies.”

The research appears online and is scheduled for later print publication in the Journal of Computational Chemistry.

Li said this new computer modeling method of drug design has the potential to complement and increase efficiency of more time-consuming methods like nuclear magnetic resonance and X-ray crystallography. For example, he said, X-ray fragment crystallography can be hard to interpret because of “noise” created by fragments that don’t bind well to proteins.

With this new computer simulation technique, called multiple ligand simultaneous docking, Li instructs molecular fragments to interact with each other before the actual experimental trials, removing weak and “noisy” fragments so only the promising ones are left.

“They sense each other’s presence through molecular force. They suppress the noise and go exactly where they are supposed to go,” he said. “You find the right fragment in the right place, and it’s like fitting the right piece into a jigsaw puzzle.”

Before he can begin designing a molecule, Li must obtain information about a specific protein target, especially the protein structures. These details come from collaborators who have already mapped a target protein’s surface to pinpoint where the hot spots are, for example, through directed mutations or from databases.

Li starts the design process with molecular fragments that come from thousands of existing drugs already on the market. He creates a computer image of those molecules, and then chops them up into tiny pieces and creates a library of substructures to work with – typically more than a thousand possibilities.

That is where computational power comes into play.

“To search all of the possibilities of these molecular combinations and narrow them down, we need a massive computer,” he said. Li uses two clusters of multiple computers, one in Ohio State’s College of Pharmacy and the other in the Ohio Supercomputer Center, to complete the simulations.

The results of this computation create an initial molecular template that can serve as a blueprint for later stages of the drug discovery process. Medicinal chemists can assemble synthetic molecules based on these computer models, which can then be tested for their effectiveness against a given disease condition in a variety of research environments.

Li already has used this technique to identify molecules that bind to known cancer-causing proteins. He said the method can be applied to any protein that is a suspected cause of diseases of any kind, not just cancer.

Observations of how the youngest-known neutron star has cooled over the past decade are giving astronomers new insights into the interior of these super-dense dead stars.

Dr Wynn Ho presented the findings on Thursday April 15th at the RAS National Astronomy Meeting in Glasgow.

Dr Ho, of the University of Southampton, and Dr Craig Heinke, of the University of Alberta in Canada, measured the temperature of the neutron star in the Cassiopeia A supernova remnant using data obtained by NASA’s Chandra X-ray Observatory between 2000 and 2009.

“This is the first time that astronomers have been able to watch a young neutron star cool steadily over time. Chandra has given us a snapshot of the temperature roughly every two years for the past decade and we have seen the temperature drop during that time by about 3%,” said Dr Ho.

Neutron stars are composed mostly of neutrons crushed together by gravity, compressed to over a million million times the density of lead. They are the dense cores of massive stars that have run out of nuclear fuel and collapsed in supernova explosions. The Cassiopeia A supernova explosion, likely to have taken place around 1680, would have heated the neutron star to temperatures of billions of degrees, from which it has cooled down to a temperature of about two million degrees Celsius.

“Young neutron stars cool through the emission of high-energy neutrinos – particles similar to photons but which do not interact much with normal matter and therefore are very difficult to detect. Since most of the neutrinos are produced deep inside the star, we can use the observed temperature changes to probe what’s going on in the neutron star’s core. The structure of neutron stars determines how they cool, so this discovery will allow us to understand better what neutron stars are made of. Our observations of temperature variations already rule out some models for this cooling and have given us insights into the properties of matter that cannot be studied in laboratories on Earth,” said Dr Ho.

Initially, the core of the neutron star cools much more rapidly than the outer layers. After a few hundred years, equilibrium is reached and the whole interior cools at a uniform rate. At approximately 330 years old, the Cassiopeia A neutron star is near this cross-over age. If the cooling is only due to neutrino emission, there should be a steady decline in temperature. However, although Dr Ho and Dr Heinke observed an overall steady trend over the 10-year period, there was a larger change around 2006 that suggests other processes may be active.

“The neutron star may not yet have relaxed into the steady cooling phase, or we could be seeing other processes going on. We don’t know whether the interior of a neutron star contains more exotic particles, such as quarks, or other states of matter, such as superfluids and superconductors. We hope that with more observations, we will be able to explain what is happening in the interior in much more detail,” said Dr Ho.

Dr Ho and Dr Heinke have submitted a paper on their discovery to the Astrophysical Journal.

A lightning researcher at the University of Bath has discovered that during thunderstorms, giant natural particle accelerators can form 40 km above the surface of the Earth.

Dr Martin Füllekrug from the University’s Department of Electronic & Electrical Engineering presented his new work on Wednesday 14 April at the Royal Astronomical Society National Astronomy Meeting (RAS NAM 2010) in Glasgow.

His findings show that when particularly intense lightning discharges in thunderstorms coincide with high-energy particles coming in from space (cosmic rays), nature provides the right conditions to form a giant particle accelerator above the thunderclouds.

The cosmic rays strip off electrons from air molecules and these electrons are accelerated upwards by the electric field of the lightning discharge. The free electrons and the lightning electric field then make up a natural particle accelerator.

The accelerated electrons then develop into a narrow particle beam which can propagate from the lowest level of the atmosphere (the troposphere), through the middle atmosphere and into near-Earth space, where the energetic electrons are trapped in the Earth’s radiation belt and can eventually cause problems for orbiting satellites.

These are energetic events: for the blink of an eye, the power of the electron beam can be as large as that of a small nuclear power plant.

Dr Füllekrug explained: “The trick to determining the height of one of the natural particle accelerators is to use the radio waves emitted by the particle beam.”

These radio waves were predicted by his co-worker Dr Robert Roussel-Dupré using computer simulations at the Los Alamos National Laboratory supercomputer facility.

A team of European scientists, from Denmark, France, Spain and the UK helped to detect the intense lightning discharges in southern France which set up the particle accelerator.

They monitored the area above thunderstorms with video cameras and reported lightning discharges which were strong enough to produce transient airglows above thunderstorms known as sprites. A small fraction of these sprites were found to coincide with the particle beams.

The zone above thunderstorms has been a suspected natural particle accelerator since the Scottish physicist and Nobel Prize winner Charles Thomson Rees Wilson speculated about lightning discharges above these storms in 1925.

In the next few years five different planned space missions (the TARANIS, ASIM, CHIBIS, IBUKI and FIREFLY satellites) will be able to measure the energetic particle beams directly.

Dr Füllekrug commented: “It’s intriguing to see that nature creates particle accelerators just a few miles above our heads. Once these new missions study them in more detail from space we should get a far better idea of how they actually work.

“They provide a fascinating example of the interaction between the Earth and the wider Universe.”

A natural product found in both coconut oil and human breast milk – lauric acid – shines as a possible new acne treatment thanks to a bioengineering graduate student from the UC San Diego Jacobs School of Engineering. The student developed a “smart delivery system” – published in the journal ACS Nano in March – capable of delivering lauric-acid-filled nano-scale bombs directly to skin-dwelling bacteria (Propionibacterium acnes) that cause common acne.

On Thursday April 15, bioengineering graduate student Dissaya “Nu” Pornpattananangkul presented her most recent work on this experimental acne-drug-delivery system at Research Expo, the annual research conference of the UC San Diego Jacobs School of Engineering.

Common acne, also known as “acne vulgaris,” afflicts more than 85 percent of teenagers and over 40 million people in the United States; and current treatments have undesirable side effects including redness and burning. Lauric-acid-based treatments could avoid these side effects, the UC San Diego researchers say.

“It’s a good feeling to know that I have a chance to develop a drug that could help people with acne,” said Pornpattananangkul, who performs this research in the Nanomaterials and Nanomedicine Laboratory of UC San Diego NanoEngineering professor Liangfang Zhang from the Jacobs School of Engineering.

The new smart delivery system includes gold nanoparticles attached to surfaces of lauric-acid-filled nano-bombs. The gold nanoparticles keep the nano-bombs (liposomes) from fusing together. The gold nanoparticles also help the liposomes locate acne-causing bacteria based on the skin microenvironment, including pH.

Once the nano-bombs reach the bacterial membranes, the acidic microenvironment causes the gold nanoparticles to drop off. This frees the liposomes carrying lauric acid payloads to fuse with bacterial membranes and kill the Propionibacterium acnes bacteria.

“Precisely controlled nano-scale delivery of drugs that are applied topically to the skin could significantly improve the treatment of skin bacterial infections. By delivering drugs directly to the bacteria of interest, we hope to boost antimicrobial efficacy and minimize off-target adverse effects,” said Zhang. “All building blocks of the nano-bombs are either natural products or have been approved for clinical use, which means these nano-bombs are likely to be tested on humans in the near future.”

Zhang noted that nano-scale topical drug delivery systems face a different set of challenges than systems that use nanotechnology to deliver drugs systemically to people.

Pornpattananangkul and UC San Diego chemical engineering undergraduate Darren Yang confirmed, in 2009 in the journal Biomaterials, the antimicrobial activity of nano-scale packets of lauric acid against Propionibacterium acnes.

Pornpattananangkul, who is originally from Thailand, said that it’s just a coincidence that her research involves a natural product produced by coconuts – a staple of Thai cuisine.

Some of us need regular amounts of coffee or other chemical enhancers to make us cognitively sharper. A newly published study suggests perhaps a brief bit of meditation would prepare us just as well.

While past research using neuroimaging technology has shown that meditation techniques can promote significant changes in brain areas associated with concentration, it has always been assumed that extensive training was required to achieve this effect. Though many people would like to boost their cognitive abilities, the monk-like discipline required seems like a daunting time commitment and financial cost for this benefit.

Surprisingly, the benefits may be achievable even without all the work. Though it sounds almost like an advertisement for a "miracle" weight-loss product, new research now suggests that the mind may be easier to cognitively train than we previously believed. Psychologists studying the effects of a meditation technique known as "mindfulness" found that meditation-trained participants showed a significant improvement in their critical cognitive skills (and performed significantly better on cognitive tests than a control group) after only four days of training for 20 minutes each day.

"In the behavioral test results, what we are seeing is something that is somewhat comparable to results that have been documented after far more extensive training," said Fadel Zeidan, a post-doctoral researcher at Wake Forest University School of Medicine, and a former doctoral student at the University of North Carolina at Charlotte, where the research was conducted.

"Simply stated, the profound improvements that we found after just four days of meditation training are really surprising," Zeidan noted. "It goes to show that the mind is, in fact, easily changeable and highly influenced, especially by meditation."

The study appeared in the April 2 issue of Consciousness and Cognition. Zeidan's co-authors are Susan K. Johnson, Zhanna David and Paula Goolkasian from the Department of Psychology at UNC Charlotte, and Bruce J. Diamond from William Paterson University. The research was also part of Zeidan's doctoral dissertation.

The experiment involved 63 student volunteers, 49 of whom completed the experiment. Participants were randomly assigned in approximately equivalent numbers to one of two groups, one of which received the meditation training while the other group listened for equivalent periods of time to a book (J.R.R. Tolkien's The Hobbit) being read aloud.

Prior to and following the meditation and reading sessions, the participants were subjected to a broad battery of behavioral tests assessing mood, memory, visual attention, attention processing, and vigilance.

Both groups performed equally on all measures at the beginning of the experiment. Both groups also improved following the meditation and reading experiences in measures of mood, but only the group that received the meditation training improved significantly in the cognitive measures. The meditation group scored consistently higher averages than the reading/listening group on all the cognitive tests and as much as ten times better on one challenging test that involved sustaining the ability to focus, while holding other information in mind.

"The meditation group did especially better on all the cognitive tests that were timed," Zeidan noted. "In tasks where participants had to process information under time constraints causing stress, the group briefly trained in mindfulness performed significantly better."

Particularly notable were the differing results on a "computer adaptive n-back task," in which participants had to correctly remember whether a stimulus had been shown two steps earlier in a sequence. If the participant got the answer right, the computer would react by speeding up presentation of the subsequent stimulus, further increasing the difficulty of the task. The meditation-trained group averaged approximately 10 consecutive correct answers, while the listening group averaged approximately one.
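The 2-back rule described above can be sketched in a few lines of code. This is an illustrative toy, not the study's actual software: the stimulus sequence, response format, starting interval, and speed-up factor below are all assumptions made for the example.

```python
# Minimal sketch of an adaptive 2-back task (illustrative only; all
# parameters are assumptions, not the study's actual protocol).

def is_two_back_match(sequence, i):
    """True if stimulus i repeats the stimulus shown two steps earlier."""
    return i >= 2 and sequence[i] == sequence[i - 2]

def run_trial(sequence, responses, interval_ms=2000.0, speedup=0.9):
    """Score responses and adapt presentation speed.

    responses[i] is True when the participant claims a 2-back match.
    Each correct answer shortens the interval before the next stimulus,
    making the task harder. Returns (number correct, final interval in ms).
    """
    correct = 0
    for i in range(len(sequence)):
        if responses[i] == is_two_back_match(sequence, i):
            correct += 1
            interval_ms *= speedup  # harder: next stimulus comes sooner
    return correct, interval_ms

# Example: 'A' at positions 0 and 2 is a match, as is 'B' at 1 and 3.
seq = ['A', 'B', 'A', 'B', 'C']
resp = [False, False, True, True, False]
print(run_trial(seq, resp))  # all five responses are correct
```

The adaptive element is what made the group difference so large: sustaining a streak of correct answers means holding two items in mind while the pace keeps quickening.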

"Findings like these suggest that meditation's benefits may not require extensive training to be realized, and that meditation's first benefits may be associated with increasing the ability to sustain attention," Zeidan said.

"Further study is warranted," he stressed, noting that brain imaging studies would be helpful in confirming the brain changes that the behavioral tests seem to indicate, "but this seems to be strong evidence for the idea that we may be able to modify our own minds to improve our cognitive processing – most importantly in the ability to sustain attention and vigilance – within a week's time."

The meditation training involved in the study was an abbreviated "mindfulness" training regime modeled on basic "Shamatha skills" from a Buddhist meditation tradition, conducted by a trained facilitator. As described in the paper, "participants were instructed to relax, with their eyes closed, and to simply focus on the flow of their breath occurring at the tip of their nose. If a random thought arose, they were told to passively notice and acknowledge the thought and to simply let 'it' go, by bringing the attention back to the sensations of the breath." Subsequent training built on this basic model, teaching physical awareness, focus, and mindfulness with regard to distraction.

Zeidan likens the brief training the participants received to a kind of mental calisthenics that prepared their minds for cognitive activity.

"The simple process of focusing on the breath in a relaxed manner, in a way that teaches you to regulate your emotions by raising one's awareness of mental processes as they're happening is like working out a bicep, but you are doing it to your brain. Mindfulness meditation teaches you to release sensory events that would easily distract, whether it is your own thoughts or an external noise, in an emotion-regulating fashion. This can lead to better, more efficient performance on the intended task."

"This kind of training seems to prepare the mind for activity, but it's not necessarily permanent," Zeidan cautions. "This doesn't mean that you meditate for four days and you're done – you need to keep practicing."

Light bounced off reflectors on the moon is fainter than expected and mysteriously dims even more whenever the moon is full. Astronomers think dust is a likely culprit, they report in a forthcoming issue of the journal Icarus.

"Near full moon, the strength of the returning light decreases by a factor of ten," said first author Tom Murphy, associate professor of physics at the University of California, San Diego who leads an effort to precisely measure the distance from earth to moon by timing the reflections of pulses of laser light. "Something happens on the surface of the moon to destroy the performance of the reflectors at full moon."

Only a fraction of the light Murphy's team sends to the moon from a telescope in New Mexico returns to the observatory. Earth's atmosphere scatters the outgoing beam so that it spreads over two kilometers of the surface of the moon. Most of the laser light misses its target, which is about the size of a suitcase. And the reflectors diffract returning light so that it spreads over 15 kilometers on earth.

The team expects to recapture only one in 100 million billion particles of light, or photons. But most nights their instrument detects only a tenth of even that much light. And when the moon is full, the results are ten times worse still.
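The scale of that loss can be sanity-checked with a rough geometric estimate. The article gives only the 2 km and 15 km beam spreads; the reflector area and telescope aperture below are assumptions for illustration, and real losses from optics, atmospheric transmission, and detector efficiency are ignored, which is why the quoted one-in-10^17 figure is smaller still.

```python
import math

# Back-of-envelope photon budget for lunar laser ranging (illustrative;
# only the 2 km and 15 km spreads come from the article, the other
# sizes are assumptions, and optical/atmospheric/detector losses
# are ignored entirely).

def circle_area(diameter_m):
    return math.pi * (diameter_m / 2.0) ** 2

beam_on_moon = circle_area(2000.0)      # outgoing beam spread over ~2 km
reflector_area = 1.0                    # suitcase-sized array, ~1 m^2 (assumed)
return_on_earth = circle_area(15000.0)  # reflected light spread over ~15 km
telescope_area = circle_area(3.5)       # ~3.5 m telescope aperture (assumed)

uplink = reflector_area / beam_on_moon      # fraction hitting the array
downlink = telescope_area / return_on_earth # fraction entering the telescope
print(f"geometric return fraction ~ {uplink * downlink:.1e}")
```

Geometry alone dilutes the signal to roughly one photon in 10^14; the remaining factor of order a thousand down to the quoted one in 10^17 plausibly comes from the losses the sketch ignores.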

They aim for polished blocks of glass, about one and a half inches in diameter, called corner cube prisms, which Apollo astronauts left behind 40 years ago.

For optimum performance, the whole cube must be the same temperature. "It doesn't take much, just a few degrees, to significantly affect performance," Murphy said. NASA engineers took pains to minimize differences in temperatures across the prisms, which rest in arrays tilted toward earth. Individual prisms sit in recessed pockets so that they are shielded from direct light when the sun is low on the moon's horizon. But when the full face of the moon appears illuminated from earth, the sun is directly above the arrays. "At full moon, the sun is coming straight down the pipe into these recessed pockets," Murphy said.

The cubes are clear glass without any sort of coating. Their reflective properties derive from the shape of their polished facets. NASA engineers chose the design, rather than one with a silvered back like an ordinary mirror, for precision. Uneven heating of the prisms, which might occur with absorption by a coating, would bend the shape of the light pulses they return, interfering with the accuracy of measurements.

Murphy thinks the cubes are heating unevenly at full moon and that a likely cause is dust. "Dust is dark," he said. "It absorbs solar light and would warm the cube prism on the front face."

Light travels faster through warmer glass. Although all paths through the cube prisms are the same length, photons that strike the edge of the reflector will stay near the surface, and those that strike the center will pass deeper into the cube before hitting a reflective surface. If the surface is warmer than the deeper parts of the cube, light striking the edges of the prism will re-emerge sooner than light striking the center, distorting the shape of the reflected laser pulses. "Outgoing light is deformed. It's spreading," Murphy said. "All you have to do is make a thermal gradient and you get the problem."
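A quick calculation shows why "just a few degrees" is enough. The numbers below are assumptions for illustration, not values from the paper: a thermo-optic coefficient of magnitude ~1e-5 per kelvin (typical of optical glass), a few kelvin of front-to-back gradient, a few centimeters of differential glass path, and a green laser wavelength.

```python
# Rough estimate of pulse distortion from a thermal gradient in a
# corner cube prism (illustrative; every number here is an assumed
# typical value, not taken from the paper).
dn_dT = 1e-5          # |thermo-optic coefficient| of glass, per kelvin
delta_T = 4.0         # front-to-back temperature difference, kelvin
path_difference = 0.04  # extra glass path for center vs. edge rays, meters
wavelength = 532e-9   # green laser light, meters

# Optical path difference accumulated between edge and center rays:
opd = dn_dT * delta_T * path_difference
print(f"~{opd / wavelength:.1f} wavelengths of distortion")
```

Several wavelengths of path error across a single prism is more than enough to smear the shape of the reflected pulse, consistent with Murphy's point that a small gradient ruins the reflector's performance.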

The moon has no atmosphere, and no wind, but electrostatic forces can move dust around. A constant rain of micrometeorites might puff dust onto the surface. Larger impacts that eject material from the surface across a greater distance could also contribute to an accretion of moon crud. Deposits from outgassing of the Teflon rings that hold each prism in place might also have accumulated on the back side of the prisms, the authors say.

Sorting out which effect might play a role will be difficult. Murphy recently returned from a trip to Italy, where a chamber built to simulate lunar conditions may help sort through the possible explanations.

"We think we have a thermal problem at full moon, plus optical loss at all phases of the moon," Murphy said. Dust on the front surface of the reflectors could account for both observations.

If sunlight-heated dust is really to blame, the effect should vanish during a lunar eclipse. That is, light should bounce back while the moon passes through Earth's shadow, then dim again as sunlight hits the arrays.

"Measurements during an eclipse – there are just a few – look fine. When you remove the solar flux, the reflectors recover quickly, on a time scale of about half an hour," Murphy said.

The problem may be getting worse. The McDonald Observatory was able to run similar experiments at full moon between 1973 and 1976. But between 1979 and 1984, they had "a bite taken out of their data," during full moons, Murphy said. "Ours is deeper."

So far, rotten weather has prevented the project from operating during a lunar eclipse. The next opportunity will be on the night of December 21, 2010. The team plans to be watching.