Wednesday, December 15, 2010

Previous research has shown that when women are in their most fertile phase they become more attracted to certain qualities such as manly faces, masculine voices and competitive abilities. A new study by University of Miami (UM) psychologist Debra Lieberman and her collaborators offers new insight into female sexuality by showing that women also avoid certain traits when they are fertile.

The new study shows that women avoid their fathers during periods of peak fertility. The findings appear in a study entitled “Kin Affiliation Across the Ovulatory Cycle: Females Avoid Fathers When Fertile,” published online the first week of December in the peer-reviewed journal Psychological Science.

Women stay away from male relatives when they are most fertile for evolutionary reasons, explains the study’s lead author, Debra Lieberman, an assistant professor in the Department of Psychology at UM. “Evolutionary biologists have found that females in other species avoid social interactions with male kin during periods of high fertility,” she said. “The behavior has long been explained as a means of avoiding inbreeding and the negative consequences associated with it. But until we conducted our study, nobody knew whether a similar pattern occurred in women.”

For the study, the researchers examined the cell phone records of 48 women in their reproductive years. They noted the date and duration of all calls with their fathers and, separately, their mothers over the course of one billing period. They then identified the span of days comprising each woman’s high- and low-fertility days within that billing period.

“Women call their dads less frequently on these high-fertility days and they hang up with them sooner if their dads initiate a call,” said Martie Haselton, a UCLA associate professor of communication in whose lab the research was conducted. Women were about half as likely to call their fathers during the high fertility days of their cycle as they were to call them during low fertility days. Women’s fertility had no impact, however, on the likelihood of their fathers calling them. Women also talked to their fathers for less time at high fertility, regardless of who initiated the call, talking only an average of 1.7 minutes per day at high fertility compared to 3.4 minutes per day at low fertility.

The researchers concede that the highly fertile women might simply be avoiding their fathers because fathers may keep too close an eye on potential male suitors. But their data cast some doubt on this possibility. It is more likely, they conclude, that like females in other species, women have built-in psychological mechanisms that help protect against the risk of producing less healthy children, which tends to occur when close genetic relatives mate.

“In humans, women are only fertile for a short window of time within their menstrual cycle,” Lieberman said. “Sexual decisions during this time are critical as they could lead to pregnancy and the long-term commitment of raising a child. For this reason, it makes sense that women would reduce their interactions with male genetic relatives, who are undesirable mates.”

The reluctance to engage in conversations with fathers could not be attributed to an impulse to avoid all parental control during ovulation. In fact, the researchers found that women actually increased their calling to their mothers during this period of their cycle, and that this pattern was strongest for women who felt emotionally closer to their moms. At high fertility, women proved to be four times as likely to call their mothers as they were to phone their fathers, a difference that did not exist during the low fertility days. In addition, women spent an average of 4.7 minutes per day on the phone with their mothers during high-fertility days, compared to 4.2 minutes per day during low-fertility days.

One possible explanation is that women call their moms for relationship advice, said Elizabeth Pillsworth, who also contributed to the study.

“They might be using mothers as sounding boards for possible mating decisions they’re contemplating at this time of their cycle,” said Pillsworth, an assistant professor of evolutionary anthropology at California State University, Fullerton. “Moms have a lot more experience than they do. Particularly for those women who are close to their mothers, we can imagine them saying, ‘Hey Mom, I just met this cute guy, what do you think?’”

Either way, the findings show that women are unconsciously driven during their most fertile periods to behavior that increases the odds of reproducing and doing so with the right mate, said Haselton.

“This suggests that although human culture has in many ways changed at a rapid pace, our everyday decisions are often still tied to ancient factors affecting survival and reproduction,” says Haselton. “We think of ourselves as being emancipated from the biological forces that drive animal behavior. But, that’s not completely true,” she says. “These kinds of findings show us that a complete understanding of human behavior needs to involve these biological forces. Humans are, after all, mammals.”

New research by University of Minnesota psychologists shows how the benefits of social support are maximized when it is provided “invisibly”—that is, without the support recipient being aware that they are receiving it.

The study, “Getting in Under the Radar: A Dyadic View of Invisible Support,” is published in the December issue of the journal Psychological Science.

In the study, graduate student Maryhope Howland and professor Jeffry Simpson suggest there may be something unique about the emotional support behaviors that result in recipients being less aware of receiving support.

“While previous research has frequently relied solely on the perceptions of support recipients, these findings are notable in that they reflect the behavior of both parties in a support exchange,” Howland said. “They also mark a significant step forward in understanding how and when social support in couples is effective.”

Receiving social support such as advice or encouragement is typically thought of as positive: a generous act by one person that benefits another in a time of need. Effective support, it is generally assumed, should make someone feel better and more competent. However, Howland and Simpson found that what is intended as “support” may instead make someone feel vulnerable, anxious, or ineffective in the face of a stressor.

In the U of M study, 85 couples engaged in a videotaped support interaction in the lab. Support recipients were instructed to discuss something they’d like to change about themselves with their partners, who thus had the opportunity to provide support. After the interaction, support recipients reported how much support they had received (or were aware of receiving), and trained observers then watched the videotapes and coded the interactions to gauge the extent to which any support provided was invisible or visible.

Recipients whose partners provided more emotional support, such as reassurance or expressions of concern, but who were less aware of receiving it (that is, whose support was largely invisible) experienced greater declines in anger and anxiety. The same was true for invisible practical support, such as advice or direct offers of assistance. Additionally, in the case of invisible practical support, recipients experienced increases in self-efficacy.

Romantic partners are often a primary source of support, and understanding how the support process works between the two partners is likely to inform counseling and clinical approaches as well as future research in this area, according to the study’s authors.

Plant and animal extinctions are detrimental to your health. That's the conclusion of a paper published in this week's issue of the journal Nature by scientists who studied the link between biodiversity and infectious diseases.

Species loss in ecosystems such as forests and fields results in increases in pathogens, or disease-causing organisms, the researchers found.

The research was funded by the National Science Foundation (NSF)-National Institutes of Health (NIH) Ecology of Infectious Diseases (EID) Program.

The NSF contribution to the EID Program is supported primarily by its Directorates for Biological Sciences and Geosciences; at NIH, by the Fogarty International Center.

The research was also funded by the Environmental Protection Agency.

"Global change is accelerating, bringing with it a host of unintended consequences," says Sam Scheiner, EID program director at NSF. "This paper demonstrates the dangers of global change, showing that species extinctions may lead to increases in disease incidence for humans, other animals and plants."

"A better understanding of the role of environmental change in disease emergence and transmission is key to enabling both prediction and control of many infectious diseases," says Josh Rosenthal, EID program director at NIH. "This thoughtful analysis is an important contribution toward those goals."

The species most likely to disappear as biodiversity declines are often those that buffer infectious disease transmission.

Those that remain tend to be the ones that magnify the transmission of infectious diseases like West Nile virus, Lyme disease and hantavirus.

"We knew of specific cases like West Nile virus and hantavirus in which declines in biodiversity increase the incidence of disease," says Felicia Keesing, an ecologist at Bard College in Annandale, N.Y., and first author of the paper.

"But we've learned that the pattern is much more general: biodiversity loss tends to increase pathogen transmission and infectious disease."

The finding holds true for various types of pathogens--viruses, bacteria, fungi--and for many hosts, whether humans, other animals or plants.

"When a clinical trial of a drug shows that it works," says Keesing, "the trial is halted so the drug can be made available. In a similar way, the protective effect of biodiversity is clear enough that we need to begin implementing policies to preserve it now."

Global biodiversity has declined at an unprecedented pace since the 1950s. Current extinction rates are estimated at 100 to 1,000 times higher than in past epochs, and are projected to rise dramatically in the next 50 years.

Expanding human populations are already increasing contact with novel pathogens through activities such as land-clearing for agriculture, and hunting for wildlife.

For example, in the case of Lyme disease, says co-author Richard Ostfeld of the Cary Institute of Ecosystem Studies in Millbrook, N.Y., "strongly buffering species like the opossum are lost when forests are fragmented, but white-footed mice thrive.

"The mice increase numbers of both the blacklegged tick vector [transmission pathway] and the pathogen that causes Lyme disease."

Scientists don't yet know, Ostfeld says, why the most resilient species--"the last ones standing when biodiversity is lost"--are the ones that also amplify pathogens.

Preserving natural habitats, the authors argue, is the best way to prevent this effect.

Biodiversity is an important factor, as are land-use change--converting forest to agricultural land--and human population growth and behavior, he says. "When biological diversity declines, and contact with humans increases, you have a perfect recipe for infectious disease."

The authors call for careful monitoring of areas in which large numbers of domesticated animals are raised.

"That would reduce the likelihood of an infectious disease jumping from wildlife to livestock, then to humans," says Keesing.

For humans and other species to remain healthy, it will take more than a village. We need an entire planet, the scientists say, one with its biodiversity thriving.

Ocean variability — the perpetual changing of currents, temperatures, salinity and the contours of the seafloor — alters the way sound travels through the water. A new analysis of how this variability affects sound waves could make it easier for Navy submarines to evade detection or for remotely operated underwater vehicles, like those used to combat the recent Macondo oil well spill in the Gulf of Mexico, to maneuver more accurately. It could also aid in basic oceanographic and climate research by helping to calibrate systems for using sound waves to measure ocean properties such as temperature and seafloor topography.

The analysis was carried out by MIT researchers in collaboration with Taiwanese and Woods Hole Oceanographic Institution scientists. Using both theoretical computer models and on-site experiments off Taiwan and Kauai, they found unexpected changes in the way ocean and sound waves interact when they are emitted near the edge of a continental shelf, where the average slope of the seafloor changes abruptly. For the first time, they were able to make integrated ocean and acoustics predictions of how sound waves would propagate at a given time and location, and of the degree of uncertainty in those predictions, and then verified those predictions with actual acoustic measurements.

Pierre Lermusiaux, the Doherty Associate Professor in Ocean Utilization in MIT’s Department of Mechanical Engineering, who led a research team that also included several of his students and group members, says the continental shelf area is becoming ever more important. That’s because such regions are increasingly being exploited for oil and gas drilling, and also used for naval operations by submarines from various nations. They are also most relevant for assessing the health of the oceans and climate dynamics. As a result, he says, “a lot of research interests are now focusing on the complex shallower seas and their interactions with the deep ocean.” The results of his research, which was funded by the Office of Naval Research, were published online on Nov. 30 in the IEEE Journal of Oceanic Engineering.

The research predicted and explained how sound waves used for sonar imaging and for underwater communications can be affected by the interplay of large-scale currents, eddies, internal tides, the irregular topography of the seafloor, and other factors, and demonstrated where and when each can be a key factor in predicting how the sound waves will travel. The analysis should make it easier to forecast how beams of sound waves will be affected; such an ability should improve the accuracy of communications and of the computer reconstruction of sonar images used to detect submarines and other objects or to study the seafloor, as well as ocean dynamics in general.

“We’re just trying to advance the science and technology,” Lermusiaux says, by creating computer models and data-driven estimation methods that can aid in understanding and forecasting both the ocean and acoustic conditions in a given ocean region for several days ahead, thus facilitating the optimal planning of scanning or communication operations in that region. In addition, he says, some researchers including MIT colleagues have suggested that measuring the propagation of sound waves over long distances in the ocean could give researchers the ability to monitor the effects of climate change because it would allow them to help determine ocean temperatures and circulations over large regions. Better understanding and prediction of diverse ocean dynamics and how they affect sound waves could be important for such measurements, he says.

Nadia Pinardi, professor of oceanography at the University of Bologna, Italy, says, “This paper confirms unequivocally that ocean acoustic uncertainties are connected to a detailed knowledge” of the ocean features and processes at the specific location and time. She adds that “these results could pave the way for new acoustic applications” in underwater imaging and communications.

Arthur Miller, a research oceanographer and senior lecturer in climate sciences at the Scripps Institution of Oceanography in San Diego, says, “The shelf-slope system that [Lermusiaux] considers here is extremely complicated with irregular bottom topography, strong horizontal currents and fast internal undulations of the water column, that can each influence the predictions of how sound moves through the water.” Techniques pioneered by Lermusiaux now make it possible to determine which factors are most important, he says, and thus improve the accuracy of predictions used for carrying out underwater measurements.

“These results are critical for practical applications in sonar by the Navy,” Miller adds.

The prevalence of global positioning system (GPS) devices in everything from cars to cell phones has almost made getting lost a thing of the past. But what do you do when your GPS isn’t working? Researchers from North Carolina State University and Carnegie Mellon University (CMU) have developed a shoe-embedded radar system that may help you find your way.

“There are situations where GPS is unavailable, such as when you’re in a building, underground or in places where a satellite connection can be blocked by tall buildings or other objects,” says Dr. Dan Stancil, co-author of a paper describing the research and professor and head of NC State’s Department of Electrical and Computer Engineering. “So what do you do without satellites?”

One solution is to use inertial measurement units (IMUs), which are electronic devices that measure the forces created by acceleration (and deceleration) to determine how quickly you are moving and how far you have moved. The technology works in conjunction with GPS, with the IMU tracking your movement after you lose a GPS signal and ultimately providing location data relative to your last known GPS position. For example, if you entered a cave and lost your GPS signal, you could use the IMU to retrace your steps to the last known GPS location and find your way back out.

However, IMUs have traditionally faced a significant challenge. Any minor errors an IMU makes in measuring acceleration lead to errors in estimating velocity and position – and those errors accumulate over time. For example, if an IMU thinks you are moving – even as little as 0.1 meters per second – when you are actually standing still, within three minutes the IMU will have moved you 18 meters away from your actual position.
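
The drift figure in the article follows from simple integration. This sketch (illustrative arithmetic only, not code from the researchers) shows how a constant 0.1 m/s velocity error, accumulated once per second for three minutes, grows into roughly 18 meters of position error:

```python
# How a small, constant velocity error in an IMU accumulates into
# position error over time when nothing ever corrects it.
dt = 1.0           # integration timestep, seconds
bias = 0.1         # spurious velocity reported while standing still, m/s
position_error = 0.0
for _ in range(180):              # 3 minutes = 180 one-second steps
    position_error += bias * dt   # position error grows every step
print(position_error)             # approximately 18 meters of drift
```

The error grows linearly here only because the velocity bias is constant; a bias in the *acceleration* reading would make the position error grow quadratically, which is why uncorrected inertial navigation degrades so quickly.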

But, “if you had an independent way of knowing when your velocity is zero, you could significantly reduce this sort of accumulated error,” Stancil says.

Enter the shoe radar.

“To address this problem of accumulating acceleration error, we’ve developed a prototype portable radar sensor that attaches to a shoe,” Stancil says. “The radar is attached to a small navigation computer that tracks the distance between your heel and the ground. If that distance doesn’t change within a given period of time, the navigation computer knows that your foot is stationary.” That could mean that you are standing still, or it could signal the natural pause that occurs between steps when someone is walking. Either way, Stancil says, “by resetting the velocity to zero during these pauses, or intervals, the accumulated error can be greatly reduced.”

In other words, the navigation computer compiles data from the shoe radar and the IMU and, by incorporating the most recent location data from GPS, can do a much better job of tracking your present location.
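
The correction Stancil describes is commonly called a zero-velocity update. The sketch below is a hypothetical illustration of the general idea, not the NC State/CMU navigation computer: whenever an independent sensor (here, the shoe radar) reports that the foot is stationary, the dead-reckoning velocity is reset to zero so the bias cannot keep integrating into position error.

```python
def dead_reckon(accels, stationary_flags, dt=0.01, use_zupt=True):
    """Integrate acceleration into position, optionally resetting
    velocity to zero whenever the radar reports the foot is still."""
    velocity = 0.0
    position = 0.0
    for accel, still in zip(accels, stationary_flags):
        velocity += accel * dt
        if use_zupt and still:
            velocity = 0.0            # zero-velocity update
        position += velocity * dt
    return position

# A standing-still user whose IMU reports a tiny spurious acceleration:
n = 18000                             # three minutes at 100 Hz
accels = [0.001] * n                  # erroneous acceleration, m/s^2
flags = [True] * n                    # radar correctly sees "stationary"
print(dead_reckon(accels, flags, use_zupt=False))  # meters of drift
print(dead_reckon(accels, flags, use_zupt=True))   # essentially zero
```

Real systems reset the velocity only during the brief stance phase of each stride, but even those periodic resets bound the error to what accumulates within a single step.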

Patients with constant pain symptoms and extreme fear of this pain can be treated effectively by repeatedly exposing them to 'scary' situations. This is the conclusion of Dutch researcher Jeroen de Jong. Patients with pain conditions such as post-traumatic dystrophy, which can affect all tissues and functions of the limbs, can benefit from this in-vivo exposure therapy. Dr De Jong obtained his PhD from Maastricht University on 25 November.

In-vivo exposure therapy involves patients repeatedly undertaking activities and making movements in a way they consider threatening and would normally avoid. In his various studies, Jeroen de Jong discovered that patients undergoing such therapy not only become less scared of the pain, but actually feel less pain. However, the most striking fact was that the physiological symptoms of post-traumatic dystrophy – oedema, skin discoloration and excess perspiration – improved significantly. In addition, patients were able to make certain movements and carry out activities they would have considered impossible before.

In the Netherlands, 20,000 people are estimated to suffer from chronic post-traumatic dystrophy. This condition is characterised by a relatively innocent injury that causes persistent pain in the affected limb and can eventually lead to the patient losing the use of their arm or leg.

Many patients with chronic pain are afraid of causing more pain and avoid movements they have come to associate with it. People suffering from post-traumatic dystrophy may, for example, stop using a hand. Test subjects participating in in-vivo exposure therapy learned that they could make the movements without harmful effects. Jeroen de Jong also invited patients with chronic lower back pain and post-traumatic neck pain to undergo in-vivo exposure therapy. All groups were found to benefit substantially from this form of therapy.

The benefit of in-vivo exposure therapy had been demonstrated in patients with chronic lower back pain before, but De Jong was the first to show that this treatment can also drastically improve the lives of many other pain patients. The results of the studies in his thesis have a significant impact on the diagnosis of, approach to, and treatment of chronic pain.

Jeroen de Jong's research was part of the Vici project of psychologist Johan Vlaeyen. The Vici grant awarded by the Netherlands Organisation for Scientific Research is for excellent senior researchers who have demonstrated that they can successfully develop their own new innovative research line and function as a coach for young researchers.

College students who exhibit narcissistic tendencies are more likely than fellow students to cheat on exams and assignments, a new study shows.

The results suggested that narcissists were motivated to cheat because their academic performance functions as an opportunity to show off to others, and they didn’t feel particularly guilty about their actions.

“Narcissists really want to be admired by others, and you look good in college if you’re getting good grades,” said Amy Brunell, lead author of the study and assistant professor of psychology at Ohio State University at Newark.

“They also tend to feel less guilt, so they don’t mind cheating their way to the top.”

Narcissism is a trait in which people are self-centered, exaggerate their talents and abilities and lack empathy for others, Brunell said.

“Narcissists feel the need to maintain a positive self-image and they will sometimes set aside ethical concerns to get what they want.”

The study appears online in the journal Personality and Individual Differences and will be published in an upcoming print edition.

The study involved 199 college students. They completed a scale that measured narcissism by choosing statements that best described them. For example, they could choose between “I am no better or no worse than most people” and “I think I am a special person.”

The researchers also measured the participants’ levels of self-esteem.

Students then completed a measure that examined how much guilt they would feel if they cheated under certain circumstances, or how much guilt they felt a typical student would feel under those same conditions.

Finally, the students indicated how often they had cheated on exams and assignments during the past year, and reported their grade point averages, gender and age.

While it was not surprising that narcissism was linked to cheating, Brunell said it was interesting what dimension of narcissism seemed to have the greatest impact.

“We found that one of the more harmless parts of narcissism -- exhibitionism -- is most associated with academic cheating, which is somewhat surprising,” she said.

Exhibitionism is the desire to show off, to make yourself the center of attention.

The two other dimensions of narcissism -- the desire for power and the belief you are a special person -- were not as strongly linked to academic dishonesty.

“You would think that the belief that you are a special person and that you can do what you want would be associated with cheating,” Brunell said. “But instead, we’re finding that it is the desire to show off that really seems to drive cheating.”

Results showed that students who scored higher on exhibitionism also showed lower levels of guilt, which could explain why they were more willing to cheat.

Importantly, those who scored high on exhibitionism didn’t think other typical students felt a lack of guilt about cheating.

“That suggests narcissists don’t have a lack of moral standards for everyone -- they just don’t feel bad about their own immoral behavior,” she said.

Moreover, narcissists were not more likely than others to believe that other students were cheating.

“One argument might be that narcissists are admitting to cheating, but saying that everyone else does it, too. But that’s not what we found. Narcissists actually report more cheating for themselves than they do for others,” Brunell explained.

While narcissism was linked to cheating in the study, self-esteem was not.

Results showed that students with higher levels of self-esteem also tended to have higher GPAs, and were less likely than others to perceive their classmates as cheating.

“People with higher levels of self-esteem are probably more confident in their abilities and don’t feel any peer pressure to cheat,” she said.

The only major gender difference found in the study was that men were less likely than women to feel guilty about cheating. Older students were less likely to report cheating, and more likely to anticipate feeling guilty about cheating.

These results correspond well with studies that have looked at narcissism in the workplace, Brunell said.

“It seems likely that the same people causing problems in the workplace and engaging in white collar crime are the ones who were cheating in the classroom,” she said.

Astronomers have found the first evidence of a magnetic field in a jet of material ejected from a young star, a discovery that points toward future breakthroughs in understanding the nature of all types of cosmic jets and of the role of magnetic fields in star formation.

Throughout the Universe, jets of subatomic particles are ejected by three phenomena: the supermassive black holes at the cores of galaxies, smaller black holes or neutron stars consuming material from companion stars, and young stars still in the process of gathering mass from their surroundings. Previously, magnetic fields were detected in the jets of the first two, but until now, magnetic fields had not been confirmed in the jets from young stars.

"Our discovery gives a strong hint that all three types of jets originate through a common process," said Carlos Carrasco-Gonzalez, of the Astrophysical Institute of Andalucia of the Spanish National Research Council (IAA-CSIC) and the National Autonomous University of Mexico (UNAM).

The astronomers used the National Science Foundation's Very Large Array (VLA) radio telescope to study a young star some 5,500 light-years from Earth, called IRAS 18162-2048. This star, possibly as massive as 10 Suns, is ejecting a jet 17 light-years long.

Observing this object for 12 hours with the VLA, the scientists found that radio waves from the jet have a characteristic indicating they arose when fast-moving electrons interacted with magnetic fields. This characteristic, called polarization, gives a preferential alignment to the electric and magnetic fields of the radio waves.

"We see for the first time that a jet from a young star shares this common characteristic with the other types of cosmic jets," said Luis Rodriguez, of UNAM.

The discovery, the astronomers say, may allow them to gain an improved understanding of the physics of the jets as well as of the role magnetic fields play in forming new stars. The jets from young stars, unlike the other types, emit radiation that provides information on the temperatures, speeds, and densities within the jets. This information, combined with the data on magnetic fields, can improve scientists' understanding of how such jets work.

"In the future, combining several types of observations could give us an overall picture of how magnetic fields affect the young star and all its surroundings. This would be a big advance in understanding the process of star formation," Rodriguez said.

Carrasco-Gonzalez and Rodriguez worked with Guillem Anglada and Mayra Osorio of the Astrophysical Institute of Andalucia, Josep Marti of the University of Jaen in Spain, and Jose Torrelles of the University of Barcelona. The scientists reported their findings in the November 26 edition of Science.

Bees do it, humans do it - move genes among crop plants, that is. But until now, researchers and growers had a hard time getting a grip on the factors that determine how much of this gene flow happens in an agricultural landscape.

A new data-driven statistical model that incorporates the surrounding landscape in unprecedented detail describes the transfer of an inserted bacterial gene via pollen and seed dispersal in cotton plants more accurately than previously available methods.

Shannon Heuberger, a graduate student at the University of Arizona's College of Agriculture and Life Sciences, and her co-workers published their findings in the open access journal, PLoS ONE.

The transfer of genes from genetically modified crop plants is a hotly debated issue. Many consumers are concerned about the possibility of genetic material from transgenic plants mixing with non-transgenic plants on nearby fields. Producers, on the other side, have a strong interest in knowing whether the varieties they are growing are free from unwanted genetic traits.

Up until now, realistic models were lacking that could help growers and legislators assess and predict gene flow between genetically modified and non-genetically modified crops with satisfactory detail.

This study is the first to analyze gene flow of a genetically modified trait at such a comprehensive level. The new approach is likely to improve assessment of the transfer of genes between plants other than cotton as well.

"The most important finding was that gene flow in an agricultural landscape is complex and influenced by many factors that previous field studies have not measured," said Heuberger. "Our goal was to put a tool in the hands of growers, managers and legislators that allows them to realistically assess the factors that affect gene flow rates and then be able to extrapolate from that and decide how they can manage gene flow."

The researchers measured many factors in the field and developed a geographic information system-based analysis that takes into account the whole landscape surrounding a field to evaluate how it influences the transfer of genes between fields. Genes can be transferred in several ways, for example by pollinators such as bees, or through accidental seed mixing during farming operations.

Surprisingly, the team found that pollinating insects, widely believed to be the key factor in moving transgenic pollen into neighboring crop fields, had a small impact on gene flow compared with human farming activity. Less than one percent of seeds collected around the edges of non-Bt cotton fields resulted from bee pollination between Bt and non-Bt cotton.

Most previous studies focused on the distance between the non-transgenic crop field and the nearest source of transgenic plants.

"Although this approach is simple, it is potentially less useful for understanding gene flow in commercial agriculture where there can be many sources of transgenic plants," Heuberger said.

Heuberger and her co-workers broadened the scope to include flower-pollinating bees, humans moving seeds around and the area of all cotton fields in a three-kilometer (1.9 mile) radius. This approach turned out to be more powerful for understanding the effect of surrounding fields than the customary model based solely on distance.
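To illustrate why both the area and the distance of surrounding fields matter, here is a minimal, hypothetical scoring function (a toy sketch, not the study's actual GIS model): each surrounding transgenic field contributes in proportion to its area, is down-weighted by distance, and is ignored entirely beyond the three-kilometer radius.

```python
def exposure_score(fields, radius_km=3.0):
    """Toy distance-and-area-weighted exposure score for a focal field.

    `fields` is a list of (distance_km, area_ha) tuples describing
    transgenic fields around the focal non-Bt field. Nearer and larger
    sources contribute more; sources beyond `radius_km` contribute
    nothing. The 1/distance weighting is purely illustrative.
    """
    score = 0.0
    for distance_km, area_ha in fields:
        if 0 < distance_km <= radius_km:
            score += area_ha / distance_km
    return score

# A large nearby Bt field dominates a small distant one,
# and a field outside the radius is ignored:
nearby = exposure_score([(0.5, 20.0)])    # 40.0
distant = exposure_score([(2.5, 5.0)])    # 2.0
outside = exposure_score([(4.0, 100.0)])  # 0.0
```

Under a distance-only model, a single small field close by and a vast field slightly farther away would be treated very differently; a landscape-wide score like this captures both.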

For the study, the scientists chose 15 fields across the state of Arizona planted with cotton that did not have the transgenic protein encoded by a gene from the bacterium Bacillus thuringiensis, or Bt. They assessed the number of pollinators visiting cotton flowers through field observations and determined the transfer of genes by collecting samples of cotton bolls and determining their genetic identity.

"We saw a need for a spatially explicit model that would account for the whole surrounding landscape," Heuberger said. "Our model takes into account the distance and area of all relevant neighboring fields, the effect of pollinators like bees and human factors that can result in the mixing of seed types."

Heuberger's findings have implications not just for genetically engineered traits but also more generally for seed production.

"When you grow a crop and want the variety to be pure, just being able to know how far gene flow will occur and how it is affected by pollinators and human farming activity in the area is very valuable."

In Hebrew, the Dead Sea is called Yam ha-Melah, the "sea of salt." Now measurements show that the sea's salt has profound effects on the chemistry of the air above its surface.

The atmosphere over the Dead Sea, researchers have found, is laden with oxidized mercury. Some of the highest levels of oxidized mercury ever observed outside the polar regions exist there.

The results appear in a paper published online November 28 in the journal Nature Geoscience.

In the research, funded by the National Science Foundation (NSF), scientist Daniel Obrist and colleagues at the Desert Research Institute in Reno, Nevada, and at Hebrew University in Israel measured several periods of extremely high atmospheric oxidized mercury.

Mercury exists in the atmosphere in an elemental and in an oxidized state. It's emitted by various natural and human processes, and can be converted in the atmosphere between these forms.

High levels of oxidized mercury are a concern, says Obrist, because this form is deposited quickly in the environment after its formation.

Atmospheric mercury deposition is the main way mercury, a potent neurotoxin, finds its way into global ecosystems.

After it's deposited, mercury can accumulate through the food chain where it may reach very high levels. "These levels are of major concern to humans," says Obrist, "especially in the consumption of mercury-laden fish."

Fish caught in oceans are the main source of mercury intake in the U.S. population.

Observations of high naturally occurring oxidized mercury levels had been limited to the polar atmosphere. There, oxidized mercury is formed during a process called atmospheric mercury depletion events.

During mercury depletions, elemental mercury is converted to oxidized mercury, which is then readily deposited on surfaces.

These events may add hundreds of tons of mercury each year to sensitive Arctic environments.

Now, Obrist says, "we've found near-complete depletion of elemental mercury--and formation of some of the highest oxidized mercury levels ever seen--above the Dead Sea, a place where temperatures reach 45 degrees Celsius."

Such pronounced mercury depletion events were unexpected outside the frigid poles. High temperatures were thought to impede this chemical process.

"Elemental mercury is somewhat resistant to oxidation, so it's been difficult to explain levels of oxidized mercury measured in the atmosphere outside polar regions," says Alex Pszenny, director of NSF's Atmospheric Chemistry Program, which funded the research. "These new results provide an explanation."

The mechanisms involved in the conversion of mercury above the Dead Sea appear similar, however, to those in polar regions: both start with halogens.

Halogens are non-metal elements such as fluorine, chlorine, bromine and iodine.

Observations and modeling results indicate that at the Dead Sea, the conversion of elemental mercury is driven by bromine.
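The bromine-driven conversion is often summarized as a two-step radical scheme. This is a simplified sketch of the commonly proposed mechanism; the full reaction network involves additional radicals and dissociation steps:

```latex
\begin{align*}
\mathrm{Hg^0 + Br^\bullet} &\rightarrow \mathrm{HgBr^\bullet} \\
\mathrm{HgBr^\bullet + Br^\bullet} &\rightarrow \mathrm{HgBr_2}
\end{align*}
```

The oxidized Hg(II) product is far less volatile than elemental mercury, which is why it deposits so quickly onto surfaces after it forms.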

The new results show that bromine levels observed above oceans may be high enough to initiate mercury oxidation.

"We discovered that bromine can oxidize mercury in the mid-latitude atmosphere," says Obrist, "far from the poles. That points to an important role of bromine-induced mercury oxidation in mercury deposition over the world's oceans."

What goes into the ocean, he says, may eventually wind up in its fish. And in those who eat them.

Obrist's co-authors are Xavier Fain of the Desert Research Institute and Eran Tas, Mordechai Peleg, David Asaf and Menachem Luria of Hebrew University.

The long-sought goal of a practical fusion-power reactor has inched closer to reality with new experiments from MIT’s experimental Alcator C-Mod reactor, the highest-performance university-based fusion device in the world.

The new experiments have revealed a set of operating parameters for the reactor — a so-called “mode” of operation — that may provide a solution to a longstanding operational problem: How to keep heat tightly confined within the hot charged gas (called plasma) inside the reactor, while allowing contaminating particles, which can interfere with the fusion reaction, to escape and be removed from the chamber.

Most of the world’s experimental fusion reactors, like the one at MIT’s Plasma Science and Fusion Center, are of a type called tokamaks, in which powerful magnetic fields are used to trap the hot plasma inside a doughnut-shaped (or toroidal) chamber. Typically, depending on how the strength and shape of the magnetic field are set, both heat and particles can constantly leak out of the plasma (in a setup called L-mode, for low-confinement) or can be held more tightly in place (called H-mode, for high-confinement).

Now, after some 30 years of tests using the Alcator series of reactors (which have evolved over the years), the MIT researchers have found another mode of operation, which they call I-mode (for improved), in which the heat stays tightly confined, but the particles, including contaminants, can leak away. This should prevent these contaminants from “poisoning” the fusion reaction. “This is very exciting,” says Dennis Whyte, professor in the MIT Department of Nuclear Science and Engineering and coauthor of some recent papers that describe more than 100 experiments testing the new mode. Whyte presented the results in October at the International Atomic Energy Agency International Fusion Conference in South Korea. “It really looks distinct” from the previously known modes, he says.

While in previous experiments in tokamaks the degree of confinement of heat and particles always changed in unison, “we’ve at last proved that they don’t have to go together,” says Amanda Hubbard, a principal research scientist at MIT’s Plasma Science and Fusion Center and coauthor of the reports. Hubbard presented the latest results in an invited talk at the November meeting of the American Physical Society’s Division of Plasma Physics, and says the findings “attracted a lot of attention.” But, she added, “we’re still trying to figure out why” the new mode works as it does. The work is funded by the U.S. Department of Energy.

The fuel in planned tokamaks, which comprises the hydrogen isotopes deuterium and tritium, is heated to more than 100 million degrees Celsius (although in present reactors like Alcator C-Mod, tritium is not used, and the temperatures are usually somewhat lower). This hot plasma is confined inside a doughnut-shaped magnetic “bottle” that keeps it from touching — and melting — the chamber’s walls. Nevertheless, its proximity to those walls and the occasional leakage of some hot plasma cause a small number of particles from the walls to mix with the plasma, producing one kind of contaminant. The other kind of expected contaminant is a product of the fusion reactions themselves: helium atoms, created by the fusing of hydrogen isotopes, which are not capable of further fusion under the same conditions.

When a fusion reactor operates, the impurities accumulate. Whyte says there have been various experimental observations and theoretical proposals for removing them at intervals after they accumulate. Now, he says, “We seem to have discovered a completely different flushing mechanism … so they don’t build up in the first place.”

One of the keys to triggering the new mode was to configure the magnetic fields inside the tokamak in a way that is essentially upside-down from the usual H-mode setup, Hubbard says.

The findings could be significant in enabling the next step forward in fusion energy, where fusion reactions and power are sustained mostly by “self-heating” without requiring a large, constant addition of outside power. Researchers expect to achieve this milestone, referred to as “fusion burn,” in a new international collaboration on a reactor called ITER, currently being built in France. The findings from MIT “almost certainly could be applied” to the very similar design of the ITER reactor, Whyte says.

Patrick Diamond PhD ’79, professor of plasma physics at the University of California at San Diego, says, “The findings are potentially of great importance,” because they could solve a key problem facing the design of next-generation fusion reactors: the occurrence of unpredictable bursts of heat from the edge of the confined plasma, which can “fry” some of the tokamak’s internal parts. “The I-mode eliminates or greatly reduces” these bursts of heat, “because it allows a steep temperature gradient — which is what you want — but does not allow a steep density gradient, which we don’t really need,” he says.

Diamond adds that theorists will have their work cut out to explain this mode. “Why do heat and particle transport behave differently? This is a really fundamental question, since most theories would predict a strong coupling between the two,” he says. “It’s a real challenge to us theorists — and important conceptually as well as practically.”

Rich Hawryluk, a researcher at the Princeton Plasma Physics Laboratory, says this is a "significant advance" which has generated considerable international interest and that other groups are now planning to follow up on these results. One area of research will be whether it is possible to "reliably operate in the I-mode and not go into the H-mode, which might have these violent edge instabilities. The operating conditions and the control requirements to stay in I-mode need to be better understood."

Hubbard explained that one of the key differences that made it possible to discover this phenomenon in MIT’s Alcator C-Mod was that this relatively small reactor, though large enough to produce results relevant to future reactors such as ITER, has great operational flexibility and can easily follow up on new findings. While larger reactors typically plan all their tests up to two years in advance, she says, “with this smaller machine, we have the ability to try new things when they appear. This ability to explore has been a key.”

Neuroscientists at MIT and Harvard have made the surprising discovery that the brain sees some faces as male when they appear in one area of a person’s field of view, but female when they appear in a different location.

The findings challenge a longstanding tenet of neuroscience — that how the brain sees an object should not depend on where the object is located relative to the observer, says Arash Afraz, a postdoctoral associate at MIT’s McGovern Institute for Brain Research and lead author of a new paper on the work.

“It’s the kind of thing you would not predict — that you would look at two identical faces and think they look different,” says Afraz. He and two colleagues from Harvard, Patrick Cavanagh and Maryam Vaziri Pashkam, described their findings in the Nov. 24 online edition of the journal Current Biology.

In the real world, the brain’s inconsistency in assigning gender to faces isn’t noticeable, because there are so many other clues: hair and clothing, for example. But when people view computer-generated faces, stripped of all other gender-identifying features, a pattern of biases, based on location of the face, emerges.

The researchers showed subjects a random series of faces, ranging along a spectrum of very male to very female, and asked them to classify the faces by gender. For the more androgynous faces, subjects rated the same faces as male or female, depending on where they appeared.

Study participants were told to fix their gaze at the center of the screen, as faces were flashed elsewhere on the screen for 50 milliseconds each. Assuming that the subjects sat about 22 inches from the monitor, the faces appeared to be about three-quarters of an inch tall.
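From the figures given above (faces about three-quarters of an inch tall, viewed from about 22 inches away), one can estimate the visual angle the stimuli subtended, which is the standard size measure in vision research. A small sketch of the standard formula:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle in degrees subtended by a stimulus of a given
    height viewed from a given distance (same units for both),
    using the standard formula 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan((size / 2) / distance))

# 0.75-inch face viewed from 22 inches: roughly 2 degrees of visual angle
angle = visual_angle_deg(0.75, 22)
```

A stimulus of about two degrees is small, which matters for the undersampling account discussed below: a small image activates only a limited patch of visual cortex.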

The patterns of male and female biases were different for different people. That is, some people judged androgynous faces as female every time they appeared in the upper right corner, while others judged faces in that same location as male. Subjects also showed biases when judging the age of faces, but the pattern for age bias was independent of the pattern for gender bias in each individual.

Afraz believes this inconsistency in identifying gender is due to a sampling bias, which can also be seen in statistical tools such as polls. For example, if you surveyed 1,000 Bostonians, asking if they were Democrats or Republicans, you would probably get a fairly accurate picture of the party split in the city as a whole, because the sample size is so large. However, if you took a much smaller sample, perhaps five people who live across the street from you, you might get 100 percent Democrats, or 100 percent Republicans. “You wouldn’t have any consistency, because your sample is too small,” says Afraz.
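The polling analogy is easy to simulate: draw repeated samples from a population split exactly 50/50 and compare how much the estimates swing for a large sample versus a tiny one. A minimal sketch (illustrative only, not from the study):

```python
import random

def poll(population_share, n, seed):
    """Fraction of 'Democrats' in a random sample of size n drawn from
    a population where `population_share` of people are Democrats."""
    rng = random.Random(seed)
    hits = sum(rng.random() < population_share for _ in range(n))
    return hits / n

# Run 20 independent polls at each sample size.
large = [poll(0.5, 1000, s) for s in range(20)]
small = [poll(0.5, 5, s) for s in range(20)]

# Large samples cluster tightly around the true 50 percent;
# five-person samples swing wildly between extremes.
spread_large = max(large) - min(large)
spread_small = max(small) - min(small)
```

The same population, sampled with five respondents, can come back all-Democrat or all-Republican purely by chance, which is the inconsistency Afraz attributes to small neural populations.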

He believes the same thing happens in the brain. In the visual cortex, where images are processed, cells are grouped by which part of the visual scene they analyze. Within each of those groups, there is probably a relatively small number of neurons devoted to interpreting gender of faces. The smaller the image, the fewer cells are activated, so cells that respond to female faces may dominate. In a different part of the visual cortex, cells that respond to male faces may dominate.

“It’s all a matter of undersampling,” says Afraz.

Michael Tarr, codirector of the Center for the Neural Basis of Cognition at Carnegie Mellon University, says the findings add to the growing evidence that the brain is not always consistent in how it perceives objects under different circumstances. He adds that the study leaves unanswered the question of why each person develops different bias patterns. “Is it just noise within the system, or is some other kind of learning occurring that they haven’t figured out yet?” asks Tarr, who was not involved in the research. “That’s really the fascinating question.”

Afraz and his colleagues looked for correlations between each subject’s bias pattern and other traits such as gender, height and handedness, but found no connections.

He is now doing follow-up studies in the lab of James DiCarlo, associate professor of neuroscience at MIT, including an investigation of whether brain cells can be recalibrated to respond to faces differently.