Sunday, February 28, 2010

By deploying modern computers and modern climate models, the two of us and our colleagues have shown that not only were the ideas of the 1980s correct but the effects would last for at least 10 years, much longer than previously thought. And by doing calculations that assess decades of time, only now possible with fast, current computers, and by including in our calculations the oceans and the entire atmosphere—also only now possible—we have found that the smoke from even a regional war would be heated and lofted by the sun and remain suspended in the upper atmosphere for years, continuing to block sunlight and to cool the earth.

India and Pakistan, which together have more than 100 nuclear weapons, may be the most worrisome adversaries capable of a regional nuclear conflict today. But other countries besides the U.S. and Russia (which have thousands) are well endowed: China, France and the U.K. have hundreds of nuclear warheads; Israel has more than 80, North Korea has about 10 and Iran may well be trying to make its own. In 2004 this situation prompted one of us (Toon) and later Rich Turco of the University of California, Los Angeles, both veterans of the 1980s investigations, to begin evaluating what the global environmental effects of a regional nuclear war would be and to take as our test case an engagement between India and Pakistan.

The latest estimates by David Albright of the Institute for Science and International Security and by Robert S. Norris of the Natural Resources Defense Council are that India has 50 to 60 assembled weapons (with enough plutonium for 100) and that Pakistan has 60 weapons. Both countries continue to increase their arsenals. Indian and Pakistani nuclear weapons tests indicate that the yield of the warheads would be similar to the 15-kiloton explosive yield (equivalent to 15,000 tons of TNT) of the bomb the U.S. used on Hiroshima.

Toon and Turco, along with Charles Bardeen, now at the National Center for Atmospheric Research, modeled what would happen if 50 Hiroshima-size bombs were dropped across the highest population-density targets in Pakistan and if 50 similar bombs were also dropped across India. Some people maintain that nuclear weapons would be used in only a measured way. But in the wake of chaos, fear and broken communications that would occur once a nuclear war began, we doubt leaders would limit attacks in any rational manner. The risk is particularly acute for Pakistan, which is small and could be quickly overrun in a conventional conflict. Peter R. Lavoy of the Naval Postgraduate School, for example, has analyzed the ways in which a conflict between India and Pakistan might occur and argues that Pakistan could face a decision to use all its nuclear arsenal quickly before India swamps its military bases with traditional forces.

Obviously, we hope the number of nuclear targets in any future war will be zero, but policy makers and voters should know what is possible. Toon and Turco found that more than 20 million people in the two countries could die from the blasts, fires and radioactivity—a horrible slaughter. But the investigators were shocked to discover that a tremendous amount of smoke would be generated, given the megacities in the two countries, assuming each fire would burn the same area that actually did burn in Hiroshima and assuming an amount of burnable material per person based on various studies. They calculated that the 50 bombs exploded in Pakistan would produce three teragrams of smoke, and the 50 bombs hitting India would generate four (one teragram equals a million metric tons).

Satellite observations of actual forest fires have shown that smoke can be lofted up through the troposphere (the bottom layer of the atmosphere) and sometimes then into the lower stratosphere (the layer just above, extending to about 30 miles). Toon and Turco also did some “back of the envelope” calculations of the possible climate impact of the smoke should it enter the stratosphere. The large magnitude of such effects made them realize they needed help from a climate modeler.

It turned out that one of us (Robock) was already working with Luke Oman, now at the NASA Goddard Space Flight Center, who was finishing his Ph.D. at Rutgers University on the climatic effects of volcanic eruptions, and with Georgiy L. Stenchikov, also at Rutgers and an author of the first Russian work on nuclear winter. They developed a climate model that could be used fairly easily for the nuclear blast calculations.

Robock and his colleagues, being conservative, put five teragrams of smoke into their modeled upper troposphere over India and Pakistan on an imaginary May 15. The model calculated how winds would blow the smoke around the world and how the smoke particles would settle out from the atmosphere. The smoke covered all the continents within two weeks. The black, sooty smoke absorbed sunlight, warmed and rose into the stratosphere. Rain never falls there, so the air is never cleansed by precipitation; particles very slowly settle out by falling, with air resisting them. Soot particles are small, with an average diameter of only 0.1 micron (μm), and so drift down very slowly. They also rise during the daytime as they are heated by the sun, repeatedly delaying their elimination. The calculations showed that the smoke would reach far higher into the upper stratosphere than the sulfate particles that are produced by episodic volcanic eruptions. Sulfate particles are transparent and absorb much less sunlight than soot and are also bigger, typically 0.5 μm. The volcanic particles remain airborne for about two years, but smoke from nuclear fires would last a decade.

Saturday, February 27, 2010

Twenty-five years ago international teams of scientists showed that a nuclear war between the U.S. and the Soviet Union could produce a “nuclear winter.” The smoke from vast fires started by bombs dropped on cities and industrial areas would envelop the planet and absorb so much sunlight that the earth’s surface would get cold, dark and dry, killing plants worldwide and eliminating our food supply. Surface temperatures would reach winter values in the summer. International discussion about this prediction, fueled largely by astronomer Carl Sagan, forced the leaders of the two superpowers to confront the possibility that their arms race endangered not just themselves but the entire human race. Countries large and small demanded disarmament.

Nuclear winter became an important factor in ending the nuclear arms race. Looking back later, in 2000, former Soviet Union leader Mikhail S. Gorbachev observed, “Models made by Russian and American scientists showed that a nuclear war would result in a nuclear winter that would be extremely destructive to all life on earth; the knowledge of that was a great stimulus to us, to people of honor and morality, to act.”

Why discuss this topic now that the cold war has ended? Because as other nations continue to acquire nuclear weapons, smaller, regional nuclear wars could create a similar global catastrophe. New analyses reveal that a conflict between India and Pakistan, for example, in which 100 nuclear bombs were dropped on cities and industrial areas—only 0.4 percent of the world’s more than 25,000 warheads—would produce enough smoke to cripple global agriculture. A regional war could cause widespread loss of life even in countries far away from the conflict.

Nuclear bombs dropped on cities and industrial areas in a fight between India and Pakistan would start firestorms that would put massive amounts of smoke into the upper atmosphere. The particles would remain there for years, blocking the sun, making the earth’s surface cold, dark and dry. Agricultural collapse and mass starvation could follow. Hence, global cooling could result from a regional war, not just a conflict between the U.S. and Russia. Cooling scenarios are based on computer models. But observations of volcanic eruptions, forest fire smoke and other phenomena provide confidence that the models are correct.

Friday, February 26, 2010

The long search for an AIDS vaccine has produced countless false starts and repeated failed trials, casting once bright hopes into shadows of disenchantment. The now familiar swings appeared in high relief last fall, with news of the most recent, phase III trial in Thailand. Initial fanfare for a protective outcome gave way to disappointment after reanalysis showed that the protection could be attributed only to chance. But rather than dashing all hopes for an AIDS vaccine, the trial has heartened some researchers, who see new clues in the battle against the fatal illness.

Costing $105 million and enrolling more than 16,000 subjects, the Thai clinical trial was the largest AIDS vaccine test to date. It began in 2003, and early results released last September showed a slim but statistically sound benefit from the vaccine (a series of inoculations with drugs known as ALVAC-HIV and AIDSVAX B/E). But in October the full report, with various statistical analyses, was released at a Paris meeting to greater skepticism. Specifically, 74 people who had received the placebo became infected with HIV in the trial period, compared with the 51 people who became infected after receiving the vaccine, which makes for a protective effect of 31.2 percent. By including, however, the seven people who turned out to have had HIV at the start of the trial (two in the placebo group and five in the vaccine group), the effectiveness drops to 26.4 percent.
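As a sanity check on those counts, the naive protective effect is just one minus the ratio of infection rates in the two arms. The sketch below assumes roughly equal arms of about 8,000 subjects each (the article gives only the more-than-16,000 total); the published 31.2 percent comes from the trial's own time-to-event analysis, so this simple ratio only lands close to it.

```python
# Naive protective-effect estimate from the Thai trial counts. This is a
# sketch: arm sizes are assumed, and the published 31.2% used the trial's
# survival analysis rather than this raw incidence ratio.
def protective_effect(infected_vaccine, n_vaccine, infected_placebo, n_placebo):
    """One minus the ratio of infection rates in the two trial arms."""
    rate_vaccine = infected_vaccine / n_vaccine
    rate_placebo = infected_placebo / n_placebo
    return 1 - rate_vaccine / rate_placebo

# 51 infections in the vaccine arm vs. 74 in the placebo arm
eff = protective_effect(51, 8000, 74, 8000)
print(f"{eff:.1%}")  # about 31%, close to the reported 31.2%
```

With the seven subjects who were HIV-positive at enrollment folded in, the same ratio drops toward the 26.4 percent figure the full report gives.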

“There are still a huge number of uncertainties surrounding this trial,” says Dennis Burton, an immunologist at the Scripps Research Institute in La Jolla, Calif. The subjects were in low- and moderate-risk groups, such as heterosexuals in monogamous relationships, rather than higher-risk groups such as intravenous drug users. “The numbers involved are small,” he adds, noting that statistically the protective effects could be the result of mere chance.

Still, many researchers are convinced that the trial has provided plenty of data to run with. “This contributes more evidence that an AIDS vaccine may be possible,” says Jerome Kim of the Walter Reed Army Institute of Research and co-author of the Thai trial study (which appeared in the New England Journal of Medicine in October). “We’ve taken a very small step,” Kim says. “It’s not a home run, but it opens the door to future work.” Vaccine proponents also point to the lessons learned from the failed Merck STEP trial. That vaccine test, halted in 2007, got only as far as phase II, but even so it did not leave researchers back at square one. It suggested, Kim notes, how some HIV strains could be blocked from infecting cells and offered data that could help in the interpretation of the Thai results. And a new analysis of the STEP trial, published last November in Proceedings of the National Academy of Sciences USA, provides a warning that the very vectors (adenoviruses, which are also employed in other vaccine development work) used to distribute the inactive HIV strains can actually make the immune system more vulnerable to infection by recruiting susceptible T cells to mucous membranes, where they are more likely to be infected during sexual activity.

Finding a vaccine has become an increasingly urgent undertaking. Despite advances in therapies, HIV/AIDS is still incurable. Some 7,000 people worldwide contract HIV every day, and in the U.S. about 66,000 new cases are reported every year. Preventing people from getting the virus would save millions of lives as well as greatly reduce health care costs associated with treatment. A vaccine is “really the only optimal method of control for this dreadful pandemic,” says Raphael Dolin of the Beth Israel Deaconess Medical Center in Boston, who also wrote an editorial accompanying the October paper.

Vaccines work by priming the immune system to recognize the target pathogen and attack it when detected. To fend off HIV, researchers introduced one vaccine (ALVAC) to induce a T cell response—thereby alerting the immune system—and another (AIDSVAX) later to spur an antibody response. In a previous phase III trial in intravenous drug users, AIDSVAX did not work. ALVAC, made by Sanofi Pasteur, had not been tested alone.

Using these two drugs together raised eyebrows in the vaccine community. Burton, along with 21 other researchers, co-authored a 2004 paper in Science criticizing the choice to proceed to phase III with two vaccines that had never demonstrated any effectiveness alone. The trial collaborators, however, based their decision on previous research showing that a combined approach can boost helper T cell response better than a single vaccine.

Despite his earlier doubts, Burton has been inspired by the trial results. “I feel more optimistic than I have in some time,” he says. Researchers are embarking on a host of new experiments to put the Thai findings to work. Volunteers from the trial will now be examined for immune responses—particularly neutralizing antibodies as well as cellular immunity in T cells—and some will get subsequent booster shots to see if protection can be sustained. In the lab, researchers will try to re-create the Thai results in monkeys to validate a new animal model using multiple low doses. Other recent research has shown that the number of antibodies needed to provide protection is lower than previously believed, possibly making a vaccine easier to create.

Indeed, entirely new and promising candidates are now in animal trials, including those by the U.S. military to address subtypes A, C and E (rather than the Thai subtype B). Other organizations—including the International AIDS Vaccine Initiative (IAVI), the Karolinska Institute and the Swiss nonprofit EuroVacc—and manufacturers also have other vaccines in the works. “The science is really moving,” says Seth Berkley, an epidemiologist at Columbia University’s Mailman School of Public Health and also president and founder of IAVI. All those confronting the epidemic hope that the momentum leads to a payoff sooner rather than later.

Thursday, February 25, 2010

Last December world leaders met in Copenhagen to add more hot air to the climate debate. That is because although the impacts humanity would like to avoid—fire, flood and drought, for starters—are pretty clear, the right strategy to halt global warming is not. Despite decades of effort, scientists do not know what “number”—in terms of temperature or concentrations of greenhouse gases in the atmosphere—constitutes a danger.

When it comes to defining the climate’s sensitivity to forcings such as rising atmospheric carbon dioxide levels, “we don’t know much more than we did in 1975,” says climatologist Stephen Schneider of Stanford University, who first defined the term “climate sensitivity” in the 1970s. “What we know is if you add watts per square meter to the system, it’s going to warm up.”

Greenhouse gases add those watts by acting as a blanket, trapping the sun’s heat. They have warmed the earth by roughly 0.75 degree Celsius over the past century. Scientists can measure how much energy greenhouse gases now add (roughly three watts per square meter), but what eludes precise definition is how much other factors play a role—the response of clouds to warming, the cooling role of aerosols, the heat and gas absorbed by oceans, human transformation of the landscape, even the natural variability of solar strength. “We may have to wait 20 or 30 years before the data set in the 21st century is good enough to pin down sensitivity,” says climate modeler Gavin Schmidt of the NASA Goddard Institute for Space Studies.

Despite all these variables, scientists have noted for more than a century that doubling preindustrial concentrations of CO2 in the atmosphere from 280 parts per million (ppm) to 560 ppm would likely result in global average temperatures roughly three degrees C warmer.
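That three-degree-per-doubling figure can be turned into a rough equilibrium-warming estimate for any concentration via the standard logarithmic approximation ΔT = S × log2(C/C0). The log2 form is a textbook rule of thumb, not something stated in the article; a minimal sketch:

```python
import math

# Equilibrium warming from the logarithmic CO2 approximation:
# dT = S * log2(C / C0), with S = 3 degrees C per doubling (the
# sensitivity cited in the article) and C0 = 280 ppm preindustrial.
def warming(c_ppm, sensitivity=3.0, c0_ppm=280.0):
    return sensitivity * math.log2(c_ppm / c0_ppm)

print(round(warming(560), 1))  # 3.0 -- a full doubling yields the full sensitivity
print(round(warming(387), 1))  # ~1.4 C of eventual warming at the article's current level
```

This is equilibrium warming, reached only after the oceans catch up, which is one reason the observed 0.75 degree C rise so far sits well below it.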

But how much heating and added CO2 are safe for human civilization remains a judgment call. European politicians have agreed that global average temperatures should not rise more than two degrees C above preindustrial levels by 2100, which equals a greenhouse gas concentration of roughly 450 ppm. “We’re at 387 now, and we’re going up at 2 ppm per year,” says geochemist Wallace Broecker of Columbia University. “That means 450 is only 30 years away. We’d be lucky if we could stop at 550.”

Goddard’s James Hansen argues that atmospheric concentrations must be brought back to 350 ppm or lower—quickly. “Two degrees Celsius [of warming] is a guaranteed disaster,” he says, noting the accelerating impacts that have manifested in recent years. “If you want some of these things to stop changing—for example, the melting of Arctic sea ice—what you would need to do is restore the planet’s energy balance.”

Other scientists, such as physicist Myles Allen of the University of Oxford, examine the problem from the opposite side: How much more CO2 can the atmosphere safely hold? To keep warming below two degrees C, humanity can afford to put one trillion metric tons of CO2 in the atmosphere by 2050, according to Allen and his team—and humans have already emitted half that. Put another way, only one quarter of remaining known coal, oil and natural gas deposits can be burned. “To solve the problem, we need to eliminate net emissions of CO2 entirely,” Allen says. “Emissions need to fall by 2 to 2.5 percent per year from now on.”
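Broecker's timing follows from simple division, assuming the constant 2-ppm-per-year growth rate he cites:

```python
# Years until the atmosphere reaches a target CO2 concentration, assuming
# the article's figures: 387 ppm today, rising at a constant 2 ppm per year.
def years_until(target_ppm, current_ppm=387, rate_ppm_per_year=2):
    return (target_ppm - current_ppm) / rate_ppm_per_year

print(years_until(450))  # 31.5 -- hence "450 is only 30 years away"
print(years_until(550))  # 81.5 years at the same pace
```

A constant rate is of course optimistic; emissions have historically accelerated, which would shorten both horizons.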

Climate scientist Jon Foley of the University of Minnesota, who is part of a team that defined safe limits for 10 planetary systems, including climate, argues for erring on the side of caution. He observes that “conservation of mass tells us if we only want the bathtub so high either we turn down the faucet a lot or make sure the drain is bigger. An 80 percent reduction [in CO2 by 2050] is about the only path we go down to achieve that kind of stabilization.”

The National Academy of Sciences, for its part, has convened an expert panel to deliver a verdict on the appropriate “stabilization targets” for the nation, a report expected to be delivered later this year. Of course, perspectives on what constitutes a danger may vary depending on whether one resides in Florida or Minnesota, let alone the U.S. or the Maldives.

Keeping atmospheric concentrations of greenhouse gases below 550 ppm, let alone going back to 350 ppm or less, will require not only a massive shift in society—from industry to diet—but, most likely, new technologies, such as capturing CO2 directly from the air. “Air capture can close the gap,” argues physicist Klaus Lackner, also at Columbia, who is looking for funds to build such a device.

Closing that gap is crucial because the best data—observations over the past century or so—show that the climate is sensitive to human activity. “Thresholds of irreversible change are out there—we don’t know where,” Schneider notes. “What we do know is the more warming that’s out there, the more dangerous it gets.”

Wednesday, February 24, 2010

In a Berlin basement sits a small torture chamber. The air inside the hermetically sealed steel chest consists of a choking 95 percent carbon dioxide, some nitrogen, and traces of oxygen and argon. The pressure within is 1/170 that on Earth, and the thermostat is set to –50˚F—in other words, a nice afternoon on Mars. Experiments at the facility regularly subject some of Earth’s hardiest creatures to this hell, and they do just fine.

This August, several dozen scientific institutes combined forces to test a variety of Earth species in Mars-like conditions. Identifying life-forms that can survive on another planet, what mechanisms they use to do so, and what by-products they leave behind will give scientists a more specific idea of what to look for when searching for E.T., says Jean-Pierre de Vera, a biologist at the German Center for Aeronautics and Space Research (DLR), where most of the experiments are carried out. At press time, the scientists had tested Deinococcus radiodurans, a bacterium known for its radiation tolerance, Xanthoria elegans, a lichen that thrives in Antarctica and low-oxygen conditions, and Bacillus subtilis, a comparatively ordinary bacterium found in soil around the planet. “I was astonished that organized, symbiotic communities such as lichens [which consist of fungi and photosynthetic algae or bacteria] can survive,” de Vera says. After 22 days, 80 to 90 percent of the lichens were not only alive but active—it seems that complex life-giving processes can happen off-planet. For one thing, de Vera says, “this is the first evidence that organisms might conduct photosynthesis on Mars.” Next he plans to investigate whether methane-producing bacteria, which could account for Mars’s methane clouds, can make it on the planet.

Tuesday, February 23, 2010

A whale’s skin is easily glommed up with barnacles, algae, bacteria and other sea creatures, but sharks stay squeaky clean. Although these parasites can pile onto a shark’s rippled skin, too, they can’t take hold and thus simply wash away. Now scientists have printed that pattern on an adhesive film that will repel bacterial pathogens in hospitals and public restrooms.

Patented by Sharklet Technologies, a Florida-based biotech company, the film, which is covered with microscopic diamond-shaped bumps, is the first “surface topography” proven to keep the bugs at bay. In tests in a California hospital, for three weeks the plastic sheeting’s surface prevented dangerous microorganisms, such as E. coli and Staphylococcus A, from establishing colonies large enough to infect humans. Bacteria have an easier time spreading out on smooth surfaces, says CEO Joe Bagan: “We think they come across this surface and make an energy-based decision that this is not the right place to form a colony.” Because it doesn’t kill the bacteria, there’s also little chance of the microbes evolving resistance to it. Hey, it’s worked for sharks for 400 million years.

That’s good news for hospitals, where infections from drug-resistant superbacteria like MRSA, a potentially fatal strain of staph, are becoming commonplace. Bagan hopes to stick the skin on nursing call buttons, bed rails, tray tables and other surfaces by next year. Pending FDA approval, the shark pattern could be manufactured directly onto bacteria hotbeds like catheters and water containers by 2012. First, though, look for Sharklet on high-touch surfaces like door handles in restaurant restrooms around the U.S. later this year—a welcome extra line of defense against those who forget to wash their hands.

Monday, February 22, 2010

Tired of Jack Frost knocking out your power? Victor Petrenko, an engineering professor at Dartmouth College, has developed de-icing technology that could save power lines from ice storms. Until now, the only answer to frozen lines has been to hope that they don’t break or pull down poles under the weight of the ice. A single ice storm in early December snapped power lines and left more than 1.25 million people in Pennsylvania, New England and New York shivering in the dark. Petrenko’s trick is to increase the electrical resistance in cables, something engineers usually avoid because it causes lines to lose energy as heat. Attached to each end of a line, his device switches the wires inside from a standard parallel layout to a series circuit. In normal conditions, the cable works like a standard power line, but flipping the line to series increases resistance, and the wires generate enough heat to shed the ice. The process takes 30 seconds to three minutes and saps less than 1 percent of the electricity running through the lines. Utility companies could switch the lines remotely, and Petrenko says swapping in his cables would cost less than repairing ice damage.
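The payoff of the parallel-to-series switch can be sketched with basic circuit arithmetic: resistive heating is P = I²R, and rewiring n identical strands from parallel into series multiplies the cable's resistance, and hence its heat output at a given current, by n². The strand counts below are illustrative, not details from the article.

```python
# Why flipping a multi-strand cable from parallel to series sheds ice:
# n identical strands of resistance R give R/n in parallel but n*R in
# series, so heating (P = I^2 * R) at a fixed current jumps by n^2.
def heating_ratio(n_strands):
    r = 1.0                       # per-strand resistance, arbitrary units
    parallel = r / n_strands      # strands side by side: low resistance
    series = r * n_strands        # same strands end to end: high resistance
    return series / parallel      # heat multiplier = n_strands ** 2

print(heating_ratio(2))  # two strands in series heat 4x more
```

In practice the current also changes when resistance does, but the n² jump shows why a modest rewiring, rather than any added power, is enough to melt the ice.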

This summer he tested the technology between two transmission towers near Orenburg, Russia; China is considering the device to protect its $170-billion investment in expanding its energy grid. This fall, Petrenko will test a modified version of the tech on an Audi A8 that he expects will de-ice its windshield in two to four seconds. Later, he’ll apply the tech to airplane wings, which could reduce delays and crashes. “A plane that could shed ice in seconds,” he says, “would be a much safer way to fly.”

Sunday, February 21, 2010

The earliest known attempt at earthquake-proofing dates to the sixth century B.C., when builders in modern-day Iran inserted stone blocks between a structure and its foundation to reduce vibrations. Today’s engineers buffer buildings with metal springs, ball bearings and rubber pads, all designed to sop up the energy from seismic waves. This summer, a team of physicists at the University of Liverpool in England and the French National Centre for Scientific Research tested a different strategy: redirect the waves altogether. Instead of absorbing tremors, a shield buried around a skyscraper simply reroutes them, like water running around a boulder.

The design consists of a concrete-and-plastic plate of concentric rings that encircles the foundation. The materials are arranged from stiffest to most flexible from the outer ring to the innermost. Waves follow the path of least resistance toward stiffer rings and bend away from the foundation as they pass through the plate. Computer simulations show that it could protect against the most destructive 70 percent of waves that travel horizontally in the soil from the epicenter. In theory, “this could protect any structure,” says Michael Tantala, a civil engineer and earthquake expert at Tantala Associates in Philadelphia. Engineers will probably combine traditional dampeners with the plate because it doesn’t protect against all types of waves, yet it could be particularly useful in areas where waves traveling horizontally are more destructive, such as parts of Seattle and San Francisco. “Everything around the building will be devastated,” says Sebastien Guenneau, one of the plate’s developers, “but the building itself will stay still.” Next year, engineers will test a two-foot-wide model of the design, and the tech could be on both new and old buildings as early as 2014.

Saturday, February 20, 2010

A DWINDLING SUPPLY OF MEDICAL ISOTOPES MEANS PATIENTS MIGHT NOT GET THE TESTS THEY NEED

The Chalk River nuclear reactor in Ontario doesn’t sell a watt of electricity. Never has. But when it sprang a leak and shut down this spring, it threw a multibillion-dollar industry into crisis. Before it broke, the reactor produced nearly two thirds of the U.S. supply of molybdenum-99, or Mo-99, the isotope behind 16 million critical diagnostic medical tests each year. In July, things got worse: The Dutch reactor that supplied the remaining third shut down for a month of repair work. Nuclear imaging is used on tens of thousands of patients every day to take pictures of their hearts, lungs, kidneys, bones, brains and other organs. Doctors inject isotopes into a patient and use a radiation-sensitive camera to locate blood clots and tumors or to diagnose seizures, among other things. Mo-99 is critical for about 80 percent of all nuclear medicine tests because as it decays, it releases a daughter isotope called technetium-99m, which is energetic enough for the camera to see, but its short, six-hour half-life means it conveniently decays to practically nothing after 24 hours. Unfortunately, Mo-99 can’t be stockpiled for more than a few days.
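The convenient disappearance of technetium-99m, and the impossibility of stockpiling it, follows directly from exponential decay with the six-hour half-life the article cites; a minimal sketch:

```python
# Fraction of technetium-99m remaining after a given time, using the
# six-hour half-life given in the article.
def fraction_remaining(hours, half_life_hours=6.0):
    return 0.5 ** (hours / half_life_hours)

print(fraction_remaining(24))  # four half-lives: 1/16, roughly 6% left after a day
```

The same arithmetic applied to Mo-99 itself (half-life about 66 hours, a figure from the isotope literature rather than this article) is why the supply chain can't bank more than a few days of product.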

With the two main reactors down, Mo-99 became scarce. “We were getting 10 percent of what we normally get,” says Michael Graham, president of the Society of Nuclear Medicine. “We had to cancel and postpone tests throughout the country.” Doctors resorted to procedures that were less effective or that exposed patients to higher radiation levels. Some tests, such as one that tracks the spread of cancer from breasts to lymph nodes, have no substitute, forcing patients to wait in line or do without.

Just five reactors supply 95 percent of the world’s Mo-99, and they’re all past their prime. A nuclear reactor’s average life span is 40 to 50 years. Chalk River is 52 years old. The Dutch reactor—which came back online in August—is 47. The other three, in France, South Africa and Belgium, are 42, 43 and 47, respectively. In 1996, Canada boldly tried to replace them all with its own two-reactor facility, called MAPLE, that would pump out enough Mo-99 to supply the whole world. Other reactor-builders, figuring they would be crushed by MAPLE’s massive output, stayed out of the isotope-making business. But MAPLE engineers found a set of flaws in the reactors, and last spring, after spending $600 million—several times the project’s budget—Canada officially killed it. “That was our big ‘oh, sh-t’ moment,” says Steve Mattmuller, chief nuclear pharmacist at Kettering Medical Center in Ohio. “We were right back where we were 20 years ago, but now our reactors were 20 years older.”

Since the MAPLE debacle, two long-term solutions have been put into motion. The nuclear-power firm Babcock & Wilcox plans to build a facility to supply half the U.S. Mo-99 market. And this summer, Congressman Edward Markey of Massachusetts introduced a $163-million bill for domestic Mo-99 production, some of which could be used to retrofit a reactor at the University of Missouri that could fill the other half. But neither project is likely to be finished before 2012.

The Mo-99 supply is back to 70 percent, but not for long. The Dutch pushed January’s six-month maintenance shutdown back to the spring in hopes that the Chalk River reactor will be back up by then, but the repairs are so extensive that the Canadian government might shut Chalk River down for good. With the two largest suppliers out, the world will again be forced to scrape by. As Mo-99 production trickles, certain procedures may once more become the high-stakes guessing games that they were before radioactive diagnostics. During this summer’s drought, Jim Ponto, chief nuclear pharmacist at the University of Iowa Hospitals and Clinics, had to put patients on a weeks-long waiting list. One of his patients opted to skip a Tc-99m procedure that would measure the spread of her cancer and minimize the extent of surgery. She couldn’t bear waiting a week for the test and instead went straight to the operating room. Cases like hers make Ponto nervous. “The cancer could spread,” he says, “and the doctor would never know it.”

Friday, February 19, 2010

Neurologists now determine if a patient has Alzheimer’s disease by giving the patient a memory test and then taking an extensive medical history, talking to the family and performing tests to eliminate other possible causes for the cognitive lapses. In this way, doctors accurately diagnose Alzheimer’s 90 percent of the time, especially with older patients, according to Nechama Bernhardt, a neurologist in Baltimore specializing in Alzheimer’s. Here are some of the signs of the memory loss and confusion that characterize the disorder:

• Asking the same questions repeatedly.

• Repeating the same story word for word multiple times.

• Forgetting how to do basic tasks that the person once performed easily, such as cooking, making repairs and playing cards.

• Problems paying bills or balancing a checkbook (assuming these tasks were not previously difficult).

• Getting lost in familiar places.

• Neglecting personal hygiene habits such as bathing or dressing in clean clothes while insisting on having taken a bath or put on a new outfit.

• Relying on someone else to make decisions—such as what to buy at a supermarket or where to go next—that were easily handled in the past.

None of the symptoms above—alone or even in combination—is a sure sign of the disease. But anyone who displays several of these abnormal behaviors should see a specialist for a more thorough examination.

Thursday, February 18, 2010

Modern research confirms that marriage is good for you, but the benefits for men and women are different. If we could randomly select 10,000 men to be married to 10,000 women, and if we could then follow these couples over the decades to see who died when, statistical analysis suggests that what we would find is this: being married adds seven years to a man’s life and two years to a woman’s life. Recent innovative work by demographer Lee Lillard, formerly at the University of Michigan at Ann Arbor, and his colleagues sociologist Linda Waite of the University of Chicago and economist Constantijn Panis of Deloitte Financial Advisory Services has focused on untangling how and why being married lengthens life. Their research has analyzed what happened to more than 11,000 men and women as they entered and left marital relationships during the period 1968 to 1988. They carefully tracked people from before their marriages until after they ended (either because of death or divorce) and even on to any remarriages. And they closely examined how marriage might confer health and survival benefits and how these mechanisms might differ for men and women.

The emotional support that spouses provide has numerous biological and psychological benefits. Being near a familiar person can have effects as diverse as lowering heart rate, improving immune function and reducing depression. In terms of gender roles, Lillard and Waite found that the main way marriage is helpful to the health of men is by providing them with social support and connection, via their wives, to the broader social world. Equally important, married men abandon what have been called “stupid bachelor tricks.” When they get married, men assume adult roles: they get rid of the motorcycle in the garage, stop using illegal drugs, eat regular meals, get a job, come home at a reasonable hour and start taking their responsibilities more seriously—all of which helps to prolong their life.

This process of social control, with wives modifying their husbands’ health behaviors, appears to be crucial to how men’s health improves with marriage. Conversely, the main way that marriage improves the health and longevity of women is much simpler: married women are richer.

This cartoonish summary of a large body of demographic research may seem quite sexist and out-of-date. It is important to note that these studies involved people who were married in the decades when women had much less economic power than men. Nevertheless, these results point to something more profound and less contentious, namely, that pairs of individuals exchange all kinds of things that affect their health, and such exchanges—as with any transaction—need not be symmetric, either in the type or amount exchanged.

Wednesday, February 17, 2010

If worry is an integral part of what makes us human, can it also serve a positive function? Psychologist Graham Davey of the University of Sussex in England was one of the first experts to suggest potential plus sides to worry. In a 1994 study Davey explored a range of consequences stemming from this natural tendency; he found people reported that although fretting can make things worse, it can also be constructive, helping to motivate them to take action, resolve problems and reduce anxiety.

More recent research supports the idea that elevated levels of worry can improve performance. In 2005 psychologist Maya Tamir, then at Stanford University, showed that neurotic students were more likely to believe that increasing their level of worry when working on a cognitively demanding task, such as a test, would allow them to excel. Worrying before the test indeed helped the more neurotic individuals do better, whereas the pretest level of worry did not particularly influence the overall experience or outcomes for the less neurotic participants. Not only can worry benefit performance, but it may also encourage action. A 2007 study in the journal Cognition and Emotion revealed that smokers may be more motivated to quit if they worry about the risks of smoking. The promising results prompted the study authors to suggest potential strategies, such as having doctors remind smokers about the downsides, capitalizing on the worry-motivation relationship to encourage smokers to dispense with cigarettes.

Although it is difficult to determine the precise line between healthy, beneficial worry and unhealthy, detrimental worry, Michel Dugas, a psychologist at Concordia University in Montreal, likes to think of worry as a bell curve whereby moderate levels are associated with improved functioning, but excess levels are associated with a decline in performance.

Christine Calmes, a postdoctoral fellow at the VA Capitol Mental Illness Research, Education and Clinical Center in Baltimore, believes that successful people operate a little higher on the worry scale. As long as fretting doesn’t get the better of someone, it can work to his or her advantage. “It’s all about how people cope with the worry,” Calmes says. “If it’s incapacitating, then it’s not okay. But if worrying motivates people to go above and beyond—put in longer hours, attend to details that others may miss—then it’s a good thing.”

Tuesday, February 16, 2010

Although worry hijacks aspects of our emotional circuitry, chronic worriers seek to control their emotions—and their fretting does tend to numb emotional responses. For instance, it is fairly well established that damage to the frontal lobe—the region the Boston University study showed to be more active in worriers thinking about the future—is associated with blunted, or even absent, emotions. In another emotion-damping mechanism, several studies have confirmed that excess fretting reduces activity in the sympathetic nervous system in response to a threat. This branch of the nervous system normally allows the body to react quickly to impending danger by accelerating breathing and also increasing heart rate to oxygenate muscles to fight or flee.

In one classic study from 1990 Borkovec showed by observing heart rates how worry can dull emotional reactions. He found that people with anxiety about public speaking did not experience variations in their heart rate when relaxing, remaining neutral (that is, neither worrying nor relaxing) or engaging in worry before viewing scary images. After seeing the images, however, subjects in the worry group displayed significantly less variation in heart rate than those in the neutral or relaxed condition, despite reporting feeling more fearful.

At the same time, worry hinders a person’s physical reaction to a threat by amplifying activity in the parasympathetic nervous system. When working properly, this part of the nervous system quiets the body as it recovers from a stressful experience. I experienced this system in operation when I participated in a study in Mennin’s laboratory at Yale. The scene was a lone arm suspended in midair. A hand carrying a razor started slicing it. Blood seeped out of the wound as the razor dug deeper, exposing a mass of blood and cartilage. I wanted nothing more than to look away. Amelia Aldao, the Ph.D. student conducting the experiment, wanted to measure my physiological reaction to various film segments, each one meant to elicit a distinct emotion (for instance, disgust in the case of the mutilated arm).

Aldao recorded with electrocardiography how I dealt with a variety of emotions (this Yale study was the first to expand beyond fear), removed the electrodes from my body and led me into the adjacent room. She did some quick calculations on her computer and out popped a few of my stats. Good news. My heart rate variability was high, and my average heart rate measured about 58 beats per minute. These values indicated that my heart could cope well with intense emotions.

In contrast, by consciously trying to be ready for the worst, worriers are actually compromising their body’s ability to react to a truly traumatic event. In 2006 researchers at Columbia University, the National Institute on Aging and Leiden University in the Netherlands reviewed more than two dozen studies and found that overworrying can tax the body and promote cardiovascular problems. Overall, increased worry was associated with an elevated resting heart rate but low heart rate variability. Excessive worriers and GAD patients experienced lower heart rate variability during periods of worry; in other words, their hearts returned to a resting rate more slowly than those of healthy worriers did. Prolonged periods of stress even weakened participants’ endocrine and immune function. Some studies reported that excess worry is linked to elevated levels of the stress hormone cortisol, which slows immune responses and may make chronic worriers more susceptible to disease.

Monday, February 15, 2010

Simple tips and tricks you can use to cope with the stresses of everyday life.

1. Identify productive and unproductive worry. First, determine whether your worries will help you find practical solutions to a dilemma. If “yes, my worries can be constructive,” write a to-do list with explicit steps to help solve the problem. If the answer is “no, my worries are not helping me,” use some of the techniques below to help deal with unproductive worries.

2. Keep an appointment with your worries. Write down your unproductive worries throughout the day and set aside a chunk of time, say 6 to 6:30 p.m., dedicated specifically to thinking about them. By 6, “you may find you’re not interested in those worries anymore,” Leahy says. “Many people find that what they thought they needed an answer to earlier, they don’t care about later in the day.”

3. Learn to accept uncertainty. Worriers have a hard time accepting they can never have complete control in their lives. Leahy says that quietly repeating a worry for 20 minutes (“I may never fall asleep” or “I could lose my job”) reduces its power. “Most people get bored by their worries and don’t even make it to 20 minutes,” he notes.

4. Be mindful. Mindfulness, a technique based on Buddhist teachings, preaches staying in the present moment and experiencing all emotions even when they are negative. Leahy explains there are ways to be mindful throughout your day, while deeply immersed in your favorite song or in conversation with friends. Try living in the now by practicing deep breathing. Let your body relax and the tension in your muscles melt away.

5. Reframe your worry. What happens if a worry comes true? Could you survive losing your job or being dumped? Reframing how you evaluate disappointments in life can take the sting out of failure, Leahy says. Create a positive spin by asking yourself what you have learned from your bad experiences. Make a list of things for which you are grateful.

6. Put worries in perspective. Examine past worries. Do you have a hard time remembering what they are? Very likely this means that those worries never came true or that you were able to cope and forget, Leahy says.

Sunday, February 14, 2010

Worry began to draw the attention of researchers about 25 years ago, when they started to fine-tune their understanding of the spectrum of anxiety-related pathologies. In the early 1980s psychologist Thomas Borkovec of Pennsylvania State University, a pioneer in this field, became interested in the trait while investigating sleep disorders. He found that intrusive cognitive activity at bedtime—worrying—was a factor in insomnia.

By 1990 Borkovec and his colleagues had developed the Penn State Worry Questionnaire, a diagnostic tool that helped researchers show excessive fretting to be a feature of all anxiety disorders, especially generalized anxiety disorder (GAD). Psychologists revised the official psychiatric guidelines (then the Diagnostic and Statistical Manual of Mental Disorders III) to reflect this understanding, calling worry the cardinal feature of GAD and making chronic worry a recognized mental health problem. It is now known to affect 2 to 3 percent of the U.S. population, according to the National Institute of Mental Health. Borkovec defined three main components of garden-variety worry: overthinking, avoidance of negative outcomes and inhibition of emotions. Mennin explains that worry piggybacks on humans’ innate tendency to think about the future: worriers “crave control.” He says “chronic worriers see the world as an unsafe place and want to fight this sense of unrest.”

Overworriers feel that fretting gives them this control, and they tend to avoid situations they can’t have power over. In a 1995 study Borkovec found that people agonized about matters that rarely occurred. The participants, nonetheless, often reported that they believed the overthinking about a possible negative event had prevented it from taking place. Unsurprisingly, worriers show increased activity in areas of the brain associated with executive functions, such as planning, reasoning and impulse control. In 2005 psychologist Stefan Hofmann of Boston University used an electroencephalogram (EEG) to measure activity in the prefrontal cortex, before and after 27 undergraduates were told to give a speech in public. He confirmed previous evidence that activity in the left frontal cortex increases for people who worry compared with those who do not, suggesting that the left frontal cortex plays a prominent role in worrying. Trying too hard to be in command of a given situation or their own thoughts may backfire when worriers are instead overrun with repetitive apprehensions.

Research shows that the more we dwell on negative thoughts, the more those threats feel real and the more they will repeat in our skulls, sometimes uncontrollably. In 1987 Daniel M. Wegner, a psychologist at Harvard University, found that when people were told not to think about a white bear, they tended to mention it about once a minute. In the experiment, Wegner left a participant in a room with a microphone and a bell and asked the volunteer to talk freely about any topic. At one point, he interrupted the person’s monologue and told him to continue talking— but this time, not to think of a white bear. If the subject did think of a white bear, he had to ring the bell. On average, people rang the bell more than six times in the next five minutes and even said “white bear” out loud several times. “By trying to put a worry or a thought out of our mind, it only makes the worry worse,” Wegner says. “Just like when a song gets stuck in your head, you think you ought to be able to get rid of it, but you only end up making it stick more by trying to push it away.” Two mental processes may be at play here, according to Wegner. First, by consciously looking for distractions from the white bear (or your nagging worry), you remain somewhat aware of the undesired thought. The second reason suppression fails is that often you are making an unconscious effort to catch yourself thinking of the forbidden thought, ultimately sensitizing your brain to it.

Two emotion-processing areas of the brain are also involved in worry: the anterior insula and the amygdala. A 2008 Psychological Science study that used functional MRI found that when participants anticipated losing a significant amount of money in the future, activity increased in their anterior insula. That area not only becomes more active in response to worry, but the inclination to worry is also reinforced, because people believe that the act helps them avoid potential losses. The researchers concluded that sometimes, when it comes to making daring monetary decisions, overthinking may turn out to be a good thing.

In 2009 Jack Nitschke, a clinical psychologist at the University of Wisconsin–Madison School of Medicine and Public Health, reported using fMRI to measure activity in the amygdala while GAD patients and healthy subjects viewed pictures of items that were negative (for instance, mutilated bodies) or neutral (say, a fire hydrant). A few seconds before seeing the images, patients received a cue to let them know whether to expect a negative or neutral photograph. Although GAD and healthy subjects experienced no difference in amygdala activation when shown either type of picture, GAD patients displayed unusually high levels of amygdala activity to both negative and neutral cues—suggesting that merely anticipating the possibility of something negative in the future recruits specific neural circuitry, Nitschke says.

Saturday, February 13, 2010

The latest research into the neural roots of intelligence may lead to better drugs and tools for cognitive enhancement. In the future, drugs may enhance the neurotransmitters that regulate communication among the salient brain areas underlying general intelligence or more specific mental abilities. Other drugs could stimulate gray matter growth or white matter integrity in relevant areas. Certainly such advances would be welcome as potential treatments for mental retardation and developmental disabilities. They may also be welcomed by anyone looking for more intelligence. If an effective “IQ pill” becomes available, are the societal and ethical issues the same as for performance-enhancing drugs in sports, or is there a moral imperative that more intelligence is always better than less? Apparently, many scientists agree with the latter. An online survey of 1,427 scientists conducted in 2008 by Nature found that 20 percent of respondents already use prescription drugs to enhance “concentration” rather than for treating a medical condition. Almost 70 percent of 1,258 respondents who answered the question said they would be willing to risk mild side effects to “boost their brainpower” by taking cognition-enhancing drugs. Eighty percent of all the scientists who responded—even those who did not use these drugs—defended the right of “healthy humans” to take them as work boosters, and more than half said their use should not be restricted, even for university entrance exams. More than a third said that they would feel pressure to give their children such drugs if they knew other kids at school were also taking them. Few appear to favor the “ignorance is bliss” position. Intelligence is a critical resource for the development of civilization. As the global economy evolves and small countries compete with larger countries, assessing, developing and even enhancing intellectual talent may well become the neuroscience challenge for the 21st century.

Friday, February 12, 2010

Brain-imaging studies reveal many areas in which the amount of gray matter (neuron bodies) correlates with intelligence test scores. The color patches above indicate the approximate location of the Brodmann areas—structural groupings of neurons numbered according to historical tradition. The letters on each Brodmann area indicate which intelligence factors it is associated with: general (g); spatial (s); and crystallized (c), or factual knowledge. Every individual has a unique pattern of gray matter in these areas, giving rise to different cognitive strengths and weaknesses. Fourteen of the Brodmann areas (colored orange above) are consistently implicated in studies of intelligence-related brain structure and function. Neuropsychologist Rex E. Jung of the University of New Mexico and I reviewed the studies and identified this network, calling it the parieto-frontal integration theory (P-FIT ) because areas in the parietal (green) and frontal (blue) lobes were consistent across the most studies. Most of the P-FIT areas are involved in computation (frontal areas) and sensory integration (parietal areas), the processing and conscious understanding of sensory information.

Thursday, February 11, 2010

Surely there must have been times in high school or college when you lay in bed, late at night, and wondered where your “free will” came from? What part of the brain—if it is the brain—is responsible for deciding to act one way or another? One traditional answer is that this is not the job of the brain at all but rather of the soul. Hovering above the brain like Casper the Friendly Ghost, the soul freely perturbs the networks of the brain, thereby triggering the neural activity that will ultimately lead to behavior. Although such dualistic accounts are emotionally reassuring and intuitively satisfying, they break down as soon as one digs a bit deeper. How can this ghost, made out of some kind of metaphysical ectoplasm, influence brain matter without being detected? What sort of laws does Casper follow? Science has abandoned strong dualistic explanations in favor of natural accounts that assign causes and responsibility to specific actors and mechanisms that can be further studied. And so it is with the notion of the will.

Sensation and Action
Over the past decade psychologists such as Daniel M. Wegner of Harvard University have amassed experimental evidence for a number of conscious sensations that accompany any willful action. The two most important are intention and agency. Prior to voluntary behavior lies a conscious intention. When you decide to lift your hand, this intention is followed by planning of the detailed movement and its execution. Subjectively, you experience a sensation of agency.

You feel that you, not the person next to you, initiated this action and saw it through. If a friend were to take your hand and pull it above your head, you would feel your arm being dragged up, but you would not feel any sense of being responsible for it. The important insight here is that the consciously experienced feelings of intention and agency are no different, in principle, from any other consciously experienced sensations, such as the briny taste of chicken soup or the red color of a Ferrari. And as a plethora of books on visual illusions illustrate, often our senses can be fooled—we see something that is not there. So it is with the sensation of intentionality and agency. Decades of psychology experiments—as well as careful observation of human nature that comes from a lifetime of living—reveal many instances where we think we caused something to happen, although we bear no responsibility for it; the converse also occurs, where we did do something but feel that something or somebody else must have been responsible. Think about the CEO of a company who takes credit—and bonuses worth many millions—if the stock market price of his company rises but who blames anonymous market forces when it tanks. It is a general human failing to overestimate the import of our own actions when things go well for us. Lest there be any misunderstanding: the sensations of the intention to act and of agency do not speak to the metaphysical debate about whether will is truly free and whether that even is a meaningful statement. Whether free will has some ontological reality or is entirely an illusion, as asserted forcefully by Wegner’s masterful monograph, does not invalidate the observation that voluntary actions are usually accompanied by subjective, ephemeral feelings that are nonetheless as real as anything else to the person who experiences them.

Telling Clues from Surgeries
The quiddity of these sensations has been strengthened considerably by neurosurgeons. During certain types of brain surgery, neural tissue must be removed, either because it is tumorous or because it gives rise to epileptic seizures. How much tissue to remove is a balancing act between the Scylla of leaving remnants of cancerous or seizure-prone material and the Charybdis of removing regions that are critical for speech or other near-essential operations. To probe the function of nearby tissue, the neurosurgeon stimulates it with an electrode that passes pulses of current while the patient—who is awake and under local anesthesia to minimize discomfort—is asked to touch each finger successively with the thumb, count backwards or do some other simple task.

During the course of such explorations in 1991, neurosurgeon Itzhak Fried, now at the University of California, Los Angeles, and his colleagues stimulated the presupplementary motor area, part of the vast expanse of cerebral cortex that lies in front of the primary motor cortex. Activation of different parts of the motor cortex usually triggers movements in different parts on the opposite side of the body, for example, the foot, leg, hip, and so on. The medical team discovered that electrical stimulation of this adjacent region of cortex can, on occasion, give rise to an urge to move a limb. The patient reports that he or she feels a need to move the leg, elbow or arm.

This classical account was elaborated on by a recent study from Michel Desmurget and his colleagues at the Center for Cognitive Neuroscience in Bron, France, that was published in the international journal Science. Here it was electrical stimulation of the posterior parietal cortex, gray matter involved in the transformation of visual information into motor commands—as when your eyes scan the scene in front of you and come to rest on the movie marquee—that could produce pure intentions to act. Patients made comments (in French) such as “It felt like I wanted to move my foot. Not sure how to explain,” “I had a desire to move my right hand,” or “I had a desire to roll my tongue in my mouth.” In none of these cases did they actually carry out the movement to which they referred. But the external stimulation caused an unambiguous conscious feeling of wanting to move. And this feeling arose from within, without any prompting by the examiner and not during sham stimulation. This was different from the cortical sector explored by the earlier Fried study.

One difference between the two stimulated regions was that, at higher current levels, the patient actually moved the limb when the target site was the presupplementary motor area. Parietal stimulation, on the other hand, could trigger a sensation that actual movement had occurred, yet without any motion actually occurring (illusion of movement).

The take-home lesson is that the brain has specific cortical circuits that, when triggered, are associated with sensations that arise in the course of wanting to initiate and then carry out a voluntary action. Once these circuits are delimited and their molecular and synaptic signatures identified, they constitute the neuronal correlates of consciousness for intention and agency. If these circuits are destroyed by a stroke or some other calamity, the patient might act without feeling that it is she who is willing the acting! In the debate concerning the meaning of personal freedom, these discoveries represent true progress, beyond the eternal metaphysical question of free will that will never be answered.

Wednesday, February 10, 2010

The global information resource spun out of research into fundamental physics

When Tim Berners-Lee sketched out what we now know as the World Wide Web, he offered it as a solution to an age-old but prosaic source of problems: documentation. In 1989 the computer scientist was working at CERN, the particle physics laboratory near Geneva, just as a major project, the Large Electron Positron collider, was coming online. CERN was one of the largest Internet sites in Europe at the time, home to thousands of scientists using a variety of computer systems. Information was stored hierarchically: a treelike central repository held documents at the end of its branches. Finding a file meant crawling up the trunk and out to the right leaf. Scientists who were new to CERN (and there were a lot of them—most researchers stayed only for brief, two-year stints) had a hard time figuring out which branches to venture onto to find the right information for their project.

In a proposal to CERN management that March, Berners-Lee suggested constructing a system that operated more like the working structure of the organization itself: “A multiply connected ‘web’ whose interconnections evolve with time,” he wrote in Information Management: A Proposal. Information would no longer be stored on hierarchical trees; instead a forest of nodes would be connected by links. “When describing a complex system,” he wrote, “many people resort to diagrams with circles and arrows. . . . The system we need is like a diagram of circles and arrows, where circles and arrows can stand for anything.”

It was this agnosticism regarding content that gave what became the Web the power it has today. The system Berners-Lee finished on Christmas Day in 1990 was imbued with flexibility at every level: any file could be identified by its unique address, or Uniform Resource Locator (URL). Behind the scenes, the Hypertext Transfer Protocol (HTTP) provided a uniform language for different types of computer systems to communicate with one another. And simple Hypertext Markup Language (HTML) linked documents together and specified how they should appear. Equally important, the components were made available free of charge to anyone who wanted them. Two decades later the World Wide Web has proven itself to be the most effective information dissemination platform ever created.
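Those building blocks still structure every Web address today. As a minimal illustration, Python’s standard urllib library can pull a URL apart into the pieces Berners-Lee defined; the address below is a made-up example, not a real page:

```python
from urllib.parse import urlsplit

# A hypothetical address, used only to illustrate the anatomy of a URL.
url = "http://info.example.org/hypertext/WWW/TheProject.html"

parts = urlsplit(url)
print(parts.scheme)  # the protocol spoken behind the scenes: "http"
print(parts.netloc)  # the machine that holds the document: "info.example.org"
print(parts.path)    # where the file lives on that machine
```

The scheme names the conversation (HTTP), the network location names the computer, and the path names the document—exactly the separation of concerns that let any circle point to any other circle.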

Tuesday, February 9, 2010

Alfred Wegener’s idea of continental drift wandered in the wilderness for the first few decades after he wrote about it in his 1915 book, The Origin of Continents and Oceans. Although some geologists marshaled further evidence for the theory, most remained skeptical because no plausible mechanism seemed capable of sending huge landmasses plowing through the ocean crust on long journeys across the surface of the earth.

The modern concept of moving tectonic plates emerged in 1962, proposed by Harry H. Hess of Princeton University. Hess had captained a U.S. Navy transport ship during World War II and used the vessel’s sonar to map the Pacific Ocean floor along his travels. He hypothesized that all the earth’s crust—oceanic as well as continental—was mobile, driven by convective motions in the underlying layer known as the mantle. New crust forms at mid-ocean ridges, where hot magma from the mantle wells up and crystallizes. The young crust spreads from the ridges, and old crust sinks back down at deep ocean trenches. In this way, the crust and the uppermost, solid portion of the mantle (together known as the lithosphere) are divided into moving plates. Hess’s ideas became accepted after studies found the magnetism of rock on the ocean floor matched predictions: the earth’s magnetic field, which sporadically reverses polarity, leaves its imprint in solidifying rock, producing bands of alternating magnetism parallel to ocean ridges.

Continental drift thus has its roots in the immense heat coming from the planet’s interior. Radioactive decay still produces the heat today. Yet scientists estimate that three billion years ago twice as much heat was emerging, leading to numerous hotspots with magma welling up, fragmenting the early lithosphere into many small tectonic plates.
The first continents may have been not much larger than Iceland and a lot like it in other ways, too: for 16 million years or so Iceland has been forming above a hotspot on the Mid-Atlantic Ridge.

Monday, February 8, 2010

Many have speculated that it exists to keep surgeons in business. Leonardo da Vinci thought it might be an outlet for “excessive wind” to prevent the intestines from bursting. The great artist and anatomist was not entirely off base in that the human appendix does appear to have originated at a time when primates ate plants exclusively, and all that fiber was tougher to digest.

The intestinal offshoot formally known as the vermiform appendix is a long, slender cavity, closed at its tip. It branches off the cecum, which is itself a big pouch at the beginning of the large intestine that receives partly digested food emptying from the small intestine. While food stalls in the cul-de-sac of the cecum, friendly gut microbes help to break it down further. Some of today’s herbivorous animals, such as rabbits and koalas, have a large appendix, filled with specialized cellulose-digesting bacteria for the same purpose. Yet plenty of plant-eating mammals, including some monkeys, have no appendix at all, relying on an enlarged cecum to break down plants. Because the appendix seems to be optional even among primates, biologists cannot simply infer that ours is a shrunken legacy from a common ancestor with the bunny. Rather the primate appendix and the appendices of other herbivorous mammals appear to have evolved independently as extensions of the cecum—perhaps for the same digestive purpose—but the human appendix has long since lost that function.

Serving as a repository for food and benign digestive bugs, though, may have created a secondary role for the appendix, at least early in life. Its inner lining is rich in immune cells that monitor the intestinal environment. During the initial weeks of infancy, the human gut is first populated with its normal, healthy complement of symbiotic microbes; the appendix may be a training center to help naive immune cells learn to identify pathogens and tolerate harmless microbes. If it hasn’t already been removed in early adulthood, the opening of the appendix cavity closes entirely sometime in middle age. But by that time its purpose may have been served.

Sunday, February 7, 2010

The yummy baked good is one of America’s first and finest contributions to world cuisine

Like many acts of pure genius, the invention of the cupcake is lost in the creamy fillings of history. According to food historian Andrew Smith, the first known recipe using the term “cupcake” appeared in an American cookbook in 1826. The “cup” referred not to the shape of the cake but to the quantity of ingredients; it was simply a downsized English pound cake. Lynne Olver, who maintains a Web site called the Food Timeline, has tracked down a recipe for cakes baked in cups from 1796. But we will probably never know the name of the first cook to take the innovative leap or whether it had anything to do with a six-year-old’s birthday party. “Just like other popular foods—the brownie comes to mind—it’s impossible to pinpoint a date of origin for the cupcake,” says culinary historian Andrea Broomfield.

That cook almost certainly lived on the left bank of the Atlantic. Broomfield says that the earliest known cupcake recipes in England date to the 1850s and that their popularization was slow. One writer in 1894 had evidently never heard of cupcakes: “In Miss [Mary E.] Wilkins’s delightful New England Stories, and in other tales relating to this corner of the United States, I have frequently found mention of cup-cake, a dainty unknown, I think, in this country. Will some friendly reader . . . on the other side of the Atlantic kindly answer this query, and initiate an English lover of New England folks and ways into the mysteries of cup-cake?” Even to this day true cupcakes—as opposed to muffins or cakes cut up into cup-size portions—are sadly uncommon in Europe.

In recent years the U.S. has had something of a great cupcake awakening, as blogs and bakeries have devoted themselves to its pleasures. Some attribute this renewed popularity to the cupcake-indulging characters of HBO’s Sex and the City, and food historian Susan Purdy also credits dietary awareness: you can have your low-calorie cake and eat it, too. But true connoisseurs needed no moment of rediscovery. They never forgot what it was like to be six.

Saturday, February 6, 2010

Nearly every vehicle on the road today is powered by some version of the four-stroke internal-combustion engine patented by Nikolaus Otto in 1876. Otto exploited the findings of French physicist Sadi Carnot, who in 1824 showed that the efficiency of an engine depends critically on the temperature differential between a hot “source” of energy and a cold “sink.” The four-stroke engine compresses an air-fuel mixture and ignites it with a spark, thus creating a fleeting but intense source of heat. Its portable efficiency has not been matched since.

Yet some consider the internal-combustion engine an anachronism, a dangerously out-of-date vestige of a world that assumed oil was unlimited and the climate stable. The best hope for displacing the engine appears to be an electric motor powered by an energy store such as chemical batteries or a hydrogen-powered fuel cell. What many forget is that electric vehicles had their chance—indeed, they were far more popular than gasoline-powered cars in the late 19th and early 20th centuries. They could go all day on a single charge and move a driver around a city with ease. They did not require a hand crank to start and did not have gears to shift, both of which made gas-powered vehicles of the day as user-friendly as heavy machinery.

Electric vehicles were more suited to the world of the 19th century than the 20th, however. Those early vehicles could go all day on one charge because speed limits were set between seven and 12 miles per hour to accommodate horse-drawn carriages. When those limits rose after World War I and travel between cities and towns became the norm, gasoline-powered vehicles began to dominate the auto market. Since then, automakers have invested untold billions in increasing the efficiency of the modern four-stroke engine.
Until electric cars can surpass the power and range afforded by gasoline-powered vehicles, expect the internal-combustion engine to continue its long reign.

Source of Information: Scientific American September 2009
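Carnot’s 1824 result can be written as η = 1 − T_cold/T_hot, with both temperatures in kelvins: the bigger the gap between source and sink, the more of the heat an ideal engine can turn into work. A minimal Python sketch of that arithmetic (the temperatures below are illustrative assumptions, not measurements from any particular engine):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum theoretical efficiency of a heat engine operating
    between a hot source and a cold sink (temperatures in kelvins)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# Assumed, round-number temperatures: hot combustion gases near 1200 K,
# ambient exhaust near 300 K.
print(f"{carnot_efficiency(1200.0, 300.0):.0%}")  # → 75%
```

Real engines fall well short of this ceiling because of friction and incomplete combustion, but the formula shows why the brief, intense heat of a spark-ignited air-fuel charge is such an effective source.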

Friday, February 5, 2010

Chocolate was a favorite drink of the Maya, the Aztecs and other Mesoamerican peoples long before the Spaniards “discovered” it and brought it back to Europe. Archaeological evidence suggests that chocolate has been consumed for at least 3,100 years and not just as food: the Maya and other pre-Columbian cultures offered cacao pods to the gods in a variety of rituals, including some that involved human sacrifice.

But it was an Irish Protestant man who had what might be the most pivotal idea in chocolate history. In the 1680s Hans Sloane, a physician and naturalist whose estate—a vast collection of books and natural specimens—kick-started the British Museum, was in service to the British governor of Jamaica, collecting human artifacts and documenting local plants and animals. Sloane realized that the bitter local chocolate beverage became much more palatable when mixed with milk. He later patented his invention. Although many had been enjoying chocolate made with hot water, Sloane’s version quickly became popular back in England and elsewhere in Europe. Milk also became a favorite addition to solid chocolate, and today around two thirds of Americans say they prefer milk chocolate to dark chocolate.

Chocolate’s positive health effects are by now well documented. Antioxidants such as polyphenols and flavonoids make up as much as 8 percent of a cacao bean’s dry weight, says Joe Vinson, a chemist at the University of Scranton. Antioxidants neutralize highly reactive molecules called free radicals that would otherwise damage cells. And it is not a coincidence that the cacao tree (and other antioxidant-rich plants such as coffee and tea) originated at low latitudes. “Things that have high levels of antioxidants tend to grow in places near the equator, with lots of sun,” Vinson says. The sun’s ultraviolet rays break up biological molecules into free radicals, and these plants may produce antioxidants to better endure the stress.

Although eating too much chocolate results in excessive calorie intake, human and animal studies have shown that moderate chocolate consumption can have beneficial effects on blood pressure, slow down atherosclerosis and lower “bad” cholesterol. Chocolate may also be good for the mind: a recent study in Norway found that elderly men consuming chocolate, wine or tea—all flavonoid-rich foods— performed better on cognitive tests.

Thursday, February 4, 2010

It emerged not with a quick flip of the switch but with a slow breaking of the dawn

In the book of Genesis, all God had to do was say the word. In modern cosmology, the creation of light took rather more effort. The familiar qualities of light—an electromagnetic wave, a stream of particles called photons, a source of information about the world—emerged in stages over the first millennia of cosmic history.

In the very earliest moments, electromagnetism did not operate as an independent force but was interwoven with the weak nuclear force that governs radioactive decay. Those combined electroweak forces produced a phenomenon recognizable as light, but more complicated. For instance, there was not one but two forms of ur-light, made up of particles known as B and W bosons. By 10⁻¹¹ second, the universe had cooled enough for electromagnetism to make a clean break from the weak force, and the bosons reconfigured themselves to give rise to photons.

The photons were thoroughly mixed in with material particles such as quarks. Together they formed an undifferentiated soup. Had you been alive, you would have seen a blinding, featureless glow all around you. Lacking color or brightness variations, it was as unilluminating as absolute darkness. The first objects with some internal structure did not emerge until 10 microseconds, when quarks agglomerated into protons and neutrons, and 10 milliseconds, when protons and neutrons began to form atomic nuclei. Only then did matter start to leave a distinctive imprint on light.

At about 380,000 years, the soup broke up and light streamed across space in more or less straight lines. At last it could illuminate objects and form images. As this primordial light dimmed and reddened, the universe passed through a gloomy period known as the Dark Ages. Finally, at an age of 300 million years or so, the first stars lit up and the universe became able to generate new light. In Genesis, light emerged before matter, but in physics, the two emerged together.

Wednesday, February 3, 2010

Two eyes positioned above a pair of nostrils that are themselves perched above a mouth—such is the layout of the face for vertebrate creatures ranging from sharks to humans. However well that arrangement may be optimized for finding and eating food, among mammals the face has taken on another critical role: communication. Nowhere is this function more apparent than in the human visage.

Primates in general have complex social lives, and they commonly use facial expressions in their interactions with one another. We humans have particularly expressive faces with which we convey such emotions as fear, happiness, sadness and anger. Researchers once chalked up the rich repertoire of human expressions to our having uniquely specialized facial muscles. But physical anthropologist Anne Burrows of Duquesne University has found that, in fact, the chimpanzee—the next most dramatic primate—differs little from humans in the musculature of its mug. Two features, though, do separate human facial expressions from those of the rest of the primate pack. First, we have distinctive sclerae, or whites, around our irises. Second, our lips protrude from our faces and are darker than the surrounding skin. These traits provide our countenances with strong visual contrasts that may well better telegraph our feelings.

Exactly when and how humans evolved such animated faces is unknown, but clues might be found in the fossilized skulls of our ancestors. Endocasts—casts of the impression the brain leaves on the interior of the skull—offer insights into the changing capabilities of brain regions over time. In 2000 paleoneurologist Dean Falk, now at Florida State University, led an analysis of endocasts from the ancient hominid Australopithecus africanus, which lived between three million and two million years ago. The results showed that parts of that creature’s anterior temporal region were larger than those of apes.
That enhancement might have made this human predecessor better at processing information about visages. If so, our propensity for making and reading faces may have very deep roots indeed.

Tuesday, February 2, 2010

Expletives may not only be an expression of agony but also a means to alleviate it

Bad language could be good for you, a new study shows. For the first time, psychologists have found that swearing may serve an important function in relieving pain. The study, published in the journal NeuroReport, measured how long college students could keep their hands immersed in cold water. During the exercise, they were told to repeat an expletive of their choice or to chant a neutral word. The 67 volunteers who cursed reported less pain and endured the iciness for about 40 seconds longer on average. “Swearing is such a common response to pain that there has to be an underlying reason why we do it,” says psychologist Richard Stephens of Keele University in England, who led the experiment. And indeed, the findings point to the possible benefit of pain reduction. “I would advise people, if they hurt themselves, to swear,” Stephens adds.

One of the first clues that swearing is more than mere language came from a 1965 brain surgery performed at the Omaha Veterans Administration Hospital in Nebraska. To eradicate a growing tumor in a 48-year-old man, doctors split his brain in half—slicing through a thick bridge of nerve fibers—and removed his entire cancer-ridden left hemisphere. When the patient awoke, he found his ability to speak had been devastated. He could utter only a few, isolated words with great effort—hardly surprising because language relies largely on the left half of the cortex. But as he realized his verbal shortcomings, he let out a perfect string of curses.

Other findings have since confirmed that people with left-hemisphere injuries that ruin speech may nonetheless maintain a first-rate arsenal of profanities. Conversely, a stroke in certain areas buried deep in the right hemisphere usually spares normal language but may leave the afflicted person unable to use meaningful swearwords. Although the details are murky, bad language seems to hinge on evolutionarily ancient brain circuitry with intimate ties to structures that process emotions. One such structure is the amygdala, an almond-shaped group of neurons that can trigger a fight-or-flight response in which our heart rate climbs and we become less sensitive to pain. Indeed, the Keele students’ heart rates rose when they swore, a finding Stephens says suggests that the amygdala was activated. That explanation is backed by other experts in the field.

Psychologist Steven Pinker of Harvard University, whose book The Stuff of Thought (Penguin, 2008) includes a detailed analysis of swearing, compared the situation with what happens in the brain of a cat that somebody accidentally sits on. “I suspect that swearing taps into a defensive reflex in which an animal that is suddenly injured or confined erupts in a furious struggle, accompanied by an angry vocalization, to startle and intimidate an attacker,” Pinker says. But cursing is more than just aggression, explains Timothy Jay, a psychologist at the Massachusetts College of Liberal Arts who has studied our use of profanities for the past 35 years. “It allows us to vent or express anger, joy, surprise, happiness,” Jay remarks. “It’s like the horn on your car—you can do a lot of things with it. It’s built into you.” There is a catch, though: the more we swear, the less emotionally potent the words become, Stephens cautions. And without emotion, all that is left of a swearword is the word itself, unlikely to soothe anyone’s pain. —Frederik Joelving

Monday, February 1, 2010

A new study shows a correlation between nausea and vomiting during pregnancy and the long-term neurocognitive development of the children. Pediatric researcher Irena Nulman and her team at the Hospital for Sick Children in Toronto found that youngsters whose mothers suffered from morning sickness during pregnancy scored higher on some cognitive tests than did those whose mothers did not start their pregnant days throwing up. All children tested within the normal range, however. One possible explanation could be differing hormone levels, Nulman says. According to one hypothesis, vomiting reduces caloric intake, decreasing insulin secretion. Low insulin, in turn, boosts levels of other hormones that are known to play a role in the development of a healthy placenta and a healthy blood supply to growing brains.
