
News this Week

Cash-Starved deCODE Is Looking For a Rescuer for Its Biobank

Jocelyn Kaiser

Human geneticists are keeping a close eye on the health of Iceland's deCODE Genetics. The genomics company announced earlier this month that it has enough cash to keep going for only a few more weeks. If it halts its research efforts, that would bring to an end a remarkably productive run in the search for genes underlying common diseases. The company's troubles are also raising questions about the future of its massive DNA biobank, which contains a wealth of genetic data on the Icelandic population.

deCODE says it is talking to potential academic as well as commercial partners as part of a restructuring. The Wellcome Trust, the giant U.K. biomedical research charity, is said to be in discussions with the company to take over support of the biobank. (Wellcome declined to comment.) But Iceland's data privacy laws and deCODE's agreements with clinicians could preclude transfer of the biobank outside the country.

deCODE and its CEO, geneticist Kári Stefánsson, drew controversy 11 years ago when they struck a deal with the Icelandic government under which the commercial company would mine the health records of all 270,000 Icelanders and link genomic data with medical information (Science, 1 January 1999, p. 13). That plan was later shot down by Iceland's courts for infringing on privacy. In the meantime, the company began building a DNA, genealogical, and medical database by recruiting volunteers—a conventional approach that “has never caused much tension,” says Jon Johannes Jonsson, a medical geneticist at the University of Iceland in Reykjavík who opposed deCODE's earlier proposal. The database now covers about 140,000 people, says deCODE. Geneticists say its combination of large size and detailed genealogical data is probably unique.

The biobank has been a boon for genomewide association studies (GWAS), which scan the entire genome for disease risk markers. Since these studies took off in 2006, deCODE researchers have netted many of the more than 500 markers found for diseases such as diabetes, heart disease, and cancer. “Their research agenda has been fabulous,” says Aravinda Chakravarti of Johns Hopkins University School of Medicine in Baltimore, Maryland.

deCODE's attempts to develop drugs from these findings haven't been a moneymaker, however. The company has been in serious financial trouble since last fall. On 10 August, deCODE announced that it had only $3.8 million in cash left and “sufficient resources to fund operations only into the latter half of the third quarter.” It plans to sell its medicinal chemistry, structural biology, and drug programs and focus on genetic tests.

deCODE spokesperson Edward Farmer says the company is “talking to a whole range of present and potential customers and partners from pharma to biotech to government and academic groups.” Included are “just about all of the big names in human genetics,” Farmer told Science by e-mail.

Some observers hope that a rescue by Wellcome, if it happens, would broaden access to deCODE's data. Although the company has collaborated with many outside groups, it has turned some down, and it does not make its data sets publicly available the way GWAS researchers now funded by Wellcome do.

Adding to the allure, deCODE's data “will become even more valuable,” predicts Jonsson. Gene hunters are now searching for rare risk variants by “deep sequencing” specific regions of DNA, an approach that requires studying families—which deCODE's biobank has in abundance.

A wholesale transfer of the biobank to the United Kingdom is not a sure thing, however. Icelanders' genetic data could be moved to another European Union country, but all personal identifiers would have to be removed or made anonymous, says Sigrún Jóhannesdóttir, director of the Icelandic Data Protection Authority. Another problem would be the use of biological samples and clinical data, which were collected by clinicians at Icelandic hospitals and institutes. Collaborative agreements that are part of deCODE's legal filings—as well as informed consent documents signed by the volunteers—generally state that DNA samples and medical data must be returned to the doctors when deCODE's studies are completed, notes Vilmundur Gudnason, director of the Icelandic Heart Association.

This means that any transfer would require agreements from deCODE's clinical partners and, potentially, modified informed consent agreements. One possible way to avoid such obstacles, some suggest, would be for the company to spin off an academic organization in Iceland with Wellcome Trust support. With deCODE's cash diminishing fast, time is running out to reach an agreement.

China

Confronting a Toxic Blowback From the Electronics Trade

Richard Stone

BEIJING—For Anna Leung, conducting research in Guiyu, a village in southern China where discarded computers and other electronics are stripped for their precious metals, was an assault on the senses. The acrid smell of circuit boards baking over coal fires and the stench of runoff from acid leaching were overpowering. More disturbing was the sight of children often helping their impoverished parents with the work. “We really feared for their health,” says Leung, an environmental scientist at Hong Kong Baptist University whose team found sky-high levels of toxicants in the air and soil.

Lately China has lurched from one toxic crisis to the next: Last year's major scandal was melamine in milk, whereas the latest is the revelation that hundreds of children were sickened by lead pollution from smelters in two cities. But e-waste processing, a burgeoning cottage industry in coastal parts of China, may end up dwarfing those incidents in severity and number of victims, scientists argued at a symposium on flame retardants here on 22 August. “The problem is just monumental,” says marine toxicologist Susan Shaw, director of the Marine Environmental Research Institute in Blue Hill, Maine. “There is extraordinary contamination of people, especially children, living in e-waste areas,” she says.

E-waste is not a new phenomenon: China has been accepting vast quantities of discarded televisions, computers, printers, and other equipment from abroad since the early 1990s. Since 2000, the central government has prohibited importation of e-waste, and a law passed last year requires e-waste processors to register with local governments and take steps to control pollution. In Guiyu, one of the biggest and most notorious processing sites in the world, banners declare that “Dealing in imported used electronics is an act of smuggling,” says Eddy Zeng, an organic geochemist at the Guangzhou Institute of Geochemistry. But because existing regulations are poorly enforced, he says, “Tremendous amounts of e-waste have been imported illegally,” such that China now processes 70% of the world's e-waste. Much of the broken or obsolete equipment piles up in coastal villages where residents—often migrants from poorer inland provinces—use crude methods to recover minute amounts of gold and other precious metals. One site, Longtang, is a surreal scene, says Zeng, where runoff from leaching turns streams a “very, very beautiful blue.”

The roster of substances liberated during e-waste processing is a toxicological nightmare: known carcinogens like dioxins and polycyclic aromatic hydrocarbons; neurotoxic elements like lead; and brominated fire retardants, including polybrominated diphenyl ethers (PBDEs), which have been shown to disrupt endocrine hormones in lab animals and wildlife. Zeng calculates that some 76,000 metric tons of PBDEs alone are released into the environment each year at e-waste sites in China. “This is a chemical time bomb,” he says.

Digital detritus.

Electronic waste accumulates along a riverbank in Guiyu, a world-class site of toxic residues.

CREDIT: COURTESY OF ANNA LEUNG

The Hong Kong team, led by Ming Wong, has undertaken pioneering work to track the fate of e-waste toxicants. Along a riverbank in Guiyu, for example, they found levels of PBDE that were thousands of times higher than those found in soil from a control site in the province. “More and more of these toxic chemicals are getting into the food supply,” says Arlene Blum, a biophysical chemist at the University of California, Berkeley.

Already there is evidence that they are ending up in people. Researchers have reported that PBDE blood levels in Guiyu residents are, on average, nearly 600 parts per billion. “These are the highest PBDE levels ever reported to date in people anywhere in the world,” says Shaw. The Guiyu levels are 10 times higher than average levels in the United States and more than 100 times higher than in Europe. At Guiyu, says Leung, “villagers don't take health precautions” such as wearing facemasks. The bottom line, says Tom Webster, an environmental scientist at the Boston University School of Public Health, is that e-waste sites “are extremely good opportunities for epidemiology.”
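The comparisons in the paragraph above are simple arithmetic; a quick sketch using only the figures quoted there (the ratios are the article's, not independent measurements):

```python
# Back-of-the-envelope check of the PBDE blood-level figures quoted above.
# All numbers come from the article; this is illustrative arithmetic only.

guiyu_avg_ppb = 600            # average PBDE blood level in Guiyu residents (ppb)
vs_us, vs_europe = 10, 100     # Guiyu levels vs. U.S. and European averages

implied_us_avg = guiyu_avg_ppb / vs_us          # implied U.S. average
implied_europe_avg = guiyu_avg_ppb / vs_europe  # implied European upper bound

print(f"Implied U.S. average:     ~{implied_us_avg:.0f} ppb")
print(f"Implied European average: <{implied_europe_avg:.0f} ppb")
```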

In the meantime, scientists have proposed several broad strategies for reducing e-waste here. One approach would be to incorporate fewer toxicants into electronics. Another would be to choke off e-waste imports. China receives up to 80% of the United States' obsolete computers, Zeng notes. “The U.S. is so generous in its contribution to China's environmental contamination,” Linda Birnbaum, director of the U.S. National Institute of Environmental Health Sciences, says sarcastically. “This is something we should be working on.”

But China shouldn't wait for other countries to act. One urgent priority is stricter enforcement of existing laws, Zeng and colleague Hong-Gang Ni argue in a 1 June viewpoint in Environmental Science & Technology. But “the problem is a lot of local agencies don't have the resources,” Zeng says. A more promising tack, he says, might be to convince municipalities that future cleanup costs will be much greater than income from processing. And when health costs are factored in, the damage will be enormous.

Engineering Education

Changes at Berkeley Raise Fears of Shrinking Commitment to Diversity

How can U.S. universities help more minority, women, and disadvantaged students become engineers? The University of California, Berkeley, one of the first institutions to take up the challenge, has long been a model for efforts to serve that population. But a decision by its engineering dean to overhaul programs serving underrepresented minorities has roiled the campus and triggered concern among diversity professionals throughout higher education.

“It's tough to watch because Berkeley has been a leader for so long,” says Telle Whitney, CEO of the Anita Borg Institute for Women and Technology in nearby Palo Alto.

The Multicultural Engineering Program at Berkeley began in 1981 and has evolved into the present-day Center for Underrepresented Engineering Students (CUES). Its combination of early intervention, mentoring, academic and career counseling, research experiences, and other activities has helped build a sense of community among this population—and a record of achievement. “These kids come in at one standard deviation below the average but manage to graduate at the same level and go to graduate school at even higher rates,” notes Stanley Prussin, a professor of nuclear engineering who oversaw the center in the late 1990s as an associate dean. “In other words, once they adjusted to Berkeley, they flew.”

Many elite schools around the country have emulated that approach, which Karl Reid, an engineer who led the Office of Minority Education at the Massachusetts Institute of Technology (MIT) before becoming a senior executive last fall with the United Negro College Fund, calls a “high-touch program.” But those efforts haven't improved the demographics of graduating classes in engineering over the past decade. For underrepresented minorities, it's remained flat at about 11%, whereas for women it's actually dropped from 21% to 18%.

At Berkeley, the numbers are even worse: This fall's freshman engineering class of 601 students will have 37 underrepresented minorities (defined in science as African Americans, Hispanics, American Indians, and Pacific Islanders). That 6.2% share compares with 10.4% of the 2005 class. Shankar Sastry, Berkeley's dean of engineering, is especially troubled that the yield—the percentage of students deciding to enroll in the fall after being accepted in the spring—among minorities is far lower in engineering than in any other unit on campus.
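The 6.2% figure above follows directly from the raw counts the article gives; a minimal sketch recomputing it (the 2005 share is quoted directly, since the article does not give that year's raw counts):

```python
# Recomputing the minority-share figures quoted in the paragraph above.
# Counts are from the article; this is illustrative arithmetic only.

freshman_class = 601     # size of this fall's freshman engineering class
underrepresented = 37    # underrepresented minorities in that class

share_2009 = underrepresented / freshman_class * 100
print(f"2009 share: {share_2009:.1f}%")   # ~6.2%, as the article states

share_2005 = 10.4        # quoted directly in the article
print(f"Drop since 2005: {share_2005 - share_2009:.1f} percentage points")
```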

So this summer, Sastry, who became dean in 2007, decided to take a different tack. Raise the quality of academic and career services offered to all students, he reasoned, and underrepresented groups will also benefit. Last month, he announced that CUES would merge with the college's student affairs office to form a new Engineering Student Services (ESS) office. The center's three-person staff, down from five in 2008, will lose their jobs, but ESS will be exempt from a campuswide hiring freeze and suffer no net loss of positions.

Getting a jump.

Berkeley has struggled in recent years to boost the number of incoming minority and women engineering students, some of whom participate in a 2-week summer “boot camp.”

Sastry, an Indian-born electrical engineer and computer scientist who joined the Berkeley faculty in 1983, calls CUES a “fantastic program that hit a high-water mark in the late '80s and early '90s. It's done well in offering personal attention to minority students, but it's become isolated” from the rest of student services, he says. Engineering students overall are unhappy with the current level of academic and career services, he adds.

Although Sastry has vowed repeatedly to continue CUES's activities and provide more resources, many students and faculty members are not convinced. He was peppered with questions at two town hall meetings held since his 15 July announcement, and participants came away dissatisfied with his answers. Sastry agrees that the college must respond to the “special needs” of underrepresented minorities but says those students also need to be “brought into the mainstream” by experiencing all the richness of a major research university.

The center's director, Michele de Coteau, says Sastry is blaming CUES for factors beyond its control. As a public institution, Berkeley is at a disadvantage in competing against elite private schools like MIT, Carnegie Mellon University, and Stanford University that can offer targeted scholarships and generous financial aid packages. There's also California's Proposition 209, passed in 1996, which prohibits state universities from considering race or gender in admissions. “Conflating yield and academic success just doesn't make sense,” she says.

De Coteau, a 1988 graduate of the college and a Rhodes scholar who says she benefited greatly from Berkeley's programs for minorities, argues that CUES's focus on community building has paid off. “Every student needs a support group to help them believe that they belong,” she says, “and to push them to do things that will foster their careers, like networking, summer internships, and scholarships for graduate school.” A recent analysis shows that CUES students over the past decade have graduated at the same rate as all engineering students and that 80% of them earn degrees in science or engineering disciplines.

Ruzena Bajcsy, a professor of electrical engineering and chair of a new task force created to advise ESS, believes that better recruitment and retention of underrepresented minorities are the keys to success. CUES's supporters say the center is already working hard on those issues and that dispersing responsibility among all ESS staff will dilute the college's efforts. And whereas some people who work on diversity issues see dissolving the center as a sign that Berkeley is backing away from its historic commitment, others don't want to jump to conclusions. “The optics don't look good,” says Reid, referring to the reduced visibility of what CUES had been doing. “But if the school can guarantee that it will continue these activities and provide more resources, then fine.”

Sequencing 40 Silkworm Genomes Unravels History of Cultivation

Ancient Chinese sought to protect the secrets of producing silk by executing anyone caught smuggling silkworms or their eggs out of the country. The threat worked for several millennia before the technique spread to Japan, Korea, the Middle East, and Europe. But now the Middle Kingdom is freely offering up silkworm secrets: In a paper published online this week by Science (www.sciencemag.org/cgi/content/abstract/1176620), a research team of primarily Chinese scientists reports on the use of modern genomic analyses to probe the domestication of the silkworm. Along the way, the group identified a host of genes likely affecting silk production, energy metabolism, and reproduction that could be exploited not only in culturing silkworms but also in raising other insects and possibly animals as well.

Size matters.

Scientists are on the trail of the genes that make the cocoons of domesticated silkworms (left) much larger than those of their wild cousins (right).

CREDIT: IMAGE COURTESY OF QINGYOU XIA

Archeological evidence points to China domesticating silkworms more than 5000 years ago. But the details are elusive and still debated. Was silkworm domestication a long, slow process that gradually emerged throughout the country and its variety of ecological backgrounds? Or was it a one-time event that took place quickly and in a limited geographic area?

Silkworm geneticist Zhonghuai Xiang of Southwest University in Chongqing and genomicist Jun Wang of Beijing Genomics Institute, Shenzhen, led the group bringing modern DNA sequencing to bear on those questions. They had a reference in the complete sequence of the domesticated silkworm Bombyx mori, which was published in Insect Biochemistry and Molecular Biology last December by the International Silkworm Genome Consortium. To supplement that data, the team sequenced the genomes of representatives from 29 diverse domesticated silkworm lines from around the world, as well as 11 wild silkworms (B. mandarina) collected within China.

The group studied differences among the 40 genomes, identifying close to 16 million single nucleotide polymorphisms (SNPs)—locations on chromosomes where a single base varies among individuals—and other genomic variations. The data analyses and demographic modeling indicate that the domesticated and wild species are now clearly genetically separate. There is very high genetic variability in the domestic lines, though less than among the wild silkworms. From this and other genome data, the group concludes that there must have been a single domestication event that took place over a relatively short period of time during which a large number of wild worms were collected for domestication. “Whether this event was in a single location or in a short period of time in several locations cannot be deciphered from the data,” Wang says. They also cannot narrow down the specific region of China where domestication may have occurred.
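The SNP tally described above comes from comparing aligned genomes position by position; a toy version of that comparison, using invented five-base sequences rather than real silkworm data, looks like:

```python
# Toy SNP scan: a position counts as a SNP if the aligned genomes do not
# all carry the same base there. Sequences are invented for illustration.

genomes = [
    "ACGTA",
    "ACGTG",
    "ATGTA",
]

length = len(genomes[0])
snp_positions = [
    i for i in range(length)
    if len({g[i] for g in genomes}) > 1   # more than one allele at this site
]

print(snp_positions)   # positions 1 and 4 vary across the three sequences
```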

Party animals.

Thousands of years of cultivation has made domesticated silkworms (left) tolerant of crowding and human handling and unable to fly when they are moths—traits their wild cousins (right) do not share.

CREDIT: IMAGES COURTESY OF QINGYOU XIA

Yutaka Banno, a geneticist at Kyushu University in Fukuoka, Japan, says the results fit in with other recent work on the relationship between the wild and the domesticated silkworms. “This paper will be accepted by almost all researchers,” he says. Still, he cautions that the group has not located the origin of silk farming with any precision. He points out that only 11 wild specimens were collected and that six of those were from one province, even though the wild silkworm is found throughout China and even into far eastern Russia. “A wider survey is needed,” he says.

The group also compared SNP variability and other genomic features to identify DNA sequences that had been subject to selection, ultimately identifying 354 genes possibly associated with domestication. Domestic silkworms have been selectively bred for cocoon size, growth and reproduction rates, and digestion efficiency; and Wang says some of the genes they have identified have enriched expression in the silk gland, midgut, and testis. Domesticated worms have also become tolerant of human handling and crowding while losing the ability to fly and any sense of the danger of predators, all of which renders them unable to survive in the wild. “I think it will be interesting to see whether there are any commonalities between the genes proposed to have been selected for in [silkworms] and whether any are similar to those in domesticated higher animals like livestock and birds, or even cats and dogs,” says Marian Goldsmith, a geneticist at the University of Rhode Island in Kingston.

Joining Forces to Pump Up a Variable Sun's Climate Effects

Richard A. Kerr

Plenty of past changes in Earth's climate have been pinned on an inconstant sun. Ups and downs in solar output may have triggered the Little Ice Age that gripped Europe several centuries ago, as well as droughts that brought down Chinese dynasties. But how could the slight variations scientists measure be behind such large climate events? Researchers now say two different parts of the atmosphere might be colluding to amplify the effects of even minuscule solar fluctuations.

But scientists warn against blaming every climate twitch—especially the global warming of the past half-century—on a variable sun. The evidence for a dual-pronged solar amplifier is still “on the ‘edge of significance,’” says climate modeler David Rind of NASA's Goddard Institute for Space Studies in New York City, who has modeled such sun-climate connections. And even if the amplifier exists, its climate leverage is still relatively puny.

The latest support for a sun-climate link comes from the first study to combine two leading mechanisms for converting solar variations into climate variations in the same model. Some models operate from the top down, beginning with the few-percent changes in the sun's ultraviolet radiation that occur during the 11-year cycle of solar activity and letting them induce changes in stratospheric ozone, temperature, and circulation. Those variations, in turn, affect the climate in the lower atmosphere.

Other models work from the bottom up via a mechanism that comes into play around the equator in the Pacific Ocean. Solar energy added during the peak of a solar cycle evaporates more water vapor from the ocean. Through a long chain of changes in atmospheric and oceanic circulation, the energy-laden water vapor eventually causes fewer clouds to form in the subtropics, so even more solar energy reaches the ocean. The result is a positive feedback loop that amplifies the climate effect of the solar variations.
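The positive feedback loop described above can be summarized with textbook feedback algebra (a sketch of the general mechanism, not the authors' model; the numbers are invented for illustration): if each pass around the loop returns a fraction f of the added energy, the contributions sum to a geometric series and the direct response is amplified by 1/(1 − f).

```python
# Standard positive-feedback gain: a direct forcing plus a loop that feeds
# back a fraction f of the response each cycle sums to a geometric series,
# forcing * (1 + f + f**2 + ...) = forcing / (1 - f).  The value of f here
# is invented for illustration; the article gives no number.

def amplified_response(direct_forcing, feedback_fraction):
    """Total response of a linear positive feedback loop (requires f < 1)."""
    assert 0 <= feedback_fraction < 1, "loop must converge"
    return direct_forcing / (1 - feedback_fraction)

print(amplified_response(0.1, 0.5))   # a 0.1-unit forcing doubled to 0.2
```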

Climate researchers had found that the bottom-up mechanism worked in standard climate models, which lack a realistic, ozone-laden stratosphere. The top-down mechanism produced climate effects in stratosphere models, which lack realistic ocean-atmosphere interactions. But neither mechanism alone could explain the magnitude of climate variations linked to recent solar cycles. So climate modeler Gerald Meehl of the National Center for Atmospheric Research in Boulder, Colorado, and colleagues replaced the atmosphere of NCAR's standard climate model with an atmosphere-only model that had a realistic stratosphere.

In the melded model, solar variations drove climate responses across the Pacific much like those seen during recent solar maxima, Meehl and colleagues report on page 1114. The eastern equatorial Pacific cooled about as observed, and changes in precipitation also roughly resembled observed changes (see figure). The two mechanisms reinforce each other by boosting the rising of tropical air that is driven by evaporation, Meehl explains. “That's the key commonality,” he says. “That amplifies things.”

Like much work in the long-controversial field of sun-climate relations, the new modeling is getting a cool reception. The study “is not nearly as conclusive as they would have it,” says Joanna Haigh of Imperial College London, who developed the top-down mechanism. Among additional critiques, she and others say the researchers ran the model too few times to give reliable results. “The atmosphere and oceans are a big coupled system,” she says, “but it's incredibly complicated.”

ScienceNOW.org

From Science's Online Daily News Site

Why We Walk in Circles

Adventure stories and horror movies ramp up the tension when hapless characters walk in circles. The Blair Witch Project, for example, wouldn't have been half as scary if those students had managed to walk in a straight line out of the forest. But is this navigation glitch real or just a handy plot device? A new study finds that people really do tend to walk in circles when they lack landmarks to guide them.

CREDIT: STEVEN CUMMER

Upward Lightning No Flash in the Pan

That annoying crackle on an AM radio station might not be due to a lightning strike hitting the ground. Researchers have discovered that bolts jumping from the tops of thunderclouds all the way to the ionosphere, some 90 kilometers above Earth's surface, can be just as powerful as conventional ground strikes, though they form more slowly. The findings suggest that thunderstorms can discharge electricity into the entire atmosphere, from Earth's surface to the edge of space.

A Pterosaur Comes In for a Landing

About 150 million years ago in what is today southwestern France, a flying reptile swooped onto a beach, landed on the damp sand, and walked off, maybe to find dinner. Now scientists have uncovered the fossilized tracks of this pterosaur, providing the first glimpse of how these flying creatures touched down. “It's a little Jurassic moment in time that got recorded,” says one researcher.

Where Did You Get Those Lovely Spirals?

Look at an image of the Milky Way galaxy, and you can't help but notice its exquisite spiral arms. For nearly 100 years, astronomers have tried to understand how the Milky Way and other spiral galaxies formed these dramatic patterns—and now they think they finally have the answer.

Warning: Don't Let Your Elders Brainwash You

Yudhijit Bhattacharjee

Catherine Cesarsky has nothing against scientific road maps. The outgoing president of the International Astronomical Union (IAU) has contributed to several, including last year's report from the European astronomy consortium, Astronet, laying out a 20-year plan for European astronomy.

But Cesarsky, a former director general of the European Southern Observatory (ESO) and currently France's high-commissioner for atomic energy, believes that such documents can also stifle the creativity of young scientists by forcing them down well-worn research paths. She laid out her concerns this month at the IAU meeting in Rio de Janeiro, Brazil, and elaborated on the dangers of such “bandwagon effects” in a conversation with Science. Her remarks have been edited for clarity.

Pioneer spirit.

Cesarsky says young astronomers should be guided as much by original thinking as by scientific road maps.

Q:Is there value in preparing these road maps?

C.C.:No doubt. By having a clear set of priorities and a clear rationale for them, we avoid killing each other's projects. That is particularly important in Europe, where different countries have competing interests. Astronet's road map also gives us an advantage in raising funds.

Q:What's the downside?

C.C.:You predefine what is important, what should be done, and how it should be done. I am worried that young scientists may be brainwashed as a result. It's like telling them, “Here is this primer, this cookbook—all you have to do to come up with a fundable proposal is read this well.” It can do a lot of good, but it may be quenching creativity from the start.

Q:Have you seen the problem occurring?

C.C.:I do see some of it when I serve on panels for fellowships and things like that. You get proposals that are clones of each other.

Q:What's an example of a bandwagon?

C.C.:Dark energy would be one. Space missions are being planned in the U.S. and Europe to learn more about it. A lot of people want to work on the problem, and many of them are thinking along similar lines. It may be that after we have done all these experiments, we may know as little about dark energy as we know now.

Q:Hasn't it always been true that senior researchers define a set of broad research questions and that young scientists start their careers by following those lines of inquiry?

C.C.:When I got my Ph.D. [in astronomy] in 1971, things were not so well organized. All the senior people had their individual ideas. Papers had very few authors. Not everybody knew what everybody else was doing. Even Ph.D. topics were chosen a little more individually. We were, nonetheless, influenced by personalities. When I was a postdoc at Caltech [California Institute of Technology], everybody used to pay attention to what Willie Fowler thought was important.

Now we do this community work. The documents are very well done. The Internet makes it very easy to find out what everybody is doing. Because road maps and plans are so detailed and comprehensive, it is difficult for individual scientists to come in and do better.

Q:How could the problem be solved?

C.C.:I think young scientists should guard themselves against brainwashing. They should look beyond the road maps, even if we put the best we know in them. Also, they should resist specializing too much at the cost of the big picture. The best way to escape [the] bandwagon effect is to look at things from a distance, to connect different ideas.

When I was in charge of ESO, I tried to encourage innovative projects under the director's discretionary time-allocation program. My expectation was that researchers would propose risky ideas that were completely new. Disappointingly, we got rather little of that. The program quickly became a way for scientists to add observation time to an existing project that would lead to a quick result and publication.

Q:What advice do you have for the committees that review proposals?

C.C.:Think again before you reject proposals that seem outlandish. When committees review ideas that go against the norm—such as pushing an instrument to its limits or trying out a new, untested method of observation—far too often they are quick to say, “Ha, that's undoable, forget it.”

Q:Did you try to change that at ESO?

C.C.:I asked committees there twice a year for 8 years to be more open to unconventional projects. I don't think it has worked.

Q:Why not?

C.C.:There are often one or two individuals on each committee who are willing to take risks, but others are not. It is very difficult for a committee to not go for the sure thing.

Q:What about attaching money to scientists instead of proposals?

C.C.:I wouldn't bet entirely on the person. I like the model the European Research Council is following in awarding its new fellowships. Half the points go to the person and half to the proposal. Grantees need not do the project exactly as they have proposed, which allows room to be creative.

Nuclear Transplantation

Researchers Prevent Inheritance of Faulty Mitochondria in Monkeys

Sam Kean

Fusing two lines of research into egg cells and embryos, Shoukhrat Mitalipov and colleagues at the Oregon Health and Science University near Portland have achieved a technical feat that could lead to new methods for preventing the inheritance of mitochondrial diseases. Mitochondria, the source of energy in animal cells, are inherited exclusively from mothers and run on their own DNA (mtDNA). In recent years, scientists have linked numerous cancers and brain diseases to malfunctioning mtDNA, which may affect up to one in 6000 people. In a paper published online by Nature on 26 August, Mitalipov's team describes their success in combining nuclear transplantation with in vitro fertilization to interrupt the inheritance of mitochondrial diseases in a monkey. “I knew we could correct this, just because of the way mitochondria are passed from one generation to another, through the egg,” Mitalipov says.

The goal was simple but difficult to execute. Mitalipov sought to extract healthy nuclear DNA from an egg with mutations in its mtDNA and transfer it into an enucleated donor egg cell with healthy mitochondria. But in mature egg cells, the nuclear membrane dissolves, which makes chromosomes invisible. To locate and remove the DNA, the Oregon team developed a technique to track the spindle proteins on which chromosomes float inside cells. In addition, to transport the fragile chromosomes into the target egg, they developed a receptacle called a karyoplast, a bubble of cytoplasm barely bigger than the naked chromosomes, to which they fused the chromosomes. In theory, the karyoplast was free of mutant mtDNA, so when they transplanted it, they produced a chimeric egg with the mother's chromosomes and healthy mitochondria from the donor.

The Nature paper is both a technical account and a glorified birth announcement: Mitalipov and his team described the successful fertilization of the manipulated egg with sperm from a selected father, its implantation into the mother, and the delivery of twin macaques, Mito and Tracker, born in late April. Two other macaque babies have followed. All appear healthy and hearty.

Stem cell researcher Jose Cibelli of Michigan State University in East Lansing was wowed: “I can only wish everybody could have such dexterity in the scope.” Still, Cibelli's enthusiasm was dampened by the prospect of translating this bench work to the clinic. “While they make it sound easy, I am afraid it will take some time to see it implemented—if ever—in a setting other than research.”

Mitalipov says he's optimistic about adapting the technology for humans because his team consciously “mirrored the techniques used in humans for IVF treatments.” He also feels the parallels with IVF and the fact that no embryos are destroyed will mitigate any ethical concerns.

Mitalipov says the work is technically germ-line gene therapy, a type of DNA manipulation that is very strictly regulated for human subjects. Questions also remain about whether the baby macaques will grow into healthy adults. The coarse sampling techniques Mitalipov used to look for mutant mtDNA could not eliminate the possibility that a few of the mother's faulty mitochondria were transplanted into the donor egg. Worse, because of the random distribution of mitochondria early in an embryo's development, this mutant mtDNA could end up concentrated in one or two organs. (Mitalipov's team did not sample tissues from all parts of the macaques, just a few.) Mitalipov says they must wait to see if the monkeys are viable adults before expanding their work to humans.

ScienceInsider

From the Science Policy Blog

Under intense pressure to save money, the University of California (UC) announced last month that all faculty and staff who receive even a fraction of their salaries from state funds will be furloughed. But researchers won't be furloughed on teaching days, according to a letter sent by UC's interim provost to faculty on 21 August.

The novel H1N1 virus is behaving unpredictably, U.S. health officials said at a press briefing 21 August. The virus has spread to turkeys in Chile and slowed its spread in the Southern Hemisphere. Meanwhile, drug companies are having difficulty growing the virus, which means that a vaccine will be in short supply this fall.

Research involving human embryonic stem cells will become easier in Japan as a result of new ethical review requirements that took effect 21 August.

The Public Library of Science has launched an “experimental” site for posting raw preprints of papers on hot topics. PLoS Currents (Beta) debuted last week with a set of papers on influenza.

National Institutes of Health Director Francis Collins has recruited a former aide to be his chief of staff. Kathy Hudson now runs the Genetics & Public Policy Center in Washington, D.C. She hopes to liaise with the FDA on overseeing genetic tests.

Top whale researchers are arguing that the U.S. National Oceanic and Atmospheric Administration should reexamine a major regulation designed to reduce collisions between ships and highly endangered North Atlantic right whales. NOAA says it is evaluating the regulation's efficacy.

More than two dozen pathologists have left the Armed Forces Institute of Pathology in Washington, D.C., to form a new company that will offer the same pathology consultation services as does the 150-year-old institute. AFIP is expected to close by 2011.

Going to the Dogs

Cognitive scientists once spurned the dog as too domesticated to study. But now many are leaping at the chance to use man's best friend to help understand how social cognition evolved.

Leading the pack.

Cat lover Ádám Miklósi works with dogs in his Budapest lab.

CREDIT: ENIKO KUBINYI

BUDAPEST—In 1994 when Ádám Miklósi, then a young ethologist at Eötvös Loránd University in Budapest, learned that his lab's director planned to switch the team's research from fish to dogs, he couldn't believe it. “‘My god, are you crazy?’ That's what I thought, although I didn't say it,” says Miklósi, who now directs the university's highly regarded dog cognition lab. “None of us were happy about this.” At the time, scientists who studied animal behavior and cognition were busy investigating a variety of species, including ants and dolphins, but they shunned dogs because they thought the animal's domestication, and the bond between human and dog (Canis familiaris), precluded objective study. In fact, lab director Vilmos Csányi's interest had been spurred by his admiration for Flip, a mixed-breed dog he had found in the woods and adopted. “He would tell us some crazy story about Flip and say, ‘Now, devise an experiment to find out why Flip can do that,’” says Miklósi.

Fifteen years later, in the wake of dozens of provocative studies from the Hungarian lab and a few others, dogs are fast becoming the it animal for evolutionary cognition research. Well-known cognitive scientist Marc Hauser of Harvard University announced in February that his lab was switching from cotton-top tamarin monkeys to dogs. Around the world, from Australia to Japan, other dog ethology and cognition labs are either fully under way or in the works. Last year, Miklósi's group held the first-ever dog cognition conference; a second is planned for July 2010 at the Clever Dog Lab and Wolf Science Center (WSC) at the University of Vienna. Last month, an issue of the journal Behavioural Processes was devoted to the dog, with 12 papers from behaviorists and ethologists analyzing everything from canine barking to dogs' guilty faces. And this week, researchers explore how dogs evolved their social smarts in a PLoS ONE paper comparing how young and mature dogs and wolves follow human cues. Even the shape of dogs' faces is being studied: Scientists in Miklósi's lab reported 24 July in the journal Behavioral and Brain Functions that dogs with rounder faces, such as pugs, are better at following people's cues than breeds with longer noses, indicating that we've selected puglike breeds not for their baby faces but for their ability to look us in the eye.

Our canine pals, researchers now say, are excellent subjects for studying the building blocks underlying mental abilities, particularly those involving social cognition. The special relationship with humans that once disqualified dogs from research is now seen as worthy of study in its own right; some researchers see the dog as a case of convergent evolution with humans because we share some similar behavioral traits. And because all dogs are descended from gray wolves (C. lupus), they can reveal how domestication has altered a species' mental processes, enabling the dog to survive in its new habitat, the human home. “They're a natural experiment,” says Josep Call, a comparative psychologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who studies cognition in dogs, wolves, and great apes.

Miklósi, Call, and others even argue that dogs may teach us more about the evolution of some aspects of our social mind than can our closest kin, the chimpanzee, because Fido is so adept at reading and responding to human communication cues. But not everyone agrees. “Completely wrong,” says Clive Wynne, a psychologist at the University of Florida, Gainesville, who argues that the skills dogs share with humans are a matter of learning rather than evolutionary change. But Wynne, who edited the special Behavioural Processes issue, agrees that dog ethology and cognition are hot fields. “There's no other species on the planet that triggers so many questions and debate, and no animal that we have a more intimate relationship with, than the dog.”

Real animals?

Dogs weren't always viewed this way. Although Charles Darwin and Ivan Pavlov considered dogs fine subjects for studying evolution and behavior, cognitive scientists and ethologists placed dogs in a kind of research purgatory for many years. In the 1970s, when cognitive ethologist Marc Bekoff of the University of Colorado, Boulder, launched a study of dog play behavior, many of his colleagues sniffed at the idea. They said “‘Why don't you study real animals?’” he recalls. Domestic dogs were considered artificial and therefore “not worthy of study”—a concern that many researchers still have. Domestication makes dogs “less useful for investigations on the evolution of cognition,” says Nicola Clayton, who studies bird behavior at the University of Cambridge, U.K., “because you can't look for the ecological effects, for what originally drove intelligence in dogs.” Also, “the investigator has no control over the nurture side: how owners feed, train, or treat their dogs,” says primatologist Frans de Waal of Emory University in Atlanta. That “introduces unknown variables we would normally not accept in animal behavior research.” Still, Clayton and de Waal support the move to dogs. When it comes to dogs, “the pros outweigh the cons,” says de Waal.

Researchers have also worried that dogs are too aware of human cues and so would defeat most experimenters as did Clever Hans, the horse once celebrated for tapping out answers to math problems—and later shown to be closely watching his owner for subtle clues to when he should stop tapping. To prevent this, dog ethologists carefully monitor testing methods, generally videotape experiments, and use various techniques, including blindfolding human testers, to make certain they are not inadvertently giving clues.

Dogs were also deemed to have another problem: Their brains are 25% smaller than those of their wild ancestors, wolves. “Most people thought—and some still do—that dogs were therefore not as smart as wolves,” says Call. But “brain size is not everything. … How did domestication change wolves into dogs? Is it similar to what happened to us? That's the kind of thing that dogs can help us investigate.”

And despite the concerns, dogs in fact have many advantages as cognitive research subjects, say Miklósi and others. Dogs are willing and cooperative, and they enjoy being with people and following their commands. The dog's genome has been mapped (Science, 21 September 2007, p. 1668), opening up the possibility of linking behavioral traits to specific genes. To cap it all off, dogs are much less expensive to study than most laboratory animals, largely because most dog labs follow the model of Miklósi's Family Dog Project and don't actually house dogs (which also helps keep animal-rights activists at bay). “All you need is an empty room for a dog lab,” says biological anthropologist Brian Hare, who is busy setting up such a room at Duke University in Durham, North Carolina. “Then you ask people if they'd like to have their dogs take part in a cognitive experiment. Everybody knows their dog is smart, and the next thing you know, you have 1000 dogs to test.”

“It's like Drosophila genetics,” adds Hauser. “Why stop at 1000; why not have 10,000? It's a huge change from simply studying 30 tamarins.” Thus dog research can be replicated—a tall order when working with some other animals. “Let's face it: No one is going to be able to replicate my bonobo studies,” says Hare, “because there isn't another population of 60 bonobos to test.”

Reading minds

With so many dog labs joining the pack, new and provocative findings are emerging. For example, ethologists have shown that dogs, even as puppies, can follow human pointing gestures to find hidden food, something difficult for chimpanzees to do (Science, 22 November 2002, p. 1634). The experiments suggest that dogs can read human intentions, implying that the two species share information via a complex form of communication. Even more controversially, the discovery suggests to some that dogs may understand what another being is thinking, a sophisticated talent called theory of mind that many researchers think even chimpanzees and preverbal human infants lack.

Raised by humans.

Wolf cubs raised like dog puppies (top) play with dogs and grow up to be as good as adult dogs at following human pointing cues in experiments.

Dogs can also imitate a human's actions on command (“Do as I do!”), much like children playing “Simon Says,” according to studies done by Miklósi's colleague József Topál and his team at the Institute for Psychology at the Hungarian Academy of Sciences in Budapest. That's an indication that dogs are strong social learners, something still debated for chimpanzees. “They've been selected for this sensitivity and to cooperate,” says Miklósi, who confesses to being a cat person and has never owned a dog. Dogs can also use computer touch-screens to demonstrate some glimmerings of abstract thought, such as the ability to form a concept. For example, dogs were able to select color photographs of other dogs instead of landscapes, choosing with a quick nose-touch to the screen. Pooches can follow human rules, a social skill that helps strengthen group bonds; they have a sense of fairness; and some canine whizzes—all Border collies so far—have vocabularies of several hundred names, suggesting an ability to learn words that some say resembles that of infant children. Also like human toddlers, dogs willingly imitate another's actions, even if these don't always make sense, an ability that makes it easy to learn from others. In short, dogs are skilled at cognitive tasks, especially social tasks requiring cooperation and sharing information to achieve a goal.

Sometimes these advanced social skills lead dogs to behaviors and emotions that seem very humanlike. But are they? Take the “guilty look” every dog owner knows. While cameras rolled, Alexandra Horowitz, an animal cognition researcher at Barnard College in New York City, had 14 owners show their dogs a tasty treat and tell them not to eat it. But when the owner left the room, Horowitz either gave the dog the forbidden food or removed it, making it impossible for the pooch to misbehave. Later, some owners were told that their dogs ate the treat, even when they had not. “It didn't matter if the dogs had done the right thing. If they were scolded, they showed that guilty look,” slinking away with their heads drooping and ears flattened, says Horowitz, whose work is published in the July issue of Behavioural Processes. “And the dogs who showed the most ‘guilt’ were the ones that hadn't disobeyed. We've trained them to give us that look” in response to our anger. “The guilty look is not necessarily a reflection of anything they have done,” she says. Adds Michael Tomasello, a comparative cognition researcher also at the Max Planck Institute in Leipzig: “Guilt involves a moral dimension, which dogs very likely don't have. But by showing guilt, we may lessen our punishment. What people see in dogs is the anticipation of punishment.” In humans, says Tomasello, that anticipation is regarded as a “precursor to guilt.”

Indeed, some researchers think that the evolutionary steps gray wolves took to live in human society may in some ways mirror the transition from ape to human. The loss of fear and aggression toward others and the emphasis on cooperation rather than competition are steps that early hominids must also have taken as they organized into groups, although for dogs the change also required evolving a deep attachment to an entirely different species. “Dogs have lived with us for at least 10,000 years, and they've evolved to fit into our social world, so they've developed some social and behavioral skills strikingly like ours. It's a case of convergent coevolution,” says Call.

Wynne's having none of the coevolution argument, however. “Dogs and wolves are still the same species,” he says. “They still interbreed, and so any changes in the dog's brain are a matter of degree rather than kind.” And he is skeptical of some of the claims regarding dogs' abilities, particularly those that suggest superior mind-reading skills as compared with wolves. “Sometimes I feel that I'm always running after other people, stomping out the fires they've started” with claims of dogs' advanced abilities, he says. He argues that the Border collies with fat vocabularies are merely “conditioned” and that their talent for picking up human sounds is not at all comparable to how children learn words. “If they were really like children, they'd be learning hundreds of words a day,” he says.

From wolf to dog

Despite Wynne's concerns, for many the evolutionary process that produced the dog is fertile ground for research. Most agree that domestication changed the dog dramatically, turning it into a creature who yearns to be with a species other than its own. Researchers hypothesize that this happened because humans selected dogs for their ability to socialize and to form strong attachments with people. Indeed, experiments in Miklósi's lab have shown that 4-month-old puppies in a choice test always preferred a human companion to a dog. Young wolves showed no preference. “Domestication changed” the dog brain, “making it more attuned to human social signals,” says Miklósi.

To test that idea, scientists have examined how well dogs and wolves can pay attention to human pointing cues and hold eye contact with people. (Wild wolves, like many wild mammals, are reluctant to make eye contact, perhaps because in many species staring is a threat.) Previous pointing studies had mixed results: In the original tests, dogs proved to be superior to wolves (and chimpanzees) in following a human's finger to a bucket that held a hidden food treat. Many researchers interpreted this as an indication of a genetic component to this skill, one favored by domestication. But later tests complicated the issue: In another study using a different method, wolves actually topped the hounds.

To clarify the question of just what wolves can do, Márta Gácsi of Miklósi's lab and her co-authors hand-raised gray wolves much as pet dogs are raised, as they report this week in PLoS ONE. Then they gave carefully designed pointing tests to wolves and dogs of different ages. Intriguingly, 8-week-old and 4.5-year-old wolves were as good as the dogs at following the human pointing cues to hidden food. But the 4-month-old wolves failed. “They were struggling and biting; they were just too busy doing other things,” says co-author Friederike Range of WSC. In addition, in contrast to the young dogs, the young wolves had trouble making eye contact with their humans. “They are on a different developmental path from dogs,” perhaps because ultimately dogs “must live in our world and obey our rules. They have to learn many of the same things that children learn,” says Range. And so it behooves dog puppies to look into their owners' eyes and pay attention.

“The study is a slam dunk,” says Hare, who says it shows “once again that dogs and wolves are different.” He and others say that the study reveals that dogs are born ready and willing to work with people, whereas wolves face a long learning curve before they can accept two-legged creatures as social partners. But because the wolves can learn over their lifetime to follow our cues, the study also points out how difficult it is to “unravel genetic factors” from socialization and training, notes de Waal.

“We have so many questions to ask,” says Miklósi. “What do dogs understand about verbal commands? How do they recognize their owners? What do they think of as significant in their environment? What does a dog understand about human relationships; what does he know about your state of mind? And then we should ask the wolves the same questions.”

On one recent day, dogs and their owners stopped by to join in an experiment at the Hungarian lab, investigating how dogs respond to certain growls, part of a larger study aimed at understanding how dogs' barks and growls have changed from those of wolves. Wolves bark, too, but only as warnings or to protest. In contrast, dogs bark for many reasons, says Miklósi. “They ‘invented’ barking,” as a means of communicating to us as much as to other dogs, and they “can modulate the frequency and pulse” to signal fear or that they are feeling lonely or playful. In a previous study, Péter Pongrácz in Miklósi's lab showed that humans can readily identify the differing barks of a lonely, fighting, or playing dog. “That means that a dog's bark is often directed at us to convey the dog's inner state,” Pongrácz says, an ability that has likely come about because dogs live with a talkative species.

As part of his ongoing, as-yet-unpublished study of dog-human communication, Tamás Faragó, one of Pongrácz's graduate students, covered a small cage with a cloth and placed a large, tempting bone close by. Kope, a bright-eyed Cairn terrier, trotted in off-leash with his owner, spied the bone, and made a beeline for it. But just as Kope reached the bone, Faragó played a recorded growl, and the dog froze in place. “That's a food-guarding growl,” Faragó whispered. “As soon as a dog hears that, he knows he better leave that bone alone.” When the unseen dog growled a second time, Kope, looking unnerved, ran halfway back to his owner, wagging his tail. He peered up at her face, then looked back over his shoulder at the bone. “He's asking for help,” Faragó explained. “‘Come on, Mommy, help me get that bone. Let's do it together.’”

That, say Miklósi and others, is what lies at the heart of dog cognition: their strong desire to work with and for us, and their ability to communicate with us without language. “That's the one thing all of us doing dog research must never forget,” says Tomasello. “Dogs are collaborating with us; they aren't doing this with other dogs.” Unraveling how that collaboration came about—how it turned wolves into dogs, and humans into dog lovers—is like a good bone: one well worth digging for.

Particle Physics

The Large Hadron Collider Redux: Hoping for a Long, Hard Slog

Adrian Cho

As they prepare to restart the Large Hadron Collider, accelerator physicists are confident that, instead of suffering a second catastrophic breakdown, the world's largest atom smasher will perform to the standards set by its predecessors—and give them lots of smaller headaches to struggle with.

In 1991, as accelerator physicists at the Deutsches Elektronen-Synchrotron (DESY) laboratory in Hamburg, Germany, turned on their new $650 million particle smasher, they ran into some technical glitches. Researchers found one problem in the circuits meant to protect the superconducting magnets at the heart of the 6.3-kilometer-long Hadron Electron Ring Accelerator (HERA) from destroying themselves in an emergency. But in trying to fix the problem, they made it worse, recalls Karl Hubert Mess, then a DESY staff member.

To diagnose the problem, Mess and colleagues disconnected part of the circuit. But that overloaded a key element called a cold diode. “We were stupid. We disconnected the protection” for the diode itself, Mess says with a sheepish smile. Researchers burned out two or three diodes before realizing their mistake, says Mess, now a staff member at the European particle physics laboratory, CERN, near Geneva, Switzerland. Replacing each one took a week, as it required warming part of the massive machine from near absolute zero to room temperature.

It's a cautionary tale for physicists awaiting the restart of CERN's Large Hadron Collider (LHC), the world's highest-energy atom smasher, which wrecked itself last fall just 9 days after it started up. Accelerator physicists at CERN and elsewhere doubt the LHC will suffer another catastrophic failure. But they expect researchers working on the 27-kilometer-long, $5.5 billion machine to run into smaller problems, like those that dogged DESY. And given the immense size of the LHC, fixing them may take much longer.

CERN researchers have modified the LHC to prevent a repeat of last year's fiasco, in which a splice in a superconducting electrical line between magnets melted, setting off a chain of mechanical failures (Science, 26 September 2008, p. 1753). Researchers have also found additional faults in the copper cladding around those splices (Science, 31 July, p. 522), so to ensure the junctions' safety, they will initially blast protons together at half the energy the machine was designed to achieve (Science, 14 August, p. 799). Challenges remain, but “CERN is incredibly powerful intellectually,” says Stephen Peggs of Brookhaven National Laboratory in Upton, New York. “Nobody in their right mind would bet against CERN.”

In fact, given the way it purred before breaking down, accelerator physicists generally expect the LHC to start smashing particles without much ado. However, they say it will be much harder to reach the machine's design goals—collisions at an energy seven times higher than has been achieved before and a 30-fold increase in the record for the rate of collisions, or “luminosity.” The three accelerators most like the LHC—HERA, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven, and the Tevatron Collider at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois—took years to reach their design goals. And some accelerator jocks wonder whether the LHC will ever reach its outsized specs.

“I'm confident that if they don't have any more of these electromechanical problems, then they'll get some usable luminosity and get up to a fairly high energy fairly rapidly,” says Fermilab's Peter Limon. “And then it's going to be a struggle to get to design luminosity and design energy.” Brookhaven's Michael Harrison, who directed the construction of RHIC, says “there will be a problem, and you'll solve it. And then there will be another problem. You fight your way up.”

An explosive mix of technologies

The LHC and its kin, known as superconducting synchrotrons, are circular raceways for subatomic particles. Accelerating particles to near light speed, such machines are ideal for smashing them together to make exotic states of matter, blast out new particles, or even search for new dimensions of space. In a synchrotron, electrically charged particles circulate through an evacuated beam pipe. Each time they pass through powerhouses called RF cavities, the particles receive a boost from oscillating electric fields within them (see diagram, p. 1068). As the particles gain energy, electromagnets called dipoles bend their trajectory to keep them from flying out of the machine. Meanwhile, magnets called quadrupoles focus the beam.

In a superconducting synchrotron, superconducting magnets replace conventional ones, which consist of copper wire wound around iron “pole pieces.” Conventional magnets suck up lots of power, and the magnetic properties of iron keep them from producing fields above 2 tesla—40,000 times Earth's field. Because current flows through superconducting magnets without resistance, they consume much less power. And they have no iron poles, so they can reach much higher fields—8.3 tesla in the case of the LHC magnets. “This makes a collider much smaller and saves a lot of moola,” Limon says. Compactness was key for the LHC because it had to fit into a tunnel dug for an earlier collider.
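The field comparisons in this passage are simple ratios. As a quick numerical sketch (the value for Earth's surface field, roughly 5 × 10⁻⁵ tesla, is an assumed typical figure, not one quoted in the article):

```python
# Field-strength ratios from the article, checked numerically.
earth_field_t = 5e-5   # assumed typical Earth surface field, in tesla (~0.5 gauss)
iron_limit_t = 2.0     # practical ceiling for iron-pole conventional magnets
lhc_dipole_t = 8.3     # LHC superconducting dipole field

# 2 tesla is indeed about 40,000 times Earth's field...
print(iron_limit_t / earth_field_t)  # 40000.0

# ...and the LHC dipoles run at roughly four times the iron limit.
print(lhc_dipole_t / iron_limit_t)   # 4.15
```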

Floor it.

RF cavities accelerate particles, and magnets guide them around the ring. The beam can be used to generate collisions within particle detectors.

CREDIT: MATTHEW TWOMBLY/SCIENCE

But the advantages come at a price. The superconducting alloy used in the magnets, niobium-titanium, carries current without resistance only at temperatures within 9 kelvin of absolute zero. So the hearts of the magnets must be bathed in frigid liquid helium, and an accelerator must have huge cryogenic plants to crank out the stuff. Moreover, if the magnets overheat—perhaps because stray particles hit them—they can suddenly lose their superconductivity, or “quench.” If they do, automated quench-protection systems must quickly shunt thousands of amps of current out of the magnets before they fry themselves.

The combination of cryogenics and accelerator technology paves the way for new modes of failure, as last fall's accident at the LHC demonstrated. Researchers were ramping up the current in a string of 154 of the machine's 1232 dipoles—each 15 meters long and weighing 35 metric tons—when a faulty electrical splice between two magnets melted, sending 9000 amps of current arcing through the machine.

Like a welder's torch, the current cut the tube that kept the line bathed in helium. The breach sent boiling liquid pouring into the magnets' evacuated casings at a rate of 20 kilograms per second, says Francesco Bertinelli, a mechanical engineer at CERN. Because researchers had not foreseen such a deluge, he says, the magnet casings had emergency relief valves designed to cope with only 2 kilograms per second. So a pressure wave blasted through the magnets, damaging 53 of them and ripping some from their moorings.

Some accelerator physicists say CERN researchers dropped the ball by not designing the LHC's quench-protection system to adequately monitor the tiny voltage across each splice, which spikes as a splice heats up. “It was an honest-to-God screw-up,” says Brookhaven's Harrison. CERN researchers have now installed a much better system. “You can ask why didn't we do it earlier,” Bertinelli says. “Well, … we should have.”

Troublesome loose ends

To some physicists, the LHC mishap is vaguely reminiscent of the travails of the first superconducting synchrotron, Fermilab's Tevatron. Conceived in the 1970s, the Tevatron was still being designed even as it was being built in the early 1980s, recalls Fermilab's Helen Edwards, who oversaw the machine's commissioning. “The Tevatron was peculiar in that the first sector of the machine was rebuilt a number of times before we got to a point where we could consider the design set,” she says. Nevertheless, the Tevatron came to life in 1983 fairly smoothly, providing a beam of protons to be fired into “fixed target” experiments.

After about a year, however, the connections between the Tevatron's 774 dipole magnets started exploding, recalls Fermilab's John Peoples Jr., who led the effort to turn the Tevatron into a collider. The explosions happened because the superconducting leads connecting one magnet to the next were supposed to be tied down, but many were not. When the current in the leads ramped to 4000 amps, magnetic forces made the stiff leads flex and eventually break. “By the time the quench-protection system figured out what was going on, the leads had already evaporated,” Peoples says.

After losing five magnets in 4 months, physicists shut down the Tevatron in July 1984 to secure the leads. The problems eventually reemerged, and workers repeated the repairs in 1988 and 1989. Still, the machine continued to progress. In September 1985, researchers circulated protons and antiprotons in opposite directions in the machine's single-beam pipe to make it a collider. After a yearlong shutdown, the Tevatron started providing collisions for experiments in February 1987, reaching its design luminosity a year later. Fermilab researchers benefited from their ability to warm up a part of the machine, replace a magnet, and cool it down in several days. That process takes far longer for the massive LHC. “At CERN, it's at least 2 months. That's a big deal,” Peoples says.

In contrast to the Tevatron, HERA never lost a magnet, and Brookhaven's RHIC hasn't so far. But both went through a lengthy start-up phase. In its first data run in 1992, HERA cranked out collisions between protons and electrons at only 0.14% of its design luminosity. That figure climbed to 10% a year later. “It took us until 1995 to get up to something close to design specs,” says Ferdinand Willeke, who oversaw the commissioning of HERA's superconducting proton ring and now works at Brookhaven.

RHIC came up more quickly. After a short trial run in 1999, it began smashing gold nuclei together for nuclear physics experiments in early 2000 and reached its design luminosity for brief periods by the end of 2001. Even so, of the 167 days RHIC ran in its first full year, only 38 were spent providing collisions for the four experiments the accelerator fed. The rest were given over to working on the accelerator itself.

The LHC's predecessors took different amounts of time to reach their energy goals. HERA, which shut down in 2007, came on at design energy; RHIC reached it in 2 years. Because of design changes, the Tevatron took data for 9 years at 90% of its design energy of 1 tera-electron volt per beam, or 2 TeV per collision. A major upgrade begun in 1996 and finished in 2001 pushed that up to 98% of design energy, or 1.96 TeV.

Getting (re)started

The LHC won't hit its design goals right away, CERN researchers acknowledge. It's built to blast protons together at 30 times the rate currently achieved by the Tevatron, and learning how to cram enough particles into the accelerator to do that will take time. “We're working on the assumption that we will get there in 5 years,” says CERN's Lyndon Evans, who directed the LHC's construction. The LHC will need 2 or 3 years to reach design energy, Evans says. If it does, it will accelerate each beam to 7 TeV for a total collision energy of 14 TeV—more than seven times the Tevatron's collision energy.
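The energy figures quoted here are easy to check. As a rough sanity check (the 1.96 TeV Tevatron figure comes from the passage above), the comparison works out like this:

```python
# Back-of-the-envelope check of the collision energies quoted in the article.
LHC_BEAM_TEV = 7.0             # LHC design energy per beam
TEVATRON_COLLISION_TEV = 1.96  # Tevatron total collision energy after its upgrade

# Two beams colliding head-on combine their energies.
lhc_collision_tev = 2 * LHC_BEAM_TEV
ratio = lhc_collision_tev / TEVATRON_COLLISION_TEV

print(lhc_collision_tev)   # 14.0
print(round(ratio, 2))     # 7.14 -- "more than seven times" checks out
```

The ratio of roughly 7.1 is consistent with the article's "more than seven times the Tevatron's collision energy."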

To start that long climb, physicists first have to get the LHC up and colliding particles. CERN officials plan to start that process in mid-November. First, they must inject protons from a lower-energy accelerator and coax them all the way around the LHC's two beam pipes, one for protons circulating in each direction, which run in parallel through the same magnets.

Researchers must gain control of the beam and increase the number of laps it will make until they can hold on to it indefinitely. Next, they must “capture” the beam with the oscillating electric fields produced in the RF cavities. Only then can they begin to accelerate the particles and increase their energy. Once they have two beams circulating stably at the desired energy—at 3.5 TeV to start with—physicists can steer them to collide in the centers of the four particle detectors spaced around the subterranean ring.

In fact, CERN researchers accomplished much of this work last fall. Between sending beams around both rings on the first try on 10 September and the accident 9 days later, they circulated beams for a total of just 60 hours. But in that short time, physicists managed to control one beam and catch it with the RF cavities. “The kind of things that took them a few days took us a lot longer, on the order of weeks,” says Brookhaven's Harrison. Given that virtuoso performance, he says, this year “it's conceivable that they could have collisions within a week.”

But others say that increasing the energy of the beams may be tricky. That's because the current in the dipole magnets has to ramp up at the same time, and as the magnets' power supplies pour on the juice, that very act induces small, hard-to-predict eddy currents. Those fleeting currents create additional fields that may disrupt the beams. To compensate for such “snapback,” the accelerator's control system must adjust special sextupole magnets at the same time. Evans says CERN's system is up to the task, but other researchers say the energy ramp is the thing to watch this winter.

Once CERN researchers achieve collisions, reaching design luminosity may be difficult. In part, that's because the LHC's full beam will contain so much energy—350 megajoules compared with the Tevatron's 1 megajoule—that if a tiny fraction hits a magnet, it will quench it. “The LHC luminosity is very challenging,” says Brookhaven's Wolfram Fischer. “They have really tried to push every beam parameter to the limit.”
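The 350-megajoule figure can be reproduced from the LHC's nominal fill parameters. The bunch count and bunch intensity below (2808 bunches of 1.15 × 10¹¹ protons, a commonly cited nominal fill) are assumptions not stated in the article; the sketch simply multiplies protons by energy per proton:

```python
# Rough estimate of the energy stored in one full LHC beam, assuming
# nominal fill parameters (not given in the article): 2808 bunches of
# 1.15e11 protons, each proton carrying 7 TeV.
EV_TO_J = 1.602e-19                      # joules per electron volt

protons = 2808 * 1.15e11                 # total protons in one beam
energy_per_proton_j = 7e12 * EV_TO_J     # 7 TeV expressed in joules
beam_energy_mj = protons * energy_per_proton_j / 1e6

print(round(beam_energy_mj))             # ~362 MJ, same order as the 350 MJ quoted
```

At the 5 TeV per beam planned for the first physics run, the same arithmetic gives a proportionally smaller, but still enormous, stored energy.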

Patience. Colliders often take years to meet their specs for producing data. (Source: Wolfram Fischer/Brookhaven National Laboratory)

Others question whether the LHC can reach its design energy. The stumbling block they foresee isn't the junctions between magnets, which will be fixed, probably in winter 2010, but the magnets themselves.

To go to high fields, a magnet must be trained—allowed to quench at lower fields. Magnets are supposed to remember their test-stand training, but dozens of LHC dipoles seem to have forgotten theirs. Retraining them in place is time-consuming, so the problem could limit the LHC's energy to about 6 TeV per beam, says Fermilab's Limon. “Personally, I wouldn't worry about it,” he says. “Six TeV is pretty good, and it's not like anybody is competing with them.”

Broken fingers and ice balls

As they work their way up, CERN researchers are bound to run into unpleasant surprises. For example, when physicists at Brookhaven turned on RHIC in 1999, they found that a piece of metal in an expansion joint in the beam pipe had bent into the path of the beam. Like commuters easing around a stalled car, researchers simply steered the beam around the obstructing “RF finger” until it could be removed later.

Researchers must constantly balance the need to get subsystems up and running against the risk of pushing too fast and creating more problems, says Fermilab's Edwards. “It's all judgment,” she says. “You're going to make mistakes for sure.”

At least early on, most of those judgments will be made in a blizzard of minor crises. Brookhaven's RHIC has never suffered a catastrophic failure; nevertheless, during its first year of running, scientists and technicians had to idle the machine and enter its tunnel to fix something more than once a day, Fischer says. Often, they had to restart a power supply. Sometimes Fischer and others had to melt ice balls that would form on poorly insulated parts of the machine.

The repairs made for hectic days and long night shifts in the accelerator's control room. “I didn't do much else,” Fischer says. “It was pretty much working, eating, sleeping.” Yet, he adds without hesitation, “It was some of the best time of my professional life.” Bringing the LHC to life will likely entail years of battling one little technical problem after another. Instead of blanching at the prospect, accelerator physicists relish the challenge.

Particle Physics

How to Make a Collider

Adrian Cho

A synchrotron circulates particles in one direction, so it takes some extra work to make one into a collider. One way is to set particles and antiparticles circulating in opposite directions within a single synchrotron, as is done with protons and antiprotons at the Tevatron at Fermi National Accelerator Laboratory in Batavia, Illinois. Another scheme that requires no antiparticles is to build in one tunnel two synchrotrons that send the same particles in opposite directions, as is done in the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in Upton, New York. The Large Hadron Collider at the European laboratory CERN near Geneva, Switzerland, is really two synchrotrons in one, with two beam pipes running through a single set of special “2-in-1” magnets. It will smash protons into protons.
