
News this Week

Blunt Talk From Science Chair

In an exclusive interview with Science, Science Committee Chair James Sensenbrenner warns that the outlook for R&D spending is precarious and that CERN and space-station agreements are in trouble

Representative James Sensenbrenner (R-WI) is the first to admit he is no science whiz. “The only D [grade] I got at my university was in biology,” recalls the 10-term federal legislator and attorney, who graduated with a degree in political science from Stanford University in 1965. “That was from Don Kennedy, who went on to become president.” Years later, when Kennedy was hauled before Congress to explain the university's misuse of government funds meant to pay for research overhead, “I mentioned to him that what goes around, comes around.”

What came around this January was Sensenbrenner's ascension to the chair of the House Science Committee after 16 years on the panel. Drawing on that experience, he has moved quickly to exercise leadership over a panel that oversees NASA, the National Science Foundation (NSF), the Department of Energy's civilian science programs, the National Institute of Standards and Technology, research at the Environmental Protection Agency, and the National Oceanic and Atmospheric Administration. In less than 5 months, he has transformed it from a den of partisan bickering into a smoothly operating committee. Proof of that harmony came last month, when Democrats and Republicans worked together and achieved House passage of several bills authorizing 1998 nonmedical civilian R&D spending.

No party animal. Sensenbrenner wins high marks for fairness from Democrats on his committee. [Photo: Sam Kittner]

But cooperation doesn't mean meekness. Sensenbrenner has not abandoned his trademark bluntness and tenacity, shaking up supporters of major science and technology efforts like the international space station and the European Large Hadron Collider (LHC) by raising uncomfortable questions. And despite his penny-pinching views on overall federal spending, Sensenbrenner has fought hard for a proposed 3% increase this year in the R&D budgets of the agencies under his jurisdiction.

It's not a position the congressman imagined holding when he came to Washington in 1979 after a decade in the Wisconsin legislature. Indeed, Republican leaders put him on the committee precisely because he had few R&D connections. “They wanted someone with an independent voice on science policy, who was not dragged down with a large concentration of employees or researchers directly dependent on federal appropriations,” he says. His district north and west of Milwaukee lacks any major research universities, aerospace contractors, or government labs—the sort of constituents that typically attract lawmakers to the panel.

The result, according to his colleagues and staff, is a straight-talking chair with few axes to grind, who despises pork-barrel politics and has little patience for the ideological turf battles that characterized the panel under his predecessor, Representative Robert Walker (R-PA). “He's firm but fair,” says Representative George Brown (D-CA), the ranking minority member of the panel and himself a former chair. “He has bent over backward to consult with us.” Sensenbrenner is also not the glad-hander and well-known figure that Walker was. He avoids breakfast meetings and receptions when he can. “He's not what you would call a glamorous member,” says Brown. “But he's well respected, and he's making an effort to soften his image as a loner.”

The 53-year-old congressman admits that he's still not totally at ease with the panel's constituency. “I give speeches and see their expressions of, ‘How does he know what he's talking about? He doesn't have any educational background or experience in what we do.’” To compensate, he has asked Representative Vern Ehlers (R-MI)—a former professor of physics at Calvin College in Michigan—to act as his liaison with researchers.

Yet, that lack of knowledge hasn't stopped him from demanding that taxpayers get their money's worth from big science. “He's almost obsessive in his desire to oversee large science programs,” says Brown. CERN chief Christopher Llewellyn Smith found that out after he suggested privately to Sensenbrenner last year that U.S. researchers might lose access to the Geneva high-energy physics facility if the United States failed to help build the LHC. “I told him Congress does not respond to such threats,” says the lawmaker. Since then, Sensenbrenner has pushed the Clinton Administration to renegotiate its $450 million LHC agreement with CERN (Science, 18 April, p. 347).

He also enjoys traveling overseas—sometimes at his own expense—to visit Russia or CERN to learn firsthand about a project and to pressure foreign science managers and politicians. And he is proud of a recent visit to Antarctica that reinforced an NSF plan to build a new, lower-cost South Pole station (Science, 21 March, p. 1732).

In his first in-depth interview since he became chair of the committee, Sensenbrenner spoke with Science editors last week in his office in the Rayburn House Office Building. What follows is an edited transcript.

Q: What effect will the balanced budget agreement have on R&D spending?

A: It is too early to say, but it's very important to make sure that the [$225 billion added to estimated revenues over the next 5 years] is real money. If it is not, discretionary programs are going to take an extremely vicious hit in the last 2 years of this agreement. Congress is not going to break its promise on the balanced budget—it will simply take it out of discretionary spending.

Q: How did you come up with your proposed 3% increase in R&D spending for 1998 for the agencies under your jurisdiction?

A: In March, the [House] Budget Committee staff was looking at a 5% cut from 1997 levels for scientific spending. That would have devastated science. The 3% level was an acceptable compromise between the freeze that some Republican members thought we could live with and the 7% increase that [Representative] George Brown and a Senate bill called for. It was realistic and it was salable.

Q: If the Senate fails yet again to pass any authorization bills for science agencies, is all your work for nothing?

A: We are, by far, the first committee to finish our authorization work. Mr. Brown and I are going to the Senate to tell the authorizers that it is vitally important that the bills be passed and sent to the president, lest we completely abdicate our responsibility in setting policy. I've already met with [Majority Leader Trent] Lott [R-MS], and he has expressed his desire to get authorization bills out, and I will meet with [Commerce Committee Chair John McCain (R-AZ) and William Frist (R-TN), chair of the panel's science subcommittee]. I think this can be done without tying up significant Senate floor time.

Q: Do R&D advocates have a strong enough voice in the White House?

A: I have a very high regard for [presidential science adviser] Jack Gibbons, [but] he has not been as assertive as he has been in the past. This is a White House problem.

Q: How do you intend to carry out your pledge to oversee agencies more thoroughly?

A: The Government Performance and Results Act goes into effect 30 September, and we will monitor each agency to make sure they're accomplishing their goals. This will be good for science, because it will very clearly show what the scientific agencies want to accomplish and will be able to measure how they are using taxpayers' dollars to accomplish that. The best way to avoid Golden Fleece awards and investigative reports is to speak in plain English, say what you're doing, and show the taxpayers you are doing that.

Q: What agencies need the most oversight?

A: Obviously, NASA and the Department of Energy (DOE) will. And we will continue to put the [Commerce Department's] Advanced Technology Program [ATP] on a glide path to better management.

Q: How supportive are you of ATP?

A: It has been woefully mismanaged. The program has got more money than it knows what to do with, but we all want to use tax dollars to leverage more private-sector dollars and therefore increase the total pot that is available for research. We're going to get this program into shape so it does not become a lightning rod for people who wish to point out fraud, waste, and abuse in federal programs. … I am not opposed to the philosophy behind ATP, [unless] it is government money replacing money that would come from the private sector.

Q: Has your pressure on the White House to ensure Russia meets its space-station obligations had an effect?

A: The money [to build station parts] is now flowing in Russia, although I don't know how much of that was the result of me being on their back almost continuously for the last year and a half. It's a step in the right direction, although it's not flowing in the amounts and according to the deadlines in the promises made by President Yeltsin last month.

My concern is that NASA moves the goalposts [on what the Russians are expected to do] every time there is a Russian failure. If NASA continues to be in denial, sooner or later the cost overruns will be enough to kill this program. And I think that would be a shame.

Q: Why are you so intransigent about the proposed LHC agreement with CERN?

A: Members of Congress repeatedly went to Europe for help on the Superconducting Super Collider (SSC). Former CERN Director-General Carlo Rubbia was usually in advance of the American delegations talking to European governments, saying, ‘Don't give the SSC a dime. If the SSC falls apart, the Americans will be back to help us build the LHC.’ My colleagues who got involved in that fiasco have not forgotten.

Q: Is a deal still possible?

A: I am going to try to broker a compromise, but a lot depends upon Europe—and whether the U.S. high-energy physics community realizes that the deal with CERN is a bad one. First, the Europeans keep on saying that if America does not do what Europe wants it to do, they will kick the American researchers out of CERN. We are subject to any change in policy of the CERN council. This should be a contractual agreement, so that U.S. researchers have unlimited access. Second, CERN is not making noises about kicking the Russians out. There needs to be symmetry in the treatment of the United States and Russia.

Third, CERN has a buy-American-last policy, and, fourth, they have not included contingency costs in the LHC price tag. I can imagine CERN approaching us in 3 or 4 years suggesting that we [help cover a cost overrun] lest they build a less capable machine. That will be a deal killer. Congress will withhold U.S. funding, I can guarantee that.

Q: Do you approve of DOE Secretary Federico Peña's recent decision to cancel the Associated Universities Inc. contract to run Brookhaven National Laboratory?

A: Yes. When there is a failure that impacts safety, then cancellation is a legitimate response. We plan oversight hearings on this next month or July.

Q: Does this signal that DOE labs need a major revamping?

A: Some—but not all—DOE labs have been fishing around for jobs to do following the end of the Cold War. What [NASA Administrator] Dan Goldin did in designating NASA centers as centers of excellence, to concentrate in particular areas, is something that ought to be applied to the DOE labs. If it isn't done that way, I can see Congress, in its move to balance the budget, starting to close DOE labs simply because there has been so much free-lancing to get more work as a way to keep people on the payroll.

Q: Why are you so skeptical of international projects? Are you a midwestern isolationist?

A: I am not a midwestern isolationist, but I want to make sure America gets a good deal with its international science arrangements. I support internationalizing projects like the space station because they can be too expensive for any single country. But it has to be a real partnership.

Q: Why does the biomedical field fare so much better in dollar terms?

A: Biomedical scientists have been more successful than civilian researchers in other areas because the Commerce Committee [which has jurisdiction over biomedical matters] is one of the exclusive committees. And, secondly, everybody wants to be healthier. Everyone wants the miracle cure for diseases that debilitate and kill. If I had my druthers, I would like to see all civilian research in the Science Committee, but I'm not asking for that. The Commerce Committee has always been one of the most powerful committees in the Congress.

Q: Are you enjoying the job?

A: Yes. I found it is a lot more work than I anticipated, but the types of people I have come in contact with are really awe-inspiring. [On a visit to CERN], my 15-year-old son got a power-physics lecture from [Nobel laureate and MIT physicist] Sam Ting. He's one of the few Nobel laureates I have come into contact with who is able to explain what he's doing in plain English.

AIDS

Ethics of AZT Studies in Poorer Countries Attacked

Jon Cohen

Tuskegee. Nazi experiments. The needless deaths of babies. The rhetoric certainly heated up at a congressional hearing on bioethics held on 8 May when the topic turned to U.S. government-funded studies in developing countries aimed at preventing the transmission of HIV from mothers to infants. After listening to criticisms of the studies by the Washington, D.C.-based Public Citizen's Health Research Group, Representative Christopher Shays (R-CT), chair of the Subcommittee on Human Resources, opined: “It does blow my mind.”

Public Citizen, a consumer-advocacy organization, has been waging a high-profile campaign in recent weeks to modify the trials, arguing that it is no less than mind blowing that they include as control subjects pregnant women who are given no treatment to prevent maternal transmission of HIV. But at the hearing, AIDS researchers and their sponsors vigorously defended the trials, which are under way in Africa, Thailand, and the Caribbean, testifying that they may answer critical questions for HIV-infected women in those countries. Several people called to testify also expressed dismay at the inflammatory rhetoric and the aura of an emerging crisis fostered by Public Citizen, noting that the studies were thoroughly debated before they were launched. As Harold Varmus, head of the National Institutes of Health (NIH), said to the subcommittee, “The issues that were raised by Public Citizen and brought to your attention are not new ones.”

Public Citizen's Health Research Group first weighed in on the trials at a press conference 3 weeks ago. The organization's head, Sidney Wolfe, branded nine U.S.-funded studies as “Tuskegee Part Two,” a reference to the infamous syphilis trials in which African-American men were denied effective treatment so researchers could observe the progression of the disease. Wolfe claimed that more than 1000 children in foreign countries whose mothers took part in the trials would needlessly be born with HIV infections.

The trials themselves grew out of a critical discovery more than 3 years ago. In February 1994, a large study of HIV-infected pregnant women in the United States and France, known as ACTG 076, found that an intensive course of treatment with the anti-HIV drug AZT could prevent maternal transmission of HIV nearly 70% of the time. Researchers quickly realized that the results would have little relevance in most developing countries, where the incidence of AIDS is rising the fastest. The reason is that most HIV-infected women in those countries cannot afford the treatment, which entails taking AZT during pregnancy, receiving an intravenous drip of the drug throughout labor, and feeding the infant AZT syrup for 6 weeks after birth.

This realization got many investigators interested in testing cheaper prevention strategies, such as shorter drug regimens, vitamin supplements, or HIV-antibody injections (Science, 4 August 1995, p. 624). At the time, researchers debated whether it would be ethical to incorporate into the studies a control group that would receive only a placebo.

According to a widely held ethical precept, people who volunteer to take part in clinical trials should be given, at the very least, the standard of care in their country. Proponents of placebo-controlled trials argued that if the standard of care was no treatment at all, the use of placebos would be ethically justified. At a World Health Organization meeting in June 1994, AIDS researchers from around the world agreed, recommending that “[p]lacebo-controlled trials offer the best option for a rapid and scientifically valid assessment” of alternatives to ACTG 076.

Wolfe and his co-worker, AIDS researcher Peter Lurie of the University of California, San Francisco, disagree vehemently. In a 22 April letter to Health and Human Services (HHS) Secretary Donna Shalala, they called such trials “blatantly unethical” and called for an investigation by HHS's inspector-general into how the trials received approval. “We are confident that you would not wish the reputation of your department to be stained with the blood of foreign infants,” they concluded.

In his testimony at the hearing, Lurie pointed to two trials in Thailand that he said illustrate the “inconsistencies and the lack of coordination” in this area: NIH is funding one trial that compares short treatments of AZT to a regimen similar to ACTG 076; the Centers for Disease Control and Prevention (CDC) in Atlanta is supporting a trial that compares similar short treatments to a placebo control. “How can that be?” asked Lurie. “The minute people go overseas, it's like they change their research ethics at the customs desk.”

Anne Willoughby, who heads the pediatric and adolescent AIDS branch at the National Institute of Child Health and Human Development, contends that the two studies—although not by design—actually “fit together.” Willoughby notes that the smaller and simpler CDC trial, which should end next year, addresses safety questions that the just-beginning, NIH-funded one does not. “In AIDS, we often think one study solves everything, and it doesn't,” says Willoughby. CDC director David Satcher also told the subcommittee that the trials Wolfe and Lurie are attacking were approved by independent review boards both in the United States and in the host country. He further stressed that he regards “respect” for the host country's desires as an essential ethical principle.

NIH's Varmus offered the subcommittee letters he has received from foreign and U.S. researchers blasting Public Citizen's arguments. One came from Edward Mbidde, chair of Uganda's AIDS Research Committee, who wrote that he read Public Citizen's arguments with “dismay and disbelief.” He described their attack as “patronizing” and said it reeked of “ethical imperialism.” Mbidde outlined a scenario in which a control group of women receiving the full ACTG 076 treatment would fare better than a group given an experimental treatment that could be widely used in his country. “Obviously, we would say those [experimental] treatments are inferior and therefore not recommended,” wrote Mbidde. But what if the treatments, when compared to no treatment at all, reduced transmission significantly? asked Mbidde. “The reaction and recommendations would be different!”

Shays concluded the hearing by saying there “will definitely be follow-up”—possibly in the form of another hearing.

Genomics

A Catalog of Cancer Genes at the Click of a Mouse

Elizabeth Pennisi

Webster Cavenee can't wait to start doing cancer research on the Web. Like thousands of other biologists, Cavenee, a cancer geneticist at the Ludwig Institute for Cancer Research in San Diego, has already come to depend on a burgeoning array of online databases to help search for new genes and understand what they do. But sometime next month, Cavenee and his colleagues will be able to take their computer queries to a new level, thanks to the Cancer Genome Anatomy Project (CGAP)—an ambitious effort that aims at nothing less than a complete catalog of all the genes expressed in cancer cells.

Starting with five big killers—breast, colon, lung, prostate, and ovarian cancers—CGAP will classify tumor genes by the type of cancer cell they came from and the degree of the cell's malignancy. “We want to achieve a comprehensive molecular characterization of cancer and precancerous cells,” says National Cancer Institute (NCI) molecular biologist Robert Strausberg, who is coordinating the CGAP project. With just a click of the mouse, researchers should be able to determine how gene expression changes as a cancer progresses and ultimately begin to understand how tumors arise in the first place—all for no charge. “CGAP will be a national resource that all can tap into via the World Wide Web,” says NCI director Richard Klausner. “It's a wonderful, farsighted thing to do,” says Cavenee. “It will accelerate cancer research beyond anything [else] they could have done.”

The project is the latest example of how a marriage between computer science and genetics—creating a brand-new field called “genomics”—is transforming whole areas of biology. It's also an example of fleet-footedness on the part of NCI. Klausner began talking about the project in mid-1996, even before some of the key technologies were in place. But they made their debut before the end of the year, and about the same time, Klausner got the go-ahead to fund CGAP (Science, 29 November 1996, p. 1456). NCI is putting $20 million into the effort this year, and it's also being supported by contributions from the National Library of Medicine's National Center for Biotechnology Information (NCBI), the Department of Energy (DOE), and several pharmaceutical companies.

And all that is just for starters. CGAP funds will also support the development of technologies for rapid analysis of gene activity patterns in the thousands of tumors likely to be seen in a hospital pathology lab. The hope is that, eventually, the tumor gene index and these new technologies will enable physicians to base a patient's diagnosis, prognosis, and treatment on the status of a particular tumor's genes and proteins rather than on its appearance under the microscope. “The notion that one can identify all the genes altered in a given cancer is very exciting,” comments cancer geneticist Louise Strong at the M. D. Anderson Cancer Center in Houston. “It should [provide] really good information [for] classifying tumors and identifying targets for therapy.”

Despite all this enthusiasm, however, this vision could take years to realize. Although various companies and academic researchers have begun developing methods for wholesale analysis of gene expression patterns, the methods have not yet been shown to work on anything near the large scale envisioned. And plans to expand the tumor gene index to include other cancers are still vague. A similar effort currently getting under way in Europe may, however, help out here (see sidebar). In time, NCI expects its Web site to be linked to the one the Europeans are planning.

Key techniques. The expectations are bold for a program that's still taking shape. But, as Klausner reported last month at the annual meeting of the American Association for Cancer Research, a team led by NCI pathologist Lance Liotta has already demonstrated the feasibility of constructing a gene index for one tumor: prostate cancer. The success of that work depended on two new techniques: one for pulling specific cells out of tumors, which are a heterogeneous mix of normal cells and cells in various stages of malignancy, and the other for analyzing the very tiny amounts of RNA present in the isolated groups of cells.

Cell removal. Laser-based dissection lifts small groups of cells (right) from a prostate tumor, leaving the rest of the tumor intact (left). [Image: CGAP]

The melange of cell types in a typical tumor has long been a problem for researchers analyzing gene expression in cancer cells. They can't get an accurate picture by looking at RNAs extracted from a whole tumor, because it will be a mix of molecules from all the different cell types. And standard methods for dissecting out individual cells are not only tedious but also unreliable except in the most experienced hands, says Ramon Parsons, a cancer geneticist at Columbia University College of Physicians and Surgeons in New York City. But last year, Liotta, Michael Emmert-Buck, also of NCI, and their colleagues developed a technique called laser capture microdissection that enables researchers to pick out just the cells they want to analyze in about one-tenth the time that standard microdissection requires (Science, 8 November 1996, p. 998).

The procedure starts with a thin slice of tumor tissue, placed on a glass microscope slide and covered with a transparent cap from a tiny vial, the underside of which is lined with a thin layer of plastic. A researcher simply scans the sample through a microscope to find a uniform group of cells and, when they are in the scope's cross hairs, squeezes a trigger, zapping that spot with a weak laser. The laser heats the plastic so it becomes sticky and adheres to the cells just underneath. When the cap is lifted from the sample, it pulls off the targeted cells with it. The rest of the sample remains intact, so by moving to a different section, a researcher can obtain cells in various stages of tumor development, all from one piece of tissue.

Emmert-Buck used this technique to collect four sets of cells—normal prostate tissue, cells just starting to transform, fully transformed cells, and invasive cancer cells—from a single frozen prostate tumor. Each set contained about 5000 cells.

The investigators' next challenge was to show that they could use these cells to generate libraries of complementary DNA—DNA from expressed genes—that accurately reflect the full complement of the cells' active genes. Making such libraries requires copying messenger RNA sequences into cDNAs that can then be cloned separately in bacteria. A big problem is that when there is too little RNA, the scarcest sequences can get lost. As a result, sequences of the more common messenger RNAs are overrepresented and others are underrepresented in the library.

Another method developed last year by Emmert-Buck, NCI molecular biologist David Krizman, and their colleagues seems to have solved that problem. It combines an efficient cloning technique with a few cycles of the polymerase chain reaction to increase the number of copies of each bit of sequence. When Krizman used this procedure on the cells Emmert-Buck obtained by laser capture microdissection, he found, he says, that “the diversity was really quite good [and] all the genes are kept in a good balance.” Genes common in the prostate, such as the one encoding prostate-specific antigen, were well represented in the library, as would be expected. Moreover, there appeared to be some 360 new genes not previously described.
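The representation problem that this amplification step addresses can be illustrated with a toy sampling model. The transcript names and abundances below are invented for illustration; they are not measured values from the prostate libraries.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical transcript pool: one very abundant message dominates,
# while 1000 distinct rare messages each appear only once.
pool = ["abundant"] * 9_000 + [f"rare_{i}" for i in range(1_000)]

# A small cDNA library effectively clones only a sample of the pool,
# so the scarcest sequences can be missed entirely.
library = Counter(random.sample(pool, 500))

distinct_rare = sum(1 for name in library if name.startswith("rare_"))
print(f"rare transcripts recovered: {distinct_rare} of 1000")
```

With only 500 clones drawn from 10,000 molecules, most of the rare transcripts never make it into the library at all, which is why an unbiased way to boost copy numbers before cloning matters for microdissected samples.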

Creating a tumor index. Emmert-Buck, Krizman, and their colleagues will spend the next few months making similar cDNA libraries for normal, precancerous, and cancerous cells from lung, colon, breast, and ovarian tumors, as well as from more prostate cancers. Robert Waterston's team at Washington University in St. Louis will then spend 6 months sequencing several thousand bits of cDNA called expressed sequence tags, or ESTs—unique sequences useful for identifying active genes—from each of these 45 libraries. After that, the task will fall to a yet-to-be-named NCI grantee.

As fast as these EST sequences come in, NCBI researchers will post them in GenBank where they can be accessed from the CGAP Web site along with data about both the tumors and the libraries. This information is not currently available for ESTs in GenBank's database. “We want to present the ESTs in the context of the biology,” says NCI's Strausberg.

A program developed at NCBI will enable any Web visitor to look at differences in the patterns of gene expression between any two libraries. Those differences could point researchers to genes important to the progression of cancer, or that could serve as markers of disease. “The hope is that what we'll find are patterns of gene expression that will define much better the subpopulations of cancers,” says NCI's Kenneth Katz.
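The kind of two-library comparison such a program would support can be sketched in a few lines. This is a minimal illustration, not the NCBI software; the gene names, EST counts, and fold-change threshold are all invented for the example.

```python
from collections import Counter

# Hypothetical EST counts from two libraries made from the same tissue:
# one normal, one invasive cancer.
normal = Counter({"PSA": 40, "GENE_A": 12, "GENE_B": 3})
invasive = Counter({"PSA": 35, "GENE_A": 2, "GENE_B": 30, "GENE_C": 9})

def expression_differences(lib1, lib2, min_fold=3.0):
    """Flag genes whose relative EST frequency differs by at least
    min_fold between two libraries (a crude proxy for a change in
    expression as a tumor progresses)."""
    n1, n2 = sum(lib1.values()), sum(lib2.values())
    flagged = {}
    for gene in set(lib1) | set(lib2):
        # Add-one smoothing so genes absent from one library still compare.
        f1 = (lib1.get(gene, 0) + 1) / (n1 + 1)
        f2 = (lib2.get(gene, 0) + 1) / (n2 + 1)
        ratio = max(f1, f2) / min(f1, f2)
        if ratio >= min_fold:
            flagged[gene] = round(ratio, 1)
    return flagged

print(expression_differences(normal, invasive))
```

In this toy comparison the constitutively expressed PSA-like gene is not flagged, while the genes whose relative frequency shifts sharply between the two libraries are, which is the sort of signal that could point to progression markers.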

That information might help clinicians identify diagnostic markers that can distinguish between normal and cancerous cells or aid in determining a patient's prognosis. Being able to predict a particular tumor's metastatic potential “would be of tremendous benefit,” says molecular biologist Claire Fraser of The Institute for Genomic Research in Rockville, Maryland. An oncologist could then tailor the follow-up radiation or chemotherapeutic treatment according to how likely the tumor was to have spread.

In addition to building EST libraries, CGAP investigators will start constructing libraries with longer stretches of cDNA that contain more of an active gene's coding region. These will come from several dozen different types of cancers. Because larger amounts of starting material are needed to generate libraries with longer cDNAs, the RNA will be extracted from bulk rather than microdissected samples. A researcher finding an EST that appears to be unique to, say, precancerous tissue can then look for a match among the longer sequences in these libraries and, upon finding one, possibly isolate the gene faster.

Toward these ends, the NCI is putting $4 million this year into the tumor gene index. Several companies are kicking in additional support for the generation of ESTs. And DOE is allocating $1 million to help set up the bacterial clones in which the ESTs and longer clones are maintained. The IMAGE consortium at Lawrence Livermore National Laboratory in California will make these clones available to any researcher who wants them.

Another $6 million of NCI funds is slated to be distributed to grantees to develop new technologies for analyzing many genes at a time or to improve upon techniques for generating long, potentially full-length cDNAs. Several companies are already working on either DNA chips or microarrays, which arrange many thousands of bits of DNA in a small space. Once these companies have automated the process and sufficiently expanded the numbers of bits of DNA squeezed into an array or onto a chip, these devices can be used for rapid screening of large numbers of cDNA samples to detect expressed genes, Strausberg says. Finally, NCI will spend about $10 million on grants for researchers to come up with ways to put these basic data to use in the clinic.

All this should quickly produce a rich Web site. “The potential for data mining is just great,” Emmert-Buck notes. And researchers not involved in CGAP couldn't agree more. “It will generate new directions for investigators,” says cancer geneticist Kenneth Kinzler of Johns Hopkins University School of Medicine. All that potential has Cavenee's research team “salivating,” he says. “We're checking the Internet daily to see if the [CGAP] home page has come up.”

Genomics

Europe's Cancer Genome Anatomy Project

Elizabeth Pennisi

Not long after the Cancer Genome Anatomy Project (CGAP) goes online next month (see main text), the plan is to have it hook up with a similar project now taking shape in Europe. Eleven European academic and clinical laboratories have teamed up to create the Cancer Gene Expression Program, which is also intended to study gene expression in cancer “in a very comprehensive manner,” says Charles Auffray, a cancer geneticist at the National Center for Scientific Research in Villejuif, France. Auffray and colleagues from Germany, Sweden, and the Netherlands will focus first on prostate, colorectal, breast, lung, skin, kidney, and pediatric brain cancers.

Once it raises the funding needed, the European team's first goal is to make libraries of full-length complementary DNAs (the DNA from active genes)—as opposed to shorter DNA fragments called expressed sequence tags, or ESTs—from these cancers. They will then select the subsets of sequences that appear to be specific to each of the tumor types. Each subset will be used to screen large numbers of that type of tumor to verify that indeed those genes are either aberrantly expressed (as in the case of oncogenes) or not expressed (as in tumor-suppressor genes) in that cancer, Auffray explains. He expects to make this information publicly available as quickly as possible.

The project is still in the planning stages, but already Auffray has been talking with CGAP folks about collaborating and linking the European project's Web site to CGAP's. At first glance, the projects seem to be trying to accomplish the same goals. But both are needed, says Auffray, because “there are so many genes, so many different cancer types, and so many stages.” He hopes even more teams will become involved in cataloging the cancer genes.

Environmental Science

Japan Starts to Carve Out Its Place in the World

Dennis Normile

TOKYO—Sitting among a blue-ribbon panel of scientists assembled recently in Washington, D.C., to discuss Japan's research efforts, U.S. Nobel laureate F. Sherwood Rowland asked his Japanese colleagues to imagine a research plane flying over the Sea of Japan to monitor atmospheric nitrogen oxide generated by China's growing fleet of automobiles. Although the pollutants directly affect the air over Japan, chances are the plane would be American, said Rowland, an atmospheric chemist, and the data would be analyzed by U.S. scientists. When, he asked pointedly, is Japan going to put more money into environmental science?

The panel, assembled by the Japan Society for the Promotion of Science, didn't dispute Rowland's premise that Japan is neglecting environmental research. And each member was quick to offer an explanation, although none was an environmental scientist. Existing environmental institutions are weak and their portfolios are not well defined, said neuroscientist Masao Ito, who heads the Frontier Research Program at the Institute of Physical and Chemical Research (RIKEN). It's hard to attract good scientists into the field, added his boss, physicist Akito Arima, president of RIKEN. Perhaps there are so many unanswered questions that it's hard to know where to begin, suggested Hirotaka Sugawara, director of the National Laboratory for High-Energy Physics (KEK).

Ironically, their criticism comes at the same time that the government is launching several new initiatives to bolster Japan's contribution to environmental science. The Environment Agency, which operates the National Institute of Environmental Studies, is forming a think tank to develop innovative policy approaches to sustainable development and global environmental problems. The Ministry of Education, Science, Sports, and Culture, which funds most university researchers, is drawing up plans for a new interuniversity institute to support interdisciplinary environmental studies. And the Science and Technology Agency (STA) recently announced a new, three-legged program—observation, research, and simulation—to understand and predict global change.

The panel's lack of familiarity with these initiatives, say environmental researchers, stems from the field's low ranking on the scientific totem pole. “There wasn't a single environmental representative on that panel,” fumed Keiji Higuchi, a hydrologist at Chubu University who serves on several international environmental committees. One contributing factor is an annual budget, roughly $920 million, that trails those of many other disciplines within Japan and lags far behind what many other countries spend on environmental research. Another is the fact that the community's efforts are often fragmented and rarely involve other disciplines. “Most Japanese scientists aren't aware of what's going on outside their own fields,” says Higuchi.

The STA's efforts, which are the most far-reaching of the new initiatives, are intended to combat these problems. The first thrust is to collect more data through increased observation. Last year, the STA-affiliated National Space Development Agency (NASDA) launched a $1 billion Advanced Earth Observing Satellite (ADEOS) that is already delivering a stream of data on ocean temperatures, greenhouse gas concentrations, and other oceanographic and atmospheric characteristics (Science, 23 August 1996, p. 1038). Three more remote-sensing satellites will be launched by 2000 to track tropical rainfall, monitor changes in land formations, and replace ADEOS with a newer model. On Earth, the world's largest oceanographic research vessel, the Mirai, this fall will join an already impressive fleet operated by the Japan Marine Science and Technology Center (JAMSTEC) to gather meteorological and oceanographic information (Science, 6 September 1996, p. 1341). “A lot of data are becoming available,” says Taroh Matsuno, a hydrologist at Hokkaido University in Sapporo.

The second prong of STA's activities is intended to boost the size of the community capable of analyzing this bounty. The Frontier Research System for Global Change Prediction expects to create 50 to 100 new research positions this year and more in future years. The researchers, who will be employed by NASDA and JAMSTEC and work at offices in the Tokyo area, will be on fixed-length contracts, and the entire program will be reviewed after 10 years. The project will focus initially on climate change, the hydrological cycle, global warming, and modeling.

Matsuno, who will head the program when it is formally established this fall, says the Frontier system will provide a welcome employment option for recent graduates who have had trouble finding research posts in environmental science. But this influx of entry-level scientists puts pressure on the program to find enough top-level scientists: “There aren't enough qualified people” for the principal-investigator and group-leader positions, he says.

One immediate source of talent is the overseas Japanese community. Syukuro Manabe, a renowned climate modeler who left Japan some 40 years ago for opportunities in the United States, will fill one of the principal investigators' slots after his retirement this fall from the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. Other higher level positions are likely to be filled by academic scientists dividing their time between their university and Frontier system duties.

It is also hoped that the Frontier program will tap the talents of foreign scientists through the creation of overseas centers, most likely at the University of Alaska, Fairbanks, and the University of Hawaii, Honolulu. The arrangement will also foster a bilateral U.S.-Japan agreement to work together on climate change research. Roger Lukas, a University of Hawaii climatologist who co-chairs a panel involved in negotiating the details, says the collaboration could provide additional resources to plug holes in current research efforts—for example, the interaction between the El Niño phenomenon and the Asian monsoon, and the related seasonal-to-decadal natural variability in the Asia-Pacific climate.

The STA's third initiative seeks a major advance in global climate modeling by harnessing the computer hardware and software expertise of six of its affiliated institutes and agencies. Its 4-year goal is a whole-Earth climate model capable of resolving variations on a scale of 10 kilometers. Present models use grids roughly 100 to 200 kilometers on a side, and Manabe admits that the plan is “very optimistic.” But Philip Jones, a climatologist at the University of East Anglia in the United Kingdom, says that level of resolution is needed for more accurate modeling of both local and global phenomena. “It would produce fantastic results if it were right,” he says.
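Why a 10-kilometer model is so ambitious can be seen with a standard back-of-the-envelope scaling argument (this is a generic estimate, not a figure from the STA plan): refining the horizontal grid multiplies the number of cells by the square of the refinement factor, and numerical stability (the CFL condition) forces the timestep to shrink in proportion as well.

```python
# Rough cost of refining a global climate grid, e.g. from ~100-km to ~10-km
# cells. This is a generic scaling sketch, not an STA figure: cells grow as
# the square of the refinement (two horizontal dimensions), and the CFL
# stability condition shrinks the allowable timestep by the same linear factor.

def relative_cost(coarse_km, fine_km):
    refinement = coarse_km / fine_km      # linear refinement factor
    grid_points = refinement ** 2         # cells in two horizontal dimensions
    timesteps = refinement                # CFL: dt scales with cell size
    return grid_points * timesteps

print(relative_cost(100, 10))   # -> 1000.0, i.e. ~1000x more work
```

By this reckoning, a tenfold finer grid demands roughly a thousandfold more computation, which is why the effort leans on the combined hardware and software resources of six institutes.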

Despite such ambitious goals, the STA program and the other initiatives are seen as only small steps toward making Japan a major player in environmental science. Matsuno says many more positions are needed for environmental researchers, and Manabe notes that much bigger increases are necessary to bring environmental research spending in line with other R&D programs. One important change, says Higuchi, would be a national framework to coordinate efforts among the various agencies. Such a framework, he says, would give the community greater clout domestically and raise its profile internationally. Armed with such information, Japanese scientists might even be able to give Rowland a good answer to his question.

Also see the transcript of the 1996 Japan Society for the Promotion of Science forum, “Science in Japan: Present and Future.”

Astronomy

Refitted Hubble Probes a Maelstrom

Gretchen Vogel

This signature of a black hole at the center of M84, a galaxy 50 million light-years away, is an early result from the Space Telescope Imaging Spectrograph (STIS), one of two new instruments installed aboard the Hubble Space Telescope during last February's servicing mission. STIS precisely mapped how light from stars and gas in a band crossing the galaxy's center (upper image) is Doppler-shifted by motion toward or away from Earth. The resulting image—“the best spectrum ever of a black hole,” says Hubble project scientist Ed Weiler—shows that light from gas and stars above the galaxy's center is shifted far to the blue end of the spectrum, while just below the center the light is shifted far to the red (lower image). The shifts imply that the gas is whirling around the galactic center at 400 kilometers per second, in the grip of a black hole with a mass of 300 million suns.
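The quoted mass follows from a simple Keplerian estimate, M ≈ v²r/G, applied to gas orbiting the galactic center. The sketch below checks the arithmetic; note that the 26-light-year orbital radius is an assumed value for where STIS sampled the gas, not a figure given in the text.

```python
# Keplerian mass check for the M84 black hole: M ~ v^2 * r / G.
# The orbital radius r is an assumption for illustration; the 400 km/s
# speed and ~300-million-sun mass are the figures quoted in the article.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LIGHT_YEAR = 9.461e15    # meters per light-year

v = 400e3                # gas orbital speed, m/s (from the STIS spectrum)
r = 26 * LIGHT_YEAR      # assumed radius of the sampled gas orbit

mass_suns = v**2 * r / G / M_SUN
print(f"{mass_suns:.1e} solar masses")   # ~3e8, matching the quoted figure
```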

Two cameras on the other new Hubble instrument, the Near Infrared Camera and Multi-Object Spectrometer (NICMOS), are also working fine, but a third has been pushed out of focus by an unexpected expansion of its solid nitrogen coolant. The coolant is also evaporating faster than planned, which could bleed as much as 2 years from NICMOS's planned 4-year lifetime.

Scientific Honors

New Members of the National Academy of Sciences

THE NATIONAL ACADEMY OF SCIENCES LAST MONTH ANNOUNCED THE ELECTION OF 54 MEN AND SIX WOMEN AS NEW MEMBERS. THE TOTAL NUMBER OF CURRENT ACTIVE MEMBERS IS NOW 1773. ACADEMY MEMBERS ALSO ELECTED 15 FOREIGN ASSOCIATES, BRINGING THE TOTAL TO 309.

NEWLY ELECTED MEMBERS AND THEIR AFFILIATIONS AT THE TIME OF ELECTION ARE:

Chemists Explore the Power of One

By tracking the behavior of individual molecules as they react, chemists are finding answers to long-standing questions ranging from how muscles contract to why light-emitting polymers burn out

SAN FRANCISCO—Look in any chemistry textbook, and you'll see an ideal version of molecular behavior, a far cry from what goes on in most research labs. In books, a basic chemical reaction is a transaction in which two single molecules create a third. In life, most chemists study the to and fro of trillions of molecules at the same time. That's fine for seeing the big picture of a reaction: what compound gets created and how long it usually takes. But chemists know that not all the untold numbers of molecules in a beakerful of solution react the same way, although the reasons behind this have long been shrouded in mystery.

Spotlighting the single life.

Bursts of fluorescence of different intensity indicate the location of individual proteins in a thin film.

H. P. LU AND S. XIE/PNNL AND L. XUN/WASHINGTON STATE UNIV.

In recent years, however, a handful of researchers around the globe have begun using lasers and other instruments to lift the veil on how individual molecules behave, making the textbook view of reactions a reality. Initially, such single-molecule sightings—such as spotting the light coming from a single fluorescent molecule or recording the electronic blip when one ion gives off an electron—were little more than a novelty, proof that such sensitive detection could be accomplished. No longer. “The field is really beginning to explode,” says Hansgeorg Schindler, a biophysicist at the University of Linz, in Austria. “Its usefulness cuts across all frontiers of science.”

By looking at the behavior of molecules one at a time, the new techniques, which often rely on tiny samples, lasers, and sophisticated light detectors, have an unparalleled ability to reveal the precise timing of molecular events, including how a lone molecule changes its shape during a chemical reaction. At the American Chemical Society meeting held here last month—in a symposium dubbed the “Woodstock” of single-molecule imaging—researchers presented a raft of new results showing how monitoring single molecules can get to the heart of intractable scientific problems ranging from why all molecules of an enzyme don't work at the same pace to what limits the lifetime of new computer-display polymers.

First sight

Simply being able to see single molecules isn't new. Researchers have long been able to detect the presence of massive molecules such as DNA and many polymers using conventional optical and electron microscopes, while the advent of the scanning tunneling microscope in the early 1980s allowed researchers to image individual atoms on surfaces. But spying small molecules buried within solids or in solution—the natural environment for biomolecules—remained exceedingly difficult.

That began to change in 1989, when William E. Moerner, now at the University of California, San Diego (UCSD), and his colleagues, then at IBM's Almaden Research Center, first used a laser to “see” single, small organic molecules trapped inside a transparent host crystal. They scattered molecules sparsely throughout the crystal, shone the laser through it, and measured the light absorbed by the individual molecules.

Today, researchers still use lasers and sensitive photon detectors, but instead of monitoring absorption, they typically trigger and then detect brief light flashes from fluorescent tracer molecules, which are often linked to other molecules like proteins. The flashes reveal more than just a molecule's presence: By tracking the fluorescence over time, researchers can infer changes in the molecule's structure and activity.

Tracking myosin's moves.

Optical traps and lasers allow researchers to measure how much force myosin exerts on actin—and how far it moves along the actin filament.

T. YANAGIDA/OSAKA UNIVERSITY

To view only one molecule at a time, experimenters typically confine a dilute solution of light-emitting molecules to an ultrasmall volume of liquid, for example by trapping the solution in a tiny, transparent capillary tube or in the matrix of a clear polymer gel. Then, they fire a laser at the sample, and zoom in with their microscope and photon detectors to capture any light emitted by molecules in the microscope's focal plane.

At the meeting, physical chemists Sunney Xie and H. Peter Lu of the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, used a variation on this experiment to watch—for the first time—as individual enzyme molecules carry out their reactions in real time. Their goal, in part, was to learn why similar enzyme molecules react at different rates.

This puzzle was studied 2 years ago by Ed Yeung and Quifang Xue of Iowa State University in Ames, who used a single-molecule setup to show that identical copies of enzymes can churn out fluorescent products at rates that vary by up to a factor of 4. They suggested that the enzyme molecules—although they all had the same chemical sequence—folded themselves into different three-dimensional conformations, thereby affecting their chemical reactivity.

Meanwhile, other researchers had suggested that shape changes in individual proteins over time could also affect reaction rates. Past measurements had shown that even nonreactive proteins can jump between several stable conformations, implying that these quiescent proteins are continuously shifting their shapes, and leaving chemists wondering whether the proteins are more active in one conformation than another. But large ensemble measurements—in which researchers shine a laser beam into a solution containing a vast number of molecules to trigger a reaction and then probe it with another beam a fraction of a second later—have never been able to witness this level of detail.

To do so, Xie and Lu looked at naturally fluorescent enzymes called flavoenzymes. When a flavoenzyme catalyzes a reaction, it gains an electron, which blocks its ability to emit light, and then later loses the electron again. So, an individual molecule blinks on, off, and on again with each full catalytic cycle—something Xie and Lu were able to watch using their single-molecule setup. They found that the flavoenzymes blinked as often as once a millisecond as the protein carried out reactions one after the other.
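The raw output of such an experiment is an intensity trace that toggles between bright and dark. A minimal sketch of how such a trace could be reduced to a catalytic count is below; the trace here is synthetic, invented for illustration, and the single threshold is a simplification of real photon-counting analysis.

```python
# Reducing a blinking fluorescence trace to a turnover count.
# The trace is synthetic (not real data): high values = bright (enzyme
# ready to react), low values = dark (enzyme holding an extra electron).
# Each off-to-on transition marks roughly one completed catalytic cycle.

def count_cycles(trace, threshold):
    """Count off->on transitions in an intensity trace."""
    on = [x > threshold for x in trace]
    return sum(1 for a, b in zip(on, on[1:]) if not a and b)

# synthetic intensity record from a single flavoenzyme
trace = [10, 9, 1, 2, 11, 10, 1, 9, 2, 10]
print(count_cycles(trace, 5))   # -> 3 catalytic cycles
```

Dividing such a count by the recording time gives the per-molecule reaction rate that ensemble measurements average away.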

Next, they put their flavoenzymes in a solution without any other reagents and shone a laser on single enzymes. Each molecule gave off a steady glow, indicating that it was a healthy enzyme ready to undergo a reaction. But the precise color of that glow continually shifted, likely reflecting subtle changes in the protein's shape, says Xie.

What's more, these colors fluctuated around an average value over the course of a millisecond to a second—roughly the same time scale as the enzymatic cycle. The similar tempos, says Xie, imply that these subtle conformational changes do in fact alter the protein's chemical reactivity. Such results are “very interesting,” says University of Illinois protein-folding expert Peter Wolynes, because they may also help researchers track how proteins cycle between different stable conformations.

Motor molecules at work

Other researchers at the meeting disclosed equally compelling results on the dynamics of individual proteins, including the ubiquitous molecules that power muscle contraction. Researchers led by Toshio Yanagida from Osaka University in Japan traced the contortions of myosin, a protein that is found in all eukaryotic cells and is at the heart of muscle contraction. Myosin converts a cell's chemical energy—adenosine triphosphate, or ATP—into mechanical force by lining up with others of its kind in strands that tug on neighboring filaments composed of the protein actin. In relaxed muscle fibers, these filaments overlap by a small amount. But during muscle contraction, myosin proteins bind to actin filaments, forcing the neighboring filaments past one another and increasing their overlap, thereby shortening the muscle.

A long-held model for this process holds that one ATP molecule reacts with one myosin molecule (a reaction known as ATP hydrolysis), causing the myosin to change shape and march down the actin filament by one step. And in 1994, Yanagida and his team became the first to spy on this process. By tracking individual fluorescent-labeled ATP molecules, they saw bursts of light as myosin molecules reacted with ATP and bound to actin. They then watched as these flashes marched down the actin filament, corresponding to steps taken by the myosin.

Meanwhile, the Osaka team and others had developed another laser-based technique to measure the minuscule force—just one piconewton—that an individual myosin molecule exerts on actin. At the ACS meeting, Yanagida reported combining these previous experiments into one—in the first simultaneous measurements of the chemical and mechanical behavior of a single molecule. And he arrived at some surprising findings.

The team attached plastic beads to the two ends of an actin filament and held them steady with a pair of laser beams (see diagram). Another laser then measured the nanometer-scale movements of one of the beads when a myosin molecule bound to the actin, providing a measure of the mechanical force exerted by the myosin. Meanwhile, the researchers used their fluorescence monitoring setup to confirm that they were seeing single myosin molecules attach and march down the actin filament. When tracked together, the researchers found that sometimes, a single ATP hydrolysis reaction caused myosin to move along the actin by two or three steps, exerting a piconewton of force at each step. “That's basically blasphemy for motor protein experimenters,” says Shimon Weiss, a single-molecule expert at the Lawrence Berkeley National Laboratory in California.
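In such optical-trap experiments, the force is not measured directly: the trap behaves like a spring, so force is inferred from the bead's displacement times the trap stiffness (Hooke's law). The stiffness and displacement values below are illustrative assumptions, not numbers reported by the Osaka group.

```python
# Inferring a piconewton-scale force from an optical trap: the trap acts
# as a linear spring, so F = k * x. The stiffness and displacement here
# are assumed illustrative values, not the Osaka group's calibration.

def trap_force_pn(stiffness_pn_per_nm, displacement_nm):
    """Force (pN) from trap stiffness (pN/nm) and bead displacement (nm)."""
    return stiffness_pn_per_nm * displacement_nm

# e.g. a 0.05 pN/nm trap and a 20-nm bead displacement
print(trap_force_pn(0.05, 20))   # -> 1.0 pN
```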

Indeed, if the finding holds up, it will likely revolutionize ideas about myosin's behavior. The multiple steps suggest, says Yanagida, that “the chemical energy driven by ATP hydrolysis is stored in the myosin protein and slowly released,” rather than being used all at once. Myosin, he says, may act like a spring that releases its energy in a series of small steps, perhaps as the protein moves through a series of conformations. Yanagida quickly acknowledges, however, that “there has been no evidence for that using conventional techniques.”

Few myosin specialists heard the talk at the chemistry meeting, and they say that whether it is accurate “is hard to know without seeing the full picture” of the group's as-yet-unpublished results, says Ron Vale, a motor protein specialist at the University of California, San Francisco. “But in theory, this is the definitive experiment,” he says, because it allows researchers to directly correlate the chemical and mechanical reactions taking place. Still, says Xie, the hypothesis of slow-acting myosin requires “extreme proof.”

Yanagida doesn't have that proof yet, but at the meeting he described still another single-molecule experiment that is at least the first step: He showed that after binding to ATP, myosin can indeed adopt several stable conformations. He and colleagues tagged separate parts of a myosin molecule with two different fluorescent groups, which emit distinct colors of light depending on their proximity to each other. Then, they blasted the protein with laser light and watched the resulting flashes of color. They were able to see that after myosin bound to ATP, the labeled regions ended up in several different positions relative to one another. “That is a suggestion the protein is really changing its [conformation],” says UCSD's Moerner. Next, the Osaka researchers must link these two results, showing that myosin winds its way through several conformations as it marches several steps down the actin filament.

Preventing burnout

Single-molecule experiments are opening a window on the workings of artificial molecules as well. For example, at the meeting, University of Minnesota researchers led by physical chemist Paul Barbara described new work that begins to explain why light-emitting polymers suffer burnout. In recent years, researchers have been struggling to make these polymers into flexible panels for computer displays, but the polymers' fast burnout has hampered commercialization (Science, 16 August 1996, p. 878).

Such polymers work by absorbing energy from lasers or an electric current and later reemitting it as photons of light. But the light output of the films can drop by as much as 50% in just a few minutes. Researchers haven't known why—whether 50% of the individual light-emitting molecules turn off completely, or whether all the molecules drop their light output by 50%. “There's no way to find that out except by looking at each molecule independently,” says Barbara.

So, his team did just that. They started by adding trace concentrations of a yellow light-emitting polymer known as poly-p-pyridylene-vinylene (PPyV) to a film of a nonemitting polymer. Then, they trained a pair of lasers on the film to excite light emission from the scattered PPyV molecules, and zoomed in with a microscope to watch the action unfold.

“What we found really surprised us,” says Barbara. The molecules neither immediately winked out nor dropped their emission slowly. Rather, individual polymer molecules blinked on and off thousands of times before going dark for good. Barbara theorizes that the incoming laser light triggers the blinking: By kicking an electron off the polymer, the light creates a defect in the chain that causes excess energy to be released as heat instead of light. If a free electron jumps back onto the polymer, the light switch is turned back on. Eventually, however, another unknown type of defect quashes the light emission. Whether such results will help chemists design longer lasting polymers is “still too early to say,” says Barbara. In any case, he adds, “it will give them some new things to think about.”

Polymer chemists won't be the only ones with new findings to ponder. Single-molecule experimenters are also exploring a host of other areas, including rapid DNA sequencing by detecting the subtle light-emission differences in the molecule's four bases, imaging proteins in cell membranes, and creating optical data-storage systems. These initial forays into the tiny world of single molecules show that life in the lab is not only catching up to the idealized view in chemistry textbooks; it is rewriting them.

Ecological Economics

Putting a Price Tag on Nature's Bounty

Wade Roush

How much is the world worth? A group of conservation-minded ecologists and economists has attempted to answer that question —bringing intangibles such as a livable environment into the world of economic costs and benefits—by putting a price tag on the “ecosystem services” daily provided pro bono by Mother Nature. Their ambitious appraisal covers environmental resources such as fresh water and soil, as well as processes such as climate regulation, crop pollination, and biological pest control. And their best estimate is that replacing these goods and services would cost $33 trillion per year—nearly twice the combined gross domestic product (GDP) of Earth's 194 nations.

But it's not the exact sum that matters, argue the 13 co-authors of the report, which appears in this week's issue of Nature. Rather, they say, societies need to overhaul their environmental and economic policies, for example, by taxing the loss of wetlands, to avoid facing a bill of this magnitude. Says lead author Robert Costanza, an ecologist who directs the Institute for Ecological Economics at the University of Maryland, “The big conclusion from the study is that environmental ‘externalities'”—economists' term for benefits from resources that belong to no one in particular and so are enjoyed for free—“are relatively huge. We should do something to account for them” in environmental regulations.

Some researchers welcome the report, calling it a corrective for what they consider a nearsighted assumption—that just because a resource is free, society can afford to use it inefficiently. “Having this number calls people's attention to the fact that ecosystem services are absolutely essential for human life, and that there's no price we could pay that would be enough” to replace them, says Stanford University economist Lawrence Goulder. Among the report's many critics, however, are those who say that extravagant valuations render the final estimate too high and others who consider the whole exercise pointless. “There is no debate about the need to protect resources,” says Jerry Taylor, director of natural resources studies at the Cato Institute, a Washington, D.C., think tank that has taken conservative positions on issues such as taxation. “The debate today is regarding how best to do that, and this kind of study doesn't enlighten us in any particular manner.”

The study took shape last year at a weeklong workshop held at the National Center for Ecological Analysis and Synthesis at the University of California, Santa Barbara, a National Science Foundation-sponsored institute dedicated to improving understanding of global ecosystems (Science, 17 January, p. 310). Costanza and a dozen colleagues from Brazil, the Netherlands, Sweden, and the United States first agreed on a list of 17 categories of goods and services provided by nature, including processes such as nitrogen fixation and resources such as crop varieties and plant-derived pharmaceuticals. They then partitioned Earth's surface into 16 specialized “biomes,” or environmental types, such as oceans, estuaries, and tropical forests (see table), and judged which services each biome provides.

Finally, they sifted through scores of published studies for estimates of the value per hectare of each service in each biome. Most of the studies measured either market prices, people's willingness to pay for improvements in the service, or the cost of replacing the service. For example, a 1981 study estimated that for each hectare of U.S. wetlands destroyed by development, the lost ability to soak up floodwaters increased annual flood damages by $3300 to $11,000. The group then tallied the lowest and highest estimates for each item, and concluded that all of the items put together were worth $16 trillion to $54 trillion per year, for an average of $33 trillion. For comparison, the U.S. GDP in 1996 was about $6.9 trillion.
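The tallying procedure itself is straightforward: per-hectare low and high value estimates for each biome, multiplied by the biome's area and summed. The sketch below illustrates the bookkeeping only; the biome areas and dollar values are made up for illustration and are not the study's data.

```python
# Sketch of the valuation bookkeeping: per-hectare low/high estimates
# times biome area, summed across biomes. All numbers below are invented
# for illustration; they are NOT the figures from the Nature study.

biomes = {
    # name: (area in billions of hectares, $/ha/yr low, $/ha/yr high)
    "open ocean":      (33.2, 100, 400),
    "tropical forest": (1.9, 1000, 3000),
    "wetlands":        (0.33, 3300, 11000),
}

low = sum(area * 1e9 * lo for area, lo, hi in biomes.values())
high = sum(area * 1e9 * hi for area, lo, hi in biomes.values())
midpoint = (low + high) / 2

print(f"${low/1e12:.1f}T to ${high/1e12:.1f}T, midpoint ${midpoint/1e12:.1f}T")
```

The study's $16 trillion to $54 trillion range, with its $33 trillion average, comes from exactly this kind of low/high tally over 17 services and 16 biomes.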

Pricing the biosphere is useful, Costanza says, because it dramatically illustrates that “there is a value [to natural systems], even if we aren't paying it in our normal transactions. … There is no free lunch.” But the prices Costanza's group assigned to many ecosystem services are too high, says David Pimentel, an ecologist at Cornell University in Ithaca, New York. In a similar study in press at the journal BioScience, Pimentel and co-authors use different categories of ecosystems and assign more conservative values to items such as seafood and estuaries. They pin the yearly benefits from the global ecosystem at just $3 trillion. Pimentel says both totals are “very large”—but adds that in his view, Costanza and colleagues “were giving some things much too high a value.”

And the main policy recommendation Costanza sees emerging from the study—a new tax on the depletion of natural capital such as wetlands—has its own foes. Because each ecosystem is different, a general usage tax “will lead to overprotection in some areas and underprotection in others,” argues the Cato Institute's Taylor.

Costanza admits his group's numbers are “back-of-the-envelope” estimates with large, built-in uncertainties, but says they are close enough to help set ecosystem usage taxes. Stanford's Goulder agrees. The new study itself, he says, is “an important service.”

Biorhythmicity

New Clues Found to Circadian Clocks—Including Mammals'

Marcia Barinaga

If you have ever struggled out of bed the morning after flying eastward across several time zones, you've felt what happens when you try to buck the strong rhythms kept by your internal clock. Exactly how those daily, or circadian, rhythms are generated is a mystery under intense study. Until recently, clock researchers had only three clock components to go on: two proteins from fruit flies and one from a bread mold. Studies of those proteins had yielded a model of how those organisms' clocks may work, but the whole field eagerly awaited a glimpse into the circadian clock of a mammal, to see if our clocks have a similar mechanism. Now, that waiting has been rewarded.

Winding the clock.

Positive gene activators (+), such as WC-1, WC-2, and possibly CLOCK, turn on genes that make proteins (-) needed for clock function. These proteins eventually move back to the nucleus and shut off their own genes.

ILLUSTRATION: K. SUTLIFF

In a pair of papers in today's issue of Cell (pp. 641 and 655), Joseph Takahashi and his colleagues at Northwestern University in Illinois report that they have cloned the first clock gene from mice. That discovery, combined with two new genes from the bread-mold clock—reported just 2 weeks ago in Science (2 May, p. 763) by a team led by Jennifer Loros and Jay Dunlap at Dartmouth Medical School—suggests that the mammalian clock indeed resembles those of simpler organisms. All three genes contain a feature known as a PAS domain, which is beginning to look like a common theme in clock proteins.

Besides providing what circadian-rhythm researcher Michael Rosbash of Brandeis University calls “a molecular link” between the mammalian clock and those of lower organisms, the trio of new genes also adds to the growing picture of how all these clocks may work. The three previously reported clock genes all code for proteins that oscillate in abundance, regulating their own expression by building up over a 24-hour period until they turn their genes off and start the cycle over. The new mouse and bread-mold genes may represent a new component equally essential to the clock mechanism: a protein that drives production of the oscillating proteins, in essence keeping the clock ticking.

The Loros team found that the genes that it is studying, known as white collar-1 and -2 (wc-1 and -2), turn on the transcription of frequency (frq), which codes for an oscillating protein in the bread mold Neurospora crassa. The Takahashi team's gene, called Clock, also appears to be a transcriptional activator, although its target isn't known. “This was almost the best possible result we could have imagined,” says Takahashi. The Clock gene “has all these motifs that allow you to figure out what class of protein it is.”

Until now, research on mammalian clocks had gone slowly. The fruit-fly clock gene, period (per), was discovered more than 20 years ago, followed by Neurospora's frq in 1978 and the second fruit-fly clock gene, timeless (tim), in 1994. All of those genes had been cloned. But efforts to clone mammalian clock genes based on similarity to those genes had failed, spurring Takahashi to begin searching from scratch for clock genes in mice.

His group, including Fred Turek and Lawrence Pinto, also at Northwestern, screened mice in which mutations had been chemically induced by William Dove at the University of Wisconsin, Madison. The researchers looked for animals that showed a disturbance in their daily patterns of running on their exercise wheels. When kept in total darkness, a mouse with a normal clock keeps a precise 23.7-hour cycle of alternating rest and running. Takahashi's team found a mutation that stretched that cycle to 25 hours in mice with one copy of the mutant gene, and to 27 to 28 hours in animals with two copies. Moreover, the animals carrying two mutant copies completely lost their rhythmicity after 2 weeks in the dark, implying that a component of their internal clocks was broken.
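
In practice, a mouse's free-running period is read off from such wheel-running records. A minimal sketch of that bookkeeping, using invented onset times rather than the team's actual data: regress each day's activity onset against the day number, and the slope is the period.

```python
# Estimating a free-running period from wheel-running records: the slope
# of activity onset (hours since entering constant darkness) against day
# number is the period.  The onset times below are invented for
# illustration, not the team's data.

def estimate_period(onsets_h):
    """Least-squares slope of onset time (hours) against day number."""
    n = len(onsets_h)
    mean_d = (n - 1) / 2.0                      # mean of day numbers 0..n-1
    mean_o = sum(onsets_h) / n
    num = sum((d - mean_d) * (o - mean_o) for d, o in enumerate(onsets_h))
    den = sum((d - mean_d) ** 2 for d in range(n))
    return num / den

wild_type = [day * 23.7 for day in range(10)]   # onsets drift 0.3 h earlier daily
mutant = [day * 27.7 for day in range(10)]      # onsets drift ~4 h later daily

print(round(estimate_period(wild_type), 1))     # 23.7
print(round(estimate_period(mutant), 1))        # 27.7
```

With noisy real records, the same regression still recovers the period; the fit simply becomes approximate rather than exact.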

Team members located the responsible gene using two different approaches. They mapped it to chromosome 5, and after narrowing its location by using genetic tricks, they found two genes in that region. Sequencing then revealed a mutation in one, suggesting that it was Clock. In an independent effort, the researchers introduced into the mutant mice progressively smaller pieces of DNA from the large region known to contain the gene—seeking the smallest piece that would correct the mutation and restore a normal rhythm.

Both approaches zeroed in on the same gene; when the team sequenced it, they found that it codes for a protein that has PAS domains—a motif first identified in PER, and subsequently found in WC-1 and WC-2. “PAS now appears to be a signature motif that is cropping up in clock genes,” says clock researcher Steve Kay of the Scripps Research Institute in La Jolla, California.

But the strongest evidence that the CLOCK protein plays a central role in the mouse clock, says Harvard Medical School researcher Charles Weitz, is the Takahashi team's finding that loading up mice with extra copies of the normal gene speeded up their circadian rhythms. Based on that, “I am really positive about CLOCK being a central part of the oscillator,” adds Kay. One of the hallmarks of a clock protein, he explains, is that “the more you build [it] up, the faster you go around the loop.”

Clues as to how the CLOCK protein may take part in the oscillator come from its structure, which bears two hallmarks of proteins that regulate genes: a sequence known as a basic helix-loop-helix (bHLH), which enables a protein to pair up with another protein and bind to a gene, and a glutamine-rich domain specific to gene-activating proteins. The mutant form of the CLOCK protein is missing part of that glutamine-rich activating region, an indication that the region is essential to CLOCK's function.

The idea that CLOCK cooperates with other proteins to activate gene expression “fits nicely” with the group's previous genetic findings, says Weitz. Just one copy of the mutant form of Clock is enough to lengthen a mouse's daily rhythm—but the researchers found that if the copy is deleted, rather than mutated, the animal's clock runs normally. That means the mutant protein actually interferes with the normal protein's function. And the gene's structure suggests it may do so by competing with the normal protein for binding with its as-yet-unknown partner. A complex containing a mutant CLOCK protein might sit down on a gene but would not turn it on.

The idea that CLOCK regulates the expression of some other clock component also fits well with the general notion of how clocks seem to work, at least in Drosophila and Neurospora. PER, TIM, and FRQ are all proteins made in increasing amounts during the daily cycle, eventually reaching levels at which they feed back to shut off their own genes. That causes the levels of the proteins to drop, and the genes to turn back on. This oscillator, controlled by rising and falling protein levels, could be how the clock keeps time, but it also needs the equivalent of a power source: a gene activator that drives the expression of genes such as per, tim, and frq when the proteins aren't shutting them off. Such a gene driver might be part of the oscillator itself if its activity is repressed each day when the other proteins turn their genes off.
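
That feedback logic is easy to caricature in simulation. The sketch below is a generic delayed-negative-feedback loop with invented parameters, not a model of the actual per, tim, or frq kinetics; it shows only that a protein that represses its own synthesis after a lag settles into sustained cycles.

```python
# A generic delayed negative-feedback loop: protein level p represses its
# own production after a lag tau (standing in for transcription,
# translation, and nuclear re-entry).  All parameters are invented.

def simulate(hours=400.0, dt=0.05, tau=10.0, k=1.0, K=0.5, n=4, d=0.2):
    steps = int(hours / dt)
    lag = int(tau / dt)
    p = [0.1] * (lag + 1)              # history buffer for the delayed level
    for _ in range(steps):
        delayed = p[-lag - 1]
        production = k / (1.0 + (delayed / K) ** n)   # repressed synthesis
        p.append(p[-1] + dt * (production - d * p[-1]))
    return p

levels = simulate()
half = len(levels) // 2
# Count peaks in the second half of the run: a sustained rhythm, not a
# transient that decays to a steady level.
peaks = [i for i in range(half, len(levels) - 1)
         if levels[i - 1] < levels[i] >= levels[i + 1]]
print(len(peaks) >= 3, max(levels[half:]) - min(levels[half:]) > 0.05)
```

The qualitative point is the one in the text: delayed self-repression alone can sustain an oscillation, but the loop still needs an activator driving the repressed gene, which is the role proposed for CLOCK and the WC proteins.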

Loros and Dunlap's team at Dartmouth has shown that WC-1 and -2 drive the frq gene, and there is new evidence suggesting that a CLOCK-like protein may activate per in fruit flies. Paul Hardin's team at the University of Houston recently found that the per gene includes a DNA sequence known to serve as the binding site for a still-unidentified protein with a bHLH motif—the same motif found in CLOCK. That suggests, says Hardin, that there could be a fly counterpart of CLOCK driving per expression. That, in turn, has fueled speculation that CLOCK itself may drive a per-like gene in mice.

That's just one of the speculations fueled by the new results. There is also the issue of the PAS sequence, which has now been found in more than a dozen proteins, most of them transcription activators and some of them clock proteins. It has several apparent functions, and it isn't clear which of them is crucial in clocks. But its presence points to a possible evolutionary link, suggesting that clock mechanisms—which evolved to deal with daily light-dark cycles—may have arisen from light-responsive proteins in primitive organisms. Besides their role in Neurospora's clock, WC-1 and WC-2 are regulators of all light-responsive genes in the mold; moreover, a number of light-responsive proteins with no known clock function—found in algae, bacteria, and higher plants—have PAS-like sequences, suggesting they share a common ancestor with clock proteins (Science, 2 May, p. 753).

Beyond all this speculation, researchers are looking forward to using the new genes as an opening for testing their hypotheses. With Clock in hand, researchers finally have a handle on the mammalian clock; they can now search for other components by looking for proteins that CLOCK interacts with and genes that it activates. Takahashi notes that there is no proof yet that CLOCK is a central part of the oscillating mechanism of a mouse's timepiece; by checking whether CLOCK activity levels rise and fall, and how manipulations of it affect mouse rhythms, researchers will learn whether it is a key component. And perhaps one day, when the molecular parts of the mammalian clock have all been discovered, jet-lagged travelers will know which molecular knob to turn to reset it.

Malaria Research

How the Parasite Gets Its Food

Gretchen Vogel

Malaria is a notoriously tenacious infection. One reason is the Plasmodium parasite's ability to sequester itself inside red blood cells, where it is protected from attack by the immune system and many drugs. Once there, however, the parasite faces a problem common to fugitives: how to get food. Red blood cells, which are little more than sacks of hemoglobin, cannot provide all the nutrients Plasmodium needs. But new results are helping to explain exactly how the parasite imports sustenance from outside the cell.

Researchers have suspected for several years that Plasmodium acquires at least some of its nutrients through a complex series of membranous tubules and vesicles that it constructs throughout the red blood cell shortly after taking up residence there. But while the structure of this network suggested that it might be a transport system, direct evidence for that had been hard to come by—until now, that is.

In work described on page 1122, biochemists Kasturi Haldar, Sabine Lauer, and Nafisa Ghori of Stanford University, with Pradipsinh Rathod of the Catholic University of America in Washington, D.C., have found that a chemical that disrupts the membrane network prevents the parasite from importing vital nutrients such as protein-building amino acids. The result is “the best evidence so far” that the membranes are an import system, says malaria researcher Barry Elford of Oxford University in the United Kingdom. The finding also suggests that drug researchers might take advantage of the system by designing antimalarial compounds that can sneak in with the essential nutrients.

The current work is an outgrowth of a previous discovery by Haldar and her colleagues. Almost 2 years ago, they showed that a chemical called PPMP, which prevents the parasite from forming a membrane component called sphingomyelin, disrupts the formation of the entire network—causing the tubules to become fragmented or constricted. To see whether this breakdown interferes with the network's proposed transport function, the team exposed both normal and PPMP-treated infected red blood cells to a dye called Lucifer yellow. In the control cells, the dye was distributed throughout the membrane network and in the parasite itself, but the PPMP-treated cells took up very little dye.

The researchers then went on to probe whether the chemical has a similar effect on the transport of nutrients. In a series of experiments, they exposed control cells and treated cells to several building blocks of nucleic acids and to glutamate, an amino acid used to build proteins—all with radioactive labels. When they measured how much radioactivity appeared in the two types of cells, they found that PPMP reduced accumulation of the nutrient molecules by as much as 91%. The team also found an even larger drop-off—up to 98%—in the amount of the imported substances actually used by the parasite to build DNA or proteins. The difference between the two figures suggests that while PPMP-treated cells were able to take up some of the molecules, they were unable to deliver them to the parasite, says Haldar.

Although the membrane-blocking compound itself might seem like an obvious drug candidate, Haldar says cutting the supply lines would kill the parasite slowly, giving it time to find alternate import routes and develop resistance. A better strategy might be to take advantage of the network to deliver drugs to the parasite. Indeed, while the network may block some drugs, a few compounds do seem to travel through it. The team found that blocking the network with PPMP also blocked the uptake of an experimental drug—a slightly modified nucleotide precursor that disrupts Plasmodium's DNA synthesis. PPMP seemed to block uptake of the lethal compound by about 90%, enabling many cells to survive treatment with the drug.

In fact, the new findings may help explain some drug-related mysteries, says Rathod: “It's always been puzzling why some modified nutrients are effective and why some very close analogs are ineffective.” Some sort of selection mechanism in the membrane network may protect the parasite from certain drugs, he says. The team hopes to figure out those mechanisms in future experiments. “If you understand the permeability and specificity,” says Rathod, “you can design drugs that take advantage of them.”

Evolutionary Biology

Morphologists Learn to Live With Molecular Upstarts

Michael Balter

PARIS—Pity the poor elephant shrew. For much of this century, taxonomists, at a loss over how to classify this small, bug-eating African mammal, put it with the insectivores, an order that includes moles and hedgehogs. Then, in the late 1980s, a reevaluation of mammalian fossil evidence led some experts to suggest that this lowly animal—with its long, flexible snout resembling a tiny elephant's trunk—was closer to rodents and rabbits on the evolutionary tree.

If new data presented at a recent meeting* here are correct, the elephant shrew's identity crisis may now be resolved. A comparison of molecular sequences from a dozen mammalian species suggests that it is much more closely related to its mighty namesake, the elephant, than to hedgehogs or rats.

The new classification of the elephant shrew, part of a thorough revamping of mammalian ancestry, was one of several revelations at the Paris gathering, which brought together 200 biologists to look back over the past decade of progress in systematics—the discipline devoted to putting taxonomy on solid scientific ground. In recent years, systematists have been struggling to reconcile classical “morphological” methods of reconstructing evolutionary trees—based on anatomical similarities and differences between living species or their extinct relatives, such as the shape of a molar or the intricate details of a bone—with an avalanche of new molecular data on genetic variation among organisms (Science, 22 February 1991, p. 872).

In the past, the face-off between proponents of molecules and those of morphology was sometimes bitter—particularly on the many occasions when the two methods gave different answers. While some presentations at the Paris meeting continued to fan those flames, there were encouraging signs that the two camps have moved much closer together—and that systematists of all stripes are coming to appreciate the important roles that both molecular and morphological evidence must play in sorting out the many remaining puzzles in the field.

And puzzles there are. Few groups of plants or animals have had their evolutionary, or phylogenetic, trees worked out with complete confidence. Debates still rage over when and how flowering plants split off from their nonflowering ancestors, the relations among orders of amphibians, the origins of rodents, and a host of other issues. And while a number of talks at the meeting showed the considerable power of molecular data to tease out elusive phylogenetic relations, there were also warning signs that molecular evidence can lead to misleading and embarrassing errors.

Nor were the questions tackled at the Paris meeting of only academic interest. “Systematics is the entire underpinning for evolutionary biology,” says Colin Patterson, a fossil-fish expert at London's Natural History Museum. “You can't even start to think about evolution without it.”

North-south split. While molecules and morphology often agree, they sometimes collide head-on. This rivalry is particularly sharp when it comes to classifying the amphibians. There are three living orders of these cold-blooded vertebrates: frogs, salamanders, and caecilians—burrowing, wormlike animals with small eyes and no limbs. Most morphological studies of living and fossil amphibians over the years have concluded that frogs and salamanders are closely related “sister groups” that split off from the caecilian line some 250 million years ago. But more recently, this neat grouping has been severely disrupted by molecular studies, which suggest that salamanders and caecilians are the sisters and that frogs are the more distant relatives of both.

Continental divide.

Early Earth history supports molecular data on amphibian relations.

SOURCE: B. HEDGES

In a talk at the Paris meeting, evolutionary biologist Blair Hedges of Pennsylvania State University suggested that the conflict might be resolved by looking at the geographical distribution of living species and correlating it with current views of early Earth geology. The amphibians are thought to have arisen at a time when there was just one supercontinent, Pangaea. About 200 million years ago, Pangaea split into a northern continental mass, Laurasia, and a southern mass, Gondwana, both of which later broke up into the northern and southern continents we know today.

If the morphological point of view is correct, Hedges argues, frogs and salamanders would have branched off from caecilians early in amphibian history, before Pangaea split. Today, then, we would expect to find frogs, salamanders, and caecilians in both the northern and southern hemispheres, but this is not the case. Modern salamanders live almost entirely in the north and caecilians almost entirely in the south. Hedges says that this geographical distribution better fits the scenario supported by the molecular data, in which salamanders and caecilians are sister groups that diverged from an early ancestor at about the same time as the Pangaean split—one branch going north and the other south.

Hedges concludes that when molecules and morphology are in conflict, the sequence data are usually a more reliable measure of phylogeny. The reason, he argues, is that morphological features are much more susceptible to “adaptive convergence”—what systematists call homoplasy—in which features that seem similar enough to derive from a common ancestor actually have an independent origin. Hedges maintains that trees should be built with molecular data alone and the morphological characters “mapped” onto the branches. The result, he concludes, would be a more accurate picture of how the morphological features changed over the course of evolution.

Hedges's proposal split the delegates along traditional lines. One participant privately referred to Hedges's suggestion as “blatant molecular chauvinism.” But it struck a chord with other proponents of molecular methods, including Morris Goodman of Wayne State University in Detroit. His decades-long contention that molecular data prove humans are more closely related to chimpanzees than to gorillas has achieved wide acceptance only in recent years. Says Goodman: “I believe that in the long run, as we learn to extract all the phylogenetic information stored in the DNA sequences of genomes, these sequences will prove to be more reliable than morphological characters when the two are in conflict.”

Taming of the shrew. The tale of the elephant shrew garnered another point for the molecular team, this time in a long-standing debate over the phylogenetic relations among placental mammals. More than 50 years ago, the American paleontologist George Gaylord Simpson divided the class Mammalia into a bewildering array of subclasses, infraclasses, cohorts, and superorders, based on morphological and fossil evidence. But despite Simpson's brave attempt, the exact relations among the 18 living orders of placental mammals have remained elusive, possibly because they evolved very rapidly after diverging from a common ancestor about 100 million years ago. This period of rapid evolution meant that organisms that once resembled each other quickly developed significant morphological differences, blurring relations among species. For example, it has long been unclear how groups such as the ungulates (which include horses, whales, and cattle) are related to the paenungulates (including elephants and sea cows). In addition, there are major questions about where aardvarks, rodents, and bats fit on the evolutionary tree.

Long-lost cousin.

Molecular data give the elephant shrew a new family tree.

PHOTO: GEA OLBRICHT/SOURCE: WILFRIED DE JONG

In recent years, most systematists have turned to cladistics, a method of phylogenetic analysis first developed in the 1950s by the German entomologist Willi Hennig. Cladistics arranges species together in special groupings, called clades, based on their inheritance of specific morphological features, such as feathers, fur, or flowers. If two species share modified versions of the same features—referred to as “shared derived characters”—it is assumed that they also share a common ancestor. And the extent to which these characters differ can be used to estimate how closely related species or whole groups of organisms are to one another.

Cladistics, which grounds classification schemes strictly on organisms' evolutionary history, has revolutionized systematics. For example, the technique allows the use of computers to compare a large number of characters from different taxonomic groups, and it is not restricted to morphological characters: Variations in molecular data—such as changes in the nucleotide or amino acid sequences of genes or proteins—can be treated as characters and plugged into phylogenetic analyses. Moreover, cladistic methods allow direct comparisons between morphological and molecular data.
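
The character-counting at the heart of cladistics is exactly the kind of job computers do well. A toy illustration, with made-up taxa and characters, scores two rival trees by parsimony: the tree requiring fewer independent state changes is preferred.

```python
# Toy cladistic scoring: rank rival trees by parsimony (fewest character
# state changes) using Fitch's algorithm.  Taxa and characters are invented.

# Three binary characters per taxon, e.g. character 0 = "has feathers".
chars = {"croc": "000", "bird": "110", "lizard": "000", "dino": "110"}

def fitch(tree, site):
    """Return (possible ancestral states, change count) for one character."""
    if isinstance(tree, str):                    # leaf: its observed state
        return {chars[tree][site]}, 0
    (l_states, l_cost), (r_states, r_cost) = (fitch(sub, site) for sub in tree)
    shared = l_states & r_states
    if shared:                                   # children can agree: no change
        return shared, l_cost + r_cost
    return l_states | r_states, l_cost + r_cost + 1   # forced state change

def score(tree):
    return sum(fitch(tree, site)[1] for site in range(3))

tree_a = (("bird", "dino"), ("croc", "lizard"))
tree_b = (("bird", "croc"), ("dino", "lizard"))
print(score(tree_a), score(tree_b))   # 2 4: the first grouping is favored
```

Real analyses run the same kind of count over hundreds of characters and thousands of candidate trees, which is why the method only became practical with computers.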

Yet, despite this progress, mammalian phylogeny has remained muddled because of frequent discrepancies between and even among morphological and molecular data sets. At the Paris meeting, however, biochemist Wilfried de Jong, of the Catholic University of Nijmegen in the Netherlands, presented molecular data from his own and other labs that may point the way out of this morass.

De Jong and his colleagues, including postdoc Ole Madsen and Michael Stanhope's team at Queen's University in Belfast, used computerized cladistics programs to compare the nucleotide or amino acid sequences of six different genes or proteins taken from 12 species, each representing a different order of placental mammals. They are: human, horse, bovine, dog, pangolin, elephant, hyrax, aardvark, elephant shrew, rabbit, rat, and armadillo. And each of the six data sets independently gave the same surprising answer: The elephant shrew, the elephant, and the aardvark were all closely related members of the paenungulate clade. Also joining this group was the hyrax, a small mammal with molars like those of a rhinoceros but incisors like a rodent's; its classification had long been controversial.

“Their data show a bizarre phylogenetic relationship among a bunch of mammals that theoretically shouldn't be related,” says Timothy Crowe, an evolutionary biologist at the University of Cape Town in South Africa. But de Jong says that this clade “is so strongly supported that it's amazing it has never been recognized at the morphological level.” He adds, “It seems molecules can really tell us something.”

Indeed, most scientists at the meeting found the story of the elephant shrew and its paenungulate cousins particularly convincing, because de Jong and his team used genes and proteins with widely different structures and functions to construct their proposed phylogenies. These included a protein that aids water transport across cell membranes, a component of the lens of the eye, and a blood-clotting protein.

De Jong and his colleagues believe their work may ultimately help untangle other branches of the mammalian tree as well. For example, their molecular data also suggest that the ungulates and paenungulates are not closely related, as had long been assumed. Rodents and rabbits also appear to be close cousins. Says London's Patterson: “De Jong and Stanhope are chopping the phylogenetic tree to bits and reshaping it.”

Misleading molecules. Although the elephant shrew and amphibian stories may turn out to be triumphs for molecular analysis, a sobering presentation by evolutionary biologist Gavin Naylor of Yale University provided a wake-up call for evolutionary biologists who might be tempted to put too much stock in the molecular approach. Naylor's talk—which Patterson says received “the closest thing to a standing ovation during the meeting”—showed that sequence data can sometimes mislead or even give an entirely wrong answer.

Wrong answer.

Molecules fail to reconstruct “true” vertebrate tree.

SOURCE: GAVIN NAYLOR

Naylor, together with Wesley Brown of the University of Michigan, decided to see how well molecular data would reconstruct the universally accepted phylogenetic relations among the major groups of vertebrates and their close evolutionary relatives—relations based on strong morphological and fossil evidence. The pair compared DNA nucleotide sequences from the mitochondria of 19 different taxa. Using a computer to crunch the numbers, Naylor and Brown aligned the sequences of 13 protein-coding mitochondrial genes from the 19 groups—a total of 12,234 nucleotide sites—and calculated the phylogenetic relations that best fit the data.

The result, Naylor told the meeting, gave “really quite impressive” statistical support—for what was clearly the wrong answer. For example, the molecular tree clustered frogs and chickens in a clade with fish, even though these three groups do not share an exclusive common ancestor. To make matters worse, echinoderms (which include the sea urchin and the starfish) branched closer to the vertebrates than did amphioxus, a primitive marine chordate that is well established as the closest living relative of the vertebrates. “I think this talk was fairly distressing for many people,” says de Jong. And Diethard Tautz, a developmental biologist at the University of Munich, comments, “It has become clear that the analysis of molecular data is not as straightforward as many would have wished.”

To figure out what was going wrong, Naylor and Brown looked more closely at the 12,234 nucleotides, to see which ones were providing accurate information about the expected phylogenetic tree and which were causing the problems. “We wanted to see what makes a good site good and a poor site poor,” Naylor says. The results were very instructive. For example, when they grouped the nucleotides into codons—nucleotide triplets that code for specific amino acids—they found that codons corresponding to the hydrophobic (water-hating) amino acids gave an “absolutely rotten” fit to the tree. On the other hand, codons for amino acids that are hydrophilic (water-loving) or carry an electric charge provided a much better fit. But the best fit of all came from amino acids that seemed to be critical for determining the proteins' three-dimensional structure.
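
The site-sorting Naylor and Brown describe can be sketched in a few lines. The fragment below, with invented sequences and a rough residue classification (the real analysis worked from codons in mitochondrial genes), partitions alignment columns by chemical class so that each class can be fed to tree-building separately.

```python
# Sketch of partitioning alignment sites by the chemical class of the
# residues they contain.  The sequences and the residue sets are
# illustrative stand-ins, not the study's actual data or classification.

HYDROPHOBIC = set("AVLIMFWCP")   # rough hydrophobic set; a judgment call
CHARGED = set("DEKRH")

def classify_column(column):
    residues = set(column)
    if residues <= HYDROPHOBIC:
        return "hydrophobic"
    if residues & CHARGED:
        return "charged/polar"
    return "other"

def partition(alignment):
    """Map residue class -> column indices for a list of aligned sequences."""
    parts = {}
    for i, column in enumerate(zip(*alignment)):
        parts.setdefault(classify_column(column), []).append(i)
    return parts

aligned = ["MVLSKED", "MILTKEE", "MVMSRED"]   # three toy aligned sequences
print(partition(aligned))
```

Running tree-building on each partition separately is what let Naylor and Brown ask which classes of sites carry genuine phylogenetic signal and which merely add noise.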

When the analysis was rerun using only the nucleotide sites corresponding to these amino acids, the expected phylogenetic tree reemerged with considerable statistical support. Naylor concluded that rather than trying to build better trees by sequencing more and more genes—an approach common among molecular phylogeneticists—“our efforts are probably better spent investigating which kinds of sites best reflect actual historical, phylogenetic signals.” Michael Nedbal, an evolutionary biologist at the Field Museum in Chicago, says Naylor's talk was “an especially important message to those in molecular phylogenetics. Just like morphologists, molecular systematists must investigate how their characters are evolving before subjecting them to phylogenetic reconstruction.”

While the debate over the relative merits of molecules and morphology—and how to get the most out of each data set—is far from over, the take-home message from the Paris meeting was that each side ignores the other at its peril. Says zoologist Tim Littlewood of London's Natural History Museum: “We are all searching for a Tree of Life we can agree on.”

↵* “Molecules and Morphology in Systematics,” Paris, 24-28 March 1997.

Physics

Flaw Found in a Quantum Code

Charles Seife

Charles Seife is a writer in Riverdale, New York

With a basic principle of physics on its side, quantum cryptography seemed foolproof. Because the very act of observing a quantum system—a single photon or particle—disturbs it, any effort to crack quantum secrecy should leave a detectable trace. Or so physicists thought. In the 28 April issue of Physical Review Letters, researchers report with regret that another principle of quantum mechanics could undermine one quantum-cryptography scheme. The threatened scheme has never been put into practice, and the threat depends on technologies that don't exist yet outside theorists' minds. But like any blemish on something thought to be flawless, the finding has unsettled quantum cryptographers.

“I'm very disappointed with this result,” says Claude Crepeau of the University of Montreal. The papers, one by Dominic Mayers of Princeton University and the other by Hoi-Kwong Lo of Hewlett-Packard and H. F. Chau of the University of Hong Kong, do not affect a basic quantum-cryptography stratagem called quantum “key exchange.” In this stratagem, Alice (the sender) gives Bob (the receiver) a secret password in the form of a string of photons polarized in different directions. Any eavesdropper trying to measure the polarizations would alter them. But Mayers, Lo, and Chau have found that a quantum principle called entanglement, in which the state of one photon in a pair can reveal everything about its counterpart, can in theory be used to undermine a second quantum scheme called bit commitment.

Bit commitment gives Alice and Bob a way to exchange information even if they don't trust each other. “Suppose Alice wants to prove that she can make a prediction about the stock market, but wants to make sure that Bob can't use the information to his advantage,” explains Richard Hughes, a physicist at Los Alamos National Laboratory in New Mexico. That requires a way for Alice to transmit a message to Bob while retaining control over when he can read it. “It's post-Cold War cryptography,” says Charles Bennett, a cryptographer at the IBM Thomas J. Watson Research Center in Westchester County, New York. “There are no enemies anymore, but you don't trust your friends.”

In a bit-commitment scheme proposed in 1984, mistrustful Alice sends a string of photons, all of them polarized diagonally, at 45° or 135°, or rectilinearly, at 0° or 90°. The entire string represents either a 1 (say, a series of diagonal polarizations) or a 0 (rectilinear polarizations). Bob receives each photon and randomly chooses to determine its polarization with a rectilinear or a diagonal filter. Only the correct filter will give a real measurement, but Bob can't tell when he has guessed correctly. Using the wrong filter—measuring, say, rectilinearly polarized photons with a diagonal filter—will destroy the information in the photons and yield a string of random diagonal measurements, indistinguishable from real ones.

As a result, Bob gets no information until Alice chooses to reveal whether she sent a 1 or a 0. Bob can then verify, after the fact, that Alice really sent what she claims, by looking at the photons he measured with the correct filter. If Alice has told the truth, his readings for those photons will agree with hers. Alice can't lie, saying she sent diagonally polarized photons when they were actually rectilinear, because she has no idea what Bob saw when he used a diagonal filter. She has to guess—and because Bob randomly saw a 45° or 135° polarization, Alice will be wrong about half the time. Thus, Alice has to commit herself to a value for the bit when she sends it, but doesn't need to show her hand until later.
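
The protocol's statistics can be mimicked classically: the matching filter reads the photon's true value, while the wrong filter returns an independent coin flip. The sketch below is a simulation of those statistics, not real quantum optics; it shows why an honest reveal always checks out, while a lie about the basis forces Alice to guess values that Bob can catch.

```python
import random

# Classical simulation of the 1984-style bit-commitment scheme.  Photons
# are modeled only by their measurement statistics: the matching filter
# reads the true value, the wrong filter yields an independent coin flip.

random.seed(7)  # reproducible run

RECT, DIAG = "R", "D"
N = 60  # photons per commitment

def commit(bit):
    """Alice's bit fixes the basis; the photon values are her secret string."""
    basis = DIAG if bit else RECT
    return basis, [random.randint(0, 1) for _ in range(N)]

def measure(basis, values):
    """Bob measures each photon with a randomly chosen filter."""
    results = []
    for v in values:
        f = random.choice([RECT, DIAG])
        results.append((f, v if f == basis else random.randint(0, 1)))
    return results

def accept(claimed_basis, claimed_values, results):
    """Bob checks every photon he happened to measure in the claimed basis."""
    return all(seen == claim
               for (f, seen), claim in zip(results, claimed_values)
               if f == claimed_basis)

basis, values = commit(1)          # Alice commits to 1 (all diagonal)
results = measure(basis, values)   # Bob learns nothing usable yet
print(accept(DIAG, values, results))   # honest reveal: always accepted

# To claim 0 instead, Alice must invent rectilinear values; each photon
# Bob measured rectilinearly then exposes her with probability 1/2.
fake = [random.randint(0, 1) for _ in range(N)]
print(accept(RECT, fake, results))
```

With 60 photons, roughly 30 of Bob's measurements use the rectilinear filter, so a lying Alice slips through only with probability about one in a billion.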

But there's a hole in this and all other bit-commitment schemes, the new work shows. Instead of producing each photon individually, Alice can prepare them as Einstein-Podolsky-Rosen (EPR) pairs: two photons whose polarizations are intimately linked—entangled—even as they travel in different directions. Sending a rectilinearly polarized photon to Bob, Alice stores the other without measuring it. Bob does the measurements as usual. Normally, this would mean that Alice was committed to a 0. But thanks to entanglement, she's not.

Alice can change her commitment from a 0 to a 1, or vice versa, simply by measuring each stored photon with a diagonal filter. Because her photon and Bob's make up an EPR pair, measuring one tells her all about the other; Alice thus knows what Bob's diagonal measurements were. Alice can now claim she sent a 1—a string of diagonal polarizations—and there is no way Bob can tell that she is cheating.
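
The loophole can be mimicked with the same statistical stand-in, adding the one quantum rule that matters here: same-basis measurements on the two halves of an EPR pair agree perfectly. In the sketch below, a classical simulation of those correlations, Alice commits to nothing, measures her stored halves diagonally at reveal time, and passes Bob's check every time.

```python
import random

# Sketch of the entanglement cheat, simulating only the relevant quantum
# statistics: same-basis measurements on an EPR pair agree perfectly;
# different-basis measurements are uncorrelated.

random.seed(11)  # reproducible run

RECT, DIAG = "R", "D"
N = 60

# Bob measures his half of each EPR pair as usual; Alice stores hers
# unmeasured, so she is not yet committed to anything.
bob = [(random.choice([RECT, DIAG]), random.randint(0, 1)) for _ in range(N)]

# At reveal time, Alice measures her stored halves with the diagonal
# filter.  Perfect same-basis correlation hands her Bob's diagonal
# outcomes; where Bob used the rectilinear filter, her result is an
# irrelevant coin flip that Bob never checks against a diagonal claim.
alice_claim = [seen if f == DIAG else random.randint(0, 1) for f, seen in bob]

# Bob's check from the honest protocol: every diagonally measured photon
# must match Alice's claimed diagonal string.
ok = all(seen == claim
         for (f, seen), claim in zip(bob, alice_claim)
         if f == DIAG)
print(ok)   # True: Bob cannot detect that Alice committed to nothing
```

Had Alice instead wanted to claim a 0, she would have measured her halves rectilinearly; either way the stored pairs let her defer the choice until after Bob's measurements, which is exactly what bit commitment was supposed to prevent.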

This scenario is still an academic exercise. For one thing, it requires the ability to store a photon without affecting its quantum state, something researchers in quantum computation are only taking the first steps toward doing. But “it's a big disappointment for anyone interested in cryptography,” says Mayers, adding that it threatens a host of post-Cold War protocols designed to keep Alice, Bob, or—in some cases—both of them in the dark about parts of the information being transferred.

“It's the foundation stone which held up a useful part of quantum cryptography,” agrees Bennett. “Now, it's gone, and there's no way to fix it.”

Tropospheric Processes

Julia Uppenbrink,

Brooks Hanson

Most of what we commonly think of as Earth's atmosphere is made up of two layers, the troposphere, which extends from the Earth's surface up to between 10 and 17 kilometers, and the stratosphere, which extends from the top of the troposphere up to about 50 kilometers. They are separated by a boundary, the tropopause, across which flow is relatively limited. Following the discovery of the ozone hole in the polar stratosphere, research has focused on understanding the processes leading to its formation, and we may now understand more about the processes in the more distant stratosphere than about those in the troposphere. The troposphere contains a much wider variety of species and particles, and the higher atmospheric density and other factors such as the presence of much liquid water in the troposphere enhance the interactions and reactions among these species. Chemicals and particles enter the troposphere from many diverse sources, and some persist for only a few hours before they are removed or altered. Mixing is spatially and temporally variable at all levels, and the chemicals and particles interact in complex ways with large-scale processes such as cloud formation and convection. This special issue on Tropospheric Processes provides an overview of the current understanding of these complex processes, a detailed appreciation of which is vital for addressing societal issues such as air pollution and climate change.

In a brief overview article, Kley highlights some of the key issues related to the coupling of chemistry and transport in the troposphere. Three of the detailed articles focus on related aspects of tropospheric chemistry, which today is concerned increasingly with processes involving condensed species in the atmosphere. Finlayson-Pitts and Pitts Jr. focus on air pollution, Ravishankara examines reactions on and in tropospheric particles, and Andreae and Crutzen discuss the biogeochemical sources of aerosols and their effect on tropospheric chemistry, as exemplified by reactions associated with the marine boundary layer. As emphasized in all of these articles, and highlighted in an article by Roscoe and Clemitshaw, improving our understanding of tropospheric processes requires better measurements of chemicals and particles globally. Baker describes how the microphysics of aerosols, ice particles, and water droplets affect the large-scale properties of clouds. Mahlman discusses transport processes in the upper troposphere and across the tropopause, which affect the chemistry of key species such as ozone, in both the troposphere and stratosphere. In a news story, Kerr examines the attempts to link these and other small- and large-scale processes in climate models. Finally, in a Policy Forum, Zillman highlights some of the recent problems and successes in linking basic atmospheric science and international public policy, and the need for international coordination.

Climate Change: Greenhouse Forecasting Still Cloudy

Richard A. Kerr

An international panel has suggested that global warming has arrived, but many scientists say it will be a decade before computer models can confidently link the warming to human activities

The headlines a year and a half ago positively brimmed with assurance: “Global Warming: No Longer in Doubt,” “Man Adversely Affecting Climate, Experts Conclude,” “Experts Agree Humans Have ‘Discernible’ Effect on Climate,” “Climate Panel Is Confident of Man's Link to Warming.” The official summary statement of the UN-sponsored Intergovernmental Panel on Climate Change (IPCC) report that had prompted the headlines seemed reasonably confident, too: “… the balance of evidence suggests that there is a discernible human influence on global climate.” But as negotiators prepare to gather in Bonn in July to discuss a climate treaty that could require nations to adopt expensive policies for limiting their emissions of carbon dioxide and other greenhouse gases, many climate experts caution that it is not at all clear yet that human activities have begun to warm the planet—or how bad greenhouse warming will be when it arrives.

Rough approximation.

Models can't reproduce clouds, but they incorporate some cloud effects, including those of water (white) in the atmosphere, seen in the above model output.

NCAR

What had inspired the media excitement was the IPCC's conclusion that the half-degree rise in global temperature since the late 19th century may bear a “fingerprint” of human activity. The patchy distribution of the warming around the globe looks much like the distinctive pattern expected if the heat-trapping gases being poured into the atmosphere were beginning to warm the planet, the report said. But IPCC scientists now say that neither the public nor many scientists appreciate how many if's, and's, and but's peppered the report. “It's unfortunate that many people read the media hype before they read the [IPCC] chapter” on the detection of greenhouse warming, says climate modeler Benjamin Santer of Lawrence Livermore National Laboratory in Livermore, California, the lead author of the chapter. “I think the caveats are there. We say quite clearly that few scientists would say the attribution issue was a done deal.”

Santer and his IPCC colleagues' overriding reason for stressing the caveats is their understanding of the uncertainty inherent in computer climate modeling. The models are key to detecting the arrival of global warming, because they enable researchers to predict how the planet's climate should respond to increasing levels of greenhouse gases. And while predicting climate has always been an uncertain business, some scientists assert that developments since the IPCC completed its report have, if anything, magnified the uncertainties. “Global warming is definitely a threat as greenhouse-gas levels increase,” says climate modeler David Rind of NASA's Goddard Institute for Space Studies (GISS) in New York City, “but I myself am not convinced that we have [gained] greater confidence” in recent years in our predictions of greenhouse warming. Says one senior climate modeler who prefers not to enter the fray publicly: “The more you learn, the more you understand that you don't understand very much.” Indeed, most modelers now agree that the climate models will not be able to link greenhouse warming unambiguously to human actions for a decade or more.

Crucial component.

Thunderstorms like the one above help to shape climate by lofting heat and moisture.

NCAR

The effort to simulate climate in a computer faces two kinds of obstacles: lack of computer power and a still very incomplete picture of how real-world climate works. The climate forecasters' basic strategy is to build a mathematical model that recreates global climate processes as closely as possible, let the model run, and then test it by comparing the results to the historical climate record. But even with today's powerful supercomputers, that is a daunting challenge, says modeler Michael Schlesinger of the University of Illinois, Urbana-Champaign: “In the climate system, there are 14 orders of magnitude of scale, from the planetary scale—which is 40 million meters—down to the scale of one of the little aerosol particles on which water vapor can change phase to a liquid [cloud particle]—which is a fraction of a millionth of a millimeter.”

Of these 14 orders of magnitude, notes Schlesinger, researchers are able to include in their models only the two largest, the planetary scale and the scale of weather disturbances: “To go to the third scale—which is [that of thunderstorms] down around 50-kilometer resolution—we need a computer a thousand times faster, a teraflops machine that maybe we'll have in 5 years.” And including the smallest scales, he says, would require 10³⁶ to 10³⁷ times more computer power. “So we're kind of stuck.”
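Schlesinger's “thousand times faster” figure follows from a standard back-of-envelope scaling rule (the arithmetic below is ours, not from the article): refining the grid by a factor r multiplies the work by roughly r⁴ — r³ for the three spatial dimensions, times another r because the time step must shrink with the grid spacing.

```python
def cost_factor(coarse_km, fine_km):
    # Work scales as r^3 for the three spatial dimensions, times one more
    # factor of r for the proportionally shorter time step.
    r = coarse_km / fine_km
    return r ** 4

# Going from a ~300-km grid down to ~50-km thunderstorm scales:
print(round(cost_factor(300, 50)))  # → 1296, i.e. roughly a thousandfold
```

The same rule makes clear why the smallest scales are hopeless: each further order of magnitude in resolution costs another factor of ten thousand.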

To get unstuck, modelers “parameterize” smaller scale processes known to affect climate, from the formation of clouds to the movement of ocean eddies. Because they can't model, say, every last cloud over North America, modelers specify the temperatures and humidities that will spawn different types of clouds. If those conditions hold within a single grid box—the horizontal square that represents the model's finest level of detail—the computer counts the entire area as cloudy. But as modelers point out, the grid used in today's models—typically a 300-kilometer square—is still very coarse. One over the state of Oregon, for instance, would take in the coastal ocean, the low coast ranges, the Willamette Valley, the high Cascades, and the desert of the Great Basin.
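A grid-box cloud parameterization of the kind described can be caricatured in a few lines. The thresholds and cloud categories below are invented purely for illustration — no real climate model is anywhere near this simple:

```python
def diagnose_cloud(rel_humidity, temperature_c):
    """Label one ~300-km grid box as a whole, using hypothetical rules:
    the cell counts as cloudy once its mean relative humidity crosses a
    prescribed threshold, with the cloud type set by temperature."""
    if rel_humidity < 0.8:
        return "clear"
    return "ice cloud" if temperature_c < -20 else "liquid cloud"

# One (relative humidity, temperature) pair per grid box:
grid = [(0.95, -30), (0.85, 10), (0.5, 20)]
print([diagnose_cloud(rh, t) for rh, t in grid])
# → ['ice cloud', 'liquid cloud', 'clear']
```

The coarseness the modelers complain about is visible here: a single rule fires for the whole box, so one label has to stand in for every cloud from the coast to the desert.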

Having the computer power to incorporate into the models a more detailed picture of clouds wouldn't eliminate uncertainties, however, because researchers are still hotly debating the overall impact of clouds on future climate. In today's climate, the net effect of clouds is to cool the planet—although they trap some heat, they block even more by reflecting sunlight back into space. How that balance would change under greenhouse warming, no one knows. A few years ago, a leading climate model—developed at the British Meteorological Office's Hadley Center for Climate Prediction and Research, in Bracknell—predicted that an Earth with twice the preindustrial level of carbon dioxide would warm by a devastating 5.2 degrees Celsius. Then Hadley Center modelers, led by John Mitchell, made two improvements to the model's clouds—how fast precipitation fell out of different cloud types and how sunlight and radiant heat interacted with clouds. The model's response to a carbon dioxide doubling dropped from 5.2°C to a more modest 1.9°C.

Other models of the time also had a wide range of sensitivities to carbon dioxide, largely due to differences in the way their clouds behaved. That range of sensitivity has since narrowed, says modeler and cloud specialist Robert Cess of the State University of New York, Stony Brook, but “the [models] may be agreeing now simply because they're all tending to do the same thing wrong. It's not clear to me that we have clouds right by any stretch of the imagination.”

Nor are clouds the only question mark in researchers' picture of how climate works. Modelers saw for the first time the fingerprint of global warming when they folded an additional process into the models: the effect of pollutant hazes on climate. Windblown soil and dust, particles from the combustion of fossil fuels, and ash and soot from agricultural burning all reflect sunlight—shading and cooling the surface beneath them. Including this aerosol effect in four independent climate models at three centers—Livermore, the Hadley Center, and the Max Planck Institute for Meteorology (MPI) in Hamburg, Germany—produced geographic patterns of temperature changes that resembled those observed in the real world over the past few decades, such as the greater warming of land than ocean.

Fingerprinting work since then has only strengthened the confidence of IPCC's more confident scientists that greenhouse warming has arrived. “I've worked with the models enough to know they're not perfect, but we keep getting the same answer,” says Tim P. Barnett, a climatologist at the Scripps Institution of Oceanography in La Jolla, California, and a co-author of the IPCC chapter. Another climatologist and IPCC contributor, Gerald North of Texas A&M University in College Station, is similarly heartened. “I'm pretty optimistic about climate modeling. … I don't know anybody doing [fingerprinting] who is not finding the same result.”

But the assumptions about how hazes affect climate may have taken a hit recently from climatologist and modeler James Hansen of NASA's GISS—the man who told Congress in 1988 that he believed “with a high degree of confidence” that greenhouse warming had arrived. In a recent paper, Hansen and his GISS colleagues pointed out that recent measurements suggest that aerosols don't just cool; they also warm the atmosphere by absorbing sunlight. The net effect of this reflection and absorption, Hansen estimates, would be small—too small to have much effect on temperature.

Hansen and his colleagues conclude that aerosols probably do have a large effect on climate, but indirectly, through clouds. By increasing the number of droplets in a cloud, aerosols can amplify the reflectivity of clouds, and thus may have an overall cooling effect on the atmosphere. If true, this would greatly complicate the modelers' work, because meteorologists are only just starting to understand how efficiently particles of different sizes and compositions modify clouds. “I used to think of clouds as the Gordian knot of the problem,” says cloud specialist V. Ramanathan of Scripps. “Now I think it's the aerosols. We are arguing about everything.”

And the complications don't stop with the multiplication of aerosol effects, according to Christopher Folland of the Hadley Center. Folland and his colleagues have been trying to sort out what was behind the intermittent warming of recent decades, which in the third quarter of the century was more rapid in the Southern than Northern Hemisphere. Earlier work by Santer and a dozen other colleagues showed an increasing resemblance between the observed pattern of warming through 1987, the end of their temperature record, and the results of a model run that incorporated aerosol effects. The researchers suggested that the North's more abundant pollutant aerosols could have been moderating the warming there from greenhouse gases. But when Folland compared the results of his model run with a longer, more recent temperature record, the resemblance that had been building into the 1980s faded by the early 1990s. Contrarian Patrick Michaels of the University of Virginia, Charlottesville, also has pointed out this trend.

The Hadley model suggests that “there appears to be more than one reason” for the variations, says Folland. The waning of aerosols as pollution controls took effect probably helped the North catch up, he says, but so did natural shifts in atmospheric circulation that tended to warm the continents (Science, 7 February, p. 754). Most provocatively, Folland and his colleagues are suggesting that a shift in North Atlantic Ocean circulation that has tended to warm the region also has contributed. “There's no doubt,” says Santer, “that multiple natural and anthropogenic factors can contribute, and probably have, to the interhemispheric temperature contrast. … We've learned something about detection.”

All of which only adds to the skepticism of scientists who might be called the “silent doubters”: meteorologists and climate modelers who rarely give voice to their concerns and may not have participated even peripherally in the IPCC. “There really isn't a persuasive case being made” for detection of greenhouse warming, argues Brian Farrell of Harvard University, who runs models to understand climate change in the geologic past. Farrell has no quarrel with the IPCC chapter on detecting greenhouse warming, but he says the executive summary did not “convey the real uncertainties the science has.” He further contends that if IPCC scientists had had real confidence in their assertion that global warming had arrived, they would have stated with more precision how sensitive the climate system is to greenhouse gases. But the IPCC left the estimate of the warming from a doubling of carbon dioxide at 1.5°C to 4.5°C, where it has been for 20 years. “That's an admission that the error bars are as big as the signal,” says Farrell.

Climate modeler Max Suarez of NASA's Goddard Space Flight Center in Greenbelt, Maryland, agrees that it's “iffy” whether the match between models and temperature records is close enough to justify saying that greenhouse warming is already under way. “Especially if you're trying to explain the very small [temperature] change we've seen already,” he says, “I certainly wouldn't trust the models to that level of detail yet.”

Rather than dwelling on model imperfections, IPCC co-author Barnett emphasizes some of the things that current models are doing fairly well—simulating present and past climates and the changing seasons, predicting El Niño a year ahead, and producing good simulations of decades-long climate variations. But he agrees that too much confidence has been read into the IPCC summary statement. “The next 10 years will tell; we're going to have to wait that long to really see,” he says. Klaus Hasselmann of the MPI also sees a need to wait. He and his colleagues “think we can see the [greenhouse warming] signal now with 97% confidence.” But, as North notes, “all that assumes you knew what you were doing to start with” in building the models. Hasselmann has faith in his model but recognizes that his faith is not universally shared. “The signal is not so much above the noise that you can convince skeptics,” he observes. “It will take another decade or so to work up out of the noise.”

That's no excuse for complacency, many climate scientists say. Basic theory, this century's warming, and geologic climate records all suggest that increasing carbon dioxide will warm the planet. “I'd be surprised if that went away,” says Suarez, as would most climate researchers. North suggests that while researchers are firming up the science, policy-makers could inaugurate “some cautious things” to moderate any warming. The last thing he and his colleagues want is a rash of headlines saying the threat is over.

Climate Change: Model Gets It Right—Without Fudge Factors

Richard A. Kerr

Climate modelers have been “cheating” for so long it's almost become respectable. The problem has been that no computer model could reliably simulate the present climate. Even the best simulations of the behavior of the atmosphere, ocean, sea ice, and land surface drift off into a climate quite unlike today's as they run for centuries. So climate modelers have gotten in the habit of fiddling with fudge factors, so-called “flux adjustments,” until the model gets it right.

No one liked this practice (Science, 9 September 1994, p. 1528). “If you can't simulate the present without arbitrary adjustments, you have to worry,” says meteorologist and modeler David Randall of Colorado State University (CSU) in Fort Collins. But now there's a promising alternative. Thirty researchers at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, have developed the first complete model that can simulate the present climate as well as other models do, but without flux adjustments. The new NCAR model, says Randall, “is an important step toward removing some of the uneasiness people have about trusting these models to make predictions of future climate” (see main text).

The NCAR modelers built a host of refinements into their new Climate System Model (CSM). But the key development, says CSM co-chair Byron Boville, was finding a better way to incorporate the effects of ocean eddies, swirling pools of water up to a couple of hundred kilometers across that spin off strong currents. Climate researchers have long known that the eddies, like atmospheric storms, help shape climate by moving heat around the planet. But modelers have had a tough time incorporating them into their simulations because they are too small to show up on the current models' coarse geographic grid. The CSM doesn't have a finer mesh, but it does include a new “parameterization” that passes the effects of these unseen eddies onto larger model scales, using a more realistic means of mixing heat through the ocean than any earlier model did, says Boville.
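The idea of handing the work of unresolved eddies to the coarse grid can be sketched as extra diffusion of heat between neighboring grid boxes. This is a deliberately crude stand-in — the CSM's actual scheme is far more sophisticated — and every number below is illustrative:

```python
def step_temperature(T, kappa=0.2):
    """One explicit diffusion step on a 1-D column of grid-box temperatures.
    kappa is a hypothetical eddy-mixing coefficient; edges are insulating."""
    n = len(T)
    return [T[i] + kappa * ((T[i - 1] if i > 0 else T[i])
                            - 2 * T[i]
                            + (T[i + 1] if i < n - 1 else T[i]))
            for i in range(n)]

T = [20.0, 10.0, 10.0, 10.0]        # a warm anomaly in one grid box
for _ in range(50):
    T = step_temperature(T)
print([round(t, 1) for t in T])     # heat has mixed toward the 12.5 mean
```

The parameterization's job is exactly this: the individual swirls never appear, but their net effect — heat carried down-gradient from warm boxes to cool ones, with the total conserved — does.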

Even when run for 300 model “years,” the CSM doesn't drift away from a reasonably realistic climate, says NCAR's Climate and Global Dynamics director Maurice Blackmon. “Being able to do this without flux corrections gives you more credibility,” he says. “For better or worse, we're not biasing the results as was necessary before.”

The first results from this still vastly simplified model imply that future greenhouse warming may be milder than some other models have suggested—and could take decades to reveal itself. Doubling atmospheric carbon dioxide concentrations in the model raised the global temperature 2 degrees Celsius, which puts the model's sensitivity to greenhouse gases near the low end of current estimates. Based on an array of different models and other considerations, the UN-sponsored Intergovernmental Panel on Climate Change estimated in 1995 that a carbon dioxide doubling could raise global temperatures by as much as 4.5°C; their best guess was 2.5°C.

A 300-year run without any increase in greenhouse gases produced slow, natural variations in global temperature of about 0.5°C. If the real climate behaves the same way, “two-thirds to three-quarters of the [temperature variations of the] last 130 years can be explained as natural variation,” says Blackmon. That would make the detection of a modest-size greenhouse warming all the more difficult.

The CSM is available on the Internet, but Blackmon warns that if you want to check out future climate scenarios, you'll “need the biggest supercomputer you can get.” Indeed, even NCAR researchers haven't been able to experiment with the model on as large a computer as they would like. While their purchase of an NEC SX4 computer is tied up in a trade dispute with Japan (Science, 30 August 1996, p. 1177), they are making do with a leased Cray C-90 with perhaps 20% of the speed of the SX4. That worries some modelers. Americans have “been among the leaders of the field from the beginning,” says CSU's Randall, but “if we can't get access to the most powerful machines, we are going to be left behind.”