
Predicting the world's weather

Can a single family of models simulate the weather for Manhattan, Malaysia, and Mars? Thanks to a successful interagency collaboration, the answer is “yes.” Barely a decade old, the Weather Research and Forecasting model (WRF, pronounced “warf”) is already the world’s most popular tool for predicting whether the next several days will be sunny, soggy, or snowy. Operational versions of WRF are used by the meteorological services of nations across the globe, and the model serves as the cornerstone of one- to three-day forecasts issued by NOAA’s National Weather Service.

The WRF framework not only provides a foundation for the weather forecasts on which millions rely; it also supports a vast amount of atmospheric research. More than 13,000 scientists in some 130 countries, including researchers at 200 U.S. universities, now use WRF. Among those users, more than 90% employ the Advanced Research WRF (ARW), maintained by NCAR.

Leaders of the Developmental Testbed Center, which facilitates the creation of new WRF variants, include Steven Koch (NOAA) and national director Ying-Hwa “Bill” Kuo (NCAR).

“The WRF development project is the first time researchers and operational scientists have come together in a project of this magnitude,” says Louis Uccellini, director of NOAA’s National Centers for Environmental Prediction (NCEP). Along with NCAR and NOAA, other partners in this vast effort have included the Air Force Weather Agency (AFWA), Naval Research Laboratory and Army Research Laboratory, Federal Aviation Administration, University of Oklahoma, and more than 150 other universities, laboratories, and agencies from across the nation and beyond.

WRF’s hybrid nature arose from a longstanding dilemma. From the 1950s onward, both researchers and forecasters increasingly relied on numerical weather prediction models, but the two groups rarely used the same ones. Those who predicted the weather for a living insisted on stable, thoroughly tested software, while those who studied the atmosphere often built more complex, research-oriented models—tools that might yield invaluable insight on atmospheric features but that weren’t necessarily designed with day-to-day forecasting in mind.

Both paths were valuable and fruitful, yet some wondered whether a joint approach might offer even greater benefits. In 1996, a program reviewer asked NCAR’s Ying-Hwa “Bill” Kuo and Robert Gall how much of an impact the center’s community-oriented modeling had on operational forecasting. “We said, ‘It’s close to zero,’” Kuo later recalled. “But we agreed that NCAR’s work should benefit society.”

Shortly afterward, Kuo and Gall met with Geoff DiMego (NCEP) to lay the groundwork for the multiagency collaboration that led to WRF. By 2001, a beta-test version was issuing daily forecasts with solid results.

From the common seeds of WRF’s overall structure, a number of variations emerged, including two major dynamical cores that share physics routines and a software framework. In 2006 NOAA and AFWA adopted operational versions of the model after meticulous testing. “An operational system must be reliable and accurate every day over the full range of highly varied conditions that the atmosphere produces over a number of years,” says WRF program coordinator Nelson Seaman (Pennsylvania State University).

WRF data have been analyzed in countless ways, from a high-resolution illustration of water vapor coursing through the tropics (left) to a 2002 three-dimensional depiction of U.S. weather (right).

Meanwhile, the research-oriented ARW cultivated by NCAR and its university collaborators made headway on some of the atmosphere’s most intractable problems. Starting in 2003, NCAR used ARW to generate daily forecasts of convection across much of the central and eastern United States. It was the first time software had modeled individual showers and thunderstorms in real time over such a large area. Thanks to such testing, forecasters now have a stronger sense of what form severe weather is likely to take over a day’s time—a tornadic supercell, a squall line, a prolonged rainstorm, or something else.

Both NOAA and NCAR now run WRF variants attuned to depicting tropical cyclones. In research mode, NCAR scientists have drilled down to resolutions as tight as half a city block (62 meters/200 feet), which proved enough to capture turbulent eddies that can disperse a hurricane’s energy. NOAA has linked its Hurricane WRF to an ocean model from the University of Miami and will add a storm surge model to the mix. “We’re building this system component by component to address the complete problem of coastal inundation,” says NOAA’s Naomi Surgi.

WRF’s twin missions of prediction and research are tightly coupled at the Developmental Testbed Center, housed in Boulder and led by Kuo with several deputy directors, including Steven Koch (NOAA) and Barbara Brown (NCAR). Dedicated computing power allows new code to be tested for as long as several months. “The DTC is one of the shining examples of NCAR and NOAA working together,” says Kuo.

In keeping with its decades-long tradition of training and user support for community models, NCAR sponsors several WRF workshops and tutorials each year at sites ranging from Boulder to Korea and England. “People keep asking for more,” says NCAR’s Joseph Klemp. Online resources help meet the growing demand. In 2010, NCAR and NOAA teamed for a workshop and tutorial on the two hurricane-oriented variations of WRF. And WRF offshoots continue to blossom—including one designed at the California Institute of Technology to simulate the weather on Mars.

Today — DART's target: Infusing models with data

"I really appreciate the excellent user support and extensive documentation."

—Ryan Torn, University at Albany, State University of New York

Satellites, radars, and other observing tools generate a smorgasbord of atmospheric data each day. This bounty could greatly improve weather forecasts and climate projections. However, some types of data can’t be readily assimilated by a given computer model, just as the most ample, delicious meal might not go down easily for everyone.

NCAR’s Data Assimilation Research Testbed (DART) is smoothing the way for models to ingest the data they need to thrive. Prior to DART, most assimilation activities were tailored for specific models and observing systems—typically an exhaustive, time-consuming process. In contrast, the open-source DART employs a modular approach: a fixed core is paired with ready-to-use interfaces for a wide variety of computing platforms, models, and data types.
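The modular pattern described above — a fixed assimilation core talking to any model through a small, standard interface — can be sketched in a few lines. The class and function names below are invented for illustration and are not DART's actual API; this is only a minimal Python sketch of the design idea.

```python
import numpy as np
from typing import Protocol

class ModelInterface(Protocol):
    """Hypothetical contract a model must satisfy to plug into the fixed core."""
    def get_state(self) -> np.ndarray: ...
    def set_state(self, state: np.ndarray) -> None: ...

class ToyModel:
    """A trivial model wrapped for the core: its state is a plain vector."""
    def __init__(self) -> None:
        self._state = np.zeros(3)

    def get_state(self) -> np.ndarray:
        return self._state.copy()

    def set_state(self, state: np.ndarray) -> None:
        self._state = np.asarray(state, dtype=float).copy()

def nudge_toward_obs(model: ModelInterface, obs: np.ndarray, weight: float) -> None:
    """Fixed-core routine: works unchanged with ANY model exposing the interface."""
    state = model.get_state()
    model.set_state(state + weight * (obs - state))
```

The point of the design is that `nudge_toward_obs` never changes: adding a new model or observing system means writing only the thin wrapper, not re-tailoring the assimilation machinery.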

“One version of DART works for everything,” says testbed director Jeffrey Anderson. He and colleagues launched DART in 2002 while Anderson was a visiting scientist from NOAA specializing in software for ocean-atmosphere modeling. “I got interested in ensemble data assimilation while trying to do seasonal predictions. Ensembles were quite important, but nobody was quite sure how to get initial conditions for them.”

At the heart of the DART approach is an ensemble Kalman filter, which nudges the initial conditions of models toward a state more consistent with observations. Along with helping to bring data into the starting point of a forecast, DART techniques can also be used to reanalyze past states of the atmosphere and to estimate the value of a given observing technique.
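The "nudging" step described above can be sketched as a stochastic ensemble Kalman filter update for a single scalar observation. This is a generic textbook formulation under simplifying assumptions (a linear forward operator, one observation at a time); the function and variable names are illustrative, not DART's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_var, H):
    """Stochastic ensemble Kalman filter update for one scalar observation.

    ensemble : (n_members, n_state) array of model states
    obs      : observed value
    obs_var  : observation error variance
    H        : (n_state,) linear forward operator mapping state -> observed quantity
    """
    n_members = ensemble.shape[0]
    # Each member's estimate of the observed quantity
    hx = ensemble @ H                              # shape (n_members,)
    # Deviations from the ensemble means
    x_anom = ensemble - ensemble.mean(axis=0)
    hx_anom = hx - hx.mean()
    # Sample covariance between state and observation space, and obs-space variance
    pht = x_anom.T @ hx_anom / (n_members - 1)     # shape (n_state,)
    hph = hx_anom @ hx_anom / (n_members - 1)      # scalar
    gain = pht / (hph + obs_var)                   # Kalman gain, shape (n_state,)
    # Perturb the observation per member so posterior spread stays statistically correct
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_members)
    innovations = perturbed - hx                   # shape (n_members,)
    # Nudge every member toward a state more consistent with the observation
    return ensemble + np.outer(innovations, gain)
```

Run on an ensemble whose first variable has mean near zero, an accurate observation of 2.0 pulls the updated ensemble mean most of the way toward 2.0, while unobserved variables move only through their sampled covariance with the observed one.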

With the help of DART, researchers can more easily produce ensembles that show the range of possible atmospheric variations, such as this set of 20 depictions of mid-level flow from a six-hour forecast using the Community Atmospheric Model. (Visualization by Tim Hoar, NCAR/DART.)

Students often get their first taste of data assimilation using DART on simple models developed in the 1960s. NCAR collaborators have produced interfaces that allow DART to work with a regional air quality model (University of Chicago) and an ocean prediction model (Scripps Institution of Oceanography), among many others. DART’s team of modelers and software engineers is also reaching out beyond atmospheric science to other disciplines, including geology and economics. “Lots of people outside the traditional UCAR community are using the testbed,” Anderson says.

At the Naval Postgraduate School, Joshua Hacker used DART with a simplified version of WRF to identify systematic errors in the model’s portrayal of conditions near the surface. “Because DART can be easily tuned to give good results, several comparison experiments could be completed and analyzed quickly,” says Hacker.

Ryan Torn (University at Albany, State University of New York) has been exploring how best to initialize WRF hurricane forecasts. “All of my recent research is based on the DART system,” he says. “It allows me to focus my efforts on the basic science components of my research, rather than on maintaining and expanding code.”

National Center for Atmospheric Research | University Corporation for Atmospheric Research