Pittsburgh Supercomputing Center Again Supports Major NOAA and
University of Oklahoma Forecasting Effort

PITTSBURGH, July 2, 2008 — Once again this year during spring storm season, NOAA (the National Oceanic and Atmospheric Administration) mounted a major experiment in storm forecasting. As has been the case many times over the past 15 years, computational support came from the Pittsburgh
Supercomputing Center (PSC).

Photo: In the Hazardous Weather Testbed at the National Weather Center, University of Oklahoma, Norman, a group of researchers and forecasters review the daily forecast. Research meteorologist Jack Kain (left) of the National Severe Storms Laboratory guides discussion. PSC co-scientific director Ralph Roskies (right), visiting the HWT on May 30, looks on.

A goal of the experiment was to test and refine “ensemble”
forecasting, which involves running a forecast model multiple times
to assess the degree of uncertainty inherent in the forecast. Because
it requires running a model many times within a short period,
ensemble forecasting demands large amounts of computational power.
Answering that need was PSC’s Cray XT3, a 21-teraflop system on the
National Science Foundation TeraGrid, which each day provided more
than a hundred times the computing used by the most sophisticated
National Weather Service operational forecasts.
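The core idea is easy to sketch in code. The toy Python example below is only an illustration, not the CAPS forecasting system: every name and number in it is hypothetical. It perturbs the starting conditions of a small chaotic model (Lorenz-63, a standard stand-in for a weather model) ten times, runs each perturbed copy forward, and measures how much the ten resulting forecasts disagree.

    import numpy as np

    def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz-63 system, a toy
        stand-in for a chaotic forecast model."""
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return state + dt * np.array([dx, dy, dz])

    def run_member(initial_state, n_steps=2000):
        """Integrate a single ensemble member forward in time."""
        state = initial_state.copy()
        for _ in range(n_steps):
            state = lorenz63_step(state)
        return state

    rng = np.random.default_rng(seed=0)
    analysis = np.array([1.0, 1.0, 1.0])  # hypothetical "best guess" starting state

    # Ten members, each from a slightly perturbed starting state, loosely
    # analogous to the 10 slightly different model configurations described below.
    members = np.array(
        [run_member(analysis + rng.normal(scale=1e-3, size=3)) for _ in range(10)]
    )

    # The ensemble mean is the consensus forecast; the spread across
    # members is a measure of forecast uncertainty.
    print("ensemble mean:  ", members.mean(axis=0))
    print("ensemble spread:", members.std(axis=0))

Where the members agree, forecasters can place more confidence in the outcome; where they diverge, the uncertainty is high.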

The annual experiment, called the Spring Experiment, is the
cornerstone of the NOAA Hazardous Weather Testbed (HWT) located in
the National Weather Center at the University of Oklahoma, Norman
(OU). Operated jointly by NOAA’s Storm Prediction Center (SPC) and
the National Severe Storms Laboratory (NSSL), both at OU, the HWT
provides a one-of-a-kind environment where researchers and
forecasters collaborate while they evaluate the most advanced
technologies available for forecasting severe weather. This year’s
experiment ran for seven weeks (April 21 to June 6) and brought to
the HWT more than 60 research and student meteorologists and
forecasters from around the world, in groups of about a dozen each
week.

Every day during the seven weeks, forecasters at the Center for
Analysis and Prediction of Storms (CAPS) at OU transmitted weather
data to the XT3 at PSC. Running a 10-member ensemble, 10 slightly
different configurations of the forecast model, for nearly the entire
continental U.S. at four-kilometer resolution, the XT3 generated
forecasts for the next day and transmitted them to OU as they were
completed. In addition to the 10-member ensemble runs, CAPS also ran
a single higher-resolution (two-kilometer) forecast at PSC each day
for the same domain.
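One common way a 10-member ensemble is turned into forecast guidance is to compute, at each grid point, the fraction of members exceeding some severe-weather threshold. The minimal Python sketch below uses random numbers in place of real model output, and the grid size and threshold are purely illustrative:

    import numpy as np

    # Hypothetical stack of forecasts: 10 members on a small 2-D grid, each
    # value standing in for a model field such as simulated reflectivity (dBZ).
    rng = np.random.default_rng(seed=1)
    ensemble = rng.normal(loc=30.0, scale=15.0, size=(10, 120, 150))

    THRESHOLD = 40.0  # illustrative dBZ threshold for "strong storm"

    # Gridpoint probability: fraction of members exceeding the threshold.
    prob_exceed = (ensemble >= THRESHOLD).mean(axis=0)

    # Ensemble mean and spread, two other common derived products.
    mean_field = ensemble.mean(axis=0)
    spread_field = ensemble.std(axis=0)

    print("max exceedance probability:", prob_exceed.max())

Maps of such probabilities, rather than any single member's output, are what give forecasters a sense of severe-weather likelihood.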

“The finer resolution of this run better captures the structure of
thunderstorms,” says Ming Xue, director of CAPS. “Another significant
achievement of this year’s spring experiment,” he adds, “is the use
of observational data from over 120 weather radars to initialize
thunderstorms in the prediction model. This has never been done
before anywhere.”
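Conceptually, initializing storms from radar means blending observations into the model's starting state. The toy "nudging" step below is only a sketch of that idea under simplified assumptions; the analysis system CAPS actually used is far more sophisticated, and every name and number here is illustrative.

    import numpy as np

    def nudge_toward_observations(background, observations, obs_mask, weight=0.5):
        """Blend observed values into the model background wherever radar
        coverage exists (obs_mask True); elsewhere keep the background.
        A toy stand-in for a real data-assimilation analysis step."""
        analysis = background.copy()
        analysis[obs_mask] = ((1.0 - weight) * background[obs_mask]
                              + weight * observations[obs_mask])
        return analysis

    rng = np.random.default_rng(seed=2)
    background = rng.normal(loc=20.0, scale=5.0, size=(100, 100))      # model first guess
    observations = background + rng.normal(scale=2.0, size=(100, 100)) # "radar" data
    obs_mask = rng.random((100, 100)) < 0.6  # pretend 60% of the grid has coverage

    analysis = nudge_toward_observations(background, observations, obs_mask)
    print("mean adjustment:", np.abs(analysis - background).mean())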

Back at the HWT, the forecast results were translated into visual
displays of various forecast products, and that week’s group of
researchers and forecasters scrutinized a series of computer screens,
large and small, arrayed around the room as if it were a sports bar
with about a dozen weather reports happening at the same time. Split
into two small groups, they talked in turns, offering running
commentary like radio sportscasters as they interpreted the screen
displays of vividly colored swirling patterns overlaid on a U.S. map.

“The two-kilometer forecast is the first to go to a more linear mode
- with that embedded bow structure.” “Most of the initiation is
approximately right.” “There’s a moist tongue extending up the
central plains toward Davenport.” “The trough axis is slowly
progressing eastward.”

After a while they graded the forecasts, assigning a quality score on
a 1-to-10 scale. “We’re evaluating not just how well the forecast
corresponds to reality,” says Steven Weiss, Science and Operations
Officer of SPC, “but also we’re looking at what indicators they give
about storm characteristics and severe weather likelihood. These
attributes are important and a key reason for testing high-resolution
models.”

Weiss and Jack Kain, a research meteorologist at NSSL, coordinate the
group discussions. NOAA has found, they say, that this interaction
between research meteorologists and operational forecasters is unique
in the opportunity it provides for the two communities to learn from
each other. Forecasters see the latest research concepts and forecast
products, and researchers experience the constraints and challenges
of front-line forecasting.

This year’s experiment marked only the second time that ensemble
forecasts have been carried out in a simulated operational
forecasting environment at a spatial resolution fine enough to
represent storms directly. “CAPS and PSC are at the cutting-edge of
what we can do technologically,” says Kain, “and this is where the
rubber meets the road - where the practical value of the latest
technologies can be assessed.”

Since 1993 PSC has collaborated with CAPS in spring experiments and
since 2004 with NOAA. Steady advances in computational technology
have led to corresponding advances in the ability to predict
storm-scale weather. “The directors and staff of PSC have been
scientific collaborators in the deepest sense,” says Kelvin
Droegemeier, co-founder and director emeritus of CAPS and associate
vice-president for research at OU. “They work hand-in-hand, not just
to get our codes to run, but with networking and data-transfer, how
the code is structured on the machine.”

In 2005, using PSC’s LeMieux, the first terascale system available
via the TeraGrid, CAPS and NOAA learned that with sufficiently high
resolution it is possible, in some cases, to predict the details of
thunderstorms 24 hours in advance. That milestone in storm
forecasting suggests that weather at this scale is inherently more
predictable than previously thought.