
Computer Modeling

BOULDER, Colo. — The University Corporation for Atmospheric Research (UCAR) today announced a new collaboration with The Weather Company, an IBM Business, to improve global weather forecasting. The partnership brings together cutting-edge computer modeling developed at the National Center for Atmospheric Research (NCAR) with The Weather Company's meteorological science and IBM's advanced computing hardware.

"This is a major public-private partnership that will advance weather prediction and generate significant benefits for businesses making critical decisions based on weather forecasts," said UCAR President Antonio J. Busalacchi. "We are gratified that taxpayer investments in the development of weather models are now helping U.S. industries compete in the global marketplace."

UCAR, a nonprofit consortium of 110 universities focused on research and training in the atmospheric and related Earth system sciences, manages NCAR on behalf of the National Science Foundation.

Under the new agreement, The Weather Company will develop a global forecast model based on the Model for Prediction Across Scales (MPAS), an innovative software platform developed by NCAR and Los Alamos National Laboratory.

The Model for Prediction Across Scales (MPAS) enables forecasters to combine a global view of the atmosphere with a higher-resolution view of a particular region, such as North America. (©UCAR. This image is freely available for media & nonprofit use.)

MPAS offers a unique way of simulating the global atmosphere while giving users more flexibility when focusing on specific regions of interest. Unlike traditional three-dimensional models that calculate atmospheric conditions at multiple points within a block-shaped grid, it uses a hexagonal mesh resembling a honeycomb that can be stretched wide in some regions and compressed for higher resolution in others. This enables forecasters to simultaneously capture far-flung atmospheric conditions that can influence local weather, as well as small-scale features, such as vertical wind shear, that can affect thunderstorms and other severe weather.
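The variable-resolution idea is easy to picture in code. Below is a minimal Python sketch (an illustration, not MPAS code; the function name, refinement region, and spacing values are all invented for the example) of a cell-spacing function that refines smoothly over one area while staying coarse elsewhere. MPAS's actual meshes are generated as centroidal Voronoi tessellations driven by a density function of this kind.

```python
import numpy as np

def cell_spacing_km(lat, lon, coarse=60.0, fine=15.0,
                    center=(40.0, -105.0), radius_deg=25.0, width_deg=10.0):
    """Illustrative cell-spacing function for a variable-resolution mesh.

    Returns a target grid spacing (km) that transitions smoothly from a
    fine value inside a circular refinement region to a coarse value
    elsewhere. All numbers here are assumptions for the sketch.
    """
    # Approximate distance (in degrees) from the refinement center
    dist = np.hypot(lat - center[0], lon - center[1])
    # Smooth ramp from 0 (inside the region) to 1 (outside it)
    ramp = 0.5 * (1.0 + np.tanh((dist - radius_deg) / width_deg))
    return fine + (coarse - fine) * ramp

# Spacing is ~15 km near the refinement center, ~60 km over the remote Pacific
print(cell_spacing_km(39.7, -105.0))   # inside the refined region
print(cell_spacing_km(-10.0, -150.0))  # far away
```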
Drawing on the computational power of GPUs — graphics processing units — such as those being used in a powerful new generation of IBM supercomputers, and on the expertise of NCAR and The Weather Company, the new collaboration is designed to push the capabilities of MPAS to yield more accurate forecasts with longer lead times. The results of NCAR's work will be freely available to the meteorological community. Businesses, from airlines to retailers, as well as the general public, stand to benefit.

Mary Glackin, head of weather science and operations for The Weather Company, said, "As strong advocates for science, we embrace strong public-private collaborations that understand the value science brings to society, such as our continued efforts with UCAR to advance atmospheric and computational sciences."

"Thanks to research funded by the National Science Foundation and other federal agencies, society is on the cusp of a new era in weather prediction, with more precise short-range forecasts as well as longer-term forecasts of seasonal weather patterns," Busalacchi said. "These forecasts are important for public health and safety, as well as enabling companies to leverage economic opportunities in ways that were never possible before."

About The Weather Company

The Weather Company, an IBM Business, helps people make informed decisions and take action in the face of weather. The company offers weather data and insights to millions of consumers, as well as thousands of marketers and businesses, via Weather's API, its business solutions division, and its own digital products from The Weather Channel (weather.com) and Weather Underground (wunderground.com).

Annual precipitation over Colorado as modeled by the low-resolution, global Community Earth System Model (top) compared to the high-resolution, regional Weather Research and Forecasting model (below). (Images courtesy Ethan Gutmann, NCAR.)

February 13, 2017 | In global climate models, the hulking, jagged Rocky Mountains are often reduced to smooth, blurry bumps. It's a practical reality: these models, which depict the entire planet, typically need to be run at relatively low resolution because of constraints on supercomputing resources. But the result, a virtual morphing of peaks into hills, affects the ability of climate models to accurately project how precipitation in mountainous regions may change in the future — information that is critically important to water managers.

To address the problem, hydrologists have typically relied on two methods to "downscale" climate model data to make them more useful. The first, which uses statistical techniques, is fast and doesn't require a supercomputer, but it makes many unrealistic assumptions. The second, which uses a high-resolution weather model like the Weather Research and Forecasting model (WRF), is much more realistic but requires vast amounts of computing resources.

Now hydrologists at the National Center for Atmospheric Research (NCAR) are developing an in-between option: the Intermediate Complexity Atmospheric Research Model (ICAR), which gives researchers increased accuracy using only a tiny fraction of the computing resources.

"ICAR is about 80 percent as accurate as WRF in the mountainous areas we studied," said NCAR scientist Ethan Gutmann, who is leading the development of ICAR. "But it only uses 1 percent of the computing resources. I can run it on my laptop."

Drier mountains, wetter plains

How much precipitation falls in the mountains — and when — is vitally important for communities in the American West and elsewhere that rely on snowpack to act as a frozen reservoir of sorts. Water managers in these areas are extremely interested in how a changing climate might affect snowfall and temperature, and therefore snowpack, in these regions.

But because low-resolution global climate models are not able to accurately represent the complex topography of mountain ranges, they are ill suited to answering these questions.

For example, as air flows into Colorado from the west, the Rocky Mountains force that air to rise, cooling it and causing moisture to condense and fall to the ground as snow or rain. Once these air masses clear the mountains, they are drier than they otherwise would have been, so there is less moisture available to fall across Colorado's eastern plains.

Low-resolution climate models are not able to capture this mechanism — the lifting of air over the mountains — and so in Colorado, for example, they often simulate mountains that are drier than they should be and plains that are wetter. For a regional water manager, these small shifts could mean the difference between full reservoirs and water shortages.

"Climate models are useful for predicting large-scale circulation patterns around the whole globe, not for predicting precipitation in the mountains or in your backyard," Gutmann said.
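The windward-wet, leeward-dry mechanism Gutmann describes can be sketched in a few lines of code. The Python transect below is only a cartoon of that mechanism (the wind speed, moisture value, and ridge shape are invented for illustration); ICAR itself solves a more complete set of linearized equations.

```python
import numpy as np

# Toy transect: stable moist air crossing an idealized mountain ridge.
# Forced ascent on the windward slope condenses moisture; the descending
# lee side produces no condensation, leaving the plains downwind drier.

x = np.linspace(0, 400e3, 400)                   # west-east distance (m)
h = 3000.0 * np.exp(-((x - 200e3) / 50e3) ** 2)  # idealized ridge height (m)

u = 10.0   # assumed background westerly wind (m/s)
q = 8e-3   # assumed available moisture (kg water per kg air)

w = u * np.gradient(h, x)            # terrain-forced vertical motion (m/s)
condensation = np.maximum(w, 0) * q  # condense only where air rises

print("Peak terrain-forced uplift: %.2f m/s" % w.max())
print("Windward share of total condensation: %.2f"
      % (condensation[x < 200e3].sum() / condensation.sum()))
```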
Precipitation in millimeters over Colorado between Oct. 1 and May 1 as simulated by the Weather Research and Forecasting model (WRF), the Intermediate Complexity Atmospheric Research model (ICAR), and the observation-based Parameter-Elevation Regressions on Independent Slopes Model. (Images courtesy Ethan Gutmann.)

A modeling middle ground

A simple statistical fix for these known problems may include adjusting precipitation data to dry out areas known to be too wet and moisten areas known to be too dry. The problem is that these statistical downscaling adjustments don't capture the physical mechanisms responsible for the errors. This means that any impact of a warming climate on the mechanisms themselves would not be accurately portrayed using a statistical technique.

That's why using a model like WRF to dynamically downscale the climate data produces more reliable results — the model is actually solving the complex mathematical equations that describe the dynamics of the atmosphere. But all those incredibly detailed calculations also take an incredible amount of computing.

A few years ago, Gutmann began to wonder if there was a middle ground. Could he make a model that would solve the equations for just a small portion of the atmospheric dynamics that are important to hydrologists — in this case, the lifting of air masses over the mountains — but not others that are less relevant?

"I was studying statistical downscaling techniques, which are widely used in hydrology, and I thought, 'We should be able to do better than this,'" he said. "'We know what happens when you lift air up over a mountain range, so why don't we just do that?'"

Gutmann wrote the original code for the model that would become ICAR in just a few months, but he spent the next four years refining it, a process that's still ongoing.

100 times as fast

Last year, Gutmann and his colleagues — Martyn Clark and Roy Rasmussen, also of NCAR; Idar Barstad, of Uni Research Computing in Bergen, Norway; and Jeffrey Arnold, of the U.S. Army Corps of Engineers — published a study comparing simulations of Colorado created by ICAR and WRF against observations.

The authors found that ICAR and WRF results were generally in good agreement with the observations, especially in the mountains and during the winter. One of ICAR's weaknesses, however, is in simulating storms that build over the plains in the summertime. Unlike WRF, which actually allows storms to form and build in the model, ICAR estimates the number of storms likely to form, given the atmospheric conditions, a method called parameterization.

Even so, ICAR, which is freely available to anyone who wants to use it, is already being run by teams in Norway, Austria, France, Chile, and New Zealand.

"ICAR is not perfect; it's a simple model," Gutmann said. "But in the mountains, ICAR can get you 80 to 90 percent of the way there at 100 times the speed of WRF. And if you choose to simplify some of the physics in ICAR, you can get it close to 1,000 times faster."

About the article

Title: The Intermediate Complexity Atmospheric Research Model (ICAR)
Authors: Ethan Gutmann, Idar Barstad, Martyn Clark, Jeffrey Arnold, and Roy Rasmussen
Journal: Journal of Hydrometeorology, DOI: 10.1175/JHM-D-15-0155.1
Funders: U.S. Army Corps of Engineers; U.S. Bureau of Reclamation
Collaborators: Uni Research Computing in Norway; U.S. Army Corps of Engineers

Writer/contact: Laura Snider, Senior Science Writer

January 12, 2017 | An NCAR-based computer model known for global climate projections decades into the future recently joined a suite of other world-class models being used to forecast what may lie just a few months ahead.

The Community Earth System Model (CESM) has long been an invaluable tool for scientists investigating how the climate may change in the long term — decades or even centuries into the future. Last summer, CESM became the newest member of the North American Multi-Model Ensemble (NMME), an innovative effort that combines some techniques typically used in weather forecasting with those used in climate modeling to predict temperature and precipitation seasons in advance. The result is a bridge that helps span the gap between two-week forecasts and decades-long projections.

The forecasted temperature anomalies (departures from average) over North America made by the entire NMME suite (top) and by CESM (middle). Observed temperature anomalies for the same period (bottom). (Images courtesy NOAA.)

But NMME also builds another bridge: this one between operational forecasters, who issue the forecasts society depends on, and researchers. Now a collection of nine climate models, the NMME has proven it produces more accurate seasonal forecasts than any one model alone. It was adopted in May by the National Oceanic and Atmospheric Administration (NOAA) as one of the agency's official seasonal forecasting tools.

"What is so important about NMME is that it's bringing research to bear on operational forecasts," said Ben Kirtman, a professor of atmospheric sciences at the University of Miami who leads the NMME project. "The marriage between real-time prediction and research has fostered new understandings, identified new problems that we hadn't thought about before, and really opened up new lines of research."

A new way to start a climate model run

Weather models and climate models have a lot in common; for one, they both use mathematical equations to represent the physical processes going on in the atmosphere. Weather models, which are concerned with what's likely to happen in the immediate future, depend on being fed accurate initial conditions to produce good forecasts. Even if a weather model could perfectly mimic how the atmosphere works, it would need to know what the atmosphere actually looks like now — the temperature and pressure at points across the country, for example — to determine what the atmosphere will look like tomorrow.

Climate modelers, on the other hand, are often interested in broad changes over many decades, so the exact weather conditions at the beginning of a simulation are usually not as important. In fact, their impact is quickly drowned out by larger-scale trends that unfold over long time periods.

In recent years, however, scientists have become interested in whether climate models — which simulate changes in ocean circulation patterns, sea surface temperatures, and other large-scale phenomena that have lingering impacts on weather patterns — could be initialized with accurate starting conditions and then used to make skillful seasonal forecasts.

The NMME project is exploring this question. The global climate models that make up the NMME project are all being initialized monthly to create multiple forecasts that stretch a year in advance. Along with CESM, those models include the NCAR-based Community Climate System Model, Version 4, which is being initialized by Kirtman's team at the University of Miami.
(See a full list of models below.)

Taken together, the individual model forecasts reveal information to forecasters about the amount of uncertainty in the seasonal forecast. If individual forecasts vary substantially, the future is less certain. If they agree, forecasters can have more confidence.

The forecasted precipitation anomalies (departures from average) over North America made by the entire NMME suite (top) and by CESM (middle). Observed precipitation anomalies for the same period (bottom). (Images courtesy NOAA.)

A valuable collection of data

CESM's first seasonal forecast as part of NMME, which was issued for July, August, and September 2016, was perhaps the most accurate of any in the ensemble. The forecast — which called for conditions to be warmer and drier than average across most of the United States — was issued after more than a year of work by NCAR scientists Joseph Tribbia and Julie Caron.

All of the models in the NMME suite must be calibrated by running "hindcasts." By comparing the model's prediction of a historical season with what actually happened, the scientists can identify whether the model is consistently off in some areas. For example, the model might generally predict that seasons will be wetter or cooler than they actually are for certain regions of the country. These tendencies can then be statistically corrected in future forecasts (see the sketch below).

"We ran 10 predictions every month for a 33-year period and ran each prediction out for one year," Tribbia said. "You can learn a lot about how your model performs when you have so many runs."

Once CESM was calibrated, it joined the NMME operational suite of models. But the data generated by the rigorous hindcasting process weren't cast aside once the calibration was finished. Instead, every modeling group has saved not only monthly data but also high-frequency daily data that are being stored at NCAR.

The trove of historical predictions, along with the new predictions being generated in real time, is an incredible resource for scientists interested in improving the techniques for initializing climate models and exploring what types of things can, and cannot, be predicted in advance.

"Predictability research can be a challenge. The NMME dataset allows you to check yourself in a robust way," Kirtman said. "If you think you've found a source of predictability in the hindcast mode, you can then try to do it in real time. It's really exciting — and it really holds your feet to the fire."

This year, as much as 18.5 terabytes of NMME data were downloaded from NCAR monthly, according to NCAR's Eric Nienhouse, who oversees the data archive.

Now that CESM is an active part of NMME, Tribbia and Caron will also be diving into the data.

"Now the fun begins," Caron said. "We get to start looking at the data to see how we're doing, and what we might change in the future to make our seasonal forecasts better."
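The hindcast calibration step described above can be sketched in a few lines of Python. Synthetic data stand in for real hindcasts here; the array shapes and the simple lead-dependent mean-bias correction are illustrative assumptions, not NMME's operational procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_leads = 33, 12   # a 33-year hindcast, 12 lead months (as in the article)

# Synthetic stand-ins: a model that runs systematically warm versus observations
hindcasts = rng.normal(1.0, 0.5, (n_years, n_leads))
observations = rng.normal(0.0, 0.5, (n_years, n_leads))

# Estimate the systematic error at each lead time over the hindcast period
bias = hindcasts.mean(axis=0) - observations.mean(axis=0)

def calibrate(forecast):
    """Remove the lead-dependent systematic error from a new forecast."""
    return forecast - bias

new_forecast = rng.normal(1.0, 0.5, n_leads)
print("Raw forecast, lead 1:        %.2f" % new_forecast[0])
print("Calibrated forecast, lead 1: %.2f" % calibrate(new_forecast)[0])
```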
"We get to start looking at the data to see how we're doing, and what we might change in the future to make our seasonal forecasts better."Models that make up NMME:NCEP CFSv2: National Centers for Environmental Prediction Climate Forecast System Version 2 (NOAA)CMC1 CanCM3: Canadian Meteorological Centre/Canadian Centre for Climate Modeling and AnalysisCMC2 CanCM4: Canadian Meteorological Centre/Canadian Centre for Climate Modeling and AnalysisGFDL FLOR: Geophysical Fluid Dynamics Laboratory Forecast-oriented Low Ocean Resolution (NOAA)GFDL CM2.1: Geophysical Fluid Dynamics Laboratory Coupled Climate Model Version 2.1 (NOAA)NCAR CCSM4: National Center for Atmospheric Research Community Climate System Model Version 4NASA GEOS5: NASA Goddard Earth Observing System Model Version 5NCAR CESM: National Center for Atmospheric Research Community Earth System ModelIMME: National Centers for Environmental Prediction International Multi-Model Ensemble (NOAA)Writer/contact:Laura Snider, Senior Science Writer

Nov. 1, 2016 | Last fall, Hurricane Patricia exploded from a Category 1 to a record-breaking Category 5 storm in just 24 hours.

Patricia's rapid intensification off the coast of Mexico blindsided forecasters, whose models vastly underestimated how strong the hurricane would become. Patricia — and more recently Hurricane Matthew, which also jumped from Category 1 to Category 5 in less than a day — highlight a weakness in predictive capabilities: while we've made great strides in forecasting a hurricane's track, forecasting its intensity remains a challenge.

New research using a sophisticated weather model based at the National Center for Atmospheric Research (NCAR) offers some clues about how these forecasts can be improved.

The scientists — Ryder Fox, an undergraduate researcher at the New Mexico Institute of Mining and Technology, and Falko Judt, an NCAR postdoctoral researcher — found that an advanced version of the Weather Research and Forecasting model (WRF-ARW) could accurately forecast Hurricane Patricia's rapid intensification when run at a high enough resolution.

"Because Patricia was so out of bounds — the hurricane broke records for high wind speed and low pressure — we didn't think our model would actually be able to capture its peak intensity," Judt said. "The fact that the model nailed it took us by surprise."

Hurricane Patricia approaches the west coast of Mexico on Oct. 23, 2015. (Image courtesy NASA.)

Judt and Fox think the model's resolution was one important key to its success. The scientists ran WRF-ARW at a 1-kilometer (0.6-mile) resolution on the Yellowstone system at the NCAR-Wyoming Supercomputing Center. The models being used to actually forecast Patricia at the time had resolutions between 3 and 15 kilometers.

"Going to 1-kilometer resolution may be especially important for very strong storms, because they tend to have an eyewall that's really small," Judt said. "Patricia's eye was just 13 kilometers across at its most intense."

Still, the researchers caution that more simulations are needed to be sure the model's ability to capture Hurricane Patricia's intensity wasn't a fluke.

"We're not sure yet that, if we ran the same model for Hurricane Matthew, we would forecast that storm correctly," Judt said. "There are so many things that can go wrong with hurricane forecasting."

To address this uncertainty, Judt and Fox have begun running the model additional times, each with slightly tweaked starting conditions. The preliminary results show that while each model run is distinct, each one also captures the rapid intensification of the storm. This relative harmony among the ensemble of model runs suggests that WRF-ARW does a good job of reproducing the storm-friendly environmental conditions in which Patricia formed.

"The set-up that nature created may have allowed for a storm to intensify no matter what," Judt said. "The sea surface was downright hot, the air was really moist, and the wind shear, at times, was virtually zero. It was a very ripe environment."
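The logic of that ensemble test can be illustrated with a toy model. The Python sketch below perturbs a storm's starting intensity and checks whether every member still intensifies rapidly; the logistic growth equation is a textbook idealization of intensification, not the WRF-ARW physics, and every number is an assumption for the sketch.

```python
import numpy as np

def intensify(v0, v_max=95.0, kappa=0.9, hours=48, dt=1.0):
    """March a toy intensification equation dV/dt = kappa*V*(1 - V/Vmax)
    forward in time; kappa is a growth rate per day, V a wind speed (m/s)."""
    v = v0
    track = [v]
    for _ in range(int(hours / dt)):
        v += dt * (kappa / 24.0) * v * (1.0 - v / v_max)
        track.append(v)
    return np.array(track)

# Ensemble: same toy model, slightly tweaked starting intensities
rng = np.random.default_rng(1)
members = [intensify(35.0 + rng.normal(0, 2.0)) for _ in range(10)]

# If every member shows a similar 24-hour gain, the rapid intensification
# is robust to the starting conditions, the kind of agreement described above
gains_24h = [m[24] - m[0] for m in members]
print("24-h intensity gain, all members (m/s):", np.round(gains_24h, 1))
```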
Fox began working with Judt through SOARS, the Significant Opportunities in Atmospheric Research program, which pairs young researchers with NCAR mentors. An undergraduate-to-graduate bridge program, SOARS is designed to broaden participation in the atmospheric and related sciences.

"The SOARS program means everything — not just to my ability to do this type of research, but also to grow as a scientist and to find my place within the scientific community," said Fox, who published the research results as an article in Physics Today.

Fox hopes the research on accurate modeling of Hurricane Patricia may lead to improved early warning systems that could help prevent loss of life.

"My personal passion regarding severe weather research lies in improved early warning systems," Fox said, "which optimally lead to lower death counts."

Oct. 6, 2016 | As Hurricane Matthew churns toward the southeastern U.S. coast, scientists at the National Center for Atmospheric Research (NCAR) are testing an advanced research computer model to see how well it can predict the powerful storm's track and intensity.

The Model for Prediction Across Scales (MPAS) uses an innovative software approach that allows scientists to focus on regional conditions while still capturing far-flung atmospheric processes that can influence the storm in question. This is a contrast to the forecast models typically used to track hurricanes today, which cannot simultaneously capture both global and local atmospheric processes.

The experimental MPAS model simulates Hurricane Matthew hitting the Southeast. To see a range of model output, visit the MPAS tropical cyclone website.

MPAS is able to do both because it uses a flexible mesh that allows it to zoom into higher resolution in some areas — over hurricane breeding grounds, for example — while zooming out over the rest of Earth. This ability to vary resolution across the globe requires a small fraction of the computer power needed to have high resolution everywhere.

By testing MPAS during hurricane season, the research team can determine the adjustments that need to be made to the model while gaining insights into how to improve hurricane forecasting in the future.

"This is an experimental effort," said Chris Davis, a senior scientist and director of NCAR's Mesoscale and Microscale Meteorology Laboratory. "We're doing this to see if we can find systematic biases in the model so we can improve simulations of the tropics in general and hurricanes in particular."

Davis and the other members of the research team, including NCAR scientists David Ahijevych, Sang-Hun Park, Bill Skamarock, and Wei Wang, are running MPAS once a day on NCAR's Yellowstone supercomputer, inputting various ocean and atmospheric conditions to see how it performs. The work is supported by the National Science Foundation and the Korea Institute of Science and Technology Information.

Even though they are just tests, Davis said the MPAS simulations are often comparable with official forecast models such as those run by the National Hurricane Center and the European Centre for Medium-Range Weather Forecasts. As Matthew was in its early stages, in fact, MPAS did a better job than other models in simulating the northward movement of the storm from the Caribbean Sea toward the Florida coast.

The scientists will analyze how MPAS performed and share the results with colleagues in the meteorological community. It's a step in an ongoing research effort to better predict the formation and behavior of hurricanes.

"We run the model even when the tropics are quiet, but an event like Matthew gives us a special opportunity to see what contributes to errors in tropical cyclone prediction," Davis said. "While a major hurricane can have catastrophic impacts, we hope to learn from it and make computer models even better in the future."

Funders: National Science Foundation; Korea Institute of Science and Technology Information

Writer/contact: David Hosansky, Manager of Media Relations

BOULDER, Colo. — As the National Oceanic and Atmospheric Administration (NOAA) this month launches a comprehensive system for forecasting water resources in the United States, it is turning to technology developed by the National Center for Atmospheric Research (NCAR) and its university and agency collaborators.

WRF-Hydro, a powerful NCAR-based computer model, is the first nationwide operational system to provide continuous predictions of water levels and potential flooding in rivers and streams from coast to coast. NOAA's new Office of Water Prediction selected it last year as the core of the agency's new National Water Model.

"WRF-Hydro gives us a continuous picture of all of the waterways in the contiguous United States," said NCAR scientist David Gochis, who helped lead its development. "By generating detailed forecast guidance that is hours to weeks ahead, it will help officials make more informed decisions about reservoir levels and river navigation, as well as alerting them to dangerous events like flash floods."

WRF-Hydro (WRF stands for Weather Research and Forecasting) is part of a major Office of Water Prediction initiative to bolster U.S. capabilities in predicting and managing water resources. By teaming with NCAR and the research community, NOAA's National Water Center is developing a new national water intelligence capability, enabling better impacts-based forecasts for management and decision making.

The new WRF-Hydro computer model simulates streams and other aspects of the hydrologic system in far more detail than previously possible. (Image by NOAA Office of Water Prediction.)

Unlike past streamflow models, which provided forecasts every few hours and only for specific points along major river systems, WRF-Hydro provides continuous forecasts for millions of points along rivers, streams, and their tributaries across the contiguous United States. To accomplish this, it simulates the entire hydrologic system — including snowpack, soil moisture, local ponded water, and evapotranspiration — and rapidly generates output on some of the nation's most powerful supercomputers.

WRF-Hydro was developed in collaboration with NOAA and university and agency scientists through the Consortium of Universities for the Advancement of Hydrologic Science, the U.S. Geological Survey, the Israel Hydrologic Service, and Baron Advanced Meteorological Services. Funding came from NOAA, NASA, and the National Science Foundation, which is NCAR's sponsor.

"WRF-Hydro is a perfect example of the transition from research to operations," said Antonio (Tony) J. Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR on behalf of the National Science Foundation (NSF). "It builds on the NSF investment in basic research in partnership with other agencies, helps to accelerate collaboration with the larger research community, and culminates in support of a mission agency such as NOAA. The use of WRF-Hydro in an operational setting will also allow for feedback from operations to research. In the end this is a win-win situation for all parties involved, chief among them the U.S. taxpayers."

"Through our partnership with NCAR and the academic and federal water community, we are bringing the state of the science in water forecasting and prediction to bear operationally," said Thomas Graziano, director of NOAA's new Office of Water Prediction at the National Weather Service.

Filling in the water picture

The continental United States has a vast network of rivers and streams, from major navigable waterways such as the Mississippi and Columbia to remote mountain brooks flowing from the high Adirondacks into the Hudson River. The levels and flow rates of these watercourses have far-reaching implications for water availability, water quality, and public safety.

Until now, however, it has not been possible to predict conditions at all points in the nation's waterways. Instead, computer models have produced a limited picture by incorporating observations from about 4,000 gauges, generally on the country's bigger rivers. Smaller streams and channels are largely left out of these forecast models, and stretches of major rivers for tens of miles are often not predicted — meaning that schools, bridges, and even entire towns can be vulnerable to unexpected changes in river levels.

To fill in the picture, NCAR scientists have worked for the past several years with their colleagues within NOAA, other federal agencies, and universities to combine a range of atmospheric, hydrologic, and soil data into a single forecasting system.

The resulting National Water Model, based on WRF-Hydro, simulates current and future conditions on rivers and streams at points two miles apart across the contiguous United States. Along with an hourly analysis of current hydrologic conditions, the National Water Model generates three predictions: an hourly 0- to 15-hour short-range forecast, a daily 0- to 10-day medium-range forecast, and a daily 0- to 30-day long-range water resource forecast.

The National Water Model predictions using WRF-Hydro offer a wide array of benefits for society. They will help local, state, and federal officials better manage reservoirs, improve navigation along major rivers, plan for droughts, anticipate water quality problems caused by lower flows, and monitor ecosystems for issues such as whether conditions are favorable for fish spawning. By providing a national view, the model will also help the Federal Emergency Management Agency deploy resources more effectively in cases of simultaneous emergencies, such as a hurricane in the Gulf Coast and flooding in California.

"We've never had such a comprehensive system before," Gochis said. "In some ways, the value of this is a blank page yet to be written."

A broad spectrum of observations

WRF-Hydro is a powerful forecasting system that incorporates advanced meteorological and streamflow observations, including data from nearly 8,000 U.S. Geological Survey streamflow gauges across the country.
Using advanced mathematical techniques, the model then simulates current and future conditions for millions of points on every significant river, stream, tributary, and catchment in the United States.

In time, scientists will add additional observations to the model, including snowpack conditions, lake and reservoir levels, subsurface flows, soil moisture, and land-atmosphere interactions such as evapotranspiration, the process by which water in soil, plants, and other land surfaces evaporates into the atmosphere.

Over the last year, scientists have demonstrated the accuracy of WRF-Hydro by comparing its simulations to observations of streamflow, snowpack, and other variables. They will continue to assess and expand the system as the National Water Model begins operational forecasts.

NCAR scientists maintain and update the open-source code of WRF-Hydro, which is available to the academic community and others. WRF-Hydro is widely used by researchers, both to better understand water resources and floods in the United States and in other countries such as Norway, Germany, Romania, Turkey, and Israel, and to project the possible impacts of climate change.

"At any point in time, forecasts from the new National Water Model have the potential to impact 300 million people," Gochis said. "What NOAA and its collaborator community are doing is trying to usher in a new era of bringing better physics and better data into forecast models for improving situational awareness and hydrologic decision making."

Collaborators: Baron Advanced Meteorological Services; Consortium of Universities for the Advancement of Hydrologic Science; Israel Hydrologic Service; National Center for Atmospheric Research; National Oceanic and Atmospheric Administration; U.S. Geological Survey

Funders: National Science Foundation; National Aeronautics and Space Administration; National Oceanic and Atmospheric Administration

A new book breaks down climate models into easy-to-understand concepts. (Photo courtesy Springer.)
June 21, 2016 | Climate scientists tell us it's going to get hotter. How much it rains and where it rains is likely to shift. Sea level rise is apt to accelerate. Oceans are on their way to becoming more acidic and less oxygenated. Floods, droughts, storms, and other extreme weather events are projected to change in frequency or intensity.
But how do they know what they know?
For climate scientists, numerical models are the tools of the trade. But for the layperson — and even for scientists in other fields — climate models can seem mysterious. What does "numerical" even mean? Do climate models take other things besides the atmosphere into account? How do scientists know if a model is any good?*
Two experts in climate modeling, Andrew Gettelman of the National Center for Atmospheric Research and Richard Rood of the University of Michigan, have your answers and more, free of charge. In a new open-access book, "Demystifying Climate Models," the pair lay out the fundamentals. In 282 pages, the scientists explain the basics of climate science, how that science is translated into a climate model, and what those models can tell us (as well as what they can't) — all without using a single equation.
*Find the answers on pages 8, 13, and 161, respectively, of the book.
AtmosNews sat down with Gettelman to learn more about the book, which anyone can download at http://www.demystifyingclimate.org.
NCAR scientist Andrew Gettelman has written a new book on climate modeling with Richard Rood of the University of Michigan. (Courtesy photo. This image is freely available for media & nonprofit use.)
What was the motivation to write this book?
There isn't really another book that sets out the philosophy and structure of models. There are textbooks, but inside you'll find a lot of physics and chemistry: information about momentum equations, turbulent fluxes — which is useful if you want to build your own model.
And then there are books on climate change for the layperson, and they devote maybe a paragraph to climate modeling. There's not much in the middle.
This book provides an introduction for the beginning grad student, or someone in another field who is interested in using model output, or anyone who is just curious how climate works and how we simulate it.
What are some of the biggest misperceptions about climate models that you hear?
One is that people say climate models are based on uncertain science. But that's not true at all. If we didn't know the science, my cellphone wouldn't work. Radios wouldn't work. GPS wouldn't work.
That's because the energy that warms the Earth, which radiates from the Sun, and is absorbed and re-emitted by Earth's surface — and also by greenhouse gases in the atmosphere — is part of the same spectrum of radiation that makes up radio waves. If we didn't understand electromagnetic waves, we couldn't have created the technology we rely on today. The same is true for the science that underlies other aspects of climate models.
(Learn more on page 38 of the book.)
But we don't understand everything, right?
We have understood the basic physics for hundreds of years. The last piece of it, the discovery that carbon dioxide warms the atmosphere, was put in place in the late 19th, early 20th century. Everything else — the laws of motion, the laws of thermodynamics — was all worked out between the 17th and 19th centuries. (Learn more on page 39 of the book.)
We do still have uncertainty in our modeling systems. A big part of this book is about how scientists understand that uncertainty and actually embrace it as part of their work. If you know what you don't know and why, you can use that to better understand the whole climate system.
Can we ever eliminate the uncertainty?
Not entirely. In our book, we break down uncertainty into three categories: model uncertainty (How good are the models at reflecting how the Earth really works?), initial condition uncertainty (How well do we understand what the Earth system looks like right now?), and scenario uncertainty (What will future emissions look like?).
To better understand, it might help to think about the uncertainty that would be involved if you had a computer model that could simulate making a pizza. Instead of trying to figure out what Earth's climate would look like in 50 or 100 years, this model would predict what your pizza would look like when it was done.
The first thing you want to know is how well the model reflects the reality of how a pizza is made. For example, does the model take into account all the ingredients you need to make the pizza, and how they will each evolve? The cheese melts, the dough rises, and the pepperoni shrinks. How well can the model approximate each of those processes? This is model uncertainty.
The second thing you'd want to know is if you can input all the pizza's "initial conditions" into the model. Some initial conditions — like how many pepperoni slices are on the pizza and where — are easy to observe, but others are not.
For example, kneading the pizza dough creates small pockets of air, but you don’t know exactly where they are. When the dough is heated, the air expands and forms big bubbles in the crust. If you can't tell the model where the air pockets are, it can't accurately predict where the crust bubbles will form when the pizza is baked.
The same is true for a climate model. Some parts of the Earth, like the deep oceans and the polar regions, are not easy to observe with enough detail, leaving scientists to estimate what the conditions there are like and leading to the second type of uncertainty in the model results.
Finally, the pizza-baking model also has to deal with "scenario uncertainty," because it doesn't know how long the person baking the pizza will keep it in the oven, or at what temperature. Without understanding the choices the human will make, the model can't say for sure if the dough will be soft, crispy, or burnt.
With climate models, over long periods of time, like a century, we've found that this scenario uncertainty is actually the dominant one. In other words, we don't know how much carbon dioxide humans around the world are going to emit in the years and decades to come, and it turns out that that's what matters most.
(Learn more about uncertainty on page 10 of the book.)
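Those three categories can be made concrete with a toy model. The Python sketch below runs a textbook zero-dimensional energy-balance equation and varies one ingredient at a time; all numbers are illustrative assumptions, not values from the book.

```python
import numpy as np

def run(lam=0.8, t0=0.0, forcing_rate=0.05, years=100, C=8.0):
    """Warming T (K) after `years` of a linearly growing forcing.

    A standard energy-balance idealization: dT/dt = (F - T/lam) / C,
    with lam a climate sensitivity parameter (K per W/m^2), C a heat
    capacity, and F a forcing ramp (W/m^2 per year * year). All values
    are assumptions chosen for the sketch.
    """
    T = t0
    for year in range(years):
        F = forcing_rate * year
        T += (F - T / lam) / C
    return T

model_unc    = abs(run(lam=1.0) - run(lam=0.6))              # feedbacks differ
initial_unc  = abs(run(t0=0.2) - run(t0=-0.2))               # today's state differs
scenario_unc = abs(run(forcing_rate=0.08) - run(forcing_rate=0.02))  # emissions differ

print("Model uncertainty:             %.2f K" % model_unc)
print("Initial-condition uncertainty: %.2f K" % initial_unc)
print("Scenario uncertainty:          %.2f K" % scenario_unc)
```

In this toy, the initial-condition spread decays away within a few decades while the scenario spread dominates at a century, echoing the point Gettelman makes above.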
Any other misperceptions you frequently hear?
People always say, "If we can't predict the weather next week, how can we know what the climate will be like in 50 years?"
Generally speaking, we can't perfectly predict the weather because we don't have a full understanding of all the current conditions. We don't have observations for every grid point on a weather model or for large parts of the ocean, for example.
But climate is not concerned about the exact weather on a particular day 50 or 100 years from now. Climate is the statistical distribution of weather, not a particular point on that distribution. Climate prediction is focused on the statistics of this distribution, and that is governed by conservation of energy and mass on long time scales, something we do understand. (Learn more on page 6 of the book. Read more common misperceptions at http://www.demystifyingclimate.org/misperceptions.)
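A toy chaotic system shows the same distinction in miniature. In the Python sketch below, two runs of the classic Lorenz (1963) equations start a billionth of a unit apart: the "weather" (the state at the final step) ends up completely different, but the "climate" (the long-run mean and spread) is essentially unchanged. The step size and run length are arbitrary choices for illustration.

```python
import numpy as np

def lorenz_run(x0, steps=200_000, dt=0.001, sigma=10.0, rho=28.0, beta=8/3):
    """Integrate the Lorenz (1963) system with simple Euler steps and
    return the trajectory of the x variable."""
    x, y, z = x0
    xs = np.empty(steps)
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

a = lorenz_run((1.0, 1.0, 1.0))
b = lorenz_run((1.0 + 1e-9, 1.0, 1.0))   # a billionth of a unit apart

print("State at the final step: %7.3f vs %7.3f" % (a[-1], b[-1]))   # "weather"
print("Long-run mean of x:      %7.3f vs %7.3f" % (a.mean(), b.mean()))  # "climate"
print("Long-run std of x:       %7.3f vs %7.3f" % (a.std(), b.std()))
```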
Did you learn anything about climate modeling while working on the book?
My background is the atmosphere. I sat down and wrote the whole section on the atmosphere in practically one sitting. But I had to learn about the other aspects of models, the ocean and the land, which work really differently. The atmosphere has only one boundary, a bottom boundary. We just have to worry about how it interacts with mountains and other bumps on the surface.
But the ocean has three hard boundaries: the bottom and the sides, like a giant rough bathtub. It also has a boundary with the atmosphere on the top. Those boundaries really change how the ocean moves. And the land is completely different because it doesn't move at all. Writing this book really gave me a new appreciation for some of the subtleties of other parts of the Earth System and the ways my colleagues model them.
(Learn more on page 13 of the book.)
What was the most fun part of writing the book for you?
I think having to force myself to think in terms of analogies that are understandable to a variety of people. I can describe a model using a whole bunch of words most people don't use every day, like "flux." It was a fun challenge to come up with words that would accurately describe the models and the science but that were accessible to everyone.