The latest news and information about NOAA research in and around the Great Lakes

As the winter months begin, they bring the phenomenon that every Great Lakes resident knows (and either loves or hates): lake effect snow.

Lake effect snow happens while the Great Lakes are still unfrozen and relatively warm. Cold air (usually from Canada) sweeps over them, picking up moisture and warmth on the way. It then drops the moisture as snow downwind.

This might sound a little confusing — how could “warm” lake water result in snow? Well, the warmth from the lakes is only enough to get the moisture from the surface of the lake into the atmosphere. Once the moisture is up in that mass of cold air, it turns into snow and then falls — usually about the time it’s passing over whatever freeway you use for your afternoon commute.

Check out this map of the United States: you’ll notice that things generally get snowier the farther north you go, but that the really dramatic snowfall (average of more than 8 feet annually) occurs in two places: high mountain elevations and land adjacent to the Great Lakes. Many areas in the Great Lakes basin average at least 4 feet annually. You can thank lake effect snow for this.

Map showing average annual snowfall in the contiguous United States.

Notice that the western shore of Lake Michigan and Southeast Michigan/Northwest Ohio are somewhat spared — wind patterns have a lot to do with this. It is not enough to just be next to a Great Lake — the wind has to be blowing your way.

The GIF below shows lake effect snow in action. The black arrows represent wind speed (length) and direction, and the color scale shows snow accumulation. This is model output re-creating a lake effect snow event from December of 2016.

Model output for a lake effect snow event back in December of 2016. Arrows show wind speed and direction and color scale shows snow accumulation.

Researchers at NOAA’s Great Lakes Environmental Research Laboratory, along with partners from the Cooperative Institute for Great Lakes Research (CIGLR), are working on a model that can indirectly improve predictions of lake effect snow by predicting the expected latent heat flux from the lakes. “Latent heat flux” is fancy terminology for that warmth, and associated water, moving from the lakes to the air that we talked about earlier. So, if the latent heat flux is predicted to be high (lots of moisture and warmth being transferred from the lake to the atmosphere), there’s a greater chance of a lake effect snow event.
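For the curious, here is a rough sketch of the textbook “bulk aerodynamic” formula often used to estimate latent heat flux over water. The numbers and coefficients below are generic illustrative values, not the actual parameterization in GLERL’s model:

```python
# Rough sketch of a bulk-aerodynamic latent heat flux estimate.
# This is the textbook formulation with generic constants, not
# necessarily the exact parameterization GLERL's model uses.
def latent_heat_flux(wind_speed, q_surface, q_air,
                     rho_air=1.2,      # air density, kg/m^3
                     L_v=2.5e6,        # latent heat of vaporization, J/kg
                     C_E=1.3e-3):      # bulk transfer coefficient (dimensionless)
    """Latent heat flux (W/m^2): positive = lake losing moisture and heat to the air."""
    return rho_air * L_v * C_E * wind_speed * (q_surface - q_air)

# Cold, dry air over a relatively warm lake: the humidity just above the
# lake surface far exceeds the humidity of the incoming air, so the flux
# (and lake effect snow potential) is large.
flux = latent_heat_flux(wind_speed=10.0, q_surface=0.006, q_air=0.001)
```

The key intuition matches the post: the bigger the humidity gap between the warm lake surface and the cold air (and the stronger the wind), the more moisture and heat move into the atmosphere.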

You can see that model output here, although for now it’s a “nowcast” — a re-creation of the last 5 days. However, forecast data (and data for more lakes) is on the way!


Understanding the duration, extent, and movement of Great Lakes ice is important for the Great Lakes maritime industry, public safety, and the recreational economy. Lake Erie is ice-prone, with maximum cover surpassing 80% in many winters.

Multiple times a day throughout winter, GLERL’s 3D ice model predicts ice thickness and concentration on the surface of Lake Erie. The output is available to the public, but the model is under development, meaning that modelers still have research to do to get it to better reflect reality.

As our scientists make adjustments to the model, they need to compare its output with actual conditions so they know that it’s getting more accurate. So, on January 13th of this year, they sent a plane with a photographer to fly the edge of the lake and take photos of the ice.

The map below shows the ice model output for that day, along with the plane’s flight path and the location of the 172 aerial photos that were captured.

These photos provide a detailed look at the sometimes complex ice formations on the lake, and let our scientists know if there are places where the model is falling short.

The model output can often also be compared to images and surface temperature measurements taken from satellites. That information feeds into the GLSEA product on our website (which is separate from the ice model), and GLSEA is useful for checking the ice model. However, satellite data isn’t always available, which is why this extra information from aerial photos is important.

“These photographs not only enable us to visualize the ice field when satellite data is not available, but also allow us to recognize the spatial scale or limit below which the model has difficulty in simulating the ice structures,” says Eric Anderson, an oceanographer at GLERL and one of the modelers.

“This is particularly evident near the Canadian coastline just east of the Detroit River mouth, where shoreline ice and detached ice floes just beyond the shoreline are not captured by the model. These floes are not only often at a smaller spatial scale than the model grid, but also the fine scale mechanical processes that affect ice concentration and thickness in this region are not accurately represented by the model physics.”

Click through the images below to see how select photos compared to the model output. To see all 172 photos, check out our album on Flickr. The photos were taken by Zachary Haslick of Aerial Associates.

Since 1990, GLERL scientists have been measuring temperature in the middle of southern Lake Michigan (at approximately 42.68°N, 87.07°W). They’ve been using a vertical chain of instruments that measure temperature from top to bottom. This is one of the longest vertical temperature records in existence anywhere in the Great Lakes, and it reveals some interesting patterns about lake temperature and the seasons. We’ve created a static infographic as well as an interactive chart that allows you to zoom in on the data and get individual measurement values.

The research goes on around the clock. Scientists work in shifts, taking turns sleeping and sampling. The Laurentian spends a full 24 hours at each monitoring station, sampling vertical slices of the water column. Sampling at these same stations has been going on since 2010, providing a long-term dataset that is essential for studying the impact of things like climate change and the establishment of invasive species.

Sampling focuses on planktonic (floating) organisms such as bacteria, phytoplankton (tiny plants), zooplankton (tiny animals), and larval fishes, which feed on zooplankton. Many of the zooplankton migrate down into deep, dark, cold layers of the water column during the day to escape predators such as fish and other zooplankton. They return unseen to warm surface waters at night to feed on abundant phytoplankton. Knowing where everything is and who eats whom is important for understanding the system.

Our researchers use different sampling tools to study life at different scales. For example, our MOCNESS (Multiple Opening Closing Net Environmental Sampling System) is pretty good at catching larger organisms like larval fish, Mysis (opossum shrimp), and the like. The MOCNESS has a strobe flash system that stuns the organisms, making it easier to bring them into its multiple nets.

The PSS (Plankton Survey System) is a submersible V-Fin (vehicle for instrumentation) that is dragged behind the boat and measures zooplankton, chlorophyll (a measure of phytoplankton), dissolved oxygen, temperature, and light levels. Measurements are made at a very high spatial resolution from the top to the bottom of the water. At the same time fishery acoustics show where the fish are. Together, these two techniques allow us to see where much of the food web is located.

Water samples are taken at various depths and analyzed right on the boat. This is a good way to study microbes such as bacteria and very small phytoplankton. The lower food web has been pretty heavily altered by the grazing of quagga and zebra mussels. Specifically, the microbial food web (consisting of microbes such as bacteria and very small phytoplankton) makes up a larger component of the food web than before mussel invasion, and scientists are working to find out exactly how this has happened.

Check out the photos below for a glimpse of life in the field!

Central Michigan University students Anthony and Allie are all smiles as they prepare to head out!

Getting the MOCNESS ready.

Chief scientist Hank Vanderploeg looks at some data.

Filtering a water sample—filtering out the big stuff makes it easier to see microbes.

In a new study, scientists from GLERL, the University of Michigan, and other institutions take a fresh look at changing ice cover and surface water temperature in the Great Lakes. The paper, set to be published in Climatic Change, is novel in two ways.

While previous research focused on changes in ice cover and temperature for each lake as a whole, this study reveals how different regions of the lakes are changing at different rates.

While many scientists agree that, over the long term, climate change will reduce ice cover in the Great Lakes, this paper shows that changes in ice cover since the 1970s may have been dominated by an abrupt decline in the late 1990s (coinciding with the strong 1997-1998 winter El Niño), rather than gradually declining over the whole period.

—

NOAA tracks ice cover and water surface temperature of the Great Lakes at a pretty fine spatial scale. Visit our CoastWatch site and you’ll see detailed maps of surface temperature and/or ice cover updated daily.

However, when studying long-term changes in temperature and ice cover on the lakes, the scientific community has historically used either lakewide average temperature data or data from just a few buoys. We knew how each lake was changing overall, but not much more.

Now, for the first time, researchers are using our detailed data to look at the changes happening in different parts of each lake.

Using GIS (geographic information system) analysis tools, researchers calculated how fast ice cover and temperature were changing on average for each of thousands of small, square areas of the lakes (1.3 km² for ice cover, and 1.8 km² for temperature).
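Conceptually, the per-cell calculation amounts to fitting a trend line through each grid cell’s annual values and keeping the slope. Here is a minimal sketch with made-up data (the study’s actual GIS workflow and grid dimensions are not described in detail here, so everything below is illustrative):

```python
import numpy as np

# Sketch of a per-grid-cell trend calculation on hypothetical data.
# For each small square cell, fit a least-squares line through the
# annual values and keep the slope (units per year).
years = np.arange(1973, 2014)                       # the study's ice-cover period
rng = np.random.default_rng(0)
# ice_days[year, row, col]: days of ice cover per cell per winter (made up)
ice_days = rng.uniform(30, 120, size=(len(years), 50, 80))

def per_cell_trend(data, t):
    """Slope of a simple least-squares fit, vectorized over all grid cells."""
    t = t - t.mean()
    # Closed-form simple linear regression: sum(t * (y - ybar)) / sum(t^2)
    return (t[:, None, None] * (data - data.mean(axis=0))).sum(axis=0) / (t**2).sum()

trend = per_cell_trend(ice_days, years.astype(float))
print(trend.shape)  # one slope per grid cell
```

Mapping those per-cell slopes, rather than a single lakewide number, is what reveals how different regions of each lake are changing at different rates.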

The maps below show the results. Changes in ice, on the left, are reported in the number of days of ice cover lost each year. Temperature changes are reported in degrees Celsius gained per year.

The researchers also averaged these values across major subbasins of the lakes. Maps of those results are below. The color coding is the same, and again, ice cover is on the left while temperature is on the right.

Note: These subbasins aren’t random, and were outlined by scientists as a part of the Great Lakes Aquatic Habitat Framework (GLAHF), which is meeting a need (among other things) for lake study at intermediate spatial scales.

The panel on the left shows the change in seasonal ice cover duration (d/yr) from 1973 to 2013, and the panel on the right shows the change in summer surface water temperature (°C/yr) from 1994 to 2013. Maps created by Kaye LaFond for NOAA GLERL. Click image to enlarge.

Depth, prevailing winds, and currents all play a role in why some parts of the lakes are warming faster than others. A lot of information is lost if each lake is treated as a homogenous unit. With so much variation, lakewide averages may not make sense for every region of the Great Lakes. Studying changes at a smaller scale could yield more useful information for local and regional decision makers.

—

The second part of the story has to do with how ice cover has changed in the lakes. Previous studies typically represent changes in ice cover as a long, slow decline from 1973 until today (that would be called a ‘linear trend’). However, when looking at the data more carefully, it seems the differences between the 1970s and today in many regions of the Great Lakes are better explained by a sudden jump (called a ‘change point’).

The figure below shows yearly data on ice cover for the central Lake Superior basin. It is overlaid with a linear trendline (the long, slow decline approach) as well as two flat lines, which represent the averages of the data before and after a certain point, the ‘change point’.

Annual ice cover duration (days) for the central Lake Superior basin, overlaid on the left with a linear trend-line, and overlaid on the right with a change-point analysis. Graphic created by Kaye LaFond for NOAA GLERL. Click image to enlarge.

Statistical analyses show that the change point approach is a much better fit for most subbasins of the Great Lakes.
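The comparison behind a statement like that can be sketched in a few lines: fit a straight line, fit a “step” (two flat means split at the year that minimizes squared error), and compare the residuals. The data below is synthetic, and the paper’s actual statistical tests may well be more sophisticated:

```python
import numpy as np

# Synthetic ice-cover duration: a higher mean before 1998, lower after,
# plus noise. This mimics the pattern described in the post.
rng = np.random.default_rng(1)
years = np.arange(1973, 2014)
ice = np.where(years < 1998, 90.0, 60.0) + rng.normal(0, 8, size=years.size)

# Option 1: a single linear trend, and its residual sum of squares.
slope, intercept = np.polyfit(years, ice, 1)
sse_linear = np.sum((ice - (slope * years + intercept)) ** 2)

# Option 2: a change point. Try every split, keep the one whose two
# flat segments (before/after means) leave the smallest squared error.
def best_changepoint(y):
    best_sse, best_k = np.inf, None
    for k in range(2, len(y) - 2):          # require a few points per side
        sse = np.sum((y[:k] - y[:k].mean())**2) + np.sum((y[k:] - y[k:].mean())**2)
        if sse < best_sse:
            best_sse, best_k = sse, k
    return best_sse, best_k

sse_step, k = best_changepoint(ice)
print(f"change point at {years[k]}; step SSE {sse_step:.0f} vs linear SSE {sse_linear:.0f}")
```

When the underlying data really does jump rather than slide, the step model leaves much smaller residuals than the straight line, and the detected split lands at the jump year.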

So what caused this sudden jump? Scientists aren’t sure, but the change points of the northernmost basins line up with the year 1998, which was a year with a very strong winter El Niño. This implies that changes in ice cover are due, at least in part, to the cyclical influence of the El Niño Southern Oscillation (ENSO).

All of this by no means implies that climate change didn’t have a hand in the overall decline, or that a cyclical shift back upwards (which may have already happened in 2014) will restore pre-1998 ice cover conditions. The scientific consensus is that climate change is happening, and that it isn’t good for ice cover.

This research just asserts that within the larger and longer-term context of climate change, we need to recognize the smaller and shorter-term cycles that are likely to occur.

Update 08/09/2016: The buoys have drifted ashore and are being collected! The map below shows their full journey.

This map shows the journey of the drifters from July 5, 2016 to August 5, 2016. Created by Kaye LaFond for NOAA GLERL. Click image to enlarge.

Original post 07/13/2016:

Last week, GLERL scientists released two mobile buoys with GPS tracking capabilities, known as ‘Lagrangian drifters’, into Lake Erie. We are now watching the buoys move around the lake with interest, and not just because it’s fun. The drifters help us test the accuracy of our Lake Erie hydrodynamics model, known as the Lake Erie Operational Forecasting System (LEOFS).

This map shows the progress of the drifters as of July 13, 2016 08:19:00. Created by Kaye LaFond for NOAA GLERL. Click image to enlarge.

LEOFS is driven by meteorological data from a network of buoys, airports, coastal land stations, and weather forecasts which provide air temperatures, dew points, winds, and cloud cover. The mathematical model then predicts water levels, temperatures, and currents (see below).

An example of outputs from the Lake Erie Operational Forecast System (LEOFS)

We use these modeled currents to predict the path that something like, say, an algae bloom would take around the lake. In fact, this is the basis of our HAB tracker tool.

The strength of LEOFS is in how well the modeled currents match reality. While there are a number of stationary buoys in Lake Erie, none provide real-time current measurements. The drifters allow us to see how close we are getting to predicting the actual path an object would take.

Researchers will compare the actual paths of the drifters to the paths predicted by our model. This is a process known pretty universally as ‘in-situ validation’ (in-situ means “in place”). Comparing our models to reality helps us to continually improve them.
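One simple way to score such a comparison is the great-circle separation between where the model says a drifter should be and where its GPS says it actually is, averaged along the track. The positions and metric below are made up for illustration; GLERL’s actual validation metrics may differ:

```python
import math

# Great-circle distance between two lat/lon points (standard haversine formula).
def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

# Observed (GPS) vs. model-predicted track for one drifter -- hypothetical
# Lake Erie points, released from the same spot:
observed  = [(41.90, -82.50), (41.95, -82.40), (42.00, -82.30)]
predicted = [(41.90, -82.50), (41.93, -82.43), (41.97, -82.35)]

errors = [haversine_km(*o, *p) for o, p in zip(observed, predicted)]
print(f"mean separation: {sum(errors) / len(errors):.1f} km")
```

A growing separation over time tells the modelers where (and how quickly) the simulated currents drift away from the real ones.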

In the paper, Dr. Lofgren and his co-author, Jonathan Rouhana, explore two different ways to model the effects of climate change on evapotranspiration (the movement of water from the land to the atmosphere as the combined result of evaporation and transpiration), and, subsequently, on the water levels of the Great Lakes.

Predicting how climate change will affect the water levels of the Great Lakes is a tricky business. To answer questions like this, it is often best to use models. Modeling is central to what scientists do, both in their research and in communicating their explanations. Within their models, scientists study relationships between variables in nature and then apply those relationships to possible future scenarios with one or more tweaked variables.

However, earth systems are so complex and have so many moving parts that it’s almost impossible to capture them completely in an equation or series of equations. The beauty of modeling is that it allows scientists to start with a small amount of data and, as time goes on, to build up a better and better representation of the phenomenon they are explaining or using for prediction.

Sometimes, particularly when modeling climate change, problems arise with so-called empirically-based models. Empirically-based models are created by making observations about two or more variables over a certain time period and under certain conditions, and inferring relationships from those observations. Often, those models don’t hold up when conditions change.

An alternative is physically-based models, which use the laws of physics (like conservation of mass, energy, etc.) to make predictions. Complexity is still a hurdle, but the laws of physics hold up no matter what—even when the climate changes.

Dr. Lofgren’s paper details issues with an empirically-based model widely used in Great Lakes research, the Large Basin Runoff Model (LBRM). From the abstract:

This model uses near-surface air temperature as a primary predictor of evapotranspiration (ET); as in previous published work, we show here that its very high sensitivity to temperature makes it overestimate ET in a way that is greatly at variance with the fundamental principle of conservation of energy at the land surface. The traditional formulation is characterized here as being equivalent to having several suns in the virtual sky created by LBRM.

Several suns in the sky – wow! In the most extreme case, this method of calculating evapotranspiration behaves as though there were 565 suns.

In the context of climate modeling, “The LBRM oversimplifies the physics of the interaction between the earth and the atmosphere,” says Dr. Lofgren.

This doesn’t mean the LBRM isn’t useful in specific instances (e.g. short-term forecasting), or that you shouldn’t ever trust empirically-based models. It just means that different types of models have their place in different circumstances, and that the LBRM probably isn’t the best choice for modeling hydrologic response under climate change conditions.

Scientists often argue about the rightness of their model, and in the process, the model can evolve or even be rejected. Consequently, models are central to the process of knowledge-building.

Scientists who dare to create models know that their models will be scrutinized and tested. Research like Dr. Lofgren’s ensures not only that models are used appropriately with an acknowledgment of their limitations, but that they are continually improved upon.

