Tennessee Floods: Anatomy of an Anomaly

Last week’s killer floods in Tennessee were unusual by almost any measure. (Except for flood-inured Texans, who would call 15 inches of rain in two days “a damp weekend”.) But were they so unusual that they would not have happened without global warming?

It would be amazing if my answer were “yes”. To date, there has not been a single weather event in the United States that I have blamed on global warming. This is for good reason, I think: temperatures have only warmed a bit more than a degree Fahrenheit on average across the continental United States, and it’s hard for that small a temperature change (small compared to the 10F difference between glacial and interglacial periods) to have a major impact on individual weather events. Climate change is still mainly the province of statistics. The effects only start becoming obvious if you average over years and decades or consider the overall behavior of a large number of weather events.

Let’s start with the statistics of this event. The Army Corps of Engineers has called this flood a 1000-year event. That’s a preliminary number, and I don’t know exactly what it’s based on. But here’s what it means: the chance of a flood of this magnitude occurring in any given year is 0.1%, which, if you work it out, means that it should occur about once every 1000 years on average. In this case, 1000 years is the expected return period, but this does not mean that such floods are spaced 1000 years apart.
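For concreteness, here’s what a 0.1% annual probability implies, assuming each year is independent. This is a minimal sketch; the 0.001 figure is the only number taken from the event itself:

```python
# Sketch: what a "1000-year" (0.1%-per-year) event implies, assuming
# each year is an independent draw with the same probability.
p = 0.001  # annual probability of a flood of this magnitude

# The expected return period is the reciprocal of the annual probability.
return_period = 1 / p  # 1000 years

# Chance of seeing at least one such flood over a 30-year span:
p_30 = 1 - (1 - p) ** 30  # about 3%

# Chance of at least one over the full 1000-year return period.
# Notably, this is not 100%:
p_1000 = 1 - (1 - p) ** 1000  # about 63%

print(return_period, round(p_30, 3), round(p_1000, 3))
```

The last number is why "1000-year event" describes odds, not spacing: over any given 1000-year stretch there is a roughly one-in-three chance of no such flood at all.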

A moment’s thought reveals that the concept of a 1000-year return period is a little bit strange. How would we know whether an event’s odds were really 0.1% per year? Ideally, you’d like to observe several instances of the event and see how frequently it occurs, and then use that information to estimate the odds. In this case, you’d need about 10,000 years of data. Obviously we don’t have rain gauges or streamflow gauges going back that far, though there can be other ways of detecting major prehistoric floods.

In the case of rainfall, the standard technique is to estimate the likelihood from existing events in the past 100 years or so. For example, if a 5″ rainstorm occurred about every year, a 7″ rainstorm occurred maybe once per decade, and a 9″ rainstorm occurred once in 100 years, one could infer that an 11″ rainstorm would occur on average once in 1000 years. It’s also standard to combine observations from rain gauges throughout the area, so that, for example, if only one rain gauge out of ten recorded an 11″ rainstorm in the past 100 years, you’ve got more evidence that the true likelihood at a given location is only about once every 1000 years. (This description is way simplified from how it’s actually done mathematically, but the principles are the same.)
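The extrapolation just described can be sketched numerically. The rainfall amounts and return periods below are the illustrative ones from the example, not actual Nashville data, and the straight-line fit in log-return-period space is an assumed simplification of how the fitting is actually done:

```python
import math

# Toy version of the extrapolation described above (illustrative numbers,
# not actual gauge data): rainfall amounts recurring at 1, 10, and 100
# years, assumed to grow linearly with log10(return period).
observed = [(1, 5.0), (10, 7.0), (100, 9.0)]  # (return period in yrs, inches)

# Fit rain = a + b * log10(T) by simple least squares.
xs = [math.log10(t) for t, _ in observed]
ys = [r for _, r in observed]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Invert the fit: how rare is an 11-inch storm?
T_11 = 10 ** ((11.0 - a) / b)
print(round(T_11))  # 1000 years
```

The inversion step is the whole game: each extra 2 inches in this toy example buys another factor of ten in rarity.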

The figure above shows the estimated probabilities for a 2-day rainfall in Nashville. According to the figure, a two-day rainfall total of 8.4″ should occur about once every 100 years. The red and green lines show the 90% confidence interval, based on the statistical information available. Reading horizontally, the true return period for an 8.4″ two-day rainstorm has a 90% chance of being somewhere between 75 and 200 years. Reading vertically, the true 100-year rainfall amount has a 90% chance of being between 7.8″ and 8.9″.

Now let’s look at how much rain actually fell in the Nashville area during the first two days of May. The previous record in Nashville was 6.68″, from the remnants of Hurricane Frederic in 1979. This, based on the analysis above, is a bit lower than what would have been expected for a station in existence there for many decades. The new record, established May 1-2, 2010, is 13.57″.

Whoa.

Going back to our diagram above, we’re literally off the chart. Extrapolating the curves a bit, the expected recurrence interval for that much rain is somewhere around 3000 years!

It gets better. It turns out that 13.52″ of that actually fell in just a 36-hour period. By noon on May 2, Nashville had already broken its previous all-time record for the month of May. And the Nashville Airport did not receive anything close to the highest amount observed during this event. Numerous areas received over 16″ of rainfall. Antioch, a suburb of Nashville, received 16.21″, and Franklin, in the next county to the south, received 17.87″.

Now, the rainfall frequency diagram for Franklin doesn’t look a whole lot different from the one for Nashville. (You can retrieve all the diagrams for Tennessee you want from this NWS web site.) If I extrapolate the curves for Franklin return periods, I estimate that 17.87″ of rainfall should occur slightly less than once every 10,000 years. This estimate is similar to the 5,000- to 8,000-year flood estimate that is now being mentioned. That makes it something close to a “once per interglacial” event!

In reality, it’s absurd to draw that conclusion, because the climate has changed a lot over the past 10,000 years. The estimate is based only on what’s been observed in the climate over the past 100 years or so. The rainfall statistics 6,000 years ago might have been (and probably were) considerably different. Better to stick to conventional probability language: based on the past 100 years or so, the odds of such an event occurring in any given year were 10,000 to 1.

Keep in mind that we’re talking about the two-day rainfall total for Franklin, TN, here. If there are 10,000 long-term precipitation stations in the United States (not a bad guess), you should expect one of them, somewhere, to receive a 10,000 to 1 rain event on average once every year. [Update: a better way to say that would be that you should expect there to be a 1 in 10,000 rain event in the US recorded about once per year.] [Update #2: see the comments for an analysis considering simultaneous events at neighboring stations that concludes that “about once every two to three years” is a better estimate.] And you might expect a similar frequency of occurrence of unusual 1-day rainfall events, or 6-hour rainfall events, etc. So this sort of extremely unusual rainfall event is actually quite common, just not at any particular location.
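The station-counting argument works out as follows, in miniature. It assumes fully independent stations, which, as the second update notes, overstates the case:

```python
# Sketch of the "somewhere every year" logic, assuming 10,000 fully
# independent long-term stations (an idealization; spatial correlation,
# discussed in the comments, makes distinct events rarer than this).
n_stations = 10_000
p = 1e-4  # annual chance of a 1-in-10,000-year 2-day rainfall at one station

# Expected number of stations recording such an event in a given year:
expected = n_stations * p  # 1.0

# Probability that at least one station, somewhere, records one:
p_any = 1 - (1 - p) ** n_stations  # about 0.63

print(expected, round(p_any, 2))
```

So even under independence, "one expected exceedance per year" translates into roughly a 63% chance of at least one in any given year, with occasional years producing two or more.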

I suspect the odds in Tennessee were not as long as the statistical analysis says. Look at the map for the 100-year 2-day rainfall event below. There’s a local minimum around Nashville: the Nashville area has (or had) received relatively few massive 2-day rainfall totals compared to its surroundings, so its 100-year 2-day rainfall event had a lower value. Similarly, the 1000-year rainfall event value was smaller near Nashville than elsewhere. There might be a geographical explanation for this: Nashville is surrounded on almost all sides by higher terrain. But other similarly situated locations in the Ohio River Valley aren’t rainfall minima to the same extent. So I believe that the statistical analysis of rainfall frequency in the Nashville area was wrong, not because of any flaw in the methodology, but because the Nashville area had, by pure chance, managed to avoid the really heavy 2-day rainfall events that had befallen its neighbors.

If I’m right about that, the proper estimate for the return period in Nashville would have been about 1,500 years, and in Franklin about 5,000 years. Still long odds, but not so long as before.

Now to the key question: how much has global warming changed the odds? The statistical analysis was based on the past 100 or so years, but our climate today is not like our average climate of the past 100 years. In particular, the moisture content of air encountering a stationary front in Tennessee is on average higher than it was 50 years ago: in a heavy rainfall event in the central or eastern United States, the air draws its moisture from the tropical Atlantic, and tropical Atlantic sea surface temperatures are higher than they were 50 years ago.

In fact, in March 2010, the latest month for which data is available, the tropical Atlantic sea surface temperature anomaly was at an all-time record high value (the records go back to 1948). But little of the record warmth can be attributed to global warming; most of it is due to natural variability. March 2009, for example, was cooler than normal in the tropical Atlantic. The long-term temperature increase in the tropical Atlantic compared to the middle of the past century is probably only about 0.3C to 0.4C.

Now let’s convert that to a change in precipitation. The typical relative humidity of tropical air is about 70%. A good rule of thumb is that a 10C difference in temperature implies a factor of two difference in saturation water vapor mixing ratio, so a change of 0.3C implies a change in saturation water vapor mixing ratio of about 0.5 g/kg. Reduce that to a typical relative humidity, and the difference is about 0.3 g/kg, compared to a likely actual mixing ratio of 17 g/kg. Since 0.3 g/kg is about 2% of 17 g/kg, I estimate that global warming was responsible for about a 2% increase in precipitation during the flood event. Assuming this is the only difference between the meteorological setup now and the analogous meteorological setup 50 years ago, the global warming contribution to precipitation was 2%, or 0.28″ of Nashville’s 13.62″.
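The mixing-ratio arithmetic can be checked in a few lines. The 70% relative humidity, 0.3C of warming, and 17 g/kg actual mixing ratio are the figures assumed above, and the doubling-per-10C rule is the stated rule of thumb, not an exact thermodynamic relation:

```python
# Back-of-envelope version of the moisture argument above, using the
# rule of thumb that saturation mixing ratio doubles per 10 C of warming.
actual_q = 17.0   # g/kg, assumed actual mixing ratio of the inflowing air
rh = 0.70         # typical tropical relative humidity
dT = 0.3          # C, assumed long-term tropical Atlantic warming

q_sat = actual_q / rh                  # roughly 24 g/kg saturation value
dq_sat = q_sat * (2 ** (dT / 10) - 1)  # roughly 0.5 g/kg
dq = dq_sat * rh                       # roughly 0.36 g/kg at 70% RH

fraction = dq / actual_q               # about 2%
print(round(dq_sat, 2), round(dq, 2), round(100 * fraction, 1))
```

Note that because both the actual and saturation values scale with relative humidity, the fractional increase reduces to the doubling rule itself: about 2.1% per 0.3C.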

That’s not a very big amount. According to the statistics, global warming was responsible for turning a 1 in 1200 year event into a 1 in 1500 year event. Not a big change in the odds, and not a big change in the impact either. I conclude that this flood event, while influenced by global warming, was mostly a non-global-warming event.

This estimate, while crude, is consistent with the middle-of-the road projection for Tennessee precipitation change over the next century: an increase of 0%-5%, with some models drier or wetter. But the observed precipitation increase in Tennessee over the past century, according to our calculations, is actually 10%. In my mind, it’s not appropriate to attribute this precipitation change to global warming, because a causal connection to temperature change has not been established. It could be due to natural variability, changes in atmospheric aerosol composition, or some other factor. Whatever the cause, it falls under the category of “climate change” rather than “global warming”.

Let’s redo the calculation. Climate change has produced a net 10% increase in precipitation in the Nashville area since the beginning of the last century. For Nashville, the contribution to the current event would be 1.36″. Subtracting that from the 13.62″ observed gives a value of 12.26″ for what might have happened under similar circumstances at the beginning of the last century. If the current event had a return period of 1500 years, the hypothetical event in 1900 would have had a return period of about 500 years. Still a major, damaging flood, but not so catastrophic and without so much loss of life.
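The subtraction itself is trivial, but for completeness, here it is as stated, using the 13.62″ observed total and the 10% attributed to century-scale climate change:

```python
# The back-out calculation above, in miniature: remove the estimated
# 10% century-scale precipitation increase and see what is left.
observed = 13.62          # inches, the 2-day Nashville total used above
climate_share = 0.10      # net precipitation increase over the century

contribution = climate_share * observed   # about 1.36 inches
hypothetical = observed - contribution    # about 12.26 inches: the "1900" event

# Per the Nashville frequency curves, that drop in rainfall amount takes
# the estimated return period from roughly 1500 years down to roughly 500.
print(round(contribution, 2), round(hypothetical, 2))
```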

So my guess is that climate change turned a 500-year rainfall event for Nashville into a 1500-year rainfall event, and an 1800-year rainfall event into a 5000-year rainfall event for Franklin. In other words, an event of this severity was three times as likely to happen after a century’s worth of climate change as before. This climate change was partly natural and partly man-made, partly greenhouse gases and partly other anthropogenic contributions. However you add it up, this event probably would not have happened with such severity, if not for climate change.

And it illustrates the folly of assuming that the infrastructure of the future should be designed to withstand the climate of the past 100 years.

In the 10,000-weather-station logic, you’re assuming the stations are all statistically independent. In other words, you’re ignoring spatial correlation, which is a mistake. If you had 3,650,000 weather stations, would you expect a 1 in 10,000 rain event every day? No, you’d expect that very occasionally you’d get such an event simultaneously on thousands of the stations.

Doing statistics with spatial or temporal correlation is a specialized field.

A somewhat more rigorous statistical analysis appears to be in order. I’ll still keep it back-of-the-envelope quality, though, given the time available for the analysis.

As commenters have noted, a rigorous calculation of the frequency of occurrence in the United States depends upon the spatial autocorrelation and the station spacing. A once per year frequency would be expected if it is unlikely for more than one station to receive a 1 in 10,000 rain event at the same time. At the opposite extreme, if all 10,000 stations were located within a 5 square km area, it seems likely that most would experience their 1 in 10,000 rain event on the same day.

The cooperative network of climate stations (COOP network) is fairly evenly spaced. An ideal network of 10,000 stations across the United States would have each station representing 770 sq km. I’ll focus on the central and eastern US and use a representative area of 518 sq km there.

Now, how many stations would be expected to pass the 1 in 10,000 year threshold in a particular event, given that one such station passes the threshold? To estimate this, I’ll make use of probable maximum precipitation analysis (http://www.nws.noaa.gov/oh/hdsc/PMP_documents/HMR51.pdf). Taking 48-hr rain events as an example, the maximum expected average over a 518 sq km area is 75%-80% of the maximum expected value at the station scale (26 sq km area). Stated another way, if a station were lucky enough to be at the peak precipitation location, another station within the same area would on average be expected to receive only 75%-80% of that amount. Since the next COOP station will actually tend to be in an adjoining area, its expected amount is smaller.

Using the Nashville frequency curves, a drop of 20%-25% corresponds to a return period change of about a factor of 3 out at the tail. So (back of the envelope estimates) a bare majority of 10,000 year storms would be big enough to impact one COOP station, while some 10,000 year storms will actually be 30,000 year storms and get two or three stations over the 10,000 year threshold, and some will be more extreme and influence even more stations. This is all looking somewhat like a geometric progression, so I’m going to estimate that typically 2 or 3 COOP stations at a time would receive a 1 in 10,000 year event, so a 1 in 10,000 rainfall event should occur at one or more COOP stations somewhere about every 2-3 years.
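The event-rate estimate works out as follows. The 2.5 stations-per-storm figure is the rough guess from the geometric progression above, not a measured value:

```python
# Rough version of the estimate above: the COOP network as a whole logs
# station-level 1-in-10,000-year readings at a known rate, but each
# qualifying storm tends to push several neighboring stations over the
# threshold at once, so distinct events are rarer than exceedances.
n_stations = 10_000
p = 1e-4                             # per-station annual exceedance probability
stations_per_year = n_stations * p   # 1.0 station-exceedance per year

stations_per_event = 2.5   # assumed typical count per qualifying storm
events_per_year = stations_per_year / stations_per_event

years_between_events = 1 / events_per_year
print(years_between_events)   # 2.5, i.e., about every 2 to 3 years
```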

Working in the other direction is the fact that, in extreme rainfall events, rainfall totals are gathered from lots of sources, not just COOP stations. As noted above, the COOP station closest to the rainfall peak will be expected to receive on average only 75% to 80% of the peak. In extreme rainfall events, bucket surveys are usually conducted and any and all accumulated rainfall estimates are evaluated for accuracy. This means the chance of a 1 in 10,000 year rainfall event being detected by any means is greater than the chance of a 1 in 10,000 year rainfall event being detected by the COOP network.