Tornado Intensity Index

Is the intensity of tornadoes in the United States increasing (or, for that matter, falling)? It’s a perennial question.

NOAA gives us some clues, with their charts of EF-1+ and EF-3 to EF-5 tornadoes since 1954. (NOAA ignore EF-0’s, because many more of these weak tornadoes are reported nowadays than in the past, thanks to Doppler radar, better reporting practices, increasing population, etc – for the background on this, see here.)

[The original Fujita grading system, using “F” numbers, was replaced in 2007 by the Enhanced Fujita scale, hence “EF” numbers. The new system was designed to ensure compatibility with the original Fujita scale – see here. All references to either Fujita or Enhanced Fujita should be regarded as interchangeable.]

But these graphs tell us little about the distribution within the totals. For instance, could there be more EF-4’s relative to EF-3’s?

For tropical storms and hurricanes, there is the measure of Accumulated Cyclone Energy, or ACE, which is calculated by summing the squares of wind speeds for each storm, over 6-hourly intervals.

There is a similar method, called the Power Dissipation Index, or PDI, which, instead of squaring wind speeds, cubes them.
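As an illustration of how the two indices differ, here is a minimal sketch in Python. The wind speeds are made-up six-hourly readings in knots, not real storm data; the real ACE also divides the sum by 10^4 and only counts readings at tropical-storm strength (35 kt) or above:

```python
# Illustrative comparison of ACE (sum of squared winds) and PDI (sum of
# cubed winds).  The wind speeds below are hypothetical six-hourly
# readings in knots, purely for illustration.

def ace(winds_kt):
    """Accumulated Cyclone Energy: sum of squared six-hourly winds / 10^4."""
    return sum(v ** 2 for v in winds_kt if v >= 35) / 1e4

def pdi(winds_kt):
    """Power Dissipation Index: sum of cubed six-hourly winds."""
    return sum(v ** 3 for v in winds_kt if v >= 35)

storm = [35, 50, 65, 80, 65, 45]   # hypothetical storm life cycle
print(ace(storm))
print(pdi(storm))
```

Note that because PDI cubes the winds, the strongest readings dominate the total far more than under ACE.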

It should therefore be possible to use similar methodology with tornadoes.

Let’s start by looking at the estimated wind speeds assumed under the EF system.

There is also a great deal of evidence that the same applies to EF-1 tornadoes. As Figure 1 illustrates, there was a marked increase in the percentage of EF-1’s to total numbers between 1953 and 1990, since when the proportion has levelled off.

This is clear evidence that many such tornadoes occurred, but were never reported in earlier decades.

Therefore, the analysis that follows will ignore both EF-0’s and EF-1’s.

Figure 1

Using the data provided by NOAA’s Storm Prediction Center, I have taken the annual tornado numbers by EF category, and applied the wind speed factors, as shown in the Table above. The totals for each category are added together for each year, to give the result in the indices shown below.
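The calculation just described can be sketched as follows. Note that the wind-speed factors here are assumed midpoints of the EF wind-speed ranges in mph, not the Table from the article, and the counts are made-up numbers, purely for illustration:

```python
# Hypothetical annual tornado intensity index: multiply each category's
# count by its wind-speed factor raised to a power (2 for an ACE-style
# index, 3 for a PDI-style one) and sum.  Wind speeds are assumed EF-range
# midpoints in mph; EF-0 and EF-1 are excluded, as in the article.
EF_WIND_MPH = {2: 120, 3: 150, 4: 183, 5: 220}  # assumed midpoints

def intensity_index(counts_by_ef, power=3):
    """counts_by_ef maps EF category (2-5) to annual tornado count."""
    return sum(n * EF_WIND_MPH[ef] ** power
               for ef, n in counts_by_ef.items())

year_counts = {2: 180, 3: 30, 4: 7, 5: 1}       # made-up year
print(intensity_index(year_counts, power=2))    # squared (ACE-style)
print(intensity_index(year_counts, power=3))    # cubed (PDI-style)
```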

Whichever method is used, there is a clearly declining trend in intensity.

UPDATE

Following requests to show the chart including the weaker EF-1 tornadoes, I have posted this up at my blog below.

From data I find at the Tornado Project link, using Excel’s SLOPE function, the trend line slopes for tornado categories from 1950 to 2014 are as follows:

Category Three: -0.20
Category Four: -0.09
Category Five: -0.02
Combined: -0.32
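Excel’s SLOPE function is just the ordinary least-squares slope; a minimal Python equivalent (with made-up annual counts, not the Tornado Project data) would be:

```python
# Ordinary least-squares slope, equivalent to Excel's SLOPE(known_ys, known_xs).
def ols_slope(ys, xs):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

years = [1950, 1951, 1952, 1953, 1954]
counts = [40, 38, 39, 35, 34]          # made-up annual counts
print(ols_slope(counts, years))        # → -1.5 (negative = declining trend)
```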

Looks like the moderation algorithm doesn’t like negative numbers. Well, anyway, data from the Tornado Project link says all the trend line slopes are negative for tornado categories three, four, and five since 1950.

An interesting set of numbers and approach. By this test the tornado intensity appears to have gone down. However, that could be because of a step change that appears to have happened between 1970 and 1980. The trend should be checked before and after that change. Offhand, the trend after 1980 appears rather flat, neither decreasing nor increasing.

The article doesn’t say, but it appears that ordinary least squares was used to determine the trend. That is easy and quick in Excel, but because it can be seriously affected by anomalous events, its use in the analysis of trends for extreme events is questionable. A better method would be a non-parametric test such as Kendall’s tau.
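For readers unfamiliar with it, Kendall’s tau measures trend by counting concordant and discordant pairs of observations rather than fitting a line, which makes it far less sensitive to outliers. A minimal sketch (ties contribute to neither count in this simple version; the counts below are made-up):

```python
# Kendall's tau: a non-parametric trend measure based on concordant and
# discordant pairs.  Illustrative only; tied pairs are simply skipped.
from itertools import combinations

def kendall_tau(xs, ys):
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n = len(xs)
    return (concordant - discordant) / (n * (n - 1) / 2)

years = [1950, 1951, 1952, 1953, 1954, 1955]
counts = [40, 38, 39, 35, 34, 30]       # made-up annual counts
print(kendall_tau(years, counts))       # negative tau = declining trend
```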

I agree with your point about the missing EF-0 and EF-1 – although there is a good rationale for deleting them, it makes it look like the data has been selected to give the required response.

Overall, I am not too convinced by ACE or similar attempts to put a quantitative measure on what is – historically – a qualitative measure. It might be useful going forward, as we now have the ability to measure these storms quantitatively, but I am always worried by the “above normal” ACE which NOAA quotes each year in spite of the low actual numbers of hurricanes. Their “normals” are based on a completely different data set.

Until someone does a “Leif” and reconciles the older records with new measures of detection and calibration, comparisons should not be made to older data sets.

[In English, words ending in ‘o’ take ‘es’ to make the plural – cf. a certain Vice President struggling with potatoes….]

Arctic warming driven by strengthening solar cycles and a positive PDO and AMO would be a first guess at a mechanism for the decreasing intensity. Now that the AMO is negative and the cycle 24–25 minimum is approaching, will a cooling Arctic turn the tornado power trend to an upward slope over 2005–2035?

With regard to F-0’s, NOAA themselves warn that including these can give misleading trends.

Improved tornado observation practices have led to an increase in the number of reported weaker tornadoes, and in recent years EF-0 tornadoes have become more prevalent in the total number of reported tornadoes. In addition, even today many smaller tornadoes still may go undocumented in places with low populations or inconsistent communication facilities.

With increased National Doppler radar coverage, increasing population, and greater attention to tornado reporting, there has been an increase in the number of tornado reports over the past several decades. This can create a misleading appearance of an increasing trend in tornado frequency. To better understand the variability and trend in tornado frequency in the United States, the total number of EF-1 and stronger, as well as strong to violent tornadoes (EF-3 to EF-5 category on the Enhanced Fujita scale) can be analyzed. These tornadoes would have likely been reported even during the decades before Doppler radar use became widespread and practices resulted in increasing tornado reports.

“Although both methods of squaring and cubing are valid, I personally feel that the cubing method gives a better fit”

Could you explain why you feel that cubing is better? What is it a “better fit” to? As is, it sounds kind of arbitrary that you would choose the 3rd power.

Also, eyeballing the graphs, there looks like more of a step change in the late 70s than a steady decline over the entire period. Could there have been a change in how the tornadoes were evaluated around that time?

Also, eyeballing the graphs, there looks like more of a step change in the late 70s than a steady decline over the entire period. Could there have been a change in how the tornadoes were evaluated around that time?

I have noticed a similar step change in the precipitation and drought data for Iowa at that time. That may indicate it’s more likely a climate effect of some sort rather than a change in the evaluation of tornadoes.

Removing EF-1’s is a good idea. People WANT to see a tornado (except in Tornado Alley, of course) when there is wind damage. Something to do with “being there”. So many are reported today, whereas in the past it was just a nasty storm.

Could you explain why you feel that cubing is better? What is it a “better fit” to? As is, it sounds kind of arbitrary that you would choose the 3rd power.

I guess it is a bit subjective either way. Both systems of squaring and cubing are officially accepted for hurricanes, so there can be no black and white answer.

I think the most important thing is to establish this database, and let others decide how to make the best use of it, just as with ACE and PDI.

Also, eyeballing the graphs, there looks like more of a step change in the late 70s than a steady decline over the entire period. Could there have been a change in how the tornadoes were evaluated around that time?

I am certainly not aware of any such change in evaluation, and NOAA don’t refer to any.

About the ’70s “step change” seen in the NOAA F3+ chart: that was the start of the satellite age, but also of more weather radar systems, as was the case with a local TV station.

On one side there is growing satellite and radar evidence to confirm wind speeds; earlier it was perhaps anemometers at weather stations, but mostly the best estimates of NWS personnel examining damage afterwards.

Thus this is another example of better observations creating a trend: with the better evidence there was a “step change” to lower reported intensities.

Of course if I’m wrong about any of that, I’m sure someone will correct me. Feel free to do so.

Devil’s advocate: One could envision that your figure 1 shows we may be approaching the threshold of another upswing in tornadoes. A return to cooling of the 50s-60s may be only a few years away. Skeptics must be careful not to be smug about tornado or other dramatic weather not re-occurring. I would prefer to ‘scoop’ the alarmists on this. I think of Dr. Mann’s 1998 paper on the hockeystick, the very year the blade began to bend back flat. It’s ironic that he trumpeted all this precisely at the inflection point that makes the uptick nothing special. The conditions for a return of a period of strong hurricanes a la mid 1950s-1960s could also be developing.

There are numerous factors that influence tornado strength. Increasing global temperatures would only increase the strength of tornadoes if they also resulted in an increase in the horizontal (and vertical) temperature disparities available to supply energy to mid-latitude cyclones and weather systems (fronts).

Since warming in the 1980’s/90’s was greater in the higher latitudes, it actually decreased the horizontal/meridional temperature gradient, which in turn decreased the potential energy available.

Baroclinic instability allows perturbations in the mean flow to draw energy from the contrasting air masses.

Hurricanes can spawn a significant number of tornadoes without this meridional temperature disparity, but these tornadoes are usually of the weaker type (even though high-end tornadoes have occurred).

A good illustration of the effect of this meridional temperature gradient on tornado strength can be seen by observing thunderstorm frequency vs. strong tornado frequency in the United States.

The state of Florida is noted for having almost twice as many days with thunderstorms as places in Tornado Alley…but the more violent tornado outbreaks rarely occur in Florida. This is because, that far south and surrounded by ocean, it is sheltered from the cold, dry air masses and from the powerful lifting/shear-producing jet streams.

Violent tornadoes are highest where the clash between air masses is greatest. Going south into tropical climates decreases frequency and north into colder climates decreases frequency.

This explains why violent tornadoes went down while global temperatures went up in the 1980’s/90’s. Warming contributed to more thunderstorm days (and increased rain) but fewer violent thunderstorms and fewer violent tornadoes, because these high-end events are typically fed by a temperature clash that is almost always defined by the colder/drier air mass.

Regarding the issue of squaring and cubing, it is interesting to note that the formula for kinetic energy is E = (1/2)mv^2, so it depends on the square of v. However, the power output of a wind turbine is given by P = (1/2)ρπr^2v^3 (with ρ the air density and r the rotor radius), so it varies as the cube of the wind speed.

After having had time to think about it a bit, and read Kerry Emanuel’s articles on PDI, I think cubing is the correct approach. According to the Wikipedia article, ACE is based on the idea that, per the equation for kinetic energy (E = (1/2)mv^2), the energy is directly proportional to the square of velocity. However, that equation is for a rigid object (a fixed amount of mass) in motion. Since storms involve a continuous flow of air rather than a fixed mass, we need to multiply by the velocity one more time, because the mass of air flowing over any given area per unit time is itself proportional to v.
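That mass-flux argument can be checked numerically: the kinetic energy carried per second through an area A is (mass flux) × (KE per unit mass) = (ρAv) × (v²/2) = (1/2)ρAv³. A quick sketch, with an illustrative density and made-up wind speeds:

```python
# Power carried by wind through an area A: mass flux (rho*A*v) times
# kinetic energy per unit mass (v**2 / 2), giving P = 0.5*rho*A*v**3.
# rho and the wind speeds are illustrative values only.
RHO = 1.225   # air density, kg/m^3 (sea-level standard)
AREA = 1.0    # m^2

def wind_power(v):
    mass_flux = RHO * AREA * v      # kg/s through the area
    ke_per_kg = 0.5 * v ** 2        # J/kg
    return mass_flux * ke_per_kg    # watts; equals 0.5*rho*A*v**3

# Doubling the wind speed multiplies the power by 2**3 = 8:
print(wind_power(20) / wind_power(10))  # → 8.0
```

This is why a cubed index weights the most violent events so much more heavily than a squared one.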