Hurricane Gustav

Sept 9: Updated to follow Hurricane Ike, and possible US impacts in the next 4-5 days

The North Atlantic tropics are becoming very active as the calendar approaches the climatological peak of the season (September 11). With high sea-surface temperatures and generally favorable atmospheric conditions (i.e. low vertical shear), every circulation or puff of convection must be watched for signs of organization. Josephine, not too far behind, has since died out…

Here are some handy model guidance maps and sites to follow the storm over the next week:

There are plenty of weather blogs out there, so there is really no need to provide blow-by-blow coverage of Gustav’s (Hanna’s) track. However, as we are interested in numerical model performance, if you are going to provide landfall forecasts for location and intensity, please provide some logical reasoning.

Is there not a model or two that is consistently better than the rest, along with some laggards that consistently are well off the mark and can be tossed out? I understand that forecasting is no picnic, but I find it extremely interesting that models which are apparently “good” enough to be used on weather websites often disagree so much with other “good” models when it comes to path.

As opposed to GCMs that are predicting temps decades into the future and are therefore tough to verify in the short term, hurricane models can be readily verified for accuracy. Is there a “report card” anywhere out there of past performance? It would seem to me that the most accurate models would be of tremendous use.

Michael, forecast model verification is something that the NHC watches very closely, as it is an indication of how much “value” they (the human forecasters) are adding to the model guidance. Unfortunately, the models are not consistently “good” for all storms. NHC Forecast Verification
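For readers asking what “verification” means concretely: the standard track metric is simply the great-circle distance between each forecast position and the verified best-track position at a given lead time, averaged over many forecasts. A toy sketch (the positions below are invented, not real NHC or best-track data):

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in nautical miles."""
    R_NM = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R_NM * math.asin(math.sqrt(a))

# Hypothetical 48-h forecast positions vs. verified best-track positions
forecasts  = [(25.0, -85.0), (26.5, -87.0), (28.0, -89.5)]
best_track = [(25.4, -85.8), (26.9, -87.9), (28.2, -90.6)]

errors = [great_circle_nm(fl, fo, bl, bo)
          for (fl, fo), (bl, bo) in zip(forecasts, best_track)]
mean_error = sum(errors) / len(errors)
print(f"mean 48-h track error: {mean_error:.0f} n mi")
```

Averaging such errors over a season, per model and per lead time, is exactly the kind of “report card” Michael asks about.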

The models are at a low confidence level right now with the storm track. The GFS can be safely ignored, as it has no clue after 48 hours and just does not have good enough physics to be very accurate. TD8 will be a much bigger problem for the east coast, I am afraid, and people should carefully monitor its progress. IMO, the LBAR model is pretty close to the end result for Gustav, although a slightly more northern turn just before landfall could put the TX/LA border in play as an approximate landfall area. If everything goes the right (or wrong) way, this could hit as a strong Cat 4, which would be unfortunate for a lot of people.

I’d love to hear a fairly brief description of the “structural” and scope differences between a Global Circulation Model and a hurricane tracker model.

For example: a GCM models X number of variables, with Y equations and Z assumptions (of which many are pretty solid and several are picked with the only rhyme being that they make the output, when started in the past, match actuals). The GCM has so many bazillion grid elemental volumes, must run so many bazillion iterations, and goes to hell in a handbasket so many days (I mean decades) into the future.

When you show the same data for the hurricane tracker models, the uninformed might get a sense of the difference.

Hi Ryan et al., a few comments on the models used to predict hurricanes. All of them depend on the large scale dynamics from the global weather models. There is a lot of model-to-model disagreement for any given forecast, but more significantly, ensemble members from an individual model typically give a substantial spread. The most informative of the global models is ECMWF, at 25 km resolution for the control run (out to 10 days) and 40 km resolution for 50 ensemble members out to 15 days. The ECMWF ensemble runs are not publicly available (you need to pay a lot of $$ to ECMWF to get them), and very few groups in the U.S. are actually using them.

So once you have a credible large scale forcing field, then you can use this to force a high res regional model like GFDL or HWRF, or a statistical model. But the simulation will only be as good as the large scale forcing. For example, HWRF is run with GFS and NOGAPS forcing. In the case of Fay, the two simulations gave very different answers because of the different large scale fields predicted by NOGAPS vs GFS. Ideally you should be running an ensemble of the regional models to pick up the likely track/intensity scenarios.

Some storms have high predictability and others have low predictability. Fay had the lowest predictability of any NATL TC over the last two years, in my assessment: there was no consistency between models, ensemble members from an individual model were all over the place, and there was no day-to-day consistency. Jeff Masters appropriately described Fay as “the Joker.” Last year, Hurricane Dean was one that had a high level of predictability. It looks like Gustav will be one with fairly high predictability, with the models generally agreeing on a track and timing of landfall (note, not a specific landfall location yet, but general track). We’ll see how it plays out.
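One crude way to put a number on “predictability” in this sense is the scatter among ensemble-member positions at a fixed lead time: small spread (Dean, apparently Gustav) versus large spread (Fay, the Joker). A sketch with invented member positions, not actual ECMWF ensemble output:

```python
import statistics

# Hypothetical 120-h ensemble member positions (lat, lon) for two storms
low_pred  = [(28.1, -94.5), (30.2, -88.0), (26.7, -82.3),
             (29.5, -91.1), (27.9, -85.6)]   # "Joker"-like scatter
high_pred = [(29.3, -91.0), (29.6, -90.4), (29.1, -91.5),
             (29.4, -90.8), (29.5, -91.2)]   # tightly clustered

def spread_deg(positions):
    """Crude scatter measure: std dev of member lats plus lons, in degrees."""
    lats, lons = zip(*positions)
    return statistics.stdev(lats) + statistics.stdev(lons)

print(f"low-predictability spread:  {spread_deg(low_pred):.2f} deg")
print(f"high-predictability spread: {spread_deg(high_pred):.2f} deg")
```

A real assessment would of course use the full track files and account for timing differences, but the qualitative picture is the same: when the members cluster, confidence in the general track is higher.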

Hannah could end up as another Joker, from what I am seeing right now from the ECMWF ensembles.

How about a heads up for a poor boy at 24-46 N 081-07?😉 There is a lot of stuff happening in the area of Hanna. I don’t like that circulation to her east, and I want my H to hang around a bit longer. Needless to say, my Labor Day touristas are no-shows.

Suppose you are the sad soul who must choose between operating or closing a business on the US Gulf Coast. If the business is closed in anticipation of a storm then you lose $5 million. But, if the business is running and is hit by hurricane winds, then you lose $50 million. A wrong decision has consequences.

To complicate things, you must make your decision four days before the predicted landfall, as it takes time to secure operations. You need to make the correct guess about the storm far in advance of landfall.

What decision-aiding tools should you use?

* for-hire advisory services?
* model ensembles?
* one or several specific models with the best track records?
* US NHC forecasts?
* dice and bourbon?

This is the quandary faced by many people along the northwest US Gulf Coast this evening, and their cumulative decisions involve billions, not millions, of dollars. Several refineries have already “pulled the plug” on their operations, as it takes a number of days to stop and secure things.
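With the numbers in the scenario above, this is the classic cost-loss problem: closing costs C = $5M regardless, while staying open risks L = $50M, so the expected-loss-minimizing rule is to close whenever the forecast probability of hurricane winds exceeds C/L = 0.1. A sketch (the probabilities are illustrative):

```python
def expected_loss(p_hit, cost_close=5.0, loss_open=50.0):
    """Expected loss in $M for each choice, given P(hurricane winds)."""
    close = cost_close             # closing costs the same either way
    stay_open = p_hit * loss_open  # staying open risks the big loss
    return close, stay_open

threshold = 5.0 / 50.0  # close when P(hit) exceeds C/L
for p in (0.05, 0.10, 0.25):
    close, open_ = expected_loss(p)
    decision = "close" if open_ > close else "stay open"
    print(f"P(hit)={p:.2f}: close=${close}M, open=${open_:.1f}M -> {decision}")
```

The hard part, of course, is that four days out no forecast source hands you a trustworthy P(hit), which is exactly why the choice of decision-aiding tools matters.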

Thanks for your assessment of the different models. I moved to New Orleans around 10 yrs ago (with a hiatus due to Katrina) and have been following the models closely. I must tip my hat to the NHC for their discussions and their predictions. Their 72 hr projections these days are quite remarkable, but there are always surprises. My anecdotal observation is that the numerical models have the most difficult time with forecasting a turn from W to NW to N. It seems that much of the time, the models tend to predict this “turn” too soon. Has there been an analysis of this? Is this where the regional and global models intersect? In other words, would a regional model initialized off of the ECMWF be better, or is there still much to learn about the interaction of a tropical storm with mid-latitude systems?

For now, I am watching and waiting. Car filled and ready to go well north and east.

Dr. Curry,
I concur that the NHC does a remarkable job in its explanation and anticipation of movements & developments of hurricanes and tropical storms. Nevertheless, I am a little surprised that you describe Gustav as having “fairly high predictability.” The forecasts have consistently been for Gustav to move North and West, but twice it has taken turns to the south. Perhaps the movement will still be the North and Northwest that the models are predicting, but so far Gustav seems to have a mind of its own at times.

#18 I imagine Dr. Curry is talking about the whole path to landfall and not just some small piece of it. The goal surely is not to predict every little turn, but to get the overall direction correct. Seems to me you’re doing well if you can warn 3-5 days in advance which state is going to get the worst of it.

#11, I think I have a little different take on the interpretation of the mesoscale models HWRF and GFDL.

“So once you have a credible large scale forcing field, then you can use this to force a high res regional model like GFDL or HWRF or a statistical model. But the simulation will only be as good as the large scale forcing. For example, HWRF is run with GFS and NOGAPS forcing. In the case of Fay, the two simulations gave very different answers because of the different large scale fields predicted by NOGAPS vs GFS. Ideally you should be running an ensemble of the regional models to pick up the likely track/intensity scenarios.”

The initialization of HWRF and GFDL is step zero, and I believe for recurving tracks, like Gustav, can fatally degrade forecast quality almost immediately. The init of the storm has less to do with GFS or NOGAPS, which are roughly 30-50 km resolution, and more to do with bogusing or inserting some sort of credible interpretation of the storm’s structure at time zero (NOGAPS already does that with its own bogus, but GFS does not, so their analyses may actually not have a closed isobar.) Thus, I suggest watching the historical swaths and looking at the spin-up issues.

Same issue for the reanalysis products, which I believe are given way too much credibility when doing hurricane-climate composite analysis. Thus, for coarser model ensembles, garbage at time zero, garbage thereafter. I am gobbling up the ECMWF grids and when I get time, look forward to seeing how great it has been doing this season😉

Judith Curry failed to mention that it is not only the accuracy of the large scale model input that matters, but also the accuracy of the inflow/outflow boundary treatments in the limited area model. Those treatments are an ad hoc application of increased diffusion near the lateral boundaries that can completely alter the interior numerical solutions of the limited area model in a matter of hours, especially the large scale portion of that solution.

Also, there is necessarily a discontinuity between the physics (parameterizations) of a large scale global hydrostatic model and a limited area model that purports to resolve mesoscale structures that are not resolved by any global model. And in the case of mesoscale features, the heating is dominant, i.e. any error in the heating parameterizations is immediately transferred to errors in the vertical velocity w. References available on request.

I just saw your post. The correct initialization for the slowly evolving in time continuum solution, for the large scale alone and for the large and small scales in combination, is completely understood and documented. Given the vertical component of vorticity (v_x - u_y) and the total heating, all of the remaining flow fields can be determined. And with a proper mathematical treatment of the open boundaries, a mesoscale feature can be accurately recreated in the interior of a limited area model (Browning and Kreiss 2002 JAS; Fillion, Page, and Zwack 2005 MWR).

The obvious problems are that a large scale model contains neither the correct mesoscale vorticity nor the correct mesoscale heating. This causes a number of problems. Even if the mesoscale heating in the limited area model were accurate, the lack of the correct initial mesoscale vertical vorticity would lead to instant errors in the fluid motion. And the lack of understanding of the mesoscale heating compounds that error. Add to that the problem of discontinuities between the large scale boundary information and the small scale solutions that develop in the interior of the limited area model, and you obtain a numerical solution of any type you want.

#23, Jerry, I appreciate the references. I guess the only point I was trying to relay is that we have insufficient observations to adequately resolve a tropical cyclone in an analysis at high resolutions. Hence the manic efforts to use 4D-Var with HWRF, which are not blasting GFDL’s simpler scheme out of the water at all. So, I think my comments should be narrowed to say that this is more an intensity issue, which then cascades into track errors for recurving-type storm trajectories.

Ike has almost disappeared. Of course, the difference between the model forecast of Hanna in the long range and my futile imagination is close to zero. I just want, if I may, to follow the long range performance, which should be very poor.

Gerald Browning: global and limited area models have many issues, known and maybe unknown, and in the short range a good analysis has the main impact on performance. Dr. Curry’s point, as I understand it, was that using the global model with the best forecast of the synoptic features yields better performance from the LAMs.

Quick clarification: yes, it is more the boundary forcing for the regional models than the initial conditions that is of relevance in the context of the global models. Many of the regional models do some bogusing of the storm, since they don’t have high enough resolution data for the initialization.

Decision making re evacuation and other aspects of emergency management and response is very complex, and the tradeoffs made by individual and institutional decision makers will vary depending on what is at stake for the individual decision maker and its time sensitivity.

Two issues I am concerned about re Gustav. There are two “hot spots” in the gulf associated with the Loop Current. If Gustav spends any time over either, it could rapidly intensify to a Cat 4 or 5, and it also might increase in horizontal size/extent. We don’t really know in any detail what controls hurricane size, but I recall Katrina growing in size when it went over the Loop Current. Size means the storm surge will be worse, and a larger region will be impacted. Also, with greater size you get more flooding and tornadoes.

So we should be very worried by this storm, and I am very heartened by the much better preparations going on in LA this time. On the upside, we are desperate for rain in the SE US; Fay gave us some huge relief (but way too much water for FL, unfortunately). Lake Okeechobee is too high; if a hurricane were to go over Lake Okeechobee now, there is a risk of a “lake surge” that would devastate south Florida. Let’s hope that Hannah “behaves” and goes somewhere harmless.

“The most informative of the global models is ECMWF, at 25 km resolution for the control run (out to 10 days) and 40 km resolution for 50 ensemble members out to 15 days. The ECMWF ensemble runs are not publicly available (you need to pay alot of $$ to ECMWF to get them), and very few groups in the U.S. are actually using this.”

If true, that’s very sad and also very ironic.

“Lake Okeechobee is too high; if a hurricane were to go over Lake Okeechobee now, there is a risk of a ‘lake surge’ that would devastate south Florida. Let’s hope that Hannah ‘behaves’ and goes somewhere harmless.”

As of Aug 29, the WSE is 14.13 ft. According to this, that is still about 1 ft “short of its historical average.” But it would certainly seem high if Hanna is strong and makes a good hit. I think the 1928 hurricane that hit Okeechobee remains the 2nd deadliest natural disaster in the US.

The Corps targets the WSE to remain in the 12.5-15.5 ft range, so it’s at a pretty “normal” level right now. I am sure they’d like to lower it to at least the low end of that (or even lower) if Hanna is going to impact the area, but it will be interesting to see how low they try to and can get it.

Most of the decision-makers I know use the National Hurricane Center as their primary guidance. The NHC accuracy, though never perfect, is better than the for-hire private forecasters.

The maps of model projections are used by the decision-makers to get a feel for the uncertainty in the landfall. No one I know uses individual model projections. The other publicly available tools are seen as curiosities but not used, from what I’ve seen.

Judith, if the ECMWF forecasts are indeed superior, then ECMWF could greatly grow their client base by making real-time public forecasts of storm paths for several storms, to demonstrate their capability. Decision-makers are cumulatively making decisions worth hundreds of millions, perhaps low billions, of dollars, and better insight would be worth a lot to them.

Field outputs of the deterministic IFS-ECMWF are already available to NOAA-NCEP and, I think, to other global model institutions around the world. I’d like to remind you all that ECMWF is controlled by Meteo-France, the UK Met Office, DWD, etc…

It is perhaps no accident that almost all ECMWF fields are not free. European weather offices in general have adopted a restrictive policy regarding their own weather-related products. Anyway, other international institutions or universities exchange their products with ECMWF and vice versa.

Regarding the Ensemble Prediction System (EPS) products, I don’t know if they are part of an exchange agreement with NOAA.
At ECMWF, an operational limited area model EPS is run daily for Europe by the COSMO consortium.

David #31, someone correct me if I am wrong, but Europe doesn’t see hurricanes too often and probably does not expend the resources of NCEP, the Navy, etc. on specific TC forecasting. Instead, ECMWF puts the bulk of their effort behind the best global model and ensemble in the world, with state of the art data assimilation that is arguably unmatched in many areas. The accurate prediction of TC behavior (track, genesis) is, in my humble opinion, the by-product of their exceptional global model.

Here is the most recent Production Suite I could find from NCEP. This is basically what the computer is tasked to do for each 6-hour operational cycle. They keep adding blocks and shift stuff around as more operational products are provided by the government. I am sure there will be a square for Martian climate modeling very soon…

“Judith, if the ECMWF forecasts are indeed superior then ECMWF could greatly grow their client base by making real-time public forecasts of storm paths for several storms, to demonstrate their capability.”

The combination of ECMWF, Meteo-France and the UK Met Office made a big deal of their TC counts 2 years ago, going back through their complete history. They used an average of all three sources and felt they had outperformed the statistical models, and in fact made a big pitch for the dynamical models that they used. I based my TC count bet last year on an average of their predictions and that of Klotzbach and Gray. The UK Met Office announced their prediction before the season started (for 2007). ECMWF and Meteo-France have not yet announced their 2007 TC prediction results – which I think is weird and, of course, leaves my 2007 bet in limbo.

Kenneth, simulations of individual storms using the 15-day, 40 km resolution VarEPS ensemble prediction system provide a far better simulation of hurricanes than does the ECMWF seasonal forecasting system, whose resolution of about 125 km can only simulate the largest storms; its seasonal counts of total tropical cyclones need to be adjusted to climatology.

Hurricanes are not a focus of ECMWF since they don’t strike Europe, only striking a few European islands in the Caribbean. To my knowledge, there is one person at ECMWF working part time on hurricane stuff. Researchers or consulting firms who pay $$ for the ECMWF realtime forecast products are producing some very interesting results related to hurricanes, for which there is apparently no perceived advantage seen in making any such forecasts public.

Ryan is correct, any skill that ECMWF shows is a byproduct of an excellent model; few people are actually doing hurricane stuff with it.

The ECMWF control run is publicly available (the 25 km, 10 day forecast). I don’t think the ensemble forecast system (51 ensemble members) is made available to NOAA; I could be wrong, but NOAA would certainly be prohibited from distributing the raw model output from the ensembles.

There are a number of hurricane forecasters in the private sector providing customized products for a variety of applications that are not directly related to emergency management, including in the financial sector. People pay $$ for these forecasts, which would be unlikely if they did not have some value beyond what NHC produces. Some of these outperform the NHC, particularly on the longer time horizons that are of particular interest to the financial sector.

It is my opinion that hurricane forecasts could be much better than what is currently being provided by NHC. I don’t think there is anyone at NHC who wouldn’t like to see more research done to improve intensity models (especially rapid intensification), genesis, landfall impacts, etc.

The $$ hurricane forecasting services which I and acquaintances see regularly, which is most of them, do not outperform the NHC. I wish they did.

Clients pay $$, however, for two reasons. One is the hope that the $$ advice can help them in their $$$$$$$$$$$$$ endeavors – a relatively small expenditure for advice. The other is that the for-hire services do a good job of translating gobs of weather data into products which can be comprehended by laypeople.

I agree that dollars for greater hurricane research would be dollars well spent. We also need to spend more money for atmospheric sampling in the vicinity of storms, to help the models.

“Hurricanes are not a focus of ECMWF since they don’t strike Europe, only striking a few European islands in the Caribbean. To my knowledge, there is one person at ECMWF working part time on hurricane stuff. Researchers or consulting firms who pay $$ for the ECMWF realtime forecast products are producing some very interesting results related to hurricanes, for which there is apparently no perceived advantage seen in making any such forecasts public.”

ECMWF, as part of the Meteo-France and UK Met Office consortium, made some very strong statements in a paper about their TC forecast skill, as an example of dynamical models outperforming the skill of statistical models. If that part of the ECMWF effort is no big deal, then it is even weirder that they are hesitant to release their 2007 predictions – all the more so after making a big deal of the dynamical vs. statistical comparison.

Meteo-France has not released their predictions for 2007 either to my knowledge. What is their problem? If they are making money on their predictions and there are customers out there willing to pay then I can see delaying the release of their predictions until after the season, but delay after that does not make business sense.

Why did the Met-Office as a third member of the consortium release their predictions and before the season started? I am not sure that I have seen their 2008 prediction. Maybe someone here has seen it.

As I recall, the three members of the consortium had 3 years’ worth of out-of-sample results (and several years’ worth of in-sample results) up to year 2007, and therefore 2007 and 2008 will provide an additional 2 years, or 40%, of out-of-sample results. Not reporting the results makes me very suspicious, and will continue to until someone comes up with a rational reason for the delays.

The reason that the modelers use the 500 mb geopotential height anomalies is that geopotential height is the smoothest large scale field for slowly evolving solutions in time, i.e. the large scale geopotential is the solution of an elliptic equation. Additionally, the vertical component of the velocity w (which determines the coupling of the dynamics to other layers) is typically small near 500 mb, so errors in the vertical velocity have the least impact there. A better measure of error is the relative l_2 error of the vertical component of vorticity at all levels, especially near the jet stream, which contains all of the kinetic energy. This latter measure shows very clearly the increase in forecast errors in a very short period of time, and that is the reason it is not used (see Sylvie Gravel’s manuscript on this site).

“Jerry, I appreciate the references. I guess the only point I was trying to relay is that we have insufficient observations to adequately resolve a tropical cyclone in an analysis at high resolutions. Hence, the manic efforts to use 4d-var with HWRF, which are not blasting GFDL’s simpler scheme out the water at all. So, I think my comments should be narrowed to say that this is more an intensity issue which then cascades into track errors for recurving-type storm trajectories.”

I agree about the observational problem. But that is only one source of error. Errors in the total heating (parameterizations), in boundary value discrepancies, in the limited area model open boundary treatment, etc. can be just as much of a problem. We have carefully separated out the continuum issues (the balance equations satisfied by the slowly evolving solution in time when multiple scales of motion are present) from the numerical problems associated with the use of a limited area model. The real problems are the lack of accurate observations and the lack of understanding of the real forcing (heating).

Abstract
Correlations between geomagnetic-field and climate parameters have been suggested repeatedly, but possible links are controversially discussed. Here we test whether weak (Earth-strength) magnetic fields can affect climatically relevant properties of seawater. We found the solubility of air in seawater to be 15% lower under reduced magnetic-field conditions (20 μT) compared to normal field conditions (50 μT). The magnetic-field effect on CO2 solubility is twice as large, from which we surmise that geomagnetic field variations modulate the carbon exchange between atmosphere and ocean. A 1% reduction in magnetic dipole moment may release up to ten times more CO2 from the surface ocean than is emitted by subaerial volcanism. This figure is dwarfed by anthropogenic CO2 emissions.

David Smith:
While the NHC does a credible job overall, they are often way too slow to name storms and ramp up the intensity levels. Their track projections are pretty good short term but not so good out to longer ranges. If you think some “private” forecasters are not more accurate than the NHC, then you obviously do not have access to their projections. Energy interests pay people like AccuWeather BIG $$$ to be accurate and they are compared to the NHC. I “suspect” that Judith Curry has access to all the private modelling and forecasts and thus the reason for her comments.

Some very interesting research at U Miami using 1 km resolution WRF coupled to an ocean model is doing a very good job with rapid intensification. Apparently you really do need to get down to this scale to do eyewall replacement and all that (even 2 km is too coarse). NHC does not do a good job of predicting genesis/formation. Half of their invests never form, and they didn’t even have an invest on Hannah until the last minute, whereas we were calling Hannah a week before it actually formed; this one “screamed” loudly for formation.

It is a very challenging research problem to figure out how to apply the regional models effectively in an operational environment. To me, track and intensity are inextricably coupled, and you can get a totally wrong intensity if the large scale forcing is wrong and you end up with the wrong track.

Michael, some of the private companies make very good forecasts, but I wouldn’t put accuweather in that category

Why did Katrina get so huge?
We really don’t understand why Katrina got so huge, though an interesting theory was provided by Pat Fitzpatrick of Mississippi State University at a recent hurricane conference. Here’s the technical gist:

Katrina nearly doubled in size on 27 August, and by the end of that day tropical storm-force winds extended up to about 140 n mi from the center. A cursory examination of satellite imagery shows the possible influence of a trough or confluence zone to the north that may have contributed angular momentum to the intensifying cyclone.

Although the rapid intensification of Katrina was noteworthy, the expansion of the tropical storm-force winds is the key forecast issue. The devastation wrought by this storm upon landfall is attributable more to its size rather than its intensity, as it landed as a Category 3 hurricane. This large hurricane caused a record storm surge and exposed the coastal regions of Louisiana and Mississippi to hurricane-force winds for an extensive period of time.

Observations, as well as a Weather Research and Forecasting (WRF) model simulation, suggest that an influx of vorticity associated with a remnant front near north Florida contributed to the wind field expansion.

Basically, Dr. Fitzpatrick is saying that satellite observations and computer modeling studies suggest that Katrina got extra spin that helped it grow in size by ingesting a portion of an old front that had stalled out over northern Florida. As Gustav approaches the U.S., we should be on the lookout for similar clumps of clouds with some extra spin that the hurricane could use to help grow in size.

Again, my knowledge of private $$ forecasters is either first-hand or via people who have such access.

Rather than go ’round and ’round on this I’ll leave it by saying that any private-forecaster claim of greater insight or forecasting skill needs to be backed by properly-analyzed data. This is not unlike what is needed from financial forecasters. Private forecasters certainly have incentives to provide such data, if they can.

Good point, David. If a business concern interested in making profits for its owners were making storm predictions that were more skillful than others, it would be very surprising that its record would not be made public in the process of using it to promote more business. On the other hand, if such a claim could not be shown, I would expect the claims to be made without any or little public evidence and the claimers being rather coy about revealing any evidence and why they do not.

Judith Curry:
While he can clown around too much at times, Joe Bastardi at AccuWeather is very good in his Hurricane forecasts. His forecasts for development, track, intensity and landfalls are archived on their Premium site and anyone who has access can verify this for themselves and don’t just have to take my word for it. He has been remarkably ahead of the curve on most storms this season and deserves kudos for it. There are other very good private forecasters as well for those that take the time to check their prediction records against the end results. It is kind of disturbing that the ECMWF and UKMET tend to be much better than the US models in their forecasts for Hurricanes because that is not their primary focus.

I agree with Judy #52. AccuWeather is a private, for-profit (what little profit they can muster) company, and its business model does not require verification of 3-5 day forecasts like those the NHC issues. Bastardi’s experience and longevity in the forecasting game are undoubted; however, his inklings and intuition are sometimes hyperbolic. I have seen too many of Bastardi’s disaster scenarios that never panned out. But if you can provide screenshots or a multitude of brilliant forecasts, I will gladly stand corrected and give the soothsayer his due.

ECMWF and UKMET have superior forecasting systems, period, but let’s also not forget other models around the world: the Japanese have an excellent model, and the Aussies’ BOM model (a derivative of the UK Met model) is very good as well. NCEP, the Navy, and the Canadians are definitely behind the curve…

Ryan, I don’t recall the GFS doing all that well at either track or intensity. The GFDL seems to be working better, IMO, and has been all year. GFS is often the outlier among the various models, so I don’t know that I would put too much stock in its forecast.

Here’s a site that I find useful. Tropical Atlantic. You can overlay about 30 different models over Google maps, zoom in/pan out. Also works with Google Earth if you have that. You can create quite a spaghetti map if you select all of them. Interesting to see all the various model results though. Surprising that they don’t have GFS/UKMET on their selection. They do have GFDL, the various BAMM, NOGAPS, LBAR, and CLIP5, and some others I’ve heard of.

If anyone has a better site that can generate spaghetti graphs with more models, please let me know.
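For anyone who wants to roll their own, here is a minimal sketch of what a spaghetti map involves. The model names and track coordinates below are hypothetical placeholders, not actual Gustav guidance, and the plot is only drawn if matplotlib happens to be installed:

```python
# Hypothetical forecast tracks, one list of (lon, lat) points per model.
tracks = {
    "GFDL":   [(-80.0, 22.0), (-84.5, 24.5), (-88.0, 27.0), (-91.5, 29.5)],
    "LBAR":   [(-80.0, 22.0), (-84.0, 25.0), (-87.0, 28.0), (-89.5, 30.0)],
    "NOGAPS": [(-80.0, 22.0), (-85.0, 24.0), (-89.5, 26.5), (-94.0, 28.5)],
}

# Spread of the final forecast positions, a crude measure of track uncertainty:
final_lons = [pts[-1][0] for pts in tracks.values()]
spread = max(final_lons) - min(final_lons)
print(f"Final-position longitude spread: {spread:.1f} degrees")

try:  # draw the spaghetti if matplotlib is available
    import matplotlib
    matplotlib.use("Agg")  # off-screen rendering
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    for name, pts in tracks.items():
        lons, lats = zip(*pts)
        ax.plot(lons, lats, marker="o", label=name)
    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")
    ax.legend()
    fig.savefig("spaghetti.png")
except ImportError:
    pass
```

The longitude spread at the end of the forecast period is a quick one-number summary of how much the "spaghetti" disagrees.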

Interesting, the divergence between the ECMWF and the GFS. I just viewed animations of potential tracks from both solutions. The ECMWF takes Gustav pretty much over Beaumont, TX and straight up towards Dallas. For Hanna, it has it hitting south FL and then turning N and hitting the FL panhandle or AL area. The GFS takes Gustav towards a central LA landfall, up along the LA/TX border, then curves to the NE. The GFS turns Hanna towards SC but doesn't go out far enough to show a landfall or a curve out to sea.

I am more than a little surprised that this discussion of who can predict what better has not referenced any objectively and statistically measured predicting skill. Perhaps after the excitement and anticipation of what Gustav will do, someone will provide something more appropriate for a scholarly discussion of the matter. In the meantime, this layperson is of the opinion that Gustav could build into something awfully destructive by landfall.

I have provided very mathematical references as to why limited-area models vary so dramatically. I have stated that the measures of accuracy used by modelers are very deceptive, along with the reasons they use those measures. If proper relative mathematical norms are used, the cause and growth of errors becomes completely transparent, as seen in Sylvie Gravel's manuscript. Of course that manuscript, even though coauthored with an employee of the Canadian Meteorological Center, was not accepted for publication, for obvious reasons.

Re #62 (Kenneth Fritsch): I am a bit curious about proof of the "secret" models. At the risk of being snipped, the captdallas2 model says Galveston cat 5+. Depending on the final result, captdallas2 models will be available at exorbitant prices in the near future. God knows it is the cash, not the common good.

Thanks for reposting the link to Sylvie's manuscript. The draft manuscript does not contain any heavy mathematics and yet makes clear the origins of the dominant forecast errors, namely the lack of observational wind data and the inaccuracy of the parameterizations (the boundary layer parameterization being the largest contributor). It also shows that the periodic updating of wind data is what keeps the forecast models on track (not the parameterizations). This manuscript (backed up by earlier cited mathematical manuscripts) should be beginning reading for anyone interested in numerical forecast or climate modeling. And hopefully discourage any person from going into this highly suspect area of "science".🙂

This is a rather vague statement. If you look at the absolute errors of the different models on the Canadian Meteorological Center's web site, you find that indeed the ECMWF model has the smallest absolute errors. But the relative errors in the winds exceed 100% for all models very quickly. Could that be the reason that relative errors are not shown?
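To make the absolute-versus-relative distinction concrete, here is a toy calculation (all wind vectors are invented for illustration): the same size of absolute vector error that looks modest in a strong flow becomes a relative error well over 100% in a light flow.

```python
import math

def wind_errors(forecast, observed):
    """Absolute and relative error of a forecast wind vector (u, v in m/s)."""
    du = forecast[0] - observed[0]
    dv = forecast[1] - observed[1]
    abs_err = math.hypot(du, dv)
    rel_err = abs_err / math.hypot(*observed)  # relative to observed wind speed
    return abs_err, rel_err

# Strong flow: a ~4.5 m/s vector error is about 23% of the observed wind.
a1, r1 = wind_errors((20.0, 3.0), (18.0, 7.0))
# Light flow: a 5 m/s vector error is over 350% of the observed wind.
a2, r2 = wind_errors((4.0, -3.0), (1.0, 1.0))
print(f"strong flow: {a1:.1f} m/s ({100*r1:.0f}%)")
print(f"light flow:  {a2:.1f} m/s ({100*r2:.0f}%)")
```

Light winds are exactly the regime where steering currents are weak and hard to measure, so the relative error can blow up even when absolute error statistics look respectable.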

David et al., the people making the really good forecasts are not trying to promote them to the public or to other scientists or to other customers or in the blogosphere, but rather have a few exclusive high paying clients. So it is unlikely that you would have come across any.

Jonathan, in terms of damage, size is arguably as important as intensity, since size influences wind damage, storm surge, total rainfall, and hurricane-induced tornadoes. We have started to look at this issue to see what kinds of large-scale signals there are (rather than random subgridscale processes). We see some hints in the midtropospheric humidity field. But this is a topic that I think deserves much more research.

Re #67 (Judith Curry): I do love this paste link thing. If I were writing a SciFi novel about the upcoming Apocalypse on mankind and needed an antagonistic character I might consider Julie Kurry. Totally fictitious of course, but Julie Kurry had information that could have saved the lives of millions. The millions of lives that could have been saved were not the lives considered relevant for productive procreation. To solve the world's problems, gene pool considerations had to be made. Following the massive loss of lives, the following press announcement was made:
The loss of graduates of Auburn University is an unfortunate but inconsequential event in the development of mankind; we regret their demise, but will move forward.

David et al., the people making the really good forecasts are not trying to promote them to the public or to other scientists or to other customers or in the blogosphere, but rather have a few exclusive high paying clients. So it is unlikely that you would have come across any.

Let us analyze what you have said. You seem to know that somebody is making really good forecasts and has customers paying lots of money for them. No disrespect, but I am assuming that it is not you paying a lot of money for these forecasts, nor any institution to which you are connected. If those selling these forecasts are making lots of money doing it and have a major interest in keeping it a secret, I doubt rather seriously that they would casually allow a college professor any inside information, even though she is a renowned climate scientist, and more likely they would withhold it precisely because she is a renowned climate scientist.

I can only conclude that your knowledge comes as one who is part of making these forecasts — or alternatively, you have figured out the secret handshake.

FWIW, Judith’s statement does not seem unreasonable to me. I would think that futures traders, insurance companies, and oil companies might be high paying clients. It would certainly be informative if Judith added specifics as my curiosity is piqued.

Re #75 That should be 300mb vs vortex motion, I think, not versus 850mb. It looks to me, based on GFS, that there’s a 15 knot or so southerly flow towards the center of Gustav at 300mb, while the 200mb level has less shear. It’s hard to say that with confidence, though, based on the coarse resolution of the maps.

Assuming the current track, my guess is Gustav will not become a CAT 4 again. It's borderline CAT 2/3 right now. There is a cold current loop it will pass over, and wind shear of 10-15 knots is forecast to increase tonight to 15-20. In addition, with the shallow waters nearer the coast, upwelling will reduce the heat content available. I'd say it will be no more than CAT 3 at landfall. Still a significant storm, but not as bad as it could have been.

Current track is very similar to 1965 Betsy, the first billion dollar hurricane, which overwhelmed NO levees and caused extensive flooding.

David, it looks like your bet may be panning out at this point. Central LA looks to be the landfall.

FWIW, Judith’s statement does not seem unreasonable to me. I would think that futures traders, insurance companies, and oil companies might be high paying clients. It would certainly be informative if Judith added specifics as my curiosity is piqued.

Pete, it has been my view of this discussion that we are talking about parties paying big money for forecasts that track the path of potentially destructive TCs along with estimates of their size and wind speeds, i.e. destructive powers.

I have not been able to determine why insurance companies would be that interested in these shorter term forecasts, as I would think that their bread and butter is related more to determining the longer term risks. Perhaps they need forecasts in the shorter term to make plans for putting their agents and adjusters in place and to efficiently free up cash for payouts.

Those who play the futures market could have a rational interest in the short term forecasts with hopes of making some favorable trades during the life of a TC assuming, of course, they have some contrary nonpublic information to bet against the crowd. I am not sure how this information would play out risk-wise and why either insurance companies or futures investors would want to make their bets on the future by restricting themselves to a single private forecaster or a few forecasters. It would seem that the best situation for these entities would be to have forecasters competing in plain view against one another for their business.

Oil companies with rigs in the Gulf of Mexico and offshore elsewhere would, of course, have an interest in the short term tracking of TCs, but I would think that one would have to understand the amounts of money at risk of destruction, the cost of precautionary measures, and how much the precautions actually employed would change given the detailed differences among the available forecasts.

I would be very interested in determining how these entities use the short term forecasts and the importance they have to their profitability as I suspect the profit motive will eventually bring the best forecasts (if forecasting differentiation is possible) to the fore.

I would also have to see some out-of-sample statistical results for these forecasts. It would be my guess that the forecast skill of any of these prediction models would be so sparse to date that one could not determine whether the more successful ones are not just lucky to this point.

I think what I am hearing from Judy C and Ryan M is that in their view of the competing models, the ones they consider superior are those that have the greater resolution and computing power. Of course, we then have the warnings of Gerald B who points to general limitations of these models which might in the end indicate little predicting skill for any of the models.

Without the statistical testing of prediction skills much of what I hear seems relegated to cheering for the home team much as I might declare that I am a Cubs fan and Bud man – well at least a Cubs fan and at least for right now – until they break my heart again.

Re #77 Jonathan thanks for the link. I’m looking for something which would show the mid-levels, which I believe is harder to measure. I do wonder if mid-level shear is a significant issue and one to add to the collection of questions on how AGW affects tropical cyclones.

David, it looks like your Houston bet may be panning out at this point.

I must confess something. Yesterday, while shopping for storm supplies, I was approached by guys in European dark suits and shades. They identified themselves as (Weather)Men-in-Black, who knew Gustav’s future.

They also knew I couldn’t afford their services so they offered to tell me the future path of Gustav in exchange for a kidney. I thought they meant the cow kidney sitting in my shopping basket so I said great, no problem, go for it.

David, are you sure those gentlemen were not in disguise and actually an older American and a younger one with a baby face? I have heard rumors that those gentlemen post very public forecasts for the public and then provide the real deals to private parties for big money. I also heard that when asked why they would sacrifice their reputations as academics with these antics, they both were heard to say something to the effect of, "how many mansions in the Colorado Rockies will a professor's salary buy you?"

Increased resolution from more computing power is only helpful when the truncation error from the numerical approximation is greater than the errors from other sources. Clearly the numerical error (when a model correctly treats the open boundaries) is not the real problem when there are O(1) errors in the initial conditions, the boundary conditions, and the parameterizations.
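A toy error budget illustrates the point (the constants are invented): halving the grid spacing keeps shrinking the O(h^2) truncation term of a second-order scheme, but the total error soon flattens out at the fixed contribution from the physics and the initial data.

```python
# Sketch: why finer resolution stops paying off once other error sources dominate.
# Assume truncation error ~ C * h**2 (second-order scheme) plus a fixed O(1)
# error from initial conditions, boundary conditions, and parameterizations.
C = 10.0           # hypothetical truncation-error constant
fixed_error = 0.5  # hypothetical O(1) error from physics/initial data

for h in [1.0, 0.5, 0.25, 0.125, 0.0625]:
    truncation = C * h**2
    total = truncation + fixed_error
    print(f"h={h:<7} truncation={truncation:8.4f}  total={total:8.4f}")
```

By the last refinement the truncation term is under 0.04, yet the total error is still stuck near 0.54: the extra computing power is buying almost nothing.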

My initial search of the internet for skill testing of the various models forecasting TC tracks and intensities has revealed to me that the error bars for forecasted TC tracks have been shrinking steadily over the years, but that intensity forecasting has not done as well. I could only find a graph showing this historical trend up to 2002 (see below). That the trend of improvement for tracks but not intensities has continued seems apparent in reading reviews of more recent developments.

Subject: F6) How accurate are the forecasts from the National Hurricane Center?

Contributed by Chris Landsea and Miles Lawrence

The National Hurricane Center (NHC) issues an official forecast, every six hours, of the center position, maximum one-minute surface (10 meter [33 ft] elevation) wind speed (intensity), and radii of the 34 knot (39 mph, 63 kph), 50 knot (58 mph, 92 kph), and 64 knot (74 mph, 117 kph) wind speeds in four quadrants (northeast, southeast, southwest, and northwest) surrounding the cyclone. The NHC has been issuing predictions for the forecast periods of 12, 24, 36, 48, and 72 hours since 1964. Forecasts for 12 and 24 hours were first issued in 1954. In 2003, the forecasts were extended and now include 96 and 120 hours. All official forecasts are verified by comparison with the "best track", a set of six-hour center positions and maximum wind speed values that represents the official NHC estimate of the location and intensity of a tropical cyclone. A best track is prepared for every tropical cyclone, after the fact, using all available data.
From this I would conclude that forecasts are updated with real data and it is perhaps the availability and quality of these data that can improve tracking over time, but intensity changes being less well understood (we have discussed sudden changes in hurricane intensities on other threads at CA) are less susceptible to improvement with real data.

The important element in choosing a given model to stake one's risk-taking on would, in my mind, be the differentiated skill that a given predicting model can show consistently over time. The link and excerpt below would appear to argue for using a combination of forecasts as opposed to a single one. Maybe there are entities that would pay big bucks for one or more forecasts to add to those they get free, with hopes of improving their consensus forecasts; of course, it could also degrade them.

Consensus forecasts (forecasts created by combining output from individual forecasts) have become an integral part of operational tropical cyclone track forecasting. Consensus aids, which generally have lower average errors than individual models, benefit from the skill and independence of the consensus members, both of which are present in track forecasting, but are limited in intensity forecasting.

Re 385 Gustav’s outflow and radar presentation are also improving. The angled landfall in central/southwest Louisiana looks like the most likely outcome. Round two of evacuees will arrive at the house after midnight, getting crowded but it’s a good chance to visit.

Re #84 Kenneth, I have a couple of poorly-formulated thoughts on the intensity graphic from the TPC.

One is that I wonder if there is a way to scale the typical error to the typical change in windspeed. If a typical storm changes windspeed by 30 knots in 24 hours and the error is typically 10 knots, that's a pretty good record. But if the typical change is 5 knots and the NHC misses it by 10 knots, then that is not so good.

Two, the tools for measuring windspeed have improved over the last 30 years. In the past they forecast windspeed and then measured/estimated it. That measurement/estimate was part guess, and if the guessers expected a windspeed of X knots then they may have tended to estimate a windspeed of X knots. Today, though, the measurements are more accurate and there's less room for subjective influence.

As mentioned these are poorly stated but perhaps the gist is apparent.

I think Gustav got disrupted over Cuba right at a point where one might have expected eyewall replacement. It appears that this double whammy has about run its course, with a new eyewall forming rapidly now. I suspect that the hole to the NE in the cloud shield is the remnant of the old vortex that got displaced, as during the day today it seemed that there might have been competing vortices at the center of this mess. Effectively, I think the eye has reformed southwestward, responding to the loop current to its west. This has important implications for the track, as I see a growing westward component and a period of parallel motion relative to the LA coast. I don't think we get to a TX landfall, but it wouldn't surprise me. The TPC is amusing with its reference to 20 knots of southerly shear. This puppy is booking northwest at 18 and the so-called shear can barely keep up with the circulation. 930 mb landfall west of Morgan City, LA.

hswiseman, I went 'back in time' using the facilities here: http://www.rap.ucar.edu/weather/satellite/ and the system appeared to hold together until *after* it had crossed Cuba; then the 'disorganization' set in.

From the NOAA NWS National Hurricane Center accessed thru the above site

Dynamical models are the most complex and most computationally expensive numerical models utilized by the NHC. Dynamical models make forecasts by solving the physical equations that govern the atmosphere, using a variety of numerical methods and initial conditions based on available observations. Since observations are not taken at every location around the world, the model initialization can at times vary tremendously from the atmosphere, and this is one of the primary sources of uncertainty and forecast errors within dynamical models. Errors in the initial state of a model tend to grow with time during the actual model forecast; therefore small initial errors can become very large several days into the forecast. It is largely for this reason that forecasts become increasingly inaccurate in time.

And the errors from the initial data are only one component in the total error as I have discussed above.
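A classic toy demonstration of that error growth is Lorenz's 1963 system: two runs that start a millionth apart in one variable end up in completely different states. The crude Euler integration, time step, and duration here are arbitrary choices for illustration, not anything a real forecast model does.

```python
# Tiny initial-condition errors grow during the forecast (Lorenz 1963).
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)   # "observation error" of one part in a million
err = 0.0
for step in range(4000):     # 20 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step in (0, 1000, 2000, 3000, 3999):
        err = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t={(step + 1) * 0.005:5.2f}  separation={err:.3e}")
```

The separation grows by many orders of magnitude before saturating at the size of the attractor, which is the qualitative behavior the NHC text describes.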

I thought you also might find this bit of discussion on the above site amusing.

The National Hurricane Center does not generate a graphic of the guidance models it uses to produce its forecasts. Such graphics have the potential to confuse users and to undermine the effectiveness of NHC official tropical cyclone forecasts and warnings. NHC’s Tropical Cyclone Discussion product, issued with each advisory package, contains a description of the reasoning behind the official forecast, including discussion of the specific models considered in the preparation of the official forecast. Some users may desire more information on the characteristics of these models, and for their benefit we have prepared a Technical Model Summary.

I have been watching the buoys in the gulf and it looks like the storm is continuing to weaken. AccuWeather is forecasting that Gustav will be a CAT-2 or at best CAT-3 at landfall. The eyewall is passing between the deep water gulf buoys right now (11:03 PST) and it is not looking anywhere near as strong as Katrina did.

Gustav with the hurricane’s eye well-offshore, shortly before ‘disorganization’ set in:

0402 UTC 31 Aug 2008

Notice the extremely cold cloud top SW of the eye depicted in this image; from this point forward the storm had this artifact present, with a weaker area (less of a high-level cloud mass) to the NE of the eye in later IR images.

Thanks, Bob Koss, for the link. I will read it over carefully in the near future. In the meantime, I extracted the following comment from the 2007 review.

NHC official track forecasts in the Atlantic basin set records for accuracy from 36-96 h in 2007. They beat or matched the consensus models at most time periods, but generally trailed the best of the dynamical models. Examination of trends suggests that there has been little net change in forecast skill over the past several years. Among the consensus models, CGUN (the corrected version of GUNA) performed the best overall.

The GFSI and UKMI/EGRI provided the best dynamical track guidance, while the GFDI and NGPI performed relatively poorly. The performance of the EMXI in 2007 was mediocre.

My interest in predictive skill has gone beyond a comparison of models to an attempt to estimate the practical value of any additional skill a particular model or average of models might produce over its rivals.

Re: #88

David, I think you make some good points here. From a layperson's view I subjectively see the models performing better of late on predicting the tracks of the TC path (to landfall) than on forecasting the intensities (to landfall). That, of course, does not mean more objective data would indicate the same. If one were attempting to determine the importance of a 10 knot difference between a level of 100 knots and a level of 40 knots in terms of possible damage, I guess looking at energy and PDI, with the resulting squaring and cubing of velocities, the 100 to 110 knot jump would be more important than the 40 to 50 knot jump – at least on an absolute scale, I think?
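A quick back-of-envelope check of the squaring and cubing point (speeds in knots, everything else stripped away): kinetic energy scales as v^2 and the power dissipation index integrand as v^3, so the same 10-kt jump matters far more at high wind speeds, at least on an absolute scale.

```python
def jump(v_low, v_high, power):
    """Absolute increase of v**power going from v_low to v_high."""
    return v_high**power - v_low**power

for power, name in [(2, "v^2 (energy)"), (3, "v^3 (PDI)")]:
    weak = jump(40, 50, power)      # 40 -> 50 kt
    strong = jump(100, 110, power)  # 100 -> 110 kt
    print(f"{name}: 40->50 kt adds {weak}, 100->110 kt adds {strong} "
          f"({strong / weak:.1f}x larger)")
```

On the v^2 scale the high-end jump is about 2.3x larger, and on the v^3 scale about 5.4x larger, consistent with the intuition above.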

This is an attempt to get the post linky thingy right and to comment that my view of the forecast track you gave, David, would indicate that the forecast was quite good. I am probably missing something in both of these cases.

Well, Gustav came ashore this morning as a CAT-2. Not to minimize the potential impact of that, but it was significantly different from what was forecast, at least by some outlets. I heard last night CAT-3 at least, and some were forecasting CAT-4 at landfall. I told my wife last night these people had no idea what they were talking about and that it would be a strong CAT-2 or weak CAT-3 at best at landfall. I have no skill at prediction of course, it was just based on many of the things I have read. The news outlets I was watching on tv never indicated who their sources were but kept hyping the CAT-4 thing all night. I finally was so disgusted I changed the channel.

NO seems to have dodged a potential catastrophe again, fortunately. Hopefully the damage will be minimal along the path.

Gustav never seems to have really reorganized itself properly after its traversal of Cuba. At one point there was some strengthening, but then it appears that some wind shear on the south side, a missing chunk of the eyewall, and traversal of a cold pool eddy in the GOM prevented it from fully restructuring. Then, as it approached land, the water became much shallower with lower heat content, and the storm also began to induct dry air from land, which also reduces strength if the forward speed is slow enough to allow it to wrap some in.

I’m interested to see the final path. Yesterday’s model runs were beginning to coalesce around turning to the SW after moving into TX, but now appear to take it into E TX and then to the NE.

I've seen about 30-40 different TC models that get published. Some, of course, are just slight variations of themselves, initialized with different parameters and such. That said, is there any site which provides statistical analysis of the performance of all these models, on an individual storm basis, total storm basis, etc.? I'm talking more about track than intensity. For example, the CLP5 model for days was showing a landfall in AL or the FL peninsula for Gustav. Clearly, it was an outlier. But I've seen this same model year after year be very wrong in its path prediction and wonder what value there is in even running it. There are a number of other models as well where I have noticed the same bad estimation year after year. In my opinion, running them and providing them to the public just leads to more potential confusion as to the expected track, when compared to others such as GFDL, HWRF, etc. Of course, any individual model can be wrong at any time, but some are a lot closer than others on a much higher percentage basis, at least over a shorter time period, and often over a 3-5 day best track estimate.
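For anyone wanting to score the models themselves against the best track, the basic ingredient is just the great-circle distance between a forecast position and the verified position. The model positions below are hypothetical, not actual Gustav forecasts:

```python
import math

def track_error_nm(fcst, obs):
    """Great-circle distance in nautical miles between a forecast position
    and the best-track position, each given as (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*fcst, *obs))
    # Haversine formula
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(a))  # Earth radius ~3440 nm

# Hypothetical 72-h forecast positions vs a best-track point near the LA coast:
best_track = (29.5, -91.5)
forecasts = {"GFDL": (29.8, -92.0), "HWRF": (29.2, -90.8), "CLP5": (28.0, -86.0)}
for model, pos in forecasts.items():
    print(f"{model}: {track_error_nm(pos, best_track):6.1f} nm")
```

Averaging such errors per model, per lead time, over many storms is essentially what the NHC verification reports tabulate.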

Re: Kenneth Fritsch (#106), and Re: tetris (#107), the forecast track, after a little faux pas at Haiti, was quite good. The intensity estimates were not very good. The end result, especially the oil price, is very nice. Now if they can just figure Hanna out. The NHC and ECMWF are way off.

Gustav came ashore as a Cat 2, putting an end to some pretty bad forecasting and a lot of hype. As a reality check it is interesting to note that crude immediately dropped by US$4.50 and is now trading at US$110, the lowest value in the past 8 weeks.

I'm not clear on where the forecast fails on unpredictable events and where the models need to be better. It seems to me that a cat 4/5 storm with very good symmetry hitting an easily modeled land mass in Cuba, wobbling a bit left into the mountain range, becoming asymmetric and never recovering should have been a more predictable outcome than it was. Perhaps everything except the never-recovering part; but shouldn't the degree and nature of the asymmetry have been more predictable?

#107, the increased price of oil/commodities was factored in last week on whatever expectation of supply disruption in the Gulf due to Gustav. Thus, we already paid for Gustav, whether it did any damage or not. However, the "evil" speculators are now factoring in the end of the summer driving season, Gustav's small impact, and the global economic slowdown (although US GDP grew at 3.3% last quarter) and are betting DOWN the prices. Gas prices should be under $3.00 in some parts of the US very shortly based upon the wholesale market.

In terms of forecast bias, as a very “crude” way of seeing how GFDL and HWRF have done with Gustav, I plotted up the forecast locations for the previous 54-96 hour forecast cycles. GFDL Gustav 0901 12Z landfall location matrix and HWRF equivalent. You can tinker with the address bar and type in different dates to see previous forecast cycle matrices.

But I've seen this same model year after year be very wrong in its path prediction and wonder what value there is in even running it? There are a number of other models as well that I have noticed the same bad estimation year after year.

Maybe like an ensemble of climate models, perhaps having a TC forecast model outlier(s) provides such a wide range that the ensemble range can never be wrong. Further if the outlier is always biased in one direction and average of the remaining models in the other direction, well, then you can see the dilemma of removing it. I don’t do smiley faces.
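The consensus point is easy to illustrate with made-up numbers: individually biased models sitting on opposite sides of the verified landfall can average out to a consensus that beats every member, which is exactly why dropping a consistently offset outlier is not automatically a win.

```python
# Hypothetical 72-h landfall longitudes (deg W) vs a verified landfall at 91.5 W.
verified = 91.5
models = {"GFDL": 92.8, "HWRF": 90.4, "UKMET": 92.1, "NOGAPS": 89.9}

individual = {m: abs(lon - verified) for m, lon in models.items()}
consensus = sum(models.values()) / len(models)   # simple equal-weight consensus
consensus_err = abs(consensus - verified)

print("individual errors:", {m: round(e, 2) for m, e in individual.items()})
print(f"consensus landfall: {consensus:.2f} W, error {consensus_err:.2f} deg")
```

With these numbers the consensus error is smaller than the best individual member's, because the members' biases partly cancel; of course, if the errors were all on the same side, averaging would not help.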

DISCLAIMER … THESE DATA ARE PROVIDED AS GUIDANCE. THEY REQUIRE INTERPRETATION BY HURRICANE SPECIALISTS AND SHOULD NOT BE CONSIDERED AS A FINAL PRODUCT. PLEASE SEE THE TPC/NHC OFFICIAL FORECAST.

We need your help. I am a bit confused, though, about whether your expertise is primarily TCs or also includes commodity pricing. I agree with your analysis and would add that a worldwide economic slowdown would do more to reduce CO2 emissions than any climate treaty like Kyoto, primarily because it takes it out of the politicians' control.

Ken, #114, The disclaimer is the generic README file included in every batch of GFDL/HWRF produced by NCEP. As links to various forecast cycle animations were thrown around the internet, I definitely did not want anyone in the public to think of them as Official Forecasts and make decisions based upon them. Something about liability…

Huh? You need my help? I follow the commodities markets very closely and from the IP addresses that ping the FSU domain everyday, I know that the commodities markets follow tropical cyclone activity very closely. I only claim to be an expert pinochle player.

A summary of the various models, including the climatology plot ("CLP5"), is here. I haven't seen a full summary of their performances, but I know that several get a lot of respect from the experts. Personally, I'm more interested in the spread of the paths (after tossing out the BAMs, CLP5, etc.) than in any particular model.

At landfall Gustav decided to make a quantum jump and show his strength right over me. I was busy (and excited) due to an explosive thunderstorm (with water inside and outside, branches and leaves), which you can see (if you like) here. After Gustav, let's focus our attention on Hanna. The last IFS-ECMWF run (20080901 12) insists on Hanna getting to longitude 50 W in front of the northern Florida coast and then turning to the north. Ike follows…

I’m leaving, going to the very central Med for my end of summer vacation. If you want to check the ECMWF forecasts for the next two cyclones go here. Good luck to you all living on the shores of the tropical Atlantic.

Re 3120 As stated, the plot shows the actual and forecast values for one time a day (09Z), not every six hours. As such it misses observations. I’ll add the 21Z observations/forecasts to the plot and we’ll see how it looks.

I’m still not sure if the model of land interaction is adequately handling the possibility of major disruptions in symmetry (the first “failed” Gustav prediction). I don’t have satellite pictures, but perhaps Dennis had better symmetry coming off of Cuba. The second prediction of Dennis or Gustav restrengthening depends on the emergent form along with all the other surrounding conditions (shear, water temperatures, etc). The second prediction will probably fail if the first one does. As David depicts in #123, the prediction of strengthening from the 130 knot peak is a Gustav failure (but would have been a Dennis success).

Joe Bastardi on record on O’Reilly’s program (Bastardi shows up on Fox regularly) talking about Hanna being a major hurricane on the scale of Floyd or at worst Hugo hitting South Carolina Fri or Saturday. Let’s just say, he was very confident in his forecast…Pretty decent outlay of the facts, and I pretty much agree with him.

Rotten timing and worse trajectory for those just back in New Orleans. The truth is — this city is part of the triage decisions on climate we are already making. Triage is applied in an emergency to allow the most globally beneficial use of inadequate resources. There will be severe climate disruptions, which will be left untreated because they will be recognized as able to recover autonomously. Selected climatically-induced emergencies where tax-payers’ money can reduce suffering will be funded. Last, and most sadly, there may be even situations where unlimited funds cannot reverse impacts and the limited funds are deemed better deployed on other projects. See http://www.climatechangetriage.net

#126, Gil, you definitely need to work for some sort of non-profit or "non-partisan" think tank, because your phraseology clearly demonstrates your ability to take an unfortunate event like Gustav and misrepresent fairly elementary things like "weather" and "climate". Take your advertisements to the bulletin board.

Let’s not let the facts interfere with the analysis! :>
Thanks for the info and running the time warp machine. I think the endgame story is probably dry air infiltration at mid-levels, which may have had its origin in the subsidence region between Gustav and Hanna.

General note on tropical track models. As storms approach landfall there are numerous additional observation-based datapoints to work with (Thank You, Hurricane Hunter Crews!!). All the models use these in their initialization, and hence they generally converge as landfall approaches. Poor initialization equals poor model "skill", and it seems all the models take turns being lousy.

Capt: Continuing with my theme of mountain-hurricane interaction, if Hanna does reach Haiti/DR it will get torn to bits by the mountains and there won't be anything left to move north. It should be clear in the next few hours.

Re: Eric (skeptic) (#130), With the SE shift the mountains on Haiti (Hispaniola) are breaking the symmetry of convection of Hanna. This is where I generally find the models totally lose their validity. Just as they lost it when Gustav broke down at Haiti, they may lose it now.

With a shift west I am in the next potential cone, but conditions do not seem favorable for major development on a NW track. That means the Captain is going to add lines to the boat and buy a two-day supply of Scotch.

This free update courtesy of CDWS. BTW, the CDWS predicted landfall for Gustav at Galveston was within our accepted nominal margin of error. CDWS advisories are for informational use only and should not be used for critical political guidance.

More commodities news: little major damage is expected from Gustav in the Gulf of Mexico when platform dwellers return to their rigs this week. Thus, prices of fossil fuels, metals, and agricultural commodities fell sharply today. Oil is down $8, and gasoline for wholesale October delivery is now $2.60/gallon. The hurricane premium in the market has largely evaporated over the past two days.

#130, wonder if Hanna will be able to reconstruct her inner-core or suffer the same fate as Gustav after its Cuban vacation?

re 132 me. Belay that thought. Hanna is moving SSE to tag up with Haiti. David Smith, I would love to see an animated track-error plot for this storm. If I can find a Spirograph I may attempt one of my own.

…Paolo M… Why didn’t you announce Josephine 4 days in advance?? Like Ike?? Ike doesn’t seem too mean right now, but Josephine is going bananas just 7 degrees W of the African mainland and has/had an eye…[wiki]

Hi Capt, according to the atlas, it looks like the mountains rise to 2 to 3000 feet across the northern D.R. The land interaction should be about as easy to model as the northerly shear, although the shear has had more effect so far. The disruption from land would be more complex: dry air, restricted convergence, disrupted circulation. Even though the mountains can be precisely modeled, the asymmetric globs of Hanna are going to make it difficult to specify initial conditions.

Re: Eric (skeptic) (#139), Even though I am nowhere near smart enough to be an authority, the interaction of the storm’s bands with the mountains did show a disruption on radar. The Gustav outflow appears to be drier air that limits intensity gain more than it changes steering direction. Hanna is an enigma until something takes control.

The delay in Hanna’s turn north, due to Gustav’s outflow and other conditions resulting from Hanna’s indecision, could make Ike’s path more complicated. The CDWS predicts lower-intensity storms due to TC interaction, resulting in a cooler overall climate and a reduction in drought conditions in the southeastern US.

But a reduction in intensity will change the track at least somewhat. The faster the winds, the more it will move around, unless it faces a front, or no fronts at all. Something like a baseball (think curve and slider).

#139 …Eric (skeptic)… Excuse me, “2-3000 feet”? Sorry Eric, you must have a non-US atlas!? The highest peak, Pico Duarte, is 3175 METERS, I REPEAT, METERS, NOT FEET ASL!! It does snow up here now and then in winter. Tropical systems may be fragile… Gustav didn’t really recover from the relatively flat W Cuban heights of 600 m/2000 feet ASL. Further to the east we have the “Ike” long awaited by Paolo M, a “Tiny Tim” or, as I prefer to say, a “living-room storm”; it even resembles “Vince” of 2005. Josephine, leaving Africa, is at present much bigger… Is Arctic ice more exciting?? [2008-09-03 09.14 GMT] Bastardi!! Hanna won’t be another “Hugo” [Sept 1989 SC, NC, VA]. FYI there’s a risk in 1000… But, after all, who am I, a Swedish dilettante, to tell you that? Time will tell…

Florida State University study links global warming to stronger hurricanes

Warmer seas, particularly in the North Atlantic and Indian oceans, have caused strong hurricanes to be stronger than ever before, according to researchers at Florida State University.

FSU geography professor James Elsner and postdoctoral researcher Thomas Jagger have completed a study on how tropical cyclones are getting stronger and the role ocean temperatures play in that trend. On Wednesday the two researchers explained how their study, which will be published in the Sept. 4 edition of the journal Nature, affects the average person.

“It (the study) certainly doesn’t predict how strong Hannah or Ike can get,” Elsner said about the two hurricanes brewing in the Atlantic. “It doesn’t say we’ll have more storms. It points to the increasing frequency of the serious storms.”

Jagger explained by saying it used to be that people would see a Category 5 storm like Miami’s Hurricane Andrew every 50 years. That’s not the case anymore, Jagger said. Hurricane Katrina reached Category 5 strength before it hit the Gulf Coast causing nearly $100 billion in damages, according to the federal government’s final report.

Category 5 storms are the most catastrophic hurricanes that can form, and occur only about once every three years on average in the Atlantic basin. Only four times — in the 1960, 1961, 2005 and 2007 hurricane seasons — have multiple Category 5 hurricanes formed.
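The quoted once-every-three-years rate can be sanity-checked with a back-of-the-envelope Poisson model (my own illustration, not anything from the article): if Cat 5 formation were roughly Poisson with rate one per three years, how often should a single season produce multiple Cat 5s?

```python
import math

# Hypothetical assumption: Atlantic Cat 5 formation is Poisson with a rate
# of one storm every three years, as the quoted climatology suggests.
lam = 1.0 / 3.0

# Probability that a single season produces two or more Cat 5s:
# P(N >= 2) = 1 - P(0) - P(1) = 1 - e^(-lam) * (1 + lam)
p_multi = 1.0 - math.exp(-lam) * (1.0 + lam)
print(f"P(multiple Cat 5s in one season) = {p_multi:.3f}")

# Expected number of such seasons over roughly the 1931-2008 record.
print(f"expected multi-Cat-5 seasons in 78 years = {78 * p_multi:.1f}")
```

That expectation, roughly 3.5 seasons, is broadly consistent with the four multi-Cat-5 seasons listed above, so the simple constant-rate model by itself does not obviously demand a changing climate.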

Given that 31 Cat 5 hurricanes have been recorded since 1931, and that no location has been hit by more than 2 Cat 5s during that timespan, the once-every-50-years figure is no different today than it was before.

The first number means it was hit by a hurricane that was at some point in its lifespan a Cat 5 but was less than Cat 5 when it hit that location. So the Bahamas have been hit 6 times by hurricanes that were Cat 5 at some point in their lifespans, but only 2 of those times was the hurricane at Cat 5 strength when it hit.

Florida, the Bahamas, and the Yucatan are the only areas that have been hit twice by cat 5 hurricanes, and all of those strikes were about 50 years apart.

Why do these scientists make statements that seem unrelated to the papers they have published? Let’s see the statistical analysis and significance behind Jagger’s statement. I remember Kerry Emanuel questioning the statistical significance of occurrences of landfalling TCs because of the small amount of data available. Cat 5 hurricanes would be rarer still.

Instead of spinning the results for policy considerations, why not tell us what precautions we should take away from their papers, as scientists have historically done? I have obviously not read this paper, but excerpts I have heard from it were that one TC region did not show any increase in the strength of the storms in question, and some of the other TC regions showed increases, but not to a statistically significant extent. The study goes back only 26 years, which certainly could miss the cyclical nature of TC activity.

Certainly Judy Curry would not endorse a 26 year study of TC activity as conclusive — oh, wait according to Roger Pielke on another thread at CA she may have made an exception for this study. And the beat goes on.

Oh goodie, Jim, thanks for that AccuWeather link: you reminded me of Joe Bastardi’s prediction for Hanna. In #125, I said that Bastardi had a pretty good command of the facts and the current situation when he showed up on Fox News. At that time, Hanna looked like a monster to be reckoned with. Not so much today, with the NHC suggesting a 70-knot Category 1 hurricane at landfall. Here is a condensed version of the segment on Fox News from LexisNexis (I have boldfaced the fun parts):

[dateline 8 pm September 1, 2008]

BASTARDI: Now let’s take a look what’s going on out further out at sea, because we’ve got a lot of action going on. This is Ike here. And you see Gustav here. But here is our next big worry. This is Hanna, and Hanna is going to be a monster of a storm.

I’ll tell you what, Bill. This reminds me of what we see in the Pacific, the southwest Pacific with typhoon development. We actually have a situation that should be suppressing the development of Hanna, where you see this jet or this cirrus cloud coming over the top here into the storm and trying to stop the storm. And yet it’s blowing up in the face of that. (um, not so much)

What I expect to happen here over the next two or three days is that hostile atmosphere gives way to a much more conducive situation for development. And this is going in to the southeastern United States, perhaps with the strength of Floyd; in a worst case, Hugo in 1989. (so far, thankfully that has not happened)

Now, mentioning these storms so our viewers down there understand the potential that we have here. This is a potentially very serious situation. When you take a look at the cloud photograph Thursday and Friday, this is going to be an enormous storm, a great Atlantic hurricane like we used to see in the ’40s and ’50s during the last time of climatic hardship when the Atlantic warmed up quite a bit. So a very, very nasty situation. (always back to the 40s and 50s)

…Friday — Friday, I expect it to sideswipe Florida Thursday night into Friday morning and hit up the coast between St. Simon Island and Wilmington during the day Friday as a Category 3 hurricane or perhaps greater. (oh my goodness!)

O’REILLY: OK. Then you can’t track it any further than that. You don’t know whether it’s going to go inland or does it go back to the ocean, right?
BASTARDI: Oh, yes, I can. I think it’s going to go right up over D.C. Saturday night and up to Boston on Sunday as a weakening — weakening hurricane or tropical storm. You’ll get a lot of wind and rain in the mid- Atlantic states. But the real problem is going to be the fact that it’s going to hit the Carolinas.

This would be considered an outlier forecast at this juncture…O’Reilly introduced Joe with the following:

O’REILLY: In the “Back of the Book” segment tonight, let’s go to perhaps the best weather forecaster in the world, our pal, Joe Bastardi, in State College, Pennsylvania. He’s going to put Hurricane Gustav into some perspective for us.

#154, The transcript doesn’t do the segment justice. When O’Reilly suggests that Joe can’t track the storm any further than Wilmington, Joe almost explodes with “Oh, yes I can!”.

Also, on the Nature website, someone has posted one comment. Anyway, out of all the comments Nature receives and could put online, they chose this brilliant tidbit, which, ironically, is signed with my first name and last initial. If only I could take credit for this obvious gem:

I wonder if people will ever become fed up with the yearly threat produced by these increasingly destructive storms. Will they eventually decide to move elsewhere? Posted by Ryan M | Sept 3 2008

#157
Kenneth, I certainly think that you can’t do any study without taking the cycle lengths into account. The AMO is, I think, ~35 years and the PDO 31 or so. In my many back-and-forths with Leif at his threads, he constantly reminds me that you need a great many cycles to find a trend.

After rereading the Landsea comments critiquing the Elsner et al. paper in the link given in the above post, I would strongly recommend it, particularly for those who might not follow my ramblings on the subject. Best to leave it to the professionals.

The article, “The increasing intensity of the strongest tropical cyclones”, by J. Elsner, J. Kossin and T. Jagger, would appear to this layperson to be novel in the approach that the authors used in looking at quantile regressions of maximum wind speeds in global TCs. Instead of regressing the mean, they regress the median and several other quantiles of maximum wind speed over the period 1981 to 2006. They then look for (and find) increasing trends for the global TC data as one gets beyond the median into the upper quantiles for wind speeds.

The following excerpts from the paper and the paper’s SI describe some of the statistical manipulations that the authors used to transcribe satellite data into estimates/interpolations of maximum wind speeds for 2,097 tropical cyclones around the globe.

The first PC tracks the departure from the average profile. Colder than normal cloud-top temperatures associated with stronger tropical cyclones are associated with negative values of the first PC. The data set used to build the model is similar to that used in ref. 16. The final regression model includes PCs 1, 3, 4, and 5 as well as cyclone latitude and age.

..The statistical model is subsequently used to estimate lifetime maximum wind speeds for 2097 cyclones around the globe during the period 1981–2006. These satellite-derived wind speeds are estimated at the time of the best-track lifetime maximum wind speed.

I believe it is the use of the raw data for the purposes to which the authors of this paper put it that Chris Landsea questions in the post above. I did not see any justification offered in the SI for why PCs down to number 5 were used but PC 2 was not.

Getting down to the basic results presented in the paper, the authors show graphs of the trends for the entire quantile range and use 90% confidence bands (why not the more conventional 95% CIs?). For the globe, the trends increase with increasing quantile above the median, with the trend CIs drifting above the zero-trend slope at about the 0.7 quantile; i.e., at those quantiles and higher the null hypothesis that the slope is zero can be rejected. The greatest slope shown indicates a 0.2 meters per second per year trend in maximum wind speed over the period 1981-2006. The global values were taken from the six TC development areas of the world. The separate areas, however, showed vastly differing trends with increasing quantile. The slopes for the WNPO, ENPO and SPO areas were not significantly different from zero for any of the quantiles, while the SIO and NIO areas had slopes for the highest quantiles (0.8 and above) that were marginally significantly different from zero. The exceptional standout was the NATL area, which shows statistical significance for the slope from the median upwards in quantiles. The authors make no effort to explain the differences between the TC areas of the globe.
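For readers unfamiliar with quantile regression, here is a minimal sketch of the technique, fit by minimizing the pinball (check) loss on entirely synthetic wind data. The numbers below are invented for illustration (constructed so the strongest storms trend upward while the median stays flat) and are not taken from Elsner et al.:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-storm lifetime-maximum winds (m/s), 1981-2006,
# with an upward trend injected only into the strongest storms.
years = rng.integers(1981, 2007, size=500)
w = rng.weibull(2.0, size=500)
wind = 35 + 15 * w + 0.25 * (years - 1981) * (w > 1.2)

def pinball(params, q, x, y):
    """Check loss for the linear quantile model y ~ a + b*x at quantile q."""
    r = y - (params[0] + params[1] * x)
    return np.sum(np.where(r >= 0, q * r, (q - 1) * r))

# Regressing the median (q=0.5) vs. an upper quantile (q=0.9), as in the paper.
slopes = {}
for q in (0.5, 0.9):
    res = minimize(pinball, x0=[45.0, 0.0], args=(q, years - 1981, wind),
                   method="Nelder-Mead")
    slopes[q] = res.x[1]
    print(f"q={q}: trend = {res.x[1]:+.3f} m/s per year")
```

The point of the method is exactly what the paper exploits: the median trend and the upper-quantile trend are estimated separately, so intensification confined to the strongest storms shows up only in the high-q fit.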

There are two items in this paper that, in my mind, make the press conference comments about it appear to be a big to do about little or nothing.

The first item is a graph relating the quantile trend slopes in meters per second per degree C (change in SST for the area). The graph shows results for satellite derived maximum speeds and for maximum speeds as recorded from the best track observational archives. Why the authors would show the best track when they made a major case initially in the paper for using satellite data is probably best answered by looking at the results using these two data sources. The satellite data shows no significant slopes for any of the quantiles while the best track does. In fairness to the authors, they do comment that the original regressions used to interpolate the satellite data reduce the variations in maximum wind speed below what could be observed and at the same time the best track has problems with changing quality over time.

The second item is shown in the SI where the authors show a corrected and uncorrected version of the quantile trend slopes (Fig 8) for the SIO area. The corrected version shows the SIO trends (meters per second per year) becoming significant (marginally) at the high quantiles, while the uncorrected shows no significant trends for any of the quantiles. (I think this point was also referred to by Chris Landsea in the post above.) The authors do not show confidence bands in this graph which renders the comparison less obvious to make, but is very obvious once made.

The correction is apparently used to adjust for the changing angle of the satellite observation of the IR brightness at high altitudes over the TC, and is applied to the SIO and NIO areas. The authors use the best track data (which they appear to discredit elsewhere in the paper) as a reference, noting that the corrected trends are closer to the best track than the uncorrected. They also use an observation in PC 1 of their principal components analysis to make the corrections, which in this layperson’s mind does not make the adjustment reach the level of one made from independent observations and measurements. Without the correction we would have 5 very similarly behaving TC areas (no significant trends) and the single exception of the NATL.

I believe it was Pat Michaels who found TC intensities increasing up to some plateau level with SST that were measured temporally and spatially under very localized constraints vis a vis an individual TC. From that information, and any offered by the authors here, I cannot theorize, again in my layperson’s view, why we should see significant trends over time, or with SST, upwards in wind speed maximums in the upper quantiles only.

The toll that Gustav took is yet to be calculated, but I would not consider it an ill wind that blows no good. Our local weather people have attributed a much needed and substantial rainfall in the Illinois area at least partially to Gustav. Lawns and farmers needed the relief. Unfortunately I have already spent the big bucks in water billings to keep my lawn green. Maybe next time I’ll pay to consult one of those expensive forecasters.

The Elsner et al. Sept 3, 2008 paper that I discussed above does not even acknowledge the existence, or potential existence, of the cyclical nature of TC activity in the NATL. Nor does it note the cyclical nature of the AMO, AMM or ENSO and the relationships the oscillating nature of these phenomena might have with TC activity. I see many papers analyzing TC activity that claim a relationship of NATL TC activity with SST and completely ignore any other explanations.

Look at the results presented in the Elsner et al. paper from a different viewpoint and one can arrive at a totally different conclusion than that of the authors. We have six major TC regions in the globe with only one showing a clear cut increase in maximum winds in the upper quantiles. The NIO and SIO are made significant only with a correction made to the data after the fact with the results of PC 1. Why then does the NATL TC activity appear different than the other TC regions of the world during the time period of this study and other studies that are restricted to going back no further than the 1970s?

Could it be explained or at least mostly explained by the cyclical nature of the NATL TC activity and acknowledgment that the cycle has been hit in the sweet spot? It would appear not when one can haul out the SST increases that happened to occur at approximately the same time and one can emphasize what is occurring in the NATL and not globally.

It is clear from the “debate” surrounding the results of Elsner et al. that many TC scientists have entrenched ideological viewpoints that border on the absurd. This loss of objectivity is now manifested in vague statements to the press, which often include masterful demonstrations of short-term memory loss.

Here is the question of the day: Why hasn’t anyone published this same study using the Best Track hurricane data that all the other hurricane papers have used, and immediately had it published internationally in Nature?

When I say “study” this is the process:

Step 1: Take best track dataset (only from 1981-2006).
Step 2: Find maximum wind speed of each storm and output to another file, for each year.
Step 3: Create a box and whisker plot for each year and put them in one chart.
Step 4: Draw some trend lines.
Step 5: Publish.

Any professors out there who can put a team of post-docs on this very important problem? It seems to have international importance!
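As a rough sketch of what steps 1-4 would look like in practice, here is the per-year bookkeeping with randomly generated stand-in records (I don't have the best-track file at hand, so every number below is invented; real HURDAT rows would be parsed into the same `(year, storm_id, wind)` triples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 (stand-in): fabricate a best-track-like file of 6-hourly wind fixes.
records = []
for year in range(1981, 2007):
    for storm in range(rng.integers(8, 16)):
        peak = 30 + 60 * rng.random()
        for fix in range(10):
            records.append((year, f"{year}-{storm}", peak * rng.uniform(0.5, 1.0)))

# Step 2: lifetime-maximum wind per storm.
maxima = {}
for year, sid, wind in records:
    maxima[sid] = (year, max(wind, maxima.get(sid, (year, 0.0))[1]))

# Step 3: the per-year numbers a box-and-whisker plot would display.
by_year = {}
for year, wind in maxima.values():
    by_year.setdefault(year, []).append(wind)
yearly_q = {y: np.percentile(v, [25, 50, 75]) for y, v in by_year.items()}

# Step 4: a trend line through the yearly medians.
ys = sorted(yearly_q)
slope, intercept = np.polyfit(ys, [yearly_q[y][1] for y in ys], 1)
print(f"median trend: {slope:+.3f} kt/yr over {ys[0]}-{ys[-1]}")
```

Feeding the same skeleton a `matplotlib` `boxplot` call per year would complete the commenter's step 3; step 5 is left as an exercise for Nature.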

Ryan and David, I got the impression from the Elsner et al. paper that the Best Track data homogeneity was such as to be beyond the pale for doing the analysis without the massaging that they finally did with principal components on satellite data.

What I got from the Landsea critique was that he had no issues with their NATL findings but did with the correction the authors used for the SIO and NIO, and he also had reservations about using the raw satellite data. He seemed to attribute the NATL condition, TC-wise, to the cyclical nature of TC activity in that basin.

From what was presented in the Elsner paper, I would suspect that it would have been easier for the authors to make their case using Best Track than satellite massaged and corrected data.

Re #168 Ryan, I played with the best track data for the Northern Indian Ocean (NIO) last evening and will post a few plots this evening. Elsner’s results for the NIO particularly puzzle me.

Their NIO results, significant or not, show decreasing windspeed of the weaker storms and increasing windspeed of the stronger storms. I understand the conjecture for the stronger storms but why should AGW weaken the weaker storms?

Re: David Smith (#169), Probably due to weak storms being more susceptible to lower levels of shear. With warmer oceans there should be more activity, both convection and shear. More storms in the tropics would mean more interaction, more disruption. The ones that make a clean break can build up steam. Right now Ike is going up against 20 to 30 knots of shear. As a minimal Cat 3 this should cause disruption, whereas as a Cat 4 he would have plowed right through. Or it could be that Ike just needs to touch base with Haiti like Gus and Hanna. Now that Ike is getting to the area where Hanna drew her boot, things may get interesting.

Anywho, I am in the cross hairs of one of those AGW super cyclones yet again.

With the exception that Ike WAS a Cat 4 and the shear still managed to affect it. I suspect it has to do not only with the speed of the shear but with its direction relative to the overall storm movement. This was one of Dr. Masters’ observations as to why Gustav was unable to significantly restrengthen after crossing Cuba. There was an increase in shear which “tilted” the eyewall at an angle to the ocean’s surface. A hurricane functions most efficiently as a heat engine when the eye is perpendicular to the surface. If the eye begins to tilt, it is not as efficient at pulling up heat and therefore cannot strengthen as much.

Now, in Ike’s case, it would seem to me that the direction of the shear is what may be causing the interference. I say this because it’s only running about 20 kts, which for a Cat 4 shouldn’t be all that significant, and yet it has managed to knock it down some. We’ll see what happens longer term, but Ike will also be approaching areas that Hanna has been over, which should reduce the sea temps, further limiting strengthening or even weakening it. The last model runs I checked were more consistent but still had pretty wide differences in track.

In Dr. Masters’ Wunderblog today, as he was talking about Ike, he displayed a graphic that showed a fair amount of spread in tracks. All of the runs were from the GFS ensemble, and each run represented slightly different perturbations of the initial conditions.

I guess I am confused by that. Why would you vary initial conditions? If you know what the current conditions are, then that’s what you would use. If you wanted to parameterize various other factors over time, like shear, SST, troughs, high pressure areas, forward speed, etc, then I could understand why you would get different tracks, etc. But varying initial conditions doesn’t make any sense to me.

Re: Joel McDade (#174), Excellent, and an easy read. Re: Jonathan Schafer (#172), The shear Ike is experiencing now is what I was referencing. Earlier, as a Cat 4, he had a couple of factors. Ike is encountering a relatively small area of shear with 20 to 30 knot winds moving slowly north between its current position and Hispaniola. This patch is nudging Ike’s path southward despite the Weather Channel’s analysis that the northern circulation is in control.

The strength of this patch has varied over the past day and may not have much influence. But it has seemed to intensify just at the right time to steer and slightly weaken Ike. If this patch does not significantly weaken and continues its NE movement it will push Ike towards Hispaniola/Cuba and save me a ton of money and aggravation.

What is hard for me to determine is whether these small patch highs will steer, shear, or both. I expected more shearing and less steering, but the reverse is happening. I’ll know more in the morning, but at least I am not dead in the crosshairs right now.

Thanks for the link. I understand their explanation well enough, but it certainly seems counter-intuitive to me. Initial conditions should only vary if they are unknown. If you know forward speed, wind speed, direction, shear, SST, etc., then varying those parameters to see different results and combining them into a best track just doesn’t seem right. What does seem right would be to vary the parameters AFTER initialization, so that if extra shear were present, or SST increased, or a high-pressure ridge moved in, you could model all those differences, combine them, and then take a best track. But then, I guess that’s why I’m not a climate scientist or hurricane expert, and I didn’t stay at a Holiday Inn Express last night.
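The reason ensembles perturb initial conditions is that the analysis itself is uncertain (observations are sparse and noisy), and in a chaotic flow tiny analysis errors grow into large forecast differences. A toy demonstration with the Lorenz-63 system (my own illustration of the principle, not anything to do with the actual GFS):

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a toy chaotic 'atmosphere'."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

rng = np.random.default_rng(42)
truth = np.array([1.0, 1.0, 20.0])

# Ensemble: 20 copies of the same analysis, each nudged by a tiny
# observation-sized error (the 'slightly different initial conditions').
members = truth + 1e-3 * rng.standard_normal((20, 3))

for _ in range(2000):  # integrate ~20 model time units
    truth = lorenz_step(truth)
    members = np.array([lorenz_step(m) for m in members])

# The initially indistinguishable members now disagree substantially.
spread = members.std(axis=0)
print("ensemble spread (x, y, z):", np.round(spread, 2))
```

If the initial state were known exactly, a single run would suffice; because it is not, the envelope of the perturbed runs is the honest statement of what the model can actually say about the track.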

Re #171 Interesting and plausible idea, Captain. The question then becomes, why does this appear in the NIO and not in the other basins?

I’ve made a first-read of the Elsner paper and SI and I like much of their approach. I suspect, as Kenneth and Chris Landsea have mentioned, that there are two key issues. One deals with the satellite data from the Indian Ocean and the other is the well-worn issue of North Atlantic multidecadal cycles. I’m still trying to absorb some aspects of the paper, particularly in the NIO, so I’m pausing before commenting in detail.

I wish they had shown a plot of the global quantile wind trend for all basins but excluding the North Atlantic. Ideally this plot would have two lines, one based on best-track data and the other based on the satellite data. This would help illustrate the impact of the North Atlantic and Indian Ocean satellite questions.

Does anyone know where I could find a list of the 2000+ storms and their satellite wind estimates, the ones used by Elsner? The best-track data is readily available, of course, but I saw no link to the satellite estimates in the SI or elsewhere. If not, then I’ll ask the authors. By the way, I’ve made occasional requests for information from both Kossin and Elsner in recent years and found them to be very helpful and gracious.

Re: David Smith (#176), Just a SWAG, but there are only 15 degrees or so of latitude for building in the NIO. The IO has some of the warmest waters in the world. While the IO has the energy, a storm would need an extra push to build fast enough to hit Cat 4 or 5. That is probably why their peak seasons are May and November.

This is the question: why does only one out of six basins show this? That is only about 16% of the studied area, hardly a huge figure on which to base a conclusion. Is the AMO a factor in this? I don’t think anyone here will object that higher SSTs will increase the likelihood of more, and possibly more intense, TCs. The fact that they take 3/4 of one oscillation and 1/4 of another is just lack of foresight, IMHO. I would say that you need at least four complete oscillations (one equals a positive and a negative) to prove any trend, but even that may not be enough considering all the variables. When you go out hundreds and thousands of years, you start to move into variables such as rotation and orbit. But that aside, if you look at my previous post and the relation of SST to Cat 3-5 TCs, you see it follows the AMO well. SST aside, the real factor in most TCs is shear. Look at Ryan’s critique of Joe B.’s Hanna forecast: a huge TC, but not quite so once shear ripped it apart. Ike may be going down the same path, but we will see if the shear is still there. I love it when the AGW crowd starts up with more El Niños and more intense TCs; excuse me, but El Niño and Atlantic TCs are mutually exclusive.

A question for anyone reviewing Elsner et al: Figure 2a and 2b appear to be identical. Might one be an error? The odds of two basins being identical would seem to be quite small. No major point turns on this but it does make me hesitate from creating a chart on the Pacific storms.

David, that observation is why we pay you the big bucks for reviewing (like those excellent TC forecasters). I certainly missed it, and now that I look it seems to be as you say. The gray confidence bands are the same as well, but these TC regions have differing numbers of storms, which should change the bands even if the trend numbers matched (which, as you point out, they do).

Notice also that while the quantiles are the same on the lower x axis, the upper x axis giving the maximum wind speeds are different.

David, I think that sharp-eyed kind of reviewing would disqualify you from ever doing it for a Team paper, while I am only disqualified because I cannot come up with a good rationalization for what I eventually saw.

Notice also that in Table 1 of Elsner et al., the 0.85, 0.90, 0.95, 0.975 and 0.99 quantiles have different values for WNP and ENP.

I also had a question about how one gets confidence limits in the upper reaches of the quantiles, since for WNP the 0.99 quantile would contain 7 storms and for ENP just 4. And those few storms are scattered over a period of 26 years.
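One way to see why confidence bands at the 0.99 quantile are suspect with so few storms is to bootstrap the quantile from a synthetic sample and compare its sampling band with the median's. Everything below (sample size, distribution) is invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in: ~700 per-storm maximum winds (m/s) over 26 years.
# The 0.99 quantile of this sample rests on only its top ~7 values.
winds = 35 + 15 * rng.weibull(2.0, size=700)

def boot_band(data, q, n_boot=2000):
    """Bootstrap 90% band (5th-95th percentile) for the q-th sample quantile."""
    stats = [np.quantile(rng.choice(data, size=data.size, replace=True), q)
             for _ in range(n_boot)]
    return np.percentile(stats, [5, 95])

lo99, hi99 = boot_band(winds, 0.99)
lo50, hi50 = boot_band(winds, 0.50)
print(f"0.99 quantile band width: {hi99 - lo99:.2f} m/s")
print(f"median band width:        {hi50 - lo50:.2f} m/s")
```

The extreme-quantile band comes out several times wider than the median's: with only a handful of storms defining the 0.99 quantile, a zero-trend null is hard to reject or defend with any force.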

Re #181 I have the same question about the 99th percentile, Kenneth, but will let that rest for a while.

The apparent typo is of no consequence other than to frustrate my attempt to look at the Pacific basins. If I can get the raw data then that can be overcome.

I’m playing along the edges of the study at the moment, just to get a feel for things. One item which I missed at first glance is the heavy weighting which the West Pacific carries in their global index. The weighting seems to be as follows:

The most-intense cyclones are the global top-10% (Elsner’s 0.9 quantile). The NIO carries almost no weight in their global figure.

Latest model runs are in and now push Ike further into the GOM. Not to wish ill on Cuba, but if the path centers across the various models, there probably won’t be much left but a weak TS after traversing all that land. That would be good for the US, because after Cuba it’s not too far from Louisiana, which certainly doesn’t need another TS to drop any more rain.

OTOH, not to wish ill on S TX, but a weak TS that would come up and actually drop some rain on us in N TX would be much appreciated. We got nothing out of Gustav. Winds from the SW just kept it too far east, and no rain for us.

In the paper, “Sea-surface temperatures and tropical cyclones in the Atlantic basin,” by Patrick J. Michaels, Paul C. Knappenberger, and Robert E. Davis, the authors show that in the NATL for the period 1982-2005 there appears to be a threshold temperature around 28.25 degrees C that when exceeded does not produce more intense TCs with increases in SST, i. e. the trend upward before 28.25 degrees C becomes flat after 28.25 degrees C. They looked at the maximum SST that the storms experienced before or at their maximum wind speed and used satellite SST data and Best Track TC data for their analysis.

Based on a simplified view of TC intensity reaching a plateau with SST, the higher quantiles that Elsner et al. (2008) mention would seem to stay rather constant, assuming that a goodly portion of those storms developed after seeing an SST of 28.25 degrees C or higher. The Elsner paper notes that those TC regions, of the six worldwide, that have the higher SSTs appear to show little or no upward trend of the maximum wind velocities in the upper quantiles. I got this from the excerpt here that states: “Regional differences in the magnitude of the upward trends are possibly due in part to the rate of warming relative to the existing warmth in the basin.” That observation would in my view be in line with what the Michaels paper found. Elsner et al. surprisingly make no mention of the Michaels et al. paper. From the following excerpt, the Michaels paper makes the simple-minded view more complicated by noting that while intensity plateaus above 28.25 degrees C for the period studied, the portion of more intense storms within this plateau has increased over the 1982-2005 period. They attribute this rise to something other than SST.

To investigate recent claims that the warming that has occurred during the past several decades has been responsible for the observed increase in Atlantic tropical cyclone intensity [Emanuel, 2005; Webster et al., 2005], we divide our study into the periods 1982–1994 and 1995–2005—years before and after a noted shift in Atlantic hurricane activity [Goldenberg et al., 2001]. In the 13 years prior to 1995, a total of 71 storms encountered 28.25 degrees C before reaching maximum intensity, and 16 (22.5%) of those storms became major hurricanes. In the 11 years since 1995, 124 storms encountered 28.25 degrees C SST and 42 (33.8%) of those storms attained category 3 strength or greater. The doubling of the annual frequency of Atlantic tropical systems that encounter SST that exceed the 28.25 degrees C threshold for major hurricane development is a clear indication that the SST encountered by named Atlantic tropical systems were higher in the 1995–2005 period. However, in addition to the increase in the frequency of storms exceeding the threshold, there has been a 50 percent increase in the percentage of storms above the threshold that have reached major hurricane strength. This latter increase suggests that there have been changes in the tropical environment other than SST increases that have proven conducive for the development of major hurricanes, as the average maximum SST encountered by storms that experienced SST greater than 28.25 degrees C is similar during both periods (28.95 degrees C vs. 29.28 degrees C). Gaining a better understanding of these other changes is part of our ongoing research.
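The percentages in that excerpt are easy to verify from the counts it gives; a quick sketch (the counts are taken directly from the excerpt above, nothing else is assumed):

```python
# Counts quoted from the Michaels et al. excerpt above.
periods = {
    "1982-1994": {"storms": 71, "majors": 16},
    "1995-2005": {"storms": 124, "majors": 42},
}

fractions = {}
for name, p in periods.items():
    # Fraction of storms exceeding the 28.25 C threshold that became
    # major hurricanes (category 3 or greater).
    fractions[name] = p["majors"] / p["storms"]
    print(f"{name}: {100 * fractions[name]:.1f}% became major hurricanes")

# The "50 percent increase" is the ratio of the two fractions, minus one.
increase = fractions["1995-2005"] / fractions["1982-1994"] - 1
print(f"Relative increase: {100 * increase:.0f}%")
```

This reproduces the 22.5% and roughly 34% figures, and the ratio works out to right about a 50% relative increase, so the excerpt's arithmetic is internally consistent.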

The Elsner paper states, in passing, that since the strongest TCs are nearest to the maximum potential intensity for a tropical cyclone, increases in maximum wind speed with SST should occur in the upper quantiles. I think the authors are using Kerry Emanuel's term “potential intensity” without the limiters that I have since heard Emanuel place on it. This conjecture also would appear to go against what the Michaels paper showed. I have to admit I do not see Elsner's point in this case, even if there were no limiting factors involved.

Ten thousand quatloos to the person who can tell me where Ike will make landfall in the US.

Preparations for a possible Ike landfall in the northwestern Gulf are underway as of this afternoon. That planning and coordination effort will have my time occupied for the next week or so, until Ike makes his strike somewhere.

Re: David Smith (#185), The models have been over-predicting the turn north so far, which leads me to believe they will under-predict it after the turn. The ridge seems to be splitting over Florida, so my guess is the Florida panhandle. The models should catch up after Ike gets out of Cuba.


20 miles northeast of Corpus Christi will be the center of the eye wall.

(what the heck, I like quatloos [does the blonde with the bat'leth-looking thingy come with it?])

Partly to be different from the captain and partly because I believe it, the blob that was Ike will hit southern TX or northern Mexico. Like a top that has lost its spin, Ike will no longer have enough upper-air structure to be steered northward, and I think the low-level winds point west.

Haven't we learned that 10,000 quatloos will only buy you another, and perhaps different, forecast? If you pay, let's say, 1,000,000 quatloos for a forecast and it fails to give you the correct prediction, you can at least tell those depending on its accuracy that you bought the best available information. And if there are questions about how you know it was the best available, you can reply that these forecasters keep their predictions secret, and only great forecasts would be kept secret. Or have I gone astray of the real world?

Seriously though, I have attempted, unsuccessfully, to put myself in the shoes of those who live in a residence targeted for potential TC landfalls and have to endure the days between the forecasts and the hurricane/TC passing, and furthermore do it year after year. I think I know people here in the Midwest who could have serious emotional issues with such situations. Do people living in these areas make some kind of psychological adjustment?

Do people living in these areas make some kind of psychological adjustment?

Of course. Most of the time they don’t think about it at all. Just like the people living in California on or near major faults don’t think about earthquakes every day or racing drivers don’t think about how badly they would be hurt in an accident.

Bender, being a Bear/Cubs fan is my reference point for asking the question as that does cause a strain of anticipation, but nothing comparable to the real life issues involved in awaiting the potential effects of a hurricane.

DeWitt, you point to occurrences that mostly happen without warning, like tornadoes in the Midwest. My issue with hurricanes is that you can see one coming for days or weeks, and that has to put a larger strain on the psyche than events that are over before you have time to worry. Waiting for a hurricane is more like waiting for the Cubs to fold, only on a real and potentially much more devastating scale.

Re: claytonb (#190), Absolutely correct. We in the Keys are often criticized for not evacuating for every storm. We track the storms closely and make the best decision we can. I stayed put for Ike, but the truck was full of gas and we had hurricane vacation cash in pocket. Just grab the golf clubs and go.

Looks like the Grotto may have spared Key West again. Ike is currently down to a strong TS/weak CAT 1 storm. And it’s not finished with Cuba yet. I say it will only be a TS when it finally emerges into the GOM.

Whither then for Ike? GFS, UKMET, and one other take Ike into far S TX or MX. HWRF and GFDL take Ike into TX, between Port Arthur and Corpus. Intensity forecasts vary from CAT 2 to CAT 3 from what I’ve seen so far. It all depends on how fast the ridge pushes Ike west and how quickly the trough approaching from the N comes down and how strong it is.

I wouldn’t at all be surprised to see a landfall in MX and just keep on truckin to the west. I’ve lived in the D/FW area for 11 years now, and can only remember one TS that came up our way after a TX landfall. The rest stayed S/W of us or missed us to the N/E. Won’t be surprised if this one does as well. But you never know.

I think I’m going for a Brownsville landfall, then up the Rio Grande valley before curving to the NE.

Re: Jonathan Schafer (#196), Yep, That ridge hung in there making Brownsville likely. I heard about a little flooding in Key West and US1 washed over around mile marker 74. Usual stuff. The Weather Channel has been making the Keys impact sound much worse than it is. I would love to see an animation of the daily change in the five day cone for this storm.

This is the kind of thing that has to drive the modelers crazy. Ever since Ike was still in the Atlantic, they have been forecasting it to curve to the N. First it was FL, then AL, then LA, then TX, and now nearly MX. That every single model has pretty much been consistently wrong is pretty telling, as far as the track goes. Now, whether that’s because it can’t handle the ridge strength, the interaction with land, etc., I don’t know. But it has to be tough on them to keep being wrong every single day about the 5 day track forecast.

On a lighter note, I see the CLP5 model still forecasts Ike to hit the FL panhandle. Is there even a point in running that model?

NHC discussion at 11:00am EDT 9/9 finds flight-level wind speed measured by aircraft at 66 kt, BUT estimates surface wind speed at 70 kt, thus keeping the hurricane category rating on the storm. NHC/NOAA could just as logically have designated Ike a tropical storm at this time. My guess is that NHC doesn't want to give the public a chance to underestimate the potential risk of substantial strengthening when the storm crosses the Gulf. Nevertheless, NOAA is left defending the axiom that “the public can't handle the truth”.

Just south of Houston, but it will be close. Very close. A short wave in the Rockies will decide matters. Until it gets into the data stream, all bets are off, although trends suggest the above. We’ll know much, much more by this time tomorrow.

Also, rapid intensification is likely in the 6-24 hour time period. I would not be surprised if this hits cat 4 by this time tomorrow.

Tucker, there are emergency planners near Houston who have concluded/guessed the same thing and who are triggering their “hard landing” plans this evening. These are known as hard-landing cases because time has essentially run out for smooth, orderly actions.

I guess they’ll be seeing one shortly. Intensity forecasts still vary from Cat 1 to Cat 4. Latest model runs are trending towards Corpus now. I’m not backing off my Brownsville hit as of yet, but I would guess that Houston will be in the clear. It really would take an earlier passage of the trough that just came onshore to push it that far north/east. Not saying it won’t happen, but likelihood is low. NHC gives it a 5% chance, Masters thinks it’s about a 10% chance.

The downside of a major hurricane along the TX coast is the shallow continental shelf: apparently the storm surge runs higher there, even for less intense storms. That doesn't bode well if Ike is a Cat 4 for any significant amount of time. Since shear is low, and the forecast track takes Ike over a break from the warm current loop, I won't be surprised if it's Cat 3 by tomorrow afternoon.

I suppose the only good thing about a more western TX landfall is that it’s less populated than Houston, especially dependent upon where exactly the eye hits.

Sat night/Sun will be interesting in D/FW if timing and path holds. My thoughts are with people on the coast right now.

Ike has redeveloped a very small inner core in rapid fashion: T-numbers are near 6.5, indicative of a major hurricane. Quite a different scenario than Gustav, and clearly a research question of major implication: why does one storm redevelop an inner core and pinpoint eye, while another flies apart at the seams?

I’m still not buying the south TX coast, and since I live in the NE USA, I have no stake in this (disclaimer). As a friend said to me, if Ike travels north and east of 26N, 92W, then Houston or just south is a serious threat. Also, don’t forget that a hard right will occur near the coast. It could be before hitting or inland. It’s too soon to know which track is correct. I think we’ll know which way it will go later today when the models ingest new data.

Re: Tucker (#207), My original bet was the Florida Panhandle, so I have virtually no shot at the quatloos. Unless Ike takes a breather and chills out at less than eight knots forward movement for a while. I'm sticking with CLP5 in any case.

Models are starting to zero in on a Corpus landfall, but there still remains about a 200-mile cone of uncertainty due to possible deviations in the strength of the ridge over the SE US and the timing of the trough currently digging S through CA. Assuming things stay as forecast, it looks like landfall will be a little off from my call, but not too far. However, I did expect it to just ride up the Rio Grande valley and not recurve until the NM border area. If the models hold up, we'll end up with 4-6 inches of rain and TS-force winds in the D/FW area. Nothing like the potential Cat 3/4 winds and storm surge down south, so I'm not going to complain.

Latest info has landfall at north end of Matagorda Bay. I still believe Ike will squeeze a little more northward movement than that by landfall. Houston/Galveston is a good call still, and becoming more so.

Expect mandatory “all hell breaks loose” evacuations later today. This will become serious in a hurry. In fact, just beneath the surface in that area, it is seething. This will get ugly.

Landfall is likely to be cat 3, but one never knows. The big issue will be surge. The Galveston area is prone to large surges, and IF Ike can attain cat 4 or higher for a while in the next day or so, the size of Ike will cause a surge potential on the broad level of Katrina. Not 28 feet of surge, but a large surge area that would be devastating along that stretch of coast.

Impact Weather's last update predicted the next track to be farther north than their current one, which makes landfall around Port O'Connor. This is due to the hurricane moving more slowly in the gulf and wobbling lately. That would put it around Palacios/Matagorda, perhaps.

Joe Bastardi update!!! On Fox News' Neil Cavuto show, Bastardi is analogizing Ike to Carla (1961), with expected 135 mph winds (Cat 4) at landfall somewhere in Texas. With his zero skill in intensity forecasting for Hanna, I wonder how well Joe will forecast Ike. Even a broken clock is right twice a day.

Wow, latest model runs have shifted significantly north. D/FW area may end up yet on the subsident side and get no rain at all. Heck, at this point, LA may not be out of the woods if it keeps turning north. Still waiting on the ridge to strengthen and push Ike more to the west but doesn’t really seem to have happened yet.

Time is running out on evac plans for those along the coast. At this point, I’d say everyone from Houston to Corpus might want to think about relocating inland and preparing your house. I hope they have both N/S lanes in Houston set up to go N. Rita was an absolute fiasco. 110 people died during the evac, way more than the hurricane itself.

Captain, we’re into quite a “social fabric” test here in Houston. Mandatory evacuation for my house, so I’m packing the family up for them to drive to higher ground. I have other duties so I will be staying, in a shelter.

There are only a few hours in which to evacuate half a million to perhaps a million people so either we will cooperate and succeed or you’ll be seeing horror stories of us on TV.

I’ll take the quatloos and the kitten. You may give the car to Jonathan as second place winner!!

Seriously, I believe Galveston is going to take this on the chin, especially in the storm surge department. Ike, while its winds are not mind-boggling, is building quite the surge potential due to its rather unique structure, which has allowed an anomalously large hurricane-force wind field to develop. My take is that surge may well be on the high end of the ranges being put out along the gulf coast.

Since we are less than 48 hours from landfall, my final landfall spot is an area 15 miles south of Freeport to 45 miles north of Freeport along the coast. Winds, while they may go as high as 130-135mph in the next 24 hours, will be at 110-ish at landfall due to increasing Southwest shear near the coast at landfall.

Ike is a Cat 2 at this time, moving W/NW at 10 MPH. So we’re looking at influencing land around Fri night and over land Sat morning. It’s at 25.8 N 88.8 W right now. Looks like it will cover Houston/Galveston/Port Aransas/Freeport/Lake Jackson/Corpus Christi/Victoria and all around it, as far as the wide track.

My forecast, based upon extensive state of the art supercomputer models (or in other words, I guessed) is that this will turn out to be a relative non-event. I think when (if) it hits land, it will be at 25-50 MPH at best. But who knows; it could plow into Houston at 175 MPH gusts as a Cat 5.

Tidbit of info: since records started, only three hurricanes have made US landfall as a Cat 5: the 1935 Labor Day Hurricane, Hurricane Camille (1969), and Hurricane Andrew (1992).

Hey, did you know it's only about 300 km from Mexico to Cuba? Yep, right there between the Gulf of Mexico and the Caribbean Sea: the Yucatan Channel. Don't take my word for it; it's at 21.578333, -85.9075.

Anyway, the reason I mention it is that Ike crossed Camagüey Province in Cuba, then crossed over Pinar del Río Province (over near the Yucatan Channel) before heading up towards the Texas coast, as it is currently doing.

Captain, I have the family in a safe place and I will be leaving ground zero in about 12 hours.

About 3.5 million people have evacuated the Gulf coasts in the last two days, including about 750,000 all-of-a-sudden today. Things have gone smoothly. People know how to work together in a pinch and follow instructions.

My personal worst case is if I’m in the eastern eyewall which, per the models, could push 20 feet of water into the NASA (“Houston-we-have-a-problem”) area, home to maybe 150,000 or so.

The Ike models have sure had difficulty projecting the storm's intensity. HWRF, at least a few hours ago, seemed to have Ike weakening as it approaches land. No bet from me on that.

Re: David Smith (#228), Glad to hear it, David. After Rita I was worried about your evac. I was in Wilma's right-front quadrant, but well away from the west coast (just missed the eye, but hurricane-force winds went around the clock). Awesome experience, but not for the faint of heart. I don't have a dog in the hunt, so I am sticking with under-predicting the turn north and over-predicting intensity. Still, it does not take much of a storm to create big damage. I hope it misses you to the north.

I hope you are safe with your family. It appears you have a rough 12 hours coming up. Once you get into the clear light of day, please let us know.

Also, after you report the good news of your survival, I will send you my Swiss bank account number into which to deposit those rare 50,000 quatloos. I may be able to retire before death now. Happy days!

Tucker, we’re on the (temporarily) dry side and we actually have a dust storm underway, as the soil is being blown by 30-50MPH winds and has not been wetted by rain yet. Gulls and other birds are visible in the night sky, trying to return to their nests without luck. That’s a sad sight, as they’ll try to fly until they’re exhausted.

Re: David Smith (#232), I am confident that you will do well. If there is property damage at your home, send me a message through my blog. My nephew is working in that area. He can't make special cases, but he can give good advice. If you are in a safe place, enjoy the show. Nature is awesome.

“There was a mandatory evacuation, and people didn’t leave, and that is very frustrating because now, we are having to deal with everybody who did not heed the order. This is why we do it, and they had enough time to get out. It’s just unfortunate that they decided to stay,” said Steve LeBlanc, city manager in Galveston.

Sedonia Owen, 75, and her son, Lindy McKissick, defied evacuation orders in Galveston because they wanted to protect their neighborhood from possible looters. She was watching floodwaters recede from her front porch Saturday morning, armed with a shotgun.

If there’s one thing the communist island does right, it’s evacuations. And in the end, that saves more lives than anything else. . . .

Of course, this is easier done in Cuba than in the United States because the communist government owns and controls most of the nation’s resources. Unlike the U.S. Federal Emergency Management Agency, it doesn’t have to buy supplies or contract services from private companies, or pay overtime.

Most Cubans work for the government and don't have to worry about losing wages if they take off from work. And because police keep a close eye on evacuated areas–and because most Cubans have few possessions of value anyway–looting isn't a major concern.

The damage estimates for Ike are in the $10-20 billion range, which will rank up there as Texas' largest insured loss (not sure about correcting for inflation). Forecasts were pretty good for Ike, and the NWS warnings of imminent death likely and rightly scared more safety-conscious coastal residents into finding higher ground. Like 2005, Gustav and Ike provided a 1-2 punch to Louisiana and Texas, as Katrina and Rita did. Hopefully the North Atlantic takes a break for a while, like 10 years.

I thought I might take this opportunity to note an observation: I did not detect one lightning discharge due to Ike this entire day, as determined by monitoring an 80-meter emergency com net all day.

Even brief monitoring (40 min.) of a weak MW broadcast station (590 out of Austin) at the low end of the band (where there is enhanced ground-wave propagation vs. the high end of the band, not to mention the higher energy content of a ‘stroke’ at the lower frequency) revealed -no- detectable lightning activity at mid-day.

I don't know what this storm (Ike) was producing during its time over water, but while it was over land to my east (traversing East TX), as observed from N Central TX (the DFW area; NB no “/” between D and FW to the residents here!), there was, to be honest, no detectable lightning activity.

Now that ’80’ (meters) has opened (gone long), there is detectable lightning activity, most likely due to thunderstorm activity along the cold front running roughly from WF TX to the NE (or perhaps along what looks like a squall line developed on one of the bands extending NE from TS Ike, SW of Little Rock, AR, down into LA).

#246 Not very much rain in TX either?? 1.81 inches in 4.5 hours marked on the NOAA maps some hours ago, but I haven't found any total amounts. SSTs behind Ike dropped 2-3C though; if that hadn't happened…??? Jim, you are/were also a DX-er?? More later. WORK…BREAD…

STAFFAN, more of a ‘propagation enthusiast’ (DC to ‘daylight’) EM fan than a hardcore DXCC/DXer per se, but one could easily be transformed into the other; furthest contact was the Canary Islands on 10 m in 2005, running 25 W into an omni-directional vertical. No wait: the Czech Republic about the same time, and that may have been ~30 W mobile too…

J. S., Re: DFW – the style sheet appears split on usage by the ‘madia’ [sic], with electronic media using DFW vs. print media using ‘D/FW’; however, the airport code for our BIG a/p is DFW (DFW Airport is not seen as D/FW a/p in the literature, and a/p is not to be confused with FAA ‘airframe and powerplant’ certification!)

Well, here just north of D/FW area (sorry _Jim, I’ve lived here for 11 years and I still / it), we got a bit of rain, probably nothing higher than 20 – 22 mph winds, no lightning. Pretty much a non-event. The storm just passed too far to the east to do anything here. It’s amazing how once a hurricane has been on land for just a bit, those on the NW side of the storm are usually spared any significant impacts, even when the storm is as big as Ike was.

Hope David is doing well in Houston. Lots of damage there from the pictures I've seen.

“The storm just passed too far to the east to do anything here. It’s amazing how once a hurricane has been on land for just a bit, those on the NW side of the storm are usually spared any significant impacts, even when the storm is as big as Ike was.”

I disagree with the NW-side comment regarding rain amounts. Often after landfall, the northwest side of a storm is associated with a frontal boundary and a jet entrance region. This will greatly enhance rainfall to the northwest of the storm. However, your quote holds the ready answer as to why you did not get much rain: you were too far away! Take Hanna as a recent example. Areas to the north and west of Hanna's track received 3-5″ of rain. Those to the south and east received 1-2″. Those who were on the north and west of Hanna but 100 miles from the center received greatly reduced rainfall (under 1″, and in some cases zero). It really is about location, location, location.

* Recovery is proceeding rapidly in the areas which did not experience storm surge. I have no clue how the media is portraying things but people, with few exceptions, are highly cooperative and proactive. An ounce of preparation/self-reliance/pride is worth a pound of government help.

* Ike appeared to have vortices or swirls in the eyewall with local wind maxima, based on damage. The woods near my house caught one of these swirls and lost perhaps 80% of its trees, while the surrounding area lost only 10-20% of the trees.

* Hurricanes are gusty, and the gusts make a howling or roaring noise. One is able to hear the roar of the gusts overhead as well as the roar of nearby gusts, including the ones approaching one's location. The mental picture is of an attack by a pack of animals, perhaps the dogs of war, some attacking you and some attacking neighboring areas.

* If I were a social-science student, I'd do a thesis on the social aspects of storm recovery. It's fascinating and, to me, has some similarities with the ways that biological systems heal.

Glad to hear that you are well and in recovery mode. The media here (DFW) hasn’t talked much about Houston at all. They’ve talked much more about Galveston.

My heart goes out to the people that lost so much. OTOH, I think to myself, why are you living where a hurricane can destroy everything you have? Seems to me the way to fix this is to make people self-insure, or pay such a huge insurance premium that it basically becomes not worth it to take the risk.

Re: David Smith (#252), For the social aspects it is interesting. Early on, everyone works well together to salvage what they can and meet basic needs. As time goes on, stress builds to the breaking point for many. After Wilma I found it was less stressful to help others than to deal with my own losses.

Losing everything you own, being homeless and at the mercy of insurance companies and FEMA is incredibly stressful. While FEMA always gets a bad rap, they do a pretty amazing job. Some of the other organizations like the American Red Cross try, but the effort is almost not worth the assistance in some areas after the Katrina scandals.

Re: Jonathan Schafer (#253), Insurance is nearly there. After Wilma many in the Keys canceled their wind insurance opting for only flood. Flood insurance in the Keys and much of Florida is through a state funded program with some of the highest rates in the nation.

Captain, we’re endeavouring to persevere, to borrow a line from a Clint Eastwood movie. No house damage but still without power here, though the power folks are making good progress and have restored about 50% of the region. The neighborhood caught 80 MPH winds and 100 MPH gusts (my estimates) but there was little damage to structures other than the houses which caught trees.

Insurance rates and building codes are keys to modifying behavior. Those who choose to live in high-risk areas should pay premiums that are consistent with the risk.

Jim Kossin kindly provided a link to the best-track and satellite-reanalysis data used in Elsner et al. I plan to browse this data to better understand where the two sets diverge, out of curiosity.

On Ike, we got electricity back two days ago, after ten days of camping out in the house. Not fun but not bad. Now that my computer has power again I’ll offer a few plots and comments on the Elsner paper.

David, good to hear you are back and analyzing. I am hoping that things are well for you and your family. It appears that Ike made me forget the Elsner paper and now I need to go back over the details. As I recall there was a lot of meat left for reviewing/analyzing.

When you get time you might want to give us an update on the contest predictions versus actual for NATL TC activity. I think my predictions might be on track (the center of the ranges that I gave) and I would like/need some recognition for being right, albeit only fleetingly.

The Elsner paper is a chance to look at data other than that of the well-trod Atlantic. I did some exploration of the Northern Indian and West Pacific data before the storm and will try to complete that in the coming days.

I’ll update the contest later this evening.

Here’s my final note on Ike – a view of my street this evening. Fall came early this year.

David, glad to hear things are getting more back to normal for you. I have relatives in Ohio that are still without power because of Ike. Had Ike followed the original forecast track through TX and come just W of DFW area, we could have been in the same situation.

The to-date North Atlantic ACE is 116, which is about 50% above average. In an average season we’d see another 25 or so between now and December 1, which would give a seasonal ACE of about 140. On the other hand, if the above-average behavior continues, then we’d see a seasonal ACE of about 155.

So, I think our likely end-of-contest range is 140 to 155, which is in the above-average quantile.
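For reference, ACE is just an accumulation of squared wind speeds; a minimal sketch of the bookkeeping (the advisory values below are made up for illustration, not Ike's):

```python
def ace(six_hourly_winds_kt):
    """Accumulated Cyclone Energy: sum of the squared 6-hourly maximum
    sustained winds (in knots) while the system is at least tropical-storm
    strength (>= 34 kt), scaled by 1e-4."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) * 1e-4

# Hypothetical storm life cycle: spin-up, peak, decay (knots, every 6 h).
winds = [30, 35, 45, 60, 80, 100, 110, 100, 80, 55, 40, 30]
print(round(ace(winds), 1))  # a single storm contributes a few ACE units
```

A seasonal total like 116 is just this sum taken over every system in the basin, which is why a couple of long-lived majors such as Gustav and Ike can push a season well above average.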

David, I had not anticipated that I may be forced to share some of my prediction glory. I can only hope that you will take into consideration my unique use of a model to give a range of values not only for ACE, but for TC and hurricane occurrences – somewhat like the big boys do.

Unfortunately, I have already revealed my model to the public (here at CA) and thus its future monetary value is probably zero.

To regain my perspective on the Elsner paper, I went back to some of my previous posts on this thread and found that my attempt to show where the Elsner and Michaels papers agree and where they might differ was not articulated well at all. All I can suggest at this point is that interested parties read both papers and look for the agreements and discrepancies. I will also make another attempt below to describe what I see as potential agreements between these papers, and add, again, that I was surprised that Elsner did not reference the Michaels paper.

Elsner's main point is that, looking over the past 26 years at the maximum wind velocities of individual TCs, the velocities in the upper quantiles, for the NATL region in particular, have increased significantly, but not necessarily worldwide or in all of the TC development regions. The quantiles below approximately the 60th have not increased in the NATL. Elsner also notes that those TC regions with lower nominal SSTs, like the NATL, show the largest effect.

What Michaels et al. show in their paper is that NATL TCs increase in maximum wind velocity with increasing SST up to a point, then plateau beyond that point, where there is no longer a correlation between SST and maximum wind velocity. It is in those upper SST regions, where most of the upper-quantile maximum wind velocities reside, that one would look for compatibility with the Elsner observations. It is important to note that the plateau region in the Michaels graph of maximum wind speed versus SST contains many TCs that developed at the higher SSTs yet have lower maximum wind velocities.

Michaels also shows, and explains, the interesting phenomenon that when the time period studied is divided in two, the earlier period has a lower percentage of storms in the plateau region reaching major-hurricane wind velocities than the later period. The authors do not attribute that difference to increasing SST in the NATL MDR, because the plateau region shows a limiting temperature for the maximum wind velocities of a TC. They attribute the difference to non-SST conditions being more favorable for developing major hurricanes during the later period.

Therefore, we have two results from two different analyses that could well be in agreement, but with one attributing the results mainly to increasing SST and the other attributing them mainly to factors other than increasing SST. What if the TC MDRs of the world have differing nominal SSTs? The Michaels proposition would appear to put more of the TCs, on average, into the plateau region in those MDRs with the higher nominal SSTs, a region where non-SST factors would appear to become more important. In an MDR with a lower nominal SST (the NATL?) and SSTs increasing over time, the Michaels proposition would appear to predict that more of the TCs developing at temperatures below the plateau limit would “ramp up” into the plateau region of maximum wind velocities. The shape of the scatter plot in the Michaels et al. paper would, however, appear to indicate that a low-nominal-SST MDR with increasing SST should generate higher maximum wind velocities in the lower quantiles, not the higher ones. This reasoning would, in my mind, indicate that under the Michaels proposition/conjecture the NATL would give the results shown by Elsner due to changes in non-SST factors, and that the failure of other MDRs, regardless of nominal SST, to show the same increase in maximum wind velocities with SST would be due to non-SST factors not being favorable for producing TCs with higher maximum wind velocities in the plateau temperature region.

Finally, I think it is important to note that the Michaels et al. paper shows that approximately 75% of the TCs formed in the NATL over the past 25 or so years are in the plateau region, i.e. where maximum wind velocity is independent of SST. Also, the relationship of maximum wind velocity to SST appears nearly flat both below the plateau minimum SST of 28.25 degrees C and above it, with a regime change right at that temperature.

Kenneth, I’ll try to construct scatterplots using SST and max windspeed for several basins. This will be coarse (I’ll use seasonal basin SST rather than the individual storm SST used by Michaels) but it’ll be interesting to see if, coarse or not, any patterns emerge.

The two warmest basins are the Northern Indian Ocean (NIO) and the Western North Pacific (WPAC). The NIO runs about 29.5C while the WPAC runs around 28.5-29C.

Here is a simple time series of the strongest 10% of NIO storms:

There appears to be little change over time in the windspeed of the top-10% NIO storms.

I could show the strongest 5%, or smaller, but there are few NIO storm points in such small quantiles.

The WPAC has many more points so I’ll use the top-6% there:

There, too, any change over time is near-zero. If I reduce the group to the top-2% (150 knots and higher) I think we’d see a positive slope but if I reduce it to the top 1.3% (155 knots and higher) the slope becomes negative.

My conclusions from these simplistic plots? None, but I’d more readily accept the Elsner findings if my simple plots robustly showed increases in the average windspeeds over time.
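For anyone who wants to play with this kind of top-quantile time series, here is a minimal sketch. The data are synthetic (gamma-distributed winds with illustrative shape/scale numbers), standing in for a real best-track record; only the procedure, not the numbers, is the point:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for one basin's best-track record: ~25 storms per
# year, 1981-2006, with gamma-distributed max winds (kt).  A real
# analysis would read the Kossin/best-track files discussed above.
years = np.repeat(np.arange(1981, 2007), 25)
winds = rng.gamma(shape=4.0, scale=18.0, size=years.size)

def top_share_mean(years, winds, q=0.90):
    """Per-year mean wind of the storms at or above the q-th quantile."""
    rows = []
    for y in np.unique(years):
        w = winds[years == y]
        rows.append((y, w[w >= np.quantile(w, q)].mean()))
    return np.array(rows)

series = top_share_mean(years, winds)
slope, _, _, p, _ = stats.linregress(series[:, 0], series[:, 1])
print(f"top-10% trend: {slope:+.2f} kt/yr (p = {p:.2f})")
```

With pure noise, as here, the fitted slope should hover near zero; the same two lines at the end give the trend and p-value for any basin's series.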

I’m curious about how the best-track max intensity data compares with Kossin’s estimates based on reanalysis of satellite images ( link, see bottom of page ).

If I’m interpreting the data correctly then the table lists the max wind based on best track and (the final entry) the max wind based on the reanalysis. I took the differences between the two and plotted those in order of increasing (best-track) intensities. This is for the North Atlantic, where the best-track data is based on input such as recon and ship reports as well as satellite estimates:

The plot seems to show pretty good agreement for average storms (50 to 100 knots) but, for the strongest storms, the satellite reanalysis tends to report lower windspeeds compared to the best-track report.

This probably makes little difference to Kossin’s original goal (an apples-to-apples estimate of storms across all basins) but I’d sure want to understand what drives the apparent divergence before looking for small changes over time. Nothing turns on this – it’s simply a curiosity at the moment.

David, please keep posting your analysis results. I have spent much too much time on the bailout thread and would like to attempt to get my mind around what your analyses show and how they fit with any Michaels-versus-Elsner disparities and agreements. I think it will be very important to understand the differences between the measurement methods used in Michaels and Elsner – something I have so far failed to lay out comprehensively in my postings.

Kenneth, all I have are a few observations with little thread to connect them. But, they’re more enjoyable to me than dwelling on what’s coming to the US, and to some extent global, economy. The US economy has been fueled by debt for some years and that is coming to a painful end. But, every ending is also a beginning.

Here’s the difference between best-track and satellite-reanalysis max winds for all storms since 1980:

If I correctly understand the raw data then I’m surprised that the satellite reanalysis produces such muted results. But, as mentioned before, nothing turns on this so I think it’s end-of-story time. I’ll use the best-track data for my purposes.
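A sketch of the comparison being described, with made-up paired data: the high-end shrinkage applied to the "satellite" series is an assumption built into the fake data purely to mimic the muted top end noted above, not a claim about the real reanalysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired estimates for ~2000 storms: best-track winds (bt)
# and a satellite-only series (sat) that, by assumption here, shaves
# wind off the strongest storms, plus measurement noise.
bt = rng.gamma(4.0, 18.0, size=2000)
sat = bt - 0.15 * np.maximum(bt - 100.0, 0.0) + rng.normal(0.0, 5.0, bt.size)

# Mean (sat - bt) difference within 10-kt best-track bins
edges = np.arange(20, 180, 10)
which = np.digitize(bt, edges)
for i in range(1, len(edges)):
    sel = which == i
    if sel.sum() >= 20:
        d = (sat[sel] - bt[sel]).mean()
        print(f"{edges[i-1]:3d}-{edges[i]:3d} kt: mean diff {d:+5.1f} kt (n={sel.sum()})")
```

Binning the differences by best-track intensity, as above, is what makes the "agreement for average storms, divergence for the strongest" pattern visible without a plot.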

David Smith, I downloaded the data you sourced from Kossin for maximum wind speeds in individual TCs for the period 1981-2006. After looking at these data I determined that the quantile regression must do some kind of magic to get any significant trends for the smaller quantile increments. I think we need to consult a statistician on how quantile regression actually handles the data. My only guess is that it uses more data than that from an individual year, as the numbers for most of the six global MDRs of interest would not provide sufficient individual TCs per year to give a complete set of lower 10th and upper 90th quantiles using whole-number individual TCs.

I calculated the average of the top 2 and bottom 2 TCs for maximum wind speed for each year and determined the trends for the 6 MDRs of the globe. I actually could only use the top 1 and bottom 1 for the IO MDR as the numbers would overlap significantly if I used top 2 and bottom 2. I made this calculation using the satellite PCA data listed in the Kossin referenced data and the Best Track data listed with the same reference.

I can provide the plots, if anyone is interested, but will confine my reporting of results here to the following:

The averages of the bottom 2 (or 1 for IO) for all 6 of the MDRs did not have a statistically significant trend for either the satellite PCA data or the Best Track. The counterpart averages for the top 2 (or 1) had significant trends for the NATL and IO for the satellite PCA data, while only the NATL had a significant trend for the top 2 Best Track.

These calculations confirmed for me the rather underwhelming evidence that the maximum wind speeds for the NATL TCs in the higher quantiles have increased rather uniquely. That result would appear to be in agreement with the results of the Michaels et al. paper discussed and referenced above.

I will attempt to do a weighted global average for the top and bottom TC maximum wind speeds and look more closely at quantile regression.

Meanwhile I was rather surprised to see that the Elsner paper used 6 PCs in their PCA – and without explaining the criteria for selecting that number of PCs.

David Smith, I have included in this post the plots for the high and low maximum wind speeds for the MDRs: NATL, IO, WP, EP, SI and SP. They are arranged for ease of comparison, with the Best Track and Sat PCA derived plots side by side.

My previous post giving the statistically significant trends needs to be adjusted: not only the NATL but also the SI had a statistically significant trend with the Best Track. In summary, the only statistically significant trends for either the high or low maximum wind speeds were in the high wind speeds: the NATL was significant for both the Best Track and Sat PCA derived plots, the IO was significant for the Sat PCA but not the Best Track, and the SI was significant for the Best Track but not the Sat PCA.

David, have the Best Track maximum wind speed measurements been improved over the period in question (1981-2006)? If so, I was wondering what a Sat PCA minus Best Track max wind speed plot over time would look like. Like you, I was quite surprised by how much speed the Sat PCA trimmed off the Best Track speeds of individual TCs at the higher wind speed levels.

It is also interesting to look at the plots below for the time periods within the overall 1981-2006 span when the trend for the higher maximum wind speeds is flat. I have not done the correlation of the higher maximum wind speeds with SST, but such plateaus usually degrade any SST relationship.

Re: Kenneth Fritsch (#271), Kenneth, there have been improvements in accuracy since 1981, both in measurement tools and, to a smaller extent, in interpretation techniques. But, the majority of cyclones still rely on satellite imagery, which has not greatly changed over the period. So, to answer the question, the best-track wind analyses have likely improved since 1981 but not by a great deal.

I think I’ll take a look at some of Ryan’s ACE data to see if ACE changes are consistent with the changes suggested by Elsner et al. ACE and most-intense-storms are second cousins, not twins, so any comparison is problematic, but it’s worth a look anyway.

Here are a few more plots. The first shows the distribution of maximum winds for all tropical cyclones since 1980, worldwide, using best-track data. I used 1-1-1 smoothing for display purposes:

The weakest cyclones are the most common and the strongest are the least common. This is as expected and is consistent with what we saw in Steve M’s and others’ North Atlantic distribution plots back in 2006. No oddities here, with one exception.

The oddity is circled in green here:

The oddity is the hint of a hump in the region of cat 4 and cat 5 cyclones. Maybe there’s “something special” about these most-intense storms. Perhaps they are somehow structurally distinct from weaker cyclones. Cat 4 and 5 storms form, with few exceptions, in “explosive” ways characterized by rapid pressure drops. My guess is that they are structurally different in some aspect of their mid- to upper-air structure, a difference in structure which suddenly makes them better heat engines, like a turbocharged machine.

Another plot shows the same thing, except that I used the satellite reanalysis data:

This distribution looks unphysical, or at least unlike what I expect to see in the distribution. The most-frequent storms are those near hurricane/typhoon strength, which is curious. My sense is that I don’t grasp what this data set is supposed to be.
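For reference, the 1-1-1 smoothing mentioned above is just a three-point running mean over histogram counts. A minimal sketch with synthetic winds (the gamma parameters are illustrative assumptions, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)
winds = rng.gamma(4.0, 18.0, size=5000)  # synthetic worldwide max winds, kt

# Count storms in 5-kt bins, then apply 1-1-1 smoothing (a three-point
# running mean) for display, as described for the distribution plot above.
edges = np.arange(20, 185, 5)
counts, _ = np.histogram(winds, bins=edges)
smoothed = np.convolve(counts, np.ones(3) / 3.0, mode="same")

peak_bin = edges[np.argmax(smoothed)]
print(f"most common max-wind bin starts at {peak_bin} kt")
```

With a weakest-storms-most-common distribution like this, the smoothed peak sits toward the low end; a "hump" near hurricane strength, as in the satellite reanalysis plot, would show up as a second local maximum in `smoothed`.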

David, #272, excellent work on auditing the Kossin dataset. Perhaps the prevalence of near hurricane/typhoon strength is simply the wind speed at which an eye becomes visible (in visible or infrared imagery). Remember that the Elsner et al. paper has no definitive explanation for the statistical observations that they present, and they explicitly say so. It is pure speculation.

The Elsner paper uses 6 PCs from their PCA and does not discuss any criteria for selecting that number. That such an exercise could result in something non-physical should perhaps not be surprising.

If we restrict our observations of quantiles and average maximum wind speeds in individual TCs for 1981-2006 to those for Best Track (excluding Sat PCA as unrealistic) we have the following:

The trends for the average maximum wind speed for all 6 basins are not statistically significant (at p < 0.05), while for the top 2 the trends are significant for the NATL and SI basins.

Examining the top 2 NATL and SI results more closely one can see 2 plateaus in the 1981-2006 time period. It is more evident with the SI data.

I did some autocorrelation analysis and found that these time series have little to no AR1 correlation.

I need to look further at quantile regression, but I have not formed a good understanding of how the Elsner paper could report trends for such small quantile increments in basins with smaller numbers of data points per year.
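On the "magic" question: quantile regression does not take per-year quantiles at all. It pools every storm observation over the whole record and minimizes the pinball (check) loss for a line at quantile tau, so fractional quantiles are fine even with a handful of storms per year. A hand-rolled sketch (synthetic data; a real analysis would use R's quantreg or statsmodels rather than this toy minimizer):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# A small basin: ~12 storms per year, 1981-2006, with a built-in upward
# trend of 0.5 kt/yr (an assumption for illustration).
years = np.repeat(np.arange(1981, 2007), 12)
winds = rng.gamma(4.0, 18.0, size=years.size) + 0.5 * (years - 1981)

def pinball(params, tau):
    """Quantile-regression (pinball) loss for the line a + b*(year-1981)."""
    a, b = params
    r = winds - (a + b * (years - 1981))
    return np.where(r >= 0, tau * r, (tau - 1.0) * r).sum()

# Minimizing the pinball loss pools every storm over the whole record to
# fit the conditional 90th-percentile line; no whole-number count of
# storms per year per quantile is needed.
fit = minimize(pinball, x0=[80.0, 0.0], args=(0.9,), method="Nelder-Mead")
a90, b90 = fit.x
print(f"tau = 0.9 trend: {b90:+.2f} kt/yr")
```

The asymmetric weighting (tau on positive residuals, 1-tau on negative) is what makes the fitted line the conditional tau-quantile rather than the conditional mean.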

Remember that the Elsner et al. paper has no definitive explanation for the statistical observations that they present, and they explicitly say so. It is pure speculation.

That is not the spin we saw from some unnamed commenters on the paper.

David Smith, the difference in the histogram shapes for the Sat PCA and Best Track data that you show was revealing. I looked at the 6 basins individually and found that each followed the same pattern as the combined basins you showed, i.e. the difference shows up in each basin. The one exception was the NATL basin, where the Sat PCA had a shape similar to that for the Best Track.

I also went back and looked at the average maximum wind speed trends for Best Track and Sat PCA for the period 1981-2006 and found that the only statistically significant trend was that for the NATL Sat PCA (but not the NATL Best Track).

I plan to do a further sensitivity test using the lower quantile 0-50% and upper quantile 50-100% maximum wind speeds for comparing the 1981-2006 trends for the six basins with the Sat PCA and Best Track data.

Based on the recent posts here that indicate that a more realistic view of maximum wind speeds (MWS) is that provided by the Best Track, I did some further analyses of the six TC basins over the period 1981-2006 that coincides with that covered in the Elsner paper.

Since my previous calculations showed a significant trend for only the NATL and SI basins for the annual top 2 TCs for MWS, I looked at the six basins with an eye toward any uniqueness of the NATL and SI basins.

In the first table below, I show the results of comparing the 6 basins over the 1981-2006 period for the annual top 2 MWS versus TC counts and then the annual TC count trends. I report the R^2 values of the trends and note that all trends listed in the first column are positive and those in the second are denoted as positive (P) or negative (N).

The major basin standing out in this analysis is, not unexpectedly, the NATL. The SI basin showed a significant trend for MWS Top 2 vs TC count and, like the NATL, a significant trend over time of Top 2 MWS. The SI differs from the NATL in not showing a significant trend of TC counts over the 1981-2006 period.

Since I have more available data for variables that can affect TC activity for the NATL than the other basins, I wanted to look at the trends of the Top 2 MWS for the NATL versus the SST for August, September and October (ASO) for the Main Development Region (MDR at 10-20N, 20-60W) and some wind shear variables (200-850 for 12.5-17.5N, 40-85W) and zonal wind (925 hPa for 7.5-17.5N, 30-100W).

Over the period 1981-2005, I measured the following R^2 values for SST, wind shear and zonal wind trends against the annual top 2 TCs for MWS:

SST: R^2 = 0.32; Wind Shear: R^2 = 0.37; Zonal Wind: R^2 = 0.43

I think it is important to remember that earlier work (reported here at CA) with trends of Easy to Detect ACE (west of 60W) and TCs counts in the NATL versus SST, wind shear and zonal winds in the NATL basin showed trends against SST that were inconsistent over time and, by contrast, trends against wind shear and zonal wind that were noticeably more consistent. To demonstrate that difference, I have included a second table below.

I will not be so presumptuous as an interested layperson to make conclusions about these results, although I do have some ideas of my own.

I went back to the original Nature article on prevailing winds (925 hPa) and vertical wind shear (200 hPa – 850 hPa) by Saunders and Lea to verify the regions used and found that the regions are indeed correct, but that the months used for SST, prevailing wind and vertical wind shear were Aug-Sept in the paper. All the regressions I did used the authors’ regions, but for the months Aug-Sept-Oct – which is more in line with what I see in other published papers dealing with TC activity in the NATL.

I recall doing some sensitivity analyses with regional extent and time period lengths, and as I recall they showed that the R^2 for the regressions was basically the same, but that R^2 varied more over the time periods tested in my analysis above when the Aug-Sept months were used in place of Aug-Sept-Oct. Anyway, I just wanted to correctly reference the regions and time periods used, whether they were cherry-picked originally or not – and to note the potential for cherry-picking by yours truly.

I fitted the Elsner satellite PCA (SPCA) and the Best Track (BT) data, which David Smith showed in histograms in this thread as frequencies of maximum TC wind speed, to a Weibull distribution using the 3 parameters for shape, scale and location. I did a separate fit for each of the six TC basins for the BT and SPCA distributions. I was hoping to see artifacts or evidence of two different Weibull distributions in the SPCA (or BT) distributions. I will not show the results here, since the distributions for all basins, using either BT or SPCA data, gave excellent fits to a Weibull distribution. The only observation of note was that all the basins had differing shape parameters and, to a lesser degree, differing scale and location parameters. I interpreted the differences as evidence that the survival rate of the more intense TCs varies with basin. An even larger difference was in the shape parameter between the BT and SPCA distributions for any individual basin.

I also found that Elsner and Jagger have used the data for frequencies of the maximum wind speeds in land falling TCs for fitting a Weibull distribution.

When fitting Weibull distributions from failure rate data in reliability engineering, the percentage failure is plotted versus the time units to failure, while for TCs the time unit is replaced with maximum wind speed. I guess one can think of the maximum wind speed attained by a TC in terms of failure rates, with the most intense storms being the least prone to failing at a lesser maximum wind speed.

The point of contention in contrasting the BT histogram with that of the SPCA is that the BT version shows the less intense storms’ survival rate to be higher in general, becoming lower as one progresses to higher maximum wind speeds, while the SPCA version shows storms with the lowest and highest maximum wind speeds having lower survival rates than those toward the middle of the distribution. The BT distribution would appear to be the more naturally expected of the two. Given my lack of a comprehensive understanding of TC physics, particularly as regards maximum wind speeds, I would appreciate a detailed discussion of the important variables.

I think we are off the beaten path sufficiently for Steve M not to see my confession to determining the Weibull parameters graphically. In R or any other statistical package worth its salt, this determination would be done with maximum likelihood estimators, but 100 years ago we did it my way and it kind of brought back memories.
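For the record, the maximum-likelihood route is a one-liner in most packages. A sketch: draw a sample from a known 3-parameter Weibull (the shape/location/scale values here are assumptions for illustration) and recover the parameters by MLE, as R's fitdistrplus or scipy would do in place of the graphical method:

```python
import numpy as np
from scipy import stats

# Draw a synthetic max-wind sample from a known 3-parameter Weibull,
# then recover shape, location and scale by maximum likelihood.
true_shape, true_loc, true_scale = 1.8, 25.0, 60.0
sample = stats.weibull_min.rvs(true_shape, loc=true_loc, scale=true_scale,
                               size=3000, random_state=42)

shape, loc, scale = stats.weibull_min.fit(sample)
print(f"shape {shape:.2f}  location {loc:.1f}  scale {scale:.1f}")
```

One caveat with a free location parameter: the likelihood surface can be badly behaved when the shape is near or below 1, so checking the fit against the graphical estimate is not a bad idea.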

I’m still playing with the best track and (what I think is) satellite reanalysis data. I took a look at the absolute differences in windspeed between the two for each global storm since 1980. The plot below shows the results, ordered by increasing satellite windspeed:

It is an odd image which I do not grasp. The x-axis shows the satellite-estimated windspeed, increasing left to right, for all storms globally since 1980. The y-axis shows the absolute difference between the satellite-estimated and best track data. Why the apparent criss-cross pattern?

Whatever slight chance I had to understand this pattern is now zero, due to the total beating which bender’s team inflicted on my team, and my psyche, this evening. Ouch.

Re: David Smith (#283), Well, about a nanosecond after posting #283 it dawned on me that the pattern is likely due to the best track data being stated in increments of 5 kts while the satellite reanalysis data is stated in much smaller increments. I blame my lapse on bender’s gators.

As a long time Cub and Bear fan, I have learned to work through the sting of defeat and disappointment.

The curve you show seems to have varying incremental values between the hash marks on the x axis.

The fan shape of the scatter plot, in which the maximum differences increase with increasing satellite values, says, I would guess, that the higher max winds have the potential for the largest differences while some still show small differences. Or, put another way, wind speeds cannot be negative.

Is it the “weave” pattern in the plots that puzzled you and after thinking about it related it to the 5 kts increments in the reported Best Track wind speeds?

Re: Kenneth Fritsch (#285), Yes, it’s the weave pattern from the 5 kt increments that stumped me for a while. After bender’s gators wove through the tigers’ secondary at-will last night, I was seeing weaving patterns everywhere.
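The 5-kt explanation is easy to demonstrate with made-up numbers: take continuous "satellite-style" estimates, round them to the nearest 5 kt as the best-track files do, and the absolute differences trace a sawtooth family of crossing lines, which is exactly the weave:

```python
import numpy as np

rng = np.random.default_rng(5)

# Continuous satellite-style estimates vs the same winds reported to the
# nearest 5 kt, as in the best-track files.  Plotted against the
# continuous value, |sat - bt| forms criss-crossing sawtooth lines.
sat = np.sort(rng.uniform(30.0, 160.0, size=500))
bt = 5.0 * np.round(sat / 5.0)
diff = np.abs(sat - bt)

print(f"max |sat - bt| = {diff.max():.2f} kt")  # rounding bounds it at 2.5
```

Any quantized-vs-continuous comparison produces the same pattern, so nothing about the storms themselves is implied.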

We need about 20 ACE points from Nana, or some other thundershower, to get us to the 140 or so that our GCMs forecasted for the season.

Tropical Storm Nana can be added to the Tiny-Tims file. It may not last 12 hours at tropical storm intensity. I guess this is the “active” October that the Gray and Klotzbach forecasting duo were expecting: Marco and Nana.

Unless a major storm occurs which has some sort of effect on climate, their forecast will be a colossal bust just like 2006 when the season stopped on October 2.

Klotzbach at least admitted that their forecasts had no skill, particularly those far into the future. They did change their model this year (as they have in the past), which, like those investment strategies that keep changing, makes a decent out-of-sample test of the most recent edition impossible.

Despite the glitch for Oct., as I recall K-G are on track for an excellent seasonal forecast and join some pretty good company in the persons of DS and KF.

I am also forecasting that a short kick off with 11 seconds left in the game and leading by a point will be called stupid by all Monday morning QBs.

It starts with the nuts and bolts of scientific publishing. Hundreds of thousands of scientific researchers are hired, promoted and funded according not only to how much work they produce, but also to where it gets published. For many, the ultimate accolade is to appear in a journal like Nature or Science. Such publications boast that they are very selective, turning down the vast majority of papers that are submitted to them.

The assumption is that, as a result, such journals publish only the best scientific work. But Dr Ioannidis and his colleagues argue that the reputations of the journals are pumped up by an artificial scarcity of the kind that keeps diamonds expensive. And such a scarcity, they suggest, can make it more likely that the leading journals will publish dramatic, but what may ultimately turn out to be incorrect, research.

Without naming names, I am sure you can think of a few studies in the climate community, including the global warming and hurricanes research community, that fit this theory. What an embarrassment.

#289 So we needed the financial “meltdown” (…) [33 zillions and counting…] to get HC Andersen up to date again… The emperors are always naked… FYI… #287 David, we here in Europe have Nana wind speeds every day, as you have… so how many NONOs do we need I only need …. a hundred, whereas you need … 160?? But have some faith in TD 15, missing PR and USVI and making no harm… [some eels in the Sargasso Sea may be disturbed if it’s mating season?!] and maybe reaching TH status Marco, advisory like “extremely small…”… AND 2008 OCT 06 1100 PM EDT DISCUSSION STEWART:

MINIATURE MARCO HAS MAINTAINED ONE SMALL COLD-TOPPED THUNDERSTORM CLUSTER…ABOUT THE SIZE OF THE STATE OF DELAWARE…OVER THE LOW-LEVEL CENTER. THEREFORE…THE INITIAL INTENSITY IS BEING KEPT AT 55 KT IN LINE WITH THE LAST AIR FORCE RECON DATA…EVEN THOUGH DVORAK SATELLITE CLASSIFICATIONS ARE T2.0/30 KT FROM BOTH TAFB AND SAB. I HAVE WORKED SOME TINY TYPHOONS IN THE WESTERN PACIFIC BEFORE…BUT HORIZONTALLY-CHALLENGED MARCO COULD BE THE SMALLEST TROPICAL CYCLONE ON RECORD….

[my commentary: Tiny typhoon aka microphoon…] [5 days in the penalty box, Lindstroem…]

Omar is now Category 3, with 120 mph winds, and is a major hurricane. Accuweather and Drudge are really hyping the storm as a threat to the Virgin Islands and the Hess oil refineries. About a half million barrels of crude are processed there daily, but with the world stock markets in the ditch and the Dow likely to follow tomorrow, Omar’s effects will likely go unnoticed. All in all, the 2008 Atlantic hurricane season, with Gustav and Ike smack dab in the Gulf of Mexico, was clearly an example of good luck for petro producers and consumers alike. Aside from the artificial shortages in the southeast US, gas prices are quickly spiraling towards $2 a gallon. Today the cheapest gas is in Harlingen, Texas, near the Mexican border, and in Brownsville, at $2.23 a gallon.

Omar is good for ACE points, which is important to some of us contestants. Kenneth Fritsch will have to reveal his technique to us if the season ends at about 140-145, which is very close to his June prediction. Kenneth, you’ll have to publish.

David, are you kidding me? After Judy Curry’s revelations that the monetary value of models forecasting TC activity accrues to the greatest extent when the methods are not revealed, I have decided against saying any more about my models. Before profiting from my methods I will need to find an established forecaster of TC activity with credentials, since a very rank amateur practitioner such as yours truly will obviously not have any marketing value. I will also have to get my methods sold before another year of forecasting might reveal that my 2008 forecast was simply lucky.

As an aside, I have to admit admiration for the knowledge of TC processes that those who forecast possess. They have to understand the process well even if they have no forecasting skills.

I still want to estimate the value of forecasts and the precision needed to make a forecast valuable. It would appear that forecasting the TC activity in the NATL for a given season is the most difficult of all forecasts to make, particularly when it is made far into the future. It appears to me that those forecasts are much less valuable than those that attempt to predict the tracks of a developed storm. I suppose one needs to correlate TC activity with landfalling storms and storm damage before making such a statement, but I am guessing that the correlation will not be a strong one.

On the other hand, it appears that mid- to short-term forecasting of TCs’ spatial tracks has improved over time, while forecasting of intensity has not. The question I have in this area is: what is the value of a spatial forecast, with its uncertainty band, when paired with the inability to make mid-term intensity forecasts?

Kenneth, I’m sitting in your town at the moment, at Chicago Midway, waiting for a flight. As we were landing I looked out the window towards Lake Michigan and spotted a cloud swirl. I think that qualifies, by today’s standards, as at least a subtropical storm, which in this case is named Paloma.

New cyclone in the Indian Ocean nearing the tip of Somalia and likely interfering in the piracy activities of the local warlords. There is the potential (albeit small) for this storm to enter the Gulf of Aden south of Yemen and then reach the Red Sea. The SSTs are very warm in the vicinity … SST PLOT

Here’s the distribution of storm max winds for the last ten years in the North Atlantic. Two groupings are shown – one for storms whose duration was less than six days and the other for storms of six or more days.

It shows that “immature” storms cluster around the low end, which is unsurprising because they did not have the time to fully develop. The “mature” (pink) storms have a broader distribution centered around perhaps 80 knots, as their strength is determined by environmental conditions such as wind shear.

David, thanks for the thought provoking post and email explaining the maximum wind speed histogram. You have got me to thinking with your over/under six day duration TC distributions of maximum wind speeds. That is a great analytical way of viewing the data and might make me rethink the Weibull approach I attempted previously.

I am not sure whether this has any relevance, but the under-six-day duration histogram shape looks more like that from the Best Track with all storms, while the six-day-and-over duration distribution looks more like the Elsner et al. satellite PCA derived distribution.

We are having a balmy autumn day here in Chicago land and I have a picnic bench that needs a coat of paint before winter gets here and you throw this at me, David.

Re #304 ken, we still have time for one more garden-variety storm with an ACE of, say, 10, which would bring the ACE above 140. That dead-on accuracy would earn you the title of Wizard, plus 500 quatloos and preferred stock in an insolvent bank of your choice.

ok. I’ll stick my neck out early
“Wizard” Level (two pieces of information should be submitted)
1. Forecast the ACE category for the season 127 (132)
2. Forecast the number of named tropical storms 15 (15)
_________________________________________________________________________
“‘You Da Man” or “You Da Wo-man” Level (nine pieces of information should be submitted)
1. Forecast the number of named tropical storms 15 (15)
2. Forecast the number of hurricanes 8 (6)
3. Forecast the number of intense storm-days 8 (?)
4. Forecast the number of hurricane-days 33 (?)
5. Forecast the number of storm-days 66 (?)
6. Forecast the seasonal ACE 127 (132)
7. Forecast the number of Wimps (less than 40 kts or 24 hrs or less) 2 (?)
8. Forecast the number of subtropical storms 3 (?)
9. Name your forecast reasoning: handicapping gray

Steven Mosher, do not be bashful about your good fortunes in forecasting. I was unaware of your good luck in forecasting in other areas. Please list all your lucky guesses here as I want the world to know that I have been competing, by diligently putting all my complex models together, with someone with a golden touch.

How did the rest of the bloggers do on their 2008 predictions for the number of hurricanes and the number of named storms? I predicted 6 hurricanes and 15 named storms but was low on the ACE number [75-80].

re 314. Kenneth, I’m waiting to see how well I guessed the GISS land surface temp for 2008 before I claim the climate Nostradamus crown.

Steven, I was going to reply to you, but I thought it would not be of interest to anyone besides you, and you, well, you already know what I was going to write. It was going to be a pretty snappy reply, wasn’t it, Steven.

In attempting to make sense of why the fitting of a Weibull distribution to the frequency of maximum wind speed (MWS) for individual TCs can be made with near exactness (see results below) for both the Best Track and satellite PCA data (from Elsner), I did the following:

Since a Weibull distribution is frequently used with failure times, I quickly looked at the TC duration time relationship with MWS and rejected this approach (see below for evidence).

I then attempted to find a simplistic explanation for obtaining failure rates as an increment of wind speed (or MWS if the TC fails to progress to a higher wind speed) in place of the time increments that are ordinarily used in reliability engineering treatments.

Below I show the relationship of TC duration (to the nearest day) to MWS. While the R^2 shows an overall good correlation, it can be seen that at the lower MWSs the relationship is more linear, and that once durations exceed about 12 days the relationship breaks down. I therefore concluded that the Weibull fit has little bearing on time or failure times.

In looking at the failure rates by MWS increments, I start by noting that every TC counted for MWS has to pass through progressive increments of wind speed, with the terminal speed being the MWS. I ignore for these purposes how quickly or slowly the wind speed increases or decreases, or, for that matter, whether the wind speed decreases and then increases again to a higher value. In other words, a TC, after attaining an incremental wind speed (for example, one between 60 and 70 kts), can either survive to the next higher increment or diminish in wind speed and eventually die. Death of the TC after attaining that wind speed fixes the MWS and counts as a failure for that increment, and the failure rate is determined by dividing the incremental MWS counts by the total TC counts.

The actual failure rates for the MWS increments for the Best Track data show a progressively increasing rate over a short range at the lowest MWSs, up to a point where the failure rate becomes nearly constant, and then an increase at the highest MWSs. The increase in failure rate at higher MWSs would appear to me to be logically the result of the TC (hurricane) approaching a physically limiting bound for MWS, as noted in papers by Emanuel. Evidently, the lower failure rates at the lowest MWSs indicate that it is relatively easier for those TCs to progress to higher MWS than to die, compared with TCs at the higher MWSs.

For the Elsner satellite PCA data, the failure rates at the incremental MWSs appear mainly to increase with increasing MWS. Both the satellite PCA and the Best Track data have few TC counts at the highest MWSs, so the failure rates in that range are uncertain.

It is my conjecture that, since the satellite PCA does not pick up the MWS measured for the Best Track at the lower MWSs, the Elsner PCA is, as Ryan M suggested here, measuring something physically different from MWS. That something different can be well fitted by a Weibull distribution, but then again, with three free parameters, perhaps that is not all that surprising.

The shape, scale and location parameters for the fitted Weibull distributions for the six TC basins worldwide are listed below when using Best Track and satellite PCA data for fitting.

I played more with the parameters of the Weibull distribution, specifically looking at the incremental MWS failure rates using, not the actual values as I did above, but those derived from the Weibull distribution with the fitted parameters.

To make a long story short, it was obvious to me that some of the frequency gyrations in the actual results, particularly at the high-MWS end of the distribution, were making the failure rates difficult to determine. What the fit to a Weibull distribution does, in effect, is smooth these gyrations in the actual data so that incremental MWS failure rates can be calculated.
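The smoothing step can be sketched by replacing raw bin counts with probability mass from the fitted three-parameter Weibull CDF, mirroring the empirical definition of the rate. The parameters below are illustrative stand-ins, not the fitted basin values from the table:

```python
import math

def weibull_cdf(x, shape, scale, loc=0.0):
    """Three-parameter Weibull CDF: F(x) = 1 - exp(-((x - loc)/scale)^shape)."""
    if x <= loc:
        return 0.0
    return 1.0 - math.exp(-(((x - loc) / scale) ** shape))

def smoothed_failure_rates(bin_edges, shape, scale, loc=0.0):
    """Incremental failure rates implied by a fitted Weibull distribution.

    The rate for bin [lo, hi) is the fitted probability mass in the bin
    divided by the probability of surviving past the bin's lower edge --
    the same ratio as the empirical version, but without the sampling
    gyrations at the sparse high-MWS end.
    """
    rates = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        surv = 1.0 - weibull_cdf(lo, shape, scale, loc)
        mass = weibull_cdf(hi, shape, scale, loc) - weibull_cdf(lo, shape, scale, loc)
        rates.append(mass / surv if surv > 0 else float("nan"))
    return rates

edges = [30, 60, 90, 120, 150]  # kts; illustrative increments only
print([round(r, 3) for r in smoothed_failure_rates(edges, shape=1.5, scale=60.0, loc=25.0)])
```

With a shape parameter above 1, as here, the smoothed rates increase monotonically with MWS, which is the qualitative behavior described below for the fitted basins.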

What I found was that for all basins, using either Best Track or satellite PCA data, the failure rate increases as one proceeds from the lower to the higher MWSs. The general difference between the Best Track and satellite PCA failure rates is that the Best Track fits have the lower shape parameter, closer to 1 (at a shape of exactly 1 the failure rate is constant), so their failure rates increase significantly less toward the higher MWSs than those from the satellite PCA data. A larger scale parameter, on the other hand, spreads out the increase in failure rate with MWS, i.e., a higher MWS is needed to reach a given failure rate.
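The roles of the shape and scale parameters described above follow directly from the Weibull hazard function, h(x) = (k/s)·((x − loc)/s)^(k−1). A small sketch with made-up parameter values (not the fitted ones) shows the behavior:

```python
import math

def weibull_hazard(x, shape, scale, loc=0.0):
    """Instantaneous Weibull failure rate h(x) = (k/s) * ((x - loc)/s)^(k - 1)."""
    z = (x - loc) / scale
    if z <= 0:
        return 0.0
    return (shape / scale) * z ** (shape - 1)

# shape = 1: the rate is constant at every wind speed
# shape > 1: the rate climbs with MWS, and a larger shape climbs faster
# a larger scale stretches the same climb over a wider MWS range
for k in (1.0, 1.5, 3.0):
    print(k, [round(weibull_hazard(x, k, 60.0, 25.0), 4) for x in (50, 100, 150)])
```

Since the quoted shape parameters for the Best Track fits sit closer to 1 than the satellite PCA ones, their implied failure rates are flatter across MWS, exactly as observed.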

I do not know how clear this explanation is to those reading here, but if it is, I would be interested in hearing whether these failure rates (as I defined them above) have any physical meaning vis-à-vis the MWS frequencies found in TCs.

re319. hehe. Looks like I lost the ice bet over at Lucia's. I handicapped the field, but some guys bet way too high because they misunderstood the bet.
I like to think of my approach as an ‘average of model ensembles’ approach. yuk yuk