Vecchi and Soden

Here is Judith Curry’s review.

Our understanding of the climate variability of tropical cyclone intensity is
hampered by lack of an insufficient historical data record and the inability of
current climate models to resolve tropical cyclones. Hence efforts are being
made to develop proxies for hurricane activity for which we have adequate
historical data or that can be resolved by climate models, to help us understand
how and why hurricane activity has varied in the past and how it might change in
the future.

As a proxy for changes in hurricane intensity, the Vecchi and Soden paper
examines the tropical cyclone maximum potential intensity. Maximum potential
intensity is a theoretical upper limit on hurricane intensity based on sea
surface temperature and the local vertical thermodynamic structure of the
atmosphere. There have been several theories of maximum potential intensity
proposed, and the most widely accepted of these is by Kerry Emanuel. Because
historical data on the vertical thermodynamic structure of the atmosphere are
not available prior to 1950, Vecchi and Soden further develop a proxy for
potential intensity based upon the tropical sea surface temperature anomalies
relative to the average tropical sea surface temperatures.
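For readers who want the flavor of the theory, one common statement of Emanuel's maximum potential intensity (the precise form varies across versions of the theory, so treat this as a sketch rather than the definitive equation) is

$$ V_{max}^2 \;=\; \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\left(k_0^{*} - k\right) $$

where $T_s$ is the sea surface temperature, $T_o$ the outflow temperature aloft, $C_k$ and $C_D$ the surface exchange coefficients for enthalpy and momentum, $k_0^{*}$ the saturation enthalpy at the sea surface, and $k$ the boundary-layer enthalpy. The dependence on the outflow temperature as well as $T_s$ is exactly why the vertical thermodynamic structure, and not SST alone, enters the calculation.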

The main result of Vecchi and Soden's analysis is that maximum potential
intensity does not correlate well with the global increase in tropical sea
surface temperature, but rather with the regional variations in the surface
temperature. Looking specifically at the North Atlantic, the historical data
record since 1880 shows an increase in the Atlantic potential intensity that is
strongest near the equatorial latitudes in the eastern Atlantic, while the
Caribbean and Gulf of Mexico show a decrease in potential intensity. In terms
of the climate model projections for the North Atlantic, the maximum potential
intensity increases towards the equator and the African coast and also in the
Gulf of Mexico, but decreases in a swath extending from the Caribbean northeast
to the African coast north of about 25°N.

So what is the significance of this paper in assessing the impact of global
warming on hurricane intensity, and particularly for the North Atlantic? North
Atlantic hurricane intensity has shown a strong relationship to sea surface
temperature, and the theory of maximum potential intensity has been used as a
physical link between increasing tropical sea surface temperatures and
increasing hurricane intensity. This paper is one of a number of papers
published in the last few years indicating that the influence of increasing sea
surface temperature on hurricane activity is complex: it is not just the local
heating effect of warmer sea surface temperatures (the maximum potential
intensity argument); changing gradients of sea surface temperature also alter
atmospheric circulations that in turn affect hurricanes. The eastern Atlantic
has been warming more rapidly than the western Atlantic (Gulf of Mexico and
Caribbean), which is influencing the atmospheric circulations and resulting in
more hurricane genesis in the eastern Atlantic and more equatorward tracks.

This paper is an interesting contribution that will be of some help in filling
out the picture of how climate variations influence hurricanes. There are some
caveats about the methods used in the paper that will hopefully be addressed in
future research; these caveats are:
* Current theories of hurricane maximum potential intensity still fall short in
terms of accurately accounting for the exchange of heat between the atmosphere
and the ocean and the asymmetric vortex dynamics.
* The SST data are not of adequate quality prior to 1920, especially in the
Pacific Ocean, and particularly for the proxy regional SST anomalies.
* Climate model projections of potential intensity depend on the
parameterization of tropical convection to provide the upper atmospheric
temperature and humidity used in the calculation of potential intensity.
Tropical convection parameterization is one of the weakest links in climate
models. Confidence in the climate model projections of potential intensity
should have been established by looking at the climate model simulations for the
past century and comparing with the historical data record of potential
intensity.
* Climate models generally do a poor job of simulating realistic interannual
and decadal modes of natural climate variability. The climate models used here
show strong interannual and decadal variability in maximum potential intensity;
assessing this variability using climate model simulations of the last 100 years
would have made the authors' claim about the magnitude of the natural
variability exceeding the trend more credible.

I’ve finished the Kossin and Vimont 2007 paper and it is excellent. The paper is an attempt to pull together various hypotheses on Atlantic hurricane climatology and it succeeds. It’s well-written but perhaps a bit heavy on meteorology for non-stormheads. There’s a neat map which I’ll extract and link later today.

“This paper … will be of some help in filling out the picture of how climate variations influence hurricanes,” said Judith Curry, an Earth science researcher at the Georgia Institute of Technology.

This Judith Curry quote from the first link, taken without context, would be read as a "no comment."

Kerry Emanuel, a professor of atmospheric science at Massachusetts Institute of Technology, for instance, found the combined power of Atlantic hurricanes has more than doubled since 1970.

On Wednesday, Emanuel said Vecchi and Soden’s findings might be valid over a short-term period, but his studies show a clear “upward trend” in hurricane intensity.

“I don’t really agree with their conclusions,” Emanuel said.

Another study also maintains that global warming is supercharging hurricane intensity. Scientists at the National Center for Atmospheric Research and Georgia Tech, including Peter Webster and Judith Curry, found that in the past 35 years, the number of Category 4 or 5 hurricanes has almost doubled.

The second link notes comments and findings of Emanuel, Webster and Curry but fails to reference the works of Kossin and Vimont on reanalysis of global tropical storm data sets and associations of NATL storms with AMM as linked in the David Smith post below.

I find these areas of climate science much more interesting and telling when you dig enough to establish your own views on these matters and do not depend on press releases and idle chatter.

AB: The tropical warm pool plays a determining role in the global climate since it acts as a source of thermodynamic forcing for the atmospheric general circulation. The warm pools (SST>28°C) extend from the Indian Ocean, across the Indonesian Archipelago into the western Pacific, with a secondary area crossing Central America into the Caribbean and the central Atlantic Ocean. The heating in the atmosphere above the warm pool influences climate over wide ranges of the planet. As there are zonal asymmetries in the extent of the warm pool, and hence variations in the locations of total heating of the atmospheric column, the warm pools also create centers of diabatic heating along the equator which set up the position and strength of the east-west circulations that play integral roles in the coupled ocean-atmosphere tropical climate. In fact, almost all of the global vertically integrated heating resides over waters >27°C. The tropical warm pool is characterized by large-scale variations of SST on time scales that range from intraseasonal to interdecadal, considerably altering the forcing to the atmosphere. In addition to the existence of the large variability of the tropical warm pool SST, there is an upward trend in the tropical warm pool area, which is evident in the Atlantic, Indian and Pacific oceans with the area encompassed by the 28C isotherm growing by 67% since 1920. Changes in the zonal and meridional circulation associated with the variability and expansion of the warm pool are studied using NCEP-NCAR and ERA40 reanalyses. It is found that the impacts extend around the tropics and are associated with a slowing down of the Asian monsoon circulation and modulation of the equatorial Walker cells. Analysis of the IPCC-CMIP3 models for the 20th century shows similar changes in the warm pool extent, suggesting that changes that occur under different future emission scenarios may possess credence.
With greenhouse warming it is found that the warm pool doubles in size from the observed values in the 21st century. We explore the changes in the column-integrated heating in the different CMIP3 integration scenarios to see if the threshold between columnar heating and cooling shifts to higher temperatures and whether there will be linear or nonlinear transitions. Results suggest the existence of important changes in the zonal and meridional circulation in the Atlantic, Pacific and Indian Oceans in association with the expansion of the tropical warm pool. We also show an example of how the expansion of the tropical warm pool not only has effects on the atmospheric general circulation but also on the dynamics and sustainability of marine biodiversity. Corals, for example, are highly sensitive to temperature increase and could face extensive bleaching under the expansion of the tropical warm pool.

Every time I use a link tag, all the text after the < vanishes in preview. So I stay away from them.

I get the same thing. The way I stumbled upon to fix it is to create the link with the “Link” button, and then put a space immediately after where it says “href=” in the start of the link. That fixes the preview.

Judith C, good to hear from you. Thanks for the report from the front lines.

Judith:
I would be very interested in the data that supports the finding that “in the Atlantic, Indian and Pacific oceans with the area encompassed by the 28C isotherm growing by 67% since 1920.” That is an amazing level of precision given the nature of the available data. Frankly I would be more impressed by the % growth as defined by satellite data – but that is another story. Let’s hope the article sheds light on this assertion.

Sorry to be pedantic, but I don’t think you really mean what you say in your opening sentence:

Our understanding of the climate variability of tropical cyclone intensity is hampered by lack of an insufficient historical data record and the inability of current climate models to resolve tropical cyclones.

“In addition to the existence of the large variability of the tropical warm pool SST, there is an upward trend in the tropical warm pool area, which is evident in the Atlantic, Indian and Pacific oceans with the area encompassed by the 28C isotherm growing by 67% since 1920.”

Prove it. And by prove it, I mean, prove that, with all error factors and possible changes in measurement methodology and coverage during the measurement period, there has been a real, irrefutable, physical expansion.

Re the Hoyos talk, he is presenting evidence in his talk at AGU. A paper will be submitted shortly to a refereed journal. This paper includes an extremely careful data analysis (note they do not use the SST data prior to 1920) and statistics that I would expect to be beyond reproach (Carlos Hoyos is one of the best statisticians in our field). AGU is a conference for exchanging new ideas and new results (much of which hasn’t been published yet), so be patient and stay tuned.

Without more detailed information this excerpt tells us laypeople virtually nothing.

The important measure of change would be knowing what the 67% increase in the 28 degree C isotherm was supposed to replace, i.e., if it replaced a 27.5 degree isotherm and it was shown that the SST in general increased by that amount, then it would be just another way of saying SST has increased — and that is hardly news. Perhaps if one stated that the warm pool got 0.5 degrees warmer over the period of time in question, it would not have the same impact.

Without reading further on this, I suppose one could have made an a priori case for 28 degrees being a threshold for some atmospheric processes. Or that the warming in this tropical area is occurring at an accelerated rate. Or that 28 degrees is approaching some theoretical maximum obtainable temperature. David, Bob, Ryan, you guys are good at throwing some maps (isotherms) together in a hurry so we laypeople could get a better view of what the author is talking about.
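While we wait for maps, it may help to spell out what the isotherm-area metric is computationally: given a gridded SST field, the warm pool area is just the cosine-of-latitude-weighted sum of the areas of grid cells exceeding the threshold. A minimal sketch (the random field below is a stand-in, not real data; in practice one would read a gridded dataset such as HadISST):

```python
import numpy as np

# Hypothetical 1-degree tropical SST grid (placeholder values only;
# a real calculation would use a gridded SST dataset).
lats = np.arange(-29.5, 30.5, 1.0)                      # 30S-30N band
lons = np.arange(0.5, 360.5, 1.0)
rng = np.random.default_rng(0)
sst = 27.0 + 2.0 * rng.random((lats.size, lons.size))   # degrees C

R = 6.371e6                 # Earth radius in metres
d = np.deg2rad(1.0)         # grid spacing in radians
# The area of a 1-degree cell shrinks with the cosine of latitude
cell_area = (R ** 2) * d * d * np.cos(np.deg2rad(lats))[:, None]

# "Warm pool area" = total area of cells inside the 28C isotherm
warm_pool_area = (cell_area * (sst > 28.0)).sum()
print(warm_pool_area / 1e12, "million km^2")
```

Comparing this number between, say, 1920 and 2000 is exactly where the data-quality complaints in this thread bite: the metric is only as good as the gridded SST fed into it.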

I am sorry, but I simply cannot believe that coverage in any part of the tropical Atlantic other than near the US and near possessions of Spain and Portugal can be trusted prior to the 1960s or 70s. When someone claims that “the seed areas of the East Atlantic have incurred an SST increase” my retort is: against what sort of baseline, and against what sort of quality level characterizing the baseline? I don’t care if the baseline is 1920 or 1940, it is probably of low quality. Also, the buoy array (note, trusting ships is foolhardy, for reasons well discussed here) is nowhere near sufficient to be able to create any sort of reasonable isotherm at a high degree of confidence. This leaves satellite measurements, which, as we know, really only have been in place in a reliable fashion since, being generous, 1970. Isotherm schmischotherm … you say 1E, 5N …. I say 5W, 1N. Based solely on buoys, that’s about the level of confidence we might arrive at.

RE: #17 – No, just a grizzled veteran of decades of trials and tribulations incurred as a result of various claims made by people that one or another metric “demonstrates X.” When I was young and naive, I would sometimes fall for it or get all wound up due to some claim of an adverse trend in a metric. However, after a few false alarms, needlessly shutting down production lines or needlessly challenging product readiness, one learns to ask lots of questions about sources of data.

Here is a wager. I bet I can make the 28 deg C isotherm in the Atlantic show either a rise or a fall in enclosed area, over the 1920 to present time period. It’s all how you treat the earlier parts of the period.

As a former lab auditor, I’m still trying to figure out how Hadley arrives at their +/- .050C accuracy claims, or NASA’s .050C (95% confidence), for global mean temperature. They have more confidence in the surface station network than I ever had for climate controlled labs using NIST traceable thermometers. At least with satellites they can be calibrated to a known standard and corrected to a reasonably testable extent.

Surely others chuckle just a bit at some of these accuracy and precision numbers for temperature data?

In addition to the existence of the large variability of the tropical warm pool SST, there is an upward trend in the tropical warm pool area, which is evident in the Atlantic, Indian and Pacific oceans with the area encompassed by the 28C isotherm growing by 67% since 1920.

Here’s a look at some data:

A source for the Pacific and Indian Ocean Warm Pool area data is this article. It’s an easy read and well worth a look, even for those uninterested in the particular topic. The definition of Warm Pool in this article is 28.5C, not 28C, but the conclusions should be about the same.

Note this figure which covers 1910-2000. There are obvious questions about data quality, I agree, but the thing that stands out to me is the considerable decadal variability in area and what appears to me to be a lack of trend over the 20th century.

The large variability also makes the choice of beginning and ending points quite critical to one’s conclusion. If one had chosen, say, to compare 1935-1945 to 1990-2000, one might conclude that the Indo-Pacific Warm Pool had actually cooled and shrunk, evidence of AGC (Anthropogenic Global Chillin’).

What about the Atlantic? I suspect that the 67% increase is about true, but there is something important to remember about the Atlantic: it oscillates. There’s a thing known as the Atlantic Multidecadal Oscillation (AMO) for which there’s good evidence that the tropical Atlantic SST varies on about a 60-year cycle.

Cycles have high points and low points. To look for trends I imagine Dr. Wegman or Steve M would advise one to compare peaks-to-peaks and valleys-to-valleys, and not valleys-to-peaks.

It so happens that 1920-1925 was a valley in the Atlantic cycle and 2000-2005 was a peak in the cycle. Hoyos’ comparison appears to be a valley-to-peak comparison. I suspect that if Hoyos had done a peak-to-peak comparison then the magnitude of any trend would have been notably smaller.
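The valley-to-peak point is easy to illustrate with a toy series: a 60-year oscillation plus a small linear trend (all numbers invented purely for illustration). Comparing a valley year to a peak year wildly overstates the trend; comparing peak to peak recovers it:

```python
import numpy as np

years = np.arange(1900, 2006)
# Toy series: 60-year oscillation (valleys at 1900/1960, peaks at 1930/1990)
# plus a small linear trend of 0.002 per year.
cycle = -np.cos(2 * np.pi * (years - 1900) / 60.0)
trend = 0.002 * (years - 1900)
series = cycle + trend

v2p = series[years == 1990][0] - series[years == 1900][0]  # valley-to-peak
p2p = series[years == 1990][0] - series[years == 1930][0]  # peak-to-peak

print(f"valley-to-peak change: {v2p:.2f}")
print(f"peak-to-peak change:   {p2p:.2f}")
```

Here v2p comes out near 2.2, dominated by the cycle, while p2p is 0.12, which is exactly the true trend (0.002/yr over 60 years). Same series, same "warming", wildly different headline numbers.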

I offer as evidence of this a chart from a Webster 2007 PowerPoint (link). It shows 1920, 1960 and 2000 SST area greater than 28.5C. Indeed 1920 was a cool year but 1960 (a near-peak-but-declining year) is little different from 2000.

Having written this, though, it’s important to clearly note that we need to read Hoyos’ paper before forming any conclusions, as one sentence in the abstract may not accurately capture the gist of the article.

Having written this, though, it’s important to clearly note that we need to read Hoyos’ paper before forming any conclusions, as one sentence in the abstract may not accurately capture the gist of the article.

I agree that forming conclusions would be totally premature and particularly so for someone like me who has much background work to do. Thanks, David, for the article and charts which start to put the subject in better perspective for me. As you say there is much variation in the warming zone area and over several year periods. The excerpt did leave me, on its own, thinking so what and that the full explanation of the point being made lies elsewhere. The point of a 28 or 28.5 degree isotherm being important might come from your first link which in effect stated that:

Cycles have high points and low points. To look for trends I imagine Dr. Wegman or Steve M would advise one to compare peaks-to-peaks and valleys-to-valleys, and not valleys-to-peaks.

Aye, but what do quasi-cyclic LTP red noise anomaly sequences have? No convincing peaks. No convincing valleys. Just all-scale Hurst-like variability. Only if you smooth do you obtain those cyclic artifacts. (That’s why they do it!) So in reality there is no way with realistic climate time series to decide where the start and end points should be placed. All choices are equally invalid.
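The "smoothing manufactures cycles" point is easy to demonstrate. The sketch below generates AR(1) red noise with no periodicity in it at all, applies a heavy moving average, and counts zero crossings; the smoothed series crosses zero far less often, wandering in slow wave-like excursions that are easily mistaken for cycles (the parameter choices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# AR(1) "red noise": persistent, but with no cycles built in
n, phi = 1000, 0.7
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Heavy smoothing: 21-point moving average, as often applied to climate series
w = np.ones(21) / 21.0
xs = np.convolve(x, w, mode="valid")

def zero_crossings(a):
    s = np.sign(a)
    return int(np.sum(s[:-1] != s[1:]))

raw_cross = zero_crossings(x)
smooth_cross = zero_crossings(xs)
# The smoothed series shows long, slow excursions that look like "cycles"
# even though none were put in.
print(raw_cross, smooth_cross)
```

The point is not that smoothing is always wrong, but that apparent multidecadal "oscillations" in a smoothed record are exactly what persistent noise plus smoothing would produce anyway.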

The entire surface record is within what I’ve always considered to be minimum standard error for thermometer measurements, i.e. plus or minus 0.5 degrees C. The belief in these higher precisions is certainly hard to swallow. All this within a system that varies by plus or minus 50 degrees C. One can almost imagine a scientist from another era looking at it another way and concluding that the climate, within standard error, is remarkably stable.

I’d imagine also that a scientist who has staked his entire reputation on trying to say that hurricanes are getting worse would not welcome any report that concludes the opposite.

I agree, JamesG, it all depends on how you draw the field. Goalposts at either end..check. But how wide and how long? A good deal of the grand work being done at CA seems to be an attempt to define the boundary conditions and when, and how, they might be set.

My background is in law and economics; every time I read one of these papers I look for actual evidence and then consider its implications. Occasionally I see something which would work in court; most of the time I see weak correlations and a set of “Hail Mary” passes with the data.

The problem is that we have some great forensic guys here but a limited number of advocates who can at least get a sense of the science and its imperfection and then get it out into the world.

Intelligent and honest scientists like Judith Curry are helping to put the science out in the world. From there it is up to us advocates to make science compete with fallacy.

Hans
The usual cure for error propagation is to check for Gaussian distributions. I’d like to see them done for the Hadley/GISS plots. Discrete Gaussian plots overlaid on a graph are quite normal in my line of work. Of course there likely isn’t nearly enough data for that. I’m also really curious that 2007 is supposed to be one of the hottest years ever and yet my body tells me that Europe has been unusually cold for most of this year.

The Vecchi Soden paper is interesting and I think that Judith Curry does a good job of summarizing its significance (above).

I’m more interested, though, in physical explanations based on historical information (to the extent possible) and so I remain fascinated by the Kossin Vimont paper. It’s the kind of paper which I wish someone like Eric Berger would read and summarize, but admittedly it would be a difficult subject for a newspaper.

The KV paper centers around something known as the Atlantic Meridional Mode (AMM), which for now can be thought of as a multidecadal climate oscillation in which positive AMM = active hurricane season and negative AMM = quiet hurricane season.

I’ll try to write up some key points in the coming days, once I finish with the kids’ science project displays (a too-often parental chore). For the moment there is one interesting chart in the paper which I’ll mention.

KV’s Figure 3 provides a lot of good information but it takes a few minutes to get it in focus. It is a map of the Atlantic and it shows certain behaviors during positive and negative AMM.

During positive AMM (left map) the Atlantic is warmer than normal (warm colors), wind shear is decreased in the mid-Atlantic (dotted lines) and major hurricane generation shifts eastward towards those favorable conditions.

During negative AMM the deep tropics cool (bluish colors) and see much higher wind shear (solid lines). The higher latitudes see somewhat more favorable conditions but there are drawbacks to that due to proximity to land, the cooler waters of the North Atlantic and mid-latitude wind shear.

The period 1970-1995 saw primarily negative AMM (the right map) while since 1995 the Atlantic has been mostly positive (left map).

There are two other things I’ll mention regarding sea surface temperature (SST). One is in regards to the SST in the equatorial Pacific (lower left on the map). During positive AMM the Pacific tends to be in La Nina condition (which is somewhat favorable for Atlantic hurricanes) while during the negative AMM the Pacific tends to be in El Nino condition (somewhat unfavorable for hurricanes).

More importantly, note the “trail” of temperature anomalies. The anomalies follow a trail down the eastern side of the Atlantic, which is consistent with the wind pattern. To me that’s an indication that surface wind anomalies (which affect evaporation and mixing) play a major role in the SST anomalies.

KV’s Figure 3 is neat, with a lot of good information crammed into it.

Re #36 Here’s one other note on Figure 3 and some of Steve M’s and others’ earlier work:

The AMM from 1940 to 1995 generally shifted from the left map (positive AMM and high activity in the eastern Atlantic) to the right map (negative AMM and lower activity in eastern regions). Yet the work on storm longitudes, storm-days, etc. from Steve M and others showed storm data shifting from the west Atlantic towards the east Atlantic over that period. To me that indicates that the pattern Steve M picked up is not consistent with natural events (the AMM tendency to shift activity westward) but rather reflects problems with the storm data (our much-discussed detection problems of remote and weak storms).

The entire surface record is within what Ive always considered to be minimum standard error for thermometer measurements ie plus or minus 0.5 degrees C.

JamesG, I read in another blog awhile back that the error of precision thermometers is in the range of +/- 3.5 degrees – much more than +/- 0.5 degree. The person who wrote it said he was responsible for calibrating the instruments and that was one reason he wasn’t buying any of the AGW arguments.

I don’t for a minute believe the entire surface record is within +/- 0.5 degree. Not with all the issues that have been discussed here in CA with station location, movement, upgrading, and so on. To me, it is much like the spaghetti charts; the data is so poor and the resolution of the data so bad that climate science, or at least AGW, relies far too much on smoke and mirrors and fear – mostly fear.

The +/- 3.5 degree value is before figuring out how to handle microsite and UHI contamination.

I think there are valid questions about whether or not they really know how to adjust for those things and if their adjustment is valid. What is it measured against? Has it been duplicated and found to be robust? What about outside the U.S.? No other country, to my knowledge, has close to the number of stations and records. An “entire surface record”? No I’m sorry; there is no such thing.

I apologize for intruding again. I usually just read because I am unable to contribute in a meaningful way as others do.

no such thing as proof when it comes to science? I for one would not dare challenge the movement of an anvil falling out a third floor window. Something to do with gravity?

Philosophers such as David Hume would argue that this constitutes an observation of something that happened, but there’s nothing proved by this observation. Technically, “proof” only exists within the realm of mathematics (and by extension, logic), though you can get pretty darned close in science.

no such thing as proof when it comes to science? I for one would not dare challenge the movement of an anvil falling out a third floor window. Something to do with gravity?

Scientific knowledge grows by disproof of false conjectures. So, yes, theories are not, strictly speaking, “proven” in science. But this is a narrow definition of the concept of “proof”. In the real world, theories are considered “proven” when they pass the smell test a million times a day.

But this is irrelevant for the example given. If belief in anvils falling stems from a grasp of physics, then why do 5-year olds intuitively understand the risk of falling objects when they know nothing about gravity, physics, or science? People like to flatter themselves by pretending they live by science. The fact is people get on planes because other people get on planes, with no apparent risk. It’s empirical, not rational. It has nothing to do with faith in the physics of flight, or a rational understanding of how it works. Just as the choice not to fly usually has more to do with irrational fear of unlikely consequences than of reasoned consideration of the probability of flight failure.

Don’t give the cortex too much credit. Reptilian instinct counts, in this debate as in all others.

The paper is on my bed stand as we speak and I’ll have another go at it tonight, but as a retired person my days are nearly all committed with requests from SWMBO and other interests.

Re: #36

David, your keeping track of the connections between papers and posts in these tropical storm threads provides an important and needed function for maintaining some continuity to typical internet discussions that too often become disjointed.

Based on the KV paper, I am contemplating doing another Poisson distribution fit calculation using the Easy Detect storm counts and replacing the Nino 3.4 index (positive/negative) with an index derived from the AMM per the KV paper.

Just as the choice not to fly usually has more to do with irrational fear of unlikely consequences than of reasoned consideration of the probability of flight failure.

Hehe, I almost fall into that boat (or out of that plane, appropriately), but the scientist inside allows me to overcome my irrational fear because I know it is inherently safer than driving across town in my SUV. :)

Which sentence could be false? Which sentence could be overturned by observation? Which is true by definition? Which is true by stipulation? Which sentence was discovered to be true by human action? Can I prove that 2+2 does not equal 4? Could I collect data that would contradict F=ma? Could the world have been otherwise? Which would surprise you more: a proof that 2+2 did not exactly equal 4, or a proof that F did not exactly equal mass times acceleration?

just asking questions here.

In science there is no proof. There is confirmation. There is disconfirmation. And there is ignoring.

You forgot one in #50, mosh, besides self-refuting; confirmation, disconfirmation, ignoring, self-refuting: And then, there is modeling….

Anywayz, geez, I make one little joke about “proving things in science” in response to asking for proof that the 28C isotherm grew by 67% since 1920 and I start a philosophical discussion.

Okay. If I claim that right now, on Earth, standing here, under current conditions I can prove there is gravity, most folks would think that’s a strange thing to say; we know there is. But how could I show you? Drop one. Now if observing something happen (Does methane burn? Gas flow in the presence of oxygen without any strong winds, match, fire) doesn’t prove anything, what does? Can we (do we have to) prove that if you heat a 10 pound block of iron to 500 F and touch it your hand will burn?

There is no scientific proof!!!! You can’t prove to me that water is made of hydrogen and oxygen by using the gasses to make water! That’s only me observing it, not proof it happens. :)

Re #46 That sounds like a good exercise, Kenneth. I plan to try to reconstruct KV’s Figure 1 using storms with an ACE of 2 or higher only, so as to reduce the impact of the detection changes. I think I’ll get patterns more robust than those shown in Fig 1.

By the way, I just realized that I needed to learn how to spell “meridional”. I wish WordPress had spellcheck :)

You can add N. America, large and widely separated parts of Asia, New Zealand and Australia to that list. December 2007 is set to be the coldest month anomaly-wise in quite a long time. My suspicious mind says the people who track global temps, know it. Hence the curious announcement about 2007 before the year is over.

I agree with the synopsis of Judy’s review of VS 2007. The take home point, which most should acknowledge, is that the relationship between SST changes and hurricane intensity/frequency/lifecycle changes is neither clear nor simple. Nevertheless, with every additional study in the peer reviewed literature, we beam a laser focus on certain specific aspects of the problem, but rarely is the whole picture provided all at once.

Climate model projections of potential intensity depend on the
parameterization of tropical convection to provide the upper atmospheric
temperature and humidity used in the calculation of potential intensity.
Tropical convection parameterization is one of the weakest links in climate
models. Confidence in the climate model projections of potential intensity
should have been established by looking at the climate model simulations for the
past century and comparing with the historical data record of potential
intensity.

This pretty much sums up our current state of affairs for tropical climate prediction. The modeling of upper-tropospheric temperature changes represents a wild card in climate change, and is an important component of potential intensity calculations.

Didn’t a more recent review of Eddington’s 1919 observations of the bending of light by the Sun’s gravitational field, as predicted by Einstein in 1915, show that Eddington’s results were within the margin of error?
Of course the observations have been repeated in other more sophisticated instances since, and Eddington’s actual data could be checked 100 years later.

Just in case anyone is wondering how far we have come in resolving tropical cyclone wind velocities, I offer Jelenak, Chang, Sienkiewicz, Knabb & Chelton with an excellent slide presentation on QuikSCAT and other modern remote sensing instruments. Real scientists are committed to getting reliable obs from high resolution instruments. See Slides 15, 19, 21, 23. They also scream like banshees when someone is messing with their instrument budgets, which I think ultimately is the point of this slide show. I dare say these folks wouldn’t tolerate their sats getting parked next to a barbeque. (Did I really just write that?)

On topic: it would be interesting to test prior years’ Gulf Loop Current / eastern Gulf upwelling/perturbation against subsequent intensities. My list of Loop Current crossers (hurricanes only): 1998: Earl (minimal hurricane), Georges (the unheeded warning to New Orleans, 75 miles east of being Katrina); 1999: Brett (not really a Loop storm, but I am in a generous mood); 2000: Gordon (lame-o storm); 2001: none; 2002: Lili (long tracker, western edge of the Loop); 2003: Claudette (even further west; see Brett); 2004: Charley (you sick freak meso, more of a blowup in the hot shallows of the Florida Gulf coast), and Ivan, which tips off the Loop Current barrage in 2005 with Katrina, Rita, and Wilma (I stayed up all night in disbelief watching this pinhole). If you really want to get picky, you go from 1998 to 2004, Ivan, before an intense cyclone taps the really hot water of the eastern Gulf. When the steering currents finally started dumping closed circulations into the eastern Gulf at the end of 2004, the SST was way hot and probably deeply saturated. The 2004-2005 blowup was focused regionally in the eastern Gulf, which probably not coincidentally was an unplowed, overheated, fertile breeding ground for some high-energy events.

The bottom line is that there is not enough compelling and unchallengeable evidence to support the AGW hypothesis. I especially liked the way this letter makes a direct challenge to the credibility of “the models”.

David, I looked at the Easy Detect annual storm counts in two categories, negative and positive AMM index, for 1900-2005 to determine how well the separate categories fit a Poisson distribution. The AMM index split the years evenly, 53 in each category, and the chi-square goodness-of-fit tests to a Poisson distribution gave the following:

Negative AMM index p = 0.34

Positive AMM index p = 0.77

I can compare these fits with what I found using the 1871-2006 Easy Detect storm counts categorized by negative and positive nino 3.4 index. Recall that these categories had equal numbers in each, and the chi-square goodness-of-fit to a Poisson distribution was as follows:

Negative nino 3.4 index p = 0.67

Positive nino 3.4 index p = 0.70

I would give the fit using the nino 3.4 index a decided edge over using the AMM index. I will now have to reread the VK (2007) discussion of the similar but independent effects of ENSO and AMM on NATL tropical storms.
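For anyone who wants to replicate this kind of test, here is a minimal sketch of a chi-square goodness-of-fit test of annual storm counts against a Poisson distribution. The counts below are hypothetical stand-ins, not the Easy Detect data, and the binning is my own choice:

```python
# Chi-square goodness-of-fit of annual storm counts to a Poisson distribution.
# The counts are hypothetical; the Poisson rate is estimated from the sample.
import numpy as np
from scipy.stats import poisson, chisquare

counts = np.array([7, 9, 10, 8, 12, 6, 11, 9, 10, 8, 7, 13,
                   9, 10, 11, 8, 9, 12, 10, 9, 6, 11, 8, 10])
n = counts.size
lam = counts.mean()  # estimated Poisson rate

# Coarse bins so every expected frequency is reasonably large
obs = np.array([(counts <= 7).sum(),
                ((counts >= 8) & (counts <= 9)).sum(),
                ((counts >= 10) & (counts <= 11)).sum(),
                (counts >= 12).sum()])
probs = np.array([poisson.cdf(7, lam),
                  poisson.cdf(9, lam) - poisson.cdf(7, lam),
                  poisson.cdf(11, lam) - poisson.cdf(9, lam),
                  1.0 - poisson.cdf(11, lam)])
exp = n * probs

# ddof=1 because one parameter (lambda) was estimated from the data
stat, p = chisquare(obs, exp, ddof=1)
print(f"lambda = {lam:.2f}, chi2 = {stat:.2f}, p = {p:.2f}")
```

A large p-value here means the Poisson model cannot be rejected, which is the sense in which the p = 0.77 category "fits better" than the p = 0.34 one.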

I compared the correlations (which I think may be ill-advised to calculate the way it was done, without showing that the method was applicable to the distribution) of VK (2007) using Easy Detect storm counts and did not see an improvement in correlation over what VK calculated using all the storm counts. I did not have the raw data and thus had to extract the data from the VK graphs.

Getting back to the issue. My brief take on this new paper is that while it’s previously been assumed that tropical storms arose from absolute temperatures, in fact they behave just like every other storm in being driven by temperature (or potential) differences. Of course in a global warming scenario, temperature differences should stay the same or shrink, even if absolute temperature increases. I commented exactly this recently on the Nature blog, so I feel vindicated, even though I was just stating what should have been the bleeding obvious.

In passing I also noted that some attempts to weasel a rising trend out of non-rising data used Bayesian statistical methods. Bayesian methods are highly dependent on your inference rules, so they can be a really good way of confirming your bias. For example, if your primary rule is “higher temperatures cause more tropical storms”, then your Bayesian system will happily confirm that if temperatures increase then storms will increase. It’s just wrapping your crude assumptions up in nice mathematical clothes.

However, if you assume that temperature differences cause all storms, then you don’t even need the Bayesian cheat, because your assumption agrees with the landfalling hurricane record, which has no trend. Since that is an accurate record, it would normally be reasonable to extrapolate it and say that tropical Atlantic storms as a whole have not increased, unless the actual locus of storm activity has changed. Clearly the mid-Atlantic records are still under heavy scrutiny as to the effect of better detection methods, but as Roger Pielke Jr. just wrote on his own blog, if it’s just the mid-Atlantic, should we care?
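The prior-dominance point can be made with a toy calculation of my own construction (not from any cited paper): a Beta-Binomial model in which each year is "above-trend" with probability theta. With trendless data, a strong prior still walks away believing the trend:

```python
# Toy Beta-Binomial illustration of a strong prior dominating trendless data.
# All numbers are invented for illustration.

def posterior_mean(a, b, k, n):
    """Posterior mean of theta under a Beta(a, b) prior after observing
    k successes in n trials (standard Beta-Binomial conjugacy)."""
    return (a + k) / (a + b + n)

# Trendless data: 5 "above-trend" years out of 10.
k, n = 5, 10

biased = posterior_mean(50, 1, k, n)  # strong prior: "warming means more storms"
flat = posterior_mean(1, 1, k, n)     # flat prior

print(f"strong prior -> {biased:.2f}, flat prior -> {flat:.2f}")
# prints: strong prior -> 0.90, flat prior -> 0.50
```

The data are exactly 50/50, yet the strong prior yields a posterior near 0.90, which is the "confirming your bias" effect in miniature.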

I’m trying to familiarize myself with the AMM, and so I made several time series like this one.

This one shows Atlantic SST variation (Gulf of Mexico, Western Tropical and Eastern Tropical Atlantic) plus an AMM index (related to the north-south SST gradient near the equator). What impresses me is the oscillatory nature of all of these parameters, with roughly a decade from peak to peak. I have not found an explanation for this.

The other point is that the pattern becomes more distinct in the 1970s – I wonder if this is tied to improved SST measurement as satellite data began to supplement ship data in the late 1970s.

Steve Sadlov: Come on, you know there’s no such thing as proof when it comes to science!

Would it not be more charitable to say that “scientific truth is not absolute for a subject, but it is the aim of purer scientists to explain and account for the maximum amount of known uncertainty”.

BTW, in your # 53 the spelling is “gases” not “gasses”, with fair certainty, source – many reputable English dictionaries. One letter can make a difference. Inulin is not insulin.

Mea culpa too, I am a lousy typist.

Back to thread. A CSIRO climatologist emailed me early in 2007 about satellite measurements of sea level changes. I’ll just quote him verbatim; it’s easy to get what I mean.

keeping satellites on track – there I meant in the horizontal
plane, ie steering the satellite to overfly the same ground tracks. This
only needs to be within a few km, since the footprint of the sounding is
many km anyway.

Noise floor: 100 mm is the noise level for 1 Hz observations
(which are averages of the original 20 Hz observations). Each ‘snapshot’
of global sea level takes 10 days to sample. That’s 864,000 × (sea fraction
of planet) ≈ 600,000 observations. This does not reduce the noise as much as you
might hope, because some of the sources of error have large covariance
scales.

The implication seems to be that the 100 mm error can be vastly reduced by taking 600,000 readings. My personal feeling is that if the 100 mm is all a bias in accuracy, no number of readings will correct for it. So do we quote a change of 3 mm a year globally for a device accurate to 100 mm? Try building a home with a ruler graduated only in half yards, or, as some of us express it in a more enlightened decimal way, about 50 cm.

An absolutely shocking article was published in the NY Times online that seems completely foreign to the typically liberal Times.

A few stunning pieces of wisdom cited from Roger Pielke Jr. with regard to Vecchi and Soden:

Roger A. Pielke Jr., a professor of environmental studies at the University of Colorado, recently noted the very different reception received last year by two conflicting papers on the link between hurricanes and global warming. He counted 79 news articles about a paper in the Philosophical Transactions of the Royal Society, and only 3 news articles about one in a far more prestigious journal, Nature.

Guess which paper jibed with the theory (and image of Katrina) presented by Al Gore’s “An Inconvenient Truth”?

It was, of course, the paper in the more obscure journal, which suggested that global warming is creating more hurricanes. The paper in Nature concluded that global warming has a minimal effect on hurricanes. It was published in December, by coincidence the same week that Mr. Gore received his Nobel Peace Prize.

And who can forget the historically quiet hurricane season in the Northern Hemisphere? (And 2007 is over, by the way.)

When Hurricane Katrina flooded New Orleans in 2005, it was supposed to be a harbinger of the stormier world predicted by some climate modelers. When the next two hurricane seasons were fairly calm (by some measures, last season in the Northern Hemisphere was the calmest in three decades), the availability entrepreneurs changed the subject. Droughts in California and Australia became the new harbingers of climate change (never mind that a warmer planet is projected to have more, not less, precipitation overall).

The author of this piece may need to go into the Witness Protection program for straying so far away from his comrades at the Times. I am stunned and at the same time, impressed with the breadth of the article. Is this a new year or what?

Thank you, oh thank you, for picking up the example of my lousy typing. I now know that at least one person read the post.

On the more substantial issue, can anyone add extra information about the absolute accuracy of satellites used for the determination of sea level? Is it not delusionary to model figures which spend their lives in and out of error bounds? Besides, how are satellite altitudes measured and corrected? By bouncing signals off land and sea. The land/sea ensemble is almost circular, at least for one orbit.

The implication seems to be that the 100 mm error can be vastly reduced by taking 600,000 readings. My personal feeling is that if the 100 mm is all a bias in accuracy, no number of readings will correct for it. So do we quote a change of 3 mm a year globally for a device accurate to 100 mm? Try building a home with a ruler graduated only in half yards, or, as some of us express it in a more enlightened decimal way, about 50 cm.

You can go beyond your personal feeling, Geoff. What you state is a fact. Increasing the sample size only reduces sampling (i.e. random) error; it does not reduce systematic error (i.e. bias).
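A quick numerical check of this point, with hypothetical numbers in metres (a 100 mm bias and 100 mm per-observation noise, roughly the figures quoted above):

```python
# Averaging shrinks random noise by ~1/sqrt(N) but leaves systematic bias intact.
# All values here are hypothetical, chosen to echo the 100 mm figures above.
import numpy as np

rng = np.random.default_rng(42)
true_level = 0.0
bias = 0.1        # 100 mm systematic offset (e.g. a calibration error)
noise_sd = 0.1    # 100 mm random noise on each 1 Hz observation
n = 600_000       # roughly one 10-day global 'snapshot' over the oceans

obs = true_level + bias + rng.normal(0.0, noise_sd, n)
est = obs.mean()

# Random part collapses to ~0.1/sqrt(600000) ~ 0.00013 m; the 0.1 m bias remains.
print(f"estimate = {est:.4f} m (true = {true_level} m, bias = {bias} m)")
```

The averaged estimate sits within a fraction of a millimetre of 0.1 m, i.e. right on the bias, nowhere near the true level; and this sketch assumes independent noise, so correlated errors (the "large covariance scales" in the email) would make the averaging benefit even smaller.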