91 Comments

Re point 4 (in my last post in the previous thread, which also contains Emanuel’s data), my error. A closer examination reveals that the PDI data I sent are the adjusted PDI data, which include Emanuel’s decreases in the pre-1970 data. So I don’t know why the SSTs are that different.

It also highlights the fact that, even with the pre-1970 decreases in the PDI (which Emanuel agreed were too large), there is still no upward trend in the 1949-2004 PDI …

Good work, Willis! This helps me understand some of my head-scratching over Emanuel’s plots.

Bender, if you want a challenge, see if you can figure out how Emanuel constructed his Figure 3. It is a combo of North Atlantic and Northwest Pacific storm data, with half of the SST data coming from the Southern Hemisphere for some reason.

I don’t know how one combines the storm data and I don’t know why one would use Southern Hemisphere SST when examining Northern hemisphere storms.

I could try if I had the two data streams. (And we’re talking raw data, not smoothed.) Is the composite not simply a straight weighted average?

Playing guessing games in order to reconstruct someone’s graph is not too interesting to me. Answering Willis’s questions about why the PDI=f(SST) relationship varies, and whether it is biased by data pre-processing, is, in contrast, very interesting.

An example of where I get stumped by Emanuel’s Figure 3 is his 1955-60 PDI. It rose about 100% (0.25 to 0.5) from 1955 to about 1958. Yet the Atlantic PDI for that period went slightly down, while the Pacific was up maybe 30%. If the combined PDI is a weighted average, how can that doubling have happened?

Well, I’ve completed getting the data from the Emanuel paper. The interesting thing is this …

The SST is correlated with the PDI (per his figures) with r^2 = 0.48. But because of the strong autocorrelation of both the PDI and the SST, there is no statistical significance to this correlation (p = 0.16).

So there is no significant trend in the Atlantic PDI, and there is no significant correlation between the Atlantic SST and the PDI …

Re-reading #9, a couple of comments. I was talking about his smoothed figures for the Atlantic. These have been smoothed using the 1-2-1 smoothing filter. It has been used twice on the SST data (near as I can tell), and four times !?! on the PDI figures.
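For concreteness, the 1-2-1 filter in question is simple enough to sketch. Below is a minimal Python version (a reconstruction for illustration, not Emanuel’s actual code), with a `pin_endpoints` option for the end-point handling the thread is trying to pin down:

```python
import numpy as np

def smooth_121(x, passes=1, pin_endpoints=True):
    """Apply a 1-2-1 running filter one or more times.

    If pin_endpoints is True, the first and last values are carried
    through unchanged on every pass (the practice inferred here for
    Emanuel 2005); otherwise each pass drops one point from each end.
    """
    y = np.asarray(x, dtype=float)
    for _ in range(passes):
        inner = 0.25 * y[:-2] + 0.5 * y[1:-1] + 0.25 * y[2:]
        y = np.concatenate(([y[0]], inner, [y[-1]])) if pin_endpoints else inner
    return y
```

Note that with `pin_endpoints=False` a series smoothed twice loses two years at each end, which is the behavior Landsea argues for below.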

Some fascinating discussion here, which I haven’t had time to fully digest yet. I am hoping to turn the discussion towards specific recommendations as to how the data should be analyzed and presented, and what kind of conclusions can actually be drawn from the data. Let’s forget for now the uncertainties in the data quality and focus on the statistical analyses. If you can suggest a better way of doing what we have been doing, I would be happy to redo the plots and conclusions, at least for the talks I give (giving climateaudit credit), and if all this substantially changes the conclusions that Emanuel, Webster et al. are drawing, then I would be prepared to write another paper on this (bender has apparently turned down my offer to coauthor a paper on this 🙂). So rather than just doing the due diligence, finding flaws in what was done, why not try to take it to the next step?

Here is what I am concluding so far about how to approach these analyses (again, this is based on a cursory reading of all this; hopefully I will have more time next week):
1) looking at an individual ocean basin like NATL, we find tons of autocorrelations, making it difficult to elicit a significant trend or correlation
2) looking at the global dataset, the autocorrelation problem is much smaller, although there is an apparent strange 5 year autocorrelation in NCAT45 (no idea what that one is, very intriguing esp since it is global)
3) inferring anything about changing hurricane statistics and SST is more robustly done on the global data set (if the forcing is global, then we expect to see a global signal, and the autocorrelation problem is smaller)
4) plotting the data in 5 yr bins would have been ok if there hadn’t been 5 yr autocorrelation (4 yr bins would have been ok?)
5) while it is ok to plot the data this way, the 4 yr bins leaves us with too few degrees of freedom for a meaningful trend analysis (what can we actually conclude about the trend from the global data used by WHCC?)
6) looking at global data relating some measure of hurricane intensity and SST makes sense since the forcing is global and we have a theoretical relationship that should relate intensity and SST
7) integral measures like PDI and ACE include not only intensity but duration and number of storms; given that we don’t have theory, especially for number of storms, N may have a big influence on PDI, and the N part may not relate to SST (but in the NATL, there is a relation between N and SST)
8) sorting out the actual physics of what is going on beyond the basic intensity/SST relationship seems to be more logically done at the level of the individual basin, where we can sort out the contributions from N, Ndays, intensity and understand what is controlling them in terms of cyclones, atm dynamics, SST, etc. But we have statistical significance problems in just looking at individual basins.
9) re the focus on the intensity and SST relationship, Emanuel’s potential intensity theory refers to wind speed (not to PDI, etc.). We need to come up with the best metric to reflect actual intensity (maybe average peak wind speed is the best). I do like NCAT45, since that is the part of the intensity distribution that is changing the most, but it seems vulnerable to observational errors.

So if this discussion can help provide concrete suggestions for moving forward on this, climateaudit will have entered a new era of productively collaborating with domain experts to move the science forward, rather than just serving as whistle-blower/due diligence watchdog.
I think this discussion is on the brink of making such a contribution.

p.s. The “no increase in peak windspeed” found by WHCC is probably a red herring issue that can be ignored; the hurricane forecasters who assign such numbers to the storms have tended not to move very easily out of a prescribed range. The satellite reanalysis that is underway should give us some much better numbers to look at for that one.

Willis, was the adjusted data from the Landsea paper the same data used by Emanuel? The most recent graph that you displayed showed standard deviations on the y axis for the PDI and SST time series. Is that correct?

It was my view that we have unadjusted PDI data and 2 or 3 sets of pre-1973 adjusted data: one set originally over-adjusted by Emanuel, a second set readjusted by Emanuel, and perhaps a third set adjusted according to Landsea. Could you show a graph of all versions together?

Thanks for the Landsea paper. I have not completed reading it, but it seems to add much to the discussion. If my questions can be answered from reading this paper, just say so.

Would it add to the discussion if we put together all the reasonable remaining questions we have about Emanuel’s paper and submit them through you to Emanuel?

Re #10
Read the text carefully, Willis! Emanuel [p. 687] craftily says: “This filter is generally applied twice in succession.” So I guess it is up to the reader to determine whether in any particular instance this general rule was not followed, i.e. he probably did apply it four times in some instances.

Re 13
Again, probably not a question of deceitfulness here, but Nature’s brutal page restrictions. I’m coming to the conclusion that climate science should never ever be published in Nature because their editorial policies require such drastic textual oversimplifications that it is very damaging to what is surely a very complex story.

Nature and Science have the same issue in all fields. I know of a fraud by a young turk academic that has not come to light yet. They are essentially places for people to put press releases. If the work behind it is solid, fine, but there is not enough info to check on things well, or even to use the report for future endeavours. They have basically become “letters journals”, with the same problems that PRL and APL have in physics, but with more newsworthy content.

It was written in 1955 that full papers in the specialty literature are the appropriate way to report results. Was true then. Is true now. Note that GRL has an L in it…

TCO, why are you still contributing to this thread? I thought you said it was all played out? [Kidding. I agree with your point. The problem is the “Letters” papers are still viewed as the most prestigious, and that ain’t gonna change.]

1. Adjusted yearly PDI is from Landsea’s “Communications Arising”
2. Original PDI is reconstructed from the Landsea adjusted version, using Landsea’s data on the smoothed version of the original PDI as a template.
3. Smoothed SST is from Emanuel’s paper, for September in 6°N-18°N, and 20°W-60°W.
4. Raw HadSST data is from the HadSST database (available online), and is for the month of September for the area 5°N-20°N, 20°W-60°W.

Supposedly, all of Emanuel’s data is available on his website. However, it is in “NetCDF” format, and I’m on a Mac, and I can’t find anything to do the translation for the Mac … anybody got ideas? Is there a script in “R” that would do it?

IMPORTANT NOTE, I just noticed I switched the labels on the pinche cabrón. The Original PDI has the higher values in the pre-1970 period, and thus is the third column, with the Adjusted PDI in the second column.

re #11: I am a lay person in this area. However, I truly respect the attitude demonstrated here by Judith Curry. Her attitude is as a scientist’s attitude should be, and it is clear that she is being treated with respect by the CA crowd.

It would be great if those on CA capable of contributing to the issue would be prepared to work constructively with her, Emanuel, Landsea et al in a collaborative effort to find out what is really going on, and to strengthen the robustness of the scientific conclusions.

Notably, the raw data are not available from the “Papers, Data, and Graphics” page at Emanuel’s website. (There he cites the use of a 1,3,4,3,1 filter, which will be problematic if the lag-5 autocorrelations are significant.)

Judith, I read your comment about autocorrelation in the North Atlantic, but I don’t find the problem in the data. You say:

1) looking at an individual ocean basin like NATL, we find tons of autocorrelations, making it difficult to elicit a significant trend or correlation

However, the Emanuel data for example, which has strong autocorrelations, still shows significant relationships (r^2=0.24, p = .006 on the raw SST vs original PDI data, not the smoothed), and significant temporal trends (for the SST data, but not the PDI data). Here’s the autocorrelation of the SST and the original PDI (adjusted PDI is only slightly different).

bender, I was talking about r^2, the regression coefficient of determination.

Re #12, I normalized the data to make the relationships clear, so the Y axis is in standard deviations. To me, this is preferable to say, Emanuel’s practice of multiplying the two datasets by some arrangement of mx+b to bring them into line, as this is subject to … well … subjectivity, as far as chartsmanship.

Hello, Judith, good to see your input and your points are well-taken by me. I’d like to answer in two parts, due to some personal time constraints. I cannot comment on the statistical issues (my interest is meteorology) but will do so on other aspects.

Regarding Emanuel’s approach:

* For Emanuel’s Figure 1, the approach should be to look at the Basin surface temperature, which means the area covered in Webster et al.’s Atlantic box plus the Gulf of Mexico, plus the Bahama region. The latter two are, along with the western Caribbean, the warmest regions of the Basin and the ones which account for much of the PDI. I suggest dropping the 6N to 9N band, which is ITCZ-related but which does not seem to be part of his hypothesis so far as I can tell.

* For Figure 2, expand the SST box westward and northward to include the warmer regions of the western Pacific. The reasoning is the same as above.

* For Figure 3, exclude from the SST region those basins, and that Hemisphere, in which the PDI storms did not occur.
(As mentioned before, the PDI in this one is a head-scratcher for me and doesn’t seem to jibe with the PDI data from the two basins. I recommend explaining the method of combining and smoothing PDIs; perhaps he did and I have missed it.)

* On the Emanuel/Mann paper, drop the use of pre-1900 storm data. (I also have doubts about the completeness of storm count east of 60W prior to 1940, as mentioned before.)

For both data series (SST, PDI) filtering (“smoothing”) serves to amplify weak 5-yr periodicity (ENSO?) up to the level of a true cycle. Do it twice and you get a strong decadal cycle. Fifty years of data divided into cycles of ten years yields roughly 5 independent observations, or 4 effective degrees of freedom to do the regression with.
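That mechanism can be demonstrated directly: apply the same repeated 1-2-1 smoothing to pure white noise and watch the lag-1 autocorrelation climb. A small Python sketch (synthetic data, not the hurricane series):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation coefficient of a series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)   # white noise: no real trend, no real cycle

smoothed = noise.copy()
for _ in range(4):                 # four passes of the 1-2-1 filter
    smoothed = 0.25 * smoothed[:-2] + 0.5 * smoothed[1:-1] + 0.25 * smoothed[2:]

r_raw = lag1_autocorr(noise)       # near zero
r_smooth = lag1_autocorr(smoothed) # strongly positive, purely from smoothing
```

For a 4x 1-2-1 filter, the theoretical lag-1 autocorrelation of filtered white noise works out to about 0.89, which is roughly what the sketch returns; the effective sample size for any subsequent regression shrinks accordingly.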

Re #19
Wait a sec. Willis, if these are really the source data for Emanuel’s (2005) PDI graphs, then how come the PDIs tabulated in #19 are U-shaped over time, whereas Emanuel’s are more HS-shaped? Do we have our wires crossed?

Re 36, the “Smoothed SST” is from Emanuel’s data, while the HadSST data is from HadSST. I don’t have the raw data for the Emanuel “Smoothed SST”, nor do I know why they are different.

Re 33, the HS shape is from two factors. One is that he has adjusted the pre 1970 values to reduce them, and the other is that his final data points are exaggerated by the filtering. The first point is too low, and the last is too high, which makes the HS shape …

I thought that the Lag(yrs) in the vertical axis shows that this is something else than the actual comparison of strength or some other real-world number. Some sort of statistical thingee. Therefore it’s not surprising the shape is different.

Aha – I think I see half the problem. The data listed here correspond to the dates 1940-1996, not 1949-2005. Looks like maybe a column cut-and-paste error or something like that, Willis. The uptick from 1995-2005 is totally missing from the tabulated data in #19. Carefully compare the third column of #19 to the Fig 1 solid line in Emanuel (2005).

Re #22
What Emanuel has done is to post the Atlantic basin 1970-2005 data on his website. This spans the “30-year” time-frame mentioned in the title of the Nature paper (“Increasing destructiveness of tropical cyclones over the past 30 years”), but does not cover the earlier data period 1930-1970, which his graphs cover. Thus it is not possible to do a sensitivity analysis on the cutoff date. We are stuck with the cherry-picked cutoff date of 1970, unless we can straighten out the mess referred to in #40.

I suggest an insider, like Judith Curry, write to Emanuel to acquire the raw, unsmoothed, original data used in Emanuel’s (2005) Figs 1 & 2, and that a collective effort be made to analyse the sensitivity of his conclusions to any or all of the following:
-degree & method of smoothing
-degree of 3-7y partial autocorrelation in underlying raw data
-degree of correction for reduced degrees of freedom due to enhanced 1st order autocorrelation
-choice of start date (as opposed to cherry-picked 1970)
-choice of trend model
-choice as to whether basins are considered jointly or separately
-choice of spatial data frame within basins

Re #43
No, thank you, Willis. It takes a lot of effort to get all this data together and to read and reply to these endless demands of ours. And it takes some guts to admit when you’ve made a rock-simple error. You are a prince.

Re #42, bender, as you note Emanuel has posted an Excel spreadsheet with his 1971-2003 data. While it is useless for our purposes in this thread, I found it very valuable for checking my own work.

Because it is often such a struggle to get authors to release their data, I have taken up the practice of extracting the data directly from their graphs. I take their graph, blow it up, and using a graphics program, place a locus at each data point. This gives me the data in a very exact way. To verify the results, I use Excel to graph the extracted data, copy the graphed line, and overlay it on the original. This lets me spot any mistakes.

But how exact is this procedure? Well, this spreadsheet of Emanuel’s lets me check my work in this case. I previously sent you the data from his 1970-onwards graph. Now I have his actual data to do the comparison. Here are the figures for the errors due to my extraction procedure:

DATA RANGE: 26.8°C – 27.6°C

Average error: 0.002°C

Std. dev. error: 0.003°C

Max abs. error: 0.009°C

Correlation, my extracted data with his data: 0.99987

In other words, my procedure is way more than accurate enough; the biggest single error from my procedure is one hundredth of a degree C.
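For anyone wanting to replicate that verification step, the error summary is a few lines of Python (an illustrative helper, not the original spreadsheet workflow):

```python
import numpy as np

def extraction_errors(extracted, actual):
    """Summarize digitization error between graph-extracted and published values."""
    extracted = np.asarray(extracted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    err = extracted - actual
    return {
        "mean_abs_error": float(np.mean(np.abs(err))),
        "std_error": float(np.std(err, ddof=1)),
        "max_abs_error": float(np.max(np.abs(err))),
        "correlation": float(np.corrcoef(extracted, actual)[0, 1]),
    }
```

Run against the extracted and published series, this produces exactly the kind of table shown above.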

On other matters, because of the shortness of the record and the high autocorrelation due to the smoothing, there is NO SIGNIFICANT TREND in either the SST or the PDI datasets for the 1970-on data.

The Atlantic basin relationship is not nearly as strong as Emanuel (2005) suggests. His excessive double filtering is indeed bringing out a decadal cycle that is not present in the original data series (both of them). The high degree of decadal coherence means from 1970+ he’s got ~4 independent observations, and thus only ~3 effective degrees of freedom to assess the significance of that “strong” r=0.91 correlation. Take away the smoothing and the 1949+ correlation drops to r=0.49. Which means the relationship is probably valid (depending on the results of the spatial-framing sensitivity analysis), but not even close to the level that his text indicates. I would advise insurers to take a close second look at this paper.

Re #46
Willis, I do the same and I agree 100%: accuracy of digital reconstruction is really a non-issue. (Believe it or not I have digitized some tree-ring series that are 1000y long. O that people would just release their data.)

The high degree of decadal coherence means from 1970+ he’s got ~4 independent observations, and thus only ~3 effective degrees of freedom to assess the significance of that “strong” r=0.91 correlation. Take away the smoothing and the 1949+ correlation drops to r=0.49.

Or with R^2=0.83 with smoothing and R^2=0.24 without smoothing and p=??.

Re #49
His graphical error fooled me for a few minutes as well. Yet another example where proper archiving could alleviate the miserable job of methods reconstruction.

Beats me why these individual Figs 1-3 are not plotted in a single graph. Easier to read, fewer opportunities for labelling errors, saves space. As a reviewer I would have insisted on that. Is this yet another example of editorial fast-tracking?

Re #54
Hmm, that’s not right either. The Emanuel Fig 1 data only go to 2003, so there should be an uptick for 2002-2003 in the smoothed PDI data. I get that with a single filtering (series ending in 2003), but not a double filtering (series ending in 2002). The PDI patterns 1970+ are close to Emanuel’s, but not close enough – and the patterns pre-1970 are off by a fair bit.

What you have to do is smooth the Landsea adjusted PDI four times using the 1-2-1 filter and retaining the original end points. Then throw away the 2004 data and you have an exact match to the Emanuel data.

Re my 57, upon further examination, bender, I find that what he actually did was smooth the 1949-2003 data four times, keeping the end points untouched, and used the whole thing. It gives almost exactly the same result as the procedure I gave last time, but Landsea says that he held on to both end points, and that makes sense rather than discarding just one.

1. Keeping the end points is not justified. That is another permutational flavor to test in a sensitivity analysis.
2. What do we make of the significant difference between the Landsea & Emanuel PDI pre-1970? I guess that difference should be quantified, if nothing else.

3. “Keeping the end points untouched” is not justified either. That’s precisely what is preserving the HS shape, and what I’ve been complaining about. Each smoothing should lop a point off, thus eliminating the blade. THIS is what makes sense – because of the high likelihood that the blade is simply part of a background 3-7 year noise cycle, and not some sudden non-stationary warming trend. The crafty devil.

Also, bender, note that applying a 1-2-1 smoothing filter four times is the same as a 1,8,28,56,70,56,28,8,1 filter … over time, this will approach a Gaussian filter. The net effect of this smoothing on the autocorrelation of the adjusted PDI is shown below:
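That equivalence is easy to verify numerically: convolving the 1-2-1 kernel with itself three more times gives the combined weights of four passes. A quick Python check:

```python
import numpy as np

# Four successive passes of a [1, 2, 1] filter are equivalent to a single
# pass of the kernel obtained by convolving [1, 2, 1] with itself 4 times.
kernel = np.array([1, 2, 1])
combined = kernel.copy()
for _ in range(3):
    combined = np.convolve(combined, kernel)

# combined is the binomial row [1, 8, 28, 56, 70, 56, 28, 8, 1],
# whose weights sum to 4**4 = 256 (each pass carries a 1/4 normalization).
```

The binomial weights explain the remark about approaching a Gaussian: by the central limit theorem, repeated self-convolution of any such kernel tends toward a Gaussian shape.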

When you make arbitrary choices in data preprocessing that systematically bias the analysis in favor of your hypothesis, then the paper should not be accepted without major revision. When there are more than 4 such choices, then the analyst’s objectivity and the reviewers’ competence need to be questioned.

Interesting stuff, Bender and Willis! Nice work. Better than TV. But in Emanuel’s favor, is there any rational, objective reason that you can think of for retaining the two end points untouched and not subjecting them to the same smoothing process? Or to put it another way, how obvious is it that this is an error?

It is not an “error”. It is more a case of poor judgement. It is not fatal to the hypothesis test, but it will reduce its significance by some small measure. Whether that “small measure” is significant from an insurer’s perspective remains to be seen. But every dollar counts.

The bigger point is that you string four or six of these small questionable judgements together and all of a sudden you ARE influencing the hypothesis test. Maybe even doubling the slope of the response. Which is what this is all about: why do PDI responses to SST in Atlantic & Pacific appear to differ so much? Is it the result of an overfit model in each region?

#64 In my opinion, after smoothing twice he should get rid of the 5 end points on either side. The number of end points he needs to remove isn’t necessarily the length of the filter. Rather, it should be based on the time constant or autocorrelation of the filter.

It may be worth pointing out that you would consciously have to do this (include those end data points). If you copy and paste in Excel or use a filter() function in R, the end points of the data stream come out as missing values. You’d have to manually type the unprocessed data over the computed missing values. So it’s certainly not inadvertent; it’s deliberate.

Most people would probably agree with hardline #66. Leaving those points intact is much like MBH98 deciding to arbitrarily extend the Gaspé cedars back from AD1404 to AD1400 just so that they would not drop out of the PCA. They needed that leverage. They did what it took to get it.

My guess is that Emanuel did a double smoothing and correctly eliminated one data point off the end of each series twice. Thus his raw data are 1970-2005 and his smoothed data are 1972-2003, as it should be.

Re 59 & 60, bender, you are right, each implementation of the 1-2-1 filter should lop off one year at each end. This is one of the things that Landsea said in his “communications arising”, but unfortunately, Landsea didn’t realize that Emanuel had done the procedure four times, and thus should have lopped off four years rather than one.

SENSITIVITY ANALYSIS TO ADJUSTMENTS.

Using either the smoothed or unsmoothed data, neither PDI series has a significant trend. Nor does his smoothed SST.

Using either the adjusted or unadjusted PDI series (unsmoothed) with the HadSST data, there is a statistically significant relationship (p < 0.05), but it is small (Adj. PDI vs HadSST, r^2 = 0.23; Orig. PDI vs HadSST, r^2 = 0.17).

Using the 4x smoothed PDI with Emanuel’s smoothed SST, on the other hand, there is still a relationship, of course a stronger one, but the unadjusted data are no longer statistically significant (p=0.07), whereas the adjusted data are significant (p=0.014) … which may be why he did it.

It also may be why he smoothed the PDI four times, because the single smoothing vs the SST data is less significant (p=0.03).

I’m still curious about the pre-1970 difference in his data versus the HadSST data …

If you read the Landsea paper and Emanuel’s response, you find that Emanuel says the following:

Landsea correctly points out that in applying a smoothing to the time series, I neglected to drop the end-points of the series, so that these end-points remain unsmoothed. This has the effect of exaggerating the recent upswing in Atlantic activity. However, by chance it had little effect on the western Pacific time series, which entails about three times as many events. As it happens, including the 2004 and 2005 Atlantic storms and correctly dropping the end-points restores much of the recent upswing evident in my original Fig. 1 and leaves the western Pacific series, correctly truncated to 2003, virtually unchanged. Moreover, this error has comparatively little effect on the high correlation between PDI and SST that I reported.

Emanuel himself seems not to have noticed that he has applied the filter four times. But it is clear that he incorrectly left in the endpoints.

Emanuel claims that leaving off the end points makes no difference to his hypothesis test. However the point is that:
1. his claims of a doubling in PDI are exaggerated
2. his attribution to SST is exaggerated (biased regression slopes)
3. his uncertainty is underestimated
4. he has not recognized or accounted for the variable 0.5%,5%,30% effect first mentioned by Willis a month ago – and this may well be the result of regional overfitting.

bender/willis – can one/both of you summarize this in one post? It looks pretty nice, but I haven’t been able to keep track.

bender – you mentioned the Slutsky effect – is this in play with these multiple smoothings?

There’s an interesting article about the Slutsky effect in a climate context by Klemeš, in which he notes that many natural geophysical systems perform smoothings, especially in hydrology (e.g. lakes, rivers).

Re #76
Can summarize tomorrow, if you like. [But we really need to know what’s going on with the pre-1970 PDI data. Landsea’s U does not equal Emanuel’s HS. Moot point if your analysis is 1970+, but we want to know what happens when you pick a different breakpoint, 1960, 1950, etc.] Your call.

The Slutzky effect was described here. Slutzky was concerned about smoothing methods that enhance the strength of a cycle where none formerly existed. It was Yule who was concerned about nonsense-correlations between cyclic processes. Put them together and you have the double concern expressed here.

In this case, however, it may well be that Emanuel’s method is enhancing a causal relationship between SST and PDI. What Willis & I show is that part of that significance is due to ultra-low frequency coherence (AGW trend?), but part of it appears to be due to 3-7y (ENSO-scale) coherence. Emanuel’s method exaggerates the latter, but he takes it as evidence of the former.

As you shift the analysis breakpoint from 1970 to 1960 to 1950 the r^2 drops from 0.73 to 0.40 to 0.33. The PDI/SST regression slope drops from 15.5±1.6 to 11.5±2.1 to 10.8±2.1. (The ± is the S.E.E. and as an indicator of uncertainty should always be reported with regression parameters.) p-levels do not change, but are, of course, highly exaggerated to begin with, as this analysis is for the 2x smoothed data.
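For reference, the slope and its standard error (the ± quoted above) come straight out of ordinary least squares; here is a minimal Python sketch:

```python
import numpy as np

def slope_with_se(x, y):
    """OLS regression slope and its standard error."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)       # degree-1 fit: slope first
    resid = y - (intercept + slope * x)
    s2 = np.sum(resid**2) / (n - 2)              # residual variance
    se = float(np.sqrt(s2 / np.sum((x - x.mean())**2)))
    return float(slope), se
```

Note that this standard error assumes independent residuals; as discussed above, it is itself understated when the residuals are autocorrelated, so an effective-N correction is still needed on top of it.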

Can summarize tomorrow, if you like. [But we really need to know what’s going on with the pre-1970 PDI data. Landsea’s U does not equal Emanuel’s HS. Moot point if your analysis is 1970+, but we want to know what happens when you pick a different breakpoint, 1960, 1950, etc.] Your call.

Landsea’s PDI data smoothed 4x is exactly the same as Emanuel’s. Take Landsea’s original 1949-2003 data. Smooth it with the 1-2-1 filter four times, keeping the endpoints (1949 and 2003) unchanged each time. Here’s what I get from that process …

Seems like the Emanuel PDI data to me … I’m working now on the SST data, should have it done by tomorrow.

PS – by “Landsea’s original data” in #80, I meant the data that is lower in the pre-1970 time frame. Above I called it the “Adjusted Data”, because it was adjusted from a higher previous value. Sorry for the confusion.

The p value has been calculated using Bartlett’s formula for the effective N.

While these relationships are statistically significant, they are quite small.
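A common form of that correction (one version of Bartlett’s approximation; the thread does not specify exactly which variant was used) adjusts N by the lag-1 autocorrelations of the two series:

```python
import numpy as np

def lag1(x):
    """Lag-1 autocorrelation coefficient."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

def effective_n(x, y):
    """Bartlett-style effective sample size for the correlation of two
    autocorrelated series: N_eff = N * (1 - r1x*r1y) / (1 + r1x*r1y).
    The significance test then uses N_eff - 2 degrees of freedom."""
    r1x, r1y = lag1(x), lag1(y)
    return len(x) * (1 - r1x * r1y) / (1 + r1x * r1y)
```

With the heavy smoothing discussed above pushing both lag-1 autocorrelations toward 1, N_eff collapses, which is why a nominally significant correlation can fail the corrected test.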

3) The method of smoothing (pinning the end points and repeatedly using a 1-2-1 smoothing filter) distorts the results. By pinning the endpoints, the start and finish of the curve are held in place, and the smoothed curve is adjusted to meet them. Because the start and end points are low and high respectively for both the PDI and the SST, this converts a “U” shaped curve into more of a hockeystick shape, by pinning the start low and the end high …

1) While there is a significant trend in the HadISST in the area from 1949-2003, there is no significant trend 1931-2003.

2) While there is a small relationship between the September HadISST sea temperature and the original PDI (r^2 = 0.21, p = 0.01), the relationship drops to r^2 = 0.08, p=0.12 when we use the August to October HadISST sea temperature … looks like my original suspicions of cherry picking were correct.

* the all-event PDI chart that Emanuel e-mailed to Willis is noticeably different from 1995 on, when compared to his original Figure 1. Any idea if that is due to Emanuel changing his smoothing technique, or due to a change in his database?

* the inclusion of 6N to 9N in the SST box helps convert an oscillating SST pattern (1930-2003) into more of a hockey stick.

Because the start and end points are low and high respectively for both the PDI and the SST, this converts a “U” shaped curve into more of a hockeystick shape, by pinning the start low and the end high

Very sharp observation. I was just going to investigate that possibility. This is exactly where my U-shaped smoothing was deviating from his HS.

I dislike interrupting the fine detective work that Willis E, bender and David S are doing in disassembling and understanding the work that Emanuel has published, but I did want to say how great it is to follow the process and what has been accomplished. What a change it has been from the tutorial on standard/sampling error with the recalcitrant students.

In a former life I was part of a group that on occasion would take apart technical papers looking for findings that could be applied to an electronics manufacturing process. Others in the group were often much more technically orientated and knowledgeable than I and that made the process a more enjoyable learning experience for me.

An aspect that seemed constant with this process was that the science and technical parts were much more readily understood than the motivations of the authors of the papers. We often would contact authors and put questions to them directly when they were not employed by a competing organization or meet them at conferences and discover that we had the science right but the personal motivations and, therefore, sometimes the stated conclusions or limitations, wrong. I always thought a few beers and an engineer’s/scientist’s ego were a deadly combination for extracting background information.

* the all-event PDI chart that Emanuel e-mailed to Willis is noticeably different from 1995 on, when compared to his original Figure 1. Any idea if that is due to Emanuel changing his smoothing technique, or due to a change in his database?

Probably a change in the smoothing, as he had been notified by Landsea of the problem.

The fragility of his inferred trend (exaggerated by the pinning effect of not deleting smoothed endpoints) will probably be exposed once the relatively calm 2006 data are all in. Nothing like an honest-to-goodness out-of-sample validation test.