Lewandowsky and Oreskes Are Co-Authors of a Paper about ENSO, Climate Models and Sea Surface Temperature Trends (Go Figure!)

UPDATE 2: Animation 1 from this post is happily displaying the differences between the “Best” models and observations in the first comment at a well-known alarmist blog. Please see update 2 at the end of this post.
# # # #
UPDATE: Please see the update at the end of the post.
# # #

The new paper Risbey et al. (2014) will likely be very controversial based solely on the two co-authors identified in the title above (and shown in the photos to the right). As a result, I suspect it will garner a lot of attention…a lot of attention. This post is not about those two controversial authors, though their contributions to the paper are discussed. This post is about the numerous curiosities in the paper. For those new to discussions of global warming, I’ve tried to make this post as non-technical as possible, but these are comments on a scientific paper.

OVERVIEW

The Risbey et al. (2014) Well-estimated global surface warming in climate projections selected for ENSO phase is yet another paper trying to blame the recent dominance of La Niña events for the slowdown in global surface temperature warming, the hiatus. This one, however, states that ENSO contributes to the warming when El Niño events dominate. That occurred from the mid-1970s to the late-1990s. Risbey et al. (2014) also has a number of curiosities that make it stand out from the rest. One of those curiosities is that they claim that 4 specially selected climate models (which they failed to identify) can reproduce the spatial patterns of warming and cooling in the Pacific (and the rest of the ocean basins) during the hiatus period, while the maps they presented of observed versus modeled trends contradict the claims.

IMPORTANT INITIAL NOTE

I’ve read and reread Risbey et al. (2014) a number of times and I can’t find where they identify the “best” 4 and “worst” 4 climate models presented in their Figure 5. I asked Anthony Watts to provide a second set of eyes, and he was also unable to find where they list the models selected for that illustration.

Risbey et al. (2014) identify 18 models, but not the “best” and “worst” of those 18 they used in their Figure 5. Please let me know if I’ve somehow overlooked them. I’ll then strike any related text in this post.

Further to this topic, Anthony Watts sent emails to two of the authors on Friday, July 18, 2014, asking if the models selected for Figure 5 had been named somewhere. Refer to Anthony’s post A courtesy note ahead of publication for Risbey et al. 2014. Anthony has not received replies. While there are numerous other 15-year periods presented in Risbey et al (2014) along with numerous other “best” and “worst” models, our questions pertained solely to Figure 5 and the period of 1998-2012, so it should have been relatively easy to answer the question…and one would have thought the models would have been identified in the Supplementary Information for the paper, but there is no Supplementary Information.

Because Risbey et al. (2014) have not identified the models they’ve selected as “best” and “worst”, their work cannot be verified.

INTRODUCTION

The Risbey et al. (2014) paper Well-estimated global surface warming in climate projections selected for ENSO phase was just published online. Risbey et al. (2014) are claiming that if they cherry-pick a few climate models from the CMIP5 archive (used by the IPCC for their 5th Assessment Report)—that is, if they select specific climate models that best simulate a dominance of La Niña events during the global warming hiatus period of 1998 to 2012—then those models provide a good estimate of warming trends (or lack thereof) and those models also properly simulate the sea surface temperature patterns in the Pacific, and elsewhere.

Those are very odd claims. The spatial patterns of warming and cooling in the Pacific are dictated primarily by ENSO processes, and climate models still can’t simulate the most basic of ENSO processes. Even if a few of the models created the warming and cooling spatial patterns by some freak occurrence, the models still do not (cannot) properly simulate ENSO processes. In that respect, the findings of Risbey et al. (2014) are pointless.

Additionally, their claims that the very-small, cherry-picked subset of climate models provides good estimates of the spatial patterns of warming and cooling in the Pacific for the period of 1998-2012 are not supported by the data and model outputs they presented, so Risbey et al. (2014) failed to deliver.

There are a number of other curiosities, too.

ABSTRACT

The Risbey et al. (2014) abstract reads (my boldface):

The question of how climate model projections have tracked the actual evolution of global mean surface air temperature is important in establishing the credibility of their projections. Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations. We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

Curiously, in their abstract, Risbey et al. (2014) note a major flaw with the climate models used by the IPCC for their 5th Assessment Report—that they are “generally not in phase with observations”—but they don’t accept that as a flaw. If your stock broker’s models were out of phase with observations, would you continue to invest with that broker based on their out-of-phase models or would you look for another broker whose models were in-phase with observations? Of course, you’d look elsewhere.

Unfortunately, we don’t have any other climate “broker” models to choose from. There are no climate models that can simulate naturally occurring coupled ocean-atmosphere processes that can contribute to global warming and that can stop global warming…or, obviously, simulate those processes in-phase with the real world. Yet governments around the globe continue to invest billions annually in out-of-phase models.

Risbey et al. (2014), like numerous other papers, are basically attempting to blame a shift in ENSO dominance (from a dominance of El Niño events to a dominance of La Niña events) for the recent slowdown in the warming of surface temperatures. Unlike others, they acknowledge that ENSO would also have contributed to the warming from the mid-1970s to the late 1990s, a period when El Niños dominated.

CHANCE VERSUS SKILL

The fifth paragraph of Risbey et al. (2014) begins (my boldface):

In the CMIP5 models run using historical forcing there is no way to ensure that the model has the same sequence of ENSO events as the real world. This will occur only by chance and only for limited periods, because natural variability in the models is not constrained to occur in the same sequence as the real world.

Risbey et al. (2014) admitted that the models they selected for having the proper sequence of ENSO events did so by chance, not out of skill, which undermines the intent of their paper. If the focus of the paper had been the need for climate models to be in phase with observations, they would have achieved their goal. But that wasn’t the aim of the paper. The concluding sentence of the abstract claims that “…climate models have provided good estimates of 15-year trends, including for recent periods…” when, in fact, it was by pure chance that the cherry-picked models aligned with the real world. No skill involved. If models had any skill, the outputs of the models would be in phase with observations.

ENSO CONTRIBUTES TO WARMING

The fifth paragraph of the paper continues:

For any 15-year period the rate of warming in the real world may accelerate or decelerate depending on the phase of ENSO predominant over the period.

Risbey et al. (2014) admitted with that sentence that, if a dominance of La Niña events can cause surface warming to slow (“decelerate”), then a dominance of El Niño events can provide a naturally occurring and naturally fueled contribution to global warming (“accelerate” it), above and beyond the forced component of the models. Unfortunately, climate models were tuned to a period when El Niño events dominated (the mid-1970s to the late 1990s), yet climate modelers assumed all of the warming during that period was caused by manmade greenhouse gases. (See the discussion of Figure 9.5 from the IPCC’s 4th Assessment Report here and Chapter 9 from AR4 here.) As a result, the models have grossly overestimated the forced component of the warming and, in turn, climate sensitivity.

Some might believe that Risbey et al (2014) have thrown the IPCC under the bus, so to speak. But I don’t believe so. We’ll have to see how the mainstream media responds to the paper. I don’t think the media will even catch the significance of ENSO contributions to warming since science reporters have not been very forthcoming about the failings of climate science.

Risbey et al (2014) have also overlooked the contribution of the Atlantic Multidecadal Oscillation during the period to which climate models were tuned. From the mid-1970s to the early-2000s, the additional naturally occurring warming of the sea surface temperatures of the North Atlantic contributed considerably to the warming of sea surface temperatures of the Northern Hemisphere (and in turn to land surface air temperatures). This also adds to the overestimation of the forced component of the warming (and climate sensitivity) during the recent warming period. Sea surface temperatures in the North Atlantic have also been flat for the past decade, suggesting that the Atlantic Multidecadal Oscillation has ended its contribution to global warming, and, because by definition the Atlantic Multidecadal Oscillation lasts for multiple decades, the sea surface temperatures of the North Atlantic may continue to remain flat or even cool for another couple of decades. (See the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) webpage and the posts An Introduction To ENSO, AMO, and PDO — Part 2 and Multidecadal Variations and Sea Surface Temperature Reconstructions.)

For more than 5 years, I have gone to great lengths to illustrate and explain how El Niño and La Niña processes contributed to the warming of sea surface temperatures and the oceans to depth. If this topic is new to you, see my free illustrated essay “The Manmade Global Warming Challenge” (42mb). Recently Kevin Trenberth acknowledged that strong El Niño events cause upward steps in global surface temperatures. Refer to the post The 2014/15 El Niño – Part 9 – Kevin Trenberth is Looking Forward to Another “Big Jump”. And now the authors of Risbey et al. (2014)—including the two activists Stephan Lewandowsky and Naomi Oreskes—are admitting that ENSO can contribute to global warming. How many more years will pass before mainstream media and politicians acknowledge that nature can and does provide a major contribution to global warming? Or should that be how many more decades will pass?

RISBEY ET AL. (2014) – AN EXERCISE IN FUTILITY

IF (big if) the climate models in the CMIP5 archive were capable of simulating the coupled ocean-atmosphere processes associated with El Niño and La Niña events (collectively called ENSO processes hereafter), Risbey et al (2014) might have value…if the intent of their paper was to point out that models need to be in-phase with nature. Then, even though all of the models do not properly simulate the timing, strength or duration of ENSO events, Risbey et al (2014) could have selected, as they have done, specific models that best simulated ENSO during the hiatus period.

However, climate models cannot properly simulate ENSO processes, even the most basic of processes like Bjerknes feedback. (Bjerknes feedback, basically, is the positive feedback between the trade wind strength and sea surface temperature gradient from east to west in the equatorial Pacific.) These model failings have been known for years. See Guilyardi et al. (2009) and Bellenger et al. (2012). It is very difficult to find a portion—any portion—of ENSO processes that climate models simulate properly. Therefore, the fact that Risbey et al (2014) selected models that better simulate the ENSO trends for the period of 1998 to 2012 is pointless, because the models are not correctly simulating ENSO processes. The models are creating variations in the sea surface temperatures of the tropical Pacific but that “noise” has no relationship to El Niño and La Niña processes as they exist in nature.

Oddly, Risbey et al (2014) acknowledge that the models do not properly simulate ENSO processes. The start of the last paragraph under the heading of “Phase-selected projections” reads [Reference 28 is Guilyardi et al. (2009)]:

This method of phase aligning to select appropriate model trend estimates will not be perfect as the models contain errors in the forcing histories (ref. 27) and errors in the simulation of ENSO (refs 25, 28) and other processes.

The climate model failings with respect to how they simulate ENSO aren’t minor errors. They are catastrophic model failings, and the IPCC has yet to come to terms with the importance of those flaws. On the other hand, the authors of Guilyardi et al. (2009) were quite clear in their understanding of those climate model failings, when they wrote:

Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power et al. 2006).

ENSO is one of the primary processes through which heat is distributed from the tropics to the poles. Those processes are chaotic and they vary over annual, decadal and multidecadal time periods.

- ENSO processes release more heat than “normal” from the tropical Pacific to the atmosphere, and
- ENSO processes redistribute more warm water than “normal” from the tropical Pacific to adjoining ocean basins, and
- through teleconnections, ENSO processes cause less evaporative cooling from, and more sunlight than “normal” to reach into, remote ocean basins, both of which result in ocean warming at the surface and to depth.

As a result, during multidecadal periods when El Niño events dominate, like the period from the mid-1970s to the late 1990s, global surface temperatures and ocean heat content rise. In other words, global warming occurs. There is no way global warming cannot occur during a period when El Niño events dominate. But projections of future global warming and climate change based on climate models don’t account for that naturally caused warming because the models cannot simulate ENSO processes…or teleconnections.

Now that ENSO has switched modes so that La Niña events are dominant, the climate-science community is scrambling to explain the loss of naturally caused warming, which they’ve been blaming on manmade greenhouse gases all along.

RISBEY ET AL. (2014) FAIL TO DELIVER

Risbey et al (2014) selected 18 climate models from the 38 contained in the CMIP5 archive for the majority of their study. Under the heading of “Methods”, they listed all of the models in the CMIP5 archive and boldfaced the models they selected:

Those 18 were selected because model outputs of sea surface temperatures for the NINO3.4 region were available from those models:

A subset of 18 of the 38 CMIP5 models were available to us with SST data to compute Niño3.4 (ref. 24) indices.

For their evaluation of warming and cooling trends, spatially, during the hiatus period of 1998 to 2012, Risbey et al (2014) whittled the number down to 4 models that “best” simulated the trends and 4 models that simulated the trends “worst”. They define how those “best” and “worst” models were selected:

To select this subset of models for any 15-year period, we calculate the 15-year trend in Niño3.4 index (ref. 24) in observations and in CMIP5 models and select only those models with a Niño3.4 trend within a tolerance window of +/- 0.01 K y-1 of the observed Niño3.4 trend. This approach ensures that we select only models with a phasing of ENSO regime and ocean heat uptake largely in line with observations. In this case we select the subset of models in phase with observations from a reduced set of 18 CMIP5 models where Niño3.4 data were available (ref. 25) and for the period since 1950 when Niño3.4 indices are more reliable in observations.
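The selection rule quoted above is simple enough to sketch in a few lines of code. The following is a minimal illustration, assuming annual-mean Niño3.4 series; the function and variable names are mine, not the authors’, and the real analysis would of course work from the actual CMIP5 and observational data:

```python
import numpy as np

def trend_per_year(series, years):
    """Least-squares linear trend (K per year) of an annual series."""
    slope, _intercept = np.polyfit(years, series, 1)
    return slope

def select_in_phase_models(obs_nino34, model_nino34, years, tol=0.01):
    """Return the names of models whose 15-year Nino3.4 trend falls
    within +/- tol (K/yr) of the observed trend, per the stated rule."""
    obs_trend = trend_per_year(obs_nino34, years)
    selected = []
    for name, series in model_nino34.items():
        if abs(trend_per_year(series, years) - obs_trend) <= tol:
            selected.append(name)
    return selected

# Synthetic example for the 1998-2012 window: observed Nino3.4 cools
# at -0.02 K/yr; model "A" is within tolerance, model "B" is not.
years = np.arange(1998, 2013)
obs = -0.02 * (years - 1998)
models = {"A": -0.025 * (years - 1998), "B": 0.01 * (years - 1998)}
print(select_in_phase_models(obs, models, years))
```

Note how tight the ±0.01 K/yr window is relative to typical ENSO trend spreads: for any given 15-year period, only a handful of runs will pass, which is exactly why the number of selected models changes from window to window.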

The opening phrase of “To select this subset of models for any 15-year period…” indicates the “best” and “worst” models varied depending on the 15-year time period. Risbey et al. (2014) presented the period of 1998 to 2012 for their Figure 5. But in other discussions, like those of their Figures 4 and 6, the number of “best” and “worst” models changed, as did the models themselves. The caption for their Figure 4 includes:

The blue dots (a,c) show the 15-year average trends from only those CMIP5 runs in each 15-year period where the model Niño3.4 trend is close to the observed Niño3.4 trend. The size of the blue dot is proportional to the number of models selected. If fewer than two models are selected in a period, they are not included in the plot. The blue envelope is a 2.5–97.5 percentile loess-smoothed fit to the model 15-year trends weighted by the number of models at each point. b and d contain the same observed trends in red for GISS and Cowtan and Way respectively. The grey dots show the average 15-year trends for only the models with the worst correspondence to the observed Niño3.4 trend. The grey envelope in b and d is defined as for the blue envelope in a and c. Results for HadCRUT4 (not shown) are broadly similar to those of Cowtan and Way.

That is, the “best” models and the number of them changes for each 15-year period. In other words, they’ve used a sort of running cherry-pick for the models in their Figure 4. A novel approach. Somehow, though, this gets highlighted in the abstract as “These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.” But they failed to highlight the real findings of their paper: that climate models must be in-phase with nature if the models are to have value.

As noted earlier, I’ve been through the paper a number of times, and I cannot find where they listed which models were selected as “best” and “worst”. They illustrated those “best” and “worst” modeled sea surface temperature trends in cells a and b of their Figure 5. See the full Figure 5 from Risbey et al (2014) here. They also illustrated in cell c the observed sea surface temperature warming and cooling trends during the hiatus period of 1998 to 2012. About their Figure 5, they write, where the “in phase” models are the “best” models and “least in phase” models are the “worst” models (my boldface):

The composite pattern of spatial 15-year trends in the selection of models in/out of phase with ENSO regime is shown for the 1998-2012 period in Fig. 5. The models in phase with ENSO (Fig. 5a) exhibit a PDO-like pattern of cooling in the eastern Pacific, whereas the models least in phase (Fig. 5b) show more uniform El Niño-like warming in the Pacific. The set of models in phase with ENSO produce a spatial trend pattern broadly consistent with observations (Fig. 5c) over the period. This result is in contrast to the full CMIP5 multi-model ensemble spatial trends, which exhibit broad warming (ref. 26) and cannot reveal the PDO-like structure of the in-phase model trend.

Let’s rephrase that. According to Risbey et al (2014), the “best” 4 of their cherry-picked (unidentified) CMIP5 climate models simulate a PDO-like pattern during the hiatus period and the trends of those models are also “broadly consistent” with the observed spatial patterns throughout the rest of the global oceans. If you’re wondering how I came to the conclusion that Risbey et al (2014) were discussing the global oceans too, refer to the second boldfaced sentence in the above quote. Figure 5c presents the trends for all of the global oceans, not just the extratropical North Pacific or the Pacific as a whole.

We’re going to concentrate on the observations and the “best” models in the rest of this section. There’s no reason to look at the models that are lousier than the “best” models, because the “best” models are actually pretty bad.

In fact, contrary to the claims made, there are no similarities between the spatial patterns in the maps of observed and modeled trends that were presented by Risbey et al (2014), no similarities whatsoever. See Animation 1, which compares trend maps for the observations and “best” models, from their Figure 5, for the period of 1998 to 2012.

Animation 1

Again, those are the trends for the observations and the models Risbey et al (2014) selected as being “best”. I will admit “broadly consistent” is a vague phrase, but the spatial patterns of the model trends have no similarities with observations, not even the slightest resemblance, so “broadly consistent” does not seem to be an accurate representation of the capabilities of the “best” models.

A further breakdown follows. I normally wouldn’t go into this much detail, but the abstract does close with “spatial trend patterns.” So I suspect that science reporters for newspapers, magazines and blogs are going to be yakking about how well the selected “best” models simulate the spatial patterns of sea surface temperature warming and cooling trends during the hiatus.

My Figure 1 is cell c from their Figure 5. It presents the observed sea surface trends during the hiatus period of 1998-2012. I’ve highlighted 2 regions. At the top, I’ve highlighted the extratropical North Pacific. The Pacific Decadal Oscillation index is derived from the sea surface temperature anomalies in that region, and the Pacific Decadal Oscillation data refers to that region only. See the JISAO PDO webpage here. JISAO writes (my boldface):

Updated standardized values for the PDO index, derived as the leading PC of monthly SST anomalies in the North Pacific Ocean, poleward of 20N.

The spatial pattern of the observed trends in the extratropical North Pacific agrees with our understanding of the “cool phase” of the Pacific Decadal Oscillation (PDO). Sea surface temperatures of the real world in the extratropical North Pacific cooled along the west coast of North America from 1998 to 2012. That cooling was countered by the ENSO-related warming of the sea surface temperatures in the western and central extratropical North Pacific, with the greatest warming taking place in the region east of Japan called the Kuroshio-Oyashio Extension. (See the post The ENSO-Related Variations In Kuroshio-Oyashio Extension (KOE) SST Anomalies And Their Impact On Northern Hemisphere Temperatures.) Because the Kuroshio-Oyashio Extension dominates the “PDO pattern” (even though it’s of the opposite sign; i.e. it shows warming while the east shows cooling during a “cool” PDO mode), the Kuroshio-Oyashio Extension is where readers should focus their attention when there is a discussion of the PDO pattern.
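JISAO’s definition quoted above (the PDO index as the leading principal component of North Pacific SST anomalies) can be sketched numerically. The following is a minimal illustration on synthetic data; it deliberately omits details the real index uses, such as latitude (area) weighting, removal of the global-mean SST, and the poleward-of-20N domain mask, and the function name is mine:

```python
import numpy as np

def leading_pc(sst_anom):
    """Standardized leading principal component of a (time x gridpoint)
    SST-anomaly matrix. Sign convention is arbitrary, as with any PC."""
    # Center each grid point in time (work with anomalies)
    X = sst_anom - sst_anom.mean(axis=0)
    # SVD of the anomaly matrix: u[:, 0] * s[0] is the leading PC time series
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    pc1 = u[:, 0] * s[0]
    # Standardize, as JISAO does for the published index values
    return (pc1 - pc1.mean()) / pc1.std()

# Synthetic check: one dominant space-time pattern plus weak noise;
# the leading PC should recover the imposed time signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 120)
signal = np.sin(t)                      # imposed "PDO-like" time series
pattern = rng.random(30)                # fixed spatial pattern
X = np.outer(signal, pattern) + 0.01 * rng.standard_normal((120, 30))
pc1 = leading_pc(X)
```

The point of the standardization and sign ambiguity matters for reading PDO maps: a “cool” PDO phase shows cooling along the North American west coast with warming in the Kuroshio-Oyashio Extension, which is exactly the observed 1998-2012 structure discussed above.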

Figure 1

The second “region” highlighted in Figure 1 is the Southern Hemisphere. According to the trend map presented by Risbey et al (2014), real-world sea surface temperatures throughout the Southern Hemisphere (based on HADISST data) cooled between 1998 and 2012. That’s a lot of cool blue trend in the Southern Hemisphere.

I’ve highlighted the same two regions in Figure 2, which presents the composite of the sea surface temperature trends from the 4 (unidentified) “best” climate models. A “cool” PDO pattern does not exist in the extratropical North Pacific of the virtual world of the climate models, and the models show an overall warming of the sea surfaces in the South Pacific and the entire Southern Hemisphere, where the observations showed cooling. If you’re having trouble seeing the difference, refer again to Animation 1.

Figure 2

The models performed no better in the North Atlantic. The virtual-reality world of the models showed cooling in the northern portion of the tropical North Atlantic and they showed cooling south of Greenland, which are places where warming was observed in the real world from 1998 to 2012. See Figures 3 and 4. And if need be, refer to Animation 1 once again.

Figure 3

# # #

Figure 4

The tropical Pacific is critical to Risbey et al (2014), because El Niño and La Niña events take place there. Yet the models that were selected and presented as “best” by Risbey et al (2014) cannot simulate the observed sea surface temperature trends in the real-world tropical Pacific either. Refer to Figures 5 and 6…and Animation 1 again if you need.

Figure 5

# # #

Figure 6

One last ocean basin to compare: the Arctic Ocean. The real-world observations, Figure 7, show a significant warming of the surface of the Arctic Ocean, and that warming is associated with the sea ice loss. Of course, the “best” models, shown in Figure 8, do not indicate a similar warming in their number-crunched Arctic Oceans. The differences between the observations and the “best” models stand out like a handful of sore thumbs in Animation 1.

Figure 7

# # #

Figure 8

Because the CMIP5 climate models cannot simulate that warming in the Arctic and the loss of sea ice there, Stroeve et al. (2012) “Trends in Arctic sea ice extent from CMIP5, CMIP3 and Observations” [paywalled] noted that the model failures there were an indication the loss of sea ice occurred naturally, the result of “internal climate variability”. The abstract of Stroeve et al. (2012) reads (my boldface):

The rapid retreat and thinning of the Arctic sea ice cover over the past several decades is one of the most striking manifestations of global climate change. Previous research revealed that the observed downward trend in September ice extent exceeded simulated trends from most models participating in the World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (CMIP3). We show here that as a group, simulated trends from the models contributing to CMIP5 are more consistent with observations over the satellite era (1979–2011). Trends from most ensemble members and models nevertheless remain smaller than the observed value. Pointing to strong impacts of internal climate variability, 16% of the ensemble member trends over the satellite era are statistically indistinguishable from zero. Results from the CMIP5 models do not appear to have appreciably reduced uncertainty as to when a seasonally ice-free Arctic Ocean will be realized.

WHY CLIMATE MODELS NEED TO SIMULATE SEA SURFACE TEMPERATURE PATTERNS

If you’re new to discussions of global warming and climate change, you may be wondering why climate models must be able to simulate the observed spatial patterns of the warming and cooling of ocean surfaces. The spatial patterns of sea surface temperatures throughout the global oceans are one of the primary factors that determine where land surfaces warm and cool and where precipitation occurs. If climate models should happen to create the proper spatial patterns of precipitation and of warming and cooling on land, without properly simulating sea surface temperature spatial patterns, then the models’ success on land is by chance, not skill.

Further, because climate models can’t simulate where, when, why and how the ocean surfaces warm and cool around the globe, they can’t properly simulate land surface temperatures or precipitation. And if they can’t simulate land surface temperatures or precipitation, what value do they have? Quick answer: No value. Climate models are not yet fit for their intended purposes.

Keep in mind, in our discussion of the Risbey et al. Figure 5, we’ve been looking at the models (about 10% of the models in the CMIP5 archive) that have been characterized as “best”, and those “best” models performed horrendously.

INTERESTING CHARACTERIZATIONS OF FORECASTS AND PROJECTIONS

In the second paragraph of the text of Risbey et al. (2014), they write (my boldface):

A weather forecast attempts to account for the growth of particular synoptic eddies and is said to have lost skill when model eddies no longer correspond one to one with those in the real world. Similarly, a climate forecast of seasonal or decadal climate attempts to account for the growth of disturbances on the timescale of those forecasts. This means that the model must be initialized to the current state of the coupled ocean-atmosphere system and the perturbations in the model ensemble must track the growth of El Niño/Southern Oscillation2,3 (ENSO) and other subsurface disturbances4 driving decadal variation. Once the coupled climate model no longer keeps track of the current phase of modes such as ENSO, it has lost forecast skill for seasonal to decadal timescales. The model can still simulate the statistical properties of climate features from this point, but that then becomes a projection, not a forecast.

If the models have lost their “forecast skill for seasonal to decadal timescales”, they have also lost their forecast skill for multidecadal and century-long timescales.

The fact that climate models were not initialized to match any state of the past climate came to light back in 2007 with Kevin Trenberth’s blog post Predictions of Climate at Nature.com’s ClimateFeedback. I can still recall the early comments generated by Trenberth’s blog post. For examples, see Roger Pielke Sr’s blog posts here and here and the comments on the threads at ClimateAudit here and here. That blog post from Trenberth is still being referenced in blog posts (this one included). In order for climate models to have any value, papers like Risbey et al (2014) are now saying that climate models “must be initialized to the current state of the coupled ocean-atmosphere system and the perturbations in the model ensemble must track the growth of El Niño/Southern Oscillation.” But skeptics have been saying this for years.

Let’s rephrase the above boldfaced quote from Risbey et al (2014). It does a good job of explaining the differences between “climate forecasts” (which many persons believe they’ve gotten so far from the climate science community) and the climate projections (which we’re presently getting from the climate science community). Because climate models cannot simulate naturally occurring coupled ocean-atmosphere processes like ENSO and the Atlantic Multidecadal Oscillation, and because the models are not “in-phase” with the real world, climate models are not providing forecasts of future climate…they are only providing out-of-phase projections of a future world that have no basis in the real world.

Further, what Risbey et al. (2014) failed to acknowledge is that the current hiatus could very well last for another few decades, and then, after another multidecadal period of warming, we might expect yet another multidecadal warming hiatus—cycling back and forth between warming and hiatus on into the future. Of course, the IPCC does not factor those multidecadal hiatus periods into their projections of future climate. We discussed and illustrated this in the post Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?

Why don’t climate models simulate natural variability in-phase with multidecadal variations exhibited in observations? There are numerous reasons: First, climate models cannot simulate the naturally occurring processes that cause multidecadal variations in global surface temperatures. Second, the models are not initialized in an effort to try to match the multidecadal variations in global surface temperatures. It would be a fool’s errand anyway, because the models can’t simulate the basic ocean-atmosphere processes that cause those multidecadal variations. Third, if climate models were capable of simulating multidecadal variations as they occurred in the real world—their timing, magnitude and duration—and if the models were allowed to produce those multidecadal variations on into the future, then the future in-phase forecasts of global warming (different from the out-of-phase projections that are currently provided) would be reduced significantly, possibly by half. (Always keep in mind that climate models were tuned to a multidecadal upswing in global surface temperatures—a period when the warming of global surface temperatures temporarily accelerated (the term used by Risbey et al.) due to naturally occurring ocean-atmosphere processes associated with ENSO and the Atlantic Multidecadal Oscillation.) Fourth, if the in-phase forecasts of global warming were half of the earlier out-of-phase projections, the assumed threats of future global warming-related catastrophes would disappear…and so would funding for climate model-based research. The climate science community would be cutting their own throats if they were to produce in-phase forecasts of future global warming, and they are not likely to do that anytime soon.

A GREAT ILLUSTRATION OF HOW POORLY CLIMATE MODELS SIMULATE THE PAST

My Figure 9 is Figure 2 from Risbey et al. (2014). I don’t think the authors intended this, but that illustration clearly shows how poorly climate models simulate global surface temperatures since the late 1800s. Keep in mind while viewing that graph that it shows 15-year trends (not temperature anomalies) and that the unit of the y-axis is deg K/decade.

Figure 9

Risbey et al. (2014) describe their Figure 2 as (my boldface):

To see how representative the two 15-year periods in Fig. 1 are of the models’ ability to simulate 15-year temperature trends we need to test many more 15-year periods. Using data from CMIP5 models and observations for the period 1880-2012, we have calculated sliding 15-year trends in observations and models over all 15-year periods in this interval (Fig. 2). The 2.5-97.5 percentile envelope of model 15-year trends (grey) envelops within it the observed trends for almost all 15-year periods for each of the observational data sets. There are several periods when the observed 15-year trend is in the warm tail of the model trend envelope (~1925, 1935, 1955), and several periods where it is in the cold tail of the model envelope (~1890, 1905, 1945, 1970, 2005). In other words, the recent ‘hiatus’ centred about 2005 (1998-2012) is not exceptional in context. One expects the observed trend estimates in Fig. 2 to bounce about within the model trend envelope in response to variations in the phase of processes governing ocean heat uptake rates, as they do.

While the recent hiatus may not be “exceptional” in that context, it was obviously not anticipated by the vast majority of the climate models. And if history repeats itself, and there’s no reason to believe it won’t, the slowdown in warming could very well last for another few decades.

I really enjoyed the opening clause of the last sentence: “One expects the observed trend estimates in Fig. 2 to bounce about within the model trend envelope…” Really? Apparently, climate scientists have very low expectations of their models.

In their Figure 2, the climate models clearly underestimated the early 20th Century warming, from about 1910 to the early 1940s. The models then clearly missed the cooling that took place in the 1940s, but then overestimated the cooling in the 1950s and ’60s…so much so that Risbey et al. (2014) decided to erase the full extent of the modeled cooling rates during that period by limiting the range of the y-axis in the graph. (This time climate scientists are hiding the decline in the models.) Then there’s the recent warming period. There’s one reason and one reason only why the models appear to perform well during the recent warming period, seeming to run along the mid-range of the model spread from about 1970 to the late 1990s: the models were tuned to that period. Now, since the late 1990s, the models are once again diverging from the data, because they are not in-phase with the real world.

Will surface temperatures repeat the “cycle” of warming and hiatus/cooling that exists in the data? There’s no reason to believe they will not. Do the climate models simulate any additional multidecadal variability in the future? See Figure 10. Apparently not.

Figure 10

My Figure 10 is similar to Figure 2 from Risbey et al. (2014). For the model simulations of global surface temperatures, I’ve presented the 15-year trends (centered) of the multi-model ensemble-member mean (not the spread) of the historic and RCP8.5 (worst case) forcings (which were also used by Risbey et al.). For the observations, I’ve included the 15-year trends of the GISS Land-Ocean Temperature Index data, which is one of the datasets presented by Risbey et al. (2014). The data and model outputs are available from the KNMI Climate Explorer. The GISS data are included under Monthly Observations and the model outputs are listed as “TAS” on the Monthly CMIP5 scenario runs webpage.
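For readers who want to reproduce this kind of plot themselves, the sliding-trend calculation described above can be sketched in a few lines of Python. This is a minimal illustration of the general method (centered 15-year least-squares trends, expressed per decade); the function and variable names are my own, not taken from Risbey et al. or the KNMI Climate Explorer:

```python
import numpy as np

def sliding_trends(years, temps, window=15):
    """Centered sliding linear trends over `window`-year spans.

    `years` and `temps` are 1-D arrays of annual values.
    Returns (center_years, trends), with trends in deg C/decade.
    """
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    half = window // 2
    centers, trends = [], []
    for i in range(len(years) - window + 1):
        x = years[i:i + window]
        y = temps[i:i + window]
        slope = np.polyfit(x, y, 1)[0]  # least-squares slope, deg C per year
        centers.append(x[half])          # center the trend on the window
        trends.append(slope * 10.0)      # express per decade
    return np.array(centers), np.array(trends)
```

As a sanity check, a synthetic series warming steadily at 0.01 deg C/year returns 0.1 deg C/decade for every window. The same routine applied to each model run, followed by percentiles across runs at each center year, would give the grey envelope in their Figure 2.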

It is quite easy to see two things: (1) the modelers did not expect the current hiatus, and (2) they do not anticipate any additional multidecadal variations in global surface temperatures.

One of the key points of Risbey et al. (2014) was their claim that the selected 4 “best” (unidentified) climate models could simulate the spatial patterns of the warming and cooling trends in sea surface temperatures during the hiatus period. We’ve clearly shown that their claims were unfounded.

It’s also quite obvious that Risbey et al. (2014) failed to present evidence that the “best” climate models could reproduce the spatial patterns of the warming and cooling rates in global sea surface temperatures during the warming period that preceded the hiatus. They presented histograms of the modeled and observed trends for the 15-year warming period (1984-1998) before the 15-year hiatus period in cell b of their Figure 1 (not shown in this post). So, obviously, that period was important to them. Yet they did not present how well or poorly the “best” models simulated the spatial trends in sea surface temperatures for the important 15-year period of 1984-1998. If the models had performed well, I suspect Risbey et al. (2014) would have been more than happy to present those modeled and observed spatial patterns.

My Figure 11 shows the observed warming and cooling rates in global sea surface temperatures from 1984 to 1998, using the HADISST dataset, which is the sea surface temperature dataset used by Risbey et al. (2014). There is a clear El Niño-related warming in the eastern tropical Pacific. The warming of the Pacific along the west coasts of the Americas also appears to be El Niño-related, a response to coastally trapped Kelvin waves from the strong El Niño events of 1986/87/88 and 1997/98. (See Figure 8 from Trenberth et al. (2002).) The warming of the North Pacific along the east coast of Asia is very similar to the initial warming there in response to the 1997/98 El Niño. (See the animation here which is Animation 6-1 from my ebook Who Turned on the Heat?) And the warming pattern in the tropical North Atlantic is similar to the lagged response of sea surface temperatures (through teleconnections) in response to El Niño events. (Refer again to Figure 8 from Trenberth et al. (2002), specifically the correlation maps with the +4-month lag.)

Figure 11

Climate models do not properly simulate ENSO processes or teleconnections, so it really should come as no surprise that Risbey et al. (2014) failed to provide an illustration that should have been considered vital to their paper.

The other factor obviously missing, as discussed in the next section, is the modeled increase in ocean heat uptake. Ocean heat uptake is mentioned numerous times throughout Risbey et al. (2014). It would have been in the best interest of Risbey et al. (2014) to show that the “best” models created the alleged increase in ocean heat uptake during the hiatus period. Oddly, they chose not to illustrate that important factor.

OCEAN HEAT UPTAKE

Risbey et al. (2014) used the term “ocean heat uptake” 11 times throughout their paper. The significance of “ocean heat uptake” to the climate science community is that, during periods when the warming of Earth’s surface stops or slows (as has happened recently), ocean heat uptake is (theoretically) supposed to increase. Yet Risbey et al. (2014) failed to illustrate ocean heat uptake with data or models even once. The term “ocean heat uptake” even appeared in one of the earlier quotes from the paper. Here’s that quote again (my boldface):

To see how representative the two 15-year periods in Fig. 1 are of the models’ ability to simulate 15-year temperature trends we need to test many more 15-year periods. Using data from CMIP5 models and observations for the period 1880-2012, we have calculated sliding 15-year trends in observations and models over all 15-year periods in this interval (Fig. 2). The 2.5-97.5 percentile envelope of model 15-year trends (grey) envelops within it the observed trends for almost all 15-year periods for each of the observational data sets. There are several periods when the observed 15-year trend is in the warm tail of the model trend envelope (~1925, 1935, 1955), and several periods where it is in the cold tail of the model envelope (~1890, 1905, 1945, 1970, 2005). In other words, the recent ‘hiatus’ centred about 2005 (1998-2012) is not exceptional in context. One expects the observed trend estimates in Fig. 2 to bounce about within the model trend envelope in response to variations in the phase of processes governing ocean heat uptake rates, as they do.

Risbey et al. (2014) are making a grand assumption with that statement. There are insufficient subsurface ocean temperature data for the depths of 0-2000 meters before the early 2000s upon which to base those claims. The subsurface temperatures of the global oceans were not sampled fully (or as best they can be sampled) to depths of 2000 meters before the ARGO era, and the ARGO floats were not deployed until the early 2000s, with near-to-complete coverage around 2003. Even the IPCC acknowledges in AR5 the lack of sampling of subsurface ocean temperatures before ARGO. See the post AMAZING: The IPCC May Have Provided Realistic Presentations of Ocean Heat Content Source Data.

Additionally, ARGO float-based data do not even support the assumption that ocean heat uptake increased in the Pacific during the hiatus period. That is, if the recent domination of La Niña events were, in fact, causing an increase in ocean heat uptake, we would expect to find an increase in the subsurface temperatures of the Pacific Ocean to depths of 2000 meters over the last 11 years. Why in the Pacific? Because El Niño and La Niña events take place there. Yet the NODC vertically averaged temperature data (which are adjusted for ARGO cool biases) from 2003 to 2013 show little warming in the Pacific Ocean…or in the North Atlantic for that matter. See Figure 12.

Figure 12

It sure doesn’t look like the dominance of La Niña events during the hiatus period has caused any ocean heat uptake in the Pacific over the past 11 years. Subsurface ocean warming occurred only in the South Atlantic and Indian Oceans. Now, consider that manmade greenhouse gases including carbon dioxide are said to be well mixed, meaning they are pretty well evenly distributed around the globe. It’s difficult to imagine how a well-mixed greenhouse gas like manmade carbon dioxide caused the South Atlantic and Indian Oceans to warm to depths of 2000 meters, while having no impact on the North Atlantic or the largest ocean basin on this planet, the Pacific.

REFERENCE NINO3.4 DATA

For those interested, as a reference for the discussion of Figure 5 from Risbey et al. (2014), my Figure 13 presents the monthly HADISST-based NINO3.4 region sea surface temperature anomalies for the period of January 1998 to December 2012, which are the dataset and time period used by Risbey et al. for their Figure 5. The NINO3.4 region data and model outputs were the bases for their model selection. The UKMO uses the base period of 1961-1990 for their HADISST data, so I used those base years for anomalies. The period-average temperature anomaly (not shown) is slightly negative, at -0.11 deg C, indicating a slight dominance of La Niña events then. The linear trend of the data is basically flat, at -0.006 deg C/decade.
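For readers who want to reproduce the anomaly calculation, subtracting each calendar month’s 1961-1990 mean from a monthly series can be sketched in Python. This is a generic illustration of base-period anomalies; the function and variable names are illustrative, not taken from any UKMO or KNMI tool:

```python
import numpy as np

def monthly_anomalies(years, months, sst, base=(1961, 1990)):
    """Anomalies of a monthly series relative to each calendar month's
    mean over the base period (default 1961-1990, as used for HADISST).

    `years`, `months` and `sst` are equal-length 1-D arrays,
    one entry per month of data.
    """
    years = np.asarray(years)
    months = np.asarray(months)
    sst = np.asarray(sst, dtype=float)
    anom = np.empty_like(sst)
    in_base = (years >= base[0]) & (years <= base[1])
    for m in range(1, 13):
        sel = months == m
        clim = sst[sel & in_base].mean()  # base-period mean for this calendar month
        anom[sel] = sst[sel] - clim       # remove the seasonal cycle
    return anom
```

Subtracting the monthly climatology this way removes the seasonal cycle, so a series that only repeats the same annual cycle year after year yields anomalies of zero everywhere.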

Figure 13

SPOTLIGHT ON CLIMATE MODEL FAILINGS

Let’s return to the abstract again. It includes:

We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

What they’ve said indirectly, but failed to expand on, is:

IF (big if) climate models could simulate the ocean-atmosphere processes associated with El Niño events, which they can’t, and…

IF (big if) climate models could simulate the decadal and multidecadal variations of those processes in-phase with the real world, which they can’t because they can’t simulate the basic processes…

…then climate models would have a better chance of being able to simulate Earth’s climate.

Climate modelers have been attempting to simulate Earth’s climate for decades, and climate models still cannot simulate those well-known global warming- and climate change-related factors. In order to overcome those shortcomings of monstrous proportions, the modelers would first have to be able to simulate the coupled ocean-atmosphere processes associated with ENSO and the Atlantic Multidecadal Oscillation…and with teleconnections. Then, as soon as the models have conquered those processes, the climate modelers would have to find a way to place those chaotically occurring processes in phase with the real world.

As a taxpayer, you should ask the government representatives that fund climate science two very simple questions. After multiple decades and tens of billions of dollars invested in global warming research:

I suspect that Risbey et al (2014) will get lots of coverage based solely on two of the authors: Stephan Lewandowsky and Naomi Oreskes.

Naomi Oreskes is an outspoken activist member of the climate science community. She has recently been known for her work in the history of climate science. At one time, she was an Adjunct Professor of Geosciences at the Scripps Institution of Oceanography. See Naomi’s Harvard University webpage here. And she has co-authored at least two papers in the past about numerical model validation.

Stephan Lewandowsky is a very controversial Professor of Psychology at the University of Bristol. How controversial is he? He has his own category at WattsUpWithThat, and at ClimateAudit, and there are numerous posts about his recent work at a multitude of other blogs. So why is a professor of psychology involved in a paper about ENSO and climate models? He and lead author James Risbey gave birth to the idea for the paper. See the “Author contributions” at the end of the Risbey et al. (my boldface):

J.S.R. and S.L. conceived the study and initial experimental design. All authors contributed to experiment design and interpretation. S.L. provided analysis of models and observations. C.L. and D.P.M. analysed Niño3.4 in models. J.S.R. wrote the paper and all authors edited the text.

The only parts of the paper that Stephan Lewandowsky was not involved in were writing it and the analysis of NINO3.4 sea surface temperature data in the models. But, and this is extremely curious, psychology professor Stephan Lewandowsky was solely responsible for the “analysis of models and observations”. I’ll let you comment on that.

CLOSING

The last sentence of the abstract of Risbey et al. (2014) clearly identifies the intent of the paper:

These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

Risbey et al. (2014) took 18 of the 38 climate models from the CMIP5 archive, then whittled those 18 down to the 4 “best” models for their trends presentation in Figure 5. In other words, they’ve dismissed 89% of the models. That’s not really too surprising. von Storch et al. (2013) “Can Climate Models Explain the Recent Stagnation in Global Warming?” found:

However, for the 15-year trend interval corresponding to the latest observation period 1998-2012, only 2% of the 62 CMIP5 and less than 1% of the 189 CMIP3 trend computations are as low as or lower than the observed trend. Applying the standard 5% statistical critical value, we conclude that the model projections are inconsistent with the recent observed global warming over the period 1998-2012.

Risbey et al. (2014) also failed to deliver on their claim that their tests showed “that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.” And their evaluation of climate model simulations conveniently ignored the fact that climate models do not properly simulate ENSO processes, which basically means the fundamental overall design and intent of the paper was fatally flawed.

Some readers may believe Risbey et al. (2014) should be dismissed as a failed attempt at misdirection—please disregard those bad models from the CMIP5 archive and only pay attention to the “best” models. But Risbey et al. (2014) have been quite successful at clarifying a few very important points. They’ve shown that climate models must be able to simulate naturally occurring coupled ocean-atmosphere processes, like those associated with El Niño and La Niña events and with the Atlantic Multidecadal Oscillation, and the models must be able to simulate those naturally occurring processes in phase with the real world, if climate models are to have any value. And they’ve shown quite clearly that, until climate models are able to simulate naturally occurring processes in phase with nature, forecasts/projections of future climate are simply computer-generated conjecture with no basis in the real world…in other words, they have no value, no value whatsoever.

Simply put, Risbey et al. (2014) has very effectively undermined climate model hindcasts and projections, and the paper has provided lots of fuel for skeptics.

In closing, I would like to thank the authors of Risbey et al. (2014) for presenting their Figure 5. The animation I created from its cells a and c (Animation 1) provides a wonderful and easy-to-understand way to show the failings of the climate-scientist-classified-“best” climate models during the hiatus period. It’s so good I believe I’m going to link it in the introduction of my upcoming ebook. Thanks again.

# # #

UPDATE: The comment by Richard M here is simple but wonderful. Sorry it did not occur to me earlier; I could have included something similar at the beginning of the post.

Richard M says: “One could use exactly the same logic and pick the 4 worst models for each 15 year period and claim that cimate [sic] models are never right…”

So what have Risbey et al. (2014) really provided? Nothing of value.

UPDATE 2: Dana Nuccitelli has published a much-anticipated blog post at The Guardian proclaiming climate models are now accurate due to the cleverness of Risbey et al. (2014). Blogger Russ R. was the first to comment on the cross post at SkepticalScience, and Russ linked my Animation 1 from this post, along with the comment:

Dana, which parts of planet would you say that the models “accurately predicted”?

The gif animation from this post is happily blinking away below that comment. I wasn’t sure how long it would stay there before it was deleted, so I created an animation of two screen caps from the SkS webpage for your enjoyment. Thanks, Russ R.

Great review. NOW you can take ’em to the cleaners. WE hope this trash is going to be retracted. BTW, AW’s talk and presentation of data at the 9ICCC conference was spectacular and basically agrees with SG re confirmation bias (except that SG calls it fabrication) of USA data. So let’s all be pals. LOL

The lack of any Supplementary Information (SI) suggests these authors aren’t much interested in transparency…but we already knew that from dealings with Lewandowsky. It’s all about getting that talking point in the media “climate models replicated the pause”, and really little else.

Few if any journalists will understand what they have been fed beyond “climate models correctly simulated “the pause”, so all is well with climate science”. They’ll miss the fact that a handful of cherry picked models from the CMIP5 ensemble were used (shades of Yamal and the handful of trees) or that just because they line up with ENSO forcing in the period doesn’t mean they have any predictive skill.

Basically what went on here is that they a priori picked the “best” models that lined up with observations (without identifying them so they can be checked). They chose which models performed best against observations and called that confirmation, while ignoring the greater population of models.

It would be like picking some weather forecast models out of the dozens we have that predicted a rainfall event (weather) most accurately, then saying that because of that, those weather forecast models in general are validated for all rainfall events (climate). It says nothing, though, about the predictive skill of those same models under a different set of conditions. Chances are they’ll break down under different combinations of localized synoptic forcings, just like those “best” climate models likely won’t hold up under different scenarios of the AMO, PDO, ENSO, etc.

Given the number of citizen scientists that populate this blog it is probably not a good idea to question the ability of a psychologist to contribute to the science and politics of climate change. In the case of this particular psychologist it is far better to allow his record to scream its madness for all to hear and to highlight that cacophonous prattle as indicators of the quality of his work. He is quite capable of stepping on his own diction.

” For those new to discussions of global warming, I’ve tried to make this post as non-technical as possible,..”
==========
Umm, not sure it is possible to get “non-technical” enough for this reader.
However, the thought is appreciated :)
And your hard work.

Isn’t Oreskes’ and Lewandowsky’s role here quite clear? They have been reliable foot soldiers of the climate scare industry. Co-authoring a paper that is strictly about the physics of climate is just a prop, perhaps one to be wielded as a bat against the likes of (say) Monckton, who are somewhat at the fringe of climate science trying to get in, while these two clowns get a free ride to inflate their lists of publications.

All of these papers by the “team” of true believers start out with the flawed premise that Anthropogenic CO2 driven global warming is unequivocally true as an a priori assumption. Then they try and come up with reasons the real world doesn’t comply with their predictions. This is exactly the wrong approach, a true scientist in Feynman’s mold would try and determine what is wrong with the models and theory, rather than try and find out why the data is “wrong.” They assume the theory is right and try and find weasel room excuses for why the real world isn’t complying with their pet theory.

And the two coauthors? Can we do the same thing they do to discredit this? Say, well, they aren’t “climate scientists” so what they say doesn’t matter?

And another thing…once again with the hiding the pea. We have models that are accurate! We won’t tell you what they are though…you just want to prove us wrong. Idiotic, and disgraceful that they will probably get away with it, just like the list of Chinese weather stations that went missing after the paper that proves there is no UHI effect. Dog ate my homework again.

JohnWho says: “Since the “best” models keep changing depending on the 15-year period chosen, are they saying that all of the models are “best”, according to when they are chosen?”

We don’t know which models were chosen to be “best” for any 15-year period. Therefore, we don’t know if each of the 18 models with sea surface temperature data made it to the “best” list at least once or whether “best” was dominated by a handful of models.

Everything Lewandowsky touches feels like a trap these days. I wonder what his ethics committee thinks they’ve approved for this one… My commenting alone will obviously prove that I wear a tinfoil hat, in his next paper.

Publication of this paper is the stimulus designed by Lewandowsky for his continuing research into the psychology of “deniers”. Lewandowsky’s minions are gathering blog responses and comments as we speak which will in due course be analysed, rated, binned, and characterized in his next psychology paper.

If you fire enough climate models and pick the one that hits the target after the event… you are actually aiming by picking where the shot hit – not by having a clue which end of the gun is the pointy bit.

And you also seem to be saying they changed the “best models” for each 15-year period (repeating the logical error)?
Surely that can’t get published. Not even in Nature Climate Change.

Bob Tisdale wrote: “ENSO is one of the primary processes through which heat is distributed from the tropics to the poles. Those processes are chaotic and they vary over annual, decadal and multidecadal time periods.”

Then he quoted from the paper:

Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power et al. 2006).

The oceans via ENSO factors play a major role in the natural heat distribution of our planet and the models can’t handle that; and yet the IPCC claims that ONLY CO2 could account for any observed warming since they say that they can think of nothing else. It looks to me as if this paper admits there is at least one “something else” that needs to be accounted for, especially since CO2’s correlation with warming has failed over the last 17 years. It has been my position for years that we are not accounting for the ocean and the atmosphere properly while we chase the magic molecule CO2 as the driver of climate.

Good work gentlemen! And a recension of the paper clear enough for even us dumb arts guys to follow. Good science and the thoughtful explanation of the evidence, that’s a rare combination in Climate Studies. Keep up the good work- and thanks.

Bob: Well done. I really enjoyed the article and appreciate the work that went into pulling it apart. My one suggestion is that now you have time perhaps you can develop a shorter more pointed piece. For me the most telling part was the fact that the models chosen failed to describe the actual pattern of 15 year trends in the Pacific, Atlantic or Arctic Oceans. It seems to me that this is a massive disconfirmation of Risbey’s approach and a confirmation of your point that models fail to encapsulate ENSO processes..

Climate models are not yet fit for their intended purposes. Again I think this paper, like the actual meaning of Adaptation in IPCC, is once again confirming that climate models are not intended to be Enlightenment type hard objective science. They are social science models created to justify forcing social theories into K-12 classrooms and policy making. Did you know Jeremy Rifkin now even uses the term ‘Empathy Science’ to describe this new view of science where students will learn to reparticipate in the Biosphere?

This is where Lew’s background factors in: the use of this climate modelling to promote the continued emphasis on the Biosphere in the political sense, the one created by Vernadsky in the USSR that US Earth System Science is founded upon in the first place. It keeps coming up in my research on the complementary concept of Teilhard de Chardin’s noosphere.

On the basis of a comparison of the published Figures 5a and 5c, this paper should never have gotten through peer review. Let alone that the ‘best 4’ models which fail in Figure 5 are not identified. I would think an appropriate note to NCC concerning corrigendum/retraction would be in order on those grounds alone.
This one is going to end as an own goal, since the basic flaws are self evident.
Very nice work, Bob. I had just finished reading the paper when you got this post up. Could have saved myself some wasted time.

I’m afraid that the major input of Professor Lewandowsky into this paper will lead to much criticism on the grounds that, as a cognitive psychologist, he knows nothing about the analysis of climate models and climate observations. This criticism will unfortunately tend to obscure the fact that he knows nothing about questionnaire design and data analysis in his chosen field of psychological research either. Similarly, Oreskes knows nothing about the analysis of historical sources. They are charlatans in their academic fields, but they’re big at the Guardian, Salon, and Huffington Post.
Londo suggests above that Lewandowsky and Oreskes have been given a leg up into the world of serious science. But isn’t the push in the other direction? Two media superstars in the wonderful world of warmology are willing to promote the unknown Risbey into the media limelight of climate change superstardom in exchange for billing as supporting acts. (Remember, Lewandowsky and Oreskes get cited on Obama twitter accounts, something that doesn’t happen to many scientists).
It’s just possible that the whole thing may backfire. The hook to catch the attention of the media big fish is that this is a paper in Nature. Thanks to the speedy footwork of Bob Tisdale and Anthony, Nature editors will already be aware that they’ve got a can of rotting worms on their hook. Not all Nature’s readers are blinkered activists.

Bob (if I may be so familiar) thanks for this, I must now read the paper.

However, in reading your comments (and having been reading the recent stuff at Jo Nova’s) and having an interest in modelling, it occurred to me to observe that the spatial distributions of surface temperatures (say) across the globe could be used as input to a “black box” model of the climate.

I’m obviously aware of your interest in this area – have you (or anyone else to your knowledge) considered taking the step towards modelling using this information as a basis?

If I understand this correctly then the following is a list of the issues that Bob Tisdale claims this paper has:

1 It doesn’t show that the “best models” match observations, though the paper asserts it does.
2 The “best models” are just closest to observations by chance – best has no meaning.
3 The “best models” are defined through applying the Texas Sharpshooter method of picking whichever model is closest to observations at that time period and not discussing the reason why it’s closest (which is luck – see 2)… and then repeating the error for each time period with no discussion as to what ceased to be working for the previous “best models” when they are discarded.
4 It neglects to describe which models are the “best models” for figure 5 which makes the paper untestable.
5 The expected variation of the models is so large that the findings of “consistent with” are almost trivial.

The first version of the Animation 1 didn’t have the units identified for the color scaling, so, after I added the note and uploaded the revised animation, I deleted that first version. But then I forgot to go back and update that link in the final paragraph. Sorry for the confusion.

HAS says: “… it occurred me to observe that the spatial distributions of surface temperatures (say) across the globe could be used as input to a “black box” model of the climate. I’m obviously aware of your interest in this area – have you (or anyone else to your knowledge) considered taking the step towards modelling using this information as a basis?”

The climate science community actually uses sea surface temperature data for specialty models that are categorized as AMIP.

I haven’t studied the Texas Sharpshooter method so I can’t confirm, and I did not address your item 5 in my post: “The expected variation of the models is so large that the findings of “consistent with” are almost trivial.”

It is with great satisfaction that I notice that this paper from the climate industry, Risbey et al. (2014), which includes luminaries such as Lewandowsky and Oreskes, now acknowledges that changes in the intensities and frequencies of El Niños have an important influence on the global temperature anomaly.
Of course as noticed here the current ENSO models are unable to simulate or forecast ENSO variations.
The only thing that would happen if they utilized more powerful computers to forecast ENSO is that they would arrive at an erroneous result more quickly.
The only way for these models to produce better results is if the main drivers are included and understood, which I now know come from a combination of tidal forcing and changes in the electromagnetic activity of the Sun. I’m currently compiling material for a presentation that I’m going to make on this subject.

Sorry about point 5.
It was last because I was least confident of that.
I took it from:

I really enjoyed the opening clause of the last sentence: “One expects the observed trend estimates in Fig. 2 to bounce about within the model trend envelope…” Really? Apparently, climate scientists have very low expectations of their models.

My misunderstanding – I withdraw point 5:
“5 The expected variation of the models is so large that the findings of “consistent with” are almost trivial.”
But I do think you’ve spotted the Texas Sharpshooter fallacy in this paper. So did daviditron at July 20, 2014 at 12:37 pm and he provided a handy link.

Risbey et al. (2014) took 18 of the 38 climate models from the CMIP5 archive, then whittled those 18 down to the 4 “best” models for their trends presentation in Figure 5. In other words, they’ve dismissed 89% of the models.

Lewandowsky (psychologist) and Oreskes (historian) now lecture the plebeian masses about ENSO? I’m surprised John Cook (cartoonist) is not also in the author list. The next logical step is for Mark Steel, Jeremy Hardy and Stewart Lee (far-left wing UK comedians) to join in for the next installment.

They are having a laugh, or “taking the p***”. They know nothing and care far less about ENSO.

All the climate models – whether they admit it or not – have an element of random walk. Thus, make enough models and a few of them will always look similar to a short period of climate history a few decades long. So what?
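The random-walk point above is easy to demonstrate for yourself. Here is a minimal sketch (all numbers are my own assumptions, not anything from the paper): generate 38 toy “models” as pure random walks, then count how many happen to land near a hypothetical near-flat 15-year “observed” trend by chance alone.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def random_walk_trend(n_years=15, sigma=0.1):
    """Build a random walk of annual noise; return its least-squares trend in C/decade."""
    series, level = [], 0.0
    for _ in range(n_years):
        level += random.gauss(0.0, sigma)
        series.append(level)
    # closed-form least-squares slope of series against 0..n-1
    n = len(series)
    xbar = (n - 1) / 2.0
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return 10.0 * num / den  # per-year slope -> per-decade

observed = 0.05  # hypothetical near-flat "observed" trend, C/decade
trends = [random_walk_trend() for _ in range(38)]
matches = sum(1 for t in trends if abs(t - observed) < 0.05)
print(f"{matches} of 38 random-walk 'models' fall within 0.05 C/decade of the 'observation'")
```

With no physics whatsoever, some fraction of the walks will typically sit close to the target trend over so short a window, which is the commenter’s point: proximity over 15 years, by itself, proves nothing.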

The Sydney Morning Herald: Setting aside the fact the equal hottest years on record 2005 and 2010 fall well within the past 17 years
This applies to GISS and Hadcrut4, but not to the satellite data (UAH and RSS) or to Hadsst3, where 1998 is still the warmest.

phlogiston says:
July 20, 2014 at 2:20 pm
Lewandowsky (psychologist) and Oreskes (historian) now lecture the plebeian masses about ENSO? I’m surprised John Cook (cartoonist) is not also in the author list. The next logical step is for Mark Steel, Jeremy Hardy and Stewart Lee (far-left wing UK comedians) to join in for the next installment.

Isn’t Ron Painter an ex-cop? Sounds like a band. The ♫♪YMCA ♫♪ of climate change.

This paper will be used the same way as the 97% consensus paper. Some models were cherry-picked to match an expected outcome. Some models do not match the outcome. However, by implication they are all correct, or by the Sydney Morning Herald’s standard, “On the Mark”…

Therefore, the fact that Risbey et al (2014) selected models that better simulate the ENSO trends for the period of 1998 to 2012 is pointless, because the models are not correctly simulating ENSO processes.
. . .
Now that ENSO has switched modes so that La Niña events are dominant the climate-science community is scrambling to explain the loss of naturally caused warming, which they’ve been blaming on manmade greenhouse gases all along.

Yes, and yes. And you can go further than that. During 1950–1975 there was a negative PDO holding warming down, a repeat-act of the “pause”. Complicating this is that the PDO was not even properly described by science until 1996. (Hansen made his famous speech in 1988.)

Warming from 1950 to date (the “CO2 period”) does show Arrhenius-type raw CO2 warming. But only at 0.11C/decade. And that period has a roughly even number of positive and negative PDO years (with AMO following on). And what that does is dispute the hypothesis of net positive feedback. There are many feedbacks, of course, but the net effect appears to be insignificant.

Kon, Nature long ago succumbed to the lure of “impact” and popular readership in publishing crap. Fifteen years ago I went through 4 rounds of submission and review of a total dismantling of a cover story “paper” on organic apple production that was just rotten to the core. They eventually ruled that while my and my co-author’s critique was valid, it was “too specialized a discussion” for their general readership. So they could mislead the masses generally, but not correct it specifically.

The value of a climate model is determined by how well it is able to forecast the future. If the same models that were rated “best” at simulating global surface temperatures over the past 15 years are not very good at simulating the next 15 years, then it was all dumb luck, and they are useless. If a different set of models are rated best for each 15-year period, and you can’t determine ahead of time which models will end up being the best, then there is no real value to any of them.

Jeremyp99 got it right. Science is just a battleground for a cultural/ideological war. But we play by the rules (as true adherence to the philosophy demands) and they count on that, for the most part. That’s how they win the propaganda battles, though reality is getting harder and harder to hide. Just note; We’ve gotta play a bit on their side, too.

Bob, once again you have graced us with meticulous and excellent work. Thank you!

Although an earlier poster asked for a more succinct version, to really get a good understanding of the fallacies of Risbey et al. one has to be patient and follow the evidence laid out. Bob has put forth a cogent argument and in several places summarizes his point that even for all the cherry picking that these authors engage in, none of the championed models is capable of replicating ocean temperature variabilities including El Nino and La Nina events. I remember very early Tisdale posts in which evidence was provided without the context of an argument and we had to figure out things for ourselves. Bob has become excellent at communicating the intricacies of complex ocean phenomena.

No. But I can and will. I use the top-down, meataxe approach (which is all I consider appropriate for our state of the knowledge). My supercomputer consists of the back of an old envelope:

THE MODEL — Done PNS style, but top-down, the way logic demands:

The pause should continue (possibly with mild cooling when AMO and the others flip) for another 20+ or so years. Around that point, PDO will go positive again (pulling or pushing the other major multidecadal cycles along with it) and we will see another 30-year rise not dissimilar to the positive-PDO dominated period from 1976 – 2007. Then it will go flat again.

The major wildcards are solar (we just don’t know yet, but if Cycle 26 is a bust, we will find out a lot), and what mankind does in terms of CO2 output and if (when, really) China puts scrubbers on its plants so they stop showering soot all over the Arctic. Plus the Dreaded Unknown Wild Card (there always is at least one).

Also, if Anthony & team are correct that the surface record is exaggerated by ~70%, that would imply a ~20% exaggeration of the total surface record (including oceans) during the CO2 era.

There’s your model, made on what we know and what we have observed. These bottom-to-top models cannot work like they work for building a bridge. (And sometimes even a bridge will collapse.) We do not have the level of knowledge of climate that we have for building a bridge. You might as well try to simulate the Eastern Front using Advanced Squad Leader rules: your results will be irrelevant to reality, and not even interesting.

So there is my version of Post-Normal Science (but properly done, this time), which is an approach using what we do know, what we have observed, and making note of unknowns and plausible possibilities.

Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution.

And rather than consider the most likely, alarming, and very costly possibility that the models all overestimate the emissivity of the CO2 molecule, we will churn out almost anything as a distraction.

TheLastDemocrat says:
“No one has yet said this, so I will reluctantly volunteer. Here goes:
A broken clock is correct twice a day. With more clocks, you have the accurate time much more often.”
_____

With one stopped clock, I am guaranteed to have the correct time twice a day. But with two stopped clocks, I have to choose one of them. I might not get the correct time even once for a given day if I alternate between the clocks at the wrong time. If I average the two clocks, I’m back to being correct only twice a day. So what good does it do me to have a room full of broken clocks?

I took a closer look at Risbey et al.’s figure 2. It really is hilarious!
So you draw a very fat, wriggling grey snake, then you clearly demonstrate for all to see that observations are really having a very hard time staying within that grey area (except for that one period the models were tuned to), and next you boldly proclaim “One expects the observed trend estimates in Fig. 2 to bounce about within the model trend envelope…”

Even worse, Louis: if you have, say, 30 broken, stopped analog clocks, and you average how far off 4 of them are every 15 minutes, you may get a 15-minute period where they are reasonably close, so your average time is much better than at other 15-minute periods.

Now, continuing to do what these Alarmists have done, keep changing which 15-minute period you use, and keep changing which 4 clocks you use and, voilà!, your methodology for showing how close the clocks are to actual time is, uh, “on the mark”.

evanmjones says: “And you can go further than that. During 1950s -1975 there was a negative PDO holding warming down, a repeat-act of the “pause”.”

La Nina events dominated from the 1950s through 1975 and as a result, the PDO was negative. The PDO (the spatial pattern of the sea surface temperature anomalies in the extratropical North Pacific) is an aftereffect of ENSO (and sea level pressures in the North Pacific). There is no mechanism associated with the PDO that can cause global surface temperatures to warm or stop warming. The processes are associated with ENSO.

I know Swedish state-run media will jump on this opportunity to present Oreskes as a published climate expert now that her name is on a “real” climate paper. The mere thought makes me nauseated, and they will love Lewandowsky.
I can’t stand it, seriously: two of the most vile personalities available as climate authorities telling us we must cut down on carbon.

Anthony, Bob, great review, thanks.
Reading the review, my guess is that this paper was not prepared as a serious contribution to the science community but rather to intentionally provide talking points for the lemmings in the media to obfuscate the fact that the computer models fail miserably when compared to measured temperatures.

Below is evidence that the attempt will likely work:
The Sydney Morning Herald headline about Risbey et al (2014): “Climate models on the mark, Australian-led research finds”

This is not unlike the paper claiming 97% consensus that the President and all the liberals repeat again and again, following the principle that if you repeat a lie enough the public will believe it is a fact. I must have heard that talking point about a hundred times, even on FOX News. The objective of this paper is the same.

It’s really unfortunate the paper is paywalled. (US $32, and I bought it today just to make sure the advanced copy I had was the same as the final online version.) The more I dug into it, the more I was flabbergasted with what they were doing. If their intent had been to show the reasons why climate models need to have natural variability in-phase with the real world, their approach was a novel way to approach it. But that wasn’t the intent of the paper.

Phil Jones – if the pause lasts 10 years, the models are in trouble.
Someone Else – No, no, it needs to be 15 years before the models are in trouble.
Ben Santer – No, no, no, it has to be 17 years, if we hit 17 years, the models are in trouble
Risbey 2014 – actually, if we only look at some of the models some of the time and only in some of the places, it turns out they predicted this all along.

Of course I am paraphrasing. If someone has exact quotes and dates, it would be much appreciated.

This is likely an abuse of the ensemble concept. No wonder they have Lewandowsky and Oreskes signed on. They need help from people who are practiced in deflecting questions and blaming the one asking the question for problems. Conspiracy accusations will fly from Lew and Oreskes like water being shaken off a wet dog.

please excuse my typing; I have had brain cancer surgery. and please excuse my musings. but I have to wonder why this paper was ever published. it is a load of garbage. the sole purpose seems to be to provide another excuse: that the models do show the cessation of global warming in their output, and, therefore, their claim that cagw is still real is sustained.

but this can only be a temporary reprieve. global temperatures have risen, this rise has now ceased. the data is ergodic, and no-one has any idea what is going to happen in the future. with that Sword of Damocles hanging over their heads, the authors seem to be willing to sacrifice their reputations, in furtherance of The Cause.

Last week Naomi’s name came up in a Forbes post written by a columnist who was trying to expose alleged fact twisting he attributed to the Heartland. http://www.forbes.com/sites/eriksherman/2014/07/15/the-latest-climate-change-denial-fact-twisting/
He wrote……”I had an exchange with Naomi Oreskes, a Harvard professor of the history of science and associate professor of earth and planetary sciences as well as co-author of the 2010 book, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco to Global Warming. She said the following about Heartland”:
[cue the activist history of science shill] [yes, Naomi, that’s you]

“Heartland has been promoting disinformation about climate, and before that about tobacco, for more than two decades. It is usually based on misrepresenting factual information, or cherry picking data in misleading ways. It is almost never consistent with what actual scientific or biomedical researchers have to say about these matters.”

After reading ‘Risbey et al 2014’ might I suggest we swap from the above quote
-Heartland with Naomi Oreskes
-Tobacco with history
-biomedical with factual

It is worse than that. The Texas Sharpshooter works by shooting a gun at the side of a barn, and then painting the bulls-eye afterward with the bullet hole in the centre. What this paper appears to have done is worse. They’ve fired 18 guns once each at several different barns. They’ve then painted the bulls-eye on four bullet holes on four different barns. But not only will they not tell us which guns made which bullet holes, they won’t even show us the pictures of the barns with the holes and the painted-on bulls-eyes. But they want us to believe that the shooter is nonetheless an excellent marksman.

Further, what Risbey et al. (2014) failed to acknowledge is that the current hiatus could very well last for another few decades, and then, after another multidecadal period of warming, we might expect yet another multidecadal warming hiatus—cycling back and forth between warming and hiatus on into the future.

Allowing another few days for an answer (which will, of course, not be forthcoming) to your and Anthony’s request for naming the 8 models, you could then send a further request to Nature asking for the author’s data.

M Courtney;
But no-one has ever named the fallacy of doubling, trebling and umpteening up on the Texas Sharpshooter fallacy so I couldn’t short-hand it.
>>>>>>>>>>>>>>>>>>>

Let’s call it the Texas Scatter Gun Turkey Shoot Fallacy. You shoot a shotgun repeatedly into the air. Then you go to the grocery store and buy several frozen turkeys. You present them as the turkeys you would have shot had the grocery store not gotten them first.

A men’s clothing store stocks suits in 38 different sizes. Customers arrive in 15 minute increments. At 3pm a man comes in needing a 35″ short. The store has a 36″ short — close enough! At 3:15pm a man comes in needing a 44″ tall. The store has a 44″ tall. Every 15 minutes another man comes in and every 15 minutes the store has something close to what the next man needs. At the end of the day the store triumphantly claims their 15 minute predictions were spot-on.

Statistics 101 — how does the sample data compare to the expected data? http://en.wikipedia.org/wiki/Goodness_of_fit
The methods for comparing real-world data to models are probably beyond simple quantification.
Or: what test is appropriate to measure the differences in slopes?
etc., etc.

All the authors had to do was describe the method used, and state that the 4 models with the “best scores” were such and such.

Quantifying the difference between reality and expectations can be done methodically, but there may be some interpretive variances.
I hope the publisher is alerted to Bob’s analysis.
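The slope-comparison question above can at least be made concrete. A minimal sketch with entirely made-up annual anomaly series (every number below is hypothetical), using a plain ordinary-least-squares slope to quantify the model-minus-observation trend difference:

```python
def ols_slope(y):
    """Least-squares slope of y against the index 0..n-1 (units of y per step)."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# hypothetical 15-year annual anomalies (deg C): near-flat observations vs a warming model
obs   = [0.40 + 0.005 * i for i in range(15)]   # ~0.05 C/decade
model = [0.38 + 0.020 * i for i in range(15)]   # ~0.20 C/decade

diff = (ols_slope(model) - ols_slope(obs)) * 10.0  # convert per-year to per-decade
print(f"model-minus-observed trend: {diff:.2f} C/decade")
```

Whether such a difference is “significant” then depends on the trend uncertainties, which is exactly the part the authors needed to spell out.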

Ok, I have a research paper to propose. In the spirit of other climate studies, I have written the abstract, first. Perhaps one of you would like to write the conclusion, then we can find someone to do the actual experiment, using the conclusion and abstract as a guide.

Here is the abstract:

We present a novel methodology for the construction of computer-generated climate modelling. This method provides a superior result to existing models but at a fraction of the cost.

This methodology employs the use of a climate model random generator (CMRG) developed to analyze random climate models against the actual observed climate. Thirty-eight random models were generated, then compared to the observed global climate over the last fifteen years. Fully one-third of those models showed no significant temperature change during the period, which corresponds with actual observation. The remaining models were equally split between climates with increasing global temperatures and those with decreasing temperatures.

We extracted the best four models of those showing no change in global temperature, and demonstrated that they closely simulate actual temperature variations during the fifteen-year period. The results obtained more closely correlated with actual measurements than any previously published climate model. The CMRG models share many of the same attributes of the most popular climate models to date, i.e., they do not attempt to model climate sensitivity to ocean temperatures, clouds, water vapor, dust, or any solar-related phenomena.

Finally, the cost of creating these models, $174.65, compares favorably with the total cost, approximately $20.7 M, for the most cited thirty-eight models in climate studies.
……….

Who wants to add their name to the paper, and to whom do we submit it?

davidmhoffer says:
July 20, 2014 at 4:01 pm
But not only will they not tell us which guns made which bullet holes, they won’t even show us the pictures of the barns with the holes and the painted-on bulls-eyes.

That reminds me of Hawking’s statement.

One of Albert Einstein’s most famous statements is “God does not play dice with the universe”.

Stephen Hawking:
“Not only does God play dice but… he sometimes throws them where they cannot be seen.”

21 July: SMH: Peter Hannam: Climate models on the mark, Australian-led research finds
A common refrain by climate sceptics that surface temperatures have not warmed over the past 17 years, implying climate models predicting otherwise are unreliable, has been refuted by new research led by James Risbey, a senior CSIRO researcher…
The Bureau of Meteorology last week maintained its estimate of a 70 per cent chance of an El Nino this year. It noted, though, that warming sea-surface temperatures in the central and eastern Pacific had yet to trigger the constant reinforcing atmospheric patterns such as a stalling or reversal in the easterly trade winds…
http://www.smh.com.au/environment/climate-change/climate-models-on-the-mark-australianled-research-finds-20140720-zuuoe.html

but plenty of pessimism from Shukman at BBC – read all:

20 July: BBC: David Shukman: Shuttle diplomacy in climate countdown
A senior British minister is once again launched on a long-haul high-carbon mission of shuttle diplomacy in the cause of tackling climate change.
The target is to try to land an international deal on limiting greenhouse gases at what is billed as a major summit in Paris in late 2015…
Buoyed by a trip a fortnight ago to Washington – where President Obama recently announced his plan to limit emissions for power stations – Ed Davey, the Energy and Climate Change Secretary, arrives in China on Monday for his second major visit there and he will then fly on to India…
Mr Davey was not in the job at the time of the chaotic scenes in Copenhagen in December 2009 so he does not carry the wounds of that event and, to someone who was there, he sounds surprisingly upbeat.
So, I ask, is he genuinely optimistic that something might come out of all this?
“I’m more optimistic than I thought I’d be,” he says.
“I think there’s a desire in many capitals to do a deal – there’s been a real shift.
“People are now thinking about what’ll be in a deal not ‘will there be a deal?’”
Given the US Senate has never been supportive of a climate treaty and that China and India have long argued that too many of their people are living in poverty to contemplate any action on emissions, how does the minister come to this view? …
Mr Davey is also heartened by new figures on China’s use of coal. Instead of rising by 10% a year, coal use is now only increasing by 5%.
That does not mean less coal is being burned, only that the year-on-year growth in coal use has become smaller.
***This counts as success in the often strange world of climate diplomacy, where the smallest straws in the wind can acquire huge significance…
http://www.bbc.co.uk/news/science-environment-28375267

CMIP5 model results were accepted right to the end of 2013 according to the IPCC (even after publication of the WGI report in September 2013).

Modelers had access to actual data at least into the early part of 2013, so what is so fantastic about only 4 models being able to simulate the already-known SST pattern from 1998-2012? Nothing; all the models should have incorporated the actual known results from 1998-2012. The fact that only a few could come close to the already-known numbers should tell you something.

Look for “CMIP5 output will be accepted through at least 2013” in this IPCC letter to modelers.

HAS says: “On a quick look seems not to be used for/robust for long-term forecasting.”

The AMIP models are used in reanalyses and in tuning the atmospheric portion of the climate models that hindcast and project. That portion of the tuning with AMIP models takes place before they couple the “atmospheric” model with the ocean portion of the model.

/sarc-on
Since the Doran/Zimmerman survey kindly eliminated anyone who was not a climatologist (i.e. astrophysicists, geophysicists, meteorologists, etc.) because they could not comprehend the complexities of climate science, could one question the qualifications of the authors of this article, since they are only an Adjunct Professor of Geo-sciences and a Professor of Psychology?
/sarc-off

So let’s get this straight. You take your show on the road and in one town, 4 of your actors get applause and 4 get booed. Then you go to another town, and 4 actors get applause and 4 get booed, but different actors in each case. In the end you call the tour a success, since you got some applause.

Somehow it’s labeled as 20 June 2014 although he talks of the paper published today and the first and only comment is July 21, other side of the International Date Line. Someone might want to take a screenshot of that date, see if it gets corrected later.

Possibly notable parts:

One thing that has changed since 2000 is that more heat is now going into the oceans—rather than the atmosphere—and at an accelerating pace. Or as Dana Nuccitelli put it recently:

“The rate of heat building up on Earth over the past decade is equivalent to detonating about 4 Hiroshima atomic bombs per second. Take a moment to visualize 4 atomic bomb detonations happening every single second. That’s the global warming that we’re frequently told isn’t happening.”

(Yes, he used quote marks on text formatted as a quote.)

The Hiroshima bomb had a yield of 67 Tera-Joules (TJ); a Joule is a Watt-second, so 67 * 10^12 Ws.
Earth surface area 510.1 million km^2 = 5.101*10^8 km^2 = 5.101E14m^2

(4 * 67E12 Ws)/s / 5.101E14 m^2 = 0.53 W/m^2

Average surface insolation is about 240 W/m^2. Lewandowsky is still whining about a rounding error, that neither Tisdale nor anyone else can convincingly find building up in any oceans anywhere.
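For what it’s worth, the commenter’s back-of-envelope arithmetic checks out. A quick script using only the figures quoted in the comment above (the 67 TJ yield and the surface area are the commenter’s numbers, not mine):

```python
# Check of the back-of-envelope numbers from the comment above
hiroshima_J = 67e12        # assumed yield per bomb, joules (67 TJ, per the comment)
rate_W = 4 * hiroshima_J   # "4 bombs per second" -> joules per second = watts
earth_area_m2 = 5.101e14   # Earth's surface area, per the comment

forcing = rate_W / earth_area_m2  # implied heat uptake per square metre
print(f"{forcing:.2f} W/m^2")     # ~0.53 W/m^2, matching the comment's figure
```

Against the ~240 W/m^2 of average absorbed solar radiation the comment cites, that works out to a fraction of a percent, which is the comparison being made.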

We begin by noting that the observed global temperature increase remains comfortably within the 95% envelope of model runs, as shown in the figure below, which is taken from a recent Nature Climate Change paper by Doug Smith.

Otherwise, sadly, the models used in the paper are not specified. It’s a vanity piece elevating the paper as a fourth line of “evidence” showing that when the models are repeatedly tuned by differing methods to match the observations of the oceans, they can somewhat match the global surface temperatures, albeit for a certain period like 15 years or so.

Lewandowsky has basically shown that since global surface temperatures follow the ocean temperatures – mainly because about 70% of the global surface temperature record is sea surface temperature by default – if the models are tuned to the ocean temperatures (that already happened) then they’ll follow the surface temperature record (which already happened).

“… with that Sword of Damocles hanging over their heads, the authors seem to be willing to sacrifice their reputations, in furtherance of The Cause.”
__________________________
The authors do not plan on sacrificing anything. Considering that the state of modern journalism lies somewhere near Orwellian, the authors count on figuratively parading naked before adoring believers, all chanting: “what wonderful clothes”.

The PDO (the spatial pattern of the sea surface temperature anomalies in the extratropical North Pacific) is an aftereffect of ENSO (and sea level pressures in the North Pacific). There is no mechanism associated with the PDO that can cause global surface temperatures to warm or stop warming. The processes are associated with ENSO.

Okay, now I understand what you were saying earlier to me in an earlier post. (I was quite baffled at your response.) You are saying SO (atmospheric) phase drives PDO (SST) phase and that this either increases or decreases the rate of upwelling over a positive or negative phase. Is that right?

That works for me just fine.

That does not change the model in any case. You and I are proposing an almost identical result.

Anthony, still time for a contest on pick the reviewers for this article.
I nominate John Cook, Prof C Turney, Dr Gergis and that PhD student who reviewed Gergis’s last publication.
Do we find out when it is published?
Will I win?
On a lighter note we need more of these heavyweight articles in great scientific publications by such well credentialed scientists.
This article will do more for scepticism than 10 IPCC reports.

This is a question for Bob Tisdale. I’ve asked you about this before, but I still don’t quite get it, and others may have the same question. I believe you attribute much of the warming in the last century to repeated El Ninos. My question is, how does the ENSO contribute to long term warming, which would otherwise not exist? It’s easy to understand how the ENSO is storing heat and then periodically releasing it back to the atmosphere. However, in order to contribute to a sustained warming trend, the ENSO must alter the earth’s heat balance, all other things being equal. If you believe this to be the case, then the ENSO must have been contributing to warming for many centuries. If not, what interrupts this process? Another way to express this is to assume that the ENSO does not affect the earth’s heat balance, in which case, during a charging cycle, the earth should cool, which it apparently is not doing. I hope I’ve adequately expressed the question.

One thing that has changed since 2000 is that more heat is now going into the oceans—rather than the atmosphere—and at an accelerating pace.
===
they don’t realize that’s impossible….and it reads as stupid as it sounds

Shouldn’t we be saying 14 out of 18 (or 38) climate models proved completely wrong in the new study, and the other 4 are mostly wrong? Thanks to Prof Lewandowsky.
Oops , I see a small flaw.
Stephan is actually a good fellow in this case.

A men’s clothing store stocks suits in 38 different sizes. Customers arrive in 15 minute increments. At 3pm a man comes in needing a 35″ short. The store has a 36″ short — close enough! At 3:15pm a man comes in needing a 44″ tall. The store has a 44″ tall. Every 15 minutes another man comes in and every 15 minutes the store has something close to what the next man needs. At the end of the day the store triumphantly claims their 15 minute predictions were spot-on.

Have I got it about right?
———————————-
I like that version. Just imagine, we can predict everything now! We just need to make enough models.

Am I wrong to say that, because of the mass difference between the atmosphere and the ocean – on the order of 250 times – for there to be any significant warming of the ocean by transfer from the atmosphere there would have to be a much larger increase in the temperature of the atmosphere?

If this is the case, then any average increase in ocean temperature would by necessity be caused by something other than a warming atmosphere.

They’re here.
===========================
From the article:
The abstract of the paper states “These tests show that climate models have provided good estimates of 15-year trends”
It is my understanding that the author is stating that 4 out of 18 computer models are good.
GOOD, not very good or excellent, just good. Oh yeah, what about the other 14? Are they worse than good?
In the AR4 2007 report, the IPCC stated that the models say the globe will warm at a rate of 0.2 deg/decade. It hasn’t, so the models are wrong. It’s that simple.

Commenter: waza, July 21, 2014, 8:25AM

Comments are now closed
============================
Gee, that was quick!

The Sydney Morning Herald: Setting aside the fact the equal hottest years on record 2005 and 2010 fall well within the past 17 years

This applies to GISS and HadCRUT4, but not to the satellite datasets UAH and RSS, nor to HadSST3, where 1998 is still the warmest.

======================================================

As long as the peak temperature years which occur within any 17-year period continue to fall within the boundaries of the model confidence intervals, the claim will be made by climate scientists that temperature observations are consistent with climate model predictions, regardless of what the calculated slopes of those 17-year trends actually indicate.

Because of how wide the model confidence intervals are, then unless a huge downturn in GMT occurs which continues indefinitely into the future, this kind of claim will be made by climate scientists for the next thirty to fifty years.

Following the UN’s spirit of celebrating global diversity and equality, Risbey et al. cherry-picks 4 (heretofore unidentified) of the “best” CMIP5 models (out of 100+ CMIP5 models available) that by sheer blind luck just happen to vaguely resemble reality for a brief period of time, and declares that because all CAGW models are created equal, their averaged projections of catastrophic warming are magically validated….

My head just exploded….

In a rational world, all the CMIP5 models that already exceed reality by 2 standard deviations should be trashed as they are obviously fatally flawed. If such a rational process were followed, the remaining CMIP5 model mean ECS would be well below 2C, which would, for all intents and purposes, disconfirm the CAGW hypothesis, or at least inject some very serious doubts about its validity.

The ONLY thing Risbey et al proves is that CAGW advocates’ days are numbered. Real scientists outside the CAGW cult must certainly realize the level of quackery on display, and cannot allow the integrity of science to be sacrificed on the altars of political agendas and grant grubbing for much longer.

…so I guess that great idea is out of the question. Tide gauge data going back 150 years also shows utterly no trend change in our high-emission era, from Church & White 2011:

That last plot utterly falsifies nearly all climate alarm headlines. It’s the only long-running plot out there where sea level is actually real instead of some virtual construct fraudulently labeled “sea level.” You see, in science, adjustments must move closer towards reality rather than away from it, or you can’t still label the graph “sea level,” now can you? Adding water from dams and water reservoirs on land to sea level is what climate “science” does instead.

Headline: At least 89% of climate models now shown to be wrong!
Funding will now be stopped for all researchers supporting faulty climate models. The four worst modelers will be asked to refund their grants.

There are hypocrites without climate credentials posting to the SMH article (comments were closed then re-enabled, interestingly) recommending that skeptics shut up…unless they have a PhD in “climate science”!
There are no comments yet objecting to that “argument,” pointing out that a “climate science” badge is an arbitrary thing assigned to anyone willing to support the team; that a shifty psychologist or a cartoonist can be a “climate scientist.”
I’m locked out of Fairfax for debunking the religious editor, so I can’t even try.

When they started saying a while back that it would be ‘unsettling to the science community’ to merely point out that the models don’t match the data, I sighed and moved on to the next person who still had their brain functioning.

There was a good scene in the film Creation, about the life of Darwin, when he walked out of a sermon about how God supposedly always looks after the sparrow; the film then showed a baby sparrow being kicked out of the nest and eaten by maggots. The gap between the models people desperately want to believe and reality is much the same.

So Professor of Psychology Lewandowsky now knows the heat is going into the oceans.
Which begs the question: where was it going before?
I guess in the climate-obsessed world ‘heat’ is a spirit, like a djinn of the desert, capriciously going where it will. If a djinn is trapped in a lamp it must grant wishes to regain its freedom.
If heat is trapped in the oceans, will the heat grant wishes to gain its freedom?
Powerful guys, these psychologists.

It will be interesting to see if this sort of ensemble analysis will hold up under scrutiny. I look forward to what Steve McIntyre and Steve Mosher have to say about it, as well as any other math qualified opinion makers.

“It will be interesting to see if this sort of ensemble analysis will hold up under scrutiny. I look forward to what Steve McIntyre and Steve Mosher have to say about it, as well as any other math qualified opinion makers.”

1. It’s clear, the same as in hurricane prediction, that some models do better than others.
2. Still, people typically present the full envelope.
3. After the fact, people may go through and see if some model is consistently better than others; this can help you improve models.
4. Some people (I believe Judith) weight models to get a better prediction.

There is a lot of ad hocery. I don’t know how, or even IF, you would be able to apply any stats after making a selection.

What you are able to say is that IF the models could get the phasing right (and we know this will be hard), the pause is understandable.

Folks need to stop thinking about models in the way they do.

Want to predict the future?

You could do a stats model from the past.
You could do any sort of physics-based model.

They will all be wrong… more or less.

Want to base policy on them? You don’t need models to limit carbon. You just need a pen and a phone.

So is this really just a variant of the old Texas Sharpshooter Fallacy?
Surely that can’t get published. Not even in Nature Climate Change.
==============
The hockey stick, which introduced the notion of “calibrating” tree rings, is also a variant of the old Texas Sharpshooter Fallacy. In that case, global average temperatures for the past 150 years were used to select those tree rings that were “good” proxies, without considering that some tree rings will match global average temperatures by chance.

Now this paper is doing the same: selecting climate models based on how well they match observations, then claiming that this shows they were “good” models, without considering that some models will match observations by chance.
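The selection-by-chance point can be illustrated with a toy Monte Carlo (this is not the paper’s method, just an illustration: pure-noise random walks screened against a “target” series, with all counts and thresholds chosen arbitrarily):

```python
import random

random.seed(0)

def random_walk(n):
    """A pure-noise series: cumulative sum of Gaussian steps."""
    x, series = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        series.append(x)
    return series

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

target = random_walk(100)                        # the "observed" record
noise = [random_walk(100) for _ in range(500)]   # 500 pure-noise "models"

# Screen on the first half -- the "calibration" period.
selected = [w for w in noise if corr(w[:50], target[:50]) > 0.5]
print("passed the screen:", len(selected))

# Out-of-sample skill of the screened series on the held-out second half.
mean_holdout = sum(corr(w[50:], target[50:]) for w in selected) / len(selected)
print("mean held-out correlation: %.2f" % mean_holdout)
```

Plenty of series built from nothing but noise pass the in-sample screen, yet their average skill on the held-out period collapses toward zero, which is exactly the sharpshooter problem.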

Phlogiston, if you are not working in academia, you may have tried to buy a copy of Nature anyway, just out of healthy layman’s curiosity – in which case you would have realised that Nature is not for the plebs; it can’t even be easily obtained by the plebs. Plebs take to online blogs to get their science fix.

I know because I have tried. London isn’t exactly a small hick town, yet even the largest flagship book stores with the best-stocked magazine departments won’t be able to sell you a copy of Nature. They are also not able to suggest where else you might get lucky…

Lewandowsky is not trying to sell you anything, evidenced by the fact that he does not grace Anthony with his attention, despite Anthony running ‘the most read blog on climate’…

Also, you can’t just walk into the British Library and read it there, because ‘just walking in there’ is not possible either – not past the reception desk, anyway ;)
I needed a letter of reference from my employer stating a specific research purpose to get a library card… That’s how hard it is for the plebs to learn what Lewandowsky has to say other than by way of hearsay, or through rehashed bits, as in this article.

Mani the parakeet (hatched 1997), also called Mani the parrot, is a Malaysian-born[1] Rose-ringed Parakeet who resides in Singapore. Mani became a celebrity in Singapore, and later internationally, when he picked the correct winners for all of the 2010 FIFA World Cup quarter-final ties, as well as the Spain-Germany semi-final.[2]
http://en.wikipedia.org/wiki/Mani_the_parakeet

Paul the Octopus (26 January 2008[1] – 26 October 2010) was a common octopus who supposedly predicted the results of association football matches. Paul correctly chose the winning team in several of Germany’s six Euro 2008 matches, and all seven of their matches in the 2010 World Cup—including Germany’s third place play-off win over Uruguay on 10 July. Aside from his predictions involving Germany, Paul also foretold Spain’s win against the Netherlands in the 2010 World Cup Final.
http://en.wikipedia.org/wiki/Paul_the_Octopus

hunter says:
July 20, 2014 at 8:47 pm
It will be interesting to see if this sort of ensemble analysis will hold up under scrutiny. I look forward to what Steve McIntyre and Steve Mosher have to say about it, as well as any other math qualified opinion makers.

Mr. Mosher is an English major and has no such math qualifications so it makes no sense why you would be interested in his opinion.

Question… if “the science is settled,” then why are there still multiple models? A cynical mind would assume that it’s to allow warmists to claim that “a model predicted events X, Y, and Z.” Well, it was 3 separate models that each predicted one event… picky, picky, picky.

I too can make bold claims… I can model the exact results that will occur when you flip a coin 10 times in a row (i.e. heads/tails). Given 1024 models, I guarantee you that one model will get it right.
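The arithmetic behind that guarantee is just 2^10 = 1024: if the “ensemble” contains every possible sequence of 10 flips, one member must match by construction. A minimal sketch:

```python
import itertools
import random

# All 1024 possible "models" of 10 coin flips: every heads/tails sequence.
models = list(itertools.product("HT", repeat=10))
print(len(models))  # 1024 = 2**10

# Whatever sequence the coin actually produces...
random.seed(1)
outcome = tuple(random.choice("HT") for _ in range(10))

# ...exactly one of the 1024 models matches it flip for flip.
matches = [m for m in models if m == outcome]
print(len(matches))  # 1
```

One perfect match out of 1024 is guaranteed, and it tells you nothing about which model will match the next 10 flips.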

Clearly the best way to predict the climate is to fabricate “climate dartboards” with possible values for climate variables such as temperature and cloudiness, then start with 256 darts (2^8). Using an automatic randomized dart thrower to avoid experimenter bias, call out the first month of a year, then throw each of the 256 darts. Repeat for the other months. For each variable for each year, average the results, put aside the four darts that come closest, put away the four worst, then work through the next year with the remainder.

Repeat until you have none remaining, then restart with the set-asides.

Continue until you only have four best ones and four not-that-bad ones remaining.

Then you have the four best ones for your climate predictions for a particular variable, and four possible back-ups if a best dart should inexplicably start predicting badly and needs to be rotated out.

Note that per Mosher, the darts that will be best at predicting temperature might not be those which are best at predicting precipitation, for example. Organize and keep track accordingly.

Mr. Mosher is an English major and has no such math qualifications so it makes no sense why you would be interested in his opinion.

Among other reasons, while Mosher can also come across as an arrogant ass, people know it’s not personal as it’s just his commenting style, and he’s certainly not anonymous and is willing to put his real name to his words.

Mosher preens: “you dont need models to limit carbon. you just need a pen and phone”

…the very week Australia nixed its big-lie carbon tax, which sank liberalism in the whole country after Gillard specifically promised no carbon tax, just as in his second term Obama was found to have lied about Obamacare. And no, his pen and phone have not limited carbon; only fracking did that, along with his actual hands-off approach to it, while he merely offered trivial little symbolic gestures like blocking a single pipeline from Canada, which will result in more energy being used to transport the exact same output. Obamacare is the climate cult’s downfall now, since lies are now an American leftist legacy, standard operating procedure. That puts climate alarm very much in the crosshairs.

Because Mosher’s boss Muller opportunistically attacked the hockey stick team, his BEST temperature product has now been ignored in favor of amateur-hour Cowtan & Way’s bizarre Frankenstein update of recent Saudi academic appointee Phil Jones’ Climategate University plot, which used satellites to estimate the missing Arctic in a way that the satellite data itself falsifies. What a mad house of cards these jokers constructed, as their fanaticism is now helping topple the entire left wing of politics in Western nations, one at a time. A bit of political inertia in the US has them doing a victory dance, but the smart money has already fled: Gore cashed out to the oil kingdom of Qatar, and billionaire Steyer’s big climate alarm fundraising this year flopped. Mosher’s boss had to scam money out of the Koch brothers instead by pretending to be a skeptic willing to solidify the temperature record instead of further corrupt it with a mere black box that hasn’t had its parameter settings publicly discussed, except as they troll this site to obscure it all.

@Steven Mosher on July 20, 2014 at 9:05 pm
Steven, it has been a long time since you have actually put forth a coherent comment that wasn’t just a drive-by. It appears that even you could not come to grips with this paper. I agree with your comments about the models, but your final comments were the most telling as to what this paper was intended for.

Let’s call it the Texas Scatter-Gun Turkey Shoot Fallacy. You shoot a shotgun repeatedly into the air. Then you go to the grocery store and buy several frozen turkeys. You present them as the turkeys you would have shot had the grocery store not gotten them first.

Would these be the same 4 models that skid along at the lower range of the CMIP5 set and show a slight warming and even a return to about a 0.5K anomaly by 2100? Would that be the reason for the lack of identification?

Is that Lewandowsky’s contribution, or is some work by Oreskes in Comparative Historical Forecasting of Climate Change or suchlike the genesis of this?

Steven Mosher says: July 20, 2014 at 3:35 pm
“B) even if you had A, you need the initial conditions NAILED in 1850”

GT trends are set by the wider equatorial area; as it happens, most of it is the Pacific, Indian and Atlantic oceans, the Atlantic being only a minor player, simply because of its geographically much smaller equatorial belt area.
The Pacific is regulated by ENSO (see Mr. Tisdale’s comments), which is on the downtrend, while the Indian and Atlantic oceans are still in the warm phase. Divergence in the trends is the most likely cause of the pause.
I suspect that tectonics is the principal driver of ENSO, while the Atlantic and Indian Oceans are linked to the Arctic.
None of these are predictable and thus not possible to model; furthermore, I suspect that what 1850 was like has little or next to nothing to do with the current GT trend.

Check the link below: Peter Hannam of the Sydney Morning Herald has now pontificated that fewer sunspots are terrible from a global warming (not that the warming has stopped, mind you!!!) perspective because, well, what if the spots come back? Then we’re all certainly doomed!!!!

I read through Peter Hannam’s articles on climate change and global warming.
Conclusion: he’s in the tank, beyond help, a CAGW adherent who doesn’t like facts getting in the way of his beliefs. Emailing him will make no difference. He is stupid, and by definition, you can’t fix stupid.

One other thing, if I may, about Peter Hannam, from the Sydney Morning Herald. His face. That expression.

It’s that “I’m deeply concerned about the future of humanity, so don’t question anything I tell you” look, essential in progressive circles. Man, Hannam’s got it. In spades. He is so morally superior, the poor dumb bugger!

In theory, the additional longwave radiation from manmade greenhouse gases provides the added energy to warm the oceans. In reality, the additional longwave radiation can only penetrate the top few millimetres of the oceans, but that is the layer where evaporation takes place, so most of the additional energy is lost to evaporation. Field tests (without experimental control) show that the warming of the ocean skin is so small that it would have a negligible impact on the warming of the mixed layer.

Chris Marlowe says: “This is indeed strange because Environmental Research Letters recently rejected a paper on the same topic by more distinguished scientists than these, including Dr. Lennart Bengtsson.”

So, what really is the relation of the warmists to ENSO? How do they imagine ENSO factors into the theory of the vapour-amplified CO2 greenhouse effect? Is it that the amplified greenhouse effect pushes the heat into the oceans and it gets released by more frequent El Niños? Considering their earlier stance that ENSO cannot have a decadal or multidecadal effect, it seems they make this stuff up as they go. They gave McLean et al. a tough time when they tried to imply that 70% of climate change originated from ENSO.

evanmjones says: “Okay, now I understand what you were saying earlier to me in an earlier post. (I was quite baffled at your response.) You are saying SO (atmospheric) phase drives PDO (SST) phase and that this either increases or decreases the rate of upwelling over a positive or negative phase. Is that right?”

Close but ENSO is not only atmospheric; ENSO is a coupled ocean-atmosphere process.

Maybe it’s easier to think of the PDO as nothing more than an index that describes where in the North Pacific the surface temperatures are warm and cool relative to one another (like a temperature-difference index between east and west), but the PDO does not explain the temperature of the North Pacific itself.

Angech says: “Shouldn’t we be saying 14 out of 18 (or 38) climate models proved completely wrong in new study and the other 4 are mostly wrong…”

Except that those 14 are only for the period of 1998–2012. If we back up another year, there is another group of bad models that may or may not include the same bad models, and there might be more or fewer of them. We don’t know, because Risbey et al. did not identify which models were good or bad for any time period.

“AKSurveyor says:
July 20, 2014 at 11:00 pm
@Steven Mosher on July 20, 2014 at 9:05 pm
Steven, it has been a long time since you have actually put forth a coherent comment that wasn’t just a drive-by.”

Perhaps you should be inquisitive and wonder why.
It used to be that one could have a reasonably good discussion here.
But now I come in and I see 90% of the comments are the same thing: “drive-bys” on a paper posted for your consideration.

So, if I were the author of this paper and saw all the drive-bys, well, I’d drive by.

See how annoying it is when you read a drive-by?

Now put yourself in the author’s shoes (yuk, Lewandowsky), but just do that.
Then start to read the thread.
Count the drive-bys.
Count the “amens,” brother.

Want to know why my counter drive-bys stick out?

Ask yourself.

Folks who demand a debate and then offer up forums for debate that are downright intolerable amuse me.

Judith Curry’s place is no better, so I’m not singling WUWT out. And RC is intolerable.

Any hope of having an intelligent discussion about the problem of phasing (and why Bob doesn’t get it) is lost.

… an English major and has no such math qualifications so it makes no sense why you would be interested in his opinion.

What made no sense was for the other commenter to suggest that we needed to listen to mathematicians in the first place. What is needed more is the opinion of someone qualified in statistics like, say, William M. Briggs. But in a more general sense what we all need to do is to listen to those who can use logic in a rigorous fashion and are honest enough to do so even when the outcome goes against their prejudices.

I know that up until the 70s, courses in formal logic were often taught by the Philosophy profs rather than the mathematicians in some universities. I asked my favorite math prof why that was in class one day and he told us that mathematicians did not have a lock on logic and that the Philosophy department made perfect sense for many reasons.

The prime example of a mathematician producing erroneous stuff on climate is that horrible example of “Dr” Mann and his hockey stick, made of blatantly misused statistical methods. So, please, let us judge comments on their merits and not judge the commenter on his/her supposed qualifications. Some of the worst advice ever given to the public came from people with a “Dr.” in front of their names.

There is no reason why learning should stop just because your formal schooling has ended. (h/t to Mark Twain I think) For this reason, every science “paper” should be presented in an open forum where everyone on the planet can read it if they want to, and many can comment on the paper if they choose to. Open, honest, transparent, reproducible, … what science was claimed to be when I was just a lad.

If anyone’s wondering why Lewandowsky & Oreskes were able to get their names attached to a climate science paper – a bit of Googling on James Risbey, the lead author, may provide clues.

He has appeared with Lew & Oreskes before, in Lew’s academic alarmist journal of choice The Conversation – venturing outside his area of expertise and dipping his toes into the murky waters of “climate psychology”:-

TBear says:
July 21, 2014 at 12:11 am
One other thing, if I may, about Peter Hannam, from the Sydney Morning Herald. His face. That expression.
==========================================================

… Folks who demand a debate and then offer up forums for debate that are downright intolerable amuse me.

Judith Curry’s place is no better, so I’m not singling WUWT out. And RC is intolerable.

Any hope of having an intelligent discussion about the problem of phasing (and why Bob doesn’t get it) is lost.

so. I amuse myself and annoy you.

In fact, most of the time you only amuse me. I don’t get annoyed at the biased illogic that your comments display here continually. I always try to remember … “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” (Upton Sinclair)

Those who comment here who work for one of the duplicitous data-sets that have been “adjusted” to show what the paymasters want to see are no better than the con-men who steal the life savings of old widows. But I don’t get angry or annoyed, there are worse villains on the planet and karma will take care of you anyway. I get annoyed at those who mention you as if you are important enough to waste time on every darn thread.

Bob Tisdale says:
July 21, 2014 at 12:44 am
…….
“In theory. the additional longwave radiation from manmade greenhouse gases provides the added energy to warm the oceans. In reality, the additional longwave radiation can only penetrate the top few millimeters of the oceans, but that is the layer where evaporation takes place so most of the additional energy is lost to evaporation. Field tests (without experimental control) show that the warming of the ocean skin is so small that it would have a negligible impact on the warming of the mixed layer.”
///////////////////////

Bob

People frequently, and in my opinion incorrectly, assert that longwave radiation can penetrate the top few millimetres. I would suggest that this is erroneous; it is microns, not millimetres.

The optical absorption characteristics of LWIR in water are such that 20% is fully absorbed within just 1 micron, 40% within 2 microns, 60% within 4 microns, and 83% within 10 microns. See, our friends at:

From this alone, one can see that 60% of LWIR does not make it beyond 4 microns of depth. However, and this is relevant and should not be overlooked, DWLWIR is omnidirectional. So approximately 20% of DWLWIR has a grazing angle relative to the ocean of no more than about 20 deg, approximately 30% has a grazing angle of no more than about 30 deg, and so on.

Given the omnidirectional nature of DWLWIR, a little under 80% of all DWLWIR must be fully absorbed within the first 4 microns of the ocean, and does not penetrate the ocean below.
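The quoted depth-absorption figures are roughly consistent with a single-exponential (Beer-Lambert) fall-off. A quick sketch, calibrating the coefficient from the 20%-in-1-micron figure given above (real LWIR absorption in water is strongly wavelength-dependent, so a single coefficient is only an approximate consistency check, not an exact model):

```python
import math

# If 20% of LWIR is absorbed in the first 1 micron, a single-exponential
# (Beer-Lambert) fall-off implies an absorption coefficient of:
alpha = -math.log(1 - 0.20)  # per micron, about 0.223

def absorbed(depth_um):
    """Fraction of the incident LWIR absorbed within depth_um microns."""
    return 1 - math.exp(-alpha * depth_um)

# Compare the idealized curve with the fractions quoted in the comment.
for depth, quoted in [(1, 0.20), (2, 0.40), (4, 0.60), (10, 0.83)]:
    print("%2d um: model %.2f, quoted %.2f" % (depth, absorbed(depth), quoted))
```

The single-coefficient curve tracks the quoted fractions to within a few percent out to 10 microns, where the broadband nature of the real absorption starts to show.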

So very little, if any at all, of DWLWIR even makes it to 6 microns! Two issues are raised. The first and foremost is whether the energy that is absorbed within the first 6 microns can in any way whatsoever find its way down to depth. Second, to what extent does any DWLWIR actually find its way to the oceans in the first place? In particular, how much of it is fully absorbed by wind-swept spray and spume, which acts like an LWIR block (much the same way as a parasol, or sunscreen, may be used to shield solar irradiance), and which is raging over large areas of the ocean all the time (every day, somewhere over the oceans, there are large storms of force 7 and above, sometimes covering areas the size of countries, where the very top microns of the ocean have been ripped off it and are airborne, effectively divorced from the ocean below).

Looking at the first issue: what are the processes that would carry (dissipate) energy absorbed within the first 6 microns to depth, and at what rate? It cannot be by conduction, since the energy flux is upwards in the first few millimetres of the ocean. See http://en.wikipedia.org/wiki/File:MODIS_and_AIRS_SST_comp_fig2.i.jpg

You will see from that that the very top surface of the ocean is cooler than the ocean below (this is not surprising, since the top of the ocean is evaporating, and due to latent heat this cools the water from where such evaporation takes place). So we know that all the energy absorbed in the first 6 microns is not carried to depth by conduction, unless our understanding of thermodynamics is wrong and one can conduct cold against the direction of the energy flux. I suggest that is doubtful.

The only other process that I have seen mentioned is ocean overturning. However, there is a problem with this, namely that it is a slow mechanical process, and possibly one that is diurnal only. The heart of the problem is that the energy from DWLWIR is being absorbed in the first 6 microns at speed, and the speed of such absorption far exceeds the very slow rate at which the ocean is overturned. Thus even if energy in the top 6 microns could be overturned, it is not being overturned quickly enough; by which I mean, the energy absorbed in the top 6 microns is not being carried down to depth, and thereby dissipated and diluted, at a rate faster than the rate at which evaporation would be driven.

There is so much theoretical DWLWIR energy absorbed in the first 4 microns (and I mean 4) that it would be sufficient to drive between 14 and 20 metres of rainfall annually. We do not see that amount of rainfall, so something is amiss somewhere.
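A back-of-envelope cross-check: dividing an energy flux by the latent heat of vaporization gives the depth of water it could evaporate per year. The ~340 W/m² global-mean DWLWIR figure used here is an assumption of this sketch, not a number from the comment above:

```python
# Depth of ocean water that a given energy flux could evaporate per year.
DWLWIR = 340.0            # W/m^2, assumed global-mean downwelling longwave
L_VAP = 2.45e6            # J/kg, latent heat of vaporization of water
RHO = 1000.0              # kg/m^3, density of water
SECONDS_PER_YEAR = 3.156e7

mass_per_m2_yr = DWLWIR * SECONDS_PER_YEAR / L_VAP   # kg of water per m^2 per year
depth_m_per_yr = mass_per_m2_yr / RHO                # metres of water per year
print("%.1f m/yr" % depth_m_per_yr)                  # about 4.4 m/yr
```

On these assumptions, devoting the entire flux to evaporation yields closer to 4 m/yr than 14–20 m, so the quoted range must rest on different inputs.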

One explanation is that DWLWIR lacks the capability to perform sensible work, i.e., it cannot heat the oceans, because if it could heat the oceans, the oceans would boil off from the top down (or at any rate, we would see massive amounts of annual rainfall). Another explanation is that there is some process going on (other than ocean overturning) that we do not know about. Yet a further explanation may be some form of photonic exchange taking place at the very surface.

I do not know what the explanation is, but one thing is clear: there is a significant issue with the K&T energy budget, and with the GHG theory that DWLWIR in some way heats the oceans and prevents them from freezing. There is something we do not know about, still less understand.

But it is important to be clear about the facts. DWLWIR is not absorbed in millimetres but rather in microns, and if our optical physics is correct, about 80% of all DWLWIR is absorbed within just 6 microns!

The oceans are a GH liquid. They are opaque to LWIR and are in effect an LWIR block. The fine layer of wind-swept spray and spume, which consists of water droplets more than a few microns in size and which rages over perhaps 30% of the oceans, to a greater or lesser extent, all the time, means that much of the DWLWIR never reaches the oceans in the first place, since it is absorbed within the water droplets of the spray and spume, half of which is radiated upwards and away from the oceans. This is not taken account of in the K&T energy budget; it takes account of reflected solar, but fails to take account of reflected DWLWIR, which is reflected at relatively low altitude above much of the oceans and therefore never reaches the ocean surface below, but remains part of the energised water vapour which powers the clouds above, etc.

We do not understand the oceans, nor the atmosphere immediately above the oceans, and the understanding of this is the key to understanding climate science.

What do the best 4 models project the future warming will be by 2030, 2050 and 2100?

It will be interesting to test their predictive skill going forward.

It will also be interesting to see whether they suggest that future warming will be moderate only.

The newspapers are already latching on to the story that the models are correct and have been validated, etc. Well, OK, we know how superficial that is, but what do those ‘correct’ and ‘validated’ models project about future warming? The press should also make that clear.

As the MSM pick up the wrong end of the stick, the value of Oreskes and Lew becomes apparent. They bookend the paper: one is a writer of historical faction, the other a writer of hysterical fiction. Together, they ensure that the Lie is half-way round the World before the Truth has got its boots on. Good, solid, Leftist propaganda trickery.

Surely the only quality data that exists is from the US and the UK, possibly supplemented, to a limited extent, by data from Scandinavia, Germany and France. At a stretch, maybe Russia and Japan could be thrown into the mix.

The idea that we have sound data for global temperatures pre-war is farcical in the extreme. The Southern Hemisphere is sparsely populated, and quality data on southern continents is sparse indeed. Prior to ARGO, there is no quality data for the oceans, and ARGO was ‘adjusted’ shortly after inception since it was not showing the oceans to be warming. We have yet to see whether there is an inherent bias in ARGO, since the buoys are free-floating and drift with currents, which are themselves temperature-related and could therefore potentially introduce a bias.

That is why the contiguous US temp data, and CET are of such importance, at any rate from the land temperature record point of view.

Steven Mosher says: “any hope of having an intelligent discussion about the problem of phasing ( and why bob doesnt get it)”

Bob understands the problems associated with phasing, and if memory serves, I discussed the primary one a number of times in my post. The primary one is that the climate modelers still cannot simulate the coupled ocean-atmosphere processes that are known as ENSO and the Atlantic Multidecadal Oscillation. They cannot, in any way imaginable, hope to be able to place the multidecadal variations in ENSO and in the sea surface temperatures of the North Atlantic (and the North Pacific) in phase with nature until they can first simulate the basic processes. Your general “problem of phasing” is a bad excuse for the even larger problem related to processes. I get it. You, Steven, are ignoring the blatantly obvious.

You fail to take into account the fact that liquid water, being a very strong absorber of IR, is an equally strong emitter. So if the temperature of the air and the water is the same, virtually the same amount of energy is emitted as IR radiation from a water surface as is absorbed by it. No need for any mysterious surface processes. You have been fooled by the constant carping of “climate scientists” about the huge “back radiation”. Well, it’s not “back radiation”, it’s “omnidirectional radiation”, and it is only the (small) anisotropy of the radiation field that is of any importance for the temperature. This is almost never emphasized, because it would make it obvious that the dominant process in determining surface temperature isn’t IR radiation (which is influenced by GHGs) but rather convection, particularly the latent heat of convecting water vapour.

As usual the devil is in the details. Liquid water has a continuous IR spectrum, not a band spectrum like CO2 or water vapour. However it is far from being a black body. Also the temperature of the water and the air are usually not exactly the same (though they are assumed to be in global temperature estimates) and evaporation is a further mechanism removing heat from the surface layer. So, locally and over short time periods there is a net flow of heat either into or out of the water. In the long term these always balance out (if not, the oceans would either have frozen or boiled away billions of years ago), but short-term quite a few Hiroshima bombs per second most likely are going one way or the other most of the time.
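The equal-temperature claim above can be checked with simple Stefan-Boltzmann arithmetic. A minimal sketch only: the 288 K surface temperature and the gray-body treatment of the overlying air are illustrative assumptions, not measurements.

```python
# Gray-body IR flux F = eps * sigma * T^4 (Stefan-Boltzmann law).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def ir_flux(temp_k, emissivity=1.0):
    """Hemispheric IR flux (w/m2) from a gray body at temp_k."""
    return emissivity * SIGMA * temp_k ** 4

sea = ir_flux(288.0)  # ~15 C sea surface: about 390 w/m2
air = ir_flux(288.0)  # air treated (crudely) as a gray body at the same temp
print(f"sea emits {sea:.0f} w/m2, air emits {air:.0f} w/m2, net {sea - air:.1f}")
```

At equal temperatures and equal emissivities the two fluxes cancel exactly; in reality water’s emissivity (~0.96) and the band spectrum of the air make the cancellation only approximate, which is the point made in the reply below.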

You’ve lost me completely… GCMs don’t reproduce La Niña or El Niño cycles. Or at least, a few have attempted “La Niña/El Niño”-like patterns with mixed results. So isn’t it futile to complain that Bob Tisdale doesn’t “get” “phasing” when Bob’s point is that the models are nowhere close to even getting to such a discussion yet?

Anthony Watts made a similar point earlier on. If you pick a few models that happen to replicate SSTs well in this region, the only possible explanation is pot luck. It doesn’t tell you anything about the ability of those models to say anything about the future.

I think you’re trying too hard to show everyone here that you’re a superior intelligence. You need to chill.

Very true. There is no continuous record from anywhere in Antarctica before 1957, only a few isolated years here and there when a scientific expedition was around.
Furthermore, Southern Ocean SST data are much worse for the 1930s than for, e.g., 1850-1914. The reason is that the sailing ships then still used for transcontinental trade sailed quite far south to catch the strong winds in the “roaring forties” and the “howling fifties”. This trade was essentially dead by 1920, and consequently the Royal Navy and the New Zealand government even stopped their occasional visits to the subantarctic islands to look for wrecked sailors. The Depression of the ’30s didn’t help either: there was little money for scientific expeditions, and many whaling stations were abandoned. So there are actually practically no data south of latitude 40 (not even halfway between the equator and the pole).

I do hope everything is going OK, Jim, and you have a complete and successful recovery. Unfortunately, on the 16th of April this year I lost my elder brother to an inoperable butterfly tumour (back and both sides of the brain). He was only diagnosed six months prior. He was only 59.

Mr. Mosher is an English major and has no such math qualifications, so it is unclear why you would be interested in his opinion.

Because he’s an applied statistician. As far as I’m aware he’s written a significant amount of open source code that is used by scientists and statisticians alike. You seem to make the mistake of thinking that a qualification earns a person a set of skills rather than the other way round.

Richard, I guess this is off topic but everybody is talking about the oceans heating and the mechanisms by which they are supposedly heating.
Downwelling long-wave infrared radiation doesn’t seem to be the mechanism, if we are to believe the experts, which leaves the heat-absorption mechanism of the oceans somewhat up in the air.
[ alas the pun! ]
Ocean overturning still requires a large volume of ocean water to have some energy-absorption mechanism, and that can only be solar, or direct heat absorption from the air in contact with the surface layer, for the water to warm enough to contribute to ocean heat through the claimed overturning process.

Being a land lubber with the nearest ocean a couple of hundred kilometres away I see some real problems in the present claims on ocean heating or not heating by any recognised mechanisms.
[ Regretfully I missed Judith’s post on this that you referred to ]

First, as a youngster I used to swim in the small Australian farm dams / tanks, which cover a couple of hundred square metres of surface or less and range from a couple of metres deep to ten or so metres [ all in feet in those long-gone days of yore ].

On warm and hot days with lots of sunshine and not much wind, the top couple of feet of water (err, half a metre or so) would be very stratified, with very warm water in those first couple of feet of depth and then a very sharp transition to cool-to-cold water deeper down.
Warm-to-hot but overcast days usually meant the surface couple of feet / half a metre of water didn’t heat anywhere near as much.

On cold days that top surface cooled down pretty darn quick, although the stratification was still there, with colder water at depth.
No overturning there, and very little wind and spray effect in that dam / tank, which was usually enclosed by the high earth banks formed from the soil scooped out to dig it.

That’s one, and a personal experience.
Next is also non-oceanic: the solar pond. I’ll use the initial Wiki description below:
_________________
A solar pond is simply a pool of saltwater which collects and stores solar thermal energy. The saltwater naturally forms a vertical salinity gradient also known as a “halocline”, in which low-salinity water floats on top of high-salinity water. The layers of salt solutions increase in concentration (and therefore density) with depth. Below a certain depth, the solution has a uniformly high salt concentration.

There are 3 distinct layers of water in the pond:

The top layer, which has a low salt content.
An intermediate insulating layer with a salt gradient, which establishes a density gradient that prevents heat exchange by natural convection.
The bottom layer, which has a high salt content.
If the water is relatively translucent, and the pond’s bottom has high optical absorption, then nearly all of the incident solar radiation (sunlight) will go into heating the bottom layer.

When solar energy is absorbed in the water, its temperature increases, causing thermal expansion and reduced density.
_______________
As in that description;
“If the water is relatively translucent, and the pond’s bottom has high optical absorption, then nearly all of the incident solar radiation (sunlight) will go into heating the bottom layer”

So we have small, isolated, non-overturning farm dams, relatively protected by earth banks and little affected by wind, with half a metre or more of very warm water stratified on the top surface on sunshine-dominated warm days.

We have well-known, even commercial, salinity-stratified solar pond systems where solar heat warms the highly saline solution at or near the BOTTOM of a pond a metre or more deep.
And yet the experts are saying that solar infrared radiation can’t and doesn’t penetrate beyond a few microns / millimetres or whatever to warm the ocean waters, but apparently it does so.
Or at least something solar seems to do so for non-oceanic waters.
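The stabilization mechanism in the solar-pond description can be illustrated with a simplified linear equation of state for saltwater. This is a back-of-the-envelope sketch: the coefficients are textbook order-of-magnitude values, and the layer temperatures and salinities are made up for illustration, not measured from any pond.

```python
# Simplified linear equation of state for saltwater density (kg/m^3):
#   rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0, T0, S0 = 1000.0, 20.0, 0.0   # reference density, temperature (C), salinity (g/kg)
ALPHA = 2.5e-4                      # thermal expansion coefficient (1/K), illustrative
BETA = 7.6e-4                       # haline contraction coefficient (per g/kg), illustrative

def density(temp_c, sal):
    """Density from the linear equation of state above."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (sal - S0))

top = density(25.0, 10.0)     # warm, nearly fresh surface layer
bottom = density(80.0, 250.0) # much hotter but very salty bottom layer
print(f"top {top:.1f} kg/m^3, bottom {bottom:.1f} kg/m^3")
```

The hot bottom layer remains denser than the cooler surface layer because the haline term dominates the thermal one, so convection (and with it upward heat loss) is suppressed, which is exactly the “insulating layer” the Wiki quote describes.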

From my sojourns to the ocean [ Rosebud on Port Phillip Bay in Victoria ] for holidays as a little kid some seventy years ago, I can still recall the identical pattern: warm surface layers in the Bay waters, and the often very cold water below that warm layer, a couple of feet down from the surface.

Why the difference between the science claims on ocean heating, or the inability of solar to heat the ocean waters to some depth, and the real-time practical experience in farm dams and solar ponds?

James Risbey sounds like a moron. He spent about a minute telling us that we should be really, really worried without actually telling us why and without any attempt at providing evidence in support of his shrill assertions. If we’re so certain and have all the answers, why on Earth are we paying him to do any more research? Surely spending his pay and research money on engineering research would be a better use of funds.

A) you need to have the physics of the process NAILED. They don’t.
B) even if you had A, you need the initial conditions NAILED in 1850.

And, for numbers calculated by numerical methods, the most important, by far:

C) you need numerical methods that accurately preserve distributions in time and space: dispersionless and dissipationless numerical methods.

In my industry, measures of fidelity between calculations and empirical data include considerations of temporal and spatial variations. Calculations that do not correctly represent the distributions in both time and space are considered to be incorrect. Measured phase relationships between related physical phenomena and processes are required to be correctly calculated.

“Why do Australian climate scientists all seem to lead a double life as activists?”

– Moreover, why is it that people on that side of the debate are lauded for their activism, which is seen as compatible with being a good scientist, whilst sceptics are pilloried for even a hint of activism or funding from a perceived vested interest? Just look at the furore, for example, over Lord Lawson getting airtime in the UK.

Calculations that do not correctly represent the distributions in both time and space are considered to be incorrect.

I think this is true in most areas of industry, but it’s not true of academia. The difference is that your job is on the line in industry, while nothing is on the line in academia. I don’t have a problem with this in itself, as we learn as much from failed hypotheses as from successful ones, but it has been allowed to create an environment of irresponsibility.

The other thing I’d like to add is that the commercial value of most climatology is nil. Therefore, there is little real need to get it right. The AGW issue is largely a political one now, and since, like most areas of scientific research, it is of very little practical use, there is no easier way to get research money than to create a scare or facilitate the ambitions of our political class. Just listen to the ambiguous and emotional arguments being made by a “scientist” in Foxgoose’s comment. It is neither measured nor rational.

Here’s another nice chart from Ed Hawkins comparing all of the IPCC CMIP5 models/scenarios to the actual observations. Interesting how 95% of the models/scenarios run high compared to the observations. One could cherry-pick the bottom 5%, I guess.

Some of the points that you raise I would not disagree with, e.g., “the fact that liquid water being a very strong absorber of IR is an equally strong emitter.” However, I consider (with respect) your next statement, “So if the temperature of the air and the water is the same, virtually the same amount of energy is emitted as IR radiation from a water surface as is being absorbed by it”, to be an oversimplification.

Personally, I consider that article to be about the weakest that that author has posted on this blog, for a number of reasons. First, the argument adduced was circular: having set out the energy budget in terms of gross flows, the fact that the equation no longer balanced once the gross element was subtracted was offered as proof that it must be gross flows we are dealing with. Second, it failed to draw a distinction between land and water, and the unique properties that water possesses (evaporation and changes in latent heat etc.). Third, it did not address the issues/problems raised if the gross energy budget is correct.

Now the reason that this is important is that evaporation comes from the very top of the ocean, and is powered by the energy in the top few microns.

The gross energy budget looks at most at what is going on with the oceans as a whole, not what is happening in the top few microns. Why is this important? Because nearly all the DWLWIR is absorbed in the top few microns, whereas almost none of the solar is absorbed there!

So if one looks at the energy budget for the top micron layer of the ocean, one gets almost all of the DWLWIR element but very little of the solar element (perhaps just 1 to 3 w/m2 of the solar energy is absorbed within the first few microns), so the energy budget does not balance!

Looking at the figures used in the article on Radiating the Oceans, you immediately see the problem (and it has a bearing on your comment “…if the temperature of the air and the water is the same, virtually the same amount of energy is emitted as IR radiation from a water surface as is being absorbed by it”).

From the article Radiating the oceans: “We know the radiative losses of the ocean, which depend only on its temperature, and are about 390 w/m2. In addition there are losses of sensible heat (~ 30 w/m2) and evaporative losses (~ 70 w/m2). That’s a total loss of 390 + 30 + 70 = 490 w/m2. But the average solar input to the surface is only about 170 watts/square metre.”

So you see that the oceans are radiating about 390 w/m2 (average figure). DWLWIR is (on average) inputting about 324 w/m2, so these two are far from the same. And of course, the oceans are in addition, at the surface, losing both sensible and latent energy, thereby further exacerbating the problem.

The difference cannot be accounted for by solar energy, because the amount of solar absorbed by the first few microns is not “170 watts/square metre” but rather just 1 to 3 w/m2 (or thereabouts). Solar absorbed in the top few microns does not make up the difference between radiative emissions of 390 w/m2 and back radiation of about 324 w/m2 received.

I agree that the devil is in the detail. The problem that I see is the rate at which energy is being received, emitted and absorbed, the precise locations where this takes place, and the energy flows within those locations. The proposition that vast amounts of DWLWIR are absorbed within just a few microns of the ocean causes a problem: there is so much energy that it cannot be quickly diluted and dissipated, so vast amounts of evaporation would ensue.

The absorption of solar is not concentrated in just a few microns of depth, but spread over many, many metres, so the energy absorbed from solar is dissipated through the sheer volume of liquid within which it is absorbed. It therefore takes a long, long time to warm the water, and drives evaporation at the slow rate that we observe.

The GHG theory seems to suggest that a watt of energy is the same wherever it is received. I do not consider that assumption correct, and I am of the view that it is important to ascertain where that energy is received. That has an impact on how matters play out in the real world in which we live.

The net energy flow equation, in my opinion, better represents the position, since you are looking at the ocean as a whole, and you therefore do not have to differentiate in the same way between the absorption characteristics of solar versus DWLWIR. The oceans are effectively radiating only about 66 w/m2, and the slow absorption of solar over a huge volume powers evaporation at the modest rates that we observe.
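The round numbers traded in this exchange can be tallied in a few lines. A sketch only: the figures are the comment’s own illustrative averages (Kiehl/Trenberth-style round numbers), not measurements.

```python
# Round-number surface energy budget quoted in the comment (all in w/m2).
lw_out = 390    # surface IR emission
dwlwir = 324    # downwelling longwave absorbed at the surface
sensible = 30   # sensible heat loss
latent = 70     # evaporative (latent) heat loss
solar = 170     # average solar absorbed by the ocean (mostly at depth)

gross_loss = lw_out + sensible + latent   # the article's 490 figure
net_ir = lw_out - dwlwir                  # the "net ~66 w/m2" radiating figure
balance = solar + dwlwir - gross_loss     # whole-ocean budget residual

print(f"gross loss {gross_loss}, net IR {net_ir}, residual {balance}")
```

Taken over the whole ocean the budget roughly closes (a small positive residual on these round numbers); the commenter’s argument is that it does not close if the DWLWIR term is confined to the top few microns while the solar term is not.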

Why would Lewandowsky and Oreskes be co-Authors of a paper about ENSO, climate models and sea surface temperature trends?

I would think that it was obvious. Their own research judges whether people have a ‘right to comment’ on ‘climate’ by whether they have published peer-reviewed papers on the subject. Voila! By being co-authors of this flawed paper, they can now both claim to be ‘climate scientists’. That is the sole reason, and it shows the shallowness of their reasoning and arguments.

The old quandary kicks in here: if natural cycles now account for a pause because their negative phase has kicked in, then recent warming might equally be due mostly to the recent positive phase of the same cycles, destroying claims that the main driver is now carbon dioxide.

Bob Tisdale says:
July 21, 2014 at 12:25 am
TRG says: “My question is, how does the ENSO contribute to long term warming, which would otherwise not exist?”

Short answer: Through periodic (chaotic) decreases in cloud cover over the oceans.

Bob, can’t you see the problem with this statement and hence the problem with your view on the PDO? If it were chaotic then how could it show cyclic behavior? Think about it.

I suspect the PDO has a driver that is simply unknown at this time. One of the reasons it is unknown is likely because there is no research funding available. I have commented before that one cause may be the THC/MOC. Through changes in speed it would inhibit or enhance the probability for El Niño events and their strength. Of course, it could be something else entirely. But, it is unlikely to be due to chaos.

I think your problem is viewing the PDO as it is described in the literature, as a pattern. It is obvious to you that the pattern itself is an after-effect of ENSO; however, that doesn’t mean there isn’t a driver of these ENSO changes, which results in the PDO pattern. In fact, given the cyclic behavior, it is more probable that there is a driver at work than chaos.

Anybody who claims that a computer game model can predict the future state of an effectively infinite non-linear chaotic system driven by an unknown number of feedbacks – of which in some cases we cannot even agree on the sign – that is subject to extreme sensitivity to initial conditions is either a fool or a computer salesman.

Ironically, it was Edward Lorenz, a climatologist, who was the first to point this out.

Further, given the “pause”, and that the water vapour feedback which is absolutely essential to the whole AGW alarmism industry has stubbornly refused to put in an appearance (in fact, Solomon et al. showed that stratospheric water vapour declined by 10% in the first decade of the 21st century), I’m surprised anyone bothers churning out such nonsense any longer.

I’m wondering if maybe there are not even 4 best models overall, but rather 4 best models for each time increment at replicating reality; they seem to suggest that in their press release. Hindcasting will always work when you put in reality!

Richard M says: “Bob, can’t you see the problem with this statement and hence the problem with your view on the PDO?”

There’s no problem.

Sorry for not being more specific. My comment “Through periodic (chaotic) decreases in cloud cover over the oceans” had to do with the tropical Pacific and with the North Atlantic and Indian Oceans.

Tropical Pacific: We’ve discussed and illustrated innumerable times how cloud cover over the tropical Pacific is reduced during La Niña events, which allows more sunlight than “normal” to recharge the heat released by El Niños and the warm waters redistributed in their wake.

Okay, back to the North Pacific: the North Pacific, on the other hand, warmed primarily through the redistribution (tropical Pacific to extratropical North Pacific) of warm water left over from the strong El Niño events of 1986/87/88 and 1997/98.

Maybe you can answer a question for me, Richard M. Other than the fact that it’s convenient, why are people so fixated on the PDO? The PDO (the spatial pattern of the sea surface temperature anomalies in the extratropical North Pacific) reflects the after-effects of ENSO and the sea level pressure of the North Pacific. There are no processes through which the PDO can impact global surface temperatures, since the PDO does not itself represent sea surface temperature. I’m at a loss.

If it were chaotic then how could it show cyclic behavior?
=============
the N-body problem is a result of chaos. 2 objects in orbit around each other can be calculated exactly. 3 objects in orbit around each other cannot, except in a few special cases.

yet when you view the 3-body system, it will appear cyclical, as the objects are clearly orbiting in cycles around each other. the problem is that the trajectories of the orbits in the 2-body system will converge on the average, while in the 3-body system they will diverge.

thus, the 2-body system will forever remain in the same pattern, while the 3-body system will flip-flop to new patterns, in a manner that requires infinite precision to calculate. something that is impossible on current computers.
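That sensitivity to precision can be demonstrated without an orbital integrator. A toy sketch using the Lorenz ’63 system (the same Edward Lorenz mentioned elsewhere in this thread); the Euler step size, starting point, and perturbation size are arbitrary illustrative choices:

```python
# Two Lorenz '63 trajectories starting 1e-9 apart diverge to order-one
# separation: the hallmark of sensitive dependence on initial conditions.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-9, 1.0, 20.0)   # same state, perturbed in the 9th decimal place

gap = 0.0
for _ in range(8000):          # integrate 40 time units
    a, b = lorenz_step(a), lorenz_step(b)
    gap = max(gap, max(abs(p - q) for p, q in zip(a, b)))

print(f"largest separation reached: {gap:.3f}")
```

A billionth-part difference in the starting state grows to a separation comparable to the size of the attractor itself, which is why “infinite precision” is the real requirement for long-range deterministic prediction of such systems.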

Mr. Mosher is an English major and has no such math qualifications, so it is unclear why you would be interested in his opinion.

Because he’s an applied statistician. As far as I’m aware he’s written a significant amount of open source code that is used by scientists and statisticians alike. You seem to make the mistake of thinking that a qualification earns a person a set of skills rather than the other way round.

No, he is not; these urban legends never cease to amaze me. He writes uncomplicated, crappy R code like an amateur hack. Do you have any idea how many clowns there are like Mosher who attempt to write “code” and have no professional training or experience as a software developer? The results are laughably bad. The number of technically knowledgeable people in this debate is almost non-existent, and I have been following this for over 8 years. Your comment is embarrassing to my profession.

Here’s an interesting little tidbit from Naomi Oreskes. This superhero coauthored an interesting, writhing heap of intellectualism with Erik Conway entitled “The Collapse of Western Civilization: A View from the Future”, which appeared in Daedalus. Ok, now hang onto your seats for a brief thrill ride, ’cause I’m going to give you a little glimpse of the astounding brilliance contained within:

‘To the historian studying this tragic period of human history, the most astounding fact is that the victims knew what was happening and why. Indeed, they chronicled it in detail precisely because they knew that fossil fuel combustion was to blame.

…

‘A key attribute of the period was that power did not reside in the hands of those who understood the climate system, but rather in political, economic, and social institutions that had a strong interest in maintaining the use of fossil fuels. Historians have labeled this system the carbon-combustion complex:…’

Are you still alive after that brief peek-a-boo? Congratulations! You are composed of stern stuff indeed. What I particularly enjoyed was: “A key attribute of the period was that power did not reside in the hands of those who understood the climate system, …” Wow! Talk about ego. Talk about immature self-importance. Apparently Naomi and her ilk want the “power to reside” in their own goddam hands: the hands of the self-proclaimed climate scientists. Talk about power hunger. These people wanna be emperors, dictators, imperialists. Yikes!

“There’s an unmistakable warming trend over the last 100 years and that warming trend is well simulated by the models for the past, so there’s no reason to distrust the magnitude of future warming trends based on the past 15 years,” Dr Risbey said.

So, if I understand this correctly, Risbey is saying that since the models are able to (somewhat) backcast the trend of the past century post facto, there is no reason to distrust their ability to forecast the next century’s trend, despite the fact that they have failed miserably to forecast the last 15 years?

kadaka (KD Knoebel) says:
July 20, 2014 at 10:37 pm
From Poptech on July 20, 2014 at 9:53 pm:

Mr. Mosher is an English major and has no such math qualifications, so it is unclear why you would be interested in his opinion.

Among other reasons, while Mosher can also come across as an arrogant ass, people know it’s not personal as it’s just his commenting style, and he’s certainly not anonymous and is willing to put his real name to his words.

No, he comes across as an arrogant […] and gets away with it because it is all based on urban legends he helped create and that those who know him never debunked, so I am. Actually, it is personal: he looks down on most everyone here, and you are too clueless to see it. Nothing I say requires my real name and never will.

I don’t see what the problem is. They went dumpster-diving for the best climate garbage they could find, and by jove, they found it! They should be congratulated.
“Eureka!” said one climate dumpster-diver to the other.
“No, you reeka”, said the other.

Folks who demand a debate and then offer up forums for debate that are downright intolerable amuse me.

Judith Curry’s place is no better, so I’m not singling WUWT out. And RC is intolerable.

Yeah, people are not afraid of your mythical status at Judith Curry’s site anymore either. Mosher you are a true clown and cannot handle real debate which is why you run away and hide when the tough questions get asked.

I have compiled similar descriptions for each. Are these binned correctly?

–Also, I never did quite understand how much the IPO overlapped or how much it included or excluded (PDO+SO+NPO? Not sure at all).
–Also, I have the overall description of the IOD, and how it shifts the heat, but for some reason it fails to note which phase is warmer (if there is a warmer phase). Is there a warm vs. cool phase? Or just an East/West shift?

BTW, anyone who is interested in the notes I made on any of these cycles, I’ll be happy to share them.

Final question: There seems to be a tendency to mark the end of the last positive PDO at 2001. But 2007 seems the more logical date, seeing as there was that triple set of El Niños from 2001 to 2007 (which then ended, followed by the 2008 La Niña).

It is legitimate to pluck out the 4 best models and show how well they hindcast, IF you also dismiss the other 34 as having failed to hindcast well and therefore demonstrate reasonableness for prediction AND commit yourself to the 4 (with allowances for tweaking) for future results.

It boggles my mind that the number of models doesn’t decrease year by year. It is as if the modelers say that each model has a piece of the climate right, we aren’t sure which, but together they get it right. But if that were so, they should be able to merge all the models and get the combined parameters right. Or the claim is that the climate can arbitrarily jump from one “system” to another, so that what worked last decade is not the style this decade, and we don’t know which of the styles will be operating next decade.

How do you have 95% certainty when you continue with 38 models, and even then say you don’t know what the end result is likely to be, no “best estimate”?
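The selection problem raised above can be demonstrated with a toy Monte Carlo. Everything here is hypothetical: white-noise series stand in for both the “observations” and the 38 “models”, and have nothing to do with any actual CMIP5 run.

```python
import random

random.seed(0)

def noise_series(n, sd=0.1):
    """White noise: no persistence, so past fit says nothing about future fit."""
    return [random.gauss(0.0, sd) for _ in range(n)]

N_MODELS, N = 38, 100
target = noise_series(N)                       # stand-in "observations"
models = [noise_series(N) for _ in range(N_MODELS)]

def rmse(a, b, lo, hi):
    """Root-mean-square error between two series over the index range [lo, hi)."""
    return (sum((x - y) ** 2 for x, y in zip(a[lo:hi], b[lo:hi])) / (hi - lo)) ** 0.5

# Rank all 38 "models" by hindcast fit on the first half...
hind = sorted(range(N_MODELS), key=lambda i: rmse(models[i], target, 0, N // 2))
best4 = hind[:4]

# ...then score everyone on the unseen second half.
fore_best4 = sum(rmse(models[i], target, N // 2, N) for i in best4) / 4
fore_all = sum(rmse(models[i], target, N // 2, N) for i in range(N_MODELS)) / N_MODELS
print(f"forecast RMSE: best-4 {fore_best4:.3f} vs all-38 {fore_all:.3f}")
```

On pure noise you can always find a “best 4” on the hindcast period, but across different seeds their forecast RMSE lands above the all-38 average about as often as below it: post-hoc selection alone establishes no forecast skill.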

As long as the peak temperature years which occur within any 17-year period continue to fall within the boundaries of the model confidence intervals, the claim will be made by climate scientists that temperature observations are consistent with climate model predictions, regardless of what the calculated slopes of those 17-year trends actually indicate.

Because of how wide the model confidence intervals are, then unless a huge downturn in GMT occurs which continues indefinitely into the future, this kind of claim will be made by climate scientists for the next thirty to fifty years.

The consensus here on WUWT is that another 18 months of the plateau will bring the global temperature below the IPCC’s 95% confidence envelope.

Is it possible to, somehow, calculate a 97% consensus on the models being right, instead of 89% of them being wrong?

How? What part of “the models are NOT accurate over a 15-year period” do you not understand? How can they be correct over the next 84 years, or the next 184, if they cannot get the first 16 even close?

Imagine trying to justify your ruinous gambling addiction to your spouse by selecting, from 38 past bets, the 18 with the ‘best’ (aka least worst) results, and then paring that down even further to the four bets that you actually won. Imagine the response when you tried to claim, even though you were thousands in the hole:

“These tests show that my knowledge and hunches have provided good chances for WINNERS”

No, he is not; these urban legends never cease to amaze me. He writes uncomplicated, crappy R code like an amateur hack.

You could say that about a lot of engineering and science graduates.

Do you have any idea how many clowns there are like Mosher who attempt to write “code” and have no professional training or experience as a software developer?

This seems a bit snooty. I agree that there is a big difference between implementing a conjugate gradient method from first principles in, say, C, and using a single line of R for the same functionality. But you could accuse programmers in C of using a high-level language in order to avoid getting into assembler or even machine code. And I do agree that it’s always better to write things from first principles, as it forces you, if not to understand the maths, at least to appreciate the nuances of what you’re doing. Most young physicists, as far as I can tell, are more familiar with R than with C or FORTRAN these days.

The results are laughably bad.

That’s opinion.

The number of technically knowledgeable people in this debate is almost non-existent, and I have been following this for over 8 years. Your comment is embarrassing to my profession.

I’m sorry you feel that way. I agree that in society today everything is becoming easier to do, so that by pressing a button anyone can employ fairly complex methodology to get results. This is surely a good thing in that it democratises science, but I also accept that there is a flip side: it can give false confidence to individuals who wrongly assume that they know how to use a given technique.

“I suspect the PDO has a driver that is simply unknown at this time. One of the reasons it is unknown is likely because there is no research funding available. I have commented before that one cause may be the THC/MOC. Through changes in speed it would inhibit or enhance the probability for El Niño events and their strength. Of course, it could be something else entirely. But, it is unlikely to be due to chaos.

I think your problem is viewing the PDO as it is described in the literature as a pattern. It is obvious to you that the pattern itself is an after effect of ENSO, however, that doesn’t mean there isn’t a driver of these ENSO changes. Which results in the PDO pattern. In fact, given the cyclic behavior it is more probable that there is a driver than it is chaos at work.”

The PDO is not itself a driver, merely a superficial expression of a certain Pacific climate regime, of which there are several. Changes in the mean state of Pacific climate tend to be congruent north and south of the equator, so they generally seem to be initiated in the tropics. One very well-established climate shift, the 1976/77 one, occurred with a sudden drop in the mean level of pressure gradient from east to west across the vast tropical basin (the SOI), a state which persisted for the next three decades. There were also regime shifts in 1988/89 and 1998/99.

This article exposes in detail how poor the science in the paper in question is. When did defining the bull’s-eye only after taking the shot come to be considered sound scientific method? My initial criticism of Climate Science was not that it was a bad thing but that it was an immature science. When it rapidly morphed into “settled science”, I knew science was in trouble. Anthony has correctly identified the political purpose of this peer-reviewed paper: “It’s all about getting that talking point in the media ‘climate models replicated the pause’, and really little else.”

The damage of the claims of Climate Science is not to our global eco-system, but instead to science. How could science practice go so far off the rails? It is like discovering the Umps fixed the World Series.

I followed the trial in Dover, Pennsylvania where school administrators tried to force the teaching of Intelligent Design in their public science classes. I was horrified at the thought of mixing science with religion. Not because I am against religion, I am against clearly non-scientific practices corrupting science. I read every transcript available for that trial, and the judge in that case ruled easily against the Intelligent Design proponents and chastised them for their deceptiveness.

Climate Science is a much more difficult case since it did get in the front door of science (intelligent design was never able to do this) and then spread like a virus of poor practice and political activism. Climate Science threatens science not by religion but by politics and finance. Can you be an activist and a scientist at the same time? Apparently, in Climate Science you can without scrutiny from the media or peers on how your activism influences your scientific judgment.

I think this is a bit of a trial balloon and that’s why no-name climate scientists and non-climate scientists are the authors. To have two authors who haven’t a clue about sea-land-atmosphere dynamics is absolutely astounding. Even Trenberth, Gavin, et al. should slam this piece of work. I know if I had a butcher, a baker, and a candlestick maker authoring a paper in my field, I would slam the publication for permitting it to see the light of day.

My daughter really hates her chemistry classes; as an industrial engineer, chemistry is a major part of my profession, so she has my sympathy. She would sometimes come to me and ask for help with her homework (I think leaving the choice completely up to her did the trick). She would show me her work, and when she had it wrong, I would show her what it was supposed to look like.

She never argues that she had it kind of right, when she can clearly see the difference. And she never tried to convince me her “work” is proof that humanity is facing its biggest threat ever.

I like to think it’s because she would feel really stupid if she did.

It is legitimate to pluck out the 4 best models and show how well they hindcast, IF you also dismiss the other 34 as having failed to hindcast well and therefore demonstrate reasonableness for prediction AND commit yourself to the 4 (with allowances for tweaking) for future results.

It is also legitimate to point out that all models show little skill regardless of any meaningless attempts to improve their accuracy through averaging. Even if you cherry pick slices from each series and average the optimal slices across all models they’re still wrong and cannot be used for policy unless that policy is intended to show that climate evades description by modeling. I’m really surprised the same people who claim garbage in – invalid analysis out regarding Dr. Evans’ work accept dodgy modeled garbage.

There are only 2 words, albeit very important words, missing in the abstract. Can you guess which?
Here they are in CAPITALS: “These tests show that ONLY FOUR climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.”

These two words change the entire significance of the paper.

Back to the real world: ENSO continues its ~300-month cycle. The previous cycle ended with the ’97/’98 El Niño. The current one will therefore likely last to ~2023. The previous cycle was El Niño dominated; the current cycle is La Niña dominated. Can you guess what that will do to global surface temperatures?

I’ll never understand the Mosher hate. He does work, he puts his name on it, and he stands and faces the music. You don’t agree with him? Tell him! He actually makes himself public, that is a lot more than most. Besides, I am still impressed with him pulling Peter Gleick out of his arse. Life time pass after that one.

I agree with your points about Mosher, he does do things above board. I think most of the issues though have to do with his staccato writing and commenting style, which seems at times like he has some sort of climatic commenter Tourette’s syndrome. For an English major, it seems counter to his own training for him to write as he does. I do give him props on analyzing Gleick’s language though.

When (if?) the dust finally settles on this nonsense, the amount of material online is going to enable me to carry out one mega-research project on attempts to maintain orthodoxy online, including such funny characters as Dana.

Thinking about how – somehow – “consensus” became synonymous with both good scientific knowledge and practice, I’m reminded of what one explorer of the epistemology of the web had to say:

“Another New York Times piece took up the problematic effects of perfecting the information stream in science. The criticism is similar to the one above. The piece said that physicists—Web inventors and Web innovators—are becoming wary of a fallowing of the field by the Web. Idiosyncratic avenues of research supposedly are being abandoned because of increasingly perfected information flows.

[I]nstead of fostering many independent approaches to cracking each difficult
problem, the Web, by offering scientists a place to post their new results
immediately, can create a global bandwagon in which once-isolated scientists
rush to become part of the latest trend. . . . “[S]corekeeping” Web sites, which automatically track the number of times a paper is cited by others, create . . .social pressure against marching to a different drummer.

The scorekeeping Web sites are the culprit in the story of the flattening of difference and the drying up of prospects for radical innovation previously brought about by relative isolation.”

The comment in the Guardian Non Nomen is referring to was made by a certain Steve Keogh, who a few posts later admits:
[quote] Oh okay, no I don’t have access to the paper either. Only this article and the summary. [/quote]

The comment in the Guardian Non Nomen is referring to was made by a certain Steve Keogh, who a few posts later admits:
[quote] Oh okay, no I don’t have access to the paper either. Only this article and the summary. [/quote]
________________________
Tks a lot. I just overread it…

The use of the models in the manner described by many above is a demonstration of the ‘study’ being yet another example of climate quackery. That is all it is. Can you imagine if the moonshots were dependent on such thinking and methods? Of course not. Risking lives and spending billions used to be something that was carefully vetted by independent analysts.

The Texas sharpshooter analogy is appropriate. It is after-the-fact wiggle-matching. I have some garden snails that are willing to put slime to paper to produce wiggles of equal validity for predicting the future.

As long as the peak temperature years which occur within any 17-year period continue to fall within the boundaries of the model confidence intervals, the claim will be made by climate scientists that temperature observations are consistent with climate model predictions, regardless of what the calculated slopes of those 17-year trends actually indicate.

Because of how wide the model confidence intervals are, unless a huge downturn in GMT occurs which continues indefinitely into the future, this kind of claim will be made by climate scientists for the next thirty to fifty years.

The consensus here on WUWT is that another 18 months of the plateau will bring the global temperature below the IPCC’s 95% confidence envelope.

===================================================

When they make the claim that temperature observations are consistent with the predictions of the climate models for some chosen interval, say 17 years, climate scientists are not looking at the central trend line of that period and at whether or not the entire line falls within the model confidence intervals.

When a peak year inside of that interval falls within the aggregated model CI, that is proof enough for them that observations are consistent with the models for the entire 17 year period.

This is what the climate scientists mean when they say something like, “Two of the hottest years on record occurred in 2005 and 2010.”

Those two peaks fall within the model CI’s, and so from their point of view, all is well with the models — regardless of the trend line’s final calculated location.

If the pause in GMT continues as a flat or slightly rising trend, we will be hearing much more of this kind of talk from climate scientists in the future, because some number of peak years will always be occurring inside the model CI’s as the flat or slowly rising general trend in GMT continues forward.
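The asymmetry described above can be shown with a toy numerical sketch (all numbers invented for illustration; nothing here comes from an actual model archive): a flat observed series with one warm peak year sits inside a wide model envelope, even though its calculated trend lies below every model’s trend.

```python
import numpy as np

years = np.arange(17)  # a 17-year window, as in the comment above

# Hypothetical ensemble: 20 models all warming at 0.02-0.03 K/yr,
# wrapped in a fixed toy confidence half-width.
slopes = np.linspace(0.02, 0.03, 20)
ensemble = np.outer(slopes, years)             # each row: slope * year
half_width = 0.15
lower = ensemble.min(axis=0) - half_width
upper = ensemble.max(axis=0) + half_width

# "Observations": perfectly flat, with a single El Nino-style peak year.
obs = np.zeros(len(years))
obs[8] = 0.25

peak_inside = lower[8] <= obs[8] <= upper[8]   # the peak year is "consistent"
obs_trend = np.polyfit(years, obs, 1)[0]       # but the OLS trend is ~0 K/yr
trend_inside = slopes.min() <= obs_trend       # ...and below every model trend

print(bool(peak_inside), bool(trend_inside))   # -> True False
```

So “the hottest years fall within the model range” and “the 17-year trend is consistent with the models” are very different claims, and only the first is guaranteed to keep coming true.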

BobT: It seems to me that DN at the Guardian is essentially saying that the paper shows that if you happen to map the temperatures generated by the 4 models that best represent the ENSO index then when you aggregate it all together you find that you explain the hiatus. If this is essentially correct, then two things occur to me. First, it seems as though little more has been done than to remap the hiatus from a measure of the hiatus – since it is unclear how ENSO is incorporated into these models except as an overall temperature change index. Second, your comparison of panels 5a and 5c show that the best models do not actually map ENSO. So it seems to me that this boils down to the idea that if we take a little from this model and a little from that one we can recreate the hiatus. The talk about ENSO is handwaving.

Non Nomen: I was over at the Guardian and read the Steve Keogh comment about not having read the paper. It was hilarious!! I was following his ridiculous ripostes all the way and I realised he hadn’t read it. I so wanted to ask him how much he’d paid to read the paper ($32 according to Bob) but I can’t be arsed to get on the CiF. And I knew he wouldn’t know.
The other ‘fnar fnar’ moment was Dana explaining why they used 30 year periods while trying to defend the 15 year periods of the paper – and then later saying that trends can be any length of time they like. Priceless.

“…If our continued emissions fuel warming of more than a couple of degrees, that is likely to commit us to irreversible melting of the Greenland and West Antarctic ice sheets. That, in turn, locks us in to sea level rises of tens of metres at rates, foreseeably, in the range of several metres per century.”

I wanted to comment there, and link to this WUWT article. But I’m not willing to register at a blog run by a School of Experimental Psychology goofball who fancies himself a physicist, or a climatologist. Or anything in the hard sciences.

Non Nomen: I was over at the Guardian and read the Steve Keogh comment about not having read the paper. It was hilarious!! I was following his ridiculous ripostes all the way and I realised he hadn’t read it. I so wanted to ask him how much he’d paid to read the paper ($32 according to Bob) but I can’t be arsed to get on the CiF. And I knew he wouldn’t know.
The other ‘fnar fnar’ moment was Dana explaining why they used 30 year periods while trying to defend the 15 year periods of the paper – and then later saying that trends can be any length of time they like. Priceless.
_______________________
I found that Keogh comment and immediately thought about someone defending the indefensible, not because of the facts and the truth but just because of blind belief in some strange stranger’s assertions. Mind-boggling, that is. BTW, they are true deniers: they even deny the 17-years+ hiatus. And the Guardian obviously has problems with freedom of speech; quite a lot of comments have been deleted, of course those thought to be coming from sceptics.
What a wonderful world of mental constraint….

This just gets more entertaining by the minute – over at the Guardian. One of their persistent commenters is now saying that ‘we cannot predict the ENSO sequence’. Funny that. I guess that means that ENSO is not part of the global climate which, apparently, the models can predict.

And these are people with a visceral hatred of WUWT – yet demonstrate that they have never been here to read the counter to their ‘beliefs’.

I won’t dirty myself on Dana’s Guardian articles anymore. He deleted my comment (in part) to make the remainder look unjustified and then kicked the remainder. He knew his argument was unfounded as the rebuttal was already posted – but he deleted the rebuttal before sliming across the wound.

Still, with the thought police busy covering Mr Banana’s blog, I had free rein to embarrass them on the “Paterson’s a Nutter” article.
He is actually, in my opinion, but that doesn’t mean we have to believe the world’s ending if we don’t sacrifice the poor. They left Carrington to blow in the wind.

Gunga Din says:
May 14, 2012 at 1:21 pm
joeldshore says:
May 13, 2012 at 6:10 pm
Gunga Din: The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.
As for me, I would rather hedge my bets on the idea that most of the scientists are right than make a bet that most of the scientists are wrong and a very few scientists plus lots of the ideologues at Heartland and other think-tanks are right…But, then, that is because I trust the scientific process more than I trust right-wing ideological extremism to provide the best scientific information.
=========================================================
What will the price of tea in China be each year for the next 100 years? If Chinese farmers plant less tea, will the replacement crop use more or less CO2? What values would represent those variables? Does salt water sequester or release more or less CO2 than freshwater? If the icecaps melt and increase the volume of saltwater, what effect will that have year by year on CO2? If nations build more dams for drinking water and hydropower, how will that impact CO2? What about the loss of dry land? What values do you give to those variables? If a tree falls in the woods allowing more growth on the forest floor, do the ground plants have a greater or lesser impact on CO2? How many trees will fall in the next 100 years? Values, please. Will the UK continue to pour milk down the drain? How much milk do other countries pour down the drain? What if they pour it on the ground instead? Does it make a difference if we’re talking cow milk or goat milk? Does putting scraps of cheese down the garbage disposal have a greater or lesser impact than putting it in the trash or composting it? Will Iran try to nuke Israel? Pakistan India? India Pakistan? North Korea South Korea? In the next 100 years what other nations might obtain nukes and launch? Your formula will need values. How many volcanoes will erupt? How large will those eruptions be? How many new ones will develop and erupt? Undersea vents? What effect will they all have year by year? We need numbers for all these things. Will the predicted “extreme weather” events kill many people? What impact will the erasure of those carbon footprints have year by year? Of course there’s this little thing called the Sun and its variability. Year by year numbers, please. If a butterfly flaps its wings in China, will forcings cause a tornado in Kansas? Of course, the formula all these numbers are plugged into will have to accurately reflect each one’s impact on all of the other values and numbers mentioned so far, plus lots, lots more.
That amounts to lots and lots and lots of circular references. (And of course the single most important question, will Gilligan get off the island before the next Super Moon? Sorry. 8-)
There have been many short range and long range climate predictions made over the years. Some of them are 10, 20 and 30 years down range now from when the trigger was pulled. How many have been on target? How many are way off target?
Bet your own money on them if want, not mine or my kids or their kids or their kids etc.

My fortune teller is good. I go there about every fifteen days and she gets my current life affairs right every time. She has a large battery of techniques: palm reading, astrology, tea leaves, crystal ball, tarot, etc. In each session, she applies one technique after the other, and I say, “No, that’s not right….No, that’s not right, either…” until she hits on one that nails it and I say, “That’s it! You’re right! I am having trouble with my boss at work right now!” or such. Most of the techniques get it wrong and she has to go through quite a few of them before she gets a hit. In fact, most of the techniques are almost never right. A small handful, I think about four, are right quite a bit of the time. It doesn’t matter which ones. The point is the set of techniques are amazingly powerful.

Some denier said that none of her techniques is right enough of the time to be worth a darn. I pointed out to him that she almost always gets it right with her array of techniques, that he wasn’t seeing the forest for the trees. Then, I called him a crank.

Catastrophic Anthropogenic Global Warming fanatics have taken a climatologist hostage and forced him to co-author a ‘scientific’ paper for peer review and publication. Two notorious and quite obsessed international CAGW alarmists, Dr Stephan Lewandowsky, a psychologist, and Dr Naomi Oreskes, a science historian, have abducted or otherwise forced/coerced/seduced CSIRO climatologist Dr James Risbey into lead-authoring a paper claiming that certain anonymous climate models do actually reflect the recent pause in global temperature rise.

The paper does not reveal if these are the same models that appear to predict no or little additional warming by 2100 – in other words, models that appear not to be dominated by CO2-related effects. If so, it would appear bizarre and utterly desperate to cite such models as ‘explaining’ the pause in warming on the one hand while trying to conceal that they explain away AGW as not actually occurring at the same time. In Professor Lewandowsky’s case, it is widely believed, this is not quite so incredible.

Dr Risbey has co-authored other articles with the two notorious alarmist advocates. There are concerns for Dr Risbey’s mental health, in that he may be suffering from a type of Stockholm syndrome.

Couldn’t someone try to figure out which models they used by pulling together all the model outputs and the observational data and applying their criterion:

“To select this subset of models for any 15-year period, we calculate the 15-year trend in Niño3.4 index24 in observations and in CMIP5 models and select only those models with a Niño3.4 trend within a tolerance window of +/- 0.01K y-1 of the observed Niño3.4 trend”

Do they say what they use for “observations”? I know they mention GISS, Cowtan and Way, and HadCRUT4. If they don’t say what they used, one could just use GISS and see if the result turns out like their graph.
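For what it’s worth, the quoted selection rule is simple enough to sketch. Below is a hypothetical Python illustration (model names and series are invented; this is not the paper’s code): fit a 15-year least-squares trend to each Niño3.4 series and keep only the models within ±0.01 K/yr of the observed trend.

```python
import numpy as np

def trend_k_per_yr(series):
    """Ordinary least-squares slope of an annual series, in K per year."""
    years = np.arange(len(series))
    slope, _intercept = np.polyfit(years, series, 1)
    return slope

def select_models(obs_nino34, model_nino34, tol=0.01):
    """Names of models whose Nino3.4 trend is within tol (K/yr) of observed."""
    obs_trend = trend_k_per_yr(obs_nino34)
    return [name for name, series in model_nino34.items()
            if abs(trend_k_per_yr(series) - obs_trend) <= tol]

# Toy data: a slightly cooling observed Nino3.4 over 15 years (the "hiatus"),
# one model roughly in phase with it, one far out of phase.
years = np.arange(15)
obs = -0.02 * years
models = {"model_in_phase": -0.018 * years,
          "model_out_of_phase": 0.03 * years}
print(select_models(obs, models))  # -> ['model_in_phase']
```

Running something like this against GISS and the CMIP5 Niño3.4 outputs, as suggested above, would at least reveal which models survive the filter.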

It would be very interesting to see which models they used. The post above (Ursus Augustus says:
July 21, 2014 at 4:17 pm) got me to thinking…

It occurred to me that in rejecting models that are “out of phase” with ENSO, they will also tend to reject the models that blow up way too fast. That is, while they say that they are just looking for the ones that happen to be “in phase” with ENSO, their criterion also tends to avoid the ones that are too steep, since they require the slope of the model-output Niño3.4 trend to closely match the slope of the observed Niño3.4 trend. For example, if a given model is perfectly “in phase” with ENSO for the 15-year period (that is, going up at the same time as the observations) but is increasing too fast, it will (conveniently) be rejected by their criterion. I think this will tend to get rid of the crazy-high models. Perhaps the criterion is so good at rejecting the fast-rising models that their selected group of models ended up being dominated by the few models that show very little warming by the year 2100 (as suggested by Ursus Augustus).

(My apologies if this has already been covered, in your post or in comments. I tried to read or skim most everything, but may have missed it.)
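The point about steep models follows directly from the quoted ±0.01 K/yr window. A toy check (numbers invented): a model “in phase” with an observed Niño3.4 decline but falling three times as fast still fails the tolerance test.

```python
import numpy as np

def ols_slope(y):
    """Least-squares slope of a series against its index."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

years = np.arange(15)
obs = -0.02 * years    # observed: cooling at 0.02 K/yr
steep = -0.06 * years  # same sign (in phase), but three times too steep

within_tolerance = abs(ols_slope(steep) - ols_slope(obs)) <= 0.01
print(within_tolerance)  # -> False: in phase, yet rejected
```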

[Bob Tisdale]
I’ve read and reread Risbey et al. (2014) a number of times and I can’t find where they identify the “best” 4 and “worst” 4 climate models presented in their Figure 5. I asked Anthony Watts to provide a second set of eyes, and he was also unable to find where they list the models selected for that illustration.

I have said it before, the models are good and have great potential.
STRIKE ONE.

Abstract
The Key Role of Heavy Precipitation Events in Climate Model Disagreements of Future Annual Precipitation Changes in California
Climate model simulations disagree on whether future precipitation will increase or decrease over California, which has impeded efforts to anticipate and adapt to human-induced climate change… Between these conflicting tendencies, 12 projections show drier annual conditions by the 2060s and 13 show wetter. These results are obtained from 16 global general circulation models downscaled with different combinations of dynamical methods…
http://dx.doi.org/10.1175/JCLI-D-12-00766.1

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf

Fifty years ago, Joseph Smagorinsky published a landmark paper (1) describing numerical experiments using the primitive equations (a set of fluid equations that describe global atmospheric flows). In so doing, he introduced what later became known as a General Circulation Model (GCM). GCMs have come to provide a compelling framework for coupling the atmospheric circulation to a great variety of processes. Although early GCMs could only consider a small subset of these processes, it was widely appreciated that a more comprehensive treatment was necessary to adequately represent the drivers of the circulation. But how comprehensive this treatment must be was unclear and, as Smagorinsky realized (2), could only be determined through numerical experimentation. These types of experiments have since shown that an adequate description of basic processes like cloud formation, moist convection, and mixing is what climate models miss most.
http://www.sciencemag.org/content/340/6136/1053.summary

Bob Tisdale says:
July 21, 2014 at 6:47 am
Richard M says: “Bob, can’t you see the problem with this statement and hence the problem with your view on the PDO?”

There’s no problem.

[snip] – no disagreement

Maybe you can answer a question for me, Richard M. Other than the fact that it’s convenient, why are people so fixated on the PDO? The PDO (the spatial pattern of the sea surface temperature anomalies in the extratropical North Pacific) reflects the aftereffects of ENSO and the sea level pressure of the North Pacific. There are no processes through which the PDO can impact global surface temperatures, since the PDO does not itself represent sea surface temperature. I’m at a loss.

It’s convenient is the right answer. Let me explain further.

I think you are taking the use of “PDO” too literally compared to many others. I think a lot of us just think of it as an index which represents a climate cycle with as-yet-unknown driver/s. We are really referring to these drivers when we use the letters P D O, because they are in common usage and nothing else yet exists. However, it looks quite likely that something exists, or the ~60-year cycle wouldn’t have been repeated so many times (I’m a math guy, and it doesn’t take too many repeats for it to look driven rather than chaotic). Thus, I don’t think it is very likely that the variation can be assumed to be chaotic or random. If those choices are unlikely then a driver must exist. If a driver is highly likely, then why not use the letters PDO, since they represent the pattern that the driver creates?

PS. Great article. This little side discussion takes nothing away from your analysis of the paper.

The middle panel that Russ R omitted was the one showing regional trends of the worst models. Cherry picking would be selecting the worst models to compare against observations. The correct, scientific procedure is to pick the best 4 of 18 models to compare against observations.

Sounds like really sour cherries. So, picking the best four models is okay, picking the worst four is cherry picking?

“This seems a bit snooty. I agree that there is a big difference in implementing a conjugate gradient method from first principles in, say, C, as opposed to just using a single line in R to get the same functionality. But you could accuse programmers in C of using a high-level language in order to avoid getting into assembler or even machine code. And I do agree that it’s always better to write things from “first principles”, as it forces you, if not to understand the maths, at least to appreciate the nuances of what you’re doing. Most young physicists, as far as I can tell, are more familiar with R than C or FORTRAN these days.”

I cannot let this pass without comment!

In engineering terms, it is always better to use a pre-written, tested, well-used, proven-over-time library function. Why on earth would anyone want to spend time, and therefore money, writing a function to completely replicate something that is already available and of known worth?

A well tested function is worth much more than an equivalent untested one.

Why someone would want to write a function in C that duplicated a function in R, without all of the testing required to ensure that it would work under all scenarios, on all possible platforms is beyond me.
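To make the trade-off concrete, here is a minimal sketch (in Python rather than R or C, purely for illustration): a hand-rolled conjugate gradient solver next to the one-line, well-tested library equivalent, both solving the same small symmetric positive-definite system.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG for symmetric positive-definite A (illustration only)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_hand = conjugate_gradient(A, b)
x_lib = np.linalg.solve(A, b)       # the tested library equivalent
print(np.allclose(x_hand, x_lib))   # -> True
```

Which is the point being made: the hand-rolled version takes twenty-odd lines plus the burden of testing them; the library call takes one and carries years of testing with it.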

– I see it is absolute havoc with censored comments over there. I was lucky enough to catch your initial comment before it was removed. There was nothing remotely wrong with it, so I’m guessing most of the other removals were made on the basis of the same pathetically low standard.

Michael Hopkin has just posted this justification for the wide swathe of censorship. I’ve read several of the comments that have been removed and can’t for the life of me come up with any remotely reasonable justification for their removal. The only thing they seem to have in common is disagreeing with the author. I originally saw this as pretty pathetic, but Michael’s justification is something I’ve found quite disturbing:

“Geoff’s original post (and subsequent replies, including mine) have been deleted in line with our policy of moderating against comments that introduce misinformation or distortion. We much prefer commenters to link to sources that are informed by credible, peer-reviewed evidence.”

From comments at the Guardian article, I think I have got the gist of it:

There are not 4 models selected, but 4 runs (I think of all models) that best match the actual ENSO. These were selected to match the actual data (of the ENSO), and then their results were tested against other actual data.

I’m not trolling, but it seems that there has been a fairly large misunderstanding here, and the results of the paper are not what Bob believes. I’ve not read it, so I cannot say, but I suggest a bit of calm reappraisal.

Geoff’s response to Michael Hopkin is hilarious and deserves being recorded for posterity (and because it will likely disappear very shortly)….

“I’ve read and understood the Cory blog post you link to which outlines Conversation’s approach to comments about climate science.

In line with your policy, you really need to delete the six posts above this one, which are replies to deleted posts, plus the two above that, which are objections to the deletion of my original post.
Then you need to delete my post below this one, since it links to an article by Dana Nuccitelli, who has no relevant scientific qualifications and works for a fossil fuel company when he’s not writing in the Guardian.

And the six replies to my post.

And then my reply to Peter Campbell, and the three replies to me.

And of course you need to delete this comment, which is way off-topic.

Have a good conversation.”

The ‘blog post by Cory’ being referred to is this. It seems anything deviating from “Duh Consensus” is deemed to be “deliberate misinformation and distortion of facts.”

Richard M says: “I think you are taking the use of “PDO” too literally compared to many others.”

When someone new to the discussion wonders what the PDO is, they search the web and find the JISAO website. There they are given the definition of the PDO that I’m using, not a generality. Using “PDO” to mean something other than the traditional definition adds confusion to an already complex discussion for newcomers.

Richard M says: “I think a lot of us just think of it as an index which represents a climate cycle with as yet unknown driver/s.”

The drivers of the PDO (the spatial pattern in the extratropical North Pacific) are known. They are ENSO and the sea level pressure of the North Pacific.

Richard M says: “We are really referring to these drivers when we use the letters P D O because they are in common usage and nothing else yet exists.”

The term Pacific Decadal Variability (PDV) has been used in papers before when describing something other than the classic definition of the PDO, and Stephen Wilde has, at my past suggestions, been using PDV in his general discussions as opposed to PDO.

Richard M says: “However, it looks quite likely that something exists, or the ~60-year cycle wouldn’t have been repeated so many times (I’m a math guy, and it doesn’t take too many repeats for it to look driven rather than chaotic).”

Geoff Chambers: I read your comments over at the Conversation before they were deleted and left them on my screen last night. This morning I was reading them (or not, as it happened) on my tablet. I was fascinated with the logic of the Editor, Michael Hopkin, who justified the deletion of your comment because it did not follow Conversation rules in that it did not reference with links that which you were alluding to. The fact that you had linked back to WUWT just goes to show that it’s not the lack of a link that gets you deleted, it’s the type of link. And he calls himself an editor! Censor would be a better word.

Jer0me says: “There are not 4 models selected, but 4 runs (I think of all models) that best match the actual ENSO.”

Sounds like whoever you’re referring to at the Guardian was confused as to what was presented. The paper identifies the 38 climate models that exist in the CMIP5 archive. Many of those models include more than 1 ensemble member (run). The paper also identifies the 18 climate models they selected because sea surface temperature data are available. Again, many of those models include more than 1 ensemble member (run). The paper then states that they selected the “best” and “worst” models, not ensemble members (runs).

Why someone would want to write a function in C that duplicated a function in R,

Because R is a scripting language, it is interpreted, and therefore generally has very poor performance; it should never be given serious consideration for serious number crunching or for building commercial software. Furthermore, how does one deal with or manage numerical instabilities? If you’re using R, you can’t! If you write your own functions, then you can! In short, and I think this is what poptech was alluding to, R gives you nothing more than glorified button pressing. As stated, this means people can run this functionality without ever having to delve into the maths.

If you want to do something that can be optimised and improved do it yourself.

without all of the testing required to ensure that it would work under all scenarios, on all possible platforms is beyond me.

Never said you should.

In engineering terms, it is always better to use a pre-written, tested, proven-over-time library function.

On limited reading, there does not seem to be anything particularly wrong with this paper; it is just of limited significance. Everyone, it seems, agrees GCMs cannot replicate actual climate over 15-year timescales. It is demonstrated that the 4 models which do a halfway decent job of reproducing ‘natural variability’ in terms of ENSO also reproduce observed temperatures rather better. This is hardly novel, let alone a vindication of GCMs, especially when the authors themselves suggest GCM agreement with observed natural variability is a matter of chance. It is not ‘sharpshooter’ data mining, however; that would be treating the 4 ‘best’ models as if they were the only models in the sample.

Why someone would want to write a function in C that duplicated a function in R,

Because R is a scripting language, it is interpreted and therefore generally has very poor performance; it should never be given serious consideration for serious number crunching or for building commercial software. Furthermore, how does one deal with or manage numerical instabilities? If you’re using R, you can’t! If you write your own functions, then you can! In short, and I think what poptech was alluding to, R gives you nothing more than glorified button pressing. As stated, this means people can run this functionality without ever having to delve into the maths.

Using that argument, you should not use Excel, because you will not ‘understand’ how it gets the result. I once saw a scroll function that scrolled part of the screen using only machine code. Lots of effort went into that. But Windows has a function to do it for you. Why would I use that already built function and not write my own machine code? Need I list the reasons?

Nothing wrong with scripted languages, if you have the time. Most processors are so incredibly fast these days; mine, for example, runs three billion instructions a second (a staggering number) on each of four processors, hyper-threaded, so make that a potential eight processors. They can afford to waste some time; they’ve got nothing else to do, and doing that is much faster than learning to code in C, compiling, debugging, etc.

– Surely if it is specific ‘runs’ we’re talking about rather than the overall performance of models over multiple runs then the problem in question is “even worse than we thought”?

No, because that commenter stated that of all the runs, the ones that best matched the actual ENSO were selected. As the models do not attempt to model the ENSO (and that is another issue altogether), that is purely a matter of chance. So when they did match, the other results matched reality.

The further argument was that since ENSO all evens out over 30 years (not sure about that), ENSO itself is not relevant to long-term forecasts.

It’s all a bit moot, however, as Bob says this is not what the researchers (and I use that term advisedly) actually did. The argument seems to hold water, but is based on an erroneous assumption or deliberate obfuscation in this case.

bernie1815 says:
July 21, 2014 at 1:26 pm
“Second, your comparison of panels 5a and 5c show that the best models do not actually map ENSO. So it seems to me that this boils down to the idea that if we take a little from this model and a little from that one we can recreate the hiatus. The talk about ENSO is handwaving.”

That is an interesting idea. Linear mixing of several model runs to achieve a best fit to observations. Maybe some climate scientist does that.

The problem is of course that climate is a nonlinear system, so this mixing is invalid.
And this applies to all metrics as well.

This also means that the ensemble averages that the IPCC loves to print are meaningless.

You can’t average several runs of the simulation of a nonlinear system and pretend the average has a meaning. You can only do this for linear systems.

Using that argument, you should not use Excel, because you will not ‘understand’ how it gets the result.

Firstly, this is not my argument – you need to follow the thread. Secondly, there is nothing wrong with using Excel or any stats package. But because it has a suite of statistical tools that can be used without any regard for their nuances, it can be abused and often gives people a sense of inflated confidence in their analyses. I was only conceding the point to poptech that the use of R can sometimes lead to blackbox implementations. But I also recognise that it makes it easy to analyse data and therefore is a powerful tool that democratises science.

Why would I use that already built function and not write my own machine code? Need I list the reasons?

The answer has been given but here again for you…

Scripting languages are inefficient for processor-intensive activity (do I need to list the reasons?). Therefore, if you’re doing large-scale number crunching operations R is not very efficient compared to C. There are also primary issues such as numerical instabilities that you can’t deal with using R.

Nothing wrong with scripted language

I never said there was. Scripting languages are fine for most interrogation purposes and front-end operations, but as I say for processor intensive activities compiled code is much, much better than AOT/JIT.

Most processors are so incredibly fast these days, such as mine with three billion instructions a second (that is a staggering number), on one of four processors, hyper-threaded, so make that a potential eight processors.

Try solving 10,000,000 systems of linear equations with 100,000 regressors. R (with the best implementation) would probably take a day or more, or more likely kill your machine. C, if coded properly with the right method, would rip through this in about half an hour.

You also highlight a problem with R. What part of the underlying interpreter (written in C) knows when to use parallel programming?
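
Neither R nor C code actually appears in this thread, so as a neutral illustration, here is a sketch in Python of the kind of dense-solver loop nest being argued about: Gaussian elimination with partial pivoting. Nothing here is from the paper; the performance point above is that an interpreter steps through this O(n³) loop nest one operation at a time, which is exactly where compiled code pulls far ahead.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a dense n-by-n matrix given as a list of row lists; b is a
    list of length n. Purely illustrative sketch: the triply nested
    loops below are the hot path that a compiled language executes
    orders of magnitude faster than an interpreter.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k up to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k from all rows below the pivot row.
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

# Example: 2x + y = 3 and x + 3y = 5 have the exact solution x = 0.8, y = 1.4.
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```

In practice, of course, neither R users nor C users hand-roll this: both call down into tuned LAPACK/BLAS routines, which is part of why the raw interpreted-vs-compiled comparison is less stark for linear algebra than for bespoke loops.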

“cd says: Because R is a scripting language it is an interpreted language, and therefore, has generally very poor performance and should never be given serious consideration if doing serious number crunching or building commercial software. “

Who is talking about writing commercial software? We’re talking about research: lots of scripts, run over and over again.

If you were to write a commercial offering and suggested writing all of the functions yourself, I suspect you would have made a career-limiting suggestion.

Well-written, proven commercial libraries of known quality save a fortune in time and money on any commercial project.

If you were writing in Fortran you might use the NAG numerical library; if you were writing in C or C++ you might use the Rogue Wave numerical libraries (other vendors and free versions are available).

Whenever I hear a programmer say “I can write better code than that supplied by a tried and tested library routine”, my eyes roll upwards. In my previous careers I have had to pick up the pieces from extremely bright programmers who did not (and probably still do not) realise that not all floating point operations are the same across a range of compilers/CPUs/OSes.

Without extensive testing (the word extensive cannot be emphasized too much here) you will not be able to guarantee your function/subroutine/procedure will work correctly under all possible scenarios.

We have had posts here at WUWT discussing papers where researchers have run the same program (climate model) on a variety of ‘supercomputers’ and not one run produced identical results.
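
That reproducibility point can be demonstrated without any supercomputer: floating-point addition is not associative, so anything that regroups a sum (a different compiler optimisation level, SIMD vectorisation, a different number of parallel threads) can change the result. A minimal illustration, in Python only for convenience since the effect is a property of IEEE 754 doubles, not of any one language:

```python
# Floating-point addition is not associative: regrouping the same three
# terms yields two different doubles. Compilers, vector units and
# parallel reductions all regroup sums, which is one reason the same
# model run on different machines need not produce bit-identical output.
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)

print(left_to_right == right_to_left)   # False
print(left_to_right, right_to_left)
```

The two results differ only in the last bit, but in a long chaotic model run such last-bit differences grow until entire trajectories diverge, which is consistent with the non-identical supercomputer runs mentioned above.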

As Jer0me says, why would you want to duplicate an existing ‘proven’ routine?

Just to test/verify/regression-test a simple function that did a matrix operation requires test harnesses, a range of compilers, a range of operating systems and a range of CPUs. Not an insignificant task.

However, if all you want to do is add up your household expenditure more quickly than Excel or OpenOffice, then by all means write your own C++ function; just do not try to write quality software for serious purposes that way.

Are you trolling, or is it that you don’t understand any of the points being made? The reference to commercial software affirmed the notion that R is generally not taken seriously for number crunching where efficiency is required. The only exception to this rule is academia, where time isn’t money. BTW I think R is really a nice idea and on the whole well implemented. It has spawned a number of really clever relatives such as Rcpp. These guys play at a far higher level than most software groups and it’s really impressive what they do. But for all its strengths, it still has many limitations.

If you were to write commercial offering and suggest writing all of the functions yourself, I suspect you would have made a career limiting suggestion.

Well I have been doing that for the last 10 years.

Well written proven commercial libraries of know quality save a fortune in time and money on any commercial project.

If commercial libraries exist, then that makes the point that R can’t be that great, as it’s open source and can be shipped with commercial software! Again, I think it’s pretty good, just not efficient enough.

Third-party software is used throughout the software industry, but in my experience, when it comes to the scientific and mathematical parts of science/engineering software, you’re far better off writing your own. In the long run, saving a week’s work in development/testing time is nothing compared to hitting an impasse in a third-party utility, about which you can do nothing but request that the vendor make something more efficient.

Whenever I hear a programmer say I can write better code that that supplied by a tried and tested library routine, my eyes role upwards.

Just as well I don’t work with you then.

…not all floating point operations are the same across…

I doubt that very much. All operations are dependent on the CPU design, the compiler, the compiler settings, the degree of optimisation, etc. The OS issue has more to do with how bytes are mapped to and from memory. I have yet to meet a programmer who does not appreciate any of this.

The rest of your post seems to be making the same point but in different ways.

“Causes for the PDO are not currently known. Likewise, the potential predictability for this climate oscillation are not known.”

Bob, you are taking a basic view of the PDO. Some of us believe that ENSO has a cause that is beyond random, chaotic behavior. That could be the cause of the PDO; it is not known, and it is not ENSO itself. That is, both ENSO and the PDO are the result of this unknown driver. They just appear on different time scales.

A simple analogy: a pot of boiling water with steam rising above. What is the cause of the steam? One might claim the cause of the steam is the boiling water. Yes, that is true, but the water is boiling for a reason: a heating element was applied to the pot of water. The heating element caused both the water to boil and the eventual release of steam. Both descriptions are right at some level.

When I use the term PDO I’m referring to a deeper analysis of the situation. I’m not saying that ENSO is not involved in the entire process, but that the situation goes beyond the existence of ENSO.

Thanks for the information; I don’t know where you’re going with this, though. This has always been an issue from day zero, hence in the early days PCs required separate math coprocessors for science applications. The only difference today is that they have been incorporated into the CPU architecture as standard since the mid-1990s. But again, as the PDF you referenced makes clear, the issues relate principally to processor and compiler design. Anyone who writes scientific software will know all this. I’m not sure what point you’re making and can’t see the relevance to the relative merits of compiled vs interpreted code. I guess we’ve gone off on a tangent here. But thanks again for the reference.

– Surely if it is specific ‘runs’ we’re talking about rather than the overall performance of models over multiple runs then the problem in question is “even worse than we thought”?

No, because that commenter stated that of all the runs, the ones that best matched the actual ENSO were selected. As the models do not attempt to model the ENSO (and that is another issue altogether), that is purely a matter of chance. So when they did match, the other results matched reality.

The further argument was that since ENSO all evens out over 30 years (not sure about that), ENSO itself is not relevant to long-term forecasts.

It’s all a bit moot, however, as Bob says this is not what the researchers (and I use that term advisedly) actually did. The argument seems to hold water, but is based on an erroneous assumption or deliberate obfuscation in this case.

This is important to the discussion and I think it has been missed in most of the previous comments. The point of the paper appears to be that models that get ENSO right do the best job of modelling the pause. Of course, this is not an Earth shattering finding for skeptics. Many of us believe the -PDO driver also leads to fewer/weaker El Niño events. Hence, there should be less warming right now. What this also means is the opposite is true (and Bob pointed this out in his analysis). However, all the propagandists fail to mention this latter requirement and hence leave people to believe that the warming was all the result of our emissions while only the pause is due to ENSO.

Also, the claim that ENSO “evens out” is a big problem. I think this is an area where skeptics need to attack the propagandists. The entire thrust of the solar-ocean model is that ENSO does NOT even out over time. It is dependent on the amount of solar “fuel” available in the oceans, and that has risen over the last 400 years.

Nope, I do not like his urban legend status and inflated ego because of it.

No he is not, these urban legends never cease to amaze me. He writes some uncomplicated crappy R code like an amateur hack.

You could say that about a lot of engineering and science graduates.

I do all the time.

In short, and I think what poptech was alluding to, is that R gives you nothing more than glorified button pressing.

Yes and the fact that they try to call themselves “software developers” or “professional programmers”. I have no problem with easy to use tools, it is the self-appointed titles people apply to themselves from using them that I have a problem with.

But because it does have a suite of statistical tools that can be used without any regard for the nuances of the statistical tools, it can be abused and often gives people a sense of inflated confidence in their analyses.

Exactly, and in their programming ability.

My main point is I see far too many instances of more capable people shying away from debate because they are hesitant due to existing urban legends.
REPLY: OK. We are done with this particular discussion. You don’t like Mosher, we all get it. We get it on every thread you thread bomb with this. While we are pointing out dislikes, I don’t like the fact that you hide behind a fake persona while attacking people. So we’ll call it even and move on, all further replies on this issue will be deleted – Anthony

I scanned the comments looking for some examination of the power transfer mechanism that is purported to be heating the oceans, and I was gratified to see some mention of it.

Neither the atmosphere nor land temperatures went up, so the heat must be hiding in the oceans, according to this week’s sermon.

Lewandowsky’s blog post/web page makes mention of 4 atomic bombs per second of additional energy going into the ocean. This sensational claim is intended to capitalize on ignorance in order to lend credence to their claim that warming continues uninterrupted.

The assumed ignorance of the general public lies in the fact that putting it in terms of energy content (which nobody can quantify without looking it up somewhere) obscures that we are talking about tiny fractions of a degree in actual temperature difference.

Nonetheless, thermal energy quantity must translate to temperature. It’s pretty difficult to measure it any other way.

The main hypothesis, if I am not mistaken, asserts that increased CO2 drives increased water vapor, creating a blanketing effect, and the atmosphere as a result ‘backscatters’ more longwave infrared radiation, and this increased radiation then gets absorbed by the oceans (as the repository for the ‘missing heat’ that didn’t show up anywhere else). I assert that if this is the energy transfer/trapping mechanism, any increase in the temperature of the ocean by this mechanism MUST be caused by an increase in the temperature of the air that is doing the blanketing.

I looked on Wiki (source of all human knowledge) and discovered that the specific heat capacity ratio between water and air, by volume, is roughly 4,000 to 1.

What that means to me is that if one atomic bomb can raise the temperature of one cubic blob of water by one degree, it would raise the temperature of about 4,000 cubic blobs of air by one degree. However that gets spread out by volumes, various transport mechanisms, and other factors, it hints at the magnitude of the temperature increase of the atmosphere that must occur somewhere if the radiation blanketing model is driving the temperature of the oceans up.
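
That Wiki-derived ratio is easy to sanity-check from standard textbook values (assumed here, not taken from the thread: liquid water at about 1000 kg/m³ and 4186 J/(kg·K); sea-level air at about 1.2 kg/m³ and 1005 J/(kg·K)); the volumetric ratio comes out near 3,500, consistent with the commenter’s “roughly 4,000 to 1”:

```python
# Volumetric heat capacity = density * specific heat, in J/(m^3 K).
# The inputs are standard textbook figures (assumptions, not thread data).
water = 1000.0 * 4186.0   # liquid water
air = 1.2 * 1005.0        # air at sea level, constant pressure

ratio = water / air
print(round(ratio))       # roughly 3,500 to 1 by volume
```

So a given quantity of heat that warms a parcel of water by one degree would warm a few thousand equal volumes of air by the same amount, which is the magnitude argument the comment goes on to make.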

But the reason they went looking at the ocean floor was because the heat was missing from the atmosphere. Their attempt to save the alarmist warming trend by obscuring it in tiny fractions of a degree in water temperature actually points up the biggest flaw in their underlying model: if the temperature of the atmosphere isn’t raised enough to do the blanketing/radiating, then it can’t be radiating into the ocean.

You can’t posit the atmosphere as the cause of a temperature increase and then just skip the mechanism you are promoting by pretending the energy slipped through and got trapped in the water. If CO2 climate sensitivity is the money-burger, then the energy flow has to be found where the models all say it is occurring. Any decrease in energy flux outbound due to radiative blanketing MUST show up as a temperature increase in the atmosphere. It might be that the whole ocean temperature thing is a red herring. Or that in their attempt to cover up the pause by putting the heat somewhere else, they didn’t notice that they were contradicting the whole basis of their model.

This is not my field, so if someone can point out the energy transfer mechanism they are proposing as the means by which the oceans are heating up (by this almost unmeasurable amount), please point it out to me.

>>Lewandowski’s blog post /web page makes mention of 4 atomic bombs per second additional energy going into the ocean. This sensational claim is intended to capitalize on ignorance in order to lend credence to their claim that warming continues uninterrupted<<
Lewandowsky quoted Dana Nuccitelli, who meant the Hiroshima bomb. The omniscient waste dump named Wikipedia says that it had the power of 13 to 18 kt of TNT. That is 4.7 MJ per kg of TNT, or (averaged [I hate that word] at 15 kt) 70,500,000 MJ. Gasoline has approx. 43 MJ per kg. So that is equivalent to 1,640,000 kg or approx. 2.2 million liters of gasoline. The energy of one bomb is roughly equivalent to 11.2 million kilometers, or ~7 million miles, in a 1996 Ford Bronco 4WD.
That is -in other words- the stuff he is trying to sell.
But does anyone know where the honeypot is from which Nuccitelli sipped his wisdom? And, equally important: can you believe Nuccitelli? I think there has been a very interesting post concerning that lie-of-omission guy right here at WUWT…
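
For what it’s worth, the comment’s unit conversions do check out on its own stated figures (15 kt averaged yield, 4.7 MJ/kg for TNT, 43 MJ/kg for gasoline; a gasoline density of about 0.745 kg/L is assumed here, since it reproduces the quoted litre figure):

```python
# Reproduce the comment's arithmetic from its own stated inputs.
bomb_kt = 15.0                # averaged Hiroshima yield, kilotonnes of TNT
tnt_mj_per_kg = 4.7           # TNT energy density used in the comment
gasoline_mj_per_kg = 43.0     # gasoline energy density used in the comment
gasoline_kg_per_l = 0.745     # assumed gasoline density (not in the comment)

bomb_mj = bomb_kt * 1e6 * tnt_mj_per_kg        # 70,500,000 MJ
gasoline_kg = bomb_mj / gasoline_mj_per_kg     # ~1,640,000 kg
gasoline_l = gasoline_kg / gasoline_kg_per_l   # ~2.2 million litres

print(bomb_mj, round(gasoline_kg), round(gasoline_l))
```

Note that 4.7 MJ/kg is at the high end of quoted TNT energy densities (4.184 MJ/kg is the conventional definition of the “tonne of TNT”), so the figures are indicative rather than precise.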

“Also, the claims the ENSO “evens out” is a big problem. I think this is an area where skeptics need to attack the propagandists.”

All claims that “ENSO evens out in the long term” are based on the naive idea that the NINO3.4 index represents the full extent of the ENSO phenomenon, that it captures the signal from all ENSO-related processes. It doesn’t. But they absolutely must stick to this belief, for otherwise they’ll have to acknowledge that ENSO drives global temps even in a multidecadal perspective.

Anthony, I must say you should not be so critical of people using anonymous identities, like myself. Let us not forget that one of the greatest patriots of this country, Benjamin Franklin, wrote under the alias of “poor Richard”. Many of us fear that the carbonists will attack us personally and professionally if they know our true identities.

Richard M says: “Also, the claims the ENSO “evens out” is a big problem. I think this is an area where skeptics need to attack the propagandists.”

For those who comprehend ENSO and understand that it works (according to Trenberth et al 2002 and data) as a sunlight-fueled recharge-discharge oscillator, the claims that “ENSO is an oscillation and evens out” make no sense at all. How can a La Nina event undo the discharge of heat and distribution of warm water that took place as a result of the El Nino, when the role of La Nina is to recharge the heat released by the El Nino?


Thanks for answering my previous questions. I have another. (Sorry if this has already been addressed somewhere in the thread. I didn’t see it.)

There is something bothering me about the whole discussion of models being “in phase” with observations.

Here is the quote from the abstract:

“We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations.”

Now, doesn’t “in phase” refer to similar placement of peaks and troughs? When I look at a graph of the Niño 3.4 index, I see about 3 or so peaks in most 15 year periods. So if you want to pick models that are “in phase” with the observed Niño 3.4 index, wouldn’t you look at the placement of peaks and troughs? The magnitude of the peaks shouldn’t matter, that has nothing to do with phase. It seems to me that if you want to achieve a match with the phase, the last thing you would look at is the trend (slope of the regression line?) over the 15 year period. After all, since there are about 3 peaks during a typical 15 year period, the slope of the regression line could match perfectly even if the peaks are completely out of phase over that period. Yet, to pick models “in phase” with observations, the trend is what they use for comparison. From the abstract:

“To select this subset of models for any 15-year period, we calculate the 15-year trend in Niño3.4 index24 in observations and in CMIP5 models and select only those models with a Niño3.4 trend within a tolerance window of +/- 0.01K per year of the observed Niño3.4 trend.”

So my question:

Am I missing something? Does it make sense to compare the trends over a 15 year period to try to find models in phase with observations when the period of a cycle is about 5 years?

As I mentioned in my other question, it seems to me that rejecting models that are more than 0.01K / year different from the observations in the trend of the Niño3.4 index over the 15 year period will tend to reject models whose global temperatures are also rising too fast. So it seems to me that their procedure makes a pretty good “running cherry pick” if the purpose is to show that one’s selected models match observed global temperatures fairly well, even if it does not do so well (as I suspect) at the stated goal of ensuring that the models you select are in phase with the observations with respect to Niño3.4 phase.
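
Bryan’s point can be made concrete with a toy example (entirely synthetic; nothing here comes from the paper): two idealised “Niño3.4” series with a 5-year cycle that are perfectly out of phase, and hence perfectly anticorrelated, can both pass a ±0.01 K/yr trend-tolerance test against a flat observed trend.

```python
import math

def fitted_slope(t, y):
    """Ordinary least-squares slope of y against t (K per year)."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

# 15 years of monthly data; an idealised 5-year ENSO-like cycle.
t = [i / 12.0 for i in range(180)]
nino_a = [math.cos(2 * math.pi * ti / 5.0) for ti in t]   # "El Nino" peaks
nino_b = [-v for v in nino_a]                             # exactly antiphase

slope_a = fitted_slope(t, nino_a)
slope_b = fitted_slope(t, nino_b)

# Both series fall inside a +/- 0.01 K/yr trend window around zero, even
# though every peak in one is a trough in the other: matching the 15-year
# trend does not mean matching the phase.
print(abs(slope_a) < 0.01, abs(slope_b) < 0.01)  # True True
```

Under the paper’s stated selection rule, both synthetic series would count as “in phase” with a zero-trend observation, which is exactly the gap between “same trend” and “same phase” that the question above is probing.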

Bryan, I agree with you that the trends in NINO3.4 sea surface temperature anomalies do not necessarily tell us whether El Nino or La Nina events are dominant. In fact, I had prepared a graph, using NINO3.4 sea surface temperature anomalies, that compared 180-month trends and 180-month averages.

But I didn’t want to open yet another can of worms.

Curiously, though, Risbey et al. (2014) are pretty much stating that the trend in NINO3.4 sea surface temperature anomalies for any 15-year period dictates the 15-year trend in surface temperatures.

Bob Tisdale says:
July 22, 2014 at 7:05 am
Richard M says: “Some of us believe that ENSO has a cause that is beyond random, chaotic behavior.”

Belief without data to support the belief is conjecture.

I provided you the reason for my belief. The fact you chose to ignore it doesn’t change its relevance. The odds of multiple repeating cycles based on chaos/random chance is quite low. Hence it is more likely there is a driver than there is not.

Once again, to make it clear: my point is that the existence of long-term quasi-periodic behavior usually indicates that a driver exists. No, it does not guarantee it, but those who choose to assume a low-probability situation are bucking the odds. If one throws a die 20 times and gets a six 19 times, the chances that the die is unfair are quite high. Since the PDO has been tracked back more than 500 years, what are the chances that it is random? Not very good. Probabilities are not quite data, but does something with a high probability need a more detailed explanation before you accept it exists?

Richard M says: “That could be the cause of the PDO, it is not known and it is not ENSO itself.”

As described above, the sea level pressure also impacts the spatial pattern of the sea surface temperature anomalies of the extratropical North Pacific.

Thus whatever is driving the changes in sea level pressures could be the driver of the PDO which then influences the number and strength of ENSO events.

Or, it could be something else. My point is that not knowing exactly what causes the six to appear in my die-roll example does not change the fact that the die is almost certainly unfair.

bernie1815 says: “Bob: Haven’t they simply taken a sub-set models that in some runs produced overall flat SSTs for 1998-2012?”

They based their model selection on NINO3.4 SSTa trends during that period. I can’t say whether the modeled global sea surface temperature trends were flat because they did not present the actual sea surface temperature trends for that period…just the spatial patterns of sea surface temperatures.