
Readers of my weblog know that I have been very critical of the ability of regional climate models to add any skillful information to multi-decadal global model predictions beyond what is already present in those models [which themselves have no skill at predicting even the statistics of large-scale circulation features, much less the change of these statistics due to human climate forcings]. For a discussion of this inability, see, for example,

in which I discussed with Rasmus Benestad the science behind the concept of regional climate downscaling of multi-decadal global climate model predictions [which we define as Type 4 downscaling in Castro et al. 2005].

Rasmus and I continued our discussion via e-mail and, with his permission, I have reproduced the exchange below. For clarity, Rasmus’s comments are in regular font and my comments are in italics. Koji Dairaku and Rob Wilby, two colleagues and internationally well-respected experts on regional downscaling, were copied on all of the e-mails.

As an introduction, below is a short biographical summary of his outstanding professional credentials.

Benestad, R.E., 2006: Can we expect more extreme precipitation on the monthly time scale? J. Clim., 19, 630–637.

The e-mails are reproduced below. There is some redundancy since I embedded my replies within several of the e-mails. I have highlighted key text.

From R. Pielke Sr.

Hi Rasmus

I enjoyed talking with you via Skype while you were in Japan at the downscaling workshop that Koji organized.

As a follow up, I thought you might be interested in these papers below which present a viewpoint distinct from yours in terms of the value of downscaling from multi-decadal global climate model predictions.

I would be interested in discussing with you how our major concerns can be refuted.

Thanks for these! I think you make some good points – especially with the ‘bottom-up’ approach and thinking in terms of contextual vulnerability. It seems to me fairly obvious that this way of thinking is sensible, and also a bit surprising that these notions are not more ingrained.

I think that your papers provide an excellent starting point for discussions. Some of the points that you raise are in my view up for debate (and I don’t know the answers). It’s good to have some critical voices in the literature, but I also think that you’ve used a ‘broad brush’ in some of the criticism. That said, there is also another paper by Oreskes et al. (2010) – see link below – that fits in with your view.

When it comes to the point of downscaling, I think the main concern is the GCMs’ ability to project regional climate change in ‘Type 4’. I appreciate this point – at least when it comes to individual GCMs. The question is whether there is at least some information embedded in all the GCMs we have available. Can we use CMIP3/5 to describe the range of possibilities?
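Rasmus’s question about the multi-model archive can be made concrete. The sketch below (Python, with purely synthetic numbers standing in for CMIP output – the values are illustrative, not from any model) shows what ‘describing the range of possibilities’ amounts to in practice: summarizing the spread of projections across an ensemble of models.

```python
import numpy as np

# Hypothetical end-of-century regional temperature changes (deg C),
# one value per GCM; these numbers are invented for illustration.
projections = np.array([1.8, 2.4, 3.1, 2.0, 2.7, 3.5])

mm_mean = projections.mean()                      # multi-model mean
mm_range = (projections.min(), projections.max()) # envelope of possibilities
mm_spread = projections.std(ddof=1)               # inter-model standard deviation

print(f"mean={mm_mean:.2f} C, range={mm_range}, spread={mm_spread:.2f} C")
```

Note that this range describes inter-model disagreement, not demonstrated skill – which is precisely the point under debate in this exchange.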

A useful question, I think, is: what are the real sources of information? To me, the obvious candidates seem to be empirical data and the laws of physics. Even acknowledging that the climate is extremely complicated and complex conveys information – for example, that temperature may vary strongly over short distances due to very local phenomena.

I agree with several of your points, but I’m not convinced about the statements that the models are not able to simulate the NAO, ENSO, etc. But this depends on the expectations and degree of fidelity. Another question – whether the character of the NAO and ENSO will change in the future – is still unknown, and I agree that if their character does change, we do not know whether the models are able to predict this change.

Aside from that, I think some of the criticism is a bit exaggerated. But it really depends on what you want to look at. Some models are worse than others, but there are some models which are used in seasonal prediction and are able to provide some description of ENSO – albeit with biases. My own analysis of various GCMs also suggests that they reproduce the characteristics of the NAO.

I think your discussion on empirical-statistical downscaling (ESD) also is a bit narrow because the field traditionally has been narrow (I guess it’s a bit analogous to the ‘top-down’ and ‘bottom-up’ discussion). To me, it involves more than just regression, and I’m getting more into predicting how the shape of the probability distribution functions may change in a future climate (see attached paper – this is only the theoretical foundation of a new and promising method that attempts to predict extreme rainfall).
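As a reference point for Rasmus’s remark that ESD involves more than regression, here is a minimal sketch (Python, synthetic data, illustrative only) of the ‘narrow’ regression form of ESD: a local station anomaly is regressed on a large-scale predictor, and the fitted transfer function is then applied to a new large-scale state.

```python
import numpy as np

# Synthetic calibration data: a large-scale predictor (e.g. a circulation
# index) and a co-varying local station anomaly. All numbers are invented.
rng = np.random.default_rng(0)
predictor = rng.normal(0.0, 1.0, 200)                  # large-scale index
local = 0.8 * predictor + rng.normal(0.0, 0.3, 200)    # station anomaly

# Calibrate the linear transfer function, then apply it to a new state.
slope, intercept = np.polyfit(predictor, local, 1)
downscaled = slope * 1.5 + intercept                   # new large-scale value of 1.5
```

Rasmus’s point is precisely that such a transfer function is assumed stationary and says nothing by itself about how the shape of the local distribution may change in a different climate.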

Furthermore, it is important to design ESD in such a way that minimises non-stationarity, and it is important to test the strategy to see if the design is successful. ‘Testing’ and ‘assessing’ are two key words – probably not appreciated enough. There are at least two types of tests: against the past and using quasi-reality to test for the future. Furthermore, by drawing in more information based on the past – as you point out – and improved statistical analysis, I believe that we in some cases can do better than just looking at the past. Sometimes, statistical models can be quite useful in terms of describing ‘uncertainty’.
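The ‘test against the past’ that Rasmus mentions can be sketched simply: calibrate a statistical model on an early period, verify it on a withheld later period, and score the out-of-sample predictions. The example below (Python, synthetic data, purely illustrative) uses correlation as the skill measure.

```python
import numpy as np

# Synthetic predictor/predictand series standing in for a historical record.
rng = np.random.default_rng(1)
x = rng.normal(size=300)                         # predictor series
y = 0.6 * x + rng.normal(scale=0.5, size=300)    # predictand with noise

train, verify = slice(0, 200), slice(200, 300)   # split: calibrate early, test late
b, a = np.polyfit(x[train], y[train], 1)         # fit on the 'past'
pred = b * x[verify] + a                         # predict the withheld period

skill = np.corrcoef(pred, y[verify])[0, 1]       # out-of-sample correlation
```

A model that passes such a test has skill in reproducing past variability; as the exchange below makes clear, that by itself says nothing about skill in predicting *changes* in the statistics.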

Thank you for your detailed reply. Please see my responses embedded in your text. Do I have your permission to post your e-mail and my reply on my weblog?

Best Wishes for the Holidays!

Roger

From Rasmus, with my replies embedded inside his comments

Dear Roger, Rob and Koji,

Thanks for these! I think you make some good points – especially with the ‘bottom-up’ approach and thinking in terms of contextual vulnerability. It seems to me fairly obvious that this way of thinking is sensible, and also a bit surprising that these notions are not more ingrained.

Thank you for the encouragement. I hope we can obtain a wider acceptance of this approach.

I think that your papers provide an excellent starting point for discussions. Some of the points that you raise are in my view up for debate (and I don’t know the answers). It’s good to have some critical voices in the literature, but I also think that you’ve used a ‘broad brush’ in some of the criticism. That said, there is also another paper by Oreskes et al. (2010) – see link below – that fits in with your view.

When it comes to the point of downscaling, I think the main concern is the GCMs’ ability to project regional climate change in ‘Type 4’. I appreciate this point – at least when it comes to individual GCMs. The question is whether there is at least some information embedded in all the GCMs we have available. Can we use CMIP3/5 to describe the range of possibilities?

Even with ensemble results, there is no evidence that Type 4 runs can provide skillful predictions on multi-decadal time scales. Your last two questions are hypotheses. Until they are properly tested, we really should not be presenting these results to the impacts community and claiming they have any skill.

A useful question, I think, is: what are the real sources of information? To me, the obvious candidates seem to be empirical data and the laws of physics. Even acknowledging that the climate is extremely complicated and complex conveys information – for example, that temperature may vary strongly over short distances due to very local phenomena.

The models are not fundamental physics, as only the dynamical core (such as advection, the pressure gradient force, gravity) fits that definition. All other components of the climate models are engineering code with tunable coefficients and functions.

I agree with several of your points, but I’m not convinced about the statements that the models are not able to simulate the NAO, ENSO, etc. But this depends on the expectations and degree of fidelity.

Please provide papers that show an ability to simulate the NAO, ENSO, etc. when run for multi-decadal time periods. I agree the models can replicate some aspects of these features when they are run in a weather prediction mode (primarily, in my view, because of the relatively slow changes in time of SSTs, such that seasonal runs are a type of nowcasting for SST).

Another question – whether the character of the NAO and ENSO will change in the future – is still unknown, and I agree that if their character does change, we do not know whether the models are able to predict this change.

This is a key fundamental issue. If the models cannot skillfully predict CHANGES in the statistics of weather patterns, they add no value for the impacts community beyond what is available from the historical record, the recent paleo-record, and worst-case sequences of real-world observed events. Thus, we should not be giving multi-decadal climate model climate change statistics to the impacts community and claiming they have any skill.

Aside from that, I think some of the criticism is a bit exaggerated. But it really depends on what you want to look at. Some models are worse than others, but there are some models which are used in seasonal prediction and are able to provide some description of ENSO – albeit with biases. My own analysis of various GCMs also suggests that they reproduce the characteristics of the NAO.

Seasonal prediction is not a Type 4 application. I agree there is limited skill, such as for ENSO, on this time scale. The ability to faithfully predict seasonal weather, however, is a necessary condition but not a sufficient condition to then assume that multi-decadal predictions of climate are skillful. Seasonal prediction is a Type 3 application.

I think your discussion on empirical-statistical downscaling (ESD) also is a bit narrow because the field traditionally has been narrow (I guess it’s a bit analogous to the ‘top-down’ and ‘bottom-up’ discussion). To me, it involves more than just regression, and I’m getting more into predicting how the shape of the probability distribution functions may change in a future climate (see attached paper – this is only the theoretical foundation of a new and promising method that attempts to predict extreme rainfall).

I do not see how the paper you sent adds any new information in terms of predicting changes in statistics. The example from the Benestad et al. (2012) paper that you enclosed is a Type 2 downscaling study; see the text:

“…the RCMs from ENSEMBLES, all of which had a spatial resolution of 50km and used ERA 40 as boundary conditions.”

It is not Type 4 downscaling.

Furthermore, it is important to design ESD in such a way that minimises non-stationarity, and it is important to test the strategy to see if the design is successful. ‘Testing’ and ‘assessing’ are two key words – probably not appreciated enough. There are at least two types of tests: against the past and using quasi-reality to test for the future. Furthermore, by drawing in more information based on the past – as you point out – and improved statistical analysis, I believe that we in some cases can do better than just looking at the past. Sometimes, statistical models can be quite useful in terms of describing ‘uncertainty’.

We agree on the value of testing models in hindcast mode. However, what is a “quasi-reality” test for the future? If you do not have real-world data to evaluate against, it is not a robust test.

Thanks again for engaging in this discussion! Please let me know if I can post.

All the best,

Rasmus

From Rasmus

to me, R, Koji

Sure – you can post my response on your blog. I hope you will let me emphasize the need for thorough testing of the models. It is important to carefully consider how these tests should be designed, and one important element is to see if the models are able to predict changes (it depends on the use of the models).

I did some evaluation of type 4 GCM skill in reproducing the NAO (see attached paper 2001a), and some further work is described in various reports (I can provide you with more information, although this is ‘grey literature’). It is also important to keep in mind that the actual mode of variability that is of interest may not be exactly the NAO/ENSO, but a related pattern that covaries more strongly with the variable in question (see attached paper 2001b). In any case, ESD can be used to evaluate type 4 GCM simulations.

The same GCMs used in type 3 and type 4 runs incorporate the same set of physical processes encoded in computer code, but they differ in terms of their initial conditions. In my mind, the fact that these are used for making ENSO prognoses suggests that they have some skill in simulating ENSO (however, the models also have some shortcomings, such as the double ITCZ and a poor MJO). Furthermore, we see that the models – or their components – provide solutions which embed natural phenomena that have not been prescribed, be it the Hadley cell, westerlies, tropical cyclones, ocean currents, jets, tropical instability waves, ENSO, or the NAO. I agree that the models consist of a mixture of a dynamical core and parameterisations; these parameterisation schemes are based on our physical understanding (representing the bulk physics), and the tuning should be constrained by observations. This means that the models are not perfect, but I think they can provide useful information.

I also agree that the bottom line is the question whether the models (in type 4 runs) can predict *changes* in the statistics of weather patterns. Again, I’ll stress the importance of tests and evaluation. And the importance of including information from other sources – very much in line with the discussion in your papers. This aspect provides the connection to the example in the Benestad et al. (2012) paper – there is such a clear pattern in the daily rain gauge statistics that it seems to be universal (I’ve looked at more than 30,000 rain gauges by now). Robust inter-dependencies provide us with additional information and constraints. This type of ESD can also be applied to type 3 & 4 cases.
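For a sense of the kind of single-gauge statistic under discussion, one common summary of daily wet-day amounts is an exponential fit, whose single parameter (the wet-day mean) then implies exceedance probabilities. The sketch below uses synthetic data; treating the exponential form as the ‘universal pattern’ of the 2012 paper is an assumption of this illustration, not a claim made in the exchange.

```python
import numpy as np

# Synthetic daily wet-day rainfall amounts (mm); invented, not gauge data.
rng = np.random.default_rng(2)
wet_day_mm = rng.exponential(scale=8.0, size=5000)

mu = wet_day_mm.mean()               # wet-day mean = max-likelihood exponential scale
p_exceed_20mm = np.exp(-20.0 / mu)   # implied probability of >20 mm on a wet day
```

The attraction of such inter-dependencies is that one easily observed quantity (the mean) constrains the tail of the distribution.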

Actually, I’d like to expand the discussion about information sources. In addition to empirical information (which also includes geographical information) and physics-based information, there are information/constraints from mathematics (hence, statisticians can provide very useful contributions to climatological research). Part of this set of information is used in modelling, but it should also be used in testing/evaluating the models – using information from independent sources. This can be done in many different ways, e.g. comparing spatial and temporal structures, or predicting out-of-sample data.

Thank you for your detailed and thoughtful reply! In terms of showing value-added using downscaling (statistical and dynamic), it is crucial, in my view, to discriminate between the 4 types of downscaling. We all agree that there is very significant value added with Type 1 downscaling (NWP), and also for Type 2 and Type 3, although the value added becomes progressively less.

However, I do not see how your 2001 papers demonstrate value-added for Type 4 since the key fundamental requirement is that the models would have to skillfully predict changes in the climate statistics. The Type 4 models certainly do predict such changes, but what observational data, in hindcast of course, has been shown to agree with these predictions?

We agree on what you have written

“I also agree that the bottom line is the question whether the models (in type 4 runs) can predict *changes* in the statistics of weather patterns.”

but I do not see how the 2012 paper (or your earlier papers) has shown we can do this. The finding of a universal rainfall behavior is, of itself, quite an important finding, but it does not show we can predict changes in the statistics over multi-decadal time periods. Indeed, I do not see how this could be done unless long-term changes in major circulation patterns (such as the PDO, ENSO, etc.) can be skillfully predicted.

Type 3 downscaling is distinct from Type 4 in that the key variable, SSTs, is prescribed in the former as an initial value, and SSTs change only relatively slowly over a season in the real world. Thus, Type 3 forecasts provide an upper bound on the skill of Type 4 runs [and Type 2 is an upper bound for Type 3].

I do completely agree with you that the models are very valuable for informing us about climate processes (e.g. the Hadley cell, ITCZ, etc.). However, this does not mean they can skillfully predict the multi-decadal changes in these features. That is climate prediction, and it must be validated against real-world data.

A necessary condition, of course, for skillful predictions of the multi-decadal changes in climatology is that the climate processes be faithfully replicated. As you note, there are still issues with doing that. Until this level of skill is achieved, we cannot claim any skill of predicting changes in these processes.

Thus the bottom line remains that the Type 4 model runs (including the parent global models) have not shown skill in predicting changes in climatology of weather features that influence society and the environment such as drought, floods, hurricanes, heat waves, etc. Indeed, I have concluded this is a very daunting task.

I would welcome specific examples of where Type 4 runs have been shown in hindcast to skillfully predict changes in such features as drought frequency, etc.

I plan to post our e-mail exchanges when we have completed them. If you could, please send me the URLs for the papers you have sent and referred to, as that will make it easier for readers to access them when I post.

I very much appreciate this constructive interaction!

From Rasmus

to me, R, Koji

Hi Roger,

I think you are right that we strictly do not know whether the models are able to predict the future – the type 4 case – but this is also true for all predictions. Take the weather forecast, for example – we do not know whether it will forecast the true situation, especially at longer ranges. Nevertheless, we are fairly confident about nowcasting, and there is a gradual decline in model skill.

I think that the skill in these models is mostly due to both the inter-dependencies in the atmosphere/ocean and the observations (constraints). In your distinction, however, you imply that the skill is mostly due to the observations (constraints). I think that the models deserve a little more credit, and that analyses of inter-dependencies between different types of ‘fields’ (e.g. surface temperature, sea surface temperature, mean sea level pressure, geopotential heights, winds) can shed light on the intrinsic model skills.

When the GCMs (type 4) simulate the spatial structure and the statistical nature (mean, variance, interannual variability, and coupling between different fields) approximately correctly, then we see that the models are able to describe some of the most important inter-dependencies. The cited 2001 papers describe both the simulated spatial structure of the NAO and the stationarity between scales as simulated by the GCM, but this analysis is not exhaustive, so you have a point. We do not know for certain.

The hypothesis with which we are concerned is whether there are similar inter-dependencies between greenhouse gas concentrations (a warmer world) and regional atmospheric phenomena. The presence of strong inter-dependencies in both temporal and spatial dimensions, such as shown in the 2012 paper, provides useful information for type 4 predictions.

Working for a meteorological service, I have to provide practical and useful information for e.g. decision making. We need to live with uncertainties and we need to specify the unknowns. This is nothing new, and large sums of money are spent to plan for uncertain outcomes – the most obvious example is a country’s defence (we do not know if the country will need the army and the arms). The same holds for mitigation and climate adaptation – we need to think in terms of risk analysis. That is why I think your idea concerning ‘bottom-up’ strategies and ‘contextual vulnerability’ is so valuable.

All the best,

Rasmus

From R. Pielke Sr.

Hi Rasmus

Thank you for your detailed and thoughtful reply! In terms of showing value-added using downscaling (statistical and dynamic), it is crucial, in my view, to discriminate between the 4 types of downscaling. We all agree that there is very significant value added with Type 1 downscaling (NWP), and also for Type 2 and Type 3, although the value added becomes progressively less.

However, I do not see how your 2001 papers demonstrate value-added for Type 4 since the key fundamental requirement is that the models would have to skillfully predict changes in the climate statistics. The Type 4 models certainly do predict such changes, but what observational data, in hindcast of course, has been shown to agree with these predictions?

We agree on what you have written

“I also agree that the bottom line is the question whether the models (in type 4 runs) can predict *changes* in the statistics of weather patterns.”

but I do not see how the 2012 paper (or your earlier papers) has shown we can do this. The finding of a universal rainfall behavior is, of itself, quite an important finding, but it does not show we can predict changes in the statistics over multi-decadal time periods. Indeed, I do not see how this could be done unless long-term changes in major circulation patterns (such as the PDO, ENSO, etc.) can be skillfully predicted.

Type 3 downscaling is distinct from Type 4 in that the key variable, SSTs, is prescribed in the former as an initial value, and SSTs change only relatively slowly over a season in the real world. Thus, Type 3 forecasts provide an upper bound on the skill of Type 4 runs [and Type 2 is an upper bound for Type 3].

I do completely agree with you that the models are very valuable for informing us about climate processes (e.g. the Hadley cell, ITCZ, etc.). However, this does not mean they can skillfully predict the multi-decadal changes in these features. That is climate prediction, and it must be validated against real-world data.

A necessary condition, of course, for skillful predictions of the multi-decadal changes in climatology is that the climate processes be faithfully replicated. As you note, there are still issues with doing that. Until this level of skill is achieved, we cannot claim any skill of predicting changes in these processes.

Thus the bottom line remains that the Type 4 model runs (including the parent global models) have not shown skill in predicting changes in climatology of weather features that influence society and the environment such as drought, floods, hurricanes, heat waves, etc. Indeed, I have concluded this is a very daunting task.

I would welcome specific examples of where Type 4 runs have been shown in hindcast to skillfully predict changes in such features as drought frequency, etc.

I plan to post our e-mail exchanges when we have completed them. If you could, please send me the URLs for the papers you have sent and referred to, as that will make it easier for readers to access them when I post.

I very much appreciate this constructive interaction!

Best Wishes for the Holidays!

Roger

Rasmus’s Reply

Sure – you can post my response on your blog. I hope you will let me emphasize the need for thorough testing of the models. It is important to carefully consider how these tests should be designed, and one important element is to see if the models are able to predict changes (it depends on the use of the models).

I did some evaluation of type 4 GCM skill in reproducing the NAO (see attached paper 2001a), and some further work is described in various reports (I can provide you with more information, although this is ‘grey literature’). It is also important to keep in mind that the actual mode of variability that is of interest may not be exactly the NAO/ENSO, but a related pattern that covaries more strongly with the variable in question (see attached paper 2001b). In any case, ESD can be used to evaluate type 4 GCM simulations.

The same GCMs used in type 3 and type 4 runs incorporate the same set of physical processes encoded in computer code, but they differ in terms of their initial conditions. In my mind, the fact that these are used for making ENSO prognoses suggests that they have some skill in simulating ENSO (however, the models also have some shortcomings, such as the double ITCZ and a poor MJO). Furthermore, we see that the models – or their components – provide solutions which embed natural phenomena that have not been prescribed, be it the Hadley cell, westerlies, tropical cyclones, ocean currents, jets, tropical instability waves, ENSO, or the NAO. I agree that the models consist of a mixture of a dynamical core and parameterisations; these parameterisation schemes are based on our physical understanding (representing the bulk physics), and the tuning should be constrained by observations. This means that the models are not perfect, but I think they can provide useful information.

I also agree that the bottom line is the question whether the models (in type 4 runs) can predict *changes* in the statistics of weather patterns. Again, I’ll stress the importance of tests and evaluation. And the importance of including information from other sources – very much in line with the discussion in your papers. This aspect provides the connection to the example in the Benestad et al. (2012) paper – there is such a clear pattern in the daily rain gauge statistics that it seems to be universal (I’ve looked at more than 30,000 rain gauges by now). Robust inter-dependencies provide us with additional information and constraints. This type of ESD can also be applied to type 3 & 4 cases.

Actually, I’d like to expand the discussion about information sources. In addition to empirical information (which also includes geographical information) and physics-based information, there are information/constraints from mathematics (hence, statisticians can provide very useful contributions to climatological research). Part of this set of information is used in modelling, but it should also be used in testing/evaluating the models – using information from independent sources. This can be done in many different ways, e.g. comparing spatial and temporal structures, or predicting out-of-sample data.

All the best

Rasmus

R. Pielke Sr. Reply

Hi Rasmus

Thank you for your detailed reply. Please see my responses embedded in your text. Do I have your permission to post your e-mail and my reply on my weblog?

Best Wishes for the Holidays!

Roger

Rasmus wrote, with my replies embedded in italics

Dear Roger, Rob and Koji,

Thanks for these! I think you make some good points – especially with the ‘bottom-up’ approach and thinking in terms of contextual vulnerability. It seems to me fairly obvious that this way of thinking is sensible, and also a bit surprising that these notions are not more ingrained.

Thank you for the encouragement. I hope we can obtain a wider acceptance of this approach.

I think that your papers provide an excellent starting point for discussions. Some of the points that you raise are in my view up for debate (and I don’t know the answers). It’s good to have some critical voices in the literature, but I also think that you’ve used a ‘broad brush’ in some of the criticism. That said, there is also another paper by Oreskes et al. (2010) – see link below – that fits in with your view.

When it comes to the point of downscaling, I think the main concern is the GCMs’ ability to project regional climate change in ‘Type 4’. I appreciate this point – at least when it comes to individual GCMs. The question is whether there is at least some information embedded in all the GCMs we have available. Can we use CMIP3/5 to describe the range of possibilities?

Even with ensemble results, there is no evidence that Type 4 runs can provide skillful predictions on multi-decadal time scales. Your last two questions are hypotheses. Until they are properly tested, we really should not be presenting these results to the impacts community and claiming they have any skill.

A useful question, I think, is: what are the real sources of information? To me, the obvious candidates seem to be empirical data and the laws of physics. Even acknowledging that the climate is extremely complicated and complex conveys information – for example, that temperature may vary strongly over short distances due to very local phenomena.

The models are not fundamental physics, as only the dynamical core (such as advection, the pressure gradient force, gravity) fits that definition. All other components of the climate models are engineering code with tunable coefficients and functions.

I agree with several of your points, but I’m not convinced about the statements that the models are not able to simulate the NAO, ENSO, etc. But this depends on the expectations and degree of fidelity.

Please provide papers that show an ability to simulate the NAO, ENSO, etc. when run for multi-decadal time periods. I agree the models can replicate some aspects of these features when they are run in a weather prediction mode (primarily, in my view, because of the relatively slow changes in time of SSTs, such that seasonal runs are a type of nowcasting for SST).

Another question – whether the character of the NAO and ENSO will change in the future – is still unknown, and I agree that if their character does change, we do not know whether the models are able to predict this change.

This is a key fundamental issue. If the models cannot skillfully predict CHANGES in the statistics of weather patterns, they add no value for the impacts community beyond what is available from the historical record, the recent paleo-record, and worst-case sequences of real-world observed events. Thus, we should not be giving multi-decadal climate model climate change statistics to the impacts community and claiming they have any skill.

Aside from that, I think some of the criticism is a bit exaggerated. But it really depends on what you want to look at. Some models are worse than others, but there are some models which are used in seasonal prediction and are able to provide some description of ENSO – albeit with biases. My own analysis of various GCMs also suggests that they reproduce the characteristics of the NAO.

Seasonal prediction is not a Type 4 application. I agree there is limited skill, such as for ENSO, on this time scale. The ability to faithfully predict seasonal weather, however, is a necessary condition but not a sufficient condition to then assume that multi-decadal predictions of climate are skillful. Seasonal prediction is a Type 3 application.

I think your discussion on empirical-statistical downscaling (ESD) also is a bit narrow because the field traditionally has been narrow (I guess it’s a bit analogous to the ‘top-down’ and ‘bottom-up’ discussion). To me, it involves more than just regression, and I’m getting more into predicting how the shape of the probability distribution functions may change in a future climate (see attached paper – this is only the theoretical foundation of a new and promising method that attempts to predict extreme rainfall).

I do not see how the paper you sent adds any new information in terms of predicting changes in statistics. The example from the Benestad et al. (2012) paper that you enclosed is a Type 2 downscaling study; see the text:

“…the RCMs from ENSEMBLES, all of which had a spatial resolution of 50km and used ERA 40 as boundary conditions.”

It is not Type 4 downscaling.

Furthermore, it is important to design ESD in such a way that minimises non-stationarity, and it is important to test the strategy to see if the design is successful. ‘Testing’ and ‘assessing’ are two key words – probably not appreciated enough. There are at least two types of tests: against the past and using quasi-reality to test for the future. Furthermore, by drawing in more information based on the past – as you point out – and improved statistical analysis, I believe that we in some cases can do better than just looking at the past. Sometimes, statistical models can be quite useful in terms of describing ‘uncertainty’.

We agree on the value of testing models in the hindcast mode. However, what is a “quasi-reality” test for the future? If you cannot have real world data to evaluate against, it is not a robust test.

Thanks again for engaging in this discussion! Please let me know if I can post.

I think you are right that we strictly do not know whether the models are able to predict the future – type 4 case – but this is also true for all predictions. Take the weather forecast for example – we do not know whether they will forecast the true situation, especially at longer ranges.

Since their time horizon is short, we have millions of validation tests of weather predictions on the hours to days to several week time scales. Even for seasonal prediction, we have hundreds of tests.

However, this is not true for multi-decadal climate predictions. I am in favor (and have advocated for) assessing the limits of skillful predictability, but this is distinct from providing forecasts decades from now for the impacts community.

Nevertheless, we are fairly confident about nowcasting, and there is a gradual decline in the model skill.

Even with NWP, the statistically evaluated decline of forecast skill is generally exponential with the rate of drop off depending on the variable.
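As a toy sketch of that exponential drop-off, one can write skill(t) = exp(-t/τ) with a variable-dependent e-folding time τ; the τ values below are invented for illustration, not actual NWP verification statistics:

```python
# Toy illustration: forecast skill decaying exponentially with lead time,
# at a rate that depends on the variable. The e-folding times (in days)
# are invented for illustration only.
import math

efold_days = {"500 hPa height": 8.0, "surface temperature": 5.0, "precipitation": 2.5}

for variable, tau in efold_days.items():
    skills = ", ".join(f"day {d}: {math.exp(-d / tau):.2f}" for d in (1, 3, 7, 10))
    print(f"{variable:20s} {skills}")
```

Smoothly varying fields like geopotential height retain skill longest, while noisy fields like precipitation lose it fastest, which is the sense in which the rate of drop-off depends on the variable.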

I think that the skill in these models is mostly due to both the inter-dependencies in the atmosphere/ocean and the observations (constraints). In your distinction, however, you imply that the skill is mostly due to the observations (constraints). I think that the models deserve a little more credit, and that analyses of inter-dependencies between different types of ‘fields’ (e.g. surface temperature, sea surface temperature, mean sea level pressure, geopotential heights, winds) can shed light on the intrinsic model skills.

It is the combination of observations and the physical rules of the models which provide the skillful forecasts. Models are essential for this (much of my career has involved working with the models. :-) ). However, with Type 4 downscaling (and for their parent models), the real world observations are mostly forgotten (exceptions being the deeper ocean temperatures, the terrain, land-water boundaries, etc). We are relying on models to create changes in the statistics of weather due to forcing such as added CO2, yet have no way to validate the predictions except in a hindcast mode.

The challenge is to show, in this hindcast mode, that changes in local and regional climatology can be skillfully predicted as a response to human and natural climate forcings. This has not been done.

When the GCMs (type 4) simulate the spatial structure and the statistical nature (mean, variance, interannual variability, and coupling between different fields) approximately correctly, then we see that the models are able to describe some of the most important inter-dependencies. The cited 2001 papers describe both the simulated spatial structure of the NAO and the stationarity between scales as simulated by the GCM, but this analysis is not exhaustive, so you have a point. We do not know for certain.

The successful simulation of the spatial structure and the statistical nature of the climate system is, unfortunately, only the first requirement (and as you correctly note even this has not been done completely). For the impacts community to have confidence in multi-decadal local and regional predicted changes in climate, the models must show skill in these predictions. They have not.

Thus, our bottom-up, resource-based approach becomes, in our view, the preferred approach as we can assess thresholds beyond which key resources are threatened. In terms of climate, we can use historical, recent paleo-record, worst case sequence of events, and even sensitivity experiments (e.g. +5% relative humidity arbitrarily prescribed in NWP runs) to estimate plausible impacts with today’s and estimated future societal conditions.

The hypothesis with which we are concerned is whether there are similar inter-dependencies between the greenhouse gas concentrations (warmer world) and regional atmospheric phenomena. The presence of strong inter-dependencies in both temporal and spatial dimensions, such as shown in the 2012 paper, provides useful information for Type 4 predictions.

The 2012 paper is not a Type 4 prediction. We all agree that added CO2 will have an effect [I actually feel the biogeochemical effects are the larger concern]. However, there is no skill in Type 4 and, in my view, we are misleading the impacts community by providing these forecasts.

Working for a meteorological service, I have to provide practical and useful information for e.g. decision making. We need to live with uncertainties and we need to specify the unknowns. This is nothing new, and large sums of money are used to plan for uncertain outcomes – the most obvious example is a country’s defence (we do not know if the country will need the army and the arms). The same applies to mitigation and climate adaptation – we need to think in terms of risk analysis. Therefore I think that your idea concerning ‘bottom-up’ strategies and ‘contextual vulnerability’ is so valuable.

The assessment of risk is one reason that I have concluded that the top-down IPCC approach is much too narrow of an approach. Thank you for the positive feedback on the bottom-up approach!

Rasmus wrote to me

You’re welcome. I think we agree on many issues, but might have slightly different views on others. That’s a good thing.

R. Pielke Sr wrote

Hi Rasmus

I will work to post our e-mail exchanges. Others should benefit from our interactions. :-)

In recent years, climate change has become a major focus of public and political discussion. Ongoing scientific inquiry, revolving predominantly around understanding the anthropogenic effects of rising greenhouse gas levels, coupled with how successfully findings are communicated to the public, has made climate science both contentious and exigent. In the AGU monograph Climate Dynamics: Why Does Climate Vary?, editors De-Zheng Sun and Frank Bryan reinforce the importance of investigating the complex dynamics that underlie the natural variability of the climate system. Understanding this complexity—particularly how the natural variability of climate may enhance or mask anthropogenic warming—could have important consequences for the future. In this interview, Eos talks to De-Zheng Sun.

Examples of the insightful comments from De-Zheng include:

“….even without any external forcing from human activity, the state of the climate system varies substantially.

“….one thing this book emphasizes is that, at least for interannual and decadal time scales, the climate is capable of varying in a substantial way in the complete absence of any external forces.”

“the anthropogenic effect was initially studied in a way that gave focus to its effect on the globally averaged energy balance. One-dimensional models were the early tools used to study the anthropogenic effect. While these models illustrated the importance of various radiative feedbacks in the context of global energy balance perturbed by an increase in CO2, they did not have the MJO, monsoons, ENSO, or the Pacific Decadal Oscillation, all of which may change in behavior as more greenhouse gases are released. All you can get from these one-dimensional models is a monotonic increase in global mean temperature as you increase CO2 in the atmosphere. Although three-dimensional models have now been developed, the conceptual picture about the effect of an increase in CO2 is still largely underpinned by the results from those one-dimensional models, as is evident in the notion that a significant linear trend in temperature will be the defining feature of global warming caused by anthropogenic enhancement of the greenhouse effect. The continuing influence from these early models over the way people conceptualize the anthropogenic effect is also because the state-of-the-art three-dimensional models still do not properly simulate natural variability such as MJO and ENSO. As a result, models are not yet able to capture the anthropogenic effect that takes place in the form of climate variability. In other words, our models may be underestimating the effect from anthropogenic forcing on natural variability. It is time to look seriously at an alternative hypothesis, which is that the defining feature of global warming will be changes in the magnitude of climate variability…”

“I also wanted to use this book to urge caution with regard to another traditional idea: that even if we don’t simulate natural variability very well, we may still get anthropogenic global warming right. Such an assertion is probably too optimistic. One of the key questions is whether the simulated climate system is in the correct dynamic regime, because a system near a critical point can respond very differently than a system that is in a very stable regime. A poor simulation of a natural mode of variability such as MJO or ENSO suggests that the involved system is not in the correct dynamic regime. Also, those feedbacks that affect global energy balance, such as cloud and water vapor feedbacks, may depend on the natural variability—MJO, ENSO, etc. However successful state-of-the-art climate models may be in simulating some key features of the climate system, the question of whether these models capture fully the complexity of the dynamics—in particular, whether or not these models are in the same dynamic regime as the observed climate—has yet to be answered.”

“The prevailing conceptual framework that has been used to quantify climate changes stemming from anthropogenic forcing is that increasing CO2 concentration will create a linear trend in temperature and other state variables that define the mean climate. I would suggest that the focus may need to shift from looking for trends in mean temperature to statistical changes in the magnitude and other attributes of natural variability. The effect of anthropogenic forcing is likely to manifest in climate phenomena at all time scales so long as these climate phenomena derive energy from differential heating. This shift in paradigm may further highlight the need for a better understanding of the mechanisms behind natural variability, in particular, their thermal and nonlinear aspects.”

“…the complexity of the dynamic processes seems to be either overlooked or oversimplified in many communications to the public. I suspect that could cause problems down the road, because we know climate can vary strongly on a range of time scales in the complete absence of any external forcing. If you overlook the complex dynamics of the climate system, and you don’t explain those processes up-front to the public, then you can cause confusion down the road.”

“Some climate scientists and the media, who have been more inclined to equate climate change to a monotonic warming, appeared to experience some trouble in explaining this harsh winter to the public. If one feels difficulty and potential embarrassment in explaining away a cold winter, what happens if we witness that the coming decade is cooler than the decade we have just lived through, something we know is possible within the realm of natural variability? If what is conveyed to the public is essentially a monotonic warming, how do we explain a halting or a reversal in the global temperature trend? “

This interview, and the AGU book that was the reason for the interview, should be read. It further shows that the IPCC 2007 reports ignored the broader view of the climate system that is needed if we are to robustly interpret how humans are altering it.

John Nielsen-Gammon and I have had a constructive exchange of viewpoints on his weblog Climate Abyss under his post Roger Pielke Jr.’s Inkblot. John is an outstanding scientific colleague and I value such interactions with him.

the surface temperature and lower tropospheric global average temperature anomalies are actually diverging in recent years.

This can be seen quite clearly in the RSS MSU LT analysis (see Figure 7) – http://www.ssmi.com/msu/msu_data_description.html, where there has not been an increase for a number of years.

With respect to the issue “Tyndall gas climate signal to emerge from the other signals in a manner clear enough to convince a non-scientist”

the documentation of the increase of CO2 from human emissions is already obvious. We do not need to wait to see a radiative forcing signal to discern this human influence on the climate.

The fundamental issue, however, in my view, is the relative role of this added CO2 with respect to other human (and natural) climate forcings. You seem to accept that the radiative effect of the added CO2 will emerge as the dominant climate change forcing, yet other human forcings, such as due to land use/land cover change, are emerging as possibly larger effects.

Roger – You challenge my statement “satellite temperature measurements show similar warming” with the statement “the surface temperature and lower tropospheric global average temperature anomalies are actually diverging in recent years”. So we both agree they differ to some extent. The question, for the purpose of hypothesis testing, is whether the satellite measurements indicate that no substantial warming has taken place over the past several decades.

The RSS Fig. 7 doesn’t address the issue because it doesn’t compare the different data sets, but the Klotzbach et al. (2009) paper does. The global surface temperature trends from NCDC and HadCRU from 1979 to 2008 are 0.16 C/decade. The global lower tropospheric temperature trends from UAH and RSS are 0.13 C/decade and 0.17 C/decade, respectively. Seems pretty similar.

Klotzbach et al. (2009) point out that the surface and lower tropospheric temperature trends shouldn’t be expected to match exactly, because none of the forcings are expected to produce temperature trends that are uniform with height. They assume a model-derived expected lower tropospheric amplification factor of 1.2, which I agree with. This means the lower tropospheric trend would need to be 0.19 C/decade to perfectly match the surface temperature trend.

I computed updated trends through July 2011 using woodfortrees.org. HadCRUT3 trend = 0.152, GISTEMP trend = 0.164. Amplified for the lower troposphere, these are 0.182 and 0.197. Actual satellite values are 0.137 for UAH and 0.143 for RSS. So estimated lower troposphere warming (from surface measurements) is about 0.19 C/decade, and actual satellite-based lower troposphere warming is about 0.14 C/decade. Is this difference worth investigating? Absolutely. Is this difference so large that it calls into question whether the Earth is warming? Absolutely not!
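The arithmetic in that comparison can be checked with a short script; the trend values and the 1.2 amplification factor are those quoted above, while the variable names are mine:

```python
# Check the surface-vs-satellite trend comparison (values in C/decade,
# taken from the discussion above; the 1.2 amplification factor is the
# model-derived value assumed by Klotzbach et al. 2009).
AMPLIFICATION = 1.2

surface = {"HadCRUT3": 0.152, "GISTEMP": 0.164}
satellite = {"UAH": 0.137, "RSS": 0.143}

for name, trend in surface.items():
    print(f"{name}: surface {trend:.3f} -> expected lower troposphere {trend * AMPLIFICATION:.3f}")

mean_expected = sum(t * AMPLIFICATION for t in surface.values()) / len(surface)
mean_observed = sum(satellite.values()) / len(satellite)
print(f"mean expected {mean_expected:.2f} vs mean observed {mean_observed:.2f} C/decade")
```

This reproduces the numbers in the paragraph: roughly 0.19 C/decade expected in the lower troposphere from the surface records versus roughly 0.14 C/decade actually measured by the satellites.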

You don’t have to convince me that surface temperature measurements have biases and errors, but they’re more than good enough to get the sign right.

The other issue you raise, the relative importance of other anthropogenic effects, does not argue against the hypothesis I stated, and neither does the Pielke et al. (2009) Eos article.

Hi John – Thank you for your reply. In terms of agreement between the surface and lower tropospheric temperature trends, in my view we need to examine shorter time periods as well. In recent years, the annual global average tropospheric temperature trends have been flat, as have the annual global average upper ocean heat content trends since about 2003.

The difference in recent years between the surface and lower tropospheric trends raises questions on mechanisms for these temperature trends, since, if these differences are real, it indicates the lower tropospheric vertical temperature lapse rate has changed.

There is also conflicting information on the other climate metrics that you present, such as glacial retreat. It is more complicated as there are quite a few glaciers that are advancing. This variability illustrates why we need to examine regional atmospheric and ocean circulations as the more important metric to assess climate. The never-ending drought and heat you have had to endure this summer is due to a regional circulation feature, not a global average surface temperature anomaly. Parts of western Europe are reported as having their coolest summer in 50 years, for example.

In terms of agreement between the surface and lower tropospheric temperature trends, in my view we need to examine shorter time periods as well. In recent years, the annual global average tropospheric temperature trends have been flat, as have the annual global average upper ocean heat content trends since about 2003.

Shorter-period trends tell us more about all the other things that affect climate than they do about Tyndall gas buildup, since Tyndall gas concentrations have relatively little short-term variability. Since we do care about all the other things that affect climate, I agree that short-term trends need to be examined too. But that’s a separate issue.

To isolate the effect of Tyndall gases on shorter-term variability, it’s necessary to account for the causes of the short-term trends. Several people have done this; a handy example is Tamino. When you account for the effects of just three such causes (ENSO, solar variability, and volcanic eruptions), the long-term warming trend emerges even on the sub-decade scale.
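As a toy sketch of that adjustment idea (not Tamino’s actual analysis, which uses real indices and several predictors), one can generate a synthetic temperature series with a known trend plus an ENSO-like oscillation, regress the oscillation out, and watch the trend emerge even over a single decade; every series and coefficient below is invented:

```python
# Toy version of the trend-adjustment approach: remove a short-term factor
# (a synthetic ENSO-like index) from a synthetic temperature record and
# recover the underlying trend. All data here are invented; this is a
# sketch of the reasoning, not a real analysis.
import math
import random

random.seed(1)
N = 360                                   # 30 years of monthly data
true_trend = 0.17 / 120                   # 0.17 C/decade, in C per month
enso = [math.sin(2 * math.pi * t / 44 + 0.8) for t in range(N)]  # ~3.7-yr cycle
temp = [true_trend * t + 0.15 * enso[t] + random.gauss(0.0, 0.05)
        for t in range(N)]

def slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

t = list(range(N))
enso_coef = slope(enso, temp)             # ENSO index is ~orthogonal to time here
adjusted = [y - enso_coef * e for y, e in zip(temp, enso)]

# Trend over just the last decade, before and after adjustment:
raw10 = slope(t[-120:], temp[-120:]) * 120
adj10 = slope(t[-120:], adjusted[-120:]) * 120
print(f"last-decade trend, raw:      {raw10:+.3f} C/decade")
print(f"last-decade trend, adjusted: {adj10:+.3f} C/decade (true: +0.170)")
```

The single-predictor regression for the ENSO coefficient relies on the index being nearly uncorrelated with time over the full record; a real analysis would fit all predictors simultaneously.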
You said:

The difference in recent years between the surface and lower tropospheric trends raises questions on mechanisms for these temperature trends, since, if these differences are real, it indicates the lower tropospheric vertical temperature lapse rate has changed.

Klotzbach et al. (2009) concluded that the differences were not real: “The characteristics of the divergence across the data sets are strongly suggestive that it is an artifact resulting from the data quality of the surface, satellite and/or radiosonde observations.” I agree. You did too, at one time, since you’re second author.

As one might gather from earlier portions of the response, I am of the view that the climate system is responding to a variety of forcings and also changes due to natural variability. It’s not surprising that when all factors are active, there’s more variation than when only Tyndall gases are used as a forcing agent (though even some models’ natural variability is sufficient to produce a flat decade of surface temperatures or a flat decade of ocean heat content). Again, in the context of the hypothesis I put forth, the issue is the magnitude of the response due to Tyndall gases, and the most telling evidence of that response comes when shorter-term factors have evened out over the long haul.

You said:

There is also conflicting information on the other climate metrics that you present, such as glacial retreat. It is more complicated as there are quite a few glaciers that are advancing. This variability illustrates why we need to examine regional atmospheric and ocean circulations as the more important metric to assess climate. The never-ending drought and heat you have had to endure this summer is due to a regional circulation feature, not a global average surface temperature anomaly. Parts of western Europe are reported as having their coolest summer in 50 years, for example.

Yes, your regional mileage may vary. As I said before, this is a separate issue than what’s happening on the global average. Saying my hypothesis is not the most important one does not go very far in disproving it.

And since you’ve brought up the Texas drought and heat:

The circulation anomaly is not all that unusual. More important is the lack of preexisting soil moisture, which alters the Bowen ratio in favor of hotter temperatures while at the same time making the atmosphere less susceptible to locally-generated convection and prolonging the drought conditions in a positive feedback.

Texas is running about 5.5F warmer than normal so far in August. Most of that is due to the dry soil feedback, but the degree or so add-on caused by global warming doesn’t help. (If this were a cold spell, it would help.)

We would be thrilled if this summer’s temperatures were only the warmest in 50 years.

I am not sure where we actually disagree except on the preeminence of the radiative effect of added CO2 and a few other greenhouse gases on the climate system over a long enough time period (decades). This dominance, presumably, would be due to the long residence time of CO2 in the atmosphere. Other human climate forcings are usually dismissed because they have a shorter residence time (i.e. aerosols).

We agree on this long term accumulation of CO2. Actually, I am more concerned about the (very poorly understood) biogeochemical effect of this added CO2, but, regardless, this is a significant concern. In terms of long term global warming, however, aerosols (despite their relatively short residence time) will continue to be re-emitted into the atmosphere indefinitely. A number of these aerosol effects (e.g. sulphates) cause radiative TOA cooling. We present other examples of a diversity of cooling and warming aerosol effects in the NRC 2005 report (e.g. see Table 2-2 – http://www.nap.edu/openbook.php?record_id=11175&page=40).

Moreover, while land use change has not, apparently, caused a global average change in TOA radiative forcing, it certainly may in the coming decades if low latitude population growth continues. This has a very long “residence time”.

Thus, while the added CO2, if that was all that was involved, would produce global warming (of a magnitude that depends on the magnitude of the water vapor/cloud feedback), these other human climate forcings significantly complicate the actual changes in the heat content of the climate system in the coming decades.

Therefore, I do not have your confidence on whether the coming decades will be warmer than the current or recent decades.

Roger

P.S. I still agree with

Klotzbach et al. (2009) concluded that the differences were not real: “The characteristics of the divergence across the data sets are strongly suggestive that it is an artifact resulting from the data quality of the surface, satellite and/or radiosonde observations.”

Notice I used the conditional “if” in “if these differences are real, it indicates the lower tropospheric vertical temperature lapse rate has changed.”

Roger – Yes, I think we’ve sort of converged. We agree that the evidence confirms hypothesis 1, that the earth has warmed on a multidecade scale. We agree that Tyndall gases are a warming factor, though you haven’t pinned yourself down on whether the size of the effect would be enough to cause about 2 degrees of warming by the middle of this century, all else being equal. Conversely, I haven’t pinned myself down on whether I think other forcings (such as aerosols and land use change) might be enough to cancel this effect.

I will now do so: No, I don’t think they will be near strong enough to cancel the effect of Tyndall gases on a globally-averaged basis. I believe this because: (a) they haven’t been near strong enough to cancel the effect in recent decades; (b) the influence of CO2 can only increase relative to aerosols (simplified argument: since the same power generation that produces aerosols produces CO2 and CO2 accumulates in the atmosphere, aerosols can never catch up); and (c) I can’t conceive of more land use change in the next 100 years than the past 100 years (we can’t double our arable land, for instance).
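The simplified argument in (b) can be made concrete with a toy stock-versus-flow calculation; the units and rates are arbitrary, and carbon sinks (which in reality remove a substantial fraction of emitted CO2) are ignored:

```python
# Toy stock-vs-flow illustration: CO2 accumulates year after year, while a
# short-lived aerosol settles at a steady-state burden of emission * lifetime.
# All numbers are arbitrary illustrative units, not real emissions data.
years = 100
co2_emission = 1.0          # per year
aerosol_emission = 1.0      # per year
aerosol_lifetime = 0.02     # ~1 week residence time, expressed in years

co2_stock = 0.0
for _ in range(years):
    co2_stock += co2_emission            # accumulates (sinks ignored)

aerosol_stock = aerosol_emission * aerosol_lifetime  # steady-state burden

print(f"CO2 burden after {years} years: {co2_stock:.1f}")
print(f"Aerosol burden (steady state):  {aerosol_stock:.2f}")
```

With equal, constant emissions the ratio of the two burdens grows without bound, which is the sense in which “aerosols can never catch up.”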

An excellent article, as usual Dr. n-g, I of course fully agree with both Dr.s Pielke. The problem is not whether or not human CO2 is increasing and has some effect on global temps, of course that is true and is theoretically and data based. The problem is the significance and magnitude of the effect vis a vis other myriad climate factors. And further, for the greater unwashed population that activist climatologists are trying to influence, why is that problem of such overwhelming significance that we should undertake drastic policy actions to alter it, or indeed, would those actions really do anything in human time frames we could actually see and benefit from. This is where the whole stack of cards comes down, in my opinion. In fact, I just submitted an abstract for a presentation at the annual AGU meeting in Frisco, I doubt it will be accepted, as we know how careful AGU is not to allow such things (it would upset the clanish Princess of China dining event), but that is the topic of it.

The radiative forcing of CO2 and the other greenhouse gases is a relatively minor warming effect unless there is a significant positive water vapor/cloud feedback. However, if the other human climate forcings (or natural forcings) prevent a significant ocean warming, this positive feedback will not occur or will be very muted.

There is significant research that shows that the model simulation of the water vapor/cloud feedback is overstated (e.g. see

“……extended calculation using coupled runs confirms the earlier inference from the AMIP runs that underestimating the negative feedback from cloud albedo and overestimating the positive feedback from the greenhouse effect of water vapor over the tropical Pacific during ENSO is a prevalent problem of climate models.”

In terms of long term climate system heat changes, I agree with you; the evidence is convincing (based on the most robust metric which is the upper ocean heat content in Joules) that it is warmer now than 50 years ago.

However, in the last 8 years, this heating has halted (or, at least, is very small). This clearly indicates that the added radiative effect of CO2 and the other added greenhouse gases is being countered (by natural and/or human forcings). It does tell us, however, that the water vapor/cloud feedback will be muted irrespective of the cause, as the ocean sea surface temperatures have not increased much, if at all, during this recent time period.

Regarding the assumption that a reduction in CO2 would also result in a reduction of aerosols; this is certainly true for some aerosols such as sulphates (and I am in favor of this just for health reasons!). However, dust from degraded landscapes and biomass burning in the tropics will continue indefinitely.

In terms of landscape change, it is expected to continue quite vigorously in the coming decades as the 3rd world develops. Moreover, it is not just a question of arable land; other landscapes, such as the boreal forest, are also of concern. Tropical forests are still at risk as well.

My question to you is how many more years of an absence of significant warming in the upper ocean (e.g. equivalent in Joules to a significant fraction of Hansen’s ~0.6 Watts per meter squared) would have to occur before you rethink your view on the dominance of the added CO2 and the other greenhouse gases?

And now for the rest of the story from Sun et al: “…There is no significant correlation found between the intermodel variations in the cloud albedo feedback during ENSO and the intermodel variations in the cloud albedo feedback during global warming. The results suggest that the two common biases revealed in the simulated ENSO variability may not necessarily be carried over to the simulated global warming.” I therefore reject your characterization of this paper.

“How many more years”? Do you mean how many more years without understanding OHC variability or how many more years after understanding OHC variability? For example, there’s recent observational evidence (Chu 2011, Ocean Dyn., http://www.springerlink.com/content/776065j782110161/fulltext.html) that interannual upper-level OHC in the Pacific is dominated by El Nino and El Nino Modoki modes. Given that short-term global surface temperature variations are already explained by ENSO, the Sun, and volcanoes, the most likely explanation seems to be that short-term OHC variations are also attributable to ENSO, the Sun, and volcanoes.

The models analyzed by Katsman and van Oldenborgh (2011, GRL http://www.agu.org/journals/gl/gl1114/2011GL048417/) do not include variable solar and volcanic activity, and those models include an occasional 8-year period of no 0-700 m OHC increases. During such periods, ENSO variability does indeed account for the bulk of the “missing heat”, though a large portion also ends up being transferred to the deep ocean. This is consistent with another recent modeling study that finds that it’s necessary to go down to 4000 m to obtain reliable estimates of the total OHC variability (Palmer et al. 2011 GRL http://www.agu.org/journals/gl/gl1113/2011GL047835/).

[You’ve noted in your blog comments on the latter two articles that energy lost to space or stored in the deep ocean is permanently (or almost permanently) unavailable for affecting the surface climate. However, those losses are in the context of model simulations of long-term 0-700 m OHC trends that match observed trends, so periods of more rapid losses than normal are offset by periods of more rapid gains than normal to yield the simulated (and perhaps observed) long-term gains.]

A flat trend over any time period only shows that other forcings or natural processes are canceling the warming effect of Tyndall gases over such a period. There are lots of time-varying forcings and natural processes with a variety of periods: ENSO (2-7 years plus longer-term variations), solar (11 years plus longer-term variations), PDO (50-70 years), for example. Any of those could be strong enough to cancel the Tyndall gas effect during half its phase. We know for certain that ENSO is more than strong enough to do that, yet, over the long haul, the magnitude of global warming has recently exceeded the magnitude of ENSO variability. So, in addition to a flat trend over some period of years, I’d want evidence that it was not merely a temporary flat trend. In the absence of such evidence, I’d settle for a trend longer than half a PDO cycle, or 35 years or so. With such evidence, the trend could be as short as a year, because I’d be swayed not by the trend but by the evidence.

In order to obtain an answer to the above question, I e-mailed Dr. Sun at the time with the following:

“I have set your paper to be weblogged on in a couple of weeks. However, I have a question on your conclusion that ‘We thereby suggest that the two common biases revealed in the simulated ENSO variability may not be carried over to the simulated global warming, though these biases highlight the continuing difficulty that models have to simulate accurately the feedbacks of water vapor and clouds on a time-scale we have observations’. It is not clear how such a bias could be removed when the models are applied in longer term model projections. Indeed, what is the data which says that the biases are removed?

Please clarify and I can add to the weblog.”

He replied

REPLY FROM DR. SUN

“You are right that no data have shown that those biases will not be removed. We are just mentioning the possibility that there could be error cancellation, as global warming may involve more processes than those in ENSO, and the errors may cancel in such a way that prediction of global warming by these models that have these errors may actually get the answer right. It is just a possibility worth mentioning.”

You should have not been so quick to reject my characterization of the paper.

[Roger – There’s no evidence in the paper that the feedbacks behave in the same way on global warming time scales, and instead there’s actual evidence in the paper itself that they behave differently. I agree with Dr. Sun that his paper doesn’t show that the models are right at global warming time scales, but neither does his paper show that they’re wrong on global warming time scales. So I still reject your characterization, albeit more slowly this time. – John N-G]

Hi John – We can agree to disagree on the interpretation of the Sun et al results. It is, of course, not alone in raising serious questions on the robustness of the global climate models. This includes, as just two examples,

In my view, if they cannot skillfully predict the shorter time scales of the climate system, they will necessarily not be skillful tools on the longer time periods of decades.

My question on the length of time of upper ocean heating that could occur before you would reject them as robust tools is still there. Jim Hansen, for example, very specifically stated in https://pielkeclimatesci.files.wordpress.com/2009/09/1116592hansen.pdf

“Our simulated 1993-2003 heat storage rate was 0.6 W/m2 in the upper 750 m of the ocean. The decadal mean planetary energy imbalance, 0.75 W/m2, includes heat storage in the deeper ocean and energy used to melt ice and warm the air and land. 0.85 W/m2 is the imbalance at the end of the decade.”

Thus we can concentrate on the upper ocean heating as the metric to test, irrespective of how much heat goes deeper into the ocean. The advantage of the upper ocean diagnosis is that it is much better sampled than the deeper ocean.
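As a rough sanity check on what a 0.6 W/m2 upper-ocean storage rate implies, here is a back-of-envelope sketch; the area, density, and specific-heat constants below are generic textbook values and are not taken from Hansen's paper:

```python
# Back-of-envelope: what warming of the 0-750 m ocean layer does a
# storage rate of 0.6 W per m^2 of Earth's surface imply?
# Assumed illustrative constants (not from the paper):
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14   # m^2; the imbalance is quoted per m^2 of Earth
OCEAN_AREA = 3.6e14   # m^2, global ocean surface
DEPTH = 750.0         # m, the upper-ocean layer in Hansen's estimate
RHO = 1025.0          # kg/m^3, seawater density
CP = 3990.0           # J/(kg K), seawater specific heat

storage_rate = 0.6    # W/m^2, the quoted 1993-2003 upper-ocean rate

# Heat added to the layer each year (Joules)
joules_per_year = storage_rate * EARTH_AREA * SECONDS_PER_YEAR

# Heat capacity of the 0-750 m layer (Joules per Kelvin)
layer_heat_capacity = OCEAN_AREA * DEPTH * RHO * CP

# Implied mean warming of the layer, Kelvin per decade
warming_per_decade = 10 * joules_per_year / layer_heat_capacity
print(f"{warming_per_decade:.2f} K per decade")  # prints 0.09 K per decade
```

With these assumed values, the quoted storage rate corresponds to a layer-mean warming of roughly a tenth of a degree per decade, which illustrates why a well-sampled upper ocean is an attractive diagnostic: the signal is small but integrates over an enormous heat capacity.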

With respect to the “Tyndall gas effect”, you and I are in complete agreement that the addition of greenhouse gases results in a radiative warming effect. The disagreement is over its contribution to altering atmospheric and ocean circulation patterns, which, in my view, is the more important issue for what matters to society than the global surface temperature anomaly. The spatial distribution of radiative heating from added CO2 is much more homogeneous than the spatial distribution of diabatic heating from the aerosols and land use/land cover change. We document this in our paper

The much larger spatial variations in this heating would result in larger changes to weather patterns than from the added CO2.

Finally, we both agree that the addition of CO2 and the few other greenhouse gases is a warming effect. However, what other skillful (value-added) information have the multi-decadal global models (even with downscaling) provided for the coming decades on changes in regional climatology?

“shorter time scales”: Your position on this matter has the inherent advantage of making sense. You’d think that the ability of a model to simulate current climate ought to be strongly determinative of its ability to simulate future climate change. Indeed, I’d really like to investigate the ability of models to reproduce all the different seasonal types of precipitation in Texas before I would have much confidence in their Texas projections.

And yet…when people look for correlations between skill at simulating current climate and projections of future climate, they usually find that they are unrelated. (The recent Mote et al. 2011 Eos article has an example of this.) Likewise, Chu found that the ENSO-time-scale cloud feedbacks were apparently uncorrelated with the climate-change-time-scale cloud feedbacks.

I can think of a couple of possible reasons for this. First, each climate model comes to its own unique climate equilibrium. Certain processes will be more important in some models than others. Yet for climate change, the important characteristic is not the overall importance of a process and whether its importance matches what’s observed, but the sensitivity of the process to climate change and whether that sensitivity matches what’s observed. Second, maybe there really are characteristics of present-day climate simulations that let us predict how well a model will handle climate change and we just haven’t figured out which characteristics those are.

Bottom line: I agree with you that cloud feedbacks are the most poorly-handled of all important feedbacks and that the uncertainty is so large that even its sign is unknown. They’d have to be quite negative, though, to negate the other (more precisely known) net positive feedbacks, and that is unlikely.

Also, Spencer has recently dropped below my credibility threshold so don’t bother citing him here unless the work is corroborated.

“length of time”: I guess I wasn’t clear; the last paragraph in my previous response was my answer to your question. To summarize here: 35 years without any explanation; 1 year with convincing explanation. Your Hansen quote gives one data point, which is not useful for establishing the range of uncertainty. My turn to ask a question: what’s your best estimate for climate sensitivity to doubled CO2?

“the more important issue”: You may be right and you may be wrong. I haven’t seen research persuasive enough to convince me one way or the other, so I’ll not debate you on it.

“what other skillful information”: The climate record since the projections is not long enough to assess skill. It seems highly likely that skill decreases in proportion to the non-uniformity of the change in question, but beyond that, who knows?

I do not think this is a well-posed concept (even though it is widely used). “Climate sensitivity” is applied by the IPCC and others to mean the change in the global average surface temperature in response to an imposed radiative forcing.

Climate, however, is much more than this as we discuss in depth in

National Research Council, 2005: Radiative forcing of climate change: Expanding the concept and addressing uncertainties. Committee on Radiative Forcing Effects on Climate Change, Climate Research Committee, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, The National Academies Press, Washington, D.C., 208 pp. http://www.nap.edu/openbook/0309095069/html/

In addition, all of the assessments of this sensitivity are based on model estimates of the radiative forcing and the observed surface temperature trends. Without commenting here on the uncertainties and apparent biases in the surface temperature trends (which you and I are both very well aware of), the use of models (which themselves are hypotheses) necessarily prevents such an assessment of sensitivity from a real world observational validation.

Do you agree that the monitoring of ocean heat content changes is a more robust way to assess the radiative imbalance of the climate system than using the global average surface temperature trends to determine a so-called “climate sensitivity”? [which is actually just a subset of climate, and is really a measure of the heat accumulation rate due to the TOA radiative imbalance]

I discuss the value in using the ocean heat content changes as the diagnostic metric to monitor global warming (and cooling) in

On Roy Spencer’s paper, I recommend you read it for the science he presents. It is in the peer reviewed literature. If he is wrong, this needs to be reported. If he is correct, it certainly should not be ignored.

Roger –
Would this be a well-posed question: If the Earth in 2060 were to have a CO2 concentration of 540 ppm, increasing steadily from its present value, but with no further changes in land use, volcanic activity, solar insolation, or other climate forcings, how much warmer do you think the global average temperature would be compared to preindustrial values?

It may be difficult for us to carry out this experiment, given that we can’t control volcanic activity or solar insolation, but we may get lucky and have them remain stable anyway. So this is a physically realizable future climate state, achieved through a physically possible trajectory. What’s your best estimate?

“all assessments”: Forster and Gregory (2006) made their estimate completely independent of climate model estimates of the radiative forcing. You are free to apply an ad-hoc correction to the magnitude of the surface temperature trend used in their calculation in order to arrive at your own climate sensitivity estimate independent of model or surface temperature biases. (Actually, that’s pretty much how I do it.)

“Do you agree”: I agree that monitoring of ocean heat content has the potential of being a more robust way of assessing radiative imbalance. Since it’s a fairly new technology, I’d like to give it a few more years for most of the bugs to shake out, but this may be skepticism based on ignorance. On the other hand, weather is affected much more directly by ocean (and land) surface temperatures than by ocean heat content, so I prefer global surface temperature over ocean heat content as a convenient metric for how much the sensible climate has actually changed or is going to change.

“Spencer’s paper”: It didn’t take my colleague Andrew Dessler long to work out a demonstration that Spencer’s new paper is wrong. Many of his colleagues have counselled against publishing this demonstration, arguing that the time wasted refuting yet another in a series of incorrect papers by the same author would be better spent advancing our knowledge about the climate system and that at some point it’s better just to ignore incorrect papers. I personally agree with you that an incorrect paper should be publicly refuted in the scientific literature, but I can see how it would get annoying to be working on one public refutation after another.

“If the Earth in 2060 were to have a CO2 concentration of 540 ppm, increasing steadily from its present value, but with no further changes in land use, volcanic activity, solar insolation, or other climate forcings, how much warmer do you think the global average temperature would be compared to preindustrial values?”

I prefer to answer in terms of Joules of added heat that the models are predicting from the radiative imbalance of added CO2 and a few other greenhouse gases. Jim Hansen concludes that there was a radiative imbalance of 0.85 Watts per meter squared at the end of the 1990s, which converts to ~1.38 x 10**23 Joules per decade, or about 5.5 x 10**23 Joules of accumulation over the next 40 years if we assume his value is representative of this time period (it would presumably be larger as CO2 increases).
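The arithmetic behind these figures can be checked directly, assuming an Earth surface area of about 5.1 x 10**14 m2 (a standard value, not stated in the text):

```python
# Checking the conversion from a radiative imbalance in W/m^2 to
# accumulated Joules. Assumed Earth surface area (not in the text):
EARTH_AREA = 5.1e14           # m^2
SECONDS_PER_DECADE = 3.156e8  # ~10 years of seconds

imbalance = 0.85              # W/m^2, Hansen's end-of-decade value

# Watts are Joules per second, so multiply by area and time
joules_per_decade = imbalance * EARTH_AREA * SECONDS_PER_DECADE
joules_40yr = 4 * joules_per_decade

print(f"{joules_per_decade:.2e} J/decade")  # prints 1.37e+23 J/decade
print(f"{joules_40yr:.2e} J over 40 yr")    # prints 5.47e+23 J over 40 yr
```

This reproduces the ~1.38 x 10**23 J per decade and ~5.5 x 10**23 J over 40 years quoted above.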

I do not know, however, a unique and accurate way to convert this to a global average surface temperature anomaly, since it matters whether it is the mean, the maximum or the minimum temperature anomaly (a distinction that, as you know, matters over land), and also the value of the absolute temperature (through sigma T**4).
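The sigma T**4 point can be illustrated with a short sketch: the blackbody flux change per degree of anomaly, 4*sigma*T**3, depends on the absolute base temperature, so the same temperature anomaly implies a different radiative response in different places (the temperatures below are illustrative values, not taken from the discussion):

```python
# Illustration of the Stefan-Boltzmann nonlinearity: emitted flux is
# sigma * T**4, so the flux change per 1 K anomaly, d(flux)/dT = 4*sigma*T**3,
# grows with the absolute temperature of the emitting surface.
SIGMA = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def flux_per_kelvin(T):
    """Change in blackbody emission for a 1 K anomaly, in W/m^2 per K."""
    return 4 * SIGMA * T**3

# Illustrative absolute temperatures (K): cold, near-global-mean, warm
for T in (273.0, 288.0, 303.0):
    print(f"T = {T:.0f} K: {flux_per_kelvin(T):.2f} W/m^2 per K")
```

A 1 K anomaly at 303 K changes the emitted flux by noticeably more than the same anomaly at 273 K, which is one reason a single global-mean anomaly does not map uniquely onto Joules of accumulated heat.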

In terms of the robustness of ocean heat content anomalies, since about 2003 they are considered by Josh Willis to be quite accurate in the upper 700m, as a result of the density of the Argo network and the use of satellite measures of sea level. Ocean heat content is also, in my view, more informative regarding weather than sea surface temperature alone, since it is the sensible and latent fluxes of this heat into and out of the ocean that strongly influence such features as tropical cyclones, ENSO, etc. The sea surface temperature is just a sample of this heat.

We both agree that IF the added greenhouse gases were the only human climate forcing, there would be warming. However, even in this thought experiment, the magnitude is still uncertain since the water vapor/cloud feedback is still incompletely understood, as exemplified by the Stephens paper.

I also would not be so quick to dismiss Spencer’s paper. If Andrew Dessler can refute it, that is how science works, but it needs to be shown in the peer reviewed literature.

“in answer”: If you do not know a unique and accurate way to convert 5.5 x 10**23 Joules of accumulation into a global average surface temperature anomaly, would it be fair to say that you do not know how to convert 5.5 x 10**23 Joules of accumulation into any particular quantitative consequence for surface weather and climate? And if that is so, doesn’t that mean that ocean heat content is not an adequate metric for global climate change?

[I regard ocean heat content as a useful metric for some things, and global surface temperature anomaly as a useful metric for other things. For the instantaneous state of the atmosphere, global surface temperature anomaly is more useful than ocean heat content.]

“robustness”: …and the surface temperature network is considered quite accurate by Brohan et al. for global temperature trends, including minimal impact of urbanization. I hear what Josh is saying, but expert testimonials by themselves are of limited usefulness.

Update:
P.S. Thank you for the opportunity to show readers what a “robust scientific debate” actually looks like. Each person gets to check the other’s claims, examine the research on the subject, and evaluate the balance of evidence before responding. And neither side knows how the debate will resolve itself, whether one will convince the other, or whether the debate will unearth a third possibility. Here the debate is happening at high speed, and even so it’s taking several days. Now, dear reader, consider how impossible this would be in a one-hour debate event. Notice also, dear reader, that unlike in a one-hour debate event, it’s possible to fact-check, and this necessarily keeps both sides honest. (Roger and I are both honest anyway, but that’s not true of all debaters.) Adequate response time also tends to prevent unconstrained Gish-Galloping.

P.P.S. If you’re trying to keep track of the threads, we’ve wandered fairly far afield of the original issues, so I’ll summarize. Hypothesis 1: we both agree it’s correct. Hypothesis 2: Roger doesn’t necessarily think it’s incorrect, but that it’s not very useful.

“…..If you do not know a unique and accurate way to convert 5.5 x 10**23 Joules of accumulation into a global average surface temperature anomaly, would it be fair to say that you do not know how to convert 5.5 x 10**23 Joules of accumulation into any particular quantitative consequence for surface weather and climate? And if that is so, doesn’t that mean that ocean heat content is not an adequate metric for global climate change?”

The use of ocean heat content change is not an adequate metric for climate change as a whole, as climate change encompasses a wide variety of issues. However, in my view (and that of Jim Hansen, as I have heard him say), the use of ocean heat content is a much more robust metric of global climate system heat changes (i.e. global warming).

The global average surface temperature, in contrast, is a much more difficult quantity to define. Besides the question of whether the mean, minimum or maximum is meant (it is usually the mean), there is the question of what measurement height is used (e.g. surface skin value, 2m, etc). Now that we have improved sampling of actual heat (in the upper ocean), I recommend we move towards that metric to diagnose global warming.

For climate change more generally, we need to develop new metrics as we recommended in the 2005 National Research Council report. The metric we proposed in

There was a disclaimer that was added which I did not include in the above post. It was required by the AGU editors in specific response to my interview. The disclaimer reads (see at the end of the interview)

“The opinions presented in the interview do not necessarily represent those of the interviewer or the AGU.”

I see nothing wrong with this disclaimer, but it should be included with ALL interviews.

Roger A. Pielke Sr. is currently a Senior Research Scientist at the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado and a Professor Emeritus of the Department of Atmospheric Science, Colorado State University. Pielke has studied weather and climate on local, regional and global scales using both models and observations throughout an over 40 year career. He has authored, co-authored and co-edited several books including “Mesoscale Meteorological Modeling” (1984; 2002), “The Hurricane” (1990), “Human Impacts on Weather and Climate” (1995; 2006), “Hurricanes: Their Nature and Impacts” (1997) and “Storms” (1999). Roger Pielke Sr. was elected a Fellow of the AMS in 1982 and a Fellow of the American Geophysical Union in 2004. He has served as Chief Editor of the Monthly Weather Review and Co-Chief Editor of the Journal of the Atmospheric Sciences. He is currently serving on the AGU Focus Group on Natural Hazards (August 2009-present) and the AMS Committee on Planned and Inadvertent Weather Modification (October 2009-present). Dr. Pielke has also published over 350 papers in peer-reviewed journals, 50 chapters in books, and made over 700 presentations during his career to date. A listing of papers can be viewed at the project website: http://cires.colorado.edu/science/groups/pielke/pubs/. He is one of three faculty listed by ISI HighlyCited in Geosciences at Colorado State University, and one of four at the University of Colorado at Boulder.

Hans von Storch Question

Prof Pielke, you are an atmospheric scientist – what were the main scientific issues you have tackled in your long professional career?

Roger A. Pielke Sr. Reply

Our research team has investigated a wide range of climate processes. This includes studies in meteorology, hydrology, ecology and oceanography. Among our findings has been the clear demonstration of the close coupling between land surface processes and weather. I have also worked extensively to improve our understanding of the transport and dispersion of air pollution, as well as ways to reduce the risk from this environmental hazard.

Hans von Storch Question

How do you weigh the role and the potentials of models?

Roger A. Pielke Sr. Reply

Models are powerful tools with which to understand how the climate system works on multi-decadal time scales, as long as there are observations against which to compare the model simulations. However, when they are used for predictions of environmental and societal impacts decades from now, for which there are no data to validate them (such as the IPCC predictions decades into the future), they present policymakers with a level of forecast skill that does not exist. These predictions are, in reality, model sensitivity studies, and this major limitation in their use as predictions needs to be emphasized. Unless accompanied by an adequate recognition of this large uncertainty, they imply a confidence in the skill of the results that is not present.

Hans von Storch Question

You have become known for dissenting views in the present debate about the perspective of anthropogenic climate change. For example, you stress the role of land use changes as another key driver influencing our climate. Could you outline your position?

Roger A. Pielke Sr. Reply

My perspective is summarized in a recent publication with 18 other Fellows of the American Geophysical Union in an EOS article titled “Climate change: The need to consider human forcings besides greenhouse gases” [Pielke Sr. et al., 2009]. We wrote “the 2007 Intergovernmental Panel on Climate Change (IPCC) assessment did not sufficiently acknowledge the importance of these other human climate forcings in altering regional and global climate and their effects on predictability at the regional scale” and because “global climate models do not accurately simulate (or even include) several of these other first order human climate forcings, policymakers must be made aware of the inability of the current generation of models to accurately forecast regional climate risks to resources on multidecadal time scales.”

Hans von Storch Question

If you were right, how would the range of options for response measures for limiting man-made climate change within certain bounds differ from what is commonly considered?

Roger A. Pielke Sr. Reply

We need to recognize that the IPCC starts from the inappropriately narrow perspective that human-input greenhouse gases are the dominant environmental concern in the coming decades, and then presents policymakers with a resulting broad range of expected regional and local impacts. This is, however, at best a significantly flawed, incomplete approach.

The IPCC process should be inverted. In our 2009 EOS article that I referred to above, we recommend that the next assessment phase of the IPCC (and other such assessments) broaden its perspective to include all of the human climate forcings. It should also adopt a complementary and precautionary resource based assessment of the vulnerability of critical resources (those affecting water, food, energy, and human and ecosystem health) to environmental variability and change of all types. This should include, but not be limited to, the effects due to all of the natural and human caused climate variations and changes.

After these threats are identified for each resource, then the relative risk from natural and human-caused climate change (estimated from the GCM projections, but also the historical, paleo-record, and worst case sequences of events) can be compared with other environmental and social risks in order to adopt the optimal mitigation/adaptation strategy.

The issues we should focus on can be summarized in this set of questions:
1. Why is this resource important? How is it used? To what stakeholders is it valuable?
2. What are the key environmental and social variables that influence this resource?
3. What is the sensitivity of this resource to changes in each of these key variables? (This includes, but is not limited to, the sensitivity of the resource to climate variations and change on short (e.g. days), medium (e.g. seasons) and long (e.g. multi-decadal) time scales.)
4. What changes (thresholds) in these key variables would have to occur to result in a negative (or positive) response to this resource?
5. What are the best estimates of the probabilities for these changes to occur? What tools are available to quantify the effect of these changes? Can these estimates be skillfully predicted?
6. What actions (adaptation/mitigation) can be undertaken in order to minimize or eliminate the negative consequences of these changes (or to optimize a positive response)?
7. What are specific recommendations for policymakers and other stakeholders?

I have been commissioned as Chief Editor of a set of five books which will apply this bottom-up, resource based perspective.

Hans von Storch Question

You retired a few years ago from active duty as a professor at Colorado State University. Did retirement represent a loss of opportunities for you, for instance with respect to teaching, or an opening of new possibilities?

Roger A. Pielke Sr. Reply

I continue to work with graduate students at the University of Colorado, and at other institutions including Purdue University and the University of Alabama at Huntsville. I continue to be active in research and mentoring of younger scientists.

Hans von Storch Question

What would you consider the two most significant achievements in your career?

Roger A. Pielke Sr. Reply

First, the opportunity to mentor graduate students and postdoctoral research staff, a number of whom have become leaders in atmospheric and climate science, has been an achievement I am proud of. Second, the perspective that climate is an integrated nonlinear physical, chemical and biological system, which requires understanding all components of the atmosphere, ocean, land and cryosphere, is starting to become more widely accepted. I have sought to promote this view over the last 20 years. This broader view of climate as a complex, nonlinear geophysical system is more scientifically robust than the one presented in the IPCC reports.

Hans von Storch Question

When you look back in time, what were the most significant, exciting or surprising developments in atmospheric science?

Roger A. Pielke Sr. Reply

The ability to monitor the climate system from space has provided a much better understanding of climate as a system. We also are developing an improved recognition of the difficult challenges we face in seeking to skillfully predict climate decades from now. In terms of negative developments, the bias in the funding of climate science research, which tends to exclude perspectives that differ from the IPCC viewpoint, is a major concern. Another negative development is the publication in peer-reviewed research papers, over the last 10-15 years, of climate forecasts and impacts decades into the future. Their publication subverts the scientific process, since these predictions are not testable until after that time period has elapsed.

Hans von Storch Question

Is there a politicization of atmospheric science?

Roger A. Pielke Sr. Reply

Very definitely. There is a clear intent, for example, in the climate assessment report process to exclude from research papers and from funding those scientists who disagree with the IPCC perspective. This was exemplified in the CRU e-mails, but it is a much wider problem, as I have documented on my weblog, in testimony to the U.S. Congress, and in Public Comments.

Hans von Storch Question

What constitutes “good” science?

Roger A. Pielke Sr. Reply

“Good” science is completed when hypotheses are presented and tested with real world data to see if they can be refuted. Unfortunately, the IPCC uses multi-decadal global climate model predictions as a basis for policy action yet these model predictions cannot be tested since we need to wait decades to obtain the real world data. Even in hindcasts of the last few decades, these models have shown no regional predictive skill.

Hans von Storch Question

What is the subjective element in scientific practice? Does culture matter? What is the role of instinct?

Roger A. Pielke Sr. Reply

Science needs to advance by following the scientific method. This needs to be independent of culture or any other external influence.

The interview that the Weather Channel did with us on Monday was quite instructive as it highlighted areas of significant agreement among us as well as the areas of disagreement. I did post today on one comment you made; see

and would like to see if you would respond in a comment that I can post. It is with respect to the reasoning for your conclusion of a 5-10% increase in precipitation in the flood areas of Pakistan and China. Any other comments on what I wrote would be welcome also.

Unfortunately, the interview is not on-line at the Weather Channel, but they have promised to send a DVD with this show which we will plan to convert and post.

In the interim, I wanted to comment on one statement that Kevin made (paraphrased below) that

…with global warming there is probably about a 5-10% increase on the precipitation that has occurred with the events in Pakistan and China…

This is an interesting conclusion. First, we need a basis for this number, and I have e-mailed Kevin to respond to this request. Second, if we accept this as true, it still means that the devastating floods would still likely have occurred even with 5-10% less rainfall.

The Economist has an informative article on the floods in Pakistan titled “Swamped, bruised and resentful” [subscription required]. With respect to the reasons for the flood damage, the article writes

“The deluge, which was many times the usual monsoon and fell farther north and west than usual, has exposed the lack of investment in water infrastructure, including big dams, much of which was built in the 1960s. The removal of forest cover may also have allowed rainwater to drain faster into the rivers. “

Clearly, there is a climate component in terms of where the anomalous rainfall fell. However, the failure to reduce the region’s vulnerability to floods through adequate water resource management (the lack of infrastructure development) and the environmental damage from deforestation (the accelerated runoff) significantly magnified the seriousness of this disaster. This is why we need the bottom-up, resource-based perspective that I mentioned in my answer on the Weather Channel, and in my papers and blog posts; e.g. see

“The water cycle is among the most significant components of the climate system and involves, for example, cloud radiation, ice albedo, and land use feedbacks [NRC, 2003]. Regional and local variations in water availability, water quality, and hydrologic extremes (floods and droughts) affect humans most directly.”

“Risk assessments require regional- scale information. Thus, in addition to the current approach based on global climate models, local and regional resource- based foci are needed to assess the spectrum of future risks to the environment and to the resources required for society. For example, by regulating development in floodplains or in hurricane storm surge coastal locations, effective adaptation strategies can be achieved regardless of how climate changes.”

“We recommend that the next assessment phase of the IPCC (and other such assessments) broaden its perspective to include all of the human climate forcings. It should also adopt a complementary and precautionary resource- based assessment of the vulnerability of critical resources (those affecting water, food, energy, and human and ecosystem health) to environmental variability and change of all types. This should include, but not be limited to, the effects due to all of the natural and human caused climate variations and changes.”