I’ve written a number of times about energy balance models. I think these are a nice way to estimate effective climate sensitivity (both transient and equilibrium), but they are quite simple and do suffer from some issues. For example, they are largely incapable of accounting for inhomogeneities in the forcings and cannot account for possible non-linearities.

To be clear, this doesn’t mean that I think energy balance estimates are wrong, or not useful; simply that one has to be slightly careful as to how one interprets the results. There are, however, a couple of other things that typical energy balance models cannot incorporate. We’re fairly certain that internal variability can influence the rate at which the surface warms. Energy balance models typically consider the change in various quantities (temperature, radiative forcing, system heat uptake rate) across some time interval. Internal variability could, therefore, influence the value of these changes. For example, it is largely agreed that we’ve undergone a surface warming slowdown in the last 10 years or so. Therefore, one might expect that the change in temperature today, relative to some earlier time, will be slightly smaller than if internal variability were not playing a role.

Something else that energy balance models do not consider is that there is a small lag between a change in forcing and the resulting warming. Now, I’m not talking about the time it takes for the entire system to reach equilibrium, but the time it would take for the upper ocean, atmosphere and land to reach a quasi-equilibrium (i.e., all attain the same temperature). This is relatively quick, but still probably takes a few years. This does mean, however, that if you consider the temperature and forcings at the same time, you may be slightly over-estimating the change in forcing.

One of the model inputs is essentially the NINO3.4 ENSO index and represents internal climate variability. You don’t even really need to define what the parameters are, but they are essentially an offset, a climate sensitivity, a factor representing the strength of the influence of ENSO events, and a lag time. Note also that the forcings are convolved with a response function, which means that the forcing response isn’t instantaneous, but rather rises and then decays with a timescale set by one of the model parameters.
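The structure described above can be sketched in a few lines. This is a hypothetical illustration, not the authors’ actual implementation: the parameter names (`offset`, `sensitivity`, `enso_strength`, `tau`) stand in for the paper’s parameters, and the exponential response kernel is an assumption consistent with a response that rises and then decays.

```python
import numpy as np

def one_box_temperature(forcing, enso, offset, sensitivity, enso_strength, tau):
    """Minimal one-box sketch: convolve the forcing with a normalised
    exponential response of timescale tau, scale by the sensitivity,
    then add an ENSO term and a constant offset."""
    t = np.arange(len(forcing), dtype=float)
    kernel = np.exp(-t / tau)   # assumed exponential response
    kernel /= kernel.sum()      # normalise so a sustained forcing is fully realised
    response = np.convolve(forcing, kernel)[: len(forcing)]
    return offset + sensitivity * response + enso_strength * enso
```

For a sustained step forcing, the modelled temperature rises towards sensitivity × forcing rather than jumping there instantly, which is the lag effect discussed above.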

My understanding of how they apply this (and the authors can correct me if I’m wrong) is that they have a forcing dataset (with a range for each forcing) and a temperature dataset (HadCRUT3v-gl). For a particular choice of forcings, they vary the parameters so as to get a best fit to the temperature dataset. Then, using the same parameter values, they determine the transient climate response (TCR) by doubling atmospheric CO2 through an annual increase of 1% per year, over a period of 70 years. This is then repeated with different possible forcings so as to produce a range and best estimate for the TCR. The result is shown in the figure below. The top left is the forcings, the top right is a comparison of the model and the observations, bottom left is the anthropogenic forcing, CO2-only forcing, and natural forcings, and the bottom right is the probability distribution function (PDF) for the TCR.
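Given a fitted model of this kind, the TCR step can be sketched as below: drive the model with CO2 rising 1% per year for 70 years and read off the warming at doubling. The exponential response and the F = 5.35 ln(C/C0) forcing approximation are illustrative assumptions, not the paper’s code.

```python
import numpy as np

def tcr_from_one_box(sensitivity, tau, years=70):
    """Sketch of the TCR calculation: drive a one-box model with CO2
    rising 1% per year until it doubles (~70 years).
    Forcing uses the standard approximation F = 5.35 ln(C/C0)."""
    co2 = 1.01 ** np.arange(years + 1)   # concentration relative to the start
    forcing = 5.35 * np.log(co2)         # W m^-2
    t = np.arange(years + 1, dtype=float)
    kernel = np.exp(-t / tau)            # assumed exponential response
    kernel /= kernel.sum()
    warming = sensitivity * np.convolve(forcing, kernel)[: years + 1]
    return warming[-1]                   # warming at (approximately) doubled CO2
```

Because of the lagged response, the warming at year 70 is slightly less than sensitivity × 3.7 Wm-2, which is exactly the difference between a model-based TCR and a same-time energy-balance estimate.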

Credit: Cawley et al. (2014)

What Cawley et al. (2014) find is that the TCR has a 95% credible range of 1.3–2°C with a best estimate (peak of the PDF) of 1.66°C. The range is similar to many other estimates, but the best estimate is higher than that obtained using energy balance models. Energy balance estimates typically get a best estimate for the TCR of between 1.3 and 1.4°C (see, for example, Otto et al. 2013, and Lewis & Curry 2014). I suspect there are a number of reasons for this. By explicitly including internal variability (through the ENSO index) and by fitting to the entire time series (rather than just the beginning and the end), Cawley et al. have probably reduced the influence of internal variability on the TCR estimate. Cawley et al. (2014) have also determined the TCR by doubling CO2 at a 1% per year increase, over a period of 70 years, which is the formal way of estimating TCR. Energy balance models cannot do this, and simply estimate it from the change – across the time interval – in the various quantities. Given that there are climate models that match the 20th century warming, but which have higher TCRs than energy balance models suggest, this may not be all that surprising.

This post has actually got rather longer than I intended, and I don’t really have any major conclusions to draw from this. It seems like an interesting paper that attempts to estimate climate sensitivity using a fairly simple model, but one that includes various factors that energy balance models are unable to incorporate. The result is broadly consistent with other estimates (TCR of somewhere between 1.3 and 2°C) but has a best estimate that is a little more consistent with climate model estimates than is typically the case for energy balance estimates. That doesn’t make it right, of course, but it may indicate that the simplicity of basic energy balance models gives them a tendency to under-estimate TCR, since they can’t incorporate factors that would tend to produce a larger estimate. As usual, feel free to add your own thoughts through the comments, and if anyone thinks I’ve got something wrong, or misunderstood the paper, feel free to point it out.

76 Responses to A minimal model for climate sensitivity

Are they really trying to debunk the Lowly (L&S) paper that way? They should really look at how simple it is to do. Lowly obviously refuses to believe that a CO2 forcing can create a knee in the warming. So Lowly artificially creates two linear segments. That is so obviously ridiculous — so end of argument.

As Richard Telford wrote when the Lowly paper came out:

http://quantpalaeo.wordpress.com/2014/07/24/a-minimal-model-for-estimating-climate-sensitivity/
“One of the classic tells that a fake climate skeptic is trying to squeeze a weak paper into the literature is that they submit it to a journal where it falls outside the usual scope of papers published there. The problem is that the editor is not familiar with the relevant literature nor with the most suitable reviewers and the paper does not get the robust peer review that it might get at a more appropriate journal.”

BTW, the TCR is 2C, so I don’t even think Cawley is doing the calculation correctly.

At first glance it looks like the model is overdamped, but if you look closely at the top-right figure one sees that the model follows the rise of the observations but peaks out too soon, which would imply that θ3 is too small. It might be more interesting to optimize the model on the annual variations than on the overall match. BTW, good post.

“BTW, the TCR is 2C, so I don’t even think Cawley is doing the calculation correctly.”

As I understand it, even the Cawley et al. model can’t incorporate inhomogeneities or future non-linearities, so there is still an argument that it will also be an under-estimate.

Eli,
Thanks, interesting point. It could be that θ3 is too small. Could also be that they haven’t lagged the ENSO index, I guess. I think it’s just interesting that you can get what appears to be a better match with GCMs with what is still a simple model, but one that includes some of what isn’t included in simple EBMs.

Write a blog post reviewing Isaac Held’s blog post. You will get everyone going in a different direction. And you will also leave people like Nic Lewis in the dust, as he is capitalizing on this same poor assumption.

Good heavens, non-linearities in an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions, who would ever have thought it!

“You should consider the possibility that even if you are rather uncertain about how this all works, that may not be true for everyone else.”

Just the possibility. 🙂

“Something else that energy balance models do not consider is that there is a small lag between a change in forcing and the resulting warming. … This does mean, however, that if you consider the temperature and forcings at the same time, you may be slightly over-estimating the change in forcing.”

With the model they developed they could study this. Did they try setting the lag to zero? How large is that effect?

You should consider the possibility that even if you are rather uncertain about how this all works, that may not be true for everyone else.

This aspect of the climate ‘debate’ continually amazes me. Few among us (IANAL) would argue the law with our solicitor (attorney), presume to dispute investment strategy with an expert financial advisor or tell an accountant that she was clueless about tax regulations. We recognise domain expertise in others and respect it. But somehow, when it comes to climate science, everybody knows better than the experts.

Victor,
I think you mean the lag parameter 🙂 I’m not sure, but as I understand it, they marginalised over these parameters to get the best fit, so maybe they were not specified (although maybe I’ve got that wrong). Easy enough to test, though.

Actually, maybe I’ve just realised something. I had assumed that this method considered the range of possible forcings, but I may be mistaken about that. Having thought about it a bit more, it might be that the TCR PDF is determined by marginalising over the parameter values for a fixed set of forcings and then determining the TCR value for each set of parameter values.

Comparing their figures 3a and 3c, which present the natural forcings (3a) and the related model component (3c), it seems that the decay time of the forcing is around 5 years. It’s a bit surprising that they don’t tell the number they use, but the smoothing and delay that result from the convolution are there to see.

As Kevin Cowtan is one of the co-authors, it is interesting to compare the minimal model of climate sensitivity with his n-box model (two box in default setting) hosted at the University of York, which is obviously related. The default forcings in the n-box model are listed as RCP2011, which I presume are the Meinshausen et al (2011) forcings used in the paper. Beyond that, so far as I can tell, the n-box model on default settings differs only in having two time constants representing intuitively the upper ocean plus atmosphere (T1=1), and the deep ocean (T2=30), and in having a fixed θ2.

Making the comparison, it is obvious that θ2 in the n-box model better characterizes the response to ENSO than the value found in the paper. The result is an excellent fit to observed temperatures (r^2 = 0.932) without any issues of over damping. The resultant best fit TCR is 1.649, ie, essentially indistinguishable from the result in the paper. That should address WHT’s concerns about heat flow, and the effect on TCR. It also suggests that ENSO has not been a significant factor causing inaccuracy in observational estimates of TCR, at least when the forcings are fitted against temperatures over the whole period, not just at endpoints.
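A two-box response of the kind described (a fast upper-ocean/atmosphere box with T1 = 1 and a slow deep-ocean box with T2 = 30) can be sketched as follows. This is a guess at the structure, not the actual n-box code hosted at York, and the sensitivities passed in are placeholders.

```python
import numpy as np

def two_box_response(forcing, sens_fast, sens_slow, tau_fast=1.0, tau_slow=30.0):
    """Two-box sketch: the total response is the sum of a fast component
    (upper ocean + atmosphere, timescale tau_fast) and a slow component
    (deep ocean, timescale tau_slow)."""
    t = np.arange(len(forcing), dtype=float)
    total = np.zeros(len(forcing))
    for sens, tau in ((sens_fast, tau_fast), (sens_slow, tau_slow)):
        kernel = np.exp(-t / tau)   # assumed exponential response per box
        kernel /= kernel.sum()
        total += sens * np.convolve(forcing, kernel)[: len(forcing)]
    return total
```

The point of the second box is visible in a step-forcing experiment: the fast box equilibrates within a few years while the slow box keeps the total response rising for decades, which is why a two-box model avoids the over-damping issue of a single time constant.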

On a side note, when looking at the IPCC AR5 data on attribution, I noted that the fact of near zero internal variability between 1950 and 2010 end points allows a straightforward estimate of TCR. On the IPCC figures, that turns out to be 1.66 C per doubling of CO2. Based on Isaac Held’s blog post referred to by WHT, that should provide an accurate estimate, limited primarily by our uncertainty about the temperature increase and the change in forcing.

“That should address WHT’s concerns about heat flow, and the effect on TCR.”

Indeed. A two-box model gets one on the road to a full diffusional response — not perfect but much better than using a single time constant.

I would suggest that TCR is 1.5C for ocean only and 2C for combined land-ocean, which makes it ~3C for land only (which is essentially equal to ECS, as land has no real thermal sink). This is all based on my unpublished CSALT model.

Hello all, the key point of the one-box model was to illustrate that you can make a more minimal model with some basis in physics that can also predict GMSTs similarly well, and that the one-box model can be used to explore its assumptions, rather than just being a statistical fit of some function to the data. For instance you could substitute different estimates of aerosol forcings and see what difference it makes to the estimate of climate sensitivity. As GEP Box would say, “all models are wrong, but some are useful”, I think this one is useful in illustrating the benefits of a more physics based approach to modelling.

The thing that our model did that is different to Kevin’s model is that we used a Bayesian approach to fitting the model, which takes into account the uncertainty in estimating the model parameters (although there are also frequentist approaches to this). The model was fitted to the same calibration period as used by Loehle (ending in 1950), so you might get a slightly different answer using the whole dataset.

Another important point to mention is that Loehle’s method tries to explain as much of the observations using his cyclic model as possible and then estimate the effect of CO2 from the residual. This means the estimate of TCR will be subject to “missing variable bias”, in that any effects of CO2 forcing which are correlated with any component of the cyclic model will be absent from the residual and hence the model will systematically underestimate climate sensitivity. The 1-box model is the other way round in that it attempts to model the observations as a function of the forcings, and the residual is treated as natural variability (except ENSO which is included in the model). Again this means there may be missing variable bias (as an effect of natural variability that is correlated with the forcings will be attributed to the forcings) this time tending to over-estimate climate sensitivity.

At the end of the day, being a statistician (of sorts), I am much more reassured by a model based on physics, rather than a statistical model, because there is more reason to believe the model isn’t giving the right answer for the wrong reason. The more physics is used to constrain the model, the more reliable it is likely to be.

Gavin,
Thanks. I think I got a couple of things wrong in the post. I hadn’t noticed that you fitted to the same period as Loehle (which makes sense given the motivation behind your paper) and that the TCR PDF was based only on the uncertainty in estimating the model parameters and not on the possible range for the forcings. Presumably one could extend this and do a Nic Lewis-like study where you also include the uncertainties in the forcings. That – if I understand this properly – wouldn’t change your best estimate for the TCR, but would increase the 95% range.

ATTP, adding in the uncertainty in the forcings would surely broaden the 95% credible interval. Whether it left the most probable value unchanged would depend on the distribution of uncertainty in the forcings; if the distribution were highly skewed it might have a more substantial effect. I would expect it to stay broadly the same, though, as the uncertainty in estimating the parameters is fairly high. For me, as a non-physicist statistical type, I would have thought the uncertainties due to uncertainty in the forcings is small compared with the uncertainty indicated by the difference between energy balance models and estimates from paleoclimate data, and we need to avoid single study syndrome and consider all estimates based on their merits, but not disregard estimates at either end of the spectrum.

“I would have thought the uncertainties due to uncertainty in the forcings is small”

Except for the aerosol forcing, I think.

“we need to avoid single study syndrome and consider all estimates based on their merits, but not disregard estimates at either end of the spectrum.”

Yes, I agree. In a sense what struck me about this was that it is still quite a simple method but seems to have somewhat reduced the discrepancy between EBMs, climate models and paleo. I don’t think we should ignore EBM results, but we also shouldn’t ignore that there are factors they can’t consider and – as I understand it – much of what they can’t consider would increase the estimate, rather than reduce it.

In some sense I find it interesting how some have used the EBM results. If it wasn’t for the nature of the topic, I suspect they would be interpreted differently. You have paleo estimates and climate model estimates that broadly agree. You then have EBMs that tend to produce a lower estimate, but still have a range that is consistent with other estimates. Also, when you consider the assumptions used when doing an EBM calculation (linearity in forcings, assume internal variability plays no role, assume forcings are homogeneous, …), one can plausibly argue that they’re lower limits, rather than exact. I would have expected people to regard them as sensible sanity checks, rather than as real evidence for a lower climate sensitivity.

Full GCMs are still very much “black boxes” in the sense that it’s not possible to know all the relationships between input and output. They can be modified in very many ways, and many different combinations may result in similar output for part of the results but differ on other parts.

Energy balance models have the great advantage that their behavior can be understood fully. They are extremely aggregated presentations of the real Earth system and may describe it poorly, but at least we know, what they are doing.

Personally I think that the best way of using models in estimating the climate sensitivity is to combine EBMs and GCMs in such a way that the final calculation is done with improved EBMs that are a little more complex. GCMs would in this approach be used to figure out how the EBMs could be improved, keeping them physics-based and transparent but correcting their largest deficiencies. Even if GCMs are not fully realistic, they are still realistic enough for determining, quantitatively enough, how EBMs can be improved.

How far that approach has been used, I don’t know, but small steps in that direction have been made in several analyses.

miker613,
Yes, I saw that. I sometimes feel a little guilty that I have a tendency to be a little critical of Nic Lewis and then he refers to people as “SkS activists” and I feel he deserves it. He’s no better than the typical commenters on those sites really, which is rather sad since I applaud that he actually goes out and publishes papers.

I think that is a genuine error in the Cawley et al. paper, in that they concluded that Loehle’s method under-estimates the TCR, rather than overestimates it (by multiplying rather than dividing). A silly mistake, I imagine, since it is trivial and I don’t doubt that all those involved in the paper could easily do this correctly. Of course, what Nic Lewis fails to point out is that the Loehle paper is rubbish and full of completely unjustified assumptions. That its already-low TCR estimate should actually be lower if he’d done it with more reasonable numbers just highlights this even more. It would be nice if he focused on the interesting part of the Cawley paper, which I discuss here, rather than nit-picking something minor and rather trivial (however, this does seem to be his normal style, so I’m not surprised).

One minute it’s peddling a hot MWP; the next you are here peddling low sensitivity. Can’t you see that these positions are flatly incompatible? It’s been explained to you often and clearly enough now.

As for Lewis’s stuff, we cannot get round a simple truth: the plausible (ie compatible with paleoclimate behaviour) sensitivity ranges for ECS and TCR are such that without emissions reduction policy, potentially dangerous warming will result.

It’s instructive to see Nic Lewis throwing around terms like ‘SkS activists’ without apparently the slightest awareness that he could very reasonably be described as a GWPF activist. To be more accurate, his activism would be on behalf of whatever or whoever is funding the GWPF, but since it refuses to acknowledge its sponsors, we can go no further along that road. Yet.

BBD,
Yes, I had wondered if, while Nic Lewis is happy to refer to others as “SkS activists”, I should allow people here to refer to him as a “GWPF stooge”. Maybe we could even get some kind of cartoon drawn. Of course, Rachel probably wouldn’t allow us to refer to him in that way, and cartoons that simply mock others with whom you typically disagree are a little infantile.

WHT,
Technically, TCR is a model metric that tells you how a model responds if you double CO2 at 1% per year. What Nic Lewis calculates is really the effective Transient Climate Response. That’s why I quite like what Cawley et al. did because, as I understand it, they determined what model parameters produced the best fit to the temperature dataset, and then determined the TCR by doubling CO2 at 1% per year. So, it is more equivalent to a GCM TCR.

As I mentioned in the post, you can’t really do this using the Energy Balance approach, because it is really just a method/calculation that estimates an effective TCR. You could, of course, cast it as a simple one-box model, but that would be one where you’d defined the model to have a TCR that was the same as the effective Transient Response that your basic energy balance calculation has already determined.

In principle, I agree. We should all be encouraging people, and should be proud ourselves, to be active in promoting what they/we believe to be true/ethical/etc. However, it would take quite some doing to convince me that those who use it in the context used by Nic Lewis, were not intending it to be pejorative.

What you do is estimate the TCR value from the historical data — (1) the temperature change is there (2) the CO2 change is there. Remove the zero-sum fluctuations and you have the TCR. It is close to 2 C for doubling of atmospheric CO2.

“However, it would take quite some doing to convince me that those who use it in the context used by Nic Lewis, were not intending it to be pejorative.” Truth is, I’m not sure of his point. The term is not pejorative. He could just be saying: I didn’t realize they were on the other team, interesting to note…
However, if he means: I don’t trust activists, so I assume the authors of this paper are fudging their results to fit their political goals – that would certainly be pejorative. I just don’t have a reason to think he’s saying that. It would be weird, as he is certainly a GWPF activist.

In any case, given how many negative terms are thrown about at this blog, I am not sure why you should react if Nic Lewis uses one.

“In any case, given how many negative terms are thrown about at this blog, I am not sure why you should react if Nic Lewis uses one.”

Oh, I don’t really care if he uses one, but do you really think your claim that there are many negative terms thrown around here is fair? (I know there are some, but it does seem quite small compared to some other blogs, and maybe I’m biased.) I actually just thought better of him, and so if he is intending it pejoratively, I find it disappointing that someone who has put effort into actually doing research and publishing papers then behaves like a typical blog commenter when commenting on climate blogs. He’s also complained of me attacking him when I’ve been somewhat critical of his behaviour, and yet did so on a blog whose modus operandi is to attack other scientists. People don’t have to be consistent, but it might be nice if they at least tried to be. Of course, if he doesn’t mean “activist” in a pejorative sense, then I’m simply mis-interpreting him.

“WHT,
Yes, I realise, but really I think we should consider the change in anthropogenic forcing (CO2 and other GHGs, land use, aerosols), not just CO2 alone.”

CO2 and other GHGs (and especially H2O) have always been lumped together.

Aerosols are taken into account by the volcanic data.

Land use? How about ocean use? Please think that through. The reason that the land is warming faster than the oceans is because the latent heat is transferred from ocean to land. The TCR of land is ~3 C for doubling of CO2.

WHT,
There are anthropogenic aerosols as well as volcanic; I was referring to the former. Land use, as I understand it, refers to a change in forcing due to anthropogenic changes to the land, not to the warming of the land. I guess all I’m trying to understand is whether or not your forcing is all anthropogenic, or only GHGs. Given your result, it would seem probable that you’re basing it on all anthropogenic forcings, or you’re doing what I think Loehle has done, which is to consider CO2 only, because that – by chance – is quite similar to the change in anthropogenic forcings.

Loehle has no idea what he is doing. He doesn’t realize that increasing CO2 can create a knee in the warming profile. And so he creates some artificial piecewise model that is junk.

Again, the shorthand question is what change in temperature you expect as we go from 400 ppm to 420 ppm. You extrapolate from this curve, which covers over 130 years of past trending with respect to CO2 accumulation.

Alternatively, you don’t use this curve and then you have to explain what has changed.

“You extrapolate from this curve, which is over 130 years of past trending with respect to CO2 accumulation.”

Yes, but I have a suspicion that you might be slightly underestimating the change in forcing. The change in forcing due to CO2 alone is smaller than the total anthropogenic forcing. If you consider the AR5 radiative forcing diagram, the change in anthropogenic forcing is more like 2.3 Wm-2 (relative to 1750, so a little less relative to 1880).
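For reference, the CO2 contribution can be computed with the standard simplified expression F = 5.35 ln(C/C0); the pre-industrial baseline of 280 ppm used here is an assumption.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Standard simplified CO2 radiative forcing, F = 5.35 ln(C/C0),
    returned in W m^-2. C0 defaults to an assumed pre-industrial 280 ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)
```

A doubling gives 5.35 ln 2 ≈ 3.7 Wm-2, while 400 ppm relative to 280 ppm gives roughly 1.9 Wm-2, noticeably less than the ~2.3 Wm-2 total anthropogenic figure quoted above.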

What good does that diagram do? It just puts the values in different units.

All that really matters is the measured change in temperature for a doubling of CO2. Everything else “goes along for the ride”.

The analogy is that someone is putting on some extra weight. Do you single out one food as the cause, or can you judge it by the amount of a particular food and then scale? In fact, CO2 is the indicator of consumption and everything scales from this. That’s why it is used as a shorthand.

“What good does that diagram do? It just puts the values in different units.”

No, I don’t think it does.

“All that really matters is the measured change in temperature for a doubling of CO2. Everything else ‘goes along for the ride’.”

Technically, TCR and ECS are defined as model metrics when a simulation is run in which the only change in forcing is CO2 and it doubles at 1% per year. In reality, however, there is more than just CO2 producing changes in anthropogenic forcings, and so the correct comparison is with the change in forcing that would correspond to a doubling of CO2 (3.7 Wm-2), but it doesn’t have to be CO2 only. For example, when we double CO2 the change in anthropogenic forcing may be greater than that due to CO2 alone, and so the change in temperature would be larger than the TCR.
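The scaling argument here is simple enough to write down explicitly. This sketch assumes the response is linear in forcing, which is the same assumption the comparison itself relies on:

```python
def expected_transient_warming(tcr, delta_f, f_2xco2=3.7):
    """Transient warming implied by a given TCR when the total forcing
    change is delta_f (W m^-2): scale linearly relative to the
    3.7 W m^-2 that corresponds to a CO2 doubling."""
    return tcr * delta_f / f_2xco2
```

So if the total anthropogenic forcing change at the time of CO2 doubling exceeds 3.7 Wm-2, the expected warming exceeds the TCR, which is exactly the point being made.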

“The analogy is that someone is putting on some extra weight. Do you single out one food as the cause, or can you judge it by the amount of a particular food and then scale? In fact, CO2 is the indicator of consumption and everything scales from this. That’s why it is used as a shorthand.”

Well, there is CO2 equivalent (CO2e), which does this, but it’s not clear that you’re using this.

It is mind numbing that you dismiss such a massive and blatant mistake in the Cawley paper with a bunch of mindless hand-waving. loehle may have made a few mistakes, but at least he knows the difference between multiplication and division. What this mistake says about the peer review (or absence of) in climate science is astonishing. It also shows that climate scientists clearly know what their results will be before they start the research, but in this case they were so hell-bent to get there they couldn’t bother to check basic mathematics. What a supreme embarrassment for a group of so-called academics. What this says about you is that you cannot be trusted, this blog is nothing more than propaganda. This is my first trip to this blog and unquestionably also my last.

“It is mind numbing that you dismiss such a massive and blatant mistake in the Cawley paper with a bunch of mindless hand-waving.”

I didn’t dismiss it, it’s a mistake. But it has nothing to do with their own model and was simply a discussion about how the forcings used by Loehle were too small.

“loehle may have made a few mistakes, but at least he knows the difference between multiplication and division.”

So do the authors of Cawley et al. (2014). If you really think they don’t, then you’re a [Mod : redacted]. Loehle, however, appears not to understand the underlying physical processes associated with climate change. The assumptions in his paper are largely unphysical. It’s an embarrassingly poor paper. If you think that a silly mistake in a paper commenting on Loehle somehow makes Loehle’s paper more credible, then you should think again.

“What this mistake says about the peer review (or absence of) in climate science is astonishing.”

It says very little. Peer review isn’t auditing. Mistakes happen. Blowing these out of proportion is what’s astonishing.

“What a supreme embarrassment for a group of so-called academics. What this says about you is that you cannot be trusted, this blog is nothing more than propaganda.”

No, it was a mistake. They happen. Decent people do not turn mistakes into supreme embarrassments; they recognise that mistakes happen. This was one silly mistake in what was quite a long and complicated paper.

How many times have I seen this – both in my own activity and in that of others. When you think that something you read is wrong, you eagerly develop counterarguments and are even ready to publicize them, but this is a risky approach, because you jump into an issue without the same background work you would do in standard research. You are far too much influenced by confirmation bias when you are happy with an apparent result that tells you how stupid the other side was.

Even if you are right that the other side is seriously wrong, this approach may make you yourself look stupid. These kinds of errors are easily made in the active debunking of skeptics, and such errors commonly affect the credibility of debunking sites. The errors are seldom this serious, but using arguments that are actually very weak and do not show at all what they are claimed to show is common. Trying to make the arguments simple and strong can actually make them wrong.

The concept is so simple — divide the change in temperature by the change in CO2 so far and you get the TCR. That is exactly the definition (apart from the 1% per year).

Maybe I should stop, but that definition only applies if the only change in anthropogenic forcing is CO2. In reality, it isn’t the only change in anthropogenic forcing, so really one should be using CO2e, not just CO2. As I understand it, over the period of the instrumental temperature record, the change in CO2e is slightly greater than the change in CO2 alone.

“It is mind numbing that you dismiss such a massive and blatant mistake in the Cawley paper with a bunch of mindless hand-waving. loehle may have made a few mistakes, but at least he knows the difference between multiplication and division.”

scf, I recommend that you calculate the TCR yourself. First, remove the fluctuations from the global temperature trend (I do it by a multiple regression approach that I call CSALT). Then plot the logarithm of the CO2 concentration for each of the temperature values. The slope of this line is the TCR.

Since you know the difference between multiplication and division, you should be able to know where to place the logarithm of 2 to get the doubling scaling correct.

Knock yourself out and you will see a TCR much closer to 2C than the ridiculously low-ball estimate that Loehle makes.
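The recipe above (detrend, regress temperature against log CO2, then place the log of 2 correctly) might be sketched like this. The data here are invented for illustration, and the detrending step is omitted.

```python
import math
import numpy as np

# Illustrative (invented) data: CO2 in ppm and detrended temperature
# anomalies in K. Real use would substitute observed series with the
# fluctuations already removed, as described above.
co2 = np.array([310.0, 330.0, 355.0, 370.0, 390.0, 410.0])
temp = np.array([0.00, 0.18, 0.38, 0.50, 0.65, 0.80])

# Fit temperature against log(CO2); the slope is warming per e-folding
# of CO2, so multiplying by ln(2) converts it to warming per doubling.
slope, intercept = np.polyfit(np.log(co2), temp, 1)
tcr = slope * math.log(2)
print(round(tcr, 2))
```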

”
Maybe I should stop, but that definition only applies if the only change in anthropogenic forcing is CO2. In reality, it isn’t the only change in anthropogenic forcing, so really one should be using CO2e, not just CO2. As I understand it, over the period of the instrumental temperature record, the change in CO2e is slightly greater than the change in CO2 alone.
“

Well, I don’t see Nic Lewis and company making estimates of all the other GHGs such as CH4, NO(x), etc., placing error bars on those, and then making an equivalent doubling estimate for the combined TCR.

No, in fact what they do is try to isolate the CO2 by itself, without mentioning that all the other GHGs go along for the ride as industrial anthropogenic outputs that strongly scale with the emission of CO2.

Consider that it is clear that H2O scales along with CO2 but no one in their right mind is going to keep quoting the “isolated” CO2 doubling by itself w/o factoring in the control knob effect that the CO2 exerts over atmospheric H2O. Neglecting that effect would really knock the TCR value down! What would you do if the skeptics tried that tactic?

WHT,
I think Nic Lewis does, as he uses – I think – forcing datasets that include all the different anthropogenic forcings and their uncertainties. I don’t have a huge problem with Nic Lewis’s calculations, other than that they’re very simple and rest on some assumptions with which I probably disagree. Craig Loehle, on the other hand, uses only CO2, and that’s essentially the root of the criticism in Cawley et al. (2014). When they consider all the anthropogenic influences, the forcing is 1.361 times bigger than from CO2 alone. They then suggested that this would increase the Loehle estimate, but it’s the other way around: it would reduce it by a factor of 1.361.

It should be fairly trivial for you to redo your calculation using a full forcing dataset, rather than simply the CO2 concentrations. I think it would reduce your TCR estimate, but it might be interesting to see how it compares with what Cawley et al. (2014) got. They estimate, using their model, a mean TCR of about 1.66K.
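The correction described above amounts to a single division, sketched here with the 1.361 ratio quoted from Cawley et al. (2014) and an illustrative CO2-only estimate.

```python
# If total anthropogenic forcing is 1.361x the CO2-only forcing, a TCR
# estimated from CO2 alone attributes all the warming to CO2, and so
# overstates the sensitivity by that same factor.
FORCING_RATIO = 1.361  # total anthropogenic / CO2-only, per Cawley et al. (2014)

def corrected_tcr(co2_only_tcr, ratio=FORCING_RATIO):
    return co2_only_tcr / ratio

# Illustrative: a 2.0 K CO2-only estimate drops to about 1.47 K.
print(round(corrected_tcr(2.0), 2))
```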

WHT,
Here’s another way of looking at this. When we’ve doubled CO2 itself, if the ratio of the change in anthropogenic forcing to the change in forcing due to CO2 alone is the same as it is today, then your calculation would tell us roughly what the temperature would be at that time. On the other hand, most define the TCR in terms of the change in temperature when the change in anthropogenic forcing is the same as it would be if the only change were a doubling of CO2 (i.e., around 3.7 Wm-2). This will happen before we’ve doubled CO2, and hence the temperature change would probably be less than your estimate.

Of course, this all assumes that feedbacks remain linear, etc, which they may not.
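To make the distinction concrete, here is a rough sketch using the standard simplified CO2 forcing expression, 5.35 ln(C/C0), and the 1.361 ratio quoted earlier. Assuming that ratio held constant (an assumption, not a claim from the discussion), total anthropogenic forcing reaches the doubled-CO2 value of ~3.7 Wm-2 well before CO2 itself reaches 560 ppm.

```python
import math

F_2XCO2 = 3.7   # W m^-2, forcing for a doubling of CO2
RATIO = 1.361   # total anthropogenic / CO2-only forcing, assumed constant

def co2_forcing(C, C0=280.0):
    """Standard simplified expression for CO2 radiative forcing."""
    return 5.35 * math.log(C / C0)

# Step CO2 up from pre-industrial until total forcing reaches 3.7 W/m^2.
C = 280.0
while RATIO * co2_forcing(C) < F_2XCO2:
    C += 1.0
print(round(C))  # well short of the 560 ppm needed to double CO2 itself
```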

If you can get me extensive atmospheric concentration data on NO(x), CH4, halocarbons, etc going back to 1880, I probably would take that approach. As it is though, the data on CO2 is complete based on a combination of Mauna Loa and ice-core measurements (backed by estimates of fossil fuel emissions) and so that is what I am going with. And that is what everyone seems to use as a shorthand for describing CO2 sensitivity — which includes water vapor and the other GHGs that get dragged along with it.

I can’t defend Cawley’s work because that is the way he is doing it, and he needs to defend it on his own terms.

WHT,
If you go to Kevin Cowtan’s website, you can run one of his models. It produces various plots, one of which is the anthropogenic forcings going back to 1880. The GHGs are mixed, but that’s all you really need. You can then download that data (there’s a button below the figure). If you sum the forcings at each time, you’ll get the net anthropogenic forcing, which I think you should be able to put into your model quite trivially.
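Summing the downloaded forcing columns into a net series is only a few lines. The file name and column layout below are hypothetical, so adjust them to match whatever the downloaded file actually contains.

```python
import csv

def net_anthropogenic_forcing(path):
    """Sum per-forcing columns into (year, net forcing) pairs.
    Assumes a hypothetical layout: year in the first column,
    individual forcings (W/m^2) in the remaining columns."""
    totals = []
    with open(path) as f:
        for row in csv.reader(f):
            year, *forcings = row
            totals.append((int(year), sum(map(float, forcings))))
    return totals
```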