Thursday, May 20, 2010

Another look at climate sensitivity

This is the title of an interesting paper by Zaliapin and Ghil that appeared some time ago on the Arxiv and more recently in peer-reviewed form in NPG. It's presented as a criticism of Roe and Baker, which has already been debunked enough, so after a quick glance I didn't pay too much attention to it at first. I also know the second author to be extremely clever, so I was worried that it might be really technical. On a second, slower reading, however, it's actually quite straightforward and very interesting. It also seems rather harsh on R&B, because the criticism applies to a whole host of similar work.

Z&G start with the basic premise that R&B - and indeed all work of this nature - use: that there's a functional relationship between the radiation R (at the top of the atmosphere) and the surface temperature T, which we can write as R = R(T, a(T)), where the notation indicates that a(T) is the "feedback" term - that is to say, if we replace a(T) with zero then the function returns the zero-feedback relationship.

We can then perform a Taylor series expansion to investigate how the radiation balance changes with temperature:

DR = dR/dT*DT + dR/da*da/dT*DT + O(DT^2)

(read the real paper for more elegant typesetting)

By writing 1/L0 = dR/dT (L0 is the zero-feedback sensitivity) and defining f=-L0*dR/da*da/dT we arrive at the familiar expression

DR = (1-f)/L0 * DT + O(DT^2)

Now what everyone does at this point is to drop the last term and use the linear approximation, which can be re-arranged (identifying the radiation change DR with the imposed forcing DF that it must balance at the new equilibrium) to give

DT = L0/(1-f) * DF

exhibiting the well-known singularity for a feedback of f=1.

What Zaliapin and Ghil point out is really startlingly simple and IMO elegant. They merely observe that if f is close to one, the linear truncation was not justified because the quadratic term is now large enough to matter! Once it is included, the singularity at f=1 goes away, as their Fig 2 shows:

(The feedback factor f cannot be larger than one or the initial equilibrium is unstable, even with the nonlinear term, hence the upper bound on the x-axis is sound.)
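Their fix is easy to reproduce numerically. Here's a minimal sketch (all numbers are my own illustrative choices, not Z&G's: a zero-feedback sensitivity L0 = 0.3 K/(W/m^2), a 2xCO2 forcing of 3.7 W/m^2, and an assumed quadratic coefficient b). Keeping the O(DT^2) term turns the linear relation into a quadratic in DT, whose positive root stays finite even at f = 1:

```python
import math

def dT_linear(f, L0=0.3, dF=3.7):
    """Linear truncation DT = L0*dF/(1-f): diverges as f -> 1."""
    return L0 * dF / (1.0 - f)

def dT_quadratic(f, b=0.05, L0=0.3, dF=3.7):
    """Keep the quadratic term: solve b*DT^2 + ((1-f)/L0)*DT - dF = 0
    for the positive root, which stays finite even at f = 1."""
    c = (1.0 - f) / L0
    return (-c + math.sqrt(c * c + 4.0 * b * dF)) / (2.0 * b)

for f in (0.5, 0.9, 0.99, 1.0):
    lin = dT_linear(f) if f < 1.0 else float('inf')
    print(f"f = {f:.2f}: linear = {lin:6.2f} K, quadratic = {dT_quadratic(f):6.2f} K")
```

At f = 1 the linear formula blows up while the quadratic root gives sqrt(dF/b), so the "singularity" is purely an artefact of the truncation; how large the finite answer is depends entirely on the (unknown) size of the nonlinear term.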

I'm not yet sure how much this really matters. We can still get a high sensitivity so long as the nonlinearity is small. AIUI most models do show a fairly, but not perfectly, linear response over quite a range of forcing and temperature, and the existence of complex climate models with sensitivities above 6C implies that such high values are at least not a mathematical impossibility. It may, however, make it harder to justify the sort of pathological "long tail" arguments beloved by some. Of course I've argued against them on a number of grounds already - not least of which is that, from a policy perspective, we are on really shaky ground if all the calls to action have to be based on highly improbable events that we are pretty confident won't happen irrespective of what mitigation we do or do not attempt. In any case, the maths is interesting in its own right.

63 comments:

Anonymous said...

You have to be careful with this argument, since in this proof you are approximating a series of linked processes with a single equation. It works only to the extent that the single equation approximates the set. This is the same problem represented by truncating the single equation.

Well there's certainly an assumption that there is a well-defined function describing the (mean) outgoing radiation as a function of temperature. But without that, we can't meaningfully talk about climate sensitivity anyway.

Maybe it'd help to talk about _which_ climate sensitivity _when_; the number describes Earth at some particular time, right?

From the paleo work from various different spans of time; from current observations and what we learn about for example changes in primary productivity during times when CO2 is changing very rapidly; add in factors newly added by human activity like the excess spike of nitrogen compounds both in rainfall and flushing out into the ocean from artificial fertilizers used over a few decades; approximately doubling the natural petroleum seep with spills, and so on?

From the top down climate sensitivity is a simple-ish number; from the bottom up, it's the sum of many different processes at different times.

I think that's so obvious only an amateur like me needs to put it in words. But I wonder if reality is so complicated that nobody's trying to do the bottom-up estimate. I watch papers coming out by Dr. LeQuere and others on changes in proportions of species and distribution of plankton, which can change year to year under selection pressure, and wonder.

Then there's the monkeywrench question whether, of all the chemicals we're dumping, there may be another one that will have great leverage on some process that changes things even at low concentrations, but (shrug) who knows ....

I can sort of see the need for "chopping off tails" for conservative policy decisions. I have often wondered though, if we (the human race) are perturbing things in such weird ways that (for example) paleo-derived priors clamped onto models of future forcing scenarios of climate model ensembles are missing a lot.

It kind of seems to me like trying to predict the Gulf of Mexico oil snafu based on old records of natural leakage i.e. ignoring the fact we're drilling 5 mile deep holes and know bugger all about how to stop it when the pipe breaks etc.

I admit I don't have any robust scientific rationale for this other than we rush in where angels fear to tread etc. But I sort of understand (I think) where the likes of James et al are coming from in wanting to declare a tight range on climate sensitivity, at least as a matter of national/international policies.

IMHO, Carl, tightening up our understanding of possible outcomes is important work, although OTOH the point gets made that even 2C by 2100 is striking off into the dangerous unknown. Unfortunately, constraining sensitivity isn't going to tell us much about what are in my opinion the two big worries for the next century, ocean acidification and a major release of permafrost methane.

Say we measure the rise in temperature while the feedback is increasing, and estimate that a = 0.1. It won't be until f gets greater than 0.4 that we will know that actually a = 0.0. By then it will be too late.

Put it another way, James, you reckon sensitivity is about 2C. That makes a = 1.0. But if the feedback increases because the Arctic ice does not recover each winter, then we may well find that a is much lower, and a = 0.2.

Mathematics is deductive and always right, but science is inductive. Applying mathematics to science does not make science deductive, nor does it remove science's tipping points.

James, This was Lubos Motl on his blog once -I've posted this link on your blog before -

Moreover, it takes a lot of change for the nonlinearity of T^4 to kick in. Set the current temperature, 15 °C or T = 288 K, to one, and write T as (1+t). Then (1+t)^4 = 1 + 4t + 6t^2 +...

Keeping only the 4t term is linearization. The next term is 6t^2. It is only equal to the linear term for t=2/3 which is 192 Kelvin of temperature. This is a huge temperature change. For temperature changes much smaller than 192 K, the linear approximation of T^4 is almost perfect.
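Motl's arithmetic checks out, for what it's worth - a throwaway sketch with his numbers (T0 = 288 K):

```python
T0 = 288.0  # reference temperature in kelvin

def taylor_terms(dT):
    """Linear (4t) and quadratic (6t^2) terms of (1+t)^4, with t = dT/T0."""
    t = dT / T0
    return 4 * t, 6 * t * t

# The quadratic term equals the linear one at t = 2/3, i.e. dT = 192 K:
lin, quad = taylor_terms(2.0 / 3.0 * T0)
print(lin, quad)  # the two terms coincide

# For a 3 K warming the quadratic correction is tiny:
lin3, quad3 = taylor_terms(3.0)
print(f"quadratic/linear = {quad3 / lin3:.1%}")  # about 1.6%
```

So the Stefan-Boltzmann response itself is very nearly linear for any plausible warming; the question is whether the rest of the system is as well-behaved.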

Therefore, is the linearity versus non-linearity really relevant practically speaking? We'll be long-dead before temperature changes by 192K, for instance.

Am I missing something? It will take me some time to see how this 'a' number translates into an equivalent region for f. If the answer is that it requires f greater than 0.95, I think those values are mostly of theoretical interest.

I partially amend that - let's say the non-linearity starts to become significant at about 10% of 192K - that is still almost 20K of temperature rise - I'm not sure at all that economists are looking at fat-tail possibilities worse than a 20K rise, something higher than even the PETM.

Actually, Zaliapin/Ghil seem to state on page 115 that for f less than 0.95, it would indeed correspond to the quadratic term being less than 10% of the linear term. This corresponds to a 20X effect from the no-feedback case.

.. it is easy to check that, for all f < 0.95, their model satisfies |R''/R'| < 0.1 and is therewith very close to being linear.

The Motl stuff is easily dealt with - the nonlinearity in the SB response is indeed very small but the rest of the climate system is not necessarily so well-behaved. I'm aware of some model simulations which indicate significant nonlinearity, I'll try to work out some numbers in the next few days.

[Of course they are only model results but still may give some indication of the sort of magnitude that we might find in reality.]

Well the thing is, that any standard "rational" analysis must make an honest attempt to account for our uncertainty, and if the cost (due to high sensitivity) increases more rapidly than the probability of high sensitivity decreases, then we have a problem...but I agree that this is all very far removed from what we can really claim to understand. Of course the technological and socioeconomic situation 100 years into the future is always going to be hard to understand, irrespective of climate change!

It says at Heartland's copy "submitted to Geophysical Research Letters as Michaels et al., 2010" -- has it passed review there? Will there be any changes from the PDF Heartland published? And for those of us who aren't academics, how would you describe the take-to-the-IPCC message here?

The basic content (eg slides 5-10) is the same in the manuscript (which is not currently available AFAIK). I suppose it will be worth a longer post at some time.

As for the message...well, the obs are near the bottom end of the model range, one can spin the "consistency" a bit either way (eg satellite is clearly less consistent than surface obs) but IMO the numbers are a fair and useful analysis. It also sets out a clear methodology for comparison with future data at least over the next few years, which IMO is a valuable function in itself.

I find it very peculiar that a third party (Patrick Michaels's sidekick Paul Knappemberger) is releasing details and drawing exaggerated conclusions from a paper that no one can even look at.

Were you aware that this was going to happen? Can we at least see the abstract?

My first reactions, based on available information:

(a) Knappenberger's headline conclusion "Global warming has stopped" is unsupported by the paper's analysis (since it doesn't look at the evolution of trends, only at short-term trends of various lengths, but all up to the end of 2009).

In fact, I'm not aware of any proper statistical analysis that can support this "greatly slowed" or "stopped" meme. Be that as it may, this paper certainly isn't it.

I think you need to be a lot clearer, one way or the other, on this specific point.

To my mind, the fact that the decadal average for global surface temperature for 2000-2009 is anywhere from 0.17C to 0.19C higher than the previous decade is strong evidence that Knappenberger's statements are unsupported by the facts. The long-term trend (since 1979) is still at or close to its peak value, and is higher in all data sets than it was at the beginning of the analysis period (1995).

So the observational record does not support the assertion that global warming "stopped" or even "greatly slowed". Not even close.

(b) There appears to be no treatment of the various uncertainties in the temperature record itself. For example, how wide are the confidence intervals for the observed trends? Are they wider than those of the models?

I'm sure lots of folks will have a lot more to say, as soon as the authors release the submitted paper. When can we expect that?

Frankly, I find the paper's analysis less than compelling. But the real problem is the gross exaggeration of the paper's implications.

I find it very peculiar that a third party (Patrick Michaels's sidekick Paul Knappemberger) is releasing details and drawing exaggerated conclusions from a paper that no one can even look at.

Exaggerated conclusions (or not) aside, this really isn't uncommon. I've presented stuff that I've been a contributing but not lead author on (as Knappenberger is here) and which hasn't been published or made available yet. The first "publication" of the data we were presenting was via the conference. This is a very regular occurrence.

I did a recalc of slides 8 and 9 from the standpoint of four months later here. It's quite a lot different, with the trends a lot higher. It seems like a moving target - I'm wondering whether the big bold claim that "Global warming has stopped!" will look a bit silly when the paper finally appears.

It wasn't clear that Knappenberger was a co-author, because his name did not appear in the list of authors. But on reflection I suppose he must be.

To be pedantic, that's not strictly the list of the authors of the paper, it's the co-workers on the study that Knappenberger is presenting at the conference. You'd reference the talk as Knappenberger et al (Knappenberger, P.C., Michaels, P.J., Christy, J.R., Herman, C.S., Liljegren, L.M. and Annan, J.D.).

"However, the trend in global surface temperatures has been nearly flat since the late 1990s despite continuing increases in the forcing due to the sum of the well-mixed greenhouse gases (CO2, CH4, halocarbons, and N2O), raising questions regarding the understanding of forced climate change, its drivers, the parameters that define natural internal variability (2), and how fully these terms are represented in climate models."

DC, you can wind your neck in a bit. Of course Chip is a co-author, I'm sure you can imagine my reaction would be a little less sanguine if he had plagiarised my work! And yes, I knew he was going there, and even asked him if he thought I could get an invite too (I was joking about that, I wouldn't have wasted my time on them).

We haven't discussed making the manuscript available.

But I am curious about how strongly you would object to the quoted phrase:

"the trend in global surface temperatures has beennearly flat since the late 1990s despite continuing increases inthe forcing"

James, yes, I object. It was arguable when you wrote it, using end 2009 data - at the end of a cool spell. But your analysis, or what I've seen of it, greatly underestimates the variability associated with this statement, and this is shown by the fact that just four months later, it is much less arguable.

The GISS and UAH trends are decidedly positive going back from 5 to 15 years; only NCDC and RSS indices show patches of non-growth. And all indices show a big uptrend from around 2000. There is an effect where the 1998 El Nino reduces the trend if it is included near the start of the period.

I'm sorry if *I* wasn't clear. I did acknowledge my mistake (or thought I did), which was engendered by the omission of Knappenberger's name from the list of authors of the submitted manuscript. I admit I'm still curious about the "pecking order", but it's a minor point.

From watching Knappenberger's presentation, I did get a better sense of the history of this paper. Knappenberger explains how he reached out to you following "rejection after rejection after rejection" and that, coincidentally, you had been a reviewer of one of the rejected submissions.

Knappenberger also makes a number of statements that appear to go beyond the paper itself, but I'll look at those another time.

Yes, I would object to the statement cited. The slope of the short-term trend depends very much on the surface data set cited, as well as the specific start and end chosen. As I mentioned previously, there appears to be no analysis of these fluctuations, or indeed any of the uncertainties associated with the short-term observational record.

Meanwhile, *over the length of the analysis period*, the long term trend has continued to grow, if anything. And its p-value continues to shrink, indicating ever greater statistical significance. The bottom line is that one can argue that the short-term surface record is consistent with the model projections or with natural variation; it's simply too noisy to distinguish between these. But the long-term trend continues to be consistent only with anthropogenic forcing.

And speaking of the surface record, here is a quote from Knappenberger discussing one of the possible "sources" of the "problem" of model inconsistency with "stopped" global warming, namely "unknown errors in the observational temperature record".

As Anthony [Watts] has just shown us, the temperature record is probably warming too much, which would push things even further out on the left, which would cause even more problems with the model projections.

I presented the results of some work we have on-going to a conference…this is hardly unusual, in fact, it is the norm. I also mentioned that we are trying to get it published—hardly a shocker. Since we don’t have it published, I didn’t give an explicit reference or hand out any preprints. Instead, I listed myself as the presentation giver as well as a list of people who are working on the project. Again, hardly unusual.

I guess I don’t see what the big issue is.

I started my talk saying that a lot of times you hear “global warming has stopped” but that without any context, it is impossible to determine the significance of such a statement. I then proceeded to describe how we went about trying as best we could (with a bunch of caveats) to actually come up with a way of putting such a statement into context. The way we did it was an extension of the way others have done it (e.g. Easterling and Wehner, GRL, 2009; Knight et al., BAMS 2009). Our findings, summarized in my slide 10, indicate that over the last 5 to 15 years (ending in December 2009) the observed temperatures are very much on the low side of the distribution of model projected trends of similar lengths. So much so, that they are flirting with being statistically different—which indicates that there might be a problem.

My slide 12 listed some potential sources of the problem.

As far as the statement on my slide 11 “Global warming has stopped (or at least greatly slowed) and this is fast becoming a problem” this seems perfectly in line with our analysis.

By “fast becoming a problem,” I mean that as the period of low warming rate grows longer, it pushes more and more towards the left-hand tail of the distribution and into problematic territory, i.e. territory that requires an explanation besides just random occurrence.

And as far as solar goes, I must admit to being pretty surprised by the number of people who are coming forward to claim that solar has such a large impact on the temperature trend. When global temperature was trending happily upward, solar was largely dismissed as playing much of a role, but now that the temperature rise has slowed, it’s all about the sun. I find it amusing that many of the arguments made by “skeptics” to try to play down the warming (solar, ENSO, starting and stop dates, etc.) are now being used by folks of the other viewpoint to play down the *dearth* of warming.

-Chip

PS. Deep Climate; Anthony Watts was the speaker just before me, so it hardly seems out of line to extemporaneously point out to the audience how the results that he just presented impacted my results. At the very least, it seems like common courtesy.

“and even asked him if he thought I could get an invite too (I was joking about that, I wouldn't have wasted my time on them)”

After attending the conference, I am happy to report, that you’d be surprised at how open some of “them” are to frank, open, civil discussions about various aspects of the science. To be sure, gathered all together in a large crowd, the hardliners carry the moment, but in the smaller breakout sessions, there is a lot of interest in how things work.

The organizers seem very interested in bringing in scientists of different viewpoints and in fact, a few attended this year’s conference. What most of the attendees found out, was that these folks aren’t the devil, but instead just hard working scientists interested in moving our knowledge forward. For instance, see these comments by Scott Denning (after the commercial, http://www.pjtv.com/?cmd=mpg&mpid=144&load=3605).

In that regard, I will encourage them to step up their efforts at getting more scientists to come next year and discuss their work.

I would recommend to them that you would be a very valuable addition.

While your time may be wasted on some, it would not be wasted on all, and in fact, may be less wasted on this group than on many others (you definitely would not be preaching to the choir!)

Oh come on, a quick glance of their "global warming experts" above (the usual suspects) and it is pretty obvious they just want to hear that a few-year "trend" means "global warming has stopped." And it looks like you played right into their hands for a free trip to Chicago.

At least James is such a colossus he can easily straddle papers with the "climategate" anti-heroes & with "Heartland" wannabes! ;-)

wow you're in such great company "Chip"! I anxiously await your paper with Sen. Inhofe on how the Gulf of Mexico oil catastrophe is great for the environment, as it no doubt provides a "protective coating" on wildlife etc....

Look, I can’t imagine that there is anyone out there who doesn’t admit that at some point global warming (i.e. the rise in the global average temperature) had stopped (i.e. the OLS trend was zero or below) over some period of time during the past 5-15 years. The goal of our analysis (which I think we accomplished very nicely) was to develop a framework in which any observed trend (below zero or not) could be set against model expectations to ascertain just how commonplace any particular observed trend was (in model world). During my talk, I discussed the observed trends ending in December 2009—some of which were indeed below zero (i.e. global warming had stopped over that period of time).

Think of it this way: a car sets out towards some destination. At some point in time, we can detect that it stopped (or slowed considerably). We are trying to figure out whether it just stopped temporarily at a red light (as we might expect), or whether there may be some indication that something was amiss that may (or may not) impact whether it reaches its final destination. We can speculate on any number of reasons why it may have stopped (red light, refueling, driver had to take a pee, flat tire, overheating, complete mechanical breakdown, etc.), but that was not the purpose of the analysis. The purpose of the analysis was to try to determine whether or not the car had stopped (or slowed) long enough to begin to suggest that there may be a problem over what would normally be expected.

In my opinion, we found evidence suggesting that global warming had stopped (or slowed) over a sufficient length of time to begin to wonder why.

-Chip

PS. Carl C: Sen. Inhofe couldn't make it to the conference, so I didn't get a chance to discuss the idea with him. :^)

Rather stunning article, for me as an amateur reader anyhow. It certainly puts the decadal variation in context. Or rather outside of the previous context; the info is mostly from: http://dx.doi.org/10.1007/s10584-010-9821-x

> the OLS trend was zero or below)
> over some period of time during
> the past 5-15 years.

James, is it fair to say this paper's procedure was to
-- pick 5-15 years up to 12/09
-- assess the variability
-- choose a statistical test
-- conclude there's no basis to say a trend existed during that time span?

I'm wondering basically how this is read along with the stories from a year or two ago saying that stretches of fifteen to twenty years are expected from what we know about the variability of the data over the longer term.

I'm not good at statistics, I'm more asking how to trust what's being said about the real world using this paper as support.

Well I hope you will share your concerns with the author of the statement....which is not from our manuscript, but rather Susan Solomon writing in Science not so long ago. I don't recall the storm of criticism at that, but perhaps it's not so much what you say, but who says it.

Nick, I think your accusation of cherry picking is way off base. The paper simply used data available to date at the time of writing and quite clearly checked numerous trend lengths specifically to address that concern. Of course it can be updated, and the calculations re-done, every time another set of figures is published, but doing it in whole years is hardly unprecedented - in fact many "mainstream" scientists still work with data up to 2000, just for convenience. IMO an important function of the paper is to outline clearly how such comparisons should be made in the future and it is a bit of a stake in the ground (hostage to fortune) whatever your expectations. Science is supposed to be about making predictions.

Carl, on the contrary I need every paper I can get my name on :-) Well, actually, I believe the method is worth setting out clearly which as well as being basically sound has one or two neat tricks to extract full value from the models (ie using all n-year intervals).

“is it fair to say this paper's procedure was to
-- pick 5-15 years up to 12/09
-- assess the variability
-- choose a statistical test
-- conclude there's no basis to say a trend existed during that time span?”

No that is not right. The procedure was:

--Collect A1B model output to develop the distribution of model projected trends of 5 to 15 years in length (i.e. assess the variability of the projections)
--see where the observed trends of lengths between 5 and 15 years (ending in 12/09) fell within the model distributions
--assess the probability of occurrence of the observed trends if they were to occur in a model world
--conclude that observed trends were pretty unusual (during some periods more so than others)
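For what it's worth, that procedure can be sketched in a few lines (purely illustrative: the "model runs" and "observations" below are random synthetic stand-ins, not the real A1B archive or surface record, and the windows are in years rather than months):

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_trend(y):
    """OLS slope of a series against its time index (units per step)."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

def all_window_trends(series, window):
    """Trends of every overlapping window of the given length,
    to extract full value from each run (cf. using all n-year intervals)."""
    return [ols_trend(series[i:i + window])
            for i in range(len(series) - window + 1)]

# Synthetic stand-ins: 20 'model runs' of 50 years (trend plus noise),
# and an 'observed' series with a weaker underlying trend
models = [0.02 * np.arange(50) + rng.normal(0, 0.1, 50) for _ in range(20)]
obs = 0.005 * np.arange(50) + rng.normal(0, 0.1, 50)

window = 10
model_trends = np.concatenate([all_window_trends(m, window) for m in models])
obs_trend = ols_trend(obs[-window:])  # the most recent 10-year observed trend

# Where does the observed trend fall within the model distribution?
pctile = (model_trends < obs_trend).mean() * 100
print(f"observed trend sits at the {pctile:.0f}th percentile of model trends")
```

If the observed trend falls far into the lower tail of the model distribution, that is the "flirting with being statistically different" situation described above.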

“I'm wondering basically how this is read along with the stories from a year or two ago saying that stretches of fifteen to twenty years are expected from what we know about the variability of the data over the longer term.”

We developed a chart that allows you to assess the probability of any value of the observed trend (surface or MSU lower troposphere) from length 5 to 15 years (see slides 6 and 7 of my Heartland presentation for example). As a preview of those slides, we found that observed trends of zero or less for a period of 11 years or longer fell outside the 95% range of model variability.

James, I made no accusation of cherry-picking. I said you'd used the current data at time of writing, which is fine. My contention is that you've used a quite inappropriate estimate of variability; the low trend during that period is not as far from expectation as your analysis suggests. The fact that four months later it looks like global warming might not have stopped clearly shows that something was wrong with the analysis.

It's just a fancy version of "how can there be global warming when it's so cold today?". Not properly accounting for variability.

Hank, it's the existing (AR4) model ensemble that we used. This is all that is generally available - people are just now doing the next batch.

People do indeed expect that the models should represent the statistics of natural variability reasonably well - it's a fundamental requirement of detection and attribution studies - this is not the same thing as matching the specific trajectory, which requires detailed initialisation. This is what Keenlyside and others are aiming for. Testing the observations against the distribution of internal variability is a very natural hypothesis, it's been done (in a simpler analysis, but based on the same principles) on RC for example...

Thanks, James, this makes it easier to understand. I assume each modeling group isn't adding each new month's contemporary material -- do you pull from an archive of model runs at the IPCC? Can you pull out say all the models and runs that have data covering any time interval you want? I'm not sure how easy it would be to actually redo your analysis with new information or how you'd get it.

If you can set up to automatically pull new data and rerun the analysis (and rewrite the paper?) monthly, that suggests a promising new form of scientific publication.

There's an archive of model outputs freely available (and will be in due course for the next IPCC). The model trends were calculated by splicing their A1B scenario runs onto the appropriate 20th century hindcasts. A1B emissions/concentrations are certainly close enough to reality for this - in fact any difference in emissions has very little effect over a decade or two.

Was there any reason to think those models of that era would have been expected to resolve 'lulls' in warming of 5-15 years? My impression was that it was only a few years ago that models were being talked about that could give that sort of expectation.

Can't you also show that those same models -- or some of them -- couldn't discriminate relatively small geographic areas accurately either?

I'm sure I've seen mention of known model failings, like -- I've forgotten, is it that all of them or all but one or two fail to match an alternation between hemispheres that's known to happen in reality, or show an alternation when there's a consistent shift?

Just trying to get at, how did you choose which models to assess, and were you asking them to show something they could be expected to show as of when they were run?

I'm sure there's a statistical assumption in these questions somewhere.

Eli assumes, and he could be wrong, that every model does a spin up when conditions are held constant or runs a check when the conditions are held constant, and they check those results against the observed statistics of variability. Indeed, it was such a simple check that was shown as strong evidence that the GISS 1988 model was reasonable.

"... an interesting article on results of research with results indicating a direct relationship between carbon dioxide emissions and global warming. Paper to be in June 11 edition of Nature... "http://www.realclimate.org/index.php/archives/2010/05/on-attribution/comment-page-8/#comment-176325

James, I really liked the deal you pulled with the Solomon quote. I trust it will cause Nick and others to meditate on their reasons for holding views. None, that I can see, saw fit to reiterate their judgements. That's a fascinating view into their minds and the issues of hierarchy that they may want to reflect on. The OTHER thing that is fascinating is that you KNEW that they would fall for the trap, and you must have known that none would come back to say "I don't care who said it." THAT is very interesting.

I take it that since you don't exactly take a party line on some things they felt comfortable challenging you. it wasnt so much the statement and its not so much the truth of the matter, but rather the preservation of a hierarchy that matters to them. It is ok for YOU to be mistaken, but not for Solomon to be mistaken. Wow. just Wow.

But it gets better. For there is the niggling matter of the citation for this sentence from Solomon et al, which turns out to be – wait for it – Easterling and Wehner! But, as we have already seen, that paper points out that the trend “since the late nineties” depends very much on the selection of the start year.

1998-2008 gives a range of linear trend from 0.11C/decade (NASA-GISS) down to 0.02C/decade (HadCRUT). But if one starts in 1999, NASA-GISS jumps to 0.19C/decade, while HadCRUT is at 0.11C/decade.
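The start-year effect is easy to reproduce with a toy series (synthetic numbers, not the actual GISS or HadCRUT data: a steady 0.02 C/yr underlying trend with a single hot "1998" El Nino year):

```python
import numpy as np

years = np.arange(1995, 2009)            # 1995..2008 inclusive
temps = 0.02 * (years - 1995)            # underlying 0.02 C/yr warming
temps[years == 1998] += 0.2              # one-off hot El Nino year

def trend_from(start):
    """OLS trend (C/yr) from a given start year to the end of the series."""
    mask = years >= start
    return np.polyfit(years[mask], temps[mask], 1)[0]

# Starting the fit at the 1998 spike depresses the trend considerably;
# starting one year later restores the underlying rate
print(f"from 1998: {trend_from(1998) * 10:.3f} C/decade")
print(f"from 1999: {trend_from(1999) * 10:.3f} C/decade")
```

A single warm outlier near the start of a short window is enough to roughly halve the fitted trend, which is exactly the pattern in the real data sets quoted above.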

So it’s even somewhat debatable whether this set of linear trends should be called “nearly flat” since the “late nineties”. But what is not debatable is the dishonesty of Michaels and Knappenberger in citing Solomon et al as further support for the mendacious claim that surface temperatures have been categorically “flat” and that there has been “no warming whatsoever over the past decade”.

Solomon is not completely wrong, but somewhat sloppy in her exposition. And she has paid the price in being quoted out of context, and even distorted by Michaels and Knappenberger.

By the way, Hansen et al rebutted Solomon's "flattening", pointing to the continued decadal increase in global temperature, a point I've made many times.