A couple of years ago, I had a guest post about Pat Frank’s suggestion that the propagation of errors invalidates climate model projections. The guest post mainly highlighted a very nice video that Patrick Brown had produced to explain the problems with Pat Frank’s suggestion. You can watch the video in my post, or on Patrick Brown’s post.

Pat Frank has, after many rejections, managed to get his paper published. If you want to understand the problems with this paper, I suggest you watch Patrick Brown’s video, and read the comments on my post and on Patrick’s post. Nick Stokes also has a new post about this that is also worth reading.

However, I’ll briefly summarise what I think is the key problem with the paper. Pat Frank argues that there is an uncertainty in the cloud forcing that should be propagated through the calculation and which then leads to a very large, and continually growing, uncertainty in future temperature projections. The problem, though, is that this is essentially a base state error, not a response error. This error essentially means that we can’t accurately determine the base state; there is a range of base states that would be consistent with our knowledge of the conditions that lead to this state. However, this range doesn’t grow with time because of these base state errors.

As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.
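The distinction can be made concrete with a toy calculation (my own illustration, not from Gavin’s piece): a base-state offset stays fixed, while only a genuine per-step drift accumulates.

```python
days = 365
offset = 1.0  # the clock was set 1 minute wrong (a base-state error)

# Base-state error: a clock that keeps perfect time but started 1 minute off.
# Its error after any number of days is still 1 minute.
base_state_error = [offset for _ in range(days)]

# Drift error: a clock that actually gains 1 minute every day. Only this
# kind of error accumulates; the paper treats the first kind as the second.
drift_error = [offset * day for day in range(1, days + 1)]

print(base_state_error[-1])  # 1.0 minute after a year
print(drift_error[-1])       # 365.0 minutes after a year
```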

Maybe the most surprising thing about the publication of this paper is that the reviewers (who are named) both seem to be quite reasonable choices. It seems highly unlikely that they missed the obvious issues with this paper. Did it get published despite their criticisms? Did they eventually just give up and decide it wasn’t worth arguing anymore? Or, did someone decide that this was something that should play out in the literature? I think the latter can sometimes be a reasonable outcome, but only if the paper has something that’s actually interesting, even if it is wrong. Pat Frank’s paper really doesn’t qualify; it’s simply wrong, and not even in an interesting way.

47 Responses to Propagation of nonsense

You really only need to read the abstract and introduction to dismiss the paper, as he says “The unavoidable conclusion is that an anthropogenic air temperature signal cannot have been, nor presently can be, evidenced in climate observables.”

The whole thing sounds to me like Dr Frank found a way to break a climate model and used that to claim all models are broken. If you were building a climate model and it output such a result you’d immediately say to yourself, “Hm, I clearly got something wrong.” Then you’d find your error and fix it.

If the watch was correct 24 hours ago, and is now off by 1 minute – 24 hours later, then isn’t it possible it will be off by an additional 1 minute per day, as time moves forward? So in one year, isn’t it possible it could be off by 365 minutes?

Just a hypo of course – but why assume the watch error will not change, but remain a constant 1 minute over time?

I am glad the paper was published and others will be able to study it and either support or refute it. That is the way science is supposed to work.

I congratulate Dr. Frank for working on getting his paper published for the last six years!

If Dr. Frank’s work turns out to be refuted – well then kudos to the person who writes the paper to refute it. But what if Dr. Frank turns out to be right? That would be interesting, now wouldn’t it.

Just the act of warming (whether natural, human-made, or a mixture of both) will cause CO2 to be released from the ocean – so warming itself can explain at least some part of the increasing CO2 atmospheric concentration. At least I have read some material which suggests that is a possibility.

I look forward to watching this paper and the responses to it play out and see what comes out of it.

“If Dr. Frank’s work turns out to be refuted – well then kudos to the person who writes the paper to refute it. But what if Dr. Frank turns out to be right? That would be interesting, now wouldn’t it.”

You seem to be missing the fact that he claims that a global temperature anomaly is impossible to measure. I just refuted what he claimed if you change “impossible” to “possible” in the last sentence.

“If Dr. Frank’s work turns out to be refuted “
How would you ever know? Anyone who tries to follow the maths can see that it is nuts. The rest, well…
ATTP has noted a major issue. If you want to turn a state error into something that accumulates at a rate, the question is, what rate? PF has a two step process:
1. He insists (elementary error) that if you average something, the units change. If you get the average height of Dutchmen, it is 1.8 m/Dutch. And if you get an average of annually binned data, say temperature, then the units are °C/year.
2. So there the time rate is determined. The rate per year goes into the calculation and determines the outcome. If you had averaged the monthly T data, you’d have the watch ticking much faster, and get a much bigger result.
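To see how much the answer depends on that arbitrary choice of bin, here is a quick illustration (my own sketch) of root-sum-square propagation with an assumed per-step uncertainty of ±4 W m⁻²: the envelope after N equal steps is √N times the per-step value, so re-binning the same century from years to months inflates it by √12.

```python
import math

sigma = 4.0   # per-step cloud-forcing uncertainty, W m^-2 (illustrative)
years = 100

# Frank-style root-sum-square propagation: after N equal steps the
# envelope is sqrt(N) * sigma, so the result depends entirely on how
# the same physical period is binned into steps.
annual_envelope = math.sqrt(years) * sigma          # one step per year
monthly_envelope = math.sqrt(12 * years) * sigma    # one step per month

print(round(annual_envelope, 1))    # 40.0
print(round(monthly_envelope, 1))   # 138.6
print(round(monthly_envelope / annual_envelope, 2))  # sqrt(12) = 3.46
```

Same century, same physics, a √12-times-bigger “uncertainty” – which is a sign the scheme is measuring the bookkeeping, not the climate.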

Presumably at some point we’ll begin to see some of this so called ‘error’? 🙂

I think I’d be remiss in not noting that up to 2 years ago there was still widespread belief that models were off because satellite data (erroneously) showed lower than expected temperatures. This was playing loudly with global warming deniers, who are always hoping for some tangible justification for their beliefs.

@RickA No one needs to refute it. If it’s any good, other scientists will pick up the ball and use it. That is how science is done. More than likely, capable scientists are laughing.

By the way, if you are interested in understanding how error propagates through time, check out Jerry Mitrovica’s paper on sea level rise. As you know, sea level affects how fast the Earth spins. So if what is happening now for sea level rise was happening 2000 years ago, then astronomers would have recorded eclipses at vastly different times. It’s a good read: https://advances.sciencemag.org/content/advances/1/11/e1500679.full.pdf

There are common themes circulating in the discussions of the moment, linked in part to this argument.
The recurrent theme is how well we can predict ECS.
The fact that the posited range is still so wide and uncertain must lend some slight credence to the notion that we cannot measure anthropogenic effects as well as we claim.
“As Gavin Schmidt pointed out when this idea first surfaced in 2008, it’s like assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today.”
Again, and this is a long bow, if climate prediction is subject to the forces of chaos, then mathematically, like with fluid theory, unexpected extreme changes can accumulate.
The good thing is that historically it does look as if we operate in very strong, self regulating constraints.
Mathematically though there is no guarantee.

“Pat Frank has, after many rejections, managed to get his paper published. If you want to understand the problems with this paper, I suggest you watch Patrick Brown’s video, and read the comments on my post and on Patrick’s post. Nick Stokes also has a new post about this that is also worth reading.”
Thanks for putting this up for discussion; I look forward to some more comments and explanation of where people know it goes wrong. Also, that it was published with these opposing views known suggests those views were not put as clearly as they should have been.
I will watch the video again.

If the watch was correct 24 hours ago, and is now off by 1 minute – 24 hours later, then isn’t it possible it will be off by an additional 1 minute per day, as time moves forward? So in one year, isn’t it possible it could be off by 365 minutes?

No, the hypothetical considers a watch that otherwise keeps accurate time. If you set such a watch to have an initial time that is slightly in error, you don’t then propagate that error.

I look forward to watching this paper and the responses to it play out and see what comes out of it.

I suspect it is mostly going to be ignored. I don’t think many serious researchers are going to bother formally refuting it.

Again, and this is a long bow, if climate prediction is subject to the forces of chaos, then mathematically, like with fluid theory, unexpected extreme changes can accumulate.

No, not really. The chaos really refers to the dynamics (the motion of air in the atmosphere and water in the oceans). This can clearly move energy around and can lead to short-term energy imbalances (i.e., warming or cooling). However, the system is quite strongly constrained to remain near equilibrium, which is set mostly by how much energy we’re getting from the Sun, the albedo, and the composition of the atmosphere (greenhouse gases). So, the chaotic nature of the system can lead to variability, but this is almost certainly constrained to be small, especially on multi-decade timescales.

Angech “Again, and this is a long bow, if climate prediction is subject to the forces of chaos, then mathematically, like with fluid theory, unexpected extreme changes can accumulate. ”

No. The flip of a coin is subject to the forces of chaos, but they never (at least in my experience) go shooting off into space, or come down “elbows” instead of “heads” or “tails”. Chaos does not necessarily imply tipping points, just sensitivity to initial conditions.

“However, the system is quite strongly constrained to remain near equilibrium, which is set mostly by how much energy we’re getting from the Sun, the albedo, and the composition of the atmosphere (greenhouse gases). ”

This shows exactly what is wrong with Frank’s argument. The Stefan-Boltzmann law means the planet radiates according to the fourth power of its temperature. That is a *very* strong feedback. The idea that a constant error in cloud feedback can accumulate and indefinitely overcome the SB feedback is so obviously unphysical that I don’t understand how the reviewers could have let that pass. The climate is not going to warm in a century by 20+C due to clouds.
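To put a number on that restoring tendency, here is a quick back-of-the-envelope calculation (my own illustration, using the effective emission temperature of roughly 255 K):

```python
# Rough estimate of the Planck (Stefan-Boltzmann) restoring feedback:
# outgoing flux F = sigma * T^4, so dF/dT = 4 * sigma * T^3. At the
# effective emission temperature of ~255 K this is nearly 4 W m^-2 of
# extra outgoing radiation per kelvin of warming - a strong negative
# feedback pulling the system back toward energy balance.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 255.0               # effective emission temperature, K

planck_feedback = 4 * SIGMA * T**3
print(round(planck_feedback, 2))  # 3.76 (W m^-2 K^-1)
```

Any scheme in which a fixed per-step error overwhelms this indefinitely is describing a planet with no Stefan-Boltzmann law.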

It only matters if the error in cloud feedback is changing with time. Do models predict any change in global cloud cover associated with warming? My understanding is that they do. If so, then this change in cloud forcing (feedback) does have an associated error. This propagates in time, whereas the fixed absolute error clearly doesn’t.

Clive,
Yes, models do predict that there will be a cloud response to changing temperatures. Indeed, the uncertainty in this should propagate (it’s one of the main reasons for the uncertainty in climate sensitivity). However, what Pat Frank is using is the cloud forcing, not the cloud feedback. An uncertainty in the cloud forcing would change the base state, but would not propagate through the simulation. It would be equivalent to not quite knowing the solar insolation. If there were some uncertainty in the level of solar insolation, then that would imply an uncertainty in the base state (or equilibrium state), but it would not be something that we would propagate through the calculation so as to produce an ever growing uncertainty.
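As a toy illustration of that last point (my own sketch, with illustrative numbers): the equilibrium effective temperature from simple energy balance is T = [S(1 − α)/4σ]^(1/4), so an uncertainty in the insolation S translates into a fixed uncertainty in the equilibrium state, not one that grows with simulation time.

```python
# Equilibrium (effective) temperature from simple energy balance:
#   T_eq = [S * (1 - albedo) / (4 * sigma)]^(1/4)
# An uncertainty in solar insolation S maps to a fixed uncertainty in
# the equilibrium state; it does not compound with simulation time.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.3             # illustrative planetary albedo

def t_eq(S):
    return (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

S0, dS = 1361.0, 5.0     # illustrative insolation and its uncertainty, W m^-2
spread = t_eq(S0 + dS) - t_eq(S0 - dS)
print(round(t_eq(S0), 1))  # 254.6 (K)
print(round(spread, 2))    # 0.47 (K) - the same at year 1 and at year 100
```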

One thing I don’t understand is what the proposed implications are supposed to be. I believe Frank has been careful to suggest that this error propagation doesn’t apply to the real world, so that he isn’t associated with a claim that future warming could be much greater than previously thought. However, given that there is uncertainty in real-world longwave cloud forcing (of a similar magnitude to the CMIP5 model spread used by Frank), this error propagation should logically apply to the real world too if it applies to model uncertainty.

If it’s suggesting a physics problem with CMIP5 models the obvious thing to do would be to test whether his hypothesised error propagation actually happens by running the models into the future. Which of course has already been done many times and no such huge errors appear. So is Frank suggesting that something is being “hidden”?

Another thing is that Frank’s calculations are dependent on the spread over the CMIP5 model ensemble. If there were only one GCM with one average LWCRF then there would be zero error according to Frank.

“However, given that there is uncertainty in real world longwave cloud forcing (of a similar magnitude to the CMIP5 model spread used by Frank) this error propagation should logically apply to the real world too if it applies to model uncertainty.”

I think the key difference is that the Earth is our integrator. One doesn’t assume, say cumulative degree days, then go about integration of a very accurate (say perfect, sigma ~ 0, which is impossible) high frequency (say 10 Hz) digital thermometer calibrated to the highest NIST (or SI) standard. That instrument will have the same biases in the future (bias offset, frequency response and sigma).

The Earth (e. g. humans) is the real time integrator, we only need to come back a decade-century-millennium later and use our same old MIG thermometers.

In the same way, the path taken by AOGCMs/ESMs is very much less important than the final delta T. A final climate sensitivity will emerge which has essentially a zero error bar.

Clive Best says: “It only matters if the error in cloud feedback is changing with time. Do models predict any change in global cloud cover associated with warming? My understanding is that they do. If so, then this change in cloud forcing (feedback) does have an associated error.”

When I was still doing cloud observations years ago there was a study that showed that the climate models which reproduced the 3D state of the clouds best according to observations were the ones with the highest climate sensitivity. So yes, there could be such a relationship, but that does not mean that study X gives you information on Y. You will have to study Y.

PaulS: “I believe Frank has been careful to suggest that this error propagation doesn’t apply to the real world, so that he isn’t associated with a claim that future warming could be much greater than previously thought.”

Frank does make a claim about reality, even about the now:

“The unavoidable conclusion is that an anthropogenic air temperature signal cannot have been, nor presently can be, evidenced in climate observables.”

Somehow based on a paper on dynamical climate models he is able to make conclusions about observed warming.

I am not expecting anyone to waste their time refuting this. Left unrefuted, it makes for a nice honeypot trap.

I notice that in Equation 5 of Pat Frank’s paper he simply drops the year⁻¹ in his uncertainty, so he would probably argue that the units are right. Of course, he doesn’t really explain why he can do so.

In fact, it’s even more bizarre. Pat Frank’s fundamental equation is essentially

ΔT_i = f_CO2 × 33 K × [(F_0 + ΔF_i)/F_0] + a,

where f_CO2 is some coefficient, 33 K is the warming due to the greenhouse effect, F_0 is the total forcing due to greenhouse gases, ΔF_i is the incremental change in greenhouse gas forcing at the ith step, and a is either 0 if considering anomalies, or the base temperature if not.

Hence, you evolve the above without considering an explicit timestep. It’s simply assumed to be linear in the change in forcing. So, Frank includes a year⁻¹ in his cloud forcing uncertainty, and then simply drops it.
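Read that way, the emulator and the propagation step can be sketched in a few lines. This is my own sketch, not code from the paper; the numerical values (f_CO2 = 0.42, F_0 = 34 W m⁻², and the ±4 W m⁻² per-step uncertainty) are assumptions chosen to illustrate the behaviour, not values checked against the paper.

```python
import math

# Sketch of the emulator: warming is linear in the change in
# greenhouse-gas forcing, with no explicit timestep anywhere.
F0 = 34.0    # assumed total greenhouse-gas forcing, W m^-2
FCO2 = 0.42  # assumed coefficient
GHE = 33.0   # warming attributed to the greenhouse effect, K

def emulated_anomaly(dF):
    """Temperature anomaly for a forcing change dF (anomaly form, a = 0)."""
    return FCO2 * GHE * dF / F0

# Attach a +/-4 W m^-2 per-step cloud-forcing uncertainty (the year^-1
# having been quietly dropped) and propagate it in quadrature: the
# envelope after n steps is sqrt(n) times the per-step uncertainty.
sigma_step = emulated_anomaly(4.0)
for n in (1, 50, 100):
    print(n, "steps: +/-", round(math.sqrt(n) * sigma_step, 1), "K")
```

With these numbers the envelope reaches roughly ±16 K after 100 steps – an uncertainty of the same absurd order as the paper reports, with nothing physical constraining it.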

That is a weird paper.
Frank doesn’t know units. And he uses this odd +/- in front of (6), coming from \sigma^2 being the argument under the sqrt, probably thinking in intervals.
He argued, too, that the units originate from calculating “statistical averages” instead of “measurement averages”. Very weird.

In a comment, Pat Frank claims that every scientist he's discussed his analysis with immediately understood and accepted it. I've yet to find one who doesn't think it's nonsense. For fun, I thought I would do a quick poll: I'm a scientist, I'm aware of Pat Frank's analysis, & I

Ah, but ATTP, he could argue that you did not specify it to be *physical* scientists – and of course the No True Scotsman Fallacy will apply when some physical scientist comes along and says it is indeed nonsense. I guess Peter Thorne is thereby automatically ruled out to be a physical scientist (he’s apparently rejected the paper when submitted elsewhere, perhaps even multiple times, considering his Twitter comment). And you are ruled out, too, of course.

Marco,
Yes, I am indeed probably ruled out. Pat has already mentioned that Nick Stokes is no scientist and that he suspects I’m not either. It’s quite convenient when you can redefine people so as to delegitimise their critiques.

I actually downloaded the file with all the various journal submissions, comments and responses. The paper has been rejected by 13 different journals. It’s also remarkable how many people in the climate science community are cowardly idiots who don’t understand science. I’d also forgotten that James Annan was also one of those who rejected the paper.

“It’s also remarkable how many people in the climate science community are cowardly idiots who don’t understand science”
Another interesting “cowardly idiot” on the list is Ronan Connolly, highly praised by Lord Monckton. I think he was included as a friendly referee. But he wanted changes, so he copped it too.

I don’t think it is always wise to target the messenger, but in this case I will make an exception. Pat Frank has all kinds of dubious associations, including the Heartland Institute. Along with Patrick Moore and Jay Lehr, who probably both need no introduction here, he wrote a ‘climate change primer’ for Heartland last year. He also just wrote a risible piece for WUWT. He seems to wear his ideological baggage on his sleeves.

It apparently took him 6 years to get this rubbish published. To be honest, it doesn’t surprise me that it ended up in an open access Frontiers journal. Imho I don’t rate any of the Frontiers journals very highly. This one has an IF apparently of 1.31. More of a bottom-feeder than a high flier. Moreover, as I wrote a couple of years ago with two colleagues, there is a lot of concern over the push for open access as a money-making business model rather than as a conduit for sound, repeatable science.

According to Pat Frank, “I was really glad they chose Carl Wunsch. I’ve conversed with him in the rather distant past, and he provided some very helpful insights. His review was candid, critical, and constructive.

I especially admire Davide Zanchettin. He also provided a critical, dispassionate, and constructive review. It must have been a challenge, because one expects the paper impacted his work. But still, he rose to the standards of integrity. All honor to him.”

I do not understand how it is possible that either of those scientists would offer “constructive” reviews for a paper whose central thesis is absolutely ridiculous. Since I know people who know Carl, I may ask one of them to check in on him and find out if Pat Frank is accurately reporting the content of the reviews…

Don’t be surprised if the conclusion of this paper: “whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now” escapes the echo-chamber and gets recited somewhere as fact. I lump Pat’s nonsense in the same toilet bowl as Monckton’s feedback bs. It’s bs, but it’s plausible enough to the uninformed to be ammo in another front in the doubt-mongering war. Look how hard Watts is pushing it.

So it’s important this crap is immediately and brutally strangled at birth.