A ‘Red Team’ Exercise Would Strengthen Climate Science

Put the ‘consensus’ to a test, and improve public understanding, through an open and adversarial process. – Steve Koonin

Steve Koonin has an op-ed published today in the Wall Street Journal: A ‘Red Team’ Exercise Would Strengthen Climate Science [link]. Annotated text of the op-ed is provided below:

Tomorrow’s March for Science will draw many thousands in support of evidence-based policy making and against the politicization of science. A concrete step toward those worthy goals would be to convene a “Red Team/Blue Team” process for climate science, one of the most important and contentious issues of our age.

The national-security community pioneered the “Red Team” methodology to test assumptions and analyses, identify risks, and reduce—or at least understand—uncertainties. The process is now considered a best practice in high-consequence situations such as intelligence assessments, spacecraft design and major industrial operations. It is very different from, and more rigorous than, traditional peer review, which is usually confidential and always adjudicated, rather than public and moderated.

The public is largely unaware of the intense debates within climate science. At a recent national laboratory meeting, I observed more than 100 active government and university researchers challenge one another as they strove to separate human impacts from the climate’s natural variability. At issue were not nuances but fundamental aspects of our understanding, such as the apparent—and unexpected—slowing of global sea level rise over the past two decades.

Summaries of scientific assessments meant to inform decision makers, such as the United Nations’ Summary for Policy Makers, largely fail to capture this vibrant and developing science. Consensus statements necessarily conceal judgment calls and debates and so feed the “settled,” “hoax” and “don’t know” memes that plague the political dialogue around climate change. We scientists must better portray not only our certainties but also our uncertainties, and even things we may never know. Not doing so is an advisory malpractice that usurps society’s right to make choices fully informed by risk, economics and values.[i] Moving from oracular consensus statements to an open adversarial process would shine much-needed light on the scientific debates.

Given the importance of climate projections to policy, it is remarkable that they have not been subject to a Red Team exercise. Here’s how it might work: The focus would be a published scientific report meant to inform policy such as the U.N.’s Summary for Policymakers or the U.S. Government’s National Climate Assessment. A Red Team of scientists would write a critique of that document and a Blue Team would rebut that critique. Further exchanges of documents would ensue to the point of diminishing returns. A commission would coordinate and moderate the process and then hold hearings to highlight points of agreement and disagreement, as well as steps that might resolve the latter. The process would unfold in full public view: the initial report, the exchanged documents and the hearings.

A Red/Blue exercise would have many benefits. It would produce a traceable public record that would allow the public and decision makers a better understanding of certainties and uncertainties. It would more firmly establish points of agreement and identify urgent research needs. Most important, it would put science front and center in policy discussions, while publicly demonstrating scientific reasoning and argument.

The inherent tension of a professional adversarial process would enhance public interest, offering many opportunities to show laymen how science actually works. (In 2014 I conducted a workshop along these lines for the American Physical Society.)

Congress or the executive branch should convene a climate science Red/Blue exercise as a step toward resolving, or at least illuminating, differing perceptions of climate science. While the Red and Blue Teams should be knowledgeable and avowedly opinionated scientists, the commission should have a balanced membership of prominent individuals with technical credentials, led by co-chairmen who are forceful, knowledgeable and independent of the climate-science community. The Rogers Commission for the Challenger disaster in 1986, the Energy Department’s Huizenga/Ramsey Review of Cold Fusion in 1989, and the National Bioethics Advisory Commission of the late 1990s are models for the kind of fact-based rigor and transparency needed.

The outcome of a Red/Blue exercise for climate science is not preordained, which makes such a process all the more valuable. It could reveal the current consensus as weaker than claimed. Alternatively, the consensus could emerge strengthened if Red Team criticisms were countered effectively. But whatever the outcome, we scientists would have better fulfilled our responsibilities to society, and climate policy discussions would be better informed. For those reasons, all who march to advocate policy making based upon transparent apolitical science should support a climate science Red Team exercise.

Mr. Koonin, a theoretical physicist, is director of the Center for Urban Science and Progress at New York University. He served as undersecretary of energy for science during President Obama’s first term.

The intensity, frequency, and duration of North Atlantic hurricanes, as well as the frequency of the strongest (Category 4 and 5) hurricanes, have all increased since the early 1980s. The relative contributions of human and natural causes to these increases are still uncertain. Hurricane-associated storm intensity and rainfall rates are projected to increase as the climate continues to warm.

The first ominous sentence is literally correct but misleads by not mentioning comparable decreases in the decades prior to 1980, as discussed in one of the NCA’s principal references (Knutson et al., 2010: Tropical cyclones and climate change. Nature Geoscience, 3, 157-163, doi:10.1038/ngeo779). Somehow this survived the NCA’s extensive pre-publication reviews, but would have been flagged by a red team. [Curiously, an online version of this Key Message omits the second sentence about uncertainties.]

It is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on Atlantic hurricane or global tropical cyclone activity. That said, human activities may have already caused changes that are not yet detectable due to the small magnitude of the changes or observational limitations, or are not yet confidently modeled (e.g., aerosol effects on regional climate).

However, the second sentence seems inappropriately wistful for an objective scientific statement. Something like “There has been no detectable human influence on Atlantic hurricane or global tropical cyclone activity over the past 70 years” would have been more neutral.

At the recent House Science Committee Hearing, both John Christy and I argued for a Red Team approach to climate change science assessments. Some additional posts at Climate Etc. relevant to this topic:

Steve Koonin’s op-ed provides heft and a new rationale for a climate ‘red team’ exercise, in context of the March for Science. If scientists are truly marching for SCIENCE (rather than for funding and political power), then they should celebrate the opportunity for a climate science Blue Team – Red Team exercise. Such an exercise, as pointed out by Koonin, would strengthen climate science, improve public understanding of science, better inform the policy process, and would publicly demonstrate scientific reasoning and argument.

If the ‘consensus’ is really as strong as they think it is, then the ‘consensus’ scientists have nothing to lose in such an exercise — the consensus would emerge strengthened. However, if the ‘consensus’ scientists are acting as consensus enforcers for the sake of policy advocacy rather than as real scientists, they will probably feel threatened by such an exercise. It will be interesting to see how they react to such a proposal.

Based on my experiences with the APS Workshop to review their climate policy statement (which was organized by Steve Koonin), I can think of no one who is better qualified and suited to organize such a Blue Team – Red Team exercise.

I also think that the National Security Agency is the right organization to coordinate this. The NSA has the experience and expertise in organizing Blue Team – Red Team exercises. Further, they don’t appear to have a dog in this fight (they just need to understand the risks). And finally, climate change — and particularly the proposed climate change policies — are arguably a national security risk. Having the USGCRP or the National Academies organize this would be pointless, given their entrenched and institutional biases on this subject.

Let’s get on with it and act on Steve Koonin’s proposal. I hope that this will come to the attention of the Trump administration and the NSA.

In the long run, yes. Anything built on a false foundation will eventually crumble. But how long a run did the ether have? Or polywater, or cold fusion? If there are vested interests, and a definitive experiment can’t be done (e.g., vitalism and Wöhler’s urea synthesis, or the ether and the Michelson–Morley experiments), bad ideas can persist for a very long time. It’s why good science is never above debate.

OP: “usurps society’s right to make choices fully informed by risk, economics and values”

This right is something most scientists simply don’t accept. They don’t believe society is competent to make these choices unless the choice happens to be the same as the so-called scientific consensus. “Shut up and do what we say” is what’s actually coming across.

Reblogged this on Finding Confluence and commented:
Cutting through the rhetoric of climate science with a practical approach to understanding a sensible way forward. Thank you Dr. Curry, for your insights.

“human activities may have already caused changes that are not yet detectable”

Not only is it wistful. It is a meaningless tautology that could be stated about anything at any time; for example, “human activities may have caused changes that will turn frogs into people (and vice versa) that are not yet detectable” is an equally true statement. This is an example of the complete dishonesty and/or complete inability to understand simple logic exhibited by the alarmist crew at NOAA.

Among many other reasons, statements like “human activities may have already caused changes that are not yet detectable” all indicate that consensus climate science practitioners, politicians, bureaucrats and politicized actors of every stripe are using unethical means to manipulate me.

It irritates me that warmista posters on these threads keep repeating the discredited mantra.

How do we do this? Do people apply to be part of these teams? Do we force people into different teams? How do we fund it? Do they apply and are the proposals assessed (and potentially rejected if they’re poor) or do we guarantee some level of funding for these teams? How do we assess the outcome – who decides? Have those proposing this considered that this has maybe already happened through the standard scientific process and that doing it all again would be a complete and utter waste of time? Has it been considered that this is simply being proposed by those who are not willing to admit that their ideas were wrong and simply want another chance?

This comment would seem to be relevant to the last question I ask above.

Pentagon has already declared climate change a national security threat, so I doubt it would make it past Dr. Curry’s “no obvious bias” filters, Jim D.

Myself, I think trying to give publicly-funded science a golf handicap is a tacit concession that one’s own research chops are subpar. OTOH, lotsa things are studied on the public dime which don’t have any immediate obvious benefit. So indulging a Red Team in the same spirit in which pure research programs in fields like mathematics, physics, astronomy (and any number of “soft” social sciences) are funded could be palatable and even fruitful. At this point, *anything* to get this crew doing actual research instead of blog-publishing their pet homework assignments for everyone else to do would be a welcome development.

“Myself, I think trying to give publicly-funded science a golf handicap is a tacit concession that one’s own research chops are subpar.”

Hmmm, Otto et al. 2013 was an 11th-hour addition to AR5 and was followed up with Lewis and Curry 2014. The downward trend in TCR is pretty significant. The Stephens et al. 2012 Earth energy budget was a significant departure from the typical Blue Team efforts that have been criticized by some heavyweights in the climate science field. Highest-end sensitivity estimates have been panned by folks like Hargreaves and Annan, who also question the motives of some Blue Teamers. I am not sure you are talking about the right kind of handicap.

The Pentagon report is predicated on climate changes that have not, and may not, come to pass.

“It is in this context,” they continued, “that the department must consider the effects of climate change — such as sea level rise, shifting climate zones and more frequent and intense severe weather events — and how these effects could impact national security.”

Sea level rise unexpectedly slowed to early 20th century level (2mm/yr) in the second decade of accurate satellite altimetry. This rate poses no threat and neither does the previous decade’s rate (3mm/yr) in meaningful time frames for defense planning.

There is no evidence of shifting climate zones but the expectation is that temperate climate zone will expand to displace polar climate. That’s a good thing. It means critically longer growing seasons where they are badly needed. But again this is so slow as to be beyond meaningful time frames for defense purposes.

There is no evidence whatsoever that severe weather events have increased or will increase in frequency. This is a prediction that has already failed.
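As a back-of-the-envelope check on the planning-horizon point above, the two quoted sea level rates can be extrapolated linearly (a purely illustrative assumption; the 30-year horizon is a hypothetical planning window, not taken from any defense document):

```python
# Toy linear extrapolation of the sea level rates quoted above.
# Assumes the rates simply continue unchanged, which is an
# illustrative assumption, not a forecast.

MM_PER_CM = 10

def projected_rise_cm(rate_mm_per_yr, years):
    """Cumulative sea level rise in cm if a constant rate persists."""
    return rate_mm_per_yr * years / MM_PER_CM

# 2 mm/yr (the slower quoted rate) over a 30-year planning horizon
print(projected_rise_cm(2, 30))  # 6.0 cm

# 3 mm/yr (the earlier decade's rate) over the same horizon
print(projected_rise_cm(3, 30))  # 9.0 cm
```

Even at the higher rate, the cumulative rise over such a horizon is under 10 cm, which is the scale the comment argues poses no meaningful planning risk.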

The problem with NSA: it has close ties to the CIA, the military, and the establishment behind the government, and as such it can subtly control the outcome to be whatever is deemed in the interests of the powers-that-be. If the science on warming is being corrupted for political reasons, the NSA would be a very, very, very, very bad choice to oversee the process, no matter how much experience it has with red teams. If the problem is purely a scientific/ideological problem, then I suppose NSA would be OK, but we really don’t know why the science is distorted to such a degree, do we?

Sounds like some people may simply want to avoid public scrutiny in forums they don’t control. Pretending that the normal process of science is already doing this adequately indicates a lack of understanding of what the red team process is like. And it indicates a tired and rather self-interested denial that the scientific process in the modern world has serious problems with selection and positive-results bias and a flawed literature and peer-review process. In short, apologetics is not really a helpful posture.

I’ve participated in or witnessed many such red team processes, and they always lead to more careful assessments and new information, and usually document this information and remaining disagreements.

No Brandon, that’s true of climate science. Generally, consensus scientists have refused to debate skeptics after some very public losses in these forums. If you look at the history, you will see a strong tendency to cover up failures, such as the Hockey Stick, and to deny that there was ever a problem. McIntyre’s critique of paleoclimate is pretty strong and has received nothing like an honest and direct response. People simply refuse to allow Steve to comment, for example, at Real Climate. That’s not good. In the red team setting, that becomes a lot more difficult to do.

It would help explain the resistance of climate alarmists to bald evidence if everyone read “A Profile in Courage” of Edmund G. Ross, from John F. Kennedy’s “Profiles in Courage.” It is a relatively short read.

Edmund G. Ross was a U.S. Senator from Kansas when the Radical Republican mob impeached President Andrew Johnson, President Lincoln’s successor. Leading up to and after his “not guilty” vote, Senator Ross was reviled, attacked and shunned, especially since he was from a Radical Republican state and was, in fact, opposed to President Johnson and his policies.

Senator Ross recognized the intemperance and anti-Constitutional mindset of the majority mob. It is telling that a number of Senator Ross’ harshest critics acknowledged years later that his act was principled and that he helped avoid the disintegration of the Constitutional separation of powers among equal branches of the government. Conviction of President Johnson on flimsy pretexts would have hamstrung the Presidency and led to a rampant and uncontrolled Congress.

The Warmista Mob is now ascendant. They will continue to do their damage to science for a while. Ultimately, though, their passions will fade as sanity returns to unbiased people.

President Lincoln is credited with (roughly): You can fool all of the people for some of the time, some of the people all of the time, but you can’t fool all of the people all of the time.

It hopefully also makes it impossible to unilaterally declare victory, dpy6629. But since you already have, I’m even less inclined to see the point. This may also explain why “opportunities” for further debate have been refused. Folks using my tax dollars to do real work to advance knowledge have better things to do than accept every offer to rehash the same arguments over and over, IMO.

No scientific paradigm has ever been overturned solely by publicized debate, and certainly not by repeating ad nauseam, “I remain unconvinced” and leaving it at that. I really don’t think I can be any more clear on the kind of behavior a Red Team could engage in which would genuinely impress me.

Yes, ATTP. If you read the entire comment thread you would see the excellent response to Lacis later on. His analogy with the human body is fundamentally wrong. The temperature of typical continental locations varies by O(60 °C) over the course of a year. The human body dies at deviations of ±3 °C. The Earth is vastly more variable than a closed system like mammal bodies.

> The Earth is vastly more variable than a closed system like mammal bodies.

Yet small climatic variations can have big impacts, just as AndyL’s example shows. Another good illustration:

The amount of extra CO2 is small relative to the volume of the atmosphere, but it is large compared to the volume of infrared-opaque gases in the atmosphere.

How much ink does it take to change the color of a tank of water, relative to the volume of the tank? It’s exactly the same question. CO2 is infrared-colored ink. It turns out to be not very much. And how does colored water behave in the presence of light? Well, differently than pure water, which is the point. So if there are still any sincere skeptics out there, most of them are operating on a gut check about how much invisible ink there is in a tank, a quantity which is not visible but is easily measured! Others are willing to throw away all of astrophysics along with climate science, and suggest that nothing about radiative transfer is known. How fat should their tails be? There’s no telling.

Now, we know that costs go up nonlinearly with global temperature. Clearly, a tenth of a degree is noise, and ten degrees is massive redistribution of population and ecosystems at best. Far more than a hundred times worse. So the less you know, the more scared you should be.
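The “far more than a hundred times worse” arithmetic above follows from any superlinear damage assumption; a minimal sketch, assuming for illustration a simple power-law damage model (the quadratic exponent is a hypothetical choice, not an established figure):

```python
def damage_ratio(t_small, t_large, exponent=2.0):
    """Ratio of damages between two warming levels under a toy
    power-law model, damage proportional to T**exponent.
    The exponent is purely illustrative."""
    return (t_large / t_small) ** exponent

# Ten degrees vs. a tenth of a degree: a 100x temperature ratio
print(damage_ratio(0.1, 10.0))  # 10000.0 under quadratic damages
```

With an exponent of 2, a hundredfold increase in warming implies a ten-thousand-fold increase in damages; any exponent above 1 yields “far more than a hundred times.”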

I think the whole point of the Red/Blue moderated public forum would be to allow the public to be a jury, witnessing the best case on all points and thus be able to do their own risk assessment. The fallacies of the climate catastrophe argument are:
1) There are no other global risks competing for resources and management.
2) Further study of climate related risks and the potential tech tools to deal with them are less valuable than immediate direct expenditures in order to get an early start.
3) Surrendering control of energy (a means of production) to government has no risks.

Brandon, I think the guidelines on dietary fat were based on Keys’ research from the 1960s, which was, quite remarkably, never seriously questioned. I don’t believe industry funding had much to do with it.

ATTP >
Organising a Red Team is no more complex than what happens now. Instead of as now selecting people and projects to buttress the groupthink/consensus, select those that question it.

> Have those proposing this considered that this has maybe already happened through the standard scientific process and that doing it all again would be a complete and utter waste of time? Has it been considered that this is simply being proposed by those who are not willing to admit that their ideas were wrong and simply want another chance? <

It is considered that those opposing it are not willing to admit that their ideas were wrong, and simply want to silence dissent and politically incorrect thinking.

I think you’re misreading the criticism. I think those who would like to challenge our current understanding should go ahead and do so (in fact, it already happens all the time). What I’m trying to understand is how those who are proposing some kind of formal red/blue team think this should work.

1. How do we decide who should be part of the teams? Mostly, in science, you get to decide what research programme you will follow and aren’t forced to do something you don’t think has much value.

2. How do you fund it? In most research environments you need to write proposals and convince a panel that your project is worth funding. It sounds like this red/blue team idea is arguing for some kind of baseline funding that won’t (initially at least) be based on some kind of assessment of what is being proposed (apart from at a very superficial level).

3. How do we assess the outcome? In science, our understanding normally evolves with time as we do more research and publish the results. There isn’t normally some kind of formal assessment as to what is correct and what isn’t – it’s normally obvious (to those who work in the field) – based on what has been published/presented. So, how do we assess this red/blue team exercise?

ATTP: I think those who would like to challenge our current understanding should go ahead and do so (in fact, it already happens all the time).

The issue is funding. The vested interest and so institutional bias of the funder means alarmism gets tens of billions, challenges get pennies.

How will it work? What happens now is that agencies allocate funding based on precommitted support for the consensus.
So now from high create a new agency alongside, fund it by transferring 10% (?) of the money away from the others, and find some uncommitteds and skeptics to run it. Christy? Curry? …

Indeed, but we currently work in an environment where we’re expected to compete to get funding by writing proposals that are then normally assessed by reviewers and a panel. It’s not perfect and there are often good proposals that are not funded. The norm is to simply try again, not to suggest that everyone else is biased against your research and then try to get your funding without your proposal being assessed. To be fair, I do think there are problems with how we fund research, but I don’t think the solution is to give a formal advantage to one set of ideas simply because they aren’t regarded as mainstream.

ATTP
When I say the issue is funding, I’m thinking of the bias that will attach to it in a situation like climate science, where the monopoly funder has a vested interest in what type of conclusions are arrived at. iow, a situation where one set of ideas is given advantage simply because they are regarded as mainstream.

No amount of resubmitting is likely to overcome this. Imperfect as it no doubt is, a Red Team approach is still going to make the overall system less imperfect.

Good to see you can still come up with vapid nonsense Ken. Like the “fake” questions you list. The first seven are a weak attempt to raise “obstacles” to the proposal. Ever consider offering possible solutions instead? (Of course you haven’t.)

Then you add these two gems:
“Have those proposing this considered that this has maybe already happened through the standard scientific process and that doing it all again would be a complete and utter waste of time?”

and
” Has it been considered that this is simply being proposed by those who are not willing to admit that their ideas were wrong and simply want another chance?”

The first tries to discredit the idea – “Quick, let’s kill this before someone realizes it sounds like a good idea” – and at the same time implies that people like Steve Koonin and Judith Curry are incompetent ditzes with no business participating in a science debate.

The second is the old “consensus” play, telling us deplorables that “Hey, this is settled science. Go away, there is nothing here to see.” You know it would be a waste of time, because you are Ken Rice and if you think it, it has to be true.

Mann’s Hockey Stick has already been thoroughly debunked. It was the product of a knowing and purposeful deception – a scientific fraud; global warming is a hoax! What is a Red Team to do… sift through the ashes and look for further evidence that Western academia stabbed America in the back and sacrificed scientific integrity and honesty on the altar of Leftist ideology?

Perhaps the hockey stick case could offer some lessons on this topic. The NAS Panel and the Wegman committee were sort of like respective blue and red teams. With the NAS Panel, there is a good example of where they avoided an issue they were supposed to deal with. This would be the question of Michael Mann’s Pearson R squared results. Mann reported them for a part of his original ’98 paper, where they passed, but would not reveal them for other parts when asked for them. In Wahl and Ammann’s replication, they failed miserably, and were revealed only after McIntyre made an academic misconduct complaint. At a NAS Panel workshop, John Christy asked Mann if he computed them and what they were. Mann said he didn’t compute them because it would be “silly and incorrect reasoning”. Here’s how McIntyre describes it in a comment at The Blackboard:

Lucia, another backstory on the NAS panel workshop. In our presentation to the NAS panel the previous day, we demonstrated that Mann et al 1998 said that they used verification r2 statistics and even showed a figure showing verification r2 statistics for the AD1820 step. And yet the NAS panel did not follow up Mann’s lie. No one pointed to the figure in Mann et al 1998 contradicting his claim not to have calculated the verification r2 statistic.
The panel was set up so that the panel got to ask Mann questions and, for other witnesses, after the panel finished their questions, others could comment or ask questions. But in Mann’s case, he hared out of the room the minute that the panel ended their questions, denying others any chance to ask him questions. Mann claimed at the time that he had a plane to catch, but I recall someone saying at the time that they saw him walking around outside afterwards with his NSF handler.
When I got a chance to comment, I sharply criticized the NAS panel for sitting there like bumps on a log and not resolving a simple question that could have been resolved (and which they had been asked to resolve).
Afterwards, panelist Doug Nychka told me that they had noticed Mann’s answer and their silence didn’t mean that they hadn’t noticed it. However, they also failed to grasp the nettle in their report, completely neglecting the issue, even though it was one of the main questions in the original Barton letter. According to Cicerone in July 2006, the House Science Committee, which had commissioned the NAS panel (not the House Energy and Commerce Committee), refused to pay NAS for the project because they had failed to deal with the issues that they had asked about.

The 2014 National Climate Assessment could never withstand such an exercise. Essay Credibility Conundrums in ebook Blowing Smoke examined each of the climate change indicia offered by the first chapter. Each and every one was false. Blatantly. Cherry picked, misrepresented, or worse. All that was needed to show the NCA2014 to be Obama administration deceptive propaganda was historical records for the ‘extreme’ and region claimed. Texas droughts. Oklahoma heat waves. Chicago snow storms. Regional rainfalls. Obama’s minions could not disappear those histories.

When I read the Koonin article in the WSJ this morning I thought of you, of course. I concur that deciding which agency takes leadership in organizing the Red/Blue Team approach is the first priority. The second priority will be the selection of the individual participants. Finally, who will write the summary document? As has happened in climate science writings before, the final output may reflect not the uncertainties but the opinions of the writing author. Would you have a preference on the final writing author(s) name(s)?

What a poor (or is that selective?) memory you have – perhaps you should go back through the archives of ClimateAudit and refresh your memory on several attempts to get code/data. IIRC, that was where you first coined “Free the code, free the data”, was it not? And in relation to mainstream papers, was it not?

Perhaps you recall the attitude you had in 2011?
Perhaps you recall the difficulty in getting data from Jones at that time?
No?
Here, this might help…

“Posted Jun 29, 2011 at 2:30 AM [@climateaudit]
…
Seriously, Jones held the data for personal reasons. He should not have done that. It wasnt a crime, but for gods sake, take the man out of the role where he gets to decide or influence who gets data. Seriously. He’s sloppy ( by his own admission), lost the agreements, misused his power. Put the data in the hands of a competant document control specialist.”

“Posted Jun 28, 2011 at 11:28 AM [@CA]
Well Jones shared it with Steve in 2002 when mcIntyre was a Nobody ( sorry steve, just kidding)

Some strange dude writes Jones in 2002 and jones just shares the data. no questions, no checks, AND he says that this data SHOULD BE FREE AND OPEN.

That my friend is the state of play before mcIntyre published.

AFTER mcintyre published, Phil Jones beliefs about confidentiality changed. His attitudes are OPPORTUNISTIC”

It was more complex than that. It wasn’t Jones’s own data. He had gathered it from many national sources. Anyone else could also have gone directly to those sources, gathered it themselves, and signed the no-transfer agreements that they had to. It was just more work for them than trying to get Jones’s copy. It was data in plain sight, not hidden from those willing to make their own effort.

Does not appear to be accurate. Given Mosher’s posts of 2011 @ CA that I quoted, do you think his characterisation as quoted above (“ONLY skeptics…”) is accurate? Even if technically accurate (it wasn’t Mosher’s request, e.g.), is it misleading?

Americans like myself have no problem with the idea in the abstract, but don’t particularly care for advocacy through rhetorically spinning the concept by employing argumentative fallacies to score points in the climate wars.

Present a solid, realistic, and feasible plan for implementation based on a non-strawman rationale, and avoid the antipathy and motive-impugning.

Joshua, your use of the argumentative-fallacy idea is as vague as what you think those you oppose are doing. Argumentative fallacy? That’s a good phrase for suggesting shutting down all discussion. There’s no strawman and no impugning; the red team has made its case clearly.

The so called “March for Science” has always been a bit of a hash; it will be interesting to see if any of the masked and violent protesters we have seen on the streets of the US at protests since the November election will be joining in tomorrow. The sage advice of “Follow The Money” is ALWAYS true. Lots of folks woke up on November 9th and had to check the barn and examine the health of their herd of oxen.

While acknowledging that the burden of proof in this debate still rests with those recommending fundamental changes to global society to avert a highly theoretical risk which, it would seem, is only one of a vast array of speculative risks they could have chosen from, I fully support the idea of red team analysis and a public, transparent scientific dialogue on these issues, something long overdue.

If the proponents of CAGW truly believe what they sell, and want to move forward with policy responses, then they need to settle the controversies. One way to do that would be through a rigorous and objective scientific debate in full view of the public, whose wellbeing will be dependent on the outcome. If the “consensus” proponents don’t support such a process, then I can only conclude they must feel totally unarmed for a battle based on objective scientific evidence.

Sigh. I’ve been seeing all the “considering alternative theories is anti-science!” articles, but really, with a few tens of billions of dollars how hard would it be to build a comparable suite of plausible GCMs and impact studies that all said CO2 was harmless or a net benefit? The differences would be small inflection points amid a lot of noise.

Acemoglu has shown pretty conclusively that quality of institutions is the dominant factor in modern economics, and the glacially inexorable ideological drift of those institutions has real consequences for our society, only some of which can be challenged at the ballot box. Amid scientific institutions particularly, ideology should not substitute for rigor.

Pons was delusional. He got way out ahead of the science and held a press conference with his lawyers. There may be something to LENR, but it wasn’t what Pons said.

—
Many scientists tried to replicate the experiment with the few details available. Hopes faded due to the large number of negative replications, the withdrawal of many reported positive replications, the discovery of flaws and sources of experimental error in the original experiment, and finally the discovery that Fleischmann and Pons had not actually detected nuclear reaction byproducts.[5] By late 1989, most scientists considered cold fusion claims dead,[6][7] and cold fusion subsequently gained a reputation as pathological science.[8][9] In 1989 the United States Department of Energy (DOE) concluded that the reported results of excess heat did not present convincing evidence of a useful source of energy and decided against allocating funding specifically for cold fusion. A second DOE review in 2004, which looked at new research, reached similar conclusions and did not result in DOE funding of cold fusion.[10]

Steve, I understood your point, but I’m not sure you’ve grasped mine — by November nearly all disinterested observers agreed Pons was delusional (and the evidence was pretty clear). The fact that Koonin had made a public statement to that effect several months prior was de minimis at that point, certainly not comparable to having built an entire decades-long career on one side of a highly politicized issue that was still the subject of a lot of science and policy debate.

Koonin was right. Pons was delusional. Cold fusion per se cannot overcome the Coulomb barrier. For the magnetic equivalent of the electrostatic Coulomb barrier (like charges repel) try bringing the north poles of two simple bar magnets together. No one is strong enough to do that at small enough spacing.
But Pons had a real, intermittent, and mischaracterized phenomenon. I sent Motorola’s top theoretical physicist to Pons’ Toyota-funded French lab in 1996 to check it out. We did nothing further because of the intermittency. The physics has since been explained by the weak-force Widom-Larsen theory. There are two inducement modes, phonons and plasmons. Whether it will ever be commercial, dunno. Typical energy gain needed is >7x (based on ITER), but lab demonstrated (Brillouin Energy) is only ~2x.
Covered in detail as an energy example in The Arts of Truth ebook published 2012.

Lacis was right. Koonin was delusional. Human influences on the climate cannot be considered really small.

We really don’t need public funding to play this game, Rud. We already know where that road leads. However, if funding were to be used to produce novel, publishable research which is then formalized into a full-blown climate model — I’m all for it.

Failing that, my vote is that Team Red gets to do ever MOAR audits on its own dime.

I followed your physics argument as far as I needed to grok the moral of the story, Rud. Perhaps you should read my response more closely to find the bit where I’m not averse to taking a chance on presently “mischaracterized phenomen[a]” … in the neighborhood of where I mentioned “novel, publishable research”.

Modern-day Galileos might do well to realize that his model didn’t ultimately prevail because he spent his years penning letters about what a biased fool Ptolemy was … he actually built the darn model.

No Brandon, Lacis was wrong, and his analogy to the human body remarkably bad. In any case, Koonin was right that the changes in energy flow in the climate system are very small compared to the overall energy flows. That’s why it’s so hard to get these small changes right or measure them accurately.

The overall energy budget is about 1000 W/m2 at the top of the atmosphere. It’s perhaps O(300) at the surface. A 3 W/m2 change due to greenhouse gases is about 1%, which is a small change. Estimates of aerosol forcings, for example, are on the order of 0–2 W/m2, which is small. But it plays a big role in the way AOGCMs balance various uncertain forcings.
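A quick arithmetic check of the percentages quoted above (using the comment’s own round figures, which are illustrative rather than authoritative values):

```python
# Sanity check on the figures quoted above. The 300 W/m2 surface budget
# and the 3 W/m2 greenhouse-gas forcing are the comment's round numbers,
# not authoritative measurements.
surface_budget = 300.0   # W/m2, order-of-magnitude surface energy flow
ghg_forcing = 3.0        # W/m2, quoted change due to greenhouse gases

pct = 100.0 * ghg_forcing / surface_budget
print(f"GHG forcing is about {pct:.0f}% of the surface budget")
```

Which works out to about 1%, matching the claim that the perturbation is small relative to the overall flows.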

ristvan — Widom-Larsen still has some holes (neutron detection being the most glaring). There are some interesting LENR observations, but no theory really explains them very well, and of course many of them are probably in error. Good science requires a robust “We Don’t Know” category.

brandonrgates — That is more or less the opposite of how science works. Galileo prevailed precisely because the Ptolemaic model failed. Besides, as Armstrong showed it’s trivial to produce a model that outperforms GCMs — the naive forecast will suffice.

eli rabett — Bart successfully demonstrates that choosing a large enough bound will contain any trend, but he doesn’t seem to realize he’s comparing his actual weight to a model that says he should weigh 150 kgs in 2010.

If you haven’t been working with forecasting software for decades, it might not be obvious that the benchmark model is just a benchmark.

By analogy, it’s as though someone pointed out “these models are no better at predicting Super Bowls than a coin flip!” and then you criticized coin flips as an inferior forecasting tool due to your vast knowledge of fly sweeps and DVOA.

Tony’s spin doesn’t contain anything that substantiates “Armstrong showed it’s trivial to produce a model that outperforms GCMs.” Neither does Green et al., in fact, which rather concludes that even perfect forecasts would not help policymakers. Which begs the question of why we’d need them in the first place. The authors are prudent never to assert that need, but they do neglect to justify their assumption that GCMs are meant to produce forecasts. We already know that dumping CO2 into the atmosphere like there’s no tomorrow will produce more AGW. No need for stoopid modulz to tell us that.

As for outperforming GCMs, here’s one that can do it within one single millikelvin:

Willard, “citation needed” is snark, as opposed to a polite request. This isn’t Wikipedia, and Armstrong’s commentary on models is not terribly obscure. Did you try Google first?

I am beginning to think responding to you is perhaps not a productive use of my or readers’ time, so I will leave you with a simple exercise: compare a simple extension of the satellite-era trend to the IPCC forecasts in 1991 or 1995.

The approach taken by the United Nations was at best incredibly naive. The Principles Governing IPCC Work are more or less free of sound scientific principles – there is no mention of scrutiny or of applying a sound scientific method. Rather than imposing sound scientific principles on the IPCC, the United Nations allowed it to be governed by:

– the unscientific principle to: “concentrate its activities on the tasks allotted to it by the relevant WMO Executive Council and UNEP Governing Council resolutions and decisions as well as on actions in support of the UN Framework Convention on Climate Change process” (§1)

– the unscientific principle to: “In taking decisions, and approving, adopting and accepting reports, the Panel, its Working Groups and any Task Forces shall use all best endeavours to reach consensus.”(§10)

– an approval process and organization principle which must, by its nature, diminish dissenting views: “differing views shall be explained and, upon request, recorded.” (§10) “Conclusions drawn by IPCC Working Groups and any Task Forces are not official IPCC views until they have been accepted by the Panel in a plenary meeting.” (§11)

Dr. Roger Pielke Sr. experienced the gauntlet of consensus enforcement firsthand. After fruitless protests against having his assessments squelched by his group’s leader, Thomas Karl, he resigned from further participation on 8/13/05.

Dear Dr. Mahoney

I am resigning effective immediately from the CCSP Committee “Temperature Trends in the Lower Atmosphere-Steps for Understanding and Reconciling Differences”. For the reasons briefly summarized in my blog (http://ccc.atmos.colostate.edu/blog/), I have given up seeking to promote a balanced presentation of the issue of assessing recent spatial and temporal surface and tropospheric temperature trends…. This entire exercise has been very disappointing, and, unfortunately is a direct result of having the same people write the assessment report as have completed the studies. Their premature representation of aspects of the report to the media and in a Senate Hearing… Only the minimal representation of the perspective that I represent will be begrudgingly included in the report. I also learned earlier this week that a member of the Committee drafted a replacement chapter to the one that I had been responsible for… This sort of politicking has no place in a community assessment. If such committees are put together with no intention of adequately accommodating minority, but scientifically valid perspectives, then it would be best in the future not to invite such participation on CCSP committees.

Roger A. Pielke Sr.
Professor and State Climatologist
Department of Atmospheric Science
Colorado State University

https://pielkeclimatesci.files.wordpress.com/2009/09/nr-143.pdf

“After some prolonged deliberation, I have decided to withdraw from participating in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). I am withdrawing because I have come to view the part of the IPCC to which my expertise is relevant as having become politicized. In addition, when I have raised my concerns to the IPCC leadership, their response was simply to dismiss my concerns.”

We can only speculate about how many undocumented biasing effects the principles, and organization culture of IPCC may have caused.

If one person has a view and nine people have a different view, the reports will reflect the majority. People feel marginalized largely because they have failed to convince many other people. That’s the way it works.

Jim D, if you believe that Pielke Sr. and Landsea expected their view to be the final and only view you are deluding yourself. They just wanted representation of a minority view, and not have their contributions deleted (censored).

Ideas that only have one person promoting them are destined to fail. This is the problem with many skeptic issues. Little support even within the skeptical “community” who are just as skeptical of each other, and there’s the rub.

Jim, I agree with you. Ignoring political conformity in favor of skeptical rigor is not naturally fruitful. It must be nurtured with understanding the search for truth is not necessarily popular and often lacks a constituency.

Good ideas don’t stay a minority opinion for long, but unfortunately in these cases, they seem to have not pursued their musings any further or gained independent support. Hence the fizzling out, and it is back to the drawing board for them.

“If one person has a view and nine people have a different view, the reports will reflect the majority.”

Yep, Einstein had a view and hundreds or thousands had a different view, and the world followed wrong views for many years. This happens time and time again. The evil consensus view squashes anyone who disagrees, including those with a better view. This consensus evil still dominates the way we advance: 99 false views for every 1 correct view.

Go back in time and tell that to Galileo. There are many examples where good ideas stayed a minority view for far too long.
You picked a really bad idea to promote. History proves you wrong more times than can be counted.

Good ideas sometimes stay a minority opinion forever if the decision makers don’t get rich from them.

Maybe the NSA has already done this. With the results being off limits for the public until 2060 or so. Any leakers? Come on.

Kidding aside, I agree that this will only work in a depoliticized environment.

If the march for science was about strengthening the authority of science it would be for a depoliticized science. Actually AAAS wants to recruit/interest scientists for advocacy. The social sciences and humanities are already streamlined. What remains to be cracked are the “hard” sciences. Better use the Trump lever before Trump goes mainstream and the opportunity fades.

Perhaps if the NSA keeps the blue/red team contest secret it may work. Then we will have to wait for Wikileaks to get the results out.

But in a depoliticized environment it isn’t needed. Happens naturally as part of the normal science process.

We only need it because climate science is already riddled with politicization, due to its funding by political bureaucrats. This undermines the normal science process by preferentially funding those ‘findings’ most favourable to political expansionism, i.e. alarmism.

After reading some more comments and thinking about it, the weak link of the whole affair seems to be the delegitimization of scientists who want to pursue projects/ideas that are not of the consensus type.
And this is probably all the red/blue team thing is about: legitimizing the investigation of non-consensus ideas. If some measure of scientific ethics were restored, such an exercise would be unnecessary.
This will either happen in the climate science community or not. The political spin is a different problem altogether.
Allocating some extra money to such projects may help. Maybe the times are such that problems of scientific ethics can only be addressed by allocating money?

Basically the $15K came from a $5K and a $10K donation. That covers the health care cost of treatment. Zeke and Robert have indicated that Robert would appreciate more help for his family because he is going to have a considerable loss of income while he is treated, so any contribution would be welcome.

SM, did some checking. Burkitt’s is a B-cell lymphoma, rare in the US unless immunocompromised. Median long-term survival with tumor removal and aggressive chemo is 80–90%. A (for such blood cancers) very good prognosis.

If needed, I can introduce him to the former head of Mayo Clinic’s cancer center, Dr. Frank Prendergast, MD/PhD. Frank was a customer of my (previous career) Mot biochip business (Frank morphed into heading individualized medicine using our technology on Mayo’s vast and well characterized tissue sample bank; biochips are now part of GE Medicine as Mot self-destructed), then a personal friend based on my first VC venture in genetic testing, and then a Mayo supporter of my nosocomial infection prevention startup (licensed from P&G). Would not be the first such Mayo cancer introduction under similar circumstances. Just a simple phone call. But the cancer center is in Rochester MN, and Rhode is in Europe. The intro offer stands if/when needed. Regards

«However, it seems inappropriately wistful for an objective scientific statement.»

The IPCC report is full of statements that are inappropriate as objective scientific statements. The foreword to the Working Group I contribution to the fifth assessment report of IPCC states:
«As an intergovernmental body jointly established in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP), the Intergovernmental Panel on Climate Change (IPCC) has provided policymakers with the most authoritative and objective scientific and technical assessments.»

Whatever assessment is supposed to mean within scientific conduct, an objective assessment should at least be an assessment that is not influenced by personal feelings or opinions. However, the Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties imposes on the lead authors the task of assigning subjective levels of confidence to their findings:
«The AR5 will rely on two metrics for communicating the degree of certainty in key findings:
1 Confidence in the validity of a finding, based on the type, amount, quality, and consistency of evidence (e.g., mechanistic understanding, theory, data, models, expert judgment) and the degree of agreement. Confidence is expressed qualitatively. ..
…
A level of confidence is expressed using five qualifiers: “very low,” “low,” “medium,” “high,” and “very high.” It synthesizes the author teams’ judgments about the validity of findings as determined through evaluation of evidence and agreement.»

Here is a comment on this IPCC practice in the InterAcademy Council review of the IPCC, Climate Change Assessments: Review of the Processes and Procedures of the IPCC:
«IPCC authors are tasked to review and synthesize available literature rather than to conduct original research. This limits their ability to formally characterize uncertainty in the assessment reports. As a result, IPCC authors must rely on their subjective assessments of the available literature to construct a best estimate and associated confidence levels.»

Whatever the IPCC was doing, it was largely subjective, not objective as required by its principles (§2) and erroneously promised by the foreword. That is easily seen from the abundant use of the level-of-confidence qualifiers in the IPCC WGI AR5 report.

A red team exercise seems appropriate. Maybe the red team can perform some objective testing on the IPCC lead authors’ subjective levels of confidence.

It was great this morning to see Koonin weigh in and add heft, as you say, in support of your idea: heft that comes from a former Obama administration official.

I think this is far better than what Lindzen and others have supported, which is just cutting funding to the current crop of climate scientists and their projects. The problem I see with that approach is that it leaves the field solely to the vast accumulation of questionable science well known to the commenters here. If no countervailing science comes forth, the current science will come back into vogue with the next alarmist administration.

A red team/blue team approach will publicly surface the bad science while confirming the well-done science. That record will be hard to sweep under the rug, and Koonin would be a great choice to lead the effort.

Those are surmountable with only ordinary hauling and shoving. How is any official commission selected? All methods are objectionable on some ground or other, but that doesn’t paralyze us. Koonin has already addressed the major How questions. When and Where are minor matters. (I suggest over the Internet, in a format that builds on the Climate Dialogue site’s.) Who isn’t a deal-breaker either, because borderline candidates who didn’t make the cut could be allowed to comment “below the line,” as on the Climate Dialogue site.

In the end skeptics will reject the results.

A more accurate statement would be that both sides will object to many of the findings that they don’t like. (Koonin isn’t proposing a winner-takes-all format, BTW.) Anyway, so what? We don’t say that court cases mustn’t be conducted just because the loser won’t accept the results. Skeptics must be given their day in court regardless. They deserve it; the public deserves it; and Science should give it to them to maintain its own credibility. (By failing to give an official hearing to skeptics of mainstream nutrition for decades, and for similar high-handed behavior in other controversial matters, Science has already dinged its reputation.)

Steven Mosher: “In principle this is a good idea … The questions are all practical. Who when where how”

How hard can it be? What happens now is that agencies allocate funding based on precommitted support for the consensus. Seems to deliver well enough.
So now, from on high, create a new agency alongside, fund it by transferring 10% (?) of the money away from the others, and find some uncommitteds and skeptics to run it. Christy? Curry? … Heck, Mosher even.

Climate Science involves science and engineering in an extremely wide range of disciplines. Almost all disciplines at the macro scale, in fact. One B/R team combo cannot possibly cover all the issues in all the domains.

Maybe a Big Blue Team / Big Red Team at the top level with specialized sub-teams. Some regulatory agencies are structured this way.

So far as I am aware, the response functions in the physical domain have not yet been defined beyond super-meso global responses. Additionally, any potentially advantageous responses seem to be generally ignored.

Conversely, to hear them tell it, self-proclaimed modelling/software development/validation experts could have long ago built a better model, Dan H. More coding, less lip-flapping might actually get some real traction.

I certainly endorse the Red Team approach. However, a monster Issue Tree diagram would be even better. The big shortcoming of the Red Team method is that each line of argument is scattered among numerous documents. The starter document makes a given claim or argument. Then the first Red Team document probably makes several responses to that argument. The next Blue Team document offers responses to the Red Team responses, and so it goes, document after document.

Note that this is a tree structure (the issue tree) because often one response elicits multiple responses, level after level. One can only see these response-response-response, etc. paths by going from document to document to document, etc., which is very difficult to track given the branching structure of the arguments. The Issue Tree diagram displays all of this in one easy to see place. One document. Mind you the diagram will be very large, because that is the nature of the climate debate.

Tony, David has done that several times over the years (brief explanation), I don’t have a link but he will have. And the DT tells me “Snow and storms to strike Britain with low temperatures lasting for weeks.” In late April! Enjoy!

“which is very difficult to track given the branching structure of the arguments.”

Oh, I dunno about that – a hyperlinked document, with a database tracking and annotating which issues are affected, would seem a plausible way to keep this under control and draw the attention of various sub-groups to “game changing” findings in other sub-groups – this would need to propagate up and down the tree of course. Doesn’t seem too difficult to set up, although it would need time and testing, of course.
Such a system would be incredibly useful for any number of issues aside from just climate science. If no-one has made it yet, it sounds like one of those “how did we ever do without it?” systems, and worthwhile on its own merits.

K63, I spent 15 years consulting to boards and such, on questions like: should Harley-Davidson survive the Japanese onslaught? There are methods other than DW’s that elucidate the same issues. His is but one. Quite sophisticated, but still just one. Raiffa decision trees are much simpler, yet serve the same goal. The point is always clear communication and basic intuitive understanding for the uneducated.

The issue tree is the fundamental structure of writing and speaking. With certain important exceptions, every sentence in a text (except the first) is answering a specific question posed to a specific prior sentence. Thus the set of relations between sentences is the set of all possible questions. The tree structure occurs because more than one question can be asked of a given sentence and this frequently occurs. The questions are often quite simple, such as how?, why?, such as?, what evidence?, etc.

For example consider this string of sentences: We have to go. The cops are coming. Use the back door.

The second sentence is answering the question why? of the first, while the third sentence is answering the question how? of the first. This is a simple issue tree. Note that these are reasoning relations, not rhetorical relations.

When there are many sentences, as in an article or long blog post, the issue tree can be difficult to grasp just by reading the linear string of sentences. Here the issue tree diagram becomes useful. One can see the reasoning. One can also measure it in various useful ways. But the issue tree and the issue tree diagram are two very different things, like highways and highway maps. The issue tree per se is there whenever we write or speak.

Moreover, the issue tree diagram can be scaled to show the reasoning relations between documents rather than sentences. Let’s say we have 400 recent journal articles on a given climate topic, which is a fairly typical number. An issue tree diagram of a few thousand nodes could show the collective reasoning that ties this corpus together. The state of the reasoning, as it were. Likewise for the successive Red and Blue Team response documents. The technology is powerful.
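The sentence-level structure described above can be sketched in a few lines, purely as an illustration using the three-sentence example from the comment (the `Node` class and its method names are my own assumptions, not part of any existing issue-tree tool):

```python
# Minimal sketch of an issue tree: each child sentence is attached to
# its parent along with the question it answers ("why?", "how?", etc.).
from dataclasses import dataclass, field

@dataclass
class Node:
    sentence: str
    # List of (question, child Node) pairs; one sentence can take
    # several questions, which is what makes the structure a tree.
    children: list = field(default_factory=list)

    def answer(self, question: str, sentence: str) -> "Node":
        child = Node(sentence)
        self.children.append((question, child))
        return child

# The example from the comment: two sentences both respond to the first.
root = Node("We have to go.")
root.answer("why?", "The cops are coming.")
root.answer("how?", "Use the back door.")

def show(node: Node, depth: int = 0) -> None:
    # Print the tree with each edge labelled by the question it answers.
    indent = "  " * depth
    print(f"{indent}{node.sentence}")
    for question, child in node.children:
        print(f"{indent}  [{question}]")
        show(child, depth + 1)

show(root)
```

The same structure scales to document-level nodes simply by storing document identifiers instead of sentences.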

David, I would need to go through likely several iterations of a design process with some “sample” data, and a sit-down discussion with end users, but my initial thoughts are along the lines of hyperlinked references, annotations, questions, answers, parents/children – that sort of thing, as well as outstanding/resolved etc for each issue. Dynamic document created “on-the-fly” on each click, links to follow/add/edit/delete all those things etc. If you can create something on paper – no matter how “messy” – then creating a web based hyperlinked version is not too much of a stretch and certainly offers more ease of use, automatic updating etc. The biggest issue that I can see up-front would be propagation issues – circular references that are several steps removed from each other, so an iterative update never ends. Probably not insurmountable, but requires careful design.
I haven’t even looked at an existing “flat” (paper) version, so this is hardly definitive, but it seems to me quite do-able. I’ve created quite a few web based systems with database back-ends, one even linking several disparate databases, that automate otherwise very time intensive processes. This doesn’t seem much different – at least in general terms.

Kneel63, you have given me a big new idea, regarding the climate debate issue tree diagram. Namely, use a relational database that generates local subtree diagrams on user demand.

I am used to doing relatively small issue tree diagrams, on the order of a few hundred to a few thousand statements. Even for the latter I normally break them up into subtrees, the design of chemical plants for example, or bank consumer contracts.

But the climate debate is much larger. For example, the number of comments here at Climate Etc. is approaching 850,000. The number of distinct sentences is much larger, certainly in the millions. Assuming that just one percent of these are unique and substantive, that yields an issue tree of well over 10,000 significant sentences.

While it might be profoundly interesting to display this entire issue tree all at once, it is probably far more practical to display specific subtrees, devoted to specific sub-issues. The basis for this is a relational database where every data element is a single sentence. The relations for a given sentence specify just two things:
(1) the sentence that this sentence is responding to and the nature of the response, which is either the question answered or the objection made.
(2) those sentences that respond to this sentence and the nature of these responses.

Mind you these response links themselves might also be data elements, making this what is called in mathematical logic a second order system, because it ranges over the relations as well as the things related. Have to think about that.
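The two relations described above could be stored along these lines. This is only a sketch under stated assumptions: the table names, column names, and relation labels are my own illustrative choices, not a worked-out design.

```python
# Illustrative schema for the sentence/response database described above,
# using SQLite. Each row of `response` records which sentence responds to
# which, and the nature of the response (question answered or objection).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sentence (
    id   INTEGER PRIMARY KEY,
    text TEXT NOT NULL
);
CREATE TABLE response (
    id        INTEGER PRIMARY KEY,
    target_id INTEGER NOT NULL REFERENCES sentence(id), -- sentence responded to
    source_id INTEGER NOT NULL REFERENCES sentence(id), -- the responding sentence
    relation  TEXT NOT NULL                             -- e.g. 'why?', 'objection'
);
""")

con.execute("INSERT INTO sentence VALUES (1, 'We have to go.')")
con.execute("INSERT INTO sentence VALUES (2, 'The cops are coming.')")
con.execute("INSERT INTO response VALUES (1, 1, 2, 'why?')")

# Relation (2) from the comment: everything that responds to a given
# sentence, and the nature of each response.
rows = con.execute("""
    SELECT s.text, r.relation
    FROM response r JOIN sentence s ON s.id = r.source_id
    WHERE r.target_id = 1
""").fetchall()
print(rows)
```

A recursive query over `response` would then generate the local subtree diagram for any chosen root sentence on demand.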

Willard: I went to the linked website, but did not see anything about “equivalence classes”. Perhaps you can explain yourself (for a change).

The website does in fact summarize some of the high level issues, each of which has been debated with many thousand sentences, here and elsewhere, so I do not see just what your point is, contra mine. The detailed debate clearly exists. I just want to make it easily visible.

“Namely, use a relational database that generates local subtree diagrams on user demand.”

David, if you’ve never done this sort of thing, then…

Database design tips:

1) you can’t have too many tables – don’t store multiple copies of the same data unless you have to. Keep a reference to a row of another table instead. This saves space and time, and improves data extraction efficiency too.

2) think about a form the user needs to fill out to get data into the system, and what reports and what data (inc summations, etc) you need to generate these reports, then base your tables on this information. This is your input “log” as well as data, and a “snap shot at date” of output data. Keeping this in the format used and tagged with “this user did it” and “at this time/date” – useful and easy to store with each row – gives you instant logs of all interaction that changes data as well as output data summaries at specific times. This is vital and useful if you are concerned with data consistency (you should be), or with being able to audit not just the data, but decisions made based on the data and its summaries at that time.

Advanced: If you want “roll-back changes” to any arbitrary point in time, and/or by particular users, and/or …, then have a look at triggers – these can let you create tables of the “before and after change” data that you need for this automatically, when it happens, and without the need to worry about who changed the data or how – even your “quick and dirty” back-end “fixes” get the changes logged.

Triggers are also highly useful to ensure your meta-data remains consistent – a lot of work at implementation time, but well worth it!

3) Queries can transform input data into intermediate or output data – they can be nested, so save these sub-queries somewhere (implementation specific) and re-use a query rather than directly reference a table. This sounds odd, but you need to do this even if you start with a “select all” query on a table – it actually insulates you from how you store the data if you decide to split a table and use references instead for speed reasons, for example. In this case, you can re-create just one (stored) query and “fix” all the rest that your change broke.

4) indexes are your friend – they make queries much faster for “on the fly” or “interactive” searches, especially ones spanning multiple tables, or with a large number of base records you are gathering statistical data about (count, sum, average, etc.), and/or sorting, filtering, etc.
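A small illustration (SQLite-specific, with invented names): EXPLAIN QUERY PLAN reports whether the engine can seek via the index instead of scanning every row.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, n INTEGER)")
con.executemany("INSERT INTO events (kind, n) VALUES (?, ?)",
                [("a" if i % 2 else "b", i) for i in range(1000)])
con.execute("CREATE INDEX events_kind ON events (kind)")
# with the index in place, the count(*) query seeks rather than scans
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM events WHERE kind = 'a'").fetchall()
print(plan)  # the plan should mention the index events_kind
```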

5) try it out with a small amount of sample data – try every interaction you can think of, and take note of what’s easy and what looks hard – you don’t have to complete every bit of all queries, just make a start to see what data is needed, and how you would fetch that data.

With indexes on the key input data fields, tables that link the indexed keys of multiple such input tables are the most valuable data you can collect – relationships! Think about that too for each user interaction that uses this meta-data.

6) Toss the whole lot away today and start again tomorrow – this sounds harsh and perhaps belittling of your abilities, not to mention a waste of time, right? Nope, trust me – you’ll do it differently the second time, based on what you learnt the first time about this particular data set, how you want/need to interact with it, and what first order meta-data is useful.

Give yourself overnight to let it all “sink in”.

Don’t put this off – do it now, before it becomes too hard to change, and/or too hard to live without because you’ve already imported the full data (assuming you have a big pile you can import, that is).

I’ve done a few of these systems, and I still go with the “Mark II” approach as an integral part of the design – wouldn’t do it any other way.

Good database tips, kneel63. It’s funny you mention that — I helped to develop and support a dynamic SQL application that generated hundreds of thousands of new lines of SQL every time it was run, including dropping and rebuilding the indexes. Challenging, but it was really the only way to process the data the way the users wanted.

We finally retired it just a few months ago, but the users still kind of miss it. Can you guess what it was used for? Hint: rhymes with “boar-blasting.” :)

David Wojick: I have been trying to get someone to fund a climate debate issue tree project for many years.

Doesn’t Climate Etc do that? We have short essays on hot topics, usually accompanied with hot links and references; then disputants chime in with counter arguments and informative (hopefully) hot links and references.

A formal arrangement, like Wikipedia, would run aground on common problems: Who builds the issue tree? Who selects the issues? Who selects the hot links and contrary links? Can you count on anybody to paraphrase others’ claims and findings accurately? Who cross-checks, Steve McIntyre style, for conflicting/changing claims by particular disputants?

Almost every day Marc Morano and Anthony Watts put up counter arguments, complete with references and links, to recent claims by scientists, and then the regular readers chime in. You can get some of this at RealClimate, which supports rather than challenges the “consensus”.

Mathew. An issue tree diagram is a very specific structure, in which each response is visually linked to the response it is responding to. One can certainly see that there is a response-response-response tree structure in the Climate.etc debates, but the blog structure is still just a linear string of statements. The limited nesting provides a hint of the tree structure, but that is about all.

As to content, my project would be best if every contributor approved the statements that they originated. That is the beauty, because no one is interpreting anyone without their approval. I have even toyed with an “author pays” model where each contributor pays to have their statements formulated and posted.

But the issue tree has to be built by trained specialists in issue trees. It is a little like doing word problems in algebra, far from trivial.

But absent that approval process, Climate.etc is indeed a great source of content. Just doing tree diagrams of some of the fundamental debates here would be a wonderful first step.

The proposed action is pointless because the result is easily predicted. The Berkeley Earth group performed a ‘red team’ assessment of the surface temperature. The leader of that group was certain that previous work was flawed and that he could do better. But the conclusion was consistent with previous assessments of the surface temperature. One prominent “skeptic” who supported the Berkeley Earth project promised to accept its conclusions. When it didn’t show what he had hoped, he changed his mind. Many others who had not accepted the previous assessments didn’t accept the new one.

If this broader ‘red team’ project happens, the results will be equivalent.

Jeffrey Tarvin: “The proposed action is pointless because the result is easily predicted. . . . One prominent “skeptic” who supported the Berkeley Earth project promised to accept its conclusions.”

The Berkeley Earth group shouldn’t be presumed to have the last valid word on the matter (it was naive of Watts and other skeptics to grant it that), and it wouldn’t have it in the format Koonin proposed. He wrote:

A Red Team of scientists would write a critique of that document and a Blue Team would rebut that critique. Further exchanges of documents would ensue to the point of diminishing returns. A commission would coordinate and moderate the process and then hold hearings to highlight points of agreement and disagreement, as well as steps that might resolve the latter.

Watts and other skeptics who bridled at BEST’s conclusions couldn’t just be dismissed as sore losers in that format; instead, their reasons for withdrawing their consent would be on the table along with all else for all to see. (They accused the BEST leader of not acting in good faith, IIRC.)

And it’s not just skeptics who don’t accept “settled” conclusions. The majority of warmist activists don’t accept the IPCC’s non-alarmist conclusions about hurricanes and other weather extremes, and certain other topics that I can’t recall at the moment.

The notion that BEST performed a bona fide “red team” test can be entertained only by those with no realistic–evidence-based–concept of spatio-temporal temperature variability. Their entire approach is an arcane exercise in primitive ad hoc presumption pursued with customary academic hubris using strongly biased data.

To those who have never examined analytically the discrete approximation of a surface integral of inhomogeneous random processes with various spectral signatures nor have ever seen professionally obtained temperature data at pristine, non-urban locations, the very idea that a relatively unbiased estimate of GAST can be obtained seems, of course, preposterous. How dull is that?

The complex surface statistical models say that it warmed steadily from 1978 to 1997. The satellite statistical models say that it did not warm at all during this period. This is a major unsolved problem. It renders vague generalities like “The earth has been warming” scientifically meaningless. Warming how much, and when, is the unanswered scientific question (and it comes well before why).

If so, we should observe that the IPCC reports:
* include the low observed rates of warming within the range of possibilities
* note the large uncertainties, as Curry testified
* noted (once) the missing hot spot, as Christy testified
* note the lack of increase in extreme weather, as Pielke testified

It’s not so much alternative findings as it is the spin, sometimes by omission, in the IPCC reports.

TE
Trump may cut funding for the IPCC, and may first withdraw from the Paris accords.

Maybe the NAS could run an evaluation, if it could be credibly unbiased. But isn’t its head the former editor of Science who was extremely biased against allowing skeptics’ articles? Her replacement there continues blocking fair, balanced evaluations. Finding a group that is science-based is tough.

Did the APS ever revise its position paper after Dr. Koonin’s forced resignation from the review team?

Dr. Koonin has credibility, but now his reputation is being smeared by the likes of Dr. Mann merely for his attempt at fairness. Even as Mann swears he never called Judy a denier, his written testimony did precisely that.

The NAS is headed by McNutt, so it is completely biased at the top. The APS has not made a revision, which is partly why Koonin went rogue: he ran a process, saw how shaky (or worse) CAGW is, and that was not the answer the APS wanted.

A member of the APS committee said in the press that the arguments presented were considered weak.

They complained they were not being given a fair hearing. They set one up. They had their chance. Their arguments were deemed to be weak.

They just dodge and dance… remain unconvinced… remain unpersuaded.

Give me an ironclad assurance the red team will shut up and go away if they lose. So far the evidence is they lack the honor to do so. That is why Mosher is so reviled. He had the honor to accept the conclusion… widely seen as a betrayal in the red camp.

Much as I would welcome a well-documented, fairly refereed, public process for delineating the keen differences between the evidence offered by CAGW believers and by skeptics, the designation of the latter as the “red team” is deplorable. After all, it’s the former who engage in emotionally charged, alarmist presentations and appeals to (non-existent) authority, in contrast to the comparative calm of skeptics. Alas, too many post-modern appellations stand classic mental associations – and, indeed, hard logic – completely on their head.

> Here’s how it might work: The focus would be a published scientific report meant to inform policy such as the U.N.’s Summary for Policymakers or the U.S. Government’s National Climate Assessment. A Red Team of scientists would write a critique of that document and a Blue Team would rebut that critique.

NIPCC, GWPF, Heartland, et al. already do this on their own dime. For my money to be involved, I don’t want yet another “assessment report” (Dr. Christy’s words) from Team Red — I want science.

Here’s how that might work: The focus would be on producing a CMIP5-compatible model which incorporates “alternative hypotheses” (Dr. Christy’s words). Team Red gets its 5-10% of the annual climate research budget (Dr. Christy’s suggested budget). Either their model beats GHG-forced AOGCMs and wins widespread climate community consensus (and a Nobel Prize) OR it finally sinks in that GHG forcing really is the dominant post-industrial driver of observed warming.

Since I consider it plausible that neither of those ending conditions will ever be met (and considering their high degree of subjectivity), adding a time limit on funding might be appropriate. Ten years seems like a minimum to obtain any meaningful results. I have been trained to multiply all time estimates by (at least) three, so let’s call it thirty years of guaranteed funding, no strings attached. I think that’s rather generous.

Team Blue will not want their budgets impacted by this exercise, so I’d want the 5-10% Red Team budget be in addition to — not in lieu of — other funding. (I’ll pretend for a moment that Trump doesn’t exist … it’s a comforting fantasy and some days I like to indulge myself.)

On the political side of things, Team Green will want something in exchange for giving tax monies to what they’d call a pack of den!ers.

The primary publishing route will be the same for any gummint-funded science agency: refereed journals. It should go without saying that the model codes and outputs will be freely available to the public at time of publication so that anyone (but particularly other qualified and capable modelling teams) can inspect them at will.

***

Echoing Dr. Curry in the OP: If the “alternative hypotheses” are really as strong as they think it are, then the “Red Team” scientists have nothing to lose in such an exercise — the “alternatives” would emerge as strengthened. However, if the “Red Team” scientists are [not] real scientists [but] rather [] contrarian enforcers for the sake of policy advocacy, they will probably feel threatened by such an exercise. It will be interest[ing] to see how they react to such a proposal.

Since the IPCC activity is funded by US taxpayers, then I propose that five to ten percent of the funds be allocated to a group of well-credentialed scientists to produce an assessment that expresses legitimate, alternative hypotheses that have been (in their view) marginalized, misrepresented or ignored in previous IPCC reports (and thus EPA and National Climate Assessments). Such activities are often called “Red Team” reports and are widely used in government and industry. Decisions regarding funding for “Red Teams” should not be placed in the hands of the current “establishment” but in panels populated by credentialed scientists who have experience in examining these issues. Some efforts along this line have arisen from the private sector (i.e. The Non-governmental International Panel on Climate Change at http://nipccreport.org/ and Michaels (2012) ADDENDUM:Global Climate Change Impacts in the United States). I believe policymakers, with the public’s purse, should actively support the assembling all of the information that is vital to addressing this murky and wicked science, since the public will ultimately pay the cost of any legislation alleged to deal with climate.

Topics to be addressed in this “Red Team” assessment, for example, would include (a) evidence for a low climate sensitivity to increasing greenhouse gases, (b) the role and importance of natural, unforced variability, (c) a rigorous and independent evaluation of climate model output, (d) a thorough discussion of uncertainty, (e) a focus on metrics that most directly relate to the rate of accumulation of heat in the climate system, (f) analysis of the many consequences, including benefits, that result from CO2 increases, and (g) the importance that affordable and accessible energy has to human health and welfare.

If my tax money is going to be used for a Red Team, I don’t want 5-10% of the annual climate budget going toward yet another “assessment report”. In my view, it’s past time for scientists arguing for low climate sensitivity to formalize their “alternative hypotheses” in a full-featured AOGCM and publish it.

IOW, I’m willing to put my money where their mouths are so long as what primarily comes out of their mouths is model data. All the better if the results are superior to the current crop of high-sensitivity AOGCMs.

Modelling is the crucible for testing a physical hypothesis – as you and other contrarians go out of your way to (needlessly) remind consensus researchers about every single thing you can find (read: they have found) that does not agree with observation. Therefore, I’m not shocked when contrarians balk at my counter-proposal.

Brandon, your statement about AOGCMs is just silly. It is very easy to get almost any possible value of ECS by changing parameters controlling clouds, convection, and precipitation. Isaac Held has a recent post on this. There is not good enough data to independently constrain the parameters, so the exercise you propose is not meaningful. I would instead focus on a new AOGCM with better methods and ab initio sub-grid models. And we need to get vastly better data too to have a chance.

> It is very easy to get almost any possible value of ECS by changing parameters controlling clouds, convection, and precipitation.

Now might be a good time for me to remind everyone that any value of ECS is moot when ΔF = 0, dpy6629.

> I would instead focus on a new AOGCM with better methods and ab initio sub grid models.

I don’t see how that solves the parameter fitting problem, but skip it because that’s never NOT going to be a modelling problem — if Team Red wants to write one from scratch, it’s their budget. My main condition here is that they apply their allegedly superior hypotheses and practices to producing an actual research product. If funding cuts down on any number of other excuses for why their pet hypothesis hasn’t gained widespread acceptance to date, I’ll consider it money even better spent.

> And we need to get vastly better data too to have a chance.

I’m sure Team Blue would be happy to share the cost of it out of their own respective budgets.

Brandon, my main point is that it should be very easy to get any ECS one wants for a new AOGCM. I do believe, however, that fields like climate modeling have a very strong sense of community and are defensive about their choices. The recent flurry of papers on tuning seems to have at least brought the issue out into the open. I have found in my own personal experience that in the current scientific climate, every modeling group is very anxious to maintain that their model is “good” and agrees with the data. That can result in overlooking some attractive alternatives to the popular ones.

> My main point is that it should be very easy to get any ECS one wants for a new AOGCM.

I agree, dpy6629. However, I’m interested in a lot more than the abstraction of the value of climate sensitivity.

> The recent flurry of papers on tuning seems to have at least brought the issue out into the open.

The first one I read — Mauritsen (2012) I think? yep, that’s the one — was a real eye-opener. This is a good development in my book.

> I have found in my own personal experience that in the current scientific climate, every modeling group is very anxious to maintain that their model is “good” and agrees with the data. That can result in overlooking some attractive alternatives to the popular ones.

Beauty is also in the eye of the beholder. I think a low-ECS GCM produced by a Red Team would be beautiful. Just the fact that they built one would be major forward progress in my book.

Bonus points if it shows a tropical “hot spot” … just so TE would be less likely to bring it up every third post.

“Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

“A vigorous spectrum of interdecadal internal variability presents numerous challenges to our current understanding of the climate. First, it suggests that climate models in general still have difficulty reproducing the magnitude and spatiotemporal patterns of internal variability necessary to capture the observed character of the 20th century climate trajectory. Presumably, this is due primarily to deficiencies in ocean dynamics. Moving toward higher resolution, eddy resolving oceanic models should help reduce this deficiency. Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcings (13). Viewed in this light, the lack of modeled compared to observed interdecadal variability (Fig. 2B) may indicate that current models underestimate climate sensitivity. Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed.” http://www.pnas.org/content/106/38/16120.full

I’d suggest – and this would seem to be the discipline consensus – using one of the CMIP models in a perturbed-physics application. Models are wild as well.

And then picking a solution (from thousands of plausible solutions) on the thick black trend extrapolation I added. Where can I send the bill?

> To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed.

… the nasty implication of plausible climate intransitivity so many Lorenz fans in these parts overlook when falling over themselves to thump on the unpredictability argument.

> And then picking a solution (from thousands of plausible solutions) on the thick black trend extrapolation I added.

The ultimate ideal of an ensemble solution is to bound the possibilities, Robert E. We’ve been doing probabilistic ensemble weather forecasting for some time now. Computational expense is a main limiting factor on doing on the order of a hundred runs in climate GCMs.

Folks who require exact solutions as a condition of utility in this application are demanding the impossible.
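The point about ensembles bounding possibilities, rather than delivering exact solutions, can be illustrated with a stdlib-only toy (the Lorenz-63 system with crude Euler stepping – a sketch, emphatically not a climate model): members started from nearly identical states diverge completely, yet the ensemble as a whole still brackets the attractor.

```python
# one forward-Euler step of the Lorenz-63 equations (classic parameters)
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def final_x(x0, steps=2000):
    # integrate one ensemble member forward and report its final x
    x, y, z = x0, 1.0, 1.05
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

# 20 members whose initial x values differ by parts in a million
ensemble = [final_x(1.0 + 1e-6 * i) for i in range(20)]
spread = max(ensemble) - min(ensemble)
# no single member is "the" answer, but all stay on the bounded attractor
print(sorted(round(v, 1) for v in ensemble))
```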

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.” IPCC TAR 14.2.2.2

If you’d actually bothered to read the Tim Palmer and Julia Slingo paper – you might understand that it was precisely the probabilistic procedure with perturbed physics models that was under discussion. Until there is a probability density function devised – however – there is no rigorous rationale for choosing any of the many plausible solutions of a perturbed physics ensemble. Any of the thousands of solutions of the Rowlands et al. 2012 – http://www.nature.com/ngeo/journal/v5/n4/abs/ngeo1430.html – graph above are equally plausible.

If you applied yourself – you might finally understand that the evolution of the many feasible solutions of any model – after a short lead time – are dominated by nonlinearities in the core set of equations of fluid transport rather than any simulation of physical processes.

So climate is unpredictable – and I have spoken often about dynamic sensitivity in the climate system. Most lately at the end of comments on the last post here.

The ultimate ideal of an ensemble solution is to bound the possibilities, Robert E. We’ve been doing probabilistic ensemble weather forecasting for some time now. Computational expense is a main limiting factor on doing on the order of a hundred runs in climate GCMs.

The opportunistic ensemble procedure is very different – a single solution from each model is chosen on the basis of a ‘plausible’ model formulation and a posteriori solution behaviour – that is, it looks about right. Each single solution is then mapped against the other single solutions and fake statistics are generated over the total. It is seemingly a sham and a disgrace.

“In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread can be explained by analysis of model differences? How much is irreducible imprecision in an AOS…

AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior…

Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.” http://www.pnas.org/content/104/21/8709.full

Folks who require exact solutions as a condition of utility in this application are demanding the impossible.

The real worry is people who play with words rather than applying themselves to understanding the limitations of data and methods.

“Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.” op.cit.

It remains the case that – over the next few decades at least – the best approach is to extrapolate recent warming. It is probably an upper limit estimate.

That’s review material for me. Kudos to you for including the balance of text past “long-term prediction of future climate states is not possible”.

> If you’d actually bothered to read the Tim Palmer and Julia Slingo paper – you might understand that it was precisely the probabilistic procedure with perturbed physics models that was under discussion.

I hadn’t read it, but have now. I won’t pretend to understand all of it, but what I do think I understand is consistent with other similar papers.

What I still question is why you’re talking about “picking a solution (from thousands of plausible solutions)” in a discussion of probabilistic ensemble forecasting.

> Until there is a probability density function devised – however – there is no rigorous rationale for choosing any of the many plausible solutions of a perturbed physics ensemble.

Fine. What then is your rigorous rationale for continuing to change the radiative properties of the atmosphere if it’s not possible to confidently know what the future effects will be?

This really should be the end of the discussion.

> If you applied yourself – you might finally understand that the evolution of the many feasible solutions of any model – after a short lead time – are dominated by nonlinearities in the core set of equations of fluid transport rather than any simulation of physical processes.

The state space is, for all intents and purposes, infinite. We’ll never, ever, be able to search all of it.

> Each single solution is then mapped against other single solutions and a fake statistics generated over the total. It is seemingly a sham and a disgrace.

As I mentioned previously, computation resources are limited and the “ensemble of opportunity” is what we get out of what’s available. The limitations of that approach are all explained, and the “fake” statistics are appropriately caveated. Calling the constraints of reality a “sham and disgrace” *seems* biased to me.

> The real worry is people who play with words rather than applying themselves to understanding the limitations of data and methods.

Bizarre.

> It remains the case that – over the next few decades at least – the best approach is to extrapolate recent warming. It is probably an upper limit estimate.

… he says after lecturing me about the lack of probability density functions and the limitations of data and methods.

That’s review material for me. Kudos to you for including the balance of text past “long-term prediction of future climate states is not possible”.

Just as Tim Palmer and Julia Slingo discussed – they were talking about families of solutions in perturbed physics ensembles – e.g. the Rowlands result is for the HadCM3L model – and not the opportunistic ensemble approach of the IPCC. I think you have failed to make the distinction – as do many people without much of an understanding.

James McWilliams discusses this as well.

“Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” op.cit

“If you’d actually bothered to read the Tim Palmer and Julia Slingo paper – you might understand that it was precisely the probabilistic procedure with perturbed physics models that was under discussion.”

I hadn’t read it, but have now. I won’t pretend to understand all of it, but what I do think I understand is consistent with other similar papers.

What I still question is why you’re talking about “picking a solution (from thousands of plausible solutions)” in a discussion of probabilistic ensemble forecasting.

The chosen solutions come together in an ‘opportunistic ensemble’ – very different to perturbed physics ensembles. Probabilities may theoretically be assigned to the range of feasible solutions – but that is a future development as yet. Again – these are very different approaches.

Until there is a probability density function devised – however – there is no rigorous rationale for choosing any of the many plausible solutions of a perturbed physics ensemble.

Fine. What then is your rigorous rationale for continuing to change the radiative properties of the atmosphere if it’s not possible to confidently know what the future effects will be?

This really should be the end of the discussion.

Not bothered to read my comment at the end of the last post? That should really be the end of the discussion – such as it is – right there.

“If you applied yourself – you might finally understand that the evolution of the many feasible solutions of any model – after a short lead time – is dominated by nonlinearities in the core set of equations of fluid transport rather than by any simulation of physical processes.”

The state space is, for all intents and purposes, infinite. We’ll never, ever, be able to search all of it.
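The point about nonlinearity dominating after a short lead time can be illustrated with a toy system rather than a climate model. Below is a minimal Python sketch: the logistic map in its chaotic regime stands in for the fluid-transport equations, and a tiny parameter nudge stands in for a perturbed-physics choice. Both are illustrative assumptions, not anything drawn from an actual model discussed above.

```python
# Illustrative toy only -- the logistic map standing in for a nonlinear
# fluid model, a 1e-10 parameter nudge standing in for a perturbed-physics
# choice. Neither comes from any real climate model.

def trajectory(r, x0=0.5, steps=60):
    """Iterate x -> r*x*(1-x), returning the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

base = trajectory(3.9)             # chaotic regime
perturbed = trajectory(3.9 + 1e-10)

# Early on the two "plausible solutions" are indistinguishable;
# after a short lead time they differ at order one.
early_gap = abs(base[5] - perturbed[5])
late_gap = max(abs(a - b) for a, b in zip(base[50:], perturbed[50:]))
print(f"gap at step 5: {early_gap:.1e}; max gap after step 50: {late_gap:.1e}")
```

The perturbation is far below any plausible measurement precision, yet the trajectories decorrelate completely – which is the sense in which each member of a perturbed-physics ensemble is only one of many feasible solutions.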

Here’s a schematic of a perturbed physics model – try to understand.

” Each single solution is then mapped against other single solutions and a fake statistics generated over the total. It is seemingly a sham and a disgrace.”

As I mentioned previously, computational resources are limited and the “ensemble of opportunity” is what we get out of what’s available. The limitations of that approach are all explained, and the “fake” statistics are appropriately caveated. Calling the constraints of reality a “sham and disgrace” *seems* biased to me.

You’ll have to explain the caveats for overlaying means and ranges over a set of curves each individually arbitrarily selected on the basis of ‘a posteriori solution behaviour’. Many other wildly divergent but plausible solutions can be derived from the same model.

The argument is that it should not be done at all – and the money put to more fundamental research. Less CMIP and more observations, more model families exploring forcing uncertainties and more process modelling of the fundamental physical processes at which models are very bad. CMIP is an attempt to answer a question for political purposes that may not – as James McWilliams said in a quote I provided – be theoretically answerable. Objectively it seems to me to be a scientific fraud.

It is a rhetorical bother that he then suggests that skeptics build their own models – a rhetoric I hope I have thoroughly deconstructed. People are building models – ones that better represent the uncertainties in forcing, in natural variability and in fundamental physical processes – but they do not reduce to the simple idea of a temperature range from a scientifically invalid opportunistic ensemble.

“The real worry is people who play with words rather than applying themselves to understanding the limitations of data and methods.”

Bizarre.

QED

” It remains the case that – over the next few decades at least – the best approach is to extrapolate recent warming. It is probably an upper limit estimate.”

… he says after lecturing me about the lack of probability density functions and the limitations of data and methods.

I couldn’t make this stuff up.

Not sure what his problem is – extrapolating observations requires no pdf, and while the surface record has extreme limitations, it does integrate the effects of some fundamental climate processes.

I suspect he is an activist with little understanding beyond the level of climate narratives practiced in climate blog echo chambers across the planet – and who descends readily into denigration to divert from his paucity of understanding.

brandonrgates: Folks who require exact solutions as a condition of utility in this application are demanding the impossible.

What might be possible some day (I am hopeful) but has not been demonstrated yet is that solutions sufficiently accurate to reliably inform public policy decisions can be found.

A body of knowledge was used in the planning and construction of the Golden Gate Bridge, no doubt full of useful approximations. Substantially the same body of knowledge was used in the planning and construction of the Tacoma Narrows Bridge, which tore itself apart in moderate winds just months after opening. The mere assertion by experts that a body of knowledge is dependable should not be accepted without the demonstration of a record of sufficiency for the job. In like fashion, the body of knowledge that guided the construction, maintenance and operation of the Space Shuttles proved not to generalize to the case of a cooler-than-usual launch environment.

Approximate solutions abound in science, and lots of pithy aphorisms can be cited. You wouldn’t intentionally start to construct an aircraft without thorough review of how you know that the model for the lift capacity of the wing is sufficiently accurate that the actual wing will lift the actual aircraft with the actual engine installed.

I do know of a case where a pharmaceutical company launched a Phase III trial without first confirming that the model for the concentration of the drug in the target hepatocytes was accurate — after millions of $$$ spent on the trial, it was indeterminate whether the drug was efficacious or not, because post-hoc biopsies showed that the target concentration had not been reached with the doses used.

> What might be possible some day (I am hopeful) but has not been demonstrated yet is that solutions sufficiently accurate to reliably inform public policy decisions can be found.

Then we don’t have solutions sufficiently accurate to reliably inform us that continuing to change the radiative properties of the atmosphere is an ok course of public non-policy, Matt M. All your further (good) examples apply.

What we know with *absolute* certainty is that the climate parameters of the recent past are compatible with an industrialized world supporting 7.125 billion human lives. No forward-looking models or even empirical data required to know this — the hard *proof* (a term I rarely use) is that we’re here and thriving in present conditions.

My position is that we need Teh Modulz to get better quicklike not so much for making “optimal” mitigation decisions, but more because as a species we’ve dragged our feet on transitioning to a low/zero-carbon economy and the policy relevance is adaptation planning for that which we’ve failed to avoid.

This is one of those rare arguments I’d dearly love to lose. Time will tell.

brandonrgates: Then we don’t have solutions sufficiently accurate to reliably inform us that continuing to change the radiative properties of the atmosphere is an ok course of public non-policy, Matt M.

My first comment objected to your assertion that an unrealistic or unrealizable standard of accuracy was being demanded of global climate models, and other models.

Those models are not the only bodies of knowledge. Researchers are accumulating empirical evidence regarding the hypothesized threats of damages to expect if we do not reduce CO2. So far, the joint effects of increased CO2, increased temperature, and increased rainfall are benign or beneficial.

brandonrgates: NIPCC, GWPF, Heartland, et al. already do this on their own dime. For my money to be involved, I don’t want yet another “assessment report” (Dr. Christy’s words) from Team Red — I want science.

…

Since I consider it plausible that neither of those ending conditions will ever be met (and considering their high degree of subjectivity), adding a time limit on funding might be appropriate. Ten years seems like a minimum to obtain any meaningful results. I have been trained to multiply all time estimates by (at least) three, so let’s call it thirty years of guaranteed funding, no strings attached. I think that’s rather generous.

That’s good, imo.

Part of the motivation for my proposal of publishing and annotating the grant proposals is my conjecture (maybe a “belief”) that Team Red already exists among the grant-funded scientists, and they are already receiving funds to do that work. It’s just that the disagreements are papered-over in the published work.

Ok, I understand your argument about grant proposals better now. I know pretty much what most anyone working in the field would say: that there’s plenty of disagreement and debate between teams, and that nothing good enough to publish is getting papered over. Still, that’s not the same as knowing what’s really there, and I wouldn’t be opposed to that being part of a formal review.

I’d still rather see those calling for a Red Team do more actual science.

That would be a good opportunity to effectuate a policy that I recommended: publish the grant proposals that highlight all of the unknowns.

Who chooses the team members? The Blue Team (or Red Team) could simply claim that all the members of the other team were untrustworthy or “not in good faith”, hence Blue Team (or Red Team) citations of published science should be ignored. Schmidt made that claim about Bengtsson upon B’s agreeing to advise GWPF.

With two parties in Congress, and with GWPF and others (former NASA scientists, etc.) engaging the debate all over the place, I do not see the added value of an officially sanctioned additional Red Team. For anyone interested, there is a lively debate ongoing. Publishing grant proposals would not require new teams, just a new policy.

> The Blue Team (or Red Team) could simply claim that all the members of the other team were untrustworthy or “not in good faith”, hence Blue Team (or Red Team) citations of published science should be ignored.

BINGO!

> Publishing grant proposals would not require new teams, just a new policy.

I didn’t read your original proposal, so I may be missing a nuance, Matt M. I’m not sure what that would do other than further stoke an already existing argument — a grumble I hear from time to time is that grants for …

… isn’t being funded by the gatekeepers to everyone’s satisfaction. Ok, fine; I’ll fund that, but it has to be put into a CMIP5-compatible AOGCM. I not only want to read about the evidence it’s based on in refereed literature, I want to see it explain the past better than the current state of the art does. Anything less is hand-waving to me, and that got old over a decade ago.

brandonrgates: I didn’t read your original proposal, so I may be missing a nuance, Matt M. I’m not sure what that would do other than further stoke an already existing argument — a grumble I hear from time to time is that grants for …

The grant proposals review the literature and highlight what is not known, frequently highlighting conflicting claims and results: the response of cloud cover to warming, for example; or extensions of the Romps et al approach to other regions of the global surface (diverse regions of the ocean surface, for example, or the Amazon basin); or what is missing and can be obtained relative to transient climate sensitivity.

Instead of a Red Team vs a Blue Team, you could sort the proposals by Scientific Problem and Conflicting Claims. I do not know who said this first (it might have been John Ziman in “Real Science”), but arguably grant proposals constitute the most thorough and reliable summaries of what is known. A great deal of time and effort by talented people has gone into writing them and improving them, including substantive responses to criticisms. I think they are subjected to more thorough evaluation than papers submitted for publication.

Independent groups of scientists have evaluated AGW before, NAS, RS, AAAS, APS to name a few. When these side with AGW in their statements, the skeptics always cry foul and suggest these are all part of the plot. Why would it be any different with a new panel? Even if the skeptics promise to keep quiet if the result goes against them, it is in their nature not to, and to find fault with any process as they have been for the last few decades. This is not a final solution. They won’t just pack up their blogs and institutes and go home. There will always be an excuse for them.

Brandon, the apparent goal of Karl’s work was to put just enough slope in the temperature graph to make the pause go away. The apparent goal of all data fiddlers is to produce a smooth, monotonic global temperature curve. Just look at how BEST tortures the already-tortured data.

JCH, you fail to distinguish between science and data fiddling and alarmism. Your hysterical response and touching support for non-scientist Mossshhher the Great and Powerful and his self-indulgent drivel takes you nowhere.

Koonin seems unaware that independent groups of scientists have already evaluated AGW many times over. Perhaps their outcome wasn’t to his liking, so he puts them in the same category as climate scientists, as he will with any new group that does it. It is also interesting that he says Congress or the executive branch should put this together, because I am sure he wouldn’t have said that if Dems were in power. The US political branches are the worst people to convene this, but he seems oblivious to the problems of his idea.

Jim D seems unaware that due to institutional bias flowing from vested interest, virtually all the funding goes to those committed to the consensus, and close to nothing for skeptics. Which is of course why an approach like the Red Team is needed. Only those afraid of what might be discovered oppose it.

You seem unaware that Red Teaming by skeptics has been going on for decades. Perhaps we all discounted their efforts because they seem too closely tied to fossil fuel thinktanks and Republican congressmen, so you need to find a whole new group from somewhere. Good luck with that.

You doggedly persist in being unaware of the many orders of magnitude more money precommitted to alarmism. And willfully ignoring that the alarmism is closely tied to vested-interest government funding.

Skeptics need more than money. They need to find a clue. So far, no sign they have one. The earth has warmed one degree C, and the skeptics still don’t know why, having dismissed all science on the topic going back a hundred years.

Skeptics need more than money
Still you persist in thinking that science grows on trees. No – first you get money to do science, then you do it. Alarmism already has over a hundred billion invested in it, for example.

The alarmists’ billions have produced a theory. The skeptics scant resources haven’t. How remiss of them.

To get money, you have to have a clue. They don’t just give money to crackpots, so they have to separate themselves from those by saying something credible in their proposal, and preferably having some real accomplishments in the past. Normally these are low enough barriers.

It was an answer to a question. There’s a lot of crackpot ideas out there. You don’t have to go far on skeptic sites to find them, and I am sure they would all welcome money to take their musings further. Your task is to sort the chaff from the chaff.

To get money you need to fit in with the interests of the funder – toe the consensus line. “Having a clue” in this case means following the bogus certainty of the alarmist groupthink.
Which is why a separate initiative is needed to counter this inbuilt bias.

Nobody is barred from research into ocean variability or solar variability. In fact, there is a lot of forefront work in these areas, even cosmic rays and urban heat island effects, and why skeptics think they can’t get funding for these is beyond me. Perhaps they are just unqualified.

But Jim D, people are not saying that there is no GW. Precious few are even saying that there is no AGW. What people on this website like to point out is that claims for high sensitivity to a doubling of concentrations of CO2 lack convincing evidence to date.

I think there currently is a noticeable shift from saying CO2 can’t be doing all of it to the realization that it really is doing all of it but it won’t be harmful, but I am not really sure where any particular skeptic is on this spectrum. They cover the gamut and have a wide range of ideas. This complete lack of coherence is a major problem for them. No two even seem to agree with each other.

Punksta | April 23, 2017 at 2:52 am |
What Jim D can’t get his schoolboy mindset around, is that saying “we don’t know the answer” yet is a valid position to take.

Jim D | April 23, 2017 at 4:56 am |
That’s argument from ignorance which doesn’t work when there is actually a lot of knowledge on the topic.

Saying “we don’t know the answer yet” is not an “argument from ignorance.” The latter is this: “We can’t think of anything but X that could account for what we observe, so X must be the cause.” This argument from ignorance implicitly dismisses any unknown unknowns (like multi-decadal fluctuations in global windiness) as well as marginalizing known unknowns (like cloudiness), so it’s a wobbly basis for policy.

There are two forms of the argument from ignorance. One used by the skeptics is it can’t be proven so it must be false. The other is it can’t be proven false so it must be true. The scientists would make neither claim, but they would point to the weight of evidence for the truth of a statement and the lack of evidence for its falsity. In science, unlike in mathematics, very little is plain true or false. It is an evolving state of knowledge. You don’t prove things, but you do measure things with more accuracy over time that lend support to theories.

Punksta, even you hang your argument on it not being ‘proven’, so there you go, shifting ground while accusing me of a straw man in the process. When you said it is not proven, I guess you meant it is most likely true and not likely false, which you should have said in the first place.

OK, I said “Warming is dominated by forcing, and forcing is dominated by anthropogenic forcing. Skeptics can counter neither of these.” You mumbled something about not proven. Want another go? More likely true?

Specifically the recent warming. The rising ocean heat content tells us that this is forced and continues to be as long as it rises on decadal scales. The positive imbalance tells us that all the warming so far has not caught up to the forcing change we have driven.

I am not really sure where any particular skeptic is on this spectrum. They cover the gamut and have a wide range of ideas. This complete lack of coherence is a major problem for them. No two even seem to agree with each other.

That is how real science works. People disagree and debate and study and get to better answers. The disagreement is not a problem, it is a major asset. I learn more by talking to people who disagree with me than from people I agree with. Not always, today I learned a lot from a lady who agreed with me. She came to similar conclusions to mine on her own. The consensus people are wrong, their models are wrong, their theory is wrong.

In the skeptic camp, the right theory does most likely exist. We must continue to disagree and debate and try to sort out which of the skeptic theories is most correct.

We skeptics cover the gamut and have a wide range of ideas. That is our strength and that is how we helped elect Trump and a Republican Congress. Unlike the consensus people who have not changed anything or learned anything in multiple decades. It must be really boring to not know more now than you did twenty years ago.

OK, I said “Warming is dominated by forcing, and forcing is dominated by anthropogenic forcing.

You cannot prove this! The only things supporting it are climate models that don’t work correctly and cannot make valid forecasts – models based on flawed theory. You have less than nothing on your side that is really science.

Modern hydrology places nearly all its emphasis on science-as-knowledge, the hypotheses of which are increasingly expressed as physical models, whose predictions are tested by correspondence to quantitative data sets. Though arguably appropriate for applications of theory to engineering and applied science, the associated emphases on truth and degrees of certainty are not optimal for the productive and creative processes that facilitate the fundamental advancement of science as a process of discovery. The latter requires an investigative approach, where the goal is uberty, a kind of fruitfulness of inquiry, in which the abductive mode of inference adds to the much more commonly acknowledged modes of deduction and induction. The resulting world-directed approach to hydrology provides a valuable complement to the prevailing hypothesis- (theory-) directed paradigm.

Real climate science is a process of discovery – much of it has not been that for a very long time.

I don’t see how absolving scientists of any obligation to be open minded or to deal with the evidence fairly is going to help.

Anyway, this is not a problem that needs solving. When the church was fighting it out with early astronomers and artillerists, society chose to adopt the theories of the scientists because they made predictions that were sufficiently useful. The church insisted that things only moved in straight lines or circles, but artillerists discovered you couldn’t hit anything with that model – you had to plot a parabola to do that. The entire hand-wringing project about getting people to “accept science” basically turns that decision-making on its head. When climate science is capable of making predictions that people find useful, they’ll use those predictions. Until then, they won’t – and why should they?

Dear Dr. Judith Curry,
I am a huge fan of your blog. I wrote a blog post myself some time ago discussing various issues regarding climate change. While it is only a preliminary analysis, I would be happy to receive a critical review of the content, which admittedly is shallow compared to the debates taking place here. I am strongly in favour of a red team vs blue team exercise, having suggested a similar thing in my post. http://themaxwelldemon.blogspot.in/2017/01/climate-change-need-for-balance.html

> Based on my experiences with the APS Workshop to review their climate policy statement (which was organized by Steve Koonin), I can think of no one who is better qualified and suited to organize such a Blue Team – Red Team exercise.

DR. KOONIN: All right. I have got to say, I come away, Bill, and thanks for being so clear, that this business is even more uncertain than I thought…

[…]

DR. HELD: I think you are getting the concept of radiative forcing wrong.

DR. KOONIN: Thank you. Please tell me.

DR. HELD: It’s a hypothetical quantity how much the balance would change if you fixed temperature. It’s not showing up on this picture.

DR. KOONIN: So, the temperature would be very different in 1700?

DR. HELD: Colder.

DR. CURRY: Colder, yes.

DR. KOONIN: So then, maybe the second question related to that, the ocean is cold. Isn’t the ocean always warming as a result of, I mean, the long-term average heat flow from the surface of the ocean. Is it always in that direction? I am trying to understand…

DR. HELD: If it was sustained, the ocean would have a big temperature gradient.

DR. KOONIN: And it doesn’t?

DR. HELD: It doesn’t. We can go back to measurements of the deep ocean from the Challenger expedition and changes are in the hundredths of a degree.

DR. KOONIN: But I don’t understand the mixing in the deep ocean.

[…]

DR. HELD: … these things, again, are implicit in various fingerprinting studies…. Where do you expect low-frequency variability to emerge in the coupled climate system? It’s going to emerge at high latitudes because, where you have memory on these multidecadal time scales is in the deep ocean…. … subpolar Atlantic warming over the 50- to 100-year time scale. So, to combine those two things, without any reference to the magnitude of internal variability in the models, it’s pretty inconceivable to me. And we haven’t seen it. I don’t think it’s a – it’s not a mystery to me that no one has produced a model that gives you something that looks like the warming over the last 50 or 100 years from internal variability. I just don’t think you can do it. I haven’t tried to put a number on it. I don’t know if I come up with 95 percent or 90 percent or what. I am not holding my breath. So, I think there is a point of confusion here. [next page] … Does the IPCC actually say that the level, that our confidence has increased from this 90 to 95 percent level? Actually, it doesn’t…. They are both in chapter 10 of AR5. In fact, they are both right next to each other in the summary of chapter 10. And so, for people who read chapter 10, these are two different statements. And it’s discussed in some detail in chapter 10.

DR. CURRY: The issue is what showed up in the summary for policymakers.

So Chief doesn’t ask himself why teh Koonin is being suggested to lead that Blue/Red team crap when he keeps touting contrarian claptrap and shows more concern about what is not science but is important?

I don’t think it’s a – it’s not a mystery to me that no one has produced a model that gives you something that looks like the warming over the last 50 or 100 years from internal variability. I just don’t think you can do it. I haven’t tried to put a number on it. I don’t know if I come up with 95 percent or 90 percent or what. I am not holding my breath. So, I think there is a point of confusion here.

Willard thinks Koonin is an unworthy candidate for having an opinion that policy is important. Willard can’t read the room. For Willard, Koonin is not forthcoming in the details of the Workshop dialogue by not linking to it despite being easily found through a Google search.

So he must not be an honest broker worthy of leading a red/blue team exercise. (An idea that is a bunch of crap anyway, because Willard deems it so.) Willard is pointing out Koonin is not completely dispassionate about the importance of how policy is formed. By pointing out it’s not science, he tips his hand to his own bias. The red/blue team exercise should look only at the science, dispassionately and apart from policy. Of course this is only part of the dialogue in the whole transcript. Like I said, Willard can’t read the room; the transcript is easily accessible to anyone with half a wit of how to use the Internet, no direct link needed.

The red and blue teams are already at work. Its members are scientists like M. England, M. Zelinka, C. Zhou, TR Knutson, RN Jones… just to name a few. They are already publishing. They are slicing away the unknowns to do with natural variability. There’s an Atlantic branch and a Pacific branch.

The Pacific branch is winning. An example is Koonin’s big flub on SLR trends, which somebody is claiming was repeated by the United States Department of Defense, so one has to wonder if they can still hit the broad side of a barn. Speaking for the red team, Koonin claimed the last two decades show a decreased rate of SLR, and cited the Fasullo paper on the imminent acceleration of SLR. He completely blew it. The second decade of the data shows a decrease relative to the first decade, but the last 20 years is on trend, and the last ten years is far above trend… since Koonin seems to think 10-year trends matter, the trend over the last 10 years is over 4 mm/yr.
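The decade-versus-decade trend comparison in the comment above can be made concrete with a toy calculation. The sketch below uses a synthetic sea-level series, an assumed 3 mm/yr base rate with a small acceleration and a 4-year wiggle. These are illustrative numbers, not actual altimetry data, and they show only how a short-window trend can sit well above the full-record trend when the rise is accelerating.

```python
# Sketch with synthetic data (not real altimetry): a sea-level series
# with an assumed 3 mm/yr base rate, a small acceleration term, and an
# interannual wiggle. Illustrative numbers only.
import math

years = [1993 + 0.1 * i for i in range(241)]   # 1993.0 .. 2017.0

def gmsl(t):
    dt = t - 1993
    return 3.0 * dt + 0.04 * dt ** 2 + 5.0 * math.sin(2 * math.pi * dt / 4.0)

series = [gmsl(t) for t in years]

def trend(ts, ys):
    """Ordinary least-squares slope, in mm/yr."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

full = trend(years, series)                      # whole record
last10 = trend(years[-101:], series[-101:])      # final ten years
print(f"full-record trend: {full:.2f} mm/yr, last-10-year trend: {last10:.2f} mm/yr")
```

With an accelerating underlying rise, the 10-year trend exceeds the full-record trend, so a short window being "above trend" is exactly what acceleration looks like, not evidence against it.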

Why? Because the blue Pacific team is right; the red Atlantic team is wrong.

> Willard thinks Koonin is an unworthy candidate for having an opinion that policy is important.

JohnC can’t read my words, so he tries to read my mind. Fat chance. He misconstrues how teh Koonin dodged IsaacH’s point with “that’s not science but it’s important”. Teh Koonin wasn’t alone in dodging IsaacH’s points that day. Of course JohnC also misses the point that linking to the transcript may lead readers to read it, which may in turn lead them to see teh Koonin back then under a different light than the sugarcoated one we’re being served here. An appetizer:

Another source of real frustration is that Dr. Koonin had a real opportunity to listen. To consult experts in many different aspects of climate science. To do a deep dive into the science. To seek understanding of complex scientific issues. He did not make use of this opportunity. His op-Ed is not a deep dive – it is a superficial toe-dip into a shallow puddle, rehashing the same tired memes (the “warming hiatus” points toward fundamental model errors, climate scientists suppress uncertainties, there’s a lack of transparency in the IPCC process, climate always varies naturally, etc.)

-Of course JohnC also misses the point that linking to the transcript may lead readers to read it, which may in turn lead them to see teh Koonin back then under a different light than the sugarcoated one we’re being served here.

Hank Roberts’s cut-and-paste of the day’s events into an order that works for him notwithstanding, perhaps Willard needs to go back to the actual source and read for himself. The ‘dodge’ is a figment of Hank Roberts’s imagination.

> cut and pasted to form a narrative different from how the day’s events actually played out.

Easy for you to say, JohnC. But tough to show. IsaacH’s presentation starts on p. 408. There’s even a link.

On pages 410 and 412 there’s a simple demonstration that should put the issue of natural variability to rest. Yet teh Koonin still touted it in his bout with Gavin. But that’s before we get to IsaacH’s point:

Does the IPCC actually say that the level, that our confidence has increased from this 90 to 95 percent level? Actually, it doesn’t.

This is the point being dodged when teh Koonin parrots the infamous “it’s not science, but it’s important.”

What you’re doing right now is a bit like when Chief feigns ignorance about the meaning of “contrarian” in a thread discussing Red teams. Is that the kind of trick you allow yourself to do because you “read the room”?

90% and 95% confidence are explicitly defined by the IPCC as associated with the terms very likely and totally bloody likely. These both appear in relation to the atmospheric warming attribution – and only the higher – apparently – appears in the AR5 summary and press release. Is this important? Is anything the IPCC does important?

wee willie has an unfortunate habit of misinterpreting the totally bloody obvious. Last week it was some contrarian advocating lying. It was of course nothing of the sort. It was some bloke making a representation to the UK parliament that contrarian views should be allowed in the ideas marketplace even if dickheads think they are wrong.

Isaac Held put ‘natural variability to rest’ by claiming in some silly little narrative that if the atmosphere wasn’t warming from CO2 then the heat would have to come from the oceans, which would therefore be cooling. It is utter – and demonstrable – nonsense.

Finally – wee willie whines that I haven’t taken on – in the spirit it is intended – the mantle of contrarian. He can go get rooted, as we say.

I’m happy to know Willard is looking out for me. We see the same thing differently. As Held points out, the IPCC doesn’t say there was an increase in confidence, yet what was presented in the summary for policymakers was an increase in confidence. Why? What do you think the media picks up on? Held acknowledges the media gets in a tizzy about these things. Willard should ask himself what the reason was for the APS reviewing their climate change statement. The level of confidence in our understanding of the climate, and how it’s communicated to the general public, was one of the big reasons. There is no dodge there; it’s what the whole day was about.

He has an email. It’s easy to find. Mailing an image should be short and sweet.

Go Team!

***

> We see the same thing differently.

That’s a bit more plausible than saying I’m misrepresenting what teh Koonin did, but that’s still not good enough. Here’s the meeting purpose, as stated by teh Koonin himself:

the meeting’s purpose is to explore through expert presentations and discussion the state of climate science, both the consensus view as expressed by several thousand pages of the IPCC AR5 Working Group report that came out three months ago, but also the views of experts who credibly take significant issue with several aspects of the consensus picture.

That’s on page 5.

That teh Koonin raises concerns about “how [climate science is] communicated to the general public” is just par for his stealth-advocate course.

One does not simply invite IsaacH for what is not science but is important nevertheless.

It seems the consensus position is about as clear as wee willie’s perpetually foggy mindset. But it is quite silly to assume that electromagnetic fluxes at TOA do not vary. I doubt if an email will remedy that.

A decline in cloud cover will cause both ocean and atmospheric warming. Quite obviously a large natural variation. Here it is in words because you evidently cannot decipher a graph on your own.

“The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

“With this final correction, the ERBS Nonscanner-observed decadal changes in tropical mean LW, SW, and net radiation between the 1980s and the 1990s now stand at 0.7, 2.1, and 1.4 W/m2, respectively, which are similar to the observed decadal changes in the High-Resolution Infrared Radiometer Sounder (HIRS) Pathfinder OLR and the International Satellite Cloud Climatology Project (ISCCP) version FD record but disagree with the Advanced Very High Resolution Radiometer (AVHRR) Pathfinder ERB record. Furthermore, the observed interannual variability of near-global ERBS WFOV edition3_Rev1 net radiation is found to be remarkably consistent with the latest ocean heat storage record for the overlapping time period of 1993 to 1999. Both datasets show variations of roughly 1.5 W/m2 in planetary net heat balance during the 1990s.” http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3838.1

> One does not simply invite IsaacH for what is not science but is important nevertheless.

One invites him to give his expert opinion, among other things, on the level of uncertainties. On where scientists start disagreeing about the state of knowledge. The state of knowledge then goes to how confidently one can make a statement about climate change and human attribution. Thus the review of the APS statement. After all these years I guess I should not be surprised that all you saw was stealth advocacy.

That ain’t what IsaacH has been trying to make our favorite Synthetic Expert, Chief, understand:

[W]hat would things look like conceivably if I was completely wrong? And let’s go to the extreme limit, that it’s pretty much all internal variability.

Well, first of all, you are talking immediately about a low-climate sensitivity, much lower than the consensus picture, because otherwise the forced response would be there.

If you are saying it’s mostly internal variability, you are talking about a very low-sensitivity system compared to the consensus picture, which means that we are talking about, say observed warming.

So, you would have a huge outgassing of heat from the ocean because that’s what you mean by “low-sensitivity model.” For the same warming, you get a huge output of energy trying to restore that. And for the same forcing, I am assuming the forcing estimate is not uncontroversial. You have heat coming out of the ocean. That’s the bottom line. We don’t see that.

Our Chief could of course seek clarification with IsaacH, but he has to win thread after thread with the same tired squirrels.

Winning against wee willie is not all that hard – he has really not the slightest clue.

“You have heat coming out of the ocean. That’s the bottom line. We don’t see that.”

Instead – as the graph from Takmeng Wong et al (which I quoted) way up above in response to the Held nonsense, and again just above, shows – there was ocean warming as a result of decreased cloud cover in the 1990s, a period of atmospheric warming.

“Figure 1 (middle) shows that these climate mode trend phases indeed behaved anomalously three times during the 20th century, immediately following the synchronization events of the 1910s, 1940s, and 1970s. This combination of the synchronization of these dynamical modes in the climate, followed immediately afterward by significant increase in the fraction of strong trends (coupling) without exception marked shifts in the 20th century climate state. These shifts were accompanied by breaks in the global mean temperature trend with respect to time, presumably associated with either discontinuities in the global radiative budget due to the global reorganization of clouds and water vapor or dramatic changes in the uptake of heat by the deep ocean. Similar behavior has been found in coupled ocean/atmosphere models, indicating such behavior may be a hallmark of terrestrial-like climate systems [Tsonis et al., 2007].” http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/full

The regressive leftist twaddle is predicated on being right on everything. The reverse is true and no amount of disingenuous prevarication will change that.

If the oceans are warming and the atmosphere is also warming, the oceans are not warming the atmosphere, therefore CO2. Moving into this interglacial, the oceans and the atmosphere warmed with low levels of CO2. But the oceans did not warm the atmosphere as they were warming themselves. So something else warmed the atmosphere. Today it’s CO2. Then it was unicorns.

If there is no response to CO2 – Held assumes that the observed atmospheric warming must come from the oceans. The ERBS data says it is overwhelmingly from less cloud. Now you may reject the data – for less than satisfactory reasons in my view – but Held’s assumption is at any rate not necessarily true. So it doesn’t put paid to internal variability as wee willie insisted.

At the glacial/interglacial scale there is a change in TOA radiant flux from albedo change of some 25 W/m2 – against a temperature change of some 5 degrees C. The equilibrium sensitivity to radiative forcing – btw – is thus 0.2 degrees C/W/m2 – or 0.74 degrees C for a CO2 doubling (3.7 W/m2).
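The arithmetic in that claim can be checked in a few lines. A minimal sketch, assuming a ~5 K glacial/interglacial temperature change and the canonical 3.7 W/m2 forcing for a CO2 doubling (both values are my assumptions for illustration):

```python
# Back-of-envelope check of the claimed equilibrium sensitivity.
delta_T = 5.0    # K, assumed glacial/interglacial temperature change
delta_F = 25.0   # W/m2, TOA flux change from albedo (per the comment)
f_2xco2 = 3.7    # W/m2, canonical forcing for a CO2 doubling

lam = delta_T / delta_F   # implied sensitivity, K per W/m2
ecs = lam * f_2xco2       # implied warming for a CO2 doubling, K

print(f"sensitivity: {lam:.2f} K/(W/m2)")  # 0.20
print(f"per doubling: {ecs:.2f} K")        # 0.74
```

The 0.2 and 0.74 figures in the comment follow only if those assumed inputs are accepted.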

Chief wins again, with the same squirrel, oblivious to IsaacH’s point that for the same warming, you get a huge output of energy trying to restore that, “that” being the outgassing of heat from the ocean.

Perhaps IsaacH anticipated what a contrarian would say, for he continues:

We can argue about whether the heat going into the ocean is accelerating or who knows what. But all the estimates are that the ocean is gaining heat over this time period. We have sea level going back for longer periods.

There is no way I can construct a simple model that would give me heat going into the ocean if the response is basically internal. I am willing to discuss that with panel members.

Handwaving to a figure in a paper whose results “do not support the Iris hypothesis” and “are consistent with heating predicted from current state-of-the-art coupled ocean–atmosphere climate models” (p. 4034) might not be the best way to construct that counter-model. Neither is Chief’s armwaving about the complexity of it all.

As long as it makes him win while we post more of that APS transcript, all is well.

“The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s

Rather than prattling on about squirrels and handwaving to complexity – and going off on mad, pejorative tangents – we should remember that the TOA radiant balance is ‘complicated’.

“1. The new results do not support the recent Iris hypothesis (Lindzen et al. 2001, Lin et al. 2004). As tropical and global SST warms in the late 1990s during the 97/98 El Nino, the Iris negative feedback predicts net flux to decrease (ocean cooling) as opposed to the increase (ocean heating) seen in Figure 7.”

There is an iris effect of sorts – as shown in the ERBS IR emissions data. SSTs warm, the atmosphere warms and IR emissions increase. But it is offset by more SW reaching the surface.

“With this final correction, the ERBS Nonscanner-observed decadal changes in tropical mean LW, SW, and net radiation between the 1980s and the 1990s now stand at 0.7, -2.1, and 1.4 Wm-2, respectively, which are similar to the observed decadal changes in the HIRS Pathfinder OLR and the ISCCP FD record; but disagree with the AVHRR Pathfinder ERB record. Furthermore, the observed interannual variability of near-global ERBS WFOV Edition3_Rev1 net radiation is found to be remarkably consistent with the latest ocean heat storage record for the overlapping time period of 1993 to 1999. Both data sets show variations of roughly 1.5 Wm-2 in planetary net heat balance during the 1990s.”

The other component of the satellite record is reflected SW – as shown in the ERBS record. A 1% change in albedo is 3.4 W/m2. Held assumes that albedo doesn’t change. He is wrong – and so, God love him, is wee willie. Wee willie just doesn’t know how or why, but blindly quotes stuff he doesn’t understand and that is not hugely relevant to the point that Held is erroneously making.

“2. The ocean heat storage and net radiation data, while showing relatively large interannual variability, are consistent with heating predicted from current state of the art coupled ocean/atmosphere climate models.”

So we have ocean heating from cloud cover changes associated with ocean and atmospheric circulation – and anthropogenic warming in opportunistic ensembles that have zilch scientific credibility at any rate. The discussion doesn’t and cannot negate the data.

“There is no way I can construct a simple model that would give me heat going into the ocean if the response is basically internal. I am willing to discuss that with panel members.”

The first differential global energy storage equation is –

Δ(work and heat) = Ein – Eout

The change in work and heat in a period is equal to energy in less energy out. Held assumes that work and heat don’t change in a world where CO2 has little effect – that energy is merely redistributed between ocean and atmosphere. In the real world – energy out varies substantially. The data shows how both oceans and atmosphere can warm in a world of low sensitivity to carbon dioxide.
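A minimal sketch of that first-law bookkeeping; the flux values below are hypothetical illustrations, not measurements:

```python
# First-law bookkeeping: the change in stored energy (ocean heat, atmospheric
# heat, work) over a period equals energy in minus energy out at TOA.

def storage_change(e_in, e_out):
    """Net planetary energy storage rate, W/m2."""
    return e_in - e_out

absorbed_sw = 240.8  # W/m2 energy in (hypothetical decade with less cloud)
emitted_lw  = 239.4  # W/m2 energy out (roughly flat OLR, also hypothetical)

imbalance = storage_change(absorbed_sw, emitted_lw)
print(f"net storage rate: {imbalance:+.1f} W/m2")
```

The point of the sketch: if energy out varies (here via cloud-driven absorbed SW), storage can change without any redistribution between ocean and atmosphere.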

And just while my response is caught in the moderation trap – I will add that the current generation of monitoring systems is much more precise. Yet some still seem to be stuck in the era of simple conceptual models based on questionable assumptions.

Chief does not always look for the heat that would be outgassed from the oceans, but when he does, he looks at the top of the atmosphere.

Another win for the Chief.

Since we’re into complexity stuff, let’s jump IsaacH’s second reason why natural variability provides little solace (hint: fingerprints, see also the recent sniggering at them elsewhere on this blog) and quote his bit about Chief’s favorite meme:

So, we have the seasonal cycle and Milankovitch. Those are both changes in our orbit. And that looks pretty linear, too, at least in the sense that you see the periods of the orbital changes coming out.

DR. KOONIN: If you take a given model, one of the ones in the middle of the pack, and start doing the linear study on one or several of the forces, start cranking up the solar constant or the aerosol loading or CO2, does it behave in a linear way?

DR. HELD: Yes.

DR. KOONIN: Over the range of what we are talking about?

DR. HELD: A lot of people looked at that. It’s very linear.

DR. COLLINS: Yes, it is very linear.

DR. HELD: The whole language, the whole forcing-feedback language we look at is assuming that this linear picture is useful. Otherwise, what is forcing and what is feedback? I don’t even know where to start.

DR. COLLINS: At the risk of breaking protocol, may I?

DR. KOONIN: Yes.

DR. COLLINS: You can force the model separately with different forcing agents, look at the separate response, add the response and then add the forcings and compare the total response to the total forcings. That has been done ad nauseam, not a problem.

DR. HELD: The models look pretty linear. The observed seasonal cycle, that looks linear. Even if in the Ice Age times, things look pretty linear. We don’t know that much about it.

So, why should I assume that things are, gee, the anthropogenic CO2 pulse is going to interact in some exotic way with internal modes of variability? Well, it’s conceivable. But I am not convinced. I don’t think that is particularly relevant.

DR. KOONIN: But to come back to my earlier hobbyhorse, that means that the sensitivity you determined to, let’s say, CO2 from the last 30 years, you should use in extrapolating out of next century?

DR. HELD: Yes, I don’t think there is much evidence that there is much secular variation in sensitivity.

No wonder that teh Koonin switched to good ol’ “modulz are stoopid” line in his recent op-ed.

Readers should beware that most of the lukewarm business model is based on the bluff that readers won’t read what they cite.

He obviously didn’t read the Loeb et al reference – or just didn’t get it – or repeats this as a subtle discrediting. But ‘the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ The ‘complications’ have no part in Held’s simple model.

Chief does not always look for the heat that would be outgassed from the oceans, but when he does, he looks at the top of the atmosphere.

Another win for the Chief.

Heat is not outgassed from the oceans. It is emitted as IR – and ultimately ‘outgassed’ to space, where all energy flux is electromagnetic. So it can be balanced in accordance with the 1st law of thermodynamics.

> Since we’re into complexity stuff, let’s jump IsaacH’s second reason why natural variability provides little solace (hint: fingerprints, see also the recent sniggering at them elsewhere on this blog) and quote his bit about Chief’s favorite meme:

So, we have the seasonal cycle and Milankovitch. Those are both changes in our orbit. And that looks pretty linear, too, at least in the sense that you see the periods of the orbital changes coming out.

In the climate sense you get large variability in annual variability – and the rapid 100,000-year changes are due to ice sheet feedbacks. Nothing is linear – especially models. The core set of equations of fluid motion is non-linear.

Models are not good at decadal to longer variability either. Even if we could get around the exponential divergence problem of sensitivity to initial conditions.

Interdecadal 20th century temperature deviations, such as the accelerated observed 1910–1940 warming that has been attributed to an unverifiable increase in solar irradiance (4, 7, 19, 20), appear to instead be due to natural variability. The same is true for the observed mid-40s to mid-70s cooling, previously attributed to enhanced sulfate aerosol activity (4, 6, 7, 12). Finally, a fraction of the post-1970s warming also appears to be attributable to natural variability…

A vigorous spectrum of interdecadal internal variability presents numerous challenges to our current understanding of the climate. First, it suggests that climate models in general still have difficulty reproducing the magnitude and spatiotemporal patterns of internal variability necessary to capture the observed character of the 20th century climate trajectory. Presumably, this is due primarily to deficiencies in ocean dynamics. Moving toward higher resolution, eddy resolving oceanic models should help reduce this deficiency. Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcings (13). Viewed in this light, the lack of modeled compared to observed interdecadal variability (Fig. 2B) may indicate that current models underestimate climate sensitivity. Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed. http://www.pnas.org/content/106/38/16120.full

Nothing in wee willie’s quotes means much at all. It is all lamentably superficial – which is wee willie’s habitual mode.

Readers should beware that most of the lukewarm business model is based on the bluff that readers won’t read what they cite.

I expect the wee willies of the blogosphere to scan for a form of words they can use in an argument with no scientific context.

All this because Held discussed a crap conceptual model on the spur of the moment? One may, as I say, quibble unreasonably about the data, but the interpretation is very simple.

In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system. IPCC AR4 3.4.4.1

> Heat is not outgassed from the oceans. It is emitted as IR – and ultimately ‘outgassed’ to space, where all energy flux is electromagnetic. So it can be balanced in accordance with the 1st law of thermodynamics.

Therefore, looking at the top of the atmosphere to find the ocean’s missing heat makes sense.

With the majority of Western academia buying into the charade of a 97% consensus, we’re dealing with an underlying apocalyptic vision of modernity that a rational Red/Blue Team approach to science simply does not and cannot address. Science cannot fix itself. The closest thing we have to a savior is a 250-year-old contract that gave power in the last election to disenfranchised voters in fly-over America – i.e., the politics of the electoral college trumped the hoax and scare tactics of global warming alarmism.

There is a desirable, even necessary step before the red team exercise starts.
This step requires formal, proper error estimates to be calculated for a selection of key topics. For example, the true errors of estimates of the temperature/time series used to estimate global temperatures, land and ocean. Another, the true error of the TOA radiation balance.
If this step is not taken, it will not be possible to evaluate the IPCC subjective confidence scheme. Time would be wasted if, whenever present error estimates gave cause for doubt, the red team had to do its own calculations.
(In my own experience, I waste a lot of time evaluating if available error estimates are correct. Overall, many are horribly over-optimistic. Best to recalculate them all to remove this time impediment for the red team.)
Geoff.
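One conventional form such “formal, proper error estimates” could take is combining independent uncertainties in quadrature. A minimal sketch with an entirely hypothetical error budget (the component names and values are illustrative, not from any actual temperature product):

```python
import math

def combined_sigma(*sigmas):
    """Combine independent 1-sigma uncertainties in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical error budget for a global-mean temperature estimate, in K:
measurement = 0.03  # instrument error
coverage    = 0.04  # incomplete spatial sampling
adjustment  = 0.05  # homogenization / bias corrections

total = combined_sigma(measurement, coverage, adjustment)
print(f"combined 1-sigma uncertainty: {total:.3f} K")
```

Quadrature is valid only if the components really are independent; correlated errors (the usual suspect in over-optimistic estimates) require the full covariance treatment.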

So there’s gonna be a big ‘march for science’.
One speaker after the other will say …
“the science is settled”
“the skeptics are the paid shills for the fossil fuel industry”
“DJT is anti-science”
(it will be all about Trump anyway)
“all who disagree with us are uneducated heathens”
(true in my case)
“Resist” …
Not much room for discussion.

All that’s needed for this debate to happen is for Trump to say he wants to see it occur before he makes a decision on whether to pull out of the Paris accord and to challenge the Social Cost of Carbon finding.

Warm periods, 3000 years ago, 2000 years ago, 1000 years ago, this modern period, are times when ice shelves are minimum, times when sea ice is minimum, are times when ocean effect snowfall places more ice on land than melts every year.

Cold periods, 2500 years ago, 1500 years ago, 500 years ago were times when ice shelves are maximum, times when sea ice is maximum, are times when ocean effect snowfall places ice on floating ice and land ice flow continues and depletes the ice volume until ice extent decreases and the ice cycle repeats.

This is simple stuff, fully supported by ice core data.

Someday, someone with credentials will recognize this and it will go viral.

Warm times are normal, natural and necessary; cold times are normal, natural and necessary. The upper temperature bounds are limited because it snows too much when oceans are warm and thawed. The lower temperature bounds are limited because it never snows enough when oceans are cold and frozen. This is simple stuff that most people can understand. This also works with sea level. I have good luck with most people who have not sworn to support the alarmist viewpoint. The people who have already made up their minds mostly do not ever change.

Dr. Curry: It might be useful for you and Dr. Koonin to have a well-prepared answer for why the standard NAS report writing process isn’t suitable or needs to be replaced by a red team-blue team approach.

The IPCC’s Lead Authors have been chosen by the previous report’s Lead Authors all the way back to when activists created the IPCC. They and the politicians who control the SPM believe in reporting a “consensus” that will motivate policymakers to act. The NAS process supposedly doesn’t have these disadvantages. In theory, NAS supposedly chooses authors with a range of views.

There were lots of shenanigans with the North report on the MWP.

Spencer and Christy were authors on at least two reports (one from the NAS) on the putative tropical hot-spot: Reconciling Observations of Global Temperature Change (2000). With Santer, Hansen, Wentz (RSS), Peterson, and Trenberth on the other side, today this could have been a classic red team-blue team event. But this was the panel’s conclusion:

In the opinion of the panel, the warming trend in global-mean surface temperature observations during the past 20 years is undoubtedly real and is substantially greater than the average rate of warming during the twentieth century. The disparity between surface and upper air trends in no way invalidates the conclusion that surface temperature has been rising. The recent corrections in the MSU processing algorithms (referred to above) bring the global temperature trend derived from the satellite data into slightly closer alignment with surface temperature trends, but a substantial disparity remains. The various kinds of evidence examined by the panel suggest that the troposphere actually may have warmed much less rapidly than the surface from 1979 into the late 1990s, due both to natural causes (e.g., the sequence of volcanic eruptions that occurred within this particular 20-year period) and human activities (e.g., the cooling of the upper part of the troposphere resulting from ozone depletion in the stratosphere). Regardless of whether the disparity is real, the panel cautions that temperature trends based on data for such short periods of record, with arbitrary start and end points, are not necessarily indicative of the long-term behavior of the climate system.

Previously reported discrepancies between the amount of warming near the surface and higher in the atmosphere have been used to challenge the reliability of climate models and the reality of human-induced global warming. Specifically, surface data showed substantial global-average warming, while early versions of satellite and radiosonde data showed little or no warming above the surface. This significant discrepancy no longer exists because errors in the satellite and radiosonde data have been identified and corrected. New data sets have also been developed that do not show such discrepancies.

This Synthesis and Assessment Product is an important revision to the conclusions of earlier reports from the U.S. National Research Council and the Intergovernmental Panel on Climate Change. For recent decades, all current atmospheric data sets now show global-average warming that is similar to the surface warming. While these data are consistent with the results from climate models at the global scale, discrepancies in the tropics remain to be resolved. Nevertheless, the most recent observational and model evidence has increased confidence in our understanding of observed climatic changes and their causes.

“In the opinion of the panel, the warming trend in global-mean surface temperature observations during the past 20 years is undoubtedly real and is substantially greater than the average rate of warming during the twentieth century”

The 1910 to 1940 warming trend was the same as the mentioned “past 20 years” – and for a 30-year duration, not 20.

The failure to mention this fact is proof of collective lying by climate scientists. J Curry mentions this on every one of her Congressional testimonies. She is honest.

Panel Chairman John M. Wallace, director of the University of Washington’s environment program, emphasized that the group was not asked to address the cause of the rising temperatures or whether human influences, such as the burning of fossil fuels or greater urbanization, might be involved.

I’ve certainly seen news stories of prominent GOP members calling for major cuts in global warming/climate change research, but no one vocally supporting the areas that Dr. Curry believes need greater research.

Like on other policy issues — it appears that the GOP is using Dr. Curry’s voice and testimony as justification to “Repeal” (dismantle) Federal Climate Research, but having no interest or plan in developing “Replace” policies.

Well, if the last few years of studies on internal variability – and there have been a lot of them – are any indication, I expect higher estimates of climate sensitivity and a broad acceptance that there will be a lot of warming in the 21st century. In fact, I somewhat doubt the buffoons in Congress can head this off.

It’s too late… from Koonin:

The public is largely unaware of the intense debates within climate science. At a recent national laboratory meeting, I observed more than 100 active government and university researchers challenge one another as they strove to separate human impacts from the climate’s natural variability. At issue were not nuances but fundamental aspects of our understanding, such as the apparent—and unexpected—slowing of global sea level rise over the past two decades.

What national laboratory meeting is he talking about? Over 100 participants? I suspect GFDL.

Perhaps the first Red/Blue investigation would be whether the taxpayers got their money’s worth in purchasing 1) credible and 2) usable climate science. Progress on ECS constraint and determination of tropical cyclone frequency and intensity trend come to mind.

If it is found that Dr. Curry is correct and we’d do better to shift funding from model analysis to direct observation, I would support that – and I think most Red Team supporters would – if endeavors included a concurrent red team investigation (especially if North American pine tree rings are involved).

I would also be fine with holding back 90% of funding in lieu of diverting it to nuclear fission, fusion, solar voltaic and battery technology. We get double the bang for buck in gaining energy sources and reducing AGW.

So two misrepresentations… the rate of sea level rise over the last two decades is not lower, and it was not really a national laboratory meeting because it was the circus in New Mexico, the clowns being played by cranks and crackpots.

Red team loses every time, so bring it on. Complete waste of money, but that’s about the only way this bunch of flubbers can improve the economy… make work for senile scientists.

Judith made a request to keep NASA’s earth-pointing satellites going, but it seemed a bit feeble and fell on deaf ears, so I don’t expect much. Nor is there a recourse for follow-up because the thinktanks see no need for more data.

ECS constraint is a trillion-dollar question. We need to avoid paying fortune tellers though. The development of red team protocols would provide that confidence. That is the proper way to silence dissent, not by marching.

A march for science, like the one today, is not a march for science: it is a global protest against some politicians and a global request for funding.
I am sure that very few of those “marching” are capable of understanding this point of view against mainstream climate science: drive.google.com/file/d/0B4r_7eooq1u2ZlIwZFcxQ2ZWaHc
And I am completely sure that no one of those marching is capable of fully understanding astrophysics and cosmology as explained in: drive.google.com/file/d/0B4r_7eooq1u2OHBKOFN6dEdHRWc.
And I can keep piling up more and more pdfs that no one marching would understand. Then why do you keep marching? Why don’t you first try to study a little bit?
By the way, this idea of a red-blue public discussion is good for science, but awful for UN political interests.

As I mentioned above, the debate should be on whether we want to stabilize below 500 ppm or be at 700 ppm and rising at the end of the century. This is the bottom line. Consensus says 1 C warmer per 1500 GtCO2 emitted. There is a trade-off to think about here, and it is not a binary question.

These are the questions you should be researching yourself. As it is, we will blow through 500 ppm within four decades on the way to 600-700 ppm another four decades later, and you have to think whether you want that, or whether you want fossil fuel energy sources to be replaced in that timeframe to stabilize the climate nearer 500 ppm than 700 ppm. These would be the choices, and the adults in the room have already made them.

The current rate is 100 ppm per 40 years, so do the math. I didn’t say 2050, although only a small acceleration would be required for that, and population growth and development are both accelerating forces.
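“Do the math” can be sketched as a linear extrapolation; the ~406 ppm starting concentration for 2017 is an assumption (not stated in the comment), and the rate is the comment’s claimed 100 ppm per 40 years:

```python
# Linear extrapolation of atmospheric CO2 at the comment's stated rate.
start_year = 2017
start_ppm  = 406.0          # assumed 2017 concentration
rate       = 100.0 / 40.0   # = 2.5 ppm per year, per the comment

def year_reaching(target_ppm):
    """Year the target concentration is reached at the assumed linear rate."""
    return start_year + (target_ppm - start_ppm) / rate

print(f"500 ppm around {year_reaching(500):.0f}")  # ~2055
print(f"700 ppm around {year_reaching(700):.0f}")  # ~2135
```

At the stated constant rate, 500 ppm falls roughly four decades out and 600 ppm roughly four decades after that; any acceleration from population growth and development would pull those dates in.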

Jim D | April 22, 2017 at 1:15 pm | Reply
Where in the playbook of “systems thinking” is clinging on to wrongheaded beliefs despite all the evidence?

But ARE they wrongheaded? A debate (among scientists, moderated by scientists, on all facets of the matter, over an extended period) is the way to find out. Debates are what your side has avoided. That’s suspicious; as Tom Paine wrote, “It is error only, and not truth, that shrinks from inquiry.”

Debates to the public cannot express the science properly. Debates among scientists occur continuously in publications and conferences; that is nothing new. The consensus comes from such an ongoing debate. Warming is dominated by forcing, and forcing is dominated by anthropogenic forcing. Skeptics can counter neither of these.

Jim D | April 22, 2017 at 5:05 pm |
These are the questions you should be researching yourself. As it is, we will blow through 500 ppm within four decades on the way to 600-700 ppm another four decades later, and you have to think whether you want that, or whether you want fossil fuel energy sources to be replaced in that timeframe to stabilize the climate nearer 500 ppm than 700 ppm. These would be the choices, and the adults in the room have already made them.

It doesn’t matter what we Westerners want, because it is not our decisions that will affect the CO2 level, but the decisions of those in developing countries. They have made it clear that they will not abate their emissions unless they are given trillions, which won’t happen.

Sure. This is why we need international agreements. Developing countries are not the problem. Their per capita emissions are already below the 2030’s targets of developed countries, and they can benefit from new energy and fuel sources to keep them low. These agreements help everyone with forward thinking.

Jim D | April 22, 2017 at 1:51 pm |
As I mentioned above, the debate should be on whether we want to stabilize below 500 ppm or be at 700 ppm and rising at the end of the century. This is the bottom line. Consensus says 1 C warmer per 1500 GtCO2 emitted.
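Taking that consensus figure at face value, the implied warming is simple multiplication. The ~40 GtCO2/yr global emission rate below is my own rough assumption for illustration:

```python
# Warming implied by "1 C per 1500 GtCO2", a TCRE-style ratio.
# Assumption: global emissions hold at roughly 40 GtCO2/yr through 2100.
TCRE = 1.0 / 1500.0        # deg C per GtCO2 emitted
ANNUAL_GTCO2 = 40.0        # rough current global total (assumption)

years = 2100 - 2017
cumulative = ANNUAL_GTCO2 * years   # GtCO2 emitted by 2100
warming = cumulative * TCRE         # additional warming, deg C

print(cumulative)           # 3320.0 GtCO2
print(round(warming, 2))    # 2.21 C further warming at constant emissions
```

Constant emissions give roughly 2 C of further warming by 2100 on this ratio; growing emissions give more, which is where the 500-vs-700 ppm choice bites.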

The bottom line is that the populace can’t be just told to cough up trillions without having the case for doing so fully aired and cross-examined in a debate. “Science” can’t just declare victory and demand to be obeyed–which is what today’s march for science is in effect claiming it has the right to do.

“Science” has already demonstrated itself manipulable by its nutrition debacle, and by other instances of its headstrong prejudice and partiality. It cannot be a judge in its own case. It does not really operate a neutral marketplace of ideas; certain departments of science can “go rogue.”

When you bring money into it, you conflate science and policy, and that is exactly the problem here. Don’t just see dollar signs when someone says 1 C per 1500 GtCO2. That’s the whole problem of extrapolation according to political values. Evaluating the impacts of reduced emissions versus temperature increases is where it starts to become a more complex problem.

When oceans warm, sea ice melts, ice shelves break off and melt, oceans lap up to the shores of land and ocean effect snow falls on land and replenishes the ice on land. This is a normal, natural and necessary part of the natural climate cycle. Man-made CO2 did not cause this cycle, it did not change this cycle very much, we did not cause it, we cannot stop it. Look at actual ice core data on my presentation and let me know what you think. The king has no clothes on. The climate alarmism is based on less than no clothes on.

It is precisely the claim that the consensus is proven beyond reasonable doubt by evidence, that is bogus. We’ll need a lot more evidence from better technology to make an honest claim one way or the other.

Jim D | April 23, 2017 at 4:09 am |
Debates to the public cannot express the science properly.

Not entirely. But they can pose embarrassing questions, point to data fudging and to excessive, failed alarmist predictions, expose collective misbehavior by the consensus and advocates who are extremists in other fields, describe a funding imbalance, lay out cases of attempted silencing, describe the existence of important unknowns like global clouds and windiness, etc.

Debates among scientists occur continuously in publications and at conferences, so an ongoing debate would be nothing new. The consensus comes from such an ongoing debate.

Journals focus on findings and review papers. Debates are considered unseemly and are rare–including at conferences.

Warming is dominated by forcing, and forcing is dominated by anthropogenic forcing.

Once you concede AGW, meaning that all the warming and more is already due to emissions, then the follow-on debate can be had. Skeptics are edging toward that, but have not yet accepted AGW or the models that support it, which is a pity. The follow-on debate is what level of CO2 and warming we want at 2100. 700 ppm and its consequent 4 C is a level most would say is unacceptable, but it is also business-as-usual. Given that BAU is unacceptable, what then? International agreements. Sadly for the skeptics, those debates have already been had while they were denying CO2 could do much, and the world is moving on with action. Perhaps they can yet contribute, but they have to catch up first.

Jim D | April 23, 2017 at 7:09 am |
Once you concede AGW, meaning that all the warming and more is already due to emissions, then the follow-on debate can be had. Skeptics are edging toward that, but have not yet accepted AGW or the models that support it, which is a pity.

Skeptics have conceded the existence of AGW, contrary to your claim, although not the validity of the poorly performing models. The models only look less awful at the moment thanks to the recent El Niño. The real debate is about the level of additional warming from the IPCC’s postulated positive feedbacks, which is a weak spot in the warmist narrative.

The follow-on debate is what level of CO2 and warming do we want at 2100.

What we (in the West) want is irrelevant. The level is pre-ordained to go up thanks to rising emissions from the developing world. (There are hundreds of new coal power plants planned there, and their emissions are already rising steadily.)

We come back to encouraging non-fossil fuel technologies, and in the next few decades these industries will take off, especially with the help of storage. Governments need to be forward thinking. Fossil fuels are so 20th century and combustion engines will go the way of the horse buggy. Future First.

Jim D | April 23, 2017 at 8:12 am |
You only have to look back to the 1960’s to see how far technology can advance in 50 years. It is realistic to expect more of the same especially with the motivation of stabilizing the climate.

OK, I’ll grant that by 2067 there’ll be advances in solar panels. But that won’t matter much, because the panels are only 25% of the cost of a solar installation (which includes maintenance, backup and replacement). Even if the panels were free, solar would still be costly, especially outside of the desert zone.

I’m skeptical of any meaningful improvements in wind turbines—there’s not much room for improvement there—it’s basic physics. Big battery improvements would help make electric cars more popular, but they’d still face the problems of range and charging time. (And access overnight to charging cables in cities, and the need to double the electric distribution grid, and the problem of an explosive short.)

Storage is a gamechanger for wind and solar. Not just chemical battery storage but thermal, mechanical or gravitational storage. Solar energy can also be used to make fuels, like synthetic fuels or hydrogen. There are many ways this could go, all clean.

Jim D | April 23, 2017 at 5:21 pm |
Storage is a gamechanger for wind and solar. Not just chemical battery storage but thermal, mechanical or gravitational storage. Solar energy can also be used to make fuels, like synthetic fuels or hydrogen. There are many ways this could go, all clean.

But the greens have been saying that for decades. You’d think by now they’d have something more than the same old promises. Anyway, I’ve read skeptical arguments that seemed good to me against the practicality of non-chemical storage (although I’m game for funding some research in the area), and also for making fuel. (IIRC, the energy required for substantial fuel-making is high and collecting the fuel from many locations would be expensive.) But a few experiments will settle the matter.

Jim D: Sadly for the skeptics those debates have already been had while they were denying CO2 could do much, and the world is moving on with action. Perhaps they can yet contribute, but they have to catch up first.

So my contributions to fuel cell technology through innovations in electroplating do not count because I’m skeptical of CAGW? Jim, you are mixing issues: skeptics are not anti-science or anti-energy-technology; they are simply skeptical of the politicization of an issue with manufactured scientific backing. You are skeptical of science funded by private enterprise. So you are a skeptic too. Who does not want cheap alternative energy? (Maybe Exxon, OK.)

Once you concede AGW, meaning that all the warming and more is already due to emissions, then the follow-on debate can be had.

Once you concede that the Roman and Medieval warm periods were not caused by AGW, you must question the whole alarmist agenda. Then the follow-on debate can be had. The emissions we have had have not pushed us outside the bounds of past warm periods. Any so-called science that says we are outside the bounds of past warm periods is false without support from actual data. The Mann hockey stick data is not even used in the more recent IPCC reports; that junk is not accepted even in the alarmist camp, because it was judged totally wrong.

You pick stuff to support your alarmist view that really goes against you. You will most likely never figure out that you were wrong, but you prove it, time and time again.

Jim D | April 23, 2017 at 4:19 am |
Sure. This is why we need international agreements. Developing countries are not the problem. Their per capita emissions are already below the 2030’s targets of developed countries, . . .

But developing countries will be the problem. Their emissions are projected to double, raising the CO2 level by 100 or 200 ppm in time. (So it doesn’t matter what reductions developed countries make.) Their low per capita emissions are irrelevant, because the question is the global level of CO2 in the atmosphere.

. . . and they can benefit from new energy and fuel sources to keep them low. These agreements help everyone with forward thinking.

Sure they can cut their emissions if some of their wind and solar projects are funded for them. But the cost of funding enough of them to make a difference is too high for the West to support. If forward-thinking international agreements say that the West must fund them, the populace in the West will revolt and trash-can them. It’s starting to happen already.

Affordable energy for developing countries will require the technological advances of the next few decades. Coal has already priced itself out of the market even in developed countries, so would be a non-starter for developing countries that can’t afford it now, and will see better alternatives become affordable first. The idea of centralized generation and large transmission networks is not going to scale to developing countries that would more likely have distributed local systems. The market is there for the developed countries to sell their technology, and that would make it a gain rather than a loss for them, especially if the developed countries encourage their advanced energy industries to be competitive globally. Similar rewards await automobile markets and green building technology.

Roger, it appears that you don’t think energy technology can or should advance, and that no one should even try to encourage it. China is now canceling coal plants, and global coal markets are failing for the good reasons of inefficiency and pollution. Other fossil fuels are self-limiting by virtue of limited resources and increasingly harder extraction, and will also price themselves out of the market for developing countries. There is no alternative for them but alternatives.

Jim D | April 23, 2017 at 8:07 am |
Roger, it appears that you don’t think energy technology can or should advance, and that no one should even try to encourage it.

On the contrary, a few hours ago I commented here that I favored advanced nuclear and research funding for compact fusion. I didn’t say anything about not funding research on renewables. I said their deployment should wait until they are cost-effective.

China is now canceling coal plants, and global coal markets are failing for the good reasons of inefficiency and pollution.

China is probably just cutting back on its aggressive expansion schedule for building coal plants. Here are six WUWT articles on coal’s resilience and rebound, including in China:

Other fossil fuels are self-limiting by virtue of limited resources and increasingly harder extraction, and will also price themselves out of the market for developing countries. There is no alternative for them but alternatives.

Fossil fuels are needed for transportation. If lower emissions are wanted, trucks and larger cars can be powered by natural gas. (That’s what Obama should have funded, not bet everything on electric cars. Then he’d have had half a loaf, not no bread.) If oil gets scarce and expensive, cars will run on synthetic gas derived from coal, as they did in Germany in WWII, and in S. Africa now.

Jim D | April 23, 2017 at 7:20 am |
Coal has already priced itself out of the market even in developed countries, so would be a non-starter for developing countries that can’t afford it now, and will see better alternatives become affordable first. The idea of centralized generation and large transmission networks is not going to scale to developing countries that would more likely have distributed local systems.

For developing countries, targets are more fairly set in terms of reduced emission intensity (emissions/GDP) rather than emissions per se. India is taking part that way. They expect power needs to triple by 2030 and GDP to go up four times, according to their INDC. Their target intensity reduction is 20-25%. GDP growth must be allowed for in their targets, but better carbon efficiency is still the goal. http://www4.unfccc.int/submissions/INDC/Published%20Documents/India/1/INDIA%20INDC%20TO%20UNFCCC.pdf

Jim D | April 23, 2017 at 4:27 am |
When you bring money into it, you conflate science and policy, and that is exactly the problem here. Don’t just see dollar signs when someone says 1 C per 1500 GtCO2. That’s the whole problem of extrapolation according to political values.

The conflation of science and policy is already in play, thanks to the Paris treaty’s policy demands for transfer payments and for a switch everywhere to costly new energy sources. If money were no object, there’d be no debate. If the solution demanded is unaffordable, and if what is affordable is very inadequate, then spending anything is a pointless gesture.

Evaluating the impacts of reduced emissions versus temperature increases is where it starts to become a more complex problem.

Interesting maybe to biased and unavoidably ignorant researchers with a poor track record.

You have a solution in mind, perhaps? Or do you advocate BAU with increasing emission rates, and not encouraging reduction targets at all? It’s very easy to just say ‘no’ and then not have your own answer, or to blindly accept the way things are going without evaluating it. Decision makers don’t have that luxury.

The question should be: is human-released CO2 negatively impacting the overall climate, or will it? And can science reliably determine where and when the climate is likely to change as a result of human-released CO2?

Imo, the truth is we do not yet have the means to know, but there seems to be very little reliable evidence that human-released CO2 is worsening the climate for the USA or the world. Show me the science (a model) that can accurately forecast changes in rainfall patterns for the next few decades. The climate will continue to change, in some places for the better, in others for the worse.

Future generations won’t have to worry about CO2 because they will choke to death on a 20 Trillion Dollar debt that grows, year to year, like the Alien plant in the “Little Shop of Horrors” story. “Feed me, FEED me, FEED ME!”

The underlying horror of this article is that it advocates for more government that runs up our debt while bringing everything to an inert standstill. A committee of judges that adjudicates the written/oral jousting matches between believers and skeptics? Really?

By all means, let’s not solve real problems, let’s make up hypothetical ones that we can pretend to solve by borrowing more money on the back of future taxpayers to do research and debate in committee. I am reminded of the old quote, largely attributed to Mark Twain, “God made an idiot for practice and then created the Committee.”

Government has grown too large to control, let alone afford, and it’s slowly crushing everything in its path, most certainly science, truth, knowledge, and our children’s future.

I honestly get worried that a red team would somehow fail to get the sizable skeptic part of Holocene paleoclimatology on board. This is because the climate war is so atmo/hydrosphere-centered, while the paleo community, which dirties its hands with geo-archives and their proxies, has largely shunned the war, yet owns the crucial longer-term solar-climate coupling issue.

Fasullo et al. 2016 suggest that the rate of sea level rise in the 1990s was accelerated by the recovery from Mt Pinatubo cooling, that the reduced 2002 to 2012 rate was more representative of the background rate, and that acceleration was at hand, with projections of temperature rise and ice mass loss.

I was actually taught to use a slide rule at university. Not sure I can remember now – but the habits of engineering approximation, dimensional analysis, reality checks, limit states and safety factors in uncertainty remain. What is lacking in climate science is the back of the envelope estimates known as a Fermi problem. The climate blogosphere is even worse – it is all wild narrative in support of cultural values.

There is a body of climate science built on the projections of model opportunistic ensembles. It is built on sand; it has no scientific validity. The rate of warming in the surface record in the last half of the 20th century is 0.09 degrees C/decade. As the Paris commitments culminate in a 3.7 billion tonne increase in energy emissions to 2030, there is little reason to expect an imminent increase in that average rate.

It is a limiting rate – there is science that suggests that most early century warming was natural – all of the mid-century cooling and – intriguingly – most of the late century warming. The natural warming last century will be lost this century.

The graph shows longwave emissions increasing over the period due to a warming atmosphere and a net decrease in cloud cover – and shortwave reflectance decreasing in the same period due to the decrease in cloud cover. The net changes (warming up by convention) are sufficient to account for all ocean warming in the 1990’s. This is data that is dismissed because it doesn’t support the narrative.

Sea level rise has a number of components – and you need to focus more broadly on what factors are changing in the short term.

The rate of increase is some 3.4 mm/yr, the ocean mass change is 1.8 mm/yr, the thermosteric rise is 0.8 mm/yr, the Greenland ice loss contribution is 0.8 mm/yr and the Antarctic ice loss contribution is 0.34 mm/yr. Ice loss is not sufficient to account for all mass change in the oceans suggesting a hydrodynamic component. A net transfer of surface, ground and soil water to the oceans in the period which is seen in the hydrological records. This has implications for the surface temperature record in that the balance of latent and sensible heat flux changes biasing the surface record to higher temps.
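The budget gap being described can be checked with the figures as given in the comment. These are round numbers, and other terms such as mountain glaciers are omitted, so this is a consistency check only:

```python
# Sea level budget closure using the comment's round figures (mm/yr).
total_altimetry = 3.4   # observed rate of rise
ocean_mass      = 1.8   # mass (barystatic) component
thermosteric    = 0.8   # thermal expansion
greenland       = 0.8   # ice sheet contributions
antarctica      = 0.34

# Gap between observed rise and the (mass + steric) terms:
unclosed = total_altimetry - (ocean_mass + thermosteric)
# Mass change not explained by the two ice sheets, i.e. the
# implied land/ground/soil water transfer the comment mentions:
land_water = ocean_mass - (greenland + antarctica)

print(round(unclosed, 2))    # 0.8 mm/yr not closed by these terms
print(round(land_water, 2))  # 0.66 mm/yr implied hydrologic component
```

On these numbers the ice sheets account for only about two-thirds of the ocean mass change, which is the hydrodynamic-component argument in a nutshell.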

Steric sea level rise – in the Argo record – only occurred after 2012 – the period on which a great many hopes for a resurrection of warming depend. It perhaps may be a little too soon to declare victory and validation – but that doesn’t even slow them down.

There is a great deal of science suggesting that a more nuanced interpretation of basic observational data is warranted – especially as the data sources improve. Valid scientific inferences are built only on data. Models are useful – I have modeled hydrodynamics for decades. I think that their proper use in climate is for process simulation in hypothesis building for validation against observations – leading to a better understanding of the climate system. Interpreting everything in the context of a global warming meme set is not calculated to engender a better understanding.

A broader science is there – one that doesn’t focus almost exclusively on carbon dioxide, surface temperature, sea level rise and model opportunistic ensembles – and which translates this into catastrophic potential and limits to economic growth. The reality – barely acknowledged – is that temperature rise is just 0.09 degrees C/decade and sea level rise is 3mm/yr – in the limit case.

But let’s face it – red and blue teams are not about science. It is about claiming the culturally potent imprimatur of science for a particular set of values. I guess they truly believe that they have it. I am all for using a red team to undermine the radically oversimplified notion of a scientific consensus in the ideas marketplace.

I would expect a highly nuanced view – as opposed to the crude rhetoric found on climate blogs. The people I listed have a profound expertise across relevant areas of the atmosphere, cryosphere, hydrosphere and biosphere – as well as in remote sensing and models. These people talk science and not the superficial JCH version. The latest example of the latter being a crude interpretation of the Fasullo et al. paper he is now ignoring.

Many times I feel that the most that these people do is scan documents for words that seem to reinforce warming memes – more often it seems just narrative cribbed from sites like Slate. Slate comes to mind because I have just painfully read their new diatribe on Steve Koonin. They do link to some science. A circular model attribution study – and a fingerprinting study using IPCC forcings. If that’s the consensus – you are best out of it.

I did not misrepresent anything. Koonin said the last two decades have experienced a decrease in the rate of SLR. That is false. He cited the Fasullo paper. The Fasullo paper does not support his false claim that the last two decades have experienced a decrease in the rate of SLR. What they said, and he was obviously confused by this, is that the 2nd decade of the altimetry record had a slower rate of SLR than the first decade of that record… and that is not the last two decades of that record. I provided the AVISO data for the last two decades… 3.31 mm/yr, and the last decade… over 4 mm/yr.

The meeting was in 2014, the paper appeared in 2013, and it covered altimetry data to 2012. The paper clearly showed a decline in sea level rise over 20 years – as shown in the graph from the paper posted above. You clearly misrepresent both Koonin and Curry for nefarious purposes. Fasullo et al. suggested that the lower average rate this century was more likely the background rate.

The rate increased only in the past couple of years which you hope earnestly is a resurrection of global warming as a warmed over (bad pun intended) corpse. It is of course El Nino and the drought artifact in the surface record. And with ENSO what comes around…

I’ve read a couple of papers from Josh Willis and was impressed by the even-handed approach – including discussions of natural variability. But the bottom line is quite clear – the background rate of warming is 0.09 degree C/decade and sea level rise is a couple of millimetres per year. There is nothing but speculation – and opportunistic ensembles (lol) – to suggest acceleration anytime soon. A 2m sea level rise this century would require a highly speculative tipping point. Yes – alright – Josh Willis is off the Star Chamber bench for conduct unbecoming a scientist.

I don’t expect that there is much of a problem as up to 360 billion tonnes of carbon dioxide is sequestered as 100 billion tonnes of carbon in the next 30 to 40 years – for economic, food security and biodiversity reasons – and as we transition to cheap and abundant 21st century energy sources. I expect that there is much more natural variability to be found in climate than is suspected by many – and it is much more important to understand this for reasons of societal security and resilience. My verdict is that global warming is a completely absurd distraction.

Steve Koonin has an op-ed published today in the Wall Street Journal: A ‘Red Team’ Exercise Would Strengthen Climate Science [link].

Annotated text of the op-ed is provided below: – Judith Curry

Let’s see if we can help you.

Today was April 21, 2017. Koonin wrote an editorial that appeared in the WSJ on April 21, 2017.

In that article he made the following claim, and linked to the Fasullo paper to back it up:

The public is largely unaware of the intense debates within climate science. At a recent national laboratory meeting, I observed more than 100 active government and university researchers challenge one another as they strove to separate human impacts from the climate’s natural variability. At issue were not nuances but fundamental aspects of our understanding, such as the apparent—and unexpected—slowing of global sea level rise over the past two decades. … – Koonin, WSJ, April 21, 2017

The slowdown occurred. Now I am not too fussed about 0.8 mm/yr – but you obviously are – enough to pursue an obsessive campaign of denigration. It is all the politics of global warming tragics.

That oceans have warmed a little in El Nino years since is not all that conclusive. The interesting thing to me is the energy transfer that supposedly happens from the ocean to the atmosphere in an El Nino. How can both oceans and atmosphere warm unless the energy is coming from decreased cloud cover?

So all that’s needed is a web site describing the various Red Team exercises already completed, with a to-do list of projects that can be assembled from existing data & personnel, and a commitment by willard to get it all done before the end of the year.

How would you feel about a group letter to CSPAN to do a continuing series on air, where there are three red/blue expert panelists for each issue topic?

But it’s in the actual field investigations that red/blue balance is needed most, otherwise, the debates just choose studies that they like and things get bogged down into QC of the data and analysis (not too great for video).

An interesting debate between Gavin Schmidt and Steven Koonin in a forum called Kent Presents, in January 2016, moderated by Dan Kevles, Prof. of History of Science, Caltech and Yale. This was an Oxford-style debate in which the audience recorded a vote, prior to the debate, on their inclination to accept or reject the notion that we must act now. Gavin acknowledges that there are uncertainties but holds that we must act now, while Koonin’s position is that the uncertainties are fundamental to what and how we should proceed, and are great enough to give pause and not jeopardize India’s economic development in what and how we move forward. During this debate Koonin raises the idea of taking a Red Team approach and challenges Gavin to list the five things he believes we need to understand better to greatly improve our understanding of climate science/climate change. https://www.youtube.com/watch?v=5n7aLoJr5xE&feature=youtu.be

Ocean temperature was high at the start of the record, declined to a low, and then rose again. It shows clearly that the energy imbalance at TOA was negative and then positive. Contributions to changes in the energy budget had a solar cycle component, and an internal variability component at TOA due to ocean/atmosphere circulation cloud feedbacks. Cloud is anti-correlated with sea surface temperature.

The most reliable data tells a different story to the global warming narrative. It is more than time to put climate science on trial.

I was studying the ACS Climate Change tool kit sections on the single and multilayer theories (what I refer to as the thermal ping-pong ball) of upwelling/downwelling/“back” radiation, and after seeing a similar discussion in an MIT online course (which specifically says no transmission), I have some observations.

These layered models make no reference to conduction, convection or latent heat processes, which leads me to conclude that these models include no molecules, aka a “non-participating media,” aka a vacuum. This is a primary condition for proper application of the S-B BB ideal (i.e. ε = 1.0) equation.

When energy strikes an object or surface there are three possible results: reflection or ρ, absorption or α, transmission or τ and ρ + α + τ = 1.0.

The layered models use only α, which according to Kirchhoff is equal to ε. What Kirchhoff really means is that max emissivity can equal but not exceed the energy absorbed. Nothing says emissivity can’t be less than the energy absorbed. If α leaves as conduction/convection/latent heat (a macro effect, not thermodynamic equilibrium), then ε will be much less than 1.0.

These grey-bodied layered models then exist in a vacuum with 100% non-reflective, i.e. opaque, surfaces, i.e. just like the atmosphere. NOT!
So the real atmosphere has real molecules, meaning a “participatory” media, and is 99.96% transparent, i.e. non-opaque.

Because of the heat flow participating molecules only 63 W/m^2 of the 160 W/m^2 that made it to the surface leaves the surface as LWIR.

63 W/m^2 and a 15 C / 288 K surface gives a net effective ε of about 0.16 when the participating media is considered. (BTW, the “surface” is NOT the ground, but 1.5 m ABOVE the ground, per the WMO & IPCC AR5 glossary.)
So the K-T diagram is thermodynamic rubbish, earth as a ball in a bucket of hot mush is physical rubbish, the Δ33 C w/ atmosphere is obvious rubbish, and the layered models are unrelated-to-reality rubbish.
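The ε ≈ 0.16 figure above is just the stated 63 W/m^2 divided by the Stefan-Boltzmann blackbody flux at 288 K; the arithmetic can be verified independently of whether the physical interpretation holds:

```python
# Effective emissivity implied by the comment's numbers.
# This checks only the arithmetic, not the physical argument.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4
T_SURF = 288.0      # stated surface temperature, K
LWIR = 63.0         # stated surface LWIR flux, W/m^2

blackbody_flux = SIGMA * T_SURF**4   # ideal emitter at 288 K
eps_effective = LWIR / blackbody_flux

print(round(blackbody_flux, 1))    # ~390.1 W/m^2
print(round(eps_effective, 2))     # 0.16, matching the comment
```

So the ratio itself is computed correctly; the dispute in this thread is whether dividing surface LWIR by the blackbody flux is a legitimate definition of emissivity in a convecting atmosphere.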

The atmosphere is not in thermodynamic equilibrium and is a closed system, and as a consequence neither Stefan-Boltzmann nor Kirchhoff nor thermodynamics can be abused the way the GHE theory applies them.

What support does the GHE theory have left besides rabid minions?

I see no reason why GHE theory gets a free pass on the scientific method.

The condition of thermodynamic equilibrium is necessary in the statement, because the equality of emissivity and absorptivity often does not hold when the material of the body is not in thermodynamic equilibrium.
In non-equilibrium systems, by contrast, there are net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, it is said to be in a metastable equilibrium.

Assuming the existing funding decision-makers are skewed to Blue/consensus projects, those making the grants can’t really be trusted to give money to genuine Reds, can they? So would new funding agencies be created alongside the existing consensus ones?