Concern for climate change is rising, but that doesn’t necessarily equate to action. Just ask someone worried about eating right and exercising enough — but who doesn’t actually make it to the gym or opt for salad over fries. http://axios.link/5TF8

The peculiar blindness of experts: “The best forecasters view their ideas as hypotheses in need of testing. If they make a bet & lose, they embrace the logic of a loss just as they would the reinforcement of a win. This is called, in a word, learning.” [link]

“People are rewarded for being productive rather than right, for building ever upward instead of checking the foundations. These incentives allow weak studies to be published. And once enough have amassed, they create a collective perception of strength…” [link]

“Classical education involves the acquisition of culturally & scientifically useful knowledge, & fostering an ability to think critically to further understanding. Modern education, on the other hand, is accreditation by an officially sanctioned seminary.” [link]

Scientists like to think of science as self-correcting. To an alarming degree, it’s not. [link]

The House Natural Resources Committee Subcommittee on Water, Oceans and Wildlife is holding a Hearing today on Responding to the Global Assessment Report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services.

Quite an interesting list. Clearly some of the leading honchos for the IPBES Report. Surprised that the Republicans apparently got to pick several witnesses.

Having Marc Morano on this list is like waving a red cape before a bull. True to form, Marc has prepared an extremely hard-hitting report for his written testimony, which was sent to me (and others) via email. Excerpts from Morano’s testimony are provided below:

<begin quote>

As a lifelong conservationist, I share concerns about the Earth’s biodiversity and particularly concerns about threats to species. I have advocated for a clean, healthy planet with a co-existence of humans and plants and animals.

But, as an investigative journalist studying the United Nations for decades, there is only one conclusion to be made of this new report: The UN’s Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), hypes and distorts biodiversity issues for lobbying purposes. This report is the latest UN appeal to give it more power, more scientific authority, more money, and more regulatory control.

According to media reports, the UN species report requires that “a huge transformation is needed across the economy and society to protect and restore nature…”

And just how does the UN justify this “huge transformation” of economics and society which it will lead? By invoking what the UN describes as “authoritative science” produced by — the UN itself, of course.

At best, the UN science panels represent nothing more than “authoritative bureaucracy”: they hype the problem and then come up with the solution that puts them in charge of “solving” the issue in perpetuity. A more accurate term for the UN than “authoritative science” may be “authoritative propaganda.”

This new biodiversity report follows the same tainted IPCC procedures that the U.S. Congress must be made aware of. The report is meddled with by UN politicians and bureaucrats as part of the process.

“The report’s summary had to be approved by representatives of all 109 nations,” the AP reported. Let’s repeat: “The report’s summary had to be approved by representatives of all 109 nations.” These representatives are not scientists but politicians, subject to lobbying, media pressure and their own self-interests. This is clearly a political process — not a scientific process.

Canadian UN expert Donna Laframboise, who has written several books on the biased UN “scientific” process, explains how this new species report was crafted behind the scenes:

“[The UN] draft a summary known as the Summary for Policymakers (SPM). Then politicians and bureaucrats representing national governments attend a plenary meeting where the summary gets examined line-by-line and rewritten…But it gets worse. Over the next few weeks, the text being summarized – the underlying, ostensibly scientific document – will also get changed. That’s not how things normally work, of course. Summaries are supposed to be accurate reflections of longer documents. At the UN, they represent an opportunity to alter those documents, to make them fall into line…This is no sober scientific body, which examines multiple perspectives, and considers alternative hypotheses. The job of the IPBES is to muster only one kind of evidence, the kind that promotes UN environmental treaties.”

“That’s how the United Nations works, folks. Machinations in the shadows. Camouflaging its political aspirations by dressing them up in 1,800 pages of scientific clothing.”

Analyst Toby Young: “So how exactly did the [UN] IPBES arrive at the magic one million [species at risk] number? It seems we’re just supposed to take it on faith, which the BBC duly did. What about the IPBES’s claim that ‘around 25% of species… are threatened’? That seems a little pessimistic, given that the number of mammals to have become extinct in the past 500 years or so is around 1.4% and only one bird has met the same fate in Europe since 1852. Not bad when you consider how much economic growth there’s been in the past 167 years.”

“…All I could find online was a press release put out by the IPBES and a ‘summary’ of the report ‘for policymakers’. The press release states: ‘The report finds that around one million animal and plant species are now threatened with extinction, many within decades.’ It gives no source for this beyond the as-yet-unpublished report, but the summary makes it clear that it’s partly based on data from the International Union for Conservation of Nature (IUCN) Red List of Threatened Species.”

Wrightstone called the report “a case study of how those who promote the notion of man-made catastrophic warming manipulate data and facts to spread the most fear, alarm, and disinformation.”

Wrightstone’s research instead found: “A closer review of the most recent information dating back to 1870 reveals that, instead of a frightening increase, extinctions are actually in a significant decline. What is apparent is that the trend of extinctions is declining rather than increasing, just the opposite of what the new report claims. Also, according to the IPBES report, we can expect 25,000 to 30,000 extinctions per year, yet the average over the last 40 years is about 2 species annually. That means the rate would have to multiply by 12,500 to 15,000 to reach the dizzying heights predicted. Nothing on the horizon is likely to achieve even a small fraction of that.”

Wrightstone added: “In an incredibly ironic twist that poses a difficult conundrum for those who are intent on saving the planet from our carbon dioxide excesses, the new study reports that the number one cause of predicted extinctions is habitat loss. Yet their solution is to pave over vast stretches of land for industrial-scale solar factories and to construct immense wind factories that will cover forests and grasslands, killing the endangered birds and other species they claim to want to save.” (JC BOLD)

Analyst Kenneth Richard: “During the last few hundred years, species extinctions primarily occurred due to habitat loss and predator introduction on islands. Extinctions have not been linked to a warming climate or higher CO2 levels. In fact, since the 1870s, species extinction rates have been plummeting.” – “No clear link between mass extinctions and CO2-induced or sudden-onset warming events.”

As we await the full report from the UN on Biodiversity, we must note that the UN track record on species claims has not been admirable.

2014: Der Spiegel’s Axel Bojanowski: “The IPCC admits that there is no evidence climate change has led to even a single species becoming extinct thus far. At most, the draft report says, climate change may have played a role in the disappearance of a few amphibians, freshwater fish and mollusks. Yet even the icons of catastrophic global warming, the polar bears, are doing surprisingly well.”

UN official on species in 2007: “Every hour, three species disappear. Every day, up to 150 species are lost. Every year, between 18,000 and 55,000 species become extinct. The cause: human activities. …Climate change is one of the major driving forces behind the unprecedented loss of biodiversity.” — Speech on 21 May 2007 by Ahmed Djoghlaf, then Executive Secretary of the Convention on Biological Diversity under the United Nations Environment Programme (UNEP).

Contrary scientific studies abound:

“Re-assessing current extinction rates” by Nigel Stork in Biodiversity and Conservation, February 2010. Gated. Open copy. He cites the overwhelming peer-reviewed research evidence that claims of mass extinctions occurring today are exaggerated or false, and explains the reasons for these errors. Conclusions … “So what can we conclude about extinction rates? First, less than 1% of all organisms are recorded to have become extinct in the last few centuries and there are almost no empirical data to support estimates of current extinctions of 100 or even one species a day.”

“Species–area relationships always overestimate extinction rates from habitat loss” by Fangliang He and Stephen P. Hubbell in Nature, 19 May 2011. Gated. “Extinction from habitat loss is the signature conservation problem of the twenty-first century. Despite its importance, estimating extinction rates is still highly uncertain because no proven direct methods or reliable data exist for verifying extinctions.”

John C. Briggs (Prof Marine Science, U South FL) in Science, 14 November 2014. – “Most extinctions have occurred on oceanic islands or in restricted freshwater locations, with very few occurring on Earth’s continents or in the oceans.”

Perhaps the most high profile species prediction failure of the UN and former Vice President Al Gore has been with polar bears.

Why has Al Gore gone silent on the extinction scare of polar bears? Gore featured the bears in his 2006 film, but how many references to polar bears were in Gore’s 2017 sequel? Five references? Three? No. How about zero. The polar bears were completely absent in his 2017 sequel. The reason? Simple. The polar bear population keeps rising.

This new May 2019 UN report extrapolates huge future species extinction predictions from a much less alarming current reality. So far it has only released its Summary for Policymakers, which is fiddled with by UN politicians and bureaucrats, while the underlying science report remains at large. And that underlying report must follow the dictates of the Summary for Policymakers. This UN political process, which interferes with the scientific process, has been called into question for violating U.S. science policy guidelines.

Let me be clear: I am not talking about the UN and its science reports in some abstract or vague way. I am here to say that the three lead witnesses representing the United Nations today on this new biodiversity report are explicitly part of these UN scientific manipulations.

I will be presenting, and submitting for the record, the voices of current and past scientists that reveal the UN’s pre-determined narrative process and expose how the UN’s panels are not rooted in honest science.

[JC note: read the full testimony for much material critical of the IPCC process]

I have been passionate about environmental issues since I began my career in 1991 as a journalist. I produced a documentary on the myths surrounding the Amazon Rainforest in 2000, which dealt extensively with claimed species extinctions and how such claims are used to instill fear for political lobbying.

I have done extensive investigative reporting on species extinction claims, including how hyped-up species concerns are used to shut down American mining and private breeders. One of my stories, a report titled Desert Stormtroopers, described how nearly 30 state, local and federal agencies descended on the Molycorp mine in California’s Mojave desert to protect the threatened Desert Tortoise. Based on these endangered species claims, the mine’s operations were halted, employees were forced to undergo “tortoise sensitivity training” and the U.S. federal government felt compelled to use heavy-handed tactics. It turned out that the Desert Tortoise was not even considered an “endangered” species, but a “threatened” species.

Concern over species can be used to justify massive government intrusion into business, private lives and property rights; therefore, it is extremely important that we get the science right.

Other efforts to “save” species have had mixed and sometimes woeful results.

New 2018 report highlights failures of the Endangered Species Act: “The Endangered Species Act (ESA) has been so ineffective at recovering species that the U.S. Fish and Wildlife Service has fabricated a record of success.” – Robert Gordon, The Heritage Foundation … Enacted in 1973, the ESA has managed to “recover” only 40 species, or slightly less than one species per year … “Federally Funded Fiction” – Even worse, almost half of the “recovered” species – 18 out of 40 – are what Gordon calls “federally funded fiction.” It turns out that these 18 “recovered” species were never endangered in the first place and were placed on the endangered species list due to poor data. This, however, has not kept the Department of Interior’s Fish & Wildlife Service (FWS) from trumpeting their “recovery” as a success.

My 2000 Amazon Rainforest documentary “Clear-cutting the Myths” highlighted the hopeful news on species and the natural world’s biodiversity.

Excerpt: “Duke University published a study on the effects of logging in Indonesian rainforests. Dr. Charles Cannon examined land both one year and eight years after it had been commercially logged. What he found surprised many. Indonesia’s forests were recovering quickly from logging operations, with a healthy mix of plant species…Robin Chazdon, an ecologist from the University of Connecticut, has studied tropical rainforests for more than 20 years. Dr. Chazdon wrote the editorial that accompanied Dr. Cannon’s study in Science Magazine. “I do think that we have underestimated the ability of the forest to regenerate,” Chazdon said. Scientific reforestation efforts are paying off in parts of the Amazon. In 1982, miners cleared a large tract of land in western Brazil. Once finished, they hired scientists to reforest the territory. New studies show that the rejuvenated forest is virtually indistinguishable from its original form. Ninety-five percent of the original animal species have returned. Proponents say these attempts at sustainable logging lowered costs and increased productivity, proving that man and nature can coexist in the Amazon.”

UK scientist Professor Philip Stott, emeritus professor of Biogeography at the University of London, dismissed current species extinction claims in my Amazon rainforest documentary.

“The earth has gone through many periods of major extinctions, some much bigger in size than even being contemplated today,” Stott, the author of a book on tropical rainforests, said in the documentary.

“Change is necessary to keep up with change in nature itself. In other words, change is the essence. And the idea that we can keep all species that now exist would be anti-evolutionary, anti-nature and anti the very nature of the earth in which we live,” Stott said.

Analyst Jo Nova: “Wealthy countries are solving all of these problems faster than poor countries are. The best way to save the wilderness is to increase the GDP of those in poverty. Free trade, fair agricultural markets. Less red tape. Less corruption. We’ve tied up lots of land, so the last thing we want is to use wilderness for useless solar and wind farms, or palm oil plantations. Why keep coal and uranium underground when we can save the forest instead? Again, in nations where there are healthy economies, fish stocks are being protected and are recovering. Whales too. Even great white sharks.”

Yet, despite a massive track record of scientific failure about climate and species “crises,” the UN, the media and the usual suspect scientists, like failed overpopulation guru Paul Ehrlich, are at it again.

This latest report has been touted by the UN as the “IPCC for nature.” The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) included more than 450 researchers who used 15,000 scientific and government reports.

Environmental activist Tim Keating of Rainforest Relief was asked in the 2000 documentary if he could name any of the alleged 50,000 species that have gone extinct, and he was unable to do so.

“No, we can’t [name them], because we don’t know what those species are. But most of the species that we’re talking about in those estimates are things like insects and even microorganisms, like bacteria,” Keating explained.

But the persistent claims that not only are humans driving a species catastrophe but that humans themselves will go extinct will not go away.

Is the Insect Apocalypse Really Upon Us? ‘Claims that insects will disappear within a century are absurd’ – The data on insect declines are too patchy, unrepresentative, and piecemeal to justify some of the more hyperbolic alarms. At the same time, what little information we have tends to point in the same worrying direction…The claim that insects will all be annihilated within the century is absurd. Almost everyone I spoke with says that it’s not even plausible, let alone probable. “Not going to happen,” says Elsa Youngsteadt from North Carolina State University. “They’re the most diverse group of organisms on the planet.”…The sheer diversity of insects makes them, as a group, resilient—but also impossible to fully comprehend. There are more species of ladybugs than mammals, of ants than birds, of weevils than fish.

Scientists uncover 1,451 new species in the ocean in the past year – UK Daily Mail 2015: From a frilled shark to the frogfish, we’re finding four new sea creatures every day: Scientists uncover 1,451 new species in the ocean in the past year alone. Despite the expansion of our knowledge however, scientists estimate we still only know about a tenth of the marine life on Earth. The World Register of Marine Species – which aims to become an inventory of all known ocean life – numbers 228,000 species, with new names being added every day.

<end quote>

Witnesses selected by the minority party (at present the Republicans) typically have a week at best to prepare written testimony. So it is clear that Morano’s materials must have been collected and examined over a period of time.

Without having read all the sources linked to by Morano, what he states is generally consistent with my more limited understanding of this issue (although there are many relevant issues not covered in his testimony).

And of course I haven’t read the full Biodiversity Report, since it is not yet available. I am appalled that they published the relatively short Summary for Policymakers well in advance of publishing the full report (I haven’t even seen a publication date for the main report). This fact in itself supports Morano’s contention that the intention of this Report is propaganda. They got their headline regarding ‘1 million species at risk of extinction’ without providing the documentation, which apparently can’t be very convincing.

It is very difficult to rebut Morano’s points without the full Report and its documentation.

The biodiversity and species extinction issue is associated with substantially more uncertainty than, say, the IPCC WGI report on the physical basis for climate change. The species issue is potentially uncertain by orders of magnitude, with the sign of some of this even being uncertain.

And the irony of all this is that the biodiversity narrative rather gets in the way of the climate catastrophe narrative. The climate issue is at best a minor issue in any biodiversity challenge. At the same time, climate change ‘solutions’ are arguably a much bigger threat to species and biodiversity than climate change itself.

That said, the Report raises some serious issues and we can and should do better at reducing our impact on habitats and species. But any sensible policies in this regard would undoubtedly get drowned out by climate change alarmism and criticism of the Report.

Notes from the Hearing

I am watching the live hearing now (I tuned in a bit late). I thought that the oral testimonies of Shin, Moore, and Watson were very effective. Both Shin and Watson highlighted ocean issues, mainly overfishing and coastal habitats, which are of substantial concern. I don’t always agree with Moore’s statements about climate change, but with regards to biodiversity this topic is squarely in his domain of expertise. Morano’s oral statements were a bit over the top and confrontational, and the Committee Chair is being rather hostile towards Morano.

The Ranking Member (Republican) is seeking common ground, and it appears that the ocean related issues of the Report are having an impact.

Watson agrees that monoculture biofuel production is not good for biodiversity. Watson clearly coupled the biodiversity and climate change issues, stressing the importance of dealing with both together (makes sense, especially if this causes reconsideration of biofuels and wind power).

Interesting comment by one of the Members: We are no longer seeing climate denial from the Republicans in Congress, but rather we are seeing climate avoidance, in terms of doing anything meaningful about it.

Morano was asked a question about ‘97% of scientists agree.’ Morano nailed it. Moore effectively chimed in on this issue also.

Hard-hitting remarks from one of the members about the fact that the full Report has not yet been published, only the Executive Summary.

Moore is effectively communicating the ‘global greening’ seen by satellite.

Member Bishop raises concern about scientific integrity, in context of the Report not being released.

The Chairman in his 5 minutes is attacking Morano and the Republicans for inviting him. Also criticizing ‘junior varsity think tanks.’

Watson admits CO2 contributes to greening. He then hypes extreme weather, including drying as problems associated with CO2. He started talking about economics and policy, and the Chair pulled him back to the science.

Shin brought that particular conversation back to ocean acidification.

The Chair criticized Republicans for inviting a political person (Morano) to testify. But then Watson clearly wanted to talk about these issues also.

Moore hits hard on the ‘extrapolation’ issue, and the large number of the estimated 8 million species that haven’t been identified.

Moore raises the valid point about differences between biodiversity (species number) and species mass. He understands that ocean biomass is decreasing, but is unaware of any actual species loss.

The Chair is now going after Moore. Entering into the Congressional Record a statement from Greenpeace about Patrick Moore.

The reality of the rise of an intolerant and radical left on campus [link]

Fact-checking can’t do much when people’s ‘dueling facts’ are driven by values instead of knowledge [link]

Faculty at prestigious institutions are more productive and prominent than their peers. New research suggests that their work environment, not their training, explains their success. http://ow.ly/GfY450tWGum

Expertise, agreement and the nature of social scientific facts: Against Epistocracy. “I reject any attempt, on the part of scientists themselves or of philosophers or any other students of science, to strengthen the role of experts in society. Experts need to be kept in check, not given more power.” [link]

Whether we should do anything now to limit our impact on future climate boils down to an assessment of a relevant cost-benefit ratio. That is, we need to put a dollar number to the cost of doing something now, a dollar number to the benefit thus obtained by the future generations, and a number to a thing called “discount for the future”—this last being the rate at which our concern for the welfare of future generations falls away as we look further and further ahead. Only the first of these numbers can be estimated with any degree of reliability. Suffice it to say, if the climate-change establishment were to have its way with its proposed conversion of the global usage of energy to a usage based solely on renewable energy, the costs of the conversion would be horrifically large. It is extraordinary that such costs can even be contemplated when the numbers for both the future benefit and the discount for the future are little more than abstract guesses.

Assessment of the future benefit is largely based on two types of numerical modelling. First, there are the vast computer models that attempt to forecast the future change in Earth’s climate when atmospheric carbon dioxide has increased as a consequence of the human activity of burning fossil fuel. Second, there are the computer-based economic models which attempt to calculate the economic and social impact of the forecasted change of climate. Reduction of that impact (by reducing the human input of carbon dioxide to the atmosphere) is the “benefit” in the cost-benefit calculations.

Taking the climate change calculations first, it should be emphasized that in order to be really useful, the forecast must necessarily be of the future distribution of climate about the world—on the scale of areas as small as individual nations and regions. Calculating only the global average of such things as the future temperature and rainfall is not useful. The economic models need input data relevant to individual nations, not just the world as a whole.

Which is a bit of a problem. The uncertainty associated with climate prediction derives basically from the turbulent nature of the processes going on within the atmosphere and oceans. Such predictability as there is in turbulent fluids is governed by the size (the “scale”) of the boundaries that contain and limit the size to which random turbulent eddies can grow. Thus reasonably correct forecasts of the average climate of the world might be possible in principle. On the scale of regions (anything much smaller than the scale of the major ocean basins for example) it has yet to be shown that useful long-term climate forecasting is possible even in principle.

To expand on that a little, the forecasts of the global average rise in temperature by the various theoretical models around the world range from about 1 degree to 6 degrees Celsius by the end of this century—which does little more than support the purely qualitative conclusion from simple physical reasoning that more carbon dioxide in the atmosphere will increase the global average temperature above what it would have been otherwise. It does little to resolve the fundamental question as to what fraction of the observed rise in global surface temperature over the last thirty or so years (equivalent to a rise of about 1 degree Celsius per century if one is inclined to believe observations rather than the theory) is attributable to the human-induced increase in atmospheric carbon dioxide. There is still a distinct possibility that much of the observed rise in global temperature may be the result of natural (and maybe random) variability of the system.

While the forecasts of future global average climate are not really trustworthy and would probably not be very useful even if they were, the potentially much more useful forecasts of regional climates are perhaps just nonsense. A good example supporting this rather negative view of the matter is the variability of the set of hundred-year forecasts of the average rainfall over Australia. Each forecast was produced by one of the many climate models from around the world. The present-day measured average is about 450 millimetres per year. The forecasts for the next century range from less than 200 mm to more than 1000 mm per year. That sort of thing makes finding a model to support a particular narrative just too easy.

As a consequence, the economic models of the future of regions and nations are highly unreliable if only because their regional and national inputs of forecasted climatic “data” are unreliable. But to make matters vastly worse, the economic models themselves are almost certainly useless over time-scales relevant to climate. Their internal workings are based on statistical relations between economic variables devised for present-day conditions. There is no particular reason why these relations should be valid in the future when the characteristics of society will almost certainly have changed. As Michael Crichton put it: “Our [economic] models just carry the present into the future.” And as John Kenneth Galbraith once remarked: “Economic forecasting was invented to make astrology look respectable.”

There is a lot of discussion among academics as to what should be an appropriate “discount for the future” to apply in the cost-benefit calculations associated with human-induced climate change. The discussion quickly becomes incomprehensible to the average person when phrases such as “cross generational wealth transfer” and “intergenerational neutrality” and so on appear in the argument. These are fancy terms supposedly relevant to what is essentially a qualitative concept of fairness to future generations. The concept is so qualitative that there is virtually no hope of getting general agreement as to how much we should spend now so as not to upset the people of the future.

There are two extremes of thought on the matter. At one end there are those who tell us that the present-day view of a benefit for future generations should be discounted at the normal rate associated with business transactions of today. That is, it should be something of the order of 5 to 10 per cent a year. The problem for the academics is that such a discount would ensure virtually no active concern for the welfare of people more than a generation or so ahead, and would effectively wipe out any reason for immediate action on climate. At the other end of the scale, there are those who tell us that the value of future climatic benefit should not be discounted at all—in which case there is an infinite time into the future that should concern us, and “being fair” to that extended future implies that we should not object to spending an unlimited amount of present-day money on the problem.

Academics tie themselves in knots to justify the need for immediate action on climate change. For example, we hear argument that “discounting should not be used for determining our ethical obligations to the future” but that (in the same breath) “we endorse a principle of intergenerational neutrality”—and then we hear guesses of appropriate discount rates of the order (say) of 1.5 per cent a year.
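The weight given to distant benefits falls off geometrically with the discount rate. As a purely illustrative sketch (the dollar figure and horizon are my own made-up inputs, not numbers from the essay), here is the present value of a benefit realized a century from now under the rates just mentioned:

```python
# Minimal sketch of discounting: the present value of a benefit B received
# t years from now is B / (1 + r)**t. All inputs here are hypothetical.
def present_value(benefit, rate, years):
    return benefit / (1.0 + rate) ** years

benefit = 1_000_000_000   # a hypothetical $1 billion climate benefit
horizon = 100             # realized a century from now

for rate in (0.0, 0.015, 0.05, 0.10):
    pv = present_value(benefit, rate, horizon)
    print(f"discount rate {rate:5.1%}: present value ${pv:,.0f}")
```

At 5 to 10 per cent the future benefit is worth next to nothing today, so it justifies almost no present spending; at 0 per cent every future generation counts equally, so almost unlimited spending can be justified; the academics’ roughly 1.5 per cent sits in between.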

The significant point in this cost-benefit business is that there is virtually no certainty about any of the numbers that are used to calculate either the likely change of climate or the impact of that change on future populations. In essence it is simply assumed that all climate change is bad—that the current climate is the best of all possible climates. Furthermore, there is little or no recognition in most of the scenarios that mankind is very good at adapting to new circumstances. It is more than likely that, if indeed climate change is noticeably “bad”, the future population will adjust to the changed circumstances. If the change is “good”, the population will again adapt and become richer as a consequence. If the change is a mixture of good and bad, the chances are that the adaptive processes will ensure a net improvement in wealth. This for a population which, if history is any guide, and for reasons entirely independent of climate change, will probably be a lot wealthier than we are.

Perhaps the whole idea of being fair to the people of the future should be reversed. Perhaps they can easily afford to owe us something in retrospect.

The bottom line of politically correct thought on the matter—the thought that we must collectively do something drastic now to prevent climate change in the future—is so full of holes that it brings the overall sanity of mankind into question. For what it is worth, one possible theory is that mankind (or at least that fraction of it that has become both over-educated and more delicate as a result of a massive increase of its wealth in recent times) has managed to remove the beliefs of existing religions from its consideration—and now it misses them. As a replacement, it has manufactured a set of beliefs about climate change that can be used to guide and ultimately to control human behaviour. The beliefs are similar to those of the established religions in that they are more or less unprovable in any strict scientific sense.

The Extinction Rebellion and the Green New Deal arouse fears of extinction for other species, and humanity. Only the complicit silence of climate scientists makes this possible. Compare the alarmists’ claims with what scientists said in the IPCC’s Fifth Assessment Report (AR5). Too bad that journalists don’t.

Climate hysteria goes mainstream. Climate scientists are silent.

The Extinction Rebellion – “Life on Earth is in crisis: scientists agree we have entered a period of abrupt climate breakdown, and we are in the midst of a mass extinction of our own making. …see how we are heading for extinction.” See their evidence here.

Rep. Alexandria Ocasio-Cortez (D-NY) is interviewed by Ta-Nehisi Coates at an “MLK Now” event in New York. Video here.

“Millennials and people, you know, Gen Z and all these folks that will come after us are looking up and we’re like: ‘The world is gonna end in 12 years if we don’t address climate change and your biggest issue is how are we gonna pay for it?’”

“Andrew Samuels, a Jungian psychoanalyst and a professor at the University of Essex, tells me that therapists are increasingly hearing from patients who are deeply disturbed by climate change and are struggling to cope.”

First fruits of the Extinction Rebellion’s climate hysteria: the UK parliament declares a “Climate Emergency.” Some say this puts the UK on a “war footing”, always a useful way to increase a government’s power over its people.

About the coming extinctions!

“Extinction risk is increased under all RCP scenarios, with risk increasing with both magnitude and rate of climate change.”

That is politics, meaningless rhetoric, not science. It tells us nothing about timing and magnitudes of changes compared to temperature increases. Turn to the full report for answers. First, the good news – they give a rebuttal to the hysteria about the mass extinctions supposedly occurring now due to climate change (more details here).

“{O}nly a few recent species extinctions have been attributed as yet to climate change (high confidence) …” {p4.}

“While recent climate change contributed to the extinction of some species of Central American amphibians (medium confidence), most recent observed terrestrial species extinctions have not been attributed to climate change (high confidence).” {p44.}

“Overall, there is very low confidence that observed species extinctions can be attributed to recent climate warming, owing to the very low fraction of global extinctions that have been ascribed to climate change and tenuous nature of most attributions.” (p300.)

Looking to the future.

Much of the report discusses possible results of 4°C warming above preindustrial levels – as of 2018, we are now ~1°C above preindustrial (likely 0.8 – 1.2°C). Supposedly a further rise of over 0.5°C will prove disastrous (i.e., over the 1.5°C red line). A further increase of 3°C is wildly improbable by 2065 (the visibility limit of reliable forecasting), and unlikely even by 2100 (i.e., that is in the middle of the range for the improbable RCP8.5 scenario).

WGI used a recent baseline for temperature comparisons: the average of 1986–2005. WGII measured from preindustrial temperatures, defined as before 1750 (WGI occasionally uses preindustrial, such as for historical analysis). Comparing with preindustrial has advantages for climate alarmists.

It measures warming from close to the trough of the coolest period for thousands of years.

There is no instrumental record for global temperatures in 1750.

Most valuable of all, it allows conflating the natural warming from 1750 to WWII with the mostly anthropogenic warming (AGW) since WWII. So, to the public, all ill effects of this warming become effects of AGW.

What does WGII say about extinctions resulting from AGW? They give many scary findings. But, like the headline conclusion given above, most either lack meaningful details, or are given low confidence, or both.

“Models project that the risk of species extinctions will increase in the future due to climate change, but there is low agreement concerning the fraction of species at increased risk, the regional and taxonomic distribution of such extinctions, and the timeframe over which extinctions could occur.” {p67.}

“Within this century, magnitudes and rates of climate change associated with medium- to high-emission scenarios (RCP4.5, 6.0, and 8.5) pose high risk of abrupt and irreversible regional-scale change in the composition, structure, and function of terrestrial and freshwater ecosystems, including wetlands (medium confidence).” (p15.)

“From a global perspective, open ocean NPP {net primary productivity} will decrease moderately by 2100 under both low- (SRES B1 or RCP4.5) and high-emission scenarios (medium confidence; SRES A2 or RCPs 6.0, 8.5) …. However, there is limited evidence and low agreement on the direction, magnitude and differences of a change of NPP in various ocean regions and coastal waters projected by 2100 (low confidence).” (p135.)

“There is a high risk that the large magnitudes and high rates of climate change associated with low-mitigation climate scenarios (RCP4.5 and higher) will result within this century in abrupt and irreversible regional-scale change in the composition, structure, and function of terrestrial and freshwater ecosystems, for example in the Amazon (low confidence) and Arctic (medium confidence), leading to substantial additional climate change.” (p276.)

WGII discusses bad impacts on some specific kinds of creatures, such as corals. Nothing about extinction of humans. The 1,150 pages of WGII have a remarkable lack of specificity about what we can expect from the various scenarios. There is one exception, a paper that WGII cites 22 times. It was published ten years ago, with no mention of its replication or follow-up research. This is an example of what Andrew Revkin condemns as the “single study syndrome” (e.g., here and here).

“Fischlin et al. (2007) found that 20 to 30% of the plant and animal species that had been assessed to that time were considered to be at increased risk of extinction if the global average temperature increase exceeds 2°C to 3°C above the preindustrial level with medium confidence, and that substantial changes in structure and functioning of terrestrial, marine, and other aquatic ecosystems are very likely under that degree of warming and associated atmospheric CO2 concentration. No time scale was associated with these findings.” (p278.)

“All model-based analyses since AR4 broadly confirm this concern, leading to high confidence that climate change will contribute to increased extinction risk for terrestrial and freshwater species over the coming century. Most studies indicate that extinction risk rises rapidly with increasing levels of climate change, but some do not. …There is, however, low agreement concerning the overall fraction of species at risk, the taxa and places most at risk, and the time scale for climate change-driven extinctions to occur.” (p300.)

AR5 describes the assessed likelihood of an outcome or a result: “virtually certain 99–100% probability, very likely 90–100%, likely 66–100%, about as likely as not 33–66%, unlikely 0–33%, very unlikely 0–10%, exceptionally unlikely 0–1%.”

Conclusions

The Left has incited hysteria about climate change for political gain (the Green New Deal is their maximum dreams given form). Their claims go far beyond consensus climate science, with little basis in the IPCC assessments. Climate scientists and their institutions have remained silent for years as the Left’s claims grew more extreme and less grounded in science. Turning these issues into an irrational crusade makes rational public policy far more difficult to achieve.

An ignored warning from 2010, a path not taken

Here is a remarkable op-ed in the BBC: “Science must end climate confusion” by climate scientist Richard Betts, 11 January 2010. He cautions about scientists exaggerating or misrepresenting climate science “if it helps make the news or generate support for their political or business agenda.”

For those of you not in the U.S., Beto O’Rourke is one of the 20+ candidates vying for the Democratic Party nomination for the Presidential election in 2020.

A number of the candidates have endorsed Ocasio-Cortez’s Green New Deal. In recent months, we have also seen the Green Real Deal, the Green No Deal, and the Green Nuclear Deal (each of which is better than the Green New Deal).

Unlike the ‘me too-ism’ of the other candidates that have endorsed the Green New Deal, Beto O’Rourke has put forward a comprehensive plan for climate change [link].

Predictably, the right wing is outraged by the 5 trillion dollar price tag over 10 years and zero emissions by 2050. The more interesting response is from the environmental activists, typified by the article in the Rolling Stone:

“Beto claims to support the Green New Deal, but his plan is out of line with the timeline it lays out and the scale of action that scientists say is necessary to take here in the United States to give our generation a livable future”

For the sake of argument, let’s say you are moderately concerned about climate change and forward-looking in terms of desiring a prosperous 21st century that includes abundant food, energy and water for all and a clean environment.

What might you find in Beto’s proposal that makes sense?

Beto O’Rourke’s proposal for climate change

Read the entire document; it isn’t all that long. It has a lot of recommendations; I’ve selected the ones that make sense in terms of ‘no regrets’, even if climate change turns out not to be a big problem.

Reduce methane leakage from existing sources in the oil and natural gas industry for the first time and rapidly phase-out hydrofluorocarbons, the super-polluting greenhouse gas that is up to 9,000 times worse for climate change than carbon dioxide;

JC note: this is basically the climate fast response plan.

Strengthen the clean air and hazardous waste limits for power plants and fuel economy standards that save consumers money and improve public health, while setting a trajectory to rapidly accelerate the adoption of zero-emission vehicles;

JC note: clean air, water and soil are a priority independent of climate change

Create unprecedented access to the technologies and markets that allow farmers and ranchers to profit from the reductions in greenhouse gas emissions they secure;

JC note: soil carbon sequestration and grazing is very good for the land, increasing productivity for farmers and ranchers

Leverage $500 billion in annual government procurement to decarbonize across all sectors for the first time, including a new “buy clean” program for steel, glass, and cement;

JC note: I’m not sure about the $500B part, but the issue of steel and cement production (with substantial CO2 emissions) is rarely addressed in a meaningful way in terms of CO2 production. New cost-effective technologies for producing steel and cement would be a good thing.

Set a first-ever, net-zero emissions by 2030 carbon budget for federal lands, stopping new fossil fuel leases, changing royalties to reflect climate costs, and accelerating renewables development and forestation;

JC note: apart from the zero emissions by 2030, rational policies for federal lands are very much needed.

Protect our most wild, beautiful, and biodiverse places for generations to come — including more of the Arctic and of our sensitive landscapes and seascapes than ever before — and establish National Parks and Monuments that more fully tell our American story.

JC note: this sort of thing used to define ‘environmentalism.’ Now environmentalism is all about climate change, and it is ok to kill bald eagles with wind turbines.

Innovation that will lead to pioneering solutions in energy, water, agriculture, industry, and mobility and to scientific discovery that makes us more safe and secure. $250 billion in direct resources that will catalyze follow-on private investment, creation of new businesses, and discovery of new science:

JC note: hard to argue against this one.

20 percent of the total investment will go to the climate science needed to understand the changes to our oceans and our atmosphere; avoid preventable losses and catastrophic outcomes; and protect public safety and national security.

JC note: I don’t think an increase in climate science funding is needed (some redirection of funding would be appropriate IMO, away from climate modeling). The other points are important.

Rigorously measuring our progress, scaling what works and scrapping what does not;

Enforcing our laws to hold polluters accountable, including for their historical actions or crimes;

Advancing consumer choice and market competition in electricity and transportation;

Supporting ecosystems, conservation, and biodiversity;

JC note: hard to argue against any of these.

Increasing by ten-fold the spending on pre-disaster mitigation grants that save $6 for every $1 invested;

Changing the law to make sure that we build back stronger after every disaster, rather than spend recovery dollars in ways that leave communities vulnerable to the next fire, flood, drought or hurricane;

Recognizing the value of well-managed ecosystems to reduce and defend against climate-related risks;

Expanding our federal crop insurance program to cover additional risks and offer more comprehensive solutions to support farmers and ranchers;

Investing in the climate readiness and resilience of our first responders; and

Bolstering the security of our military bases, both at home and around the world, and supporting our soldiers with technologies that reduce the need to rely on high-risk energy and water supply.

JC notes: each of these is much needed, independent of manmade climate change.

Cutting off their nose to spite their face

Basically, what Beto has done is widen the scope of technology and policy options that are being discussed, rather than focus on a timeline for reducing energy and transportation emissions to zero. This is a move in the right direction.

As pointed out in my recent Congressional Testimony, there are many low-regrets actions that make a lot of sense independent of whether manmade climate change turns out to be a big problem or a non-problem. In principle, people across the political spectrum should be able to agree on at least some of these.

However, it looks like Beto’s proposal, which is an order of magnitude better than the Green New Deal, will not catch fire with the Democrats, who are ignoring a more meaningful and politically viable proposal in preference for the ‘nonsensical’ Green New Deal.

Ito et al. (2019) “used a computational model of circulation and cycling of elements in the ocean to simulate [changes in seawater oxygen levels in the North Pacific] for the last approximately 70 years” and to understand their causes. https://doi.org/10.1029/2018GB005987…

“We show that the changing coverage of weather stations in the Indian rainfall data leads to spurious increases in extreme rainfall. This suggests that previously reported trends of extreme rainfall are biased positive.” https://doi.org/10.1029/2018GL079709
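As a toy illustration of the mechanism the quoted abstract describes (my own sketch with invented numbers, not the authors’ analysis): if the regional “extreme” is taken as the largest value reported by whatever stations exist in a given year, then simply adding stations over time inflates that statistic even when the underlying rainfall distribution never changes.

```python
# Toy sketch: a growing station network produces a spurious upward trend in
# "extreme rainfall" even under perfectly stationary rainfall statistics.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2020)
# Hypothetical network growth: 20 stations in 1950 rising to 200 by 2019.
n_stations = np.linspace(20, 200, len(years)).astype(int)

regional_extreme = []
for n in n_stations:
    # Each station's annual-maximum daily rainfall (mm) is drawn from the SAME
    # distribution every year, i.e. there is no real change in extremes.
    station_maxima = rng.gumbel(loc=100.0, scale=25.0, size=n)
    # The recorded "regional extreme" is the max over available stations.
    regional_extreme.append(station_maxima.max())

# The fitted trend is positive purely because the maximum over more samples
# tends to be larger.
trend_per_decade = np.polyfit(years, regional_extreme, 1)[0] * 10
print(f"spurious trend: {trend_per_decade:.1f} mm per decade")
```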

A new study in Nature Climate Change finds an even stronger case for reducing CO2 emissions to stabilize climate change through a shift from coal to natural gas. Findings are robust under range of leakage rates and uncertainties in emissions data. [link]

Since we’ve moved to Nevada and have been integrating into the local community, the most interesting thing we’ve come across is the National Security Forum of Northern Nevada (NSF). It turns out that a large number of people from the CIA, NSA, DOD, military etc. come to the Reno-Tahoe area to retire. The NSF was started by Ty Cobb, who was a Special Assistant to President Reagan.

Once or twice a month, the NSF has a meeting (at one of the local casino hotels!) with an invited speaker – often from our local community but also frequently from the broader national and international communities.

“There are serious opportunities for those who lead and missed opportunities for those who do not lead the transition to advanced energy sources and grid diversification.”

The CNA Military Advisory Board (MAB) has been a leading voice on national security issues since 2007, producing seminal reports on climate and energy security. Two of these explore U.S. military needs for advanced, transportable, safer, and secure sources of energy and electricity transmission systems for mission critical operations. Vice Adm Lee Gunn serves as Vice Chair of the CNA-MAB and has been instrumental in leading the CNA-MAB reports on advanced energy and electric grid modernization. In his NSF presentation, he highlighted many key findings from the CNA-MAB studies and challenged us as Nevadans to lead the way in transitioning to a more energy secure future.

With the U.S. military transitioning U.S. bases from solely supporting mission readiness to conducting military operations directly from the homeland, the demand for stable, uninterruptible electric power sources has increased. Base operations now require electricity that is not vulnerable to natural hazards and malicious attacks. One of our military’s most critical operations is the drone missions conducted from Nellis AFB, outside Las Vegas. Drone operations demand energy supplies that are independent and not at risk from the aging infrastructure underpinning our national electric grid. Our military operators cannot afford to be subject to large scale power outages, such as the one much of the Northeast experienced in 2003, when a power line contacting a tree limb in Ohio caused lights to go out across the region for days.

“The national grid was not designed, so much as it just happened.”

Adm Gunn marveled at the ingenuity of early American engineers who built power lines to light up communities across the country over 100 years ago. He also cautioned us that much of that early infrastructure is still in place and the hodge-podge nature of its expansion leaves the United States with serious vulnerabilities to disruption and attack. The most poignant example of an electric grid infrastructure failure, especially for those of us living in or near the Sierra Nevada, was the collapse of the PG&E tower blamed for sparking the Camp Fire in Paradise, California in 2018. That fire claimed 85 lives and reduced the entire town to ash and rubble. With 17 of the last 22 most destructive wildfires in the West caused by electrical grid failures, our aging grid infrastructure has become a major national security risk. The PG&E tower that failed was 99 years old with an original design life of 75 years, and there are many more towers around the country that are still operating decades beyond their projected lifetimes.

Weather – fires, floods, wind, extreme storms – is the major disrupter of electric power in the United States, as witnessed by the current flooding in the Central Plains, the Bomb Cyclone storms in the Midwest a few weeks ago, and the Atmospheric River winter storms in California and Nevada this winter. Droughts are also a major contributor to electric power disruptions, a prime example being decreased water levels in Lake Mead available to feed hydroelectric generation at the Hoover Dam. Ample water supplies, from rivers and other sources, are also needed to cool coal- and nuclear-powered electricity generating plants.

Renewable energy resources such as wind and solar that do not depend on water supplies are well-suited to augment power generation during times of drought. Texas is the state with the largest investment in wind energy, with massive wind farms in the Panhandle in the north and on the coast in the south. These two sources of wind energy balance electricity generation diurnally for the State’s independent electrical grid. The scarcity of electricity from fossil fuel and nuclear power during the intense drought a few years ago was compensated for entirely by wind generation, making Texas resilient to the electricity outages that plagued California and other western states.

Natural hazards are far from the only source of electric grid vulnerability. Malicious attacks, both physical and cyber, are increasing in number and sophistication. During the 3-year study period of the 2015 CNA-MAB Report “National Security and Assured U.S. Electrical Power,” there were 357 physical attacks on the U.S. grid infrastructure. One of the highest profile cases was the attack on the Metcalf Power Station south of San Francisco, in which 17 rifle shots were fired, disabling several transformers and knocking out power to Silicon Valley for half a day. Due to a lack of U.S. suppliers for replacement transformers, which cost over $1 million each, the Metcalf power station remained off-line for nearly a year. Even today, transformers are not manufactured in the U.S. Almost all our transformers are produced in South Korea, where manufacturers are overloaded with orders from China and other countries.

During this same time period, there were 14 successful cyberattacks on our grid. These attacks resulted in hackers either denying service to customers or taking control of elements of the grid infrastructure. More worrisome were the hundreds of thousands (perhaps millions) of cyberattack “probes” that also occurred during this time, in which hackers tested for vulnerable access points or grid weaknesses. What can we do to protect against these attacks and build more resilience in electricity supplies across the country? CNA-MAB’s succinct answer,

“We need smart grids.”

The smart grid solution relies on distributing energy generation to areas closer to the consumer. Smaller generating stations that can take advantage of advanced energy resources locally – solar arrays, wind farms, geothermal plants and even small modular nuclear reactors – can feed power to smart grid systems equipped with artificial intelligence algorithms to anticipate and respond to power demands and potential disruptions in real time. Innovative technologies such as nano-tubes, which allow electricity to be stored in and released directly from nano-fibers, have the potential to make smart grids more practical.
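To make the idea concrete, here is a minimal, purely illustrative sketch (my own toy example, not anything from the CNA-MAB reports) of the kind of decision such a controller makes each interval: serve forecast local demand from local generation first, cover shortfalls from storage, and import from the wider grid only as a last resort.

```python
# Toy sketch of a microgrid dispatch decision. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Battery:
    capacity_mwh: float
    charge_mwh: float

def dispatch(local_gen_mw, demand_mw, battery, interval_h=1.0):
    """Return (grid_import_mw, battery) after one dispatch interval."""
    surplus_mwh = (local_gen_mw - demand_mw) * interval_h
    if surplus_mwh >= 0:
        # Store surplus local generation, spilling anything beyond capacity.
        battery.charge_mwh = min(battery.capacity_mwh,
                                 battery.charge_mwh + surplus_mwh)
        return 0.0, battery
    # Shortfall: draw down the battery, then import the remainder.
    draw = min(battery.charge_mwh, -surplus_mwh)
    battery.charge_mwh -= draw
    grid_import_mw = (-surplus_mwh - draw) / interval_h
    return grid_import_mw, battery

# Example: 40 MW of local solar, 55 MW of demand, 20 MWh of storage half full.
b = Battery(capacity_mwh=20.0, charge_mwh=10.0)
imp, b = dispatch(local_gen_mw=40.0, demand_mw=55.0, battery=b)
print(f"grid import: {imp:.1f} MW, battery: {b.charge_mwh:.1f} MWh")
```

A real controller would add forecasting of demand and generation over many intervals, but the basic prioritization of local resources over imports is the point being made above.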

Advanced energy innovations are no longer in the realm of science fiction. The Department of Defense (DoD) is leading the country, and the world, in moving these research concepts to field operations. The MAB has adopted and supports an “all of the above” approach – solar, wind, geothermal, hydroelectric, nuclear, biofuels, etc. – to reaching the goal of emission-free or reduced-emission power generation, as a national security imperative for the country. Adm Gunn explained that among the options for achieving this goal, nuclear power has many advantages, especially small modular reactors. The U.S. Army is testing small modular reactors for use in supporting forward deployments in operating theaters. Generating power in place would alleviate the risks posed by logistics resupply convoys carrying diesel fuels. Illustrating this point, Adm Gunn reminded us that one in eight resupply convoys in Iraq and Afghanistan resulted in a soldier being killed or severely wounded.

The Marines also learned about the benefits of solar energy from Moms and Pops across America who sent roll-out solar panels to troops deployed in Afghanistan. The solar panels were used to charge cell phones and batteries for other communication gear and equipment. Replacing heavy battery loads with light-weight solar panels reduced the soldiers’ packs (typically about 110 lbs for a week’s deployment) by 30 lbs. Less weight made for more agile movement and fewer casualties.

Closer to home, the military relies heavily on renewable energy resources to provide uninterruptible power for mission critical operations. One of the leading examples of this is the three large utility-scale solar arrays at Nellis AFB that power drone operations conducted from Creech and Nellis. The Naval Air Weapons Station at China Lake, California also operates a 180-megawatt geothermal generating plant that provides power for most of the Navy’s weapons and armaments research.

In closing, Adm Gunn summarized the changing energy landscape that MAB has been reporting on for several years. Factors driving these changes include increases in global population and higher demands for energy by a growing middle class, increased electrification of transportation, new technologies for fracking and fossil fuel extraction, and the growing market for renewables. The world’s population, now at 7.7 billion people, is expected to reach 9.4 billion by 2050 and nearly 11 billion by the end of the century. Most of the growth (around 1.5 billion) will be in India and Africa, driving a projected 40% increase in energy demand by 2050. Even if fossil fuels can meet this demand, the environmental and economic costs of extracting and burning fossil fuels may be prohibitive.

As developing countries leap-frog combustion engine technologies in favor of electric vehicles (EVs), the cost of EVs is projected to decrease significantly. This will push more EVs to market in all countries around the world, changing how energy resources are managed.

Today, energy security in the U.S. depends on fossil fuels, and much of our foreign policy is driven by our dependence on oil-producing nations including Saudi Arabia and Venezuela. Despite the current political unrest in Venezuela, the U.S. remains the largest buyer of Venezuelan oil. This is driven by geography and economics. Because of Venezuela’s proximity to oil refineries on the Gulf coast, we can purchase crude oil from them and sell refined products to others at a profit. Production of fossil fuels is driven by price, globally. And the U.S. does not (and will not) control that price.

Energy independence for the United States will only be realized when/if we control the price of our energy sources. Advanced energy development has the potential to move the U.S. from being energy self-sufficient (our current state) to being energy independent by allowing the U.S. to control energy generation costs at home. Although achievable, this goal will take time. Adm Gunn explained that the small share of renewables in the global energy market, compared to fossil fuels, means the U.S. will need to accelerate advanced energy development if it wants to achieve energy independence.

“It would be far better for the United States economy and security if we led the charge for renewable energy research, manufacturing and deployment.”

Is leading renewable energy development really economically advantageous for the United States? The MAB pondered and explored this concept in their studies. Employing solar energy technologies originally pioneered in the U.S., China now sells solar panels to U.S. consumers at lower cost than similar systems produced in the U.S. MAB studies indicate that this short-term gain in cost savings comes at a longer-term cost to our national economy. As succinctly stated in the CNA-MAB 2017 Report, Advanced Energy and National Security,

“As new energy options emerge to meet global demand, nations that lead stand to gain; should the U.S. sit on the sidelines, it does so at considerable risk to our national security.”

That said, Adm Gunn explained how the U.S. can regain global leadership in advanced energy. He recognized that hydroelectric and geothermal energy sources are limited in their development by the availability of natural resources and the cost of large-scale infrastructure. Nuclear power is also stalled in the United States, even as Russia and China are building and selling over 80 new nuclear reactors. The intermittency of solar and wind renewables continues to be a challenge – one the U.S. is well positioned to address.

Reminding us that energy is security, Adm Gunn closed his presentation by noting that energy security choices that the country makes now can enhance our national security and benefit military operations.

Fielding questions on a range of topics, Adm Gunn started by addressing the issue of grid vulnerability from electromagnetic pulses (EMP) caused naturally by solar bursts or intentionally by nuclear weapons. First the bad news. None of the U.S. electric grid, except for a very few isolated elements dedicated to military operations, is hardened against large-scale EMPs (solar or man-made). This vulnerability became evident in the 19th century, when a large solar burst electrified telegraph lines, killing several telegraph operators across the country. Potential high-altitude nuclear weapon detonations by adversarial nations, including Russia and North Korea, also pose a significant risk to the grid. If used, these weapons would also trigger severe retaliation from the U.S., providing a substantial deterrent. On a positive note, the MAB reported that deployment of more distributed energy grids provides resilience to some EMP events by allowing energy to be restored locally much faster than the national grid could be restored.

Adm Gunn addressed several questions related to the deployment of EVs at scale, including how states can compensate for losses in highway funds from lower fuel tax revenues. At present, EVs are only 1-1.5% of vehicle traffic on U.S. highways, so fuel tax losses are still small. As the number of EVs increases, states will need to find other ways to recoup these revenues through forms of “use-taxes.” In response to questions about EV recycling, Adm Gunn cited Germany as an example. A recent German law mandates 100% recycling of all automobile vehicles and parts. German car manufacturers met the challenge and are now deploying advanced manufacturing technologies to reduce waste and increase component recycling.

Lithium availability, especially from mines in Nevada, was also a popular topic. Adm Gunn acknowledged Nevada’s role as a major world supplier; however, he also explained that Bolivia and Argentina are expanding their lithium mining efforts. China is now forging new partnerships with these countries to obtain lithium from producers outside the United States, reducing the demand for lithium from Nevada. Expanding U.S. research on batteries that use alternatives to lithium, including more abundant rare earth elements, could help protect the U.S. against a possible lithium trade war in the future.

Biosketch. Vice Admiral Lee F. Gunn, USN (Ret.), Vice Chairman, CNA’s Military Advisory Board, served for 35 years in the U.S. Navy. His last active duty assignment was Inspector General of the Department of the Navy, where he was responsible for the Department’s overall inspection program and its assessments of readiness, training, and quality of service. Serving in the Surface Navy in a variety of theaters, Gunn rose through the cruiser/destroyer force to command the frigate USS Barbey, then commanded the Navy’s anti-submarine warfare tactical and technical evaluation destroyer squadron, DESRON 31. He later commanded Amphibious Group Three. As Commander of PHIBGRU THREE, he served as the Combined Naval Forces Commander and Deputy Task Force Commander of Combined Task Force United Shield, which conducted the withdrawal of U.N. peacekeeping forces from Somalia. Adm Gunn holds a Bachelor’s degree in Experimental and Physiological Psychology from the University of California, Los Angeles and a Master of Science in Operations Research from the Naval Postgraduate School in Monterey, California.

JC reflections: Energy security is a huge deal; it is hard to argue that it is not a more important near-term priority than emissions reductions to prevent future climate change. The smart way to approach this whole issue is climate-informed energy security.

Energy security actually provides a better argument for wind and solar power in a diverse energy portfolio than reducing CO2 emissions does, since wind and solar don’t depend on water resources (unlike hydro, and unlike nuclear and fossil fuel generation, which require water for cooling). Wind and solar power are, however, sensitive to different types of bad weather (e.g. icing, snowfall, clouds, too much or too little wind).

Thinking that projected climate change should determine energy policy, without careful consideration of energy security, reliability, economics and broader environmental impacts, has the potential to increase societal vulnerability to whatever extremes weather and climate might throw at us, and to reduce overall well-being.

A deposit of teeth and bones recently discovered in an island cave in the Philippines may have belonged to a newly identified species closely related to humans and “unknown previously to science.” [link]

Synoptic weather patterns that bring light winds, clear skies and high humidity result in reef-scale meteorology that appears to have a greater influence on coral bleaching events than the background oceanic warming trend. [link]

CFAN’s early season ENSO forecast is motivated by preparing our seasonal forecast for Atlantic hurricane activity. ENSO forecasts made in spring have traditionally had very low skill owing to the ENSO ‘spring predictability barrier.’

During fall 2018, there was warming in the Central Equatorial Pacific, leading to a weak El Niño Modoki pattern, which impacted the latter part of the Atlantic hurricane season. This transitioned to a weak (conventional) El Niño in February 2019 and the atmospheric anomalies became more consistent with a conventional El Niño pattern.

CFAN’s ENSO forecast analysis is guided by the ECMWF SEAS5 seasonal forecast system and a newly developed statistical forecast scheme based on global climate dynamics analysis.

ENSO statistics

Figure 1 illustrates the recent ENSO history as depicted by monthly Niño 3.4 anomalies from 1980 to February 2019. Highlighted are 20 El Niño Februaries (Niño 3.4 > 0°C), including the most recent (+0.5°C) in February 2019. The Niño 3.4 anomalies surrounding each February El Niño event are plotted in Figure 2, showing the index evolution from the previous July to the following December. The February events that are followed by December El Niño conditions (El Niño persistence) are plotted in red, while those events that reverse to December La Niña conditions are plotted in blue. The Niño 3.4 evolution of 2018-2019 is shown with heavy black markers.
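As a rough illustration of the bookkeeping behind Figures 1 and 2, the sketch below (a minimal pandas example, not CFAN’s actual code; the series layout and month-start indexing are assumptions) selects the February El Niño events from a monthly Niño 3.4 anomaly series and labels each one by whether the following December shows persistence or reversal.

```python
import pandas as pd

def classify_february_el_ninos(nino34: pd.Series) -> pd.DataFrame:
    """nino34: monthly Nino 3.4 SST anomalies (deg C), indexed by the first day of each month."""
    rows = []
    februaries = nino34[(nino34.index.month == 2) & (nino34 > 0.0)]  # February El Nino events
    for date, feb_anom in februaries.items():
        dec_date = pd.Timestamp(year=date.year, month=12, day=1)
        if dec_date not in nino34.index:
            continue  # following December not yet observed
        dec_anom = nino34[dec_date]
        rows.append({
            "year": date.year,
            "feb_anom": feb_anom,
            "dec_anom": dec_anom,
            "outcome": "persistence" if dec_anom > 0 else "reversal",  # red vs blue lines in Figure 2
        })
    return pd.DataFrame(rows)
```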

ENSO behavior in late 2018 is remarkable for a steep increase from slightly negative Niño 3.4 SST anomalies in July to moderately positive anomalies (+1 °C) by October. Typically, fall El Niño intensification occurs with the growth of high-amplitude events that peak around +2°C before undergoing major reversals to La Niña throughout the following calendar year (blue lines in Fig. 2).

The IRI/CPC plume of model ENSO predictions from mid-March 2018 is shown in Figure 1. The latest official CPC/IRI outlook (Figure 2) calls for an 80% chance of El Niño prevailing during Mar-May, decreasing to 60% for Jun-Aug.

CFAN’s ENSO forecast plumes from ECMWF (initialized March 1) are shown in Figure 3, for Niño 1.2, Niño 3, Niño 4, and the Modoki Index. ECMWF shows a peak of Niño 3 in May 2019 and a peak in Niño 1.2 in April, with subsequent declining values. Niño 4 values peak in June, and there is a hint of a return to Modoki (> 0.5) by September.

CFAN’s analysis of the ENSO hindcast skill of the ECMWF SEAS5 seasonal forecast model (Figure 4) shows a correlation coefficient of 0.7 for Niño 3 and 0.79 for Niño 4 forecasts initialized in March for a seven-month forecast horizon (September). For a forecast initialized on March 1, Niño 4 shows greater predictability than Niño 3 for a 6-7 month forecast horizon.

Figure 4: Evaluation of the predictability of the Niño 3 and Niño 4 indices (correlation of observed versus predicted) from ECMWF SEAS5 as a function of initial month and lead-time. From Hirata, Toma and Webster, 2018: Updated quantification of ENSO influence on the U.S. surface climate. https://ams.confex.com/ams/98Annual/webprogram/Paper334884.html
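The skill evaluation summarized in Figure 4 boils down to correlating hindcasts with verifying observations at each lead time. A minimal sketch, assuming (as an illustrative data layout, not CFAN’s) that hindcasts and observations for one initialization month are arranged as years-by-leads arrays:

```python
import numpy as np

def lead_time_skill(pred: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """pred, obs: arrays of shape (n_years, n_leads) for a single initialization month.
    Returns the correlation between predicted and observed index values at each lead time."""
    n_leads = pred.shape[1]
    skill = np.empty(n_leads)
    for lead in range(n_leads):
        skill[lead] = np.corrcoef(pred[:, lead], obs[:, lead])[0, 1]
    return skill
```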

Two methods were used to forecast the seasonal anomalies and evolution of tropical Pacific SSTs during 2019. Niño 3.4 index anomalies (Figure 6) were forecast on the basis of recent February-March atmosphere-ocean anomalies and tendencies that systematically correlate with later ENSO anomalies. Climate precursors were identified in globally-gridded variables in the NCEP-NCAR Reanalysis at 17 vertical levels from the surface to the stratosphere. Additionally, we forecast full global SST fields (Figure 7) with a similar scheme based on the 4 leading Principal Components of global SST variability in each season. Both methods give similar forecasts of ENSO conditions throughout 2019.

Figure 6. Statistical model projections of Niño 3.4 SST in three-month windows from March to December 2019. Black markers show estimates obtained from ensembles of 20 forecast models, with final forecasts indicated by ensemble means in red markers.
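The second scheme described above is, at heart, a principal-component regression. The sketch below is illustrative only (the predictor season, number of PCs, and array layout are assumptions, not CFAN’s configuration): project the training SST anomaly maps onto their leading PCs, regress the target-season Niño 3.4 anomaly on those PCs, and apply the fit to the current year’s map.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pc_regression_forecast(sst_train, nino34_train, sst_now, n_pcs=4):
    """sst_train: (n_years, n_gridpoints) spring SST anomaly maps (training period)
    nino34_train: (n_years,) Nino 3.4 anomaly in the target season (training period)
    sst_now: (n_gridpoints,) the current year's spring SST anomaly map"""
    pca = PCA(n_components=n_pcs)
    pcs = pca.fit_transform(sst_train)                # leading PCs of global SST variability
    model = LinearRegression().fit(pcs, nino34_train)  # regress target-season Nino 3.4 on the PCs
    pc_now = pca.transform(sst_now.reshape(1, -1))
    return float(model.predict(pc_now)[0])             # forecast Nino 3.4 anomaly (deg C)
```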

For background, see these previous posts on climate sensitivity [link]

Here are some possibilistic arguments related to climate sensitivity. I don’t think the ECS example is the best one to illustrate these ideas [see previous post], and I probably won’t include this example in anything I try to publish on this topic (my draft paper is getting too long anyways). But possibilistic thinking does point you in some different directions when pondering the upper bound of plausible ECS values.

5. Climate sensitivity

Equilibrium climate sensitivity (ECS) is defined as the amount of temperature change in response to a doubling of atmospheric CO2 concentrations, after the climate system has reached equilibrium. The issue with regard to ECS is not scenario discovery; rather, the challenge is to clarify the upper bounds of possible and plausible worst cases.

The IPCC assessments of ECS have focused on a ‘likely’ (> 66% probability) range, which has remained mostly unchanged since Charney et al. (1979): between 1.5 and 4.5 °C. The IPCC AR4 (2007) did not provide any insight into a worst-case value of ECS, stating that values substantially higher than 4.5 °C cannot be excluded, with tail values in Figure 9.20 exceeding 10 °C. The IPCC AR5 (2013) more clearly defined the upper range, with a 10% probability of exceeding 6 °C.

Since the IPCC AR5, there has been considerable debate as to whether ECS is on the lower end of the likely range (e.g., < 3 °C) or the higher end of the likely range (for a summary, see Lewis and Curry, 2018). The analysis here bypasses that particular debate and focuses on the upper extreme values of ECS.

High-end values of ECS are of considerable interest to economists. Weitzman (2009) argued that probability density function (PDF) tails of the equilibrium climate sensitivity, fattened by structural uncertainty using a Bayesian framework, can have a large effect on the cost-benefit analysis. Proceeding in the Bayesian paradigm, Weitzman fitted a Pareto distribution to the AR4 ECS values, resulting in a fat tail that produced a probability of 0.05 of ECS exceeding 11 °C, and a 0.01% probability of exceeding 20 °C.
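To see why the choice of distribution matters so much, note that a Pareto (fat-tailed) distribution has the survival function

$$ P(\mathrm{ECS} > x) = \left(\frac{x_m}{x}\right)^{\alpha}, \qquad x \ge x_m , $$

where $x_m$ is the scale and $\alpha$ the shape parameter (generic symbols here; no attempt is made to reproduce Weitzman’s fitted values). The tail decays only polynomially in $x$, rather than exponentially as for a thin-tailed distribution, so even very distant ECS thresholds retain non-negligible probability in the cost-benefit integral.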

The range of ECS values derived from global climate models (CMIP5) that were cited by the IPCC AR5 is between 2.1 and 4.7 °C. To better constrain the values of ECS based on observational information available at the time of the AR5, Lewis and Grunwald (2018) combined instrumental period evidence with paleoclimate proxy evidence using objective Bayesian and frequentist likelihood-ratio methods. They identified a 5–95% range for ECS of 1.1–4.05 °C. Using the same analysis methods, Lewis and Curry (2018) updated the analysis for the instrumental period by extending the period and using revised estimates of forcing to determine a 5–95% range of 1.05–2.7 °C. The observationally-based values should be regarded as estimates of effective climate sensitivity, as they reflect feedbacks over too short a period for equilibrium to be reached.

Values of climate sensitivity exceeding 4.5 °C derived from observational analyses are arguably associated with deficiencies in the diagnostics or analysis approach (e.g. Annan and Hargreaves, 2006; Lewis and Curry, 2015). In particular, use of a non-informative prior (e.g. Jeffreys prior), or a frequentist likelihood-ratio method, narrows the upper tail considerably. However, as summarized by Frame et al. (2006), there is no observational constraint on the upper bound of ECS.

The challenges of identifying an upper bound for ECS are summarized by Stevens et al. (2016) and Knutti et al. (2017). Stevens et al. (2016) describe a systematic approach for refuting physical storylines for extreme values. Stevens et al.’s physical storyline for a very high ECS (> 4.5 °C) comprises three conditions: (i) the aerosol cooling influence in recent decades would have to have been strong enough to offset most of the effect of rising greenhouse gases; (ii) tropical sea-surface temperatures at the time of the last glacial maximum would have to have been much cooler than at present; and (iii) cloud feedbacks from warming would have to be strong and positive.

An interesting challenge to identifying the plausible upper bound for ECS has been presented by a newly developed climate model, the DOE E3SM (Golaz et al. 2019), which includes numerous technical and scientific advances. The model’s value of ECS has been determined to be 5.3 °C, higher than any of the CMIP5 model values and outside the IPCC AR5 likely range. This high value of ECS is attributable to very strong shortwave cloud feedback. The DOE E3SM model’s value of shortwave cloud feedback is larger than that of any CMIP5 model; however, shortwave cloud feedback is weakly constrained by observations and physical understanding. A stronger argument for placing the DOE E3SM value of climate sensitivity in the ‘borderline impossible’ category is Figure 23 in Golaz et al. (2019), which shows that the global mean surface temperature simulated by the model during the period 1960-2000 is as much as 0.5 °C lower than observed, and that since the mid-1990s the simulated temperature rises far faster than the observed temperature. This case illustrates the challenge of refuting scenarios associated with a complex storyline or model, which was noted by Stevens et al. (2016).

An additional issue regarding climate model derived values of ECS was raised by a recent paper by Mauritsen et al. (2019). An intermediate version of the MPI-ESM1.2 global climate model produced an ECS value of ~7 °C, caused by the parameterization of low-level clouds in the tropics. Since this model version produced substantially more warming than observed in the historical period, it was rejected and model cloud parameters were adjusted to target a value of ECS closer to 3 °C, resulting in a final ECS value of 2.77 °C. The strategy employed by Mauritsen et al. (2019) raises the issue as to what extent climate model-derived ECS values are truly emergent, rather than a result of tuning that explicitly or implicitly considers the value of ECS and the match of the model simulations with the historical temperature record.

Was Mauritsen et al. (2019) justified in rejecting the model version with an ECS value of ~7 °C? Is the DOE E3SM value of ECS of 5.3 °C plausible? Observationally-derived values of ECS (e.g. Lewis and Curry, 2018) are inadequate for defining the upper bounds of ECS. There are two types of constraints that in principle can be used: emergent constraints and the Transient Climate Response.

Emergent constraints in principle can help narrow uncertainties in climate model sensitivity through empirical relationships that relate a model’s response to observable metrics. These analyses have mostly focused on cloud processes. The credibility of an emergent constraint relies upon the strength of the statistical relationship, a clear understanding of the mechanisms underlying the relationship, and the accuracy of observations. Further, the most robust emergent constraints are for model parameters that are driven by a single physical process (e.g. Winsberg, 2018). Investigations of integral constraints related to cloud processes have mostly concluded that the climate models with ECS values on the high end of the IPCC AR5 likely range show best agreement with the integral constraints (e.g. Caldwell et al., 2018). However, Caldwell et al. (2018) and Winsberg (2018) caution that additional processes influencing the metric and other biases in the model may affect the analysis. While the robustness and utility of these emergent constraints continues to be investigated and debated, this technique is not very helpful in identifying a plausible upper bound or in rejecting high values such as those obtained by Golaz et al. (2019) and Mauritsen et al. (2019).

The Transient Climate Response (TCR) in principle can be of greater utility in providing an observational constraint on climate sensitivity. TCR is the amount of warming that might occur at the time when CO2 doubles, having increased gradually by 1% each year over a period of 70 years. Relative to the ECS, observationally-determined values of TCR avoid the problems of uncertainties in ocean heat uptake and the fuzzy boundary in defining equilibrium owing to a range of timescales for the various feedback processes. Further, an upper limit to TCR can in principle be determined from observational analyses.
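As a quick check on the definition, a 1% per year compound increase reaches a doubling of CO2 at about year 70:

$$ (1.01)^{70} \approx 2.0 , $$

so the TCR is simply the simulated warming at roughly the 70-year mark, before the deep ocean has equilibrated.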

TCR values cited by the IPCC AR5 have a likely (> 66%) upper bound of 2.5 °C and a < 5% probability of exceeding 3 °C. Knutti et al. (2017; Figure 1) show several relatively recent TCR distributions whose 90th-percentile value exceeds 3 °C. Observationally-derived values of TCR determined by Lewis and Curry (2018) identified the 5–95% range to be 1.0–1.9 K. As discussed by Lewis and Curry (2015) and Lewis and Grunwald (2017), use of a non-informative prior or a frequentist likelihood-ratio method narrows the upper tail considerably. While the methodological details of determining values of TCR from observations continue to be debated, in principle the upper bound of TCR can be constrained by historical observations.

How does a constraint on the upper bound of TCR help constrain the high-end values of ECS? A TCR value of 2.93 °C was determined by Golaz et al. (2019) for the DOE E3SM model, which is well above the 95% value determined by Lewis and Curry (2018), and also above the IPCC AR5 likely range. Table 9.5 of the IPCC AR5 lists the ECS and TCR values of each of the CMIP5 models. If a TCR value of 2 °C is used as the maximum plausible value of TCR based on the Lewis and Curry (2018) analysis, then it seems reasonable to classify climate model-derived values of ECS associated with TCR values ≤ 2.0 °C as verified possibilities.
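A toy version of that screening logic is sketched below; the (ECS, TCR) pairs are placeholders for illustration, not the actual entries of IPCC AR5 Table 9.5.

```python
# Keep model-derived ECS values as 'verified possibilities' only when the same
# model's TCR does not exceed the observationally based cap used in the text.
TCR_CAP = 2.0  # deg C, maximum plausible TCR per Lewis and Curry (2018), as applied here

models = {  # hypothetical (ECS, TCR) pairs in deg C -- not real CMIP5 values
    "model_A": {"ecs": 2.9, "tcr": 1.7},
    "model_B": {"ecs": 4.1, "tcr": 2.0},
    "model_C": {"ecs": 5.3, "tcr": 2.9},
}

verified = {name: m["ecs"] for name, m in models.items() if m["tcr"] <= TCR_CAP}
screened_out = {name: m["ecs"] for name, m in models.items() if m["tcr"] > TCR_CAP}
print("verified possibilities (ECS):", verified)
print("screened out by the TCR cap (ECS):", screened_out)
```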

In light of the cited analyses of ECS (which are not exhaustive), consider the following classification of values of equilibrium climate sensitivity relative to the π-based classifications provided in the possibilistic post, which reflects the expert judgment of one analyst (moi). Note that overlapping values in the different classifications arise from different scenario generation methods associated with different necessity-judgment rationales:

In evaluating the justification of the high-end values of ECS, it is useful to employ the logic of partial positions for an ordered scale of events. It is rational to believe with high confidence a partial position that equilibrium climate sensitivity is at least 1 °C and between 1 and 2.7 °C, which encompasses the strongly verified and corroborated possibilities. This partial position with a high degree of justification is relatively immune to falsification. It is also rational to provisionally extend one’s position to believe values of equilibrium climate sensitivity up to 4.1 °C – the range simulated by climate models whose TCR values do not exceed 2.0 °C – although these values are vulnerable to improvements to climate models and to our observational estimates of TCR, whereby portions of this extended position may prove to be false. A high degree of justification ensures that a partial position is highly immune to falsification and can be flexibly extended in many different ways when constructing a complete position.

The conceivable worst case for ECS is arguably ill-defined; there is no obvious way to positively infer this, and such inferences are hampered by timescale fuzziness between equilibrium climate sensitivity and the larger earth system sensitivity. However, one can refute estimates of extreme values of ECS from fat-tailed distributions > 10 °C (e.g. Weitzman, 2009) as arguably impossible – these reflect the statistical manufacture of extreme values that are unjustified by either observations or theoretical understanding, and extend well beyond any conceivable uncertainty or possible ignorance about the subject.

The possible worst case for ECS is judged here to be 6.0 °C, although this boundary is weakly justified. The only evidence for very high values of ECS comes from climate model simulations with very strong positive cloud feedback (e.g. Mauritsen et al. 2019; Golaz et al. 2019) and statistical analyses that use informative priors. Further examination of the CMIP6 models is needed to assess the causes, outcomes and plausibility of parameters and feedbacks in these models with very high values of ECS before rejecting them as impossible.

With regard to the plausible worst case (lower bound of borderline impossible values) of ECS, consideration was given to the upper bound of verified possibilities (4.1 °C) and also to the time-honored value of 4.5 °C as the upper bound of the ‘likely’ range for ECS. Consideration of a model’s value of TCR in comparison to observationally-derived values of TCR seems to be a useful constraint for assessing the plausibility of a model’s ECS value. However, further investigation is needed to understand the methodological differences in the varying estimates of TCR and the causes of varying relations between TCR and ECS values among different models. This seems to be a more fruitful way forward than the emergent constraints approach.

Given that 4.5 °C was specified by the IPCC AR5 as the upper bound of the likely range (> 66% probability), the judgment here that specifies 4.5 °C as the maximum plausible value of ECS will undoubtedly be controversial. Other analysts may make different judgments and draw a different conclusion on this. Consideration of different rationales for making judgments on the maximum plausible value of ECS would illuminate the underlying issues and rationales for judgments.

Because of the central role that ECS plays in the Integrated Assessment Models used to determine the social cost of carbon – which is largely driven by tail values of ECS – the issue of clarifying the plausible and possible values of ECS is not without consequence.

Frank Bosse provided this Google translation of an interview published in Der Spiegel (print issue 13/2019, pp. 99-101, March 22, 2019).

Excerpts provided below, with some minor editing of the translation.

begin quote:

Global warming forecasts are still surprisingly inaccurate. Supercomputers and artificial intelligence should help. By Johann Grolle

It’s a simple number, but it will determine the fate of this planet. It’s easy to describe, but tricky to calculate. Researchers call it “climate sensitivity”.

It indicates how much the average temperature on Earth warms up when the concentration of greenhouse gases in the atmosphere doubles. Back in the 1970s, it was determined using primitive computer models. The researchers came to the conclusion that its value is likely somewhere between 1.5 and 4.5 degrees.

This result has not changed to this day, about 40 years later. And that is exactly the problem.

The computational power of computers has increased many millionfold, but the prediction of global warming is as imprecise as ever. “It is deeply frustrating,” says Bjorn Stevens of the Hamburg Max Planck Institute for Meteorology.

For more than 20 years he has been researching in the field of climate modeling. It is not easy to convey this failure to the public. Stevens wants to be honest, he does not want to cover up any problems. Nevertheless, he does not want people to think that the latest decades of climate research have been in vain.

“The accuracy of the predictions has not improved, but our confidence in them has grown,” he says. The researchers have examined everything that might counteract global warming. “Now we are sure: it is coming.”

As a decision-making aid for the construction of dykes and drainage channels, the climate models are unsuitable. “Our computers do not even predict with certainty whether the glaciers in the Alps will grow or shrink,” explains Stevens.

The difficulties he and his fellow researchers face can be summed up in one word: clouds. The mountains of water vapor slowly moving across the sky are the bane of all climate researchers.

First of all, it is the enormous diversity of their manifestations that makes clouds so unpredictable. Each type of cloud has a different effect on the climate. And above all: the effect is strong.

Simulating natural processes in a computer is always particularly delicate when small causes produce large effects. For no factor in the climate system is this as true as for clouds. If the fractional coverage of low-level clouds fell by only four percentage points, it would suddenly be two degrees warmer worldwide. That is the entire temperature increase considered just tolerable under the Paris Agreement, produced by four percentage points of cloud cover – no wonder that binding predictions are not easy to make.

In addition, the formation of clouds depends heavily on the local conditions. But even the most modern climate models, which indeed map the entire planet, are still blind to such small-scale processes.

Scientists’ model calculations have become more and more complex over the past 50 years, but the principle has remained the same. Researchers program the earth as faithfully as possible into their computers and specify how much the sun shines in which region of the world. Then they observe how the temperature on their model earth evolves.

The large-scale climatic events are well represented by climate models.

However, problems are caused by the small-scale details: the air turbulence above the sea surface, for example, or the wake vortices that mountains leave in passing fronts. Above all, the clouds: the researchers cannot let water evaporate, rise and condense in their models as it does in reality. They have to make do with more or less plausible rules of thumb.

“Parameterization” is the name of the procedure, but the researchers know that, in reality, this is the name of a chronic disease that has afflicted all of their climate models. Often, different parameterizations deliver drastically divergent results. Arctic temperatures, for example, are sometimes more than ten degrees apart in the various models. This makes any forecast of ice cover seem like mere reading of tea leaves.

“We need a new strategy,” says Stevens. He sees himself as obliged to give better decision support to a society threatened by climate change. “We need new ideas,” says Tapio Schneider from Caltech in Pasadena, California.

The Hamburg Max Planck researcher has therefore turned to another type of cloud, the cumulonimbus. These are mighty thunderclouds, which at times, dark and threatening, rise higher than any mountain range to the edge of the stratosphere.

This type of cloud has a comparatively small influence on the average temperature of the earth, Stevens explains, because they reflect about as much solar radiation back to space as they trap of the heat radiated from the earth. But cumulonimbus clouds are nevertheless an important climatic factor, because these clouds transport energy. If their number or distribution changes, this can contribute to the displacement of large weather systems or entire climate zones.

Above all, one feature makes Stevens’ spectacular cumulonimbus clouds interesting: they are dominated by powerful convection currents whose circulations are large enough in scale to be computed by modern supercomputers. The researcher has high hopes for a new generation of climate models that is currently being launched.

While most of their predecessors laid a grid with a resolution of about one hundred kilometers over the globe for their calculations, these new models have reduced the mesh size to five kilometers or even less. To test their reliability, Stevens, together with colleagues in Japan and the US, carried out a first comparison simulation.

It turned out that these models represent the tropical storm systems quite well. It therefore seems that this critical part of the climate change process will be more predictable in the future. However, the simulated period was initially only 40 days. Stevens knows that to portray climate change, he has to run the models for 40 years. There is still a long way to go.

Stevens, meanwhile, rather fears that it is the cumulonimbus clouds that could unexpectedly cause surprises. Tropical storm systems are notorious for their unpredictability. “The monsoon, for example, could be prone to sudden changes,” he says.

It is possible that the calculations of the fine-mesh computer models will make it possible to predict such climate surprises early. “But it is also conceivable that there are fundamentally unpredictable climatic phenomena,” says Stevens. “Then we can simulate as exactly as we like and still not arrive at any reliable predictions.”

That would be the worst of all possibilities, because then mankind would continue to steer into the unknown.

This post is Part II in the possibility series (for an explanation of the possibilistic approach, see previous post link). This paper also follows up on a recent series of posts about RCP8.5 [link].

3. Scenarios of emissions/concentration

Most worst-case climate outcomes are associated with climate model simulations that are driven by the RCP8.5 representative concentration pathway (or equivalent scenarios in terms of radiative forcing). No attempt has been made to assign probabilities or likelihoods to the various emissions/concentration pathways (e.g. van Vuuren et al. 2011), based on the argument that the pathways are related to future policy decisions and technological possibilities that are considered to be currently unknown.

The RCP8.5 scenario was designed to be a baseline scenario that assumes no greenhouse gas mitigation and no impacts of climate change on society. This scenario family targets a radiative forcing of 8.5 W m-2 from anthropogenic drivers by 2100, which is nominally associated with an atmospheric CO2 concentration of 936 ppm (Riahi et al. 2007). Since the scenario outcome is already specified (8.5 W m-2), the salient issue is whether plausible storylines can be formulated to produce the specified outcome associated with RCP8.5.
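For orientation, the widely used simplified expression for CO2 radiative forcing (Myhre et al., 1998) links the stated concentration to forcing. Taking a pre-industrial baseline of roughly 278 ppm (an assumption for this back-of-envelope check):

$$ \Delta F \approx 5.35 \ln\!\left(\frac{C}{C_0}\right) = 5.35 \ln\!\left(\frac{936}{278}\right) \approx 6.5\ \mathrm{W\,m^{-2}} $$

from CO2 alone, with the remainder of the 8.5 W m-2 target supplied by the scenario’s other anthropogenic forcings.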

A number of different pathways can be formulated to reach RCP8.5, using different combinations of economic, technological, demographic, policy, and institutional futures. These scenarios generally include very high population growth, very high energy intensity of the economy, low technology development, and a very high level of coal in the energy mix. Van Vuuren et al. (2011) report that RCP8.5 leads to a forcing level near the 90th percentile for the baseline scenarios, but a literature review at that time was still able to identify around 40 storylines with a similar forcing level.

Storylines for the RCP8.5 scenario and its equivalents have been revised with time as our background knowledge changes. To account for lower estimates of future world population growth and much lower outlooks for emissions of non-CO2 gases, more CO2 must be released to the atmosphere to reach 8.5 W m-2 by 2100 (Riahi et al., 2017). For the forthcoming IPCC AR6, the comparable SSP5-8.5 scenario is associated with an atmospheric CO2 concentration of almost 1100 ppm by 2100 (O’Neill et al. 2016), which is a substantial increase relative to the 936 ppm reported by Riahi et al. (2007).

As summarized by O’Neill et al. (2016) and Kriegler et al. (2017), the SSP5-8.5 baseline scenarios exhibit rapid re-carbonization, with very high levels of fossil fuel use (particularly coal). The plausibility of the RCP8.5-SSP5 family of scenarios is increasingly being questioned. Ritchie and Dowlatabadi (2018) challenge the bullish expectations for coal in the SSP5-8.5 scenarios, which are counter to recent global energy outlooks. They argue that the ‘return to coal’ scenarios exceed today’s knowledge of conventional reserves. Wang et al. (2017) have also argued against the plausibility of the existence of extensive reserves of coal and other easily-recoverable fossil fuels to support such a scenario.

Most importantly, Riahi et al. (2017) found that only one baseline scenario of the full set (SSP5) reaches radiative forcing levels as high as those of RCP8.5 (compared with the roughly 40 cited by van Vuuren et al. 2011). This finding suggests that 8.5 W m-2 can only emerge under a very narrow range of circumstances. Ritchie and Dowlatabadi (2018) note that further research is needed to determine whether plausible high-emission reference cases consistent with RCP8.5 could be developed with storylines that do not lead to re-carbonization.

Given the socio-economic nature of most of the assumptions entering into the SSP-RCP storylines, it is difficult to argue that the SSP5-RCP8.5 scenarios are impossible. However, numerous issues have been raised about the plausibility of this scenario family. Given the implausibility of re-carbonization scenarios in light of current fertility (e.g. Samir and Lutz, 2014) and technology trends, as well as constraints on conventional coal reserves, a categorization of RCP8.5 as ‘borderline impossible’ is justified based on our current background knowledge.

Based on this evidence, Ritchie and Dowlatabadi (2017) conclude that RCP8.5 should not be used as a benchmark for future scientific research or policy studies. Nevertheless, the RCP8.5 family of scenarios continues to be widely used, and features prominently in climate change assessments (e.g. CSSR, 2017).

Are all of the ‘worst-case’ climate scenarios and outcomes described in assessment reports, journal publications and the media plausible? Are some of these outcomes impossible? On the other hand, are there unexplored worst-case scenarios that we have missed, that could turn out to be real outcomes? Are there too many unknowns for us to have confidence that we have credibly identified the worst case? What threshold of plausibility or credibility should be used when assessing these extreme scenarios for policy making and risk management?

I’m working on a new paper that explores these issues by integrating climate science with perspectives from the philosophy of science and risk management. The objective is to provide a broader framing of the 21st century climate change problem in context of how we assess and reason about worst-case scenarios. The challenge is to articulate an appropriately broad range of future outcomes, including worst-case outcomes, while acknowledging that the worst-case can have different meanings for a scientist than for a decision maker.

This series will be in four parts, with the other three applying these ideas to the worst case scenarios for:

emissions/concentration

climate sensitivity

sea level rise

3. Possibilistic framework

In evaluating future scenarios of climate change outcomes for decision making, we need to assess the nature of the underlying uncertainties. Knight (1921) famously distinguished between the epistemic modes of certainty, risk, and uncertainty as characterizing situations where deterministic, probabilistic or possibilistic foreknowledge is available.

There are some things about climate change that we know for sure. For example, we are certain that increasing atmospheric carbon dioxide will act to warm the planet. As an example of probabilistic understanding of future climate change, for a given increase in sea surface temperatures we can assign meaningful probabilities to the expected increase in hurricane intensity (e.g. Knutson and Tuleya, 2013). There are statements about the future climate to which we cannot reliably assign probabilities. For example, no attempt has been made to assign probabilities or likelihoods to different emissions/concentrations pathways for greenhouse gases in the 21st century (e.g. van Vuuren et al, 2011).

For a given emissions/concentration pathway, does the multi-model ensemble of simulations of the 21st century climate used in the IPCC assessment reports provide meaningful probabilities? Stainforth et al. (2007) provide a convincing argument that model inadequacy and an inadequate number of simulations in the ensemble preclude producing meaningful probabilities from the frequency of model outcomes of future climate states. Nevertheless, as summarized by Parker (2010), it is becoming increasingly common for results from climate model simulations to be transformed into probabilistic projections of future climate, using Bayesian and other techniques.

Where probabilistic prediction fails, foreknowledge is possibilistic – we can judge some future events to be possible, and others to be impossible. The theory of imprecise probabilities (e.g. Levi 1980) can be considered as an intermediate mode between probabilistic and possibilistic prediction. However, imprecise probabilities require credible upper and lower bounds for the future outcomes, including the worst-case.

Possibility theory is an uncertainty theory devoted to the handling of incomplete information that can capture partial ignorance and represent partial beliefs (for an overview, see Dubois and Prade, 2011). The relevance of analyzing uncertainty with possibility theory is better appreciated when evidence about events is unreliable or when a prediction or conclusion is difficult to reach due to insufficient information. Possibility theory distinguishes what is necessary and possible from what is impossible. Possibility theory has been developed in two main directions: the qualitative and quantitative settings. The qualitative setting is the focus of the analysis presented here.

Possibility theory represents the state of knowledge of a state of affairs or outcome, distinguishing what is plausible from what is less plausible, what is the normal course of things from what is not, and what is surprising from what is expected. In possibility theory, the function π(U) distinguishes an event that is possible from one that is impossible:

The necessity function N(U) evaluates to what extent the event is certainly implied by the status of our knowledge:

N(U) = 1: U is necessary, certainly true; implies p(U) = 1

N(U) = 0: U is unnecessary; implies p(U) is unconstrained
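For reference, since the post’s formula for π(U) is not reproduced above, the textbook possibility-necessity relations (e.g. Dubois and Prade, 2011) are:

$$ N(U) = 1 - \pi(\bar{U}), \qquad N(U) > 0 \;\Rightarrow\; \pi(U) = 1, \qquad \max\big(\pi(U), \pi(\bar{U})\big) = 1 . $$

These dual functions formalize the qualitative distinction used here between what is certainly implied by our knowledge (necessity) and what is merely consistent with it (possibility).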

Possibility theory has seen little application to climate science. Betz (2010) provided a conceptual framework that distinguishes different categories of possibility and necessity to convey our uncertain knowledge about the future, using predictions of future climate change as an example. In this context, Betz defines ‘possibility’ to mean consistency with our relevant background knowledge – referred to by Levi (1980) as ‘serious possibility.’

Betz (2010) classified possible events to fall into two categories: (i) verified possibilities, i.e. statements which are shown to be possible, and (ii) unverified possibilities, i.e. events that are articulated, but neither shown to be possible nor impossible. The epistemic status of verified possibilities is higher than that of unverified possibilities; however, the most informative scenarios for risk management may be the unverified possibilities.

A useful strategy for categorizing ‘degrees of necessity’ is provided by the plausibility measures articulated by Friedman and Halpern (1995) and Huber (2008). Measures of plausibility incorporate the following notions of uncertainty:

Plausibility of an event is inversely related to the degree of surprise associated with the occurrence of the event;

Notions of conditional plausibility of an event A, given event B;

Hypotheses are confirmed incrementally for an ordered scale of events, supporting notions of partial belief.

Guided by the frameworks established by Betz (2010), Friedman and Halpern (1995) and Huber (2018), future climate outcomes are categorized here in terms of plausibility and degrees of justification (necessity). A high degree of justification (associated with a high p value) implies high robustness and relative immunity to falsification or rejection. Different classifications and associated p values can be articulated, but this categorization serves to illustrate applications of the concepts. Below is a classification of future climate outcomes used in this paper:

The contingent possibility category is related to Shackle’s (1961) notion of conditional possibility, whereby the degree of surprise of a conjunction of two events A and B is equal to the maximum of the degree of surprise of A, and of the degree of surprise of B, should A prove true.

This possibility scale does not map directly to probabilities; a high value of possibility (p) does not indicate a corresponding high probability value, but rather shows that a probable event is indeed possible and also that an impossible event is not probable.

3.1 Scenario justification

As a practical matter for considering policy-relevant outcomes (scenarios) of future climate change and its impacts, how are we to evaluate whether an outcome is possible or impossible? In particular, how do we assess the possibility of big surprises or black swans?

If the objective is to capture the full range of policy-relevant outcomes and to broaden the perspective on the concept of scientific justification, then both confirmation and refutation strategies are relevant and complementary. The difference between confirmation and refutation can also be thought of in the context of the allocation of burdens of proof (e.g. Curry, 2011c). Consider a contentious outcome (scenario), S. For confirmation, the burden of proof falls on the party that says S is possible. By contrast, for refutation, the party denying that S is possible carries the burden of proof. Hence confirmation and refutation play complementary roles in outcome (scenario) justification.

The problem of generating a plethora of potentially useless future scenarios is avoided by subjecting the scenarios to an assessment as to whether the scenario is deemed possible or impossible, based on our background knowledge. Section 2 addressed how black swan or worst-case scenarios can be created; but how do we approach refuting extreme scenarios or outcomes as impossible or implausible? Extreme scenarios and their outcomes can be evaluated based on the following criteria:

Evaluation of the possibility of each link in the storyline used to create the scenario.

Evaluation of the possibility of the outcome and/or the inferred rate of change, in light of physical or other constraints.

Assessing the strength of background knowledge is an essential element in assessing the possibility or impossibility of extreme scenarios. Extreme scenarios are by definition at the knowledge frontier. Hence the background knowledge against which extreme scenarios and their outcomes are evaluated is continually changing, which argues for frequent re-evaluation of worst-case scenarios and outcomes.

Judgments about what is possible or impossible ultimately rest on expert assessment of this background knowledge. This raises several questions: Which experts, and how many? By what methods is the expert judgment formulated? What biases enter into the expert judgment?

Expert judgment encompasses a wide variety of techniques, ranging from a single undocumented opinion, to preference surveys, to formal elicitation with external validation (e.g. Oppenheimer et al., 2016). Serious disagreement among experts as to whether a particular scenario (outcome) is possible or impossible justifies a scenario classification of ‘borderline impossible.’

3.3 Worst-case classification

On topics where there is substantial uncertainty and/or a rapidly advancing knowledge frontier, experts disagree on what outcomes they would categorize as a ‘worst case,’ even when considering the same background knowledge and the same input parameters/constraints.

For example, consider the expert elicitation conducted by Horton et al. (2014) on 21st century sea level rise, which reported the results from a broad survey of 90 experts. One question related to the expected 83rd percentile of sea level rise for a warming of 4.5 °C, in response to RCP8.5. While overall the elicitation provided results similar to those cited by the IPCC AR5 (around 1 m), Figure 2 of Horton et al. (2016) shows that 6 of the respondents placed the 83rd percentile higher than 2.5 m, with the highest estimate exceeding 6 m.

While experts will inevitably disagree on what constitutes a worst case when the knowledge base is uncertain, a classification is presented here that is determined by the extent to which borderline impossible parameters or inputs are employed in developing the scenario. This classification is inspired by the Queen in Alice in Wonderland: “Why, sometimes I’ve believed as many as six impossible things before breakfast.” This scheme articulates three categories of worst-case scenarios:

Conceivable worst case: formulated by incorporating all worst-case parameters/inputs (above the 90th or 95th percentile range) into a model; does not survive refutation efforts.

Plausible worst case: p just above p = 0.1. Includes at most one borderline impossible assumption in model-derived scenarios.

A few comments are in order to avoid oversimplification of this classification for a specific application. Simply counting the number of borderline impossible parameters/inputs used in deriving a scenario can be misleading if these inputs are of little importance in determining the scenario outcome. If these borderline impossible parameters/inputs are independent, then the necessity (and likelihood) of the scenario is reduced relative to the necessity of each individual parameter/outcome. If the collection of borderline impossible parameters/inputs produces nonlinear feedbacks or cascades, then it is conceivable that these parameters/inputs have a cancelling, rather than a compounding, effect on the extremity of the outcome. Model sensitivity tests can assess to what extent a collection of borderline impossible parameters/inputs contributes to the extremity of the outcome.

The conceivable worst-case scenario is of academic interest only; the plausible and possible worst-case scenarios are of greater relevance for policy and risk management. In the following three sections, applications of these ideas about worst-case scenarios are applied to emissions/concentrations, climate sensitivity and sea level rise. Apart from their importance in climate science and policy, these three topics are selected to illustrate different types of constraints and uncertainties in assessing worst-case outcomes.

JC note: I look forward to your comments/feedback. The next installment will assess RCP8.5 using these criteria.

” ‘I believe in science’ is an homage given to science by people who generally don’t understand much about it. Science is used here not to describe specific methods or theories, but to provide a badge of tribal identity. Which serves, ironically, to demonstrate a lack of interest in the guiding principles of actual science.” – Robert Tracinski

For some years now, one of the left’s favorite tropes has been the phrase “I believe in science.” Elizabeth Warren stated it recently in a pretty typical form: “I believe in science. And anyone who doesn’t has no business making decisions about our environment.” This was in response to news that scientists who are skeptical of global warming might be allowed to have a voice in shaping public policy.

[I]t captures a lot of what annoys the rest of us about the “I believe in science” crowd. It reduces a serious intellectual issue—a whole worldview and method of thought—to a signifier of social group identity.

Some people may use “I believe in science” as vague shorthand for confidence in the ability of the scientific method to achieve valid results, or maybe for the view that the universe is governed by natural laws which are discoverable through observation and reasoning.

But the way most people use it today—especially in a political context—is pretty much the opposite. They use it as a way of declaring belief in a proposition which is outside their knowledge and which they do not understand.

There are a lot of people these days who like things that sound science-y, but have little patience for actual science.

The problem is the word “belief.” Science isn’t about “belief.” It’s about facts, evidence, theories, experiments. You don’t say, “I believe in thermodynamics.” You understand its laws and the evidence for them, or you don’t. “Belief” doesn’t really enter into it.

So as a proper formulation, saying “I understand science” would be a start. “I understand the science on this issue” would be better. That implies that you have engaged in a first-hand study of the specific scientific questions involved in, say, global warming, which would give you the basis to support a conclusion. If you don’t understand the basis for your conclusion and instead have to accept it as a “belief,” then you don’t really know it, and you certainly are in no position to lecture others about how they must believe it, too.

Because science is about evidence, this also means that it carries no “authority.” The motto of the Royal Society is nullius in verba—”on no one’s word”—which is intended to capture the “determination of Fellows to withstand the domination of authority and to verify all statements by an appeal to facts determined by experiment.”

That’s the opposite of what “I believe in science” is intended to convey. “I believe in science” is meant to use the reputation of “science” in general to give authority to one specific scientific claim in particular, shielding it from questioning or skepticism.

“I believe in science” is almost always invoked these days in support of one particular scientific claim: catastrophic anthropogenic global warming. And in support of one particular political solution: massive government regulations to limit or ban fossil fuels.

The purpose of the trope is to bypass any meaningful discussion of these separate questions, rolling them all into one package deal–and one political party ticket.

The trick is to make it look as though disagreement on any of these specific questions is equivalent to a rejection of the scientific method and the scientific worldview itself.

But when people in politics proclaim “I believe in science” what they’re doing is proclaiming a belief in the current consensus. Do you think Elizabeth Warren and Andrew Yang have given serious study to climate science? No, they believe in global warming and its preferred political solutions because they have been told that a consensus of scientists believes it (and because this belief confirms their own political biases). Notice that Warren’s statement was about a panel of scientists who are skeptical of global warming, led by a distinguished physicist, William Happer. When does a scientist count as someone who “doesn’t believe in science”? When he departs from the “consensus.”

end quote.

Pseudoscience

The ‘I believe in science’ crowd is very enthusiastic about labelling as ‘pseudoscience’ any actual science that has implications that are counter to their political beliefs.

Sources in the Conspiracy-Pseudoscience category may publish unverifiable information that is not always supported by evidence. These sources may be untrustworthy for credible/verifiable information, therefore fact checking and further investigation is recommended on a per article basis when obtaining information from these sources. See all Conspiracy-Pseudoscience sources.

Factual Reporting: MIXED

Notes: Climate Etc is the blog of Judith A. Curry who is an American climatologist and former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. The Climate Etc blog publishes news and information regarding climate science and climate change. The majority of articles minimize or deny the impacts of human driven climate change. According to a Scientific American interview, Judith Curry admits to receiving funding from the fossil fuel industry. This article also labeled her a “climate heretic.” Judith Curry has also been invited by Republicans to testify at climate change hearings regarding alleged uncertainties regarding man-made climate change. Climate Feedback, a climate change fact checker, debunked much of Curry’s testimonials. Further, Skeptical Science has labeled Judith Curry as a “Climate Misinformer.” Judith Curry is also cited in a Pants on Fire claim by Politifact. Overall, we rate Climate Etc as a pseudoscience website due to its promotion of anti-climate science propaganda. (D. Van Zandt 10/14/2017) Updated (1/28/2018)

“The Columbia Journalism Review describes Media Bias/Fact Check as an amateur attempt at categorizing media bias and the owner of the site, Dave Van Zandt, as an “armchair media analyst.” Van Zandt describes himself as someone with “more than 20 years as an arm chair researcher on media bias and its role in political influence.” The Poynter Institute notes, “Media Bias/Fact Check is a widely cited source for news stories and even studies about misinformation, despite the fact that its method is in no way scientific.” ”

With regards to me personally, I have seen numerous statements on twitter or wherever that I have ‘abandoned science’ or have ‘stopped being a scientist’ since I began publicly questioning aspects of the so-called scientific consensus on climate change (whatever the ‘consensus’ means at any given time to any particular person).

Tracinski’s essay does a superb job of identifying the intellectual laziness, tribalism and politics surrounding these ignorant ‘arbiters of science,’ who are easily identified by their statements ‘I believe in science.’

“For decades, scientists and policymakers have framed the climate-policy debate in a simple way: scientists analyse long-term goals, and policymakers pretend to honour them. Those days are over. Serious climate policy must focus more on the near-term and on feasibility.” – Y. Xu, V. Ramanathan, D. Victor

No surprise that the article sounds the ‘alarm’, accelerated warming, speeding freight train, and all that.

Towards the end of the article, the authors make some very astute recommendations regarding climate policy, which are reproduced here in their entirety:

Four Fronts

“Scientists and policymakers must rethink their roles, objectives and approaches on four fronts.

Assess science in the near term. Policymakers should ask the IPCC for another special report, this time on the rates of climate change over the next 25 years. The panel should also look beyond the physical science itself and assess the speed at which political systems can respond, taking into account pressures to maintain the status quo from interest groups and bureaucrats. Researchers should improve climate models to describe the next 25 years in more detail, including the latest data on the state of the oceans and atmosphere, as well as natural cycles. They should do more to quantify the odds and impacts of extreme events. The evidence will be hard to muster, but it will be more useful in assessing real climate dangers and responses.

Rethink policy goals. Warming limits, such as the 1.5 °C goal, should be recognized as broad planning tools. Too often they are misconstrued as physical thresholds around which to design policies. The excessive reliance on ‘negative emissions technologies’ (that take up CO2) in the IPCC special report shows that it becomes harder to envision realistic policies the closer the world gets to such limits. It’s easy to bend models on paper, but much harder to implement real policies that work.

Realistic goals should be set based on political and social trade-offs, not just on geophysical parameters. They should come out of analyses of costs, benefits and feasibility. Assessments of these trade-offs must be embedded in the Paris climate process, which needs a stronger compass to guide its evaluations of how realistic policies affect emissions. Better assessment can motivate action but will also be politically controversial: it will highlight gaps between what countries say they will do to control emissions, and what needs to be achieved collectively to limit warming. Information about trade-offs must therefore come from outside the formal intergovernmental process — from national academies of sciences, subnational partnerships and non-governmental organizations.

Design strategies for adaptation. The time for rapid adaptation has arrived. Policymakers need two types of information from scientists to guide their responses. First, they need to know what the potential local impacts will be at the scales of counties to cities. Some of this information could be gleaned by combining fine-resolution climate impact assessments with artificial intelligence for ‘big data’ analyses of weather extremes, health, property damage and other variables. Second, policymakers need to understand uncertainties in the ranges of probable climate impacts and responses. Even regions that are proactive in setting adaptation policies, such as California, lack information about the ever-changing risks of extreme warming, fires and rising seas. Research must be integrated across fields and stakeholders — urban planners, public-health management, agriculture and ecosystem services. Adaptation strategies should be adjustable if impacts unfold differently. More planning and costing is needed around the worst-case outcomes.

Understand options for rapid response. Climate assessments must evaluate quick ways of lessening climate impacts, such as through reducing emissions of methane, soot (or black carbon) and HFCs. Per tonne, these three ‘super pollutants’ have 25 to thousands of times the impact of CO2. Their atmospheric lifetimes are short — in the range of weeks (for soot) to about a decade (for methane and HFCs). Slashing these pollutants would potentially halve the warming trend over the next 25 years.”

JC reflections

Although these recommendations come from a position of ‘alarm’, I agree with each of these recommendations, since each can be justified in terms of ‘no regrets’ actions.

I most particularly agree with the first recommendation on focusing on climate variability/change over the next 25 years. This is the time scale that is of greatest relevance for city/regional planning and for industry/enterprise. While recognizing the key importance of natural climate variability on this time scale, the authors miss what is likely to be the most significant event during this period: a transition to the cold phase of the Atlantic Multidecadal Oscillation.

The second recommendation recognizes the farce of the current international policy on emissions reductions.

Adaptation makes a lot of sense, and the adaptation objectives are mostly the same whether the extreme events or trends are caused by humans or by nature.

And finally, the climate rapid response plan. I don’t know why this hasn’t received more traction, particularly related to soot.

How sensitive is the Earth’s climate to greenhouse gases? Speaking about carbon dioxide in particular, how much would air temperatures increase if we doubled atmospheric concentrations of said gas?

This question lies at the heart of climate science. It is to climate what GDP is to economics – the central concept. So central that it’s very difficult to have a coherent discussion of climate issues if one does not know about sensitivity. But there is a crucial difference between these measures: the layman is somewhat familiar with GDP but not at all familiar with climate sensitivity.

Okay, many or most people cannot tell you exactly what GDP is. But many others will give you a crude, approximately correct definition. Furthermore, even those who cannot define it intuitively get the implications of both slow and fast GDP growth, and could tell you at least its order of magnitude (i.e. it happens at rates of 1% or 3% a year; not 0.1% or 30%).

As for climate sensitivity, I can report anecdotal experience from the Madrid area: not a single person that I’ve talked to has a clue what it is. I don’t mean that they fail to provide the technical definition – they don’t even know what it’s about. Ask about sensibilidad climática and people will think you’re talking about how humans react to temperatures, not how the atmosphere reacts to greenhouse gases.

It’s a pity because, at its core, climate sensitivity is an easy concept. And the way it’s calculated is simpler than the way GDP is. You just need to grasp the interplay between:

Temperatures: duh

Forcing: it would be easier if we just called it “impact” or something to that effect, but still, not that hard. The more forcing, the more warming.

Ocean heat uptake and the corresponding energy imbalance: perhaps the least intuitive part of the calculation. Nevertheless, it’s only necessary in order to estimate equilibrium climate sensitivity (ECS); for the transient climate response (TCR) all you need are temperatures and forcing.

People even talk about climate sensitivity without realizing it. For instance, one common argument among climate skeptics is that emissions of CO2 were small prior to 1950, and thus the warming that took place before that year could not have been due to man-made CO2 emissions. But what people making this argument mean, even without putting it that way, is that if CO2 drove the early 20th century warming then climate sensitivity must have been high. And yet, the evolution of temperatures since 1950 suggests a lower climate sensitivity. Skeptics making this argument are implying that it makes no sense for climate sensitivity to have been much lower since 1950 than before, and therefore something else must have been driving warming before 1950.

The problem with arguments of this kind (from all sides of the debate) is not that they use numbers, but that they’re not numerical enough. Or perhaps I should say rigorous enough. Now, I don’t have anything in particular against the early-20th-century-warming argument; I find it to be a good example because it’s common. There are three issues plaguing this kind of argument:

What matters for warming is not CO2 emissions or concentrations, but radiative forcing. This may seem obvious but even some sophisticated authors don’t actually look at forcing – here’s a recent example.

Proponents of the argument usually don’t even try to quantify the non-CO2 forcings. Okay, there are probably natural forcings (e.g. clouds) that we cannot quantify because there are no records until the last couple decades. But that doesn’t mean you shouldn’t account for the known forcings! Methane has a warming effect, aerosols have a cooling effect, etc. You have to take these into account if you want to know exactly why the climate did what it did.

You then have to compare the evolution of these two inputs (forcing and temperatures) in order to arrive at an output, which is the amount of warming per unit of forcing. This ratio is essentially the TCR. Most proponents of the early-20th-century-warming argument cannot calculate a TCR because they don’t use radiative forcing as an input, and some don’t really check temperatures either (they just eyeball a temperature chart).
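To make that ratio concrete, here is a minimal sketch of the energy-budget arithmetic in Python. The numbers are invented placeholders rather than values from any particular dataset; the 3.7 W/m² figure is the conventional approximate forcing for a doubling of CO2.

```python
# Minimal energy-budget sketch: warming per unit of forcing, scaled to a CO2 doubling.
# All input numbers below are illustrative placeholders, not values from any dataset.

F_2XCO2 = 3.7  # W/m^2, approximate forcing from doubling atmospheric CO2

def transient_climate_response(delta_T, delta_F):
    """TCR-style estimate: observed warming divided by the change in forcing,
    expressed as the warming for a CO2-doubling's worth of forcing."""
    return F_2XCO2 * delta_T / delta_F

def equilibrium_climate_sensitivity(delta_T, delta_F, delta_N):
    """ECS-style estimate: same ratio, but the forcing change is reduced by the
    change in the Earth's energy imbalance (delta_N, mostly ocean heat uptake)."""
    return F_2XCO2 * delta_T / (delta_F - delta_N)

# Hypothetical differences between a base period and a final period:
delta_T = 0.8  # K, change in mean surface temperature anomaly
delta_F = 2.0  # W/m^2, change in total (all-agent) radiative forcing
delta_N = 0.6  # W/m^2, change in planetary energy imbalance

print("TCR ~", round(transient_climate_response(delta_T, delta_F), 2), "K")
print("ECS ~", round(equilibrium_climate_sensitivity(delta_T, delta_F, delta_N), 2), "K")
```

Note that the energy imbalance only enters the ECS line, which is why ocean heat uptake is unnecessary for estimating TCR.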

Nevertheless, it’s easy to understand why people would eyeball temperature charts, and easier to understand why they don’t look at radiative forcing. Even though the websites for temperature records are available to anyone with an internet connection, they require some sacrifice in terms of learning to navigate the data; if you don’t know that Wood for Trees exists, figuring out the exact temperature change between two points in time can be difficult.

As for radiative forcing, the “official” sources (e.g. the IPCC AR5 report) are updated every five or six years; if you want an estimate between those, you have to check the scientific literature. And unlike the Met Office or GISS, there isn’t any organization regularly pushing out press releases on what the latest forcing levels are. There is also the issue of aggregating the dozen or so different forcings into a single time series – one more obstacle for anyone who wants to calculate sensitivity.

Wouldn’t it be great if an app could check all this? You tell the app what years you’re interested in, and the app gives you:

Temperatures or, more accurately, temperature anomalies

Forcing levels. Not just for CO2, but for the aggregate of all forcings we more or less know about.

The app could also tell you the difference in these measures between two periods; that is to say, it could inform the user that forcing between two points in time increased by A, temperatures increased by B, and the energy imbalance increased by C. Going even further, the app could then spit out estimates of TCR and ECS.

Well, such an app now exists.

Clisense: an app that does all the climate sensitivity math for you

If you google “climate sensitivity calculator” you’ll find several websites that use this term. However, they don’t actually do what I mentioned in the above section. This one, for example, simply shows how much temperatures will go up depending on climate sensitivity and CO2 concentrations (which are prescribed by the user); it’s actually a temperature calculator.

So, to the best of my knowledge, Clisense is the first app that allows the user to estimate climate sensitivity. The user selects two periods, which can actually be single years if you so wish (just select the same year as both the start and the end of a given period). The app will then estimate TCR and ECS while showing each step of the calculation. This allows users greater insight into just how we “know” that ECS is this low or that high.

Additionally, the app asks the user to prescribe:

How much of the Earth’s energy imbalance is made up by ocean heat uptake. The IPCC’s AR5 report estimated 93% of planetary heat uptake was oceanic, and that is the default value used by Clisense, but this percentage is not totally certain.

How “efficacious” the different forcings are. For the most part there is no reason to believe some forcings have greater efficacy than others, but the app allows users to play with the numbers and see how estimates vary. For instance, how would our estimates of sensitivity change if aerosols had greater efficacy than CO2?
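As a rough illustration of how those two user-prescribed settings could enter the arithmetic, here is a sketch with hypothetical component forcings and heat-uptake numbers; Clisense’s actual internals may be organized differently.

```python
# Sketch of how forcing efficacies and the ocean-heat fraction could enter the
# sensitivity calculation. All component values are hypothetical placeholders.

F_2XCO2 = 3.7  # W/m^2

# Hypothetical changes in individual forcings between two periods (W/m^2)
forcing_changes = {"co2": 1.4, "other_ghg": 0.5, "aerosols": -0.4, "solar_volcanic": 0.1}

# User-prescribed efficacies: a weight applied to each forcing agent (default 1.0)
efficacies = {"co2": 1.0, "other_ghg": 1.0, "aerosols": 1.0, "solar_volcanic": 1.0}

def total_effective_forcing(changes, eff):
    return sum(changes[k] * eff.get(k, 1.0) for k in changes)

def imbalance_from_ocean_uptake(delta_ocean_uptake, ocean_fraction=0.93):
    """Scale the change in ocean heat uptake up to a whole-Earth energy imbalance,
    using the prescribed share of planetary heat uptake that goes into the ocean."""
    return delta_ocean_uptake / ocean_fraction

delta_T = 0.8                                  # K, hypothetical temperature change
delta_F = total_effective_forcing(forcing_changes, efficacies)
delta_N = imbalance_from_ocean_uptake(0.55)    # from a 0.55 W/m^2 ocean uptake change

tcr = F_2XCO2 * delta_T / delta_F
ecs = F_2XCO2 * delta_T / (delta_F - delta_N)
print(f"Forcing change {delta_F:.2f} W/m^2 -> TCR ~ {tcr:.2f} K, ECS ~ {ecs:.2f} K")
```

With all efficacies at 1 and the ocean fraction at the default 0.93, this collapses to the plain energy-budget calculation; changing either setting shifts the denominator and hence the resulting estimates.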

The numbers will be meaningless to the average online Joe, so I also made this explanatory website. Having two websites is not the most elegant solution, but custom domains on Shiny Apps go for $300 a month so it will have to do.

Only as good as the data that goes in

To be clear, Clisense estimates climate sensitivity according to a variety of inputs. If our data on temperatures, forcing, and/or energy imbalance were significantly wrong, then the app’s estimates would also be wrong. Of course developing better estimates of all three inputs is one of the main goals in climate science, but the app cannot “guarantee” that the data is correct.

For temperature, I used HadCRUT as it’s the record I’m most familiar with. Going forward, at a minimum I would like to add the Cowtan & Way version, which shows greater warming.

For forcings, I took last year’s Lewis & Curry paper, which has data up to 2016. Since that paper also used HadCRUT for one of its set of results (for the other it used Cowtan & Way), it’s possible to replicate at least one of the paper’s TCR values. Clisense uses only arithmetic for now; I don’t calculate any confidence interval, I don’t use any Bayesian prior, etc. But the result Clisense gives is the same as in Lewis & Curry, which proves the paper’s estimates aren’t biased down by its use of Bayesian priors, as some online commentators had argued.

For ocean heat uptake, the source is Zanna et al 2019. This paper was really key for the app as it’s the only source I know of that offers yearly estimates going back to the 19th century; other ocean temperature records only go back to 1950 or so. Besides, Zanna et al offers a full-depth estimate of ocean heat uptake; using other datasets often involves adding or subtracting different sources to arrive at a complete estimate.

Zanna et al is the data source used in the app that I have the greatest reservations about, because it shows a rate of ocean heat uptake of 0.3 or 0.4 W/m² going all the way back to 1930. If you run the numbers with an initial period like 1930-1950 and a final period like 2007-2016, there is virtually no increase in the rate of ocean heat uptake. This is hard to believe, as man-made forcing increased very strongly between 1930 and 2016. And the result is that, for periods like those, the ECS estimate is only marginally higher than that for TCR – or even lower, depending on the exact combination of periods. It’s not clear if the paper’s figures are too high for the first half of the 20th century or too low for the recent decades, but in either case the result would be to bias down ECS estimates.

It must also be said that the actual ocean heat uptake data is not available; Clisense uses my digitization of the paper’s plots. In the future, I want to add other sources of data on ocean heat uptake, although that will probably mean the year range will be restricted.

Finally, a note of caution. The app, like the scientific literature, uses volcanoes and solar radiation as the only natural forcings. In other words, the app makes the assumption that other natural factors don’t matter. This is obviously absurd for short periods of time, due to oscillations like El Niño. That’s why users are asked to select not two years but two periods: so that natural variability evens out. However, just because one selects two periods, one cannot be sure that natural variability has been completely removed. If some natural factor (e.g. reduced cloud cover) has effectively acted as a positive forcing over decade or multi-decade timescales, then ECS estimates will be biased high, because we’re assuming all the warming was caused by man-made forcing when in fact part was caused by natural forcing. The reverse is true if natural variability has acted to cool the climate. I prefer not to go further down that hole, as the research is still very tentative.

The way forward

Let me emphasize that Clisense is a project developed in my spare time. I receive no funding – in fact it costs me money, both for the Shiny app and the WordPress site. If my personal situation were to change, I might find myself without sufficient time and energy to keep improving the app. My goal is to add a ton of features – I just cannot guarantee I will.

If you have comments, questions or feedback of any form, I encourage you to share them with me by writing to alberto.zaragoza.comendador at gmail.com.

Attributing the 2017 floods in Bangladesh to climate change gave unexpected results: no trend in extreme rainfall up to now, but a trend towards more extremes is projected as the aerosol cooling is reduced. Hydrological models show the same for discharge. https://www.hydrol-earth-syst-sci.net/23/1409/2019/

Impacts of the North Atlantic subtropical high on interannual variation of summertime heat stress over the conterminous United States [link]

Social science, technology & policy

MIT has demonstrated that nuclear is required in any energy mix that attempts to achieve an optimal zero-carbon outcome. The stricter the CO2 target, the more nuclear is required. If no nuclear is employed at all, costs increase two- to fourfold. [link]

Scientific leaders have no monopoly on expertise, nor do they have a privileged ethical standpoint, for evaluating the social consequences of science and of science policies [link]

The right way to deal with extreme weather. In setting out a plan to make Manhattan better prepared for extreme weather, Mayor Bill de Blasio is delivering a sorely needed message on climate change. [link]

The effect of climate change on hurricanes has been a controversial scientific issue for the past several decades. Improvements in the capabilities of climate models, the main tool used to predict future climate, have enabled more credible simulation of the present-day climatology of hurricanes (Walsh et al 2016). The increasing ability of climate models to predict the interannual variability of hurricanes in various regions of the globe indicates that they are capturing some of the essential physical relationships governing the links between climate and hurricanes.

This Chapter addresses climate model projections of hurricane activity out to 2100, in response to manmade global warming. Also addressed is the role of natural modes of climate variability in influencing hurricane activity out to 2050.

7.1 Climate model projections

Apart from the difficulty of simulating hurricane activity in climate models, there is substantial uncertainty associated with climate model projections of 21st century climate change, including the changes in sea surface temperatures and ocean and atmospheric circulation patterns that would cause any changes in hurricane activity. Curry (2018a; Sections 5.1, 5.6) provides an analysis of these uncertainties; a summary of that analysis is provided here.

The climate model simulations of 21st century climate referenced in the IPCC AR5 are based on more than 30 different global climate models from international climate modeling groups. The climate models simulate changes based on a set of scenarios of manmade forcings from changing atmospheric composition, primarily from fossil fuel emissions. ‘Radiative forcing’ is the difference between insolation (sunlight) absorbed by the Earth and the energy radiated by the Earth and its atmosphere back to space. Radiative forcings are influences that cause changes to Earth’s climate system by altering the Earth’s radiative equilibrium, forcing temperatures to rise or fall.

A new set of emissions scenarios, the Representative Concentration Pathways (RCPs), was used for the climate model simulations in the IPCC AR5. In all RCPs, atmospheric CO2 concentrations are higher in 2100 relative to present day as a result of a further increase of cumulative emissions of CO2 to the atmosphere during the 21st century. The four RCPs are named according to radiative forcing target level for 2100. The radiative forcing estimates are based on the forcing of greenhouse gases and other forcing agents. The four selected RCPs include one mitigation scenario that leads to a very low forcing level (RCP2.6), two medium stabilization scenarios (RCP4.5/RCP6) and one very high emission scenario (RCP8.5).

RCP8.5 is sometimes referred to as a ‘business as usual’ scenario. It is not. Rather, it is an extreme scenario that may be impossible. Ritchie and Dowlatabadi (2017) recommend that RCP8.5 should not be used as a benchmark for future scientific research or policy studies.

Table 7.1 summarizes the IPCC AR5 temperature and sea level rise projections for 2046-2065 and 2081-2100. Eliminating RCP8.5 from further consideration here, the likely range of temperature increase by the end of the 21st century is 0.3 to 3.1°C [0.5 to 5.5°F].

Table 7.1 Projected change in global mean surface air temperature and global mean sea level rise for the mid- and late 21st century relative to the reference period of 1986-2005. [IPCC AR5 WGI]

Climate change projections for the 21st century are only as valid as the climate model simulations upon which they are based. Chapters 11 and 12 of the IPCC AR5 describe uncertainties in the climate model-based projections:

“Projections of future states of the global climate are subject to several sources of uncertainty. The first source of uncertainty arises from natural internal variability, which is intrinsic to the climate system, and includes phenomena such as variability in the mid-latitude storm tracks and the ENSO. The existence of internal variability places fundamental limits on the precision with which future climate variables can be projected. The second is uncertainty concerning the past, present and future forcing of the climate system by natural and anthropogenic forcing agents such as greenhouse gases, aerosols, solar forcing and land use change. The third is uncertainty related to the response of the climate system to the specified forcing agents, which is referred to as the ‘climate sensitivity’.”

“Simplifications and the interactions between parameterized and resolved processes induce ‘errors’ in models, which can have a leading-order impact on projections. Also, current models may exclude some processes that could turn out to be important for projections, or produce a common error in the representation of a particular process.”

The IPCC AR4 (2007) made the following projection for near-term warming:

“For the next two decades, a warming of about 0.2°C per decade is projected.”

Figure 7.2 provides an update of Figure 11.25 from the IPCC AR5. It is seen that the observed temperatures between 2000 and 2012 are at the bottom of the envelope of climate model simulations (this period is often referred to as the ‘warming hiatus’). The red hatching in Fig. 11.25 (Figure 7.2) reflects the judgment of the AR5 authors, which lowers the projected warming out to 2035 relative to the climate model simulations.

The large El Niño of 2016 has returned the observed temperature curve to near the middle of the envelope of climate model simulations; however, the previous large El Niño of 1998 was at the top of the envelope of climate model simulations. The recent data since 2012 continues to indicate that the sensitivity of at least some of the climate models to CO2 forcing is too high.

Figure 7.2 Synthesis of near-term projections of global mean surface air temperature anomalies. Projections from climate models showing the 5 to 95% range using a reference period of 1986–2005 (light grey shade). The maximum and minimum values from climate models using all ensemble members and the 1986–2005 reference period are shown by the grey lines. Black lines show annual mean observational estimates. The red-hatched region shows the indicative likely range for annual mean GMST during the period 2016–2035. [following IPCC AR5 WG I Figure 11.25; updated by Hawkins 2018]. Added green line between 1998 and 2016 reflects the trend between two strong El Niño years.

A key issue is the uncertainty of sensitivity of climate models to CO2. The equilibrium climate sensitivity (ECS) is a measure of the climate system response to sustained radiative forcing, defined as the amount of warming in response to a doubling of atmospheric CO2.

For over thirty years, climate scientists have presented a likely range for ECS that has hardly changed – the ECS range of 1.5−4.5°C in 1979 (Charney et al. 1979) is unchanged in the 2013 IPCC AR5. While previous assessments have provided a ‘best estimate’ of 3.0°C, the AR5 did not provide a best estimate value for ECS, stating:

“No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence.”

At the heart of the uncertainty surrounding the values of ECS is the substantial difference between values derived from global climate models versus values derived from changes over the historical instrumental data record using global energy budget analyses. The median ECS given in IPCC AR5 for global climate models was 3.2°C, versus 2.0°C for the median values from historical-period energy budget based studies.

Subsequent to the IPCC AR5, Lewis and Curry (2015) used an observationally-based energy budget methodology with the AR5’s global forcing and heat content estimate time series to derive a median ECS estimate of 1.6°C, which makes the discrepancy with global climate models even larger. A recent update by Lewis and Curry (2018) with more recent data concluded that high estimates of ECS derived from a majority of global climate models are statistically inconsistent with observed warming during the historical period. Lewis and Curry further concluded that the observationally-constrained values of ECS imply 21st century warming under increased CO2 forcing of only 55-70% of the mean warming simulated by global climate models.

Apart from the uncertainties in the climate models described above, there are two overarching problems with these projections (Curry, 2018b):

The scenarios of future climate are incomplete, focusing only on emissions.

The ensemble of climate models does not sample the full range of possible values of ECS, neglecting values between 1 and 2.1°C, with values between 1.5 and 2.1°C being within the IPCC AR5 likely range.

The IPCC AR5 acknowledges the constraints, assumptions, contingencies and uncertainties of their projections of future climate change:

“With regard to solar forcing, the 1985–2005 solar cycle is repeated. Neither projections of future deviations from this solar cycle, nor future volcanic radiative forcing and their uncertainties are considered.”

“Any climate projection is subject to sampling uncertainties that arise because of internal variability. [P]rediction of the amplitude or phase of some mode of variability that may be important on long time scales is not addressed.”

The climate model projections of 21st century surface temperature and sea level rise are therefore contingent on the following assumptions [IPCC AR5 WG1 Section 12.2.3]:

Projections of 21st century climate from both manmade and natural climate change

An understanding of how and why hurricanes change with a changing climate.

As summarized in Chapter 4, our understanding of how and why hurricanes change in a changing climate is incomplete; what understanding we have is qualitative, based on analysis of limited observations and on theory. At best, climate model-based projections of future hurricane activity are contingent on the predicted amount of warming.

The IPCC AR5 provided a synthesis of global and regional model-based projections of future hurricane climatology by 2081–2100 relative to 2000–2019. Globally, their consensus projection is for decreases in hurricane numbers by approximately 5–30%, increases in the frequency of categories 4 and 5 storms by 0–25%, an increase of 0–5% in typical lifetime maximum intensity, and increases in rainfall rate by 5–20%.

Here are the conclusions from the IPCC AR5 (2013):

“Based on process understanding and agreement in 21st century projections, it is likely that the global frequency of occurrence of tropical cyclones will either decrease or remain essentially unchanged, concurrent with a likely increase in both global mean tropical cyclone maximum wind speed and precipitation rates. The future influence of climate change on tropical cyclones is likely to vary by region, but the specific characteristics of the changes are not yet well quantified and there is low confidence in region-specific projections of frequency and intensity.”

A summary of research since the IPCC AR5 is provided by the NCA4 (2017), whereby some studies have provided additional support for the AR5 conclusions, and some have challenged aspects of it. In the end, the NCA4 conclusions were identical to the IPCC AR5 conclusions cited above.

7.2.1 Hurricane formation and frequency

As summarized by Walsh et al. (2016), at present there is no climate theory that can predict the formation rate of tropical cyclones from the mean climate state. It has been known for many years that there are certain atmospheric conditions that either promote or inhibit the formation of tropical cyclones, but so far an ability to relate these quantitatively to mean rates of tropical cyclone formation has not been achieved, other than by statistical means through the use of empirically-based genesis potential indices (e.g. Menkes et al. 2012).

An important test of climate model predictions of future hurricane frequency is whether the climate models can simulate the present hurricane climatology. Simulation of the climatological number of Atlantic hurricanes is particularly difficult.

Most climate models predict a decrease in the global number of hurricanes by 2100. Explanations of this decrease are linked to reduced relative humidity in the mid-levels of the atmosphere and reduced upward rising motion in hurricane formation regions. Not all methods for determining hurricane numbers identify a decrease in future numbers, however. Emanuel (2013) uses a downscaling method in which incipient tropical vortices are “seeded” into large-scale climate conditions provided from a number of different climate models for current and future climate conditions. Emanuel’s approach generates more hurricanes in a warmer world when forced with the output of climate models.

While most models predict fewer tropical cyclones globally in a warmer world, the difference in the predictions among different climate models becomes more significant when smaller regions of the globe are considered. This appears to be a particular issue in the Atlantic basin, where climate model performance has been often poorer than in other oceanic regions. The issue as to whether the number of hurricanes will change in a warmer climate remains unresolved.

Using millennia-long climate model simulations, Lavender et al. (2018) examined whether the record number of tropical cyclones in the 2005 Atlantic season is close to the maximum possible number for the present climate of that basin. By estimating both the mean number of hurricanes and their possible year-to-year random variability, they found that the likelihood that the maximum number of storms in the Atlantic could be greater than the number of events observed during the 2005 season is less than 3.5%. Hence, the 2005 season can be used as a risk management benchmark for the maximum possible number of tropical cyclones in the Atlantic.

7.2.2 Hurricane intensity

GFDL (2018) provides an analysis of the predictions of hurricane changes by 2100:

“Hurricane intensities globally will likely increase on average by 1 to 10%, according to model projections for a 2°C [4°F] global warming. The global proportion of tropical cyclones that reach very intense (Category 4 and 5) levels will likely increase due to anthropogenic warming over the 21st century. There is less confidence in future projections of the global number of Category 4 and 5 storms, since most modeling studies project a decrease (or little change) in the global frequency of all tropical cyclones combined.”

With regards to the North Atlantic, GFDL (2018) provides the following assessment:

“Current climate models suggest that tropical Atlantic SSTs will warm dramatically during the 21st century, and that upper tropospheric temperatures will warm even more than SSTs. Furthermore, most of the climate models project increasing levels of vertical wind shear over parts of the western tropical Atlantic. Both the increased warming of the upper troposphere relative to the surface and the increased vertical wind shear are detrimental factors for hurricane development and intensification, while warmer SSTs favor development and intensification.”

“The GFDL hurricane model supports the notion of a substantial decrease (~25%) in the overall number of Atlantic hurricanes and tropical storms with projected 21st century climate warming. However, the hurricane model also projects that the lifetime maximum intensity of Atlantic hurricanes will increase by about 5% during the 21st century. At present we have only low confidence for an increase in category 4 and 5 storms in the Atlantic; confidence in an increase in category 4 and 5 storms is higher at the global scale.”

Using the GFDL hurricane modeling system, Knutson et al. (2015) found that projected median hurricane size remains nearly constant globally, with increases in most basins offset by decreases in the northwest Pacific.

Changes in surface and subsurface ocean conditions can both influence a hurricane’s intensification. Huang et al. (2014) suggest a suppressive effect of subsurface oceans on the intensification of future hurricanes. Under global warming, the subsurface vertical temperature profile may contribute to a stronger ocean cooling effect during the intensification of future hurricanes. Emanuel (2015) estimated that the effect of such increased upper ocean stratification is relatively small, reducing the projected intensification of hurricanes by only about 10%–15%.

The largest increase in Category 4-5 Atlantic hurricanes is predicted by Bender et al. (2010). Owing to the large interannual to decadal variability of SST and hurricane activity in the basin, Bender et al. estimate that detection of an anthropogenic influence on intense hurricanes would not be expected for a number of decades, even assuming a large underlying increasing trend (+10% per decade).

7.2.3 Rainfall

An increase in rainfall from hurricanes in a warmer climate is a consistent finding from climate model simulations. As summarized by GFDL (2018), hurricane rainfall rates will likely increase in the future due to manmade global warming and the accompanying increase in atmospheric moisture content. Modeling studies on average project an increase on the order of 10-15% for rainfall rates averaged within about 100 km of the storm for a 2°C [4°F] global warming scenario.

7.3 2050 – decadal variability

Climate-model based projections of future hurricane activity have focused on the impacts of manmade climate change. It is of substantial interest to understand how hurricane activity might vary on timescales out to 2050, associated with the known modes of interannual and decadal variability in specific ocean basins.

The evolution of the climate on decadal time scales is the combined result of an externally forced component – due to greenhouse gases, aerosols and natural radiative forcing agents – and natural internal variability of the climate system. A decadal climate prediction attempts to simultaneously forecast the evolution of both of these components over the next few decades.

The Decadal Climate Prediction Project (DCPP) is a coordinated investigation into decadal climate prediction and variability. The first generation of the DCPP simulations was reported under CMIP5 in the IPCC AR5 [Chapter 11]. It was concluded that: “There is limited agreement and medium evidence that the Atlantic and Pacific patterns of climate variation exhibit predictability on timescales up to a decade.”

The next generation of the DCPP simulations is described by Boer et al. (2016), for the CMIP6 and the forthcoming IPCC AR6 Report. At this point, the climate models, even when the oceans are initialized with current observations, do not have any prediction skill beyond a decade at most. The biggest challenge is predicting shifts in the Atlantic and Pacific patterns of decadal variability (e.g. AMO, PDO).

7.3.1 Scenarios of modes of decadal variability

Given the challenges associated with climate model predictions on decadal scales, an alternative approach is to consider possible future scenarios of the indices of decadal climate variability and shifts in the multidecadal indices such as the AMO and PDO. Section 4.3 described the natural internal modes of variability, including the Atlantic Multidecadal Oscillation (AMO), North Atlantic Oscillation (NAO), Atlantic Multidecadal Mode (AMM), Pacific Decadal Oscillation (PDO) and the North Pacific Gyre Oscillation (NPGO).

Among these modes, a forthcoming shift in the phase of the AMO (away from the current warm phase to the cool phase; Figures 4.4, 4.5) would have the greatest impact on Atlantic hurricanes (Figure 4.6). Frajka-Williams et al. (2017) report a decline in the AMO index since 2013.

The timing of a shift to the AMO cold phase is not predictable; it depends to some extent on unpredictable weather variability. However, analysis of historical and paleoclimatic records suggests that a transition to the cold phase is expected prior to 2050. Enfield and Cid-Serrano (2006) used paleoclimate reconstructions of the AMO to develop a probabilistic projection of the next AMO shift. Figure 7.3 shows the probability of an AMO shift relative to the number of years since the last regime shift. The previous regime shift occurred in 1995; hence in 2019, it has been 24 years since the previous shift. Figure 7.3 indicates that a shift to the cold phase should occur within the next 15 years, with a 50% probability of the shift occurring in the next 7 years.
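For readers who want to reproduce this style of reasoning, here is a minimal sketch of how a curve like Figure 7.3 can be turned into forward-looking probabilities, conditional on the 24 shift-free years since 1995. The cumulative-probability values below are invented placeholders, not numbers digitized from Enfield and Cid-Serrano (2006).

```python
# Sketch: converting a "probability of a shift vs. years since the last shift" curve
# into probabilities for the years ahead, given 24 years already elapsed with no shift.
# The cumulative probabilities below are placeholders, not data from the paper.

import numpy as np

years_since_shift = np.arange(0, 55, 5)  # 0, 5, ..., 50 years since the 1995 shift
cum_prob_of_shift = np.array([0.00, 0.05, 0.12, 0.22, 0.35, 0.55,
                              0.75, 0.88, 0.95, 0.98, 1.00])

def prob_shift_within(extra_years, elapsed=24):
    """P(shift within `extra_years` from now | no shift during the first `elapsed` years)."""
    F = lambda t: np.interp(t, years_since_shift, cum_prob_of_shift)
    return (F(elapsed + extra_years) - F(elapsed)) / (1.0 - F(elapsed))

for horizon in (7, 15):
    print(f"P(shift within the next {horizon} years) ~ {prob_shift_within(horizon):.2f}")
```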

The implications of a shift to the cool phase of the AMO on Atlantic hurricanes include:

Fewer landfalls striking Florida, the U.S. east coast and the Caribbean (Figures 5.6, 5.8)

Figure 7.3 Probability of an AMO regime shift relative to the number of years since the last regime shift. Source: Enfield and Cid-Serrano (2006)

7.3.2 Scenarios of interannual variability

Atlantic hurricane statistics for the period to 2050 depend not only on the timing of a shift of the AMO to the cool phase, but also on the variability of the other climate indices.

Caron et al. (2014) found that while some influences, such as ENSO, remain present regardless of the AMO phase, other climate factors show an influence during only one of the two phases. During the negative phase, Sahel precipitation and the NAO play a role, while during the positive phase, the 11-year solar cycle and dust concentration over the Atlantic appear to be more important.

Lim et al. (2016) showed that the NAO and AMM can strongly modify and even oppose the well-known ENSO impacts. While the predictability of these modes is limited to seasons rather than years, the statistics of combinations of these indices provide useful information regarding the interannual variability within the decadal time horizon. Of particular interest is the frequency of occurrence of years of extremely high or low hurricane activity. Patricola et al. (2014) investigated the possible effects of combinations of extreme phases of the AMM and ENSO. Individually, the negative AMM phase and El Niño each inhibit Atlantic hurricanes, and vice versa. Simultaneous strong El Niño and strongly positive AMM, as well as strong concurrent La Niña and negative AMM, produce near-average Atlantic ACE, suggesting compensation between the two influences. Strong La Niña and strongly positive AMM together produce extremely intense Atlantic hurricane activity, while strong El Niño and negative AMM together are not necessary conditions for significantly reduced Atlantic tropical cyclone activity.

The past decade or so has seen a preponderance of El Niño events (relative to La Niña). The PDO has been weakly negative for the past year, following a period since 2014 of mostly positive values. Presumably, at some point in the next 30 years, we can expect a period when La Niña events dominate.

The general probabilistic approach used by Enfield and Cid-Serrano (2006) seems promising for developing probabilities of regime combinations, which can then be related to Atlantic hurricane activity via historical relationships with these regime indices. However, the possibility of data-driven climate dynamics-based probabilistic predictions and scenarios of decadal scale hurricane activity is largely untapped.

7.4 Landfall impacts

The most unambiguous signal for hurricane landfall impacts in a warmer climate is that projected sea level rise will produce higher storm surge levels for the hurricanes that do occur, all else being equal. As summarized by Curry (2018; Section 5.7):

“Emissions scenario choice exerts a great deal of influence on predicted sea level rise after 2050. If RCP8.5 is rejected as an extremely unlikely scenario, then the appropriate range of sea level rise scenarios to consider for 2100 is 0.2–1.6 m [8 inches to 5 feet]; however, values exceeding 2 feet are increasingly weakly justified. Values exceeding 5 feet require a cascade of poorly understood and extremely unlikely to impossible events. Further, these values of sea level rise are contingent on the climate models predicting the correct amount of temperature increase.”

Increased rainfall rates can also be expected in a warmer climate (Section 7.2.3). There is no evidence of increasing hurricane size, which influences storm surge, rainfall amounts and the number of tornadoes (Section 4.5).

If climate model projections of fewer hurricanes but a greater percentage of Category 4 and 5 storms are correct, the tradeoff between these two competing effects on overall landfall impacts is not straightforward. The statistics of rare Category 4 and 5 landfalling events are much more volatile than basin-wide hurricane metrics.

Emanuel (2011) estimated the time of emergence of global warming effects on U.S. hurricane damage. Using a recently developed hurricane synthesizer driven by outputs from global climate models, 1000 artificial 100-yr time series of Atlantic hurricanes that make landfall along the U.S. Gulf and East Coasts were generated for four climate models and for current climate conditions as well as for the warmer climate circa 2100. These synthetic hurricanes produce damage to a portfolio of insured property according to an aggregate wind-damage function; damage from flooding was not considered. Three of the four climate models used produced increasing damage with time, with the global warming signal emerging on time scales of 40, 113, and 170 years. For the fourth climate model, damages decreased with time, but the signal was weak.

7.5 Conclusions

Substantial advances have been made in recent years in the ability of climate models to simulate the variability of hurricanes. However, inconsistent hurricane projections emerge from modeling studies due to different down-scaling methodologies and warming scenarios, inconsistencies in projected changes of large-scale conditions, and differences in model physics and tracking algorithms. Systematic numerical modeling experiments organized under the auspices of the Hurricane Working Group of the U.S. CLIVAR Program (Walsh et al. 2015) were designed to coordinate efforts to improve understanding of the variability of tropical cyclone formation in climate models. Progress continues to be made, particularly with models that are coupled to the ocean.

Apart from the challenges of simulating hurricanes in climate models, the amount of warming projected for the 21st century is associated with deep uncertainty. Hence, any projection of future hurricane activity is contingent on the amount of predicted global warming being correct.

Recent assessment reports have concluded that there is low confidence in future changes to hurricane activity, with the greatest confidence associated with an increase in hurricane-induced rainfall and sea level rise that will impact the magnitude of future storm surges. Any projected change in hurricane activity is expected to be small relative to the magnitude of interannual and decadal variability in hurricane activity, and is at least several decades away from being detected.

Decadal variability in hurricane activity over the next several decades could be much larger than the signal from global warming over the next century. In particular, a shift to the cold phase of the Atlantic Multidecadal Oscillation (AMO) is anticipated within the next 15 years. All other things being equal (such as the frequency of El Niño and La Niña events), the cold phase of the AMO portends reduced Atlantic hurricane activity and fewer landfalls for Florida, the east coast and the Caribbean.

Greenland is getting more rain, even in winter, triggering melting [link]

Coal plants have contributed to widespread contamination of aquifers, according to a new national assessment. In some areas, “groundwater may be unusable for decades or hundreds of years.” http://bit.ly/2TqzeER

“After reconstructing southern Greenland’s climate record over the past 3,000 years, a Northwestern University team found that it was relatively warm when the Norse lived there between 985 and 1450 C.E., compared to the previous and following centuries.” [link]

Social science, technology & policy

With Ethanol And Biomass No Longer Viewed As “Green,” Will Other Renewables Soon Follow? [link]

A review of the relationship between the solar input to high latitudes and the global ice volume over the past 2.7 million years.

Abstract

While there is ample evidence that variations in solar input to high latitudes are a “pacemaker” for the alternating glacial and interglacial periods over the past ~2.7 my, there are two major difficulties with the standard Milankovitch theory:

(i) The different cadence of the glacial periods prior to the Mid-Pleistocene Transition (MPT), roughly 41 ky, and after the MPT, roughly 88 to 110 ky.

(ii) The reason why so many precessional maxima in solar input to high latitudes fail to produce terminations in the post-MPT era, yet every fourth or fifth one does produce a rather sudden termination.

Raymo et al. (2006) proposed an explanation for the first difficulty in terms of global ice volume resulting from the sum of an out-of-phase growth and decline of northern and southern ice sheets. Ellis and Palmer (2016) proposed an explanation for the second difficulty by describing the occurrence of terminations in the post-MPT era in terms of dust deposition affecting ice-albedo on the ice sheets. Raymo et al. used a simple model for the pre-MPT period. That model does not work well in the post-MPT era. Best (2018) then modified the model to include a representation of the dust-induced ice-albedo effect.

Introduction

Our objective is to give a cohesive picture of the driving forces for ice age growth and decay for the obliquity-driven pre-MPT, and the precession/eccentricity-driven post MPT periods. While all three Earth orbit parameters are always acting to affect the solar input at high latitudes, there are underlying reasons why the net effect on the variation of ice volume does not show all three.

In the pre-MPT ice age era, the ice sheets expanded and contracted in concert with the Milankovitch cycle, which is the sum of the ~41 kyr hemispherically synchronous obliquity cycle and the ~22 kyr hemispherically asynchronous precession cycle. The partial cancelation of out-of-phase precessional ice fluctuations, combined with in-phase obliquity ice fluctuations, results in a global (North + South) ice volume that appears to follow only the obliquity cycle. In addition, the geographically small extent of the ice sheets in this era meant that ice-albedo was not a major climatic factor, and so orbital influences were almost completely dominant.

In the post-MPT ice age era, the Earth had cooled to a critical tipping-point roughly 800 ky ago with permanent Antarctic glaciation. At this MPT point, the energy balance of the Earth was reduced such that its natural state favored an ice age, so long as the ice sheets had high albedo. In this domain, ice would continue to grow, perhaps without limit (who knows?), until the albedo of the ice sheets could be lowered via dust deposition. This reduction in albedo resulted in a huge change in the energy balance favoring melting, and the rapid retreat of the ice sheets. At the end of the termination, the ice sheets more or less disappeared from the NH, and the climate system eventually reverted back to its previous mode of slow ice growth.

Due to precession of the Earth’s axis, there is a consistent variation of solar input to high latitudes in alternate hemispheres in regular cycles of roughly 22,000 years. The amplitude of this cyclic variation is modulated by obliquity and eccentricity. The effects of obliquity are usually weaker than precession, but nevertheless subject the amplitude of solar input to high latitudes in both hemispheres to a 41-ky cyclic variation. If, for some reason, the hemispherically asymmetric effects of precession mostly cancel out in their impact on global ice volume during some time period, the global ice volume will vary with the underlying 41-ky obliquity cycle, and the effects of precession will be hidden in the ice volume record.
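A toy calculation illustrates the cancellation argument. The sinusoids below are idealized stand-ins for the real orbital series, and the assumption that each hemisphere’s ice anomaly simply mirrors its local forcing is a deliberate oversimplification.

```python
# Toy illustration of why antiphase precessional forcing in the two hemispheres can
# cancel in the global ice-volume signal, leaving the shared 41 ky obliquity cycle.
# Amplitudes, phases and the linear "ice tracks forcing" assumption are arbitrary.

import numpy as np

t = np.arange(0.0, 400.0, 1.0)               # time in kyr
obliquity  = np.sin(2 * np.pi * t / 41.0)    # in phase in both hemispheres
precess_NH = np.sin(2 * np.pi * t / 22.0)    # NH precessional forcing
precess_SH = -precess_NH                     # SH forcing is in antiphase

ice_NH = -(obliquity + precess_NH)           # more summer sun -> less ice (toy rule)
ice_SH = -(obliquity + precess_SH)
ice_global = ice_NH + ice_SH                 # precession terms cancel, obliquity doubles

print("max |global - (-2 * obliquity)| =", np.max(np.abs(ice_global + 2 * obliquity)))
```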

In the most simplistic interpretation of solar-driven ice ages, one might expect that ice ages would occur every 11,000 years in alternate hemispheres, when the solar input to higher latitudes is a minimum in that hemisphere’s summer, allowing large ice sheets to develop. One might then expect ice ages to occur alternately in each hemisphere, every 11,000 years – in line with the hemispherically alternating precessional cycle. We do not observe this at all in the historical record of global ice volume over the past ~2.7 million years.

What is observed is that from about 2.7 mya to about 1.0 mya, the periodic variability of global ice volume followed a roughly 41 ky cycle, and from about 0.6 mya to the present, the global ice volume followed a much longer cycle of very roughly 90 to 110 ky. Between about 1.0 mya and 0.6 mya, a transition zone occurred in which the cycles gradually lengthened. Yet the seemingly dominant periodic variation in summer solar input to high latitudes (SIHL) due to precession was always in force during all these eras.

Therefore, further investigation is needed to understand how the Earth system hid the effects of precession via internal dynamics. Note, however, that the effects of precession were almost never completely hidden. In both the 41 ky and the ~90 to 110 ky cycle regimes, smaller cyclic variations in global ice volume at a 22 ky period were superimposed on the broader, higher-amplitude, longer-period cycles. See Figure 1. Note that the benthic stack records total ice volume, not global temperature. It tracks global average temperature only indirectly, and it cannot differentiate between the NH and SH (in either ice volume or temperature), which may differ significantly.

So, one of the major challenges in understanding ice ages, is to identify why this persistent high-frequency solar signal due to the ~22 ky precessional cycle is rectified to 41 ky obliquity cycles prior to ~1.0 mya and 90 to 110 ky cycles after ~0.6 mya.

The period after about 1.0 mya is covered in the next section.

The Last Five Ice Ages

The ebb and flow of northern ice sheets was driven by variations in solar input to high northern latitudes over the past ~400 ky, as shown in Figure 2. Figure 2 shows estimated temperature, but we assume this is also representative of ice volume. Each ~22 ky precession cycle exerts a higher frequency influence on growth of the ice sheets. The NH precessional maxima (up-lobes) tend to increase the northern temperature (reduce the ice volume), while the NH precessional minima tend to reduce northern temperatures (increase the ice volume). But in the period that includes the past five ice ages, there was a seemingly relentless internal drive to increase the ice sheets. Regardless of the precession cycle, the ice sheets expanded (albeit with small higher frequency overtones) until a point was reached where they disintegrated quickly.

Most solar up-lobes due to precession merely temporarily slowed down the rate of growth of the ice sheets, but did not produce a termination. Only about one out of four, or one out of five precession up-lobes produced a sudden, decisive termination. But occurrence of these terminations followed a regular pattern. While it is common to refer to this era as the “100 ky era”, closer inspection suggests that the cycles were spaced by 88 ky or 110 ky. The spacing between the 5th and 4th, and the 4th and 3rd penultimate ice ages was about 88 ky (~four precession cycles), while the spacing between the 3rd and 2nd, and the 2nd and last ice ages was about 110 ky (~five precession cycles). Evidently, the data suggest that the combination of the colder Earth, and the relentless buildup of ice and snow at high latitudes during this period resulted in the ice sheets growing faster during NH precessional minima, but retreating only somewhat during NH precessional maxima. This trend continued through four or five precession cycles, until the next precession up-lobe produced a rapid termination. It should be noted that just prior to initiation of a termination, the northern (and global) temperatures bottomed out, while CO2 dropped below 200 ppm. Ellis and Palmer (2016) provided extensive evidence that deposition of dust on the ice sheets provided a decrease in ice-albedo that acted as a trigger to enable the next precession up-lobe to melt the ice sheets. Only at the depth of an ice age (at the lowest temperature and the lowest CO2) after 4 or 5 precession cycles had evolved, would sufficient dust be deposited on the ice sheets to cause a termination. There is good evidence that Antarctic dust levels peaked prior to each termination, but we only have data for dust preceding the last termination on the northern ice sheets. And while we only have Greenland dust data for the last ice age, that data agrees well with the Antarctic dust record, so it is not unreasonable to assume that Arctic dust flux was closely correlated with Antarctic dust since the MPT.

It seems clear that the effects of higher frequency solar variations due to precession were masked by the albedo-driven tendency toward glaciation in the North, until some trigger (most likely dust) allowed a precession up-lobe to produce a termination.

The pattern during the period from 800 kya to 450 kya was not as regular as that after 450 kya, but the spacing of major glacial-interglacial periods tended to be very roughly 4 precession cycles, or at least it was more than double the 41 ky spacing of the earliest period prior to about 1 mya.

Imbrie and Imbrie (1980) developed a simplistic model that can be written:

dy/dt = – (1 ± B)(x + y)/T

(the minus sign of the ± applies while ice is building and the plus sign while it is decaying, so that buildup proceeds more slowly than decay), in which:

y = ice volume

x = SIHL (summer solar input to high latitude)

T = a time constant (best fit around 17 ky)

B = a constant to assure that ice volume builds up at a slower rate than the rate that it decays (best fit around 0.6)

This was not stated clearly in the original paper, but the variables x (SIHL) and y (ice volume) can both be positive or negative, and represent deviations from average values, rather than absolute values.

Insertion of the constant B assures that ice buildup will take place more slowly than ice sheet decay. Note that as B varies from say, 1/3 to 2/3, the ratio of effective time constants varies from 2 to 5.
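For readers who want to experiment with this behavior, a minimal sketch of a model of this type is given below, written in the same form as the equation above (with x = SIHL). The synthetic sinusoidal SIHL series, the step size, and the starting value are placeholders rather than the actual orbital solution; parameter names follow the definitions above.

```python
import numpy as np

def imbrie_style_model(sihl, T=17.0, B=0.6, y0=0.2, dt=1.0):
    """Integrate dy/dt = -(1 +/- B)(x + y)/T, an Imbrie-and-Imbrie-style ice volume model.

    sihl : summer solar input to high latitudes, as a deviation from its mean
    T    : time constant (kyr)
    B    : asymmetry constant; the minus sign applies while ice builds,
           the plus sign while it decays, so buildup is slower than decay
    """
    y = np.empty(len(sihl))
    y[0] = y0
    for i in range(1, len(sihl)):
        drive = -(sihl[i - 1] + y[i - 1])            # > 0 means ice is building
        factor = (1 - B) if drive > 0 else (1 + B)   # slow buildup, fast decay
        y[i] = y[i - 1] + dt * factor * drive / T
    return y

# Synthetic SIHL with 41 kyr (obliquity) and 22 kyr (precession) components;
# the real orbital solution would be used in a serious calculation.
t = np.arange(0, 800, 1.0)                           # kyr
sihl = 0.5 * np.sin(2 * np.pi * t / 41) + 0.5 * np.sin(2 * np.pi * t / 22)
ice_volume = imbrie_style_model(sihl)
```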

Previous modelers always inserted a term y on the right side to reduce the rate of ice volume growth as the ice sheet volume increased, and increase the rate of growth as the ice volume decreased, but no physical explanations for this were offered. Inclusion of this term provides two benefits in fitting a model to the actual ice volume data:

(i) It shifts the peaks of ice volume slightly to more recent times, which helps to fit actual data.

(ii) It somewhat rectifies the higher frequencies of the SIHL (due to precession) by reducing the rate of expansion of the ice volume when the SIHL is negative, and increasing the rate of expansion of the ice volume when the SIHL is positive.

Despite these factors, applying this model to either the North or the South still results in a relatively “spiky” plot of ice volume vs. time. When the model is applied to the most recent 800 kyrs, the result is as shown in Figure 3 (Rapp, 2014).

Figure 3. Predicted ice volume from the Imbries’ theory with T = 22,000, B = 0.6, and starting value 0.2 over the most recent 800 kyrs. (Other starting values, shown as thin dashed lines, lead to the same end result).

As we shall see in Section 3, this model works better in the pre-MPT period, when the ebb and flow of ice volume responded more directly to SIHL, whereas in the post-MPT period, ice volume continually built up in a colder Earth until a relatively sudden termination produced an interglacial. In the post-MPT period, the underlying connection of SIHL to changes in ice volume is less direct and less obvious. Ellis and Palmer (2016) provided good evidence that the trigger that initiated a termination was dust deposited on the ice sheets, decreasing their albedo and leading to rapid melting. The Imbries’ model cannot account for this: although it includes the ice volume on the right side of the equation, it is inadequate to describe events in the post-MPT era, where ice continued to build up regardless of the SIHL and diminished only when dust deposition decreased the ice albedo. In keeping with this picture of regulation of ice ages by albedo changes rather than the SIHL cycle, Best (2018) developed a model that begins with the simple equation:

dv/dt = – (1 ± b) (S)(1 – a)

where v = ice volume, a = albedo (calculated directly from Epica dust data), S = 65°N insolation, and b is a constant inserted to make the rate of ice growth greater during precessional minima than the rate of ice loss during precessional maxima. The terms S and b require some explanation. It is assumed in this model that there is a long (88 kyr to 110 kyr) period of growth of the northern ice sheets, when the albedo remains high prior to a termination. In this long, extended period of ice sheet growth, S (measured as deviation from the average) oscillates from positive to negative due to precession, but the growth during the 11 ky precessional minima outweighs the loss during the 11 ky precessional maxima due to the high net albedo. Additionally, the plus sign is used during precessional minima, and the minus sign is used during precessional maxima. This results in alternating growth of the ice sheets through many precession cycles as shown in Figure 4 as long as the obliquity remains > 0.9.

At some point in time, perhaps after several precessional cycles when dust deposits have built up over the ice sheets and reduced their albedo to a critical level, the ice sheets can absorb sufficient SIHL to melt during the next precessional maximum since S > 0, and a < 0.3. In this short period (less than 11 ky) the entire termination takes place, as shown in Figure 5.
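A schematic version of this kind of albedo-gated integration is sketched below. It is not Best's actual code: the dust-driven albedo series and the synthetic precession-dominated insolation are placeholders, but the sign conventions follow the description above.

```python
import numpy as np

def best_style_model(S, albedo, b=0.3, dt=1.0, v0=1.0):
    """Schematic integration of dv/dt = -(1 +/- b) * S * (1 - a).

    S      : 65N summer insolation anomaly (deviation from its mean)
    albedo : ice sheet albedo series (in Best's model, derived from dust data)
    The plus sign applies during precessional minima (S < 0) and the minus sign
    during maxima (S > 0), so growth during minima outweighs loss during maxima
    as long as the albedo stays high.
    """
    v = np.empty(len(S))
    v[0] = v0
    for i in range(1, len(S)):
        sign_b = b if S[i - 1] < 0 else -b
        dvdt = -(1 + sign_b) * S[i - 1] * (1 - albedo[i - 1])
        v[i] = max(v[i - 1] + dt * dvdt, 0.0)   # ice volume cannot go negative
    return v

# Placeholder forcing and albedo: a precession-dominated insolation anomaly,
# with dust assumed to lower the albedo late in the glacial cycle.
t = np.arange(0, 200, 1.0)                       # kyr
S = np.sin(2 * np.pi * t / 22)
albedo = np.where(t < 150, 0.8, 0.25)
ice_volume = best_style_model(S, albedo)
```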

Best tried two approaches for including decreased albedo due to dust deposition in the equation based on the record of Antarctic dust in the ice cores. This assumes that Antarctic dust levels would be coupled to the dust levels on the ice sheets, but we lack data to confirm this. Best found the best fit if he assumed a 15 ky lag between the dust peak and the onset of termination. His result of integration is shown in Figure 6. While the agreement is not perfect, and could hardly be, the model captures a great deal more reality than the Imbries’ model.

It is clear from Figure 1 that during the extended period from 2.7 mya to the present:

(i) The Earth became generally colder.

(ii) The global buildup of ice and snow increased greatly during cold periods, particularly at and after 600 kya.

(iii) The spacing between cold periods increased non-linearly from 41 ky prior to the MPT to typically 4-5 precession periods after the MPT.

Raymo et al. (2006) provided a very attractive potential explanation for the 41 ky period. First of all, they emphasized that the ocean sediment data measured global ice volume, not merely northern ice volume. Secondly, they emphasized that prior to about 1.0 mya, global ice volume never reached high levels, and the high levels we associate with recent ice ages were not reached until 800 to 600 kya. A transition period (the MPT) existed between these two extremes. They then made a crucial assumption that seems very credible:

Prior to very roughly 800 kya, the buildup of ice sheets in the North was limited, and the northern ice sheets did not exert a dominant control of the global climate, as they appear to have done in the post-MPT era. In particular, the ebb and flow of Antarctic ice was controlled by the local SIHL to Antarctica, and the ebb and flow of northern ice was controlled by the local SIHL to the Arctic.

The East Antarctic Ice Sheet (EAIS) presently is ringed by extensive marine ice shelves. However, in the distant past, according to Raymo et al., “the EAIS behaved glaciologically, at that time, like a modern Greenland ice sheet… A warmer, more dynamic EAIS with a terrestrial-based melting margin, as opposed to a glacio-marine calving margin, is implied. Because such margins are strongly controlled by summer melting, Antarctic ice volume would be sensitive to orbitally driven changes in local summer insolation.”

When did the transition from terrestrial melting to calving of marine shelves take place? Until now it has been assumed that it happened between 3 and 2.6 mya. Raymo et al. proposed that it may not have happened until after 1 mya.

Based on their model, we can hypothesize that in the early period from 2.7 mya to 1.0 mya, during the 41-ky cycle era, and even extending to a rapidly diminishing degree toward 0.6 mya:

(i) The ice/snow in the North never built up enough in volume for its high albedo to control the global climate. Buildup and diminution of ice/snow in both the North and South merely responded to local SIHL.

(ii) In the South, ice/snow responded to SIHL much as it did in the North, as a terrestrial-based melting margin.

(iii) The global amount of ice/snow gained or lost during a complete precessional cycle is the sum of the gain/loss for the North and the South. The amount of ice/snow gained in the North during the favorable half of the precession cycle is balanced by a reduction in the amount of ice/snow lost in the South. The amount of ice/snow lost in the North during the unfavorable half of the precession cycle is balanced by the amount of ice/snow gained in the South (see the sketch following this list). This reduces the higher frequency component due to precession in the ice volume curve, and what we are left with is simply the obliquity cycle, which enhances SIHL at both poles in synchrony.

(iv) The global total ice volume as recorded by the benthic record, is the sum of gains and losses in the North and the South, which therefore appeared to follow the obliquity signal in this era.

(v) During these smaller 41-ky ice age cycles, the total amount of global ice stored in both the North and the South, typically maximized at 50-60 m below present-day sea level and minimized at 0 to 20 m below present-day sea level. This was considerably less than maximum depression of sea level during the last five ice ages, where sea level dropped to well over 100 m below present-day sea level.
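A toy calculation illustrates the cancellation described in point (iii): if NH and SH ice volumes each contain an obliquity term plus precession terms of opposite sign, their sum retains the 41 kyr obliquity signal while the 22 kyr precession signal largely cancels. The amplitudes below are arbitrary illustrative values.

```python
import numpy as np

t = np.arange(0, 400, 1.0)                       # kyr
obliquity = np.sin(2 * np.pi * t / 41)           # in phase at both poles
precession = np.sin(2 * np.pi * t / 22)          # opposite effect in each hemisphere

nh_ice = 1.0 * obliquity + 0.6 * precession      # arbitrary illustrative amplitudes
sh_ice = 0.3 * obliquity - 0.45 * precession     # SH smaller, precession out of phase

global_ice = nh_ice + sh_ice
# The 22 kyr component of the sum has amplitude 0.15, versus 1.3 for the 41 kyr
# component, so the combined (benthic) record appears paced by obliquity.
```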

The next step is to estimate the ice volume curves in the North and South from 2.7 mya to 1.0 mya. By adding these, Raymo et al. obtained the modeled global ice volume curve.

A fundamental assumption, based on examination of the data in Figure 1, is that the rate of variation of global ice volume is proportional to solar input to high latitudes (SIHL) in the pre-MPT era. That is quite different from the post-MPT ice age era, where the relentless growth of ice sheets through 4 to 5 precession cycles was greatly modified and modulated by ice-albedo feedback influences.

Since the observed record of global ice volume shows a 41 kyr periodicity, and the 22 kyr periodicity appears only as relatively small perturbations superimposed on the main 41 kyr variation, the challenge is to find a mechanism for reducing the expression of the 22 kyr periodicity in the final ice volume curve.

The problem with simplistic models of how ice volume changes with SIHL is that the variation of SIHL with time is dominated by the ~22 ky precession cycles. This, in turn, causes the resultant modeled plot of ice volume vs. time to also show variability with a 22 ky period.

Raymo et al. applied the Imbries’ model to the 41 kyr period from 2.7 mya to 1.0 mya. (There is no need to include albedo in the pre-MPT period, because ice sheet extent was limited, and the effects of albedo were small. In addition, any increase in NH albedo was countered by a reduction in SH albedo, and vice versa.) Their results are shown in Figure 7. It can be seen from the lowermost graphs that the agreement of the model with the observations is surprisingly good. Inclusion of the assumed levels of SH ice volume greatly reduces the higher frequency variation due to precession, resulting in a pattern that follows only obliquity at 41 kyr cycles.

Figure 8 shows a close-up of a portion of Figure 7 from 1.5 mya to 1.4 mya, where the vertical relationship of the various curves can be followed. For example, the vertical dashed line occurs at a precession peak in NH SIHL. Reading downward along this dashed line, it can be seen that at this date, the NH ice volume is on an upward trend, but precedes the peak in NH ice volume by roughly 5,000 years. The SH SIHL is on a downward trend, but precedes the minimum in SH ice volume by roughly 8,000 years. Because the NH and SH ice volume curves are out of phase, the peaks in NH ice volume are balanced by a partial reduction in SH ice volume, and the minima in NH ice volume are balanced by a partial increase in SH ice volume, so the curve for global ice volume shows the effects of precession only as small perturbations to the underlying 41 kyr cyclic pattern.

(1) For the whole period from 2.7 mya to the present, precession has exerted a significant higher frequency influence on the SIHL via its ~ 22 ky periodic variability.

(2) Despite the higher frequency input of precession to the SIHL, we do not observe this frequency in the record of ice volume vs. time for the whole period from 2.7 mya to the present, except as smaller secondary perturbations to underlying, more slowly varying major trends in ice volume.

(3) From about 2.7 mya to about 1.0 mya, the fundamental cadence of ice volume variability was paced by a 41 kyr period.

(4) During the past five ice ages, spanning the past 450 kyrs, the fundamental cadence of ice volume variability was paced by spacings of either four or five precession periods.

(5) The period from 1.0 mya to about 0.6 mya was a transition period from the 41 kyr cadence to the longer cadence, but resembled the longer cadence more closely.

(6) A major problem facing us in understanding the observations made in points 1 to 5 above is why the higher frequency nature of the contribution of precession to the SIHL never appears in the ice volume record, except as secondary perturbations.

(7) In the earlier period from 2.7 mya to 1.0 mya, the Earth was not as cold, and buildup and decay of ice responded to local SIHL, so Arctic ice-albedo feedbacks could not dominate the Earth’s climate. During this period, the build-up and decay of Antarctic ice volume (at about 30% of northern ice volume), being out of phase with northern ice volume, produced a global ice volume (the sum of northern and southern) that mostly cancelled out precession variability, leaving the cycle of global ice buildup and decay appearing to follow only the 41 kyr period due to obliquity. Global ice volume never reached a level higher than about 50-60% of that in the recent ice ages.

(8) In the most recent period of the last five ice ages, and to some extent further back as far as 1.0 mya, the Earth was cold enough that the energy balance favored continued growth of the great northern ice sheets, with the expansion during periods of low precession-induced SIHL exceeding the contraction during periods of high precession-induced SIHL. The high albedo of these increasingly larger northern ice sheets could now exert a global influence on the Earth’s climate. In contrast, Antarctic ice sheets had reached their natural continental limit of expansion and so their albedo feedback remained constant, thus allowing the ever-expanding NH ice sheet albedo to dominate global climate feedbacks. The ice sheets continued to grow until, after 4 or 5 precession cycles, CO2 was reduced below 200 ppm and the global temperatures reached a minimum. At that point, some factor, most likely dust accumulation on the northern ice sheets due to expansion of deserts, caused the next precessional maximum in the SIHL to relatively quickly melt the ice sheets and bring about a termination.

(9) Precession does not appear as a major factor in the history of ice volume over the past 2.7 my, even though it was always present, always active, and always important. But the final result for ice volume hid the effects of precession for totally different reasons in the early and late regimes.

Table 6.1 Strongest U.S. landfalling hurricanes.

Scientists have argued (in journal publications and media interviews) that at least some aspect of each of these storms was made worse by human-caused global warming: track, intensity, size, rainfall. Here we assess the arguments for claiming a contribution from global warming for each of these four impactful storms.

6.1 Detection and attribution of extreme weather events

Given the challenges of actually detecting a change in extreme weather events owing to the large impact of natural variability, the detection step is often skipped and attribution arguments are made independent of detection. There are two general types of extreme event attribution methods that do not rely on detection: physical reasoning and fraction of attributable risk (NCA4, 2017).

The fraction of attributable risk approach examines whether the odds of occurrence of a type of extreme event have changed. A conditional approach employs a climate model to estimate the probability of occurrence of a weather or climate event within two climate states: one with anthropogenic influence and one without (pre-industrial conditions). The Fraction of Attributable Risk then measures the extent to which manmade climate change has increased the odds of the threshold event occurring.
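The arithmetic behind this framework is simple, as the sketch below shows; the two probabilities are hypothetical inputs, whereas in practice each would be estimated from large ensembles of simulations with and without anthropogenic forcing.

```python
def fraction_of_attributable_risk(p_all: float, p_natural: float) -> float:
    """FAR = 1 - P0/P1, where P1 is the probability of exceeding a threshold in the
    climate with anthropogenic influence and P0 in the natural-only counterfactual."""
    return 1.0 - p_natural / p_all

# Hypothetical example: an event with probability 1-in-20 per year with forcing,
# 1-in-50 per year without it.
far = fraction_of_attributable_risk(p_all=0.05, p_natural=0.02)
print(far)  # 0.6 -> 60% of the risk attributed to the forcing, under these inputs
```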

Participants at the 2012 Workshop on Attribution of Climate-related Events at Oxford University questioned whether extreme event attribution was possible at all, given the inadequacies of the current generation of climate models (Nature, 2012):

“One critic argued that, given the insufficient observational data and the coarse and mathematically far-from-perfect climate models used to generate attribution claims, they are unjustifiably speculative, basically unverifiable and better not made at all.”

Given the inadequacies of climate models particularly for simulating tropical cyclones, attribution arguments related to individual hurricanes typically rely on the physical reasoning approach. The physical reasoning approach, often referred to as the conditional or ingredients-based approach, looks for changes in occurrence of atmospheric circulation and weather patterns relevant to the extreme event, or considers the impact of certain environmental changes (for example, greater atmospheric moisture) on the character of an extreme event.

6.2 Hurricane Sandy

Hurricane Sandy made landfall on October 29, 2012 near Atlantic City, NJ. Hurricane Sandy’s most substantial impact was its storm surge. The highest measured storm surge from Sandy was 9.4 feet (at The Battery)[2]. The argument is that human-caused global warming worsened the storm surge because of sea level rise.

Curry (2018a) summarized sea level rise at The Battery. Sea level has risen 11 inches over the past century (Figure 6.1), with almost half of this sea level rise caused by subsidence (sinking of the land). Kemp et al. (2017) found that relative sea level in New York City rose by ~1.70 meters [5.5 feet] since ~575 A.D. A recent acceleration in sea level rise between 2000 and 2014 has been attributed to an increase in the Atlantic Multidecadal Oscillation and southward migration of the Gulf Stream North Wall Index. The extent to which manmade warming is accelerating sea level rise remains disputed (as summarized by Curry, 2018a).

When Hurricane Sandy made landfall on the mid-Atlantic coast, it was no longer classified as a tropical cyclone, but its maximum wind speed at landfall was equivalent to a Category 1 hurricane. As a result of its transition from a tropical cyclone, Sandy became a hybrid storm, which greatly increased its horizontal size.

“In summary, while there is agreement that sea level rise alone has caused greater storm surge risk in the New York City area, there is low confidence on whether a number of other important determinants of storm surge climate risk, such as the frequency, size, or intensity of Sandy-like storms in the New York region, have increased or decreased due to anthropogenic warming to date.”

6.3 Hurricane Harvey

Hurricane Harvey made landfall in southern Texas on August 25, 2017 as a Category 4 hurricane. The primary damage from Harvey occurred after the storm had been downgraded to a tropical storm and stalled near the coastline, dropping torrential and unprecedented amounts of rainfall over Texas.

As summarized by Landsea (2017), observations indicate a maximum amount of rainfall of about 60 inches just east of Houston, with much of southeastern Texas receiving at least two feet of rainfall. Harvey set the record for most rainfall from a continental U.S. hurricane, going back at least to the 1880s when comprehensive records begin. The previous top four rainfall producers were: Tropical Storm Amelia (1978) with 48 inches in Texas, Hurricane Easy (1950) with 45 inches in Florida, Tropical Storm Claudette (1979) with 45 inches in Texas, and Tropical Storm Allison (2001) with 40 inches in Texas. Harvey’s stalled, meandering track was similar to Tropical Storms Claudette and Allison. But the peak amount of rainfall from Harvey, as well as Harvey’s areal extent of extreme rainfall, substantially surpassed both of these earlier storms.

Several publications based on model simulations have concluded that as much as 40% of the rainfall from Hurricane Harvey was caused by human-caused global warming (Emanuel 2017; Risser and Wehner 2017).

The rationale for these assessments was that prior to the beginning of northern summer of 2017, sea surface temperatures in the western Gulf of Mexico exceeded 30 °C [86 °F] and ocean heat content was the highest on record in the Gulf of Mexico (Trenberth et al. 2017). However, El Niño–Southern Oscillation (ENSO) and Atlantic circulation patterns contributed to this heat content, and hence it is very difficult to separate out any contribution from human-caused global warming.

Figure 6.2 shows that the Gulf of Mexico has warmed by about 0.7 °F (0.4 °C) in the last few decades (Trenberth et al. 2018).

Figure 6.2. Ocean heat content anomalies (top), monthly (black) and annual (red), for the upper 160 m of the Gulf of Mexico, and sea surface temperature anomalies (bottom) in the Gulf of Mexico (°C). The baseline is 1961–1990. Source: Trenberth et al. (2018)

Landsea (2017) summarizes the arguments for more rainfall from tropical cyclones traveling over a warmer ocean. Intuitively, rainfall from hurricanes might be expected to increase with a warmer ocean, as a warmer atmosphere can hold more moisture. Simple thermodynamic calculations suggest that the amount of rainfall in the tropical latitudes would go up about 4% per °F [7% per °C] of sea surface temperature increase. Considering a circle of 300 mile radius that captures nearly all of the rain, this implies about 10% more total hurricane rainfall for a warming of 2-2.5 °F [1-1.5 °C]. The Gulf of Mexico has warmed about 0.7 °F [0.4 °C] in the last few decades. Assuming that all of this warming is due to manmade global warming suggests that roughly 3% of hurricane rainfall today can be reasonably attributed to manmade global warming. Hence, only about 2 inches of Hurricane Harvey’s peak amount of 60 inches can be linked to manmade global warming.
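The back-of-envelope numbers in the preceding paragraph can be reproduced directly; the sketch below simply restates that arithmetic, taking the ~7% per °C scaling, the 0.4 °C Gulf warming, and the 60 inch peak from the text.

```python
# Thermodynamic scaling of hurricane rainfall with sea surface temperature
scaling_per_degC = 0.07        # ~7% more rainfall per degree C of SST increase
gulf_warming_degC = 0.4        # recent warming of the Gulf of Mexico (from the text)
harvey_peak_inches = 60.0      # observed peak rainfall from Harvey

fraction_attributable = scaling_per_degC * gulf_warming_degC    # ~0.028, i.e. ~3%
rain_attributable = fraction_attributable * harvey_peak_inches  # ~1.7 inches

print(f"{fraction_attributable:.1%} of the rainfall, about {rain_attributable:.1f} inches")
```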

Figure 6.3 illustrates the role of sea surface temperature in the western Gulf on Texas major hurricane landfalls. Ten major hurricane Texas landfalls were observed to occur with anomalously cool Gulf sea surface temperatures, while 11 occurred with anomalously warm Gulf sea surface temperatures.

6.4 Hurricane Irma

Hurricane Irma made landfall on September 10, 2017 as a Category 4 hurricane. Hurricane Irma set several records. Irma was the 5th strongest Atlantic hurricane on record. Irma was the 2nd strongest Atlantic storm in recorded history in terms of its accumulated cyclone energy – a function both of intensity (wind speed) and duration of the storm. Irma is tied with the 1932 Cuba Hurricane for the longest time spent as a Category 5 hurricane. Hurricane Irma maintained 185-mph winds for 37 hours — longer than any storm on record globally.[3]

Irma formed and rapidly intensified to a major hurricane in the eastern Atlantic, where sea surface temperatures were 26.5 °C (80 °F). The rule of thumb for a major hurricane is 28.5 °C. Clearly, simple thermodynamics associated with SST were not driving this intensification, but rather favorable atmospheric dynamics. In particular, wind shear was very weak. Further, the atmospheric circulation field (e.g. stretching deformation) was very favorable for spinning up this hurricane (Curry, 2017).

While the media made much ado about a global warming link to Irma’s intensity, there have been no published journal articles to date that have examined this issue. This is presumably because the sea surface temperatures during Irma’s development and intensification were relatively cool.

Figure 6.4 analyzes the time series of major (Cat 3+) landfalling hurricanes in Florida since 1900. There is no significant trend in either frequency or intensity.

Figure 6.4 Florida major hurricane landfalls. Source: Roy Spencer

6.5 Hurricane Michael

Hurricane Michael made landfall on the Florida Panhandle on October 10, 2018 as a strong Category 4 hurricane. Michael was one of the strongest hurricanes in recorded Atlantic history, and ranks #4 in terms of landfall winds (Table 6.1). The National Hurricane Center estimated peak storm surge inundation of 9-14 feet on the Florida Panhandle (Table 5.2).

During late summer, sea surface temperatures typically exceed 80 °F, which is more than sufficiently warm to sustain a major hurricane. The water in Michael’s path was 2 to 4 °F warmer than usual. Since 1985, sea surface temperatures in the Eastern Gulf of Mexico have increased by about 1 °F (Kennedy et al. 2007).

The most striking aspect of Hurricane Michael was its rapid intensification, from a Category 1 to Category 4 in 24 hours, as it traveled over a very warm patch of water off the coast of Florida. Near Florida, there are deep warm pools of water that move around (the Gulf Loop Current). If a hurricane travels over one of these deep warm pools, it will rapidly intensify if the atmospheric circulation patterns are favorable. Hurricanes Katrina and Rita in 2005 are examples of similar intensification.

For a tropical storm or hurricane to rapidly intensify, it needs three key ingredients: low wind shear, warm ocean water and high humidity. All of these ingredients were in place for Michael, which is somewhat unusual for October. Rather than the typical cold fronts bringing higher wind shear and dry air, circulation patterns were relatively stagnant, providing favorable conditions for Michael to intensify.

A Category 4 hurricane striking the Gulf coast of Florida is nothing new (Table 6.2). The most notable of these storms in context of a manmade global warming argument is the 1848 Great Gale hurricane that struck Tampa Bay,[4] with a measured barometric pressure and storm surge that are consistent with a Category 4 hurricane. Global temperatures (and presumably the sea surface temperatures in the Gulf of Mexico) were substantially cooler in the mid 19th century.

6.6 Conclusions

Convincing detection and attribution of individual extreme weather events such as hurricanes requires:

a very long time series of high-quality observations of the extreme event

an understanding of the variability of extreme weather events associated with multi-decadal ocean oscillations, which requires at least a century of observations

climate models that accurately simulate both natural internal variability on timescales of years to centuries and the extreme weather events

Of the four hurricanes considered here, only the rainfall in Hurricane Harvey passes the detection test, given that it is an event unprecedented in the historical record for a continental U.S. landfalling hurricane. Arguments attributing the high levels of rainfall to near record ocean heat content in the western Gulf of Mexico are physically plausible. The extent to which the high value of ocean heat content in the western Gulf of Mexico can be attributed to manmade global warming is debated. Owing to the large interannual and decadal variability in the Gulf of Mexico (e.g. ENSO), it is not clear that a dominant contribution from manmade warming can be identified against the background internal climate variability (Chapter 4).

JC note: next (and final) post in this series is 21st century projections.

Ben Santer et al. have a new paper out in Nature Climate Change arguing that with 40 years of satellite data available they can detect the anthropogenic influence in the mid-troposphere at a 5-sigma level of confidence. This, they point out, is the “gold standard” of proof in particle physics, even invoking for comparison the Higgs boson discovery in their Supplementary information.

FIGURE 1: From Santer et al. 2019

Their results are shown in the above Figure. It is not a graph of temperature, but of an estimated “signal-to-noise” ratio. The horizontal lines represent sigma units which, if the underlying statistical model is correct, can be interpreted as points where the tail of the distribution gets very small. So when the lines cross a sigma level, the “signal” of anthropogenic warming has emerged from the “noise” of natural variability by a suitable threshold. They report that the 3-sigma boundary has a p value of 1/741 while the 5-sigma boundary has a p value of 1/3.5 million. Since all signal lines cross the 5-sigma level by 2015, they conclude that the anthropogenic effect on the climate is definitively detected.
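Those probabilities are just one-sided Gaussian tail areas, as the short check below shows (assuming, as the interpretation requires, that the signal-to-noise ratio really is standard normal under the null).

```python
from scipy.stats import norm

for sigma in (3, 5):
    p = norm.sf(sigma)              # one-sided upper tail probability
    print(sigma, p, f"about 1 in {1 / p:,.0f}")

# 3 sigma -> p ~ 0.00135, about 1 in 741
# 5 sigma -> p ~ 2.9e-7, about 1 in 3.5 million
```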

I will discuss four aspects of this study which I think weaken the conclusions considerably: (a) the difference between the existence of a signal and the magnitude of the effect; (b) the confounded nature of their experimental design; (c) the invalid design of the natural-only comparator; and (d) problems relating “sigma” boundaries to probabilities.

(a) Existence of signal versus magnitude of effect

Suppose you are tuning an old analog receiver to a weak signal from a far-away radio station. By playing with the dial you might eventually get a good enough signal to realize they are playing Bach. But the strength of the signal tells you nothing about the tempo of the music: that’s a different calculation.

In the same way the above diagram tells us nothing about the magnitude of the temperature effect of greenhouse gases on the climate. It only shows the ratio of two things: a measure of the rate of improvement over time of the correlation between observations and models forced with natural and anthropogenic forcings, divided by a measure of the standard deviation of the same measure under a “null hypothesis” of (allegedly) pure natural variability. In that sense it is like a t-statistic, which is also measured in sigma units. Since there can be no improvement over time in the fit between the observations and the natural-only comparator, any improvement in the signal raises the sigma level.

Even if you accept Figure 1 at face value, it is consistent with there being a very high or very low sensitivity to greenhouse gases, or something in between. It is consistent, for instance, with the findings of Christy and McNider, also based on satellite data, that sensitivity to doubled GHG levels, while positive, is much lower than typically shown in models.

(b) Confounded signal design

According to the Supplementary information, Santer et al. took annually-averaged climate model data based on historical and (RCP8.5) scenario-based natural and anthropogenic forcings and constructed mid-troposphere (MT) temperature time series that include an adjustment for stratospheric cooling (i.e. “corrected”). They averaged all the runs and models, regridded the data into 10 degree x 10 degree grid cells (576 altogether, with polar regions omitted) and extracted 40 annual temperature anomalies for each gridcell over the 1979 to 2018 interval. From these they extracted a spatial “fingerprint” of the model-generated climate pattern using principal component analysis, aka empirical orthogonal functions. You could think of it as a weighted average over time of the anomaly values for each gridcell. Though it’s not shown in the paper or the Supplement, this is the pattern (it’s from a separate paper):

FIGURE 2: Spatial fingerprint pattern

The gray areas in Figure 2 over the poles represent omitted gridcells since not all the satellite series cover polar regions. The colors represent PC “loadings” not temperatures, but since the first PC explains about 98% of the variance, you can think of them as average temperature anomalies and you won’t be far off. Hence the fingerprint pattern in the MT is one of amplified warming over the tropics with patchy deviations here and there.
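For readers unfamiliar with the fingerprint step, the sketch below shows how a leading spatial pattern (EOF 1) can be extracted from a (years x gridcells) anomaly matrix with a singular value decomposition. The array dimensions and the random data are placeholders, not the actual model output.

```python
import numpy as np

def leading_fingerprint(anomalies):
    """Return the leading spatial pattern (EOF 1) and the fraction of variance it
    explains, from an anomaly matrix of shape (n_years, n_gridcells)."""
    X = anomalies - anomalies.mean(axis=0)       # remove the time mean per gridcell
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[0], explained[0]

# Placeholder dimensions: 40 years x 576 gridcells of synthetic anomalies
rng = np.random.default_rng(0)
anoms = rng.normal(size=(40, 576))
pattern, var_frac = leading_fingerprint(anoms)
```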

This is the pattern they will seek to correlate with observations as a means of detecting the anthropogenic “fingerprint.” But it is associated in the models with both natural and anthropogenic forcings together over the 1979-2018 interval. They refer to this as the HIST+8.5 data, meaning model runs forced up to 2006 with historical forcings (both natural and anthropogenic) and thereafter according to the RCP8.5 forcings. The conclusion of the study is that observations now look more like the above figure than the null hypothesis (“natural only”) figure, ergo anthropogenic fingerprint detected. But HIST+8.5 is a combined fingerprint, and they don’t actually decompose the anthropogenic portion.

So they haven’t identified a distinct anthropogenic fingerprint. What they have detected is that observations exhibit a better fit to models that have the Figure 2 warming pattern in them, regardless of cause, than those that do not. It might be the case that a graph representing the anthropogenic-only signal would look the same as Figure 1, but we have no way of knowing from their analysis.

(c) Invalid natural-only comparator

The above argument would matter less if the “nature-only” comparator controlled for all known warming from natural forcings. But it doesn’t, by construction.

The fingerprint methodology begins by taking the observed annual spatial layout of temperature anomalies and correlates it to the pattern in Figure 2 above, yielding a correlation coefficient for each year. Then they look at the trend in those correlation coefficients as a measure of how well the fit increases over time. The correlations themselves are not reported in the paper or the supplement.
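Schematically, the quantity plotted in Figure 1 is the trend in those yearly pattern correlations divided by the spread of comparable trends computed from unforced control-run segments. A sketch of that calculation is below; all of the inputs are placeholder arrays rather than the actual observations or model output.

```python
import numpy as np

def pattern_correlation(field, fingerprint):
    """Centered spatial correlation between one year's anomaly map and the fingerprint."""
    f = field - field.mean()
    g = fingerprint - fingerprint.mean()
    return np.sum(f * g) / np.sqrt(np.sum(f**2) * np.sum(g**2))

def signal_to_noise(obs_maps, fingerprint, control_trend_samples):
    """Trend of yearly correlations divided by the spread of control-run trends."""
    years = np.arange(obs_maps.shape[0])
    corrs = np.array([pattern_correlation(m, fingerprint) for m in obs_maps])
    signal_trend = np.polyfit(years, corrs, 1)[0]
    noise = np.std(control_trend_samples)        # trends from unforced control segments
    return signal_trend / noise

# Placeholder inputs: 40 years of maps, one fingerprint, 200 control-run trends
rng = np.random.default_rng(1)
obs = rng.normal(size=(40, 576))
fp = rng.normal(size=576)
control_trends = rng.normal(scale=0.001, size=200)
print(signal_to_noise(obs, fp, control_trends))
```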

The authors then construct a “noise” pattern to serve as the “nature-only” counterfactual to the above diagram. They start by selecting 200-year control runs from 36 models and gridding them in the same 10×10 format. Eventually they will average them all up, but first they detrend each gridcell in each model, which I consider a misguided step.

Everything depends on how valid the natural variability comparator is. We are given no explanation of why the authors believe it is a credible analogue to the natural temperature patterns associated with post-1979 non-anthropogenic forcings. It almost certainly isn’t. The sum of the post-1979 volcanic+solar series in the IPCC AR5 forcing series looks like this:

FIGURE 3: IPCC NATURAL FORCINGS 1979-2017

This clearly implies natural forcings would have induced a net warming over the sample interval, and since tropical amplification occurs regardless of the type of forcing, a proper “nature-only” spatial pattern would likely look a lot like Figure 2. But by detrending every gridcell Santer et al. removed such patterns, artificially worsening the estimated post-1979 natural comparator.

The authors’ conclusions depend critically on the assumption that their “natural” model variability estimate is a plausible representation of what 1979-2018 would have looked like without greenhouse gases. The authors note the importance of this assumption in their Supplement (p. 10):

“Our assumption regarding the adequacy of model variability estimates is critical. Observed temperature records are simultaneously influenced by both internal variability and multiple external forcings. We do not observe “pure” internal variability, so there will always be some irreducible uncertainty in partitioning observed temperature records into internally generated and externally forced components. All model-versus-observed variability comparisons are affected by this uncertainty, particularly on less well-observed multi-decadal timescales.”

As they say, every fingerprint and signal-detection study hinges on the quality of the “nature-only” comparator. Unfortunately by detrending their control runs gridcell-by-gridcell they have pretty much ensured that the natural variability pattern is artificially degraded as a comparator.

It is as if a bank robber were known to be a 6 foot tall male, and the police put their preferred suspect in a lineup with a bunch of short women. You might get a confident witness identification, but you wouldn’t know if it’s valid.

Making matters worse, the greenhouse-influenced warming pattern comes from models that have been tuned to match key aspects of the observed warming trends of the 20th century. While less of an issue in the MT layer than would be the case at the surface, there will nonetheless be partial enhancement of the match between model simulations and observations due to post hoc tuning. In effect, the police are making their preferred suspect wear the same black pants and shirt as the bank robber, while the short women are all in red dresses.

Thus, it seems to me that the lines in Figure 1 are based on comparing an artificially exaggerated resemblance between observations and tuned models versus an artificially worsened counterfactual. This is not a gold standard of proof.

(d) t-statistics and p values

The probabilities associated with the sigma lines in Figure 1 are based on the standard Normal tables. People are so accustomed to the Gaussian (Normal) critical values that they sometimes forget that they are only valid for t-type statistics under certain assumptions that need to be tested. I could find no information in the Santer et al. paper that such tests were undertaken.

I will present a simple example of a signal detection model to illustrate how t-statistics and Gaussian critical values can be very misleading when misused. I will use a data set consisting of annual values of weather-balloon measured global MT temperatures averaged over RICH, RAOBCORE and RATPAC, the El Niño–Southern Oscillation Index (ESOI – pressure based version), and the IPCC forcing values for greenhouse gases (“ghg” comprising CO2 and other), tropical ozone (“o3”), aerosols (“aero”), land use change (“land”), total solar irradiance (“tsi”) and volcanic aerosols (“volc”). The data run from 1958 to 2017 but I only use the post-1979 portion to match the Santer paper. The forcings are from IPCC AR5 with some adjustments by Nic Lewis to bring them up to date.

A simple way of investigating causal patterns in time series data is using an autoregression. Simply regress the variable you are interested in on itself lagged once, plus lagged values of the possible explanatory variables. Inclusion of the lagged dependent variable controls for momentum effects, while the use of lagged explanatory variables constrains the correlations to a single direction: today’s changes in the dependent variable cannot cause changes in yesterday’s values of the explanatory variables. This is useful for identifying what econometricians call Granger causality: when knowing past values of one variable significantly reduces the forecast error of another variable.
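As an illustration of this kind of lagged regression, a minimal sketch using pandas and statsmodels is shown below. The CSV file and column names are hypothetical stand-ins for the balloon temperature and forcing series described above, not the actual data set.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: annual series with columns 'temp', 'anthro', 'natural'
df = pd.read_csv("mt_temps_and_forcings.csv")   # placeholder file name

df["l_temp"] = df["temp"].shift(1)              # lagged dependent variable
df["l_anthro"] = df["anthro"].shift(1)          # lagged explanatory variables
df["l_natural"] = df["natural"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["l_temp", "l_anthro", "l_natural"]])
model = sm.OLS(df["temp"], X).fit()
print(model.summary())    # t-statistics and p-values for the lagged regressors
```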

I ran the regression Temp = a1 + a2* l.Temp + a3*l.anthro +a4* l.natural where a lagged value is denoted by an “l.” prefix. The results over the whole sample length are:

The coefficient on “anthro” is more than twice as large as that on “natural” and has a larger t-statistic. Also, its p-value indicates a probability of about 1 in 2.4 billion of obtaining such a value if there were no effect. So I could conclude based on this regression that anthropogenic forcing is the dominant effect on temperatures in the observed record.

The t-statistic on anthro provides a measure much like what the Santer et al. paper shows. It represents the marginal improvement in model fit based on adding anthropogenic forcing to the time series model, relative to a null hypothesis in which temperatures are affected only by natural forcings and internal dynamics. Running the model iteratively while allowing the end date to increase from 1988 to 2017 yields the results shown below in blue (Line #1):

FIGURE 4: S/N ratios for anthropogenic signal in temperature model

It looks remarkably like Figure 1 from Santer et al., with the blue line crossing the 3-sigma level in the late 90s and hitting about 8 sigma at the peak.

But there is a problem. This would not be publishable in an econometrics journal because, among many other things, I haven’t tested for unit roots. I won’t go into detail about what they are; I’ll just point out that if time series data have unit roots they are nonstationary and you can’t use them in an autoregression because the t-statistics follow a nonstandard distribution and Gaussian (or even Student’s t) tables will give seriously biased probability values.

I ran Phillips-Perron unit root tests and found that anthro is nonstationary, while Temp and natural are stationary. This problem has already been discussed and grappled with in some econometrics papers (see for instance here and the discussions accompanying it, including here).

A possible remedy is to construct the model in first differences. If you write out the regression equation at time t and also at time (t-1) and subtract the two, you get d.Temp = a2* l.d.Temp + a3*l.d.anthro +a4*l.d.natural, where the “d.” means first difference and “l.d.” means lagged first difference. First differencing removes the unit root in anthro (almost – probably close enough for this example) so the regression model is now properly specified and the t-statistics can be checked against conventional t-tables. The results over the whole sample are:

The coefficient magnitudes remain comparable but—oh dear—the t-statistic on anthro has collapsed from 8.56 to 1.32, while those on natural and lagged temperature are now larger. The problem is that the t-ratio on anthro in the first regression was not a t-statistic, instead it followed a nonstandard distribution with much larger critical values. When compared against t tables it gave the wrong significance score for the anthropogenic influence. The t-ratio in the revised model is more likely to be properly specified, so using t tables is appropriate.
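A sketch of the unit-root check and the first-difference regression, continuing the hypothetical example above, is given below; the PhillipsPerron test is from the arch package, and the file and column names remain placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from arch.unitroot import PhillipsPerron

df = pd.read_csv("mt_temps_and_forcings.csv")   # same hypothetical file as above

# Phillips-Perron unit root tests (null hypothesis: the series has a unit root)
for col in ["temp", "anthro", "natural"]:
    print(col, round(PhillipsPerron(df[col].dropna()).pvalue, 3))

# First-difference regression: d.Temp = a2*l.d.Temp + a3*l.d.anthro + a4*l.d.natural
d = df[["temp", "anthro", "natural"]].diff()
d["l_d_temp"] = d["temp"].shift(1)
d["l_d_anthro"] = d["anthro"].shift(1)
d["l_d_natural"] = d["natural"].shift(1)
d = d.dropna()

X = d[["l_d_temp", "l_d_anthro", "l_d_natural"]]   # no constant, matching the equation
print(sm.OLS(d["temp"], X).fit().summary())
```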

The corresponding graph of t-statistics on anthro from the second model over varying sample lengths is shown in Figure 4 as the green line (Line #2) at the bottom of the graph. Signal detection clearly fails.

What this illustrates is that we don’t actually know the correct probability values to attach to the sigma values in Figure 1. If Santer et al. want to use Gaussian probabilities they need to test that their regression models are specified correctly for doing so. But none of the usual specification tests were provided in the paper, and since it’s easy to generate a vivid counterexample we can’t assume the Gaussian assumption is valid.

Conclusion

The fact that in my example the t-statistic on anthro falls to a low level does not “prove” that anthropogenic forcing has no effect on tropospheric temperatures. It does show that in the framework of my model the effects are not statistically significant. If you think the model is correctly-specified and the data set is appropriate you will have reason to accept the result, at least provisionally. If you have reason to doubt the correctness of the specification then you are not obliged to accept the result.

This is the nature of evidence from statistical modeling: it is contingent on the specification and assumptions. In my view the second regression is a more valid specification than the first one, so faced with a choice between the two, the second set of results is more valid. But there may be other, more valid specifications that yield different results.

In the same way, since I have reason to doubt the validity of the Santer et al. model I don’t accept their conclusions. They haven’t shown what they say they showed. In particular they have not identified a unique anthropogenic fingerprint, or provided a credible control for natural variability over the sample period. Nor have they justified the use of Gaussian p-values. Their claim to have attained a “gold standard” of proof is unwarranted, in part because statistical modeling can never do that, and in part because of the specific problems in their model.

Part III: is there any signal of global warming in landfalling hurricanes and their impacts?

5. Landfalling hurricanes

Total basin and global hurricane statistics are most easily related to global and regional climate variability and change. However, landfalling hurricanes are of particular interest owing to their socioeconomic impacts.

Economic losses from landfalling hurricanes have increased in recent decades, both in the U.S. and globally. Identifying a signal from manmade global warming in the increased losses requires identifying a trend that can be attributed to manmade global warming in any of the factors that contribute to economic losses from landfalling hurricanes. These factors include: hurricane frequency, intensity, horizontal size, storm surge, rate of motion near the coast, tornadoes and rainfall.

5.1 Continental U.S.

Klotzbach et al. (2018) have conducted a comprehensive evaluation of the landfalling hurricane data for the Continental U.S. (CONUS) since 1900.

Figure 5.1 (top) shows the time series of U.S. landfalling hurricanes for the period 1900 to 2017. While the largest counts are from 1986, 2004 and 2005, there is a slight overall negative trend line since 1900 that is not statistically significant. Figure 5.1 (bottom) shows the time series for major hurricane landfalls (Category 3-5). The largest year in the record is 2005, with 4 major hurricane landfalls. However, during the period 2006 through 2016, there were no major hurricanes striking the U.S., which is the longest such period in the record since 1900.

Figure 5.1 Time series from 1900 to 2017 for continental U.S. landfalling hurricanes (top) and major hurricanes (bottom). The dotted lines represent linear trends over the period, although neither of these trends is statistically significant. Source: Klotzbach et al. (2018).

Villarini et al. (2012) provide an analysis of U.S. landfalls back to 1878 (Figure 5.2). While it is possible that some landfalls were missed in the early decades owing to sparsely populated regions on the Gulf Coast, it is remarkable that the highest year in the entire record, with 7 landfalls, is 1886.

Figure 5.2 Time series of the count of U.S. landfalling hurricanes for the period 1878 – 2008. From Villarini et al. (2012).

An energetic perspective on U.S. landfalling hurricanes is provided by Truchelut and Staehling (2017). Figure 5.3 shows the time series of continental U.S. landfalling Accumulated Cyclone Energy (ACE), referred to as Integrated Storm Activity Annually Over the Continental U.S. (ISAAC). The 2006-2016 drought of U.S. major hurricane landfalls is associated with a landfall ACE value that was less than 60% of the 1900-2017 average.

Figure 5.3. Timeseries of ISAAC for 1900-2017, with a ten-year centered average value (red). From Truchelut and Staehling (2017).
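For reference, Accumulated Cyclone Energy is computed from six-hourly maximum sustained winds while a system is at tropical storm strength or greater; a minimal version of that standard calculation is sketched below, with made-up wind values.

```python
def accumulated_cyclone_energy(max_winds_kt):
    """ACE = 1e-4 times the sum of squared six-hourly maximum sustained winds (knots),
    counting only times when the system is at least tropical storm strength (>= 34 kt)."""
    return 1e-4 * sum(v**2 for v in max_winds_kt if v >= 34)

# Made-up six-hourly wind history for a single storm (knots)
winds = [30, 35, 45, 60, 80, 95, 85, 60, 40, 30]
print(accumulated_cyclone_energy(winds))   # ACE contribution of this one storm
```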

Truchelut and Staehling (2017) illustrate how the overall Atlantic basin hurricane activity does not directly relate to U.S. landfall activity in a consistent way. Figure 5.4 shows the landfalling ACE (ISAAC) as a percent of overall Atlantic basin Accumulated Cyclone Energy (ACE). The drought in major landfalling hurricanes between 2006 and 2016 has the lowest decadal value of this ratio since 1950.

Figure 5.4: Time series of proportional ISAAC over 1950-2017, expressed as a percentage of the annual cumulative ACE occurring in the Atlantic Basin, with a ten-year centered average (red). From Truchelut and Staehling (2017).

Substantial interannual to multidecadal variability in U.S. landfall activity is seen in Figures 5.1 to 5.4. Klotzbach et al. (2018) examined how the landfall counts vary with ENSO (El Niño versus La Niña) and the warm versus cold phases of the Atlantic Multidecadal Oscillation (AMO).

Figure 5.5 compares U.S. landfall frequency during El Niño versus La Niña years. About 1.75 times as many hurricanes make U.S. landfall in La Niña seasons compared with El Niño seasons. Klotzbach et al. found similar ENSO-related modulation in both Florida and East Coast landfalls as well as Gulf Coast landfalls. The La Niña-to-El Niño ratio is slightly larger for major hurricane landfalls than for all hurricane landfalls, although the increase in hurricane landfalls observed in La Niña seasons relative to that observed in all seasons does not meet the 5% significance level.

Figure 5.7 shows the number of CONUS landfalling major hurricanes by decade. Why were there fewer landfalling major hurricanes in the decade 2001 to 2010 versus 1941 to 1950, both decades at the peak of the Atlantic Multidecadal Oscillation (AMO)? Figure 5.7 shows that there arguably were more major hurricanes in the Atlantic basin during the earlier, mid-century AMO. The explanation probably lies in the relative frequencies of El Niño versus La Niña years during these two warm periods, with the current warm phase of the AMO being dominated by a relatively large number of El Niño years that are associated with low Atlantic hurricane activity.

Kossin (2017) identified an increased tendency for enhanced vertical wind shear near the continental U.S. in the warm state of the Atlantic Meridional Mode (AMM) as a potential contributor to diminished landfall efficiency in active seasons. During periods of greater Atlantic hurricane activity, a protective barrier of vertical wind shear and cooler ocean temperatures forms along the U.S. East Coast, weakening storms as they approach land. Conversely, during periods of low activity, the sea surface temperatures near the coast are warmer and the wind shear there is weaker. When conditions in the tropical Atlantic are good for hurricane intensification, they are bad for it near the coast, and vice versa.

The Arc horseshoe temperature pattern in the Atlantic (Figure 4.3) illustrates the spatial pattern of Atlantic surface temperatures associated with AMO. The east-west pattern of warm-cool temperatures influences the ratio of landfall ACE to total basin ACE (Figure 5.4), resulting in the opposing tendencies of hurricane intensification near the Atlantic coast versus in the Atlantic basin.

5.2 Caribbean

Klotzbach (2011) summarizes Caribbean landfalling hurricanes (Figure 5.8). It is seen that there is no significant long-term trend. The primary interannual driver of variability in the Caribbean is ENSO, whereby much more activity occurs in the Caribbean with La Niña conditions than with El Niño conditions. On the multidecadal time scale, the AMO plays a significant role in Caribbean hurricane activity. When ENSO and the AMO are examined in combination, even stronger relationships are found. For example, 29 hurricanes tracked into the Caribbean in the 10 strongest La Niña years in a positive (warm) AMO period, compared with only two hurricanes tracking through the Caribbean in the 10 strongest El Niño years in a negative (cool) AMO period.

Figure 5.8: Caribbean landfalling hurricanes, for the period 1900-2018. Updated from Klotzbach (2011).

Chenoweth and Divine (2008) provide a longer-term perspective on Caribbean landfalling hurricanes by assembling a historical document-based 318-year record of tropical cyclones impacting the Lesser Antilles, for the period 1690-2007. Newspaper accounts, ships’ logbooks, meteorological journals and other documentary sources were used to create this data set. This compilation estimates the position and intensity of each tropical cyclone that passed through the 61.5°W meridian from the coast of South America northward to 25.0°N. The numbers of tropical cyclones show no significant trends (Figure 5.9). The period with the largest number of landfalls was in the early 19th century. The time span 1968–1977 was probably the most inactive period since the islands were settled in the 1620s and 1630s.

Figure 5.9. The number of (top) hurricanes, (middle) tropical storms, and (bottom) both hurricanes and tropical storms passing through 10–20°N 61.5°W from 1690 to 2007. Red curve is a 21-year moving mean.

5.3 Global

Weinkle et al. (2012) summarize the challenges in constructing a homogeneous global hurricane landfall data set. Uncertainty in tropical cyclone location and intensity data is a function of the evolving observation network throughout the past century, ranging from ship traffic and aerial reconnaissance to satellite remote sensing. Weinkle et al. (2012) examined landfalls in the North Atlantic, northeastern Pacific, western North Pacific, northern Indian Ocean, and the Southern Hemisphere, using the International Best Track Archive for Climate Stewardship (IBTrACS).

The global frequency of total and major hurricane landfalls shows considerable interannual variability, but no significant linear trend (Figure 5.10). Furthermore, when considering each basin individually, there is no significant trend except in the Southern Hemisphere. This result is not unexpected considering the known multidecadal signals in tropical cyclone activity, which cannot be adequately resolved by the short historical record.

Figure 5.10: Frequency of global hurricane landfalls, for the period 1970-2018. Updated from Weinkle et al. (2012).

Mei et al. (2015) investigated the intensity of landfalling hurricanes over the northwest Pacific since the late 1970s. Over the past 37 years, hurricanes that strike East and Southeast Asia have intensified by 12-15%, with the proportion of Category 4/5 storms more than doubling. In contrast, typhoons that stay over the open ocean do not reflect such an increase. They found that the increase in intensity of landfalling hurricanes is tied to locally enhanced surface warming on the rim of East and Southeast Asia.

As summarized by Camargo et al. (2010), ENSO’s influence on western North Pacific hurricane tracks is reflected in the landfall rates throughout the region, with different landfall patterns associated with ENSO phase. There is a significant relationship between late season landfalls over China and ENSO. There is also an increase in landfalls in the Korean Peninsula and Japan during the early monsoon months and in the Indochinese peninsula during the peak monsoon months in El Niño years.

5.4 Water – rainfall and storm surge

Historically, the most deadly and damaging impacts of hurricanes have been storm surge and inland flooding (Blake et al. 2011). This section assesses whether there has been any increase in storm surge and rainfall associated with hurricanes.

5.4.1 Rainfall

It has been estimated that on average, tropical cyclones of at least tropical depression strength contribute about a quarter of the annual rainfall in the southeast U.S. Soule et al. (2012) found that tropical cyclones in the Southeast U.S. frequently ‘bust’ droughts, with the majority of counties in Florida, Georgia, South Carolina and North Carolina seeing at least 20% of their droughts ended by a tropical cyclone between 1950 and 2008.

Hurricanes also account for approximately 20% of the observed monthly rainfall from June to November across the eastern U.S. Corn Belt (Wisconsin, Michigan, Illinois, Indiana, Ohio and Kentucky) (Kellner et al. 2016).

While inland flooding typically occurs with a landfalling hurricane, several factors lead to excessive rainfall. Slow motion of the hurricane near landfall can lead to high amounts of local rainfall (e.g. Hurricane Danny – 1997; Hurricane Wilma – 2005; Hurricane Harvey – 2017; Hurricane Florence – 2018). Mountains/hills near the coast magnify rainfall potential due to forced upslope flow (e.g. Hurricane Mitch – 1998). Upper level troughs and cold fronts can lead to excessive rainfall (e.g. Hurricane Floyd – 1999). Larger tropical cyclones have larger rain footprints, which can lead to excessive rainfall owing to the longer time frame over which rainfall falls at any one location. High water vapor content in the atmosphere also contributes to excessive rainfall. As a hurricane moves farther inland and is cut off from its supply of warmth and moisture (the ocean), rainfall amounts from hurricanes and their remains decrease quickly, unless there is upslope flow from mountains/hills.

Roth (2017) provides a list of the hurricanes that were the biggest rain producers for each country/island in the North Atlantic (Table 5.2). The table does not include Hurricane Harvey’s (2017) rainfall of 60.58 inches, which occurred after this table was prepared. It is seen that Hurricane Mitch (1998), Hurricane Wilma (2005), Hurricane Flora (1963) and the November 1909 Hurricane each had peak landfall rainfall amounts exceeding that for Hurricane Harvey.

Table 5.2. List of hurricanes that were the biggest rain producers in the North Atlantic. Source: Roth (2017).

Knight and Davis (2007) found that between 1980 and 2004, tropical cyclones in the Southeast U.S. tended to be wetter, with 11 of the 84 stations analyzed showing statistically significant increases in tropical cyclone rainfall. No stations had significant decreases. Over this period, they found that the increase in frequency of landfalling storms was a more important factor in the increase in hurricane rainfall, rather than the fact that individual storms have tended to be wetter.

Kunkel et al. (2010) found that the number of Southeast U.S. tropical cyclone heavy precipitation events more than doubled between 1994 and 2008, compared to the long-term average from 1895 to 2008.

5.4.2 Storm surge

The magnitude of a storm surge depends on storm intensity, forward speed, size (radius of maximum winds), angle of approach to the coast, central pressure, high tide versus low tide, and the shape and characteristics of coastal features.

Sea level rise also influences the height of storm surges. Since 1900, global mean sea level has risen 7-8 inches (see Curry 2018a for an overview). Depending on local topography, a small change in sea level can translate into a significant increase in the inland reach of the storm surge.

The highest documented storm surge in the U.S. occurred in 2005 during Hurricane Katrina, when Pass Christian, MS, recorded a 27.8 foot storm surge.

5.5 Tornadoes

As summarized by Belanger et al. (2009), most hurricanes spawn tornadoes. Hurricanes making landfall from the Gulf of Mexico are more likely to produce tornadoes in the continental U.S. than Atlantic landfalling hurricanes that strike the U.S. coastline obliquely. Although most of these tornadoes are weak, there have been cases in which significant death and destruction have resulted. Hurricane Ivan (2004) generated an outbreak of 117 tornadoes that resulted in 47 injuries, seven deaths, and $96.9 million in property damage.

In view of the undercounting of tornadoes prior to the mid-1990s, when the national network of weather radars was completed, Belanger et al. developed a statistical model of hurricane-spawned tornadoes using data from the period when the weather radars were available. From the reconstructed tornado data for the period 1920-2007, Belanger et al. concluded that the active period since 1995 has seen an increase in the average number and in the frequency of large hurricane-spawned tornado outbreaks in the Gulf of Mexico – dominated by the number of hurricane-spawned tornadoes during 2004-2005, which was unprecedented in the reconstructed record since 1920.

These changes are linked to an increase in the median size and frequency of large Gulf landfalling hurricanes with large horizontal extent (size). Relatively little research has been done on climatic variations of hurricane size. Belanger et al. found the reconstructed climatology of hurricane-spawned tornadoes clearly reflects the decadal-scale variations associated with the Atlantic Multidecadal Oscillation (AMO).

A satellite-based hurricane size climatology was developed by Knaff (2014). Some limited information on the variability of Atlantic hurricane size is provided by Fritz (2009). Figure 5.11 shows the Atlantic seasonal average of the maximum radial extent of 34 knot [39 mph] wind speeds (R34), for the period 1970-2005. While substantial year-to-year variability is seen, a large jump occurs in 1995, associated with the transition to the warm phase of the AMO (Section 4.3.1).

Figure 5.11. Time series of seasonal average of the radius of R34 for the North Atlantic, for the period 1970-2005. Unpublished diagram from A. Fritz, using the data set described by Fritz (2009).

Given the importance of hurricane size in landfall impacts (storm surge, rainfall amount, tornadoes), increased attention should be given to documenting and understanding the variability of tropical cyclone size.

5.6 Damage and losses

Data collected by MunichRe (2018) show that worldwide economic losses from landfalling tropical cyclones have increased over the past decades. Historically, the greatest amount of damage from landfalling hurricanes has been from winds and storm surge. Recently, we have seen several storms where the greatest damage occurred from inland rainfall, particularly for slow moving storms (e.g. Hurricane Harvey in 2017 and Hurricane Florence in 2018).

While there is no observational evidence of increased frequency or intensity of landfalling hurricanes, either in the Atlantic or globally, there is very clear evidence of increasing damage from landfalling hurricanes. Is this increase in damage solely attributed to increasing population and wealth in vulnerable coastal locations, or is there an element of climate change that is contributing to the increase in damage?

Determining whether climate change is contributing to the increase in damage from landfalling hurricanes requires correct identification of the relevant variables driving the damage. In addition to the frequency and intensity of landfalling hurricanes, the following variables contribute to damage: horizontal size of the hurricane, forward speed of motion near the coast, storm surge and rainfall.

Klotzbach et al. (2018) and Weinkle et al. (2018) have addressed the question as to whether Continental United States (CONUS) hurricane-related inflation-adjusted damage has increased significantly since 1900. Both studies remark that since 1900, neither observed U.S. landfalling hurricane frequency nor intensity shows significant trends, including the devastating 2017 season. Growth in coastal population and regional wealth are the overwhelming drivers of observed increases in hurricane-related damage. This trend has led to the growth in exposure and vulnerability of coastal property along the U.S. Gulf and East Coasts.

Klotzbach et al. and Weinkle et al. argue that given that there are no significant trends in the frequency or intensity of landfalling U.S. hurricanes since 1900, we would expect an unbiased normalization to also exhibit no trend over this time period. Estrada et al. (2015) argue that the damage normalization approach used by Weinkle et al. and Klotzbach et al. is ambiguous owing to unobserved variables and spatial variability, e.g. changing adaptation practices and local vulnerability. Further, exposure to hurricane damage is not uniquely determined by landfall frequency and intensity: horizontal size, storm surge and precipitation amount are generally independent of storm intensity.
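As a rough illustration of what such a normalization involves, the sketch below scales a historical loss by inflation, real wealth per capita and population in the affected counties, in the spirit of the approach used by Weinkle et al. and Klotzbach et al.; the function, its argument names and the sample numbers are purely illustrative and are not taken from either study.

```python
# Minimal sketch of hurricane damage "normalization" in the spirit of the
# approaches discussed above. All numbers are illustrative placeholders.

def normalize_damage(base_damage, inflation_ratio, wealth_ratio, population_ratio):
    """Scale a historical loss to present-day exposure.

    base_damage      : reported loss in the year of landfall
    inflation_ratio  : price level today / price level in landfall year
    wealth_ratio     : real wealth per capita today / in landfall year
    population_ratio : affected-county population today / in landfall year
    """
    return base_damage * inflation_ratio * wealth_ratio * population_ratio

# Hypothetical example: a $1 billion loss from a 1950s landfall.
print(normalize_damage(1.0e9, inflation_ratio=11.0, wealth_ratio=2.5,
                       population_ratio=4.0))   # ~1.1e11, i.e. roughly $110 billion
```

The logic of the argument above is that if landfall frequency and intensity show no trend, a series normalized in this way should show no trend either; Estrada et al.'s critique is that factors outside this simple scaling (adaptation, storm size, surge and rainfall) can leave a climate-related residual.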

Warmer sea surface temperatures are expected to contribute to an overall increase in hurricane rainfall; the extent to which rainfall has increased in landfalling hurricanes remains an active area of research.

Storm surge risk is increasing owing to the slow creep of sea level rise. The extent to which this sea level rise can be attributed to manmade global warming is disputed (e.g. Curry 2018a). NOAA provides the following storm surge vulnerability facts:

From 1990-2008, population density increased by 32% in Gulf coastal counties, 17% in Atlantic coastal counties, and 16% in Hawaii

Much of the United States’ densely populated Atlantic and Gulf Coast coastlines lie less than 10 feet above mean sea level

Over half of the Nation’s economic productivity is located within coastal zones

72% of ports, 27% of major roads, and 9% of rail lines within the Gulf Coast region are at or below 4 ft elevation

A storm surge of 23 ft has the ability to inundate 67% of interstates, 57% of arterials, almost half of rail miles, 29 airports, and virtually all ports in the Gulf Coast area

JC note: stay tuned for Part IV, on attribution of recent major U.S. landfalling hurricanes.

A hidden province of volcanoes in West Antarctica may accelerate sea level rise [link]

The Dominant Role of Extreme Precipitation Events in Antarctic Snowfall Variability buff.ly/2U3stFU

The Strength of Low-Cloud Feedbacks and Tropical Climate: A CESM Sensitivity Study buff.ly/2NiqxqB

A preliminary calculation of cement carbon dioxide in China from 1949 to 2050 [link]

Spring rainfall on permafrost is a growing factor in Arctic warming, but it hasn’t been accounted for in most projections. New research suggests the increase in methane emissions could be twice the expected rate. [link]

On the westward shift of tropical Pacific climate variability since 2000 [link]

Unabated Bottom Water Warming and Freshening in the South Pacific Ocean buff.ly/2TZnHtd

“Given that CCS is expected to account for the mitigation of ~4-20% of total CO₂ emissions, in 2050 the CCS industry will need to be larger by a factor of 2–4 in volume terms than the current global oil industry” [link]

Durham student was sacked as editor of a student journal, removed from his post as president of a student society, and has now been banned from a debate on another campus. And for what? Retweeting an article. [link]

Scenarios and Decision Support for Security and Conflict Risks in the Context of Climate Change [link]

Former UW-Madison professor Don Moynihan on campus speech and attack on UW political science professor Ken Mayer: “If we look the other way when academic freedom is attacked, expect it to be attacked more often.” [link]

David Spiegelhalter: “You should not want to be trusted. Instead, what you should want to do is to demonstrate trustworthiness, because that is within your control.” spr.ly/6012Eutum

Intensification of El Niño rainfall variability over the tropical Pacific in the slow oceanic response to global warming buff.ly/2WOcMEg

If oceans are getting warmer as a result of climate change, so the argument goes, surely hurricane activity must increase as a result, particularly hurricane intensity. However, most of the assessment reports cited in Chapter 1 have low confidence in attributing any recent changes in hurricane activity to manmade global warming.

What is the scientific basis for assessing whether or not manmade global warming is causing a change in hurricane activity?

Detection and attribution of manmade signals in the climate system is a new and rapidly developing field. Attributing an observed change or an event partly to a causal factor (such as manmade climate forcing) normally requires that the change first be detected. A detected change is one that is determined, based on observations, to be very unlikely to have occurred (less than about a 10% chance) due to natural internal variability alone. An attributable change implies that the relative contribution of causal factors has been evaluated, along with an assignment of statistical confidence.

There are some situations where attribution statements without detection can be appropriate, although lower confidence is assigned when attribution is not supported by a detected change. For example, a trend analysis for an extremely rare event may not be meaningful. Including attribution without detection in the analysis of climate change impacts reduces the chance of a false negative – incorrectly concluding that climate change had no influence on a given extreme event. However, attribution without detection comes at the risk of increasing the rate of false positives – incorrectly concluding that manmade climate change had an influence when in fact it did not.
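To make the detection criterion above concrete, the sketch below compares an observed trend with the spread of trends produced by internal variability alone; the red-noise 'control' series, its parameters and the toy observed trend are placeholders standing in for an unforced model simulation or a proxy-based baseline.

```python
# Minimal sketch of a detection test: is an observed trend "very unlikely"
# (roughly a <10% chance) to arise from internal variability alone?
import numpy as np

rng = np.random.default_rng(0)

def linear_trend(x):
    """Least-squares linear trend per time step."""
    return np.polyfit(np.arange(len(x)), x, 1)[0]

def ar1_segment(n, phi=0.6, sigma=1.0):
    """One segment of AR(1) 'internal variability' (illustrative parameters)."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

n_years = 40
observed = ar1_segment(n_years) + 0.05 * np.arange(n_years)   # toy observed series
null_trends = np.array([linear_trend(ar1_segment(n_years)) for _ in range(2000)])

# Fraction of unforced segments with a trend at least as large as the observed one
p = np.mean(np.abs(null_trends) >= abs(linear_trend(observed)))
print(f"p ~ {p:.3f};", "detected under the ~10% criterion" if p < 0.10 else "not detected")
```

Attribution then goes further, asking how much of a detected change is explained by each causal factor, which is where the four elements listed below come in.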

The conceptual framework for most detection and attribution analyses consists of four elements:

time history of relevant observations

the estimated time history of relevant climate forcings (such as greenhouse gas concentrations or volcanic activity)

an estimate of the impact of the climate forcings on the climate variables of interest

an estimate of the internal (unforced) variability of the climate variables of interest—e.g. natural unforced variations of the ocean, atmosphere, land, cryosphere, in the absence of external forcings.

Paleoclimate proxies from the geological record are useful for detection studies in providing a baseline against which to compare recent variability of the past century or so. Time of emergence is the time scale on which climate change signals will become detectable in various regions – an important issue, since natural variability can obscure forced climate signals for decades, particularly for smaller space scales.

4.1 Detection

The main challenges to detecting a signal of changed hurricane activity include:

very long timescales in the oceans, resulting in substantial lag time between external forcing and the realization of climate change and its impacts

high-amplitude natural internal variability in the ocean basins on time scales from the interannual to the millennial

Based on the observations summarized in Chapter 3, the following summary is provided regarding the detection of changes in global or regional hurricane activity:

global hurricane activity: small and statistically insignificant trends of decreasing hurricane frequency and increasing numbers of major hurricanes;

global % of Category 4/5 hurricanes: increasing trend since 1970, although the data quality for the period before 1988 is disputed.

rate of intensification: hints of a global increase, although data sets disagree.

track migration: poleward migration of the average latitude where hurricanes have achieved their lifetime-maximum intensity for 1982-2012.

Atlantic hurricanes: increasing trends since 1970, but comparable activity was observed in the 1950’s-1960’s.

hurricanes in other regions: observational record is too short, but no evidence of trends that exceed natural variability

The observational database (since 1970 or even 1850) is too short to assess the full impact of natural internal variability associated with large-scale ocean circulations. Paleotempestology analyses indicate that recent hurricane activity is not unusual.

The focus in this section is on identifying sources of variability and change during the period since 1850, when historical data is available.

Many of the arguments surrounding an increase in hurricane activity are associated with increases in global sea surface temperature. Figure 4.1 shows the variability of globally-averaged sea surface temperature (SST) since 1850, along with external forcing from CO2, volcanoes, and the sun.

It is seen from Figure 4.1a that sea surface temperature (SST) reached a global low point in 1910, and then increased rapidly until about 1945. The elevated Atlantic hurricane activity in the 1930's-1950's (Section 3.3) occurred when the global SSTs were ~0.8°C cooler than present global average SSTs. This warming period was followed by a period of slight cooling until 1976, after which temperatures began increasing.

The 0.6°C of global ocean warming during the 35-year period from 1910 to 1945 was comparable to the 0.7°C of warming observed during the 42-year period from 1976 to 2018.

Regarding the recent warming, the IPCC AR5 made the following attribution statement:

“It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”

In other words, the IPCC AR5 best estimate is that all of the warming since 1951 has been caused by humans.

So, what caused the early 20th century global warming? This issue has received remarkably little attention from climate scientists. Lack of an explanation for the early 20th century global warming diminishes the credibility of the attribution statement for warming since 1951.

The first substantive attribution analysis of the early 20th century warming was made in a recent paper by Hegerl et al. (2018), which came to the following conclusion:

“Attribution studies estimate that about a half (40–54%) of the global warming from 1901 to 1950 was forced by a combination of increasing greenhouse gases and natural forcing, offset to some extent by aerosols. Natural variability also made a large contribution. The exact contribution of each factor to large-scale warming remains uncertain.”

Hegerl et al. (2018) provide a summary of forcing from CO2, volcanoes and solar (Figure 4.1c). In 1910, the atmospheric CO2 concentration has been estimated to be 300.1 ppm; in 1950 it was 311.3 ppm; and in 2018 it was 408 ppm. So, the warming during the period 1910-1945 was associated with a CO2 increase of roughly 10 ppm, whereas a comparable amount of warming during the period 1950 to 2018 was associated with a 97 ppm increase in atmospheric CO2 concentration: almost an order of magnitude greater CO2 increase for a comparable amount of global ocean warming.
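A quick check of this arithmetic is sketched below; the conversion to radiative forcing uses the widely cited simplified expression dF = 5.35 ln(C/C0) W/m2 of Myhre et al. (1998), which is an addition here rather than something used in the report text.

```python
# Check of the CO2 numbers quoted above, plus an optional conversion to
# radiative forcing via the simplified expression dF = 5.35*ln(C/C0) W/m^2
# (Myhre et al. 1998). Illustrative only.
import math

c_1910, c_1950, c_2018 = 300.1, 311.3, 408.0     # ppm, as quoted above

print(c_1950 - c_1910, c_2018 - c_1950)          # ~11 ppm vs ~97 ppm

f_early = 5.35 * math.log(c_1950 / c_1910)       # ~0.20 W/m^2 (1910-1950)
f_late  = 5.35 * math.log(c_2018 / c_1950)       # ~1.45 W/m^2 (1950-2018)
print(round(f_early, 2), round(f_late, 2))
```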

Clearly, there were other factors in play besides CO2 emissions in the early 20th century global warming (Figure 4.1b). In terms of external radiative forcing, a period of relatively low volcanic activity during 1920-1960 would have had a relative warming effect, although the period from 1945 to 1960 was one of slight overall cooling. Solar forcing in the early 20th century is uncertain, with estimates of varying magnitude, although the magnitudes are insufficient for solar variability to have been a major direct contributor to the early 20th century global warming.

Hegerl et al. (2018) analyzed the internal variability associated with ocean circulations during the period since 1900. They found that the unusual cold anomaly circa 1910 (Figure 4.1a) originated in the South Atlantic, and then spread globally in the subsequent decade, leading to cold anomalies in both Atlantic and Pacific.

This rarely-discussed cold period was followed by strong warming in the Northern Hemisphere, which was particularly pronounced in high latitudes. Hegerl et al. summarized some previous research that might account for mechanisms of the strong high latitude warming in the Northern Hemisphere, including multi-decadal ocean oscillations in large-scale ocean circulation patterns. However, gaps in data coverage particularly in the Indo-Pacific and Southern Oceans imply higher uncertainty during the early 20th century than for recent periods.

Hegerl et al. focus their arguments regarding internal variability associated with large-scale ocean circulations on the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation. The Atlantic Multidecadal Oscillation (AMO) is a coherent mode of natural variability of sea surface temperatures (SST) occurring in the North Atlantic Ocean, with an estimated period of 60-80 years. The Pacific Decadal Oscillation (PDO) is a recurring pattern of ocean-atmosphere climate variability of surface temperature centered over the Northern Hemisphere mid-latitude Pacific basin. Warm phases of both the AMO and PDO contributed to warming, particularly during the 1930's and 1940's.

As summarized in NCA4 (2017), observed multidecadal variability in the Atlantic Ocean surface temperatures has also been ascribed to Saharan dust outbreaks and manmade pollution aerosols.

4.3 Natural internal modes of variability

Disentangling the complex interplay between the many modes of internal variability associated with the large-scale ocean circulations is not at all straightforward. The multi-decadal modes (with timescales of 30 to 80 years) are of the greatest relevance in attribution analyses of 20th and early 21st century climate change. These multi-decadal modes are associated with regional-to-basin-scale oceanic circulation systems that define the dynamical memory of the climate system in the presence of fast, large-scale atmospheric processes. The faster atmospheric processes not only supply energy for the multi-decadal variability, but also provide the means for communication between the different ocean basins and synchronization of the multi-decadal climate modes (e.g. Wyatt and Curry, 2013; Kravtsov et al. 2018).

Hurricane activity is also influenced by multidecadal variability in the oceans in ways that do not directly rely on local changes in sea surface temperatures – such as changes in atmospheric circulation patterns and wind shear.

4.3.1 Atlantic modes and hurricane activity

Three modes of interannual to multi-decadal variability have been identified in the Atlantic:

Atlantic Multidecadal Oscillation (AMO)

North Atlantic Oscillation (NAO)

Atlantic Meridional Mode (AMM)

Grossman and Klotzbach (2009) provide the following summary of the relationships among these three Atlantic modes. The cross-equatorial pattern associated with the AMM and the SST, sea level pressure (SLP) and wind patterns associated with the AMO can be viewed as one overall phenomenon that stretches from the high latitudes to the tropics. The AMO and AMM are also closely related to the NAO on multidecadal time scales. Long-term positive (negative) phases of the NAO coincide with the negative (positive) phase of the AMO and AMM, generally with a lag of several years. The NAO depends on the North Atlantic meridional temperature and pressure gradient, which in turn lessens (increases) as the North Atlantic warms (cools) with the positive (negative) AMO.

The most thoroughly studied of these modes with respect to Atlantic hurricanes is the AMO. The Atlantic Multidecadal Oscillation (AMO) is associated with basin-wide SST and sea level pressure (SLP) fluctuations. The positive (warm) AMO phase is associated with a pattern of horseshoe-shaped SST anomalies in the North Atlantic (Figure 4.3), with pronounced warming in the tropical and parts of the eastern subtropical North Atlantic, an anomalously cool area off the U.S. East Coast, and warm anomalies surrounding the southern tip of Greenland.

Figure 4.3 Horseshoe pattern of the AMO, where the ‘Arc’ Index corresponds to the average sea surface temperatures inside the black contours. Source: Johnstone (2017).

The traditional AMO index (Figure 4.4) is calculated from the patterns of SST variability in the North Atlantic once a linear trend has been removed. However, since the trend is significantly non-linear in time (Figure 4.1a), the detrending aliases the AMO index. The nonlinearity is particularly pronounced during the period 1945-1975, when global sea surface temperatures showed a slight cooling trend.

To avoid the problems associated with detrending, Johnstone (2017) developed an Arc Index version of the AMO Index, which is the average SST in the Arc region (Figure 4.3). The Arc Index (Figure 4.5) shows abrupt shifts to the warm phase in 1926 and 1995, consistent with the conventional AMO analysis in Figure 4.4. Johnstone's analysis indicates a shift to the cold phase in 1971, which differs from the analysis shown in Figure 4.4, which indicates a shift to the cold phase in 1964. The revised AMO index of Klotzbach and Gray (2008) indicates a shift to the cold phase in 1970, consistent with the analysis of Johnstone.

Figure 4.5 Arc Index version of the AMO. Source: updated from Johnstone (2017)
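The sketch below illustrates the difference between the two constructions discussed above, using a synthetic North Atlantic SST series (a quadratic warming trend plus a multidecadal oscillation); the series, coefficients and the optional smoother are placeholders, not the actual index calculations of Johnstone (2017).

```python
# Two AMO-style indices computed from a synthetic basin-average SST series.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2020)
sst = (0.000055 * (years - 1900) ** 2                       # nonlinear warming trend
       + 0.15 * np.sin(2 * np.pi * (years - 1925) / 65.0)   # ~65-year oscillation
       + rng.normal(0.0, 0.05, years.size))                 # weather noise

# Traditional AMO index: remove a *linear* trend from the basin-average SST.
slope, intercept = np.polyfit(years, sst, 1)
amo_traditional = sst - (slope * years + intercept)

# Arc-style index: simply the regional SST average (optionally smoothed),
# with no detrending, so a nonlinear trend is not aliased into the index.
arc_index = np.convolve(sst, np.ones(11) / 11.0, mode="same")

print(amo_traditional[-5:].round(3))
print(arc_index[-5:].round(3))
```

Because the synthetic warming is nonlinear, the linearly detrended index retains part of the trend near the ends of the record, which is the aliasing problem noted above.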

The main hurricane-relevant variables that change with the phase changes of the AMO, AMM and NAO are spatial patterns of SST (or oceanic heat content) and wind patterns. Hurricane genesis (formation) locations, tracks and intensification are temporally and spatially modulated by these large‐scale climate modes.

Atlantic hurricanes show strong variations on decadal and multi-decadal time scales in the observed record (Figures 3.6 – 3.8). The greatest impact of the AMO is on the number of major hurricanes (Category 3+) and Accumulated Cyclone Energy, shown in Figure 4.6. The shift to the relatively inactive phase occurred around 1970/1971, in accord with the AMO analyses of Johnstone (2017) and Klotzbach and Gray (2008), with the late 1960's still characterized by a larger number of major hurricanes and high ACE values. The relationship of the AMO to major hurricane activity in the Atlantic was identified by Goldenberg et al. (2001) to be associated with above normal SSTs and decreased vertical shear associated with the warm AMO.

Bell and Chelliah (2006) related the interannual and multidecadal variability of hurricane activity in the Atlantic to two tropical multidecadal modes in the Atlantic. Comparing periods of high activity in the Atlantic, they showed that the most recent increase in hurricane activity is related to the exceptionally warm SSTs in the Atlantic, while the high activity period in the 1950s and 1960s was more closely associated with the West African monsoon.

Lin et al. (2019) argue that there are two separate AMO regimes, which appear to be consistent with the analysis of Bell and Chelliah: a 10-30 year regime (intrinsic to the Atlantic), and a 50-80 year regime (which is influenced by variability in the Pacific and the Greenland-Iceland-Norwegian Seas).

Vimont and Kossin (2007) related Atlantic hurricane activity to the Atlantic Meridional Mode (AMM). Hurricane genesis locations, SST and wind shear anomalies are influenced by the different phases of the AMM. During the positive AMM phase (above normal SSTs in the North Atlantic), there is an overall increase of hurricane activity in the Atlantic, with the mean genesis (formation) location shifting eastward and toward the equator. Also associated with a positive AMM is an increase in storm duration and the frequency of intense hurricanes (Kossin and Vimont 2007).

4.3.2 Pacific modes and hurricane activity

The El Niño – Southern Oscillation (ENSO) is a major mode of natural climate variability. ENSO is associated with sea surface temperature (SST) changes in the tropical Pacific, which are accompanied by shifts in the seasonal temperature, circulation, and precipitation patterns in many parts of the world. El Niño and La Niña (warm and cold) events usually recur every 3 to 7 years and tend to last for approximately a year.

ENSO has a strong impact on hurricanes, both in the Pacific and Atlantic Oceans.

Figure 4.7 The various Niño regions where sea surface temperatures are monitored to determine the current ENSO phase (warm or cold). Source: Wikipedia

Kim et al. (2009, 2011) provide an overview of the impact of ENSO on tropical cyclones. In La Niña years, there are usually twice as many major hurricanes as in El Niño years. ENSO is generally thought to influence Atlantic hurricane activity by altering the large-scale atmospheric circulation patterns for genesis (formation) and intensification. During an El Niño year, the vertical wind shear is larger than normal in most of the tropical Atlantic and especially in the Caribbean, which inhibits the formation of hurricanes.

The effect of ENSO on Pacific hurricanes is opposite to that in the Atlantic – El Niño years are associated with greater hurricane activity in the Pacific. As summarized by Kim et al. (2009a), ENSO has an impact on the mean hurricane genesis location in the Pacific, with a displacement to the southeast (northwest) in El Niño (La Niña) years. Because of this shift to the southeast, further away from the Asian continent, hurricanes in El Niño years tend to last longer and be more intense than in other years. ENSO also affects the shapes of the tracks: in El Niño years, hurricanes have a tendency to recurve northeastward and reach more northerly latitudes. Hence, hurricanes affect the southern South China Sea more frequently during La Niña years, but affect the Central Pacific more frequently in El Niño years.

Capotondi et al. (2015) address the issue of ENSO diversity, including the El Niño Modoki (Modoki is a Japanese word meaning 'similar but different'). In contrast to the traditional El Niño, which is associated with warming in the eastern tropical Pacific (Niño 1, 2, 3 regions in Figure 4.7), the El Niño Modoki is associated with warming in the central tropical Pacific (Niño 4 region). Kim et al. (2011) found that the El Niño Modoki shifts hurricane activity to the western Pacific, providing more favorable conditions for Asian landfalls, while hurricane activity in the eastern Pacific is substantially reduced. In the Atlantic, the impacts of an El Niño Modoki on hurricane activity more closely resemble a La Niña season, with elevated hurricane activity (Figure 4.8).

Figure 4.8 Composites of Atlantic track density anomaly (multiplied by 10) during the August to October period for (A) El Niño, (B) El Niño Modoki, and (C) La Niña. Source: Kim et al. (2009)

In climate change attribution studies, multi-decadal modes are of greater relevance than the interannual variability associated with ENSO and Modoki events. However, there is evidence of multidecadal variability in the relative frequency of El Niño, La Niña and Modoki events. In the Pacific, two decadal to multi-decadal modes have been identified:

The Pacific Decadal Oscillation (PDO) is a pattern of Pacific climate variability (poleward of 20°N), with a decadal time scale that can be interpreted as a decadal envelope of ENSO variability. During a warm (positive) phase, the west Pacific becomes cooler and part of the eastern ocean warms; during a cool (negative) phase, the opposite pattern occurs (Figure 4.9).

The North Pacific Gyre Oscillation (NPGO; DiLorenzo et al. 2008) reflects variations in the strength of the central and eastern branches of the subpolar and subtropical ocean circulation patterns, and is driven by the atmosphere through the North Pacific Oscillation (NPO). The NPO spatial pattern consists of a dipole structure in which sea level pressure (SLP) variations in the central Pacific near 40°N oppose those over Alaska. Variations of the NPGO index are shown in Figure 4.10.

Figure 4.9 PDO Index values. Source: http://research.jisao.washington.edu/pdo/

Figure 4.10 NPGO Index values. Source: https://asl.umbc.edu/hepplewhite/cindex/

Maue (2011) interpreted the global Accumulated Cyclone Energy (Figure 3.2) in terms of the PDO and NPGO. The Pacific climate shifts of 1976–77 and 1988–89 have been related to the PDO and North Pacific Gyre Oscillation (NPGO), respectively, which are seen in the global ACE time series. Decadal variations in the NPGO, which has been enhanced since 1989, have been linked to SST anomaly patterns that closely resemble El Niño Modoki events.

Camargo et al. (2010) summarized several studies that have examined the decadal and multidecadal variability of hurricane activity in the western North Pacific. The observational record in the western north Pacific is unreliable before the 1950s, and perhaps even before the 1970s. The occurrence of major hurricanes is modulated by ENSO and the Pacific Decadal Oscillation. The decadal variability of hurricane tracks has also been largely attributed to the Pacific Decadal Oscillation. The regions with the greatest decadal changes are the East China Sea and the Philippine Sea.

4.3.3 Does global warming change the internal modes of variability?

The internal modes of variability associated with the large-scale ocean circulations are often referred to as ‘oscillations.’ However, it is incorrect to view these oscillations as ‘cyclic,’ as their period and frequency tend to be somewhat irregular. In principle, because they are internal modes associated with the nonlinear dynamics of the coupled atmosphere-ocean system, a specific oscillation pattern can cease to exist or change its mode of variability.

Because the historical record is relatively short, particularly outside of the Atlantic Ocean, it is useful to consider paleoclimatic evidence of these oscillations.

Knudsen et al. (2011) showed that distinct, 55- to 70-year oscillations have characterized North Atlantic ocean-atmosphere variability over the past 8,000 years, consistent with the AMO. Cobb et al. (2013) analyzed fossil coral reconstructions of ENSO spanning the past 7,000 years. The corals document highly variable ENSO activity, with no evidence for a systematic trend in ENSO variance. Twentieth-century ENSO variance is significantly higher than the average fossil coral ENSO variance, but is not unprecedented. Liu et al. (2017) found that, over the period 1190-2007 AD, late 20th century equatorial temperatures in the Central Pacific (the region associated with El Niño Modoki) were accompanied by higher levels of interannual variability than observed earlier in this period.

The NCA4 (2017; Chapter 5) concluded that confidence is low regarding the impact of manmade global warming on changes to these internal modes associated with large-scale ocean circulation patterns.

4.4 Attribution – models

Extended integrations of global climate models in principle should allow for an assessment of the frequency, intensity, duration and tracks of hurricane-like features in the model simulations. Attribution of the impacts of manmade global warming on hurricane characteristics can then be assessed through comparing climate model simulations both with and without human impacts (e.g. CO2 and aerosol emissions).

A prerequisite for using global climate models for attribution analyses or 21st century projections of hurricane activity is that historical climate model simulations accurately reproduce hurricane characteristics and interannual to decadal variability. However, simulation of realistic hurricane characteristics is hampered by the coarse resolution of such global models and also by the model treatment of tropical convection and clouds (e.g. Camargo et al. 2008; Walsh et al. 2015). Further, climate models do not accurately simulate the timing and patterns of the multi-decadal oscillations (e.g. Kravtsov et al. 2018).

A number of new, high-resolution simulations of the generation of hurricanes by global climate models have been performed in recent years (see Walsh et al. 2016 for a summary). More realistic maximum hurricane intensities have been simulated by downscaling individual storm cases from a coarse-grid global model into a regional high-resolution hurricane prediction system.

As a recent example, Patricola and Wehner (2018) used a high-resolution model to simulate 15 hurricane events from the global historical record. Simulations for each storm were conducted under current climate conditions versus the surface climate associated with pre-industrial conditions. They found that, relative to pre-industrial conditions, climate change has enhanced the average and extreme rainfall of hurricanes Katrina, Irma and Maria by 4%–9% and increased the probability of extreme rainfall rates, suggesting that climate change to date has already begun to increase tropical cyclone rainfall.

The model used by Patricola and Wehner (2018) was driven by specified sea surface temperatures, and did not include coupling to the ocean. Lack of ocean coupling can lead to simulated tropical cyclones that are more intense and frequent compared to slab-ocean and fully coupled atmosphere-ocean simulations. Tropical cyclone winds typically induce a 'cold wake' of cooler upper-ocean temperatures. The cold wake can reduce the tropical cyclone intensity, depending on the tropical cyclone's intensity and translation speed and the ocean heat content and salinity structure. Further, these simulations of individual storms only include the thermodynamic (temperature-related) aspects of climate change, and do not include the impact of any atmospheric or ocean circulation changes that might be associated with global warming.

In one of the most sophisticated model-based attribution studies to date, Bhatia et al. (2019) investigated the issue of whether hurricane rates of intensification are increased by global warming. They compared the observed trends to natural variability in bias-corrected, high-resolution, global coupled model experiments that accurately simulate the climatological distribution of tropical cyclone intensification. Their results suggest a detectable increase of Atlantic intensification rates with a positive contribution from manmade forcing and reveal a need for more reliable data before detecting a robust trend at the global scale. The paper concludes that the study is limited by the ability of a climate model to accurately represent natural variability as well as the uncertainty around the trends in relatively short observational records. Further analysis with additional high-resolution climate models and a longer and more reliable observational record is required to confirm these conclusions.

In summary, global climate models are currently of limited use in hurricane attribution studies. High-resolution models used to simulate individual hurricanes are being used to perform controlled experiments that focus on specific events and the complexities of relevant physical processes. However, definitive conclusions regarding the impact of manmade warming on hurricanes cannot be determined from these simulations, given the current state of model development and technology.

4.5 Attribution – physical understanding

Our knowledge of the relationships between climate variability and hurricanes comes mainly from the analysis of historical data. Meaningful interpretation of these relationships requires understanding of the mechanisms that determine these relationships, but ultimately this understanding is limited by the same fundamental factors that limit our understanding of the mechanisms of the formation and intensification of individual hurricanes (see Emanuel 2018 for a review of current knowledge of hurricane processes).

4.5.1 Genesis

While there are some theories for hurricane genesis (formation), there is no quantitative theory that relates the probability of genesis to the large-scale environmental conditions. As summarized by Camargo et al. (2008), we have known for decades that sea surface temperature, vertical wind shear, and atmospheric humidity influence genesis, and this gives us an empirical basis for understanding how climate variations influence hurricane numbers.

As summarized by Walsh et al. (2015), the number of hurricanes appears to be related to changes in the mean vertical circulation of the atmosphere. Research indicates that thermodynamic variables (related to temperature and humidity) are generally more important than atmospheric circulations for hurricane formation in the North Atlantic. Humidity in the lower atmosphere was shown to be the most important controlling parameter for formation in the Atlantic, with sea surface temperatures, cyclonic circulation patterns, wind shear and rising motion also being important.

The problem of understanding the impact of global warming on hurricane genesis is complicated by potentially compensating influences of a warming climate on hurricanes (e.g. Patricola and Wehner, 2018). Increasing sea surface temperatures (SSTs) are expected to intensify tropical cyclones. However, projected increases in vertical wind shear could work to suppress tropical cyclones regionally.

As summarized by IPCC AR5 (2013; Chapter 16), hurricanes can respond to manmade forcing via different and possibly unexpected pathways. For example, increasing emissions of black carbon and other aerosols in South Asia have been linked to a reduction of SST gradients in the Northern Indian Ocean, which has in turn been linked to a weakening of the vertical wind shear in the region and an observed increase in the number of intense hurricanes in the Arabian Sea. In the North Atlantic, the reduction of pollution aerosols is linked to tropical SST increases, while in the northern Indian Ocean, increases in aerosol pollution have been linked to reduced vertical wind shear – both of these effects have been related to increased tropical cyclone activity.

4.5.2 Intensification

The causal chain for global warming to increase hurricane intensity has long been argued to occur via the increase in sea surface temperature (SST) (e.g. Curry et al. 2006). Hoyos et al. (2006) showed that the trend of increasing numbers of category 4 and 5 hurricanes for the period 1970-2004 is directly linked to the trend in sea-surface temperature; other aspects of the tropical environment, although they influence shorter-term variations in hurricane intensity, do not contribute substantially to the observed global trend.

A nominal SST threshold of 26.5°C [80°F] has been used as a criterion for the formation of hurricanes, and a threshold of 28.5°C [83.3°F] for intensification to a major hurricane (Category 3+). New insights into the relationship between warming and hurricane intensity are provided by Hoyos and Webster (2011). During the 20th century, tropical ocean SST has increased by about 0.8°C, accompanied by a steady 70% expansion of the ocean warm pool area that encompasses the regions exceeding 28°C [82.4°F]. However, the region of tropical cyclogenesis has not expanded, owing to the area of convective activity remaining nearly constant. Hoyos and Webster argue that the temperature threshold for tropical cyclogenesis increases as the average tropical ocean temperature increases. The increasing intensity of atmospheric convection with warmer temperatures seems to be the link between SST increase and hurricane intensity, rather than the absolute value of the SST itself. Further, the location of the intense convection is related to the difference between the local SST and the global tropical average SST, rather than to the absolute value of the SST itself (Vecchi et al. 2008). This variation in the threshold temperature for hurricane formation and intensification is consistent with the existence of very intense hurricanes even when the climate was significantly cooler.

The causal link between SST and hurricane intensity plays a prominent role in theories that estimate the upper bound on tropical cyclone intensity, which indicate that there is a strong relationship between ocean thermal energy and the maximum potential intensity that can be achieved. Potential intensity is defined as the maximum sustainable intensity of a hurricane based on the thermodynamic state of the atmosphere and sea surface. The theories of potential intensity continue to be challenged and refined. Knutson and Tuleya (2004) estimated the rough order of magnitude of the sensitivity of hurricane maximum intensity to be about 4% per degree Celsius of SST warming. Such sensitivity estimates have considerable uncertainty, as shown by subsequent assessments.

4.5.3 Rainfall

As the ocean surface warms, more water evaporates, and a warmer atmosphere has a greater capacity to hold water vapor. Simple thermodynamic calculations show that there is about 7% more water vapor in saturated air for every 1°C [1.8°F] of ocean warming (e.g. Trenberth, 2007).

This increase in atmospheric water vapor can cause an even larger increase in hurricane rainfall, since water vapor carries the latent heat that was required to evaporate it; when the water vapor condenses into rain, this latent heat is released.
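The ~7% per °C figure is the Clausius-Clapeyron scaling of saturation vapor pressure; a rough check using the common Magnus-type approximation is sketched below (the coefficients are the standard Magnus values, and the temperatures chosen are illustrative).

```python
# Rough check of the Clausius-Clapeyron scaling of saturation water vapor,
# using a Magnus-type approximation for saturation vapor pressure (hPa).
import math

def e_sat(t_celsius):
    """Approximate saturation vapor pressure (hPa) over water."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

for t in (0.0, 15.0, 27.0):
    fractional_increase = e_sat(t + 1.0) / e_sat(t) - 1.0
    print(f"{t:4.1f} C -> +{100 * fractional_increase:.1f}% per 1 C of warming")
# Prints roughly 6-7.5% per degree C, consistent with the ~7% figure quoted above.
```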

4.6 Conclusions

Models and theory suggest that hurricane intensity and rainfall should increase in a warming climate. There is no theory that predicts a change in the number of hurricanes or a change in hurricane tracks with warmer temperatures.

Convincing attribution of any changes requires that a change in hurricane characteristics be identified from observations, with the change exceeding natural variability.

The global percentage of Category 4/5 hurricanes has been observed to be increasing, although the amount of the increase depends on the period considered, with questionable observations in some regions prior to 1987. Because of the short length of the data record, attribution of any portion of this increase to manmade global warming requires careful examination of the data and modes of natural variability in each of the regions where hurricanes occur.

While theory and models indicate that hurricane rainfall should increase in a warming climate, satellite-based observational analyses of hurricane rainfall have not addressed this issue on a meaningful spatial or temporal scale.

There is some evidence for a slowing of tropical cyclone propagation speeds globally over the past half century, but these observed changes have not yet been confidently linked to manmade climate change.

While substantial increases in Atlantic hurricane activity have occurred since 1970, these increases are likely driven by changes in the Atlantic Multidecadal Oscillation (AMO) and Atlantic Meridional Mode (AMM). Climate model simulations suggest a recent increase in the rate of intensification of Atlantic hurricanes that exceeds what can be expected from natural internal variability.

If manmade global warming is causing an increase in some aspect of hurricane activity, this increase should be evident globally, and not just in a single ocean basin. One problem is that data is insufficient for detection on the global level. When considering a single ocean basin, correct interpretation and simulation of natural internal variability is of paramount importance; unfortunately our understanding and ability to correctly simulate natural internal variability with global climate models is limited.

In summary, the trend signal in hurricane activity has not yet had time to rise above the background variability of natural processes. Manmade climate change may have caused changes in hurricane activity that are not yet detectable due to the small magnitude of these changes compared to estimated natural variability, or due to observational limitations. But at this point, there is no convincing evidence that manmade global warming has caused a change in hurricane activity.

JC note: stay tuned, next two posts will be on landfalling hurricanes.

The topic of tropical cyclones and climate change is regularly assessed by the IPCC and US National Assessment Reports, as well as by other expert reports under the auspices of the WMO, CLIVAR and other organizations. With regard to the question: ‘Why another assessment report on Hurricanes and Climate Change?’ here is my response:

CFAN’s Special Report on Hurricanes and Climate Change is distinguished from recent assessments by the following:

a focus on hurricane aspects that contribute to landfall impacts

an emphasis on geologic evidence and interpretation of natural variability

an approach to ‘detection and attribution’ that does not rely on global climate models

a perspective on future projections that accounts for uncertainties in climate models and also includes natural climate variability

a longer format that allows for a more in-depth explanation suitable for a non-expert audience.

Basically, this Report is motivated by the needs of my clients in the energy and insurance sectors. After grappling with this issue for the past 15 years, both from the perspective of a research scientist and the owner of a weather/climate services company (Climate Forecast Applications Network), I have a perspective that is somewhat different from other academic or government scientists addressing the problem of hurricanes and climate change.

I plan to make the full report available in May; I look forward to your feedback and suggestions.

In this post, I’ll start with Chapter 3 on the observational datasets of hurricane variability and trends. An additional 4 posts on this topic will be provided in the coming weeks.

Historical variability and trends

Documenting the variability and trends of hurricane activity requires long and accurate data records. Historical information on hurricane activity is obtained from several sources, including ship reports, land-based observations, aircraft reconnaissance flights (since 1944) and satellites (since the late 1960s).

Over the years, the way that hurricanes have been observed has changed radically. As a result, many hurricanes are now recorded that would have been missed in the past. Furthermore, satellites are now able to continually assess wind speeds, thus recording peak wind speeds that may have been missed in pre-satellite days. Unfortunately, temporally inconsistent and potentially unreliable global historical data hinder detection of trends in tropical cyclone activity.

This Chapter assesses the variability of global and regional hurricanes over the entire available database. An assessment is provided as to whether we can detect any global or regional trends in hurricane activity from the available data.

3.1 Global

Reliable global hurricane data from satellite has been available since 1970, although inference of hurricane intensity is not judged to be reliable prior to 1980 (and in some regions, prior to 1988). Hurricane intensity is estimated from visible and infrared satellite observations through cloud patterns and infrared cloud top temperatures.

Figure 3.1 shows the time series since 1981 of total global hurricanes and major hurricanes. On average, each year there are about 47 hurricanes with about 20 reaching major hurricane status. Substantial year-to-year variability is seen, with a slight decreasing trend in the number of hurricanes and a slight increasing trend in the number of major hurricanes.

Figure 3.2 shows the time series since 1971 of the global Accumulated Cyclone Energy (ACE) (see Chapter 2 for a definition of ACE). As an integral of global hurricane frequency, duration and intensity, ACE shows greater decadal variation than does the number of hurricanes in Figure 3.1. No trend in ACE is seen, and the recent period of 2009 to 2015 was characterized by particularly low values of ACE.
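ACE is commonly computed as 10^-4 times the sum of the squares of a storm's 6-hourly maximum sustained winds (in knots) while it is at tropical-storm strength or greater; the sketch below uses a made-up track rather than real best-track data.

```python
# Minimal sketch of Accumulated Cyclone Energy (ACE) for a single storm:
# 1e-4 times the sum of squared 6-hourly maximum sustained winds (knots),
# counted while the storm is at tropical-storm strength or greater (>= 34 kt).
# The wind list is a made-up track, not a real storm.

def storm_ace(six_hourly_max_winds_kt):
    return 1e-4 * sum(v ** 2 for v in six_hourly_max_winds_kt if v >= 34)

example_track = [30, 40, 55, 70, 90, 105, 110, 95, 75, 50, 35]   # knots, 6-hourly
print(f"ACE for this hypothetical storm: {storm_ace(example_track):.1f}")
# Seasonal or global ACE is the sum of this quantity over all storms; the
# Power Dissipation Index (PDI, Section 3.1.1) is analogous but cubes the wind.
```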

Figure 3.1: Global Hurricane Frequency (all & major) since 1981 – 12-month running means. The top time series is the number of global tropical cyclones that reached at least hurricane-force (maximum lifetime wind speed exceeds 64 knots). The bottom time series is the number of global tropical cyclones that reached major hurricane strength. Source: Maue (2018).

Figure 3.2: Global and Northern Hemisphere Accumulated Cyclone Energy: 24-month running means. Note that the year indicated represents the value of ACE through the previous 24 months for the Northern Hemisphere (bottom line/gray boxes) and the entire globe (top line/blue boxes). The area in between represents the Southern Hemisphere total ACE. Source: Maue (2018)

3.1.1 Intensity

Figure 3.1 indicates that the number of major hurricanes is increasing globally, whereas the total number of hurricanes is decreasing. An increase in hurricane intensity has long been hypothesized to occur as global sea surface temperatures increase.

Emanuel (2005) identified a trend since 1950 of increasing maximum hurricane Power Dissipation Index (PDI), focusing on hurricanes in the North Atlantic and North Pacific. Shortly thereafter, Webster et al. (2005) showed that while the total number of hurricanes has not increased globally since 1970, the proportion (%) of Category 4 and 5 hurricanes had doubled, implying that the distribution of hurricane intensity has shifted towards more intense hurricanes.

Klotzbach and Landsea (2015) updated the Webster et al. (2005) analysis (Figure 3.3), with an additional 10 years of data and the availability of the International Best Track Archive (IBTrACS) dataset, which reflects a cleaning up and homogenization of the data relative to what was used by Webster et al. Interpretation of any increase in the % of Cat 4/5 hurricanes depends on interpretation of the data quality, which Klotzbach and Landsea argue is an issue prior to 1988. Klotzbach and Landsea make a convincing argument that data prior to 1980 should not be used in trend analyses. The debate on the increase in % of Cat 4/5 hurricanes hinges on whether the data from 1985-1989 is of useful accuracy, since the large jump occurs between 1985-1989 and 1990-1995. The primary problems with the data between 1985 and 1987 are missed tropical cyclones in the North Indian Ocean and a classification change in the Northeast Pacific (both regions contribute a relatively small number to the total global tropical cyclone count).

To address concerns about the validity of intensity data from the earlier periods, Kossin et al. (2013) developed a new homogeneous satellite-derived dataset of hurricane intensity for the period 1982-2009. The lifetime maximum intensity (LMI) achieved by each reported storm is calculated and the frequency distribution of LMI is tested for changes over this period. Kossin et al. found that globally, the stronger tropical cyclones have become more intense at a rate of about +1 m/s per decade during the period (Figure 3.4), but the statistical significance of this trend is marginal. Significant increases in the strongest hurricanes have occurred in the North Atlantic and decreases in the Western North Pacific.

Figure 3.3. (a) Pentad total of the number of hurricanes that achieved a maximum intensity of each category grouping as delineated by the Saffir–Simpson scale. (b) As in (a), but for the percentage of total hurricanes achieving each category grouping. Klotzbach and Landsea (2015)

Apart from the issue of maximum lifetime intensity achieved by a hurricane, the rate of intensification of hurricanes is receiving increasing scrutiny.

A recent study showed that the 95th percentile of 24-h intensity changes significantly increased in the central and eastern tropical Atlantic basin during the period 1986-2015 (Balaguru et al., 2018). The intensification rate increased significantly between 1977 and 2013 in the West Pacific (Mei et al., 2016). In both the Atlantic and West Pacific, the areas with the largest increase in sea surface temperatures (SSTs) were collocated with the largest positive changes in intensification rates.

Bhatia et al. (2019) conducted a comprehensive analysis of global rates of hurricane intensification (excluding the Indian Ocean) for the period 1982-2009 (Figure 3.5). Evaluation of the global data is hampered by intensity analysis uncertainties, although the intensity uncertainty is very low for the North Atlantic. In the two most reliable long-term observational records available for hurricane intensity changes, the proportion of the most extreme 24-hour intensification events significantly increased in the Atlantic between 1982 and 2009. Globally, a significant increase in hurricane intensification rates is seen in the IBTrACS data but not in the ADT-HURSAT (satellite-derived) data.

3.1.2 Translation speed

Recent research has highlighted variations in the speed and location of hurricane tracks. These variations are significant in changing landfall locations and hurricane-induced rainfall.

Kossin (2018) showed that tropical-cyclone translation speed (rate of forward motion) has decreased globally by 10 per cent over the period 1949-2016 (Figure 3.6). The global distribution of translation speed exhibits a clear shift towards slower speeds in the second half of the period.

This slowdown is found in both the Northern and Southern Hemispheres but is stronger and more significant in the Northern Hemisphere, where the annual number of tropical cyclones is generally greater. The time series for the Southern Hemisphere exhibits a change-point around 1980 (Figure 3.6), but the reason for this is not clear. An overall slowdown while over water was found in every basin except the northern Indian Ocean. The largest slowdown was found in the western North Pacific Ocean and the region around Australia.

Figure 3.6 Global (a) and hemispheric (b) time series of annual-mean tropical-cyclone translation speed and their linear trends. Grey shading indicates 95 percent confidence bounds. Source: Kossin (2018).
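Translation speed in analyses like Kossin (2018) is derived from successive best-track positions; a minimal sketch of that calculation for a few hypothetical 6-hourly fixes, using the haversine great-circle distance, follows (the positions are invented, and the actual analysis of course uses the full best-track archives).

```python
# Translation speed from successive 6-hourly track fixes: great-circle
# (haversine) distance divided by the 6-hour interval. Hypothetical positions.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

fixes = [(25.0, -80.0), (25.4, -81.2), (25.9, -82.3)]   # (lat, lon), 6 hours apart
for start, end in zip(fixes[:-1], fixes[1:]):
    speed_kmh = haversine_km(*start, *end) / 6.0
    print(f"segment translation speed ~ {speed_kmh:.1f} km/h")
# Averaging such segment speeds over all storms in a year gives the
# annual-mean series whose ~10% decline is shown in Figure 3.6.
```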

3.1.3 Poleward migration

In addition to the global slowing of hurricane translation speed, there is evidence that hurricanes have migrated poleward in several regions. The migration in the western North Pacific has been found to be large, and it has had a substantial effect on regional hurricane-related hazard exposure.

Kossin et al. (2014) identified a pronounced poleward migration in the average latitude where tropical cyclones have achieved their lifetime-maximum intensity (LMI) over the period 1982-2012. The poleward trends are evident in both the Northern and Southern Hemispheres, with an average migration of tropical cyclone activity away from the tropics at a rate of about 1° latitude per decade. In the Northern Hemisphere, the western North Pacific shows the largest migration, with the North Atlantic showing essentially no trend.

Moon et al. (2015) suggested that the poleward migration is greatly influenced by regional changes in hurricane frequency associated with multi-decadal variability, particularly for the Northern Hemisphere (NH). Moon et al. found that 92% of the poleward trend is a result of the frequency changes associated with multi-decadal variability.

Daloz et al. (2018) examined whether the poleward migration of hurricane lifetime-maximum intensity is associated with a poleward migration of hurricane genesis (formation). They found a shift toward a greater average potential number of genesis events at higher latitudes over most regions of the Pacific Ocean, which is consistent with a migration of tropical cyclone genesis towards higher latitudes. They also found significant poleward shifts in mean genesis position over the Pacific Ocean basins.

3.1.4 Rainfall

Walsh et al. (2015) concluded that for the globe, a detectable change in tropical cyclone-related rainfall has not been established by existing studies. However, satellite data is being increasingly used to assess tropical cyclone rainfall.

Kim and Ho (2018) examined the variation of hurricane rainfall area over the subtropical oceans using satellite radar precipitation data collected from 1998 to 2014. In the subtropics, higher translation speed and larger vertical wind shear significantly contribute to an increase in hurricane rainfall area by making horizontal rainfall distribution more asymmetric, while sea surface temperature rarely affects the fluctuation of hurricane rainfall area. They suggested that in the subtropics, unlike the tropics, atmospheric circulation conditions are likely more crucial to varying hurricane rainfall area than factors such as sea surface temperature.

3.3 North Atlantic

The North Atlantic has the best data quality of any of the regions. There is credible data on frequency and intensity since 1850, with the intensity data being most reliable since 1944, when aircraft reconnaissance flights began. For the period prior to the onset of satellite coverage in 1966, NOAA has adjusted total basin-wide counts upward based on historical records of ship track density. During years when fewer ships were making observations in a given region, hurricanes in that region were more likely to have been missed, or their intensity underestimated to be below hurricane strength, leading to a larger corresponding adjustment to the count for those years. These adjustment methods are cited in Knutson et al. (2010).
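To make the logic of the adjustment concrete, here is a deliberately simplified sketch – not NOAA's actual procedure, which derives missed-storm estimates from detailed storm-track and ship-track overlap statistics – where the detection probability is a purely hypothetical input:

```python
# Toy illustration only: the published adjustments are built from detailed
# ship-track/storm-track overlap statistics, not a single scalar probability.
def adjusted_count(raw_count, p_detect):
    """Inflate a raw basin-wide hurricane count by an assumed probability
    (0 < p_detect <= 1) that any given storm would have been observed."""
    return raw_count / p_detect

# e.g. 6 hurricanes observed in a sparsely sampled year with p_detect = 0.75
print(adjusted_count(6, 0.75))   # -> 8.0
```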

The impact of undercounting is illustrated in Figure 3.7, which compares the raw hurricane counts (green) with adjusted counts (orange) for the period 1878-2015. The sign of the long-term trend depends critically on the adjustment.

Figure 3.8 shows the yearly values for the adjusted time series since 1850, for total North Atlantic hurricane counts and major hurricane counts. While the number of major hurricanes prior to 1944 is probably undercounted, it is noteworthy that the number of major hurricanes during the 1950’s and 1960’s was at least as large as the last two decades.

Accumulated Cyclone Energy (ACE) (Figure 3.9) and Power Dissipation Index (PDI) (Figure 3.10) provide integral measures of overall hurricane activity, with PDI providing greater weight to intensity. Values of ACE during the 1950’s and 1960’s are comparable to recent decades. Regarding PDI, the years 1926, 1934 and 1962 have PDI values as large as those seen in 2004, 2005 and 2017, although prior to 1944 the intensity data is less reliable.
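For readers unfamiliar with these indices, the sketch below shows the standard definitions, assuming a list of 6-hourly best-track maximum sustained winds in knots; normalization conventions (units and time factors) vary somewhat between authors, so treat the scaling as illustrative.

```python
import numpy as np

def ace(vmax_knots):
    """Accumulated Cyclone Energy: sum of squared 6-hourly maximum sustained
    winds (knots) while at tropical-storm strength or above, scaled by 1e-4."""
    v = np.asarray(vmax_knots, dtype=float)
    return 1e-4 * np.sum(v[v >= 34.0] ** 2)

def pdi(vmax_knots):
    """Power Dissipation Index: sum of cubed 6-hourly maximum sustained winds.
    Cubing the wind gives intensity more weight than ACE's square does."""
    v = np.asarray(vmax_knots, dtype=float)
    return np.sum(v[v >= 34.0] ** 3)

# Example: a short-lived storm peaking at 100 kt
winds = [35, 50, 75, 100, 100, 80, 55, 40]
print(ace(winds), pdi(winds))
```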

Figure 3.10 Power Dissipation Index (PDI) for the North Atlantic from 1920-2018. Source: Ryan Maue.

All measures of Atlantic hurricane activity show a significant increase since 1970. However, high values of hurricane activity (comparable to the past two decades) were also observed during the 1950’s and 1960’s, and by some measures also in the late 1920’s and 1930’s.

3.4 Paleotempestology

Hurricane data records for the past 40 years, or even the past 150 years, can present a misleading picture of the range of variability of hurricane characteristics. Paleotempestology is the study of storm occurrence prior to the historical record. This provides a way of establishing a longer climate baseline than the relatively short observational record.

Many types of geological proxies have been tested for reconstructing past hurricane activity, including hurricane-induced deposits of sediments in coastal lakes and marshes, stalagmites in caves, tree rings and corals. Since these studies typically focus on a specific geographic location, a caveat is that they cannot distinguish between regional trends and systematic changes in hurricane tracks.

In the Australian region, Haig et al. (2014) used oxygen isotopic analysis of stalagmite records to show that the present low levels of storm activity on the midwest and northeast coasts of Australia are unprecedented over the past 1,500 years. Their results reveal a multicentennial cycle of tropical cyclone activity, the most recent of which commenced around AD 1700. The present cycle includes a sharp decrease in activity after 1960 in Western Australia.

Nyberg et al. (2007) constructed a record of the frequency of major Atlantic hurricanes over the past 270 years using proxy records in the Caribbean from corals and a marine sediment core. The record indicates that the average frequency of major hurricanes decreased gradually from the 1760s until the early 1990s, reaching anomalously low values during the 1970s and 1980s.

Wallace et al. (2015) review paleo-trends in hurricane activity from sedimentary archives in the Gulf of Mexico, Caribbean and western North Atlantic margins. A site from Mattapoisett Marsh, Massachusetts shows that the total hurricane deposits have remained relatively constant between 2200 and 1000 years B.P. (before present). However, the last 800 years B.P. appear to have been a time of relatively frequent total storm deposition. A site from Laguna Playa Grande, Puerto Rico has reconstructed intense hurricanes occurring over the past 5000 years B.P., with prominent increases in activity observed during 4400 – 3600, 2500 – 1000, and 250 – 0 years B.P. In the Gulf of Mexico, while the overall frequency of events remained relatively constant over the 4500 year record, the frequency of high threshold events has varied considerably – periods of frequent intense hurricane strikes occurred during 3950 – 3650, 3600 – 3500, 3350 – 3250, 2800 – 2300, 1250 – 1150, 925 – 875, and 750 – 650 years B.P.

Brandon et al. (2013) found a period of increased intense hurricane frequency between ~1700 and ~600 years B.P., and decreased intense storm frequency from ~2500 to ~1700 years B.P. and from ~600 years B.P. to the present.

There has not been a timeline or synthesis of these results for the past five thousand years, either regionally or for the entire coastal region. However, it is clear from these analyses that significant variability of landfall probabilities occurs on century to millennial time scales. There appears to have been a broad ‘hyperactive period’ from 3400 to 1000 years B.P. High activity persisted in the Gulf of Mexico until 1400 AD, with a shift to more frequent severe hurricane strikes from the Bahamas to New England occurring between 1400 and 1675 AD. Since 1760, there was a gradual decline in activity until the 1990’s.

3.5 Conclusions

Analyses of both global and regional variability and trends of hurricane activity provide the basis for detecting changes and understanding their causes.

The relatively short historical record of hurricane activity, and the even shorter record from the satellite era, is not sufficient to assess whether recent hurricane activity is unusual during the current interglacial period. Results from paleotempestology analyses in the North Atlantic at a limited number of locations indicate that the current heightened activity is not unusual, with a ‘hyperactive period’ apparently occurring from 3400 to 1000 years before present.

Global hurricane activity since 1970 shows no significant trends in overall frequency, although there is some evidence of increasing numbers of major hurricanes and of an increase in the percentage of Category 4 and 5 hurricanes.

In the North Atlantic, all measures of hurricane activity have increased since 1970, although comparably high levels of activity also occurred during the 1950’s and 1960’s.

Some recent sea level rise publications, with implications for how we think about the worst case scenario for the 21st century.

Less than 3 months ago, I published my Special Report on Sea Level and Climate Change. I remarked on what a fast moving field this was, particularly with regards to the ice sheet dynamics. This past week has seen the publication of 3 new papers that substantially change our thinking on the worst case scenario for the 21st century.

In 2016, a paper by DeConto and Pollard grabbed headlines with the finding that Antarctic ice was at risk from “marine ice-cliff instability”, which would see towering cliffs of glacier ice collapse into the ocean under their own weight. The 2016 study generated a lot of media coverage, even making the front page of the New York Times. It became the most talked-about climate paper of that year.

The past few weeks have seen publication of a number of relevant papers that point to a much lower sea level rise than predicted by DeConto and Pollard (2016).

Abstract. “Predictions for sea-level rise this century due to melt from Antarctica range from zero to more than one metre. The highest predictions are driven by the controversial marine ice-cliff instability (MICI) hypothesis, which assumes that coastal ice cliffs can rapidly collapse after ice shelves disintegrate, as a result of surface and sub-shelf melting caused by global warming. But MICI has not been observed in the modern era and it remains unclear whether it is required to reproduce sea-level variations in the geological past. Here we quantify ice-sheet modelling uncertainties for the original MICI study and show that the probability distributions are skewed towards lower values (under very high greenhouse gas concentrations, the most likely value is 45 centimetres). However, MICI is not required to reproduce sea-level changes due to Antarctic ice loss in the mid-Pliocene epoch, the last interglacial period or 1992–2017; without it we find that the projections agree with previous studies (all 95th percentiles are less than 43 centimetres). We conclude that previous interpretations of these MICI projections over-estimate sea-level rise this century; because the MICI hypothesis is not well constrained, confidence in projections with MICI would require a greater range of observationally constrained models of ice-shelf vulnerability and ice-cliff collapse.”

Abstract. “Government policies currently commit us to surface warming of three to four degrees Celsius above pre-industrial levels by 2100, which will lead to enhanced ice-sheet melt. Ice-sheet discharge was not explicitly included in CMIP5, so effects on climate from this melt are not currently captured in the simulations most commonly used to inform governmental policy. Here we show, using simulations of the Greenland and Antarctic ice sheets constrained by satellite-based measurements of recent changes in ice mass, that increasing meltwater from Greenland will lead to substantial slowing of the Atlantic overturning circulation, and that meltwater from Antarctica will trap warm water below the sea surface, creating a positive feedback that increases Antarctic ice loss. In our simulations, future ice-sheet melt enhances global temperature variability and contributes up to 25 centimetres to sea level by 2100. However, uncertainties in the way in which future changes in ice dynamics are modelled remain, underlining the need for continued observations and comprehensive multi-model assessments.”

From Carbon Brief, quoting Golledge:

“AR5 gave mean contributions for 2081-2100 of 4 cm from Antarctica and 12 cm from Greenland. In our new study, we suggest 14 cm from Antarctica and 11 cm from Greenland at 2100, so an increase to the Antarctic term and just above the upper bound of the AR5 uncertainty range (-6 cm to 12 cm).”

“We found the Antarctic contribution to sea level this century is smaller than implied by DeConto and Pollard’s study. They had shown mean values ranging from 64 to 114cm, but our most likely value is only 45 cm. This is still definitely bad news, and we also couldn’t rule out values much higher than this. But we found the balance of probability leaned towards much lower numbers than before.”

“We found that including MICI is not necessary to explain the past, and therefore it might not be present in the future – at least, we don’t have much evidence to support it yet. Leaving it out gives much smaller sea level contributions: a most likely value of only 15 cm, one metre less than the highest projections of DeConto and Pollard, and a 5% probability of more than 39 cm.”

From the Carbon Brief article:

“The chart below shows the likelihood of Antarctica exceeding one metre of sea level rise in the new simulations. It includes three emissions scenarios: low (RCP2.6, grey), intermediate (RCP4.5, blue) and high (RCP8.5, red), with and without MICI. The lines show how the probability changes through time.”

“So, for example, under high emissions with MICI, the likelihood of more than one metre of sea level rise from Antarctica emerges above zero around the 2080s, and rapidly increases until it becomes a certainty (within the model) in the 2130s. Without MICI, there is no risk of one metre of sea level rise within this century, but it does emerge relatively early in the 22nd century.”

Only for the borderline-impossible RCP8.5 scenario with MICI is there about a 50% chance of exceeding 1 m of sea level rise in the 21st century.

Abstract. “Recent studies suggest that Antarctica has the potential to contribute up to ~15 m of sea-level rise over the next few centuries. The evolution of the Antarctic Ice Sheet is driven by a combination of climate forcing and non-climatic feedbacks. In this review we focus on feedbacks between the Antarctic Ice Sheet and the solid Earth, and the role of these feedbacks in shaping the response of the ice sheet to past and future climate changes. The growth and decay of the Antarctic Ice Sheet reshapes the solid Earth via isostasy and erosion. In turn, the shape of the bed exerts a fundamental control on ice dynamics as well as the position of the grounding line – the location where ice starts to float. A complicating issue is the fact that Antarctica is situated on a region of the Earth that displays large spatial variations in rheological properties. These properties affect the timescale and strength of feedbacks between ice-sheet change and solid Earth deformation, and hence must be accounted for when considering the future evolution of the ice sheet.”

The punchline of this study (at least in terms of my own interest) is hidden in the main text, not really apparent from the abstract:

“It has been shown that GIA-related sea-level and solid Earth changes, including changes to the slope of the underlying bed, alter the stress field of the ice sheet in a way that acts to dampen and slow past and future ice-sheet growth and retreat in Antarctica. An important process that is also accounted for in these coupled models is the feedback between isostatically-driven ice surface elevation change and surface mass balance.”

“The Earth structure underneath the AIS is highly variable, and viscosities may be as low as 10^18 Pa s beneath parts of West Antarctica, leading to substantial (i.e., metres to tens of metres of) viscoelastic uplift occurring on centennial or even decadal timescales, with consequent implications for ice sheet evolution.”

“For a moderate climate warming, uplift of the LVZ Earth model preserves much of West Antarctica as compared to the simulation with the HV Earth model. While, for the simulation where strong RCP 8.5 climate warming is applied and new rapid-retreat-promoting ice physics are added (hydrofracturing and cliff failure, e.g. MICI), West Antarctica collapses early on regardless of the choice of Earth model.”

JC reflections

My original motivation for assessing the RCP8.5 scenario was that all of the really catastrophic sea level rise scenarios for the 21st century seem to depend on rather extreme (if not impossible) levels of CO2 and radiative forcing. If you take away RCP8.5 scenarios, SLR is not so alarming, at least on the time scale of the 21st century.

Of the three papers, the Whitehouse one may be the most important (the other two seem part of the WAIS MICI whiplash phenomenon – who knows what the next round of papers will show). However, Whitehouse et al. has gotten zero press attention, perhaps because you have to dig deep to figure out the broader climate implications of the paper. Hopefully my little blog post will draw some extra attention to this paper.

After the extreme alarm associated with the 2016 DeConto and Pollard paper, we are seeing a whiplash back to more reasonable (and less alarming) values of 21st century sea level rise. DeConto presented a talk at AGU on the latest simulations; apparently they are also predicting lower sea level rise from MICI, but the paper is under review and they are not publicly commenting on it yet.

The rapidity of the ice sheet instability research reminds me of the heyday of hurricane and global warming research in 2006, with weekly whiplash between alarming papers and nothing-here-to-see papers. I assume that this research topic will eventually converge to an agreed-upon list of things we don’t know, so we can better constrain the worst case sea level rise scenario for the 21st century.

In any event, to me this seems like the most interesting, fast moving and important topic in climate research right now.

I’m starting this post while sitting in the Phoenix airport waiting for my delayed flight home (by the time I get home, I will have been up for 24 hours today/tomorrow).

Sometimes I wonder why I bother.

Well, maybe tomorrow I will remember. The response to my testimony has been gratifying, from people across the political spectrum.

And the response from some segments has been very illuminating. Sometimes I think these people don’t really want to make progress in addressing climate change, but rather are using the issue as a club to enforce their tribalism and/or achieve social justice objectives. I think they actually LIKE the gridlock and climate wars.

Climate hypochondria

First, the climate hypochondriacs. Some people (including one of the Members) took issue with the following statement in my testimony:

“Based upon our current assessment of the science, the threat does not seem to be an existential one on the time scale of the 21st century, even in its most alarming incarnation.”

I referred to AR5 WGII:

“Every single catastrophic scenario considered by the IPCC AR5 (WGII, Table 12.4) has a rating of very unlikely or exceptionally unlikely and/or has low confidence. The only tipping point that the IPCC considers likely in the 21st century is disappearance of Arctic summer sea ice (which is fairly reversible, since sea ice freezes every winter).”

In hindsight, I should have hit this a bit harder. See my previous posts:

The IPCC AR5 refers to ‘reasons for concern.’ I won’t rehash my previous posts here, take a look.

Thinking that catastrophes like major hurricane landfalls, massive forest fires etc. will be ‘cured’ by eliminating fossil fuel emissions is laughable. Well, it’s not really funny. Thinking that eliminating fossil fuel emissions will ‘solve’ the problem of extreme weather events is very sad, sort of on the level of doing rain dances. Everything that goes wrong, they blame on fossil fuel-driven climate change.

Imagine how surprised they would be if we were ever to be successful at eliminating fossil fuel emissions, and then we still had bad weather!

Tribalism

The response on twitter to my testimony from the usual suspects (e.g. Michael Mann, Dana Nuccitelli, Bob Ward and their acolytes) has been entertaining. It’s actually a waste of space to reproduce any of it here; check it out on twitter if you have the stomach.

Of course they loved Kim Cobb’s testimony and thought mine was horrible, in spite of the fact that we said comparable things about climate policy.

Kim Cobb’s testimony

In 2003 or so, I hired Kim Cobb at Georgia Tech. During my later years at Georgia Tech, we disagreed on a LOT of things.

She genuinely wants climate solutions, and is prepared to work with energy companies and Republicans. VERY FEW climate scientists do this.

Here is an excerpt from the first paragraph of her written testimony:

“My message today is simple: there are many no-regrets, win-win actions to reduce the growing costs of climate change, but we’re going to have to come together to form new alliances, in our home communities, across our states, and yes, even in Washington. There are plenty of prizes for early, meaningful action. These include cleaner air and water, healthier, more resilient communities, a competitive edge in the low-carbon 21st century global economy, and the mantle of global leadership on the challenge of our time. I’m confident that through respectful discourse, we will recognize that our shared values unite us in seeking a better tomorrow for all Americans.”

She discusses adaptation, innovation, energy efficiency, land use practices, as well as CO2 emissions reductions.

Compare her recommendations with my closing recommendation (slightly modified on the fly, from what was given in my previous post):

“Bipartisan support seems feasible for pragmatic efforts to accelerate energy innovation, build resilience to extreme weather events, pursue no regrets pollution reduction measures, and land use practices. Each of these efforts has justifications independent of their benefits for climate mitigation and adaptation. These efforts provide the basis of a climate policy that addresses both near-term economic and social justice concerns, and also the longer-term goals of mitigation.”

Is it just me, or is there common ground here?

The no-regrets angle is key here. Richard Lindzen reminded me that ‘no-regrets’ used to be the appropriate framework for climate policy.

By insisting on fighting the climate science wars in an attempt to win a climate policy debate, climate scientists continue to set this up for failure. From the Hartwell Paper:

“it is not just that science does not dictate climate policy; it is that climate policy alone does not dictate environmental or development or energy policies.”

By ostracizing any climate scientists who engage with energy companies or Republicans, and pretending that energy policy depends on 100% scientific consensus in a ‘speaking consensus to power’ framework, these climate scientists are setting climate policy up for failure.

Speaking of energy companies, I’m relieved that this issue did not come up in the Hearing, after the Grijalva inquisition of a few years ago. By the way, Kim Cobb holds the Georgia Power Chair at Georgia Tech. The activists presumably think that is fine; it’s only bad when someone like me engages with energy companies. Can anyone think of why energy companies should not have access to the best climate information available and advice from climate scientists?

Winning

Climate scientist/activists need to recognize that any U.S. climate policy will require bipartisan support (that includes the dreaded Republicans). Also, energy companies are part of the solution. Attacking scientists such as myself and others who testify for the Republicans is pointless.

No-regrets, win-win solutions seem politically palatable to the Republicans; it remains to be seen whether Democrats will make incremental no-regrets policies such as those proposed here the enemy of their grandiose ideas such as the Green New Deal.

I thank the Chairman, the Ranking Member and the Committee for the opportunity to offer testimony today.

Climate scientists have made a forceful argument for a future threat from manmade climate change. Manmade climate change is a theory whose basic mechanism is well understood, but the potential magnitude is highly uncertain.

If climate change was a simple, tame problem, everyone would agree on the solution. Because of the complexities of the climate system and its societal impacts, solutions may have surprising unintended consequences that generate new vulnerabilities. In short, the cure could be worse than the disease. Given these complexities, there is plenty of scope for reasonable and intelligent people to disagree.

Based on current assessments of the science, manmade climate change is not an existential threat on the time scale of the 21st century, even in its most alarming incarnation. However, the perception of a near-term apocalypse and alignment with a range of other social objectives has narrowed the policy options that we’re willing to consider.

In evaluating the urgency of emissions reductions, we need to be realistic about what this will actually accomplish. Global CO2 concentrations will not be reduced if emissions in China and India continue to increase. If we believe the climate models, any changes in extreme weather events would not be evident until late in the 21st century. And the greatest impacts will be felt in the 22nd century and beyond.

People prefer ‘clean’ over ‘dirty’ energy – provided that the energy source is reliable, secure and economical. However, it’s misguided to assume that current wind and solar technologies are adequate for powering an advanced economy. The recent record-breaking cold outbreak in the Midwest is a stark reminder of the challenges of providing a reliable power supply in the face of extreme weather events.

With regards to energy policy and its role in reducing emissions – there are currently two options in play:

Option # 1: Do nothing, continue with the status quo

Option #2: Rapidly deploy wind and solar power plants, with the goal of eliminating fossil fuels in 1-2 decades

Apart from the gridlock engendered by considering only these two options, in my opinion, neither option gets us to where we want to go. A third option is to re-imagine the 21st century electric power systems, with new technologies that improve energy security, reliability and cost while at the same time minimizing environmental impacts. However, this strategy requires substantial research, development and experimentation. Acting urgently on emissions reduction by deploying 20th century technologies could turn out to be the enemy of a better long-term solution.

Given that reducing emissions is not expected to change the climate in a meaningful way until late in the 21st century, adaptation strategies are receiving increasing attention.

The extreme damages from recent hurricanes, plus the billion-dollar losses from floods, droughts and wildfires, emphasize the vulnerability of the U.S. to extreme events. It’s easy to forget that U.S. extreme weather events were actually worse in the 1930’s and 1950’s. Regions that find solutions to current impacts of extreme weather and climate events will be better prepared to cope with any additional stresses from climate change, and to address near-term social justice objectives.

The industry leaders that I engage with seem hungry for a bipartisan, pragmatic approach to climate policy. I see a window of opportunity to change the framework for how we approach this.

Bipartisan support seems feasible for pragmatic efforts to accelerate energy innovation, build resilience to extreme weather events, and pursue no regrets pollution reduction measures. Each of these three efforts has justifications independent of their benefits for climate mitigation and adaptation. These three efforts provide the basis of a climate policy that addresses both near-term economic and social justice concerns, and also the longer-term goals of mitigation.

This ends my testimony.

Ok, let me tweet this and post on Facebook, then I will head over to the Longworth House Office Building.

I will be testifying on Wed in the House Natural Resources Hearing on Climate change.

That is, I will be testifying provided that I can make it out of Reno today — we are on the tail end of the massive snow storm in the Sierras. You may recall that last May, I was unable to make it to DC for another Hearing, owing to heavy rain in DC. Fingers crossed.

There is also another Hearing on Wed, from the House Energy and Commerce Committee.

The Hearing starts at 10 a.m. EST. I am assured that it will be on CSPAN and a podcast will be available (possibly not in real time). I will update with any further information, e.g. where the written testimonies will be posted, whether this will be streamed live.

The only other scientist on the extensive witness list is my colleague from Georgia Tech, Kim Cobb. Also on the Witness list are governors from Massachusetts and North Carolina. This Hearing is not about the science per se, so it’s an opportunity for me to go in some different directions with my testimony.

Well I’m excited to participate in this Hearing. I will post my testimony and verbal remarks at 9 a.m. on Wed, and hopefully point to links for live streaming and the written testimony from the other witnesses.

Wish me luck navigating the airports.

To give you something to talk about on this thread, here is update on the Green New Deal [link]

A response to: “Is RCP8.5 an impossible scenario?”. This post demonstrates that RCP8.5 is so highly improbable that it should be dismissed from consideration, and thereby draws into question the validity of RCP8.5-based assertions such as those made in the Fourth National Climate Assessment from the U.S. Global Change Research Program.

Analyses of future climate change since the IPCC’s 5th Assessment Report (AR5) have been based on representative concentration pathways (RCPs) that detail how a range of future climate forcings might evolve.

Several years ago, a set of RCPs was requested by the climate modeling research community to span the range of net forcing from 2.6 W/m2 to 8.5 W/m2 (in year 2100 relative to 1750) so that the physics within the models could be fully exercised. Four of them were developed and designated as RCP2.6, RCP4.5, RCP6.0 and RCP8.5. They have been used in ongoing research and as the basis for impact analyses and future climate projections.

AR5 does not provide probability assignments for any of the RCPs, and yet many impact assessments utilize RCP8.5 to declare consequences of inaction. For example, while RCP4.5 and RCP8.5 are utilized for the Fourth National Climate Assessment (NCA4), the majority of its assertions are based in RCP8.5. The NCA4 states, “RCP8.5 implies a future with continued high emissions growth, whereas the other RCPs represent different pathways of mitigating emissions.” (Executive Summary, p.7). The reader is left with the impression that, although “high” is not defined, it is the present state of things and RCP8.5 delineates how it will grow higher. Further, the statement portrays the other RCPs as mitigation scenarios that are not being acted upon. Therefore, RCP8.5 has been portrayed as the “business as usual” scenario, and impact assessments continue to spread this falsehood.

This article employs some quantitative analysis and the original RCP documentation to demonstrate that the use of RCP8.5 is misleading and that a lower, narrower range of future CO2 atmospheric concentrations can be identified.

A Long-Range Forecast Based in the Evidence

The “C” in RCP is for concentration (and not emissions), to emphasize that greenhouse gas (GHG) concentrations are the primary product of the RCPs and inputs to climate models. The Earth’s radiative balance responds to the net result of GHG sources, sinks, and sub-processes as expressed in atmospheric concentration levels. CO2 is by far the dominant GHG contributor and therefore the subject of this analysis.

Long, rigorous ongoing CO2 measurement data sets are available from the South Pole since 1957 and from Mauna Loa since 1958. The values are reported with very small measurement uncertainties, and they reveal a consistent positive trend over the past 60 years with a slightly concave-upward shape. While their annual CO2 values were similar in the late-1950s (at 315 ppm), Mauna Loa data have been increasing slightly more than South Pole data and both now exceed 400 ppm (Fig. 2). Other measurement stations subsequently added to global CO2 monitoring comprise a marine surface data set with values between the South Pole and Mauna Loa series. South Pole and Mauna Loa data are employed for this analysis since they are the longest time series and they bracket other data.

Figure 2. History and forecasts of CO2 concentration. RCP8.5 is defined by 936 ppm in 2100.

Increasing CO2 is a long-term substitution process as it transitions to a larger fractional share of atmospheric concentration. If well underway, such a process can be studied utilizing a logistic function as described by J.C. Fisher & R.H. Pry in their landmark forecasting paper, A Simple Substitution Model of Technological Change. The methodology provides a top-down appraisal of an ongoing transition assuming continuity in evolution of its contributing elements into the future. The method has been successfully employed in thousands of long-range forecasting applications across many fields of study. Its form is shown in Figure 1.

Figure 1. Fisher-Pry formulation of a logistic substitution model.
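Since Figure 1 is an image, the Fisher-Pry form can be written out for reference, with f the fractional substitution, α the rate constant and t0 the process mid-point:

```latex
\frac{f}{1-f} = e^{2\alpha (t - t_0)}
\quad\Longleftrightarrow\quad
f(t) = \frac{1}{1 + e^{-2\alpha (t - t_0)}},
\qquad
\frac{df}{dt} = 2\alpha\, f\,(1 - f)
```

The rate of change is therefore greatest at the 50% substitution point, which is why the mid-point of the S-curve coincides with the peak growth rate described below.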

If sufficient historical data is available, the differential equation in Figure 1 can be readily solved through minimization of a rigorously constructed Chi-Square function. A solution reveals the ceiling value, process mid-point and rate constant; and it thereby has predictive power. The early portion of the S-curve is approximately exponential, followed by a transition towards the inflection point at which growth rate peaks, thereafter declining as the cumulative curve approaches its long-term ceiling.

For the case of CO2, the cumulative S-curve rests upon the pre-industrial starting level of 270-280 ppm. The rate of change in CO2 concentration is presently still increasing (Fig. 3), so the inflection point has not been reached; and second-difference calculations show no acceleration, indicating we are beyond the early exponential phase. The current substitution level should therefore lie between 15% and 50%, and this is found to be the case for the solutions shown in Figures 2 & 3.
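As an illustration of the fitting step (a minimal sketch, not the author's chi-square implementation), a logistic of this form can be fitted to an annual CO2 series with an off-the-shelf least-squares routine; the data file name, pre-industrial base and starting guesses below are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

C_BASE = 275.0   # assumed pre-industrial base of the S-curve, in ppm

def logistic_co2(t, ceiling, t_mid, rate):
    """CO2(t) = base + (ceiling - base) * logistic substitution fraction."""
    frac = 1.0 / (1.0 + np.exp(-rate * (t - t_mid)))
    return C_BASE + (ceiling - C_BASE) * frac

# Hypothetical two-column file of annual means (year, ppm), e.g. Mauna Loa
years, co2_ppm = np.loadtxt("co2_annual.txt", unpack=True)

popt, _ = curve_fit(logistic_co2, years, co2_ppm,
                    p0=[700.0, 2050.0, 0.05])   # ceiling, mid-point, rate guesses
ceiling, t_mid, rate = popt

# Current substitution level and the implied 2100 concentration
frac_now = (co2_ppm[-1] - C_BASE) / (ceiling - C_BASE)
print(f"ceiling ~{ceiling:.0f} ppm, substitution now ~{100 * frac_now:.0f}%, "
      f"2100 forecast ~{logistic_co2(2100.0, *popt):.0f} ppm")
```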

The logistic CO2 forecasts project South Pole reaching 587 ppm and Mauna Loa reaching 654 ppm in the year 2100 (Fig. 2). The 90% confidence limits are calculated from variance of observations relative to the logistic fit and as a function of substitution level reached (Mauna Loa 24%, South Pole 33%). The result is well-constrained limits, and the slight divergence between data series continues into the future. RCP4.5 and RCP6.0 are similar to the South Pole forecast until mid-century, when RCP4.5 plateaus under mitigation assumptions and RCP6.0 increases towards the Mauna Loa forecast. RCP6.0 eventually reaches a ceiling below the Mauna Loa logistic ceiling. Results are detailed in Table 1 along with the values defining the RCPs.

Table 1. Atmospheric CO2 concentration projections.

Figure 3 displays the rate of change in CO2 concentration for the historical record, the logistic forecasts, and what is required to attain the defined RCP concentrations. The 60-year histories have a consistent upward trend, although with year-to-year variability. The highest transients above the trend are attributable to strong El Niño years (most recently 1998, 2016), which impair the global vegetative response that forms the seasonal CO2 cycle, so that the annual value is temporarily elevated. The logistic rates-of-change are projected to attain their maximums (50% substitution) around 2037-2051 for South Pole and 2060-2080 for Mauna Loa. RCP4.5 and RCP6.0 rates bracket the South Pole forecast until mid-century, with transitions thereafter.

But what is glaringly apparent is the excessive rate-of-change required to attain RCP8.5’s 936 ppm in the year 2100. The rate would have to immediately depart from the historical pattern towards more than double any other forecast or RCP. In fact, since the RCPs were developed several years ago, the rate should already have transitioned to a very high trend to support an RCP8.5 expectation. This has clearly not occurred, and ongoing measurements show it is not happening. Other mathematical formulations were attempted for 936 ppm, but no logically consistent one was found. Even if it were assumed we remain in the early exponential phase of a substitution process, the numbers do not support such a high expectation. RCP8.5 is a mathematically flawed projection for the future and clearly not the “business as usual” case. Rather, something similar to RCP6.0 should be assigned that designation, although with some modifications as to how it will evolve.
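A back-of-the-envelope check makes the point. Taking roughly 410 ppm as today's approximate concentration (an assumption in line with the measurements described above), reaching 936 ppm by 2100 would require an average rise of about 6.5 ppm per year, well over double the roughly 2-3 ppm per year observed in recent years:

```python
# Rough average rate implied by RCP8.5's 2100 end-point (values approximate)
ppm_now, year_now = 410.0, 2019
rate_needed = (936.0 - ppm_now) / (2100 - year_now)
print(f"{rate_needed:.1f} ppm/yr")   # ~6.5 ppm/yr
```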

Revisiting the Origins of RCP8.5

The RCPs were presented in detail in a set of papers published in Climatic Change in 2011, and are worth reviewing. Recall that there was a desire to perform climate modeling over a wide range of forcing values – to fully exercise them from 2.6 W/m2 to 8.5 W/m2. This is understandable from an exploratory research standpoint, but says nothing about the likelihood of specific future outcomes. But, the papers do shed some light upon that.

RCP8.5 is described in van Vuuren et al., The representative concentration pathways: an overview, as a very high emissions scenario required to attain the desired forcing level: “RCP8.5 is a highly energy-intensive scenario as a result of high population growth and a lower rate of technology development.” Figures published in the paper identify where each RCP’s assumptions lie within the literature available at the time they were developed. Those taken for RCP8.5 lie at the limits of the 90th or 98th percentile bands (1% to 5% probability). The population projection is at the high limit of United Nations scenarios. Its primary energy consumption projection lies at the 99th percentile through most of this century. Energy intensity of the economy (energy/GDP) aligns with the 99th percentile from the literature. Improvement in RCP8.5’s carbon factor (CO2/energy) is minimal and at the 95th percentile, reflecting heavy reliance on fossil fuels. Coal comprises nearly 50% of RCP8.5’s energy mix, something which has not been seen since early in the last century. RCP8.5 has consequently been called a “return to coal” scenario (Why do climate change scenarios return to coal?). This is inconsistent with natural long-term sequential evolutions of energy technologies that project a declining share of the energy mix for coal.

It should come as no surprise then that a concatenation of very low probability assumptions yields a highly unlikely CO2 concentration at end-century. This result is given by van Vuuren et al. and shown in Figure 4, where the RCP8.5 curve exits the literature envelope. The logistic forecasting exercise above confirms the most likely CO2 level, in the vicinity of 600 ppm in 2100, that van Vuuren et al. reported several years ago (Fig. 4). Their graph also serves as guidance for what might constitute a “worst case” CO2 scenario, which could be assessed to be in the range of 700-750 ppm.

So, since it was documented years ago that RCP8.5’s CO2 concentration has a vanishingly small probability of actually occurring, then why has it been promulgated for impact assessments and to inform climate policy? And why have researchers who realize that a true “business as usual” future lies closer to RCP6.0 found that when they go to the climate model library the RCP6.0 model runs do not exist? Have they been purposefully directed to RCP8.5, or is anything less than RCP8.5 unable to force a hypothesized impact?

Conclusions

The 60-year records of rigorous CO2 concentration measurements provide valuable forecasting information that is highly amenable to logistic growth modeling. It is clear that a substitution process is well underway that can be quantified to provide constraints upon expectations of future concentrations. The consistent concave-upward CO2 trend, rising rate-of-change, and well-bounded variance about the logistic solution provide confidence in the resulting forecasts and rejection of significantly inconsistent projections such as RCP8.5.

CO2 concentrations in 2100 will likely fall in the 565-680 ppm range, well short of the 936 ppm indicated by RCP8.5. In preparation for the next IPCC assessment report, RCP8.5 has been redefined at even higher CO2 concentrations [link]. Modifications to inconsistent assumptions for minor GHGs cause CO2 in the new RCP8.5 to exceed 1000 ppm in 2100, through even more coal consumption, in order to retain 8.5 W/m2 forcing. The new RCP8.5 thus requires a CO2 rate of change even more inconsistent with the observed record.

The RCP reference literature documents how RCP8.5 was based on low probabilities and questionable assumptions. It is not “business as usual” or even a worst case scenario. Consequently, the findings of any impact assessment based in RCP8.5 should be critically reviewed, as they reflect a highly unlikely, if not impossible, outcome.

The NCA4 (Executive Summary, p.22) states “The observed increase in global carbon emissions over the past 15-20 years has been consistent with higher scenarios (e.g., RCP8.5) (very high confidence).” This statement suggests either dismissal of observational evidence or that carbon budget model calculations from emissions to concentration are unable to replicate the historical CO2 measurement record. As evident in Figure 3, the recent record does not support an RCP8.5 pathway, and the statement is false.

Unfortunately the compulsion towards exaggeration can be stronger than the duty to facts, and without facts it will be impossible to make progress towards preparing for the future. The RCP6.0 pathway comes closest to the forecast presented above and is therefore a more realistic expectation of the future; mitigation actions could evolve it towards RCP4.5.

Acknowledgements. The author thanks his colleague, Theodore Modis (growth-dynamics.com), for conducting the logistic forecast calculations. For those interested, the methodology is well-documented in his book Natural Laws in the Service of the Decision Maker (2013).

Article says high-pressure air in deep saline aquifers could store immense amounts of energy, sufficient to keep the lights on in Britain for a couple of months. [link]

Some of the best environmental #peacebuilding work in the world is currently taking place in Wadi El Ku, Sudan. A @UNEnvironment project has helped triple crop yield, increase farmers’ income, and improve community-based natural resources management [link]

A careful look at the early 20th century global warming, which is almost as large as the warming since 1950. Until we can explain the early 20th century warming, I have little confidence in IPCC and NCA4 attribution statements regarding the cause of the recent warming.

Abstract: “The most pronounced warming in the historical global climate record prior to the recent warming occurred over the first half of the 20th century and is known as the Early Twentieth Century Warming (ETCW). Understanding this period and the subsequent slowdown of warming is key to disentangling the relationship between decadal variability and the response to human influences in the present and future climate. This review discusses the observed changes during the ETCW and hypotheses for the underlying causes and mechanisms. Attribution studies estimate that about a half (40–54%; p > .8) of the global warming from 1901 to 1950 was forced by a combination of increasing greenhouse gases and natural forcing, offset to some extent by aerosols. Natural variability also made a large contribution, particularly to regional anomalies like the Arctic warming in the 1920s and 1930s. The ETCW period also encompassed exceptional events, several of which are touched upon: Indian monsoon failures during the turn of the century, the “Dust Bowl” droughts and extreme heat waves in North America in the 1930s, the World War II period drought in Australia between 1937 and 1945; and the European droughts and heat waves of the late 1940s and early 1950s. Understanding the mechanisms involved in these events, and their links to large scale forcing is an important test for our understanding of modern climate change and for predicting impacts of future change.”

HELLOOOOOO!

This paper ‘shocked’ me for several reasons. First, I can’t imagine how I missed this paper when it was first published in Oct 2017 – apparently it received no publicity (oops, now I remember, this was when I messed up my neck/shoulder/hand). Second, every time I say, in the context of an attribution argument, ‘but the early 20th century global warming (not to mention the mid century cooling) and all those heat waves and droughts,’ I (along with my argument) am dismissed. I will paraphrase something I recall Gavin Schmidt saying: “We understand the late 20th century warming and have good forcing data, so no point to paying attention to the early warming where the data is far inferior.” And last but not least, the AMO and PDO are explicitly considered in Hegerl et al.’s attribution argument.

The Hegerl et al. paper is actually pretty good (as far as it goes). Let’s take a closer look at their analysis and argument, then I will take it a bit further.

Hegerl et al. provides a summary of forcing from CO2, volcanoes and solar (Figure 4, below). In 1910, the atmospheric CO2 concentration has been estimated to be 300.1 ppm; in 1950 it was 311.3 ppm; and in 2018 it is 408 ppm. So, the warming during the period 1910-1945 was associated with a CO2 increase of 10 ppm, whereas a comparable amount of warming during the period 1950 to 2018 was associated with a 97 ppm increase in atmospheric CO2 concentration – almost an order of magnitude greater CO2 increase for a comparable amount of global ocean warming. Back when CO2 concentrations were lower, each molecule had a greater radiative impact – but not THAT much.
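A rough check on that last point, using the standard simplified CO2 forcing expression ΔF ≈ 5.35 ln(C/C0) W/m2 and taking ~310 ppm as an approximate 1945 value (an assumption consistent with the ~10 ppm rise noted above):

```python
import math

def co2_forcing(c0, c1):
    """Simplified CO2 radiative forcing change: dF = 5.35 * ln(C1/C0) W/m2."""
    return 5.35 * math.log(c1 / c0)

early = co2_forcing(300.1, 310.0)   # ~1910 to ~1945, about a 10 ppm rise
late = co2_forcing(311.3, 408.0)    # 1950 to 2018, about a 97 ppm rise
print(f"early: {early:.2f} W/m2, late: {late:.2f} W/m2, ratio ~{late / early:.0f}x")
```

The later CO2 forcing increase works out to roughly eight times larger, despite a comparable amount of global ocean warming – which is the point being made here.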

Clearly, there were other factors in play besides CO2 emissions in the early 20th century global warming. In terms of external radiative forcing (Figure 4, above), a period of relatively low volcanic activity during the period 1920-1960 would have a relative warming effect, although the period from 1945 to 1960 was a period of slight overall cooling. Solar forcing in the early 20th century is uncertain, with estimates of warming of varying magnitude, although the magnitudes are insufficient for solar to have been a major direct contributor to the early 20th century global warming.

Hegerl et al. analyzed the internal variability associated with ocean circulations during the period since 1900. They found that the unusual cold anomaly circa 1910 originated in the South Atlantic, and then spread globally in the subsequent decade, leading to cold anomalies in both Atlantic and Pacific. This is very interesting, and something I hadn’t seen before.

This rarely discussed cold period was followed by strong warming in the Northern Hemisphere, which was particularly pronounced in high latitudes. Hegerl et al. summarized some previous research that might account for mechanisms of the strong high latitude warming in the Northern Hemisphere, including multi-decadal ocean oscillations.

Hegerl et al. focus their arguments regarding internal variability associated with large-scale ocean circulations on the Atlantic Multidecadal Oscillation (AMO) and the Pacific Decadal Oscillation (PDO). Warm phases of both the AMO and PDO contributed to warming, particularly during the 1930’s and 1940’s. They also consider atmospheric circulation impacts from the Pacific Walker cell, North Atlantic Oscillation and Arctic dipole mode.

Concluding statement:

“These anomalous events occurred during a period of strong global-scale warming, which can be attributed to a combination of external forcing (particularly, greenhouse gas increases, combined with a hiatus in volcanic events) and internal decadal variability. The exact contribution of each factor to large-scale warming remains uncertain, largely due to uncertainty in the role of aerosols in the cooling or stabilization of climate following the middle of the 20th century.”

The data

Let’s look at the data. First the surface temperature data. It is instructive to consider land and ocean separately, and also to look at the hemispheres separately.

Looking at the land surface temperatures, the various data sets show discrepancies during this period, as per this image from Berkeley Earth:

Note, Berkeley Earth land values are lowest circa 1910 (of the values plotted above, I suspect Berkeley Earth is most reliable over land in early 20th century). I would be interested in a better diagram that also includes Cowtan and Way (the Arctic was very warm in the 1920’s and 1930’s), but I am not seeing much in the way of land-only comparisons. Perhaps Zeke, Cowtan, or Robert Rohde has a better diagram. Note: over land, the recent warming is substantially greater than the early 20th century warming.

And now the ocean temperatures, from HADISST:

The 1910 cold anomaly is larger in the SH, but seems coincident with the timing of the NH cold anomaly (I have NO IDEA what kind of data ‘adjustments’ were made during this period, but I do know that data was very sparse in the Pacific and SH oceans at that time). But globally, you can see that the ocean warming between 1910 and 1945 was about the same magnitude as the warming from 1976 to present (recent warming is smaller in the SH).

Now, compare the ocean surface temperatures with the Zanna et al. global ocean heat content (note: inflection points in the Atlantic are generally the same as for global ocean):

The dip circa 1910 starts at the surface, and is seen in the deep ocean circa 1925.

Now take a look at sea level and sea level rise data. Klaus Bitterman’s post at RealClimate provides a useful summary:

The recent sea level rise is believed to have started in the mid-19th century (prior to these graphs). Consideration of the rates of sea level rise in the early 20th century shows an increase in the rate of sea level rise starting in 1920 and peaking in the 1940’s.

Clearly, ocean heat content is not the only thing driving sea level rise.

Things really get interesting when we look at the Arctic. The warming in the Arctic during the early 20th century is described by Polyakov et al:

The Arctic warming began in 1915, with an increase of about 1.6°C between 1915 and 1940.

The mass balance of the Greenland ice sheet provides an additional line of evidence for Arctic warming, one that also has direct relevance to sea level rise. The following figure is from Fettweis et al. (2008) (note: Fettweis et al. have more recent analyses that are generally consistent with the 2008 paper, but I think the 2008 figure is best):

The impact of the early 20th century Arctic warming is seen in the negative mass balance for Greenland, from the 1920’s to mid 1950’s. This Greenland mass loss is consistent with the increase in the rate of sea level rise circa 1920 – 1950. While early estimates of Greenland mass balance are associated with substantial uncertainty, the early century mass loss is consistent with the early 20th century Arctic warming and increasing rate of sea level rise.

LeClerq et al. (2014) developed a new data set of worldwide glacier fluctuations. The data set shows relatively small fluctuations until the mid-19th century, followed by a global retreat that was strongest in the first half of the 20th century. (Note: the paper doesn’t include a diagram that is useful for the purpose here.)

“Despite increasing global temperatures in the 20th century, this retreat is strongest in the period 1921– 1960 rather than in the last period 1961–2000, with a median retreat rate of 12.5 m/yr in 1921–1960 and 7.4 m/yr in the period 1961–2000.”

There are too many independent lines of evidence to ignore the early 20th century global warming.

Attribution

The underlying reasoning behind their detection and attribution approach seems to be this:

1) An estimate of internal variability is derived from multi-century to multi-millennial climate model simulations without external forcing. However, the climate models have substantial deficiencies in simulating multi-decadal internal variability (e.g. Kravtsov et al.) with regards to magnitude, spatial patterns and their sequential time development (and this is not to mention centennial to millennial scale internal variability).

2) Based on these model calculations of internal variability, they infer that the warming since 1950 exceeds the magnitude of natural internal variability. Therefore you cannot explain this warming from natural variability alone.

3) They then look at climate model simulations for the 20th century, with internal variability averaged out, so all you see is the forced response. And ‘miraculously’, the forced response ‘sort of’ agrees with observations. Therefore we are justified in assuming that all the warming is ‘forced’ (oops, spot the flawed logic in going from 2 to 3).

The “fingerprinting” approach used by Hegerl et al. ignores the fingerprints of multi-decadal variability; multi-decadal variability is only invoked when the forcing is insufficient to explain the observations.

This attribution ‘sort of’ works, according to the principle that two ‘wrongs’ make a ‘right’ – neglect of multidecadal and longer internal variability plus climate model sensitivity to CO2 that is too high = 100% attribution of recent warming to anthropogenic causes.

As a case in point, from the 4th U.S. National Climate Assessment (p 118 and ff):

“Multi-century to multi-millennial-scale climate model integrations with unchanging external forcing provide a means of estimating potential contributions of internal climate variability to observed trends. Based on multimodel assessments, the likely range contribution of internal variability to observed trends over 1951–2010 is about ±0.2°F, compared to the observed warming of about 1.2°F over that period.”

“A recent 5,200-year integration of the CMIP5 model having apparently the largest global mean temperature variability among CMIP5 models shows rare instances of multidecadal global warming approaching the observed 1951–2010 warming trend. However, even that most extreme model cannot simulate century-scale warming trends from internal variability that approach the observed global mean warming over the past century. Thus, using present models there is no known source of internal climate variability that can reproduce the observed warming over the past century without including strong positive forcing from anthropogenic greenhouse gas emissions.”

Abstract: “With amplified warming and record sea ice loss, the Arctic is the canary of global warming. The historical Arctic warming is poorly understood, limiting our confidence in model projections. Specifically, Arctic surface air temperature increased rapidly over the early 20th century, at rates comparable to those of recent decades despite much weaker greenhouse gas forcing. Here, we show that the concurrent phase shift of Pacific and Atlantic interdecadal variability modes is the major driver for the rapid early 20th-century Arctic warming. Atmospheric model simulations successfully reproduce the early Arctic warming when the interdecadal variability of sea surface temperature (SST) is properly prescribed. The early 20th-century Arctic warming is associated with positive SST anomalies over the tropical and North Atlantic and a Pacific SST pattern reminiscent of the positive phase of the Pacific decadal oscillation. Atmospheric circulation changes are important for the early 20th-century Arctic warming. The equatorial Pacific warming deepens the Aleutian low, advecting warm air into the North American Arctic. The extratropical North Atlantic and North Pacific SST warming strengthens surface westerly winds over northern Eurasia, intensifying the warming there. Coupled ocean–atmosphere simulations support the constructive intensification of Arctic warming by a concurrent, negative-to-positive phase shift of the Pacific and Atlantic interdecadal modes. Our results aid attributing the historical Arctic warming and thereby constrain the amplified warming projected for this important region.”

As summarized in my sea level report: In general, years with positive AMO index are associated with relatively high Greenland runoff volume and vice versa (Hanna, 2013; Mernild and Liston, 2012; Mernild et al., 2017). Hofer et al. (2017) found that the reduction in Greenland’s mass balance since 1995 is caused by decreasing summer cloud cover, which has a warming effect from increased solar radiation. The observed reduction in cloud cover is strongly correlated with a state shift in the North Atlantic Oscillation (NAO), promoting high-pressure conditions in summer that inhibits cloud formation and also reduces precipitation.

So what we have is some very fine research on multi-decadal to millennial scale internal variability by Xie et al., Kravtsov, Huybers et al., Meehl et al and others. And it all gets ignored by the circular reasoning of formal attribution studies.

JC reflections

In order to have any confidence in the IPCC and NCA attribution statements, much greater effort is needed to understand the role of multi-decadal to millennial scale internal climate variability.

Much more effort is needed to understand not only the early 20th century warming, but also the ‘grand hiatus’ from 1945-1975. Attempts to attribute these features to aerosol (stratospheric or pollution) forcing haven’t gotten us very far. The approach taken by Xie’s group is providing important insights.

Once we do satisfactorily explain these 20th century features, then we need to tackle the 19th century — overall warming, with global sea level rise initiating ~1860, and NH glacier melt initiating ~1850. And then we need to tackle the last 800 years – the Little Ice Age and the ‘recovery’. (See my previous post 400 years(?) of global warming). The mainstream attribution folk are finally waking up to the importance of multidecadal ocean oscillations — we have barely scratched the surface re understanding century to millennial scale oscillations, as highlighted in the recent Gebbie and Huybers paper discussed on Ocean Heat Content Surprises.

There are too many climate scientists that expect global surface temperature, sea ice, glacier mass loss and sea level to follow the ‘forcing’ on fairly short time scales. This is not how the climate system works, as was eloquently shown by Gebbie and Huybers. The Arctic in particular responds very strongly to multidecadal and longer internal variability, and also to solar forcing.

Until all this is sorted out, we do not have a strong basis for attributing anything close to ~100% of the warming since 1950 to humans, or for making credible projections of 21st century climate change.

There are a number of statements in Cheng et al. (2019) ‘How fast are the oceans warming’, (‘the paper’) that appear to be mistaken and/or potentially misleading. My analysis of these issues is followed by a reply from the paper’s authors.

Contrary to what the paper indicates:

Contemporary estimates of the trend in 0–2000 m depth ocean heat content over 1971–2010 are closely in line with that assessed in the IPCC AR5 report five years ago

1. The paper states: “The warming is larger over the 1971–2010 period than reported in AR5. The OHC trend for the upper 2000 m in AR5 ranged from 0.20 to 0.32 Wm−2 during this period (4: AR5). The three more contemporary estimates that cover the same time period suggest a warming rate of 0.36 ± 0.05 (6: Ishii), 0.37 ± 0.04 (10: Domingues), and 0.39 ± 0.09 (2: Cheng) Wm−2.” [Numbered references in this article are to the same numbered references in the paper. The number is followed by the lead author’s name, or AR5, where this aids clarity.]

2. AR5 (4) featured 0–700 m depth ocean heat content (OHC) 1971-2010 linear trend estimates from five studies, ranging from 0.15 to 0.27 Wm−2 of the Earth’s surface. Adding the AR5 700–2000 m OHC 1971-2010 trend estimate of 0.09 Wm−2 brings the range up to 0.24 to 0.36 Wm−2, not to 0.20 to 0.32 Wm−2 as stated. The warming rates plotted in Supplementary Figure S1 agree with my values, not with those stated in the paper.

3. Importantly, although AR5 featured several OHC trend estimates for 0–700 m depth, its assessment of the Earth’s energy uptake (Section 3.2.3 and Box 3.1) used only the highest one (10: Domingues), adding the Levitus (12) 700–2000 m OHC trend to give a best estimate 0–2000 m warming rate over 1971–2010 of 0.36 Wm−2. That rate is identical to one (6: Ishii) of the three more contemporary estimates given in the paper and extremely close to the other two of them – within the innermost one-third of their uncertainty ranges.

See Figure 1, left hand section, and compare with the ‘Updated OHC estimates compared with AR5’ figure [Fig 2] in the paper. It is therefore misleading to claim that the warming is larger over the 1971–2010 period than reported in AR5.

4. Moreover, over the final decade covered by AR5, 2002–2011, the trend of the 0–2000 m OHC time series that AR5 adopted for its assessment, 0.60 Wm−2, was noticeably higher than those for two of the three more contemporary estimated OHC datasets given in the paper (0.35 (6: Ishii) and 0.52 (2: Cheng) Wm−2) and, unsurprisingly, almost identical to the third (10: Domingues + 12: Levitus).

Ocean warming over 2005–2017 per CMIP5 models and contemporary estimates

5. The paper’s ‘Past and future ocean heat content changes’ figure [Fig 1] caption states: “Annual observational OHC changes are consistent with each other and consistent with the ensemble means of the CMIP5 models for historical simulations pre-2005 and projections from 2005–2017, giving confidence in future projections to 2100 (RCP2.6 and RCP8.5).” This does not appear to be true for the linear trends of the annual values for the 2005–2017 projections, at least.

The main text states: “Over this period (2005–2017) for the top 2000 m, the linear warming rate for the ensemble mean of the CMIP5 models is 0.68 ± 0.02 Wm−2, whereas observations give rates of 0.54 ± 0.02 (2), 0.64 ± 0.02 (10), and 0.68 ± 0.60 (11) Wm−2.”

(i) the CMIP5 RCP2.6 and RCP8.5 projections’ top 2000 m OHC data archived for the paper show an ensemble-mean linear warming rate over 2005–2017 of 0.70 ± 0.03 Wm−2, not 0.68 ± 0.02 Wm−2. The same is true when data from the third scenario used in the paper (RCP4.5) are also included.

(ii) the underlying time series from which the third observational estimate is derived (Fig. 3.b in 11: Resplandy) spans 1991–2016, and has a lower (and highly uncertain) linear trend from 2005 to 2016 (its final year) than the stated 0.68 Wm−2 (which is calculated over 1991–2016), so this estimate should be excluded;

(iii) the statement inexplicably omits the Ishii et al. (6) observational data, which also have a lower estimated trend (0.62 ± 0.07 Wm−2) than the CMIP5 ensemble mean over this period; and

(iv) the uncertainty range for the Cheng (2) estimate appears to be seriously understated: I calculate that the estimate should be 0.54 ± 0.06 (rounding 0.055 up), not 0.54 ± 0.02.

(v) adding the uncertainty ranges in quadrature (since CMIP5 and observational errors are independent), the CMIP5 ensemble mean trend is statistically inconsistent with all three of these observational trend estimates (2: Cheng, 6: Ishii, 10: Domingues); a minimal sketch of this check is given below.
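For readers who want to reproduce the consistency check in (v), here is a minimal sketch using the trend values and 5–95% half-widths quoted above; treating the half-widths as independent and combining them in quadrature is my reading of the check, not code from the article.

import math

# Linear trend estimates over 2005-2017 (W m-2) with 5-95% half-widths,
# taken from the figures quoted in the text above.
cmip5 = (0.70, 0.03)              # CMIP5 ensemble mean from the archived data (point (i))
observations = {
    "Cheng (2)":      (0.54, 0.06),   # with the recalculated uncertainty from point (iv)
    "Ishii (6)":      (0.62, 0.07),
    "Domingues (10)": (0.64, 0.02),
}

def consistent(a, b):
    """Check whether two independent trend estimates agree within their
    combined 5-95% range. Each argument is (trend, half_width); independent
    errors add in quadrature, so the combined half-width is
    sqrt(hw_a**2 + hw_b**2)."""
    difference = abs(a[0] - b[0])
    combined = math.sqrt(a[1] ** 2 + b[1] ** 2)
    return difference <= combined, difference, combined

for name, estimate in observations.items():
    ok, difference, combined = consistent(cmip5, estimate)
    verdict = "consistent" if ok else "inconsistent"
    print(f"{name}: |difference| = {difference:.2f} vs combined half-width "
          f"{combined:.2f} W m-2 -> {verdict}")

With these inputs all three differences exceed the combined 5–95% half-widths; the conclusion is unchanged if the paper’s ±0.02 uncertainty is used for the Cheng estimate instead.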

7. Although it is pointed out in the paper’s Supplementary material that volcanic eruptions after 2000 have not been taken into account in CMIP5 models (with a minor effect on projected warming since then), it has been shown that when underestimation of growth in other drivers of climate change is accounted for, there is no overall bias in post-2000 CMIP5 model forcing growth (Outten et al. 2015).

Other issues

8. The straight black line in the ‘Past and future ocean heat content changes’ figure [Fig 1] for the Resplandy et al. (11) OHC estimate gives a misleading impression of close agreement with the three OHC time series based on in situ observations over 1991–2016: its trend uncertainty range is so large (0.08 to 1.28 Wm−2) that the apparent close agreement is most likely due to chance.

9. The Press release for the paper claimed that “If no actions are taken (‘business as usual’), the upper ocean above 2000 meters will warm by 2020 ZetaJoules by 2081-2100”, which is based on CMIP5 model RCP8.5 scenario simulations. That is misleading. RCP8.5 involves not only no actions (including those already carried out) being taken, but also emissions being unusually high for a business as usual scenario.[ii]

Nicholas Lewis January 2019

[i] The paper does not directly claim that ocean warming is accelerating faster than thought; that is the headline of The New York Times article about the paper.

[ii] As the source paper (Riahi, K., et al., 2011: RCP 8.5–A scenario of comparatively high greenhouse gas emissions. DOI 10.1007/s10584-011-0149-y) states: “RCP8.5 combines assumptions about high population and relatively slow income growth with modest rates of technological change and energy intensity improvements, leading in the long term to high energy demand and GHG emissions in absence of climate change policies.”, and that “[RCP] 8.5 corresponds to a high greenhouse gas emissions pathway compared to the scenario literature”. As Riahi et al. (2011) make clear, the assumed energy intensity improvement rates are only about half the historical average while middling world GDP growth is assumed, leading to coal use increasing almost 10 fold by 2100.

Addendum: Update 22 January 2019 in reaction to tweets by co-author Zeke Hausfather

The authors seem still not to understand that their Figure 2 AR5 0–2000 m warming rates are mathematically wrong, while those I calculate are correct, not merely a different approach. Perhaps when they see it graphically they will admit their error. The below plot shows how the AR5 Box 3.1 1971-2010 Deep ocean (sub-700 m) OHC values (green line) were made up from Levitus 700–2000 m OHC data (black line) plus warming of the sub-2000 m ocean at a rate of zero until 1991 and then 35 TW from 1992 on (blue line).

It follows that Cheng et al.’s Figure 2 values for AR5 0-2000 m warming rates over 1971-2010, which add the trend of the magenta line to the 0-700 m warming rate, are understated by the difference between the trends of the black and the magenta lines.

Honest scientists, unlike activists, are prepared to admit and correct factual mistakes in their papers, whether or not they alter its primary conclusions. I expect that Cheng et al. will accordingly submit a correction to Science to substitute correctly calculated 1971–2010 upper 2000 m AR5 OHC trend values for their erroneous values.

Lijing Cheng has asked me to post also the following reply from the paper’s authors to my critique. I am pleased to do so and I thank him for providing it. I have replaced interspersed text extracted from my article with paragraph number references. The authors’ responses are shown in blue. I have appended my comments, shown in red, on a number of them.

Paragraphs 1 and 2

Some questions have been raised concerning the numbers in our article (…) and indeed there is an inconsistency between a value in the supplementary material and the main text. It relates to the use of linear trends and how to assign a change over various periods. For longer time intervals, a linear trend is not a good fit to the data and use of that to assign a change can be misleading. In the IPCC AR5, below 700m depth, it is stated that “the heating below 700 m is 62 TW for 1971-2010”. They also state “For the ocean from 2000 m to bottom, a uniform rate of energy gain of 35 [6 to 61] TW from warming rates centred on 1992–2005 (Purkey and Johnson, 2010) is applied from 1992 to 2011, with no warming below 2000 m assumed prior to 1992.” Hence the difference for the 700 to 2000 m layer is 62 -35 = 27 TW. This is 0.05 W m-2 and is what was used in the main text to produce the numbers quoted. However, if instead one takes the 2 flat lines below 2000 m and subtracts from the actual values, and then fits a linear trend, the implied change is closer to 45 TW which gives the 0.09 W m-2 plotted in Fig. S1. If the latter is used instead, then the change from the old AR5 values to the newer OHC values is somewhat reduced (see figure below). The increase is up to 40% over the prior IPCC estimates, and the average is 24%. This exercise was prompted by a comment by Nic Lewis who we thank, and it highlights the uncertainty in actual trends and their use to depict changes. The conclusions in our Perspective remain sound. If the alternative analysis method proposed by Nic Lewis is used, the change is not quite as dramatic as implied in some of the associated press releases.
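For readers checking the arithmetic, the TW figures above convert to W m−2 of the Earth’s surface roughly as follows; the ~5.1 × 10^14 m² surface area used in this small sketch is an assumed round number for illustration, not a value given in the reply.

# Quick check of the TW <-> W m-2 conversions quoted above.
EARTH_SURFACE_M2 = 5.1e14  # approximate surface area of the Earth (assumption)

def tw_to_wm2(terawatts):
    """Convert a global heating rate in TW to W per m2 of the Earth's surface."""
    return terawatts * 1e12 / EARTH_SURFACE_M2

for tw in (62, 35, 62 - 35, 45):
    print(f"{tw:2d} TW -> {tw_to_wm2(tw):.3f} W m-2")
# 27 TW comes out at ~0.05 W m-2 and 45 TW at ~0.09 W m-2, matching the two
# alternative 700-2000 m rates discussed in the reply.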

Based on this:

While there is an inconsistency between Fig. 2 and Fig. S1 that is not discussed, it reflects the uncertainties in previous OHC estimates and the associated methods. In particular, some values before 1980 or so are erratic (high values in the 1960s) and a linear trend is not a good fit to the time series.

All of our key points are still valid: (1) the best estimates are collectively higher than the 5 estimates featured in AR5 (0-63% higher); (2) the best estimates are more consistent with each other (0.36/0.37/0.39 Wm−2 versus 0.24-0.36 Wm−2 in AR5); and (3) model ensemble means are higher than the 5 estimates featured in AR5 (0.39 Wm−2, 8-63% higher) and consistent with new/updated observations.

AR5 Box 3.1 used the strongest estimate without backup literature; we state this in the supplement (also see our replies below). So our study justifies the choice made in Box 3.1, as discussed in the supplement. The new estimates could be 0-8% stronger than the estimate selected by AR5 Box 3.1 for 1971-2010. But we did not make claims regarding Box 3.1 in the Science article, so this is not an issue.

This would be an adjusted Fig. 2 (plot below) if we used the different value; the key messages do not change:

Additionally, the Domingues value for 0-700m should have extremely large error bars: all of the values prior to 1970 are much higher than from 1970 to 1980 in AR5 (see AR5 Figure 3.2, given also below), and hence the trends for that estimate are extremely dependent on the period used. Whether that value was used or not in AR5 (and we state it was), the AR5 message is that they really didn’t know the value at all well, and now we do.

Nic Lewis comments:

a) The only correct way to derive the AR5 700–2000 m depth OHC time series is to take its sub-700 m OHC time series and reverse out this addition, which is what I did. Cheng et al.’s method gives the wrong 1971–2010 rate of 700–2000 m depth ocean heating irrespective of whether this is measured by a linear trend or otherwise.

b) Their explanation of the inconsistency between their Fig 2 and their Fig S1 conflicts with the facts. They imply in the supplementary material that for both figures the warming rates are linear trends from an ordinary least squares (OLS) fit. Whether or not an OLS fit is ideal is irrelevant; it is what AR5 did and is what Cheng et al. indicated they did. I have verified that their Fig S1 estimates agree with OLS fits to their data. It is undeniable that the AR5 warming rates plotted in their Fig 2 are erroneous.

c) Numerical simulations using strongly autocorrelated random errors confirm that the 1971–2010 trend uncertainty for the Domingues 0–700 m OHC trend stated in AR5, which is incorporated in the AR5 0–2000 m trend uncertainty plotted in my Fig 1, appears to adequately reflect the large uncertainty that AR5 showed the Domingues estimates as having in pre-Argo years (which dominates the uncertainty shown in Box 3.1 Fig 1).
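The kind of numerical check referred to in (c) can be sketched as follows: generate many synthetic series with a known trend plus strongly autocorrelated (AR(1)) noise, fit OLS trends, and examine their spread. The trend, noise level and autocorrelation below are illustrative assumptions, not the Domingues values.

import numpy as np

rng = np.random.default_rng(0)

def trend_spread(n_years=40, trend=0.3, sigma=1.0, rho=0.8, n_sims=5000):
    """Standard deviation of OLS trend estimates when the residuals are AR(1)
    noise with lag-1 autocorrelation rho (all parameter values illustrative)."""
    t = np.arange(n_years)
    slopes = np.empty(n_sims)
    for i in range(n_sims):
        # AR(1) recursion with innovations scaled to keep the noise variance ~ sigma**2
        innovations = rng.normal(scale=sigma * np.sqrt(1 - rho ** 2), size=n_years)
        noise = np.empty(n_years)
        noise[0] = rng.normal(scale=sigma)
        for k in range(1, n_years):
            noise[k] = rho * noise[k - 1] + innovations[k]
        y = trend * t + noise
        slopes[i] = np.polyfit(t, y, 1)[0]
    return slopes.std()

print("spread of fitted trends with rho = 0.8:", round(trend_spread(rho=0.8), 4))
print("spread of fitted trends with rho = 0.0:", round(trend_spread(rho=0.0), 4))

With rho = 0.8 the spread of fitted trends is roughly three times the white-noise value, which is the sense in which such simulations can be used to check whether a stated trend uncertainty adequately reflects autocorrelated errors.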

Paragraph 3

Please read our supplement; we fully describe the whole story as follows:

“IPCC-AR5 (1) featured five estimates for OHC within 0-700m including Levitus et al. (2) (LEV), Ishii et al. (3) (ISH), Domingues et al. (4) (DOM), Palmer et al. (5) (PAL), Smith and Murphy (6) (SMT), one estimate for 700-2000m: Levitus et al. (2) (LEV) and one estimate below 2000 m: Purkey and Johnson (7) (PG). For the Earth’s energy budget inventory (Box 3.1 in Ref. (1)) and other places, DOM, LEV and PG are used for 0-700m, 700-2000m, and below 2000m respectively. Among the five 0-700m OHC estimates in AR5, the minimum yields an ocean warming of 74 [43 to 105] TW (SMT) within 1971-2010, which is almost half of the maximum, with a rate of OHC change of 137 [120 to 154] TW (DOM). If all of five estimates are treated equally, a huge error bar has to be put in the final OHC estimate, downplaying the reliability of OHC records.

AR5 chose the DOM estimate to assess Earth’s energy budget, rather than any others or an ensemble mean of the five featured estimates by stating:

‘Generally the smaller trends are for estimates that assume zero anomalies in areas of sparse data, as expected for that choice, which will tend to reduce trends and variability. Hence the assessment of the Earth’s energy uptake (Box 3.1) employs a global UOHC estimate (Domingues et al., 2008) chosen because it fills in sparsely sampled areas and estimates uncertainties using a statistical analysis of ocean variability patterns.’

In this way, the ‘conservative error’ of many estimates has been identified in AR5 but not supported by the literature. Since AR5, many studies have been looked into this issue either directly or indirectly (8-13) and several new/revised estimates are available, and are chosen by our study.

For OHC within 0-700m, the new CHG and ISH estimates are consistent with DOM (Figure S1). The three estimates are collectively higher than LEV/ISH/PAL/SMT featured in AR5 (Figure S1). Therefore, the progress after AR5 justifies the choice of DOM in AR5 for OHC 0-700m.”

We note that the AR5 featured five different OHC estimates available at the time in the main body of their assessment and the main figure (Fig. 3.2), shown below. We feel that this justifies comparing newer OHC estimates to all five series, rather than just the Domingues series that the AR5 chose to highlight. Additionally, when the 700-2000m values from Levitus are used (as discussed above), recent records still show 0% to 8% more warming over the 1971-2010 period than the AR5 Domingues value: 0.36 ± 0.05 (Ishii), 0.37 ± 0.04 (Domingues+Levitus), and 0.39 ± 0.09 (Cheng) compared to the old Domingues value of 0.36 Wm−2.

Incidentally, since we have this figure here: note the big bump in Domingues in the top panel in the 1950s and 60s. Also note the bump in the 1970s in the 700 to 2000 m layer.

Nic Lewis comments:

None of these points affects what I say in my article. The paper says: “These recent observation-based OHC estimates show highly consistent changes since the late 1950s (see the figure). The warming is larger over the 1971–2010 period than reported in AR5. The OHC trend for the upper 2000 m in AR5 ranged from 0.20 to 0.32 W m−2 during this period (4).” Since the figure referred to shows only 0–2000 m OHC, it is implicit that “the warming is larger over the 1971–2010 period” in the next sentence refers to warming in the 0–2000 m ocean layer. AR5 featured 0–700 m OHC datasets other than Domingues only when discussing warming of that ocean layer; it did not use any of them to estimate warming over 0–2000 m.

Paragraph 4

The period from 2002-2011 seems somewhat arbitrary, and we chose to focus on the 1971-2010 period as it was the one specifically highlighted in the AR5. We would expect greater agreement between older and newer estimates of OHC changes after around 2005 (when Argo data begins being available), as corrections of XBT measurements and better spatial interpolation approaches – which were the primary changes made to newer OHC datasets – matter much more prior to the early 2000s. And there is better agreement after 2005; Johnson et al. (2018, BAMS State of the Climate) show this already. We do give updated values for 2005-2017 (Argo period) for comparison with CMIP5.

Further, as may be seen in the time series plot below (similar to AR5 Fig. 3.2), the new time series apparently show better consistency among estimates than AR5 Fig. 3.2.

Figure. Time series of OHC 0-2000 m for the four best estimates compared with the CMIP5 model ensemble mean and two-sigma model spread.

Nic Lewis comments:

My paragraph 4 is simply an observation; it does not claim to point to any mistake in the paper. Nor does it bear on my point that it is misleading to claim that the warming is larger over the 1971–2010 period than reported in AR5.

Paragraph 5

First, 2005-2017 is a short period and there are many uncertainties: 1) there is short-term variability in the time series (i.e. interannual variability such as ENSO) and uncertainty in observations, which can affect the trend calculation over a period as short as 2005-2017; 2) CMIP5 models do not contain natural variability in phase with actual natural variability; and 3) they do not contain realistic forcings after 2005. We discuss this in some detail in the supplement:

“We show in the main text that over the period of 2005-2017, the linear warming rate for the ensemble mean of the CMIP5 models is 0.68±0.02 W m-2, slightly larger than the observations (ranging from 0.54±0.02 to 0.64±0.02). Many studies, including Gleckler et al. (13) and Schmidt et al. (16) have shown that the volcanic eruptions after 2000 have not been taken into account in CMIP5 models. Taking this into account, the Multi-Model-Average of CMIP5 simulations will be more consistent with observations during the recent decade (13).”

Nic Lewis comments:

None of this is relevant to my point that the claim in the caption to Fig 1 of the paper that “Annual observational OHC changes are consistent with each other and consistent with the ensemble means of the CMIP5 models for historical simulations pre-2005 and projections from 2005–2017” is contradicted by the differences in the linear trends of the data involved over 2005–2017, having regard to the trend uncertainty ranges.

Paragraph 6

(i) As we point out in the supplementary materials (figure caption) “CMIP5 results (historical runs from 1971 to 2005 and RCP4.5 from 2006 to 2010) are indicated by the green bar”. Using CMIP5 historical + RCP4.5 runs gives us 0.68 ± 0.02 Wm−2. We could have been clearer in the main paper about which RCP runs were shown in the trends comparison part of the figure; we did so in earlier drafts of the article, but it was cut at the suggestion of the editors at Science to shorten/simplify the figure caption.

(ii) Resplandy explicitly state in their paper that the trustable estimate is the linear trend, rather than annual values, because the O2 and CO2 changes on annual scales are not primarily driven by temperature. Hence we only use their linear trend (the revised version shown in Real Climate).

(iii) Earlier drafts of the paper did include the Ishii estimate, though it was omitted from the final version due to length constraints as it fell between the 0.54 (Cheng) and 0.64 (Dom/Lev) instrumental estimates noted. We should have made this clearer (e.g. mention that instrumental estimates range from 0.54 to 0.64), although its exclusion here does not impact any of our conclusions. As the 0.62 Ishii estimate is closer to the Dom/Lev than Cheng, its inclusion would make the overall range of instrumental estimates seem closer to CMIP5 over this period.

(iv) We used the error calculation presented in Foster and Rahmstorf (2011), which takes account of the autocorrelation in a time series:

OHC_errorbar = 1.65 * OHC_se * sqrt(v)

Where:

OHC_se is the standard error of the trend from OLS,

v = 1 + 2*p1/(1 - q),

p1 = OHC_autocorrelation(2),

q = OHC_autocorrelation(3) / OHC_autocorrelation(2),

and OHC_autocorrelation is the autocorrelation of the time series.

Using this method, we can replicate the error bar provided by AR5, so it should be nearly identical to the AR5 method.

We also get a 0.06 uncertainty range for Cheng (2) if simply using the OLS method. But this does not impact the comparison between new/updated observations and the models.

(v) The four new/updated best estimates are 0.54, 0.62, 0.64, and 0.68 Wm-2. The CMIP5 historical + RCP4.5 model ensemble mean is 0.68 Wm-2. If we focus on instrumental estimates (and exclude Resplandy et al given its large uncertainties), the CMIP5 models are a bit higher than observations during the Argo era, although, as we discuss in the Supplementary Materials and our previous replies, mismatches between projected and observed forcings in the forecast period are expected to give differences over this period.

Nic Lewis comments:

(i) No indication is given in the paper or the supplementary material that “the linear warming rate for the ensemble mean of the CMIP5 models” for the top 2000 m over 2005–2017 referred only to projections based on the RCP4.5 scenario. Although the authors were unlucky to have an editor who was more concerned with presentation than scientific content, they, not the editor, are ultimately responsible for the paper.
The caption to their Fig 1, which states that annual observed OHC changes are consistent with the ensemble means of the CMIP5 models, shows projections based on the RCP2.6 and RCP8.5 scenarios. The relevant 2005–2017 trends for those scenarios are respectively 0.70 and 0.71 Wm-2.

(ii) This supports my point that the 2005–2017 trend that can be estimated from the (revised) Resplandy data is highly uncertain. The fact that the estimated 1991–2016 Resplandy trend is somewhat less uncertain (at 0.68 ± 0.60 Wm-2) does not justify treating it as also being the 2005–2017 trend. The fact is that the information available from the Resplandy data is so imprecise that it adds almost nothing to knowledge about ocean warming trends over 2005–2017.

(iii) Noted. IMO this issue illustrates a problem with publishing papers in Science and similar ‘high profile’ journals.

(iv) I also used the error calculation presented in Foster and Rahmstorf 2011. I estimated the relevant autocorrelations over 2005–2017, since that was the period over which the trend was being estimated. They were insignificantly negative for the Cheng data, so no correction to the OLS standard error estimate of 0.0332 was appropriate. Multiplying this by 1.65 gives a 5–95% uncertainty range of, rounded up, ±0.06. The authors appear to agree with this value. I cannot understand how a correction for autocorrelation could possibly reduce the uncertainty range by a factor of three in these circumstances. The paper’s ±0.02 uncertainty range for the Cheng 0–2000 m 2005–2017 trend seems clearly wrong. (A sketch of this calculation is given after comment (v) below.)

(v) Using data only from the RCP4.5 scenario simulations, giving an ensemble mean lower than that for RCP2.6 only, for RCP8.5 only, and for all three scenarios combined, appears to be unjustified (even if it had been disclosed).
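For concreteness, here is a minimal sketch of the autocorrelation-inflated trend error bar set out in the authors’ formulas under (iv) above, as I read them; the lag indexing and the guard against negative autocorrelation are my interpretation, not code from either side.

import numpy as np

def ohc_trend_error_bar(y):
    """5-95% half-width of an OLS trend, inflated for autocorrelation in the
    style of Foster and Rahmstorf (2011), following the formulas quoted above:
        error_bar = 1.65 * se * sqrt(v),  v = 1 + 2*p1 / (1 - q),
    where p1 is the lag-1 autocorrelation of the residuals and q is the ratio
    of the lag-2 to the lag-1 autocorrelation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)

    # Ordinary least squares standard error of the slope
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((t - t.mean()) ** 2))

    def autocorr(x, lag):
        x = x - x.mean()
        return np.sum(x[:-lag] * x[lag:]) / np.sum(x ** 2)

    p1, p2 = autocorr(resid, 1), autocorr(resid, 2)
    if p1 > 0:
        v = 1.0 + 2.0 * p1 / (1.0 - p2 / p1)
    else:
        # Negative or negligible lag-1 autocorrelation: apply no inflation
        # (and no deflation), per the point made in comment (iv) above.
        v = 1.0
    return 1.65 * se * np.sqrt(max(v, 1.0))

Applied to a 13-point 2005–2017 series whose residuals show essentially zero or negative autocorrelation, this reduces to 1.65 times the OLS standard error, i.e. 1.65 × 0.0332 ≈ 0.055, which rounds up to the ±0.06 discussed in comment (iv).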

This is irrelevant. Outten et al. (2015) also took account of the omission from CMIP5 models of the impacts of recent volcanic eruptions, but found that it was fully offset by the net impact of misestimation of recent changes in other forcings.

Paragraph 8

We agree that the uncertainties in the revised Resplandy estimate are quite large, as we note when including them in the paper over the 2005-2017 Argo period (0.68 ± 0.60 Wm−2). Unfortunately, showing the error bars of all the underlying observational series in the main text figure, as well as those of the climate models, would have made it unreadable, and the fact that the Resplandy estimate does not extend back to 1971 means that it is left out of the “Updated OHC estimates compared with AR5” portion of the figure that does show individual series uncertainties. However, Resplandy et al. does provide a novel approach to estimating ocean heat content, and we think their median estimate was worth showing alongside the three updated instrumental datasets, even if (unlike the other three datasets) Resplandy’s uncertainties are so large that they limit the claims that can be made regarding agreement with climate models.

Paragraph 9

We agree it is generally better to include the full definition of RCP8.5 to avoid any confusion, but in a press release for the general public we had to simplify. This does not impact the message in the published Science article. We note that there is an ongoing debate within the energy modeling and climate science community regarding RCP8.5 and the extent to which it represents a “business as usual” outcome, and that this is shifting with the availability of the SSP scenarios and the inclusion of a 7 W/m^2 forcing scenario in CMIP6. However, references to RCP8.5 as “business as usual” in the published literature are quite common, and the original paper presenting the RCP8.5 scenario (Riahi et al: https://link.springer.com/article/10.1007/s10584-011-0149-y) explicitly refers to it as “a high-emission business as usual scenario”.

Closing statement by the paper’s authors:

Thanks for the critique; an alternative set of values could be used in Fig. 2 in our calculation for 700-2000 m OHC in AR5. But the uncertainties are large in those early years.

We believe that all of our conclusions are still valid:

After significant progress since AR5, the best OHC estimates show stronger warming than estimates featured in AR5 (0~63%), and they are also more consistent with each other.

The models are consistent with the best OHC estimates for the 1971-2010 period. While models are warming slightly faster than most of the observational records during the 2005-2017 period, this is expected because the volcanic aerosol effects are not fully included.

Nic Lewis closing comment:

I thank the authors for their constructive response. I concur that OHC uncertainties are large in the early years of the 1971–2010 period.

None of the authors’ responses refute any of my criticisms concerning factual errors and misleading statements in the paper.

In particular, presenting my method of calculating AR5 0–2000 m warming rates over 1971–2010 as an alternative to their method is like claiming that calculating 4 – 2 = 2 is merely an alternative to calculating 4 – 2 = 1.

“Climate change from human activities mainly results from the energy imbalance in Earth’s climate system caused by rising concentrations of heat-trapping gases. About 93% of the energy imbalance accumulates in the ocean as increased ocean heat content (OHC). The ocean record of this imbalance is much less affected by internal variability and is thus better suited for detecting and attributing human influences than more commonly used surface temperature records. Recent observation-based estimates show rapid warming of Earth’s oceans over the past few decades (see the figure). Recent estimates of observed warming resemble those seen in models, indicating that models reliably project changes in OHC.”

Willis Eschenbach has a post that questions the error bars in the Cheng et al. paper:

“They claim that their error back in 1955 is plus or minus ninety-five zettajoules … and that converts to ± 0.04°C. Four hundredths of one degree celsius … right …”

“Call me crazy, but I do NOT believe that we know the 1955 temperature of the top two kilometres of the ocean to within plus or minus four hundredths of one degree.”

“It gets worse. By the year 2018, they are claiming that the error bar is on the order of plus or minus nine zettajoules … which is three thousandths of one degree C. That’s 0.003°C.”
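The conversion behind these temperature figures can be checked roughly as follows. The ocean area, layer thickness, density and specific heat used here are round-number assumptions, and the true 0-2000 m volume is somewhat smaller than area × 2000 m (not all of the ocean is that deep), so the result is only approximate.

# Rough conversion: zettajoules of heat in the 0-2000 m layer -> mean temperature change.
OCEAN_AREA_M2 = 3.6e14    # global ocean surface area (assumption)
LAYER_DEPTH_M = 2000.0    # layer thickness considered
DENSITY_KG_M3 = 1025.0    # seawater density (assumption)
SPECIFIC_HEAT = 4000.0    # J per kg per K (assumption)

heat_capacity_j_per_k = OCEAN_AREA_M2 * LAYER_DEPTH_M * DENSITY_KG_M3 * SPECIFIC_HEAT

for zettajoules in (95, 9):
    delta_t = zettajoules * 1e21 / heat_capacity_j_per_k
    print(f"{zettajoules:2d} ZJ -> ~{delta_t:.3f} deg C")
# 95 ZJ works out to roughly 0.03-0.04 C and 9 ZJ to roughly 0.003 C, i.e. the
# orders of magnitude quoted above.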

“Before the 1990s, most ocean temperature measurements were above 700 m and therefore, insufficient for an accurate global estimate of ocean warming. We present a method to reconstruct ocean temperature changes with global, full-depth ocean coverage, revealing warming of 436 × 10^21 J since 1871. Our reconstruction, which agrees with other estimates for the well-observed period, demonstrates that the ocean absorbed as much heat during 1921–1946 as during 1990–2015. Since the 1950s, up to one-half of excess heat in the Atlantic Ocean at midlatitudes has come from other regions via circulation-related changes in heat transport.”

“Proxy records show that before the onset of modern anthropogenic warming, globally coherent cooling occurred from the Medieval Warm Period to the Little Ice Age. The long memory of the ocean suggests that these historical surface anomalies are associated with ongoing deep-ocean temperature adjustments. Combining an ocean model with modern and paleoceanographic data leads to a prediction that the deep Pacific is still adjusting to the cooling going into the Little Ice Age, whereas temperature trends in the surface ocean and deep Atlantic reflect modern warming. This prediction is corroborated by temperature changes identified between the HMS Challenger expedition of the 1870s and modern hydrography. The implied heat loss in the deep ocean since 1750 CE offsets one-fourth of the global heat gain in the upper ocean.”

“The ocean has a long memory. When the water in today’s deep Pacific Ocean last saw sunlight, Charlemagne was the Holy Roman Emperor, the Song Dynasty ruled China and Oxford University had just held its very first class. During that time, between the 9th and 12th centuries, the earth’s climate was generally warmer before the cold of the Little Ice Age settled in around the 16th century. Now ocean surface temperatures are back on the rise but the question is, do the deepest parts of the ocean know that?”

“Researchers from the Woods Hole Oceanographic Institution (WHOI) and Harvard University have found that the deep Pacific Ocean lags a few centuries behind in terms of temperature and is still adjusting to the entry into the Little Ice Age. Whereas most of the ocean is responding to modern warming, the deep Pacific may be cooling.”

“These findings imply that variations in surface climate that predate the onset of modern warming still influence how much the climate is heating up today. Previous estimates of how much heat the Earth had absorbed during the last century assumed an ocean that started out in equilibrium at the beginning of the Industrial Revolution. But Gebbie and Huybers estimate that the deep Pacific cooling trend leads to a downward revision of heat absorbed over the 20th century by about 30 percent.”

(A) Surface temperature time series after adjustment to fit the HMS Challenger observations (OPT-0015), including four major surface regions (colored lines) and the global area-weighted average (black line). (B) Time series of global oceanic heat content anomalies relative to 1750 CE from OPT-0015 as decomposed into upper (cyan, 0 to 700 m), mid-depth (blue, 700 to 2000 m), and deep (black, 2000 m to the bottom) layers. Heat content anomalies calculated from an equilibrium simulation initialized at 1750 (EQ-1750, dashed lines) diverge from the OPT-0015 solution in deeper layers. (C) Similar to (B) but for the Pacific. Heat content anomaly is in units of zettajoules (1 ZJ = 10^21 J).

From the paper’s Conclusions:

“More generally, OPT-0015 indicates that the upper 2000 m of the ocean has been gaining heat since the 1700s, but that one-fourth of this heat uptake was mined from the deeper ocean. This upper-lower distinction is most pronounced in the Pacific since 1750, where cooling below 2000 m offsets more than one-third of the heat gain above 2000 m.”

“The implications of the deep Pacific being in disequilibrium become more apparent when compared to a counterfactual scenario where the ocean is fully equilibrated with surface conditions in 1750 CE. That the deep Pacific gains heat in this scenario, referred to as EQ-1750, confirms that heat loss in OPT-0015 results from the cooling associated with entry into the Little Ice Age. Moreover, the EQ-1750 scenario leads to 85% greater global ocean heat uptake since 1750 because of excess warming below 700 m. It follows that historical model simulations are biased toward overestimating ocean heat uptake when initialized at equilibrium during the Little Ice Age, although additional biases are also likely to be present. Finally, we note that OPT-0015 indicates that ocean heat content was larger during the Medieval Warm Period than at present, not because surface temperature was greater, but because the deep ocean had a longer time to adjust to surface anomalies. Over multicentennial time scales, changes in upper and deep ocean heat content have similar ranges, underscoring how the deep ocean ultimately plays a leading role in the planetary heat budget.”

In response to my tweet of the Gebbie and Huybers paper, I received the link to the following paper that further addresses the dynamics of the Pacific Ocean heat content:

“Observed increases in ocean heat content (OHC) and temperature are robust indicators of global warming during the past several decades. We used high-resolution proxy records from sediment cores to extend these observations in the Pacific 10,000 years beyond the instrumental record. We show that water masses linked to North Pacific and Antarctic intermediate waters were warmer by 2.1 ± 0.4°C and 1.5 ± 0.4°C, respectively, during the middle Holocene Thermal Maximum than over the past century. Both water masses were ~0.9°C warmer during the Medieval Warm period than during the Little Ice Age and ~0.65° warmer than in recent decades. Although documented changes in global surface temperatures during the Holocene and Common era are relatively small, the concomitant changes in OHC are large.”

“The ocean dominates the planetary heat budget and takes thousands of years to equilibrate to perturbed surface conditions, yet those long time scales are poorly understood. Here we analyze the ocean response over a range of forcing levels and time scales in a climate model of intermediate complexity and in the CMIP5 model suite. We show that on century to millennia time scales the response time scales, regions of anomalous ocean heat storage, and global thermal expansion depend nonlinearly on the forcing level and surface warming. As a consequence, it is problematic to deduce long-term from short-term heat uptake or scale the heat uptake patterns between scenarios. These results also question simple methods to estimate long-term sea level rise from surface temperatures, and the use of deep sea proxies to represent surface temperature changes in past climate.”

“In summary, although for subcentennial time scales and low forcing levels the linear relationship between thermal expansion and surface temperature anomaly seems to hold, our analysis suggests that we do not properly understand the centennial to millennia ocean warming patterns, mainly due to a limited understanding of circulation and mixing changes.”

“CERA-20C is a coupled reanalysis of the twentieth century which aims to reconstruct the past weather and climate of the Earth system including the atmosphere, ocean, land, ocean waves, and sea ice. This reanalysis is based on the CERA coupled atmosphere-ocean assimilation system developed at ECMWF. CERA-20C provides a 10 member ensemble of reanalyses to account for errors in the observational record as well as model error. It benefited from the prior experience of the retrospective atmospheric analysis ERA-20C.”

5.2. Ocean Heat Content

“In CERA-20C, time series of heat content show discontinuities between streams resulting from the model drift from its initial state (Figure 10). The model drift reflects the fact that the initial conditions from ERA-20C and ORA-20C used to initialize the different production streams are inconsistent with the coupled model’s natural state. The origin of the drift remains unknown so far. The complexity of the system makes it very difficult to point toward a single explanation and this question remains open to further investigations. In the early twentieth century, when the uncertainty in the state of the ocean is high and the ocean model is poorly constrained by observations, the ocean component of CERA-20C drifts toward its preferred state. As the observing system grows, the uncertainty and the drift are reduced. The relatively well-observed upper ocean adjusts faster than the ocean interior, where the timescales of ocean processes are particularly slow and the observational constraints are very small. Further work is needed to understand and reduce the model drift so that the initial conditions and the ocean model behavior are more realistic in poorly observed periods and areas.”

Figure 10. Time series of the global average ocean heat content in the CERA-20C ensemble for the (top left) upper 300 m, (top right) the upper 700 m, and (bottom left) the entire water column. The solid lines are the ensemble mean and the shading shows the ensemble standard deviation.

This figure shows that the ocean heat content for the upper 300 m reached values during the period 1935–1955 that exceed any value reached during the period 2000–2010.

Wunsch (2018) identified lower bounds on uncertainties in ocean temperature trends for the period 1994-2013. The trend in integrated ocean temperature was estimated by Wunsch to be 0.011 ± 0.001 °C/decade (note: this rate of warming is much less than the surface warming, owing to the large volume of ocean water). This corresponds to a 20-year average ocean heating rate of 0.48 ± 0.1 W/m2, of which 0.1 W/m2 arises from the geothermal forcing. I have rarely seen geothermal forcing (e.g. underwater volcanoes) mentioned as a source of ocean warming – the numbers cited by Wunsch reflect nearly a 20% contribution by geothermal forcing to overall global ocean warming over the past two decades.
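As a rough cross-check of how a full-depth temperature trend translates into a heating rate: the ocean mass, specific heat and normalizing areas below are round-number assumptions of mine, so this reproduces only the ballpark of the Wunsch figures, not his actual calculation.

# Rough check: full-depth ocean warming of ~0.011 C/decade -> global heating rate.
OCEAN_MASS_KG = 1.4e21       # total ocean mass (assumption)
SPECIFIC_HEAT = 4000.0       # J per kg per K (assumption)
EARTH_AREA_M2 = 5.1e14       # Earth's surface area (assumption)
OCEAN_AREA_M2 = 3.6e14       # ocean surface area (assumption)
SECONDS_PER_DECADE = 3.156e8

trend_c_per_decade = 0.011   # full-depth trend quoted above

power_w = OCEAN_MASS_KG * SPECIFIC_HEAT * trend_c_per_decade / SECONDS_PER_DECADE
print(f"heating rate: {power_w / 1e12:.0f} TW")
print(f"per unit Earth surface: {power_w / EARTH_AREA_M2:.2f} W/m2")
print(f"per unit ocean surface: {power_w / OCEAN_AREA_M2:.2f} W/m2")
# ~0.4-0.5 W/m2 depending on the normalizing area, the same ballpark as the
# 0.48 +/- 0.1 W/m2 heating rate quoted above; a geothermal contribution of
# 0.1 W/m2 is then roughly a fifth of the total.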

JC reflections

After reading all of these papers, I would have to conclude that if the CMIP5 historical simulations are matching the ‘observations’ of ocean heat content, then they are getting the ‘right’ answer for the wrong reasons. Notwithstanding the Cheng et al. paper, the ‘right’ answer (in terms of the magnitude of the OHC increase) is still highly uncertain.

The most striking findings from these papers are:

the oceans appear to have absorbed as much heat in the early 20th century as in recent decades (stay tuned for a forthcoming blog post on the early 20th century warming)

historical model simulations are biased toward overestimating ocean heat uptake when initialized at equilibrium during the Little Ice Age

the implied heat loss in the deep ocean since 1750 offsets one-fourth of the global heat gain in the upper ocean.

While each of these papers mentions error bars or uncertainty, in all but the Cheng et al. paper significant structural uncertainties in the method are discussed. In terms of uncertainties, these papers illustrate numerous different methods of estimating 20th century ocean heat content. A much more careful assessment needs to be done than was done by Cheng et al., one that includes these new estimates and extends over a longer period of time (back to 1900), in order to understand the early 20th century warming.

In an article about the Cheng et al. paper at Inside Climate News, Gavin Schmidt made the following statement:

“The biggest takeaway is that these are things that we predicted as a community 30 years ago,” Schmidt said. “And as we’ve understood the system more and as our data has become more refined and our methodologies more complete, what we’re finding is that, yes, we did know what we were talking about 30 years ago, and we still know what we’re talking about now.”

Sometimes I think we knew more of what we were talking about 30 years ago (circa the time of the IPCC FAR, in 1990) than we do now: “it aint what you don’t know that gets you in trouble. It’s what you know for sure that just aint so.”

The NASA GISS crowd (including Gavin) is addicted to the ‘CO2 as climate control knob’ idea. I have argued that CO2 is NOT a climate control knob on sub millennial time scales, owing to the long time scales of the deep ocean circulations.

A talking point for ‘skeptics’ has been ‘the warming is caused by coming out of the Little Ice Age.’ The control knob aficionados then respond ‘but what’s the forcing.’ No forcing necessary; just the deep ocean circulation doing its job. Yes, additional CO2 will result in warmer surface temperatures, but arguing that 100% (or more) of the warming since 1950 is caused by AGW completely neglects what is going on in the oceans.

Stay tuned for a forthcoming blog post on the early 20th century global warming.

“The Cold Transit of Southern Ocean Upwelling”. By projecting Southern Ocean water masses into T/S space we show that the conversion of deep to intermediate water relies on the seasonal cycle of air-ice-sea fluxes and mixing. [link]