AT WHAT COST? EXAMINING THE SOCIAL COST OF CARBON

WRITTEN STATEMENT OF PATRICK J. MICHAELS, DIRECTOR, CENTER FOR THE STUDY OF SCIENCE, CATO INSTITUTE, WASHINGTON, DC

HEARING ON

AT WHAT COST? EXAMINING THE SOCIAL COST OF CARBON

BEFORE THE U.S. HOUSE OF REPRESENTATIVES COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY

SUBCOMMITTEE ON ENVIRONMENT, SUBCOMMITTEE ON OVERSIGHT

FEBRUARY 28, 2017

My testimony concerns the selective science that underlies the existing federal determination of the Social Cost of Carbon (SCC) and how a more inclusive and considered process would have resulted in a lower value.

Back in 2015, the federal government’s Interagency Working Group (IWG) on the Social Cost of Carbon released a report in response to public comments on the IWG’s determination of the social cost of carbon that had been solicited by the Office of Management and Budget in November 2013. Of the 140 unique sets of substantive comments received (including a set of my own), the IWG adopted none. And apart from some minor updates to its discussion of uncertainty, the IWG, in its most recent August 2016 report, retained the same, now obsolete, methodologies that were used in its initial 2010 SCC determination.

Here, I address why this decision was based on a set of flimsy, internally inconsistent excuses and amounts to a continuation of the IWG’s exclusion of the most relevant science—an exclusion which assures that low, or even negative values of the social cost of carbon (which would imply a net benefit of increased atmospheric carbon dioxide levels), do not find their way into cost/benefit analyses of proposed federal actions. If, in fact, the social cost of carbon were near zero, it would eliminate the justification for any federal action (greenhouse gas emissions regulations, ethanol mandates, miles per gallon standards, solar/wind subsidies, DoE efficiency regulations, etc.) geared towards reducing carbon dioxide emissions.

Equilibrium Climate Sensitivity

In May 2013, the Interagency Working Group produced an updated SCC value by incorporating revisions to the three underlying Integrated Assessment Models (IAMs) used by the IWG in its initial 2010 SCC determination. But at that time, the IWG did not update the equilibrium climate sensitivity (ECS) employed in the IAMs. It failed to do so despite the publication, since January 1, 2011, of at least 16 new studies comprising 32 separate experiments (involving more than 50 researchers) examining the ECS, each lowering the best estimate and tightening the error distribution about that estimate. Instead, the IWG wrote in its 2013 report: “It does not revisit other interagency modeling decisions (e.g., with regard to the discount rate, reference case socioeconomic and emission scenarios, or equilibrium climate sensitivity).”

This decision was reaffirmed by the IWG in July 2015 and again in its most recent August 2016 report. But, through its reaffirmation, the IWG has again refused to give credence to and recognize the importance of what is now becoming mainstream science—that the most likely value of the equilibrium climate sensitivity is lower than that used by the IWG and that the estimate is much better constrained. This situation has profound implications for the determination of the SCC and yet continues to be summarily dismissed by the IWG.

The earth’s equilibrium climate sensitivity is defined by the IWG in its 2010 report (hereafter, IWG2010) as “the long-term increase in the annual global-average surface temperature from a doubling of atmospheric CO2 concentration relative to pre-industrial levels (or stabilization at a concentration of approximately 550 parts per million (ppm))” and is recognized as “a key input parameter” for the integrated assessment models used to determine the social cost of carbon.

The IWG2010 report has an entire section (Section III.D) dedicated to describing how an estimate of the equilibrium climate sensitivity and the scientific uncertainties surrounding its actual value are developed and incorporated into the IWG’s analysis. The IWG2010, in fact, developed its own probability density function (pdf) for the ECS and used it in each of the three IAMs, superseding the ECS pdfs used by the original IAM developers. The IWG’s intent was to develop an ECS pdf that most closely matched the description of the ECS given in the Fourth Assessment Report of the United Nations’ Intergovernmental Panel on Climate Change (IPCC), published in 2007.

The functional form adopted by the IWG2010 was a calibrated version of the Roe and Baker (2007) distribution, described in the following table and figure from the IWG2010 report:

The calibrated Roe and Baker functional form used by the IWG2010 is no longer scientifically defensible; nor was it at the time of the publication of the IWG 2013 SCC update, nor at the time of the August 2016 update.

The figure below vividly illustrates this fact, as it compares the best estimate and 90% confidence range of the earth’s ECS as used by the IWG (calibrated Roe and Baker) against findings in the scientific literature published since January 1, 2011.

Whereas the IWG ECS distribution has a median value of 3.0°C and 5th and 95th percentile values of 1.72°C and 7.14°C, respectively, the corresponding values averaged from the recent scientific literature are ~2.0°C (median), ~1.1°C (5th percentile), and ~3.5°C (95th percentile).

These differences will have large and significant impacts on the SCC determination.

CAPTION: The median (indicated by the small vertical line) and 90% confidence range (indicated by the horizontal line with arrowheads) of the climate sensitivity estimate used by the Interagency Working Group on the Social Cost of Carbon (calibrated Roe and Baker, 2007) is indicated by the top black arrowed line. The average of the similar values from 22 different determinations reported in the recent scientific literature is given by the grey arrowed line (second line from the top). The sensitivity estimates from the 32 individual determinations of the ECS reported in new research published after January 1, 2011 are indicated by the colored arrowed lines. The arrows indicate the 5 to 95% confidence bounds for each estimate along with the best estimate (median of each probability density function, or the mean of multiple estimates; colored vertical line). Ring et al. (2012) present four estimates of the climate sensitivity, and the red box encompasses those estimates. Likewise, Bates (2016) presents eight estimates, and the green box encompasses them. Spencer and Braswell (2013) produce a single ECS value best-matched to ocean heat content observations and internal radiative forcing.

In addition to recent studies aimed at directly determining the equilibrium climate sensitivity (included in the chart above), there have been several other major studies which have produced results which qualitatively suggest a climate sensitivity lower than mainstream (e.g. Roe and Baker calibration) estimates. Such studies include new insights on cloud condensation nuclei and cosmic rays (Kirkby et al., 2016), radiative forcing of clouds (Bellouin, 2016; Stevens, 2015), cloud processes (Mauritsen and Stevens, 2015) and the underestimation of terrestrial CO2 uptake (Sun et al., 2014).

The IWG2010 report noted that, concerning the low end of the ECS distribution, its determination reflected a greater degree of certainty that a low ECS value could be excluded than did the IPCC. From the IWG2010 (p. 14):

“Finally, we note the IPCC judgment that the equilibrium climate sensitivity “is very likely larger than 1.5°C.” Although the calibrated Roe & Baker distribution, for which the probability of equilibrium climate sensitivity being greater than 1.5°C is almost 99 percent, is not inconsistent with the IPCC definition of “very likely” as “greater than 90 percent probability,” it reflects a greater degree of certainty about very low values of ECS than was expressed by the IPCC.”

In other words, the IWG used its judgment that the lower bound of the ECS distribution was higher than the IPCC 2007 assessment indicated. However, the collection of recent literature on the ECS shows the IWG’s judgment to be in error. As can be seen in the chart above, the large majority of the findings on the ECS in the recent literature indicate that the lower bound (i.e., 5th percentile) of the ECS distribution is lower than the IPCC 2007 assessment indicated. And the average value of the 5th percentile in the recent literature (~1.1°C) is 0.62°C less than that used by the IWG, a sizeable and important difference that will influence the SCC determination.

In fact, the abundance of literature supporting a lower climate sensitivity was at least partially reflected in the new IPCC assessment report issued in 2013. In that report, the IPCC reported:

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence). The lower temperature limit of the assessed likely range is thus less than the 2°C in the AR4…

Clearly, the IWG’s assessment of the low end of the probability density function that best describes the current level of scientific understanding of the climate sensitivity is incorrect and indefensible.

But even more influential in the SCC determination is the upper bound (i.e., 95th percentile) of the ECS probability distribution.

The IWG2010 notes (p.14) that the calibrated Roe and Baker distribution better reflects the IPCC judgment that “values substantially higher than 4.5°C still cannot be excluded.” The IWG2010 further notes that

“Although the IPCC made no quantitative judgment, the 95th percentile of the calibrated Roe & Baker distribution (7.1 °C) is much closer to the mean and the median (7.2 °C) of the 95th percentiles of 21 previous studies summarized by Newbold and Daigneault (2009). It is also closer to the mean (7.5 °C) and median (7.9 °C) of the nine truncated distributions examined by the IPCC (Hegerl, et al., 2006) than are the 95th percentiles of the three other calibrated distributions (5.2-6.0 °C).”

In other words, the IWG2010 turned to surveys of the scientific literature to determine an appropriate value for the 95th percentile of the ECS distribution. Now, some seven years later, the scientific literature tells a different story.

Instead of a 95th percentile value of 7.14°C, as used by the IWG2010, a survey of the recent scientific literature suggests a value of ~3.5°C—more than 50% lower.
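The “more than 50% lower” figure follows directly from the two values quoted above; a one-line check (numbers as quoted in this testimony):

```python
# Comparison of the 95th-percentile ECS values quoted in the text:
# IWG2010 calibrated Roe & Baker upper bound vs. the recent-literature average.
iwg_p95 = 7.14        # deg C, IWG2010 95th percentile
literature_p95 = 3.5  # deg C, approximate average of post-2011 studies

reduction = (iwg_p95 - literature_p95) / iwg_p95
print(f"Upper bound is {reduction:.0%} lower")  # -> 51%, i.e., "more than 50% lower"
```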

And this is a very significant difference, because the high end of the ECS distribution has a large impact on the SCC determination, a fact frequently commented on by the IWG2010.

For example, from IWG2010 (p.26):

“As previously discussed, low probability, high impact events are incorporated into the SCC values through explicit consideration of their effects in two of the three models as well as the use of a probability density function for equilibrium climate sensitivity. Treating climate sensitivity probabilistically results in more high temperature outcomes, which in turn lead to higher projections of damages. Although FUND does not include catastrophic damages (in contrast to the other two models), its probabilistic treatment of the equilibrium climate sensitivity parameter will directly affect the non-catastrophic damages that are a function of the rate of temperature change.”

And further (p.30):

Uncertainty in extrapolation of damages to high temperatures: The damage functions in these IAMs are typically calibrated by estimating damages at moderate temperature increases (e.g., DICE [Dynamic Integrated Climate and Economy] was calibrated at 2.5 °C) and extrapolated to far higher temperatures by assuming that damages increase as some power of the temperature change. Hence, estimated damages are far more uncertain under more extreme climate change scenarios.

And the entirety of Section V, “A Further Discussion of Catastrophic Impacts and Damage Functions,” of the IWG2010 report describes “tipping points” and “damage functions,” with probabilities assigned to different values of global temperature change. Table 6 of the IWG2010 indicated the probabilities of various tipping points.

The likelihood of occurrence of these low probability, high impact events (“tipping points”) is greatly diminished under the new ECS findings. The average 95th percentile value of the new literature survey is only ~3.5°C, indicating a very low probability of warming reaching the 3-5°C by 2100 indicated in the 3rd column of the above table, and thus a significantly lower probability that such tipping points will be reached. This new information will have a large impact on the final SCC determination using the IWG’s methodology.

The size of this impact has been directly investigated.

In their Comment on the Landmark Legal Foundation Petition for Reconsideration of Final Rule Standards for Standby Mode and Off Mode Microwave Ovens, Dayaratna and Kreutzer (2013) ran the DICE model using the distribution of the ECS described by Otto et al. (2013), a paper in the recent scientific literature with 17 authors, 15 of whom were lead authors of chapters in the Intergovernmental Panel on Climate Change’s Fifth Assessment Report. The most likely value of the ECS reported by Otto et al. (2013) was described as “2.0°C, with a 5–95% confidence interval of 1.2–3.9°C.” Using the Otto et al. (2013) ECS distribution in lieu of the distribution employed by the IWG (2013) dropped the SCC by 42 percent, 41 percent, and 35 percent (for the 2.5%, 3.0%, and 5.0% discount rates, respectively). This is a significant decline.

In subsequent research, Dayaratna and Kreutzer (2014) examined the performance of the FUND (Framework for Uncertainty, Negotiation, and Distribution) model and found that it, too, produced a greatly diminished value for the SCC when run with the Otto et al. (2013) distribution of the equilibrium climate sensitivity. Using the Otto et al. (2013) ECS distribution in lieu of the distribution employed by the IWG (2013) dropped the SCC produced by the FUND model to $11, $6, and $0, compared with the original $30, $17, and $2 (for the 2.5%, 3.0%, and 5.0% discount rates, respectively). Again, this is a significant decline.
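The dollar values above imply declines of roughly two-thirds or more at each discount rate; making that arithmetic explicit (dollar figures as quoted):

```python
# Percentage declines implied by the FUND results quoted above: SCC under the
# IWG ECS distribution vs. under the Otto et al. (2013) distribution ($/ton).
pairs = {"2.5%": (30, 11), "3.0%": (17, 6), "5.0%": (2, 0)}
for rate, (iwg_scc, otto_scc) in pairs.items():
    decline = (iwg_scc - otto_scc) / iwg_scc
    print(f"{rate} discount rate: ${iwg_scc} -> ${otto_scc} ({decline:.0%} decline)")
```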

The Dayaratna and Kreutzer (2014) results using FUND were in line with alternative estimates of the impact of a lower climate sensitivity on the FUND model SCC determination.

Waldhoff et al. (2011) investigated the sensitivity of the FUND model to changes in the ECS. Waldhoff et al. (2011) found that changing the ECS distribution such that the mean of the distribution was lowered from 3.0°C to 2.0°C had the effect of lowering the SCC by 60 percent (from a 2010 SCC estimate of $8/ton of CO2 to $3/ton, in 1995 dollars). While Waldhoff et al. (2011) examined FUND v3.5, the response of the current version (v3.8) of the FUND model should be similar.

Additionally, the developer of the PAGE (Policy Analysis of the Greenhouse Effect) model affirmed that the SCC from the PAGE model, too, drops by 35 percent when the Otto et al. (2013) climate sensitivity distribution is employed (Hope, 2013).

More recently, the FUND and DICE models were run with equilibrium climate sensitivities determined by Lewis and Curry (2014) in an analysis that updated and expanded upon the results of Otto et al. (2013). In Dayaratna et al. (2017), the probability density function (pdf) for the equilibrium climate sensitivity determined from an energy budget model (Lewis and Curry, 2014) was used instead of the calibrated Roe and Baker pdf used by the IWG. In doing so, Dayaratna et al. (2017) report:

“In the DICE model the average SCC falls by 30-50% depending on the discount rate, while in the FUND model the average SCC falls by over 80%. The span of estimates across discount rates also shrinks considerably, implying less sensitivity to this parameter choice…Furthermore the probability of a negative SCC (implying CO2 emissions are a positive externality) jumps dramatically using an empirical ECS distribution.”

These studies make clear that the strong dependence of the social cost of carbon on the distribution of estimates of the equilibrium climate sensitivity (including the median and the upper and lower certainty bounds) requires that periodic updates to the IWG SCC determination include a critical examination of the scientific literature on the equilibrium climate sensitivity, not merely kowtowing to the IPCC assessment. There is no indication that the IWG undertook such an independent examination. What is clear is that the IWG did not alter its probability distribution of the ECS between its 2010, 2013, 2015, and 2016 SCC determinations, despite a large and growing body of scientific literature that substantially alters and better defines the scientific understanding of the earth’s ECS. It is unacceptable that a supposedly “updated” social cost of carbon does not include updates to the science underlying a critical and key aspect of the SCC.

I note that there has been one prominent scientific study in the recent literature which has argued, on the basis of recent observations of lower tropospheric mixing in the tropics, for a rather high climate sensitivity (Sherwood et al., 2014). This research, however, suffers from too narrow a focus. While noting that the climate models which best match the observed vertical mixing behavior of the tropical troposphere tend to be the models with high climate sensitivity estimates, the authors fail to note that these same models are the ones whose projections match worst against observations of the evolution of global temperature during the past several decades.

While Sherwood et al. (2014) prefer models that better match their observations in one variable, the same models actually do worse in the big picture than do models which lack the apparent accuracy in the processes that Sherwood et al. (2014) describe. The result can only mean that there must still be even bigger problems with other model processes which must more than counteract the effects of the processes described by Sherwood et al.

This illustrates the inherent problem with “tuning” climate models to best reproduce a known set of observations: efforts to force climate models to better emulate one set of physical behaviors can degrade their performance on others. Voosen (2016) recently reported on the climate modeling community’s efforts to be more open and transparent about its multitude of (secret) “tuning” procedures. Voosen’s reporting was eye-opening not only in revealing the degree to which climate models are tuned and the significant role that tuning plays in model projections, but also in explaining why modelers have not wanted to be up front about their methods. I reproduce an extended and relevant excerpt here:

At their core, climate models are about energy balance. They divide Earth up into boxes, and then, applying fundamental laws of physics, follow the sun’s energy as it drives phenomena like winds and ocean currents. Their resolution has grown over the years, allowing current models to render Earth in boxes down to 25 kilometers a side. They take weeks of supercomputer time for a full run, simulating how the climate evolves over centuries.

When the models can’t physically resolve certain processes, the parameters take over—though they are still informed by observations. For example, modelers tune for cloud formation based on temperature, atmospheric stability, humidity, and the presence of mountains. Parameters are also used to describe the spread of heat into the deep ocean, the reflectivity of Arctic sea ice, and the way that aerosols, small particles in the atmosphere, reflect or trap sunlight.

It’s impossible to get parameters right on the first try. And so scientists adjust these equations to make sure certain constraints are met, like the total energy entering and leaving the planet, the path of the jet stream, or the formation of low marine clouds off the California coast. Modelers try to restrict their tuning to as few knobs as possible, but it’s never as few as they’d like. It’s an art and a science. “It’s like reshaping an instrument to compensate for bad sound,” Stevens says.

Indeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held, a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modeling center, in Princeton, New Jersey.

For years, climate scientists had been mum in public about their “secret sauce”: What happened in the models stayed in the models. The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models— and, by extension, the reality of human-driven warming. “The community became defensive,” [Bjorn] Stevens [of the Max Planck Institut] says. “It was afraid of talking about things that they thought could be unfairly used against them.” Proprietary concerns also get in the way. For example, the United Kingdom’s Met Office sells weather forecasts driven by its climate model. Disclosing too much about its code could encourage copycats and jeopardize its business.

But modelers have come to realize that disclosure could reveal that some tunings are more deft or realistic than others. It’s also vital for scientists who use the models in specific ways. They want to know whether the model output they value—say, its predictions of Arctic sea ice decline— arises organically or is a consequence of tuning. [Gavin] Schmidt [Head of NASA’s Goddard Institute for Space Studies, which, ironically concentrates on earth’s climate] points out that these models guide regulations like the U.S. Clean Power Plan, and inform U.N. temperature projections and calculations of the social cost of carbon. “This isn’t a technical detail that doesn’t have consequence,” he says. “It has consequence.”

Recently, while preparing for the new model comparisons, MPIM modelers got another chance to demonstrate their commitment to transparency. They knew that the latest version of their model had bugs that meant too much energy was leaking into space. After a year spent plugging holes and fixing it, the modelers ran a test and discovered something disturbing: The model was now overheating. Its climate sensitivity—the amount the world will warm under an immediate doubling of carbon dioxide concentrations from preindustrial levels—had shot up from 3.5°C in the old version to 7°C, an implausibly high jump.

MPIM hadn’t tuned for sensitivity before— it was a point of pride—but they had to get that number down. Thorsten Mauritsen, who helps lead their tuning work, says he tried tinkering with the parameter that controlled how fast fresh air mixes into clouds. Increasing it began to ratchet the sensitivity back down. “The model we produced with 7° was a damn good model,” Mauritsen says. But it was not the team’s best representation of the climate as they knew it.

That climate modelers were worried about being open about their methodologies for fear that “contrarians” would “unfairly” use such procedures against them indicates that the modeling community is more interested in climate policy (which may find support in their model projections) than in climate science (which would welcome criticism aimed at producing a better understanding of the physical processes driving the earth’s climate). Given the degree of “secret sauce” mixed into the models at this point in time, a healthy dose of skepticism regarding the verisimilitude of climate model output is warranted.

But even with all the model tuning that takes place, the model collective is still warming the world much faster than it is actually warming. As shown by Christy (2016, and updates), there is a gross departure of “reality” from model predictions. Christy (2016) noted that “for the global bulk troposphere [roughly the bottom 40,000 feet of the atmosphere], the models overwarm the atmosphere by a factor of about 2.5.” The warming influence of a large and naturally occurring El Niño event has temporarily added a blip to the end of the observational record. But despite this short-term natural warming event, the models collectively still produce about twice as much warming as is found in the real world over the past 38 years. And as the warmth of the recent strong El Niño fades (global surface temperatures have already returned most of the way to pre-El Niño levels; see the figures below), the model/real-world discrepancy will begin to grow once again.

CAPTION: Five-year running mean temperatures predicted by the UN’s climate models, and observed lower atmospheric temperatures from weather balloons and satellites (figure courtesy of John Christy). The last point is a four-year running mean, and the first two points are three- and four-year means, respectively.

CAPTION: Monthly temperature anomalies, January 1997 through December 2016 (surface observations; top) and January 1997 through January 2017 (satellite observations of the mid-troposphere; bottom) show the impact of the strong 2016 El Niño event and the fading warmth since. The surface readings are from the Climate Research Unit at the University of East Anglia, and the satellite readings are from University of Alabama-Huntsville.

Another way to assess model performance is to compare model projections with observed trends in the vertical dimension of the atmosphere. Here again, as shown in the figure below, the models produce grossly more warming than has been observed. This chart, courtesy of the University of Alabama in Huntsville’s Dr. John Christy, focuses on the tropics (20°S to 20°N), the region where climate models project the greatest warming through the depth of the atmosphere. The communal failure of the models is abject.

The characteristics of the vertical profile of temperature are important environmental variables in that it is the vertical temperature distribution that determines atmospheric stability. When the lapse rate—the difference between the lowest layers and higher levels—is large, the atmosphere is unstable. Instability is the principal source for global precipitation. Although models can be (and are) tuned to mimic changes in surface temperatures, the same can’t be done as easily for the vertical profile of temperature changes. As the figure indicates, the air in the middle troposphere is warming far more slowly than has been predicted, even more slowly than the torpid surface warming. Consequently, the difference between the surface and the middle troposphere has become slightly greater, a condition which should produce a very slight increase in average precipitation. On the other hand, the models forecast that the difference between the surface and the middle troposphere should become less, a condition which would add pressure to decrease global precipitation.

The models are therefore making systematic errors in their precipitation projections. That has a dramatic effect on the resultant climate change projections. When the surface is wet, which is what occurs after rain, the sun’s energy is directed toward the evaporation of that moisture rather than to directly heating the surface. In other words, much of what is called “sensible weather” (the kind of weather a person can sense) is determined by the vertical distribution of temperature. If the popular climate models get that wrong (which is what is happening), then all the subsidiary weather may also be incorrectly specified.

CAPTION: Tropical (20°S to 20°N) temperature trends (1979-2016) throughout the vertical atmosphere as projected by climate models (red squares, with uncertainty) and as observed by radiosondes carried aloft by weather balloons (colored circles represent different data compilations). The red line is the model mean and the green line is the observed mean. The trend in the bulk lower atmosphere (middle troposphere) from several different satellite data compilations (colored plus signs, top box) and several reanalysis datasets (colored crosses, top box) is compared with the model projection for the same layer in the box at the top of the figure. (Figure courtesy of John Christy)

These results argue strongly against the reliability of the Sherwood et al. (2014) conclusion and instead provide robust observational evidence that the climate sensitivity has been overestimated by climate models and the IWG alike.

Agricultural Impacts of Carbon Fertilization

Carbon dioxide is known to have a large positive impact on vegetation (e.g., Zhu et al., 2016), with literally thousands of studies in the scientific literature demonstrating that plants (including crops) grow stronger, healthier, and more productive under conditions of increased carbon dioxide concentration. A study (Idso, 2013) reviewed a large collection of such literature as it applies to the world’s 45 most important food crops (making up 95% of the world’s annual agricultural production).

Idso (2013) summarized his findings on the increase in biomass of each crop that results from a 300ppm increase in the concentration of carbon dioxide under which the plants were grown. This table is reproduced below, and shows that the typical growth increase exceeds 30% in most crops, including 8 of the world’s top 10 food crops (the increase was 24% and 14% in the other two).

Idso (2013) found that the increase in the atmospheric concentration of carbon dioxide that took place during the period 1961-2011 was responsible for increasing global agricultural output by 3.2 trillion dollars (in 2004-2006 constant dollars). Projecting the increases forward based on projections of the increase in atmospheric carbon dioxide concentration, Idso (2013) expects carbon dioxide fertilization to increase the value of agricultural output by 9.8 trillion dollars (in 2004-2006 constant dollars) during the 2012-2050 period.

Average percentage increase in biomass of each of the world’s 45 most important food crops under an increase of 300ppm of carbon dioxide.

This is a large positive externality, and one that is insufficiently modeled in the IAMs relied upon by the IWG in determining the SCC.

In fact, only one of the three IAMs used by the IWG includes any substantial impact from carbon dioxide fertilization, and the one that does underestimates the effect several-fold.

The FUND model has a component which calculates the impact on agriculture of carbon dioxide emissions, including not only the impact of temperature and other climate changes, but also the direct impact of carbon dioxide fertilization. The other two IAMs, DICE and PAGE, by and large do not (or do so only minimally; DICE includes the effect to a larger degree than PAGE). Consequently, lacking this large positive externality, the SCC calculated by the DICE and PAGE models is significantly larger than the SCC determined by the FUND model (for example, see Table A5 in the IWG 2013 report).

But even the positive externality from carbon dioxide fertilization as included in the FUND model is too small when compared with the Idso (2013) estimates. FUND (v3.7) determines the degree of crop production increase resulting from rising atmospheric carbon dioxide via a regional fertilization parameter (γr) multiplied by ln(CO2t/275) (taken from Anthoff and Tol, 2013a).

Column 8 in the table below shows the CO2 fertilization parameter (γr) used in FUND for various regions of the world (Anthoff and Tol, 2013b). The average CO2 fertilization effect across the 16 regions of the world is 11.2%. While this number is neither areally weighted nor weighted by the specific crops grown, it is clear that 11.2% is much lower than the average fertilization effect compiled by Idso (2013) for the world’s top 10 food crops (35%). Further, Idso’s fertilization impact is in response to a 300ppm CO2 increase, while the fertilization parameter in the FUND model is multiplied by ln(CO2t/275), which works out to 0.74 for a 300ppm CO2 increase above the 275ppm baseline. This multiplier further reduces the 16-region average to about 8.3% for the CO2 fertilization effect, some 4 times smaller than the magnitude of the fertilization impact identified by Idso (2013).
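The multiplier arithmetic can be reproduced in a few lines; this is a rough check using the rounded 11.2% regional average, so the final percentage may differ slightly in the last digit from the figure cited above:

```python
from math import log

# Rough check of the FUND fertilization arithmetic described in the text.
# Assumes a 300 ppm increase over the model's 275 ppm baseline concentration.
gamma_avg = 11.2                     # %, unweighted average of gamma_r across 16 regions
multiplier = log((275 + 300) / 275)  # ln(CO2_t / 275) with CO2_t = 575 ppm
effective = gamma_avg * multiplier   # effective average fertilization effect, in %

print(f"multiplier: {multiplier:.2f}")           # -> 0.74, as stated in the text
print(f"effective effect: {effective:.1f}%")     # about 8.3% from these rounded inputs
print(f"Idso (2013) ratio: {35 / effective:.1f}x")  # on the order of 4x larger
```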

Although approximately four times too small, the fertilization effect has a large impact on the SCC calculated by the FUND model.

According to Waldhoff et al. (2011), if the CO2 fertilization effect is turned off in the FUND model (v3.5) the SCC increases by 75% from $8/tonCO2 to $14/tonCO2 (in 1995 dollars). In another study, Ackerman and Munitz (2012) find the effective increase in the FUND model to be even larger, with CO2 fertilization producing a positive externality of nearly $15/tonCO2 (in 2007 dollars).

CAPTION: Impact of climate change on agriculture in the FUND model.

Clearly, had the Idso (2013) estimate of the CO2 fertilization impact been used instead of the one in FUND, the resulting positive externality would have been much larger and the resulting net SCC much lower.

This is just one of the three IAMs used by the IWG. Had the more comprehensive CO2 fertilization impacts identified by Idso (2013) been incorporated in all the IAMs, the three-model average SCC used by the IWG would have been greatly lowered, and would likely even have become negative under some IAM/discount rate combinations.

In its 2015 “Response to Comments Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866,” the IWG admits to the disparate ways that CO2 fertilization is included in the three IAMs. Nevertheless, the IWG quickly dismisses this as a problem in that they claim the IAMs were selected “to reflect a reasonable range of modeling choices and approaches that collectively reflect the current literature on the estimation of damages from CO2 emissions.”

This logic is blatantly flawed. Two of the IAMs do not reflect the “current literature” on a key aspect relating to the direct impact of CO2 emissions on agricultural output, and the third only partially so.

CO2 fertilization is a known physical effect of increased carbon dioxide concentrations. Including the results of IAMs that omit known processes with a significant impact on the end product should disqualify those models from contributing to the final result. The inclusion of results known a priori to be wrong can only produce a less accurate answer. Results should be included only when they attempt to represent known processes, not when they leave those processes out entirely.

The justification from the IWG (2015) that “[h]owever, with high confidence the IPCC (2013) stated in its Fifth Assessment Report (AR5) that ‘[b]ased on many studies covering a wide range of regions and crops, negative impacts of climate change on crop yields have been more common than positive ones’” is completely irrelevant, as CO2 fertilization is an impact separate from “climate change.” Further, the IAMs do (explicitly in the case of FUND and DICE, or implicitly in the case of PAGE) include damage functions for the climate change impacts on agriculture. So not only is the IWG justification irrelevant, it is inaccurate as well. The impact of CO2 fertilization on agricultural output, and its effect in lowering the SCC, must be considered.

Additional Climate Model Parameter Misspecifications

In addition to the outdated climate sensitivity distribution and the insufficient handling of the carbon dioxide fertilization effect, misspecifications have also been identified in some of the critical parameters of the underlying box models that drive the pace and shape of future climate evolution in the IAMs.

A recent analysis (Lewis, 2016) finds that the physically-based two-box climate model inherent in the DICE IAM is fit with physically unrealistic ocean characteristics. According to Lewis (2016):

In the DICE 2-box model, the ocean surface layer that is taken to be continuously in equilibrium with the atmosphere is 550 m deep, compared to estimates in the range 50–150 m based on observations and on fitting 2-box models to AOGCM responses. The DICE 2-box model’s deep ocean layer is less than 200 m deep, a fraction of the value in any CMIP5 AOGCM, and is much more weakly coupled to the surface layer. Unsurprisingly, such parameter choices produce a temperature response time profile that differs substantially from those in AOGCMs and in 2-box models with typical parameter values. As a result, DICE significantly overestimates temperatures from the mid-21st century on, and hence overestimates the SCC and optimum carbon tax, compared with 2-box models having the same ECS and TCR but parameter values that produce an AOGCM-like temperature evolution.

When the DICE 2-box model is parameterized with ocean-layer values in line with established estimates, the resulting value of the social cost of carbon is reduced by one-quarter to one-third during the 21st century. Lewis further notes that “The climate response profile in FUND and in PAGE, the other two IAMs used by the US government to assess the SCC, appear to be similarly inappropriate, suggesting that they also overestimate the SCC.”

Ultimately, Lewis (2016) concludes:

It seems rather surprising that all three of the main IAMs have climate response functions with inappropriate, physically unrealistic, time profiles. In any event, it is worrying that governments and their scientific and economic advisers have used these IAMs and, despite considering what [equilibrium climate sensitivity] and/or [transient climate sensitivity] values or probability distributions thereof to use, have apparently not checked whether the time profiles of the resulting climate responses were reasonable.
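Lewis's point can be illustrated with a minimal, generic two-box energy-balance model. This is a sketch, not the DICE code: the feedback, coupling, and layer-depth values below are illustrative assumptions. It shows the qualitative effect Lewis describes: a very deep "surface" layer (550 m) responds far more slowly to the same forcing than an observationally supported 100 m mixed layer.

```python
# Minimal two-box energy-balance model (illustrative sketch, NOT DICE).
# Box 1: ocean surface/mixed layer; Box 2: deep ocean.
def surface_warming(h_mix, h_deep=1000.0, years=50.0):
    sec_per_yr = 3.156e7
    cw = 1000.0 * 4186.0 / sec_per_yr   # seawater heat capacity, W*yr/m^3/K
    c1, c2 = cw * h_mix, cw * h_deep    # layer heat capacities (W*yr/m^2/K)
    lam = 1.23                          # climate feedback (W/m^2/K, ECS ~3 K)
    gamma = 0.7                         # surface-deep coupling (W/m^2/K)
    forcing = 3.7                       # abrupt 2xCO2 forcing (W/m^2)
    t1 = t2 = 0.0
    dt = 0.05                           # time step (years)
    for _ in range(int(years / dt)):
        flux = gamma * (t1 - t2)        # heat drawn down into the deep ocean
        t1 += dt * (forcing - lam * t1 - flux) / c1
        t2 += dt * flux / c2
    return t1

shallow = surface_warming(100.0)   # mixed-layer depth supported by observations
deep = surface_warming(550.0)      # DICE-like deep surface layer per Lewis (2016)
print(shallow > deep)              # True: the shallow-layer model warms faster
```

With the same equilibrium sensitivity in both runs, only the time profile differs, which is exactly the degree of freedom Lewis argues DICE gets wrong.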

Sea Level Rise

The sea level rise module in the DICE model used by the IWG2013/2015/2016 produces future sea level rise values that far exceed mainstream projections and are unsupported by the best available science. The sea level rise projections from more than half of the scenarios (IMAGE, MERGE, MiniCAM) exceed even the highest end of the projected sea level rise by the year 2300 as reported in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (see figure).

CAPTION: Projections of sea level rise from the DICE model (the arithmetic average of the 10,000 Monte Carlo runs from each scenario) for the five scenarios examined by the IWG2013, compared with the range of sea level rise projections for the year 2300 given in the IPCC AR5 (see AR5 Table 13.8). (DICE data provided by Kevin Dayaratna and David Kreutzer of the Heritage Foundation.)

How the sea level rise module in DICE was constructed is inaccurately characterized by the IWG (and misleads the reader). The IWG report describes the development of the DICE sea level rise scenario as:

“The parameters of the four components of the SLR module are calibrated to match consensus results from the IPCC’s Fourth Assessment Report (AR4).6”

However, in IWG footnote “6” the methodology is described this way (Nordhaus, 2010):

“The methodology of the modeling is to use the estimates in the IPCC Fourth Assessment Report (AR4).”

“Using estimates” and “calibrating” are two completely different things. Calibration implies that the sea level rise estimates produced by the DICE sea level module behave similarly to the IPCC sea level rise projections, and it instills a sense of confidence in the casual reader that the DICE projections are in accordance with the IPCC projections. However, this is not the case. Consequently, the reader is misled.

In fact, the DICE estimates are much higher than the IPCC estimates. This is even recognized by the DICE developers. From the same reference as above:

“The RICE [DICE] model projection is in the middle of the pack of alternative specifications of the different Rahmstorf specifications. Table 1 shows the RICE, base Rahmstorf, and average Rahmstorf. Note that in all cases, these are significantly above the IPCC projections in AR4.” [emphasis added]

That the DICE sea level rise projections are far above mainstream estimates can be further evidenced by comparing them with the results produced by the IWG-accepted MAGICC modeling tool (in part developed by the EPA and available from http://www.cgd.ucar.edu/cas/wigley/magicc/).

Using the MESSAGE scenario as an example, the sea level rise estimate produced by MAGICC for the year 2300 is 1.28 meters—a value that is less than 40% of the average value of 3.32 meters produced by the DICE model when running the same scenario (see figure below).
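The "less than 40%" figure follows directly from the two numbers quoted above:

```python
# MESSAGE scenario, year 2300: MAGICC vs DICE sea level rise (meters).
magicc_slr = 1.28    # MAGICC estimate for 2300
dice_slr = 3.32      # DICE 10,000-run Monte Carlo average
ratio = magicc_slr / dice_slr
print(round(100 * ratio, 1))   # 38.6 -- i.e., less than 40%
```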

The justification given for the high sea level rise projections in the DICE model (Nordhaus, 2010) is that they well-match the results of a “semi-empirical” methodology employed by Rahmstorf (2007) and Vermeer and Rahmstorf (2009).

However, subsequent science has proven the “semi-empirical” approach to projecting future sea level rise unreliable. For example, Gregory et al. (2012) examined the assumptions underlying the “semi-empirical” methods and found them to be unsubstantiated. Gregory et al. (2012) specifically refer to the results of Rahmstorf (2007) and Vermeer and Rahmstorf (2009):

The implication of our closure of the [global mean sea level rise, GMSLR] budget is that a relationship between global climate change and the rate of GMSLR is weak or absent in the past. The lack of a strong relationship is consistent with the evidence from the tide-gauge datasets, whose authors find acceleration of GMSLR during the 20th century to be either insignificant or small. It also calls into question the basis of the semi-empirical methods for projecting GMSLR, which depend on calibrating a relationship between global climate change or radiative forcing and the rate of GMSLR from observational data (Rahmstorf, 2007; Vermeer and Rahmstorf, 2009; Jevrejeva et al., 2010).

In light of these findings, the justification for the very high sea level rise projections (generally exceeding those of the IPCC AR5 and far greater than the IWG-accepted MAGICC results) produced by the DICE model is called into question and can no longer be substantiated.

Given the strong relationship between sea level rise and future damage built into the DICE model, there can be no doubt that the SCC estimates from the DICE model are higher than the best science would allow and consequently, should not be accepted by the IWG as a reliable estimate of the social cost of carbon.

And here again, the IWG (2015) admits that these sea level rise estimates are an outlier on the high end, yet retains them in its analysis by claiming that it was interested in representing a “range” of possible outcomes. But even the IWG (2015) admits that the IPCC AR5 assigned “a low confidence in projections based on such [semi-empirical] methods.” It is internally inconsistent to claim the IPCC as an authority for limiting the range of possibilities explored by the IAMs (as the IWG did in the case of equilibrium climate sensitivity) and then go outside the IPCC to justify including a wildly high estimate of sea level rise. Such inconsistencies characterize the IWG response to comments and weaken confidence in it.

I did not investigate the sea level rise projections from the FUND or the PAGE model, but suggest that such an analysis must be carried out prior to extending any confidence in the values of the SCC resulting from those models—confidence that, as demonstrated, cannot be assigned to the DICE SCC determinations.

Conclusion

The social cost of carbon as determined by the Interagency Working Group in its August 2016 Technical Support Document (updated from IWG reports of February 2010, November 2013, and July 2015) is unsupported by the robust scientific literature, fraught with uncertainty, illogical, and thus completely unsuitable and inappropriate for federal rulemaking. Had the IWG conducted a better-reasoned and more inclusive review of the current scientific literature, the social cost of carbon estimates would have been considerably reduced, with a value likely approaching zero. Such a low social cost of carbon would obviate the arguments behind the push for federal greenhouse gas regulations.

Intergovernmental Panel on Climate Change, 2007. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Solomon, S., et al. (eds). Cambridge University Press, Cambridge, 996pp.

Intergovernmental Panel on Climate Change, 2013. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Final Draft Accepted in the 12th Session of Working Group I and the 36th Session of the IPCC on 26 September 2013 in Stockholm, Sweden.

Patrick J. Michaels is the director of the Center for the Study of Science at the Cato Institute. Michaels is a past president of the American Association of State Climatologists and was program chair for the Committee on Applied Climatology of the American Meteorological Society. He was a research professor of Environmental Sciences at University of Virginia for 30 years, and Virginia State Climatologist for 27 years. Michaels was a contributing author and is a reviewer of the United Nations Intergovernmental Panel on Climate Change, which was awarded the Nobel Peace Prize in 2007.

His writing has been published in major scientific journals such as Geophysical Research Letters, Journal of Geophysical Research, Climatic Change, Nature, and Science, as well as in popular serials worldwide. He is the author or editor of seven books on climate and its impact, and he was an author of the climate “paper of the year” awarded by the Association of American Geographers in 2004. He has appeared in most major media worldwide.

Michaels holds AB and SM degrees in biological sciences and plant ecology from the University of Chicago, and he received a PhD in ecological climatology from the University of Wisconsin at Madison in 1979.

I am Patrick J. Michaels, Director of the Center for the Study of Science at the Cato Institute, a nonprofit, non-partisan public policy research institute located here in Washington DC, and Cato is my sole source of employment income. Before I begin my testimony, I would like to make clear that my comments are solely my own and do not represent any official position of the Cato Institute.

158 thoughts on “AT WHAT COST? EXAMINING THE SOCIAL COST OF CARBON”

In theory, the price of a gallon of gas or other fossil fuel is equal to the social benefit buyers receive from that gas. The social cost of carbon is an attempt to determine the cost of negative externalities caused by the CO2 released into the environment when that gas is burned: costs the buyer of the gas normally doesn’t pay.

Unfortunately, there probably isn’t an unambiguous way to determine a social cost of carbon that most people can understand and trust. Which means it is a question for Congress, not unelected bureaucrats and carefully selected outside experts.

The author has raised the fault of focusing on “the cost of negative externalities”, just as the IPCC focused on “the negative influence of human activity on climate”. In both cases, a proper cost/benefit analysis might well show that additional CO2 and whatever climate results are both acceptable and beneficial.

“…costs the buyer of the gas normally doesn’t pay.” Who in the US is not a buyer of gas? This might include some urban dwellers who:
· rely on diesel- or CNG-powered buses;
· have goods delivered to local stores and then to their homes by diesel-powered trucks;
· have mail delivered by… well, locally, gas-powered vehicles, but inter-city by diesel-powered trucks;
· rely on electric vehicles for personal transportation;
· heat their house/apartment with electricity.
Others who may not use gas include Amish communities and individualists who
But diesel and CNG still release CO2. Few communities get their electricity solely from nuclear and hydro, and none purely from wind and solar. Backups to wind and solar are typically gas, coal, and diesel.
Which would seem to mean that the buyers are paying the cost of CO2 released into the air, as we are all buyers. The costs are passed on to the buyers, just not at the pump. Is anyone exempt from 1) the benefits of gas, or 2) the negative externalities of gas?

I can only conclude the IWG already had the answer they wanted and worked backwards to it. Without a high value for the SCC, affected regulations would not look justifiable in the cost/benefit analysis. While reading the preambles and technical support documents, I have found EPA likes to paper over difficulties with unscientific studies and unverifiable assumptions.

The sad part is those people who made it got government funding to produce science fiction.

No. I figure those people got their funding to tell government what the government wanted to hear. The IWG is not listening to alternative ideas (140 comments, and not one of them taken on board??). They want to tax the air we breathe and want the best price they can get for it. Nothing else matters. Science? shmience!

I thought this was going to address the “Social Cost of Carbon”, but then I got a long screed about carbon dioxide. Imagine my surprise. You’ve lost before you even start when you use the bastardized language of your opponents.

The author didn’t choose the language. He is critiquing reports by the federal government’s Interagency Working Group (IWG) on the Social Cost of Carbon. How can he discuss their claims without using their language? No one would know what he was talking about.

Well then, Louis, as I think Dougmanxx is implying, an explanatory note is vital, preferably up front. For goodness’ sake, the widespread and downright deceit (or pig ignorance) of using “carbon” when referring to “carbon dioxide” is bad enough without having it reinforced, if unwittingly, in such a scholarly presentation. Don’t cravenly play the alarmists’ grubby game, I say. Slap them around by showing up their deceit or pig ignorance.

The title of the HEARING was “At what cost? Examining the social cost of carbon”. The hearing committee chooses the name. They also choose who they want to hear from. Patrick J Michaels was one of those people. His written statement starts with :

“My testimony concerns the selective science that underlies the existing federal determination of the Social Cost of Carbon and how a more inclusive and considered process would have resulted in a lower value for the social cost of carbon.”

Since I can easily “imagine your surprise” upon being informed that you appear to be uninformed, I’m sure you’ll readily apologize for your hasty, and incorrect, judgements. 🙂

Holding your elected Representatives to task is also part of being a good Citizen. Because this is a wholly political issue, my point is more valid than a discussion of carbon, carbon dioxide or anything else. Dr Michaels has already lost, because he’s conceded the language to his opponents. You get no apologies from me for speaking truth. Regards,

I certainly hope we get SOME beneficial warming from increased CO2 and don’t have to settle for just increased crop and forest yields. My guess is 1C per quadrupling. But I’m hoping for 2C per quadrupling so as to put off the imminent end of the current interglacial a bit.

This entire thing about the evils of carbon, are getting totally out of hand. And it amazes me as to just how ignorant so many people are about this wonderful element. Little known is the major revolution that is just around the corner, concerning the development of carbon to its full potential. Graphene and carbon nanotubes are just the tip of the iceberg. And add to this the fact that carbon is at least ten times lighter than steel, and more than ten times stronger.

Here’s what I see in the soon to be future. Carbon nanotube mass production will make cables that will carry things back and forth from space: space elevators. Graphene will become a cheap means of filtering sea water, producing pure potable water. But even more, carbon will become the main building material for skyscrapers, buildings, ships, space habitats, etc. Anything that is so much lighter and stronger than steel will completely replace it as structural foundation.

Our entire society will change, and all because of Carbon. Imagine that. We have already had an industrial revolution, but perhaps a new revolution will be following soon: a Carbon Revolution. And the Kooks want carbon labeled as a pollutant? For Heaven’s sake, what ignorance and folly.

Well, carbon, i.e., soot, is pollution and is a real problem. But the kooks do tend to refer to CO2 as “carbon” because carbon is black, like coal, and perceived as dirty. Plucking emotional heartstrings there. Here in Australia a few years ago there was an ad campaign showing electrically powered household items such as TVs and washing machines and the like emitting little black balloons containing 50 g of greenhouse gas, squeezing from the devices and floating up into the sky.

I haven’t seen it aired for some time; I think the Gov’t realised people weren’t buying the propaganda.

If they really wanted to be fair, they should have a similar expression for nature’s 97% contribution for the whole year released all at once. *SMH* (Of course, when they talk about emissions, they make sure it sounds like we’re the only source of CO2 emissions. Alarmists rarely mention nature’s yearly contribution, and when they do compare ours to nature’s, they compound ours (going back to when we started using fossil fuels) but they never compound nature’s. That’s not science.)

Patrick, as I recall, ultra green ignoramus, Sydney Lord Mayor Clover Moore, gave us that garbage.
Remember this piece of unmitigated drivel, also? https://youtu.be/eprah6RNab4
Andrew Bolt gave the outrageously fraudulent work a real serve.

“Count the lies in this ad in which Michael Caton and Cate Blanchett tell us to say “yes” to Julia Gillard’s carbon dioxide tax:
– No, our skies aren’t black with soot.
– No, this tax won’t clear the skies.
– No, this isn’t about carbon but carbon dioxide.
– No, that power station isn’t in Australia, but Britain.
– No, it won’t be closed by Gillard’s tax, not least because it’s closed already.
– No, not one power station in this country pumps out black soot like this.
How can such a lying ad be shown by people demanding we “respect the science”?”

But even the positive externality that results from carbon dioxide fertilization as included in the FUND model is too small when compared with the Idso (2013) estimates.

Sorry, but Idso’s numbers are not “estimates”. They are experimental values.

Roughly speaking, the Idsos grow plants at different CO2 concentrations (350 ppm, 700 ppm) and measure the weight gain under those different conditions. Plants grow faster at 700 ppm of CO2 than at 350 ppm. They get an empirical, experimental value for CO2 fertilization, not an estimate: a real, measured value. http://co2science.org/

Besides, if the FUND model uses the logarithmic formula depicted above, then the model is wrong. CO2 fixation is an enzymatic reaction performed by RuBisCO, and enzymatic reactions follow saturation kinetics, also called Michaelis-Menten kinetics.
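The contrast this comment draws can be sketched numerically. The Vmax and Km values below are illustrative placeholders, not fitted RuBisCO constants, and the FUND-style curve reuses the 11.2% average parameter from the article; the point is only the difference in shape:

```python
import math

def fund_log_form(co2_ppm, gamma=11.2, baseline=275.0):
    """FUND-style logarithmic fertilization response (%); grows without bound."""
    return gamma * math.log(co2_ppm / baseline)

def michaelis_menten(co2_ppm, vmax=50.0, km=400.0):
    """Saturating enzyme-kinetics response; never exceeds vmax."""
    return vmax * co2_ppm / (km + co2_ppm)

# The logarithmic form keeps growing; Michaelis-Menten saturates toward vmax.
for co2 in (275, 575, 1100, 10000):
    print(co2, round(fund_log_form(co2), 1), round(michaelis_menten(co2), 1))
```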

Good luck with that, co2islife. I have been in continual correspondence with the idiots running the Province of Alberta, Canada, for almost a year, and unfortunately they have inhaled way too much of the Kool-Aid and simply cannot fathom very simple constructs such as this.

But I will continue to fight the good fight. I do wish you all of the luck in the world, however!

Yes, thinking otherwise requires one to completely take for granted modern industrialized society and all of the benefits it has brought to the masses.

The same useful idiots clamoring on about “climate change” as if it is a human-induced catastrophe are busily enjoying the clothing they wear, racking up lots of time on their cell phones and computers, driving their cars, flying to vacation getaways (or “climate conferences,” lol), enjoying the comfort of their heated and air-conditioned homes, and enjoying the fruits of CO2 fertilization in terms of the food on their tables. In short, they are hypocrites taking everything they have completely for granted, when ALL of it is essentially thanks to fossil fuels.

AFAIK, until all the Climate Fascists are chasing down their dinner with a pointy stick that they sharpened with a rock, living in a cave, and wearing nothing more than the skins of what they managed to catch and kill with their pointy sticks, they can all just STFU.

Social cost of carbon is an oxymoron; it does not exist. A social cost can only exist if the substance causing it can harm society. Carbon dioxide, or any other form of carbon, demonstrably does not harm society. It is in effect beneficial because it acts as a fertilizer for our food crops. The association between global warming and carbon dioxide is false because it is based upon false computer predictions of warming in the twenty-first century. There is no warming where computers predict it for this century. When I say prediction I mean the average of a bundle of illegitimate computer runs that are presented to us as a “prediction.” It is false and immediately disqualifies the method used. It and the results presented must be discarded; they must not be used as part of any scientific observation. It follows that the social cost of carbon is a fantasy based upon incompetent computer analysis and must be removed from any scientific description of nature.

“Another 2C” implies to some that we have already had an increase of 2C. “An additional 2C” or, better yet, “a 2C increase from today” is clearer. Sorry to nitpick, but temperatures have only risen 1C +/- 0.2C since about 1880, from what I recall.

Back in the early 1960’s my wife who was then living in Johannesburg, South Africa, tells me they could get a container of foul smelling liquid from the local coal fired power station. This liquid was diluted with water and spread over the lawn. She remembers it produced wonderful green lawns. She can’t remember what the stuff was called but it was definitely a fertiliser from a coal fired power station.

That would have been ammonia liquor, a source of nitrogen and some sulphur. Too much applied meant mowing the lawn more frequently!
Incidentally, the cleaning up of electricity generating station and blast furnace exhaust gas has reduced the amount of sulphur in rainfall to the extent that farmers now have to use fertilisers with added sulphur in order to maintain crop yields.

In their first report, the IPCC published a wide range of guesses as to the true value of the climate sensitivity of CO2. There is probably nothing more important to the IPCC’s purpose than making a very accurate determination of the true value of the climate sensitivity of CO2. In their last report, the IPCC published the exact same wide range of guesses, so after more than two decades of study the IPCC has learned nothing that would allow them to narrow the range of their guesses one iota. In the meantime, other researchers have made estimates of the climate sensitivity of CO2 that are below the range published by the IPCC, but the IPCC refuses to recognize the existence of such estimates for fear of having their funding reduced. Kyoji Kimoto has found that the initial calculation of the Planck climate sensitivity of CO2, which neglects any feedbacks, is too great by a factor of more than 20 because the calculation neglects that a doubling of CO2 will slightly decrease the dry lapse rate in the troposphere, which is a cooling effect. So instead of a Planck sensitivity of 1.2 degrees C from the LWIR-absorbing effects of CO2, the number should be less than 0.06 degrees C, which is trivial.

The primary reason for the wide range of the IPCC’s guesstimates is uncertainty as to the feedback effect of H2O. According to the AGW conjecture, more CO2 in the atmosphere causes warming that in turn causes more H2O to enter the atmosphere, which causes even more warming because H2O is also a so-called greenhouse gas with LWIR absorption bands. What the AGW conjecture ignores is that besides being the primary greenhouse gas, H2O is a primary coolant in the Earth’s atmosphere, moving heat energy from the Earth’s surface (which is mostly some form of H2O) to where clouds form, via the heat of vaporization. According to energy balance models, more heat energy is moved by H2O via the heat of vaporization than by both convection and LWIR absorption-band radiation combined. Another fact to consider is that the wet lapse rate is significantly less than the dry lapse rate, which signifies that more H2O in the atmosphere causes cooling; hence the feedback is really negative, not positive. For the Earth’s climate to have been stable enough for life to evolve, the feedback has to have been negative. The negative feedback reduces any effect that CO2 might have on climate.

A real greenhouse does not stay warm because of the fabled heat-trapping action of so-called greenhouse gases. A real greenhouse stays warm because the glass reduces cooling by convection. There is no radiant greenhouse effect in a real greenhouse; rather, it is a convective greenhouse effect. So too on Earth. The surface of the Earth is warmer than it would otherwise be because gravity limits cooling by convection. Gravity provides a convective greenhouse effect. From first principles it has been calculated that the convective greenhouse effect keeps the surface of the Earth 33 degrees C warmer than it would be without an atmosphere. 33 degrees C is what has been calculated and 33 degrees C is what has been measured. There is no additional warming caused by a radiant greenhouse effect. The radiant greenhouse effect upon which the AGW conjecture depends has not been observed anywhere in the solar system, rendering the AGW conjecture science fiction. Without the radiant greenhouse effect, the climate sensitivity of CO2 is zero. If CO2 really affected climate, then one would expect that the increase in CO2 over the past 30 years would have caused at least a measurable increase in the dry lapse rate in the troposphere, but such has not happened. The reality is that there is no real evidence that CO2 has any effect on climate, and plenty of scientific rationale to support the idea that the climate sensitivity of CO2 is really zero.

The argument which has been presented is that the natural CO2 was in balance prior to significant man-made CO2 generation. Therefore, the addition of 3% CO2 to the atmosphere is sufficient to distort this balance and lead to rising atmospheric carbon dioxide levels.

This is a valid argument. We have measures of atmospheric CO2 which illustrate the effect of man-made emissions. Claiming that it is only 3% of the natural carbon dioxide emissions is a deflection from a valid argument.

The discussions centered on the social cost vs. the social benefit of carbon dioxide emissions are valid. Most of the time, these assessments seem geared to drive policy rather than provide neutral scientific information which informs policy decisions. Many valid arguments for this distortion are presented in this article and subsequent postings support this argument.

By burning massive amounts of fossil fuels, mankind has caused an increase of CO2 in the Earth’s atmosphere, but there is no real evidence that CO2 has any effect on climate. There is plenty of scientific rationale to support the idea that the climate sensitivity of CO2 is really zero. In the past, CO2 levels have been more than ten times what they are today, and during that time there have been both ice ages and warm periods. It is the chemical processes that created carbonate rocks and fossil fuels that have been removing from our atmosphere the CO2 that plants need as their source of carbon. The previous interglacial period was warmer than this one, with higher sea levels and more ice cap melting, yet CO2 levels were lower than they are today. The warming from the Little Ice Age that we have been experiencing is very similar to the warming from the Dark Ages Cooling Period, except that our Modern Warm Period is not yet as warm as the previous Medieval Warm Period. The models that included CO2-based warming have all failed to adequately predict today’s global temperatures, yet others have produced models that do not include any CO2-based warming and do adequately explain today’s global temperatures. From the modeling efforts one can conclude that today’s climate change is caused by the sun and the oceans, over which mankind has no control. Clearly the past billions of years of climate change could not possibly have been caused by man’s use of fossil fuels. The climate change we have been experiencing is so small that it takes very sophisticated instruments many years even to detect it. The climate change we are experiencing today is in keeping with climate change from natural forces. Changes in the weather and weather cycling are not climate change.
Extreme weather events, including floods and droughts, are part of our current climate, and even if we could put a halt to climate change, these extreme weather events would continue to happen and the sea level would continue to rise.

Thank you, Willhaas. All good, but I especially appreciated the last paragraph. It is so obvious, yet seldom stated. In fact, I haven’t seen it stated before on this blog, despite being a loyal visitor for well over 10 years. (Not to say someone hasn’t mentioned it; I can’t read every article or response.) Actual greenhouses – especially commercial ones – are in fact a great source to use in arguing that GHG theory is overstating the case.

Not sure that I can agree. As an engineering manager for one of the biggest American aerospace companies, we always tuned our models once we started getting actual performance data. It allowed much more accurate predictions about future performance of engines, lift and drag, comm link/antenna performance. The real world is always different from the models, but a good model can still give you very good planning information.
The difference between the models we used and AGW models is that our models were carefully studied before launch/flight for logic and valid parameterization. If they weren’t in the ballpark during flight, they were tossed, and the manager/team took a reputation hit. If they were good, they were still tweaked to be even better, usually by a larger team of experts. AGW models do not appear to be converging on actual climate performance yet. In aerospace, a performance model like the AGW models would have been discarded literally decades ago.
The only exception I can think of in aerospace is software development modeling prediction tools. The military likes to use them. On some very rare occasions, they achieved order-of-magnitude tracking. The rest of the time they were a joke. Hundreds of qualitative knobs resulted in any answer you needed. I hated having to develop values for the models. Upper management would come back and ask us to tweak those input values to arrive at management/customer target predictions. What a joke.

A simple example is table lookup. You can almost always interpolate. It is very bad engineering practice to extrapolate outside the range where we know the data is valid.
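The interpolate-but-never-extrapolate rule can be sketched as a guarded table lookup; the table values here are purely illustrative:

```python
# Hypothetical calibration table of known-valid (x, y) pairs, ascending in x.
TABLE = [(0.0, 0.0), (1.0, 2.0), (2.0, 3.5), (3.0, 4.0)]

def lookup(x):
    """Linearly interpolate inside the table; refuse to extrapolate outside it."""
    xs = [p[0] for p in TABLE]
    if x < xs[0] or x > xs[-1]:
        raise ValueError(f"x={x} lies outside the validated range [{xs[0]}, {xs[-1]}]")
    # Find the bracketing pair and interpolate between them.
    for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(lookup(1.5))  # 2.75: halfway between the tabulated 2.0 and 3.5
```

Inside the table the estimate is bounded by neighbouring measurements; outside it there is nothing to bound the error, so the function fails loudly instead of guessing.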

In my own career I used models to predict the behaviour of circuits at microwave frequencies. Since everything matters at those frequencies, the solutions are practically impossible without computers. It was easy enough to build the physical circuit and confirm the model. As you point out, verification is mandatory.

Lindzen says that if you build a model using physics and known initial conditions, that model may be valid if the model reproduces historical data without any tuning. Furthermore, you can’t verify a model against the data that it was tuned on.
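Lindzen’s rule (a model cannot be verified against the data it was tuned on) amounts to a holdout test. A minimal sketch with synthetic data, where the model is fitted on the first half of a record and scored only on the untouched second half:

```python
import random

random.seed(0)
t = list(range(100))
series = [0.5 * x + random.gauss(0, 5) for x in t]  # synthetic "observations"

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Tune only on the first half of the record...
slope, intercept = fit_line(t[:50], series[:50])

# ...and verify against the untouched second half.
errors = [(slope * x + intercept) - y for x, y in zip(t[50:], series[50:])]
rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
print(f"out-of-sample RMSE: {rmse:.2f}")
```

An error score computed on the training half would flatter the model; only the out-of-sample number says anything about predictive skill.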

Perhaps a bit more detail…
One item we modeled was contact time for a low orbit satellite. This seems pretty straight forward if you know the orbit. However, we were surprised to learn in real life that we could establish a lock considerably earlier than predicted. Who knew that a radio signal could penetrate the ocean, allowing us downlink/uplink while the satellite was not in line of sight? We had modeled mountains and tides, but not underwater mountains and valleys. The result was that we were able to downlink more data than predicted. And we had more time for re-tasking.
Now of course, that’s included in many models, as well as atmospheric disturbances that affect radio waves. In fact, GPS WAAS satellites downlink those corrections on a daily basis to properly equipped receivers. If you pay attention to anomalous data, you can learn a lot – and apply that learning to your models.

Creosote was also very effective against termites in the Southern states with the nasty Formosan termites. Now it is banned and termites are much harder to control. They also banned chlordane. After Katrina I had to replace 200 2x4s in my house plus dozens of other larger pieces of lumber. The previous owner had poor termite control but the ban of two very effective treatments was also a big part of it as for years they had very few things to use.

I’m sorry, I’m so sorry, but what an epic attempt at counting faeries, even in minute, fully referenced detail of what they’re wearing. No no no NO

Getting there, but still a million trillion miles from what I see as The Real Problem: The Social Cost of Sugar

What’s going on is huge, really really epically huuuuuuuuge.
Where to start?

Look around you, at the people you see.
How many are overweight, how many are currently eating? For an animal that was designed and evolved to exist on one small (fat based) meal every 24 or 48 hours, why is that? Even before we get on to why medics say to snack every 2 hours nowadays.

There is only really one sugar = glucose. It’s obtained from eating any or all of the other ‘sugars’ and is what floods our bloodstreams when we eat (processed) carbohydrate.
Glucose is toxic to animal cells – it has such a huge affinity for water it totally disrupts the water balance of animal cells, and put simply, kills them because of that.
That is why we have a mechanism to control blood sugar but the repetitive flooding of that system reduces its efficacy, to the point it quite gives up.
Diabetes

The first symptom of diabetes?
It destroys our nerve cells, especially in our ‘extremities’ – peripheral neuropathy. Have you got that: pins and needles, tingling feet, hands and fingers?
Oh no, says you, I always get that. It’s nothing.
Nerve cells and especially brain cells don’t repair themselves on a regular basis like all others do – Apoptosis.

The nerve/brain cells we had at age 2 are the ones that have to last us forever.
And Sugar Destroys Them. Alcohol too.

You knew all this, so what. (I do hope you’re not sucking on caffeine while you say that)

My idea – What happens if you ‘take’ an adult brain and erode (large?) parts of it?
Sugar being the damaging agent.

Is it beyond belief that you end up (mentally) with a child?
It’s often remarked upon how elderly dementia patients are ‘childlike’. Physically fine and OK but mentally wasted.

Finally I get to the original post – is not the behaviour described, and as extended towards ‘The Girls’ here, exactly that of the school playground?

Think on: is not all this tribalism that of schoolchildren? Are not the short attention spans demonstrative of that? The ‘click on a link’ and I’m-an-expert mentality.

Go on, admit it, you really do not understand this radiative greenhouse effect. There are as many explanations for it on here as there are commentators. It’s garbage. Extreme minutiae, like the faeries on the pin.

Trouble is, children don’t admit mistakes, usually because they’re afraid of punishment. It takes an adult (WITH A CLEAR HEAD AND GUTS) to admit a mistake.
Again, being afraid, another childlike trait.

Need I go on, you see now. Find your own examples.

Coffee.
Why do you drink it.
You like the taste.
I say “no”. It has a bitter, nasty taste, and caffeine is a potent toxin in its own right. It is the coffee plant’s defence against being eaten.
You like coffee because of the effect it has; it wakes you up, basically, makes you feel better/good.
You like the taste because of the effect. Liking the taste is a brilliant example of political correctness, or even tribalism.

I ask, why were you feeling ‘poor’ or ‘bad’ in the first place – not because you recently ate some sugar, was it……

An adult, with guts, will run the lo-carb & no-grog experiment and discover coffee is only any good for generating migraine.

Sugar has a lot in common with CAGW. Back in the 1950s it seemed to people that there was an epidemic of heart attacks. A guy called Ancel Keys did some flawed research and declared that the problem was animal fats. He and his buddies and a couple of large corporations made sure any different research, especially research pointing the finger at sugar, was suppressed.

Nina Teicholz, a journalist, wrote a book pointing out what Keys and his buddies had done to promote bogus research and suppress good research. Keys is gone but it sounds like his buddies are still at it.

It’s supposed to show an argument like the one being made about climate change and belief and social costs. As with climate change, it has built-in assumptions, models and true believers. If you reject one, you are supposed to reject the other. Form-wise, yes, the arguments are nearly the same. Examining the assumptions is where things fall apart.

Thank you – I often wondered how much all this war over climate change has cost us over the years. We could no doubt pay off the national debt. *SIGHS*… I thought we’d be doing better by now, but it seems we have the war ahead of us. Ivanka and her husband apparently think we need to do something about it – and I hope to God that Trump puts earplugs in when she’s talking about the issue.

Why does everyone assume Trump does whatever Ivanka wants? She didn’t elect him—the people did. If I recall correctly, she didn’t even vote in the primary. I’m pretty sure Trump is aware of what he said and promised to the people who did elect him. Ivanka may have her own opinions, but no one elected her or her opinions.

I don’t think it’s an assumption. There have been factual reports that she convinced him to do certain things regarding the Paris agreement and climate change. Those reports have never been denied or disputed.

Shouldn’t even need to be having to come up with such an argument. Atmospheric CO2 is not a pollutant. The social cost of carbon is a revenue generating scheme. The people who came up with it were given an award – but they should be charged with fraud.

How Does One Justify One of the Most Expensive Regulations in American History?
In an effort to justify its massive global warming regulations, the Obama Administration had to estimate how much global warming would cost, and therefore how much money their plans would “save.” This is called the “social cost of carbon” (SCC). Calculating the SCC requires knowledge of how much it will warm as well as the net effects of that warming. Needless to say, the more it warms, the more it costs, justifying the greatest regulations.
https://www.cato.org/blog/how-does-one-justify-one-most-expensive-regulations-american-history

“QUESTION: Martin,
I have a question that is well outside your usual realm of discourse, so please bear with me as I explain.
I have just returned from visiting my friend, who is a senior cetacean biologist at one of the large west coast universities. While there, he described an amazing situation to me that has alarmed me greatly. He said that research at his university has conclusively identified the complete or almost complete collapse of several dozen food chains within the Pacific Ocean, all within the last 36 months or so. Further, in “unauthorized” exchanges with the relevant departments in other coast universities, he learned that the numbers involved may well be more like hundreds of chain collapses in the same timeframe as opposed to dozens. Finally, in talking with authoritative figures in Vancouver, they apparently believe that the figure is likely closer to 1000. My friend also explained that equally alarming is the fact that all these research departments are finding within the genres of sea life they have physically examined within the same timeframe “huge numbers of general body mutations, as well as skin disorders” which all cannot yet be accounted for in terms of causation.
As bad as all of this sounds, here is the real rub. Regarding these findings about food chain collapses, mutations, and injuries, my friend’s university has instituted a policy that forbids them from publishing their findings, from discussing their findings (on this subject) publicly or in private with other researchers outside their own campus, or finally from taking “unauthorized” radiation readings as part of their research. The penalties for violating these new rules are severe: loss of tenure, civil lawsuits for violation of contract, and potentially employment termination. He showed me a memo on the subject from his own university, so there is no doubt about that in my mind. For the part about colleagues at other universities encountering the same things, I have nothing but his word, but that is good enough for me.”

The social cost of mass hysteria is always hard to quantify.
And there is nothing more hysterical than a carbon based lifeform obsessing about “carbon pollution”.
Opportunity wasted and resources ignored are very hard to sum up.
The destruction of productive people or driving them into other fields, impossible to calculate.

Funny how none of our experts saw today coming 40 years ago… yet they assure us they see and understand tomorrow.
And that tomorrow is bad; obey their wisdom, based upon their great vision, or else.

Honestly, time to start hanging billboards on these fools.
“I am a power hungry parasite”/“The End is Nye” should work.

Educational and within reach of a layperson like me. Interesting comment that some committee members appeared uninterested in and rude to the presenter. You don’t have to agree, but you should be willing to listen. That they were not tells me all I need to know. That is a political attitude and has to be approached with the contempt it deserves.

“We analysed 74,225,200 deaths in various periods between 1985 and 2012. In total, 7·71% (95% empirical CI 7·43–7·91) of mortality was attributable to non-optimum temperature in the selected countries within the study period, … . More temperature-attributable deaths were caused by cold (7·29%, 7·02–7·49) than by heat (0·42%, 0·39–0·44).” Gasparrini, et al, Lancet, 20 May 2015

There are a number of ways for ice to melt below zero. Most important is salinity content. The higher the salt content, the lower the melting/freezing point. Ice is also affected by pressure and wind. The flow of wind creates the effect of low pressure, just as predicted by the Bernoulli principle. In some cases, the ice sublimates rather than melts.

That’s not to say that I buy all of the rhetoric on climate change, but ocean water can be well below zero (32F) and remain a liquid.

Not sure that is exactly correct. I think what happens is the salt water freezes at a lower temperature. Once frozen, the ice is salt-free, and still melts at 0 degrees C. Sublimation isn’t melting; it is more like evaporation, and isn’t related to temperature. Either way, neither is due to CO2.

You are correct — when ice forms out of salt water it retains very little salt, but it still freezes at a lower temperature than 0C. This is why we salt roads in the winter (which is not exactly the same salt, but it is the same principle.)
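For what it’s worth, the size of the effect can be estimated with the standard cryoscopic (freezing-point depression) formula, treating seawater’s salt as pure NaCl in an ideal solution. Real seawater freezes at about −1.9 C, slightly less depression than this idealized sketch gives:

```python
# Ideal-solution estimate of seawater's freezing-point depression.
K_F = 1.86        # cryoscopic constant of water, K*kg/mol
M_NACL = 58.44    # molar mass of NaCl, g/mol
SALINITY = 35.0   # grams of salt per kg of water (typical open-ocean value)
I_VANT_HOFF = 2   # NaCl dissociates into two ions

molality = SALINITY / M_NACL               # ~0.60 mol/kg
depression = I_VANT_HOFF * K_F * molality  # ~2.2 K
print(f"estimated freezing point: {-depression:.1f} C (observed: about -1.9 C)")
```

The overshoot relative to the observed value comes from treating the dissolved ions as non-interacting, which they are not at seawater concentrations.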

I didn’t claim it was due to CO2 (although it could be.) I suspect it is more due to ocean currents. If you watch those wonderful animations of “disappearing ancient ice”, you notice that the ice doesn’t melt as much as it flows out of Antarctica through the GIUK gap. I imagine that is cyclical in nature.

Interesting that not one commenter has mentioned that the Christy graph cited has been shown to be a sub-optimal comparison. Pretty much apples and bicycles. You’d think a group of sceptical minds might ask questions about said provenance.

Nope. Three things wrong. 1. Baseline choice minimizes the discrepancy. Should be normalized to just the beginning, say 1979-1980. 2. CMIP5 95% envelope is meaningless and misrepresented. 3. Ends in middle of El Nino uptick and does not show the following cooling downtick. Schmidt tried those tricks to criticize an earlier version of Christy at Real Climate and got his ears boxed.
The most meaningful comparison is CMIP5 versus 4 balloon and 2 satellite measurements for the tropical troposphere. See Christy’s most recent Congressional testimony, Feb 2016. Models hot by a median 3.5x for the tropical troposphere. So potent that Santer rushed out a subsequent paper using a bogus (for the tropics) stratosphere correction to conclude the CMIP5 models were ‘only’ 1.7x too hot for the tropical troposphere.

Rud Istvan is exactly right. I agree Gavin Schmidt’s comparison is meaningless. He is comparing very rough estimates of surface temperature (all of which use the same data) to model output. Dr. Christy compares completely independent radiosonde and satellite data (which agree with one another) to model output. Much more valid, and for a large chunk of the troposphere. I’ll take three independent estimates over several highly dependent measurements any day of the week. If you think the surface temperatures in Gavin Schmidt’s chart are independent of the model output, I’ve got a bridge in Brooklyn I’d be willing to sell you.

The proper comparison
==================
Actually, there is something very fishy about Gavin’s graph. Notice that the CMIP5 ensemble shows a decelerating rate of temperature increase starting around 2003 – almost exactly the same time that China’s CO2 emissions started to take off.

AGW predicts that temperature increase should have accelerated post 2003, but the CMIP5 ensemble shows the opposite: that temperature increase started to decelerate as CO2 accelerated. In effect the models are predicting the Pause, but back in 2003 none of the models predicted the Pause.

Which tells me that the models have been adjusted after the fact to reflect the Pause. They have been retrained post 2000 to curve-fit them to the Pause, giving the misleading impression that they predicted the Pause. Which they did not. In other words, something is rotten in Denmark.

I am not sure about the point that you are trying to make with the graph above, but as I am not sure about the actual CMIP5 ensemble you include there, let me ask you:

“Do you actually know the CO2 trend that that particular CMIP5 simulation has over time, and whether it really reaches 400 ppm by 2010 and more afterwards?”

Because if it does not have a ppm trend reaching 400 ppm by 2010 and after, and is below 280 ppm by that time, it does not actually qualify for anything that you may think it does…
It will be just in the realm of Gavin’s foolishness and his stupid GCM averaging….

How about a comparison to CMIP1, 2 or 3? These models didn’t have the advantage of current data to build a forecast. Let’s see how well the models actually perform when they don’t already have the correct answer.

Using a CMIP5 model that had access to data up to 2010, and then comparing this to temperature results up to 2010, proves nothing. Except that the computer model can store the right answers in its memory when it is given them in advance.

That is no talent. Just about any school child can do as well when given the answers in advance. It doesn’t take a billion-dollar supercomputer to store 150 years’ worth of temperatures and spit them back out when asked. Your average $100 cell phone can do as much.

OK. It looks like CMIP5 came out after 2010!! So of course the CMIP5 results should match observations up to 2010!! The models were built using actual temperatures!! How could they not match!!

That explains why the CMIP5 models show a deceleration post 2003. They had the data from the Pause, so they are now predicting the Pause after the fact!! However, prior to 2003 none of the models predicted the Pause.

So it looks like the Climate Modelling Community has pulled a fast one. They have used the post-2000 Pause data to alter their predictions, making it look like they predicted the Pause when they did not.

The laws of physics and the data tell us that the ECS is somewhere between 0.2 and 0.3C per W/m^2, which at 3.7 W/m^2 of equivalent forcing from doubling CO2 is between about 0.7C and 1.1C, well below the lower limit of 1.5C claimed by the IPCC and its self-serving ‘consensus’ surrounding the reports it generates.

Note that 3.7 W/m^2 comes from the incremental absorption of surface emissions when CO2 is doubled and assumes that incremental absorption by the atmosphere is equivalent to incremental solar input. This flawed assumption fails to account for the fact that about half of the surface emissions absorbed by the atmosphere ultimately exits out into space.
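Taking the comment’s own numbers at face value (these are the commenter’s claimed sensitivities, not consensus values), the arithmetic works out as stated:

```python
FORCING_2XCO2 = 3.7             # W/m^2 per CO2 doubling, as used in the comment
SENS_LOW, SENS_HIGH = 0.2, 0.3  # claimed sensitivity range, C per W/m^2

ecs_low = SENS_LOW * FORCING_2XCO2
ecs_high = SENS_HIGH * FORCING_2XCO2
print(f"implied ECS range: {ecs_low:.2f} C to {ecs_high:.2f} C")  # ~0.74 to ~1.11
```

The quoted “about 0.7C and 1.1C” is simply this product, rounded.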

Disagree. The laws of physics and generally accepted parameter values say the ecs to doubled CO2 absent feedbacks is ~1.16C. Lindzen uses 1.2C for simplicity. Several papers since 2013 (Otto being the first) computing observational ECS with all feedbacks have it between 1.5 and 1.9. I prefer Lewis and Curry 2014, which uses only IPCC AR5 parameter inputs and concludes ~1.65C. Those are all verifiable facts, not your assertions.

With all due respect to you, the author of this post, and anyone else, like the dear Lord Monckton, or the brave Judith and many others, I think it is the right and ripe time to let the CS or ECS, or whatever else in that realm, be gone and have a rest… It has already served its purpose and is due to be let pass, as its expiry date has already passed…

The most basic thing in climatology and climate science is that CO2 emissions respond and are very sensitive to climate change, not the other way around…
The yearly CO2 flux and its variation over long periods of time (aka climate) respond to temperature variation…
And the CO2 concentration variation responds to the CO2 emission variation, and its sensitivity depends on the residence time of CO2 in the atmosphere… meaning that indirectly the CO2 concentration is sensitive to climate change and the associated temperature variation, in accordance with the CO2 residence time…
Actually, if there is a sensitivity or response to be considered in climate terms, it is the CO2 concentration sensitivity or response to climate change… This is fairly basic, supported by all the data there is…
When looking at paleoclimate data, long term or short term, all that data supports and agrees about the CO2 concentration response or sensitivity to climate change… it is at ~0.4C.
Below a 0.4C variation there will be no CO2 concentration variation…

In this light, the CS and ECS are only a backward approach to understanding and explaining climate change… with no value any more…

Besides, even GCM simulations must uphold and validate it, as otherwise they will be considered just computer war games… and I myself am of the opinion that GCMs actually do that: they play the CO2 in accordance with the temperature variation over time…

Hopefully you do not get me wrong…

CO2 does not cause climate change; climate change, on the other hand, causes CO2 change, either in emissions or concentration, and any feedback mechanisms considered there cannot change this, no matter what.

CS and ECS are only fake artifacts at this point in time, well past their expiry date…

Yes, but the ‘pre-feedback’ ECS is actually the post-feedback influence after all feedbacks, positive, negative, known and unknown, have been accounted for. Look at the math. For 3.7 W/m^2 of equivalent forcing to increase the surface temperature by 3C, surface emissions must increase by more than 16 W/m^2. Where do you think all this ‘amplification’ comes from? In fact, it comes from nowhere.
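The 16 W/m^2 figure follows from the Stefan–Boltzmann law, assuming a global mean surface temperature of roughly 288 K and unit emissivity:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T0 = 288.0       # approximate global mean surface temperature, K

def surface_emission(t_kelvin):
    """Blackbody emission at temperature t_kelvin (emissivity taken as 1)."""
    return SIGMA * t_kelvin ** 4

extra = surface_emission(T0 + 3.0) - surface_emission(T0)
print(f"extra surface emission for +3 C: {extra:.1f} W/m^2")  # ~16.5 W/m^2
```

Because emission scales as T^4, a 3 C rise from 288 K lifts surface emission from about 390 to about 407 W/m^2, which is where the “more than 16 W/m^2” comes from.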

Irrelevant title….there is NO social cost to carbon. There is no carbon in my atmosphere (nor in yours). There is carbon dioxide in my atmosphere and even that is infinitesimal compared to the other constituents. The impact of CO2 changes must first be demonstrated to be affecting the climate. Show me the deleterious effects of CO2 in my atmosphere right now (not in ten, fifty or hundred years). Show me the catastrophic sea level rise over the globe right now. Show me all the increased shipping traffic through the NW arctic passage right now. Show me the increase in the number of hurricanes and their ferocity (likewise for typhoons). Why is it with the warmers that past performance (CO2 rise) has not predicted future (today’s) results? After all we have been ‘studying’ this climate thing for 30+ years.

“There is a computer disease that anybody who works with computers knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is that you ‘play’ with them!”
~ R. Feynman (a really smart guy)

The sea level rise module in the DICE model used by the IWG2013/2015/2016 produces future sea level rise values that far exceed mainstream projections and are unsupported by the best available science.

Predictions for sea level rise in the average Google News search are around a meter or so by 2100, which is well over a 10 mm/yr average for the next 87 years. Reporters who write these stories never bother to do the arithmetic to see if what they are being told is remotely credible.
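The arithmetic check the comment recommends is one line, assuming a one-metre projection and the 87 years remaining between 2013 and 2100:

```python
PROJECTED_RISE_MM = 1000.0    # "around a meter or so" by 2100
YEARS_REMAINING = 2100 - 2013 # 87 years

implied_rate = PROJECTED_RISE_MM / YEARS_REMAINING
print(f"implied average rate: {implied_rate:.1f} mm/yr")  # ~11.5 mm/yr
```

Comparing that implied rate against tide-gauge-era rates of a few mm/yr is exactly the plausibility test the comment says reporters skip.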

All plants and animals are built and survive around the element, carbon.
We need someone, or some folk, to investigate and publish the reasons why honest science and facts have been forsaken by those who wish to gain from lies and from damage to the rationality of people who do not understand, and who should be helped in this regard.

Wonderful reports on facts that many cannot understand, and do not read, least of all politicians.

Impact of climate change on agriculture in FUND model
==========
looking at the table, they predict that as temperature goes up, agriculture in Canada will decrease. This is about as far from reality as one could hope to get.

Almost everywhere in Canada, agriculture is limited by COLD. The average temperature of Canada is 0.5C. As you move north of the US border, the growing season becomes too short to grow much of anything except ice and rocks. Every spring the biggest harvest farmers find in their fields is rocks brought to the surface by frost heave. As you move North, biting flies and mosquitos make a joke of the idea that humans dominate the landscape.

Thanks, Pat, for this great assemblage of key data presentations! Great job!

Sometimes I think about the hypothetical — what if climate science really was a science? I look at the chart of radiosonde data vs model predictions for the Tropical Mid-Troposphere. This discrepancy is really important, as it goes back to the missing Tropical Hot Spot, increased water vapor as the primary positive feedback, etc.

If climate science really was a science (“IFCSRWAS”), this would be a primary focus of active research. There have to be answers to the question of why this discrepancy exists, and the clues to where to begin the search are right there in that chart. It’s probably just another of the many egregious problems with the models, but getting this a little more right could help fix the overstatement of the positive feedbacks, and help tame the C in CAGW.

But instead, what do we get? We get the argument that, if you consider the uncertainty in the model predictions (whiskers shown) and the uncertainty in the observations (whiskers not shown), they overlap a little and so there’s no inconsistency. But the data are the data, and it is very difficult to look at the line representing the predictions and the line representing the observations on this zero-based graph, and defend the assertion that these predictions are correctly predicting a system that produces those observations.

There are big, interesting questions in climate science that are very far from being answered, and IFCSRWAS, every so often we’d read a paper that genuinely moved our understanding forward on one of the many big issues in this field. Wish we did.

@ Mike from Au March 5, 2017 at 5:21 am: What the Ocean people are seeing is ‘rapid’ change starting at least from the ‘modoki-blob’ recent Ninos and now ongoing. Probably connected to the Quiet sun wavy jetstream cooling effects. Corbyn’s mini little ice age beginning. Huge biosphere changes normally occur even with lesser perturbations.
Alarmists will of course see doom in all this. Mainly, we hope, to climastrology…..

It seems rather convenient to me that all of the costs are stated to follow exponential (power) functions (like sea level rise), while all of the benefits magically follow logarithmic functions (like fertilized plant growth), when the evidence seems to be otherwise. Furthermore, we know that the basic IPCC models themselves are linear functions of CO2 concentration (ΔT = F{ΔCO2}), when they should be logarithmic (ΔT = F{log(ΔCO2)}). So we have bent the bad curves upwards and the good curves downwards to overstate the effect of CO2, overstate the damages, and understate the benefits. Yep, sounds correct to me.
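The logarithmic form the comment argues for is conventionally written with the simplified Myhre et al. expression ΔF ≈ 5.35 ln(C/C0); a short sketch showing why equal multiples of CO2 yield equal forcing increments:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 forcing approximation, W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2, so the step 280 -> 560 ppm
# produces as much forcing as the much larger step 560 -> 1120 ppm.
first_doubling = co2_forcing(560.0)
second_doubling = co2_forcing(1120.0) - co2_forcing(560.0)
print(f"280->560 ppm: {first_doubling:.2f} W/m^2")
print(f"560->1120 ppm: {second_doubling:.2f} W/m^2")
```

A model that is instead linear in ΔCO2 will overstate the forcing, and hence the warming, from concentrations above its calibration range.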

In the episode, the scientists claim that krill larvae feed on phytoplankton growing on the underside of Antarctic sea ice. The show starts off with the standard cause & effect errors common in climate reporting (as the earth warmed, krill populations have been reduced. It must be warming!) The show goes beyond that, but it accepts the arguments presented without challenging them.

Krill are one of the main biological building blocks of the oceans, so it is not a moot issue. The show made its case well, but I was left with a lot of questions. In particular,
1) Krill exist all over the world. Do they all originate from the Antarctic Ocean? “Krill” itself is a Norwegian derived word. (Note: The show is talking primarily about Euphausia superba or Antarctic Krill, but asserts that this is the predominant krill species in the oceans.)
2) Like most plants, phytoplankton need sunlight for photosynthesis. Is the under-ice environment the best place for this to occur? (Note: I believe the assertion is that there is less competition for the under-ice phytoplankton which allows krill to prosper. If it is not under the ice, then other species will compete with krill to consume the phytoplankton.)
3) Are there other natural or man-made explanations which would drive the reduction of krill (over-fishing, pollution or natural ocean cycles)?

The report is disturbing, but my question has to do with whether it has merit.
