Ha ha, you took the bait. What does warming have to do with anything? Well, if I mix lots of chemicals and elements together in a closed system and change the temperature, I fully expect a catalytic reaction. Maybe benign or maybe dangerous, but at the most basic level it’s an uncontrolled experiment. Personally I ignore first-order effects of climate change like measuring the average global temperature, polar ice and CO2 levels. I’m watching for second-order effects that actually cause accelerated biosphere divergence. I suspect you will need to see personal economic damage before you start connecting the dots.

Maybe he just wants to defund the data parts of NOAA and the earth-pointing satellite parts of NASA. Bates must be pleased with himself sticking it to all his former NOAA colleagues in this very blog. Smith and Trump picked it up. NCEI is being hit hard. It seems blogs have consequences.

Pacific Ocean Heat Content During the Past 10,000 Years. Unfortunately, the link is paywalled. Never mind the past 10,000 years; what was the Pacific Ocean heat content as of January 1, 2017? Can we measure it to a 5% accuracy? 1% accuracy? 0.1% accuracy? Can we express the margin of error in favored terms of Hiroshima atomic bombs?

It is, however, a very important paper. It shows that the subsurface waters of the Indo-Pacific Warm Pool have been much warmer during previous warm periods, and especially during the Holocene Climatic Optimum.

It essentially destroys Marcott et al., 2013 by showing that their tropical reconstruction, which shows warming instead of cooling, is absurd.

Javier, thank you. I am not defending Marcott, or attacking Rosenthal; I merely think that both are unreliable. I just don’t believe that we can estimate the heat content of the Pacific Ocean with any precision; how would you even define the precision of such a measurement? We simply don’t yet have the technology for such a task. And we did not have it 10,000 years ago.

Read the methods section. Proxies are proposed by researchers to represent past local climatic conditions; for example, 18O isotopic abundance. If they give a picture coherent with what is known of past climate from other proxies, they are accepted by other researchers. What they then measure with a great degree of precision is the proxy.
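The logic above can be sketched in a couple of lines: the proxy itself is measured precisely, and a calibration fitted against modern conditions converts it to temperature. The coefficients below are purely hypothetical placeholders, not from any published calibration.

```python
# Sketch of how a proxy-to-temperature conversion works. The proxy
# (e.g. an isotopic ratio) is measured with high precision; the
# temperature uncertainty comes mostly from the calibration itself.
# Slope and intercept here are ILLUSTRATIVE placeholders only.

def proxy_to_temperature(proxy_value, slope=-4.5, intercept=16.1):
    """Invert a hypothetical linear calibration: proxy = intercept + slope * T."""
    return (proxy_value - intercept) / slope

# A proxy reading of -2.0 units maps to ~4.0 degrees C under this
# made-up calibration; a different calibration would shift the answer.
t = proxy_to_temperature(-2.0)
```

The point of the sketch is that the precision of the proxy measurement and the precision of the reconstructed temperature are two different things.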

When a proxy gives the same result as the calculations for the past orbital changes, as L04 does, it gives a lot of reassurance that the proxy is adequately capturing climatic changes of the past.

There are scores of paleoclimatologists whose research contradicts many of the propositions from the man-made-CO2-is-responsible-for-climate-change crowd. They just avoid being dragged into the controversy to prevent damage to their careers. Understandable. They just wait for the consensus hypothesis to stumble, then come to the fore and say they knew it all along.

One can sign up for a free AAAS account to read the paper. As Javier mentions it is generally a very good effort.

I take issue with the following:

“The early Holocene warmth and subsequent IWT cooling in Indonesia is likely related to temperature variability in the higher-latitude source waters.”

The “warm pool”, even at intermediate depth, cannot be reasonably considered “sourced” at higher latitudes after traversing ~20,000 km of the tropical Pacific Ocean.

If any ocean water on earth can be considered tropical, the warm pool would be it.

At the level of ocean unity or equilibrium, likely on the order of 2k years or the “common” era, cold (or warm) inputs will factor in everywhere. To say that apparent density equivalence over the common era implies high latitude fresh water compensation is also a stretch, IMO.

“Holocene IWT cooling must have been largely compensated by freshening at the high-latitude source regions.”

Salinity is the purported engine of surface to intermediate level energy transport. Warming might be compensated by fresh water, cooling would be amplified.

“In fact he’s not a scientist of any kind. He’s just an entertainer. And his account of cognitive dissonance is exactly ass-about. Cognitive dissonance arises when your prophecies fail. He is the one suffering from cognitive dissonance. When has a Greenie prophecy ever got anything right? He’s not even capable of Googling or he would not have made such a howler “

The article that Judith characterized as a “Long but fascinating read: An epidemic of unnecessary treatment” is in fact long and fascinating and well worth the time. Worthwhile because it talks about the types of medicine and procedures we are likely to be prescribed: many will do nothing; some will make us worse; some will kill us; and all are costly.

Not quite like a fast car. A fast car involves risks having to do with speed, etc., but the car is a system that’s been precisely engineered and all the consequences of increasing gas to the cylinder, for example, are well-known and reflected in the engineering of the entire system. Contrast that with a human body that’s incredibly complex and involves many unknowns. What we do know for certain, however, is that if you just eat your vegetables (and lots of ’em) and eat moderately, exercise, and don’t pump yourself up with alcohol and drugs, you’ll do very well.

Sorry, I still don’t see how CO2 can be attributed to increasing temperatures with DWIR decreasing. If clouds or some other mechanism is overwhelming CO2’s increasing bandwidth that is what is controlling temperatures not CO2. DWIR must increase in order for CO2 to be causing more heat retention.

gyan1,
Interesting observation, but you are reading too much into it. Under any simple construct of a heating model, the thing which determines whether something is heating or not is not the rate of change of flux, but the flux itself – specifically whether the net flux is positive or negative. All else being equal, and for small relative temperature changes, a LINEAR increase in forcing with time should asymptote to a CONSTANT net flux difference and a consequential LINEAR increase in temperature. Variations of forcing above (below) the linear should then give rise to an increase (decrease) in net flux. Such variations do not provide evidence of a lack of heating in the longer term. For downwelling LW in particular, if it were solely controlled by an exponential increase in CO2 concentration giving rise to a linear increase in CO2 forcing, we might expect it to rise to a constant value and stay there. But it is not solely controlled by CO2, nor is the forcing change strictly linear. Moreover, the downwelling LW which you show is not measured by CERES as such; it is a construct based on matching the more reliable estimates of TOA fluxes. So no definitive evidence, I’m afraid.

CO2 does not produce a linear increase in temperature. It is a log function.
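For concreteness, the standard simplified expression for CO2 forcing, F = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998), shows the consequence of that log dependence: an exponential rise in concentration produces a linear rise in forcing, i.e. equal forcing increments per equal time step.

```python
import math

# Standard simplified CO2 forcing expression (Myhre et al. 1998):
# F = 5.35 * ln(C / C0) in W/m^2.

def co2_forcing(c, c0=280.0):
    return 5.35 * math.log(c / c0)

# Exponential concentration growth (+1%/yr, sampled every 10 years)
# yields EQUAL forcing increments: 5.35 * 10 * ln(1.01) per step.
levels = [280.0 * 1.01 ** (10 * n) for n in range(4)]
increments = [co2_forcing(b) - co2_forcing(a)
              for a, b in zip(levels, levels[1:])]
```

A doubling (560 vs 280 ppm) gives 5.35 ln(2) ≈ 3.71 W/m2, the familiar figure.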

I understand that increasing CO2 bandwidth is retaining more heat.

It is being claimed that the current increase in temperature is from CO2. The net flux is declining. Increasing bandwidth from CO2 is not resulting in a net increase in DWIR. That means it can’t be responsible for an increase in temperatures. I don’t see anything in your reply that refutes this.

I’m impressed that Professor Goodenough, the co-inventor of the lithium-ion battery, was the lead scientist; he must have been working on that glass solid-state design for years. At age 94 the guy isn’t in it for the money. Respect.
Dr. Braga (Portugal) was the European scientist who added the novel idea of silicone/sodium electrolyte solid structure.
UT @ Austin got the patent (worth billions if they can license it).
Impressive lab specs of 1200 cycles, sub zero operation, fast charging, cheap materials.

In addition to working in nuclear safety analysis I have also been a beekeeper since I was 15. I first became a beekeeper in about 1978. In 1984 I joined the service and during the next 7 years or so I did not have bees. Between 1983 and 1990 varroa and tracheal mites were introduced into the USA. They devastated US beekeepers both professional and hobbyist.

As a teenager in Michigan I would typically lose 1 or 2 hives out of six each winter. Now it is difficult to bring a hive through a Michigan winter. Professionals typically move their hives south and make divides and then come back to Michigan in the spring.

I suspected early on that CCD was another environmental scare story. For example, activists would make a big deal about 35% of US colonies being lost in a given winter, even though this is not that much of a deviation from the norm. It is also why I became skeptical of stories in the media. I can reasonably be considered very knowledgeable in two areas: nuclear power and beekeeping. When I read stories on either subject it seems obvious I should be skeptical of anything I hear from the media.

I think CCD was a propaganda campaign by the professional beekeeping industry. Since 2007 the price of honey has doubled, but as you correctly point out there has been no decline in commercial bee populations.
I suspect the same thing is true about bats, and that White-nose Syndrome is just a cover-up by the wind power industry.

The price of honey made a step change when the US instituted tariffs against imports in 2001. When I was a kid the wholesale price was about 65c/pound. In 1991 when I started back it was about 55c/pound. It more than doubled after the tariffs were instituted. I don’t really follow the wholesale market anymore so I don’t know what drives it now.

I have seen some sensible people point out that beekeepers would be wise to not allow honey to go the way of maple syrup. In some applications it is already headed there.

It would have been helpful for me to see a graph comparing predicted to observed for 1989. There did not appear to be data points with which to graph my own. Using various colors to represent the data did not impress me. Their illustrations may or may not represent what they say they represent. As far as I can tell, I have no way to reproduce what is claimed.

Comparing the observed change with the model projections, one notes that the land areas warm faster than adjacent ocean areas in both the model and in the observations. …

Check.

The warming tends to be largest in high northern latitudes due mainly to the positive albedo feedback of snow and sea ice. …

Check.

In the model results, warming is a minimum in the northern North Atlantic: this is not so pronounced in the observations. In the model, this minimum is attributable not only to deep, convective mixing of heat but also to the weakening of the Atlantic meridional overturning circulation. …

In sharp contrast to most of the high northern latitudes, temperature change is small in the Southern Ocean in the model results.
The area of small temperature change is also seen in the observations, confirming this surprising early model finding. …

Check.

In other words, the projections shown here were made before the observations confirmed them as being correct, striking at the heart of the argument that modellers tune their models to yield the correct climate change results. …

These are the spatial trends of GISS up to 1988. On these trends (up to the year the paper was written) the model must have been “tuned”, because tuning is necessary to match a model to the real world. Let’s check:
1. Comparing the observed change with the model projections, one notes that the land areas warm faster than adjacent ocean areas in both the model and in the observations. …
This is very simple physics due to thermal inertia, and trivial. No need to check.
2. The warming tends to be largest in high northern latitudes due mainly to the positive albedo feedback of snow and sea ice. …
Check (with trends to 1988), and also simple physics.
3. In the model results, warming is a minimum in the northern North Atlantic: this is not so pronounced in the observations. In the model, this minimum is attributable not only to deep, convective mixing of heat …
Also simple physics; the deep mixed layer in the SPG was known before 1988 … and clearly visible in the obs. up to this year.
4. In other words, the projections shown here were made before the observations confirmed them as being correct, striking at the heart of the argument that modellers tune their models to yield the correct climate change results. …
NO, as shown above.
And: the climate sensitivity given in this model description http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442(1991)0042.0.CO%3B2 is about 2 times too high. You should recalculate it before touting.

frankclimate, the model results also scale with CO2 the same way as observations, just like future projections. You would again say that this is obvious physics, but I think some skeptics here don’t think it is as obvious as you that warming scales with CO2. In those 25 years we have had 20% of a doubling, while the model results were for a doubling. They showed a scale factor of 5 fits well, so their model had the right sensitivity too. The obvious physics that you talk about extends to the sensitivity itself.

Jim D
You say: ” In those 25 years we have had 20% of a doubling, while the model results were for a doubling. They showed a scale factor of 5 fits well, so their model had the right sensitivity too.”

Not so. Per AR5, the change in forcing from the 1961-90 to the 1991-2015 averages was 1.1 W/m2, or 30% of that for a doubling of CO2. So the model GMST would have increased by about 0.68 K, based on its TCR (which is 2.3 K). By contrast, the GMST increase was about 0.43 K (per NOAAv4.0; or 0.42 K per HadCRUT4v5). So the model sensitivity was nearly 60% too high to match the observed warming, which implies a TCR of about 1.45 K, not 2.3 K. However, by choice I would not estimate TCR using periods that are close together and have natural variability and forcings rather poorly matched.
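The arithmetic in this comment is easy to check directly, assuming the usual 3.71 W/m2 for a CO2 doubling:

```python
# Checking the TCR arithmetic above. Assumes F_2x = 3.71 W/m^2 per
# CO2 doubling; the other figures are taken from the comment itself.

F_2x = 3.71         # W/m^2 per CO2 doubling
dF = 1.1            # W/m^2, forcing change between the two periods (per AR5)
model_tcr = 2.3     # K, the model's transient climate response
observed_dT = 0.43  # K, observed GMST change (NOAAv4.0)

fraction_of_doubling = dF / F_2x                   # ~0.30
predicted_dT = model_tcr * fraction_of_doubling    # ~0.68 K
implied_tcr = observed_dT / fraction_of_doubling   # ~1.45 K
```

The numbers reproduce the comment's claims: roughly 0.68 K predicted versus 0.43 K observed, implying a TCR near 1.45 K.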

In 25 years, the CO2 level went from 350 ppm to 400 ppm, which you can work out is 20% of a doubling. You can bring in other forcings, which mostly cancel, but the CO2 part is dominant and scales, and their experiment was only a CO2 doubling with no other forcing change.
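The “20% of a doubling” figure can be verified in one line, since forcing is logarithmic in concentration:

```python
import math

# Fraction of a CO2 doubling represented by a rise from 350 to 400 ppm:
# ln(C2/C1) / ln(2).
fraction = math.log(400.0 / 350.0) / math.log(2.0)  # ~0.193
```

So the 350→400 ppm rise is indeed about a fifth of a doubling.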

Jim D: of course the warming scales with GHGs above all, and one also has to consider natural variability, as niclewis points out. So I think the paeans to the models in the paper are a little bit… premature? ;-)

Recent attempts to diagnose equilibrium climate sensitivity (ECS) from changes in Earth’s energy budget point toward values at the low end of the Intergovernmental Panel on Climate Change Fifth Assessment Report (AR5)’s likely range (1.5–4.5 K). These studies employ observations but still require an element of modeling to infer ECS. Their diagnosed effective ECS over the historical period of around 2 K holds up to scrutiny, but there is tentative evidence that this underestimates the true ECS from a doubling of carbon dioxide. Different choices of energy imbalance data explain most of the difference between published best estimates, and effective radiative forcing dominates the overall uncertainty. For decadal analyses the largest source of uncertainty comes from a poor understanding of the relationship between ECS and decadal feedback. Considerable progress could be made by diagnosing effective radiative forcing in models.

Willard: “No wonder “observational” has become a lukewarm buzzword.”
I’m so sorry that observations are a buzzword for you. For me they are the foundation of all physics, as long as climate science is still physics for you? Or do you prefer a post-modern physics uncoupled from obs.? Perhaps then you are right in “some kind of climate research”…

The latest thing in the news today is that we need to eat more salt, not less. This comes on top of a string of food-related research that oscillates between such things as coffee, wine, dairy, fat, etc. being good or bad for you.

Therefore I take the idea that we can know the temperature of Pacific water back 10,000 years with a large pinch of salt, or whatever is today considered the appropriate thing to take a pinch of.

Dear Ms Curry, This is the most informative source on climate, science and related topics I am aware of. I am willing to pay a modest subscription fee to support your work. Do you have a funding mechanism in place? Regards. Chris Scanlon

I think there are plenty of folks who would pay to make sure that this blog continues at its current level of quality. There probably is not a lot of money in it for Prof. Curry but maybe she could hire someone to moderate and vet subjects for Week in Review. “Dues” could help defray the cost.

I’m concerned about the declining volume of content though I certainly understand the reasons why.

We need a blog or some vehicle that’s something other than an echo chamber. Personally, I learn the most from this blog because of commenters who challenge the skeptical view. On WUWT there are too many comments that come from the same place, and that blog becomes a bore.

The only person who actually stays here and endures the completely moderation-free mountains of personal ridicule and insult is Jim D. This place is a vile cesspool. Climate skepticism is a cargo cult. Its members are physics spoon-benders.

Where’s the stadium wave? Will you get a prayer cloth with a donation?

I await the testing with new actual “forecasts”. This is reminiscent of the repeated improvements of GDP predictors which miraculously never improve actual GDP forecasts.

Paraphrasing Milton Friedman: “Some lessons are never learned”. He was referring specifically to economists who never seem to learn about regression to the mean. Here I refer to the necessity that all claims of improved forecasting need to be tested with new actual forecasts, not improved post-diction.

Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works.

Not only is the article oblivious to the numerous instances where single righteous persons stood alone against the shared ignorance of the majority, it also seems unconscious of the fact that the title of the article, “Why We Believe Obvious Untruths,” could just as easily have been “Why We Believe Obama,” or “AGW,” or “That Castro Was Good for Cuba.”

Revolutionary Power Plant Captures All Its Carbon Emissions, At No Extra Cost [link]

Removing CO2 from gas burning is analogous to creating a pill that removes orgasms from sex. Yes, clever technical stuff, which gratifies prohibitionist religious sentiment, but worse than pointless.

CO2 is good. No need to be pruriently hung-up about it.
Let nature take its course.
Hominids burn stuff, it’s OK, just part of Gaia.
Like all other animal and plant activities, ultimately it will benefit the environment. Make CO2, not war.

I thought the article was bizarre in claiming it had no CO2 emissions. It looks like the CO2 is merely piped away to some other location for disposal/use. No mention of how the CO2 is separated from the N2 etc.

No real description of the process. Gas turbines produce a lot of water vapor, plus small amounts of other stuff: NO, NO2. No mention of how the CO2 is separated and recycled. High-pressure pipelines would be enough to keep it supercritical at reasonable temperatures. Perhaps they’ve found a high-pressure membrane or molecular sieve. Water vapor would tend to pass through the appropriate material much faster than CO2, or vice versa, depending on the material.

According to Figure 1 and the text describing it, the predicted changes were about 5 times as great as the measured changes. Did I read that right? I am unable to cut and paste. The text is on p 163, upper rightmost column.

They are just comparing patterns of change. The timeframes were 75 years for the model and 25 years for the observations. This explains the magnitude difference. The interesting thing is that these are the original authors back in 1989 returning 25 years later to assess how their model did, and the patterns of change are as predicted.

Their experiment was a 1% increase in CO2 per year, typical of a doubling experiment where it doubles in 70 years. That is about twice the actual rate of growth in the last 25 years. The factor of 5 accounts for the difference in CO2 levels quite accurately.
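The numbers in this exchange can be sketched out directly: a 1%/yr compounded increase doubles CO2 in about 70 years, the observed 350→400 ppm rise is roughly 19% of a doubling, and the reciprocal of that fraction is the factor of ~5 mentioned above.

```python
import math

# Doubling time under 1%/yr compound CO2 growth: (1.01)^t = 2.
doubling_time = math.log(2.0) / math.log(1.01)             # ~69.7 years

# Observed fraction of a doubling, 350 -> 400 ppm.
observed_fraction = math.log(400.0 / 350.0) / math.log(2.0)  # ~0.193

# Scaling a full-doubling model pattern down to the observed change.
scale_factor = 1.0 / observed_fraction                      # ~5.2
```

So the factor-of-5 rescaling is consistent with comparing a full-doubling experiment against only a fifth of a doubling realized in the observations.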

Revolutionary power plant. Read the linked Forbes article which was not that informative, then did a bit of research on the Allam cycle. Is potentially real.
There are three tricks. 1. Oxyfuel (pure oxygen used for combustion rather than air) combustion. 2. Supercritical CO2 as the main working fluid for the turbine. 3. Heat exchanger separation of the water in the exhaust stream leaving pure CO2 to feed back as working fluid or for CCS. 2 and 3 are just engineering. For the 50 MW Exelon pilot plant now under construction, Toshiba is doing the turbine and Bechtel the heat exchanger and plumbing.

1 is the tricky trick. Pure oxygen is ordinarily produced by cryogenic fractional distillation. Very energy intensive; would make the scheme technically unviable. BUT in 2012 MIT developed ceramic membranes that separate oxygen via ion transport; no nitrogen reaches the ‘far side’. (MIT tech review article on the lab scale functioning setup). The process uses just atmospheric pressure so long as the oxygen on the far side is removed to establish a partial pressure gradient across the membrane (by combustion). The ceramic membranes function at the temperature of combustion ~1000C. So that is the real technology breakthrough. The CCS part is silly, but putting these power plants where tertiary CO2 oil recovery is desirable makes a lot of sense. Ship electricity, not CO2. Tertiary recovery CO2 now comes from amine process CO2 stripping of natural gas before it goes into major pipelines. Amine process is expensive and uses energy, and only produces CO2 where there is a natural gas field high in CO2.
Something to keep an eye on.

NL, I enjoy keeping up with the bleeding edge of technology, especially in energy, since it is a major concern highlighted in my ebooks. Been doing that now since 1976. Was unaware of this development until Judith’s link. But IF it scales commercially (lifetime is always an issue with any sieving membrane; think reverse osmosis desal, fuel cells…) then it is a potential game-changer. Some big respected commercial names were impressed enough to give it a go. Always a good sign. A 50 MW plant isn’t ruinous if it fails, but is still a positive financial commitment.

CO2 and Jim D, I ask a favor. Take your politics outside here to some other parking lot where you can duke it out. This is mostly a science/science policy blog. See my immediately upthread comment on some interesting possible technology enabled by science for an example of what you could but don’t comment on. Knock it off, already.

Current climate models foresee a slowing of the meridional overturning circulation (MOC), sometimes known also as the thermohaline circulation, which is the phenomenon behind the more familiar Gulf Stream that carries warmth from Florida to European shores. If it did slow, that could lead to a dramatic, unprecedented disruption of the climate system.

The pernicious notion that density-driven MOC is somehow “behind…the Gulf Stream,” instead of merely being a weak adjunct of the wind-driven surface circulation, is what makes all the speculation about “unprecedented disruption” by climate modelers dynamically implausible. And it’s totally ludicrous to pretend that any “warmth” is carried from the depths to the surface in a persistently stratified ocean. Such physically nonsensical conceptions cannot be taken seriously.

This was of course implicit in Poincare’s three body problem. It might more commonly be regarded as ergodic – returning to states over a long enough period – rather than visiting new state spaces in which we are all doomed. But at any rate chaotic destabilisation in billions of years is probably not high on anyone’s agenda.

One idea is that these chaotic orbits drive changes in the solar magneto – it is presumed to impose some order on chaotic solar turbulence.

I cannot believe that the interview with Tesla in 1899 was real. In that interview Tesla told that Einstein’s theory of special relativity was wrong.
But Einstein published his theory in 1905. In 1899 no one knew Einstein.
Was Tesla a prophet?

=={ Like studies of stored cognitions, studies of learning may overstate bias if they do not account for motivated responding . }==

This touches on a big problem I have with a lot of the studies based on opinion polling. For example, the studies of the impact of “Climategate” where “libertarian” were more likely to “report” that they “learned” about the unreliability of climate science by virtue of leaked emails. In other words, they “reported” that “Climategate” reduced their concern that ACO2 might pose risks.

My feeling has long been that while that “reporting” may have been accurate in some cases, in other cases it may well have been that “skeptics” merely “reported” such “learning” because it fit with a preexisting orientation. W/o a pre-(“Climategate”) test/post (“Climategate”) -test analysis of their views, there is no way to determine if their “report” actually coincided with what they “learned.”

Of course, reinforcing my question is the observation that the congruence between “learning” and “reporting” ran in the other direction in reaction to “Climategate” with people who had a more left-leaning ideological predisposition. Not surprisingly, what they “reported” “learning” from “Climategate” was that we should be even more concerned about ACO2 emissions.

It’s getting late, so I didn’t read the whole thing. But the first example changed headings on a 2×2 matrix, purporting to compare the biases resulting from headings about banning or not banning concealed carry and crime statistics. Individual subjects were apparently randomly shown one of the matrices, and how they judged what the “data” showed was compared with their political party.

My first question in a loaded situation like that would be “Where did the data come from?”. I think most people understand that statistical surveys of any kind are often highly variable; that includes opinion data, sample size, completeness of the data being shown, and all sorts of other variables that affect the “data”.

While people may have a bias toward data that is congenial to their politics, it may also be that they have learned that data are often biased against their politics, and so they do not report or learn what is shown but simply go with what they have already learned is true.

I’d love scientists’ opinions of this straightforward intro-college-stats writeup of two key facets of climate change stats: rate of Arctic ice melting, and also how we infer the role of carbon dioxide in warming temps. http://ww2.amstat.org/publications/jse/v21n1/witt.pdf

Basically green garbage. No one claims that direct solar irradiance explains GW, but the lesson says it is either that or CO2, which is nonsense. The surface temperature statistics are also junk, but they are accepted as measurements.

JCH, the nonsense is the claim that these surface statistical models are accurate to a hundredth of a degree (or even a tenth). See uncertainties listed below in my modest proposal to reform NOAA.

Reforming NOAA research

In addition to budget cuts we need to refocus climate research. Here is my proposal for NOAA global and regional temperature estimates. The first step is a white paper elaborating on these needs in some detail.

A needed NOAA temperature research program

NOAA’s global and US temperature estimates have become highly controversial. The core issue is accuracy. These estimates are sensitive to a number of factors, but the magnitude of sensitivity for each factor is unknown. NOAA’s present practice of stating temperatures to a hundredth of a degree is clearly untenable, because it ignores these significant uncertainties.

Thus we need a focused research program to try to determine the accuracy range of these temperature estimates. Here is a brief outline of the factors to be explored. The goal is to attempt to estimate the uncertainty each contributes to the temperature estimates.

Research question: How much uncertainty does each of the following factors contribute to specific global and regional temperature estimates? Each can be explored independently.

1. The urban heat island effect (UHI). This is known to exist but its specific effect on the temperature recording stations at any given time and place is uncertain.

2. Local heat contamination or cooling of temperature readings. Extensive investigation has shown that this is a widespread problem. Its overall extent and effect is highly uncertain.

3. Other temperature recording station factors, to be identified and explored.

4. Adjustments to temperature data, to be identified and explored. There are numerous adjustments made to the raw temperature data. These need to be cataloged, then analyzed for uncertainty.

5. Homogenization, which assumes that temperature change is uniform over large areas, is a particularly troubling adjustment deserving of special attention.

6. The use of sea surface temperature (SST) proxies in global temperature estimates. Proxies always add significant uncertainty. In the global case the majority of the surface is oceanic.

7. The use of an availability sample rather than a random sample. It is a canon of statistical sampling theory that availability samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.

8. Area averaging. This is the basic method used in the surface temperature estimating model and it is a nonstandard statistical method, which creates its own uncertainties.

9. Interpolation or in-fill. Many of the area averaging grid cells do not have good temperature data, so interpolation is used to fill them in. This can be done in many different ways, which creates another major uncertainty.

10. Other factors, to be identified and explored.

To the extent that the uncertainty range contributed by each factor can be quantified, these ranges can then be combined and added into the statistical temperature model. How to do this is itself a research need.

Note that it is not a matter of adjusting the estimate, which is what is presently done. One cannot adjust away an uncertainty. The resulting temperature estimates will at best be in the form of a likely range, not a specific value as is now done.
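As a minimal sketch of what that combination step might look like, assuming the per-factor uncertainties can be quantified and treated as independent (all numbers below are placeholders, not estimates), the ranges would combine in quadrature rather than by simple addition:

```python
import math

# Hypothetical per-factor uncertainty contributions (1-sigma, in K).
# These values are PLACEHOLDERS for illustration, not actual estimates.
factor_uncertainties_K = {
    "UHI": 0.05,
    "microsite_contamination": 0.04,
    "adjustments": 0.06,
    "homogenization": 0.03,
    "SST_proxies": 0.08,
    "availability_sampling": 0.10,
}

# Independent uncertainties combine as the root sum of squares.
combined = math.sqrt(sum(u ** 2 for u in factor_uncertainties_K.values()))
```

If the factors were correlated (as some of these plausibly are), a full covariance treatment would be needed instead, and working that out would itself be part of the proposed research.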

Most of this research will also be applicable to the other surface temperature estimation models, such as HadCRU, GISS and BEST.

Eliza, the Arctic ice math is okay as far as it goes (stopping in 2012 is misleading). That Arctic ice has decreased since the 1970s is well known. But the narrative is pure alarmism and only alarmists are cited. Not mentioned is the evidence that Arctic ice extent is cyclic, with previous low levels. Plus this is just a region, not a canary.

Jim D, if BEST has estimated these various ranges what is their total range estimate? Can you point me to it, or if not to the separate range estimates? I am especially curious about their estimate of the uncertainty due to the use of an availability (or convenience) sample, as I know of no way to estimate that.

Jim D, I seriously doubt that BEST has done what you claim, namely covered all of my uncertainties. They may have touched on 3 or 4, at best. But I do not claim to know a “better way.” What I am proposing is a NOAA research program to address these uncertainties.

1. The urban heat island effect (UHI). This is known to exist but its specific effect on the temperature recording stations at any given time and place is uncertain.

A) This has been studied six ways from Sunday using multiple methods and multiple definitions of urban/rural; using adjusted and raw data; using only rural stations; using only pristine stations. And the answer is the same. The UHI effect is not large enough on a global scale to get out of the noise. OF COURSE it’s uncertain, but we can say the warming we see is not due to UHI.
B) I’m in the middle of studying it yet again with several new datasets. Still finding nothing.

2. Local heat contamination or cooling of temperature readings. Extensive investigation has shown that this is a widespread problem. Its overall extent and effect is highly uncertain.

A) Actually, both studies on microsite effects found NO effect.
B) What is missing is a field study of microsite effects. There is no evidence (controlled tests) that microsite contamination is anything to be concerned about. Lots of speculation about pavement and air conditioners, but NO field study.

3. Other temperature recording station factors, to be identified and explored.

A) Unicorns

4. Adjustments to temperature data, to be identified and explored. There are numerous adjustments made to the raw temperature data. These need to be cataloged, then analyzed for uncertainty.

A) This has been studied to death.
B) go read the literature

5. Homogenization, which assumes that temperature change is uniform over large areas, is a particularly troubling adjustment deserving of special attention.

A) You misuse the term.
B) We empirically know that temperature change is uniform out to certain distances. Heck, ask tonyb about WHY you can use CET as a global proxy.

6. The use of sea surface temperature (SST) proxies in global temperature estimates. Proxies always add significant uncertainty. In the global case the majority of the surface is oceanic.

A) Use any ocean measure you like; the answer is the same.
B) SST is not used as a “proxy” in global indexes.

7. The use of an availability sample rather than a random sample. It is a canon of statistical sampling theory that availability samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.

A) 90% of the variance in temperature is captured by latitude
and elevation.
B) You get the same answer whether you use all stations, or a random sample of stations.
C) Bone up on geostats.

8. Area averaging. This is the basic method used in the surface temperature estimating model and it is a nonstandard statistical method, which creates its own uncertainties.

A) There are three standard methods.
B) IDW, used by GISS and HadCRUT.
C) Thin plate splines, used by CRU and forthcoming.
D) Kriging, used by BE and CW.

“Area averaging” is how we refer to it for folks like you who don’t understand the math.
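Of the gridding methods named in this thread, inverse-distance weighting (IDW) is the simplest to illustrate. Here is a minimal sketch on a flat plane with made-up station values; the real products work on a sphere and handle many complications this toy ignores.

```python
def idw(x, y, stations, power=2):
    """Inverse-distance-weighted estimate at point (x, y).

    `stations` is a list of (sx, sy, value) tuples. Weights fall off
    as 1/distance**power; a station exactly at (x, y) wins outright.
    """
    num = den = 0.0
    for sx, sy, v in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v                       # exact hit: use that station
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Fill an empty grid cell at (0.5, 0.5) from three hypothetical stations.
# All three happen to be equidistant here, so the result is their mean:
obs = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 11.0)]
print(round(idw(0.5, 0.5, obs), 2))        # 11.0
```

Splines and kriging differ in how they choose the weights (kriging fits them to the data's own spatial covariance), but all three produce a weighted average of nearby stations.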

9. Interpolation or in-fill. Many of the area averaging grid cells do not have good temperature data, so interpolation is used to fill them in. This can be done in many different ways, which creates another major uncertainty.

last q: “Plus this is just a region, not a canary.” I don’t really understand what this means — if Arctic ice were to be diminishing, wouldn’t that have (known or unknown) effects on weather elsewhere? And, also, isn’t it worth knowing what’s happening there in any case?

Eliza,
The first example on Arctic Ice is not bad. It would be better if it noted the limitations on what can and cannot be concluded from the statistical inference.
The second example is appalling from a number of perspectives. (a) You cannot infer causation from correlation. This is especially true for time series. Any two time series which are roughly monotonic will show a high Pearson correlation coefficient. If you were, for example, to cross-plot the nominal price of cabbages against temperature over the same time frame you might find a higher correlation coefficient than against CO2, but you could infer nothing from this. In some circumstances, you can infer something from the ABSENCE of correlation, when a model predicts that such correlation should exist.
(b) The underlying physics do not, repeat not, predict a linear correlation between CO2 and temperature. The forcing associated with increasing CO2 varies with the log of the atmospheric concentration, but even a correlation against ln CO2 would not be correct here, because the relationship is defined via an integral equation and is generally non-linear. It is silly to the point of being irresponsible in a teaching module to set up an erroneous physics model for students.
(c) The annual temperature data is auto-correlated and tests positive for a unit root. This requires sophisticated stats tools to avoid spurious correlation. To include a time series with a unit root as an example in a very low-level stats module is once again irresponsible IMO.
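Point (a) is easy to demonstrate with synthetic data: two made-up series that are both roughly monotonic but causally unrelated still show a very high Pearson coefficient.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two unrelated but roughly monotonic series: one linear, one quadratic.
t = list(range(50))
a = [2.0 * i + 1.0 for i in t]        # e.g. a rising price index
b = [0.1 * i * i for i in t]          # e.g. an unrelated rising quantity
print(round(pearson_r(a, b), 3))      # well above 0.95, yet no causal link
```

The high coefficient reflects only that both series trend upward over the same interval, which is exactly the trap the cabbage example describes.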
I hope this helps.

Thank you for looking, too! Two parts of your explanation I did not get, within this part: “but even a correlation against ln CO2 would not be correct here, because the relationship is defined via an integral equation and is generally non-linear.”
1) I don’t know what “a correlation against ln CO2” means, and
2) I don’t know what “via an integral equation and is generally non-linear” means.

I don’t know the full extent of your interest in the Arctic, but to get a complete picture, a review of scientific papers on historical warming periods in that region is necessary. Google scholarly articles on Arctic historical warming or any variant. The oscillations affecting the Arctic are also an interesting topic.

Climatereason, aka, Tony Brown has written posts here in the past on this subject. Very well researched and an enjoyable read.

Hi Eliza,
It would help me a bit to know whether you are trying to learn some statistics, trying to learn some climate science or trying to evaluate the module paper from a professional perspective…
But here goes:-
Whenever someone runs a simple linear “Y on X” regression, he/she is accepting implicitly the validity of certain assumptions. Specifically, he assumes (a) that the X values are independently sampled from a normal distribution and (b) that the Y values are not dependent on previous Y values (autocorrelation). He hypothesises and tests that the variation in the Y values can be explained by a simple linear relationship between the Y and X values, leaving error terms which are normally distributed.

When dealing with regression between two time series, assumptions (a) and (b) are very often not both valid, but they can and should be tested before proceeding to a simple Y on X regression, since this makes a big difference to any statistical inference drawn. (Google “autocorrelation in regression analysis” and “spurious correlation” for examples.) This doesn’t happen in the stats module. It is irresponsible in my mind for a stats module to illustrate how to make a dog’s dinner of what should be a simple example of a useful statistical application!
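As a sketch of the kind of check being described: the lag-1 autocorrelation of the residuals is the simplest diagnostic, and should be near zero if the independence assumption behind the regression holds. The series below is synthetic (a random walk, i.e. a unit-root process), chosen only to show the failure mode.

```python
import random

def ols(xs, ys):
    """Simple Y-on-X least squares; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def lag1_autocorr(es):
    """Lag-1 autocorrelation of a residual series; near zero if the
    residuals are independent, near one if they are strongly persistent."""
    m = sum(es) / len(es)
    num = sum((es[i] - m) * (es[i - 1] - m) for i in range(1, len(es)))
    den = sum((e - m) ** 2 for e in es)
    return num / den

# A synthetic random-walk series regressed on time: the residuals come
# out strongly autocorrelated, a warning against naive inference.
random.seed(1)
y, level = [], 0.0
for t in range(200):
    level += random.gauss(0.0, 1.0)    # unit-root (random walk) process
    y.append(level)
x = list(range(200))
slope, intercept = ols(x, y)
resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
print(round(lag1_autocorr(resid), 2))  # far above zero: assumption violated
```

A formal treatment would use a Durbin-Watson or unit-root test, but even this crude diagnostic flags the problem immediately.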
For the specific example, the physics does not say that temperature should vary linearly with atmospheric CO2 concentration. It says (instead) that the temperature should vary (in a complex way) with the change in the radiative FORCING associated with the change in atmospheric concentration. Numerous empirical numerical experiments have been carried out, which involve varying the well-mixed concentration of CO2 on the globe and then calculating the change in net flux at the top of the atmosphere. The summation (integration) of all of the net flux changes over the globe gives the total instantaneous change in net flux associated with the change in CO2 concentration (the “CO2 forcing”). This then allows the cross-plot of CO2 forcing against CO2 atmospheric concentration. These experiments indicate that the total CO2 forcing varies logarithmically with CO2 concentration. So we do not expect that temperature will vary linearly with CO2 concentration; we expect that it will vary in a more complex way with the logarithm of concentration i.e. with ln(CO2 concentration).
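The logarithmic relationship described above is often summarized by the simplified fit of Myhre et al. (1998), F ≈ 5.35 · ln(C/C0) W/m². A quick sketch; note the coefficient is an approximation fitted to radiative-transfer calculations, not an exact law.

```python
import math

def co2_forcing(c, c0=280.0, k=5.35):
    """Approximate CO2 radiative forcing in W/m^2 relative to a reference
    concentration c0 (ppm), using the Myhre et al. (1998) simplified
    logarithmic fit. Illustrative, not exact."""
    return k * math.log(c / c0)

# Each doubling adds roughly the same forcing (~3.7 W/m^2):
print(round(co2_forcing(560.0), 2))    # 3.71
print(round(co2_forcing(1120.0), 2))   # 7.42, i.e. double, not quadruple
```

This is why regressing temperature against raw CO2 concentration builds the wrong functional form into the model from the outset.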
So why can’t we run a regression of ln(CO2 concentration) against temperature? Even under the simplest of assumptions – single body model with constant feedback term – the heating model (or energy balance model) says that the temperature series over time, T(t), takes the form:-
T(t) = g(t) * Integral from 0 to t of F(s) * h(s) ds, where F(s) in this instance is the variation of the CO2 forcing with time {i.e. varying with ln(CO2 concentration)} and g(t) and h(s) are parameterised exponential functions of time.

For more complicated heating models, the expression becomes an even more complicated convolution integral. This is what I meant when I used the term “via an integral equation”. It is just wrong in physics and mathematics to believe that this complexity can be reduced to the simple regression equation T = a*CO2 concentration + b, or even T = a*ln(CO2 concentration) + b. Outwith some very specific circumstances (specifically a linearly increasing forcing with time), the above integral equation(s) cannot be approximated or simplified to these simple linear regression models.
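The one-box case can be sketched numerically: with C dT/dt = F(t) − λT, temperature is the forcing history convolved with an exponential response, which is what the integral expression above encodes. The parameter values (λ, C) below are illustrative only.

```python
import math

def temperature_response(forcing, lam=1.2, c=8.0, dt=1.0):
    """One-box energy balance model  C dT/dt = F(t) - lam*T,
    integrated exactly over each step assuming the forcing is piecewise
    constant. Equivalent to convolving the forcing history with an
    exponential impulse response. lam in W/m^2/K, c in W yr/m^2/K,
    dt in years; all values illustrative, not calibrated.
    """
    decay = math.exp(-lam * dt / c)
    temps, t = [], 0.0
    for f in forcing:
        t = t * decay + (f / lam) * (1.0 - decay)
        temps.append(t)
    return temps

# A constant step forcing relaxes toward the equilibrium F / lambda:
step = [3.7] * 100                     # ~2xCO2 forcing held fixed, W/m^2
t = temperature_response(step)
print(round(t[-1], 2))                 # 3.08, i.e. 3.7 / 1.2
```

The point of the sketch: T at any time depends on the whole forcing history filtered through the response function, not on the instantaneous CO2 value, so no simple T-vs-CO2 regression can recover the physics except in special cases.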

I thank you both so much. Regarding, what am I trying to learn about: I’m trying to get, I guess, to the heart of the science, for my own edification, and so I can think clearly about climate science (and policies). So I think I need to understand basic statistics better — and your response, kribaez, is most helpful. Thanks again. And regarding the Arctic, I just think it’s a good place to start to learn about climate work, and I’m fascinated by polar regions!

I applaud your interest in this issue. I think as you dig deeper with your own independent research you will find that it is a lot more complex and nuanced than is generally believed. That is why I started to follow it 8 years ago. Every time I read about a claim, I would dig out as much as I could, and oftentimes learned there were many more questions than answers. That applies to the polar regions, sea level rise, droughts, historical temperatures (historical defined as going back before Cher’s first album), tornadoes, etc. Oftentimes the hysterical headlines are not supported when looking at the scientific papers. As of today, the unknowns outweigh the knowns significantly.

A little circumspection never hurts when dealing with this subject, thus my suggestion to learn about previous warm periods in the Arctic.

He might be right, but he knows he can’t prove it. And I’m 100% certain he will not be held accountable if he’s wrong. He’s an evangelical Christian and, as we know, all sins are forgiven and washed clean in the blood of Christ.

Pruitt is himself a staunchly committed, conservative Christian. He is a deacon at First Baptist Church in Broken Arrow, Oklahoma, and on the board of trustees of the Southern Baptist Theological Seminary, part of the conservative Southern Baptist denomination.

Nor will the alarmists be held accountable for being wrong, which is much more likely. Religion has nothing to do with it. It is wonderful to see a skeptic running EPA. Note that the article cites an alarmist EPA website. That needs to go away or be rewritten.

Ridiculing Christians, or members of any other group that believes in God, is a sign of profound ignorance. This makes you a bigot, not a rational critic.

I don’t know much about Scott Pruitt, but I’ve seen no sign that being a Christian (assuming that he is) makes him less qualified to be EPA Administrator. From what I can see, he’s not an ideologue, and he’s made his reputation defending his native State from an overbearing EPA.

If you want to be critical of Trumpites, there’s plenty of ripe material. Attacking their religion is stupid.

“I appeared before the U.S. Senate Committee, last year I believe it was, sometime earlier in the year, about the Clean Power Plan challenge that we were part of leading,” Pruitt said in an April 2016 interview on the show. “And this senator from Rhode Island [Sheldon Whitehouse] during the midst of the testimony was just — it is just a religious belief for him and for others.”

scraft1,
Civilization and technology march forward, but for some reason we can’t shake our reliance on mystical deities invented before the Bronze Age.

I think all religions are scams, hoaxes or conspiracies with the exception of Buddhism and Confucianism. I also believe most religions can serve a useful function in establishing a moral framework that can promote compassion and empathy which can suppress some less desirable features of human behavior like stealing, murder and lying. It’s not a black and white issue but should not influence the science of physics. Pruitt might make a good administrator for the GSA or the VA but is the wrong man to look after the environment.

It may be too soon to tell for him, but every major scientific society, government and industry has recognized that emissions need to slow down, and that is why we have Paris. Even Exxon has statements on emissions that are far ahead of Pruitt. When a person even trails the fossil fuel industry on climate change, that is something. http://corporate.exxonmobil.com/en/current-issues/climate-policy/climate-perspectives/our-position
“The risk of climate change is clear and the risk warrants action. Increasing carbon emissions in the atmosphere are having a warming effect. There is a broad scientific and policy consensus that action must be taken to further quantify and assess the risks.”

Wojick says: 7. The use of an availability sample rather than a random sample. It is a canon of statistical sampling theory that availability samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.
Mosher says: A) 90% of the variance in temperature is captured by latitude
and elevation.
B) You get the same answer whether you use all stations, or a random sample of stations.
C) Bone up on geostats.

Wojick responds: Your response has nothing to do with my point. (I note in passing that if there is any variance in the changes among the stations, which there certainly is, then getting the same result from random samples of these stations is statistically impossible. Random samples of stations should give you a distribution around the all station value.)
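The parenthetical point is easy to check with synthetic data: means of random station subsets scatter around the all-station mean rather than reproducing it exactly. The station values below are made up solely for the demonstration.

```python
import random

random.seed(42)
# Synthetic "all stations": 1000 hypothetical temperature anomalies
# drawn around a true value of 0.5.
stations = [random.gauss(0.5, 1.0) for _ in range(1000)]
all_mean = sum(stations) / len(stations)

# Means of many random 100-station subsamples.
sample_means = []
for _ in range(500):
    sub = random.sample(stations, 100)
    sample_means.append(sum(sub) / len(sub))

spread = max(sample_means) - min(sample_means)
print(round(all_mean, 2))   # close to 0.5
print(round(spread, 2))     # nonzero: the subsample means form a distribution
```

Mosher's claim is best read as saying the sampling distribution is narrow (the answers agree within uncertainty), while Wojick's point is that a distribution exists at all; both are visible in this sketch.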

Here are some examples of what statistical science websites have to say about convenience samples such as NOAA and the other surface statistical models use, including BEST. In this case the population being sampled is the Earth’s actual temperature at all points, or the contiguous US, the spatial average of which is what is being estimated. (This is something I recently wrote elsewhere.)

Convenience sampling
Note that the collection of thermometer readings which are used are what is called in statistical science a “convenience sample” or an “availability sample.” This means that the data was not designed as a representative sample of the surface in question; rather it is just what was available.

Statistical science is very clear that a convenience sample does not provide an accurate estimate. Here are some examples from several statistical science websites:

“As indicated by their name, convenience samples are definitely easy to obtain. There is virtually no difficulty in selecting members of the population for a convenience sample. However, there is a price to pay for this lack of effort: convenience samples are virtually worthless in statistics.” (https://www.thoughtco.com/what-is-a-convenience-sample-3126358)

Summary: As you can see this is nothing like a normal average. The statistical model is very complex, with many alternative possible ways of taking each step. The data is sparse, often of a proxy nature and a convenience sample. This is actually a crude estimating technique not a statistical sampling method. In no case is it an accurate measurement of global or US temperature.