Update on the strength of aerosol forcing

Increasing evidence of small aerosol forcing supports the importance of internal variability in explaining interhemispheric differences in temperature variability.

In the CMIP5 climate models we find two strong anthropogenic forcings influencing the climate: the warming greenhouse gases (GHG) and the cooling aerosols that reflect sunlight. The difference between the strengths of these forcing agents is a key factor when estimating the sensitivity of the real climate system to GHG, most importantly to carbon dioxide. In the last few weeks several papers have been published that provide important insights for narrowing the range of the aerosol impact on the real climate system.

In a recent paper, Chung and Soden (hereafter CS17) examine the difference in temperature anomalies (and also in precipitation) between the hemispheres of the Earth. The observations show some shifts since the beginning of the 20th century, and from about 1980 on a steady warming trend of the NH relative to the SH.

Fig.1: The interhemispheric temperature development in observations (black) and in CMIP5 models (red, with the interquartile range in green and blue). Source: Fig. 5 of CS17

The authors attempt to identify the source of the model spread. They focus on the fact that anthropogenic aerosol emissions occur mainly in the NH and that aerosols have a short atmospheric lifetime; hence most of the aerosol forcing also arises in the NH. That NH-dominated aerosol forcing can be expected to cause greater cooling in the NH than in the SH, whereas GHG forcing is quite evenly distributed between the hemispheres.

Using climate models, they find that the difference of the (NH-SH) temperatures is most likely explained by a strong forcing due to the aerosol-cloud interaction. Their conclusion:

“Models with larger cloud responses to aerosol forcing are found to better reproduce the observed interhemispheric temperature changes and tropical rain belt shifts over the twentieth century, suggesting that aerosol–cloud interactions will play a key role in determining future interhemispheric shifts in climate.”

The aerosol-cloud interaction in models produces a much stronger (negative) aerosol forcing through additional dimming of the incoming sunlight. The physical hypothesis: clouds have an increased albedo and a longer lifetime when they are influenced by anthropogenic aerosols. An earlier paper questions whether the conclusions that CS17 draw from model research are transferable to the real world. Its author asserts, in relation to observations of the real climate system:

“The fact that cloud albedo is not significantly larger, or even smaller, in the Northern Hemisphere is an indication that the aerosol is not a first-order factor for cloud properties.”

Two brand-new papers shed new light on the model/real-world discrepancy, at least in the field of aerosol-cloud interactions. Malavelle et al. 2017 (for a summary see also this post) tried to estimate the aerosol-cloud effect in the real world with the help of a volcanic eruption which produced an aerosol impact very similar to expected anthropogenic SO2 sources. An accompanying comment in “Nature” by the well-known cloud and aerosol expert Bjorn Stevens captures the point in its title: “Clouds unfazed by haze”.

The hypothesized aerosol-cloud interaction turns out to be small, so the observed aerosol forcing is substantially smaller than in most models.

Stevens clarifies:

“Until now, however, the biggest surprise has been how hard it is to find compelling physical evidence for strong aerosol forcing.”

He underlines the big problems some climate models have with this fact. They often compensate a high sensitivity to GHG with a strong negative forcing due to aerosol haze, thus enabling them to fit the observed temperature record during the “tuning period”.

One of the main pillars of this “compensating haze” is no longer available. Stevens concludes:

“Unless this changes, in so far as aerosols are concerned, it seems that there is little to fear from clearing the air.”

That is, we won’t get a strong temperature increase when we globally reduce air pollution in the future; we don’t have to fear this. The (negative) aerosol forcing is not big enough.

To return to CS17: the authors attempt to account for the observed interhemispheric temperature changes by a phenomenon that is now known to be small.

So, while the findings of CS17 are very likely true in relation to global climate models, their conclusion that “aerosol–cloud interactions will play a key role in determining future interhemispheric shifts in climate” in the real world is completely unjustified.

The approach of CS17 illustrates the problem of many climate models: they have to invoke a hypothetical forcing to explain real-world observations, in this case the record of the interhemispheric temperature difference (IHD). So let’s have a look at another explanation:

Fig.2: The IHD (regressed on the IPCC AR5 forcing data, blue) and the AMO (SST 70W…7W; 25…60N, also forcing-regressed, red). Both lines are regression residuals with an 11-year loess smoother.

The forcing-regressed IHD has a pattern very similar to that of the AMO, which is a product of the climate system’s internal variability.
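For readers who want to reproduce the mechanics behind Fig.2, here is a minimal sketch with synthetic data. The forcing ramp, the AMO stand-in and all coefficients are invented, and a simple 11-point running mean stands in for the 11-year loess smoother; with real data one would substitute the AR5 net forcing and the observed (NH-SH) series:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2017)
forcing = 0.01 * (years - 1900)                             # stand-in for a net forcing ramp
amo_like = 0.2 * np.sin(2 * np.pi * (years - 1900) / 65.0)  # ~65-year oscillation
ihd = 0.3 * forcing + amo_like + rng.normal(0, 0.05, years.size)  # synthetic NH-SH series

# Step 1: regress the IHD on the forcing series
slope, intercept = np.polyfit(forcing, ihd, 1)

# Step 2: the residuals are the part the forcing cannot explain
residuals = ihd - (slope * forcing + intercept)

# Step 3: smooth the residuals; a running mean stands in for the loess of Fig.2
kernel = np.ones(11) / 11
smoothed = np.convolve(residuals, kernel, mode="same")
```

The smoothed residual curve then plays the role of the blue line in Fig.2, to be compared against a forcing-regressed AMO index prepared the same way.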

In the case of the IHD a forcing is also at work, but it is not the aerosols: the land areas, which are mostly located in the extratropical NH, show more warming (not due to their lower heat capacity versus the oceans of the SH, but due to the lower available moisture over land than over the oceans). Moreover, an important land mass of the SH, the Antarctic, is decoupled from the rest of the climate system, and other phenomena there lead to little or no warming.

Accordingly, under every forced warming one would expect the NH to warm faster than the SH.

The rest (the IHD residuals shown in Fig.2) is internal variability. It very likely comes from changes over time in the meridional heat transport of the real climate system, not from “red noise”, as this current paper shows again.

The CMIP5 models struggle with the internal variability, and some of them replace it with a hypothetical, in reality non-existent forcing. This leads to “running hot” as soon as aerosol emissions start rising more slowly than GHG concentrations. Even where models do produce AMO-like multidecadal fluctuations during their historical simulations, they are unlikely to be in phase with the real-world AMO, as would be necessary when selecting a defined “tuning period”. This leads to increasing positive deviations of the model mean when it comes to estimating future temperatures.
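The compensation described above can be caricatured with a toy zero-dimensional energy balance. Every number below is invented for illustration (the ramp rates, sensitivities and aerosol scalings are not CMIP5 values); the point is only to show why two differently tuned models can both match the past and still diverge in projections:

```python
import numpy as np

years = np.arange(1900, 2101)
x = years - 1900
ghg = 0.02 * x                          # invented GHG forcing ramp (W/m^2)
aero = -0.005 * np.minimum(x, 100)      # invented aerosol ramp, flat after 2000

def warming(sens, aero_scale):
    # equilibrium response of a zero-dimensional model: T = sens * F_net
    return sens * (ghg + aero_scale * aero)

low = warming(0.5, 0.40)    # low sensitivity, weak aerosol offset
high = warming(0.8, 1.75)   # high sensitivity, strong aerosol offset

hist = years <= 2000
# both variants match over the "tuning period" ...
mismatch_hist = np.abs(low[hist] - high[hist]).max()
# ... then diverge once aerosols stop growing while GHG forcing keeps rising
divergence_2100 = high[-1] - low[-1]
```

In this toy setup the two variants are indistinguishable over the historical ramp but differ by several tenths of a degree by 2100, which is the "running hot" pattern in miniature.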

298 responses to “Update on the strength of aerosol forcing”

“Until now, however, the biggest surprise has been how hard it is to find compelling physical evidence for strong aerosol forcing.”

It should be emphasised (which this article does not make clear) that this is discussing low-level aerosols, not stratospheric pollution or volcanoes.

I did a somewhat similar look at Mt Pinatubo and found STRONGER volcanic forcing than the more recently used values which were adopted in order to reconcile computer model output with the climate record.

My result was close to the volcanic forcing used BEFORE they started tweaking it and relied instead on basic physics modelling.

This would similarly lead to the conclusion of a lesser sensitivity to both stratospheric aerosols and CO2 and could possibly address the recent hiatus issues.

Since Gavin’s best guess is that anthropogenic forcing is responsible for 110% of current warming, I imagine there would be at least a 10% reduction. CO2 forcing makes up about half of CO2-equivalent forcing, so ECS could be less than half of estimates, or about what energy-balance models indicate.

So if you go way back to the Charney compromise you could say that Manabe was close and Hansen grossly over estimated CO2 impact. Averaging scientific guesses might not be very scientific.

“You do realize you’re suggesting that aerosols are at least neutral and perhaps negative, Cap’n?”

I realize that “normal” aerosol levels are unknown and that aerosol forcing was used to fit models to estimated past climate squiggles, both by dimming and by indirectly influencing clouds, which was also assumed to be positive forcing. The darkening aspect of aerosols, black carbon and dust, was also underestimated, which may have shifted “preindustrial” back to 1700 if you follow PAGES2k +.

Since ECS is defined as temperature increase due solely to CO2 doubling, I believe “at least” 10% is pretty conservative. However, I am not in charge of re-running all climate models with all the revised estimates including ERSSTv >3.

> I am not in charge of re-running all climate models with all the revised estimates including ERSSTv >3.

You’re not in charge of anything, Cap’n. That’s for sure.

Just in case you missed the abstract of one of FrankB’s citations:

[1] During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution. The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters. Here we provide insights into how climate model tuning is practically done in the case of closing the radiation balance and adjusting the global mean temperature for the Max Planck Institute Earth System Model (MPI-ESM). We demonstrate that considerable ambiguity exists in the choice of parameters, and present and compare three alternatively tuned, yet plausible configurations of the climate model. The impacts of parameter tuning on climate sensitivity was less than anticipated.

“A large part of the variability in inter-model spread in 20th century forcing was further found to originate in different aerosol forcings. It seems unlikely that the anti-correlation between forcing and sensitivity simply happened by chance. Rational explanations are that 1) either modelers somehow changed their climate sensitivities, 2) deliberately chose suitable forcings, or 3) that there exists an intrinsic compensation such that models with strong aerosol forcing also have a high climate sensitivity. ”

Hmmm, looks like model world has an issue or two. Now how does your highlighted sentence falsify my estimate? Did the modelers change their sensitivities, which are supposed to be an emergent property of the models? Did they deliberately choose suitable forcings to get the desired answer? Perhaps there is intrinsic compensation in the models, or in the real world, where strong aerosol forcing indicates a higher climate sensitivity?

Since choices 1 and 2 shouldn’t exist (that would be cheating), perhaps lower aerosol forcing might result in at least a 10% reduction.

Of course if the modelers are cheating that would be much more entertaining :)

btw willard, past volcanic aerosol forcing appears to have a greater impact on ocean heat content than on “surface” temperature, which means multi-century lags in ocean heat recovery could prove to be a problem for guessing what preindustrial really should be. It isn’t a bounded problem until things settle into a “normal” regime.

There’s no need to falsify something you simply pull out of your cap, Cap’n.

The highlighted sentence may indicate that your “calibrate” in scare quotes is only that, scare quotes.

If you want some falsification, you might wish to revise your smug “averaging scientific guesses might not be very scientific”:

In further exploring the ways to improve the results a new technique called the “surprisingly popular” was developed by scientists at MIT’s Sloan Neuroeconomics Lab in collaboration with Princeton University. For a given question, people are asked to give two responses: What they think the right answer is, and what they think popular opinion will be. The averaged difference between the two indicates the correct answer. It was found that the “surprisingly popular” algorithm reduces errors by 21.3 percent in comparison to simple majority votes, and by 24.2 percent in comparison to basic confidence-weighted votes where people express how confident they are of their answers and 22.2 percent compared to advanced confidence-weighted votes, where one only uses the answers with the highest average
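The description above compresses the method somewhat; as published by the MIT/Princeton group (Prelec et al.), the rule picks the answer whose actual vote share most exceeds its predicted popularity. A minimal sketch, with made-up numbers:

```python
from collections import Counter

def surprisingly_popular(votes, predictions):
    """Return the answer whose actual vote share most exceeds its predicted share."""
    n = len(votes)
    actual = Counter(votes)
    # average predicted popularity of each option across all respondents
    avg_pred = {opt: sum(p.get(opt, 0.0) for p in predictions) / n for opt in actual}
    return max(actual, key=lambda opt: actual[opt] / n - avg_pred[opt])

# Toy version of the classic example: "Is Philadelphia the capital of Pennsylvania?"
# Most respondents answer "yes", and nearly everyone predicts "yes" will be popular,
# so the minority answer "no" is the surprisingly popular (and correct) one.
votes = ["yes"] * 6 + ["no"] * 4
predictions = [{"yes": 0.9, "no": 0.1}] * 6 + [{"yes": 0.8, "no": 0.2}] * 4
winner = surprisingly_popular(votes, predictions)
```

Here a simple majority vote would return “yes”, while the surprisingly-popular rule returns “no”.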

“The highlighted sentence may indicate that your “calibrate” in scare quotes is only that, scare quotes.”

You don’t think 1) and 2) are worthy of scare quotes? “Models that have a high climate sensitivity tend to have a weak total anthropogenic forcing, and vice-versa. A large part of the variability in inter-model spread in 20th century forcing was further found to originate in different aerosol forcings. It seems unlikely that the anti-correlation between forcing and sensitivity simply happened by chance. Rational explanations are that 1) either modelers somehow changed their climate sensitivities, 2) deliberately chose suitable forcings, or 3) that there exists an intrinsic compensation such that models with strong aerosol forcing also have a high climate sensitivity.”

You forgot to mention that this ain’t the start of the sentence, Cap’n:

One of the few tests we can expose climate models to, is whether they are able to represent the observed temperature record from the dawn of industrialization until present. Models are surprisingly skillful in this respect [Räisänen, 2007], considering the large range in climate sensitivities among models – an ensemble behavior that has been attributed to a compensation with 20th century anthropogenic forcing [Kiehl, 2007]: […]

You also fail to mention what follows your quote:

Support for the latter is found in studies showing that parametric model tuning can influence the aerosol forcing [Lohmann and Ferrachat, 2010; Golaz et al., 2011]. Understanding this complex is well beyond our scope, but it seems appropriate to linger for a moment at the question of whether we deliberately changed our model to better agree with the 20th century temperature record.

How you and FrankB try to exploit a study that mentions the word “tuning” may never cease to amaze me, Cap’n. At least FrankB had the courtesy of dogwhistling it, while you go for chopped scare quotes.

captdallas: My reference was in relation to this sentence:
“A longer simulation with altered parameter settings obtained in step 1 and observed SST’s, currently 1976–2005 from the Atmospheric Model Intercomparison Project (AMIP), is compared with the observed climate.”
from chapter 2.1 of this paper.

Willard, exactly how much is “less than anticipated”? “One of the few tests we can expose climate models to, is whether they are able to represent the observed temperature record from the dawn of industrialization until present.” How accurate is the surface temperature record at the “dawn” of industrialization? Exactly when did industrialization “dawn”? How skillful is “surprisingly”?

I was referencing the point made in the paper that models with the highest sensitivity tend to have the greatest aerosol forcing, and if you glance at the models with the highest sensitivity, they tend to be 10% to 15% higher than the mean, not the warm and fuzzy nonsense you decided to highlight.

I hope you appreciated FrankB’s grooming above. Please, focus. If stoopid modelerz “often compensate a high sensitivity versus GHG with a high negative forcing due to aerosol haze” but get surprisingly good sensitivity estimates nevertheless, what do you think this implies regarding what FrankB’s dogwhistling?

Don’t pretend you know more than FrankB. Let him tell you. Then report.

“but get surprisingly good sensitivity estimates nevertheless.” Actually it would be consistent sensitivity estimates; “good” might be in the eye of the beholder. In world 3 they had a 0.5 to 1 K increase in sensitivity just by adjusting the entrainment rate of deep convection and the homogeneity of liquid clouds. That is actually a fairly large percentage change, unsurprising to some, perhaps surprising to others. Lots of “feelings” in climate science papers, it seems.

captdallas, it’s not my role to try to stop your engagement with “Willard” if it’s raining on your side :) . Anyway, I think this has no merit at all, because he takes every advantage he can get, even if it comes from an unphysical guy who thinks that GHG has no influence at all on the GMST. It’s a shame where it drives the “communicators”: “Your enemy is my friend”. An endgame?

U.S. coal exports are up 60% so far this year. Up very big to UK and greenie France. Number of U.S oil rigs in operation doubled from last year. Stock market still hitting new highs almost daily. Last quarter GDP growth 2.6%, compared with the Obama pathetic average of 1.6%. We are on a roll. Studies suggest that the climate consensus crowd can’t take much more of this. They were hoping to collapse the economy to slow CO2 increases and save the planet. Why are we talking about aerosols?

Peter, if the aerosol forcing (in this case the aerosol-cloud interaction) is lower than estimated, then the sensitivity to GHG declines, because the counterpart of the GHG forcing is smaller and the observed temperature record becomes much harder to explain with a high GHG sensitivity. How much?
The quantities are still in question.

Peter: I think the ECS could be at the lower end of the given IPCC bandwidth in the light of some recent research results about clouds. We have some evidence that the “Iris” is real ( see http://onlinelibrary.wiley.com/doi/10.1002/2016JD025827/abstract ), which lowers the sensitivity to GHG, and some arguments for a high sensitivity (“aerosol-cloud interactions”) seem not to be justified; see my post.

I understand that central estimates from the GCMs are around 3.0C or 3.4C and from observational data (e.g., Nic Lewis) around 1.65C to 2.0C. Captdallas2 interprets the new paper you refer to in your post as saying the ECS may be as low as half of something. If he means half of the GCM estimates, which is what I expect he means, then if this new paper is correct and the GCMs are corrected accordingly, it could mean the ECS estimate from GCMs would become close to the ECS estimates from the observational estimates.

What I am trying to ask is whether my understanding is roughly correct – i.e. in the ball park.

By the way, my interest is in policy and estimates of the economic impacts of climate change, not in the science. So I need clear explanations appropriate for a non-specialist.

The discussion of GHGs and aerosols is focusing on LWIR, but the larger issue is global brightening and dimming of the solar SW. There too the aerosol factor is second order compared to cloud reflectivity. ETH Zurich and Martin Wild are suggesting a dimming period ahead. https://rclutz.wordpress.com/2017/07/17/natures-sunscreen/

Re iris, these authors used different satellites and reached the same conclusions. Warmer SST in the tropics leads to reduced high cloud cover in most climate models; for a given model, the more this high cloud cover is reduced, the greater the increase in outgoing IR. But the effect is much stronger in the real world than in the models. https://www.nature.com/articles/ncomms15771

Not sure what you mean by ‘internal’; natural variations can also constitute forcings, most obviously because changes in the clouds (and to a lesser degree water vapor) change the Earth’s energy budget. Other smaller factors are ozone, vegetation, etc.

If climate sensitivity to CO2 is low, in principle it should also be low to natural changes in clouds. But as we have very little idea of the magnitude of these natural forcings, because measurements of the clouds’ radiative effect go back less than 30 years, this factoid by itself says almost nothing about the magnitude of natural variability.

I’ve sometimes heard this argument applied to paleo climates: ‘If temperatures varied then the climate must have been very sensitive!’ It doesn’t follow because we have little idea of the size of the natural forcings that caused these changes. There are no paleo records of ozone, no records of methane going back millions of years, and of course there were no satellites to track the clouds.

[JCH] If mid-century cooling was caused by the PDO, as an example, then couldn’t that mean the climate is at least as sensitive to CO2 as the consensus currently thinks, and perhaps even more sensitive?

[RichardA] True! Understanding what’s aerosol forced or unforced variability in the historical record is problematic particularly the long time scale internal variability of which understanding is low and which climate models may struggle to represent.

“If he means half of the GCM estimates, which is what I expect he means, then if this new paper is correct and the GCMs are corrected accordingly it could mean the ECS estimate from GCMs would become close to the ECS estimates from the observational estimates.”

Yep, half would be about 1.5 C, which is the low range of estimates, as indicated by energy-balance models. Just a guess though, since everything seems to be constantly changing.
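For readers wanting the arithmetic behind such energy-balance figures: the standard relation is ECS = F2x * dT / (dF - dQ), where dQ is ocean heat uptake. The inputs below are round illustrative values, not anyone’s published best estimates; they only show how a weaker aerosol forcing pushes the inferred sensitivity down:

```python
F2X = 3.7  # forcing from a CO2 doubling, W/m^2

def ecs(dT, dF, dQ):
    # energy-balance sensitivity: observed warming scaled to a doubling,
    # with ocean heat uptake dQ subtracted from the net forcing dF
    return F2X * dT / (dF - dQ)

# illustrative inputs: dT in K, dF and dQ in W/m^2
strong_aero = ecs(dT=0.85, dF=2.0, dQ=0.55)  # strongly negative aerosol forcing
weak_aero = ecs(dT=0.85, dF=2.5, dQ=0.55)    # aerosol forcing weakened by 0.5 W/m^2
```

With these made-up numbers the inferred ECS drops from about 2.2 to about 1.6 when the aerosol offset shrinks by half a watt per square metre, which is the direction of the argument in the post.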

Peter: the AR5 estimate of ECS is: “there is high confidence that ECS is extremely unlikely less than 1°C and medium confidence that the ECS is likely between 1.5°C and 4.5°C”. As I think it’s in the lower end of this bandwidth, I would estimate it near 2 or below, but not below 1.5. This is well in the ballpark of the IPCC, but it’s very difficult to fix on a specific value. I think more research is needed in this field, especially on clouds. The new findings are promising; this was the reason for posting.

On another topic, I appreciate Captain Dallas’ replies to Willard. I support your derogatory remarks about Willard, but I think the Captain’s responses are worth his efforts and our reading. Willard gives new meaning to the phrase “reductio ad absurdum”: he posts and posts until he is writing absurdities, but they are still worth countering, in my opinion.

matthew: The Mauritsen et al. paper is interesting, and the new aerosol issues which were the content of this blog post could shed a lot of new light on its findings.
On the Willard topic: if I made some derogatory remarks I apologise; please note that I’m not a native English speaker, so perhaps some remarks were open to misunderstanding for that reason.

Harry, one of the co-authors of Malavelle et al. (2017) writes in the linked post (at Ed Hawkins’ blog): “Nevertheless, the results present tantalising evidence that the cloud-aerosol effect on climate is smaller than previously thought.”
What more do you need?

One limitation of the current analysis is that the expected aerosol cloud brightening signal is difficult to discern above the substantial month to month fluctuations in weather patterns when considering satellite data measuring the total sunlight reflected from the planet […]

These crudely estimated surface temperatures are not observations. They are the output of complex statistical models operating on convenience samples. Thus an alternative explanation is that these temperatures are incorrect.

I call this the “Fallacy of the Mean” because they are treating the mean value of a sample as a fact about the population being sampled. Statistical sampling theory says that the mean is very unlikely to be true. For example, if you take the 33% confidence interval, then the odds are two to one that the true value lies outside this interval, but the interval is centered on the mean.

Mind you, to my knowledge there is no way to even calculate confidence intervals for these global temperature estimates. That is how strange these surface statistical models are. So here we find climate science focused on elaborate explanations of something that is probably not true.
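The “two to one” point above is plain interval arithmetic: a 33% interval is centred on the mean, yet by construction covers the truth only about one time in three. A stdlib-only sketch with a made-up sample:

```python
import math
import statistics
from statistics import NormalDist

sample = [0.42, 0.51, 0.38, 0.47, 0.55, 0.44, 0.49, 0.40]  # made-up anomaly sample
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))    # standard error of the mean

# half-width multiplier enclosing the central 33% of a normal distribution
z = NormalDist().inv_cdf(0.5 + 0.33 / 2)   # roughly 0.43 standard errors
low_b, high_b = mean - z * sem, mean + z * sem
```

A 33% interval is thus a very narrow band around the sample mean, which is the commenter’s complaint in numerical form; it says nothing by itself about whether such intervals can even be constructed for the global temperature products.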

“Unless this changes, in so far as aerosols are concerned, it seems that there is little to fear from clearing the air”, and “That is, we won’t get a strong temperature increase when we globally reduce the air pollution in the future, we don’t have to fear this”

What a piece of garbage!

We ABSOLUTELY do have to fear more cleansing of the air. ALL of the anomalous warming that has occurred to date has been due to the reduction in the amount of dimming anthropogenic SO2 aerosols in the troposphere (from a peak of approximately 130 Megatonnes in 1975 to 101 Megatonnes in 2011).

You CANNOT reduce dimming SO2 aerosol emissions without causing temperatures to rise, either globally, or locally…

Yeah actually you can reduce aerosols w/o effect if the system has negative feedbacks that make it self-regulating.

Consider for a moment that clouds act like a thermostat. Increase forcing at the ocean surface and more sunlight-reflecting clouds are produced which negates the increased forcing. The opposite happens when ocean surface forcing is decreased.

A thermostat doesn’t work instantly so there’s some hysteresis involved i.e. overshoot and undershoot.
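That lagged-thermostat idea can be caricatured in a few lines. This is purely an illustration of the commenter’s hypothesis, not an established result; all coefficients are invented, and in this toy version the cloud term offsets only part of the imposed forcing at equilibrium:

```python
import numpy as np

steps = 300
forcing = 1.0                  # step increase in surface forcing (arbitrary units)
T = np.zeros(steps)            # temperature anomaly
cloud = np.zeros(steps)        # cloud reflection term that lags the temperature
for t in range(1, steps):
    cloud[t] = cloud[t - 1] + 0.1 * (T[t - 1] - cloud[t - 1])  # clouds chase T slowly
    T[t] = T[t - 1] + 0.2 * (forcing - cloud[t] - T[t - 1])    # clouds oppose forcing
```

Because the cloud term lags, the temperature first overshoots its final value and then settles back, which is the hysteresis (overshoot and undershoot) mentioned above.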

“I have seen no evidence that warming does not occur whenever SO2 emissions are reduced.”

I feel your pain. Evidence is in pretty short supply in the climate change narrative.

“Warming due to their removal appears to be completely unaffected by cloud cover, as would be expected”

If that made any sense at all it would still be a non sequitur. The postulate was that clouds are an independent regulatory mechanism that respond to increases in surface forcing from any source with an equal and opposite delayed reaction.

Every American business recession since 1850 (33 of them) has resulted in a temporary increase in average global temperatures, due to reduced SO2 emissions.

Reduced SO2 emissions due to Clean Air efforts also cause average global temperatures to rise, at the rate of 0.02 deg C for each net Megatonne of reduction in global SO2 emissions.
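Taking the commenter’s own figures at face value (not endorsing them), the claimed rate applied to the 1975-to-2011 decline stated earlier in the thread implies roughly 0.58 °C:

```python
so2_1975 = 130.0   # Mt: the commenter's stated 1975 peak
so2_2011 = 101.0   # Mt: the commenter's stated 2011 value
deg_per_mt = 0.02  # deg C per net Mt of SO2 reduction, as claimed above

implied_warming = (so2_1975 - so2_2011) * deg_per_mt  # about 0.58 deg C
```

Whether that number means anything depends entirely on the claimed rate, which is the point in dispute.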

And, of course, temperatures always rise as cooling volcanic SO2 emissions settle out of the atmosphere.

Thus, there is abundant evidence that the reduction of SO2 aerosol emissions will cause temperatures to rise, and zero evidence to the contrary, as I had maintained.

Although clouds do have local effects, average global temperature projections/predictions based solely upon the net global reduction in SO2 emissions are so accurate that clouds really have no detectable effect.

Anthropogenic SO2 aerosols from intermittent sources have a short lifetime in the air, on the order of a few days or weeks.

However, SO2 emissions from the majority of anthropogenic sources, such as from power plants, foundries, factories, vehicle exhausts,etc. are constantly being renewed, so that they have an essentially infinite lifetime, ending only when they are either modified to reduce emissions, or are shut down.

That this is true is proven by the temporary increase in average global temperatures during a recession, where the amount of SO2 in the air is reduced because of less industrial activity, and insolation increases due to the cleaner air.

Burl, with this sentence from your post:
“With all of the warming accounted for by the reduction in SO2 emissions, there can never have been any additional warming due to Carbon Dioxide (CO2) or other “greenhouse” gasses.”
I stopped reading further, sorry.

PS: This sentence:
“Much has been written about the role of Carbon Dioxide (CO2) and other “greenhouse” gasses in causing the anomalous global warming that has occurred between 1975 and the present. However, recent observations conclusively prove that none of the warming can have been due to “greenhouse” gasses”
is one more reason to stop reading your treatise. Try it with physics and you’ll stop estimating ECS near zero. You don’t know it better than Bjorn, indeed!

Willard, please stay on topic and try to avoid any ad homs, just like this one: “And of course FrankB still fails to properly identify himself in the comments, here and at Ed’s.” I think it’s clear that I’m the author of the post; anyway, I don’t know this “Frank” at Ed’s. It doesn’t matter as long as one is interested in the facts, or the other way around: whoever is more interested in names is not interested in the facts, just like you, as it seems to me.
If “Burl” attacks a cited sentence of Bjorn Stevens it would be nice if he had read his article (linked in the article), and this seems not to be the case.
I won’t feed you anymore.

[I]n the light of the headline of this blog post the Schmidt et al. claim indeed gains some more importance. There are: 1. aerosols, whose forcing could be reduced by about 30-50%; 2. volcanoes: we don’t see a remarkable change in this forcing during the last 15 years; 3. solar inputs: according to the AR5 forcings this is only about 3% of the aerosol forcing in this time, more or less negligible.

And IF the aerosol forcing is indeed reduced, then you have to lower TCR/ECS to replicate the observed temperatures. There seems to be no other way.

I am not surprised that Frank does not take you seriously, willito. He knows you are just trying to be annoying.

Actually, you got it bass ackwards. The Donald was helping the GOP clowns try to pass their bill. They gleefully passed it before he came along. So, now we got a trio of turncoat clowns, who campaigned for it and voted for it before, but won’t vote for it now. Not poor old Mitch’s fault. I am in favor of letting Obamacare continue on the death spiral path it’s on. Let the dims keep it and watch how it turns out. If that old slimeball McCain would hurry up and die, we could get a replacement and have enough votes. Why should you care, willito? Don’t you live in Venezuela, where they trade oil for slave Cuban doctors?

You must live on another planet, willito. The U.S. has universal healthcare. I grew up on welfare and government cheese in the projects and I never lacked healthcare.

Do you know what “commots” are, willito? Before food stamps we would stand in line to get powdered milk, cheese, peanut butter, pinto beans etc. from the surplus food the government bought off the politically powerful farmers. They called it the “commodity” program (farmer welfare). We called it commots. How do you think all us hood rats got to be so big and strong? Look at my contemporary OJ and most of the NBA 7 footers.

We went to a clinic or the General Hospital , stood in line and got “free” government health care. Just like the folks in Canada, UK, Cuba etc. I was really quick and never had an emergency. But if I had needed emergency care, I would have been scooped up by an ambulance and taken to the nearest emergency room and gotten care just like everybody else. No bills, no worries. Folks dying in the streets from lack of healthcare is left loon –snip–head propaganda.

My wife used to be a little lefty bleeding heart. While providing pro-bono psych services to the poor unfortunates of East Palo Alto, among a lot of other eye opening revelations, she learned that over half of the children born at Stanford Hospital were the scion of illegal alien mothers. They don’t pay. It’s all free. Stanford Hospital.

Wish we had Obama phones. Now you see these EBT card carrying rioters burning and looting, smiles exposing gleaming gold grills and wearing $300 Air Jordans. WTF is going on these days, willito? But, you wouldn’t have a clue.

I didn’t grow up in the projects but I visited often to escape the ambition & stress of the upwardly mobile middle class. Life shouldn’t be entirely about getting good grades, staying out of trouble, and living up to expectations.

Given that everyone alive will eventually die the only meaningful measure of health care success/failure is longevity. But even that’s misleading because a short happy existence may be preferable to a long miserable existence or vice versa. Quality vs. quantity of life is a subjective measure, right?

It’s interesting that they found a rate two and a half times higher than was found in a similar study, joshie. Was it the same data?

Anyway, working age people without health insurance have different lifestyles than working folks with health insurance. I would like to see a study that proves that folks without health insurance don’t have access to healthcare. I don’t believe I will see it anytime soon.

DS, it is not a good system if people are choosing between bankruptcy and a quicker death for themselves or a family member. Universal healthcare doesn’t put anyone in that situation because your own wealth is irrelevant to considerations about healthcare, and everyone is treated the same unless they want to pay extra for a premium service. I think this is the part Don is grappling with. The US isn’t there yet, but most of the civilized world is.

Gallup recorded that the uninsured rate among U.S. adults was 11.9% for the first quarter of 2015, continuing the decline of the uninsured rate outset by the Patient Protection and Affordable Care Act (PPACA).[13] A 2012 study for the years 2002–2008 found that about 25% of all senior citizens declared bankruptcy due to medical expenses, and 43% were forced to mortgage or sell their primary residence.[14]

Pray tell Denizens how the bestest country in the holewildwurld provides universal health care while 25% of all its senior citizens declared bankruptcy due to medical expenses, and 43% were forced to mortgage or sell their primary residence.

There is no sense in which Australians don’t pay through the nose for ‘universal health care’. There is first of all a tax – and then, if you earn more than a specified amount, more tax unless you ‘voluntarily’ insure. Nor is there any reason that people who can pay for their own health care do not.

You can choose either the public or private system. I would choose public for emergency surgery – they arguably get more practice. From personal experience – the food supplied in the systems is equally appalling.

The US seems to have a bigger problem with costs twice that of other western countries. That seems to be a focus of Trump policy.

Woah, there Willard. Did you just say “…25% of all its senior citizens declared bankruptcy due to medical expenses…”? Yeah, yeah, I know you just cut and pasted from Wikipedia but this is a number that is obviously wrong and someone of your super-high intelligence shouldn’t display such credulity. You also made a rather significant (and erroneous) change to the quote when you summarized it. Spend a second, click on the link in the Wiki article and actually read the underlying study. It says not what you claim.

> You also made a rather significant (and erroneous) change to the quote when you summarized it.

Compare:

A 2012 study for the years 2002–2008 found that about 25% of all senior citizens declared bankruptcy due to medical expenses, and 43% were forced to mortgage or sell their primary residence.

and contrast:

Pray tell Denizens how the bestest country in the holewildwurld provides universal health care while 25% of all its senior citizens declared bankruptcy due to medical expenses, and 43% were forced to mortgage or sell their primary residence.

You were saying, new-guy-who-just-happened-to-find-that-comment-of-mine-among-a-page-of-comments?

“Despite Medicare coverage, elderly households face considerable financial risk from out-of-pocket healthcare expenses at the end of life. Disease-related differences in this risk complicate efforts to anticipate or plan for health-related expenditures in the last 5 years of life.”

Um, Willard, you are normally smarter than that. In your “compare and contrast”, kindly consider your addition of the word “its” to “all its”. Then go the original study, find the support for your faulty assertion and report back. (In your defense, the original wording is sadly misleading, but your edit just makes it worse). This is of course a bit of a sideshow. Removing the word “its” still doesn’t correct your claim. It is fundamentally wrong on multiple levels. While you are reading the underlying study, you may wish to look for the support for your claim. Hint: it is not there. Hell, the word “bankruptcy” doesn’t even appear! And the only reference to 25% doesn’t say what you think it says.

DM, Obama wanted a public option in each state, but the insurance companies made sure that didn’t happen. This would have been too much competition for them, and they saw the slippery slope as people would migrate to it. So, yes, the public option never happened, and I wish it did. The co-ops were some kind of privatized compromise. Privatized individual-insurance-based healthcare has all kinds of problems that a tax-based system avoids. Obama conceded too much to privatized interests to get the 60 Senate votes it needed to pass. Drug companies were another one. Still it was a net win for people getting more affordable healthcare, especially in the Medicaid expansion states, and those with pre-existing conditions.

> Then go the original study, find the support for your […] assertion and report back.

The support for my assertion has already been quoted twice.

Here’s a quote supporting that quote:

Average out-of-pocket expenditures in the 5 years prior to death were $38,688 (95 % Confidence Interval $36,868, $40,508) for individuals, and $51,030 (95 % CI $47,649, $54,412) for couples in which one spouse dies. Spending was highly skewed, with the median and 90th percentile equal to $22,885 and $89,106, respectively, for individuals, and $39,759 and $94,823, respectively, for couples. Overall, 25 % of subjects’ expenditures exceeded baseline total household assets, and 43 % of subjects’ spending surpassed their non-housing assets. Among those survived by a spouse, 10 % exceeded total baseline assets and 24 % exceeded non-housing assets. By cause of death, average spending ranged from $31,069 for gastrointestinal disease to $66,155 for Alzheimer’s disease.
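The skew described in that quote, with the mean well above the median and a long upper tail, is easy to illustrate. A minimal sketch using synthetic lognormal data (the parameters are illustrative, not taken from the study):

```python
import random
import statistics

random.seed(0)

# Synthetic, hypothetical spending data: a lognormal distribution has the
# long upper tail typical of medical spending, so a minority of very large
# bills pulls the mean well above the median.
spending = [random.lognormvariate(10, 1) for _ in range(100_000)]

mean = statistics.mean(spending)
median = statistics.median(spending)
p90 = statistics.quantiles(spending, n=10)[-1]  # 90th percentile

print(f"mean ≈ ${mean:,.0f}, median ≈ ${median:,.0f}, 90th pct ≈ ${p90:,.0f}")
```

A mean of $38,688 against a median of $22,885, as in the study, is exactly the signature of this kind of distribution: it says nothing about any individual household, only that the upper tail is heavy.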

Willard just doesn’t know when to quit. He was told that the only reference to 25% doesn’t say what he thinks it says, but he just repeats the same errors.

Let’s compare and contrast:

His original claim:
“…25% of all its senior citizens declared bankruptcy due to medical expenses…”

The support he now gives: “…Overall, 25 % of subjects’ expenditures exceeded baseline total household assets, and 43 % of subjects’ spending surpassed their non-housing assets…”

Are these the same thing? Maybe in Willard’s fevered imagination. He claims 25% of all American senior citizens file for bankruptcy (for medical reasons). This is not only wrong, it is ludicrously wrong to anyone who has even a basic knowledge of US bankruptcy statistics. There are at least two major errors. First, I already pointed him to his use of the word “its”. Does the study say the 25% refers to “all Americans”? Well no. It refers to 25% of the study sample. And what is the focus of the study? It is adults with Medicare in the last 5 years of their life. Does the study claim that this population is representative of the overall US adult population. Well, no, it doesn’t.

His second error (and this one is really quite boneheaded) is to equate expenditures exceeding baseline total household assets with filings for bankruptcy. Anyone with basic experience in the area would know they are not equivalent. If they were, we would see the 25% reflected in bankruptcy filings. But we don’t. Not even close. Willard, a quick pop quiz for you for extra credit: which country had a higher incidence of consumer bankruptcy filings (% of population) in 2016: the US or Canada? (Hint: the answer doesn’t support your narrative).

So why aren’t these seniors who spend more than baseline assets filing for bankruptcy? A few little hints for Willard that might put him on the right path:
– What do “baseline total assets” include and what do they exclude?
– Why would it make sense for US seniors towards the end of their life to reduce baseline assets?

> Does the study claim that this population is representative of the overall US adult population. Well, no, it doesn’t.

I see. Quoting what the study actually says instead of pontificating on what it doesn’t say is easier:

In this paper, we consider the financial risk faced by Medicare beneficiaries during the 5 years prior to death (or the death of a spouse), and how these risks vary by socioeconomic and marital status. Specifically, we examine the likelihood of spending all or more than half of one’s baseline assets on health-related expenses, and the extent to which these risks vary by marital status or type of disease. We use the Health and Retirement Study (HRS),6 a rich longitudinal cohort study of U.S. adults age 50 years and older, that includes detailed information on out-of-pocket spending as well as information about socioeconomic status, health and demographic characteristics and cause of death.

We sampled all HRS decedents identified by a post-death, or “Exit”, interview, completed by a proxy between 2002 and 2008. […] While the sample of end-of-life Medicare enrollees are older than the general population in the 1998 Medicare Current Beneficiary Study, other characteristics such as proportion female, race, education, income, Medicaid coverage, and burden of chronic disease were similar.

The authors do indeed suggest that their sample is representative of the population they study. The population they study are Medicare beneficiaries. So let’s thank our New Guy for spotting that. However, our New Guy is a bit shy on a small detail:

Traditionally, the “elderly” are considered to be those persons age 65 and older. By that definition, in 1987 there were just over 30 million elderly people in the United States, more than 12 percent of the total U.S. population of nearly 252 million (Table 3.1). This group makes up the vast majority, almost 96 percent, of Medicare recipients.

> His second error (and this one is really quite boneheaded) is to equate expenditures exceeding baseline total household assets with filings for bankruptcy. Anyone with basic experience in the area would know they are not equivalent.

Speaking of boneheaded error, our New Guy still conflates me and my source, injects “filings for bankruptcy” when the source mentions “bankruptcy” simpliciter, and yet again armwaves the stuff he’s supposed to be mansplaining, i.e. the crucial difference between the two for the point being made here, viz that the US is far from having universal health care.

And no, I won’t spell out the two things our New Guy is supposed to be distinguishing. He’s our new guru, so he should know. We’ll provide a hint though: one costs more than the other.

***

> This is not only wrong, it is ludicrously wrong to anyone who has even a basic knowledge of US bankruptcy statistics.

Love your proof by assertion, New Guy.

Show Denizens.

Freedom Fighters are easy to spot with experience. The word “bankruptcy” somehow hits a nerve with them:

Bankruptcy is not the only legal status that an insolvent person may have, and the term bankruptcy is therefore not a synonym for insolvency. In some countries, such as the United Kingdom, bankruptcy is limited to individuals, and other forms of insolvency proceedings (such as liquidation and administration) are applied to companies. In the United States, bankruptcy is applied more broadly to formal insolvency proceedings. In France, the cognate French word banqueroute is used solely for cases of fraudulent bankruptcy, whereas the term faillite (cognate of “failure”) is used for bankruptcy in accordance with the law.

Willard, Nice dancing, but it really doesn’t save you. You are the Elizabeth Warren of this site (no matter how wrong you are, nevertheless, you persist). Your terminology was “declare bankruptcy”. Now you are running away from that with this tap dance about insolvency having different meanings in different countries (for what it’s worth, I’m not American so another of your assumptions about “American-based Freedom Fighters” falls just a little flat). Now the word “declare” has a specific meaning when used in conjunction with “bankruptcy”, does it not? You say that I inject “filings for bankruptcy” when the source mentions “bankruptcy” simpliciter. Well, no. You are being flat out dishonest. As I have painstakingly pointed out, the source refers to “declared bankruptcy.” And, for that matter, so do you. You somehow forget to mention this. You can’t seriously be trying to mansplain a distinction between “filing for bankruptcy” and “declaring bankruptcy”? So much for Integrity (TM) and all that.

In cutting through all your chaff, you actually seem to admit (very grudgingly) that your original construct was screwed up. Do 25% of American seniors declare bankruptcy? Of course not. You moderate (finally) by saying, you won’t mind speaking of “end-of-life Medicare beneficiaries whose expenditures exceeded baseline total household instead”. Well thank you. That’s all I was asking for. (You may want to add the word “assets” in there. Your writing style is obscure enough without your making silly typos). It is a very different claim from what you originally made. Spending more than baseline total household assets is most definitely not synonymous with “declaring” bankruptcy or even with your rather pitiful attempt to redefine the term to basic insolvency. I gave you a number of hints why this cohort is not declaring bankruptcy and even included a fun link. Did you click on it? Did you understand it? Subtlety doesn’t appear to be your strong suit, so I am doubtful. Did you bother to research the question I asked you: in 2016, which country had more people file for bankruptcy: the US or Canada?

Oh, and what on earth does Godwin’s Law have to do with something incorrect being the shortest way to get information you need? Methinks you screwed up again. Did you mean Cunningham’s law? If so, this is too funny. You just unwittingly demonstrated Cunningham’s law in action, thanks to my kind correction of this additional error. How ironic. How droll. (But you are welcome for the free lessons.) Willard, you seem a little rattled and off your game. Or maybe you are just not very good at this game. Some friendly advice: take a few days off and rest before replying. Get some sun and enjoy a few drinks. You’ll come back refreshed and do much better, even against new guys.

Again, twas thy Wiki’s terminology, New Guy. Paying lip service to Warren and Cunningham shows that while you may be new here (although you seem to know about INTEGRITY ™, in all caps please) you may not be new to Freedom Fighters’ issues. Above average Freedom Fighters can usually tell the difference between a sourced claim and its paraphrase.

You still fail to get that the act of declaring bankruptcy changes from one country to the next. So pending your clarification on what could happen when expenditures exceeded baseline total household assets at the time of death besides filing for bankruptcy in the US of A, you’re dogwhistling a distinction without a difference. The best you could argue is that Wikipedians conflated bankruptcy with insolvency. I say arguing, which is not exactly like asking rhetorical questions, more so when they echo Freedom Fighters lichurchur on that topic. My hint (one costs more money than the other) should be enough to tell you all you need to satisfy your secret handshake.

However, I can add this other one, in the same form as you allow yourself to make: what percentage of out-of-pocket expenditures does the Canadian government find acceptable? I’m asking because, you know, New Guy, this sub-thread is about Don Don refusing to concede that Americans do not haz universal health care. This is why mentioning that a big part of the population faces steep out-of-pocket medical expenses has any relevance, after all. Care to opine on that one?

As for proving Cunningham’s law, you don’t seem to get that sometimes, I do cite stuff just for the fun of seeing Don Don missing out that the citation argues for his own position. Have you noticed how he dismissed the Salon article? Also, cites are like retweets. They don’t always constitute full endorsements. They serve a function.

Oh, and I did notice you dropped the representativeness thing. Why is that?

So teach me more about out-of-pocket expenses and their relationship with universal health care, New Guy.

I have been at some pains – quite literally – to understand poor wee willie’s politics. He is a neo-socialist with delusions that computers allow for centrally planned economies.

He seems to have started this by quoting a Wikipedia misrepresentation of a report. Let’s be clear – because clarity is in short supply with poor wee willie – the report did not use the term bankruptcy. Nor is it technically insolvency – as he is now claiming as the equivalent – if there is income to support the debt. Nor does there seem to be much stomach for debt recovery from people who cannot pay. Legal fees would very quickly exceed the quantum of debt discussed in the report – cheaper to write it off. But I am sure that I don’t want to get into quibbles about high finance with poor wee willie.

All this seems just another crusade by a self appointed standard bearer for the urban, doofus hipster cause. And you may google this.

“Trendy, “cool guy” that, while certainly trendy, isn’t all that cool. Easily distinguished by his semi-cool wardrobe worn badly after watching one too many episodes of “Queer Eye for the Straight Guy”

It’s what Seinfeld called Kramer.

“He is such a hipster doofus, and has not changed at all since college. Do you think he knows he is a hipster doofus?”

So far, our Chief or our New Guy offer very little evidence of their technical proficiency.

***

> cheaper to write it off.

Don’t quit your day job if you have one, Chief:

What happens when a large medical bill can’t be paid? Usually the outcome is a lawsuit filed by the hospital or collection agency with a judgment and a lien filed against the patient’s home and accounts. In most states, a percentage of the debtor’s employment earnings can be garnished. Generally, before this point is reached, the patient files a personal bankruptcy to stop the wage garnishment and wipe out the medical bills and other accumulated debts. But that requires that he give up all of his assets including savings accounts, real estate and equity in his home. These assets, except those that are specifically exempt, are turned over to the Court and divided among the creditors.

Notice when a person “generally” files for bankruptcy, and compare with the population we’re discussing. Filing for bankruptcy offers protection. It isn’t something you pay for without good reason, and being dead looks like a good reason to me.

“That’s not quite what they said. Bad health events do more than land you with big medical bills (which bills can often be settled for pennies on the dollar, because the collectors know they get nothing if you file). Getting really sick also cuts your income as you stop working. If you’ve got debt and no savings, that job loss is going to be catastrophic.

Unfortunately, the incentives of both academic journals and the media mean that dubious research often gets more widely known than more carefully done studies, precisely because the shoddy statistics and wild outliers suggest something new and interesting about the world. If I tell you that more than half of all bankruptcies are caused by medical problems, you will be alarmed and wish to know more. If I show you more carefully done research suggesting that it is a real but comparatively modest problem, you will just be wondering what time “Game of Thrones” is on.”
https://www.bloomberg.com/view/articles/2017-01-17/the-myth-of-the-medical-bankruptcy

Poor wee willie quotes a lawyer touting for work as an authoritative source. Poor wee willie has no evident employment skill set – unless we count obfuscation and calumny. I suppose there might be a market in that. I suppose being able to google sh.t and find random quotes to support his BS is a skill. Being discerning about sources seems a better skill. Actually getting the quotes right – instead of via wikipedia – better still.

Nice dodge – now it is wiki’s fault, not yours. Sorry, kiddo, but when you quote faulty statistics from Wiki (and repeat them without quotes to make a point) you own them. Let’s go back to my very first take-down of your nonsense: “I know you just cut and pasted from Wikipedia but this is a number that is obviously wrong and someone of your super-high intelligence shouldn’t display such credulity.” Own it, man. You screwed up.

Have you given up on your dishonest attempt to draw a distinction between “file for bankruptcy” and “declare bankruptcy”? I hope so, but you are hardly overwhelming me with your Integrity ™ so far. Remember your comment about “bankruptcy simpliciter”, where you “forgot” about the word “declare”.

I don’t know why you are bringing Don into this. I frankly don’t read his interactions with you – your back and forth is quite painful (as I am sure this exchange is to other readers). I just happened to notice your reference to a completely wrong statistic. Your defensiveness to correcting a simple error is really quite amazing.

You give a quote saying bankruptcy has a different legal status in different countries. Yeah, so what? When you are referring to US citizens “declaring” bankruptcy, standard rules of construction would mean you are referring to “declaring” bankruptcy in the US. It would be pitiful mental gymnastics to claim otherwise.

Because I am tiring of embarrassing you (as fun as this is), and because you didn’t seem to understand the hints I gave you, I will address slowly and clearly what now appears to be the core area under discussion. If spending more than 25% of baseline household assets was synonymous with declaring bankruptcy, we should see it in the numbers. But we don’t – not even close. Recently the incidence of US bankruptcy filings has been even lower than that of Canada. So clearly the two are not synonymous. There are many reasons why not, but let’s stick with two:
– Here is a link to the asset verification section of the HRS study used by the authors. http://hrsonline.isr.umich.edu/modules/meta/2002/core/qnaire/online/21hr02U.pdf?_ga=2.2768051.247789487.1502371121-459185118.1502133389. This provides a good summary of what assets are included in baseline total assets. You (well, not you, but someone with some basic financial knowledge) will notice an absence of social security and pensions. The fact that the authors do not include these forms of retirement income is not a methodological flaw – they are looking at a particular subset of people – those at end of life (see why your “representative” argument is off base?) and this means that the present value of this retirement income is close to zero. But what this means is that these are people, who had they lived on, would likely not be destitute. If you are $5,000 in debt but have a guaranteed annual income for life of $50,000, would you declare bankruptcy? Of course not. To call this a distinction without a difference is just ignorant.
– It is quite common for upper and middle class senior citizens coming towards the end of their life to actively look to reduce household income. This is achieved by grants to trusts and/or structured gifts to heirs over time. This is done both for estate tax purposes and because certain benefits are means-tested (I provided a nice link showing how even Governor Pataki availed of this, but it clearly flew way over your head). So baseline household assets for senior citizens at the end of their life can be a very poor proxy for financial wellbeing. And spending those assets is most certainly not the same as declaring bankruptcy.
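The point in the first hint above, that a small debt against a guaranteed income stream is no sign of insolvency, is just present value arithmetic. A minimal sketch, where the 5-year horizon and 5% discount rate are hypothetical assumptions:

```python
def present_value(payment: float, rate: float, years: int) -> float:
    """Present value of a level annual payment stream (ordinary annuity)."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical figures from the comment above: $5,000 of debt versus a
# guaranteed $50,000/year income (e.g. social security plus a pension),
# which the study's "baseline total assets" excludes.
debt = 5_000.0
income_pv = present_value(50_000, rate=0.05, years=5)

print(f"PV of income stream ≈ ${income_pv:,.0f}")  # roughly $216,000
print(f"debt of ${debt:,.0f} exceeds it? {debt > income_pv}")  # False
```

Excluding that stream from “baseline assets” is defensible for an end-of-life study, but it is part of why spending past those assets is a poor proxy for bankruptcy.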

On a more theoretical basis (but this will blow your mind), some economists argue that ending life with no household assets is a meaningless statistic and actually rational economic behavior – see Modigliani’s Life-cycle hypothesis. Again – this shows why your sample is not representative. Did you happen to notice that even your quote did not say the sample and the full cohort were similar in terms of household assets? Kinda crucial if you are going to extrapolate a finding on spending said assets to the full population, dontcha think?

Did I mansplain this well enough for you? If not, too bad. I am tired of giving you free lessons. You say you see very little evidence of my technical proficiency. That’s really rich from someone whose entire modus operandi appears to be block quoting without understanding and then arm waving.

I think what little willito is trying to prove with this foolishness is that Bernie Sanders and the other soft “progressive” clowns are making a big mistake in advocating “Medicare for all”. Being a full blown Venezuelan-school commie, willito knows that our country can be bankrupted and thrust into a totalitarian state much faster by collapsing the healthcare system with the accelerating Obamacare death spiral. Now watch him dance.

Considering that it compelled you to mansplain, New Guy, you might need to revisit your mental model. I promised you a correction in exchange for mansplaining. Contrary to what you claim in your last comment, this is your first “free lesson.” Your bullying doesn’t count. And yes, without your mansplanation, the distinction makes little difference.

So let’s see:

1. I thought you wanted me to own the misusage of the word “bankruptcy” attached to the statistics cited by thy Wiki, not the statistics themselves.

2. Is your case of “$5,000 in debt but have a guaranteed annual income for life of $50,000” an example of a 25% baseline household assets debt? Somehow I doubt it.

3. The authors suggest that their sample is representative of the population they study, i.e. Medicare beneficiaries during the 5 years prior to death. If that’s correct, then their results should apply to Medicare beneficiaries during the 5 years prior to their death. Since most of the elderly are Medicare recipients and generally die, your rhetorical questions look bogus to me. Similar proportion of female, race, education, income, Medicaid coverage, and burden of chronic disease, but not household assets? Either it’s a quantifier thing, or your incredulity is showing, New Guy.

4. At last you spell it out when you say:

It is quite common for upper and middle class senior citizens coming towards the end of their life to actively look to reduce household income. This is achieved by grants to trusts and/or structured gifts to heirs over time.

That doesn’t say what happens when expenditures exceed baseline total household assets if not bankruptcy, but that does indicate (at least insofar as the upper classes are concerned) why the number of bankruptcy filings is so low. So thank you for that.

Under that light, it is quite clear why insolvency doesn’t imply bankruptcy, in contrast to what the Wikipedians inferred when they wrote the passage I quoted.

See? You show some work. You get a response.

5. Does handwaving to “some economists” work in your classes, New Guy?

Beware that the little light you yourself shed on this issue indicates that bankruptcy numbers (no, not the 25%/43% we’re discussing, rather the Himmelstein study you so elegantly misidentify by alluding to Warren) could very well undersell the lack of wellbeing of those not enjoying universal healthcare.

This is the elephant in the room your gotcha game is supposed to make disappear, New Guy.

***

Speaking of universal healthcare, you dodged my two questions. One is related to it. The other is related to out-of-pocket expenditures in Canada.

I don’t have the time or inclination to respond to your new misunderstandings (you don’t seem to really understand the pension and social security retirement income point that I don’t think I can make any clearer and am not going to try, and you seem to mistake me for someone else with some of your references), but one point I just can’t let go by.

You say “Does handwaving to “some economists” work in your classes, New Guy?” I assume this is in reference to my comment: “some economists argue that ending life with no household assets is a meaningless statistic and actually rational economic behavior – see Modigliani’s Life-cycle hypothesis.”

This is not a handwave to just “some” economist. This is Modigliani. The life-cycle hypothesis is one of his greatest works and a key reason why he won the Nobel prize. In what world does pointing out a famous economic theory that makes my point count as handwaving? And, yes, when I was teaching undergrad economics (a very long time ago), I would expect my students to understand a reference to the Modigliani life-cycle hypothesis.
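For readers who don’t have the reference at hand: the life-cycle hypothesis says a rational agent smooths consumption over a lifetime, saving while working and dissaving in retirement so that assets approach zero at death. A toy sketch under deliberately crude assumptions (no interest, bequests, or uncertainty; all numbers hypothetical):

```python
def life_cycle_path(income: float, work_years: int, retire_years: int):
    """Toy Modigliani life-cycle model: constant consumption spreads total
    lifetime earnings evenly over all years of life."""
    total_years = work_years + retire_years
    consumption = income * work_years / total_years
    assets, path = 0.0, []
    for year in range(total_years):
        earnings = income if year < work_years else 0.0
        assets += earnings - consumption  # save while working, dissave after
        path.append(assets)
    return consumption, path

consumption, path = life_cycle_path(income=60_000, work_years=40, retire_years=20)
print(f"smoothed consumption: ${consumption:,.0f}/yr")  # $40,000/yr
print(f"peak assets at retirement: ${max(path):,.0f}")  # $800,000
print(f"assets at death: ${path[-1]:,.0f}")             # $0
```

On this model, running household assets down to nothing by death is the optimum, not a symptom of distress, which is the point being made here about end-of-life asset statistics.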

Modigliani is just one instance of the “many economists” to which you handwave, New Guy. So your namedropping fails to impress. The least you could do is to quote and cite where Modigliani discusses what happens when expenditures exceed baseline total household assets, or which measure he suggests to evaluate the financial wellbeing of a population.

Meanwhile, other Wikipedians might need your help:

The findings of many economists bring out a problem in the life-cycle model. It was found out that the elderly do not dissave as quickly as has been said in the model. There are two explanations for the aforementioned behaviour of the elderly.

The first explanation is that the retired individuals are cautious about unpredictable expenses. The additional saving that arises due to this behaviour is called precautionary saving. Precautionary saving may be made for the probable event of living longer than expected and hence having to provide for a longer than the planned span of retirement. Another rational reason is possibility of ill-health and huge medical expenses. These probable events make the elderly save more.

The second explanation is that the elderly may save more so they can leave bequests to their children. This discourages dissaving at the expected rate.

Overall research on the retired section of the society show that the life-cycle model cannot completely explain consumer behaviour. Providing for retirement is an important reason for dissaving. However precautionary saving and bequests are also important.

I’m not sure about these allegations of yours, Don Don. You may need to ask your new guru. Interestingly, your guru’s guru may not have been as fond as you of mindless privatization:

He [Franco Modigliani] is the co-author of Rethinking Pension Reform (2009), Cambridge University Press, and along with Arun Muralidhar, critiqued the privatization model of Social Security reform proposed by the World Bank (in the 1990s) and President Bush in the early 2000s, and offered a better alternative to reform Social Security systems globally.

Indeed, privatization leaves all major decisions to the government. The only thing that is truly “privatized” is risk. This results in an arbitrary and capricious redistribution of pension income. To be sure, some will end up above average, as is often emphasized by the supporters, but supporters conveniently forget about the rest who will do worse. The inequalities generated by privatization are especially repellent because they are artificial and serve no useful (e.g., incentive) functions. In addition, the management costs, combined with additional costs of administration and regulation, would be prohibitive, often causing a reduction in pensions of as much as 15% to 20%. And these costs are a total social waste, for the competition between portfolio managers is a zero-sum game. It cannot increase the overall return earned on reserves. When all these flaws are taken into account, one must reject privatization, or individual accounts, unconditionally in favor of defined benefits.

Pobrecito, willito. I am pretty sure you are way too far gone to learn anything, but Googling up a bunch of crap some alleged person of authority said and regurgitating it does not impress. A clown who spent as much time as you do on this foolishness could find plenty of quotes that express the opposite opinion. You got studies, we got studies. Smart people don’t play that game, willito. New guy still has hope for you. He is trying to get you to argue on facts and logic. I am pretty sure he is wasting his time with that.

And you always avoid the point. Why do you socialist jokers want the government to impose Medicare on everybody? Do you want us all to go bankrupt?

> Why do you socialist jokers want the government to impose Medicare on everybody?

I wouldn’t call Modigliani a socialist joker, Don Don, and note that every single country in the industrialized world has universal health care except Murica. One of the best things that came out of our last century was social liberalism. Isn’t it about time Freedom Fighters let their futile fight go?

The reasons Modigliani offers to keep Social Security public may apply to health care. It’s about pooling risks more evenly and not wasting money on silly zero-sum games.

You remind me of Paul Ryan, for whom the ACA can’t work because “the healthy pay for the people who are sick.” Problem is, that’s basically how insurance works in general, privatized or not. It’s like saying that water can’t work because it’s wet.

Social liberalism didn’t come out of the last century, wee willito. It came out of the wealth created by the industrial revolution. It was a capitalist revolution that produced the wealth that made welfare states possible.

Now, the welfare states want to stifle capitalism and free markets. We will see how much longer the welfare states can keep financing entitlements with borrowed money.

You can take it from here, willito. You apparently got nothing else to do.

> Everyone in the U.S. has access to healthcare. Those who can’t pay, don’t.

You keep using “universal health care” that way, Don Don. It might not mean what you think it means. Have you seen our New Guy defending you on that point? I don’t think he did. He didn’t answer my question about how much out-of-pocket expense the Canadian government considers reasonable either, and never said what happens at the time of death with your assets when they’re lower than what you owe the medical industry.

Wonder why?

Speaking of rascals, let me show you how absurd the medical industry you’re defending right now is getting:

Health spending growth in the United States for 2015–25 is projected to average 5.8 percent—1.3 percentage points faster than growth in the gross domestic product—and to represent 20.1 percent of the total economy by 2025. As the initial impacts associated with the Affordable Care Act’s coverage expansions fade, growth in health spending is expected to be influenced by changes in economic growth, faster growth in medical prices, and population aging. Projected national health spending growth, though faster than observed in the recent history, is slower than in the two decades before the recent Great Recession, in part because of trends such as increasing cost sharing in private health insurance plans and various Medicare payment update provisions. In addition, the share of total health expenditures paid for by federal, state, and local governments is projected to increase to 47 percent by 2025.

In this typically lengthy and disingenuous sermon of yours, you haven’t described one group of folks, or named a single person in the U.S., who can’t get healthcare, willito. Healthcare is not the same thing as health insurance.

The AHCA article cited and fairly characterized a NHS report that you are free to read. They got problems keeping folks alive because of bad training and lack of equipment. Or, maybe it is just the NHS sneaky way of culling. That kind of medicine is what Americans expect from government health care. That is why we don’t want it. End of story. Bye bye, willito!

Universal health care refers to a health care system that provides health care and financial protection to all citizens, Don Don. That’s not the case in the US of A. Unless our New Guy can vouch for your interpretation, it’s the end of story.

As for the study you’re touting, you do realize that the researchers emphasize the budget cuts and deemphasize the volatility of deaths from year to year, right?

But since you’re so keen on pointing out deaths, perhaps you’d be willing to comment on the intriguing fact that the US of A is the fifth worst country in the OECD for under-five mortality rate?

Yes, Don Don. You can only brag about being better than Turkey, Mexico, Argentina, and Slovakia.

We have a lot of crackheads here, who have a lot of children. Kids in the U.S. don’t die from lack of healthcare. It’s from lack of parental care. It’s from neglect and abuse. I mentor kids in those types of “families”. You just run your mouth.

In the most wonderful country in the world, Don Don. The last bastion against all those barbaric socialist countries. So many crackheads. Why, O why?

Medicaid seems to provide a better correlation:

Objectives. We investigated trends in national childhood mortality, racial disparities in child mortality, and the effect of Medicaid and State Children’s Health Insurance Program (SCHIP) eligibility expansions on child mortality.

Methods. We analyzed child mortality by state, race, and age using the National Center for Health Statistics’ multiple cause of death files over 20 years, from 1985 to 2004.

Results. Child mortality continued to decline in the United States, but racial disparities in mortality remained. Declines in child mortality (ages 1–17 years) were substantial for both natural (disease-related) and external (injuries, homicide, and suicide) causes for children of all races/ethnicities, although Black–White mortality ratios remained unchanged during the study period. Expanded Medicaid and SCHIP eligibility was significantly related to the decline in external-cause mortality; the relationship between natural-cause mortality and Medicaid or SCHIP eligibility remains unclear. Eligibility expansions did not affect relative racial disparities in child mortality.

Conclusions. Although the study provides some evidence that public insurance expansions reduce child mortality, future research is needed on the effect of new health insurance on child health and on factors causing relative racial disparities.

You are slimy. You don’t know anything. You have to look for some study to cherry-pick. I have done a lifetime of studying. Been there myself. I know why kids in my country die. Bad parents. I never said that Medicaid or the other means of giving FREE care to those IN REAL NEED should be eliminated. You just make crap up. I am no longer amused with your foolishness. You are literally making me sick. A person with your obvious education should be making himself useful. Shame on you.

CONCLUSIONS:
Dramatic reductions in mortality among children in all socioeconomic quintiles represent a major public health success. However, children in higher socioeconomic quintiles experienced much larger declines in overall, injury, and natural-cause mortality than did those in more deprived socioeconomic quintiles, which contributed to the widening socioeconomic gap in mortality. Widening disparities in child mortality may reflect increasing polarization among deprivation quintiles in material and social conditions.

You are incredibly foolish. I am not going to continue to explain the obvious to you. You are just an ideologically blind argumentative little fool. And it shows. Please put your slimy ludicrous spin on any thing I say any way you please. It’s on you. Monkey climb too high, show him tail.

I step away for a few days and come back to see that Willard is still at his trolling tricks. Again his cut-and-paste from Wikipedia responding to my showing his ignorance of a foundational economic theory completely misses the point. He feels that a quote with a portion highlighted can substitute for making an argument. But he has the chutzpah to accuse others of arm waving. I can do no more than repeat my earlier comment: That’s really rich from someone whose entire modus operandi appears to be block quoting without understanding and then arm waving.

He also accuses me of “bullying” him. Poor, poor Willard. But, he can sleep easier – I won’t be wasting any more time on him.

By the sheer mention of Modigliani and the life-cycle hypothesis, no less. Our New Guy’s that big. I mean, we really need a “hypothesis” to observe that people tend to save when they can and spend what they saved at the end of their lives. (As underlined by the Wiki quote, the model was still far from perfect, a point our New Guy has yet to dispute, but never mind.) How that is supposed to tell us what happens when expenditures exceed baseline total household assets, our New Guy doesn’t even come close to addressing.

Who cares if there’s a clear correlation between medical debt and household financial difficulties. It’s marginal. Only 25% of the American population that will die in the next five years may at worst ever have medical expenditures exceeding baseline total household assets. It’s not like it’s that bad a thing, because it’s soooo easy to move assets around. After all, savings, certificates, bonds, stocks, retirement funds, life insurance, trusts, annuities, futures, and royalties aren’t assets. The IRS is that dumb. No, we need to focus on how (many) people filed for bankruptcy, a process that is far from being simple, objective, or even mandatory.

Our New Guy still has to dispute Modigliani’s arguments against the privatization of Social Security some Freedom Fighters are crying for. He’s still stuck on Himmelstein’s studies when research has evolved a bit since then, e.g. Austin 2014. Besides Don Donesque abuses, dogwhistling stuff from the Fraser Institute, the AEI, and other Freedom Fighters’ think tanks ought to be enough, right?

A major goal of the Medicare legislation enacted in 1965 was to protect elderly citizens from financial risk. As Lyndon Johnson declared at the signing of the legislation, “No longer will illness crush and destroy the savings that [older Americans] have so carefully put away over a lifetime.” Since that time, healthcare costs have risen dramatically, as have Medicare expenditures. Some proposals seeking to rein in Medicare cost growth include provisions by which the elderly would pay a larger share of their healthcare costs out-of-pocket. While some prior research has suggested relatively little financial risk in the Medicare population on average, more recent studies point to a significant degree of financial risk in the last year of life.

I said everybody in the U.S. has access to healthcare. I call that universal healthcare. I don’t give a flying –snip– what you call it.

You are very pompous and boring, willito. And I don’t give a flying –snip– what a morning consult poll says. What happened to President Hillary?

They couldn’t even pass single payer in left loon California. Google it, clownboy. You are just baying at the moon with that single payer crap. Move to Venezuela, if you want that crap. Better take plenty of toilet paper and food with you.

The sun reached a peak that declined after mid-century. Aerosols rose sharply with the post-war increase in oil use for vehicles and in coal-fired energy generation. This was the period known for global dimming, though the dimming was actually regional and downstream of pollution. Many studies in the 70’s saw the importance of growing aerosols on climate, but they also realized that the rapidly increasing growth rate of GHGs would soon make that effect detectable and even dominant, and so it came to be. At that time CO2 forcing was growing by 0.1 W/m2 per decade, and now it is over 0.3 W/m2 per decade.
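
Those decadal growth rates are easy to sanity-check with the standard simplified expression for CO2 forcing, ΔF = 5.35 ln(C/C0) (Myhre et al. 1998). A minimal sketch, using round-number decadal-mean concentrations that are illustrative only, not a dataset:

```python
import math

def co2_forcing(c, c0=278.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c / c0)

# Round-number concentrations (ppm) a decade apart -- illustrative values
growth_mid_century = co2_forcing(318.0) - co2_forcing(312.0)
growth_2010s = co2_forcing(412.0) - co2_forcing(390.0)

print(f"mid-century: {growth_mid_century:.2f} W/m^2 per decade")
print(f"2010s:       {growth_2010s:.2f} W/m^2 per decade")
```

With these inputs the mid-century growth comes out near 0.1 W/m2 per decade and the recent growth near 0.3 W/m2 per decade, consistent with the figures quoted above.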

The Scottish theory of spontaneous order was a crucial contribution to the model of a basically self-generating and self-regulating civil society that required state action only to defend against violent intrusion into the individual’s rights-protected sphere. As Dugald Stewart put it in his Biographical Memoir of Adam Smith (1811), “Little else is requisite to carry a state to the highest degree of opulence from the lowest barbarism, but peace, easy taxes, and the tolerable administration of justice; all the rest being brought about by the natural course of things.” The Physiocratic formula, Laissez-faire, laissez-passer, le monde va de lui-même (“the world goes by itself”), suggests both the liberal program and the social philosophy upon which it rests. The theory of spontaneous order was elaborated by later liberal thinkers, notably Herbert Spencer and Carl Menger in the 19th century and F.A. Hayek and Michael Polanyi in the 20th.

https://mises.org/library/what-classical-liberalism

Jimmy’s narratives are always a bit disappointing. The economics somewhere recently not least. The classic liberal – as opposed to the progressive liberal – has all the answers. It is not all that difficult – the foundation for classic liberalism is a commitment to democratic principles and an appreciation of market dynamics.

Markets exist – ideally – in a democratic context. Politics provides a legislative framework for consumer protection, worker and public safety, environmental conservation and a host of other things. Including for regulation of markets – banking capital requirements, anti-monopoly laws, prohibition of insider trading, laws on corporate transparency and probity, tax laws, etc. A key to stable markets – and therefore growth – is fair and transparent regulation, minimal corruption and effective democratic oversight. Markets do best where government is large enough to be an important player and small enough not to squeeze the vitality out of capitalism – government revenue of some 25% of gross domestic product. Markets can’t exist without laws – just as civil society can’t exist without police, courts and armies.

The alternative vision involves narratives of moribund western economies governed by corrupt corporations collapsing under the weight of internal contradictions – leading to less growth, less material consumption, less CO2 emissions, less habitat destruction and a last late chance to stay within the safe limits of global ecosystems. And this is just in the ‘scholarly’ journals.
It is a disdain for urbanisation and industrial production that I find puzzling and disturbing – if utterly inconsequential to the evolution of culture in the 21st century.

The narratives commonly bear little resemblance to reality. Just recently there was a kerfuffle about a coal mine in central Queensland and black faced finches and protected zones in Australian seas. The range of black faced finches has declined over a century or more – along with that of other ground dwelling birds in open woodlands. There is a complex of causes that are better addressed with land management than by stopping a mine dedicated to supplying energy to India. The proposed mine footprint does not overlap the modern range of the finches. Highly protected zones – green and yellow – in Australian marine reserves increase substantially in the draft plan. I’m unclear whether progressive misinformation is dishonesty or whether they have been captured by their own narrative. Schneider is the poster boy for the former.

The climate forcing narratives are likewise based on myth and conviction. There is minimal physical evidence. As for models – irreducible imprecision is the humbling reality.

Within broad limits – the IPCC forcing can be accepted without too much angst. Add it up – sort it through – there is very little in terms of tight numbers. Solar forcing – increasing to the 1950’s and staying high until the end of the century – is the definition of minor.

But unless there is an understanding of natural variability it all seems a bit pointless. It is presumed for a variety of reasons that there is a solar amplification mechanism.

In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.

https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-4-4-1.html

Black soot accumulates over land darkening the surface and raising surface air temperature. Black soot doesn’t darken the ocean surface.

How much of the difference between land and ocean warming is due to black soot? How much due to urban heat islands? How much due to land surface use changes?

Variables cannot be measured unless isolated somehow. Isolating variables is a fundamental tenet of experimental science and when not possible hypotheses cannot be tested and nothing but narratives (just-so stories) can be produced.

The fastest land changes are in the northern interior continents (Russia and Canada) where those are unlikely to be issues, and where direct radiative forcing is likely to be strongest. The map of the warming since 1950 gives the clues you need.

“The CMIP5 models struggle with the internal variability and some of them replace it with a hypothetical, in reality non-existent forcing. This leads to ‘running hot’ as soon as aerosol emissions start rising more slowly than GHG concentrations.”

The idea is that polar surface pressure – and therefore both southern and northern annular modes – modulate ocean and atmosphere circulation biasing the system to warm or cool states. Mostly I find people don’t understand what chaos theory implies for climate.

“It may seem surprising, but despite many different attempts, almost all remote sensing of aerosols from space is only capable of detecting the total optical depth of all aerosols. MISR can provide some discrimination in special cases (picking out dust via a retrieval of non-spherical particles, or using the single scattering albedo to distinguish black carbon), but overall the estimates mix up sulphates, dust, black carbon, sea salt, nitrates and secondary organics. These originate from different processes, have different properties and different impacts on both radiation and clouds. Sea salt comes from sea spray over the oceans, dust from dry desert areas, black carbon from burning of forests and fossil fuels, sulphates derive from ocean plankton and burning coal, nitrates derive from fertiliser use, car exhausts and lightning, and secondary organics come from the stew of volatile organic compounds from industrial and natural sources alike. There are also pollen, and fat particles from outdoor cooking etc.

Because we can’t easily distinguish what’s what from space, we don’t have good global coverage of exactly how much of the aerosol is anthropogenic, and how much is natural. That uncertainty is a big player in the overall uncertainty in the human caused aerosol radiative forcing. Similarly, we have not been able to tell how much of the aerosol is capable of interacting with liquid or ice clouds (which depends on the different aerosols’ affinity for water), and that impacts our assessment of the aerosol indirect effect. These uncertainties are reflected in the model simulations of aerosol concentrations which all show similar total amounts, but have very different partitions among the different types.”

The aerosol variable has not been isolated and thus attempts to quantify it are not credible. Differences between the northern and southern hemispheres that need to be controlled for in order to isolate aerosol include:

1) land mass – twice as much in NH as SH
2) land distribution – critically a continent over the southern pole and none over the northern
3) urban heat islands – few south many north
4) land use – agriculture changes albedo and evapo-transpiration characteristic of large areas north and few south
5) instrumentation – much longer and more reliable instrumentation in the north vs. spotty unreliable coverage in the south
6) black soot – generated in the NH, confined to the NH, accumulates over land darkening the surface

Inability to control for these 6 variables makes trying to tease out the effect of a 7th (aerosol) variable an exercise in futility. Spare me please. Spare us all.
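
The confounding worry can be illustrated with a toy regression. In this sketch the invented “warming” response depends only on a land-fraction variable, but because the invented “aerosol” variable is correlated with land fraction, a naive one-variable regression attributes a sizeable effect to aerosol anyway. Every number here is made up for illustration:

```python
import random

random.seed(0)

# "warming" is driven entirely by land fraction; "aerosol" is merely
# correlated with land fraction.  All values are invented.
n = 200
land = [random.uniform(0.0, 1.0) for _ in range(n)]
aerosol = [0.8 * l + random.gauss(0.0, 0.1) for l in land]
warming = [0.5 * l + random.gauss(0.0, 0.05) for l in land]

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# The one-variable regression "finds" an aerosol effect that isn't there
print(f"apparent aerosol effect: {slope(aerosol, warming):.3f}")
print(f"true land effect:        {slope(land, warming):.3f}")
```

The apparent aerosol slope is strongly positive even though aerosol plays no causal role in the toy data, which is the point of the six-variable list above.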

Within the discipline of measurement, there will always be some variation left when a measurement model has been used to quantify the value of a measurand. The term “random error” is used for that variation.
“Random error presumably arises from unpredictable or stochastic temporal and spatial variations of influence quantities. The effects of such variations, hereafter termed random effects, give rise to variations in repeated observations of the measurand.” – Guide to the expression of uncertainty in measurement, sect. 3.2.2

It occurs to me that within climate, random effects can influence the climate over years, decades, and even centuries. By the nature of random effects, a clear relationship causing this variation cannot be determined. It further occurs to me that the existence and the magnitude of random effects may be somewhat downplayed and underestimated by the IPCC.
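
For concreteness, the GUM’s Type A evaluation of that random variation is just the experimental standard deviation of repeated observations, and the standard uncertainty attached to their mean. A minimal sketch with made-up observations:

```python
import math

# Hypothetical repeated observations of a measurand (arbitrary units)
obs = [10.2, 9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.0]

n = len(obs)
mean = sum(obs) / n
# Experimental standard deviation (Type A evaluation, GUM sect. 4.2)
s = math.sqrt(sum((x - mean) ** 2 for x in obs) / (n - 1))
# Standard uncertainty associated with the mean
u = s / math.sqrt(n)

print(f"mean = {mean:.3f}, s = {s:.3f}, u(mean) = {u:.3f}")
```

The catch for climate is the one noted above: this recipe assumes the random effects decorrelate between repeated observations, which decadal-to-centennial variability does not.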

“Then too we know that all of the major models run much warmer than reality. In fact this glaring over-prediction of warming is a big research area in climate science. That all the climate models are presently unreliable should also be taught whenever a model is used.”

False.

Some models run cooler, some models run fairly close, and some models run 10-15% warmer than observations.

The REAL ISSUE with models is the continued acceptance of a democracy of the models. Some have clear issues, nevertheless they are used in the “model of models” or simple averaging of models.

Both Skeptics and Alarmist are in perverse collusion that refuses to look at individual models one by one to select and interbreed the best of the best.
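
The difference between a “democracy of models” and a performance-based weighting can be sketched in a few lines. The trends and the weighting rule below are invented for illustration; real weighting schemes use independent performance metrics rather than the very observation being targeted, precisely to avoid circularity:

```python
# Hypothetical model trends (degC/decade) and an observed trend -- invented
model_trends = [0.12, 0.18, 0.25, 0.31, 0.45]
observed = 0.17

# "Democracy of models": every model counts equally
simple_mean = sum(model_trends) / len(model_trends)

# One alternative: down-weight models by squared error against the observation
weights = [1.0 / (m - observed) ** 2 for m in model_trends]
weighted_mean = sum(w * m for w, m in zip(weights, model_trends)) / sum(weights)

print(f"simple mean:         {simple_mean:.3f}")
print(f"skill-weighted mean: {weighted_mean:.3f}")
```

In this toy case the equal-weight mean sits well above the observation because one hot outlier drags it up, while the weighted mean lands much closer.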

All models are screwed up and become wildly wrong in some circumstance or another. That’s why there’s no “interbreeding the best of the best”, Steven. You think it’s because everyone but you is too stupid to think of culling the model collection of the worst performers? The self-indulgence you display in this fantasy world you’ve constructed where you’re competent in math & science really knows no bounds.

Steven,
That is because there is no possible way to define best of best and hence favour it.
On some CMIP versus observation graphs, the differences between modelled and observed are greater than others, depending on author, chosen observation set, starting date. Then, even within one comparison, most of the modelled results are higher than observed, so probably none of these can be best of best.
The present situation is a shambles of talking heads rather than a dry scientific analysis and we can but hope that one day observers like me can shut up and let proper science take over. Geoff

Steven’s suggestion transferred to another problem would work something like this:

Steven is tired of losing money at the roulette tables. He buys the 100 top selling books on winning roulette systems. Such as “How to Win at Roulette: for Dummies”. Then he hires 100 dummies to go to the casinos with plenty of Steven’s money and they try the systems out for a year. Then it is a simple matter of arithmetic. Whichever dummy and system that has lost the least amount of Steven’s money, is the best.

“AOS models are members of the broader class of deterministic chaotic dynamical systems, which provides several expectations about their properties (Fig. 1). In the context of weather prediction, the generic property of sensitive dependence is well understood (4, 5). For a particular model, small differences in initial state (indistinguishable within the sampling uncertainty for atmospheric measurements) amplify with time at an exponential rate until saturating at a magnitude comparable to the range of intrinsic variability. Model differences are another source of sensitive dependence. Thus, a deterministic weather forecast cannot be accurate after a period of a few weeks, and the time interval for skillful modern forecasts is only somewhat shorter than the estimate for this theoretical limit. In the context of equilibrium climate dynamics, there is another generic property that is also relevant for AOS, namely structural instability (6). Small changes in model formulation, either its equation set or parameter values, induce significant differences in the long-time distribution functions for the dependent variables (i.e., the phase-space attractor). The character of the changes can be either metrical (e.g., different means or variances) or topological (different attractor shapes). Structural instability is the norm for broad classes of chaotic dynamical systems that can be so assessed (e.g., see ref. 7). Obviously, among the options for discrete algorithms and parameterization schemes, and perhaps especially for coupling to nonfluid processes, there are many ways that AOS model equation sets can and will change and hence will be vulnerable to structurally unstable behavior.” http://www.pnas.org/content/104/21/8709.full

There are no unique deterministic solutions to climate models. The first time I came across a dynamically unstable hydrodynamic model was around 1985. A relatively simple fix at that time. But neither of these guys has a freaking clue about the math of climate models. It is all just science by narrative that has always characterised the climate blogosphere. The third way is to use models to explore processes rather than to project surface temperature.
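
The sensitive-dependence property quoted above is easy to reproduce with the Lorenz (1963) system. A minimal forward-Euler sketch; the step size, run length, and 1e-8 perturbation are arbitrary choices for the demonstration:

```python
# Two Lorenz (1963) trajectories starting 1e-8 apart, forward-Euler integrated.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)  # indistinguishably perturbed initial condition
max_sep = 0.0
for _ in range(8000):  # 40 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

# The 1e-8 perturbation grows to the scale of the attractor itself
print(f"maximum separation: {max_sep:.2f}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is the weather-prediction limit described in the quoted passage; structural instability is the analogous sensitivity to the equations themselves rather than to the initial state.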

Lesser effect of aerosols removes the explanation for the flat trend from 1950 to 1975. Of course, the PDO was not described by science until 1996, so it is easy to see how a spurious aerosol effect was inferred to explain the pause. But the meme has stuck around long past its sell-by date, nonetheless.

Lesser effect of aerosols is also consistent with less warming as aerosols decrease and unmask.

“Here we examined the impact of black-carbon-to-sulphate ratios on net warming in China, using surface and aircraft measurements of aerosol plumes from Beijing, Shanghai and the Yellow Sea. The Beijing plumes had the highest ratio of black carbon to sulphate, and exerted a strong positive influence on the net warming. Compiling all the data, we show that solar-absorption efficiency was positively correlated with the ratio of black carbon to sulphate. Furthermore, we show that fossil-fuel-dominated black-carbon plumes were approximately 100% more efficient warming agents than biomass-burning-dominated plumes. We suggest that climate-change-mitigation policies should aim at reducing fossil-fuel black-carbon emissions, together with the atmospheric ratio of black carbon to sulphate.” http://www.nature.com/ngeo/journal/v3/n8/full/ngeo918.html

Sulfates mixed with black carbon warm the planet? One of those ideas that don’t get traction with the usual suspects.

If the CAGW folks were honest they’d go after BC & methane which are much easier targets for reduced emission. The problem there is the US isn’t a big emitter of either BC or CH4 and the real target of all this was to slow or stop US economic hegemony. The US has massive fossil fuel reserves and if not constrained are too much of an advantage given the huge lead it already had by 1970 or so. The big emitters of BC and CH4 are Asia (rice farming and coal burning), developing countries (slash & burn agriculture) and Europe (love affair with dirty diesel engines). Those are all the victims of American financial power so the easy emission targets are off limits and CO2 became the one and only bogeyman.

In fairness to the “CAGW folks” Trump just delayed the effective date of an EPA regulation addressing methane leaks at natural gas pipelines and other distribution facilities. I thought this reg was a reasonable attempt to address the low hanging fruit resulting from outdated and poorly maintained equipment at gas facilities. But what do I know.

Black carbon and sulfate are issues for the developing world. There are health and environment dimensions. Ultimately – reductions will emerge organically from efficiency, productivity and development. Just like in the west. Reductions are low hanging fruit that produce significant and rapid reductions in climate forcing.

They don’t measure the concentration of black carbon or sulphates, only the ratio – presumably because it was easier to measure. Their results are somewhat counterintuitive – “solar-absorption efficiency was positively correlated with the ratio of black carbon to sulphate” – would reducing sulphate to zero make for an infinite efficiency? Very messy. I would prefer data for black carbon alone.

The warming potential of BC in a plume containing sulfate was what was being investigated. The warming potential of BC is amplified in the presence of sulfates. Reduce BC and the cooling effects of sulfates kick in. Reduce sulfates and the warming potential reduces to that of BC alone.

It’s been known for perhaps decades that BC in the atmosphere contributes to warming. Given known emission sources also emit sulfate particulates known to be cooling agents its about time someone dug a little deeper.

My interest however has been what happens when the BC settles out of the atmosphere. It travels up to thousands of kilometers from the source.

In southern California in the 1970’s and 1980’s I observed that any light-colored thing left outside slowly turned black from soot deposition. Windowsills, lawn furniture, etc.

Growing up in western New York state in the 1960’s, my house was very near an intersection of a north-south and an east-west highway. 18-wheel diesel trucks had to slow down, stop, and accelerate. In the winter, soot from diesel exhaust would settle onto the snow. Regular snowfall would cover it up, and thus it would be distributed through the snow from top to bottom. The interesting thing was what happened when the snow melted. The soot floats, so as the snow melted away it became more and more concentrated on the top. If it was a few feet of snow, by the time the melt approached the bottom the top was nearly black. The nearer the road, the darker it was. As well, the nearer the road the faster it melted, because the soot absorbed more sunlight than clean snow. Much more. Tall snowbanks of 6′ or more I’d built up shoveling the driveway would be gone nearest the road, while further away it would take two to three times as long to melt.

Hansen found this happens everywhere in the northern hemisphere to some extent. It’s so little darkening far from the sources it’s not detectable by the naked eye but it’s still there and multiplied over millions of square miles it becomes quite significant. It even contributes to multi-year sea ice melt rate and rate of melting on the Greenland Ice Sheet. Soot deposited thousands of years ago in growing glaciers starts concentrating on the surface when the glacier, for whatever reason, begins to shrink instead of expand, accelerating the melt with positive feedback. So the longer a glacier has been intact and growing the faster it melts when the tide turns. No studies I know of documenting this (I haven’t looked so perhaps there is) but it’s an inescapable consequence of soot falling on snow that must be happening.

I note poor wee willie approvingly quotes that paragon of science – JCH – who quotes Kyle Swanson on climate sensitivity. This is less problematic than quoting a co-author of the paper who is a card-carrying skeptic. Anastasios Tsonis is – according to JCH – my bestie. I have in fact quoted this passage many times. It suggests that climate shifts at decadal scales – large internal variability – imply the potential for high sensitivity in the climate system.

Let’s step way back. The Earth is a closed system that gains and loses energy as electromagnetic radiation. The system tends to energy equilibrium – energy in equals energy out – and thus to maximum entropy.

The chaos wrinkle discussed by Tsonis introduces regime theory into the mix. Regimes in internal variability persist for a while and then shift to another state. In principle – the “climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.” NAS 2002

Tsonis was talking about shifts in ocean and atmospheric circulation on a global scale – using network math on ocean and atmospheric indices – and not any direct response to forcing from greenhouse gases. Utterly different. Nor do greenhouse gases have much, if anything, to do with the ocean and atmospheric regimes that have been a feature of climate for a very long time, or with the future evolution of these multi-decadal regimes.

The figure below shows solutions of an energy-balance model (EBM), showing the global-mean temperature (T) vs. the fractional change of insolation (μ) at the top of the atmosphere. (Source: Ghil, 2013)

Ghil, 2013, explored the idea of abrupt climate change with an energy balance climate model that follows the evolution of global surface-air temperature with changes in the global energy balance.

The model has two stable states with two points of abrupt climate change – the latter at the transitions from the blue lines to the red, from above and below. The two axes are the normalized solar energy input μ (insolation) to the climate system and the global mean temperature. The present-day energy input is μ = 1, with a global mean temperature of 287.7 K. This is a relatively balmy 58.2 degrees Fahrenheit.

The 1-D climate model uses physically based equations to determine changes in the climate system as a result of changes in solar intensity, ice reflectance and greenhouse gases. With a small decrease in radiation from the Sun – or an increase in ice cover – the system becomes unstable, with runaway ice feedbacks. Runaway ice feedbacks drive the transitions between glacial and interglacial states seen repeatedly over the past 2.58 million years. The result is warm interludes of relatively short duration – such as the present time – and cold states of longer duration. The transition between climate states is characterised by a series of step changes between the limits. It caused a bit of consternation in the 1970s when it was realized that a very small decrease in solar intensity – or an increase in albedo – is sufficient to cause a rapid transition to an icy planet in this model (2).

Ghil’s model shows that climate sensitivity (γ) is variable. It is the change in temperature (ΔT) divided by the change in the control variable (Δμ) – the tangent to the curve as shown above. Sensitivity increases moving down the upper curve to the left towards the bifurcation and becomes arbitrarily large at the instability. The basis of the claim for higher sensitivity in a chaotic climate has to do with the behaviour of the system at bifurcations – and not the carbon dioxide control knob. However, it is theoretically possible that greenhouse gases and/or warming can be the trigger that pushes the system past a threshold.
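The bistability and the blow-up of sensitivity at the bifurcation can be sketched with a toy zero-dimensional version of such a model. Everything below (the albedo ramp, the effective emissivity, the grid) is an illustrative assumption, not Ghil’s actual model – the parameters are chosen only so the warm equilibrium lands near the 287 K quoted above:

```python
import numpy as np

# Toy 0-D energy-balance model with an ice-albedo feedback, in the spirit
# of (but not identical to) Ghil's EBM.  Parameter values are assumptions.
Q = 342.0        # global mean incoming solar flux, W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.62       # effective emissivity (crude greenhouse effect), assumed

def albedo(T):
    """High albedo when icy, low when warm, smooth ramp between (assumed)."""
    return 0.65 - 0.35 * np.clip((T - 250.0) / 30.0, 0.0, 1.0)

def net_flux(T, mu):
    """Energy imbalance: absorbed shortwave minus emitted longwave."""
    return mu * Q * (1.0 - albedo(T)) - EPS * SIGMA * T**4

def equilibria(mu):
    """Steady states: temperatures where the net flux changes sign."""
    T = np.linspace(200.0, 320.0, 12001)
    f = net_flux(T, mu)
    crossings = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    return [float(T[i]) for i in crossings]

# At mu = 1 there are three equilibria: a stable cold state (~242 K), an
# unstable middle state (~261 K), and a stable warm state (~287 K).  Lower
# mu far enough and the warm branch vanishes entirely -- the fold where
# the sensitivity dT/dmu becomes arbitrarily large.
print(equilibria(1.0), equilibria(0.7))
```

Sensitivity γ = ΔT/Δμ here is just the slope of the equilibrium temperature with respect to μ along a branch, and it steepens without bound as μ approaches the fold – which is the point being made about bifurcations rather than the CO2 control knob.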

“The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.” Wally Broecker

I’m with Wally – but I find myself wondering if the next major climate shift will be to a cooler state.

It should shift again unpredictably in a 2018-2028 window. It may shift early for all I know – but it still has sfa to do with greenhouse gases or climate sensitivity to carbon dioxide. JCH’s claim of a more recent shift to a warmer state is impossible to credibly make without more evidence. But credibility is not something I associate with JCH. I prefer to suspend judgement until the evidence firms up.

The assumption is that the regimes are purely periodic – warming followed by cooling perpetually – and that warming will thus return with a vengeance once the current regime of a modest to negligible surface warming trajectory is over. It is not necessarily so.

Vengeance warming is meant to redeem their narratives, I suppose – but natural variability adding to warming – if it happens – seems far from a credible proof of CAGW.

JCH: Stay on topic!
” July is often lower than June. This year it looks like July is warmer than June. … What on earth are you guys smoking? Whatever: bag and sell it. It’s the bomb.”
Beatles: “Nothing is real…” (“Strawberry Fields Forever”)
It seems that you are a strawberry guy… :-)

Apparently the BOM in Australia deletes all records that are too cold, hence artificially warming the eastern Pacific.
Try to keep up to date.
How is the second warmest / first warmest year on record you predicted actually going?
Guess the sea infilling by Zeke helps.

If the role of internal variability in the climate system is as large as this analysis would seem to suggest, warming over the 21st century may well be larger than that predicted by the current generation of models, given the propensity of those models to underestimate climate internal variability [Kravtsov and Spannagle, 2008].

3.42 mm/yr. The reason it is no longer 3.3 mm/yr is the flat spot you’re referring to. It’s above the long-term trend, so it pulls the long-term trend upwards. If it stays above the long-term trend for a long time, you get a new long-term trend. There is a word for this.

Thanks, JCH, for the graph. I’m sitting here on a beautiful lake in beautiful Northern Michigan with only my phone, watching my beautiful grandkids, and I had no way to link that graph.

That’s the one I was thinking about. Let’s see if CU follows suit with a similar “pause” when they update their 12/16 version

This may offer a great opportunity for another study on another pause with a chance to score some big bucks from the Government’s largesse/booty. ( I hear the cause of the last pause was when the Chief Adjuster at CU was out on Long Term Disability.)

But with new police in town, the gravy train might have just dried up.

Next up is the pause of a different kind. A reversal in the Arctic Sea Ice trend. I wonder what Ladbrokes has on the over/under for the number of months until that turns around.

A pause in what? The satellite era, by the NASA approach, is 3.42 mm/yr. The little flat spot, that you think is a pause, is above 3.42 mm/yr. Likely above 4.0 mm/yr. The 5-year rate is around 4.66 mm/yr. The 10-year rate is around 4.2 mm/yr.
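The window-dependence of these rates is easy to reproduce. The sketch below uses a purely synthetic series (an illustrative assumption, not AVISO or CU altimetry): a 3.0 mm/yr record followed by a recent flat spot that sits above the fitted long-term line. The flat spot’s own slope is near zero, yet it pulls the full-record trend up – exactly the argument being made:

```python
import numpy as np

# Synthetic sea-level-like series (assumed, for illustration only):
# 20 years rising at 3.0 mm/yr, then 5 flat years at an elevated level.
rng = np.random.default_rng(0)
t = np.arange(300) / 12.0                 # 25 years of monthly samples
y = np.where(t < 20.0, 3.0 * t, 75.0)     # mm; flat spot above the trend line
y = y + rng.normal(0.0, 2.0, t.size)      # measurement noise

def trailing_rate(t, y, years):
    """OLS trend (mm/yr) over the trailing window of the series."""
    m = t >= t[-1] - years
    return float(np.polyfit(t[m], y[m], 1)[0])

full_rate = trailing_rate(t, y, 100.0)    # whole record: above 3.0 mm/yr
recent_rate = trailing_rate(t, y, 4.0)    # inside the flat spot: near zero
```

So a "pause" that lies above the long-term line raises, rather than lowers, the long-term rate – different windows simply answer different questions.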

As for all the ignorant adjuster talk – these are fine people, and you are libeling them. CU has not updated in a long time. I do not know why. AVISO makes regular updates. They’re all about the same, and CU will be about the same when they get around to updating their site.

JCH, listen junior – off by about 10 months?
Early this year you were trying to say 2017 could be the warmest year,
then you backed off to second warmest.
Where are you now?
3rd, 4th, 5th, or too scared to give a prediction?
“How is global cooling going”
The anomaly has dropped a degree in the last 9 months – pretty big – and is dropping lower.
The Pacific is cooling from its highs [which is what drives the temps].
More ice; global ice about to break out ballistically upwards.
What say you?

I said it was possible. It still is. It’s 0.91 ℃ now, which is 2nd warmest. The year is poised to have higher anomalies for the rest of the year.

10 months is praying. You’re praying. There is no physics for a cooling prediction in ten months. And if it does cool, it will only be because there has been yet another anemic La Niña. Another warmest La Niña eva. Oh brr.

“The old climate framework failed because it would have imposed substantial costs associated with climate mitigation policies on developed nations today in exchange for climate benefits far off in the future — benefits whose attributes, magnitude, timing, and distribution are not knowable with certainty. Since they risked slowing economic growth in many emerging economies, efforts to extend the Kyoto-style UNFCCC framework to developing nations predictably deadlocked as well.

The new framework now emerging will succeed to the degree to which it prioritizes agreements that promise near-term economic, geopolitical, and environmental benefits to political economies around the world, while simultaneously reducing climate forcings, developing clean and affordable energy technologies, and improving societal resilience to climate impacts. This new approach recognizes that continually deadlocked international negotiations and failed domestic policy proposals bring no climate benefit at all. It accepts that only sustained effort to build momentum through politically feasible forms of action will lead to accelerated decarbonization.” http://thebreakthrough.org/archive/climate_pragmatism_innovation

100% wind, solar, hydro, etc by 2050 is insane energy policy. It would double energy costs and – in the remote possibility that it can be done – still achieve remarkably little.

A multi-sector and multi-gas – including aerosols – strategy is required along with accelerated energy research, development and commercialization.

I have a terrific optometrist who is especially good with reading glasses.

Speaking of which, in 1991 I bought a pair for $110. Recently, I got some for a dollar. That makes you wonder about the validity of the CPI and the calculations used in long-term inflation.

Did you get to your bookie about that over/under bet? It’s completely unscientific but I feel it in my bones that the Arctic, pushed along by our constant companion Natural Variability, is going to come face to face with The Big Chill.

0.91 is what, surface stations?
How is it on the satellites?
As for the North Pacific and Atlantic specifically, I guess that means you do not want to look at the east, south and west Pacific and Atlantic?
Which is lucky, because SSTs seem to be taking a dive in those regions.

> Did you get to your bookie about that over/under bet? It’s completely unscientific but I feel it in my bones that the Arctic, pushed along by our constant companion Natural Variability, is going to come face to face with The Big Chill.

Tell me you know what the issue is. Tell me you know about previous warm periods in the Arctic. Tell me you are aware of observations made over 100 years ago as to how the Arctic had changed. Tell me you know about attempts to navigate the Northeast and Northwest passages.

GISS June was .69 ℃.
So that does not help your 0.91 ℃ comment, does it?
Better go for 5th highest perhaps?
Looking at UAH I have to admit the tropics were warmer than I expected.
Which makes the rest colder I guess as the anomaly is only 0.28C.
Overall a lot of blue (cold) for SST.
Next month will be lower still despite Australian BOM refusing to put in low temperatures.

Okay. To get to 5th warmest would require monthly anomalies that are very low. What on earth would cause that to happen? I’ll help you: big wind. What would cause this big-wind fantasy of yours? Anything is possible.

CargoCult Etc., a place where hallucinatory prayers can come true. The AMO witchdoctors have followers.

Ohhh, details on Teh Bet. Ok. 2 Pounds that the Dems won’t win the White House before the Arctic regains its chillish form reducing the probability of warm years in Greenland such as experienced in 1785, 1868, 1889 and 1908 (Keegan, et al, 2014).

> 2 Pounds that the Dems won’t win the White House before the Arctic regains its chillish form reducing the probability of warm years in Greenland such as experienced in 1785, 1868, 1889 and 1908 (Keegan, et al, 2014).

You forgot to mention a crucial parameter:

[T]hese data suggest that 1889 was a particularly warm year (Fig. 2C). However, this was not the warmest year recorded in the firn. Temperatures were warmer in 1785, for example, but melting in the dry snow region did not occur in that year (Fig. 2A). Similarly, widespread melting in the dry snow region did not occur during the most recent record-breaking melt extent years of 2002, 2007, or 2010 (10−12). Thus, high temperatures alone are often not enough to cause widespread melt.

Nothing to do with 2016. It’s been warm before. It will be warm again. It’s coming around the mountain. The problem with you Algore acolytes is you’re so blinded by faith you can’t see the facts. Like I said, 2 Pounds on a cinch bet.

When the iceberg emancipated itself from Antarctica recently, your boy Al wasted no time in blaming AGW even though the true experts said it was natural variability, something you have limited capabilities to understand. Do try to keep up. The world is moving beyond you. Really fast.

I made no representations about the significance of the warm years. You’re too bound up in your ideological skirt to read beyond what I wrote. This is about the bet. That can’t be falsified at this time.

You seem to be taken aback that there is massive evidence the current Arctic warming is not unprecedented. Of course, relying on Junior Scholastic puts one at a disadvantage. Try harder next time. You won’t look so bad.

2 Pounds that the Dems won’t win the White House before the Arctic regains its chillish form reducing the probability of warm years in Greenland such as experienced in 1785, 1868, 1889 and 1908 (Keegan, et al, 2014).

“Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions.” http://www.pnas.org/content/106/38/16120.full

Activists remain oblivious after decades of science on internal variability. They simply adjust their memes to promote insane policy overreach. There was a chance for rational policy – all they needed was a multi-gas strategy, better land management and energy innovation – but it is too late now to change trajectory. It’s all over baby blue.

Like this?
Zeke (Comment #130058) June 7th, 2014
Mosh, Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.

The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.

An alternative approach would be to assume that the initial temperature reported by a station when it joins the network is “true”, and remove breakpoints relative to the start of the network rather than the end. It would have no effect at all on the trends over the period, of course, but it would lead to less complaining about distant past temperatures changing at the expense of more present temperatures changing.
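Zeke’s two conventions can be illustrated with a toy series. This is a simplification – the real NCDC pairwise algorithm detects breaks by comparison with neighboring stations – and the step size, dates and trend below are invented for illustration:

```python
import numpy as np

# Toy 100-year annual station series with a +0.3 C non-climatic step
# (e.g. an instrument change) in year 60.  All values are assumed.
years = np.arange(1900, 2000)
true_climate = 0.01 * (years - 1900)      # underlying 0.01 C/yr trend
series = true_climate.copy()
series[60:] += 0.3                        # the spurious step change

break_idx, step = 60, 0.3                 # assume the break was detected

# Convention A (as described above for NCDC): treat CURRENT values as
# "true" -> shift all PAST values up to close the gap.
adj_current_true = series.copy()
adj_current_true[:break_idx] += step

# Convention B (the alternative): treat the EARLIEST values as "true"
# -> shift all PRESENT values down instead.
adj_start_true = series.copy()
adj_start_true[break_idx:] -= step

# Both conventions remove the step; the fitted trend is identical either
# way -- the two adjusted series differ only by a constant offset.
trend_a = np.polyfit(years, adj_current_true, 1)[0]
trend_b = np.polyfit(years, adj_start_true, 1)[0]
```

This makes Zeke’s point concrete: the choice of reference end changes which anomalies move when new breakpoints are found, but has no effect at all on the trend.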

You are probably correct. One thing that seems pretty sensitive to small changes in ocean temperatures is algae. There are record levels of algae blooming across the planet year after year. One second order effect is the largest dead zone ever in the Gulf of Mexico. This also ties in to falling oxygen levels in oceans, lakes and rivers.

“Because of the earlier glaciation of Antarctica, there was a longer period available for the development of cold-water biota in the Southern Ocean, compared to that in the Northern Hemisphere (Hempel, 1987; Lüning & tom Dieck, 1989). Hence, a stenothermal flora and fauna developed in the Antarctic, with a high growth efficiency at low temperatures and a correspondingly low tolerance to high temperatures (Hempel, 1987; Wiencke & tom Dieck, 1989, 1990). In contrast, Arctic and cold-temperate species from the Northern Hemisphere are relatively eurythermal (Bolton & Lüning, 1982; Bolton, 1983; Egan et al., 1990; tom Dieck, 1989, 1992b, 1993).” https://www.google.com.au/search?q=eurythermal+(&rlz=1C1CHBF_en-GBAU736AU736&oq=eurythermal+(&aqs=chrome..69i57j0l5.1194j0j8&sourceid=chrome&ie=UTF-8

Algae are nutrient and light – rather than temperature – limited. The shift to greater eutrophication that began in the 20th century globally was the result of increased nutrient export from farming and urban development. But there are also natural sources of variation – primarily with upwelling of cold and nutrient rich deep ocean water that have profound implications for global ecology.

Exactly what part of the biosphere are you watching for second order effects? Algae, plankton, fungus, soil microbes… The biosphere is just a thin veneer of chemical soup on which everything else depends. When you change the physics (temperature) or the chemical composition, it will respond.
If CO2 was the only thing causing the changes it would be a much easier problem to solve.
Let’s add some more chemicals…
The global calcium nitrate market was valued at USD 9.05 billion in 2015, is expected to reach USD 12.36 billion in 2021, and is anticipated to grow at a CAGR of 5.4% between 2016 and 2021. The global calcium nitrate market stood at 15,215.40 kilotons in 2015.

Changes in ocean chemistry seem likely to change flora and faunal assemblages – with the possibility of chaotic (in the physics sense) impacts on trophic webs.

But eutrophication in the Gulf of Mexico is the result of Mississippi-borne nutrients – and not global warming. Solving this problem – I have spent a career solving this problem – requires a different set of responses than wind and solar power.

Frank Bosse
While I am enjoying and learning from the discussion in the main post and other knowledgeable commenters, may I offer the following as to why the NH temperature increases vary compared to the SH.
I particularly like the temperature charts that Dr Ryan Maue provides as they are quite revealing.
As an example, when the years 2005 and 2017 (to date) are compared one can see the influence that possibly controls the SH – NH temperature imbalance raised in the main post and then attributed to “forcings”.
2005 was a high-energy year with high global ACE, both early and late season – energy that was transported via atmospheric transport. Note the mirror effect between the NH and SH temperature profiles and the high incidence of that effect during the year. Other years, such as 2006 and 2008, are also good examples. When the NH rises, the SH declines in near-perfect timing, with a significant influence on the global average. The energy being generated in the NH is affecting the temperature profile in the SH by pulling it down, due to that energy creating a blocking mechanism. Note also the significant short temperature drop in late September. These are remarkable movements.
Then compare to the 2017 year to June (the latest I can find) which so far has been a low energy year identified by lower global ACE to 31st July. Note how the temperature trends are mainly in sync.
Here we have two completely different temperature profiles, with the only difference being hemispheric pressure levels and atmospheric transport volume.
Carbon-related forcing is an intangible, the same as CO2, despite claims otherwise. Even though they cannot be proven, they seem to hold centre court, or are given the benefit of the doubt. However, in these two charts we have something physical and tangible that is telling us something. The mirror effect is not coincidence; it is a fluid dynamic forcing.
Regards

“Any reduction in global mean near-surface temperature due to a future decline in solar activity is likely to be a small fraction of projected anthropogenic warming. However, variability in ultraviolet solar irradiance is linked to modulation of the Arctic and North Atlantic Oscillations, suggesting the potential for larger regional surface climate effects.” Ineson et al 2015

Surface cooling in the eastern Pacific is the result of enhanced upwelling. This is modulated by flows in the Peruvian and Californian Currents – which in turn are driven by changes in polar storm tracks resulting from surface pressure changes at the poles. The latter – it appears – respond to solar UV changes.

In the Pacific upwelling creates reinforcing feedbacks that evolve into the PDO and ENSO phenomenon. Vast changes in sea surface temperature modulate cloud cover and thus the energy budget of the planet.
Over a very long time, Pacific regimes reflect solar activity. More salt in a Law Dome ice core is La Niña.

In the North Atlantic it leads to a more negative AMO and reduced AMOC.

In general a cooling sun and less UV leads to a cooling planet. The broad potential can be identified – but the near term evolution of climate cannot be predicted from the latest monthly data. The latter is like the blind man divining the form of an elephant by feeling the trunk – a metaphor for the limitations of knowledge.

In the longer term a natural cooling influence over centuries seems more likely than not – with the possibility of reinforcing feedbacks in a dynamic sensitivity to changes in UV. Theoretically – greenhouse gases may give rise to dynamic sensitivity but there is no telling which way it will go.

The “future evolution of the global mean temperature may hold surprises on both the warm and cold ends of the spectrum due entirely to internal variability that lie well outside the envelope of a steadily increasing global mean temperature.” http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/full

“We find that the phase of the Interdecadal Pacific Oscillation (IPO), a slow-moving natural oscillation in the climate system, will regulate the rate at which global temperature approaches the 1.5°C level. A transition to the positive phase of the IPO would lead to a projected exceedance of the target centered around 2026. If the Pacific Ocean remains in its negative phase, however, the projections are centered on reaching the target around 5 years later, in 2031.” http://onlinelibrary.wiley.com/doi/10.1002/2017GL073480/full

Sine waves on a rising trend?

We are at a critical juncture with the Pacific about to shift state again – in a 2018-2028 window. Indeed – there have been suggestions that it happened after 2014. Get ready for temperature rises to take off ‘with a vengeance’?

Hilarious. Just too funny. The PDO has remained in positive territory for a record number of months in a row in the JISAO data. Record. During that ~3.5 years, the SAT has shot up like a rocket; an ENSO-neutral year became the record warmest year; a La Niña was violently “runted”; OHC remains very high. The AMO? WGaSATAMO? But it is also tagging along, doing nothing at all, as usual.

The NCEI PDO index is based on NOAA’s extended reconstruction of SSTs (ERSST Version 4). It is constructed by regressing the ERSST anomalies against the Mantua PDO index (the JISAO PDO index) over their overlap period to compute a PDO regression map for the North Pacific ERSST anomalies. The ERSST anomalies are then projected onto that map to compute the NCEI index. The NCEI PDO index closely follows the Mantua PDO index.
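A minimal sketch of that regress-then-project construction, on synthetic data. The grid size, overlap period and noise level are assumptions for illustration – this is not the actual NCEI/ERSSTv4 procedure, just the shape of it:

```python
import numpy as np

# Synthetic stand-in for North Pacific SST anomalies: a fixed spatial
# pattern modulated by an index, plus noise.  All sizes are assumed.
rng = np.random.default_rng(1)
n_time, n_grid = 600, 200                  # months x grid cells
pattern = rng.normal(size=n_grid)          # the "true" PDO spatial pattern
index = rng.normal(size=n_time)            # stand-in for the Mantua index
sst_anom = np.outer(index, pattern) + 0.5 * rng.normal(size=(n_time, n_grid))

# Step 1: regress each grid cell's anomaly on the index over an overlap
# period (no intercept; anomalies assumed centered) -> one coefficient
# per cell, i.e. the PDO regression map.
overlap = slice(0, 400)
x = index[overlap]
reg_map = sst_anom[overlap].T @ x / (x @ x)

# Step 2: project the full anomaly field onto that map (least-squares
# projection) to reconstruct the index for every month.
recon = sst_anom @ reg_map / (reg_map @ reg_map)

# The reconstructed index closely tracks the original, including outside
# the overlap period used for the regression.
r = np.corrcoef(recon, index)[0, 1]
```

Which is why, as noted above, an index built this way closely follows the index it was regressed against.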

Burl discovered no such thing. Small oscillations in GAT are frequent, and at low resolution, on a 150-year temperature graph, using sloppy alignment and no vertical bars marking the beginning and ending points of recessions, you can make it look like there’s a rise. His “work” is not peer reviewed, and upon spot-checking the recessions I easily found some that happened during falling temperatures.

You completely miss the entire point of the graphs that I present. Their purpose is simply to confirm that temperatures will rise whenever anthropogenic SO2 aerosol levels in the troposphere are reduced, which is the main point of my essay.

This is confirmed by a quick glance at the graphs from the 3 data sets. The precise start and stop dates of each recession are really unimportant.

What IS important is that temperatures always rise during a business recession.

Your comment that the 1948–49 recession occurred during falling temperatures is INCORRECT. The warming peak on the GISS plot is admittedly rather small, but it is larger on the HADCRUT4 plot, and even more evident on the “PDO index values: January 1900 – January 2017” plot posted earlier by JCH.

“We develop the concept of “dragon-kings” corresponding to meaningful outliers, which are found to coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings. We present a generic phase diagram to explain the generation of dragon-kings and document their presence in six different examples (distribution of city sizes, distribution of acoustic emissions associated with material failure, distribution of velocity increments in hydrodynamic turbulence, distribution of financial drawdowns, distribution of the energies of epileptic seizures in humans and in model animals, distribution of the earthquake energies). We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of René Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king.” https://arxiv.org/ftp/arxiv/papers/0907/0907.4290.pdf

I am not about to validate linear trendology of a few years data by taking it seriously.

The large shift between states over the past few years is suggestive of a dragon-king – and thus that a shift has happened. A shift to what is as yet indeterminate. The system is not purely periodic. The system is chaotic in the physics sense – as suggested by Tsonis’ network math. Actually.

…The synthetic series in Fig. 5a also show examples of greatly accelerated warming lasting a decade or more, which are evidently spring-back effects as an internal variability cooling episode is followed by a strong internal variability warming episode. The strong warming episodes are further amplified by the underlying forced warming trend. One extreme example shows a warming of almost 1 °C in 15 years—a much greater 15-year warming rate than has occurred in the observations to date (red curves). These spring-back warmings illustrate another important potential consequence of strong internal multidecadal variability as simulated in CM3, and reinforce the need to better understand whether such internal variability actually occurs in the real world. …

The AO/NAO index gives the clue as to the mechanisms at work. The polar annular indices reflect changes in surface pressures at polar and sub-polar regions. These pressure fields in both hemispheres are modulated by solar UV/ozone chemistry. Higher polar surface pressures push wind and storms into lower latitudes and spin up sub-polar gyres in the world’s oceans. This biases the Pacific to more upwelling in the east – which is the origin of both ENSO and the PDO.

The point – however – was about the assumption of purely periodic behaviour of the system. Warming followed by cooling followed by warming perpetually. A sine wave on a rising trend as shown on the Ghil sketch. Sounds ridiculous when I put it that way.

We have a solar signal that people on one side of the issue are decomposing using harmonic analyses – usually lacking the acknowledgment of a need for an internal solar amplifying mechanism to explain large climate changes from a small change in solar forcing. Nor can we explain much about puzzling data such as the mid-Holocene transition seen in isotope records and ENSO proxies alike. On the other side – decadal climate variability is dismissed on the basis that – as purely periodic on a multi-decadal scale – it cancels out.

My view is that harmonic analysis obscures the chaotic evolution of the solar signal – it is not composed of multiple sine waves of various durations but involves indeterminate shifts in state space. Even the apparent regularity of the Milankovitch cycles that trigger glacials obscures the vagaries of internal responses that are the proximate cause of global temperature change in the quasi-100,000-year ‘cycle’. At the level of fine detail, prediction remains problematic.

At the 20 to 30 year scale of the IPO there are shifts in climate means and variance that add up to millennial-scale climate change. This is an ENSO proxy from an Antarctic ice core. More salt is La Niña and more Australian rainfall.

The IPO is there – seen with harmonic analysis – as well as several other interesting findings. I think it is likely that the Pacific system amplified solar irradiance changes and added to warming in the 20th century – associated with cloud changes that are anti-correlated with sea surface temperature. As the system is evidently not purely periodic at any scale we know of, the question is whether declining solar intensity will bias the system to more Pacific upwelling and a cooler world in the 21st century. And likewise, how much NH means are likely to change. Or indeed, whether solar intensity will decline at all. It is uncertainty turtles all the way down.

We are likely to come off the 20th century peak in ENSO intensity and frequency – but when this happens is at the whim of the dragon-kings. Is it possible that the next Pacific climate shift – due in a 2018-2028 window – will determine the winner in the climate debate?

I found establishing some kind of statistical basis for measuring the differences between hemispheric trends for CMIP5 models and observed temperature data sets interesting, for more than what can be derived from those differences vis-à-vis sensitivity differences between models and observations for aerosols.

The warming ratios for the recent AGW warming period for the Northern and Southern Hemispheres certainly go in the direction of the observed ratio of Northern to Southern trends being higher than that for most of the CMIP5 model runs. That would support the theory/conjecture that the aerosol forcing used by the models – being more confined to the Northern Hemisphere – is generally too high, which in turn indicates that the sensitivity of the CMIP5 models is also generally too high. Another consideration is that the land portion of the earth warms faster than the ocean portion, and with the Northern Hemisphere having proportionally more land, it would be expected to warm faster than the Southern Hemisphere.

In general, however, if the global warming trends for observations and models match approximately, the question of how this match is obtained with very different Northern and Southern Hemisphere warming trends must be very seriously considered and understood – whatever the final answer(s) might be.

The very obvious fact of nature when attempting valid statistical comparisons of climate features between models and observations is that we cannot obtain multiple realizations of the earth’s climate. We have only one realization, and we cannot determine where that realization falls in the range of possible realizations given the chaotic nature of climate. Unfortunately, when a model has only a single run, a statistical comparison with the observed is likewise impossible, for the same reason that applies to the single observed realization. Climate modelers, and the organizations that present the results of climate models, should be aware that without multiple runs there is no good statistical comparison with observed temperature data sets or with other climate models. If the excuse is lack of computing resources, then one must wonder why even a single run is made.

A statistical comparison can be made between a single model run (or the observed series) and a model that has multiple runs. In this case the statistical determination involves finding where the single run or observation lies within the distribution implied by the multiple runs. Statistical comparisons between models that both have multiple runs can be made more efficiently, since the means and standard errors can be used directly in the comparisons.
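The placement idea can be sketched in a few lines of Python. The trend values here are invented for illustration (they are not from my tables), and a normal shape is assumed for the ensemble spread:

```python
# Where does a single observed trend fall within one model's run distribution?
# Trend values (deg C/decade) are hypothetical, not from the tables discussed.
from statistics import NormalDist, mean, stdev

ensemble_trends = [0.085, 0.092, 0.101, 0.097, 0.110, 0.088, 0.105]  # 7 runs
observed_trend = 0.072  # the single observed realization

mu, sigma = mean(ensemble_trends), stdev(ensemble_trends)
# Percentile of the observation within the (assumed normal) ensemble spread
pct = NormalDist(mu, sigma).cdf(observed_trend)
# Two-sided p-value: how unusual the observation is under that distribution
p_two_sided = 2 * min(pct, 1 - pct)
print(f"observed sits at the {100 * pct:.1f}th percentile, p = {p_two_sided:.3f}")
```

With a single-run model no such distribution exists, which is the limitation described above.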

The results of my comparisons of observed data sets and CMIP5 climate models for Northern Hemisphere/Southern Hemisphere trend ratios for the period 1880-2005 are given in the link/table below. The observed sets used were HadCRUT4, the Cowtan and Way infilled HadCRUT4, GHCN and GISS 1200. The table shows that while the NH/SH ratios for the models are generally lower than those for the observed data sets, some models have nearly the same ratio and some are higher. There are statistically significant differences between models, and those differences should preclude using an average of all model runs and a distribution of those runs in making comparisons with the observed results. The comparison should instead be between the observed series and the individual model – where the model has multiple runs.
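For anyone wanting to reproduce the ratio calculation itself, it reduces to the ratio of two OLS slopes. This sketch uses synthetic anomaly series, not the HadCRUT4/GISS data behind the table:

```python
# NH/SH trend ratio as a ratio of OLS slopes; the anomaly series below are
# synthetic placeholders, not the observed or model data used for the table.

def ols_slope(y):
    """OLS slope of y against its index (units of y per time step)."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

years = range(1880, 2006)
nh = [0.010 * (y - 1880) for y in years]  # NH warming at 0.10 C/decade
sh = [0.007 * (y - 1880) for y in years]  # SH warming at 0.07 C/decade

ratio = ols_slope(nh) / ols_slope(sh)
print(f"NH/SH trend ratio: {ratio:.2f}")
```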

Ken, very interesting, and thanks for this work. A proposal: the GHG forcing has grown in importance relative to the aerosol forcing since about 1980. It could be that the earlier years introduce some noise into the trend ratios, also due to internal variability. What about recalculating the trends from your tables using data for 1980…2015 (not 2016, due to ENSO)?

Frank, I used some older data and methods in constructing the table above. I am going back and will use a newer method called Ensemble Empirical Mode Decomposition (EEMD) for determining trends, and will use the tos model data for oceans and the tas model data for land to be more in line with the way observed temperatures are measured. It might take a day to do this. I plan to show how much both the observed and modeled temperature trends are reduced when EEMD removes the recurring cyclical component.

I have determined the trends for the periods 1880-2015 and 1980-2015 using two methods: Ensemble Empirical Mode Decomposition (EEMD) and Break Point Ordinary Least Squares regression (BPOLS). With EEMD the recurring cyclical components are removed from the trend. BPOLS is similar to the common climate-science practice of OLS linear regression, with the difference that breaks in the series are taken into account; with BPOLS the recurring cyclical components remain part of the measured trend. In both methods the periods used for establishing trends were 1861-2015 for the models and 1880-2016 for the observed data sets.
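The break-point idea can be sketched as a search for the single break that minimizes the combined residual sum of squares of two OLS fits. This is a toy illustration on synthetic data, not the actual BPOLS implementation used for the tables:

```python
# Minimal break-point OLS sketch: scan candidate break indices and keep the
# one whose two-segment OLS fit has the smallest total SSE. Synthetic data.

def ols_fit(x, y):
    """Return (slope, SSE) of an OLS line fit."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    a = yb - b * xb
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b, sse

x = list(range(100))
# Slope (and level) change at index 60
y = [0.002 * xi if xi < 60 else 0.5 + 0.015 * (xi - 60) for xi in x]

sse, k = min(
    (ols_fit(x[:j], y[:j])[1] + ols_fit(x[j:], y[j:])[1], j)
    for j in range(10, 90)
)
print("estimated break index:", k)
```

Real implementations also test whether the break is statistically justified; this sketch only locates it.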

The ocean parts of the model data sets were derived using tos and the land parts using tas. This is done to make an oranges-to-oranges comparison between the models and the way the observed series are measured: the observed series use the sea surface temperature in the water (tos), not the air temperature above the water (tas). While there is not sufficient observed tas data to determine whether observed tos and tas trends differ significantly, for the models that is generally the case.
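The blending itself is just an area-weighted combination; a minimal sketch, where the land fraction and the anomaly values are illustrative assumptions:

```python
# Blend model fields the way observed series are built: tas over land,
# tos over ocean, weighted by land fraction. All numbers are illustrative.

def blended_anomaly(tas_land, tos_ocean, land_frac=0.29):
    """Area-weighted global anomaly from land-air and sea-surface anomalies."""
    return land_frac * tas_land + (1 - land_frac) * tos_ocean

# Land warming faster than ocean pulls the blend above the ocean value
print(blended_anomaly(0.9, 0.5))
```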

The results of the comparison are shown in the table below. The parameters presented are the global trends (in degrees C per decade) for each model or observed series, the ratios of Northern Hemisphere to global trends, and the standard deviations of the trends where a model has multiple runs. Under both methods and both time periods the observed and model series on average have similar global trends and NH/global ratios, but on closer inspection, with the standard deviation values in mind, there are a number of instances of significant differences between the observed and model results and within the model results. This shows that a comparison of the observed to average model results is somewhat misleading, and that the comparison should be observed to individual model and individual model to individual model. It should be noted that the use of single model runs is very limiting when attempting a statistical comparison of observed to a single model run, or of one single model run to another.

The important point from these comparisons is that when the recurring cyclical components are removed (using EEMD), the trends in the critical GHG-influenced period 1980-2015, for both the model runs on average and the observed series, are reduced by nearly half. I have presented results like these, showing the dramatic reduction in trends for the 1975-2016 period, to a number of climate scientists who have written papers in this area of the science, some of whom have appeared on these blogs proclaiming a purely objective approach to the science. None of these scientists have replied to my request for comments or even acknowledged my request. I have had several productive exchanges on this matter with the inventor of EEMD (yes, it is patented), Norden Huang.
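EEMD itself needs a specialized implementation (the PyEMD package is one option); the toy example below only illustrates the mechanism behind the reduction: when a short analysis window coincides with the rising phase of a multidecadal oscillation, removing the cyclical component substantially lowers the fitted trend. All numbers are synthetic assumptions, not the actual EEMD results:

```python
# Toy illustration (not EEMD): a 60-year cycle riding on a 0.01 C/yr trend.
# Over 1980-2015 the cycle is in its rising phase, inflating the OLS trend;
# subtracting the cycle recovers the underlying trend.
import math

def ols_slope(y):
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    return (sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
            / sum((i - xbar) ** 2 for i in range(n)))

years = list(range(1880, 2016))
cycle = [0.1 * math.sin(2 * math.pi * (y - 1880) / 60) for y in years]
series = [0.01 * (y - 1880) + c for y, c in zip(years, cycle)]

window = slice(years.index(1980), None)  # 1980-2015
raw = ols_slope(series[window])
cycle_removed = ols_slope([s - c for s, c in zip(series[window], cycle[window])])
print(f"raw trend: {raw:.4f} C/yr, cycle removed: {cycle_removed:.4f} C/yr")
```

Here the cycle is known exactly and subtracted directly; EEMD instead extracts such components adaptively from the data.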

Thanks Ken, very interesting. If the thesis is right that the deviation of the NH/global trend ratio from the observed value (I would prefer C/W, as Nic Lewis argued here: https://climateaudit.org/2017/05/18/how-dependent-are-gistemp-trends-on-the-gridding-radius-used/ ) is a sign of the aerosol forcing of a model, then there should be a correlation between these values. Can you calculate this, or can you publish the data from your results online? I’m afraid this possible correlation is not very robust due to internal variability. Thanks for your very valuable work.

The only parameter I see as giving additional information from my results is the relation of the NH/global trend ratio to the global temperature trend for the CMIP5 model runs. I calculated four correlations, with 95% confidence intervals, for EEMD-determined trends over 1880-2015 and 1980-2015 and for break point OLS-determined trends over the same two periods. The only statistically significant correlation I obtained was for the EEMD-determined trend for 1880-2015 (r=0.33). I suspect that the 1980-2015 period is too short to obtain significant correlations, and that EEMD may be a better indicator of deterministic trends because it accounts for the recurring cyclical components.
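For reference, a correlation with a 95% confidence interval of this kind can be computed as a Pearson r with a Fisher z interval. The paired values below are invented for illustration, not the actual per-model ratios and trends:

```python
# Pearson r with a Fisher z 95% confidence interval; the (ratio, trend)
# pairs are hypothetical placeholders, not the CMIP5 results discussed.
import math

def pearson_r(x, y):
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z_crit=1.96):
    """95% CI for r via the Fisher z transform."""
    z = math.atanh(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)

ratios = [1.10, 1.25, 1.05, 1.40, 1.18, 1.30, 1.22, 1.08, 1.35, 1.15]
trends = [0.08, 0.11, 0.07, 0.14, 0.10, 0.12, 0.11, 0.09, 0.13, 0.09]

r = pearson_r(ratios, trends)
lo, hi = fisher_ci(r, len(ratios))
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The correlation is "statistically significant" in the usual sense when the interval excludes zero.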

The important points of my results are (1) the dramatic reduction of the trend in the 1980-2015 period when EEMD is applied, (2) the significant variation between individual CMIP5 models for both global mean temperature trends and NH/global trend ratios, and (3) the differences in NH/global trend ratios even among the observed series.

Ken, I would like to add one more point to your results: with OLS, the model mean underestimates the NH/global trend ratio by 11% (15%) for 1880-2015 (1980-2015) in your calculations. This is consistent with the main post: the CS17 paper describes the particular impact of the mostly NH-located aerosol-cloud effect in the models, an effect that is much smaller (or zero) in the real world. For a better performance one would have to reduce the total aerosol forcing. Since the models also overestimate the global trends, this implies that they are too sensitive to GHG forcing.

Frank, I cannot reiterate enough that individual CMIP5 models (in this case RCP 4.5) produce very different results when put to statistical tests. Of the 108 RCP 4.5 model runs from KNMI Climate Explorer that were studied, I show here the statistically significant differences in global means (using tas) for the 17 individual models that had multiple runs. There were 42 models that produced the 108 runs and, unfortunately, only 17 models had multiple runs allowing a statistical comparison. Surely the modelers and their administrators must be aware of the limitations that a single-run model places on statistical testing. In my mind, this situation, with such a high proportion of single-run models, degrades the seriousness of the all-important modeling effort – particularly so for those of us who have little reason to take the currently published temperature reconstructions and the underlying methods seriously.

The 17 models with multiple runs can be statistically compared using the mean and standard deviation of the model run global trends and significant differences determined (p.value less than 0.05). With 17 models, 136 unique pairs are produced for comparison. Below I show a matrix with these pairs for the global mean surface temperature (GMST) with the p.values and presenting those pairs with statistical differences in red.
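The pairwise setup can be sketched as follows. Seventeen models give C(17,2) = 136 unique pairs; since the t-distribution CDF is not in the Python standard library, this sketch uses a normal approximation to the Welch test, and the model names and run statistics are invented for illustration:

```python
# Pairwise comparison of model-run means: 17 models -> 136 unique pairs.
# Normal approximation to the Welch two-sample test (the t CDF is not in
# the stdlib); the per-model (mean, sd, n_runs) values are hypothetical.
import math
from itertools import combinations
from statistics import NormalDist

models = {f"model{i:02d}": (0.08 + 0.004 * i, 0.01, 3 + i % 4)
          for i in range(17)}  # (mean trend, sd of run trends, runs)

def welch_p(m1, m2):
    (x1, s1, n1), (x2, s2, n2) = m1, m2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    z = abs(x1 - x2) / se
    return 2 * (1 - NormalDist().cdf(z))  # two-sided p, normal approximation

pairs = list(combinations(models, 2))
sig = [(a, b) for a, b in pairs if welch_p(models[a], models[b]) < 0.05]
print(f"{len(pairs)} pairs, {len(sig)} significantly different")
```

With the small run counts involved (2-6 runs per model), a proper t-based test, as used for the matrix above, is more conservative than this approximation.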

The GMST comparisons of the 136 pairs produced 91, 104, 83 and 63 pairs with significantly different means for the EEMD-determined trends over 1861-2015 and 1980-2015 and the break point OLS-determined trends over the same two periods, respectively. The ratios of Northern Hemisphere to GMST EEMD trends for 1861-2015 had 58 pairs with significant differences. A simulation was run with pairs generated from an ARMA model with ar=0.52 and standard deviation=0.13, with a break point OLS trend held constant across all simulation runs and using the same numbers of runs as the 17 RCP 4.5 models. Of those 136 unique pairs, only 3 had significant differences.