All models are wrong, but some are useful. That’s how all modelers speak (except perhaps some climate scientists).

The barriers to making a good climate model are many. The data is short, noisy, and adjusted, and many factors are simultaneously at work, some not well described yet. Climate modeling is in its infancy, yet billions of dollars rest on the assumption that CO2 will cause catastrophic warming, and the evidence that most recent warming was due to CO2 comes entirely out of models. It’s important to focus on the pea:

“No climate model that has used natural forcing only has reproduced the observed global mean warming trend” (IPCC 2007)

The total climate model described below can reproduce graphs based on a CO2 model, such as one used by GISS, but it can also produce graphs using the solar model developed in these posts, or a mix of both CO2 and solar. (This is the point where the solar assumption is dropped and tested.) The point here is simply to see if there is a viable alternative model to the CO2 model. It appears there is, which is not to say it’s finished, or can’t be improved, or cannot be presented better, or tweaked. At this stage it’s crude, but it exists.

There are 23 well-funded, ambitious global climate models that have been developed by international teams over the last 30 years, and a huge effort has been made by PR teams to make those models look good. The model below is one person’s work over 18 months with the aim of asking only, “is this possible?” and “what can we learn?” The results are displayed with bare honesty and many caveats about how much (or how little) can be read out of them.

No model, much less one whose predictions have not been tested, is proof of any hypothesis. But models are sometimes good tools to tell us where to look. The notch-delay solar model is a viable alternative to the current CO2 models. It matches most major turning points of temperature (something CO2 models have struggled to do), and is used here back to 1770, 100 years earlier than most. There is a definite weak period in the 1950-1980 era, where the atmospheric bomb test line resolves to have an improbably large effect. You might think the idea that nuclear tests cooled the planet in the ’60s and ’70s is ridiculous. I certainly did. It’s something fans of CO2 theories have used to explain the cooling that CO2-based models can’t explain. Does it have legs? Hard to say, and worthy of a post on its own. But before you write it off, see John Daly’s site, which has an interesting discussion page that compares bombs to the Pinatubo eruption, and points out that atomic bomb testing went on in the atmosphere, despite the 1963 test ban treaty, until 1980 (thanks to the Chinese and French). Nuclear bombs contribute aerosol dust, but more than that, the debris is radioactive too (a bit of a cosmic ray effect?). You might think it would rain out quickly, but bombs of one megaton reach up to the stratosphere, above the clouds that rain. All up, 504 atmospheric nuclear explosions occurred between 1945 and 1980, totalling 440 Mt (Fujii). It hangs around: carbon-14 levels in the atmosphere peaked in 1963 but the isotope stayed above natural levels for years, into the mid-1980s (Edwards 2012). Fujii (2011) suggests atmospheric tests caused the “global stagnation” of that era and says they should be included in GCMs. Maybe it isn’t as mad as it sounds?

The model has no aerosol component, which may or may not offset the cooling theoretically attributed to atmospheric bombs, nor does it include the Pacific Decadal Oscillation or lunar cycles. The anomaly may be resolved if the model is expanded, or maybe it just means the delayed force from the Sun is not the major driver. As we’ll explain in the next post, we’ll probably all have a good idea in a few years.

Solar TSI appears to be a leading indicator for some other (probably solar) effect, which we are calling “force X” for now. If that factor, quantified by TSI, were fed into current climate models, then those models would work with less forcing from CO2. Perhaps they would have produced better long-term graphs, as well as fitting the recent pause and not requiring such a pronounced tropospheric hotspot. It might solve a lot of problems at once. Presumably projections of catastrophe would be subdued.

Lastly, compounding the many hindcast inaccuracies is the problem of inexplicable adjustments to temperatures (every skeptic will be wondering). It’s possible a model trained on raw temperature curves (or older published datasets) would produce quite a different fit (which might be better or worse). For instance, if the land thermometer data from 1850 to 1978 exaggerates the general temperature rise, then the solar model will be too sensitive, because it trained (or computed its parameters) on this data and “thinks” the TSI changes caused that amount of temperature change. Ultimately we won’t know for a few years whether it is right. (Bring on those independent audits please!)

The theory of the “delay” will be tested soon. It is falsifiable. We’re putting it out there for discussion. We have issued no press release, we aren’t selling a product (we’ll give it all away soon), nor do we demand your tax money. Judge it accordingly.

The bottom line is that modern climate models do not include any delayed force from the Sun. Saying that models don’t work without CO2, and that no natural factors can explain the modern warming, is, and always was, a fallacy known as argument from ignorance. — Jo

Hindcasting

In the previous posts we built the notch-delay solar model. Now we are going to test it.

The solar model is given the TSI record from 1749 (the start of monthly sunspot records), and from 1770 onward it computes the temperature for each month using only the TSI data for that month and earlier months. Then we compare this “hindcast” with the measured temperatures. We also test the CO2 model to compare how it performs, and we test a mix of the CO2 and solar models to show that they play together well.

Finally, we look at the significance (or not) of the solar model so far.

1 Our total climate model

The total climate model* includes the notch-delay solar model, a standard CO2 model (two compartments, with transient and equilibrium responses, computing temperature changes from the observed CO2 levels), a CFC model (based on Lu 2013), and an atmospheric nuclear bomb tests model (based on the megatons exploded in the atmosphere, from UN reports). It can also apply all the forcings from the GISS Model E, a mainstream climate model that released its forcings publicly in 2011, notably volcanoes, black carbon, snow albedo, and land use.

All these models can be switched on or off in any pattern within our total climate model. The total climate model has an optimizer to fit the model’s temperature output to measured temperatures, thus finding a set of optimal parameters.
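To make the fitting step concrete, here is a minimal sketch in Python of how such an optimizer might be set up. It is illustrative only: the actual implementation is an Excel/VBA spreadsheet, the notch-delay solar model is reduced here to a crude delayed-TSI stand-in, and all function names, parameters, and data series are assumptions, not the real thing.

```python
# Minimal illustrative sketch of fitting a "total climate model" to observed temperatures.
# Not the actual spreadsheet model: the solar term is a crude delayed-TSI stand-in.
import numpy as np
from scipy.optimize import least_squares

def delayed_tsi_anomaly(tsi, delay_months=132):
    """Stand-in for the solar model: TSI anomaly delayed by ~11 years (132 months)."""
    padded = np.concatenate([np.full(delay_months, tsi[0]), tsi])[:len(tsi)]
    return padded - padded.mean()

def total_model(params, tsi, co2, bombs, use_solar=True, use_co2=True):
    """Sum of the switched-on constituent models (solar, CO2, bomb tests)."""
    solar_scale, ecs, bomb_coeff, offset = params
    temp = np.full(len(tsi), offset)
    if use_solar:
        temp += solar_scale * delayed_tsi_anomaly(tsi)
    if use_co2:
        temp += ecs * np.log2(co2 / co2[0])   # equilibrium response to observed CO2
    temp -= bomb_coeff * bombs                # cooling scaled to atmospheric test yield
    return temp

def residuals(params, tsi, co2, bombs, observed):
    return total_model(params, tsi, co2, bombs) - observed

# Usage with synthetic monthly data (substitute the composite TSI/temperature records):
n = 12 * 200
tsi = 1361 + 0.5 * np.sin(2 * np.pi * np.arange(n) / 132.0)
co2 = np.linspace(285, 400, n)
bombs = np.zeros(n)
observed = 0.8 * np.log2(co2 / co2[0]) + 0.05 * np.random.randn(n)
fit = least_squares(residuals, x0=[1.0, 3.0, 0.001, 0.0], args=(tsi, co2, bombs, observed))
print(fit.x)   # the set of "optimal parameters" found by the optimizer
```

Switching a constituent model off simply corresponds to holding its parameter at zero (or passing the relevant flag), which is how the CO2-only, solar-only, and mixed runs below are produced.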

We use composite TSI and temperature records for the measured TSI and temperature, as in the previous post. The composite temperature was put together from the main temperature datasets, instrumental back to 1880 then increasingly reliant on proxies, mainly the huge proxy study by Christiansen and Ljungqvist in 2012. Similarly the composite TSI record was constructed out of the main TSI datasets, using measured data where possible.

2 What if CO2 was the main driver?

To show how our “total climate model” works, let’s first fit a CO2 model to the observed temperatures, assuming there is no delayed link between TSI and temperatures (that is, the mainstream assumption).

Let’s run the CO2 model with solar input as per the GISS model (that is, the immediate, direct effect of changes in TSI), with the volcanoes, black carbon, snow albedo and land use also from GISS, and the CFCs. The CO2 model was fitted to the measured temperatures and found to have an equilibrium climate sensitivity (ECS) of 3.4°C, agreeing with the IPCC’s central estimate of 3.3°C. The carbon dioxide theory fits the measured temperatures since 1800 fairly well in a very smoothed sense:

Figure 1: Total climate model without the solar model. It includes immediate warming due to changes in TSI as per the mainstream “GISS Model E” climate model. Thus, most of the warming must come from carbon dioxide. The estimated equilibrium climate sensitivity is 3.4°C, close to the central estimate of 3.3°C by the IPCC.

The CO2 model produces a smooth increase in temperature, echoing the smoothly increasing CO2 concentration. Carbon dioxide by itself cannot begin to explain the jiggles in temperature on time scales from one to ten years, so the carbon dioxide theory calls these jiggles “natural variability”, essentially meaning the bits it cannot explain.
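As a rough back-of-envelope illustration (assuming the usual logarithmic relation between CO2 concentration and equilibrium warming, which this post does not spell out), an ECS of 3.4°C applied to the rise from about 280 ppm to 400 ppm gives 3.4 × log2(400/280) ≈ 1.75°C of eventual warming, with the warming realized at any given time being less because of the transient (upper ocean) lag.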

3 What if solar effects were the main driver?

Now let’s run the notch-delay solar model, without any contribution from CO2 or CFCs. In other words, we are running the solar model under the solar assumption, that the recent global warming was associated almost entirely with solar radiation and had no dependence on CO2. As explained at the start of these posts, we set out to build a solar model that could account for the recent global warming under that assumption.

So, we are now testing the proposition that the recent global warming could have been mainly associated with TSI rather than CO2.

There is monthly TSI data from 1749, when the SIDC monthly sunspot records start. They are a decent proxy for TSI, and along with Lean’s yearly reconstruction of TSI from sunspots they are the only components of the composite TSI from 1749 to 1882. The step response of the notch-delay solar model takes 15 years or more to complete, so the model takes 20 years or so to spin up, and we begin the simulation in 1770. (During the Maunder minimum, from about 1660 to 1705, there were almost no sunspots, so the solar model has no way of estimating force X. Thus it cannot really be expected to work before about 1720 at the earliest.)

Each monthly temperature computed by the solar model is computed only from the TSI data for previous months and the current month. This is the only data input to the solar model. We then add temperature changes due to volcanoes and so on from the other models, to form the temperature record computed by the total climate model.

The solar model computes the temperature for a given month by adding together the step responses to the TSI steps of all the previous months (that is, by convolution). The change in TSI from one month to the next is a “step” in TSI, and the temperature response to that step is as shown in the step response of the solar model in Figure 4 of Post VI, scaled by the size of the monthly TSI step. Yes, this method is a little slow and there are faster methods, but this way makes it clear that we are using only the step response and previous TSI data; in any case computers are fast these days, and the data series here have only a few thousand points.
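A minimal sketch of that computation in Python (illustrative only; the real step response comes from Figure 4 of Post VI, and the array names here are assumptions):

```python
import numpy as np

def hindcast_from_steps(tsi_monthly, step_response):
    """Temperature as the superposition of step responses: each month-to-month change
    in TSI is a "step", and its contribution is the step response scaled by the step
    size. Only the current and previous months' TSI enter each month's temperature."""
    d_tsi = np.diff(tsi_monthly, prepend=tsi_monthly[0])   # the monthly TSI "steps"
    n = len(tsi_monthly)
    temp = np.zeros(n)
    for t in range(n):
        for k in range(t + 1):
            if t - k < len(step_response):
                temp[t] += d_tsi[k] * step_response[t - k]
    return temp

# Faster equivalent, as the post notes: np.convolve(d_tsi, step_response)[:n]
```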

We previously found the parameters of the solar model by fitting the model’s computed temperatures to the observed temperatures (and simultaneously fitting the model’s transfer function to the empirical transfer function). So as long as the temperature record computed by the solar model basically has the right shape, it is of course going to fit the measured temperatures reasonably well. How well it does mainly depends on whether the model predicts the right shape of temperature curve, such as getting the turning points about right, because the fitting ensures that the computed temperatures match the measured temperatures in a general sense.

Figure 2a: 1770–2013: Total climate model when driven only by solar radiation, with no warming due to carbon dioxide. The solar model output is not explicitly shown here because having three lines close together (solar model, climate model, and observed temperatures) is too confusing, but it can be inferred by subtracting the other constituent models from the total climate model.

Figure 2b: 1900–2013: As for Figure 2a, but for the last century.

The major temperature trends are all reconstructed, with major turning points about right, and the sizes of the reconstructed changes are roughly as observed. Therefore the notch-delay solar model could provide an entirely solar explanation for recent global warming, without any significant warming due to rising CO2 or CFC levels.

The solar model reproduces a lot of jiggles, but gets the timing of them wrong as often as not, especially further back in time. This might simply be due to the fairly uncertain nature of the TSI data, which is reconstructed from sunspot numbers. Sunspot numbers themselves are uncertain because standards of what counted as a sunspot have varied over the years. And, as indicated by the physical interpretation of the delay in Post IV, the delay presumably is not constant but instead it is probably the length of the prevailing sunspot cycle, which averages 11 years but varies from 8 to 14 years. The solar model here is using a constant delay of 11 years. It doesn’t take much timing error to put an up-jiggle where there should be a down-jiggle. So there is some hope that, with better solar radiation data in future from satellites and a more complicated model with variable delay (the subject of future research perhaps, if there is sufficient interest), the solar model could explain some portion of “natural variability”.

Over the period of better TSI data, from 1610, the TSI was clearly at a maximum from about 1950 to 2000. However the temperature kept increasing during this period, even though TSI had plateaued. The delay in the solar model is 11 years, which shifts the temperature effect of that plateau to roughly 1960 to 2010, but that is not enough to explain why the total climate model reconstructs rising temperatures throughout this period when it is based on the solar model and omits the CO2 and CFC models. Here the output of the solar model is explicitly shown:

Figure 3: The solar model from 1900 as in Figure 2, but with the solar model output explicitly shown (in pink). From the 1950s through the 1990s (but mainly the 1960s), the solar model alone computes temperatures significantly warmer than actually occurred. In the total climate model this is counteracted by global cooling due to the atmospheric nuclear bomb tests, which put fine reflective dust into the atmosphere and apparently caused a mini-nuclear winter.

The answer found by curve fitting the total climate model to the observed temperatures is that global cooling caused by the atmospheric nuclear bomb tests may have counteracted the warming associated with the stronger TSI. This initially came as a great surprise to us, because the nuclear data had only been added as a bit of a joke and for completeness, but after a bit of research it started to look kind of plausible. The tests, conducted from 1945 to 1980 but mainly before 1963, put up fine dust that stayed high up in the atmosphere for years, reflecting sunlight back into space and lowering the incoming radiation [Fujii, 2011], and also dropping down radioactive nuclei that might seed clouds. Because the nuclear dust is in the stratosphere, there is no rain to wash it out. The required cooling from the tests is about 0.5°C at its peak in 1963, the year that the USA and the USSR agreed to discontinue atmospheric testing. (If the solar model is too sensitive because the warming of the land thermometer records is exaggerated, then less cooling is required.)
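The post does not spell out the form of the bomb-test model beyond it being based on megatons exploded in the atmosphere, so the following Python sketch is only one plausible reading: the stratospheric dust load is taken as the detonated yield accumulating and decaying with a residence time of a few years, and the cooling is taken as proportional to that load, scaled so its peak matches the roughly 0.5°C required in 1963. All numbers and names here are illustrative assumptions, not the UN yield data or the spreadsheet’s actual parameters.

```python
import numpy as np

def bomb_cooling(yields_mt, residence_years=3.0, peak_cooling_c=0.5):
    """Illustrative bomb-test cooling: yearly megatons accumulate into a stratospheric
    dust load that decays exponentially (assumed ~3-year residence time); cooling is
    proportional to the load, scaled so its peak is ~0.5 C as stated in the post."""
    decay = np.exp(-1.0 / residence_years)
    load = np.zeros(len(yields_mt))
    for i, y in enumerate(yields_mt):
        load[i] = (load[i - 1] * decay if i > 0 else 0.0) + y
    return peak_cooling_c * load / load.max()

# Crude illustrative yield series: heavy testing up to the 1963 treaty, tapering to 1980.
years = np.arange(1945, 1981)
yields_mt = np.where(years <= 1962, 20.0, 2.0)     # placeholder values, not UN data
cooling = bomb_cooling(yields_mt)
print(dict(zip(years.tolist(), np.round(cooling, 2).tolist())))
```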

While this is only an answer found by numerically piecing together the test yield data with the output of the solar model and the observed temperatures, it fits. Maybe the nuclear winter hypothesis is partly correct. We suspect, though, that the model overestimates the effect.

Alternative causes for a cooling influence during the 1950s to 1990s could be pollutant aerosols and/or whatever caused global dimming, or even the Pacific Decadal Oscillation (PDO). With no data that quantifies their effects, the total climate model only had the nuclear bomb yield data to work with, but it is remarkable that the piece that fits the puzzle quite well is the atmospheric nuclear bomb test data.

4 Mix of CO2 and solar

There are now two solutions to the climate question:

If we assume global warming is mainly due to CO2 then we get the CO2 theory, and it fits the measured temperatures from 1800 (though not before).

If we assume that global warming is mainly associated with changes in TSI then we get the notch-delay solar model, which also fits the measured temperatures from 1800.

Obviously the two assumptions cannot both be true, but it may be that the true solution is a mix of both models, such as 40% of one model and 60% of the other. If both solutions fit the measured temperatures on their own, then any linear mix will also fit the data. Here is an example:

Figure 4: Total climate model when driven by a mix of solar radiation and carbon dioxide. The temperature changes computed by the solar model were multiplied by the solar factor of 70%, then the CO2 and other models were fitted. This mix was arbitrarily selected for illustration; do not read any significance into it.

This illustrates that the CO2 and solar models play together nicely. Assuming the climate system is linear for the small perturbations of the last few hundred years, the two solutions can operate almost independently and their temperature changes add (that is, they superpose).
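In symbols (simply restating the linearity assumption above, not a new result): if T_solar(t) and T_CO2(t) each fit the observed temperatures on their own, then any weighted sum a·T_solar(t) + (1 − a)·T_CO2(t) with 0 ≤ a ≤ 1 also fits. Figure 4 was produced slightly differently, by scaling the solar output by 70% and then refitting the CO2 and other models to the remainder, but the principle is the same.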

If the optimizer is given both the CO2 and solar models to work with, it finds a solution that is mainly the CO2 solution and only a little of the solar solution. However this is only because the jiggles in the solar solution are wrong as often as not (Figure 2), which the optimizer finds worse than simply ignoring the jiggles and getting them right on average (Figure 1). So there doesn’t appear to be any significance in this, and we will have to find other means of determining the true balance between the CO2 and solar solutions.

5 Significance of the solar model

We have developed a solar model that accounts for the recent global warming, if that warming was almost entirely associated with solar radiation and had no dependence on carbon dioxide.

This is a viable solution to global warming, because:

It’s quantifiable, with a model that approximately hindcasts the observed temperatures. It is not just a concept with handwaving, or a rough one-off computation.

It’s got physical interpretations for all the parts. This is a physical model, not just curve fitting or an unexplained correlation.

In short, we have demonstrated that the global warming of the last two centuries could have been mainly associated with TSI rather than CO2. This overcomes one of the bedrock beliefs of anthropogenic global warming, namely that the recent global warming could not plausibly be due to anything other than CO2.

The most important element of the solar model is the delay, which is most likely 11 years (but definitely between 10 and 20 years). The delay was found here as a necessary consequence of the observed notch, but it has been independently corroborated to varying degrees several times over the last decade, apparently without its significance being noticed.

A major objection to substantial solar influence is the finding of Lockwood & Froehlich in 2007, who showed that four solar indicators including TSI peaked in about 1986 then declined slightly. However temperature continued rising for several years after 1986. This has been widely interpreted to mean the recent warming cannot have been due to the Sun. However, the delay can explain this: 1986 + 11 = 1997, about when global warming ended. Thus the delay overcomes another of the bedrock beliefs of anthropogenic global warming.

Conversely, without the delay, the objection of Lockwood and Froehlich appears solid and it is hard to see how a substantial solar influence is possible.

The weakest points of the notch-delay solar theory are:

The assumption of sufficient linearity of the climate system,

The need for the nuclear winter hypothesis to counteract the early part of the TSI plateau from 1950 to 2000, especially the 1960s,

The inability to precisely identify force X (see Post IV).

Some may challenge the discovery of the notch, but the notch implies a delay and the delay receives support from several independent findings.

What we have not shown so far in these posts is that the notch-delay solar model is true, or to what extent it is true. There is nothing in the posts so far to support the assumption that the recent global warming was almost entirely or even partly associated with solar radiation. On the material presented so far, the CO2 and solar solutions are both viable and no reasons have been given to suppose that either one is more influential.

The notch-delay theory provides a second, alternative solution to the climate problem, with a physical model and a plausible interpretation. No longer is climate a “one horse race”, where you are limited to either supporting the CO2 theory or focusing on its deficiencies. We are now in a “two horse race” (though one horse is very new to the world and not fully introduced or fleshed out yet).

Regular readers of this blog are well aware that the CO2 solution has a lot of problems. Soon we will be turning to the second part of this series, where we will look at reasons for believing that the solar model is dominant and the CO2 solution is only a small part of the overall solution.

In the next post on this topic, we will use the notch-delay solar model for forecasting. This is where it gets interesting.

* Our climate model is in a spreadsheet that we will be releasing shortly. We chose to do all the work for this project, right from the beginning, in a single Microsoft Excel spreadsheet for PC. It’s not the fanciest or the fastest, but an Excel spreadsheet is the most ubiquitous and one of the friendlier programming environments. It runs on most computers (any PC with Excel 2007 or later, and perhaps on Macs with Excel 2011 or later), can hold all the data, makes nice graphs, and it is all in a single file. The models use VBA code, a form of the BASIC programming language that is part of Microsoft Office. The spreadsheet is professionally presented, and you press buttons on the sheets to make models run and so on. You can inspect and run or step through the code; it will all be totally open. Thank you for your patience, but giving away the spreadsheet early would preempt the blog posts and disrupt a focused discussion.

400 comments to BIG NEWS Part VII — Hindcasting with the Solar Model

Wow, I mean wow. When I look at this, just one thought strikes me. The solar model presented here is based on the measured dynamic response of temperature to TSI, so it ought to be correct, but year to year it overestimates temperature in recent times. To me that is signalling that solar influences are more than enough to explain recent warming, and that if anything mankind has in the last 50 years COOLED the atmosphere. Does anyone else get that implication? What am I missing?

The solar model parameters were found by fitting to measured temperatures, mainly in the period of land thermometer data from 1850 to 1978. The solar model trained on this data. If those temperature records exaggerate the temperature rise then the solar model will associate the TSI changes of that period with exaggerated temperature changes — so it will be too sensitive.

Now we are in the satellite era of measuring surface temperature, so presumably we are getting temperatures right. But the solar model will hindcast exaggerated temperature changes because it is too sensitive.

David, if we were to presume the oversensitivity comes from adjustment bias, can you get from the model how much bias there is in the temperature record vs observation, e.g. divide modelled by observed, or maybe (modelled − observed)/observed? What happens if you then compensate the model for that?

Is it worth doing the transfer function of modelled vs observed to look at the model fit? At first thought this should probably be flat, but it doesn’t seem like it is to me. Hard to know in the time domain though.

Thirdly, if I understand it right your model treats force X as a constant times TSI shifted by 11 years. It would be interesting to use other cyclic phenomena directly for force X (1/UV or −UV, magnetic flux, 10 cm radio flux, solar wind density) and see whether those fit the wiggles better.

If
1. the notch-delay model or a successor improved model could predict accurately (which might take decades to establish),
2. the model took into account CO2 accurately
3. TSI records for 1850 – 1978 were known with a great deal of confidence
then we could hindcast the temperatures for 1850 – 1978 with a great deal of confidence. But it is going to be a long time, if ever, before that is possible.

The aim here and now is to use mainstream datasets at face value to see if natural influences can explain the recent global warming.

Yes, OK, that’s logical. Still, it’s the first thing that struck me about it. I didn’t expect the observed temperature to be largely BELOW the model output, I think, because I expected CO2 would have some effect. That koolaid is pretty powerful; even a whiff is fatal to perceptions.

Since the model was trained on CO2-contaminated temperature over a long period, I expected it to average out and predict a temperature rise at a rate slightly lower than observations post 1950, without CO2 factored in. Whatever the cause, it’s a surprise anyway.

“Now we are in the satellite era of measuring surface temperature, so presumably we are getting temperatures right. But the solar model will hindcast exaggerated temperature changes because it is too sensitive.”

Satellite sensors do not measure surface temperature.

Satellite sensors measure brightness. The brightness data is then used in conjunction with radiative transfer models to infer the temperature.

A) Surface temperature (or LST) is the assumed temperature of the actual land. This estimate is highly reliant on other estimations and on the classification of land types. The estimates are typically good to within ±1°C.
B) Air temperatures at various pressure levels can also be inferred. UAH and RSS, for example, do not estimate the temperature at the surface; they estimate it miles above the surface. The only satellite product that attempts to estimate 2 m air temperatures is AIRS, and that product is a result of interpolation.

A GCM can of course give you all of these temperatures: LST, SST, air temps at 2 meters, 1000 hPa, 800, 700, 600, etc.

When you can predict, say, the temperature in the stratosphere, then you will have a way to actually test your model with out-of-sample data. That is, we always look to test a model with data that was not used in its construction: not only temporally out of sample, but spatially out of sample.

Like so: you trained on 2 m air temps, and a GCM trained on 2 m air temps. The critical test is: how well do you do above 100 mb?

“Now we are in the satellite era of measuring surface temperature, so presumably we are getting temperatures right. But the solar model will hindcast exaggerated temperature changes because it is too sensitive.”

“Satellite sensors do not measure surface temperature.”

Indeed, Mosher. All of the instruments measure radiative flux in one direction or the other in a narrow frequency band. The measured flux is pristine, and indicates only the thermal radiative flux (power transfer) between one temperature surface and a lower temperature surface. That narrow-band, one-way flux gives no indication of the temperature of either surface.

So we now have two Climate Models to consider thanks to the fine work by Dr David Evans.

1. the new “Solar (delay) Model” which is quantifiable and

2. the “CO2 Model” which is based on computer modelling (mainly based on GIGO criteria).

I think the so-called 97% of scientists who have backed Model 2 are backing a dead-set loser.

I know very few geos who have faith in Model 2. I am sure once they have examined Model 1 they will be won over. It makes so much sense in explaining Earth’s climate today, during the current Holocene interglacial (11.7 ka – present) and in the pre-Holocene (pre-11.7 ka) geological record. The anthropogenic effect on Earth’s climate via CO2 emissions is negligible, although this is not the case in localised land areas where removal of large amounts of forest has definitely had a significant impact on local climate, e.g. tropical rainforest regions in some lower latitude countries.

The SIDC sunspot numbers are about 20% too high after 1945 (the Waldmeier discontinuity), so that might be why you get a slightly high prediction from around that time. You may want to reduce the SSN to compensate for this step change.

But as David specifically pointed out, they can both be partly right. Unfortunately, he also pointed out that all proportionate combinations are ‘right’ absent some exogenous ‘decider’. I’d make a research proposal on that. A variety of ‘exogenous’ methods, from energy balance to Bayesian inference, suggest effective CO2 sensitivity is on the order of 1.5 to 1.9. Pick 1.8, Guy Callendar’s 1938 estimate, for the sake of example. Then what proportions are indicated? An issue remains that both models were trained on temperature data incorporating natural variability. Still, possible progress.

From what I have read we still don’t know the mechanism behind the “X factor” (or you have not divulged what you think it is as yet). Surely if we could pinpoint the mechanism that controls the temps in your solar model then we should be able to eliminate, or at least reduce, the effects of other alternatives?

For example, if we can deduce that a lowering of UV, coupled with increased GCRs, in conjunction with a reduction in magnetic fields and who knows what else, all work together to produce a rise or fall of X.X degrees C, then we can determine the effects of other alternatives like CO2 and maybe aerosols etc.

I can’t wait to see the solar-based forecasted global temperatures for the next 5, 10, and 20 years. Hopefully, they diverge significantly (go down in temperature) from the CO2 models in the near future, so that if actuals do indeed follow the solar-based forecasts and similarly go downward, then it should be largely “game, set, and match”.

What I do worry about though, is the future manipulation of the global temperature data. The AGW crowd are the owners of this data, and I do worry about their motivations to preserve the CO2-based dogma. How will we know if the global temperature data is accurate in future years? Who is watching the watchers? Steven Goddard has been very vocal about this matter, and based on the Climategate emails, I wouldn’t put anything past these folks.

Sorry, I don’t see how you can hindcast anything to the mangled, manufactured temperature records, which do not even record the warmth of the ’30s, especially in the USA. Can I suggest that you try sensitizing the model and then hindcasting to the raw data as used by Steve Goddard?

It would be possible to construct one, even on your blog, by getting a volunteer in each country to obtain raw data and building a database out of it. I would be OK with building the database for it; Aussie raw data is easy enough. With a public repository, lots of citizen science could be done. Groupsourcing would make pretty short work of it, I’d imagine. Some datasets we’d have to pay for, so maybe we need to get the Kochs (hi David, hi Charles) to cough up that little bit more. Also, we could do science on the hourly data rather than the min-max junk they currently use; (min + max)/2 is not the mean daily temperature.

I’d really like to see Davids model run on raw data for example.

PS, for the conspiracy theorists out there, I really don’t know David and Charles Koch, that bit was pure /sarc

See also — The Urban Heat Island effect: Could Africa be more affected than the US?

He studied it continent by continent and found inland towns were often roughly as hot in the 1940s as they are now, but coastal towns affected by ocean air flow had warmed in line with SST.

This also begs the question of why it is so difficult to get a single proxy that goes from say 1500 to now. The proxies seem to stop 20 – 30 years ago. Where are the updates? (You’d think we had run out of trees/coral/clam shells?) I would like to use just one continuous data set.

Jo, you can use “just the US data” as a “representative sample” in a controlled test, in order to validate the model. The UN does that sort of cherry picking all of the time. What is good for the goose …

It might also be a way to parameterize any “inconsistencies” introduced by “adjustments”.

I admit I’m not sure of the right answer to that. But in Dr. Evans’ position I don’t think I would. If this is to be fairly tested then its performance shouldn’t be suspect because of suspicion of cherry picked data.

“Goddard is working with just the US data though. We need the raw global dataset. But they lost it.”
Steve is working with a subset. If you have a reasonable solar model, should it not work for any subset? There is no sense in training on homogenized pablum and then applying it to a specific subset, as there is no global anything! But what about training for a single latitude, then evaluating each station at that latitude? Is that for the rest of us to do?

Am I right in thinking that a “rival” solar model is the one from Svensmark et al?
Not sure about this, but I think they would expect the 11-year oscillations to be visible, via solar magnetic effects on cloud formation via cosmic rays.

What is most impressive to me about David’s solar model is the major changes in trend around 1990, 1950 and earlier minima, with roughly correct changes in temperature. Those are ALMOST compelling correlations, which maybe don’t need precise matching in time, given the complexity of the climate system. It may be better to apply much more smoothing in display graphs, to avoid the major changes being masked by the noise.

I have to disagree with “The delay was found here as a necessary consequence of the observed notch”. I don’t think a notch has been observed, and even if it had been, it does not force a delay (apart from the one you get anyway with all filters).

Svensmark, as I understand it, was looking more at the physical processes involved with climate changes, such as the effects that solar radiation has on cloud production. He raised the ire of several leading climate theorists by empirically demonstrating cloud formation. I am not up with the details, but it caused a fair amount of excitement (of the wrong sort) at the time.

“The need for the nuclear winter hypothesis to counteract the early part of the TSI plateau from 1950 to 2000, especially the 1960s”

No need for the nuclear winter hypothesis.

A negative PDO combined with a quieter cycle 20 caused a slight cooling in the ’50s, ’60s and early ’70s. Then came the positive PDO, together with stronger cycles 21 to 23, causing the late-’70s climate shift, which did not start to fizzle out until around 2000.

Perhaps, but what causes PDO? If it is an entirely internal climate mechanism, it would be like using ENSO (which predicts temperature 6 months later really well, but is just another internal climate variable) — no real explanatory power. But PDO might be influenced by lunar forces or something, in which case it would be useful as a semi-exogenous driver. In any case I didn’t have PDO data so I didn’t include it in the total climate model, and I wanted to stick to a physical model rather than relying on unexplained correlations.

ENSO is probably an internal oscillation due to differential heating either side of the equator. The clouds of the ITCZ are north of the equator most of the time.

PDO could also be internal but the relationship between El Nino and La Nina shifts every 30 years or so.

The role of cloudiness changes would be to skew the balance between El Nino and La Nina within the PDO cycle.

Fewer clouds, and El Ninos get stronger relative to La Ninas, and vice versa, regardless of whether the PDO is in a positive or negative phase.

That’s how you get upward stepping of global temperature from one positive PDO phase to the next (LIA to date). You would get downward stepping from one negative phase to the next during a cooling period such as MWP to LIA.

So, we can link net warming or net cooling from PDO to global albedo changes independently of internal system variability. If it was all internal system variability there would be no sequence of consecutive step changes up or down over centuries correlating with solar activity. It would be far more random from one phase to the next.

No need to put it in your model at this stage. Just bear in mind that on my description your model is capable of accommodating the thermal effect of long-term PDO variability just from albedo changes, so your predictions should be much the same as mine now that we have agreed on the correct sign of the cloudiness response from force X.

What I think you need to consider is how the ocean handles the received energy from the Sun. The ocean of course doesn’t create its own energy (even though it does have a continuous (albeit small) input of geothermal heat from below), but it does possess the ability to effectuate a net storage or a net release of the solar input over years, decades and even centuries. The coupled ocean/atmosphere system is readily able to work towards holding solar energy back or expel it more efficiently to space (pressure systems >> wind strength >> latent heat flux >> convective efficiency). It is also perfectly capable of controlling how much solar energy is absorbed by the earth system in the first place (cloud cover, especially over key areas of the tropical oceans).

The significant step down in the SOI in 1976/77, for instance, is basically what started the modern ‘global warming’ era.

Yes, we can discuss what caused that major and conspicuous shift in SOI. It might have had an external (solar/lunar or other) source, or an internal one. We don’t know. But, it happened. That much we know.

Your ‘physical model’ seems to assume (like the CO2 models) that the ocean is a static (unchanging), not a dynamic solar reservoir. It is not.

The global climate is clearly driven by a combination of solar and internal (oceanic) variations, where the internal processes rule over decades and multidecades, the solar processes only beyond the internal multidecadal cycles (from one to the next).

The thermal lag and heat storage properties are included in the model via the low pass filter (only).

The time constant of the low pass filter was found to be about 5 years, which is in line with what others have found. For example, Stephen Schwartz at Brookhaven said in “Determination of Earth’s transient and equilibrium climate sensitivities from observations over the twentieth century: Strong dependence on assumed forcing” in 2012, “The time constant characterizing the response of the upper ocean compartment of the climate system to perturbations is estimated as about 5 years, in broad agreement with other recent estimates, and much shorter than the time constant for thermal equilibration of the deep ocean, about 500 years.”

That upper ocean compartment is the low pass filter. While there is obviously also longer term storage in the oceans, it doesn’t show up in the datasets we have of a few hundred years of surface temperatures.
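For readers wanting to see what such a low pass filter amounts to, here is a minimal sketch in Python of a single-pole low pass filter with a 5-year time constant. This illustrates the general technique only; whether the spreadsheet implements its filter exactly this way is not stated in the posts.

```python
import numpy as np

def low_pass(x, tau_years=5.0, dt_years=1.0 / 12.0):
    """Single-pole (exponential) low-pass filter: each month the output relaxes toward
    the input with time constant tau, a simple stand-in for the upper-ocean lag."""
    alpha = dt_years / (tau_years + dt_years)
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

# Example: a step input takes roughly tau (5 years) to get ~63% of the way to its new level.
step = np.concatenate([np.zeros(60), np.ones(240)])
print(round(low_pass(step)[60 + 59], 2))   # value about 5 years after the step
```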

If we were to consider the oceans dynamic, we would not know what is an exogenous influence on the climate system. The modeling approach we are using treats stuff inside the black box as an internal mechanism. The exogenous forcings would be geothermal inputs and perhaps lunar inputs, but the internal state of the ocean is more like an internal mechanism.

“While there is obviously also longer term storage in the oceans, it doesn’t show up in the datasets we have of a few hundred years of surface temperatures.”

I beg to differ! It’s precisely what the multidecadal ups and downs in global temps are all about, the ocean cycles. The reason global temps went up from 1976 to 2001 is because of what happened in the Pacific Ocean in the wake of the shift in 1976/77 plus two massive West Pacific shifts in 1988 and 1998. The mean state of the Pacific Ocean. The PDV. Directly influencing the mean state of the North Atlantic (AMO). There is no need to explain anything else. Only three global shifts relative to NINO3.4 since 1970. Otherwise flat. The entire modern ‘global warming’ is contained within those three abrupt and significant hikes in mean global temperature level alone. And they are all readily explained by ocean processes. It’s a big topic. I have just started writing about it: http://www.okulaer.wordpress.com

“If we were to consider the oceans dynamic, we would not know what is an exogenous influence on the climate system.”

I see that, but then you end up with the ‘need’ to make up strange ‘nuclear’ reasons for multidecadal downticks in global temperature. There is no such need.

“The modeling approach we are using treats stuff inside the black box as an internal mechanism. The exogenous forcings would be geothermal inputs and perhaps lunar inputs, but the internal state of the ocean is more like an internal mechanism.”

The point I am making is that the (oceanic) ‘internal mechanisms’ of the earth system are not simply about rearranging (distributing) received solar energy, having no bearing whatsoever on the total amount over years, decades and multiple decades. Treating it like this assumes a static reservoir (receptacle) function.

And then you will never get it right. Then you have missed the elephant in the room.

No, the internal (oceanic/atmospheric) processes will increase or reduce the total energy content of the system over years, decades and multiple decades. In fact, it’s the internal processes of the earth system doing this, not the Sun through its output (TSI), which is remarkably stable. The Sun only works to influence (yes, control) the progression of the internal process regimes, most likely by affecting the pressure distribution/arrangement (and thus winds and clouds) across the global surface. A very INDIRECT influence, that is. Again, this might be where your ‘Force X’ is hiding. It’s the ocean/atmosphere system that actually executes the change in our climate (handling the solar energy in different ways) over human generations. It’s the ocean processes we ‘see’ in the global data.

The Sun is the ultimate driver, but we can only see its real influence across ocean cycles, from one to the next. Like from the cycle ~1880-45 to the cycle ~1945-2010.

David, I appreciate what you’re doing here, but we know all too well that it’s easy to become infatuated with your own model, ending up reading way too much into it and its output.

You say, “I beg to differ! It’s precisely what the multidecadal ups and downs in global temps are all about, the ocean cycles.”

That is for your model! David is examining how the Sun drives temperature, including how it drives the ocean. I find the consideration of the magnetic field reversal significant, and it says much for a detailed engineering systems analysis. Otherwise we just have another gobbed-on GCM. It is likely part solar, with the epicycles about the solar system barycenter, plus conservation of momentum. We could expound on an active adaptive thermostat (Gaia).
Given the demonstrated near-nothing of what earthlings understand, a viable alternative to the CO2 nonsense may get some lukewarmers to admit “I do not know”.

Kristian, I think your argument is a fair point of view. One of those things that may get sorted out in the fullness of time.

I don’t doubt that a lot of temperature change can be explained in terms of oceans. A parallel is the tight correlation of ENSO with temperature: ENSO is a really good predictor of the temperature in 6 months time. But what causes ENSO? Well one can predict ENSO pretty well with TSI (and implicitly force X). And what causes TSI? And so on.

Any explanation involving oceanic influences on temperature is always going to beg the question: “well what influences the oceans?”. Eventually the full climate explanation will be in terms of exogenous influences: TSI, force X, lunar, human gases, geothermal, whatever,….

“1) Active sun in cycles 18 and 19, then a less active sun in cycle 20, plus a negative PDO = cancelling out of expected warming, followed by cooling when the sun gets less active in cycle 20 (1940 to 1975).

Times of depressed solar activity correspond with historic times of global cold.
Times of increased solar activity have corresponded with global warming.
The current quiet-to-average cycles mean a cooling pattern forecast over the next few decades. So why the heat predictions?

I’m just a Joe-Six-pack, but it seems that the downgrading of the sun’s effect on our climate by some ‘trusted wizards’ shows a selective bias that sticks out like the proverbial.

The curious thing with human CO2 emissions is the fact that the shape of the atmospheric rise has not matched the rate of increase in human emissions. More than 25% of all CO2 emissions over human history have been released only since 1998. Human emissions have been fairly “hockey stick” shaped, while the atmospheric rise has been roughly linear. There are other things at work too, such as reactivation of bogs in the boreal continental regions as permafrost receded. In addition, the more CO2 you put into the atmosphere, the faster nature scrubs it out through both biology and geology.

There are a number of possible reasons why a warmer world has more CO2 in the atmosphere but it is beyond the scope of this thread. I was just making the point that you don’t necessarily have to imply a warming trend from more CO2 on the basis of the above charts. It could still be all solar.

However, IPCC have NOT proven that the increased CO2 has the correct isotopic signature to infer a wholly anthropogenic (fossil) content so it must have (largely) come from somewhere in the (natural) global system. In AR4 they said:

The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere (Keeling, 1961, 1998). These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere (see FAQ 7.1). Keeling’s measurements on Mauna Loa in Hawaii provide a true measure of the global carbon cycle, an effectively continuous record of the burning of fossil fuel. They also maintain an accuracy and precision that allow scientists to separate fossil fuel emissions from those due to the natural annual cycle of the biosphere, demonstrating a long-term change in the seasonal exchange of CO2 between the atmosphere, biosphere and ocean. Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope (Francey and Farquhar, 1982) and molecular oxygen (O2) (Keeling and Shertz, 1992; Bender et al., 1996) uniquely identified this rise in CO2 with fossil fuel burning (Sections 2.3, 7.1 and 7.3). AR4, ¶1.3.1, p. 100.

IPCC provides all the parameter values but the one from Battle et al. Those values, with the equations derived above, establish the ACO2 fingerprint on the bulge of CO2 measured at MLO, as if it were a well-mixed, global parameter as IPCC assumes.

IPCC does not provide δ13Cf, the parameter found in Battle et al., suggesting IPCC may never have made this simple mass balance calculation. A common value for that parameter in the literature is around −25‰. The figure from Battle et al., being published with a tolerance, earns additional respect. As will be shown, the number is not critical. The result is a mismatch with IPCC’s data at year 2003 by a difference of 1.3‰, more than twice the range of measurements, which cover two decades.

The mass balance will agree with the measurements if the atmosphere retains much less than 50% of the estimated emissions. The necessary retention is 13.1%, a factor again of 3.8 less than supplied by IPCC.

These results apply to IPCC’s model by which it adds anthropogenic processes to natural processes assumed to be in balance.

Instead, the mass flow model must include the temperature-dependent flux of CO2 to and from the ocean to modulate the natural exchanges of heat and gases. The CO2 flux between the atmosphere and the ocean is between 90 and 100 GtC of CO2 per year. This circulation removes lightened atmospheric CO2, replacing it with heavier CO2 along many paths, some accumulated several decades to over 1000 years in the past.
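The “simple mass balance calculation” referred to above is a two-source mixing calculation. The sketch below shows the general form only, with round illustrative numbers for the reservoir sizes and isotope signatures (they are not the values from Battle et al. or the IPCC, which are not all given in the comment).

```python
# Illustrative two-source isotope mass balance (round placeholder numbers, not the
# comment's actual inputs): mixing retained fossil carbon of signature d13C_f into an
# atmospheric reservoir of mass M_a and signature d13C_a shifts the bulk signature to
# the mass-weighted average.
M_a, d13C_a = 600.0, -6.4     # atmospheric carbon (GtC) and its d13C (per mil), illustrative
M_f, d13C_f = 150.0, -26.0    # retained fossil carbon (GtC) and its d13C, illustrative
d13C_mixed = (M_a * d13C_a + M_f * d13C_f) / (M_a + M_f)
print(round(d13C_mixed, 2))   # compare this predicted shift with the measured atmospheric d13C
```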

Not only are you attacking the belief system called CAGW, you are attempting to put the supercomputer industry out of business. Congratulations! I am eagerly waiting for the model I can run on my system.

Although I have similar sentiments, it must be made clear that Evans and Nova are NOT attacking anybody.
They are presenting a new model which they believe works well enough to share it with everybody.

Although I had difficulties getting my head around this new model concept at the beginning, as each post was presented I understood a little better.
Having read the current post (with the exceptionally well-written foreword) I can say clearly that I have NO IDEA WHATSOEVER IF THIS MODEL WILL BE PROVED RIGHT OR NOT.
I do however wish David and Joanne all the best. Thank you for sharing this; I look forward to the remaining posts.

Olaf, with the ever progressive decay in morals, the sun engulfing the Earth might be just enough to sanitize this solar system and save the universe from some very awful STDs and super bugs from being spread around.

There is another thing operating during the “nuclear winter” period and that is industrial pollution that began to get phased out due to pollution regulations. For example, when I was a child in the eastern US in the early 1960s the cities produced huge amounts of smoke and soot. You could literally smell Pittsburgh, PA 50 miles before you got to it. The city was hazy from the burning of coal in the steel mills. The eyes would sting from the sulfur in the air. There was a significant brightening that went on in the Northern Hemisphere due to pollution regulations. The US “Clean Air Act” was passed in 1970.

Coinciding with that curve for nuclear effects is a post-war re-industrialization of much of Europe and a tailing off that coincides with pollution regulations. I would be curious to know if there is a significant difference in that black line in the Northern vs Southern Hemispheres. And with the recent pause we are seeing a great industrialization in China, India, and Brazil, among other countries.

Yes, we wondered that but had no data to stick into the total climate model. Is the air pollution of Pittsburgh in 1960 different from Beijing in 2010? Perhaps it was, and could the differences be significant? We have no idea. Anyone?

As a starting point, one might dig into some of the references here. Not only do we have industrial pollution but other things such as an increase in contrails. Now that more over the pole flights have been authorized, we might also see a change there from increased contrails in polar regions, particularly in winter.

I would expect more contrails in tropical regions to be a net cooling influence, in the polar regions to be a net warming influence, and it can go either way in the temperate regions probably depending on season.

One day in 1993, a southerly blew all of Melbourne’s smog straight into Puckapunyal (Seymour). It made the hills a few k’s away difficult to see, and our eyes were stinging a bit as a result. I’m now cutting the useless catalytic converter off my car and selling it.

In Part V (escaping heat) you state, “That means any model needs to understand the relationship between changes in the temperature of the radiating layers and the temperature on the ground (and on the seven seas).” If sea level is increasing, then the pipes that allow energy to escape will have to increase in size to allow more energy to escape. So the question of how Part VII takes this into account will require me to go back to the earlier parts of this discussion, as well as reading Tisdale’s articles on El Niño/La Niña.

In the 70s a trip into Chicago meant entering the green zone. From outside going in the air looked green. This has been much reduced in the subsequent 40 years. I think the decline of the steel mills in the area has a lot to do with it.

When I was about 10 (’54) my mother and I went through Pittsburgh by train. Seeing all the mills next to the tracks, in operation, was a wonder. I’m sure the emissions, if seen by day, would have been significant.

Very impressive. I was very skeptical at first that a simple model like this could produce such a result.

Whether or not the approach stands up to critics, the results are plausible. An ECS of 1.14 deg C and 40% attribution to GHG is also quite plausible.

The result is consistent with the first version of a paper by Stephen Schwartz of Brookhaven Laboratory, Heat capacity, time constant, and sensitivity of Earth’s climate system. Schwartz S. E. J. Geophys. Res., 112, D24S05 (2007). doi:10.1029/2007JD008746

As I understand the paper Dr. Schwartz estimated climate sensitivity to doubling of CO2 as 1.1 ± 0.5 K. He based his estimate on ocean heat content. His estimate was subject to several critical comments by other scientists and he published a revised figure for ECS, modified upwards.

The value of ECS that gives you the best fit corresponds to the initial estimate by Dr Schwartz.

Schwartz uses the same two-compartment CO2 model as we do. Schwartz also arrived at a 5 year time constant for what is I think is his low pass filter, namely the upper ocean compartment of the climate system (which, as he notes, is “in broad agreement with other recent estimates, and much shorter than the time constant for thermal equilibration of the deep ocean, about 500 years”).

However, as I said under the figure with the CO2-solar mix, “This mix was arbitrarily selected for illustration; do not read any significance into it.” So while an ECS of 1.14C is plausible on what we presented so far and models well from 1800 to now, it does not mean anything more in these posts.

I think the way things respond are different in cooling and warming modes. The deep ocean might be faster to cool from atmospheric changes than to warm as cooling would work with convection while warming might work against it.

Thinking about the Gulf Stream, what would happen if it warmed a bit? It would seem that it would simply travel a bit farther north before losing enough heat to sink into the surrounding water. This would act to transport more heat farther north toward the pole but the water arriving in the deep ocean would be about the same temperature, at least initially. If the condition persisted long enough, the surrounding water would also warm and the downwelling point would move back south a little and then the water would be a bit warmer when it made its way to the deep. If the Gulf Stream were to cool, it would sink sooner, at first. This would act to transport less heat toward the pole until the surrounding water also cooled. So much of what happens in the system seems to want to be self-regulating. Add more heat to the system, it wants to shed more. Remove heat from the system, it wants to conserve it. There would also be changes in salinity that are important, too, that I ignored above. But the bottom line is that it is difficult to heat the bottom of a bucket by heating the surface. It IS easy to cool the bottom by chilling the surface. So I would intuitively expect the deepest ocean to act as a sort of thermal diode. It is easy to cool but harder to warm in response to surface temperature.

You’ve got it completely arse backwards. The ocean is heated primarily by the earth itself which is an incredibly hot rotating ball of life and magic.

(Incredibly hot compared to the vacuum of space).

If the earth were not so incredibly hot (with a “core” that is arguably as hot as the sun), and with a thermal gradient of 25 degrees per 100 km, then all the oceans would be frozen solid (since any heating would just be sucked up by the core, which would be close to the temperature of deep space).

It is so true that the fish don’t notice the water: we are completely oblivious to this subterranean heat source and the possibility that its thermal output (similar to the sun’s) is variable!

That’s just silly, Sonny. Although it’s true to say that the oceans are warmed a little by undersea volcanic activity, the vast majority of the deep oceans (90% of ocean water) range between 0 and 3°C, with pressure being the only thing stopping them freezing at these depths and temperatures.

Try freezing an unopened, pressurized bottle of soda water. The higher the pressure, the colder it needs to get.

If we switch off the Sun permanently (as we invariably do every night), within a month the surface of all the oceans would be frozen and no life on land would exist. Eventually the entire oceans would freeze, as the circulation we currently have inverts, moving warmer water from the bottom to the top, which cools, sinks and cools the deeper water further.

The reason we have ocean currents at all is because warmer water near the equator circulates to the colder waters at the poles, where that cooled surface water sinks to the bottom, taking up space and forcing cold water up from the bottom at the equatorial end (roughly).

So there you have it – the reason we have ice at the poles is entirely the lack of sunlight, not a lack of heat from the mantle.

Besides, I think your thermal gradient is wrong – it calculates the core as some 1600C. I believe the core is at about 6000C. Dunno where you got that number; someone correct me if I’m wrong.

So simple anyone can run it on a PC??? OMG, the industry Gods will bring hellfire down upon you!!
Find a way to get it to all those kids in school for class projects, worldwide! A potential complete cutoff of future alarmists…:)

Without taking away from the guts of your work so far, I can’t help but notice what I will call an “atomic bomb fudge factor”.

This is a cooling effect to which you and Jo have lent your support, seemingly because without it your model is problematically inaccurate over that period of time.

I think it is safe to say that had your model accurately hindcasted the temperature you would not have changed your initial scepticism of this cooling effect.

In my book this is an illustration of a type of confirmation bias, i.e. you are both invested in this model being correct, so data which can be incorporated to support your bias is adopted while data which does not is scrutinised.

I would caution you to be completely honest about any potential for confirmation bias, this will set you and your model apart from the arrogance and ignorance displayed by mainstream climate scientists who seem completely unaware of their own limitations and human frailties.

* The assumption of sufficient linearity of the climate system.
* The need for the nuclear winter hypothesis to counteract the early part of the TSI plateau from 1950 to 2000, especially the 1960s.
* The inability to precisely identify force X (see Post IV).

How refreshing to see! If only the government funded climate scientists, “teenage delinquents” as Donna Laframboise so aptly describes them, could be upfront and honest about the problems in the CO2 theory, instead of using “nature tricks” to “hide the decline”.

> There is nothing in the posts so far to support the assumption that the recent global warming was almost entirely or even partly associated with solar radiation. On the material presented so far, the CO2 and solar solutions are both viable and no reasons have been given to suppose that either one is more influential.

I agree that you haven’t shown any reason to prefer the solar model. However, the unfeasibly large influence from the nuke tests is a reason to not like the solar model.

This is all black-box non-physical-model curve fitting. Where are the regulars on cue with the bit about fitting elephants?

> model fits the measured temperatures reasonably well

That’s pretty vague; you have no quantitative measure of fit; you’ve just got a by-eye “meh, looks OK”. What if it doesn’t look OK to someone else’s eye?

Thanks for taking the bait and acknowledging that you don’t have enough info on it to form a real opinion. Now, wait for him to release the code (gasp!) before you start tossing out criticisms of a model for which you have no solid details.

Make up your mind: either we have “no solid details” available, in which case DE has merely *asserted* that it’s physical; or the details are available, in which case he’s *explained* it. It’s not possible for both your comments to be correct.

But this model is non-physical, because there are no physical constraints on the forcings: the scale factor that relates, say, the nuke wiggle to the obs line is entirely arbitrary, deduced only from wiggle matching. There is no conservation of energy, no physics at all.
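For what it’s worth, the “scale factor deduced only from wiggle matching” being objected to here can be written down in a couple of lines. A generic least-squares sketch (hypothetical shapes and numbers; this is not the actual spreadsheet, just the kind of fit being described):

import numpy as np

# Generic wiggle-matching sketch. Given an assumed forcing shape (here a hypothetical
# bomb-era bump) and an observed temperature series, the scale factor that best matches
# the wiggle is just an ordinary least-squares coefficient; nothing in the fit itself
# constrains it physically.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
bump = np.exp(-0.5 * ((years - 1963) / 8.0) ** 2)     # hypothetical bomb-test shape
obs = 0.005 * (years - 1900) - 0.1 * bump + 0.05 * rng.standard_normal(years.size)

# Fit intercept + trend + scaled bump by least squares.
A = np.column_stack([np.ones_like(years), years - 1900, bump])
coeffs, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("fitted scale factor for the bump:", coeffs[2])   # recovers ~ -0.1 by construction

Whether such a fitted scale factor counts as a fatal flaw or as the normal starting point of a semi-empirical model is exactly what is being argued in this thread.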

No. I’m pointing out that his two statements are incompatible. It distresses me that there is so little skepticism being shown here – many commentators are just saying whatever comes into their head, with no thought for consistency; but you give a free pass to anyone you see as being on “your side”.

Connolley, David and Jo have made the decision to roll this out in installments. Several people in previous posts said the model essentially couldn’t work. Now here is a demonstration of reasonable hind-casting ability – something that current models don’t do very well at all.

Your faux-distress and outrage is phoney and misplaced. Sit back and shut up until you really have something worthwhile to say.

many commentators are just saying whatever comes into their head, with no thought for consistency; but you give a free pass to anyone you see as being on “your side”.

Right! INCLUDING YOU!

Again: Sit back and shut up until you really have something worthwhile to say.

I’d rather see intelligent criticism … than kneejerk personal attacks.

You are quite right, and I totally agree.

But in defense of Mark D, I would like to mention that we have some considerable experience of the semantic tricks, and diversionary tactics, employed by some of the “more aggressive” proponents of CAGW, on this blog. I won’t even mention the melodrama, other than to point out that Mr Connolley is “distressed” by what he reads here; but still he returns.

Of course the intent is to divert attention away from the core subject, and create lots of irrelevant sub-threads to distract readers, and in the process, drive some away.

This behaviour is rooted in a belief that science is done by consensus (effectively a vote). Of course, that is a political concept, and not a scientific one. But if you read Mr Connolley’s comments in isolation, the political derivation becomes obvious.

After a while, he becomes repetitive (in approach) and more strident (in his choice of words). Eventually we all get frustrated, and some of us vent that frustration, from time to time. Regrettable, but part of human nature.

There is a simple test to tell you whether the model is physical or not.

Is it dimensionally correct?

Answer. NO.

There is another simple test. Ask the model to predict outcomes that it wasn’t trained on.

A physical model of the climate (a GCM) may be trained on average global temperature at the surface, but it can predict, for example, Arctic amplification. Why? Because it models the physics of a planet. It may be trained on temps at 2 meters, but it will give you a prediction at 10 miles above the surface. Why? Because its equations are dimensionally correct. They are physics. David’s model is non-physical. It is a curve fit.

Perhaps you might be inspired to respond to my prior questions and prove that Catastrophic Man Made Global Warming is not pseudoscience.

I’m presuming that you are (1) capable of answering the questions in a coherent, honest and rational way, and (2) willing to answer the questions that I have posed that go to the centre of scientific practice.

You asked if Global Warming is indeed Science. As I said before (well, previously you asked if it was Science, with an odd capital – I’m not sure what that was for. Now you want to know if it’s not pseudoscience), yes, it is. However, your ideas of science may well be badly confused. People who have never done any often have naive ideas about it; just like anything else they have no experience of. I also think your knowledge of GW is likely very deficient; so much so that it’s hard to know where to start.

We could start with the construction of the GCMs, perhaps. They are physically-based models of the climate system (unlike DE’s model). They conserve energy and momentum; they track physically described changes. All of this is described in the appropriate papers, which you’ve never read or even attempted to find.

Perhaps a better question would be, is your attitude towards studying GW a science-based one of honest, open inquiry? Or do you start from prejudices so firm that you won’t even look at what you pretend to be interested in?

I don’t think your question is interesting, or indeed honestly meant; it’s a debating trick, not a question.

A far more interesting question would be to discuss whether DE’s model is physical; but you’re all shying away from that. Mostly because you don’t even know what it means; but partly I think because you have a nagging suspicion that you already know the answer.

I don’t think your question is interesting, or indeed honestly meant; it’s a debating trick, not a question.

My question is completely honest. I really do think that climate science – as a discipline – is corrupted at the process layer, that it is operating in a defective framework that is insufficiently rigorous to arrive at the facts wrt climate – i.e. it has become a dead end discipline. The reason that I ask it of you and other adherents to the main stream position on man made global warming is to ascertain if you have considered the methodological framework by which the published content of climate science has been arrived at.

A far more interesting question would be to discuss whether DE’s model is physical; but you’re all shying away from that. Mostly because you don’t even know what it means; but partly I think because you have a nagging suspicion that you already know the answer.

Easy – specify the criteria that would be required for a model to be physical, and we can see if we have the same understanding of what a physical model is, and whether David Evans’ model conforms to those criteria.

Well, the whole of Mr Connolley’s comment is pure ad hominem, with a touch of bombast.

The second paragraph states that the GCMs “… are physically-based models of the climate system (unlike DE’s model)”. This is a very strange statement. There are two definitions of “physically-based” in relation to models.

The first, and the most usual, refers to control systems that model a clearly defined physical network in real time, such as an electrical distribution network or an air traffic control system. The key characteristic of such models is that the state of every node on the system is monitored, and continuously analysed, relative to other nodes on the network. Clearly GCMs do not share these characteristics, since climate “nodes” are not clearly defined, nor always understood.

The second definition refers to theoretical models that are derived from the laws of physics, presumably after applying Occam’s razor. This is essentially what climate models should be, and this is exactly what David Evans is doing. Except that Mr Connolley asserts that he is not. Given the circumstances, I would suggest that this is an example of “Pious Fraud”, since Mr Connolley has joined this congregation to bolster his own belief system.

But it does not stop there. Mr Connolley goes on to say: “They conserve energy and momentum; they track physically described changes.” This is the fallacy of “Begging the question”, because the current GCMs are bottom-up, so they use the conservation of energy and momentum as necessary components within their calculations. David’s model appears to be more top-down, and based on an analysis of the observed natural variations of input, in order to identify the key drivers of change.

Mr Connolley’s fallacy exists because both approaches are valid, but in different circumstances. The GCMs assume that CO2 is the primary driver, and that everything that needs to be known about the mechanisms is already known, apart from some variances that cannot be explained without assuming that the input data is in error.

David’s approach is to treat the physics as a mathematical problem (not a physical one, as Mr Connolley asserts) to be explored in an investigatory way, to see where it leads.

It is this last point that Mr Connolley most fears, because he and the established climate scientists appear to be losing control of the narrative, and if this new model turns out to be more accurate in its predictions than the GCMs, then there will be some explaining to do.

I think you will find the vast majority of regular writers reading and trying to understand what is in front of them.
Most people are neither mathematicians nor scientists so it is important for an understanding to take place before commenting.
Since this is a sceptical site, I am sure there is likely to be a lot of comment, good and bad, down the line, but unlike yourself we tend to keep our powder dry until such time as we have a clear shot, instead of firing off at all and everything.
Is it not possible for you to keep your gob shut for just one day?

(A) If atomic tests resulted in massive climate fluctuation that would help explain away the three decades of postwar cooling, why hasn’t the alarm raising crowd adopted it too? You say they do but offer no references to something I’ve never heard of, just a link to Weather Underground estimates of an actual nuclear winter. The cooling was blamed on pollution aerosols, mainly, but here you even invoke radioactivity but likely have added a fudge factor in how you scale it to make your model fit, yet another parameter. Why not blame cooling on normal pollution like everybody else does?

(B) “The theory of the “delay” will be tested soon. It is falsifiable.” No, it’s falsifiable *now* by treating the last few decades as test data, or the first few. It’s confusing what is being shown. Are these all trained only on the 1850 to 1978 period, which you say in a comment the model is “mainly” trained on? This is what Willis keeps asking for, which I appreciate. Lubos also asked for a test based on simply reversing the temperature trend in time as test data, to see if the model is simply arbitrary in matching any data with similar characteristics. It seems the nuclear option would prevent that though, since it locks half the variation in climate to a hand-waving argument. That a natural sudden plunge in temperature may occur at any time is indicated already in the main Greenland ice core, but I wonder how well that will in fact “confirm” your model if it now happens again, since such drastic plunges are so common and so unexplained.

(C) You say you have an algorithm that translates solar output to temperature in some way that also involves a “force X,” *but* will it do so uniquely or just arbitrarily, and is there any actual meaningful relationship between solar output and temperature variation? We obviously have that correlation with the Little Ice Age already and a solar lull, but what about recent minor fluctuation and actual temperature rises versus falls? Because when I look at TSI and temperature and connect the peaks and valleys with plot lines to ignore the 11 year cycle, I don’t think I see any correlation at all. This whole exercise just seems like pattern matching that is entirely arbitrary and thus entirely meaningless, since it could also match the stock market in either direction, or explain solar variations as being caused by climate on Earth. This is why Willis is so concerned about a potential PR disaster, rightfully so. There is, after all, a multi-hundred million dollar a year PR effort precisely focused on stereotyping climate model skepticism as being just Internet cranks, and here you are avoiding real peer review on the Internet as Steve Goddard’s blog fills up with Iron Sun and Ancient Gods crackpots. That you failed to anticipate both Willis’ and Motl’s pointed criticisms about arbitrary wiggle matching means you failed, not them: you did not present a clear picture of what you were doing in something shorter than 170 pages, and that presentation obviously did not include the usual tests of uniqueness that any peer reviewer would demand for such a radical new theory. Now that you reveal the nuclear option, the laugh test must also be applied.

I think the phrase “massive climate fluctuation” is a bit of hyperbole here. He is saying that it acted to moderate warming, not saying it caused “massive fluctuation”. But by adding a lot of dust into the stratosphere that takes a long time to settle out, it likely did reduce the overall clarity of the stratosphere. That the atmosphere has “brightened” since the 1970′s is rather well documented in many papers.

The chinks in the armour are gradually showing.
* The filter can’t represent the frequency shifting that occurs due to thermal capacity of the oceans.
* The energy in the 11-year variance can’t really just disappear into nothingness, but with a simple filter model it can – First Law be damned.
* The 11-year delay is postulated, not discovered, simply due to the anti-Ockham assumption of a Notch Filter, even though the IMF Bz zero transition can provide an immediate Svensmark-style albedo shielding of the TSI peak at mid-latitudes with no delay required.
* There’s no OLR, and surface temperature is the only testable quantity it outputs, and it needs the Atomic Playboy parachuted on-demand into the 1950s to even get that far. The downturn of one of numerous documented ~60 year cyclic climatic influences would generalise to longer timespans.
* Withholding the details from the critics is backfiring. One minute we’re told “Our climate model is in a spreadsheet that we will be releasing shortly.” Three hours later we’re told “This series of blog posts is far from finished ”. Committing to a specific release date would save us the drama.

I won’t say it’s turning into a train wreck, but a brief derailment incident has already occurred, the scale of media coverage being the only free variable.

Sonny, I’m all for testing out other possibilities which make the model more accurate (you’ll be able to try it yourself soon).

You’ve called the contribution of the atmospheric bomb tests a fudge factor but you haven’t actually provided any reasons why 440Mt of explosions wouldn’t have some cooling impact. The question then is how much of an impact is reasonable. We provided two papers with estimates in the same ballpark.

Owing to the policies passed during the period of the Great Leap Forward, 1958–1962, according to government statistics, about 36 million people died in this period.
Until the early 1980s, the Chinese government’s stance, reflected by the name “Three Years of Natural Disasters”, was that the famine was largely a result of a series of natural disasters compounded by several planning errors. Researchers outside China argued that massive institutional and policy changes that accompanied the Great Leap Forward were the key factors in the famine, or at least worsened nature-induced disasters.[8][9] Since the 1980s there has been greater official Chinese recognition of the importance of policy mistakes in causing the disaster, claiming that the disaster was 30% due to natural causes and 70% by mismanagement.

The hydrologic system of Long Island, N.Y., showed a marked response to deficient precipitation in the years 1962-66. By 1966, streamflow was the lowest of record in many Long Island streams, and ground-water levels had declined a maximum of about 10 feet in the central part of the island. Although the drought apparently ended in the early months of 1967 and ground-water levels and streamflow recovered somewhat since then, ground-water levels and streamflow were still considerably below long-term average values in September 1968.

Table 2 ranks droughts for the period 1915-1991 based on streamflow records from six long term gaging stations. Three droughts–1930-1934, 1952-1955, and 1962-1964–are notable for their severity, duration, and widespread impact. For four out of the six stations, the 1952-1955 drought was the most severe on record. An examination of rainfall records by Knapp (1990) indicates that the 1893-1895 drought may have had as extensive an impact as the 1952-1955 drought in many parts of Illinois. However, there are no streamflow records for this earlier drought.

1960’s Drought
Drought occurred across SE Australia from 1962 to 1968. This drought varied in extent and severity over different years. Some places were as dry as the Federation Drought. Both Hume Weir and Burrinjuck Dam were dry in 1965, and there was widespread soil erosion and stock losses.

In 1962, the United States carried out a series of high altitude nuclear tests called Operation Fishbowl. The tests were a response to the Soviet Union’s announcement that it was ending a moratorium on nuclear testing. The planning for these tests was rushed and resulted in many changes as the program progressed.

All the tests of Operation Fishbowl were launched on missiles from Johnston Island, just north of the equator in the Pacific Ocean. The missiles were launched toward the southwest of the island to keep the detonations as far from Hawaii as possible. This was because the planners were worried that the bright nuclear flashes might cause blindness or permanent retinal injury.

The final four tests were all successes. On October 19, 1962, a test codenamed Checkmate was successfully detonated at an altitude of 147 kilometers (91 miles). Although the exact amount is classified, it was reported as being under a 20 kiloton explosion. The fourth Bluegill test successfully took place on October 25, 1962. Although the exact size of Bluegill Triple Prime is classified, it is believed to be between 200 and 400 megatons. The following test, Kingfish, took place on November 1, 1962 and is believed to have a yield in the same 200-400 megaton range. The last test of Operation Fishbowl was codenamed Tightrope. It took place on November 3, 1962 and was detonated at a much lower altitude. The nuclear yield was only between 10 and 20 megatons.

Yeah those are typos. They must mean 200 – 400 kt. Sorry I didn’t pick that up. Actually according to Wikipedia it was 400 kt each. I saw the effects of the Bluegill Triple Prime explosion in the early hours of 1 November in Auckland, New Zealand (aged 14). It created a multi-colored aurora over the whole sky (aurora australis are not seen at that latitude).

The points are that:

the US ran numerous high altitude tests through 1962 at the Johnston Island location; but

it can’t have been dust which caused the resulting widespread global cooling/drought, as these explosions were not at or just above ground level on a continent (so could not raise dust).

When somebody can explain to me how cameras were set up which captured nuclear explosions at close proximity to houses they were filming, I will reconsider.

There is ample evidence refuting the existence of nuclear weapons.
But I will not divert this thread into a debate about it. If you are interested, google it. Climate change ain’t the first time we’ve been hoaxed and it won’t be the last.

I’m afraid to say that the model does not just incorporate a “fudge factor” it incorporates a “fictional fudge factor”.

Just as “climate change” is used to promote fear in the population, the nuclear bomb hoax did this in the late 40s 50s and 60s to a FAR GREATER EXTENT.

If you search YouTube for videos of ‘nuclear explosions’ you get .. About 75,200 results.
If they were all “fire bombs [like] Hiroshima and Nagasaki” I am a camel that can pass through the eye of a needle.

YouTube comment by ‘Summer Winter’ :-
“According to Google search [the cameras] were hidden behind barriers and bunkers, or shot from far away with a zoom lens. .. The interesting thing is that the close up footage is only from low yield bombs (20-40kt) and that any higher just vaporized everything in the blast zone, and that’s why there is no footage of bigger bombs except from far away.”

Peter, there are probably as many videos about vampires and Dracula on YouTube.
It does not mean they’re real. There are certainly tens of thousands of fake movie explosions that look pretty real to me as well! And your comment about the cameras is absurd.

I am not going to argue over the propaganda films, which were made to calm the fears of the US population over nuclear testing. When seen by today’s standards, they are awful anyway.

The important point, from a climate perspective, is what atomic and nuclear blasts do to the different layers in the normal earth atmosphere.

At the time when America was conducting atmospheric tests in Nevada, and later at Bikini Atoll, most long range radio communication was transmitted in the High Frequency band, somewhere below 30 MHz. Signals in this band would bounce off the ionosphere and the ground, reflecting the signal around the Earth’s curvature and so giving the long range.

During the tests, the Ionosphere (and other atmospheric layers), was seriously disrupted and long-range communications were lost, and had to be resent when the atmosphere finally calmed down again. That often took several days. But, when it did calm down, the actual reflective capacity of the ionosphere was still greatly reduced, due to the quantity of dust particles that had also been carried to high altitude by the atmospheric disruption. It was only over time – sometimes months – that the actual signal strengths improved to the levels they were previously.

Depending upon particle size, and the height to which the disrupted winds carried it, it is not unreasonable to conjecture that the atmospheric dust would also have the effect of shadowing the earth from the sun.

I was serving in the military, and although we had all of this fancy high-tech electronic communications gear that could auto-encrypt, and self correct errors, etc., we still had to learn how to encrypt manually, and how to send and receive radio communications via Morse code (at 25 words per minute), because being a simple on/off signal, it would get through all the noise, when more sophisticated signals could not.

And it wasn’t solely nuclear blasts that were the problem. In my time, we experienced a couple of large solar flares that also disrupted long range communications. That wasn’t put down to a disruption in the ionosphere per se, but rather a change in the intensity of ionisation.

Historical books on military communications, for the period around the 1970′s would be where I would look for some of the more technical points. It really wasn’t my field, but rather an adjunct subject that we all had to study.

During the tests, the Ionosphere (and other atmospheric layers), was seriously disrupted and long-range communications were lost, and had to be resent when the atmosphere finally calmed down again. That often took several days.

Which reminds me – low SSNs tend to mean poor HF radio communication. At a high sunspot maximum the MUF (maximum useable frequency) can go to 50 MHz or higher for short periods (hours, days).

When the SSN count is low sometimes the MUF only gets up to 3.5 or 4 MHz. And 80 meters gets crowded.

Interesting page on the subject. Includes current SSNs and 10.7 signal. And maps.

Interesting to see some of the naysayers focusing on the throwaway comment by David about human induced cooling in the mid 20th century (the nuclear winter hypothesis) when they supported that idea at the time and still say that human influences are the primary climate driver.

In fact human influences were not necessary then and are not necessary now.

The mid 20th century cooling was a result of slightly weaker cycle 20 plus a negative PDO.

I am a fly on the wall. But a fly with an extraordinary education and ability with mathematical modelling. For a long time I have been told what I believe is a lie. I have been told (as have all the other flies in the house) that by flapping my wings I am creating extra energy that is heating the house up. The humans in control have determined that to reduce this heat they plan to introduce Mortein into the air.

I have a different theory as to why the house is heating up. I think that it is actually a combination of the temperature outside the house increasing and an increase in light coming through the windows.

I have had a massive breakthrough in identifying a 24 hour cycle! It seems that there is a reasonable correlation between the daily solar cycle and the temperature in the house!

The problem is that the correlation is not as strong as I would have hoped. It seems that at about midday, when the TSI coming through the windows is at its peak and the outside temperature is highest, there is actually a drop in the temperature inside the house. What I have done in order to get around this is introduced a “notch” filter. Easy.

There are also some other strange delay effects and other oddities, for example at 10pm suddenly there is a slight increase in temperature again, and then a delay as the house temperature slowly cools again after 4am and then starts to warm again as the sun comes up.

Never mind, this just means that there is an “x force” which is some misunderstood and mysterious aspect of TSI through my window and the temperature outside.

I now have a “physical model” with a few bells and whistles on it which presents a completely alternate and equally plausible theory to the “wing beating” theory, and I’m hoping I won’t get hit with a face full of Mortein.

Now to work out why the humans keep complaining about a “cooling bill”.

“It seems that at about midday, when the TSI coming through the windows is at its peak and the outside temperature is highest, there is actually a drop in the temperature inside the house. What I have done in order to get around this is introduced a “notch” filter. Easy.”

If you add an automatic system of blinds that responds to TSI it can swing to a shading position when the TSI gets to a certain point. That won’t actually have a cooling effect but it will offset the further warming from even higher TSI.

Using David’s method of analysis that cooling effect of the blinds will show up as a notch offsetting the higher TSI, at least for a while.

So your analogy is not sound.

Out in the real world the sun actually reduces the blinds (less clouds) but that opens up the ocean surface to more incoming energy which disappears into the oceans for a while.

Again, that produces a notch as the extra energy reaching the surface is retained by the oceans temporarily (hence the delay) offsetting the extra incoming energy from less clouds.

Well said Stephen.
The house reference opens another fallacy Sonny implied: You CANNOT model or analyze systems where humans (operators) can provide input at any time with no record of what they did.

Analyze Climate Change without human input, then go back and look for anomalies and investigate if some other natural event could have caused them. Only after eliminating all possible natural causes should you look at what humans could have done to cause it.
Starting with supposed human interference, as the current GCMs do, is and always will be a waste of time.
Especially when the modelers’ salaries and egos are dependent on the outcome.

The Griss, the guys who know what’s actually cooking don’t hate CO2.
They just love a good old profitable hoax. And CAGW is up there with fractional reserve banking. Either way you’re screwed. You can bring up the truth in only so many different ways until you realise that it is LIES that are the true currency in our economy.

Both the computers and the programming of the GCMs had a huge and very significant amount of human influence applied to them. It is thereby legitimate to question what effect that might have on the final results. One answer is easy: without the human influence there would be no results from the GCMs. So we clearly have an AGCM and maybe even a CAGCM.

In short, we have demonstrated that the global warming of the last two centuries could have been mainly associated with TSI rather than CO2. This overcomes one of the bedrock beliefs of anthropogenic global warming, namely that the recent global warming could not plausibly be due to anything other than CO2.

Even simpler, depending on where the windows are in the house and the nature of the windows, it is possible that the windows switch from transmitting light to reflecting it. That increase at 10 PM could be a classic wind drop (convection cooling change) with a slight overshoot of the house temperature as it comes back into equilibrium, or maybe it’s manmade, like the hot water system’s waste heat kicking in.

Personally my house does a big kick up at about 9PM in winter when the energy loss of the house to its surroundings becomes so bad that I switch on the heater. Where’s that global warming those climate scientists promised when you need it?

NB, strangely that kick up in temperature is occurring later at night recently. I have detected an unmistakable correlation between that and my gradual descent into fuel poverty as the government insists on 13.7% electricity price increases. Soon this effect will be so bad that it will be economic to replace electric heating with a more ancient device designed to increase the earth’s CO2 content more directly.

Quite right Griss, lots of oil and diesel fuels were destroyed, also rubber. The amount of human activity on the Eastern Front alone was phenomenal and would conceivably create a signature. With so many contributing factors, much can’t be measured or determined.

I had an interesting discussion with John Kennedy of Met Office in comments, though the threading at CE is pretty hard to follow sometimes.

He said models were “tuned” to fit 1960-1990. And discussion of supposed “validation” revealed that most of it was circular logic and geographically insignificant comparisons suffering from sample selection bias.

Frankly it’s a mess.

Something really stupid was done early on (a 0.5K step “correction” in 1946); rather than admit the error and fix it, they’ve been trying to paper over the cracks ever since.

Searching for “tuned” I can find JK saying (to you) “Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect.”

That appears to be close to the opposite of what you’re claiming he said. I can’t find JK saying what you are now claiming he said. Can you quote him exactly, please.

You claimed JK had said something. I looked at the post you referenced, and found that he’d actually said the opposite. So I asked you to back up your claim with a direct quotation, in case I’d misunderstood.

Stratosphere warming is roughly complementary to the troposphere cooling. The second graph shows extra SW solar making it into lower climate since Mt Pinatubo settled.

Follow the link therein, for the full story and derivation of the radiative anomaly.

My interpretation of this effect is that after major eruptions other aerosols, either industrial pollution and / or ozone get flushed out with the volcanic aerosols, leaving the stratosphere more transparent.

There was a post-1990 warming caused by the eruptions; this is usually spuriously attributed to CO2.

I show that the tropics are very insensitive to radiative changes and that current aerosol forcing is being rigged to much less than earlier, more rigorous values in order to make models work with higher sensitivity.

This is why I suggested the apparent notch filter may simply be insensitivity to radiative change.

The climate is not just removing the 11-year solar signal; it’s pretty much removing most of it.

There may well be a long term solar effect similar to what you suggest. This is deep penetrating UV that gets past the negative feedback effects in the surface layers.

Stratospheric injection wasn’t a big deal in WWII except for some soot from airplane exhaust. The real problem was SO2 from coal burning, which caused a constant tropospheric haze that only started to clear in the 1970′s and is probably occurring again due to industrialization in China, India and Brazil.

Definition of analogy (n)
a·nal·o·gy[ ə nálləjee ]
comparison: a comparison between two things that are similar in some way, often used to help explain something or make it easier to understand
synonyms: similarity · likeness · equivalence · parallel · correspondence · correlation

The similarity of their results proves nothing about the system they are said to simulate but it does prove something about themselves. They were programmed based upon similar assumptions so their output will by necessity be similar. All it takes is one significant non-similarity between the climate models and the actual climate system to falsify all of them.

How about a significant change in atmospheric CO2 over the past nearly 18 years with no significant increase in temperature for just such a test. It appears that the climate models all fail without additional “tuning” to adjust for the discrepancy. Hence, they are invalidated as they stand and cannot be relied upon to forecast/predict/project/simulate the effect of CO2 within our actual climate system.

We have the failure of those who assert “the science is settled” compared to David’s restart of the science from the top down. He, unlike the CAGW “team”, is exposing his thought process, the development of the ideas, states that the ideas still need testing, states that his models are a work in progress, and says he will soon release a working version in Excel. The CAGW team demands that we trust them and send them more of our money. David says “here is what I have done, please verify and/or find fault”.

The hardware required is trivial; you can run HadCM3 on a desktop or even a laptop. But no: I don’t think people have tried. Just writing the bare dynamical core of a GCM is waay beyond you lot, let alone all the rest. The models are well enough described; what’s lacking is the ability or desire of you and yours to read the papers.

On that basis you declared yourself an expert on climate and took it upon yourself to be head of thought police at WikiPedia, whilst conveniently forgetting to point out on your profile page that you were also a political candidate for the Green Party in Cambridge.

Your lack of integrity in the past probably ensures that you will now be ignored even when you make a valid point.

Quote notorious WC “Just writing the bare dynamical core of a GCM is waay beyond you lot, let alone all the rest.”

William, just what is a “bare dynamical core of a GCM”? Do you mean the line-by-line extinction coefficients, which have nothing to do with absorbing surface radiation? Or is your “dynamical” the patches and band-aids applied to fake any contribution of CO2?

Dynamical core is a basic concept in GCMs. I’m not surprised you don’t know what it is, because (as I’ve said repeatedly) none of you have a clue how GCMs are built. I am surprised that you asked though – but it’s good that you did.
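For readers wondering what is actually meant by a “dynamical core”: it is the part of a GCM that numerically integrates the fluid-flow equations on the model grid. A toy one-dimensional advection step – nowhere near a real core, which solves three-dimensional equations on a rotating sphere, but the same species of calculation – might look like this (illustrative sketch only):

import numpy as np

# Toy 1-D advection step: the flavour of what a "dynamical core" does, vastly simplified.
nx = 100
dx = 1.0        # grid spacing
u  = 1.0        # constant wind speed
dt = 0.5        # chosen so the CFL number u*dt/dx = 0.5 < 1 (stable)

x = np.arange(nx) * dx
q = np.exp(-0.5 * ((x - 20.0) / 5.0) ** 2)    # a blob of tracer (e.g. a temperature anomaly)

def upwind_step(q, u, dt, dx):
    # First-order upwind finite difference with periodic boundaries.
    return q - u * dt / dx * (q - np.roll(q, 1))

for _ in range(100):
    q = upwind_step(q, u, dt, dx)
# After 100 steps the blob has moved u*dt*100 = 50 grid lengths downstream
# (and smeared out a little, as this crude scheme does).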

Frank Bosse at Die kalte Sonne here puts the spotlight on a global warming forecast published by some British MetOffice scientists in 2007. It appeared in Science here.

The peer-reviewed paper was authored by Doug M. Smith and colleagues under the title: “Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model“.

[...]

Now that 2007 is some years behind us, even Smith et al have realized their forecast was overinflated and so they produced a new paper which appeared last year. The latest by Smith has taken natural variability more into account and he is much more careful with prophecy-making. Still, the range of uncertainty the new paper offers makes it “more or less useless”.

[...]

We’ll be revisiting Smith’s newest forecast in about 5 years time. In the meantime we have to ask ourselves if these people will ever learn. Science can take only so much damage.

… its going to be HILARIOUS watching the alarmistas trying to weasel their way around it

Fudge Factors usually work both ways. Let us not forget that in the 1970′s the climate crew were getting the vapours over a coming ice age. It will be “back to plan A, lads”, for them, when plan B fails to deliver as expected.

I suspect that your bravado is absent of any real performance. You have no idea of what you are saying or who you are saying it to. That you say it demonstrates your ignorance. Your words are totally unresponsive to

The CAGW team demands that we trust them and send them more of our money. David says “here is what I have done, please verify and/or find fault”.

Prove that you know the real meaning of the words you use. Share with us the documentation and source code for a significant software package you have written. Show us that producing something that works is not beyond your ability.

I wanted my post to be fit for mixed company so I held myself back. The full force of my thoughts on the matter might have burned a hole in the space-time continuum. However, I think my point was made.

I predict that if WC responds, it will be consistent with his countless other comments. There will be nothing new, relevant, or even approaching what David has already posted. He uses words as weapons and not as tools of understanding and communication. He has not yet demonstrated that he knows how to program in any language to any degree. How to properly develop a complex system from the top down and make it actually work appears to be way beyond his feeble abilities.

WOW! A Perl script of a few hundred lines. With vanishingly little internal and no external documentation. Any script kiddy could have done at least that much in a weekend. It doesn’t convince me that you really understand how to program anything beyond the boringly trivial.

Are you only an empty braggart, as we think you are, or can you stand and deliver? That is something you have to prove. You are the one demanding respect on this list. EARN IT! We don’t have to prove a damn thing to you until you have.

Now, how about something of substance? Something well in excess of 1,000 lines of executable code that does something more than the trivial. Something difficult enough that it actually requires a technical understanding of the development process, careful planning, rigorous design, and impeccable implementation. Have you done at least one project that meets that requirement? Or is all that you have done the scrambled code of the kind you offer in your example?

“Via email, I asked Anthony Watts, proprietor of WattsUpWithThat, what he thinks of Goddard’s claims. He responded…

…while it is true that NOAA does a tremendous amount of adjustment to the surface temperature record, the word “fabrication” implies that numbers are being plucked out of thin air in a nefarious way when it isn’t exactly the case.

“Goddard” is wrong in his assertions of fabrication, but the fact is that NCDC isn’t paying attention to small details, and the entire process from B91’s to CONUS creates an inflated warming signal. We published a preliminary paper two years ago on this which you can read here: http://wattsupwiththat.com/2012/07/29/press-release-2/”

About half the warming in the USA is due to adjustments. We received a lot of criticism for that paper, and we’ve spent two years reworking it and dealing with those criticisms. Our results are unchanged and will be published soon.

This is some pretty cool stuff. Even though it’s 3:30 in the afternoon here it is past my bedtime or I’d try and wrap my head around this. I’m just afraid that if I get started I’ll be in here until I have to go to work and I’ve tried the work without sleep thing before, it’s not my idea of fun.

No matter what the naysayers will come up with you, along with the rest of us, have learned some things. Isn’t that what it is all about?

Think about the timing: At the peak of the sunspot cycle, while the sun is producing its maximum solar irradiation, it turns out that the Sun’s magnetic field is collapsing through its weakest moment. (Marvel at Figure 1 below.) The solar radiation only varies a little through the cycle, but the dynamo of the solar magnetic field is undergoing profound changes — flipping in polarity from North to South or back again. This causes the notch.
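For intuition about what a notch at the 11-year period means in transfer-function terms, here is a generic second-order notch evaluated at a few periods (purely illustrative: the width Q is an assumed number, and this is not the actual notch-delay model’s filter):

import numpy as np

# Generic second-order notch magnitude response, centred on the 11-year period.
f0 = 1.0 / 11.0        # notch centre frequency, cycles per year
Q  = 0.5               # assumed width parameter (smaller Q = broader notch)

def notch_gain(period_years):
    f = 1.0 / np.asarray(period_years, dtype=float)
    num = np.abs(f0**2 - f**2)
    den = np.sqrt((f0**2 - f**2)**2 + (f * f0 / Q)**2)
    return num / den

for p in (5.0, 11.0, 22.0, 50.0, 200.0):
    print("period %6.0f yr  ->  gain %.2f" % (p, float(notch_gain(p))))
# The 11-year line is nulled (gain 0), nearby periods are partly attenuated,
# and slow multi-decadal variation passes essentially unchanged (gain near 1).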

I may be mistaken, but I have seen no comment yet regarding the combined local terrestrial effect of EMP output from the nuclear detonations. Is it not conceivable that we might be looking at a locally induced, more immediate global temperature ‘notch’ in response to these EMP – a cascading change to the atmosphere that leads to a small, brief sag in the temperature?

Jo states:

it’s radioactive too (a bit of a cosmic ray effect?)

As I understand it, the cosmic ray effect amplifies as the solar wind declines and in turn leads to greater ‘seeding’ and cloud formation, thereby to raising of albedo and net cooling. An EMP from a nuclear detonation might conceivably profoundly alter local conditions in a manner that permits greater seeding, eg. the strong EMP pulse that effectively reduces the solar wind by creating a brief, separate mini ionosphere?

After all, the Evans (Big) Notch coincides with stellar polarity reversal, implying an association, of as-yet unknown nature, with diminishing TSI.
Why not a much smaller and shorter-term effect on temperature as the result of multiple local EMPs?

I know it seems ‘out there’, but it appears as though it may be an effect being ignored (?), similar to the potential changes in planetary ionosphere induction related to stellar polarity changes?
What are the implications for the behaviour of the ferromagnet that is the Earth’s iron core?

Sorry if I’m totally wrong, but I can challenge a goldfish at times for memory span, so feel free to laugh.

For the early years, sunspots were used as a proxy for TSI. I cannot remember if this changed for later years. If not, then there is an explanation for the overshoot in the years covered by the nuclear tests: issues with the sunspot count itself.

Such as the Waldmeier overcount (which covers the same period) and many other reported errors, though some may be biased depending on who reports them.

Force x is challenging. The Svensmark effect doesn’t work because, as I understand it and as charts associated with the theory show, as sunspot numbers rise the effect (cosmic rays) falls, though there may be more clouds over the equator where ocean evaporation is greatest.

Could the apparent albedo change noted at TSI max actually be an increase in energy absorption by the oceans, due to the spectral change of the energy produced by the Sun? As the sun reaches max there is a change in the frequencies that it generates, an increase in certain UV wavelengths. UV is absorbed by the ocean far better and to a greater depth than longer wavelengths. UV is also absorbed by the upper atmosphere. There is a caveat that the UV cycle may not be fully in tune with the sunspot cycle, as one commenter at Tallbloke’s blog mentioned a few days ago.

The nuclear winter effect is one of those Gomer Pyle “shazam” moments. I was immediately put in mind of Einstein’s cosmological constant (he considered it his greatest blunder). It is plausible but too convenient, not quite like saying “and here a miracle occurs” but close. Going to have to think about these known unknowns.

Hmm, not really sure that this is justified – David said they considered the nuclear bombs because they put considerable amounts of fine particulates very, very high into the atmosphere. While the war did generate huge amounts of dust, I am not sure it got anywhere near as high up, and therefore it was not in place to have the same effect. Although I do think about Asian forest fires getting into the jet stream, hmmm??

The atmospheric winds form into layers. I don’t understand why or how, it is just something I was taught, in learning that water vapour forms different types of clouds in different layers.

Wind speeds are high at altitude, and winds flow in different directions in different layers, and so dust gets held up by the turbulence and cannot easily move between layers.

The blast and radiation from a nuclear explosion have sufficient energy to shred the junctions between the air layers, and that is why the dust gets up there, and why it takes so long to get down again. I guess we are talking 50,000 or 60,000 feet and higher? Not a lot of gravity up there to pull the dust back down again.

It’s late where I am (a different time-zone than home) but a couple of thoughts are wandering through my tired head. Is there anything in considering the interplay between the nuclear test ‘fallout’ in the stratosphere and its recent temperature trends? Sorry if not quite on topic, but strat temps and trends are another CO2′ist ‘hot’ topic.

What are all those blood suckers doing hanging around these posts being as good as gold, like they know there’s a feast coming? Aren’t they afraid, when the sunlight hits, of being scorched to a frazzle?

All this work proves that the AGW propagandists in the scientific community have failed in their duty to conduct real research into climate change. As a consequence they should be disqualified from holding a science position, but of course it won’t happen for a variety of reasons. Perhaps one day.

Forgive me for not keeping up. Is this notch delay filter anything like the predictive cancellers used to get rid of echo on telephone lines? I built one of those with FIR filters about 30 years ago.

By way of clarification for us audio buffs, as long as you bring it up:

Sorry, but a notch filter is not used to remove “main hum (50 or 60 Hz) from audio equipment” – at least not successfully (learned the hard way of course). Here’s why:

If you had 50 or 60 Hz in your audio, it would have to be VERY large in order for you to even notice it. The famous “Fletcher-Munson” curves show a rapidly declining sensitivity on the low frequency end. (Your ears have SOME response down to 15 Hz). It is, rather, the second harmonic (100 or 120 Hz) that constitutes “Hum” and is more commonly a grounding issue. Also remember that it is the second harmonic that leaks residually (ripple) through a full-wave rectified power supply. And you certainly wouldn’t want to use inductors (heavy, bulky) in a notch of that low a frequency, but rather some active filter. I wrote an Audio Eng. Soc. paper in 1982 (on my Electronotes website) on an “Adaptive Delay Comb Filter”, not related to matters here!
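Since hum removal has come up: in the digital domain a notch at the 120 Hz harmonic is only a few lines these days. A minimal sketch (generic DSP illustration with an assumed sample rate and Q; nothing to do with the climate model’s notch):

import numpy as np
from scipy import signal

fs = 48000.0    # sample rate in Hz (assumed)
f0 = 120.0      # notch centre: second harmonic of 60 Hz mains
Q  = 30.0       # quality factor; higher = narrower notch
b, a = signal.iirnotch(f0, Q, fs=fs)

t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 440.0 * t)          # a 440 Hz tone we want to keep
hum  = 0.5 * np.sin(2 * np.pi * f0 * t)       # the injected 120 Hz "hum" harmonic
out  = signal.lfilter(b, a, tone + hum)
# The 120 Hz component is strongly attenuated; the 440 Hz tone passes essentially intact.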

Trouble with all the BIG NEWS postings is that the global temperature anomalies you are apparently using are probably drivel if based on any of the major organizations – NOAA, GISS, BEST etc. (refer to Steven Goddard’s site about data manipulations). Have you tried the model against CET or Armagh (probably the only truly representative surface records)? (BTW, I stand to be corrected on the assumption you are in fact using those temperature records mentioned above.) It is quite likely/possible that there HAS NOT BEEN an increase of 0.8C since 1880. Updated CET shows 0C (May 2014).

Eliza, We are very aware of how biased the adjustments seem. You are right, it is a real issue. The model could be in good form, but still produce the wrong predictions because the data is so questionable both for sunspots and temperature.

CET and Armagh are not global data sets. What choice do we have? FWIW – the turning points seem to survive the adjustments, it’s just the trends that shift.

It certainly will affect the accuracy of predictions (though if those results are adjusted too… Sigh).

ICOADS SST is fairly ‘raw’. It almost certainly will have some measurement biases, particularly in the early record when coverage was sparse. That may be better than adding a second lot of biases called “bias corrections”.

HISTALP also have some very long records but the long term variation is more adjustment than real data. They are very secretive with the real data and want stupid money for the costs of “extracting” the raw data, and require signing a non-disclosure agreement.

Once the data has been adjusted for political correctness, it is freely downloadable.

I doubt there is any data that has not been manipulated on the scale of 50y trends. I think the only hope of getting any information about climate is to look at <20y variability.

Having said that, the whole process of demonstrating that a model can be produced without CO2 to explain even the manipulated data is interesting.

Really, really good point Stephen Wilde made at 12:18am. I think the same exactly. The Sun drives climate. The Sun drives weather. The Sun causes warming, cooling, and extreme weather events. The Sun is going to drive the warmists batty as SC24 winds down. It’s time to drive that point home all across the world.

Yes you are right Bob Weber – the “warmists” will feel like they are in a “weber” once “judgement day” comes and that day is not far off. It has always been the Sun controlling Earth’s Climate for 4.5 billion years now. But many “Johnny come lately” Homo Sapiens born in the last 100 years can’t seem to grasp this reality. I guess they are exposing their low IQ which is embarrassing if they are scientists who should know better – clearly they didn’t study geology.

David, great work, and you are to be applauded for the courage to be transparent while acknowledging you don’t have all the answers, in sharp contrast to those who claim to have them all but can’t back that up with clear evidence that they have everything pinned down (when no one does). It also adds a different dimension to consideration of the deficiencies inherent in IPCC models and the adage: the degree of relevance going in is proportional to the limitations on application coming out. I look forward to your next installment.

I find the nuclear effect interesting from the perspective of dust and other nuclei in the atmosphere, and what degree of influence it has on climate variability. Is a variable influence on atmosphere (climate) dynamics arising from dust and gaseous aerosols, irrespective of whether the origin is anthropogenic induced sources (nuclear bombs, industrial emissions, agricultural land erosion, “carbon” gases and particulates, etc), or natural (cosmic dust, biological origins, etc), important?

In an holistic perspective, is the particulate/aerosol component a complex and continuously variable source of nuclei for cloud formation and therefore an influence on albedo and temperature? If so, would that have such a wide degree of variability on geographic, latitude and vertical stratification (let alone time) scales to make it somewhat difficult to model (or there is simply insufficent data available and/or on a long enough time scale to be useful)?

Is the above important insofar as the algorithms in your model, or a potential expansion thereof, are concerned, or is it relatively irrelevant? I might just have my head too far in the metaphorical clouds (or, as my children might say, “dream on, stick to what you know best”).

I think Anthony is just arguing over the words used by Steve. There was a similar debate on this site a couple of days ago when someone referred to the corruption of the data – a poster objected to the word corruption when technically “tampered” or “adjusted” was a better word.
I’m with Steve on the issue – all he does is some good investigative work and produces some useful “animated” graphs to clearly show the effect of the changes. Very simple, he doesn’t pretend it is highly scientific work – just basic facts. The NOAA etc. forget there are people who keep old data from the official websites or are clever enough to recover the older pages.

Jo, re CET and Armagh: the argument is not over global temperature sets. The fact is they probably represent true anomaly changes, even if local, due to the central British and Irish climates (rural, extremely stable, cloudy and wet, mainly with small temp variations). Basically both show zero change since 1640 (to this date).

>”The step response of the notch-delay solar model takes about 15+ years to fully respond”

>”The most important element of the solar model is the delay, which is most likely 11 years (but definitely between 10 and 20 years)”

Definitely the delay is the most important element. I favour a delay central to the 10 – 20 yr range +/- several years for reasons laid out elsewhere and not for discussion in this thread by me. That’s ongoing in Parts II and IV.
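For anyone trying to picture what a step response that “takes about 15+ years to fully respond”, combined with an 11-year delay, looks like, here is a toy delay-plus-lag sketch (assumed numbers; not the model’s actual transfer function):

import numpy as np

delay = 11.0    # years, assumed delay
tau   = 2.0     # years, assumed additional smoothing lag
dt    = 0.1     # years
t     = np.arange(0.0, 40.0, dt)

# A unit step in the driver at t = 0, as seen by the climate 11 years later.
delayed = np.where(t >= delay, 1.0, 0.0)

response = np.zeros_like(t)
for i in range(1, len(t)):
    # first-order relaxation toward the delayed driver
    response[i] = response[i - 1] + dt * (delayed[i - 1] - response[i - 1]) / tau

# Nothing visible for about 11 years, then the response rises and is essentially
# complete roughly 15+ years after the step, which is the shape described above.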

“Over the last 35 years the sun has shown a slight cooling trend. However global temperatures have been increasing. Since the sun and climate are going in opposite directions scientists conclude the sun cannot be the cause of recent global warming.

The only way to blame the sun for the current rise in temperatures is by cherry picking the data. This is done by showing only past periods when sun and climate move together and ignoring the last few decades when the two are moving in opposite directions.”

The solar-ocean lag is another 14 years or so on from the (apparent) atmospheric response starting around 2000. “Apparent” because there are other factors that distort the picture.

Therefore, the atmosphere is only now “seeing” peak OHC, 28 years after so-called 1986 “peak” TSI (a misnomer).

The actual solar Grand Maximum activity peak spans SCs 17 – 23 (1933 – 2008) i.e. 2014 is only 6 years past the end of peak solar activity but the effect of that activity via the oceanic heat sink will still be “seen” by the atmosphere for another 8 years or so.

I think this solar delay model is on the right track irrespective of arguments (well, mine anyway) over the nature of the delay. Whether David has captured oceanic lag sufficiently or not will be known over the next 3 years or so but that’s not for this thread either.

That statement is, in my opinion, seriously misleading. The TSI has been pulsing in a progressively increasing amount that puts even recent peaks way over the average. Their statement is, I suppose, in regard to “average TSI”; however, if there is a delay in effect, the peak pulses of TSI are still quite high.

It would seem to me that the Earth’s natural climate-moderating “systems” would have a hard time shedding the effect of the pulses. And remember, the negative portion of the TSI pulses is not “cooling” as SKS asserts; those troughs are still much higher than at any other time in the reconstruction, and especially post 1900.

>”The TSI has been pulsing in a progressively increasing amount that puts even recent peaks way over the average.”

Yes. And within those pulses are much shorter pulses. A rough analogy is the heating effect of mains electricity at 50/60 Hz (cycles per second), with a per-minute oscillation on top of that analogous to the 11 yr periodicity. But the heating effect causing the temperature rise in, say, water in a pot on a stove element is set by the level of power applied, i.e. how much you turn up the element dial.

From your reconstruction graph, the solar dial was turned up from 4.6 (1364.6 W/m2) to 6.1 (1366.1 W/m2) over the period 1900 to 1960. And the high level (6.1) was maintained until about 2005, when it was turned down only slightly (“slight cooling trend” – SkS) to, say, 6. Still way above 4.6.

>Their statement is I suppose in regard to “average TSI”

Yes, but as above, the reduction in TSI has only been “slight” (in SKS terminology); see the average TSI reduction from SC 22 to SC 23 here:

>”however, if there is a delay in effect, the peak pulses of TSI is still quite high.”

Very high, as above, even without the delay. The peak pulse in early 2014 is relatively weaker, though. We don’t yet know how much weaker in terms of TSI, but in terms of the 10.7cm Solar Radio Flux index, the SC 24 peak is 27% weaker than the SC 23 peak on the monthly average of F10.7:

>”those troughs are still much higher than any other time in the reconstruction and especially post 1900″

Yes, exactly. And the troughs (minima) are the bicentennial trend. Elsewhere in Parts II and VI I’m trying to demonstrate that the 1955 to 2014 trend in ocean heat accumulation (OHC) corresponds to the bicentennial trend (and deVries cycle) but lagged about 6 decades from when TSI first reached maximum levels in the late 1950s.

I agree it is solar activity that runs Earth’s climate, Earth’s interglacial periods and ice ages. Minor warm and cold periods on a decadal scale do correlate with the sun, and the model is spot-on in showing how the sun’s variability has the potential to do this empirically.

Just trying to get my head around Dr Evan’s work. Here’s my understanding of what he’s done. Let me know if any of this is incorrect.

1. Assume as a starting point that ”if the recent global warming was associated almost entirely with solar radiation, and had no dependence on CO2, what solar model would account for it?”. In other words, reduce a system with numerous input variables to a system based on the input of a single variable (TSI).

From what I can see, the output of the model of the hypothetical system and the observed record don’t correlate very well. It is then postulated that the poor correlation is due to an unidentified component in the actual (real) system, hence the need to introduce something called “Force X”.

Am I still right?

My question then, is this: isn’t it more likely that the non-correlation is due to the omission of the other variable/s from the original model rather than an unknown component in the system?

I guess what it boils down to is that something is missing from either the model or from the understanding of the real system.

To me, for the model to be plausible more needs to be explained about “Force X” – what is it, how does it work, etc.

The association between TSI and Temperatures is not a “non-correlation” at all – it’s been shown by others in different ways to be a delayed correlation, by about 1 cycle or 11 years, which is also what the notching effect in the transfer function suggests. See the references and this post. That’s a fairly suggestive pointer that there is some other force coming off the sun 11 years after the TSI changes. That force may be magnetic, electric field, UV, solar wind…

I should add, thanks for doing the work to understand this. It is not simple.

In the first part of my question/s I used the term “poor correlation” – sorry for using “non-correlation” in the second part. What I meant was that there is not a direct correlation between the two quantities (it’s delayed as you say). If I post further questions I’ll refer to this as “delayed correlation” as you’ve suggested.

Also, I’m curious as to why I get a “thumbs down” for merely trying to understand the hypothesis?

I suppose that in effect the existing GCMs have always implicitly accepted David and Jo’s notch and proposed that force x is CO2, but since there has been no correlation with CO2 levels for the past 18 years that didn’t help much.

Against that, the climate record obtained for thousands of years via direct temperature measurements plus proxy sources does show a tantalising but imperfect correlation between solar activity and temperatures on time scales involving multiple solar cycles.

To get a more acceptable fit one just needs to throw the effect of a single solar cycle out of the window (the notch) and work out why a single cycle has little or no effect.

In my view the effect of a single cycle is swamped by internal system variability AND the delay involved in the oceanic response to global cloudiness changes.

So, force x (IMHO) is the change in the mix of solar wavelengths and/or particles affecting the vertical structure of the atmosphere so as to affect cloudiness in the way I have described elsewhere.

The delay then occurs in the ocean response.

I realise that goes a couple of steps beyond David and Jo’s model which identifies the notch but not the cause, hence force ‘x’ but, so far, I think my proposal is the best currently on the table.

The thing is, you can’t argue against David and Jo’s model by just reinserting the assumed thermal effect of CO2 as force x. That approach has now failed spectacularly.

I had meant to add that, according to Evans in the preamble to this post, his to-be-released Excel model allows any ratio of the various “forcings” to be plugged in and run: CO2 80% / TSI 20% and so on for the problematic 1950s/60s, then on for predictive capacity.

Not convinced, but playing with the XLSX spreadsheet for prediction could be fun

Personally, my current thinking is that climate is a non-linear, coupled mix of a large number of elements, producing a chaotic system beyond our Navier-Stokes resolvable limits.

There were 2 peaks, the first and higher level in 1958ish, the second and lower level around 1987ish in terms of “the open solar flux FS from geomagnetic activity data” – L&F07.

1986 was a minimum of SC activity at the end of SC 21. The 1986 peak David refers to was only found using PMOD data from 1976 and by smoothing out the solar cycle variations. There was no actual TSI “peak” in 1986 (see Part VI link above).

A similar TSI “peak” at 1986 is found using a line tracing each SC minimum (the bicentennial trend). How that trend relates to 2014 and beyond is graphed here:

I’m still trying to get my head around Dr Evan’s work. Here’s my understanding so far.

Essentially, he’s compared two datasets – TSI and Global Temperature – in the frequency domain. TSI demonstrates a clear sinusoid (the well-known 11-year solar cycle) while the Global Temperature record shows no sinusoids, meaning that there’s no cyclic pattern detectable in the data.

A transfer function is then derived.

The transfer function is basically a mathematical construct that describes how one dataset (an input) can be converted to another (an output).

It’s then proposed that there is a “Force X” which smoothes or removes the cyclic pattern that should be present in the Global Temperature record.
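For readers who think in code, here is a minimal sketch of the kind of spectral-ratio computation being described, using synthetic placeholder series rather than the actual TSI or temperature data (the numbers and noise levels below are assumptions, not David’s):

```python
# Hedged sketch: estimating an empirical "transfer function" as the ratio of the
# output spectrum to the input spectrum, assuming a linear time-invariant system.
# Both series below are synthetic stand-ins, not the data used in the posts.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2014)
n = len(years)

# Stand-in "TSI": a constant level, an 11-year cycle, and a little noise.
tsi = 1365.5 + 0.5 * np.sin(2 * np.pi * years / 11.0) + 0.05 * rng.standard_normal(n)
# Stand-in "temperature": a slow trend plus noise, with no visible 11-year cycle.
temp = 0.005 * (years - years[0]) + 0.1 * rng.standard_normal(n)

freqs = np.fft.rfftfreq(n, d=1.0)                      # cycles per year
h_mag = np.abs(np.fft.rfft(temp - temp.mean())) / \
        np.abs(np.fft.rfft(tsi - tsi.mean()))          # |output| / |input|

# The input spectrum peaks near 1/11 yr^-1 while the output does not, so the
# ratio dips there -- which is what gets read as a "notch".
i11 = np.argmin(np.abs(freqs - 1.0 / 11.0))
print("ratio near 1/11 yr:", round(h_mag[i11], 3),
      "median ratio:", round(np.median(h_mag[1:]), 3))
```

Whether that dip deserves to be called a filter is taken up further down the thread.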

What I don’t get is, isn’t it more likely that what you’ve actually shown is that there is no correlation between the two datasets?

We know that a change in TSI should change temperature but on an 11 year time scale it doesn’t.

So there has to be something offsetting the effect and that something only delays, it doesn’t negate, so the TSI signal then turns up for longer timescales after allowing for the delay. The delay is spread over a period of 3 to 15 years but centres on 11 years which happens to be a single solar cycle.

That something is then labelled force x until it can be identified.

The advantage of being able to represent the situation in graphical form is to enable one to play around with the scale and timing of different climate parameters in order to narrow down the nature of force x. We are looking for a physical process that varies in such a way that it produces the necessary pattern. I understand that the method may already cast doubt on cosmic rays as force x.

Previously, people have just said ‘no correlation’ and left it at that, but that obstructs recognition that there is an underlying relationship between TSI, temperature and another element or elements of the climate system, all operating in a complex interaction.

The ‘model’ proposed by David allows work to be started on the analysis process in a step by step logical manner. It separates out the ‘signal’ of the missing climate driver (or drivers) in graphical form to give us a start for figuring out what could cause the delayed offset against the known influence of TSI variations.

I don’t see why these two statements are necessarily true – what evidence is there that TSI has the effect that is proposed by Dr Evans? What if the effect of changes in TSI is too small to affect Global Temperature in the manner suggested (Dr Evans suggests that there should be a “corresponding peak in the temperature”)?

Also, I think there is a logical flaw in the discussion of the concept.

In “Big News Part II” it is stated that “The peaks only last for a year or two, so the low pass filter in the climate system would reduce the temperature peak to somewhat below 0.1°C.”

I don’t understand why a low pass filter is mentioned here – you’re actually talking about the characteristics of the transfer function prior to its presentation in Section 3. Is that right? Or is the LPF mentioned in Section 2 different from the transfer function that “is fairly flat, except for the notch around 11 years, and hints of a fall off at the higher frequencies”? It’s not clear whether these two references are to the same thing or different things. This needs to be clarified.

>”Changing TSI must affect temperature as per the S-B equation. Hence there should be a discernible correlation between TSI and temperature even if very small.”

Yes, the very faint and very minor “fast” response: the fast temperature response to the approximately 11 year solar cycle identified all over the globe by Coughlan and Tung (2004), and subsequently by Zhou and Tung.

This kind of weighted integration has the properties of a low-pass filter (like all integration) and also produces a shift. The tau = 5y used here is approximately right for the shift; 10y is visibly too long. I did not spend too long optimising.

Maybe this would be a better way to achieve a similar result to notch-delay: it has a simple physical meaning and avoids the non-physical, non-causational notch problems.
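As a rough illustration only (this is not the commenter’s actual code, and the TSI series, its scaling and the kernel length are invented for the example), the weighted integration described above amounts to convolving the forcing with a causal exponential kernel:

```python
# Sketch of a single-time-constant "relaxation" response: temperature follows an
# exponentially weighted integral of past forcing (a single-slab ocean analogue).
# The forcing series, its scaling, and the 60-year kernel length are assumptions.
import numpy as np

tau = 5.0                                   # time constant in years, as discussed
years = np.arange(1700, 2014)
n = len(years)

# Illustrative TSI anomaly: a slow rise plus the 11-year cycle.
tsi_anom = 0.3 * (years - 1700) / (2014 - 1700) + 0.5 * np.sin(2 * np.pi * years / 11.0)

# Causal exponential kernel, normalised to unit area.
k = np.exp(-np.arange(60) / tau)
k /= k.sum()

# Convolve and trim so each output only uses past forcing; the first ~60 values
# are "spin-up", computed from an incomplete history.
response = np.convolve(tsi_anom, k)[:n]

# Plotting `response` against a temperature reconstruction (not done here) is how
# a rough tau ~ 5 y would be eyeballed; the 11-year wiggle is visibly attenuated.
```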

Today Clive Palmer hosted Al Gore at Parliament House Canberra to announce that the PUPs would support the repeal of the carbon tax in return for an emissions trading scheme. A few weeks ago Palmer was caught out dining with Malcolm Turnbull, who backed the Kevin Rudd emissions trading scheme. The plot thickens: the wealthy want to make money based on the flawed climate change agenda. Meanwhile the UN-EU emissions trading scheme is collapsing, so what are these deceivers plotting?

Palmer wants the Australian Government to build new RAN ships in Australia, despite high costs well exceeding other supplier countries. He is planning to build a replica of the Titanic in China. This guy has problems.

That’s what we originally started this project with. We went to find it in the frequency domain, but couldn’t. Eventually we realized we were looking at a notch, and got the empirical transfer function in Post I.

Note that there is a low pass filter at the heart of the model, in Post VI.

David Stockwell got me interested in the climate as a LPF, because he was finding a lot of signs for it (e.g. “Key Evidence for the Accumulative Model of High Solar Influence on Global Temperature.”).

( IIRC this is 1/s in Laplace terminology, if you’re used to working in those terms. )

Since it is a weighted integral it does have low-pass properties but due to the asymmetric kernel it also has a variable phase response and lag. Note the lag depends on frequency and is NOT a delay line.

I’m not suggesting here that there is a 5y period that I’m trying to remove. I’m suggesting that the expected response of a single reservoir model to a radiative forcing would be of this form. I very quickly tried a few values of tau and found 5y about right. Discussions here have cited published research pointing to similar values having been derived empirically.

This does not sufficiently attenuate the 11y signal and is obviously too simplistic but it seems like a good physical starting point.

My reading of the overall system is that there’s a strong -ve feedback in the tropics (a la Eschenbach) that strongly attenuates most surface radiative forcing, both solar and GHG. (Less so outside the tropics, but the tropics are the main energy input.)

There is deeper penetration of shorter wavelengths that bypasses this feedback and is subject to a longer time constant.

I think these two explain the relatively small 11y signal despite its dominance of SSN, and thus are in accord with your black-box result.

There are a lot of indications of a concurrent 9y variability, which many studies claiming 11y fail to isolate, and which many studies totally refuting SSN because of phase drift equally fail to recognise.

Too much simplistic analysis and hasty conclusions on both sides.

I estimate solar and lunar influence to be comparable in magnitude, lunar even stronger at times depending on size of SSN peaks.

The required cooling from the tests is about 0.5°C at its peak in 1963, the year that the USA and the USSR agreed to discontinue atmospheric testing. (If the solar model is too sensitive because the warming of the land thermometer records is exaggerated, then less cooling is required.)

The AMO (NA SST) appears to be the main contributor to (or the cause of) the cooling from the 1960s onwards.
This is unlikely to have anything to do with the tests, since the Arctic atmospheric pressure (a precursor to the NA SST) fell sharply in the late 1930s, recovering in the 1970-80s, only to repeat its sharp fall in the early 1990s. This would imply an imminent fall in NA SST, if history were to repeat itself.

I attributed this to the lunar anomalistic month when I did it, but perhaps it could be related to the rotation of the solar core, which is very close to that period too.

The three peaks are not remarkable against background noise on their own. But once they are recognised as the Fourier representation of a modulated signal, not individual peaks, their magnitude becomes significant.
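A small synthetic illustration of that point (the frequencies are arbitrary choices for the demo, not values taken from the model): an amplitude-modulated sinusoid shows up in a Fourier spectrum as a carrier plus two sidebands, so three nearby peaks can be one modulated signal rather than three independent cycles.

```python
# Sketch: modulate a carrier with a slow envelope and the spectrum shows three
# peaks at f0 - fm, f0 and f0 + fm. Frequencies are chosen to sit exactly on DFT
# bins purely so the demo is clean; they are illustrative, not the model's values.
import numpy as np

n = 2048
t = np.arange(n)
f0 = 186 / n          # "carrier", period about 11 samples
fm = 8 / n            # slow modulation envelope
x = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n)
top = np.sort(freqs[np.argsort(spec)[-3:]])
print(top)            # three peaks: f0 - fm, f0, f0 + fm
```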

It’s quantifiable, with a model that approximately hindcasts the observed temperatures. It is not just a concept with handwaving, or a rough one-off computation.

It’s got physical interpretations for all the parts. This is a physical model, not just curve fitting or an unexplained correlation.

These are, to me at least, the major important things so far and they have been lacking in climate science to date. But I’ll leave it open that someone knows more about the models and may dispute this point.

It’s interesting that the solar TSI based model gets close or right on, with or without CO2 included. But there is more to go and I eagerly await the next chapter.

“It’s interesting that the solar TSI based model gets close or right on, with or without CO2 included. ”

Well it would be if it did, but it seems to need a huge fudge factor in the form of a previously undocumented “nuclear winter” and a physically unreal, non-causal notch filter.

I think the venture is certainly worth pursuing, since the IPCC claims that natural-forcing-only models do not work are based on models tuned to an amplified CO2 forcing from which the CO2 is subsequently removed. Then, voila, natural forcings alone appear not to work!

That is little short of dishonest. If they’d put the same effort into tweaking their models (and data!!) without CO2 from the beginning, they would equally be able to report that adding 3x amplified AGW did not work either.

If I say dishonest, I’m being kind.

However, I think the current model proposed here could better be achieved by a relaxation response applied to SSN ( the basic response used by the IPCC to radiative forcing )

Well it would be if it did, but it seems to need a huge fudge factor in the form of a previously undocumented “nuclear winter” and a physically unreal, non-causal notch filter.

Greg,

The “nuclear winter” idea isn’t an invention of David Evans, it’s been around a long time and believed by many to be real. What if it is? We’ve been told to believe much more ridiculous things by the climate change worriers.

The notch filter is also certainly real because good sound math can find it in the existing temperature data. Being able to find it of course, doesn’t explain it but the math behind Fourier analysis has been too well understood for too long to doubt the notch without some very good reason.

The idea of nuclear winter following a nuclear holocaust has been around for a long time. However, suggesting 0.5deg C is actually present in climate from a number of airborne tests is, IMO both new and fanciful.

No one is questioning the Fourier analysis; the problem is what is done with it. I and several others have questioned dividing the spectra like that, since you need to sample the whole spectrum. An input which is mainly an 11y spike will always give you a “notch” with this method.

You are correct. However, your argument all by itself doesn’t look like sufficient grounds to dismiss the nuclear effect either. If we drop the obviously pejorative term, nuclear winter, which has been very much overused and look at the evidence there is for what David incorporated into his model then hopefully we can avoid condemnation of this TSI model until we see all of it. That’s my whole point in all the comments I’ve made, we haven’t seen all of it yet.

I have no idea how trustworthy any of this is, either nuclear effect or old temperature records, especially since there’s more than enough reason to believe that temperature data sets have been doctored up. But I suspect problems with both. Yet here we are, looking for an explanation that does account for the warming we have good evidence for and for which CO2 is a totally unbelievable cause. Let’s see it all before being its critic.

There were two assumptions: 1. TSI controls temperature. 2. The (TSI in, temp out) system is linear and invariant. Under those conditions, it’s a notch filter at work. See the implications of sinusoids as eigenfunctions in Post II.
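To spell that reasoning out (a standard linear-systems argument, stated here in my words rather than quoted from the posts): for a linear time-invariant system the output is the convolution of the input with an impulse response, so in the frequency domain

```latex
% Assumptions: TSI x(t) drives temperature y(t) through a linear, time-invariant
% system with impulse response h(t).
y(t) = (h * x)(t)
\;\Longrightarrow\;
Y(f) = H(f)\,X(f)
\;\Longrightarrow\;
|H(f)| = \frac{|Y(f)|}{|X(f)|}
```

If |X(f)| has a strong peak near f = 1/11 yr⁻¹ while |Y(f)| stays roughly flat, then |H(f)| must dip there, which is the notch.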

“ 1. Is the notch spurious or real?
I have yet to see an argument I consider satisfactory for spurious.

2. If it is real what is the cause?
It doesn’t act like any integrator I’m familiar with. ”

**************************************

First please see my answer to Roy that is below. Also probably Greg will answer for himself but I will give my response now as I am going to be away tomorrow.

************************

(1 – is the notch spurious or real?) The notch has to be considered “non-real” at the moment. (Future installments by David may change that.) As I stated to Roy, since it is non-causal, it is not real in that sense. I don’t know why David made it non-causal and believes he can then fix it with a delay. The delay, which does NOT even solve the causality problem, causes additional complications. Why not START with the much simpler causal filter with no delay needed? Well, he apparently THOUGHT (wrongly) that a notch had to be non-causal.

But you did actually ask two questions at once (is it spurious or real). It is spurious (at present!) as well as being non-real (non-causal). It is inferred as the ratio of two Fourier transforms, T (temperature) to TSI (solar output). (Need I mention that we don’t know either very well?) Since TSI has a bump up at 1/11-years, David infers a notch between TSI -> T, since T is quite flat. But there would be an inferred notch for any relatively flat spectrum, relative to TSI. So – sorry – spurious until proven otherwise.
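That spuriousness argument is easy to reproduce numerically. In this hedged sketch (synthetic series only, not the real data), the “temperature” is pure noise with no relationship to the TSI-like input at all, yet the spectral ratio still dips at 1/11 yr because the denominator peaks there:

```python
# Sketch: divide the spectrum of unrelated flat-spectrum noise by a spectrum with
# an 11-year peak and an apparent "notch" appears anyway. All series are synthetic.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2014)
n = len(years)

tsi_like = np.sin(2 * np.pi * years / 11.0) + 0.2 * rng.standard_normal(n)
noise = rng.standard_normal(n)                  # "temperature" unrelated to the input

freqs = np.fft.rfftfreq(n, d=1.0)
ratio = np.abs(np.fft.rfft(noise)) / np.abs(np.fft.rfft(tsi_like))

i11 = np.argmin(np.abs(freqs - 1.0 / 11.0))
print("ratio at 1/11 yr:", round(ratio[i11], 3),
      "median:", round(np.median(ratio[1:]), 3))
# The dip at 1/11 yr comes entirely from the peak in the denominator.
```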

(2. If it is real what is the cause?) Any answer here would be an immense help. The fact that the “notch filter” may not even exist makes speculation on its cause less urgent! Describing (or suggesting) a plausible cause FIRST would be a tremendous boost to suggesting its possible existence. Is the cause “Force X?” I don’t understand what Force-X is supposed to be and/or do, even vaguely, and so far the installments seem to attribute it to the Sun itself (?), or to something on the Earth(?) even biological (?). If you are lost – welcome, as a skeptic should be, to the club.

Quite frankly, if David has anything, and I sincerely hope he does, it needs to be spectacular! Too many pieces. Too many promises. Summarize first – details later. Science is not a murder novel – you tell who “did it” right in the abstract. Sadly, at the moment at least, it looks like another “Just-So Story”.

“The notch filter is also certainly real because good sound math can find it in the existing temperature data. Being able to find it of course, doesn’t explain it but the math behind Fourier analysis has been too well understood for too long to doubt the notch without some very good reason.”

We need to pay attention to standard terminology or we risk misleading others.

In signal processing, we should not use the term “filter” unless we have reason to believe (such as an obvious electrical network, mechanical linkages, etc.) that there is an input-to-output relationship in place, and we wish to describe this linkage. This would be the meaning of “real” – as an existential “reality”. Math alone is probably acceptable at this point. Then there is the issue of “realizability”, actually making the thing, or observing it working in Nature. This requires, among other things, causality: an arrow of time. It is perfectly proper to consider a non-causal filter to NOT be real.

The ratio of the magnitudes of two Fourier transforms of two different signals is NOT automatically a filter. It may suggest that a “nuts-and-bolts” filter of some sort COULD be an explanation – especially if some plausible mechanism is presented – otherwise perhaps not so much. David uses the term “transfer function” for this ratio, which is perhaps a misuse (it should be Laplace instead of Fourier), as “transfer function” suggests a real (existing) filter, or an established path. If the actual filter were established, the ratio of the two spectra (output divided by input) would be considered the magnitude of a transfer function (generally called a “frequency response”).

i) I see some merit in your idea of a ‘relaxation response’ but am content to go with David unless he thinks your approach could be more accessible to the lay reader.

ii) You have spotted one of my favoured features of the climate system, namely the way the entire global air circulation reconfigures as necessary to maintain the thermal stability of the system. The QBO and the trade winds amongst other climate phenomena respond directly to solar induced changes in the gradient of tropopause height between equator and poles.

iii) Don’t worry about David’s reference to the nuclear winter aspect. Just substitute the negative phase of the PDO plus weaker cycle 20 and one doesn’t need it.

Well, I’m not sure “lay reader” is a valid means of choosing a model but if you want to look at it that way, warming a pot of water and watching it cool is fairly accessible.

Notch filters, phase shifts and non-causal responses less so.

The overall aim is very worthwhile, but I think the graphs I’ve produced show that you can get a lot nearer to the surface record, a lot more easily with a much simpler and physically meaningful model.

Uncle Occam would like that.

There are too many things as it stands that just look like an attempt to force a square peg into a round hole.

I think I’ve provided a way around those problems.

Hopefully Dr Evans will find it useful. Providing a non GHG model, even if not a perfect fit will be a good counter the false claims of IPCC that only exaggerated AGW fits the data.

(1) The lack of evidence that the OFT upon which the solar model rests has been comprehensively tested against/with (or incorporates) the ensemble of all peer reviewed modern papers regarding the comparison between TSI and global mean surface temperature (from at least 1800 to 2013). I particularly mean at the very least the following:

(2) I don’t understand why the ‘window’ used was 1850 to 1978. I don’t agree with the assertion (which was possibly not really relevant) that mean global surface temperatures can be reliably inferred from proxies back to 1613. I would put that limit somewhere between 1700 and 1800, i.e. the period in which the expansion of naval and ship-borne use of thermometers (as opposed to land-based) really took off. What good are proxies unless they are first compared with calibrating data? Note I have published extensively in isotope geochronology.

(3) The lack of evidence that the inferred TSI record (from 1700) published by Svalgaard has been taken into account. There are very good reasons why this is critical – (a) because the most recent cooling period which may well be a good analogy of where we are at right now (‘proxy’) is the Dalton Minimum, (b) because there are significant issues of doubt about past sun spot counting (as Svalgaard has identified) and (c) because Svalgaard’s TSI reconstruction is a lot more uniform than all the others.

“(c) because Svalgaard’s TSI reconstruction is a lot more uniform than all the others.”

Svalgaard’s TSI is based simply on SSN. Most of the others, like Lean et al, for some odd reason add back in an 11-year running mean of SSN underneath the actual SSN.

Not only is running mean an awful filter, this just seems like double counting to me. It looks like a crude attempt to coerce the TSI data into resembling the surface record.

Since the relaxation model I used has the basic low-pass quality of the integration, it ends up having a similar profile, without the seemingly spurious double counting of the Lean type TSI reconstructions.

I really don’t see the justification for manipulating TSI in that way. I’m not aware of any reason to add anything to SSN when reconstructing TSI. This seems to be Svalgaard’s basic line.

Yes Greg, I am starting to shudder every time I keep reading text references like:

‘…. the Lean 2000 TSI reconstruction back to 1610….’

This is what Prof. Leif Svalgaard now says about Lean et al. 2000 and also Wang et al. 2005:

‘In the past 5 years the ‘background’ has slowly disappeared on the radar screen. Even Judith Lean doubts her early work [she was also a co-author of Wang et al 2005]. Slide 15 of http://www.leif.org/research/Does%20The%20Sun%20Vary%20Enough.pdf shows one of Lean’s slide from the SORCE 2008 presentation. Note that she says “longer-term variations not yet detectable – … do they occur? ”

What has happened is that the Sun has had a very deep minimum comparable to those at the beginning of the 20th century. We would therefore expect that TSI now should also be comparable to TSI around 1900. Reconstructions such as Lean 2000, Wang 2005, and others, that show that TSI in 1900 was significantly lower than today are therefore likely in error.’

and Svalgaard’s 5/27/10 paper has the following conclusion:

• Variation in Solar Output is a Factor of Ten too Small to Account for The Little Ice Age,

• Unless the Climate is Extraordinarily Sensitive to Very Small Changes,

• But Then the Phase (‘Line-Up of Wiggles’) is Not Right

• Way Out: Sensitivity and Phases Vary Semi-Randomly on All Time Scales.

This is what Jeff Glassman says in response:

Svalgaard is quite right to belittle correlation by the “Line-up of Wiggles”. He could throw in visual comparisons of charts, like Lean’s beautiful map diagrams (Charts 14, 20), or of co-plots of traces (Charts 24, 27). The human eye is easily deceived. Besides, correlation is a mathematical operation leading to a lag-dependent number, hence a function. Correlation needs to be quantified, and neither Svalgaard nor Lean in these references computed the correlation between global average surface temperature and TSI. That is done in my SGW (and in David’s new .

The key point here is Svalgaard’s second bullet: “Unless the Climate is … Sensitive to … Small Changes”.

The first order effects are two. First, TSI is reduced by its reflection from reactive clouds, hence a powerful positive feedback to solar variations. Second is its absorption, transport, and release by the ocean in its surface layer and through the conveyor belt, made significant by the relative heat capacity of the ocean compared to the atmosphere or land surfaces. The hypothesis is that these effects are what make Earth especially sensitive to TSI variations, and shape the total response of Earth to certain waveforms present in TSI. My (Glassman’s) model satisfies Svalgaard’s criterion, despite Svalgaard’s belief that IPCC’s data are in some sense obsolete. It provides additional processes, specifically albedo and ocean absorption and circulation, for e.g. Lean to add as examples of empirical evidence. As shown using proper correlation techniques, Earth’s climate is twice as sensitive to the solar wind as it is to ENSO.’

The irony here of course is that Dr Jeff Glassman’s solar model preceded David’s efforts by a good 4 years but received no attention, and is not subsequently acknowledged, simply BECAUSE it included a fairly comprehensive and scientifically rigorous exploration of the 1st order effects.

Another irony is that all this stuff is also being conducted in the absence of the realization that the global climate system contains significant elements governed by non-equilibrium thermodynamics and that, moreover, there is a whole community of scientists who have been studying such effects from the viewpoint of the Maximum Entropy Production (MEP) principle – a principle which has already been elegantly used to rigorously explain some of what we observe on Earth, on the other planets and on some moons.

We recently reported on our spectral analysis of European temperatures [1], which shows that during the last centuries all climate changes were caused by periodic (i.e. natural) processes. Non-periodic processes, like warming through the monotonic increase of CO2 in the atmosphere, could cause at most 0.1° to 0.2° of warming for a doubling of the CO2 content, as is expected for 2100.

Fig. 1 (Fig. 6 of [1] ) shows the measured temperatures (blue) and the temperatures reconstructed using the 6 strongest frequency components (red) of the Fourier spectrum, indicating that the temperature history is determined by periodic processes only.

One sees from Fig. 1 that two cycles of periods 200+ years and ~65 years dominate the climate changes, the 200+ year cycle causing the largest part of the temperature increase since 1870.

The ~65 year cycle is the well-known, much studied, and well understood “Atlantic/Pacific oscillation” (AMO/PDO). It can be traced back for 1400 years. The AMO/PDO has no external forcing; it is “intrinsic dynamics”, an “oscillator”.

Although the spectral analysis of the historical instrumental temperature measurements [1] shows a strong 200+ year period, it cannot be inferred with certainty from these measurements, since only 240 years of measurement data are available. However, the temperatures obtained from the Spannagel stalagmite show this periodicity as the strongest climate variation by far since about 1100 AD.

The existence of this 200+ year periodicity has nonetheless been doubted, even though temperatures from the Spannagel stalagmite agree well with temperatures derived from North Atlantic sedimentation, and even though the solar “de Vries cycle”, which has this period length, has long been known as an essential factor determining the global climate.

A perfect confirmation for the existence and the dominant influence of the 200+ year cycle as found by us [1] is provided by a recent paper [2] which analyses solar activities for periodic processes.

The spectrum Fig. 2 (Fig. 1d of [2]) shows clearly a 208-year period as the strongest variation of the solar activity. Fig. 3 (Fig. 4 of [2]) gives us the solar activity of the past until today as well as the prediction for the coming 500 years. This prediction is possible due to the multi-periodic character of the activity.

The solar activity agrees well with the terrestrial climate. It clearly shows in particular all historic temperature minima. Thus the future temperatures can be predicted from the activities – as far as they are determined by the sun (the AMO/PDO is not determined by the sun).

The 200+ year period found here [2], as found by us [1], is presently at its maximum. Through its influence the temperature will decrease until 2100 to a value like that of the last “Little Ice Age” around 1870.

The wavelet analysis of the solar activity Fig. 4 (Fig. 1b of [2]) has interesting detail. In spite of its limited resolution it shows (as our analysis of the Spannagel stalagmite did) that the 200+ year period set in about 1000 years ago. This cycle appears, according to Fig. 4, regularly every 2500 years. (The causes for this 2500 year period are probably not understood.)

They should have detrended, applied a taper function and then done the DFT. That would have given some information about periodicities up to maybe 30-35 years reasonably accurately, which probably would have been interesting and might have allowed some speculation about the next decade or two.

They have a peak around 34 which other data also indicate.

That also requires some padding of the window to get around the quantisation of the DFT frequencies, which are harmonics of the length of the data.
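For concreteness, here is a minimal sketch of the workflow being suggested (detrend, taper, zero-pad, DFT), run on an invented placeholder series rather than the HISTALP data:

```python
# Hedged sketch of the suggested preprocessing: remove the linear trend, apply a
# taper, zero-pad so the frequency grid is finer than harmonics of the record
# length, then take the DFT. The input series is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1780, 2014)
temp = (0.004 * (years - years[0])                   # linear trend
        + 0.3 * np.sin(2 * np.pi * years / 34.0)     # ~34-year cycle
        + 0.2 * rng.standard_normal(len(years)))     # noise

# 1. Detrend (subtract a least-squares straight line).
x = temp - np.polyval(np.polyfit(years, temp, 1), years)
# 2. Taper to reduce leakage from the finite window.
x = x * np.hanning(len(x))
# 3. Zero-pad (here 4x) and transform.
nfft = 4 * len(x)
spec = np.abs(np.fft.rfft(x, n=nfft))
freqs = np.fft.rfftfreq(nfft, d=1.0)

print("strongest period (yr):", round(1.0 / freqs[1:][np.argmax(spec[1:])], 1))
```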

They also used “homogenised” HISTALP temperatures which are more biased adjustment [sic] than real data.

The raw data are a carefully guarded secret but can be seen to have very little long term rise until they are “corrected”.

>”That would have given some information about periodicities upto may 30-35 years reasonably accurately, which probably would have been interesting….”

More than interesting. They may have detected 11 yr periodicity. I was disappointed that Figure 3 M6 and SPA ended at 0.04 because there seemed to be enough sensitivity.

Figure 5 does show SPA periodicity from about 6 yrs though but nothing I can see at 11 except maybe around 1700 AD.

I’m convinced that David’s search for 11 yr periodicity has not been exhaustive and that he’s been looking in the wrong places. I’m sure more analysis of localized data such as M6 will identify an 11 yr signal eventually (but already found by C&T04 and Z&T).

>”David’s search for 11 yr periodicity has not been exhaustive and that he’s been looking in the wrong places. I’m sure more analysis of localized data such as M6 will identify an 11 yr signal eventually

Sure enough:

‘Periodicity analysis of NDVI time series and its relationship with climatic factors in the Heihe River Basin in China’

The air temperature time series data sets of each pixel of 9 meteorological stations in the Heihe River Basin are analyzed by the EMD. Table 3 shows the periodicity of air temperature from 1982 to 2009.

It is indicated that the EMD method can be effectively used to analyze the periodic variation of the time series NDVI data. All the time series of SINDVI, air temperature and precipitation have periodic variation from 1982 to 2009 in the Heihe River Basin. The temperature and precipitation are significant driving factor affecting the vegetation cover changes. Furthermore, the periodicity of temperature and precipitation may be affected by air-sea interaction and sunspot activity. The period of 2-3 years is the most elementary cycle of the meteorological element in the world. Period of 5-6, 10-11 and 15-16 years may be concerned with the laws of motions of heaven bodies and the medium-wave cycle of macula, they are all caused by solar activity [29, 30].

# # #

7 of 9 stations exhibit 10-11 year periodicity in this set of local data.

Unless you have a different copy of that paper, you seem to be referring to fig 6 and its caption, not fig 5.

Fig 6 is their RM6 Fourier model. Looking at their amplitude coefficients in the table, I don’t see any justification for their comment that this is “mainly due to ~65”. The biggest by far is the 254y component and most of the rest are about equal.

It is clear from fig 6 that it is doing nothing more than reproducing the beginning of the series as their ‘projection’.

If they continued, it would faithfully reproduce the dip, and the following peak of 1800 would be at 2050.

This has no worth at all as a predictive model. You could use a decent 30y low-pass filter (not a bloody running mean), shift the data forwards by 250 years, and the result would be near identical.

That would be laughable as a projection of future temps, but that’s what they’ve done, in a rather fancy way that seems to have fooled themselves more than anyone.

With the sea of garbage that is now polluting the peer reviewed literature this probably does not matter in itself but shows that things are not getting any better.

Perhaps the one ray of hope is that garbage is now getting published on all sides of the debate. Ten years ago that was not happening.

It’s not a 250 yr prediction (i.e. they are not making one, you are trying to turn it into something it isn’t). It’s only about 20 yrs of 65-yr component projected with next to no 250 yr component. Same problem around 1760 as at 2000 going back in time.

>”If they continued it would faithfully reproduce the the dip and the following peak of 1800 would be at 2050″

But they don’t. And they don’t suggest doing so either (you do). They look instead for other cues, including solar (see end of Discussion).

“Figure 4. Prediction of solar activity (Φ on the left y axis and total solar irradiance (TSI) on the right y axis) for the next 500 years using the same parameters as for the tests with data of the past.”

Lagged 14 years, say, for temperature, the trend is down from about now (2014) but with the 65 yr oscillation overlaid on it. I’m inclined to think there is a lag for ocean heat too, so that the temperature fall is not as abrupt or as soon.

In other words, not simply a repetition of Figure 6 for 2.5 times but close, except there’s 2 path options (Dark and Bright grey bands) and other factors to consider.

The important thing is that the 1960 to 2000 deVries peak is a standout.

It’s not a 250 yr prediction (i.e. they are not making one, you are trying to turn it into something it isn’t).

I’m not “turning” it into anything; I’m stating what they are doing.

I don’t see why arbitrarily appending a copy of 1750 onto the end of the available data segment is supposed to have any predictive ability whatsoever. It matters not whether they paste 20, 100 or 250 years; it makes no sense.

It’s only about 20 yrs of 65-yr component projected with next to no 250 yr component.

That’s incorrect; it is the full RM6 Fourier model and it will faithfully reproduce the beginning of the sample with all sub-32-year variability removed.

Not sure what “chose to run backwards in time” refers to, but you need to reverse the kernel if it’s asymmetric, which it will be in this case. This is not obvious since it is not necessary for symmetric kernels like low-pass filters.

I recall David Evans making some comment about it needing to “spin up”; that seems odd if it is done by convolution. That may indicate he is running with an incomplete buffer at the beginning and end. That would be a little surprising since he seems fairly familiar with all this.
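Both details are easy to see with a toy series (everything below is an assumption made up for the illustration, not David’s spreadsheet): an asymmetric causal kernel applied by convolution produces a “spin-up” stretch at the start where the history is incomplete, and running the same filter “backwards in time” is just filtering the reversed series and reversing the result.

```python
# Sketch of the two points above, using an asymmetric (causal) kernel.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200)                 # placeholder series
k = np.exp(-np.arange(30) / 5.0)             # asymmetric causal kernel, tau = 5
k /= k.sum()

forward = np.convolve(x, k)[:len(x)]         # forward-in-time (causal) filtering
valid = forward[len(k) - 1:]                 # drop the "spin-up" portion

backward = np.convolve(x[::-1], k)[:len(x)][::-1]   # the same kernel run backwards

print(len(valid), "of", len(x), "forward samples have a complete history")
# `forward` and `backward` differ most near the ends, where each direction's
# buffer is incomplete -- the edge effects being discussed here.
```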

Running the filter backwards in time makes sense for going as far back in history as possible, but for the best possible predictions of the future you have to filter forwards in time.

I think David is too hung up on the “notch” being physics; life would be much simpler if it were regarded just as a way of removing the 11-year oscillations, i.e. smoothing out the “rapid” fluctuations in TSI.

The relaxation model with a single time constant represents a trivial single-slab ocean model. This is obviously a naive simplification but it is already a good start.

Looking at the graph suggests the system is less sensitive to faster changes of the 11y cycle.

My guess is that this comes from strong -ve feedbacks to surface warming in the tropics ( where most of the heat input to the system is ) and a deeper penetration of UV to layers providing longer time constant ( larger thermal mass ) that are not attenuated by the surface feedbacks.

L Svalgaard made the observation over at WUWT that solar activity increased about 300y ago and the earth has been warming since.

This simple model, even with a relatively short tau of 5y, provides such long-term warming directly from SSN-based TSI.

“Looking at the graph suggests the system is less sensitive to faster changes of the 11y cycle.
My guess is that this comes from strong -ve feedbacks to surface warming in the tropics ( where most of the heat input to the system is ) and a deeper penetration of UV to layers providing longer time constant ( larger thermal mass ) that are not attenuated by the surface feedbacks. ”

If cloudiness decreases then bear in mind that most solar input at many wavelengths gets past the evaporating layer to varying depths so it doesn’t have to be just UV.

The strong negative feedback would be ocean absorption until the additional energy retained circulates around the ocean basins before returning to the air. The oceans smear the thermal response over that period of 3 to 15 years that David mentioned.

I agree that changes in evaporation, convection and tropical storms do have a cooling effect as per Willis’s hypothesis (though the concept should be global rather than tropical).

However, radiative loss to space from condensate, GHGs or particulates higher up is only part of the picture.

Uplift involves conversion of kinetic energy to gravitational potential energy (GPE) with height which involves cooling. Energy in the form of GPE does not radiate. The higher the radiating molecule the colder it will be and the less it will radiate to space but the more GPE it will carry.

GPE is then returned to kinetic energy on the descent which is what really keeps the surface warmer than S-B predicts.

Thus the extra solar energy is not lost as fast as you suggest and it does circulate through all the ocean basins.

Only a portion is lost to space via radiation from condensate, GHGs or particulates, the system recycles the rest repeatedly through the adiabatic convective cycle for 3 to 15 years until it eventually escapes.

“Look at the link.ex-tropics recover to their original temperature, ie despite a reduction in energy input for several years they do not end up cooler.
Tropics are even more impressive , they even manage to recover the number of degree.days.
This means that they not only restore thier temperature but make up for the time they were cooler with an equal period of being warmer.
I suspect the former is largely helped by the ocean gyres, importing cooler water in the east and exporting warmer water in the west. ”

That supports my point doesn’t it? The ocean basins just swap energy around.

The system takes a long time to change from the basic equilibrium and volcanic effects are simply not long lasting or widespread enough. Except maybe for a supervolcanic event.

Even a single solar cycle disappears into the noise and a change in the proportion of TSI getting into the oceans and making a difference to atmospheric temperature takes multiple solar cycles.

If your point, that a fast tropical response is enough to negate changes in the proportion of TSI reaching the oceans, were correct, then there would be more clouds, not fewer, and more energy immediately escaping to space, so that the 11 year delay followed by warming of the atmosphere would not be observed.

In reality, less clouds lead to more energy into the oceans, the tropical response is not enough to negate it, the energy retained then circulates around the global oceans which creates the observed delay.

Not discounting the very real possibility, in view of UKMO EN3, that upper ocean heat peaked around 11 years earlier than Josh “too cold” Willis at NODC estimates (i.e. 2003ish, confused by ARGO start).

That would place the OHC inflexion to peak at about a 3 year lag behind the atmospheric inflexion around 2000.

So instead of atmospheric temperature trending down from now in 2014 (14 year solar-atmosphere lag in overall peak terms), we will have to wait another 3 years to 2017 (2000 + 14 + 3) on the basis of UKMO EN3 OHC.

Cheer up Richard. You clearly make an honest effort to get across a wide cross section of peer-reviewed literature at the details level. What more can be asked? This is more than can be said even for most of the would-be crowd-sourcers – just check out their web sites. Like the warmista blogosphere e.g. Mann, Schmidt, so too does the sceptical blogosphere commonly exhibit crass ‘Club of Dome’ behaviour patterns (from the tediously long lived e.g. Miskolczi to the transient e.g. Salby). Wilful ignorance of good science, generation of outstanding ironies and marginalisation/ignoring of real expertise or simple brilliance e.g. Glassman, Montford is common. Just don’t let it drive you….. errr, wilde.

Those papers seem to suggest a meridional shift in circulation patterns to a more zonal form in response to the 11 year solar signal, rather than a change in temperature.

That accords with my New Climate Model.

It also accords with David’s model in that the meridional shift is instead of a temperature signal and therefore forms part of the delay mechanism.

“studies point to the existence of a 10- to 12-year oscillation associated with changes in the solar radiation. Because the potential signal associated with the 11-year solar cycle is likely small in amplitude and varies over a relatively long period of time compared with other climate signals with larger variance, it is difficult to detect and even more difficult to prove as being statistically significant”

So don’t, err….. short-change other ideas, such as mine and David’s, that recognise the absence or near absence of a temperature signal in response to the 11 year cycle. It is the meridional shifting of the global atmosphere and the consequent delay that matters.

Furthermore, the proper overall solution in my view is to combine the bottom up oceanic process mentioned in those papers with the top down ozone process proposed elsewhere.

My contention is that climate change is a consequence of the interaction between the top-down and bottom-up mechanisms, with the ultimate outcome being a stable surface temperature at the expense of global air circulation changes, which we perceive as climate change.

Our emissions of CO2 would be compensated for in exactly the same way but the circulation change would be indiscernible compared to that wrought by natural solar (top down) and ocean (bottom up) variability.

I am offering a synthesis of all the competing theories which is why my New Climate Model differs from all others.

“All 3 have received a solid ignoring. How many more before tipping point?”

This thread is to discuss David Evan’s model. If you want to discuss presence or not of 11y there were several discussions on just that topic at WUWT this week.

The bottom line is that most papers finding it cherry-pick post-1950 data and ignore the earlier period where it does not fit, or were just poorly done. No one came up with any paper showing convincing evidence of an 11y cycle.

1) His N-D prediction turns down markedly in 2014 according to Archibald below (I don’t think so – still might though).

2) He has an 11 year solar-temperature lag (I don’t think so – 14 years based on the start of the “pause”).

3) He neglects upper OHC because he says a low-pass filter accounts for it (I don’t think so based on upper ocean OHC peak).

The following is nominal, rough, and simply to make points-of-distinction in approach.

To determine lag, start with a 14 year solar-atmosphere lag based on the start of the “pause” (2000) lagging solar peak (1986).

Then add the 14 year lag to the start of the “pause” (2000) because 2000 is roughly the end of the solar peak range.

Then add another 4 years lag (based on UKMO EN3 upper OHC peak 2004) or another 14 years lag (based on NODC upper OHC peak 2014) to the initial 14 year solar-temperature lag (1986-2000) and you’ve got a competing solar delay prediction to that shown by Archibald:

The approx 65-yr “cycle” (periodicity identified in literature) in temperature must also be overlaid on the curve produced by transfer from solar => ocean => atmospheric temperature. That will alter the dates of downturn above considerably but the future “cycle” changes of phase are unknown.

The Earth with its oceans constitutes a very complex system with multiple interacting components. The timing of the final system response from any given change is itself highly variable and is arguably never achieved because whist the system is trying to accommodate one change another change occurs.

Another starting point, rather than 1986, is the start of the highest solar levels around 1960. That would indicate a delay from the leading edge of solar to the leading edge of temperature of 40 years. If the trailing edge of solar is 2000 then downturn could be expected around 2040.

There’s a number of approaches and this is my second, but just to demonstrate that there are viable alternatives to David’s.
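Purely as a check on the arithmetic of the two alternatives sketched above (the round figures are the ones given in these comments, nothing more):

```python
# Worked sums for the two lag chains above; all dates and lags are the rough
# figures quoted in the comments, used here only to make the arithmetic explicit.
solar_peak, pause_start = 1986, 2000

# Approach 1: 14-yr solar-atmosphere lag, plus a further ocean lag.
atmos_lag = pause_start - solar_peak              # 14 yr
downturn_ukmo = pause_start + atmos_lag + 4       # 2018 (UKMO EN3 upper-OHC peak 2004)
downturn_nodc = pause_start + atmos_lag + 14      # 2028 (NODC upper-OHC peak 2014)

# Approach 2: highest solar levels from ~1960, ~40-yr lag, trailing edge 2000.
downturn_alt = 2000 + 40                          # ~2040

print(downturn_ukmo, downturn_nodc, downturn_alt)  # 2018 2028 2040
```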

The BEST report (and other surface temperature modelling) does not take into account the passage of solar-induced orbital ‘Dry’ Cycles, as discovered and predicted by Alex S. Gaddes in his work ‘Tomorrow’s Weather’ (1990). These ‘Dry’ Cycles are longitudinal in scope and move at thirty degrees per Earth solar month, with the westward solar orbit of the Earth’s magnetic field. (Note: prevailing weather moves from west to east, with the Earth’s axial spin.) As these cycles pass over the various surface temperature stations, they create higher temperatures (‘Dry’ Cycle) or lower temperatures (‘Wet’/Normal Period between the ‘Dry’ Cycles); i.e. temperature is governed predominantly by precipitation.

In the prediction of these ‘Dry’ Cycles, it is of great import that the ‘Sunspot Cycle’ number is accurate. It is not ‘around ten years’. It is not ‘11 (or 11.1) years’.
The number calculated by Alex S. Gaddes is 11.048128 years.

The current ‘Dry’ Cycle started around 110 degrees east of the Prime longitude (circa Beijing) in mid-February 2014. It will reach Australia in early January 2015 – and last up to five years (including the influence of a Lunar Metonic Cycle due in 2016).

The ‘Factor X’ would seem to consist of ‘Solar Particles’ emanating from the 27-day rotation-rate latitude of the Sun (the Sunspot Latitude). Alex S. Gaddes suspected that these particles may be neutrinos.

These particles seem to effect a break up of the Jet Stream Cloud, and as far as Australia is concerned, the vanguard of the ‘Dry’ Cycles also forces the Southern Lows further South into the Southern Ocean.

It is noted that these ‘Dry’ Cycles (and their ‘Wet/Normal Period counterparts,) are Longitudinal in nature, and thus affect the Arctic and Antarctic simultaneously.

Multiplied by 27 (Ratio, No. 3 Constant) = 502.47 Years (Full Tree-ring Cycle: 3 x 167.49 Year Tree-ring Sub-cycles. The 167.49 Year Sub-cycle is in turn made up of 9 x 18.61 Year Metonic Cycles of the Moon.)

The advantage I have Greg, is that I know the forecast ‘Dry’ Cycles have arrived in the past exactly on cue – and I have observed these arrivals (and subsequent effects,) over some years. I have added these observations as an addendum to the original work.
The numbers represent derivatives of actual known scientific ‘constants.’
The table I have quoted previously is directly from the original publication.
If you wish to obtain a copy of the updated ‘Tomorrow’s Weather’,(including the original publication,) send me an email address and I will send you the work. You may then criticise and/or disseminate it as you see fit.
As for the interaction of Solar particles with the Jet Stream, I direct you to the work of Svensmark, (among others.)

I do not think Svensmark currently has it right – but his line of approach is encouragingly indicative of a possible ‘alteration’ of the Jet Stream. Perhaps the definitive connection to neutrinos/‘Solar Particles’ is still to be made by him, or by others at CERN (or elsewhere). At the moment I don’t know the precise ‘mechanism’ either – but the ‘Dry’ Cycle forecast method contained in ‘Tomorrow’s Weather’ does provide exact outcomes in the prediction of drought conditions planet-wide. The ‘W’ (or Weather Factor) emanating from the Solar ‘Sunspot Latitude’ (postulated by Alex S. Gaddes) may indeed correlate in some way with the ‘Factor X’ postulated in David Evans’ paper.

The 27-day Rotation Rate is used because it represents the latitude of the Sun that ‘carries’ the Sunspots, and hence initiates the journey of the ‘Solar Influence’ (W Factor) that is manifest as a ‘Dry’ Cycle on Earth. The Earth’s Rotation Rate (axial spin) is also used. The accepted Rotation Rate of the Sun is 26.75 days at the equator. “According to Strahler (Ref. No. 17), the rotation rate of the Sun differentiates at a slower rate, from lower to higher latitudes.”

“It seems to me that we ought to be investigating the latitude of the sun which is rotating at the 27 day rate.” (pp 19)

The link to climate is explained by the fact that the resulting ‘Dry’Cycle forecasts derived from these numbers have proven to be extremely accurate.

I realise it is a many layered and perhaps difficult work. If you wish to carefully read and make an effort to understand the principles, I am sure you will find it rewarding – otherwise I may not be able to assist you further. In this circumstance, I invite you to await (with the rest of us,) the ‘Dry’ Cycle that will arrive over Australia in early January 2015 – and herald a Dry Period lasting up to Five Years. (see Appendix 2a. pp 104-106)

That does not answer my simple question: I know it may or may not be similar to some solar rotational period, but why x27 ? That is not explained.

“I realise it is a many layered and perhaps difficult work.”

The only thing that is difficult is that there is no explanation of why all these multiplications and divisions. That makes it numerology, not science. I read it with the intention of understanding its principles but it does not seem to have any.

I asked you to point out what I’ve missed and you fail to reply to that, which is a shame, I thought there may be more to it.

Ian Wilson has published on what appear to be standing waves in SH pressure. I thought this may add something.

In particular, the inclination of the crescent moon and its claimed link to precipitation, if that is accurate. How does this relate to the relative positions of earth, moon, sun, declination angles etc.?

Read the work.
In Chapter 1 you will find exposition on the development of a Gravitational Astronomical and Ratios Principle. The Lunar Metonic Cycle, declination angles etc. are discussed and outlined in Chapter 2 (Fig. 7) – and so on. The multipliers of 27 explain the relationship between Solar and Earth rotation, a ratio of 1:27 (if the 27-day Sunspot Latitude rotation rate is used; the Earth rotates once each day). Whatever the ‘W’ Factor consists of, and its subsequent effect on climate, is dependent on both. If you seriously think Rotation Rates are merely ‘numerology’, you have not grasped the basic tenet of the work.
I can assist you no further until you have fully considered the contents.

[...] “all those bomb tests must have done something” are JN (Jo Nova) / DE (David Evans) in their BIG NEWS series. They’re currently bogged down fighting off LS over TSI, but when that’s beaten to [...]