How constant is the “solar constant?”

The IPCC lowered their estimate of the impact of solar variability on the Earth’s climate from the already low value of 0.12 W/m2 (Watts per square meter) given in their fourth report (AR4) to a still lower value of 0.05 W/m2 in the 2013 fifth report (AR5); the new value is illustrated in Figure 1. These are long-term values, estimated for the 261-year period 1750-2011, and they apply to the “baseline” of the Schwabe ~11-year solar (or sunspot) cycle, which we will simply call the “solar cycle” in this post. The baseline of the solar cycle is the issue, since the peaks are known to vary. The Sun’s output (total solar irradiance or “TSI”) is known to vary at all time scales (Kopp 2016); the question is by how much. The magnitude of short-term changes in solar output, those lasting less than 11 years, is known relatively accurately, to better than ±0.1 W/m2. But the magnitude of solar variability over longer periods of time is poorly understood. Yet small changes in solar output over long periods of time can affect the Earth’s climate in significant ways (Eddy 1976) and (Eddy 2009). In John Eddy’s classic 1976 paper on the Maunder Minimum, he writes:

“The reality of the Maunder Minimum and its implications of basic solar change may be but one more defeat in our long and losing battle to keep the sun perfect, or, if not perfect, constant, and if inconstant, regular. Why we think the sun should be any of these when other stars are not is more a question for social than for physical science.” (Eddy 1976)

Using recent satellite data, it has been determined that the Sun puts out ~1361 W/m2, measured at 1 AU, the average distance of the Earth’s orbit from the Sun. Half of the Earth’s surface is always in the dark and sunlight hits most latitudes at an angle, so to get the average flux over the whole globe we divide by 4 (a sphere has four times the area of the disk that intercepts the sunlight), giving ~340 W/m2. Then, after subtracting the energy reflected by the atmosphere and the surface, we find the average radiation absorbed is about 240 W/m2.
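The arithmetic above can be sketched in a few lines. This is a minimal check using the round numbers from the text; the albedo of 0.294 is an assumed value, chosen only because it reproduces the ~240 W/m2 quoted here.

```python
# A minimal check of the energy-budget arithmetic in the text.
TSI = 1361.0            # total solar irradiance at 1 AU, W/m^2
incident = TSI / 4      # a sphere has 4x the area of the disk intercepting sunlight
albedo = 0.294          # assumed planetary reflectivity (not from the text)
absorbed = incident * (1 - albedo)
print(round(incident))  # 340 W/m^2
print(round(absorbed))  # 240 W/m^2
```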

The Earth warms when more energy is added to the climate system; the added energy is called a climate “forcing” by the IPCC. The total anthropogenic forcing over the industrial era (1750 to 2011, 261 years), according to the IPCC (IPCC 2013, 661), is about 2.3 (1.1-3.3) W/m2, or about 1.0% of 240 W/m2. Also on page 661, the IPCC estimates the total forcing due to greenhouse gases in 2011 to be 2.83 (2.54-3.12) W/m2. The forcing for CO2 alone is 1.82 (1.63-2.01) W/m2. They further estimate, using the same methods, that the growth in CO2-caused forcing from 2001 to 2011 was 0.27 W/m2, or 0.027 W/m2/year. These are a lot of numbers, so we’ve summarized them in Table 1 below.

IPCC Anthropogenic Forcing Estimates (AR5, page 661)

Time period   Cause    Years   Total Forcing   Forcing per year   Percent of 240 W/m2
                               (W/m2)          (W/m2/yr)
1750-2011     Humans   261     2.3             0.0088             0.96%
1750-2011     CO2      261     1.82            0.0070             0.76%
2001-2011     CO2      10      0.27            0.0270             0.11%

Table 1. Anthropogenic forcing as estimated by (IPCC 2013, 661).
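The derived columns in Table 1 can be recomputed directly from the IPCC totals; a quick sketch:

```python
# Recomputing the Table 1 columns from the IPCC AR5 totals: per-year
# forcing is total/years, and the percentage is relative to the
# 240 W/m^2 of absorbed solar radiation.
rows = [
    ("1750-2011", "Humans", 261, 2.30),
    ("1750-2011", "CO2",    261, 1.82),
    ("2001-2011", "CO2",     10, 0.27),
]
for period, cause, years, total in rows:
    per_year = total / years          # W/m^2/yr
    pct = 100 * total / 240           # percent of 240 W/m^2
    print(period, cause, round(per_year, 4), round(pct, 2))
```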

The IPCC’s assumed list of radiative forcing agents and their total forcing from 1750 to 2011 are shown in Figure 1. Next to the IPCC table, I’ve shown the Central England temperature (CET) record, which is the only complete instrumental temperature record that goes back that far in time. The CET is mostly flat until the end of the Little Ice Age and then, after a dip around the time of the Krakatoa volcanic eruption in 1883, it shows warming to modern times.

Figure 1. On the left is the IPCC list of radiative forcing agents from page 697 of AR5 WG1 (IPCC 2013). Notice they assume that the solar irradiance forcing is very small; in this post we examine this assumption. On the right is the Central England temperature record (CET), the only instrumental temperature record that goes back to 1750. The CET data source is the UK Met Office.

If the Sun were to supply all 2.3 W/m2 of the forcing described in Figure 1, but as a steady change over 261 years, the change each year would have to average ~0.0088 W/m2/year. So, assuming constant albedo (reflectivity), the change in solar output would have to average 4×0.0088, or 0.035 W/m2/year. As noted above, we multiply by four because the Earth is a sphere and half of it is in the dark. This is a total increase in solar output of 9.2 W/m2 over 261 years (1750-2011), a change of 0.7%. Some might say we should start at 1951, since that is the agreed date when CO2 emissions became significant (IPCC 2013, 698-700). But I started at 1750 to cover the “industrial era” as defined by the IPCC; the choice is somewhat arbitrary as long as we go back far enough to precede any significant human CO2 emissions. The year 1750 is also useful because it is near the end of the worst part of the Little Ice Age, the coldest period in the last 11,700 years (the Holocene). Do we know the solar output over the past 261 years accurately enough to say the Sun could not have changed 9.2 W/m2, or some large portion of that amount? In other words, is the IPCC assumption that solar variability has a very small influence on climate valid?
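The 9.2 W/m2 figure follows step by step from the numbers already given:

```python
# The 9.2 W/m^2 requirement, step by step, using numbers from the text.
forcing_total = 2.3              # W/m^2, total anthropogenic forcing 1750-2011
years = 2011 - 1750              # 261
per_year = forcing_total / years # ~0.0088 W/m^2/yr
tsi_per_year = 4 * per_year      # x4 for the sphere/disk geometry, ~0.035
tsi_total = tsi_per_year * years # = 4 x 2.3 = 9.2 W/m^2
print(round(tsi_total, 1))                 # 9.2
print(f"{100 * tsi_total / 1361:.1f}%")    # 0.7% of ~1361 W/m^2
```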

How accurate are our measurements of Solar output?

The solar cycle variation of TSI is about 1.5 W/m2 or 0.1% from peak to trough (~5-7 years) or 0.25 W/m2/year and 0.02%/year. These changes are much larger than the longer-term changes of 0.0088 W/m2/year computed above. So, simply because we can see the ~11-year solar cycle does not necessarily mean we can see a longer-term trend that could have caused current warming. Satellite TSI measurement instruments deteriorate under the intense sunlight they measure and they lose accuracy with time. We have satellite measurements of varying quality over much of the last four solar cycles. The raw data are plotted in Figure 2 and the critical ACRIM gap is highlighted in yellow. Because the Nimbus7/ERB (Earth Radiation Budget) and ERBS/ERBE instruments are much less precise and accurate than the ACRIM (Active Cavity Radiometer Irradiance Monitor) instruments, filling this gap is the most important problem in making a long-term TSI composite (Scafetta and Willson 2014).

Figure 2. Raw satellite total solar irradiance (TSI) measurements. The ACRIM gap is identified in yellow. The trend of the NIMBUS7/ERB instrument in the ACRIM gap is emphasized with a red line. Source: (Soon, Connolly and Connolly 2015).

As Figure 2 makes clear, calibration problems have caused the satellites to measure widely different values of TSI: the solar cycle minima range from 1371 W/m2 down to 1360.5 W/m2. Currently, the correct minimum is thought to be around 1360.5 W/m2, but just a few years ago it was thought to be ~1364 W/m2 (Haigh 2011). After calibration corrections have been applied, each satellite produces an internally consistent record, but the records are not consistent with one another, and no single record covers two or more complete solar cycles. This makes the determination of long-term trends problematic.

There have been three serious attempts to build single composite TSI records from the raw data displayed in Figure 2. They are shown in Figure 3.

Figure 3. Three common composites of the data shown in Figure 2. The ACRIM gap is identified in yellow. The PMOD composite is by the Physikalisch-Meteorologisches Observatorium Davos (Frohlich 2006), also the source of the figure (pmodwrc.ch); the ACRIM composite is from the ACRIM team (Scafetta and Willson 2014); the IRMB composite is from the Royal Meteorological Institute of Belgium (Dewitte, et al. 2004).

The ACRIM and IRMB composites show an increasing trend during the ACRIM gap and the PMOD composite shows a declining trend. This figure was made several years ago by the PMOD team when the baseline of the TSI trend was more uncertain, so the IRMB and PMOD composites are shown with a ~1365 W/m2 base and the ACRIM composite is shown with the ~1360.5 W/m2 baseline that is currently preferred. The important point, shown in Figure 3, is that the long-term PMOD trend is down, the ACRIM trend is up to the cycle 22-23 minimum (~1996) and then down to the cycle 23-24 minimum (~2009), and the IRMB trend is up. Thus, the direction of the long-term trend is unclear. Figure 4, from (Scafetta and Willson 2014), shows the details of the PMOD and ACRIM trends.

Figure 4. The ACRIM and PMOD composites showing opposing slopes in the solar minima and in the ACRIM gap, highlighted in yellow. Source: (Scafetta and Willson 2014).

In Figure 4 we see the differences more clearly. The ACRIM TSI trend from the solar cycle low between 21 and 22 to 22-23 is +0.5 W/m2 in 10 years or 0.05 W/m2 per year, then the trend is down to the cycle 23-24 minimum. The PMOD composite is steadily down about 0.14 W/m2 in 22 years (1987-2009) or 0.006 W/m2/year. The difference in these trends is 0.056 W/m2/year. If this is extrapolated linearly for 261 years, the difference is 14.6 W/m2, more than the 9.2 W/m2 required to cause the recent warming.
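The slope arithmetic can be laid out explicitly. These are the illustrative values read off Figure 4 in the text, not fitted values:

```python
# Divergence of the ACRIM and PMOD minima trends, extrapolated as in the text.
acrim_slope = 0.5 / 10                           # +0.05 W/m^2/yr between minima
pmod_slope = -0.14 / 22                          # ~ -0.006 W/m^2/yr, 1987-2009
difference = round(acrim_slope - pmod_slope, 3)  # 0.056 W/m^2/yr
print(round(difference * 261, 1))                # 14.6 W/m^2 over 261 years
```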

NOAA believes the SORCE satellite TIM (Total Irradiance Monitor) instrument is accurate and accepts the ~1360.5 W/m2 TSI baseline it establishes. They have normalized the three composites discussed above to this baseline. After normalizing, they averaged the three composites to produce the record shown in Figure 5. The SORCE/TIM record starts in February 2003, so the average after that date is replaced by the SORCE/TIM record. Averaging three records with differing trends creates a meaningless trend, so this TSI record is of little use for our purposes; but NOAA also constructs an uncertainty function (something notably missing for the individual composites) using the differences between the composites and the estimated instrument error. The NOAA composite is shown in Figure 5 and the computed uncertainty is shown in Figure 6; both figures show the raw data and a 100-day running average.

Figure 5. The NOAA/NCEI composite. It is the average of the three composites shown above, with the data after Feb. 2003 replaced by the SORCE/TIM data. The ACRIM gap is indicated in yellow. The low points between solar cycles 21-22 and 22-23 are marked on the plot. Data source: NOAA/NCEI.

In the NOAA composite (Figure 5) the increase in the solar minimum value from the solar cycle 21-22 minimum to the solar cycle 22-23 minimum appears, just as it does in the IRMB and the ACRIM composites. The solar cycle 23-24 minimum drops down to the level of the 21-22 minimum, but this is a foregone conclusion, since the earlier records are normalized to this value in the SORCE/TIM record. In fact, given that everything is normalized to TIM, we have only two points in this whole composite that we can try to use to determine a long-term trend, the 21-22 minimum and the 22-23 minimum; the peaks cannot be used since they are known to be variable (Kopp 2016). Thus, we don’t know very much.

Figure 6. NOAA TSI uncertainty, computed from the difference between the ACRIM and PMOD values, after normalization to the SORCE/TIM values, plus an assumed 0.5 W/m2 uncertainty in the SORCE/TIM absolute scale until the TIM data and uncertainties become available after Feb. 2003. A rapid increase in the computed TIM error occurs late in 2012. The ACRIM gap is highlighted in yellow. The low points between solar cycles 21-22 and 22-23 are marked on the plot. Data source: NOAA/NCEI.

Greg Kopp has calculated that in order to observe a long-term change in solar output of 1.4 W/m2 per century, or about 3.5 W/m2 since 1750, which is 38% of the total 9.2 W/m2 required to explain modern warming, non-overlapping instruments would need an accuracy of ±0.136 W/m2 and 10 years of measurements to even see the change (Kopp 2016). As Figure 6 makes clear, the SORCE/TIM instrument, the best instrument in orbit today, has an uncertainty at least 3.5 times the level required to detect such a trend, and it decayed rapidly after 10 years.
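Kopp's detectability numbers can be restated in two lines. The 3.5 W/m2 and ±0.136 W/m2 figures are from the text; the 0.48 W/m2 is the lower end of the NOAA composite uncertainty quoted in the text:

```python
# Kopp's (2016) detectability figures restated.
required = 9.2                              # W/m^2 for a solar-only explanation
kopp_change = 3.5                           # W/m^2, ~1.4 W/m^2/century since 1750
print(round(100 * kopp_change / required))  # 38% of the requirement
print(round(0.48 / 0.136, 1))               # TIM uncertainty is ~3.5x too large
```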

Discussion

The estimated uncertainty in the NOAA satellite composite is well over 0.5 W/m2, and it increases as a function of time before 2003. The three original composites come with no estimated uncertainty; their accuracy, or lack of it, is unknown. NOAA simply used the differences between the composites to estimate the uncertainty. This makes estimating a trend from the satellite data problematic (Haigh 2011). To look at the longer term, we must rely on solar proxies, such as sunspot counts and proxies of the strength of the solar magnetic field. The relationship of the proxies to solar output is not known and can only be estimated by correlating the proxies with satellite data. Professor Joanna Haigh summarizes this in the following way:

“To assess the potential influence of the Sun on the climate on longer timescales it is necessary to know TSI further back into the past than is available from the satellite data … The proxy indicators of solar variability discussed above have therefore been used to produce an estimate of its temporal variation over the past centuries. There are several different approaches taken to ‘reconstructing’ the TSI, all employing a substantial degree of empiricism and in all of which the proxy data (such as sunspot number) are calibrated against the recent satellite TSI measurements, despite the problems with this data outlined above.” (Haigh 2011)

The uncertainty in these proxy estimates cannot be quantified, but it must be greater than the potential error (uncertainty) in the satellite data, which varies from 0.48 W/m2 to over 0.8 W/m2. Let’s return to the slopes discussed above and illustrated in Figure 4. If we combine the opposing slopes of the ACRIM and PMOD composites, the difference is 0.056 W/m2/year. The NOAA estimated uncertainty (Figure 6) in the cycle 21-22 minimum is over 0.7 W/m2 and in the 22-23 minimum it is over 0.6 W/m2. If this uncertainty is considered, the extrapolated long-term linear trend could be as high as 0.13 to 0.18 W/m2/year. Over 261 years, these values could add up to 34 to 47 W/m2. Both values are much higher than the 9.2 W/m2 required to account for the roughly one degree of warming observed over the past 261 years (see Figure 1 and the discussion).
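The extrapolation of those bounds is straightforward arithmetic; the 0.13-0.18 W/m2/year range itself is taken from the text:

```python
# Extrapolating the uncertainty-inflated trend bounds over 261 years.
for slope in (0.13, 0.18):          # W/m^2/yr, from the text
    print(round(slope * 261))       # 34 and 47 W/m^2
```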

Given the way the composites have been generated, we only have two points to work with in determining a long-term solar trend, the points are the lows of solar cycles 21-22 and 22-23. Everything has been adjusted to the low of solar cycle 23-24, so it isn’t usable. With two points all you get is a line and a linear change is unlikely for a dynamo. Basically, the satellite data is not enough.

We have no opinion on the relative merits of the three composite TSI records discussed. There are, for the most part, logical reasons for all the corrections made in each composite. The problem is, they are all different and have opposing trends. Each composite selects different portions of the available satellite records to use and applies different corrections. The resulting, different long-term trends are simply a reflection of the component instrument instabilities (Kopp 2016). For discussions of the merits of the ACRIM composite see (Scafetta and Willson 2014), for the PMOD composite see (Frohlich 2006), for the IRMB composite see (Dewitte, et al. 2004). There are arguments for and against each composite. There are also numerous papers discussing how to extend the TSI record into the past using solar proxies. For a discussion of some of the most commonly used TSI reconstructions of the past 200 years see (Soon, Connolly and Connolly 2015). The problem with the proxies is that the precise relationship they have with TSI or solar output in general is unknown and must be based on correlations with the, unfortunately, flawed satellite records.

Whether one matches a proxy to the ACRIM or PMOD composite can make a great deal of difference in the resulting long-term TSI record, as discussed in (Herrera, Mendoza and Herrera 2015). As the paper makes clear, reasonable proxy correlations to the ACRIM and PMOD composites can result in computed values of TSI in the 1700s that differ by more than 2 W/m2. Kopp discusses this problem in more detail in his 2016 Journal of Space Weather and Space Climate article:

“TSI variability on secular timescales is currently not definitively known from the space-borne measurements because this record does not span the desired multi-decadal to centennial time range with the needed absolute accuracies, and composites based on the measurements are ambiguous over the time range they do cover due to high stability-uncertainties of the contributing instruments.” (Kopp 2016)

Kopp also provides us with the following plot (Figure 7) comparing different historical TSI reconstructions. The red NRLTSI2 reconstruction is the one that will be used for the upcoming IPCC report and in CMIP6 (Coupled Model Inter-comparison Project Phase 6). The TSI reconstructions plotted in Figure 7 are all empirical and make use of various proxies of solar activity (but mainly sunspot counts) and their assumed relationship to total solar output. Figure 7 illustrates some of the uncertainty in these assumptions.

Figure 7. Various recent published TSI reconstructions. The NRLTSI2 reconstruction will be used for the upcoming IPCC report and CMIP6. There is a great deal of spread during the Maunder Minimum, over 2 W/m2, and the long-term trends are very different. The figure is modified after one in (Kopp 2016).

In answer to the question posed at the beginning of the post: no, we have not measured the solar output accurately enough, over a long enough period, to definitively say solar variability could not have caused all, or a significant portion, of the warming observed over the past 261 years. The most extreme reconstruction in Figure 7 (Lean 2000) suggests the Sun could have caused 25% of the warming, and this is without considering the considerable uncertainty in the TSI estimate. There are even larger published TSI differences from the modern day, up to 5 W/m2 (Shapiro, et al. 2011), (Soon, Connolly and Connolly 2015) and (Schmidt, et al. 2012). We certainly have not proven that solar variability is the cause of all or even a large portion of the warming, only that we cannot exclude it as a possible cause, as the IPCC appears to have done.


Excellent piece, as usual. The thing that one appreciates and welcomes is the author’s attention to defining an hypothesis clearly, and then marshalling the evidence, while being careful to work through exactly what the logical relationship is between evidence and the hypothesis.

This is so very rare in popularizations of climate issues. Normally what we find is emotional tirades about, for instance, the sun or the MWP, without a careful account of what the hypothesis is, and what sort of evidence we need, and how what we have bears on the hypothesis, and what the alternative explanations might be, and how they in turn can be assessed. Well done again.

I’m no clearer having read this what exactly the effects of the sun have been and are. But at least I do know why this is beyond being answered satisfactorily by the evidence we have now, and what this evidence logically considered can and cannot show. That’s science for you. Some things are just not known with precision at this point. If you cannot know them, all the same you greatly gain from knowing why you can’t.

Science is compelled to be a prophesier. We all demand it. Who will listen to your conclusion, “We do not know”? Science can observe the present and try to measure it. We can record our observations and measurements. We can compare our past and present observations. Then we can make guesses about what we didn’t observe, or guesses about what we haven’t observed. But that is the limit of the possible. We can only hope to be educated guessers.

But to believe science can reveal the past and proclaim the future – that is imbuing Science with religious authority. Our current observations and measurements of the TSI are insufficient to draw meaningful conclusions about past and future variability. But as you note the truth doesn’t stop the IPCC from revealing to us the secret knowledge.

Actually, the same could be said for ALL the data in climate science. None of it is of sufficient quality or of sufficient length to even begin to support any of the claims made by the CAGW meme.

In my undergraduate days I was told not to ever touch the collected data. It is what it is. To interpolate or extend beyond the data would have caused an immediate failure of the lab. All that could be concluded was that in the regime observed a certain relationship could be discerned between the dependent and independent variables. We could then determine whether the relationship concurred with or was nonconcurrent with an outcome predicted by theory within that regime. If it concurred, we could then describe how to create other conditions to test theoretic predictions in extended regimes. The purpose was always to test theoretical predictions in ever more extreme conditions to find cases in which the theory made a bad prediction. As we were dealing with pretty well tested physics, of course we never disproved anything, though we did conduct a couple of experiments that didn’t measure what we thought we were measuring. Our professor took about twenty minutes with our setup before laughing at what we did, but he did have to look at it to figure it out. Undergrads can be creative in getting into trouble, luckily it didn’t create anything dangerous.

Exactly, Climate Science should be about developing and maintaining rigorous data collection and storage protocols so our great-great-grand children have some basis to develop climate simulations. The enormous investment in modeling could be used as grand scale experimental design tools to show us where our investment priorities need to be for data collection.

News Flash, it isn’t the radiation from the sun that matters, it is the amount of warming radiation that reaches the earth that counts. You can have a hot sun and plenty of clouds and the earth would cool. The Cosmic Ray theory has a hot sun clearing out cosmic rays reaching the earth, clearing the skies of clouds and making the earth even hotter when the sun is hot.

Nice theory. So how much greater cloud cover during solar minima actually appears in the surface temperature record?

Internal variability completely swamps whatever effect this is supposed to cause during the last 70 years. I wouldn’t be so sure about the LIA, but then again, this means the effect is very slow to take on and probably has an oceanic component. Right?

Saying the Sun has an influence is not necessarily completely bollocks, but then again, the above explanation does not appear to fit in the decadal picture. Centennial maybe?

Yep …. the TSI at top of atmosphere, and any “calculated” value of the actual energy reaching the surface is meaningless. To accurately assess the impact of a change in solar energy input, we would have had to have a surface measuring network across the globe, much like the thermometer network that actually measured the daily SW and LW radiation reaching the surface over the last 50 years. Calculated values are simply too uncertain. On top of that, as the article states (good read btw), the uncertainty with the TSI instruments is too large for any meaningful measurement as pertains to climate impacts, and any calculated value would multiply that inherent uncertainty making the calculated value even more useless.

“To accurately assess the impact of a change in solar energy input, we would have had to have a surface measuring network across the globe, much like the thermometer network that actually measured the daily SW and LW radiation reaching the surface over the last 50 years. ”

Yep, and that warming is due to radiation that CO2 is transparent to. CO2 has nothing to do with warming the surface.

I don’t think the planet is a pathetic pile of s***. I would be more supportive if you had used such extreme language to describe Dr Thomas J. Chalko MSc, PhD. (I clicked on your link) Chalko is at a minimum a total nut-case.

We do not know how constant the solar constant is although some will try to provide the falsehood that we do.

We do not know how weak TSI was during the Maunder Minimum.

We have inadequate measurements of TSI now, much less trying to extrapolate what it was in the past.

TSI is constantly revised.

In addition TSI changes within themselves are but a small part of the solar/climate connection.

Then to top it off you have the dishonest IPCC trying to downplay solar climate connections and prop up human contribution. It is so ridiculous, ludicrous and so fake, and such a waste of time, but if they want to bring it on, bring it on.

Not to worry however, because AGW fake theory is now in the process of ending as the global temperatures are no longer rising and better yet falling, and even better yet will continue to fall as we move forward.

More on my thoughts below.

It is ridiculous to even entertain the thought that non-existent AGW, which has hijacked all natural variability that was in a warming mode from the end of the Little Ice Age to 2005, had any climatic impact on this event, much less the global temperature rise.

My point will be proven now – over the next few years as global temperatures continue to fall in response to all natural climatic factors now transitioned to a cold mode.

If there is any validity to AGW , the global temperatures should continue to rise now-next few years, but they will not because AGW does not exist.

What controls the climate are the magnetic field strengths of both the sun and earth. When in sync as they are now(both weakening) the earth should grow colder.

In other words, during periods of very weak, long-duration magnetic field events the Earth cools due to a decrease in overall oceanic sea surface temperatures and a slightly higher albedo due to an increase in global cloud/snow coverage and explosive volcanic activity.

Thus far all overall global temperature trends for the past year or two have been down and I expect this trend to continue.

In answer to the question posed at the beginning of the post, no we have not measured the solar output accurately enough, over a long enough period, to definitively say solar variability could not have caused all or a significant portion of the warming observed over the past 261 years.

This is true for every potential factor, not in the beam of the flashlight.

In the case of ‘climate science’, there’s a big fat government guy making sure the lamppost already installed has an indefinite supply of bulbs lighting the spot below it better and better. And the guys with torches are regularly laughed at, their findings ridiculed, and the government tries to defund torches because the guys at the lamppost told they’re spreading misinformation.

The biggest lie in the IPCC’s work and the Alarmists’ claims is the certainty. Is what they say possible? Yes, they could be right. But do they have evidence that allows them to proclaim 95% certainty? Nowhere near. That 95% is a political necessity however, a number that is large enough to convince our dumb and credulous politicians that we must do something.

Given how little we understand about our climate, I cannot understand how any honest climate scientist can support such certainty.

phoenix44, this is the biggest bone of contention when it comes to climate science: the assertion of absolute certainty, quite often stated in abstracts, when the content of the same paper shows no such thing.
Yet another excellent article by Andy May, a pleasure to read.

Over a year the solar energy striking the Earth changes by around 7% because the Earth’s orbit is not perfectly circular. It also matters that the two hemispheres don’t absorb the same.

The ratio of the maximum distance from the Sun to the minimum distance is about 1.034. Because the energy arriving at the Earth goes by the inverse square law, the energy striking the planet varies by a factor of about 1.07, or 7%.
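The inverse-square arithmetic in this comment checks out, and it is consistent with the perihelion/aphelion values (1,415 and 1,323 W/m^2) quoted elsewhere in the thread:

```python
# Checking the comment's inverse-square arithmetic.
ratio = 1.034                             # aphelion/perihelion distance ratio
swing = ratio ** 2                        # flux scales as 1/d^2
print(f"{100 * (swing - 1):.0f}%")        # 7%
print(f"{100 * (1415 / 1323 - 1):.0f}%")  # 7%
```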

You’ve spotted the elephant in the room! A non-linear system with sinusoidal variation in Parameter X usually gives a different value of Yavg from the simplistic assumption that the average of the dependent variable, Y, corresponds to its value at Xavg.

While internal variability can cause large variations in temperature over significant periods it is always going to be difficult to ascribe any such similar changes to solar effects.

It often seems like many people acknowledge this may render certain calculations worthless, and then just proceed on the basis that they are a worthwhile pursuit anyway. (Protein folders do something very similar. Who wants to talk themselves out of a job.)

There really are some things in science that should not be attempted. This is widely acknowledged in the general, but not the particular. If scientists won’t address this directly then, ultimately, politicians will.

Because of the elliptical orbit the solar “constant” varies from 1,415 W/m^2 at perihelion to 1,323 W/m^2 at aphelion, a swing of 92 W/m^2. Geometry & algebra. That’s bound to leave a mark.

The solar constant doesn’t vary during the orbit because it’s evaluated at 1 AU regardless of the position of the Earth. Also, because the Earth orbits faster when it’s closer to the Sun, that ‘swing’ largely averages out over the year (Kepler’s laws).

Earth’s orbit is slightly elliptical, not circular, so its mean speed around the sun is an average, not a constant.

Earth is closest to the sun (at perihelion) and orbits fastest around January 3. It is farthest from the sun (at aphelion) and orbits least rapidly around July 4. Since, per Kepler, equal areas must be swept in equal time intervals, the planet’s orbital velocity must be the swiftest at perihelion and slowest at aphelion.

The data is averaged for 1 AU (line 5), but if you look (line 10, iirc) it includes the actual reading, if I can find the original dataset. Might have to hunt the SORCE raw data down again and double check, it’s been a bit.

That being said, in another discussion I also brought up that using absolute temperature we see a change of 0.8 K out of 288 K, a 0.3% change in temperature. If there was a direct correlation, that would mean a 4 W/m^2 change in 1365 W/m^2 would result in all of the changes to date.

I have seen anything from 1/1.5 to 4 or even 6 in that same time period though.

A serious question:
If the “temperature of the earth” varies directly with changes in radiative energy absorbed,
and if energy radiated from the earth varies as the fourth power of radiating temperature,
then a 1% increase in energy absorbed results in a 4% increase in energy radiated away.
Why is the earth not safely nested in a negative-feedback temperature cocoon?
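The T^4 dependence does act as a stabilizing (Planck) feedback: because radiated flux rises steeply with temperature, a 1% increase in absorbed energy needs only a ~0.25% rise in temperature to restore balance. A minimal sketch of that arithmetic, assuming an idealized gray body with no other feedbacks:

```python
# Planck-feedback arithmetic via the Stefan-Boltzmann law (flux = sigma*T^4),
# for an idealized gray body with no atmospheric feedbacks (an assumption).
SIGMA = 5.670374419e-8               # Stefan-Boltzmann constant, W/m^2/K^4
absorbed = 240.0                     # W/m^2, as in the post
T_eff = (absorbed / SIGMA) ** 0.25   # effective radiating temperature
T_new = (1.01 * absorbed / SIGMA) ** 0.25   # 1% more absorbed energy
print(round(T_eff, 1))                      # ~255.1 K (effective, not surface)
print(f"{100 * (T_new / T_eff - 1):.2f}%")  # ~0.25% temperature rise
```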

The IPCC lowered their estimate of the impact of solar variability on the Earth’s climate from the already low value of 0.12 W/m2
==========
It simply shows that W/m2 is a nonsensical measure of climate.

Consider what would happen if the Sun’s TSI remained constant in W/m2 but the radiation shifted so that it was all in the IR band. All the green plants on earth would die and the climate of the earth would be HUGELY different than today.

Yet there is a frequency shift in the Sun during the solar cycle. Where is this accounted for in TSI? It isn’t.

The problem is that the IPCC and solar science in general regard the earth as a lifeless grey body and calculate climate on that basis, without taking into account that the primary driver of climate on planet earth is life.

Very interesting, as noted, real science. I recall studies on coastal plankton productivity in the early 60s that were measuring variation in surface solar insolation. My impression is that biologists have not done much with this, important as it is proven to be by plankton seasons. It probably is often overshadowed except in exceptionally consistent clear water. Even then, wave action and other changes are a problem.

The first sentence of a 1957 review: "Solar radiation is probably the most fundamental ecological factor in the marine environment." I haven't done my homework on this, but I wonder whether solar panels are more important than measurements. I do know how difficult it is to measure (usually it is not measured) in coastal waters. So much to learn.

Solar panels have the same problem as the instruments in space – doing their job makes them less efficient at doing their job. Every photon that liberates an electron on the plate makes a change in crystal structure that ever so slightly damages the surface. After a couple of years, the same incident sunlight generates a measurably lower current output from the panel.

Every panel produced has a pretty wide variance from batch to batch and panel to panel, so even changing the panels every season wouldn’t guarantee consistent results in output. As there is no baseline to compare to, solar panels can’t generate usable scientific data on the output of the sun except on order of magnitude level changes (which, thankfully for all of us living around this star, never happens!)

Based on examination of a lot of marine science papers over the last couple of decades, I wonder if two factors have skewed the research: (1) the environmental movement, and (2) the movement to computers, satellites, and buoys for data. There has still been a lot of valuable work done, but I suspect we missed a lot by not being on site.

I think this may be the only recent paper I found locally, but did not chase all the physical journals and never did a computer search. There are a few studying predator-prey reactions with light, but the sun was not important to them. I do know that day/night studies are not common.

Lugo-Fernández, M. Gravois and T. Montgomery. 2008. Analysis of Secchi depths and light attenuation coefficients in the Louisiana-Texas shelf, northern Gulf of Mexico. Gulf of Mexico Science. 16(1):14-27. Interesting that this mid-19th-century method goes back to an Italian named Secchi, and maybe others.

Having dived tropical reefs around Guam, I can say there is a distinct difference in the behavior of sea life between the day and night (at least there). I would think any study that purported to evaluate an ecosystem would have to do extensive day/night evaluations in order to even come close to a comprehensive evaluation. Of course I studied physics not biology, so as the old saying goes – what do I know?

From a fisherman’s perspective (my dad), the fish are greatly affected by what happens above the water as well as below. When they want to feed and when they don’t. Location, tide, wind and rain all have a very big impact on catching a fish (and by extension, their activity).

There is a simple solution to that. For long-term measurement, give up permanently watching the sun and just take a fast snapshot once a day.
Do not let the instrument be permanently degraded by the sun. Just hide it behind a shutter and open it for one second a day. This way instrument decay will be negligible. We would have data for 800,000 years instead of 10.
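The arithmetic behind that lifetime claim is easy to verify. A back-of-envelope sketch (the 10-year continuous-exposure lifetime is my rough assumption, not a figure from the thread):

```python
# Back-of-envelope check (the 10-year lifetime is my rough assumption for a
# continuously exposed radiometer): if degradation scales with accumulated
# exposure, a 1-second-per-day shutter stretches life by the duty-cycle ratio.
SECONDS_PER_DAY = 86_400
duty_cycle = 1 / SECONDS_PER_DAY              # 1 s of sun per day

continuous_life_years = 10                    # assumed continuous-exposure life
shuttered_life_years = continuous_life_years / duty_cycle

print(f"{shuttered_life_years:,.0f} years")   # ~864,000: the figure quoted above
```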

How sure are we of all the posted trend estimates for solar forcings required for the warming? Is this the right way to go?

How can the IPCC do real physics when the sun’s power is quartered first? The ocean at the sub-solar point responds to full TSI, not TSI/4, under clear skies. Does anyone figure income on a 24-hour average? No, it’s always figured on actual hours of real work. Same with the sun’s energy.

The IPCC loves to downplay the sun's influence. The way they do this is by simply assuming the solar input has a minimal influence, as that is the only way to downplay the glaring discrepancy in solar epochs since 1850, the only way to ignore the energetic difference between the falling phase and the rising, high long-term solar activity regime.

I think we can know with certainty whether TSI varies enough to cause climate change, even though we’re uncertain about a number of things wrt TSI, such as long-term averages and extremes, and absolute value.

If you’ve seen Willis’ tropical temperature (and rain?) plot covering a day you’ll notice it registers to the morning daylight hours, maximizing early afternoon, then dropping off until the next day. The ocean temperature and evaporation are responding to the instantaneous incoming solar at full TSI under clear skies.

The IPCC does this quartering because of the Trenberth cartoon. The concept of a "forcing," that the atmosphere can and does heat the surface of the Earth, depends on 24/7 so-called "downwelling infrared radiation," which would easily be shown to be nonsense without the quartering. Anyone who does not live in a cave knows that the Sun heats the surface of the Earth, not the atmosphere.

“Forcings” is/are a completely absurd concept. All made up so that the IPCC can sound scientific, but they do not, to anyone who knows anything about science. The Earth has no average temperature that we can know to any certainty, and we certainly did NOT know what it was in 1750.

2) There is a 333 W/m^2 0.04% GHG up/down/”back” energy loop that traps/re-emits per QED simultaneously warming BOTH the atmosphere and the surface. Good trick. Too bad it’s not real. – thermodynamic nonsense.
And where does this magical GHG energy loop first get that energy?

3) From the 16 C/289 K/396 W/m^2 S-B 1.0 ε BB radiation upwelling from the surface. – which due to the non-radiative heat transfer participation of the atmospheric molecules is simply not possible. (TFK_bams09)

Nicholas, the surface is slightly warmer than the atmosphere during the day. It radiates IR, which is mostly captured by water vapor or CO2 in the lower atmosphere and held for a while (maybe 0.5 seconds). While it is being held, the molecule is excited, and over thousands of collisions with neighboring molecules it can transfer the energy to them, exciting them and "warming" the air locally. Convection (wind and currents) moves this energy ("warming") around. This is the greenhouse effect. More greenhouse gases enhance the effect.

I’ve no problem with the concept, my problem is with the magnitude of the effect. Assuming it controls climate change is where they lose me.

That is where it comes from. TSI*cos(𝜃)cos(𝜙) gives the flux per unit area received at any point on the Earth's sunlit surface, and the double integral of that over the sunlit hemisphere (𝜃 and 𝜙 each from -𝜋/2 to 𝜋/2) gives the total being received by the Earth's surface at any time. That gives the same factor of 4 as the ratio of surface area to projected area, because it comes from the same math.

The equator receives a daily pulse of sunlight, integrated and governed by this curve, minus cloud and aerosol effects.

Exactly, that curve shows the cos relationship I gave. Your curve shows the effect at the equator, as you go away from the equator you get a similar cos dependence hence the double integral of cos(𝜃)cos(𝜙).

Yes, integrating yields the factor of 4 reduction, but it's much easier to understand when you compare the area of the Earth's surface emitting energy to the area of the plane across which solar energy is arriving.
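The double integral discussed here can be checked numerically. A sketch (my own, using a simple midpoint rule with the limits -π/2 to π/2 for the sunlit hemisphere); it recovers both the intercepted power π·R²·TSI and the factor-of-4 average:

```python
# Numerical check (my sketch) of the cos(theta)*cos(phi) double integral:
# theta (latitude) and phi (hour angle) each run -pi/2..pi/2 over the sunlit
# hemisphere, with spherical area element cos(theta) dtheta dphi (R = 1).
from math import cos, pi

TSI = 1361.0   # W/m^2 at 1 AU
N = 500        # midpoint-rule grid resolution

total = 0.0
dtheta = pi / N
dphi = pi / N
for i in range(N):
    theta = -pi / 2 + (i + 0.5) * dtheta
    for j in range(N):
        phi = -pi / 2 + (j + 0.5) * dphi
        # incident flux per unit surface area, times the area element
        total += TSI * cos(theta) * cos(phi) * cos(theta) * dtheta * dphi

avg = total / (4 * pi)   # divide by the whole sphere's area
# total -> pi * TSI (the projected-disk interception), avg -> TSI / 4
print(round(total / (pi * TSI), 4), round(avg, 2))
```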

Why is this so often ignored relative to energy absorbed by the atmosphere? When the same integration is performed for the atmosphere, it emits over twice the area it absorbs, thus the net emitted flux, either up or down, is limited to half of what the atmosphere absorbs. Even more perplexing is that the data confirms the roughly 50/50 split!

IR instruments measure temperature with thermocouples and thermopiles via a temperature/millivolt relationship. W/m^2 is inferred by assuming the emissivity. SURFRAD assumes 1.0, which is wrong. BB emission from the surface is not possible.

The up/down/"back" "measurements" are due to bad data from misunderstood and misapplied pyrgeometers.

Yes, the ‘back radiation’ measurements are completely bogus. The data I’m talking about is based on a first principles examination of the energy balance.

The first thing we need to do is unwind Trenberth’s excess complexity that serves no purpose other than to obfuscate.

We can ignore latent heat and thermals, relative to the balance, as whatever effect they are having is already embodied by the average temperature and its emissions. When you subtract their return to the surface from the bogus back radiation term, all that’s left are the 390 W/m^2 offsetting surface emissions.

Relative to the thermodynamic state of the planet, the water cycle links the water in clouds with surface water over short time periods. The result is that energy absorbed and emitted by the liquid and solid water in the atmosphere (clouds) can be considered a proxy for energy absorbed and emitted by the water in the oceans, relative to averages.

Now, let's reconsider the AVERAGE balance.

In the steady state, 240 W/m^2 arrives from the Sun and 390 W/m^2 is emitted by the surface at its average temperature of 288 K. The ISCCP cloud data combined with GHG concentrations tells us that the atmosphere absorbs about 300 W/m^2 of the 390 W/m^2 emitted, leaving only 90 W/m^2 to escape into space.

To offset the 240 W/m^2 arriving, 150 W/m^2 more must leave, which when added to the 90 W/m^2 passing through exactly offsets the 240 W/m^2 arriving.

The only possible source of this 150 W/m^2 is the 300 W/m^2 absorbed by the atmosphere, leaving 150 W/m^2 more to be returned to the surface, where it combines with the 240 W/m^2 of solar input to offset the 390 W/m^2 of surface emissions.
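The arithmetic in the last few paragraphs can be tabulated in a few lines. This only checks the commenter's 240/390/300 figures for self-consistency; it does not validate the 300 W/m^2 absorption figure itself:

```python
# Self-consistency check of the figures above (240/390/300 are the commenter's
# numbers; this verifies only that the arithmetic closes, not the physics).
solar_in = 240.0          # W/m^2 absorbed from the Sun
surface_emit = 390.0      # W/m^2, ~S-B emission at 288 K
absorbed_by_atm = 300.0   # portion of surface emission absorbed, per the comment

through_window = surface_emit - absorbed_by_atm   # 90 escapes directly to space
atm_emit_up = solar_in - through_window           # 150 more needed to balance TOA
atm_emit_down = absorbed_by_atm - atm_emit_up     # 150 returned to the surface

surface_budget = solar_in + atm_emit_down         # should equal surface_emit
print(through_window, atm_emit_up, atm_emit_down, surface_budget)
```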

The way these reconstructions all move upward over time makes solar look favorable, as the latter decades are higher. But the NRLTSI2 reconstruction, for example, is wrong because it doesn't incorporate the v2 sunspot revision numbers, which will increase the TSI of older high solar cycles to levels similar to recent high-activity cycles' TSI.

TSI acts like a pulse-width amplitude modulated heat source, the longer it’s high the hotter we get, and vice versa, and it’s operated from a narrower range than most of these reconstructions show. I consider the narrower range more realistically tracking solar cycles to be very favorable for solar TSI forcing.

Nearly every star we observe varies with what are mostly unexplained random periods and intensities. If the Sun is as constant as claimed, it would be a very rare and unusual star.

The total solar energy arriving at the surface varies by 80 W/m^2 (almost 6%) between perihelion and aphelion. The 20 W/m^2 difference averaged across the surface becomes 14 W/m^2 after 30% reflection. According to the IPCC's nominal ECS of 0.8 C per W/m^2, this should result in a temperature difference of 11 C between the two hemispheres. Nowhere near enough energy is transported between hemispheres to offset this much of a change in forcing, so where is this temperature difference?
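The chain of numbers here works out as stated. A quick check (the 80 W/m^2 swing, 30% albedo, and 0.8 C per W/m^2 "nominal ECS" are all the commenter's inputs, not mine):

```python
# Checking the chain of numbers in this comment (the 80 W/m^2 swing, 30%
# albedo, and 0.8 C per W/m^2 "nominal ECS" are all taken from the text).
toa_swing = 80.0    # W/m^2 between perihelion and aphelion, per the comment
albedo = 0.30
ecs = 0.8           # degrees C per W/m^2

avg_swing = toa_swing / 4              # spherical averaging, as in the head post
forcing = avg_swing * (1 - albedo)     # after 30% reflection
delta_t = forcing * ecs                # implied hemispheric temperature difference

print(avg_swing, round(forcing, 1), round(delta_t, 1))  # 20.0 14.0 11.2
```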

The two hemispheres respond quite differently to changes in solar insolation, and we see this in the ice cores as a clear ~22,000-year periodicity corresponding to the precession of perihelion. In the N hemisphere, the winter snow band is mostly over land and snow readily accumulates. In the S hemisphere, the winter snow band is over the ocean, and snow and ice slowly extend from the Antarctic mainland rather than rapidly accumulating in place. A serious error often found in models is to AU-normalize the solar input.

Despite what alarmists claim proxies are telling us, we really have no idea how much the Sun (or the Earth's orbit, for that matter) has varied over geologic time. It's as likely that the orbit and Sun have been constant for billions of years as it is that the Earth is only a few thousand years old. In fact, the latter is a prerequisite for the former.

The tenuous proxy evidence of a far-distant climate that was much warmer or colder than we see in the ice cores is far more likely to be due to variable solar output than to the absurd concept of a highly amplified GHG effect arising from CO2 causing warming or the lack of CO2 causing cooling.

The total solar energy arriving at the surface varies by 80 W/m^2 (almost 6%) between perihelion and aphelion.

No, that's the flux; energy is measured in J/m^2, and a W is a J/s.
However, the rate at which the Earth moves along its orbit also depends on its distance from the sun, which cancels out the increase in flux (Kepler's law of areas).
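The cancellation invoked here is easy to demonstrate. In the sketch below (my own, using Earth's eccentricity of 0.0167), flux falls off as 1/r² while the time spent per unit of true anomaly grows as r² by the law of areas, so the energy received per unit of orbital angle is the same all the way around:

```python
# Sketch of the area-law cancellation (my illustration, e = 0.0167): flux
# falls as 1/r^2 while time spent per unit true anomaly grows as r^2, so
# energy received per unit angle is constant around the orbit.
from math import cos, pi

e = 0.0167      # Earth's orbital eccentricity
S0 = 1361.0     # flux at 1 AU (semi-major axis a = 1), W/m^2

def r(theta):
    """Heliocentric distance (AU) versus true anomaly for a Kepler ellipse."""
    return (1 - e**2) / (1 + e * cos(theta))

thetas = [i * 2 * pi / 360 for i in range(360)]
fluxes = [S0 / r(t)**2 for t in thetas]          # inverse-square flux
dwell = [r(t)**2 for t in thetas]                # dt/dtheta ~ r^2 (law of areas)
energy = [f * w for f, w in zip(fluxes, dwell)]  # energy per unit angle

print(f"flux swing: {min(fluxes):.0f}..{max(fluxes):.0f} W/m^2")  # ~7% range
print(f"energy/angle spread: {max(energy) - min(energy):.1e}")    # ~0
```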

The planet responds to the instantaneous energy flux and only averages are affected by the average energy across the period of the average. Otherwise, we wouldn’t notice seasonal variability, much less any temperature difference between night and day. Over the course of a year, the total energy does balance out, but what doesn’t balance out is the asymmetric response as perihelion shifts through the seasons. This was one of Milankovitch’s arguments that’s often ignored but is supported quite well by the ice cores.

The difference in albedo between summer and winter is larger in the N than the S, owing to the aforementioned difference in where the snow belt resides. As a result, the larger winter solar input in the N hemisphere winter is offset by a relatively larger reflectivity, just as the lesser solar input to the S hemisphere winter is offset by a lower reflectivity. When this flips in about 11,000 years, it will add to the asymmetry rather than act to cancel it out, and the seasonal variability will get significantly larger than it is today, especially in the N hemisphere.

Because of the T^4 relationship between temperature and emissions, and because it's the forcing/emissions whose range is symmetrically extended, winters get colder by more than summers get warmer, giving glaciers a chance to grow. Today, we are at the opposite end of that cycle, where the relative difference between summer and winter in the N hemisphere is at its lowest possible level.

We don't notice this because the S hemisphere is less sensitive to instantaneous flux and is more aligned with average flux, owing to its much larger fraction of water and to where that water is relative to how the surface reflectivity responds to temperature. In 11,000 years, the S hemisphere climate will not have changed much, except perhaps Australia's, but the N hemisphere will see significant changes across the US, Canada, Europe, and Russia, while the equatorial climate will stay pretty much the same as it is now.

Over the course of a year, the total energy does balance out, but what doesn’t balance out is the asymmetric response as perihelion shifts through the seasons.

So you agree with me. You said total energy varied and then quoted the change in flux. As I showed, the Earth spends more time from the March equinox to the September equinox (189 days) than from the September equinox to the March equinox (176 days). Also, the length of day varies for the same reason, ±7.5 minutes.
I didn't mention the shift of per/aphelion relative to season (it's currently close to the solstice) as that's a longer-term variation.

OK, so let me ask the derivative of that question:
What IS the correct approximation for a “calendar year” measurement (temperature, ice area, solar radiation, whatever) that varies over the 365.25 day year?

1) Do we skip leap years? Average them with 365 days? Use Feb 29 only when convenient?

2) Is “data” on Day-of-Year (DOY) = 62 DOY = 62 for all years: or only leap years, or just for 3 of the 4 years excluding all leap years? Or do we consider DOY = 62 “good enough” for DOY = 62 (and 63)? After all, solar radiation doesn’t change very much.

(By the way, ZERO papers mention this in their plots and printed files! Can Aug 12 1966, 1977, 1998, and 1999 all be the same DOY when plotted? ) Or, as mentioned, does it matter?

3) The most accurate method is to add/average/list/compare all 365 days for a seasonally changing value. Impractical, but possible.

So what IS APPROPRIATE (accurate enough) to evaluate monthly averages and “total year effects” of a 365.25 day year?

4) Four equally spaced days (12-22, 3-22, 6-22, 9-22): pick four important dates, the two equinoxes and the two solstices. But this is not accurate enough.

5) The 12 “average months” are better, but what is the “average solar irradiation absorbed” each month?
Are 28, 30, and 31 day-long months to be equally important?
Which is the “accepted” representative Day-of-Year for the 12 months?
The 15th of each month? (Not per ) The 14th on Feb, but the 15th on every other month?

6) If “The more days, the better” philosophy holds, then what IS adequate:
The 1, 15 of every month?
The 5, 10, 15, 20, 25, 30? (Oops, that skips February 28 and 29. Or do we use "the last day of Feb = the 30th of every other month"?)
The 2, 12, 22 of every month? (Gets all of the four special days of solstices and equinoxes!)

For example: “Average for each months (whatever) is calculated for the 15th of the month”
Real world number, 365 day year.

Date DOY
01 Jan 001 (What happened to Day = 0.0 on the graphs ?)
02 Jan 002
15 Jan 015
14 Feb 045
15 Mar 074
15 Apr 105
15 May 135
15 June 166
15 July 196
15 Aug 227
15 Sept 258
15 Oct 288
15 Nov 319
15 Dec 349
But these are "12 not-equal intervals" measured at the "middle" of each month of varying length?
Are they valid "averages" to use for 1/12 of a full year?
What is the weight factor for Feb (28 days, or 29) compared to a month with 31 days in winter, or a month with 31 days in north hemisphere summer?
The textbooks are less useful: Solar Engineering of Thermal Processes, by John Duffie and William A. Beckman of the University of Wisconsin, Madison, WI, advises one to use the following "days" for monthly solar calculations:
Mon Date DOY
Jan 17 017
Feb 16 047
Mar 16 075
Apr 15 105
May 15 135
Jun 11 162
Jul 17 198
Aug 16 228
Sep 15 258
Oct 15 288
Nov 14 318
Dec 10 344
Does anyone consider these variations "better"? At best, "most" of the days have 30 days between intervals.
But not all.
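Both day-of-year tables above can be checked against the calendar for a non-leap year with a few lines (a quick sketch using Python's standard library; the month/day pairs are taken directly from the two tables):

```python
# Verifying both day-of-year tables above for a non-leap (365-day) year,
# using the standard library; the (month, day) pairs come from the tables.
from datetime import date

def doy(month, day, year=2023):  # 2023 is an arbitrary non-leap year
    return date(year, month, day).timetuple().tm_yday

# first table: the 15th of each month, except 14 Feb
mid_month = [(1, 15), (2, 14), (3, 15), (4, 15), (5, 15), (6, 15),
             (7, 15), (8, 15), (9, 15), (10, 15), (11, 15), (12, 15)]
# second table: Duffie & Beckman's recommended average days
duffie = [(1, 17), (2, 16), (3, 16), (4, 15), (5, 15), (6, 11),
          (7, 17), (8, 16), (9, 15), (10, 15), (11, 14), (12, 10)]

print([doy(m, d) for m, d in mid_month])
print([doy(m, d) for m, d in duffie])
```

Both printed lists match the DOY columns as given above, so at least the tables themselves are internally consistent.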

Well ‘months’ are an artificial construct and the start of the year is totally arbitrary, at best we can just compare like with like from year to year. In England prior to 1750 the start of the year was March 25 not Jan 1. What would make sense to me for this purpose would be to start the year at the winter solstice.
The meteorological year has the winter season being from 1 Dec to the end of Feb. The slight mismatch between the solstices and per/aphelion makes things slightly more complicated.

Little Ice Age = Urban Myth
The Big Lie repeated so often it becomes true.

1. Where are the bodies….

2. In the ‘photo’ we have of it (frozen River Thames), people are dancing (ice skating) on the ice, they are visiting shops and market traders, they are having an outdoor BBQ and generally having A Nice Time

3. What we do have for temperature records show it to be a non-event.

4. Oh. Charles Dickens wrote a lot about White Christmases.
Err. Charles Dickens was a novelist – he wrote fiction. Fiction does NOT revolve around the boring, the everyday, and the mundane.

If anything did happen, it was due to Queen Elizabeth continuing the tradition of her father and chopping trees to build a war-machine. That certainly would change the English Climate. And in more ways than one.
Similar to the burning of forest.

And why did she need such a huge war machine (for the time)?
She was generally accepted to be indecisive and totally dependent upon her advisers, especially her ‘nanny’.
Why was she so insecure, so lacking in confidence and untrusting of her own judgement?
Especially when it came to men and suitors.

Not because she was quite addicted to sugar by any chance?
Why courtiers and other folks with aspirations would get their teeth painted black or even knocked completely out – to make themselves appear like The Queen and able to feast on refined sugar. As Mary Queen of Scots knew, if you wanted to curry favour with Liz, you wrapped your request inside a box of sugared almonds.
Any parallels there?
Who has the largest army in this world and who (seemingly and gobsmackingly) goes through 3 cans per day, each, of carbonated soda-pop. 3 cans. Each. Daily. ?????? !!!!!!!
And *who* is the most paranoid about trivial & inconsequential things, such as decimal places of solar grunt and sunspots………

and you do ‘get’ that sunspots are the Dancing Faeries on the pinhead?
and that computers & Sputniks are the equivalent to QE1’s nanny?
*Everybody* these days is indecisive (buck passing) and didn’t QE1 set herself as a true role-model in birth-rate reduction?

Peta,
A summary of the Little Ice Age:
“The Little Ice Age was a horrible time for mankind according to Behringer. Glaciers advanced in the Alps and destroyed homes, it was a time of perpetual war, famines and plagues. Horrible persecutions of Jews and “witches” were common. Society was suffering from the cold and lack of food and they needed to blame someone. They chose Jews and old unmarried women unfortunately. Over 50,000 witches were burned alive. Tens of thousands of Jews were massacred. Not because there was any proof, just because someone had to suffer for the bad climate. Some people, the masses mainly, seem to need to blame someone or humanity’s sins for natural disasters. Behringer notes that in The Little Ice Age: “In a society with no concept of the accidental, there was a tendency to personalize misfortune.”
Source: https://andymaypetrophysicist.com/climate-and-civilization-for-the-past-4000-years/

Thermometer records in the CET show that the 1690s and 1700s, during the depths of the Maunder Minimum, were the coldest interval of the LIA.

The other coldest cycles of the LIA also fell during periods of low sunspots, with warming cycles between them. But overall and globally, the LIA was significantly cooler than today, as shown by proxy data as well as temperature readings.

It was the most recent of the centennial-scale coolings, alternating with warming cycles, since the Holocene Climatic Optimum.

Oh, another thing, Peta bread: QE1 died in 1603, years before England's great shipbuilding period.
Perhaps you mean the Spanish, you know, that "Great Armada" thingy.
You really need to brush up on your history. I would suggest you start with "The Roman Imperial Army" by Graham Webster (copyright 1969). It covers the 1st and 2nd centuries, during the Roman Warm Period.
After you have read that, get back to us, and I recommend a good history of the Thirty Years' War in Europe during the Little Ice Age. The comparisons and contrasts between the two time periods are interesting.
Specifically, what the Romans were able to do and the later European states were not. For example, the Romans were able to keep eight legions on the Rhine for two hundred years and keep them healthy. In total, the empire maintained between 25 and a maximum of 30 legions throughout the empire. During the Little Ice Age, armies would routinely lose between 50% and 90% of their strength come winter.
The fate of Johann Tserclaes (Count von Tilly's) Catholic army after the defeat at White Mountain is telling; it froze to death during the retreat.

The Little Ice Age is no urban legend, only someone of vast ignorance would state that.
Start reading.

You're right that the Royal Navy didn't face a timber crisis until the 1650s. ERI did support the RN; her shipbuilding program, however, was slow but steady. She tried to remedy the decline into which the sea service had fallen under the reigns of her brother and sister. Her early goal was 30 ships in 20 years.

Despite this program, the RN relied heavily on commandeered commercial ships for defense against the 130-ship Armada of 1588, which boasted 22 large galleons and 108 armed merchant ships, including four Neapolitan war galleasses.

The RN had 34 smaller warships, plus 163 armed merchant vessels, just 30 of over 200 tons, and 30 flyboats.

At the outset of the First Anglo-Dutch War (1652–54), Commonwealth Britain had 18 (1st and 2nd rate) ships of the line superior in firepower to the Netherlands’ flagship Brederode, largest in its navy. Furthermore, not only were the British (no longer Royal) navy ships larger, with more guns, but the English guns were bigger than Dutch naval guns. The English could thus fire and hit enemy ships at a longer distance, causing comparatively more damage with their shot. I don’t know the number of 3rd through 6th rate British warships at that time.

In 1652, the Netherlands’ navy had only 79 warships, many in bad repair, so that fewer than 50 were seaworthy. The deficiency in the Dutch navy was to be made good by arming merchantmen. As noted, all were inferior in firepower to the largest English first and second rate ships.

During the war, both sides built up their navies to around 300 ships. Hence, the timber troubles.

When the wars that prompted the naval expansions ended there was often surplus timber that was redirected to house building. The beams in my parents’ house (which is ~400 yrs old) were unused ship timbers.

The beams in my parents’ house (which is ~400 yrs old) were unused ship timbers

Must be beautiful. Painted or stained & varnished?
With their “natural” bends and angles (the “bents” of a reinforced angle joint where the ribs meet the deck)?
Or as squared-off large lumber beams and columns?

Sorry my response from yesterday didn't show up. Stained and varnished; it looks good. I was back visiting this summer; my sister lives there now. The beams were sort of squared off, with unused mortices from their previous application, and there's not a right angle in the place (stone built).

This is a very poor post. Extrapolating the noisy measurements of only 3 or 4 sunspot cycles to 261 years is bad science. There is now general agreement that the changes in TSI are due to changes in the sun's magnetic field. We have measured the latter with accuracy since the 1970s. The sun's magnetic field also determines the diurnal range of the variation of the geomagnetic field, which is known since the 1740s, and also the intensity of geomagnetic storms, which is known with good accuracy back to the 1840s, and even the modulation of cosmic rays, reaching back much farther in time. All of those effects show that there has not been a large variation [whether 9 or 14 W/m2] of the basal level of TSI exceeding 0.5 W/m2: http://www.leif.org/research/EUV-F107-and-TSI-CDR-HAO.pdf

Extrapolating to 261 years the noisy measurements of only 3 or 4 sunspot cycles is bad science.

I totally agree with this statement.
You also say:

There is now general agreement that the changes in TSI are due to changes in the sun’s magnetic field.

OK, not very controversial. Now, how accurately can we measure the Sun’s magnetic field and precisely how do we compute the TSI, how accurate is the calculation? To go far back in time we need to compare the magnetic field measurements to sunspots, how accurate is that? Your ppt is very interesting and you have a lot of good correlations, but no indication of accuracy at the level required.

Can you supply TSI at the accuracy Kopp computes? That is, ±0.136 W/m^2?

You note that TSI no longer follows the SN. Where is the error, how much error is there?

My only point is that we do not know solar variability, from ANY source accurately enough to exclude solar variability as a possible cause for recent warming, or, at least, a large part of it.

Now, how accurately can we measure the Sun’s magnetic field and precisely how do we compute the TSI, how accurate is the calculation? To go far back in time we need to compare the magnetic field measurements to sunspots, how accurate is that?
My comment supplies the necessary error analysis. The measurements of the solar magnetic field are very accurate. The sensitivity of TSI to the magnetic field can be gauged from the rotational and solar cycle behaviour. There is no way TSI could have varied 13.6 W/m2 [=1%] over the past 300 years, which is 10 times the well-established solar cycle variation. Such a variation would have resulted in a 1/4% variation of Temps = 0.72 C. In particular, at every solar minimum the activity falls to nearly zero, so TSI must be almost constant for every minimum, even if we allow for a generous error of 0.5 W/m2.
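The 0.72 C figure here follows from the Stefan-Boltzmann quarter-power rule. A sketch checking only the arithmetic (it assumes, as the comment implies, that temperature scales as the fourth root of total forcing):

```python
# Arithmetic check of the quoted figure (assumes, as the comment implies,
# that temperature scales as the fourth root of total forcing).
T_MEAN = 288.0        # K, global mean surface temperature used in the comment
flux_change = 0.01    # the hypothetical 1% (13.6 W/m^2) TSI variation

exact_dT = T_MEAN * ((1 + flux_change) ** 0.25 - 1)
approx_dT = T_MEAN * flux_change / 4    # the linearized "1/4%" rule

print(round(exact_dT, 2), round(approx_dT, 2))  # ~0.72 either way
```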

Since it is unlikely that people who disagree will even look at my comment, I affix here the abstract:
A composite record of the total unsigned magnetic (line-of-sight) flux over the solar disk can be constructed from spacecraft measurements by SOHO-MDI and SDO-HMI, complemented by ground-based measurements by SOLIS, covering the period 1996-2016, which includes the two solar minima in 1996 and 2009 and the two solar maxima in 2001 and 2014. A composite record of solar EUV from SOHO-SEM, TIMED-SEE, and SDO-EVE covering the same period is very well correlated with the magnetic record (R2=0.96), both for monthly means. The magnetic flux and EUV [and the sunspot number] are extremely well correlated with the F10.7 microwave flux, even on a daily basis. The tight correlations extend to other solar indices (Mg II, Ca II) reaching further back in time. Solar EUV creates and maintains the ionosphere. The conducting E-region [at ~105 km altitude] supports an electric current by a dynamo process due to thermal winds moving the conducting region across the Earth's magnetic field. The resulting current has an easily observable magnetic effect at ground level, maintaining a diurnal variation of the geomagnetic field [discovered by Graham in 1722]. Data on this variation go back to the 1740s [with good coverage back to 1840] and permit reconstruction of EUV [and proxies, e.g. F10.7] back that far. We confirm that the EUV [and hence the solar magnetic field] relaxes to the same [apart from tiny residuals] level at every solar minimum. Since the variation of Total Solar Irradiance [TSI] is controlled by the magnetic field, the reconstruction of EUV does not support a varying 'background' on which the solar cycle variation of TSI rides, strongly suggesting that the Climate Data Records advocated by NOAA and NASA are not correct before the space age. Similarly, the reconstruction does not support the constancy of the calibration of the SORCE/TIM TSI-record since 2003, but rather indicates an upward drift, suggesting an overcorrection for sensor degradations.

The measurements of the solar magnetic field are very accurate. The sensitivity of TSI to the magnetic field can be gauged from the rotational and solar cycle behaviour. There is no way TSI could have varied 13.6 W/m2 [=1%] over the past 300 years, which is 10 times the well-established solar cycle variation.

“Very” accurate is not very specific. One of your correlations has an R^2 of 0.96, this is also not good enough to establish that “There is no way TSI could have varied 13.6 W/m2.”

According to Kopp:

“…measurement duration, a potentially-realistic TSI-variability trend as large as 0.1% over 100 years, such as discussed in Section 2.2.2, may be detected at the 1-sigma level by continual and overlapping instruments having 0.001%/yr stability uncertainties even over short time-periods (but regardless of measurement duration), while non-overlapping instruments having 0.01% uncertainties on an absolute scale could provide a similarly-marginal 1-sigma level trend detection only after 10 years…”

So you see, the accuracies that you refer to are orders of magnitude too low; we need >99.9%, which is my point. When it comes to proxies, you can quantify the quality of the correlation, perhaps, but the accuracy of the TSI record is in question, and so are the accuracy and duration of the EUV record. Lots of good work, but the accuracy required just isn't there to back up your assertion. In any case, R^2 is only a measure of correlation, not accuracy. Further, the relationship to climate is another unknown jump. Speculation stacked on speculation. This is an important area that is being totally ignored by the IPCC, by assuming it doesn't matter.

the accuracy of the TSI record is in question, so is the accuracy and duration of the EUV record
The quoted accuracy is 0.5 W/m2, which is not in question. The EUV record is solid back to 1740. When you say ‘is in question’ you conveniently omit to say by whom, and by how much. The way to counter my comment is to argue or show, for each slide, that it is wrong, not by blanket hand waving.

Leif,
The accuracy of 0.5 W/m^2 you quote is only for the TIM instrument, and only for its first 10 years. That is the best you can expect, and only for a short time. Even the TIM instrument deteriorates with time. This is all well documented in the post.

As for EUV, exactly how accurate are the measurements made in 1740? I see an R^2 of 0.917 for the correlation to SSN, not even close to the 0.999 we need to make our case. F10.7? I see a correlation with an R^2 of 0.956, not very good at all. These values are from your ppt; I’m quoting your values.

Who is blanket hand waving? We have a particular accuracy we are looking for, ±0.136 W/m^2, and I’m not seeing it in your data.

I see an R^2 of correlation of 0.956, not very good at all.
An R2 of 0.956 means that 95.6% of the variation is explained by the correlation, leaving less than 5% for wiggle room [or error if you wish].
You are not [doing] as I suggested: for each slide, explain what is wrong with it.
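The distinction the two sides are arguing, correlation versus accuracy, can be made concrete with a toy example (entirely synthetic numbers, not real TSI data): a proxy can track a TSI-like series with R^2 above 0.9 while the record drifts by more than the ±0.136 W/m2 at issue.

```python
import math

# Synthetic 22-year monthly "TSI": two solar-cycle-like oscillations...
n = 264  # months
true_tsi = [1361.0 + 0.5 * math.sin(2 * math.pi * i / 132) for i in range(n)]
# ...plus a slow, uncorrected calibration drift of 0.3 W/m^2 over the record
measured = [t + 0.3 * i / (n - 1) for i, t in enumerate(true_tsi)]

def r_squared(x, y):
    """Square of the Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

r2 = r_squared(true_tsi, measured)
max_error = max(abs(m - t) for m, t in zip(measured, true_tsi))
# R^2 stays above 0.9 even though the drift exceeds the 0.136 W/m^2 target
print(f"R^2 = {r2:.3f}, max absolute error = {max_error:.2f} W/m^2")
```

The correlation "explains" over 90% of the variance, yet the absolute calibration is off by more than twice the accuracy being demanded, which is exactly the gap between R^2 and accuracy.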

In particular, at every solar minimum the activity falls to nearly zero, so TSI must be almost constant for every minimum, even if we allow for a generous error of 0.5 W/m2.

I do not think all indicators suggest that solar activity drops to zero in a solar minimum. Your ppt contains a quote from Foukal and Eddy (2007) suggesting that there was a lot of variation even during the Maunder Minimum. Shapiro (2011) notes that sunspots do not mark some lower limit of solar activity: the modulation function is still active, even when SN is zero.

In short, as you have written, there is no clear evidence that there is a secular increase in solar activity over the past 300 years; I agree with this. But there is no clear evidence that there isn’t a secular increase, either; that is just as speculative.

Observations in the Ca II K line started as early as in 1892 at various sites, providing a good temporal coverage of the whole 20th century. . . . We have developed a method to recover the relation for the response of the plates to the incident radiation by using information that is stored on the solar disc of the image. . . . We have also reassessed the relation between the magnetic field strength and the Ca II K contrast, by using a larger number of Ca II images than was done in earlier such studies. . . .
The new series confirms the existence of the modern grand maximum of activity in the second half of the 20th century, when sunspot cycles were significantly higher than during the 19th and 18th centuries. The new GSN series provides a robust reconstruction of solar activity (the number of sunspot groups) with a realistic estimate of uncertainties and forms a basis for further investigation of centennial variability of solar activity over the last 270 yr.

But, there is no clear evidence that there isn’t a secular increase
Yes there is, and very strong indeed.
The sun’s magnetic field heats the corona where EUV is generated. The EUV creates the E-region [at 105 km altitude]. Dynamo action produces a current whose magnetic effect at ground level is easily measured [discovered in AD 1722]. With the exception of some years at the beginning of the time since, we have kept track on this effect and can thus directly calibrate it in terms of EUV flux [using modern data]. For such calibration and R2 of greater than 0.9 means extraordinary significant agreement. The result is direct evidence for a lack of secular increase. This is not in doubt and is not controversial. The causal relationships between solar magnetism, EUV [and F10.7] flux, ionospheric currents, and geomagnetic effect are well-understood: Slide 21 ofhttp://www.leif.org/research/EUV-F107-and-TSI-CDR-HAO.pdf

It should be well-known that even at minimum sunspot activity there is a ‘floor’ under which the sun’s magnetic field does not fall, e.g. Loomis [1873].

Naturally I disagree. Sun and only sun is capable of changing climate while humans are far too feeble to compete.

“Londoners loved a few tankards of wine. In fact, they were rarely sober. It was imported from France, of course, but also home-grown. The Medieval Warm Period, which lasted from around 950 to 1250, made viticulture viable. Vineyards existed outside the city walls – but also within. Vintry, another of the City’s 25 wards, was once the main district for wine growing.

Between 1600 and 1814, it was not uncommon for the River Thames to freeze over for up to two months at a time. During the Great Winter of 1683/84, when even the seas off southern Britain were frozen solid for up to two miles from shore, the most famous frost fair was held: The Blanket Fair. Britain (and the entire Northern Hemisphere) was locked in what is now known as the ‘Little Ice Age’.

A family company, Ridgeview has been producing quality sparkling wine since 1994 in the South Downs near the village of Ditchling. It’s open for tastings and sales from Monday to Saturday and also hosts general and private tours (pre-booking essential) throughout the year. For £15 you’ll enjoy a tour of the vineyard, where it grows Chardonnay, Pinot Noir and Pinot Meunier, as well as the state-of-the-art winery, where you’ll get a glimpse of the sparkling winemaking process. And you’ll get to try all the estate’s current wines.”

It looks like a 1ky solar cycle.
Perhaps the TSI data scientists should look at their hypothesis again.

CET data go back to 1659, but are by no means accurate before the 1850s; even so, they were compiled erroneously until January 2016 (when the error was brought to the Met Office’s attention by the undersigned).
m vukcevic

As I remember, it was minuscule, Vuc. My point is that the LIA was not continuous cold but spells of colder conditions, based on a mean over a degree colder than now. And good old climate variability ruled in England.

Yes, it was a very small error, but you would expect that during years and years of compilation someone would have noticed.
It is generally accepted that the LIA was colder than the MWP or the current warming, as illustrated by the narrative, making a case for the 1ky cycle occasionally quoted here and elsewhere.
Neither of these periods was consistently cold or warm; as a matter of fact (according to the CET data), the early 1700s were very similar to recent decades.

Leif,
I normally appreciate your comments and respect what you contribute, although I think you are being a little zealous in your remarks above. To wit, you said, “Extrapolating to 261 years the noisy measurements of only 3 or 4 sunspot cycles is bad science.” OK, I can support that. However, you then say, “There is now general agreement that the changes in TSI are due to changes in the sun’s magnetic field. We have measured the latter with accuracy since the 1970s.” So, it would seem to me that, strictly speaking, we can only demonstrate a correlation between TSI and the solar magnetic field “since the 1970s.” Going back farther requires some unstated assumptions about “all other things being equal.” Furthermore, the inability to measure the magnetic field directly two or three centuries ago requires a proxy for the magnetic field if the relationship between magnetic field and TSI is to be of explanatory utility.

Logically, one would not expect the minimum TSI to vary significantly from the baseline of zero sunspots, if sun spots are either responsible for, or good proxies for, variations in the solar magnetic field. However, that doesn’t close off the possibility that there is a spurious correlation with something that has not yet been identified. On the other hand, perhaps there isn’t a linear relationship between sunspots, magnetic field, and TSI above the base level. After all, we are depending on “the noisy measurements of only 3 or 4 sunspot cycles…”

We constructed the first multi-isotope composite based on one global 14C and six local 10Be records, using a new Bayesian approach (Chap. 3). All six 10Be records were first synchronized with respect to the 14C record using a wiggle-matching method. Next a Monte Carlo simulation was performed to search for that solar modulation potential which best fits all the available isotope data sets at any given time. This composite is considered more robust compared to other composites constructed linearly. . . .
Next, we use the SATIRE-M model and the first multi-isotope composite to reconstruct the solar irradiance over the last 9 000 years. This is the first SSI reconstruction that not only uses physics-based models to describe all involved non-linear physical processes, but is also based on a multi-isotope composite. . . .
The TSI/SSI reconstructions with simulated cycles are consistent with the reconstructions based on the directly-observed sunspot numbers. This final solar irradiance reconstruction has been provided as a solar forcing input to climate models . . .

Going back farther requires some unstated assumptions about “all other things being equal.”
Not quite. It has the natural assumption that the Sun did not behave differently back then and that the response of the Earth to a given solar effect has not changed. You can only challenge those two by actually providing evidence that they are false. No such evidence has been compelling.

“It has the natural assumption that the Sun did not behave differently back then and that the response of the Earth to a given solar effect has not changed. You can only challenge those two by actually providing evidence that they are false. No such evidence has been compelling.”

If evidence has to be compelling to you then no amount of evidence would ever reach that bar. However compelling evidence to referees and editors has already been produced.

“Aims. The Sun shows strong variability in its magnetic activity, from Grand minima to Grand maxima, but the nature of the variability is not fully understood, mostly because of the insufficient length of the directly observed solar activity records and of uncertainties related to long-term reconstructions. Here we present a new adjustment-free reconstruction of solar activity over three millennia and study its different modes.

Methods. We present a new adjustment-free, physical reconstruction of solar activity over the past three millennia, using the latest verified carbon cycle, 14C production, and archeomagnetic field models. This great improvement allowed us to study different modes of solar activity at an unprecedented level of details.

Results. The distribution of solar activity is clearly bi-modal, implying the existence of distinct modes of activity. The main regular activity mode corresponds to moderate activity that varies in a relatively narrow band between sunspot numbers 20 and 67. The existence of a separate Grand minimum mode with reduced solar activity, which cannot be explained by random fluctuations of the regular mode, is confirmed at a high confidence level. The possible existence of a separate Grand maximum mode is also suggested, but the statistics is too low to reach a confident conclusion.

Conclusions. The Sun is shown to operate in distinct modes – a main general mode, a Grand minimum mode corresponding to an inactive Sun, and a possible Grand maximum mode corresponding to an unusually active Sun. These results provide important constraints for both dynamo models of Sun-like stars and investigations of possible solar influence on Earth’s climate.”

IMO, Earth’s Pleistocene climate also has three modes: glacial maxima, glacial and interglacial. Their main causes owe to Milankovitch cycles and tectonics, not to shorter-term solar variability.

During glacial maxima, such as the last one which began around 26.5 Ka, permanent sea ice grows around the North Atlantic, which in winter freezes over, as does the Arctic Ocean even in interglacials. Like interglacials, glacial maxima can last thousands of years (although probably not the tens of thousands of the longest interglacials). During the rest of glacial time, there are stadial and interstadial intervals.

Leif,
Need I remind you that “speculation” is what hypotheses are born from? The number that “sign on” is irrelevant. Science shouldn’t be a popularity contest. A claim, supported by empirical measurements, has been presented in response to your demand that evidence be produced to demonstrate that the sun has behaved differently in the past. Now, unless you can demonstrate that the paper is flawed, we would seem to have evidence to support the idea that “all other things are NOT equal.”

Once again, Leif brings to the forefront the basic tenets of research critique. I wish this were a required class at the senior-high and freshman-college level and beyond. If students find it beyond their academic level, then they should consider whether or not academia is their strong suit. Which raises the question of whether or not commentators in this thread are up to the task. Take your bias-colored glasses off and apply good research-critique methods, lest you succumb to the snake-oil salesperson.

Hugs, as you note there are many factors influencing weather and climate. And further complicating matters is the need to look at accumulations over time of changes in solar energy incident upon the surface. Dan Pangburn did an engineering analysis to estimate the order of magnitude effects. My synopsis is at https://rclutz.wordpress.com/2016/06/22/quantifying-natural-climate-change/

Interesting in itself, but physicists have been telling us for years that variation in TSI is not adequate to explain climate changes. TSI and sunspot numbers are symptoms of what the sun’s magnetic field is doing. The remarkable correlation of Sun spot numbers, (SSN), total solar irradiance (TSI), solar magnetic flux, cosmic ray intensity, production rates of 14C and 10Be, ionization in the atmosphere, cloud production, and global temperature over thousands of years provide geologic evidence for a likely solar cause of climate change.

I agree Don. Even if TSI varies little over the past 300 years, what if another solar variation is what causes climate to change? The Sun is a complex star and we are trying to understand it through very simple measurements and proxies. Sometimes we simply need to say “We don’t know.”

The range of the TSI variability on millennial time scales for the three used isotope series is about 0.11% (1.5 W/m2 ). . . . The TSI/SSI reconstructions are available at the webpage of MPS “Solar Variability and Climate” group”

The remarkable correlation of Sun spot numbers, (SSN), total solar irradiance (TSI), solar magnetic flux, cosmic ray intensity, production rates of 14C and 10Be,
Shows that all these measures agree and that therefore the variations are much too small to explain climate change.

If SSN were 93 for a million years, the oceans would be cooling for that million years. If then suddenly the SSN increased to 95 and stayed there for a million years, the oceans would be warming for that next million years. I hope you can see how ridiculous that is.

Following your ‘logic’: if the SSN were 93 for a million years, the oceans would be steadily cooling for that million years. If then suddenly the SSN increased to 95 and stayed there for a million years, the oceans would be steadily warming for that next million years. I hope you can see how ridiculous that is.

In addition, the average SSN since 1700 is 78. This means that the difference between SSN and your 94 is generally negative, so the accumulated SSN for 1700-now is minus 4800, meaning that we would have steadily cooled since the MM…
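The back-of-the-envelope arithmetic in that reductio can be reproduced directly, taking the figures as stated in the comment (average SSN of 78 since 1700, a hypothetical "break-even" SSN of 94, and roughly 300 years):

```python
avg_ssn = 78          # average sunspot number since 1700 (figure quoted above)
equilibrium_ssn = 94  # the hypothetical "break-even" SSN in the argument
years = 300           # roughly 1700 to the present

# Accumulated deficit in sunspot-number-years under the cumulative-SSN logic
accumulated = (avg_ssn - equilibrium_ssn) * years
print(accumulated)  # -4800
```

Which is the "minus 4800" in the comment: under a naive cumulative-SSN model, the oceans would have been cooling continuously since the Maunder Minimum.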

I personally think you’ve crossed over into misrepresenting both me and SORCE.

Let’s look at slide 61 of your pdf – you claim SORCE isn’t following SSN or F10.7cm anymore, as you indicated in two charts using annual numbers.

You neglected to include in your slide that the Oct 2014 SSN peak and last quarter high F10.7cm flux drove the Feb 2015 TSI peak, a normal few month lag that was obscured by your use of annual numbers, supporting your faulty statement:

“TSI (SORCE/TIM) no longer following Sunspot Numbers nor F10.7 Flux
I have been following this for some time and am puzzled by this behavior of my ‘Gold Standard’”” – slide 61

There is no cumulative effect as the Earth radiates away as much as it receives.

If that were true at all time scales, the surface temperature would be constant and would never warm or cool, and we would not be having this discussion. The troposphere is not in radiative equilibrium the way the stratosphere almost is.
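The point that a step in forcing produces gradual warming toward a new equilibrium, rather than an instantaneous re-balance of energy in and energy out, can be illustrated with a toy zero-dimensional energy-balance model (all numbers here are illustrative assumptions, not measurements):

```python
# Toy energy balance: C * dT/dt = F - lam * T
# T   : temperature anomaly (K)
# F   : step forcing applied at t = 0 (W/m^2), illustrative
# lam : feedback parameter (W/m^2/K), illustrative
# C   : effective heat capacity in W*yr/m^2/K, so time is in years
F, lam, C = 1.0, 1.0, 10.0   # C/lam sets a ~10-year relaxation time

dt = 0.1   # years per step
T = 0.0
for _ in range(1000):              # integrate 100 years, forward Euler
    T += dt * (F - lam * T) / C

# After many time constants T approaches the equilibrium F/lam, but it
# takes decades to get there: emitted != absorbed during the transient.
print(f"T after 100 yr: {T:.4f} K (equilibrium = {F/lam} K)")
```

During the whole approach to equilibrium the Earth in this sketch radiates less than it receives, which is the cumulative effect being argued about.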

If “climate” means global average “surface” temperature, then it hasn’t changed much over the Holocene, at least since the 8.2 Ka cold snap associated with the last outburst of ice-sheet meltwater. Of course no one can know GAST with any precision today, let alone 8000 years ago, but some have ventured rough guesses.

A ballpark estimate of a swing of about three degrees C is out there, from maybe two degrees warmer than now during the Holocene Climatic Optimum to one degree lower during the LIA and maybe other cool intervals. Could be more, but call it about 1.1% of present GAST around 288 K. That’s around ten times the variation in TSI.
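Those percentages are easy to check with the round numbers the comment uses (a ~3 K Holocene swing, GAST ~288 K) plus a solar-cycle TSI variation of ~1.4 W/m2 out of ~1361, an assumed figure consistent with the ~0.1% cycle amplitude discussed elsewhere in the thread:

```python
gast = 288.0      # present global average surface temperature, K
swing = 3.0       # rough Holocene temperature swing, K
tsi = 1361.0      # W/m^2
tsi_cycle = 1.4   # approximate solar-cycle TSI variation, W/m^2 (assumed)

swing_pct = 100 * swing / gast    # ~1.04%
tsi_pct = 100 * tsi_cycle / tsi   # ~0.10%
print(f"GAST swing: {swing_pct:.2f}%  TSI variation: {tsi_pct:.2f}%  "
      f"ratio ~ {swing_pct / tsi_pct:.0f}")
```

The temperature swing is indeed roughly ten times the relative TSI variation, which is the comment's point.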

However, since the high-energy, short wavelength end of the UV spectrum varies a lot more than TSI (or the visible and IR spectra), solar effects on climate, IMO, can’t be ruled out. The UVC and UVB bands are absorbed by the air, making ozone out of O2 molecules in it (UVC completely and UVB by about 90%). Plus, the UVB which makes it to the surface and UVA contribute to warming the ocean.

So, while its fluctuating share of TSI never amounts to much, UV boasts a unique qualitative distinction from the rest of the sunlight spectrum, with climatic effects.

You are probably familiar with Fligge and Solanki’s 2000 study, which estimated that UV had increased by about three percent since the Maunder Minimum.

Although the total (and visible) irradiance has only increased by roughly 0.3% since the Maunder minimum the enhancement of UV and NUV radiation during the last 3 centuries is ten and four times larger, respectively. The variability of the IR was only moderate, i.e. at the 0.15% level.

short wavelength end of the UV spectrum varies a lot more than TSI
That is to say that Bill Gates’ wealth varies like the loose coins in his pocket.
The high energy UV varies a lot less in absolute terms [i.e. Watts/m2].

But you are off topic, because the issue under discussion is TSI. Not the UV.

It’s only like Gates’ loose change if you consider only quantitative energy, not the qualitative difference which ozone-making and breaking UV has with the rest of the spectrum.

Nor IMO is the issue off topic, since the question is the effect of solar variation on climate, not strictly limited to TSI. Spectral variation, magnetic flux and all aspects of solar-climate connections are relevant.

The problem lies in precision and accuracy, which is the point of the post. In your ppt, on slide 6, you show the correlation of EUV to magnetic flux; the correlation has an R^2 of 0.961 over 20 years, and this is not good enough. As I’ve stated before, Kopp (2016) calculated we need an accuracy (not just a correlation) of >0.999 (page A30-p5).

Then in slides 58 to 62 we find that your function does not match SORCE/TIM very well, so you conclude that SORCE/TIM is in error. Well, this is possible, but if so, what evidence do you have that your function is correct? I don’t see anything in the ppt but correlations between proxies. So what? Anybody can correlate anything to anything; that doesn’t mean anything. You seem to be saying that we should throw out the data and use your model.

Leif,
Your ppt slide 61 is key. SORCE/TIM, the best measurement of TSI that we have, does not follow the best estimate of the sunspot number (SSN) or the F10.7 microwave flux. You conclude that the TSI measurement is wrong, but that is just opinion. It could be that SSN is measuring something else or is an inadequate proxy; we don’t know. Likewise, the F10.7 cm microwave flux may not be well related to TSI. These are just as likely as SORCE/TIM being wrong.

It is true that there may be calibration problems with TSI (see my figure 6). But if so, you have no solid data to compare to; you are left comparing proxies to proxies. What does that tell you? Not much.

Not to butt in, but that is my point on much of this discussion. So many intercorrelated potential modifying variables make a tough row to hoe, particularly when you bring up the true bottom line: correlation is not causality. Still, a very interesting post and comments.

Covington then showed that the 10.7cm Solar Flux correlates with indices of solar activity such as sunspot number and total sunspot area, with the advantage over those indices that the measurements are completely objective, and can be made under almost any weather conditions. Since it is closely correlated with magnetic activity, it correlates closely with other activity indices and, since magnetic activity modulates the Sun’s energy output, with solar irradiance.

Leif missed something in slide 61. Does he expect TSI to peak at the same time as sunspots and F10.7cm peak?

“TSI (SORCE/TIM) no longer following Sunspot Numbers nor F10.7 Flux. I have been following this for some time and am puzzled by this behavior of my ‘Gold Standard’” – slide 61

2014 was the sunspot peak and F10.7cm peak year for SC24, and 2015 was the TSI peak year, as his two graphs of annual numbers on slide 61 show. This obscures the temporal relationship between them that can be seen on a finer scale.

He didn’t include in the slide that the 2014 last quarter high F10.7cm flux and sunspot number drove the 2015 first quarter TSI peak, with a normal few month lag, that was obscured by using annual numbers.

Leif missed something in slide 61. Does he expect TSI to peak at the same time as sunspots and F10.7cm peak?
Not necessarily, but over longer time intervals the two should generally follow each other. The difference between TIM/SORCE/TSI and Belgian/TSI, F10.7, MgII, SN, and generally all solar indicators is steadily the last decade, indicating that SORCE/TIM/TSI is the odd-man-out. All the other indicators agree on this. And BTW, the LASP people are now agreeing with me [and the Belgians] that they have a problem. They are chosen not to admit that publickey for the time being, awaiting more data from the new TSIS instrument.

It’s probably not relevant one way or the other. On our 70%-water world, average depth 12,000 ft, the manner in which the oceans absorb, convey and release this energy input is where we need more information. The Sun shines on some part of the ocean 24/7 and has for a very long time. Looking for “signals” in time-series comparisons of solar radiation changes is not very productive, so far. Too many modifying variables are involved.

Looking for “signals” in time series comparisons of solar radiation changes is not very productive, so far. Too many modifying variables are involved
If so, it would also not be very productive to claim that the data show that the sun is the main driver of climate change on the time scale of the series.

It is, however, the main driver of the climate; its minor variations, probably not so much. Energy in and energy out is the basic driver. Those modifying variables (long term: orbital eccentricity, precession, inclination, tectonic-plate movement; shorter term: clouds, impacts, volcanism and so on), together with absorption, storage, transport and regurgitation by the oceans, make weather and ultimately climate. A complex, undefined, multivariate, possibly chaotic system at work.

Also, dividing the TSI by 4 has always sounded a little too unscientific to me. Shouldn’t the number be slightly under 4 because of the angle of refraction?

Have at it: the Earth’s radius is ~4000 miles and the atmosphere’s thickness is about 60 miles, only the lower portion of which will refract incoming light (I’ve seen it described as wrapping clingfilm around a basketball). Feel free to use 3.999 instead.
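For what it's worth, the factor of 4 is exact geometry, not refraction: the Earth intercepts sunlight over its cross-sectional disc (pi R^2) but the energy is averaged over the full sphere (4 pi R^2). A minimal check, using the TSI ~1361 W/m2 and albedo ~0.3 figures from the head post:

```python
import math

R = 6371e3                    # Earth's radius in metres (the value cancels out)
disc = math.pi * R**2         # cross-section that intercepts sunlight
sphere = 4 * math.pi * R**2   # surface area the energy is averaged over
factor = sphere / disc        # exactly 4, independent of R

TSI = 1361.0
albedo = 0.3
average_in = TSI / factor             # ~340 W/m^2 averaged at top of atmosphere
absorbed = average_in * (1 - albedo)  # ~238 W/m^2 absorbed after reflection
print(factor, round(average_in, 2), round(absorbed, 1))
```

Refraction bends rays within the thin atmospheric shell but does not change the disc-to-sphere area ratio, so 4 is not an approximation.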

If you believe that a flat-earth theory of average sunlight at an average latitude on an average day of the year, with an average albedo and a single average surface temperature, can tell you something useful.

Well, K-T is not a flat earth model, but a ball suspended in and evenly warmed by a bucket of warm 342 W/m^2 poo model.

Still simplistic, unrealistic and dumb.

Well, the K-T model begins by assuming a flat plate of area 1/4 the surface area of the true sphere.
That flat plate is illuminated on only one side by a constant average sun at (as you correctly point out) a constant 342 watt/m^2 rate.
But then an equilibrium state is somehow obtained, but all of the energy is somehow lost by LW radiation from only the one side that is illuminated!

It is a perfectly valid physics model: an Einstein-like mental-gymnastics exercise for a flat-plate, massless grey body, isolated in space, insulated underneath, under a constant insolation field and an ideal gas of uniform pressure, temperature, humidity and clouds.

Not true: the emission is from the whole surface, hence the factor of 4 in the energy balance.

False. The first assumption in the full physics derivation of the Trenberth model is that “Surface area illuminated = Surface Area radiating” … Followed by more approximations and assumptions for several pages of seemingly elaborate theory.
True flat plates radiate from 2x the area illuminated.
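The disputed factor can be made explicit with the Stefan-Boltzmann law. A sketch assuming a blackbody plate (emissivity 1) and the ~240 W/m2 absorbed flux from the head post; the numbers only illustrate why the emitting area matters, not which model is right:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
absorbed = 240.0   # W/m^2 absorbed on the illuminated side (head-post figure)

# Plate radiating from ONE side: sigma * T^4 = absorbed
T_one_side = (absorbed / SIGMA) ** 0.25       # ~255 K, the familiar value

# Plate radiating from BOTH sides: 2 * sigma * T^4 = absorbed
T_two_sides = (absorbed / (2 * SIGMA)) ** 0.25  # ~214 K, colder by 2^(1/4)

print(f"one-sided: {T_one_side:.1f} K, two-sided: {T_two_sides:.1f} K")
```

The one-sided balance reproduces the standard ~255 K effective temperature; doubling the emitting area lowers the equilibrium temperature by a factor of 2^(1/4), which is the crux of the flat-plate argument above.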

Andy, good article showing that the Sun is neglected with insufficient information to do so. I think your central point is well supported on the available evidence.

Given the way the composites have been generated, we only have two points to work with in determining a long-term solar trend: the lows of solar cycles 21-22 and 22-23.

I have something to add to this point.

Although the relationship between solar activity and its proxies is less tight than what some solar physicists would like us to believe, both the smoothed sunspot number at solar minima, and the number of spotless days per solar minimum, support a lower level of activity for the SC22-23 minimum than for the SC21-22 minimum.

I find this to be out of line with the number of x-ray flux days of “A0.0” or lower as occurred during the last minimum (true for zero sunspot days too). We are just now ahead of 2006 in “A0.0” days, with SORCE TSI tracking near 2006 too:

TSI and other solar indices being less tightly correlated than expected is because it takes time for the sun’s plasma to re-organize and diminish after new active regions emerge, grow, then decay. The plasma activity, and the TSI generated by it, persists for several months before tapering off, and new regions will add to it.

The emergence of new regions will keep TSI kiting higher, until lower activity ensues. There is a lag at the sun between sunspot activity peaks and peak TSI from them. At the end of the cycle, where we’re at now, the emergence of very small spots will keep TSI kiting upward in small pulses, as it did this summer. Now the TSI 90day trend is falling but the level still isn’t where it was during the last minimum for times of similar sunspot number and F10.7cm. Why? The sun’s surface plasma hasn’t diminished enough yet, as evidenced by the fact we haven’t had as many zero sunspot days nor x-ray A0.0 days.

SORCE TSI should fall further as the number of x-ray A0.0 and zero-sunspot days increases into the minimum. Whatever difference in correlation to F10.7cm at this solar minimum compared to the previous minimum will define the true degradation of SORCE for me, and TSIS-1 is on the way. PMOD is over-corrected, in my view. How do we know PMOD is wrong today? Because they’ll change it in the next version in a few months! There have been up to 9 changes to most years of PMOD TSI since version 1508.

“PMOD versions since 2016, especially the last version are showing a decline into this minimum to a lower level now than the previous 2008/9 minimum:
I find this to be out of line with the number of x-ray flux days of “A0.0” or lower”

Thanks Javier. I appreciate that PMOD is supported by the proxy trends, I’m less sure if this is significant or not. I’m neutral at this point in the great ACRIM versus PMOD debate.

What is clear to me (for what that is worth) is that we don’t have accurate enough measurements for long enough, to establish a TSI trend. Further, we can’t even be sure TSI is the whole story, other solar factors may be important.

My wish is that the IPCC would take solar activity variations at least as seriously as they take CO2 levels. After all, there is no more evidence that CO2 is causing warming than there is that the Sun is doing it, probably less. And then, gasp! maybe they are both involved? Do you think?

“What is clear to me (for what that is worth) is that we don’t have accurate enough measurements for long enough, to establish a TSI trend.”

I agree. I would even go further and say that even establishing a trend from current measurements is shaky. Instruments slowly die while measuring TSI and that is not a simple problem to solve.

“Further, we can’t even be sure TSI is the whole story, other solar factors may be important.”

Or we can be pretty sure that TSI IS NOT the whole story on solar activity effect on climate.

“My wish is that the IPCC would take solar activity variations at least as seriously as they take CO2 levels.”

That is nearly impossible, as the IPCC is a political body created with the sole goal of blaming climate change on human factors. Unless solar variability can be blamed on humans and taxed, there’s no chance.

Half the reason the climate change models have no predictive power is their complete omission of Henry’s Law of Solubility. That’s the physical phenomenon by which the oceans regulate the concentration of CO2 in the atmosphere. It is a negative feedback that mitigates the forces that would tend to change the concentration, namely the biosphere, volcanoes, and human activity.
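For readers unfamiliar with it, Henry's Law says the dissolved concentration is proportional to the partial pressure, c = kH * p, with kH falling as water warms. A minimal sketch of that temperature dependence, using standard literature values for CO2 (kH ~0.034 mol/(L·atm) at 298.15 K and a van 't Hoff coefficient of ~2400 K, from Sander's compilation); these constants are assumptions pulled from the literature, not from this thread:

```python
import math

KH_298 = 0.034       # Henry's constant for CO2 in water at 298.15 K, mol/(L*atm)
VANT_HOFF = 2400.0   # d(ln kH)/d(1/T) in K, standard literature value for CO2

def henry_co2(T):
    """Henry's constant for CO2 at water temperature T (K), mol/(L*atm)."""
    return KH_298 * math.exp(VANT_HOFF * (1.0 / T - 1.0 / 298.15))

# Colder water dissolves more CO2 at the same partial pressure -- the
# ocean-side negative feedback the comment describes.
p_co2 = 4e-4                         # ~400 uatm partial pressure of CO2
c_cold = henry_co2(288.15) * p_co2   # dissolved CO2 at 15 C, mol/L
c_warm = henry_co2(298.15) * p_co2   # dissolved CO2 at 25 C, mol/L
print(f"15 C: {c_cold:.2e} mol/L vs 25 C: {c_warm:.2e} mol/L")
```

Whether GCMs actually omit this chemistry is the comment's contested claim; the sketch only shows the solubility relationship itself.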

The other half of the model failure is its assumption of a constant albedo. Cloud-cover albedo adjusts daily to the intensity of solar radiation. It’s the cloud burn-off effect, and it amplifies TSI. It is a positive feedback to solar radiation that tends to account for the now-dated published reports of atmospheric amplification of TSI. It’s a phenomenon that occurs at low latitudes in the headwaters of the so-called ThermoHaline Circulation, where the heat is carried in the mixed layer to the poles, then to the deep ocean, to emerge at the Equator on a time scale of one millennium. Those lags appear in the history comparing Sea Surface Temperature to solar radiation.

So Anthropogenic Global Warming (AGW), the model that concludes that CO2 and not the Sun is controlling Earth’s surface temperature variations, is wrong twice, both times on the physics. It’s like a vast mobile not fastened at the ceiling.

A model with no predictive power is not a real scientific model, regardless of whether it meets all three of Popper’s “intersubjectivity” criteria of peer review, consensus support, and publication in an approved professional journal (“even if this is limited to a circle of specialists,” he added). This, along with eschewing definitions and pragmatism, comprises the steps in Popper’s deconstruction of Modern Science, by which he removed objectivity from Modern Science, tossing the carcass to academics and the “publish or perish” movement.

Of course, holding academic science (i.e., Post Modern Science) to the standards of Modern Science is unfair, but it is inevitable as the promise of AGW, AKA “climate change”, falls farther and farther behind the curve.

There is an increasing number of papers making the claim that warming has driven a decline in tropical low-level cloud cover. It doesn't seem to occur to the authors that the decline in low cloud cover has driven the warming, primarily because they have a big fat 'official' CO2 warming signal in their minds that they must apply to the models without fail; so it must be anthropogenic warming what done it, look, what on Earth else could it be, etc.

Everything in a GCM is ultimately a programmer decision. And it doesn't matter whether albedo is parameterized, pasteurized, or homogenized. In the Real World, albedo is dynamic, a positive feedback to and an amplifier of solar radiation. In the models albedo is static — a constant. Result: the models cannot get the effects of the Sun right.

The part of a model that counts is its predictive power, and the only accessible prediction known within those GCMs is Equilibrium Climate Sensitivity (ECS). But the models get that number, one critical to their distant projections beyond our lifetimes and experience, wrong by a huge amount, plus a sign error! The IPCC reports ECS >1.5 C/2xCO2 (90% confidence), >3 (50%), and >4.5 (17%). Those form a straight line in log space, yielding a pitiful 2.2% IPCC confidence for the 0.7 C/2xCO2 estimated by Lindzen & Choi (2011).

The IPCC defines ECS as the rise in temperature following a (step) increase in CO2. That's the way it works in the models, but not in the Real World. Real CO2 doesn't lead temperature; atmospheric CO2 lags surface temperature. Lindzen & Choi assumed a lead/lag relationship. The L&C ECS would be more accurately stated as -0.7 C/2xCO2.

Real science requires both fidelity to definitions and empirical support for parameters, most especially leads and lags.

L&C make sense to me because, on a homeostatic water world, there is good reason to conclude that net feedbacks should indeed be negative. Hence, a lab value of 1.1-1.2 degrees C per doubling of CO2 concentration might well turn out to be 0.7 degree C in the real, complex climate system.

Add in other human effects besides GHGs, and the sign of total man-made effects on climate could be negative as well, i.e., net cooling. But in any case, the effect is negligible on a global basis. Locally and regionally, the effects can be more detectable.

You said, “Then after subtracting the energy reflected by the atmosphere and the surface, we find the average radiation absorbed is about 240 W/m2.”

I would submit that the stated amount absorbed is an upper-bound because NASA treats the reflectivity of water as though the sun always has a small angle of incidence, and ignores the impact of specular reflection on the limbs of the Earth.

Clyde, I think you are correct. My calculation (and Trenberth’s) is an over-simplification. I apologize for that, but I was trying to toe the IPCC line in the intro so I could quickly move on to the point of the piece, which is that we are trying to come to conclusions about the Sun’s influence over long time periods without observations that are sufficiently accurate.

The data is averaged for 1 AU (line 5), but if you look (line 10, iirc) it includes the actual reading, if I can find the original dataset. Might have to hunt down the SORCE raw data again and double-check; it's been a bit.

That being said, in another discussion I also brought up that using absolute temperature we see a change of 0.8 kelvin out of 288 K, a 0.3% change in temperature. If there were a direct correlation, that would mean a 4 W/m^2 change in 1365 W/m^2 would account for all of the changes to date.

I have seen anything from 1/1.5 to 4 or even 6 in that same time period though.
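The arithmetic above can be written out explicitly. The inputs are the figures quoted in the comment (assumptions, not authoritative values); the Stefan-Boltzmann line is an added comparison, since radiative equilibrium temperature scales as the fourth root of irradiance rather than linearly:

```python
# Back-of-envelope version of the direct-correlation argument
# (numbers taken from the comment, not authoritative values).
T_SURFACE = 288.0  # K, approximate mean surface temperature
DELTA_T = 0.8      # K, warming to date as quoted
TSI = 1365.0       # W/m^2, total solar irradiance as quoted

frac = DELTA_T / T_SURFACE       # fractional temperature change, ~0.3%
delta_s_linear = frac * TSI      # TSI change implied if T scaled linearly

# Under Stefan-Boltzmann scaling (T ~ TSI**0.25) the required change
# would be four times larger:
delta_s_sb = 4.0 * frac * TSI

print(delta_s_linear, delta_s_sb)  # roughly 3.8 and 15.2 W/m^2
```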

As discussed, there seem to be a number of positive feedback factors that accentuate warming during periods of high solar output and conversely accentuate cooling during periods of low solar output.
Some potential factors have not been mentioned, so does anyone have further information or thoughts on these?
1)
UV output from the sun contributes about 10% of TSI.
UV output seems to vary as much as 10 times that of the visible and IR spectrum (How Does the Sun’s Spectrum Vary? Judith L. Lean)
During periods of higher solar output, UV output increases more than the visible and IR. Increased UV breaks down ozone in the atmosphere; as a consequence, less UV is absorbed in the upper atmosphere, causing it to cool, while more UV reaches the surface, warming in particular the oceans, which readily absorb UV.
This is potentially an additional positive feedback, which also reinforces cooling during periods of low solar output: reduced UV increases ozone, warming the upper atmosphere, which can more readily radiate heat to space, while reducing the UV energy absorbed by the oceans.
Would anyone have any thoughts on this process and perhaps some actual solid numbers on the magnitude of this effect if any?
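Taking these figures at face value (10% of TSI in the UV, UV varying ten times as much fractionally as the rest, and a ~0.1% solar-cycle change in TSI, all assumptions rather than measurements), here is a quick sketch of the share of the cycle variation the UV band would then carry:

```python
# Share of the solar-cycle TSI change carried by the UV band, under the
# comment's assumptions (all inputs illustrative):
TSI = 1361.0                 # W/m^2
CYCLE_DELTA = 0.001 * TSI    # ~0.1% cycle change, about 1.4 W/m^2
UV_FRACTION = 0.10           # UV share of TSI
UV_AMPLIFICATION = 10.0      # UV fractional variability vs visible/IR

# Total change = vis/IR part + UV part; solve for the vis/IR fractional change f:
#   CYCLE_DELTA = (1 - UV_FRACTION)*TSI*f + UV_FRACTION*UV_AMPLIFICATION*TSI*f
f = CYCLE_DELTA / (((1 - UV_FRACTION) + UV_FRACTION * UV_AMPLIFICATION) * TSI)
uv_share = UV_FRACTION * UV_AMPLIFICATION * TSI * f / CYCLE_DELTA

print(uv_share)  # ~0.53: over half the cycle change would be UV
```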

2)
We have seen an 18% increase in cosmic rays between March 2018 and July 2018 (Spaceweather.com).
This is during a period of decreasing solar activity and also decreasing magnetic field strength on Earth.
As discussed by many, weakening of the solar and Earth's magnetic fields causes an increase in cosmic ray influx, which can lead to increased cloud seeding, increased albedo, and thus more reflection and cooling.
The question here: is there any mechanism that would cause the Earth's magnetic field to be weakened if the Sun's magnetic field is weakening?
If yes, then perhaps this is an additional positive feedback factor; perhaps there is no interaction; or perhaps the effect is opposite and the overall feedback is negative. Would anyone have further thoughts on this?

I think the weakening of the earth’s magnetic field at the same time the sun’s magnetic field is weakening is random. Not related.

That said, it is of paramount importance as far as the climate is concerned when the two fields are in sync, as they now are.

The questions are as follows:

By what magnitude will the strength of both fields decrease?

How fast, in the case of the geomagnetic field, will the weakening progress?

How far away from the geographical poles will the North and South magnetic poles migrate?

What will be the duration of the weakening magnetic field events (both solar and geo)?

What are the threshold levels of the weakening of the two combined fields that would result in major climatic changes as opposed to minor changes? That is the 64 million dollar question and I do not know what the threshold levels are, except to say that they are out there.

There is a strong case to be made that if galactic cosmic rays increase enough, they are going to have climatic impacts ranging from geological activity to global cloud coverage.

Changes in the global electrical circuit/Forbush events lend support to the galactic cosmic ray global cloud cover connections. It however can be masked at times, as many of the solar/climatic connections are if threshold levels are not attained.

Every G-class star observed for a significant period has shown large flaring activity, some of it on the order of X70 class (the Carrington Event was an X40-class flare).
If our sun did that on a regular basis, we would not be here.

I don't find it amazing at all. Observer bias. Were it not extremely stable, we wouldn't be here discussing it.

The Universe has a huge number of things set in such a particular way as to make us possible. From the values of some universal constants to the curious fact that solid water has a lower density than liquid water. Only two possible explanations. Either God, or we are just in the right Universe/Solar System/Binary Planet of the nearly infinite possibilities, and we are just suffering from a very acute case of observer bias. My mind chokes when I reach that point. It wasn’t designed by evolution for that task.

The Drake equation, popularized by Carl Sagan, is absolutely useless, pure speculation. This universe is our universe because it has been possible for us to exist in it until now. No guarantees from now on.

Joel — agree. Probably the reason life, and after a very long time intelligent life (such as it is), developed is the very stability of Old Sol. Yes, it VERY gradually increases in brightness, but so slowly that life has adapted and the earth itself has responded to maintain it.

The official treatment of errors is not done in the classic manner.
The work fails, and fails badly.
There has to be an explanation of why there is a departure from classic error analysis.
In this out-in-space setting, the most plausible way to measure accuracy is to sample the variable TSI with different styles of equipment, then to construct an overall envelope that contains most of the data, sometimes with implausible outliers excluded.
We have several sets of instruments on different satellites. None of them has been shown capable of measuring the main variable to the accuracy that is sought for meaningful further use. The range of values, from about 1360 to 1374 W/m2 equivalent, some 14 W/m2, is well beyond the +/-0.1 W/m2 that researchers would like to be able to measure for uses such as determining the absolute variation of TSI. Two orders of magnitude worse.
In classical analysis (here roughly eyeballed, though the principles apply), the error should be expressed in the first place as 1366 +/- 6 W/m2, with the number of sigmas stated. Next, if there is a reason to reduce this error, as by an adjustment to the readings from one satellite after another, then the error envelope can be lessened, provided that (and this is a very big proviso) the errors involved in that adjustment are known and insignificant.
In the present case, ab initio, one does not know if one satellite gave more accurate results than another (or, indeed, if they are not all wrong). In the ideal case, an adjustment would not be allowed on the data from one satellite unless that adjustment can be measured in some way, as by on-ground post-event simulation of the error and its magnitude. For an imaginary example, it might be found that one satellite did not capture enough irradiation, and this could be corrected by changing the shape/size of a mask on a similar device on the ground.
Proxies are far too inaccurate to be used to adjust TSI. Mostly, they are calibrated against TSI so circular logic arises.
Please note that I have not discussed precision, being the scatter of results about a mean that hopefully has no scatter in the ideal case.

No amount of adjustment will lead to the correct physics unless the correction can be reproduced and validated, and its magnitude and sign and intrinsic error validated. In that I wish the researchers the best of luck, because in the classical sense one would not use these results because they are not accurate enough. Geoff.
(p.s. I have not yet read the comments of others, but I shall, so some of what I wrote might be discounted already.)

one does not know if one satellite gave more accurate results than another
The differences between satellites are not due to random ‘errors’, but to systematic differences stemming from the different constructions of the sensors and from their sensitivity to degradation in the harsh space environment. For the most part, those systematic differences are understood and can be corrected for. The composite record is thus much more accurate than each of the individual series that go into it.

Proxies are far too inaccurate to be used to adjust TSI. Mostly, they are calibrated against TSI so circular logic arises.
Both of these statements are not correct. In particular since proxies are not used to adjust TSI.

This cannot be true
Of course it is true. When you identify and correct the systematic errors that the individual series have, the composite will be free of those known errors and thus much more accurate.
Even if you just blindly average the series without correcting for the systematic errors, you still get a composite that is more accurate. This is the standard justification for averaging: the resulting error decreases with the square root of the number of individual data points: https://en.wikipedia.org/wiki/Standard_error

Of course it is true. When you identify and correct the systematic errors that the individual series have, the composite will be free of those known errors and thus much more accurate.
Even if you just blindly average the series without correcting for the systematic errors, you still get a composite that is more accurate.

There are two statements here. #1: correcting systematic errors in the components and then building a composite reduces errors. This is only true if the corrections are correct. There is no agreement, and much disagreement, about the corrections, so this isn't true.

#2: averaging the satellite measurements will reduce the error. This is only true if each measurement has an equal chance of sampling the true value, which is clearly not the case. The first estimate was 1371, the second was 1367, the third was 1365 (3 satellites), the fourth was 1361. An outlier, the fourth estimate of 1361, is now thought to be most correct, but no one seems very sure of that.

The potential error in the composites has to be higher than the potential error in the SORCE/TIM or ACRIM2 and ACRIM3 instruments. Unless they are wildly off like we now assume the ERB, ERBE and VIRGO instruments. Your idea that the composite is more accurate than all of the instruments is clearly wrong. We still don’t know how accurate our measurements are.
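The disagreement over blind averaging can be made concrete with a toy simulation. The offsets below are entirely hypothetical, chosen only to show that averaging shrinks the random noise but leaves the mean of any systematic biases untouched:

```python
import random
import statistics

random.seed(42)
TRUE_TSI = 1361.0
OFFSETS = [10.0, 6.0, 4.0, 0.0]  # hypothetical systematic biases, W/m^2
NOISE_SD = 0.1                   # random per-reading noise, W/m^2

readings = []
for offset in OFFSETS:
    readings += [TRUE_TSI + offset + random.gauss(0.0, NOISE_SD)
                 for _ in range(100)]

composite = statistics.mean(readings)
# Random noise averages down to ~NOISE_SD/sqrt(400) ~ 0.005 W/m^2,
# but the composite stays biased by roughly mean(OFFSETS) = 5 W/m^2:
print(composite - TRUE_TSI)
```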

You clearly have no idea about this. The corrections are possible because the sensors overlap in time. There are no disagreements about this exceeding about half a Watt, which then becomes the precision of the composite.
Claus Froehlich has a good discussion of this: https://www.leif.org/research/TSI-Uncertainties-Froehlich.pdf
“Figure 6b supports the idea that a reliable composite for the three and half solar cycles can indeed be constructed by using the corrected series instead of the original ones.”
The uncertainty

Proxies are far too inaccurate to be used to adjust TSI. Mostly, they are calibrated against TSI so circular logic arises.
Both of these statements are not correct. In particular since proxies are not used to adjust TSI.

The statement is quite true. PMOD is often supported using trends in proxies like SSN.

In any case we have no idea what the accuracy of the TSI composites is, they have no computed error bars due to the way they were constructed. Likewise we have no idea what the accuracy of the proxies is, including the SSN proxies. The NOAA attempt to compute uncertainty in TSI was simply a comparison of various composites and the addition of the instrument error, this is simply an educated guess. And, in any case, the NOAA accuracy is inadequate for our purpose as the post explains.

The statement is quite true. PMOD is often supported using trends in proxies like SSN.
Which is not the same as PMOD being calibrated against the proxies.
The construction of PMOD is compared with proxies but not forced to match them.

Leif,
Thank you for your comments. I think you are being overly iconoclastic.
What defence can you mount to my assertion that it cannot be shown that any of the satellite TSI measurements are adequately accurate?
In this subset field of climate science, we have several examples whereby each new instrument was promised to be better than the last, until the next instrument produced its result. Paper after paper has been written on data from earlier instruments since fallen into disfavour. Few have been retracted or even seen a correction in print.
The Argo floats were said to show previous designs inferior for ocean properties. The sampling of the atmosphere by balloons, rockets, and oxygen microwave gear on satellites shows large differences, many still unreconciled. Early ocean pH is disregarded, as are early measurements of CO2 in air. This type of scientific progression is unsurprising and expected.
What is eminently disputable is the treatment of error, especially accuracy, sometimes mislabelled as bias or drift or other wrong terms. You have given no argument to refute that the classic estimate of TSI should start at about 1366 +/- 6 W/m2 and possibly improve cautiously from there. No way can it be justified as better than +/- 1 W/m2. Just wait until the next new instrument starts reporting.
As to your comment that I made 2 wrong statements earlier, (a) when I see proxy data for TSI in units of W/m2, I assume calibration against these satellite direct measurements and (b) I am agreeing with you that proxies are too inaccurate to correct raw TSI measurements from such satellites. Geoff

that any of the satellite TSI measurements are adequately accurate?
For the climate debate, the absolute value of TSI is not so important. What matters is the relative variation, e.g. the change over a solar cycle. That change is of the order of 1.5 W/m2. The Sun is rotating, and when there are few sunspots TSI does not change much from rotation to rotation. The values 27 days apart typically vary by less than 0.05 W/m2, which is the upper limit of the ‘error’ in a single-day average TSI. The differences in the absolute value of TSI between instruments are systematic differences in the construction and design of the sensors. Such differences are now understood and easily corrected for.

Only true if one assumes no long-term variability. The absolute value must be very accurate to measure long-term variability as Kopp has shown.
The measured and/or derived values of the sun's magnetic field show no long-term variability, at least over the last 300 years, so it is safe to assume no long-term variability over that time. What is important is not the accuracy of the absolute value over time, but the stability of the relative values. You misread [misinterpret] what Kopp meant.

Sal,
It has to be amongst the worst.
The treatment of errors is formalised in publications such as those from the Paris-based Bureau of Weights and Measures. Failure to conform to these standards leads to the present situation of wishful thinking dominating scientific limitations.
Wishful-thinking dominance has now become so widespread in the climate subset of science that many authors regard it as the norm. People get away with it because of the difficulty of eliminating inaccuracies in a system where you cannot duplicate planet Earth for a cross-check. History shows that all science progresses through periodic demonstration of cold reality over personal belief. That is a main conclusion also reached by Andy May. I suspect he has known of these error problems for some time. My own blog protests started, IIRC, about year 2007.
Those authors who play fast and loose with the formalism of proper error analysis ought to line up and hand in their badges. There is a long queue to get through. Geoff.

Once again the first encountered pathology is swept away for something considered to be more refined, elegant, sexy, newsworthy, and attractive to low hanging fruit. Both sides of the debate, AGW due to increasing CO2 from human sources, and minute solar changes, make the same mistake. Earth, by far, is the intrinsically variable celestial Queen and King body of change, fully equipped to throw weather patterns created from larger, variable, semi permanent oceanic and atmospheric systems out of one phase and into another. Further, these semipermanent systems themselves respond and change as a result of small and large changes to land masses as they bridge, unbridge, move, collide, and break apart into new shapes and global positions, forcing new regimes in atmospheric and oceanic weather forcing systems.

I will always slap my forehead to this nonsensical river of studies that propose the tiniest speck of an agent to be capable of sweeping the elephant out of the way so we can marvel at this tiny phenomenon apparently imbued with very powerful magic STUFF!

Question: in this solar thesis, what did you do to rule out the first encountered pathology in order to focus our attention on such a tiny agent of change? It is a serious error if you did not apply due diligence to this critical first step of scientific inquiry following an observed phenomenon. I pose this question to the proponents of the AGW thesis as well.

Then that would be a no? You chose not to assess whether an intrinsically and powerfully variable planet can force weather into and out of short- and long-term trends without needing tiny variations in solar output to provide the initial shove or keep it in a weather regime over a long period of time? If that is the case, I have no choice as a critical reader but to dismiss your thesis, as I do the anthropogenic CO2 thesis.

So I decided to do some research to see if the orbit of the Earth, which is influenced by the orbits of other bodies in our solar system, could be correlated in any way, shape, or form to global warming.

If you go to the JPL Horizons website you can access their data via website, telnet, and e-mail. I pulled, by year, the distance of the Earth from the sun from their earliest time (9998-Mar-20 BC) to now, to see how much it varied. Their data shows the following.

1) The closest the Earth has been to the sun is 1.015517355402501E+00 AU

2) The furthest the Earth has been from the sun is 1.019810319506244E+00 AU

3) The total variance is 0.00429296410374 AU or 642,218.28891 kilometers
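The unit conversion in item 3 can be checked directly (the AU-to-kilometre factor is the IAU definition; the two distances are the ones quoted above):

```python
AU_KM = 149_597_870.7  # kilometres per astronomical unit (IAU 2012)

closest = 1.015517355402501   # AU, smallest distance quoted above
furthest = 1.019810319506244  # AU, largest distance quoted above

variance_au = furthest - closest
variance_km = variance_au * AU_KM
print(variance_au, variance_km)  # ~0.00429 AU, ~642,218 km
```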

What I am seeing is that the Earth has been 1.0155 AU from the sun (its smallest aphelion distance in the record) in 1799, 1886, 1894, 1905, 1913, 1924, 1932, 1943, 1989, 2000, 2008, and 2011.

Oh, and the last year the Earth was 1.0195 AU away was 7695 BC. Earth's orbit is decaying.

I agree that Earth's orbit is more eccentric. I'm also going to say that our orbit is decaying regardless of mean, median, or mode. Below is a portion of the data from NASA's JPL Horizons website, including the dates in their date format.

9998-Mar-20: AD= 1.019558765236404E+00 <– furthest back I could go.
9915-Mar-20: AD= 1.019810319506244E+00 <–furthest from the sun.
1894-Mar-20: AD= 1.015517355402501E+00 <– Closest to the sun.
2017-Mar-20: AD= 1.016946036524143E+00 <– This is the date when I ran the data.

including the dates with their date format.
Beware that dates before 1582 AD are in the Julian Calendar, which drifts with respect to our Gregorian Calendar, so March 20th before 1582 is not the same as now.
In addition, the distance at closest approach (perihelion) is less than 1 AU.
