New paper from Judith Lean estimates Solar Irradiance Since 850 CE

Solar total and spectral irradiance are estimated from 850 to 1610 by regressing cosmogenic irradiance indices against the National Oceanic and Atmospheric Administration Solar Irradiance Climate Data Record after 1610. The new estimates differ from those recommended for use in the Paleoclimate Model Intercomparison Project (PMIP4) in the magnitude of multidecadal irradiance changes, spectral distribution of the changes, and amplitude and phasing of the 11‐year activity cycle. The new estimates suggest that total solar irradiance increased 0.036 ± 0.009% from the Maunder Minimum (1645–1715) to the Medieval Maximum (1100 to 1250), compared with 0.068% from the Maunder Minimum to the Modern Maximum (1950–2009). PMIP4’s corresponding increases are 0.026% and 0.055%, respectively. Multidecadal irradiance changes in the new estimates are comparable in magnitude to the PMIP4 recommendations in the ultraviolet spectrum (100–400 nm) but somewhat larger at visible (400–700 nm) and near‐infrared (700–1,000 nm) wavelengths; the new estimates suggest increases from the Maunder Minimum to the Medieval Maximum of 0.17 ± 0.04%, 0.030 ± 0.008%, and 0.036 ± 0.009% in the ultraviolet, visible, and near‐infrared spectral regions, respectively, compared with PMIP4 increases of 0.17%, 0.021%, and 0.016%. The uncertainties are 1σ estimates accruing from the statistical procedures that reconstruct irradiance in the Medieval Maximum relative to the Modern Maximum, not from the specification of Modern Maximum irradiances per se. In the new estimates, solar irradiance cycle amplitudes in the Medieval Maximum are comparable to those in the Modern Maximum, whereas in the PMIP4 reconstruction they are at times almost a factor of 2 larger at some wavelengths and differ also in phase.

Introduction

Solar irradiance is Earth’s primary energy input. It establishes the thermal and dynamical structure of the terrestrial environment and is the primary external cause of terrestrial variability. The specification of solar irradiance over multiple centuries is requisite input for numerical simulations of climate variability prior to the industrial epoch that provide a baseline against which to evaluate contemporary anthropogenic influences. For this purpose, the Paleoclimate Model Intercomparison Projects PMIP3 (Schmidt et al., 2011, 2012) and PMIP4 (Jungclaus et al., 2017) developed reconstructions of the Sun’s total and spectral irradiance since 850 CE that are compatible with the absolute scale and variability of irradiance inputs recommended for climate change simulations in the subsequent industrial epoch (Lean, 2009; Matthes et al., 2017).

Because reliable, accessible, ongoing solar irradiance specifications are necessary for a range of Earth science research and applications, the U.S. National Oceanic and Atmospheric Administration (NOAA) implemented the Solar Irradiance Climate Data Record (CDR, Coddington et al., 2016) in 2015. The Solar Irradiance CDR includes estimates of total and spectral solar irradiance made using models constructed to replicate variations in contemporary space‐based observations. Currently, the NOAA CDR irradiance specifications (v02r01) extend from 1610 to the present but not, as yet, from 850 to 1610.

Continuous space‐based observations of total solar irradiance (TSI) began in late 1978, when the Nimbus 7 satellite carried the Hickey‐Frieden solar radiometer into Earth orbit, followed in 1980 by the launch of the Active Cavity Radiometer Irradiance Monitor on the Solar Maximum Mission. Thereafter, a dozen or more solar radiometers on space‐based platforms have continued the record, including the Total Irradiance Monitor (TIM) on the Solar Radiation and Climate Experiment (SORCE, Rottman, 2005) whose observations enable the model that specifies TSI for the NOAA CDR. Lean (2017) summarizes the space‐based historical irradiance observations; the record continues with the recently launched state‐of‐the‐art TIM of the Total and Spectral Solar Irradiance Sensor (TSIS) on the International Space Station (Richard et al., 2011).

Compared with the database of TSI observations, that of spectral irradiance observations is more limited in temporal coverage and has less certain absolute calibration and reduced repeatability, especially on decadal time scales. Thus far, spectral irradiance observations over multiple cycles exist only at ultraviolet wavelengths less than 400 nm, albeit discontinuously. The launch of the Solar Mesosphere Explorer (Rottman, 2006) in 1980 initiated systematic ultraviolet irradiance observations for a decade. Solar spectroradiometers on the Upper Atmosphere Research Satellite continued the record from 1992 to 2003 (Dessler et al., 1998), on SORCE from 2003 to the present (see Lean, 2017, for overview), and on the International Space Station into the future (Richard et al., 2011). Additional observations of solar spectral irradiance made in pursuit of ozone concentration measurements, such as by the Ozone Monitoring Instrument, also contribute to the solar spectral irradiance database (Marchenko et al., 2016). Systematic, continuous observations of solar spectral irradiance at wavelengths from 400 to 2000 nm exist only since 2003, made by the Solar Irradiance Monitor (SIM, Harder et al., 2009) on SORCE.

Models that combine the influences of the two primary solar sources of irradiance variability, namely dark sunspots and bright faculae, reproduce the observed space‐based TSI variations with high fidelity (Fröhlich & Lean, 2004; Kopp & Lean, 2011). For example, the Naval Research Laboratory Total Solar Irradiance (NRLTSI2) model, which the NOAA CDR utilizes to estimate both present and historical irradiance variations (Coddington et al., 2016), inputs a sunspot darkening function calculated from direct observations of sunspot areas and locations on the Sun’s surface and the Mg irradiance index as a facular proxy; the correlation of this model with daily averaged TIM observations (from 2003 to 2016) is 0.96. The Spectral and Total Irradiance Reconstructions (SATIRE) model derives its two sunspot (dark sunspot umbra and penumbra) and two facular (bright faculae and network) inputs from solar magnetograms (Krivova et al., 2010); the correlation of the SATIRE model of TSI with the TIM observations is also 0.96.
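The two-component structure just described (a quiet-Sun baseline adjusted downward by sunspot darkening and upward by a facular proxy, fit against observed TSI) can be sketched as an ordinary least-squares regression. Everything below, from the array names to the synthetic inputs and coefficients, is illustrative only, not the actual NRLTSI2 data or parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical daily samples

# Synthetic stand-ins for the two model inputs: a sunspot darkening
# function and a facular (Mg II-like) brightening index. The two are
# correlated, as the real indices are over the solar cycle.
sunspot = rng.gamma(2.0, 1.0, n)
facular = 0.5 * sunspot + rng.normal(0.0, 0.5, n)

# Synthetic "observed" TSI: baseline plus facular brightening minus
# sunspot darkening, plus small measurement noise (W/m^2).
tsi = 1360.5 + 0.08 * facular - 0.02 * sunspot + rng.normal(0.0, 0.005, n)

# Fit TSI(t) = a + b*facular(t) + c*sunspot(t) by least squares.
A = np.column_stack([np.ones(n), facular, sunspot])
coef, *_ = np.linalg.lstsq(A, tsi, rcond=None)

model = A @ coef
r = np.corrcoef(model, tsi)[0, 1]  # model vs. "observed" correlation
```

The signs of the recovered coefficients (positive for the facular term, negative for the sunspot term) are what such a regression is expected to return; the 0.96 correlation the text quotes refers to the real model against TIM observations, not to this toy fit.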

The same sunspot and facular solar features that cause total irradiance to vary also influence the spectral irradiance, their net effects being strongly wavelength dependent. The Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) model specifies solar spectral irradiance for the NOAA CDR with wavelength‐dependent combinations of sunspot and facular indices. The relative strengths of the sunspot and facular influences at different wavelengths are estimated from direct observations made by the Solar Stellar Irradiance Comparison Experiment (SOLSTICE) and SIM on SORCE (Snow et al., 2010). The SATIRE model uses a theoretical model of stellar atmospheres (Unruh et al., 1999) to specify the wavelength dependence of its sunspot and facular inputs. Unresolved instrumental trends thus far preclude observational determination of solar cycle spectral irradiance changes (Lean & DeLand, 2012), except, arguably, for SOLSTICE observations of the brightest and most variable HI Lyman α emission at 121.5 nm. The correlation of the NRLSSI2 model with the daily SOLSTICE SORCE Lyman α irradiance (from 2003 to 2016) is 0.99.

Models of solar irradiance variability such as NRLSSI2 and SATIRE expand and normalize the limited spectral and time domains of the observations. They provide regularly gridded specifications of solar spectral irradiance from the far ultraviolet to the far infrared, and in epochs prior to 1978, in formats suitable for input to climate and atmospheric model simulations (Matthes et al., 2017). To reconstruct historical solar irradiance variations, the models incorporate proxy indicators of the sunspot and facular sources, synergistically for total and spectral irradiance. Direct observations of the areas and locations of sunspots are available since 1882, but sunspot numbers are the only direct indicator of solar activity from 1610 to 1882. The NRLTSI2 and NRLSSI2 models estimate annual irradiance variations from 1610 to 1882 using direct correlations of annual mean sunspot numbers with total and spectral irradiance estimated after 1882. SATIRE algorithmically transforms the sunspot number to estimates of the model’s four separate inputs (dark sunspot umbra and penumbra and bright faculae and network; Kopp et al., 2016; Krivova et al., 2010).

Estimates of solar irradiance prior to 1610 rely on the 10Be and 14C cosmogenic indicators of solar activity extracted from ice cores and tree rings (Delaygue & Bard, 2011; Roth & Joos, 2013; Steinhilber et al., 2012, 2009). Cosmogenic isotopes contain information about solar activity because the Sun is the source of the heliospheric magnetic flux that modulates the flow of galactic cosmic rays that produce these isotopes of gases in Earth’s atmosphere (McCracken et al., 2004, 2013; McCracken & Beer, 2007). Figure 1 shows two different reconstructions of TSI since 850 developed as part of PMIP4, using more recent cosmogenic isotope indices and irradiance variability models than were available at the time of PMIP3. The PMIP4 irradiance reconstructions are synergistic with the absolute scale and variability of the irradiances that Matthes et al. (2017) recommend for use in Intergovernmental Panel on Climate Change’s Sixth Assessment Report simulations, namely, the average of irradiance modeled by NRLTSI2 (total) and NRLSSI2 (spectral) and SATIRE. Of the two different PMIP4 irradiance reconstructions shown in Figure 1, that based on 14C (rather than on 10Be) is specifically recommended for use in the Coupled Model Intercomparison Project (Phase 6) numerical model simulations.

Figure 1. Shown are time series of annual total solar irradiance based on the 14C and 10Be cosmogenic isotopes that the Paleoclimate Model Intercomparison Project recommends for use in simulations of preindustrial climate change (Jungclaus et al., 2017).


This paper estimates total and spectral solar irradiance from 850 to 1610 consistent in magnitude and variability with the NOAA Solar Irradiance CDR from 1610 to 2016. In addition to extending the Solar Irradiance CDR prior to 1610, the goal is to provide independent, alternative, irradiance reconstructions for comparison with, and assessment of, the PMIP4 recommendations. The PMIP4 approach converts cosmogenic isotopes to sunspot numbers then calculates solar irradiance prior to 1850 using a SATIRE‐type numerical transformation of this single solar activity index to estimate the model’s four separate inputs. Vieira et al. (2011) report that this simplification of the magnetic flux inputs to the SATIRE model is a major source of uncertainty in its Holocene irradiance reconstructions. In contrast, the current approach estimates solar irradiance prior to 1610 using direct parameterizations of cosmogenic indices with the NOAA Solar Irradiance CDR after 1610.
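The extension strategy described above (calibrate a cosmogenic irradiance index against the CDR over the post-1610 overlap, then apply the fitted relation before 1610) can be sketched with fully synthetic series. All names, numbers, and the linear form of the relation here are illustrative assumptions, not the paper's actual data or parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual series over 850-2016: a solar-activity index
# available for the whole span, and a "CDR" irradiance after 1610.
years = np.arange(850, 2017)
activity = np.sin(2 * np.pi * (years - 850) / 210.0) + rng.normal(0.0, 0.2, years.size)
tsi_true = 1360.5 + 0.3 * activity                  # hypothetical true relation
cdr = np.where(years >= 1610,
               tsi_true + rng.normal(0.0, 0.05, years.size),
               np.nan)                              # no CDR before 1610

# Calibrate the index against the CDR over the overlap period ...
overlap = years >= 1610
slope, intercept = np.polyfit(activity[overlap], cdr[overlap], 1)

# ... then apply the fitted relation to estimate irradiance before 1610.
tsi_est = np.where(overlap, cdr, intercept + slope * activity)

# Worst-case reconstruction error over the pre-1610 segment.
err = np.abs(tsi_est[~overlap] - tsi_true[~overlap]).max()
```

The point of the sketch is only the workflow: a direct parameterization fitted on the overlap and extrapolated backward, as opposed to first converting the cosmogenic index into sunspot numbers and feeding those through a multi-input model.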

…

Figure 9. Compared with the Paleoclimate Model Intercomparison Project (PMIP4) recommended time series of annual solar irradiance from 850 to 2016 (orange lines) are the Naval Research Laboratory Total Solar Irradiance (NRLTSI2) and NRLSSI2 modeled values from 1610 to 2016 (black lines), extended from 1610 back to 850 using the Roth and Joos (2013) cosmogenic irradiance index. In (a) is total solar irradiance. Solar spectral irradiances in broad bands are shown (b) at ultraviolet wavelengths from 100 to 400 nm, (c) at visible wavelengths from 400 to 700 nm, and (d) at near‐infrared wavelengths from 700 to 1,000 nm.

How long does it take the climate system to purge the extra warmth stored since the start of the Modern Warm Period, if indeed it is over? 50-100 years given the storage capacity of the oceans? If so, it suggests that any influence of the current low solar cycle(s) may be muted beyond seeing a pause. So the effect may be there but canceled out by what is in the pipeline for the last 40 years.

That is the overarching question. However, if during low solar periods increased cosmic ray counts lead to increased albedo, then the loss of stored heat may be accelerated by a decrease in insolation at the surface. Time will tell.

Does one million W per sq km make any difference????
Think of it: 1 W per sq m is the same as given above. In meteorology and climate, it’s no use working with sq meters. The areas being heated and cooled during climate change are vast, e.g., measured in sq km!
Start using these measures, when it comes to discussing solar variability.

I think the numbers provided may give an approximate answer to that question: Eyeballing the graphs, ((Maunder Minimum value / today’s value) ^ 4 – 1) * Surface Temperature [Kelvin] is about 18 deg K (or C of course). Any of the charted values gives a similar answer. We know roughly how much the temperature has gone up since the MM, so now we know roughly how far along towards “equilibrium” we are. We also have Judith’s pre-MM numbers for cross-checking. There might be a useful answer from that lot ………..

Someone tried to hammer the solar record flat, so it couldn’t be attributed as the reason for the warmth of the 20th century, and now it has sprung back to its original shape. Too bad for the data-fiddlers.

Hey Leif. A few posts back you made a prediction (verbal) about this solar cycle. Why don’t you plot a graph and stick it up on the Solar Page and leave it there…so we can see how accurate it turns out to be?

Thanks Leif, but still confusing. Your Svalgaard2018 matches fig.1 of this post better than your Lean2018. Never mind, what’s your opinion on fig.1 then?
Lean2018 is not mine, but Lean’s.
Figure 1 shows how the 14C and 10Be can disagree. There is really no reliable 14C data after ~1950.

What is interesting about Figure 1 is that the minima in the 20th century are much higher than the minima going back a millennium. That would be a big source of the warming no matter what the peaks looked like.

Is Judith Lean an ESL researcher? I was taught that it was impossible for something to increase (in this instance TSI from the Maunder Minimum to the Medieval Maximum) going back in time. Is this similar to CO2 causes global warming if you start with today and go backwards, so that all those 800 or so lag periods then look to be in the right direction?

Once she had that nonsense (maybe she meant something else, but it’s not what she said), I stopped reading.

The NRLTSI2 and NRLSSI2 models estimate annual irradiance variations from 1610 to 1882 using direct correlations of annual mean sunspot numbers with total and spectral irradiance estimated after 1882
Here she uses the old obsolete Group Sunspot Number, which has a problem in the 1880s: it is too low before then by 40-50%, so her reconstruction suffers from all the ills of that obsolete series.
Geophysical Research Abstracts
Vol. 20, EGU2018-19274, 2018
EGU General Assembly 2018
Sunspot Number Revision
Laure Lefevre (1), Clette Frederic (1), and Cliver Ed (2)
“We will present here the effort that was undertaken by the whole solar community to achieve the revision of the Sunspot Series. This well-known index of solar activity had not been revised since its creation by Rudolf Wolf in 1849 and the disagreement between the Sunspot Number and Group number prompted us to reevaluate both series.

The corrections we describe here use newly recovered historical sunspot records as well as original sunspot data. For the 17th and 18th century, the results confirm the low solar activity during the Maunder Minimum. Over the 19th century, the k scaling coefficients of individual observers were recomputed using new statistical methodologies, like the “backbone” method resting on a chain of long-duration observers.

After identifying changes in the observing methods, two major inhomogeneities were corrected in 1884 in the Group Number and in 1947 in the Sunspot Number. A full recomputation of the group and sunspot numbers was done for the last 50 years, with original data from the 270 stations archived by the World Data Center – SILSO in Brussels.

The new Sunspot Number series definitely exclude a progressive rise in average solar activity between the Maunder Minimum and an exceptional Grand Maximum in the late 20th century. Residual differences between the Group and Sunspot Numbers over the past 250 years confirm that they reflect different properties of the solar cycle.

We conclude on the implications for Space and Earth climate studies and solar cycle and on important new conventions adopted for the new series as a framework open to future improvements of those unique data series.”

Not to be critical of anyone or take sides, but how is it that someone who works for the US Naval Research Laboratory would not be using the most up-to-date information? I’m surprised that there are not common assumptions among those working at this level, and that she would not be privy to the latest thinking.

She is not associated with some podunk, backwater college working on her thesis.

A serious question. Why such a divergence in data at such a high level within the profession?

that someone who works for the US Naval Research Laboratory would not be using the most up to date information?
Mainly because it would invalidate her earlier work, and secondarily because it would remove much of the motivation [and perhaps funding] for her work [why use sunspot data if they don’t support the notion that solar activity is a significant driver of climate?].

Just curious Leif – what do you believe to be the primary driver of climate?
There is a tendency for people to assume that the primary driver is the ONLY driver, but the climate system has many drivers, perhaps not of very dissimilar strength. In addition, complex non-linear systems have chaotic internal fluctuations not due to any obvious direct cause. I hope this is non-committal enough to make anyone happy.

‘There is a tendency for people to assume that the primary driver is the ONLY driver, but the climate system has many drivers, perhaps not of very dissimilar strength. In addition, complex non-linear systems have chaotic internal fluctuations not due to any obvious direct cause.’

Actually, a good answer – sort of reflects my own two-syllable version – order attempting to assert on the edge of chaos.
Does this, in your opinion, preclude – or at least hamper – predictability?

Does this, in your opinion, preclude – or at least hamper – predictability?
It makes long-term prediction very hard [perhaps impossible] without destroying the predictability in the short term [on time scale of days].

Leif Svalgaard hangs out at WUWT because he profoundly suspects the Sun’s variability is the cause of multi-century climate change trends, but can’t figure out how, as per everyone else, but won’t ever admit to that so don’t ask; hence he enjoys blood sports in the interim.

“Reflecting current ambiguity about the true evolution of solar activity from 1610 to 1882, the NOAA Solar Irradiance CDR makes available two reconstructions of past total and spectral solar irradiance that differ prior to 1882, one using Hoyt & Schatten’s (1998) group sunspot numbers, the other using the SILSO sunspot numbers (Clette et al., 2015).

Because differences between the Hoyt & Schatten (1998) group sunspot numbers and the SILSO sunspot numbers (Clette et al., 2015) prior to 1882 are not yet resolved (Asvestari et al., 2017), the irradiance from 1610 to 1882 is estimated separately for the two individual sunspot records (Kopp et al., 2016). Comparisons of multiple sunspot number records with cosmogenic radionuclides extracted from meteorites suggest that sunspot records such as SILSO may overestimate solar activity prior to the mid eighteenth century (Asvestari et al. 2017). This motivates the adoption of an average of the two reconstructions using the different sunspot records as the preferred irradiance specification since 1610.”

I guess some solar physicists believe the revision over-corrected the sunspot record.

No she doesn’t.
Yes, she does by including it in the average as you point out.
To see the bad effect of this, one need only compare her TSI with the H&S Group Number, before and after 1882:
At least one can demand that she is internally consistent.
The H&S Group Number is OK after ~1882 and ~40% too low before that, and so is Lean’s TSI.
There is no excuse for using the old sunspot numbers.

If you read the paper it says:
“The exact level of solar activity after 1750 cannot be distinguished with this method, since both H- and L-scenarios appear statistically consistent with the data. This is because of the very large uncertainties of data points.”
The crucial issue is solar activity just before and just after 1882 and as Ken Schatten now recognizes [and as several lines of evidence clearly shows], the old H&S series is wrongly calibrated before that point.

So they say that before 1750 your reconstruction is wrong and afterwards we can’t tell.
They have no data before 1750, and the issue is not 1750, but 1882: is the H&S reconstruction correct before that. Even Schatten agrees that it is not. So, no endorsement of Lean.

Schatten’s opinion doesn’t mean anything. What matters is the evidence. It is clear to everybody that the figure from Asvestari et al., 2017 above provides better support for the low count than for the high count.

I don’t know if Judith Lean is correct or not, but she is justifying her decision and providing evidence from the bibliography to base it on.

You might disagree but your criticism is unfounded. There is no one good and one bad series. There are two uncertain series with controversial support from evidence.

that the figure from Asvestari et al., 2017 above provides better support for the low count than for the high count.
Regardless of your opinion, the authors themselves argue that they can’t tell after 1750 AD, and the issue is the discrepancy before and after 1882. And, I would say that Schatten’s opinion matters, as he acknowledges that the H&S series is wrong before 1882, and all evidence we have about that [for a hundred years on either side] supports it.

the uncertainty in sunspots observations is very large in the 18th-19th century.
Not in the 19th. The data are good since ~1820. The problem with 1882 is that the H&S reconstruction is just plain wrong before that. Ken Schatten [who co-authored the H&S series] is also a co-author of the revision and agrees that the old series is incorrect before 1882.

Besides ambiguity in the proper sunspot number, the precise relationship between TSI and sunspots is unknown, as well as the precise relationship between TSI and climate. Further, we still have not determined how much solar radiation makes it to 1AU or whether it is trending up or down or staying the same. Too many unknowns!

the precise relationship between TSI and sunspots is unknown,
Your use of ‘precise’ is disingenuous. What is ‘precise’? There is good evidence that the variation of TSI is due solely to the variation of the magnetic field of the Sun [which we can measure with good accuracy]. And there is good evidence that the magnetic field is well described by the sunspot number. With the new revision of the sunspot number series, we have a reliable series back to about 1820. So there are not too many unknowns after that time.

Andy, this graphic illustrates the point that solar indices SSN & F10.7cm flux co-vary and drive TSI with a variable lag. There are things that you and Javier need to learn from Dr. Svalgaard as I have. I went on to build on that incredibly rich knowledge.

There is no one good and one bad series.
Yes there are. We have identified what is bad about the H&S series [daisy-chaining a too low RGO group count back in time from about 1900, and miscalculating the Wolf-Wolfer calibration]. These are not ‘uncertainties’ but out-right errors admitted even by the one who made them and corroborated by comparison with the Zurich sunspot numbers, the geomagnetic data, and even the cosmic ray proxies.

That you have corrected mistakes in the way the sunspot record is interpreted does not mean that the sunspot record is more accurate as a proxy for solar activity. We have other proxies for solar activity and if they don’t agree there is always uncertainty.

Lean does not compare the sunspot records among them, but to a different solar activity proxy. The lower count gives a better fit to the ⁴⁴Ti data.

does not mean that the sunspot record is more accurate as a proxy for solar activity. We have other proxies for solar activity and if they don’t agree there is always uncertainty.
The sunspot group record is a very accurate indicator for solar activity. It matches the F10.7, the geomagnetic daily variation, and solar magnetic flux with great fidelity.

We know it is for the instrument period. We don’t know about before that. The uncertainty grows as we go back in time.
We know it for at least back to the 1840s:

Before that, there is of course, more uncertainty, and the Ti44 data cannot be used to infer anything after 1750 [as the authors point out].
The early records [before ~1820] are active research areas [stimulated by my research] and one might hope that progress will ensue.

my hopes were that Judith would see the error of her ways
Such admissions come in little pieces. She has too much tied up in earlier work so it will take her time to slowly walk back. One step at a time. She leans [no pun] a bit on the faulty NOAA Climate Data Record [so she can eventually blame problems on it].

If Leif is right, that would imply a faster reaction to the change in solar activity (see my comment above), which to me actually looks more likely (ie, I’m saying that what Leif says looks right). The cross-check idea is still just as valid. BTW, I was assuming a linear reaction to the solar measures, which might need adjusting.

If this representation is correct, you would expect a continued drop in irradiance for the rest of the century. Noting of course that we are talking about a 0.07% change from maximum to minimum, which would amount to ~0.2 K change in temperature, assuming no indirect forcings, such as cloud modulation.
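This arithmetic can be checked with a quick no-feedback Stefan-Boltzmann estimate (a sketch only; all constants are approximate, and feedbacks and ocean lag are ignored):

```python
# No-feedback (Planck) response of the effective emitting temperature
# to a fractional change in total solar irradiance.
# T_eff**4 = S * (1 - albedo) / (4 * sigma), so dT/T = (1/4) * dS/S.
S = 1361.0          # TSI at 1 AU, W/m^2 (approximate modern value)
albedo = 0.3        # planetary Bond albedo (approximate)
sigma = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25   # ~255 K
frac = 0.0007                                      # a 0.07% TSI change
dT = T_eff * frac / 4                              # no-feedback response, K
dF = frac * S * (1 - albedo) / 4                   # forcing, W/m^2
```

The bare Planck response to a 0.07% TSI change works out to only a few hundredths of a kelvin, and the associated forcing to roughly 0.17 W/m²; reaching ~0.2 K therefore already assumes some amplification (feedbacks) beyond the no-feedback response.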

Pretty clearly, they meant “decreased 0.036 ± 0.009% from the Medieval Maximum (1100 to 1250) to the Maunder Minimum (1645-1715) … ” Proofreading is very difficult and few people are good at it. Since I’m not one of them, I’m inclined to grant others a bit of latitude.

No, it’s not a simple matter of proof reading; it is bad composition on the part of the author and sloppy editing by the journal editors. As an occasional author, editor and sometime proof reader, yes, I agree mistakes are made but that is such a glaring error it is not deserving of latitude.

@ Don k …I don’t think so, as the phrase is used more than once. The phrase is correct in what it is stating, but it is a very poor way of stating it. Yes, solar irradiance increases as you look back through time, but again that phrase will lead to confusion.

Yes, a very strange use of language. She may be so wrapped up in the data that she doesn’t think of it as a time series any more, just a bunch of numbers.

Or someone else wrote the abstract? Or told her to rewrite the abstract with a deliberately odd phrasing to draw attention to how much hotter the MIHP (Modern Intolerably Hot Period) is than the MWP?

Wait a minute, if the sun is giving us more radiation than “ever before”, perhaps we don’t need CO2 to explain why the “unprecedented” global warming is so “unprecedented”

“The new estimates suggest that total solar irradiance increased 0.036 ± 0.009% from the Maunder Minimum (1645–1715) to the Medieval Maximum (1100 to 1250), compared with 0.068% from the Maunder Minimum to the Modern Maximum (1950–2009). PMIP4’s corresponding increases are 0.026% and 0.055%, respectively.” So, I suppose that means the Sun is “worse than we thought”–a bigger driver of global warming than previously estimated in both the Medieval and Modern Warm Periods, and is a bigger contributor to recent warming than it was to the medieval warming. If that is not the upshot, someone please tell me. Thanks.

I think that’s what the author wants us to take away. Very interesting.

Then Leif undermines it quite effectively, and Javier responds in kind, perhaps a bit pre-emptively. I think Leif wins on points in the thread, just, but I’ve no deep knowledge of the subject to make a real judgment.

But what a pleasure it is to see a discussion at this level, without all the peripheral, shallow stuff we so often see. Well done to all.

But most of us know the Sun was active during the Medieval Warm Period, inactive during the 500-year Little Ice Age, and active during the modern warm period, up to year 2005. Inactive 2005-present.

Now, with weakened solar and geomagnetic fields, the climate is going to turn colder, with this year being the transitional year.

I am much more concerned about what is going to happen in the immediate future, not a hash of the past, which is where most of the arguments in this thread are.

As I have said, it is not just TSI, and not just the Sun, but also the state of the geomagnetic field.

For all the clever guys here, let me say this:
TSI variation does not correlate with, or have any relation to, the RF variation.

CO2 concentration variation does have such a relation with RF variation, as known and considered thus far. Everyone arguing about the RF variation as per climate does not seem to understand the problem of this related concept: that the radiation imbalance is always positive.
When even Gore seems to have no great problem in understanding this much!

If the Sun’s TSI does not correlate, and is not in synchronicity, with the RF variation, it means nothing: whether going down or up, in concept it can only mean gain, warming, or heat content accumulation in the system, as per the climate term (almost forever).
But that conceptual “effect” does not mean much, because when considering either RF variation or temp variation over time, the TSI effect is nonexistent; it does not register there even as some little detectable noise.

So technically, whether TSI variation goes up or down, at most it will mean gain, as all depends on the internal response of the Earth system, which is always subject to a positive imbalance with regard to secondary, meaningless forcings as per the RF (variation), when such secondary, meaningless forcings are definitely not in correlation and synchronicity with the RF variation.

RF variation is not affected by the TSI in any meaningful way, such that TSI could be considered as having any value towards temp variation or RF variation.

And here, as for this comment, the lack of correlation between TSI variation and Tropical warming is not the main clause of falsification to be used.
Simply: TSI in the context of RF and the concept of radiation imbalance.

In essence, whichever way, within the main range of the RF variation threshold, whether it goes up or down, whether as per the TSI or CO2 concentration or whatever else fits the imagination, there is no cooling to be contemplated within the range of the max-min threshold, whatever drives or causes the RF variation.

I am sure that this could still be very confusing..:)
But for whatever it is worth.

It’s kind of annoying how people seem to look at the peaks, and infer climate influence from that. They see the peaks don’t change all that much, and conclude, “yup, no climate influence there!”

That’s not how it works. Climate is cumulative over very long time intervals. What matters is the area under the curve. Here is a plot of the SSN taken from WFT, filtered through a lag filter with a 100-year time constant. It is quite clear that the cumulative impact of solar activity increased in the late 20th century.

Partisans can argue, if they like, that the WFT series is not the latest thang, but get whatever data you prefer and perform the filtering operation, and show us the results. I bet it won’t be a lot different.

I started in 1750 where the WFT data starts, with the initial filter output set to the overall mean value, to try to mitigate the start-up transient. I only plotted the data starting one time constant later, again to try to mitigate any misconception from start-up transient.

When using such a short data set relative to the time constant of my filter, initialization is somewhat subjective. But it cannot reasonably be argued that the peak in the late 20th century is not higher than the peak in the late 19th.
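The filtering operation described above can be sketched as a first-order exponential lag filter. This is a hedged illustration, not Bartemis’s actual script: the discretization, the function name, and the file name are my assumptions, and the initialization at the overall mean follows the comment’s description.

```python
import numpy as np

def lag_filter(x, tau=100.0, dt=1.0, x0=None):
    """First-order exponential lag filter y' = (x - y)/tau,
    discretized with time step dt (years, for yearly SSN data)."""
    x = np.asarray(x, dtype=float)
    # initialize at the overall mean to mitigate the start-up transient
    state = float(np.mean(x)) if x0 is None else float(x0)
    alpha = dt / tau
    y = np.empty_like(x)
    for i, xi in enumerate(x):
        state += alpha * (xi - state)  # state relaxes toward the input
        y[i] = state
    return y

# usage sketch: smooth yearly sunspot numbers starting in 1750,
# then plot only from one time constant (100 yr) later:
# ssn = np.loadtxt("ssn_yearly.txt")    # hypothetical file
# smoothed = lag_filter(ssn, tau=100.0)
# plot(years[100:], smoothed[100:])
```

Because the output is a running, slowly decaying accumulation of the input, it approximates the “area under the curve” idea rather than the height of individual peaks.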

There is a bit of confusion here [and Lean takes some blame for that]. What is used in those TSI reconstructions is not the Sunspot Number such as can be found at WFT or SILSO, but the Group Number [either directly or multiplied by about 20 to make it look somewhat like the sunspot number]. Here is a plot of the Group Number since 1610:

You are the very guy, if I remember correctly, who by taking a strong stand in a very heated “fight” some months ago, in that famous ristvan blog post, at some point had one of the most “aggressive” AGW zealots drop into his responses to you the very proof of the AGW falsification: the lack of correlation between CO2 and tropical warming.

Hope you remember this, Bartemis, as you happen to be the main “fighter” that day, at least as per my view point.

You see, if my memory serves me right, that was the point at which these AGW guys broke and ran away… oh well, after Mosher, in his “quizzical” way, clearly warned them about their failure… :)

Now don’t you see that the same falsification subjects solar variation to the same outcome!

It’s kind of annoying how people seem to look at the peaks, and infer climate influence from that. They see the peaks don’t change all that much, and conclude, “yup, no climate influence there!”

Couldn’t agree more, Bartemis. My take on the issue is to consider the 11-year cycle the unit of solar activity. I just add all the sunspots (monthly count to avoid repetition) for each cycle. My result is similar to yours:

The modern maximum is very obvious. Seven straight cycles with above average solar activity. The longest such stretch on record.
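Javier’s “unit of solar activity” tally can be sketched as follows. This is a minimal illustration of the described method only; the function name, the decimal-year representation, and the synthetic minima dates are mine, not his (real minima dates are available from SILSO).

```python
import numpy as np

def cycle_totals(years, ssn_monthly, minima):
    """Sum the monthly sunspot numbers over each cycle,
    where a cycle runs from one solar minimum to the next
    (all times as decimal years)."""
    years = np.asarray(years)
    ssn_monthly = np.asarray(ssn_monthly, dtype=float)
    totals = []
    for start, end in zip(minima[:-1], minima[1:]):
        in_cycle = (years >= start) & (years < end)
        totals.append(ssn_monthly[in_cycle].sum())
    return np.array(totals)

# usage sketch: flag cycles with above-average total activity
# totals = cycle_totals(years, ssn, minima)
# above_avg = totals > totals.mean()
```

Using monthly counts (rather than daily) avoids counting the same long-lived spot group many times as it rotates across the disk, which is presumably the “avoid repetition” point.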

The modern maximum is very obvious. Seven straight cycles with above average solar activity.
It is curious that you harp [when it suits your argument] on how uncertain the records are, but then fail to take into account that the probable errors on the 18th-century values are of the order of ±20%, which is much larger than the difference between centuries.

The ¹⁴C and ¹⁰Be proxy records also agree on growing solar activity over the past 300 years and on a modern solar maximum. The agreement between different independent proxies gives more confidence.
Not at all.
Here is the long-term cycle average TSI since 1700 [done right]. The ‘trend’ with an R^2 of 0.0127 is not significant:
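For reference, the R² being quoted is the fraction of variance that a least-squares straight line explains; it can be computed like this (a generic sketch, not Leif’s actual code; the function name is mine):

```python
import numpy as np

def trend_r2(t, y):
    """Fit y ≈ a*t + b by least squares and return (slope, R²)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    a, b = np.polyfit(t, y, 1)          # degree-1 polynomial fit
    resid = y - (a * t + b)
    r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)
    return a, r2
```

An R² of 0.0127 means the straight line accounts for only about 1.3% of the variance in the cycle-averaged TSI, i.e. essentially no trend.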

There is a disagreement between SSN and GSN prior to 1840
Just shows how large the uncertainty is for the early years, so that no significance can be attached to any trend that extends earlier than 1840.
And, BTW, the SSN has not yet been recalculated from original sources [but the GN has], so don’t attach too much weight to the SSN.

SSN is closer to 14C prior to 1800. Any change towards GSN will make it more different from 14C. It might be the wrong change.
Prior to 1800 everything is so uncertain that it is hard to say which is better. The SSN [SN] has not been recomputed [yet] for those early years, but is simply Wolf’s old values divided by 0.6 [to bring them onto the Wolfer scale]. The 14C and 10Be are both uncertain as well [c.f. Figure 1 of this post]. The only way forward is to analyze [as Schatten and I did] the original data without making the same errors as were made before. Let me repeat: the revised GN series is the best [any]one can do with available data at this time.

The one that agrees with the solar cosmogenic proxies has a bigger chance of being better.
No, as the proxies disagree among themselves. And direct observation of the sun is always better than mere proxies.

My reconstruction is the best Leif can do at this moment. It doesn’t mean it is more correct than TSI reconstructions from solar proxies.
It is always better to use actual observations of solar activity instead of mere proxies. And the community is slowly coming around to realize this. A workshop is ongoing in Bern [CH] to finalize the revisions [version 3 is coming] and to whittle down the confused competitor series.

Then, according to svalgaard himself, javier’s take on the issue has merit at least back to the Dalton Minimum. (And since we really don’t know enough about the 18th century’s accuracy, we shouldn’t be drawing conclusions wrt the entire record either way)…

The relationships between sunspot numbers, group numbers, and TSI are not linear. As I show on Slide 54 of http://www.leif.org/research/EUV-F107-and-TSI-CDR-HAO.pdf
the best fit to available TSI measurements [according to Claus Froehlich] is using a power of 0.7, so:
TSI = 1360.5 + (0.0304 SN^0.7 + 0.329 GN^0.7)/2, where I don’t make any assumption about which is better, SN or GN – although I believe GN is. In any case the influence is minor.
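Written out as code, the quoted fit looks like this. The coefficients are taken verbatim from the formula above; the function name is mine, and the base value is the free offset discussed later in the thread.

```python
def tsi_model(sn, gn, base=1360.5):
    """TSI (W/m^2) from sunspot number (SN) and group number (GN),
    each entering through a 0.7 power law and averaged,
    per the fit quoted in the comment above."""
    return base + (0.0304 * sn**0.7 + 0.329 * gn**0.7) / 2

# at SN = GN = 0 the model returns the base value, 1360.5 W/m^2
```

Note that because of the 0.7 exponent, each additional sunspot adds less TSI than the previous one, which is the non-linearity debated below.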

My linear model plot above in green (0.005*ssn+1360.5 – PMOD model) is for illustration to show how it closely approximates your plot of Claus’ model in higher activity cycles. My plots were calculated with monthly numbers. Were yours?

I’m also familiar with Claus’ method, having explored it a while ago, including its development. Here is his model in two plots compared to actual – top is annual, bottom is monthly calculated values of TSI calculated from actual v2 SSNs, overlaid on his model, with scatterplot and R values:

However, I am also the one saying it’s non-linear at all sunspot levels. I learned this on my own.

However, it is usefully possible to approximate SSN-TSI or F10.7-TSI linearly in the lower activity range, which covers the largest share of overall solar activity over time. The use of my linear F10.7cm-TSI model for F10.7cm under 100 sfu has worked great so far in my SST forecasting. If I hadn’t already applied it successfully, I’d have nothing to say about it.

Several important points emerge: the Sun has different operating curves in SSN-TSI depending on |MF|, i.e. the stronger the cycle, the closer to linear this relationship becomes. It would take tremendous magnetic energy beyond anything we’ve ever seen to maintain TSI in a linear relationship at very high SSNs.

Probably the most important thing to understand about it is that there is a point of diminishing returns for TSI with respect to increasing SSNs, meaning every solar cycle will have its own TSI “sweet spot” range of sunspot activity, as SC24 did above my TSI warming line, according to the mean-field power behind it.

Of course my work is highly dependent on the accuracy of your SSNs………..

I think in the long run the reconstructions will converge closer to my model than yours. Wanna make a bet?

My linear model plot above in green (0.005*ssn+1360.5 – PMOD model) is for illustration to show how it closely approximates your plot of Claus’ model in higher activity cycles. My plots were calculated with monthly numbers. Were yours?
I simply used Claus’ data. Remember slide 54?
Claus used all the different measurements, not only PMOD. His innovative method was to line them up at their minima.
Comparing daily data for TSI and SSN and F10.7 is meaningless as a large dark sunspot [F10.7] drags down TSI while the debris [faculae] from that large spot shortly thereafter increases TSI. Only averages over at least some months make sense.

I just updated Claus’ plots as they didn’t have all the years the first time around. Clearly, we can see that while Claus’ TSI model correlates with PMOD TSI at R > 0.6, the trendlines of the actual SSN-PMOD relationships both sit below the level of Claus’ model, in yearly and in monthly data, by a not insignificant amount. This is why his model – your model – is too high: it is higher than the four-solar-cycle history.

I do remember slide 54. Who can forget slide 54? What I’m showing you here is the actual performance of that model versus reality.

Only averages over at least some months make sense.

See Fig 9; look at the “Span” time: 16-140 months were used to establish the correlations in my linear model. I told you before it doesn’t work well over 100 sfu. For all values I have second-order models for F10.7-TSI, for v2 SSN-TSI, and for SSA-TSI, and they all show the non-linearity.

Comparing daily data for TSI and SSN and F10.7 is meaningless as a large dark sunspot [F10.7] drags down TSI while the debris [faculae] from that large spot shortly thereafter increases TSI.

From experience I know about it on a very practical level, but not how the NRLTSI2, NRLSSI2, and SATIRE parameterizations are set up, which I’d like to know.

I just updated Claus’ plots as they didn’t have all the years the first time around.
Here is an expanded version of Claus’ relationship [up through 2016] using all spacecraft data:
I see no reason to do anything else.

I do, as my forecasting system requires a finer knowledge and application of these relationships than you know, and I’ve already developed the skills by honing them continually, which you haven’t realized so far.

which you haven’t realized so far.
No, I’m not interested in anything shorter than a year. My interest is in the long-term variation [centuries] of solar activity. I do not waste time on your short-time stuff, which incidentally is not compelling enough to consider. If you are happy with it, run with it, but don’t bother me with it.

There’s nothing to this as the actual data shows the model is too high
There is no such thing as the ‘actual data’. Every satellite is different. Your blue points and the cloud of red points are just offset a fixed amount. To compare with Claus, plot everything against sqrt(SN v2). And as Claus points out: The comparison is only valid for intervals much longer than a month. He says ‘one year’.
As you can see, every satellite has a different offset, so you must adjust all data to the same value for SSN = 0.

My interest is in the long-term variation [centuries] of solar activity. I do not waste time on your short-time stuff, which incidentally is not compelling enough to consider.

Everything I say works on short to long time scales for the same reasons. The same exact principles apply at different timescales, so if you think you’re wasting your time it’s because you haven’t caught on to the applicability at short to long time scales.

Every time period is important, every day. What days of data could I leave out and still have a compelling picture of reality?

Everyone today lives in the present. I can explain the sun’s effect on the past, the present, and in a limited fashion into the future based on my very intimate knowledge of solar and climate indices. Maybe that’s not compelling to you, fine, solar archeologist, but the rest of the world wants answers to the immediate past, the present, and the immediate future.

but the rest of the world wants answers to the immediate past, the present, and the immediate future.
I don’t think so. What is important is what the present value is in relation to past values, especially past values long ago, so we can know the trend. The present and the immediate times around it are not enough to know what the trend is [or if there even is a trend].

The trends are important, but the solar cycle influence on everything is where the action is, where people are focused in time – now and what’s next. The same cycle influences recur, as you discovered in your career. The action is in the timing and magnitude of solar activity and the subsequent responses on earth. That is very exciting research with widespread application.

The TSI modelling I do is intended for hindcasting, nowcasting, and forecasting, covering any time span where there’s good sunspot data, or F10.7cm data, or USAF 45-day or SWPC cycle forecasts. It works.

Judith Lean states that: “Cosmogenic isotopes contain information about solar activity because the Sun is the source of the heliospheric magnetic flux that modulates the flow of galactic cosmic rays that produce these isotopes of gases in Earth’s atmosphere”.
But in fact it is the solar wind intensity that modulates the geomagnetic field to admit more or fewer cosmic rays, not solar activity. Though there is some correlation between solar wind and solar activity, the long-term trend of the solar wind is completely different from the solar activity trend, as I have shown in my papers. So such proxy-based reconstructions imo are quite absurd, depicting some dynamical phenomenon and definitely not tracking solar activity.
For example, many derived periods of highest solar activity in Figure 1 happened in periods of strong cooling of the planet, compared to temperature reconstructions of the last millennium.
As I have shown in my papers ( http://dimispoulos.wixsite.com/dimis ), it is planetary tides on the Sun’s low-density atmosphere that drive solar activity and the solar wind.
I have seen comments by Leif Svalgaard in which he rejects this possibility, considering planetary tides to be very weak. But this may not be correct. Since solar temperature varies from some 6000 K at the surface to some millions of K at the core, digging only some kilometers into the Sun’s ultra-low-density atmosphere after a tidal perturbation can lead to a 1000 K increase in temperature…

Your blue points and the cloud of red points are just offset a fixed amount.

Yes they are, and that fixed amount is the amount of error in that model. The model is too high.

The comparison is only valid for intervals much longer than a month. He says ‘one year’.

The yearly data comparison shows the same issue. The blended TSI model is the equivalent of someone averaging UAH & RSS, or GISS & HadCRUT & NOAA – at this point it’s counterproductive to use a blended model. I consider the blended model inappropriate. Nothing personal.

If not PMOD tell me what data in use today will show itself nearer to the model line instead of below it?

Is PMOD no good now too? Besides, it doesn’t matter what data you use, the non-linearity is always there.

Yes they are, and that fixed amount is the amount of error in that model. The model is too high.
Nobody knows what the ‘actual data’ are, so the vertical position of the curves is freely floating. All we [and Claus] can do is to model the variation from an arbitrary absolute value.

If not PMOD tell me what data in use today will show itself nearer to the model line instead of below it?
Is PMOD no good now too? Besides, it doesn’t matter what data you use, the non-linearity is always there.
All data are ‘bad’ as we don’t know what the real value is. A new TSI instrument has just been installed on the International Space Station. After a year or two, we can retrieve the instrument and compare its readings with what we found in the lab before it was sent up. In that way we can finally learn what the absolute value should be and we can adjust all the other instruments to that value.
And the relationship with the SN and the GN is indeed non-linear: TSI ~ SN^0.7 ~ GN^0.7 to high precision.

So there is no falsifying that model?
Sure there is. When we bring back the new TSI instrument from the ISS we’ll know what the real values are.
But, as far as the variation with the SSN is concerned, all spacecraft agree closely enough that they support each other.

but your model is still too high
The model has a free parameter namely the base value of TSI = 1360.5. You can set that one to any value you prefer to agree with any series you like without influencing what is important, namely the time-variation of TSI. So, the notion of ‘too high’ or ‘too low’ does not apply.

So in other words there is no objective way to score your model? Not scientific imo.

A single instrument model should be employed as I have done, not a blended set, using the longest series possible, which in this case is PMOD, for the most self-consistent results.

The model is too high by the offset. It must be adjusted in the same way as when using PMOD to infill SORCE: determine the offset between them from the difference of their averages over the three common low-minimum years, 2007-9. In this case just use the trendline offset.

The fact is the model overvalues by the offset amount between the trend lines, something any objective person could see, and that model needs to be adjusted to accommodate it.
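The adjustment Bob describes, leveling two series on a shared quiet interval such as the 2007-9 minimum, can be sketched generically like this (the function and array names are mine, for illustration only):

```python
import numpy as np

def offset_adjust(reference, other, common_mask):
    """Shift `other` onto `reference`'s level using their mean
    difference over a shared quiet interval (e.g. the 2007-9
    minimum years). Returns the shifted series and the offset."""
    offset = np.nanmean(reference[common_mask] - other[common_mask])
    return other + offset, offset
```

The same one-number shift is all that separates the disputed trendlines, which is why the argument below turns on whether that offset is an “error” or a free parameter.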

I find it absolutely fascinating how you can be so particular about whether everyone else’s models curve-fit or not, but not this one. It’s not about Claus. We had several good conversations and I didn’t bring this up.

Otherwise our models correspond better than the other(s) do, something I wanted you to see.

A single instrument model should be employed as I have done, not a blended set, using the longest series possible, which in this case is PMOD, for the most self-consistent results.

When you look at the totality of measurements of TSI [I deliberately use an old slide]:
you’ll see that even PMOD is ‘blended’. The ‘composites’ are made by scaling all data to the same [but arbitrary] instrument and then removing the offsets and ‘blend’ the values into a single series. This is standard procedure. Since we don’t know which instrument measures the true, real values, the offsets are all different and no one series is too low or too high. What Claus did in Slide 54 sidesteps this problem by plotting each series as the deviation from its value at sunspot minimum. This allows him to find a formula [a model] for the variation as a function of the sunspot number [actually the square root of the sunspot number as the net magnetic flux scales with the sqrt of the number of spots as I pointed out to him long ago]. Having removed the offset between instruments allows him to ‘blend’ all data into a single composite. Adding in an arbitrary base value [1360.5 for example] turns the result into a useful TSI. What the base value should be is not known today [but will be soon when TSIS-1 has operated long enough].
Claus understandably uses his own PMOD series to define the base value. SORCE/TIM has another base. RMIB still another. ACRIM yet another. None is more correct at this point.
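The procedure Leif describes — express each instrument as a deviation from its own sunspot-minimum level, then blend and add an arbitrary base — might be sketched like this. This is my own simplified reading of the slide-54 method, assuming the records are already on a common time grid (NaN for gaps); the quiet-Sun threshold and all names are mine.

```python
import numpy as np

def to_deviation(tsi, ssn, quiet_ssn=5.0):
    """One instrument's TSI as the deviation from its own mean
    level at sunspot minimum (SSN below quiet_ssn)."""
    quiet = ssn < quiet_ssn
    return tsi - np.nanmean(tsi[quiet])

def composite_tsi(tsi_records, ssn, base=1360.5):
    """Blend instruments: remove each record's own offset at
    minimum, average the deviations, add an arbitrary base."""
    devs = np.vstack([to_deviation(t, ssn) for t in tsi_records])
    return base + np.nanmean(devs, axis=0)
```

Because each record is referenced to its own minimum before blending, the instruments’ differing absolute scales drop out, and only the common variation with sunspot number survives — which is exactly why “too high or too low” has no meaning until an absolute value is pinned down.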

Since PMOD is already ‘blended’, adding it into another blend again is ‘double-blending’. It’s why I say use just the one and test against that. Seems you’re willing to accept a vague average. There must be a way to specifically test the model against measurements today.

List all the datasets blended to make PMOD, then list the datasets blended with PMOD again to arrive at the new double-blended model Claus made, and then I can offset and scale the datasets properly, put them all together in a scatterplot to compare against the model. I think you’ll arrive back at the same place: the model is too high.

It should not be necessary to wait again for the next instrument data to know that now.

the model is too high
I added 1360.5 to match Lean’s for the spacecraft era [so we can compare with hers]. I can add anything I like to match anything I want, so the ‘too high’ bit is meaningless. The meat of the model is the dependence on SN and GN: TSI = TSIo + (0.0304 SN^0.7 + 0.239 GN^0.7)/2 which is derived from Claus’s fit to all available data in slide 54. TSIo is a free parameter that I can set to anything in order to match whatever I want. If I want to match PMOD I use one value of TSIo. If I want to match SORCE, I must use another value of TSIo. If I want to match ACRIM, I must use yet another value of TSIo, etc. All this is necessary because we do not know what the ‘real’ TSIo is [but we shall soon, when TSIS-1 has been in operation for some time and is brought back to Earth, so we can measure its degradation].

What methods were used to validate the model?
All instruments degrade in the harsh space environment, so the issue comes down to determining how large the degradation is and how it varies with time. That is usually done by having several sensors on board and exposing them [‘opening the window’] to sunlight for varying amounts of time, under the assumption that when the window is closed there is no degradation. Experience has shown that this assumption is not valid, so the only solution is to bring the instrument back to the lab and measure directly how much it has degraded.
What Claus did for the old data is the next best thing: we believe that the variation of TSI is due to the variation of the Sun’s magnetic field, which we can measure and which is closely related to the sunspot numbers, so Claus plots the TSIs against the sunspot numbers [in slide 54] to derive a calibration curve. This is the validation of the ‘model’. As each spacecraft has its own value at SSN = 0 [namely TSIo], we can get TSI for that spacecraft instrument by adding the TSIo for that instrument. Thus ‘too high’ becomes meaningless, as you must ask “too high compared to what?” And the answer is not known at this point in time.

I meant the data for TSI, not the TSI changes themselves, which are part of the climate puzzle. That is important, especially if changes in TSI are north of 0.03% between very low and very high solar activity periods, as this article suggests.

I think the data on what TSI actually is are not reliable, but that is not really important. What is important is by how much TSI changes.

All of our solar and geomagnetic indices show that solar magnetic activity now is at the level of what it was a century ago in contradiction to this attempt to make TSI vary much more than would be indicated by the equality of activity now and back then.

There have been many cases in history where the top paradigm was confronted with a shift and at times a complete nullification. But not without bloodletting. I am considering re-entering the oftentimes bloody battle known as research. And my area of interest will indeed cause, if I am correct, a very bloody battle. Having been on the edge of my seat watching the solar reconstruction battle, I have gained respect for the fight, now understanding that this is likely the norm when long-held ideas and beliefs are made to face a shift. Battles will continue long into the evening of the shift, as is so obviously present in Lean’s current effort.

So I am either a glutton for punishment, or I still believe in the classic scientific method and am willing to get a few bumps and bruises along the way, from abstract to conclusion, with a hypothesis that, if shown to be the case, will redefine a major area of learning disorders and how we test for them.

So thank you Leif for showing me how to keep a level head and a civil tongue when new ways of thinking challenge the status quo.