Scientists Issue Unprecedented Forecast of Next Sunspot Cycle

This is an official news release from NCAR (the National Center for Atmospheric Research). Apparently, they have solar forecasting techniques down to a “science”, as boldly demonstrated in this press release. – Anthony

Scientists Issue Unprecedented Forecast of Next Sunspot Cycle

BOULDER—The next sunspot cycle will be 30-50% stronger than the last one and begin as much as a year late, according to a breakthrough forecast using a computer model of solar dynamics developed by scientists at the National Center for Atmospheric Research (NCAR). Predicting the Sun’s cycles accurately, years in advance, will help societies plan for active bouts of solar storms, which can slow satellite orbits, disrupt communications, and bring down power systems.

The scientists have confidence in the forecast because, in a series of test runs, the newly developed model simulated the strength of the past eight solar cycles with more than 98% accuracy. The forecasts are generated, in part, by tracking the subsurface movements of the sunspot remnants of the previous two solar cycles. The team is publishing its forecast in the current issue of Geophysical Research Letters.

“Our model has demonstrated the necessary skill to be used as a forecasting tool,” says NCAR scientist Mausumi Dikpati, the leader of the forecast team at NCAR’s High Altitude Observatory that also includes Peter Gilman and Giuliana de Toma.

Understanding the cycles

The Sun goes through approximately 11-year cycles, from peak storm activity to quiet and back again. Solar scientists have tracked them for some time without being able to predict their relative intensity or timing.

Forecasting the cycle may help society anticipate solar storms, which can disrupt communications and power systems and affect the orbits of satellites. The storms are linked to twisted magnetic fields in the Sun that suddenly snap and release tremendous amounts of energy. They tend to occur near dark regions of concentrated magnetic fields, known as sunspots.

The NCAR team’s computer model, known as the Predictive Flux-transport Dynamo Model, draws on research by NCAR scientists indicating that the evolution of sunspots is caused by a current of plasma, or electrified gas, that circulates between the Sun’s equator and its poles over a period of 17 to 22 years. This current acts like a conveyor belt of sunspots.

The sunspot process begins with tightly concentrated magnetic field lines in the solar convection zone (the outermost layer of the Sun’s interior). The field lines rise to the surface at low latitudes and form bipolar sunspots, which are regions of concentrated magnetic fields. When these sunspots decay, they imprint the moving plasma with a type of magnetic signature. As the plasma nears the poles, it sinks about 200,000 kilometers (124,000 miles) back into the convection zone and starts returning toward the equator at a speed of about one meter (three feet) per second or slower. The increasingly concentrated fields become stretched and twisted by the internal rotation of the Sun as they near the equator, gradually becoming less stable than the surrounding plasma. This eventually causes coiled-up magnetic field lines to rise up, tear through the Sun’s surface, and create new sunspots.

The subsurface plasma flow used in the model has been verified with the relatively new technique of helioseismology, based on observations from both NSF– and NASA–supported instruments. This technique tracks sound waves reverberating inside the Sun to reveal details about the interior, much as a doctor might use an ultrasound to see inside a patient.

NCAR scientists have succeeded in simulating the intensity of the sunspot cycle by developing a new computer model of solar processes. This figure compares observations of the past 12 cycles (above) with model results that closely match the sunspot peaks (below). The intensity level is based on the amount of the Sun’s visible hemisphere with sunspot activity. The NCAR team predicts the next cycle will be 30-50% more intense than the current cycle. (Figure by Mausumi Dikpati, Peter Gilman, and Giuliana de Toma, NCAR.)

Predicting Cycles 24 and 25

The Predictive Flux-transport Dynamo Model is enabling NCAR scientists to predict that the next solar cycle, known as Cycle 24, will produce sunspots across an area slightly larger than 2.5% of the visible surface of the Sun. The scientists expect the cycle to begin in late 2007 or early 2008, which is about 6 to 12 months later than a cycle would normally start. Cycle 24 is likely to reach its peak about 2012.

By analyzing recent solar cycles, the scientists also hope to forecast sunspot activity two solar cycles, or 22 years, into the future. The NCAR team is planning in the next year to issue a forecast of Cycle 25, which will peak in the early 2020s.

“This is a significant breakthrough with important applications, especially for satellite-dependent sectors of society,” explains NCAR scientist Peter Gilman.

The NCAR team received funding from the National Science Foundation and NASA’s Living with a Star program.

As we don’t understand the Sun, how can we have an accurate model of it? Or am I being pedantic?

Secondly, tweaking an arbitrary model of the Sun to match the previous solar cycles does not invest the model with “predictive skill”. It simply means one has tweaked a computer program to produce the desired result … a posteriori.

The photo is priceless. This to me summarizes the complete lack of scientific approach to everything these days. Everyone sits round a PC and blindly believes whatever nonsense it spews out.

No first principles. No cause and effect. No understanding of physics, mathematics, or even statistics. Just run any number of widely available computer modeling programs, fit the historical data, and hey presto: another science breakthrough.

You sucked me in. I read the whole thing and finally saw the date the prediction was issued. I should have been more wary when reading, as our own site publishes failed computer model prediction stories all the time. In fact, we have a site category that lists these stories (“Failed Predictions – Model/Human”), here:

Needless to say, scientists who develop computer models, and come to believe in their amazing forecasting ability, should really stay away from press releases on roll-out. Since that won’t happen, it would be great if at some point computer model scientists would gain some perspective and humility from the constant failures.

Like I’ve said several times before, any model is only as good as its forecasts, not its hindcasts. I sure hope the climate modelers think about this story before the next time they tell us how certain they are about the future climate.

Natural variability is currently “masking” the solar activity. Once this temporary condition eases, the sunspots, storms, and cycles will come back with a vengeance unless we change our polluting ways.

“The panel now expects the sun’s activity will peak about a year late, in May 2013, when it will boast an average of 90 sunspots per day. That is below average for solar cycles, making the coming peak the weakest since 1928, when an average of 78 sunspots was seen daily.”
…

“But not everyone on the panel expects the coming cycle to be weaker than average. “The panel consensus is not my individual opinion,” says panel member Mausumi Dikpati of the High Altitude Observatory in Boulder, Colorado.

Dikpati and her colleagues have developed a solar model that predicts a bumper crop of sunspots and a cycle that is 30% to 50% stronger than the previous cycle, Cycle 23.

Because it is still early in the new cycle, it is too soon to say whether the sun will bear out this prediction, Dikpati says. “It’s still in a quiet period,” she told New Scientist. “As soon as it takes off it could be a completely different story.””

This reminds me of all the betting schemes on horse races and football matches etc that I get sent through the mail that are going to make my fortune. They claim a positive % success rate and have past results to prove it.

Often wonder why they are telling me this instead of making their own fortunes. This forecasting principle obviously applies now to the new AGW Industry who are creaming us all off on predictions that no-one can guarantee – and no doubt will go on ad infinitum.

It seems to be in our nature to think that we know everything. I wonder if this is part of our psychological mindset. If it has not been done already, sociologists or perhaps psychologists could have a field day analyzing people’s responses to global events such as swine flu and global warming. Assuming that they can maintain professional detachment from the events happening around them.

Perhaps it’s the times we live in, but we seem to have lost a sense of proportion over such events. It is claimed by Kofi Annan that 300,000 die because of an unsubstantiated consequence of “global warming”. How many die of wars, polluted water, disease, malnutrition, smoking, traffic accidents, etc.?

From http://saintseminole.awardspace.com/stats.htm & others
47,000 die in the US every year because of the flu & respiratory diseases (USA Today)
112,000 die in the US every year because of obesity
418,000 die in the US every year because of smoking
39,000 die in the US every year because of traffic accidents
1.2 million die in the world, annually, in traffic accidents
and so on…

Really, the sunspot cycle prognosticator boys desperately need to just STFU until they’ve got some hard observable facts to base a new forecast on. They’re making Jeanne Dixon look good at this point. Shameful.

A great example of developing a computer model based on a very limited dataset (the past 12 cycles) of a highly complex phenomenon with no “out-of-sample” testing to validate the model. The vaunted AGW models spewing their predictions of runaway warming all suffer from the same deficiencies. How can such educated people make such basic and obvious errors?

So, we see that they (the team) have NOT changed their press release – despite two out of seven years of their FIRST prediction being completely wrong: 2007–2009 was completely flat, rather than rapidly increasing toward a peak in 2013.

What changed – so their predictions for 2007 and 2008 can be updated? What did NOT change – to indicate that their program was right, but off in its initial conditions? What did they correct in their program? Why are they being paid – if they REFUSE to update and revise their program’s predictions when proven so abjectly wrong?

The good news is that the forecast made back in 2006 might have verified purely by accident. Consider what the subsequent effects of this would have been. We are indeed fortunate that the prediction was falsified on the first shot out of the box.

I’ve spent a lifetime involved with weather/climate observations and making operational forecasts and along the way looking at every medium to long-range forecast scheme that’s ever been publicized. I’ve even tried to make a few of them myself. After a while it starts to sink in. None of these schemes has stood the test of time. Most have failed shortly after their publication.

Computers can fit curves to an amazing variety of prior observations, but using the same algorithm to make predictions is almost never successful.
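The commenter’s point is easy to demonstrate. Below is a minimal sketch (all data invented for illustration): an interpolating polynomial reproduces the “past” essentially exactly, yet one step beyond the fitted range it goes badly wrong.

```python
# Hypothetical illustration: a flexible curve can "hindcast" noisy data
# almost perfectly yet forecast poorly. All data here are invented.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)                 # "past" observations
y = np.sin(x / 2.0) + 0.3 * rng.standard_normal(10)

# A degree-9 polynomial through 10 points: the hindcast is essentially exact.
coeffs = np.polyfit(x, y, deg=9)
hindcast_error = np.max(np.abs(np.polyval(coeffs, x) - y))

# But step one unit outside the fitted range and the same curve runs away;
# the true underlying signal out there is still only O(1) in size.
forecast = np.polyval(coeffs, 11.0)

print(hindcast_error)   # tiny: the model "explains" every past wiggle
print(forecast)         # large excursion compared with the O(1) signal
```

The hindcast looks like 98%-plus “skill”; the forecast is the part that actually tests the model, which is exactly the distinction the comment draws.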

“Researchers” who continue to pursue these phantasms can have many motivations. They often believe in their own ideas. More recently there seems to be a horde of investigators that have no particular loyalty to their precepts but find them to be convenient vehicles for advancement in their chosen profession.

Experience has proven, at least to me, that the only appropriate position to take on unverified predictions is skepticism, with or without prejudice, depending on the track record of the proponent(s).

The field of meteorology is one in which failure to achieve verifiable results is not a handicap to success.

Robert Wood (18:30:15) : Perhaps Leif can come to the aid of my confused mind.
As we don’t understand the Sun, how can we have an accurate model of it? Or am I being pedantic?

Secondly, tweaking an arbitrary model of the Sun to match the previous solar cycles does not invest the model with “predictive skill”. It simply means one has tweaked a computer program to produce the desired result … a posteriori.

When the solar cycle prediction panel started its deliberations I produced this http://www.leif.org/research/Grow-N-Crash%20Prediction%20Model.pdf computer model to show how easy it is to match previous cycles with a reasonable physical model. I called it the Grow-N-Crash model, and it simply says that cycles grow and grow until they are too big, then crash, and the process repeats… My model does a very good job, too, and even predicted a large [140] SC24 like theirs… The exercise was to show how easy this was and how their model was not unique in its predictive ‘power’.
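For readers curious what a deliberately simple grow-then-crash rule looks like in practice, here is a toy sketch of the idea as described in the comment. Every constant below is invented for illustration; the actual model is in the linked PDF.

```python
# Toy re-creation of the "grow until too big, then crash" idea described
# above. The start value, growth rate, ceiling, and floor are all invented.
def grow_n_crash(n_cycles, start=80.0, growth=1.25, ceiling=200.0, floor=60.0):
    """Return a list of toy cycle amplitudes following a sawtooth pattern."""
    amps, a = [], start
    for _ in range(n_cycles):
        amps.append(a)
        a *= growth              # each cycle is bigger than the last...
        if a > ceiling:          # ...until it gets "too big"...
            a = floor            # ...then it crashes, and growth restarts
    return amps

print(grow_n_crash(10))
```

A rule this crude still produces a plausible-looking sawtooth of cycle amplitudes, which is the point of the exercise: matching past cycles is cheap, so a good hindcast by itself proves little.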

I believe that all reviews should be published [as an electronic supplement to the paper] and I have never insisted on anonymity in reviewing. IMHO, the review is as important to the public as the paper itself.

I am afraid that I cannot go along with the crowd in poking fun at the funny failed computer model prediction. If, as advertised, the failed prediction is based on a model that incorporated physical dynamics related to sunspot formation in the past, then the failure might well suggest that something important has changed in the sunspot department. Maybe the slowing of the conveyor belt that was reported a while back, for example. While I am certainly not a CO2 denouncer, my not-so-humble opinion is that this failed prediction is worth serious consideration.

“The next sunspot cycle will be 30-50% stronger than the last one . . .The scientists expect the cycle to begin in late 2007 or early 2008, . . .”
Contrast:

May 8, 2009: A new active period of Earth-threatening solar storms will be the weakest since 1928 and its peak is still four years away, after a slow start last December, predicts an international panel of experts led by NOAA’s Space Weather Prediction Center.

Scientific modeling is useful for the continuing progress of science, but using un-validated models to build “consensus” or “settled science” and guide political policy is crazy, and not “based on the facts”. I wouldn’t trust a climate model further than I could throw it.

When I was in business school I spent a lot of time studying models of economic and financial forecasting. The first thing you learn in this process is that it is a fundamental mistake to assume that the future is going to resemble the past. You can construct a model to “predict” the past with incredible precision, but when you apply it to the future it doesn’t work very well. If that were not the case, there would be models that predict the future behavior of financial markets with tremendous precision.

The problem with models is, when it’s actually your own money at stake, you usually lose your money. That’s the case here as well.

NCAR has (had) some heavy-duty solar dynamo researchers like Miesch. I have never seen dynamo researchers making sunspot predictions, however.

I’d say the ‘solar conveyor recycling flux tubes’ is going to be lightly regarded if not considered falsified by 2015 or so. Why they don’t figure the baroclinic forces at the tachocline create a Coriolis effect and start there I can’t figure.

That’s the great thing about these solar cycle predictions – we don’t have to wait long to know if they were accurate. Hopefully this lull is actually providing an incredible learning opportunity for solar scientists and is also reminding them of how much they DON’T know about the sun just yet.

This prediction was correct(ish) in one regard – this next cycle is starting late!

You have to give them credit for standing up with several firm predictions. They had several bold numbers which would tend to confirm their model if they had been achieved. The start date had a broad spread, but predicting a late start would have made their high peak all the more interesting. At the moment all we know is that their start date prediction was wrong.

I don’t have time to look in the thesaurus for proper descriptions for how late the start has actually become.

Often wonder why they are telling me this instead of making their own fortunes.

Most unfortunate, dear sir, they are having invested vast winnings to diamond concern being based in Nigeria. That we see you are one most worthy of trust, we asking of your kind assistance, which will being mutual profitable . . .

When I was in business school I spent a lot of time studying models of economic and financial forecasting. The first thing you learn in this process is that it is a fundamental mistake to assume that the future is going to resemble the past. You can construct a model to “predict” the past with incredible precision, but when you apply it to the future it doesn’t work very well. If that were not the case, there would be models that predict the future behavior of financial markets with tremendous precision.

Back in the mid-1970s a friend of mine had a letter published in Nature in which he forecast Cycle 21 based on a power series analysis; every other cycle had a negative sunspot count, making the 22-year cycle look sort of like a sine wave. But he admitted it was “pure numerology” with no attempt to take any underlying physics into account. This can work in some circumstances — you can get very good ephemerides of planetary positions this way, and you don’t need to know anything about gravitation, or Kepler’s Laws, or any of that stuff. As I recall, the prediction for Cycle 21 turned out to be pretty accurate, but cycles 22, 23, and 24 just fell to pieces — the curve didn’t even look particularly sinusoidal.

As they say, “past performance is not necessarily indicative of future results.”

I give high marks for at least being bold and precise in their predictions, but I’m sure one doesn’t have to give them a call to advise them to be a little more humble in choosing a confidence level in the future.

The most telling thing here applies to all models – particularly the climate models to which AGWers attach 95%+ probabilities (am I wrong in thinking that the probability of independent events 1, 2, 3, …, n is P1*P2*P3*…*Pn?). Can one have such high probabilities for each factor that, when multiplied out, they would arrive at 90% or 95%, or even 65%, probability? Indeed, of the possible climate “causing” factors, solar cycles are the most predictable of all – wouldn’t a climate scientist just love to be able to predict future climate as accurately as one can predict solar cycles – say, be a year or two off. And yet, even with this sinusoidal behaviour, we can be so off the mark.
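The multiplication rule the commenter is reaching for holds for independent events (not “interdependent” ones), and the arithmetic is easy to check. The values of p and n below are purely illustrative.

```python
# Illustrative only: confidence in a conclusion resting on n independent
# factors, each held with probability p, is p**n (product rule for
# independent events).
p, n = 0.95, 10
joint = p ** n
print(joint)  # roughly 0.6: ten 95%-sure links yield only a ~60%-sure chain
```

So even ten links each known with 95% confidence leave the joint claim at about 60%, which is the commenter’s worry about stacking high headline probabilities.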

I think it is daft to give the discredited Dikpati paper prominence without at the same time publishing the papers that discredit it.

See the devastating critiques by Choudhuri, Chatterjee and Jiang.

In this regard see Choudhuri, Chatterjee and Jiang (2007). Using solar dynamo modelling, these authors predict a very low amplitude for Sunspot Cycle 24, similar to the prediction of Svalgaard et al. (2005). Käpylä (2007) compared the solar dynamo models of Choudhuri, Chatterjee and Jiang (2007) and Dikpati et al. (2006), concluding that the Choudhuri et al. model was the superior of the two. Amongst other things, the model used by Choudhuri et al. provides that the poloidal field generation is intrinsically random, whereas the model used by Dikpati et al. uses sunspot area data as a deterministic source for the poloidal field.

The Choudhuri et al model is considered the more realistic of the two. Secondly, the Choudhuri et al model was published in 2004 and subject to scrutiny by scientists. The Dikpati et al model has not been published.

Ok, since the one-armed bandit is out… it’s time for my new planetary predictor: the tarot reading of the climate. Center card is Earth, crossing card is Solar System… then we have a N, S, E, W spread for the compass points of Earth. The four cards down the left, from top to bottom, would be for ocean currents, volcanic activity, water vapor and sun activity. :D Ya think if I run my predictors 400 times they would give me 10k for funding? hahahahaha..!!! It’s a lot of work, but I bet I could come up with a better prediction than most.

Gilbert (21:51:24) :Great review. How much attention did they actually give it?
They did follow all my requests, otherwise it would not have been accepted as I get to see it again and again, until I’m happy with it. The reviewer most of the time wields enormous power in that regard.

I don’t mind scientists positing ideas and hypotheses. It is what is done and encouraged……

I only start minding when those yet unproven ideas and hypotheses are used to change the World’s economy and energy use, whilst declaring a minor, but important atmospheric trace gas, a “pollutant”.

Also I’m not terribly happy with scientists using everyone else’s money to pursue what is basically a glorified hobby…. Most of ‘em need a haircut and a proper job….. They can still do science on the weekends ;-)

Long ago, before cheese was invented, I was a teacher. My task was to purvey various principles of English law to the willing but not necessarily able. One thing I learned very early on was that my students would cross-check everything I said against the content of a variety of textbooks and academic papers. If it wasn’t in writing it was treated with utmost skepticism.

That’s fair enough, people far more experienced than me and with far better paper qualifications wrote the books and articles. Or so one might think. In fact there were instances of my explanations being questioned on the basis of articles and books written by people whose experience and qualifications were indistinguishable from my own. It was all done very politely, but it was clear that most students presumed what they read to be more reliable than what they heard.

Over time my name appeared at the bottom of articles in journals and on the title pages of books (not many and not very good ones) and the result was that I then had authority in their eyes. What I said in lectures and tutorial classes might have remained the same but they were now words of confirmation rather than of dissent. The written word was very much King.

I wonder whether we are seeing the same thing with the spewings of computer models. Once something has been spewed, it seems to be presumed that it is correct. That is not an entirely illogical approach; after all, the models are set up and operated by people with strings of letters after their names. It is logical to accept the word of people far better qualified than you are; what is not logical is to accept the word of a computer. Yet we seem to find that the authority afforded to computerised results is greater than the authority given to the word of those who fed the computer in the first place.

It’s a recipe for overstatement and for the weakening of critical analysis. Not good.

Richard Mackey (21:53:38) : Amongst other things, the model used by Choudhuri et al. provides that the poloidal field generation is intrinsically random, whereas the model used by Dikpati et al. uses sunspot area data as a deterministic source for the poloidal field.

The randomness is an important element in the model and is what makes prediction more than one cycle ahead impossible [except in a statistical sense].

The Choudhuri et al model is considered the more realistic of the two. Secondly, the Choudhuri et al model was published in 2004 and subject to scrutiny by scientists. The Dikpati et al model has not been published.

Richard means the actual source code. I have studied the Choudhuri code in detail and it is well documented and understandable. During our panel meetings I urged Dikpati to show us her code so we could form an opinion about it, but she refused with the excuse that the code was the result of many years of work and was patch upon patch and so ‘messy’ that it was not ready for public consumption for which it must be cleaned up and documented [Choudhuri actually remarks that he spent a considerable amount of time doing just that]. I asked her how she could have faith in the correctness of the programming if the code was such a mess, but never got a good answer. Another argument was something about ‘intellectual property’ [of taxpayer funded work???].

Also I note this comment… “Our model has demonstrated the necessary skill to be used as a forecasting tool”…
I would think that it would be necessary for the model to have actually correctly forecast something to make this statement, what do you think?

Perhaps they were a bit too optimistic here. But the coming cycle will be a good test case. Dikpati is sticking to her guns and expects the huge cycle to be just around the corner. And in truth, we don’t KNOW which way it is going to go. The coming cycle is ALSO a test of my method, so we shall see. In either case we will learn something.

Excellent investment opportunity: Get someone to pick up the tab for your computer game development, then hide the source code.
I would expect something funded by NSF to be open source, where the idea is that if you get it wrong, someone else can come along and try to improve it, especially if they have an understanding of the underlying processes.
No advancement possible there.
Wonder what their justification was for the funding?
Even when you get an HST award, you can normally only keep the data proprietary for 1 year, after which the data becomes public record.

How did they get 98% when it got SC14 wrong, and butchered SC12??
Hmmm…. maybe I am in the wrong business. If I dust my C off, I could write a program to generate loops and fill them with color so that the image matches all the Solar Cycles, and sell it to NSF for a couple million.

“New Solar Cycle Predictions
by Tony Phillips
Boulder CO (SPX) May 28, 2009
An international panel of experts has released a new prediction for the next solar cycle, stating that Solar Cycle 24 will peak in May 2013 with a below-average number of sunspots. Led by the National Oceanic and Atmospheric Administration (NOAA) and sponsored by NASA, the panel includes a dozen members from nine different government and academic institutions.

Their forecast sets the stage for at least another year of mostly quiet conditions before solar activity resumes in earnest.

“If our prediction is correct, Solar Cycle 24 will have a peak sunspot number of 90, the lowest of any cycle since 1928 when Solar Cycle 16 peaked at 78,” says panel chairman Doug Biesecker of the NOAA Space Weather Prediction Center, Boulder, Colo.”

Aren’t they also running a program to enlighten TV meteorologists of the reality of AGW? I believe I’ve seen a model on their site that shows how, if it wasn’t for all the CO2 in the atmosphere, the climate would have been cooling for the past 20-30 years or so. Here’s a rhetorical question – do they do any real science?

It is Archimedes who is thought to have said, “Give me a fulcrum, and I shall move the world!” Some of today’s ‘scientists’ have left facts behind and seem to believe, “Give me a computer, and I will create the reality that suits my purposes.”

The thing that bothers me the most is the claim that if a model prediction turns out to be close that we will have learned something, that the model is somehow validated because of a single prediction being close, when it could be nothing more than coincidence. Several fairly accurate predictions of *future* cycles would be a different matter, but that encompasses several decades of waiting.

rbateman (23:01:47) :How did they get 98% when it got SC14 wrong, and butchered SC12??
They use the first three cycles [12-14] to seed the model, so they are predicted poorly. Only when loaded with three cycles is their model expected [by them] to perform well.

Glenn (23:17:22) :Several fairly accurate predictions of *future* cycles would be a different matter, but that encompasses several decades of waiting.
The polar field precursor method has now performed reasonably well for three cycles and may be on track for number four, which in any case will be yet another test.

According to my breakthrough statistical reanalysis of sunspot activity over the past two years, the NCAR prediction was actually correct. Sunspot cycle #24 is already a raging bull! I won’t release my database to any of you counter-revolutionary cads because, after all, the debate is over. But the New York Times will trust my work, and so should you. Believe me; RAGING BULL! I recommend pointy tin-foil hats for all as a precaution. I also predict a thumping EMF pulse that will tear the fanny off every Prius Smugmobile on planet Earth.

While having no expertise in this area at all, I was struck by the comment posted by C Colenaty. Maybe the Dikpati model is no longer valid because something has changed in the sun? Anthony has more than once drawn attention to the October 2005 step change in solar flux and to my untutored eye, it does seem as though there has been some kind of phase shift. Is this something fundamental that might have happened, putting us into a rather different regime, wherein a previously reasonable model no longer applies?

Leif: Yes, I agree all published papers should include the reviewers’ reports. Apparently Nature (not 100% certain) tried this, but it caused so much revolt/trouble that they stopped it. What editors are suggesting is a short summary of the review. I still think it should be the full review (even though people may not be interested in missing commas and back-and-forth emails, etc.). BTW, I think your prediction SSN 70 + Archibald’s (50) = 65 will be the mark for 24. As I recall, Dikpati was always way over the top? This posting… I also got sucked in… Great!

I am a full-time trader and trade the stock markets for a living. All the big banks have these models that can predict past stock market behavior very accurately but are totally useless in predicting the future. They curve-fit the historical data with their models and then use those models to try to predict the future. They failed miserably, as evidenced by the present financial crisis.

I guess the same thing is happening here with this so called scientific model for predicting sunspot cycle.

And another point… maybe Svensmark’s proposition that the universe (not only the Sun) influences Earth’s climate (and the Sun per se) may have some truth to it after all. The variables would be completely unpredictable. The videos are very professional and obviously geared eventually for mainstream media (personal opinion).

I think I’ll bookmark this post for reference the next time some commenter suggests that peer-reviewed studies are inherently superior. Here we have a prime example of a paper that was subjected to peer review with a rigor and diligence that I strongly suspect exceeds the level generally provided, and it still turned out to be quite a load of [snip]. Saved ya some work there, mod. Like the old saying says, you can’t make a silk purse out of a sow’s ear.

I am afraid that I cannot go along with the crowd in poking fun at the funny failed computer model prediction … ”

The crux of the piece isn’t about ‘poking-fun’ it’s about highlighting that [a] hindcasts of a model are about as reliable as rune-reading and [b] trying to base your predictions of a complex system on limited understanding is going to come back and bite you.

Do you really, really think that ‘something in the sun has changed’? That’s a classically human-centric approach, in essence ‘we had it right but now it’s changed’.

How about ‘a previously unknown mechanism in the sun has come to light’ (apologies for the pun)? This way of looking at the universe has a subtle but important distinction to the previous, that is, we don’t know it all and we’re still learning…

Hhmmmm! As my late father used to say, “modern technology is infallible, right up to the point when it doesn’t work!”.

I suspect they’ve been using some leftovers from the Met Office’s models, perhaps. BTW, they’ve managed to get the forecast right for 3 days so far!

Slightly OT – over at climaterealists, Piers Corbyn has invested in a new calculation device, much as the Met Office has recently done, only his costs a little less! Well worth a look at this incredible technology!

It is very unfortunate that the AGW ship of fools is creating doubts about science in general, and not only climate.

Statements that comment on the funding of failed models, for example, show an enmity toward, and denial of, the scientific method that we would have to go a long way back in the past to find.

To describe scientists as exercising their “hobby” is right. We, who are scientists, are lucky to have been paid for pursuing our hobby, but have a look at what pursuing a hobby seriously means: a 24/7 dedication, and even when you sleep you tackle the current problem. I remember when we were doing bubble chamber picture scanning, I used to see antineutrino events instead of sheep before going to sleep. Scientists who pursued science as a hobby because they were affluent enough not to need financing, like Newton and Darwin, would be too few and far between to create the present burgeoning of science, which happened when universities started to be seriously funded.

There is something monomaniacal and compulsive in the makeup of a good scientist, and in our present society, scientists are funded to follow their nose. This means that they will make mistakes and wrong models, wrong turns and assumptions. It is all in the program, otherwise one would not be doing research but logistics. Research means that if you are right 10% of the time you are doing well.

The funding of mediocre scientists, in hindsight, should not be considered a waste of national resources. My father used to say, “you need a lot of manure for roses to bloom”, and the ambiance of universities and research institutes is what is necessary for the few and correct results (in hindsight) to appear.

The problem with the AGW movement is that it mixed up science with politics and maybe even a mystical Gaia agenda. That should not be attributed to science, but unfortunately, for the average skeptic, it is.

That will be the lasting damage of this wrong hypothesis of CO2 warming.

We should not hang Dikpati et al for being wrong. It is part of the fertilizer.

Their model was brilliant at predicting past solar cycles but failed spectacularly within a couple of years when attempting to predict the future. Does this sound familiar?

Of course, the current sleeping sun may have been caused by a step change that could not have been predicted on the basis of our current knowledge. But, becoming ever more cynical in my old age, I would lean towards another explanation: that, despite all their talk of physical processes, their model was little more than an exercise in curve fitting. If so, then their claimed success in hindcasting previous solar cycles is a perfect example of circular reasoning.

This piece is excellent because there is a clear parallel with climate models. They are brilliant at forecasting climate that has already happened, but they fall at the first hurdle when it comes to forecasting the future. All the models, including NASA Model E, predict a continuous rise in the global temperature. Of course, it hasn’t happened. The world is no warmer than it was ten years ago (I suspect that’s what they really mean when they say ‘things are even worse than we thought’). Could it be that the climate models, too, are little more than exercises in curve fitting?

I suspect it will be many decades before we understand the climate engine sufficiently to make meaningful predictions. Indeed it may never be possible due to the chaotic nature of weather and climate. Even if I believed in AGW I would still regard those MIT researchers as fools. Claiming they can predict the global climate in 2100 with a high degree of precision is such obvious nonsense that it does make me rather sad about the state of science. I think that, as science has always been self-correcting in the past, things will improve and climate science will become honest. But I’m not holding my breath.
Chris
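The “brilliant in hindcast, useless in forecast” pattern Chris describes is easy to reproduce with toy data. A minimal sketch, with all numbers invented: the series is pure noise with no underlying trend, standing in for any record a flexible model might be tuned to; the model fits the “hindcast” window closely and then goes badly wrong outside it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic flat "temperature" record with noise (illustrative only).
years = np.arange(1979.0, 2009.0)          # 30 "years" of data
temps = 14.0 + rng.normal(0.0, 0.2, 30)    # no underlying trend at all

fit_years, fit_temps = years[:20], temps[:20]   # "hindcast" period
new_years, new_temps = years[20:], temps[20:]   # "forecast" period

# A flexible curve fit (degree-10 polynomial on a scaled domain)
# tracks the hindcast period closely...
model = np.polynomial.Polynomial.fit(fit_years, fit_temps, deg=10)
hind_rmse = np.sqrt(np.mean((model(fit_years) - fit_temps) ** 2))

# ...but extrapolated beyond its fitting window it goes badly wrong,
# even though the "true" process is a constant plus noise.
fore_rmse = np.sqrt(np.mean((model(new_years) - new_temps) ** 2))
print(hind_rmse, fore_rmse)
```

The in-sample error is small and the out-of-sample error explodes; nothing about the fit’s hindcast skill told us anything about its forecast skill.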

In a 2006 paper in Urania, de Jager says this of the solar dynamo believed to drive the Sun’s magnetic activity: “The dynamo is a non-linear process that shows chaotic elements. Phase catastrophes do occur. Therefore it is basically not possible to forecast future solar activity.”

That said, de Jager and Duhau published another paper in 2008, based on some empirical observations of the long-term oscillation around these phase transitions, and came up with this prediction.

“The regularities that we observe in the behaviour of these oscillations during the last millennium enable us to forecast solar activity. We find that the system is presently undergoing a transition from the recent Grand Maximum to another regime. This transition started in 2000 and it is expected to end around the maximum of cycle 24, foreseen for 2014, with a maximum sunspot number Rmax = 68 +/- 17. At that time a period of lower solar activity will start. That period will be one of regular oscillations, as occurred between 1730 and 1923. The first of these oscillations may even turn out to be as strongly negative as around 1810, in which case a short Grand Minimum similar to the Dalton one might develop. This moderate to low activity episode is expected to last for at least one Gleissberg cycle (60-100 years).”

In an earlier conference abstract I have seen, one of those authors, based on the three possible evolutions of the solar cycle above, predicted in the most conservative outcome (regular oscillations, one supposes) a fall in global temperature of 0.3 deg C over 20 years, i.e. very nearly half the rise we have seen in the last century, in about a fifth of the time.

So there are broadly two different predictions for the evolution of the global temperature. I understand that this is what science is all about. The old, old story of the beautiful maiden hypothesis and the ugly ogre, truth!

Kinda like the thermohaline ocean circulation theory, isn’t it? Maybe there should be a magazine for debunked scientific ideas; that way people would realize the importance debunking wrong theories has in the advancement of science. It could be called something like “Not Science”…

The last bit was the best… 2006. Ah well, lads. Reminds me of this joke:

A bunch of mathematicians and physicists are on the train going to university. They all know each other well and are talking about things in general. A physicist looks up and sees the conductor coming through the next carriage, so he proceeds to check his wallet for his ticket. His friends start doing the same. Casually he looks up and notices that the mathematicians aren’t doing the same; in fact they all look very laid-back and uncaring. He says to his friend, “Haven’t you got your ticket?”. His mathematician friend says, “Oh, I have a ticket… but they don’t” and gestures to his laid-back friends.
“But the conductor is just about to come into this carriage” says the physicist looking up.
“Oh, well then” says the mathematician and with a nod all the other mathematicians and himself get up and walk down to the other end of the carriage where there is a toilet. They all proceed to get in. All 15 or so of them. Very geometric.
The conductor comes in, checks everyone’s ticket and moves down the carriage. He notices that the toilet is occupied and knocks on the door.
“One second” says a voice and then a ticket is slipped under the door. The conductor checks it and moves on.
The physicist is impressed. “Ah that’s a good plan”
The next week he is on the train with his friends and the same bunch of mathematicians. This time the mathematician looks round and notices the conductor in the next carriage. “Did you buy a ticket?” he asks the physicist.
“Yes…but only one” and with a nod, as the conductor is getting closer, all the other physicists move to the bottom of the carriage and into the toilet. All 10 of them.
The mathematician smiles. With a nod all the other mathematicians move down to the bottom of the carriage and into the next carriage, where there is also a toilet. The first mathematician moves down to the toilet in this carriage, waits until the conductor is half way down the carriage, then knocks on the toilet door.
“Tickets please”. A ticket is slipped under the door.
“Thank you” he says and promptly walks off with it.
As he is walking down the next carriage and getting ready to repeat the mathematicians’ stock manoeuvre, his friend asks him, “Why did you do that? Is he not your friend?”
“Ah well, yes he is,” he replies, “but physicists shouldn’t meddle in methods they don’t fully understand”

I’m a physicist and a mathematician, but I think this sums up the modelling above quite nicely.

While the model certainly appears laughable in short hindsight, I’m willing to cut the researchers a little slack. Eccentricity and naivete know no bounds in science, and too often passion and belief are honestly mistaken for truth and rigour. The press release gives a few clues as to why this model went off the rails. Predictive models are a saleable commodity, and universities are known for their propensity in modern times to push their research efforts to market. That appears to be the motivation in this case. The release has all the earmarks of a university marketing office looking to catch cash for the school. Competitiveness among faculty will tend to encourage them to get on the gravy train. Prestige, cash and workload may be in the balance. Academic eccentricity only tends to be tolerated these days from those whose wackiness is profitable, or who are sufficiently tenured as to be immovable.

The model does have one “marketable” feature: it clearly demonstrates the fragility of rearward-looking models when it comes to accurately predicting the future. Interestingly, the curve fit presented by the model also tends to correlate at some level with “global warming”, if you accept that it exists. While this failed model doesn’t automatically point to the inherent failure of the currently fashionable climate models, it also provides no support for their utility.

As I reread Dikpati’s theory, I was struck by the fact that they base their predictions on the sun’s predictability. Without a better understanding of the underlying effects, they are doomed to failure.

I haven’t seen anything better than Landscheidt’s solar torque effects to explain variations in the solar dynamo…Dikpati’s conveyor belt is not invariable.

Still the same mistake, again and again, with predictive models.
1) Sorry guys, but it’s very easy to fit a model to 8 cycles… too little data. Any model can be tuned to fit 8 cycles, even a poor polynomial. It means nothing; it is certainly not proof that the model is accurate.

2) When you get a model of a natural phenomenon (where chaos and diverging equations play a large role) which is 98% accurate, a big red sign should blink in your head: “THIS IS NOT POSSIBLE”. There is chaos; no model can predict it. If you have a 98% accurate model, it means you have tried to model the chaos instead of modelling only the part which can be modelled. You are bound to fail in the future.
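Both points above are easy to demonstrate in a few lines. A sketch with made-up numbers (the amplitudes are invented, not real solar data): a degree-7 polynomial necessarily passes through any 8 points exactly, yet its extrapolation is worthless, and a textbook chaotic map destroys a billionth-sized initial difference within a few dozen steps.

```python
import numpy as np

# Point 1: any 8 data points can be "hindcast" perfectly.
# Made-up stand-ins for 8 cycle amplitudes (not real solar data).
amps = np.array([110.0, 152.0, 80.0, 190.0, 155.0, 105.0, 165.0, 120.0])
cycles = np.arange(8.0)

# A degree-7 polynomial has 8 free coefficients, so it interpolates
# all 8 points exactly: 100% hindcast "accuracy" by construction.
coeffs = np.polyfit(cycles, amps, deg=7)
print(np.max(np.abs(np.polyval(coeffs, cycles) - amps)))  # essentially zero

# But its "forecast" for the 9th cycle lands wildly outside the
# 80-190 range of the data it was fitted to.
print(np.polyval(coeffs, 8.0))

# Point 2: chaos erases predictive skill. Two logistic-map
# trajectories started 1e-9 apart end up completely decorrelated.
x, y = 0.4, 0.4 + 1e-9
sep = 0.0
for _ in range(60):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    sep = max(sep, abs(x - y))
print(sep)  # of order unity: the initial 1e-9 difference has exploded
```

Perfect hindcast skill here is a property of the fitting procedure, not of any physical insight, which is exactly the commenter’s warning.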

Maybe we need a book-burning session. These scientists are now making things up instead of using the good ol’ “We just don’t know” and “we can’t get any of our models to converge”. This gets really stupid when you have an event for which there is no data to match. I wish the media would just go away, because journalism is tabloidism now.

Gilbert (21:51:24) :
Great review. How much attention did they actually give it?
They did follow all my requests; otherwise it would not have been accepted, as I get to see it again and again until I’m happy with it. The reviewer most of the time wields enormous power in that regard.

Then you must be a reasonable reviewer, i.e., one who recognizes that sometimes even “reasonable minds disagree,” and who understands that the task of a reviewer/referee is not to impose one’s own view of things, but to ensure that certain standards have been met. I was a referee for a couple of journals in my field “back in the day,” and both of the editors I worked most closely with insisted that my reviews contain constructive criticism, so that even if I were recommending against publication, I should be telling the authors what they could do to improve the paper to the point where it might stand a better chance of getting published.

I think this is the way it probably works in fields that are not (yet?) heavily politicized. I have the feeling that “climate science” has drifted far from this ideal.

The sun will do what the sun will do. We may observe and predict, but the big orange thing probably doesn’t much give a snip. We have more than enough problems with the few things we can control. Or the things we think we can control.

OT, but smoking doesn’t kill people. It isn’t as if they’ll live forever if they don’t smoke. It may shorten their lives (in my ancestors’ cases, it probably lengthened them), but many things shorten lives. Does it really matter if SC24 is big or small? (No offense intended, Leif.) Interesting, yes; crucial, maybe not. The federal deficit has a bigger impact, and I predict that will be huge. And something we don’t seem to have any control over.

Warm, cold, or in-between. As long as the sun comes up each day and doesn’t dump a big CME on me. Not sure I could do much about that.

I wonder whether we are seeing the same thing with the spewings of computer models. Once something has been spewed it seems to be presumed that it is correct. That is not an entirely illogical approach; after all, the models are set up and operated by people with strings of letters after their names. It is logical to accept the word of people far better qualified than you are; what is not logical is to accept the word of a computer. Yet we seem to find that the authority afforded to computerised results is greater than the authority given to the word of those who fed the computer in the first place.

This problem has been with us since the dawn of computer based simulation. I agree that it’s understandable for a lay person to accept the results based on their expectation that the “with strings of letters after their names” actually represent knowledge and integrity. Today, sadly, they don’t. In this case, with this group of clowns, all they have actually “proved” is the uselessness of models that haven’t been through IV&V. And the dishonesty, incompetence or both of those who refuse the IV&V process.

Leif Svalgaard (22:11:18) :

… I asked her [Dikpati] how she could have faith in the correctness of the programming if the code was such a mess, but never got a good answer. Another argument was something about ‘intellectual property’ [of taxpayer funded work???].

She cannot have any faith in her model, and she knew it. Whenever I began the verification part of IV&V, the very first step was to CLEAN UP THE CODE. That is what reputable scientists, such as Choudhuri, do. I wouldn’t even think about validation until I had confidence the code was actually modeling what it claimed to model. Based on what you wrote, I’ll surmise Dikpati had “patched” it to effectively “jump to desired answer” and wasn’t about to let you discover that. (I’ve seen that behavior too many times, so she’s not alone. In fact, I know of at least one major $$$$billion program that was recently canceled because its simulation-based evaluations were caught out as junk as a result of this kind of behavior.)

You’re also absolutely correct about ownership. If she developed it on the US government’s nickel, the government owns it. Period. At one time, companies would mix their own funds with the government’s to claim proprietary rights, but acquisition regulations (both FAR and DFAR) started blocking that long ago.

Thanks for providing the background that illuminates what kind of “scientists” these people are. I look forward to the day when acquisition authorities begin looking into these “research” programs.

The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct is a matter of controversy. Many surveys have asked scientists directly whether they have committed or know of a colleague who committed research misconduct, but their results appeared difficult to compare and synthesize. This is the first meta-analysis of these surveys.

To standardize outcomes, the number of respondents who recalled at least one incident of misconduct was calculated for each question, and the analysis was limited to behaviours that distort scientific knowledge: fabrication, falsification, “cooking” of data, etc… Survey questions on plagiarism and other forms of professional misconduct were excluded. The final sample consisted of 21 surveys that were included in the systematic review, and 18 in the meta-analysis.

A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to having fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self-report surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.
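For readers curious how a “pooled weighted average” of survey proportions is typically computed, here is a minimal fixed-effect sketch using inverse-variance weighting on the logit scale. The survey counts below are invented for illustration; they are not the actual studies from the meta-analysis.

```python
import math

# Hypothetical survey results as (admissions, respondents); not the
# real studies from the quoted meta-analysis.
surveys = [(6, 300), (2, 150), (11, 500), (4, 250)]

# Fixed-effect pooling: weight each study's logit-proportion by the
# inverse of its variance, then back-transform to a percentage.
num = den = 0.0
for events, n in surveys:
    p = events / n
    logit = math.log(p / (1.0 - p))
    var = 1.0 / events + 1.0 / (n - events)   # variance of the logit
    w = 1.0 / var
    num += w * logit
    den += w

pooled_logit = num / den
se = math.sqrt(1.0 / den)
pooled = 1.0 / (1.0 + math.exp(-pooled_logit))
lo = 1.0 / (1.0 + math.exp(-(pooled_logit - 1.96 * se)))
hi = 1.0 / (1.0 + math.exp(-(pooled_logit + 1.96 * se)))
print(f"pooled = {pooled:.2%}  (95% CI {lo:.2%} - {hi:.2%})")
```

A real meta-analysis would typically use a random-effects model to allow for between-study heterogeneity; this sketch shows only the basic weighting idea behind a pooled estimate with a confidence interval.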


In the graphic “Old and new cycle groups during SC-transit”, (see http://users.telenet.be/j.janssens/SC23web/SCweb10.pdf )
Janssens notes that the break-even point is reached in October 2008. Theoretically, the SC minimum occurs 2 months prior to the SC23–SC24 break-even (+/- 4 months): “According to the above method, solar cycle minimum should take place in August 2008 +/- 4 months”.
Is the past break-even point a strong argument for putting the solar minimum in August 2008, and not in December 2008 as NOAA predicts?
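The break-even bookkeeping described here can be sketched in a few lines. The monthly group counts below are invented for illustration, not Janssens’ actual data; the point is only the mechanics of finding the crossover month and stepping back two months.

```python
from datetime import date

# Toy monthly counts of old-cycle (SC23) vs new-cycle (SC24) sunspot
# groups around minimum; illustrative numbers, not Janssens' data.
months = [date(2008, m, 1) for m in range(5, 13)]   # May-Dec 2008
old_groups = [9, 8, 6, 5, 4, 3, 2, 1]
new_groups = [0, 1, 1, 2, 3, 4, 5, 6]

# Break-even: first month in which new-cycle groups match or exceed
# old-cycle groups.
break_even = None
for month, old, new in zip(months, old_groups, new_groups):
    if new >= old:
        break_even = month
        break

# Per the rule quoted above, minimum falls ~2 months before break-even.
m = break_even.month - 2
minimum = break_even.replace(year=break_even.year + (m - 1) // 12,
                             month=(m - 1) % 12 + 1)
print(break_even.strftime("%B %Y"), "->", minimum.strftime("%B %Y"))
```

With these toy counts the crossover lands in October 2008 and the implied minimum in August 2008, matching the arithmetic in the comment above.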

It’s summer coming, their last opportunity to market global warming. He’ll be back soon… Get ready… it will be really cataclysmic; this time he will be supported by the envoy himself and escorted by an armed task force.

By 2041, solar activity will reach its minimum according to a 200-year cycle, and a deep cooling period will hit the Earth in approximately 2055-2060. It will last for about 45-65 years, the scientist added. “By the mid-21st century the planet will face another Little Ice Age, similar to the Maunder Minimum, because the amount of solar radiation hitting the Earth has been constantly decreasing since the 1990s and will reach its minimum in approximately 2041,” he said.
The Maunder Minimum occurred between 1645 and 1715, when only about 50 spots appeared on the Sun, as opposed to the typical 40,000-50,000 spots.
It coincided with the middle and coldest part of the so called Little Ice Age, during which Europe and North America were subjected to bitterly cold winters.
“However, the thermal inertia of the world’s oceans and seas will delay a ‘deep cooling’ of the planet, and the new Ice Age will begin sometime during 2055-2060, probably lasting for several decades,” Abdusamatov said. http://en.rian.ru/science/20080122/97519953.html

That pic looks like it’s from the 70’s. That was the first thing I thought. Photoshopped to add the laptop.

o/t but this is bugging me. How do people get away with this stuff? I did a search and found this site where this man said:

“Over the past century and a half, we’ve increased the carbon dioxide content of the atmosphere from about 270 parts per million (ppm) to about 387 ppm–more than 40 percent. All of that increase is human-caused, and its effect is to increase the retention of heat in the atmosphere. It warms the earth, destabilizing climates globally.”

I read ‘opinions’ like this and feel compelled to thank you, Anthony, again and again for this wonderful blog.

1) A friend of mine long ago worked for a major oil company where teams of engineers and geologists modeled oil reservoirs so they could forecast the response to waterflooding. The models always matched history, but never accurately predicted the future. There are a lot of unknown unknowns down there.

2) In 2005 two Russian solar physicists bet Dr. James Annan $10,000 that the global temperature ten years from then would be cooler, not warmer. Annan developed one of the climate models, and his pride of authorship will probably cost him $10,000; how could there be any unknowns not already reflected in the climate models?

The model predicted the last 8 cycles. OK, so that is about 100 years. Perhaps they should extend it back and tell us what the sunspot numbers should have been and, in true IPCC fashion, adjust the other numbers to match. Seriously though, its not matching suggests that either the previous data was bad, or something had changed, or the information provided to the model was insufficient to go back any farther. There are indications with the end of Cycle 23 that something has changed, and that models predicting the future may not be reliable. As was said above, Cycle 24 will be a test of which model is closer to reality.

I was trained as a biologist. I wanted to be a classic field biologist or, barring that, to go into forestry; instead I went into aviation and did a lot of related work. I still have an interest in science (or I wouldn’t be here). However, sitting and running computer models, and not having the least bit of skepticism but _believing_ them, is what I have problems with. Observing and still not seeing what is happening is the definition of insanity…
BTW, nice work Mr. Watts… :)

Does this happen when you’ve been on WUWT a long time, it’s late, you’re punchy, you start misspelling words, you reply to comments that say something other than what you thought they said, after you’ve already clicked ‘enter’ you go back and look at the comment and see your reply doesn’t apply, you start to comment on links you were too tired to click on and read before commenting on, you start thinking all your comments will be fully or partly snipped by charles…

Come on guys. Their model has proven to be inadequate, that’s all. I don’t fault them for having a theory in the first place, then building a model to test it. That’s how it works. There are competing theories among solar scientists, and that’s a good thing.

I think this episode brings up a related topic that deserves some discussion – namely the role of the ** press release ** in modern science. In the present case, look at the title and first sentence:

—

Scientists Issue Unprecedented Forecast of Next Sunspot Cycle

BOULDER—The next sunspot cycle will be 30-50% stronger than the last one and begin as much as a year late, according to a breakthrough forecast using a computer model of solar dynamics developed by scientists at the National Center for Atmospheric Research (NCAR).

—

Note the use of the words “unprecedented” and “breakthrough”. These are very strong words, and entirely undeserved. Also, “will be 30-50% stronger” implies a certainty that is particularly unwarranted. Unfortunately, these statements go straight into the mainstream press reports that we see as headlines in our daily papers (for those who still buy such things – I don’t). And, as others have noted, if later on the predictions stated in the press release turn out to be false, there is usually no follow-up, except perhaps a small mea culpa about “revised forecasts” and “unexpected behavior by the sun” buried in another, unrelated press release.

Why do public scientific institutions need to have these press releases anyway? Shouldn’t they wait until more consensus is reached? Is this entirely about justifying the budgets these groups have to submit each year for funding (probably)?

anna v:
They should have published their code along with their paper, so that everyone who comes along later will understand that curve-fitting to complicated systems is not the best way to advance science. Otherwise, it’s too easy to bet on the horse that won the last X number of races.
All they are really showing is that the current run of cycles has changed behavior, and nothing more.
If someone wants to explore in this area, they could possibly identify discrete sets of cycles, and show how they have changed behavior one set to the next, for example.

I wonder whether we are seeing the same thing with the spewings of computer models. Once something has been spewed it seems to be presumed that it is correct.

In the late 1970s, animal rights protesters were claiming that animal testing of drugs was no longer necessary because the drugs’ effects could now be tested on cells simulated in computers. At the time this seemed silly to anyone with a modicum of awareness of biology and computer simulation technologies. But it was reported to the public before evaporating without explanation.

Carsten Arnholm, Norway (03:29:31) :“The polar field precursor method has now performed reasonably well for three cycles and may be on track for number four, which in any case will be yet another test.”
Is there a published prediction somewhere, predating cycle 21, demonstrating this?

we had this to say about past predictions:
“Schatten et al. [1978] pioneered the use of the solar polar magnetic field as a precursor indicator. Because the poloidal field is an important ingredient in seeding the dynamo mechanism, the polar field precursor method appears to be rooted in solid physics. The success rate of predictions made very early before cycle onset has been mixed, however (cycle 21: observed 165 vs. predicted 140 ± 20 [Schatten et al., 1978]; cycle 22: 159 vs. 109 ± 20 [Schatten and Hedin, 1984]; cycle 23: 121 vs. 170 ± 20 [Schatten and Pesnell, 1993]). Several reasons exist for this: the solar polar fields are difficult to measure and proxies (e.g., geomagnetic activity indices) were often used in their place, the historical database is short, and it was not clear when within the cycle the polar fields would be best utilized. As we approach minimum and the new cycle gets underway, the solar polar field precursor method improves markedly (cycle 22: 159 vs. 170 ± 30 [Schatten and Sofia, 1987]; cycle 23: 121 vs. 138 ± 30 [Schatten et al., 1996]). The improvements also result from the use of actually measured polar fields rather than proxies. It is a strength of the polar field precursor method that the predictions improve in this manner. This paper suggests a novel way of applying the polar field precursor well before sunspot minimum.”

The key is two insights: 1) only the WSO polar fields are reliable [and there was a problem in 1976-77, now being corrected – see below], and 2) the timing is important, namely only use the polar fields once they have become stable. With this in mind, the success rate is reasonable. In judging the timing of the papers, one must consider that a year often goes by between submittal and printing.
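Taking the numbers in the quoted passage at face value, one can express each miss in units of its stated error bar (a rough reading; the quoted ± values may not be strict one-sigma intervals):

```python
# Observed vs predicted cycle maxima from the quoted passage, as
# (observed, predicted, stated uncertainty).
early = {21: (165, 140, 20), 22: (159, 109, 20), 23: (121, 170, 20)}
near_minimum = {22: (159, 170, 30), 23: (121, 138, 30)}

def miss_in_sigma(entries):
    """Size of each miss in units of the stated error bar."""
    return {c: round(abs(obs - pred) / err, 2)
            for c, (obs, pred, err) in entries.items()}

print(miss_in_sigma(early))         # every early miss exceeds 1 sigma
print(miss_in_sigma(near_minimum))  # near-minimum misses within 1 sigma
```

The early predictions miss by roughly 1.3 to 2.5 error bars, while the near-minimum ones land well inside theirs, which is consistent with the quote’s claim that the method “improves markedly” as minimum approaches.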

The prediction of cycle 21 was a bit too low. We now know that the measured values of the polar fields in 1976-77 were too low because of scattered light. Experiments in 1978 [using felt-eraser chalk] and this month [using Johnson & Johnson Baby Powder] quantify and confirm this. The corrected polar fields provide a better fit and would have been a basis for a better prediction, so we count that as a success as well. A ‘work in progress’ on this can be found here: http://www.leif.org/research/Reduction%20of%20Spatially%20Resolved%20Magnetic%20Field%20by%20Scattered%20Light.pdf Figure 7 is especially scary, as it shows the influence of scattered light on solar rotation, ‘slowing’ the Sun by 75 m/s…

Boudu (04:07:21): I’m sure that once the appropriate adjustments have been made to the observed data, the predictions will be spot on.
see above :-)

anna v (01:36:59): We should not hang Dikpati et al for being wrong. It is part of the fertilizer.

And we actually do not KNOW yet if she is wrong. SC24 is a test of her model AND of ours. There is no doubt that SC24 is on its way now, check out http://www.leif.org/research/TSI-SORCE-2008-now.png It will be interesting to watch it grow. Will it grow fast [large cycle] or slowly [small cycle]?

As proof of the utility of our forecast [and of the ‘faith’ NASA really does have in it – regardless of the misleading press releases] I may note that NASA was thinking of developing a special mission to bring back Hubble, but we convinced the head of GSFC, Ed Weiler, and ultimately Michael Griffin, that Hubble would “fly over” SC24 because of low enough solar activity. Of course, if we are wrong, …

Such a scary prediction fits the aims of those who fund scientific research, and it will eventually back the convenient and politically needed eschatological view: “In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then, is humanity itself. Democracy is not a panacea. It cannot organize everything and it is unaware of its own limits. These facts must be faced squarely. Sacrilegious though this may sound, democracy is no longer well suited for the tasks ahead.” – The First Global Revolution, a report by The Club of Rome. http://www.green-agenda.com/spiritualunitednations.html

George Hebbard (05:20:20) :I haven’t seen anything better than Landscheidt’s solar torque effects to explain variations in the solar dynamo…

Carsten and Idlex showed right here on WUWT that Landscheidt’s solar torque mechanism is based on wrong physics, regardless of whatever confidence one wishes to bestow on the correlations.

MattN (08:04:46) :“The NCAR team is planning in the next year to issue a forecast of Cycle 25, which will peak in the early 2020s.”
Can we see this prediction of #25? Please?

They stopped saying that after a while. It probably cannot be done [unless you belong to the astrology cult that can forecast solar activity with absolute precision thousands of years in advance].

Basil (06:08:50) :Then you must be a reasonable reviewer, i.e., one who recognizes that sometimes even “reasonable minds disagree,”

But I am, of course. And it is not about disagreements at all. All journals have the policy [AFAIK] that disagreement is not a reason for rejection. And it is not about a ‘certain standard’ either [although that must be met to a degree], but rather about whether the paper brings something to the table. Often papers that are wrong can be the most successful in furthering the field by their effect on and inspiration to other researchers. Lockwood et al.’s 1999 Nature paper about the ‘doubling’ of the Sun’s open magnetic flux is a good example. Another example may be Dikpati’s paper, which I recommended for publication even though I had severe reservations about it.

Rik Gheysens (06:37:40) :Is the past break-even point a strong argument putting the solar minimum in August 2008 and not in December 2008 as NOAA predicts?
Break-even point is only really meaningful if the two cycles are of equal strength, but the main point is that there is no sharp definition of ‘minimum’.

Syl (07:58:38) :Come on guys. Their model has proven to be inadequate, that’s all. I don’t fault them for having a theory in the first place, then building a model to test it. That’s how it works. There are competing theories among solar scientists, and that’s a good thing.
And Syl is quite right…

Frank K. (07:59:23) :Note the use of the words “unprecedented” and “breakthrough”. These are very strong words, and entirely undeserved.
and should not be used about such a flimsy thing, but such is the influence of the PR machine that NASA uses to justify funding when times are lean.

FWIW, where you have the image from ucar.edu (“Figure Comparison”) it clips on the right hand side. The source code indicates it is in a table of width 550 and the image width is set to 534. I suspect it needs to be more like 500. Height at 551 ought to be scaled accordingly to about 516. We’ll see if wordpress eats the html, but I think it ought to be like:

NCAR scientists have succeeded in simulating the intensity of the sunspot cycle by developing a new computer model of solar processes. This figure compares observations of the past 12 cycles (above) with model results that closely match the sunspot peaks (below). The intensity level is based on the amount of the Sun’s visible hemisphere with sunspot activity. The NCAR team predicts the next cycle will be 30-50% more intense than the current cycle. (Figure by Mausumi Dikpati, Peter Gilman, and Giuliana de Toma, NCAR.)

Oh, and the reason for the Hubris comment is that here we have yet another case of someone modeling part of a larger cycle and thinking it is truth. Their modeled period does not contain several known discontinuities (such as the Maunder Minimum) and so they are just doing “data modeling” which will work right up until it doesn’t… when the missing events return. IMHO, that looks like it’s setting up to be now.

BTW, here it is June and I’m under cloudy skies with cool temperatures, no tomatoes, and sulking greenbeans. But my cabbages, peas, onions and other cool season crops are doing just fine (when normally the expected 80F to 90F under clear skies with piercing sun would have fried them by now).

Yeah, it’s just “weather” and only on the west coast… but… this is not normal and not like past decades. Something is Different. That is the kind of discontinuity these folks are missing.

Hmm. It looks like two independent cycle 24 tiny tims in the SOHO view. The magnetogram is out of date, but it can be seen in http://gong.nso.edu/Daily_Images/
Cannot be seen in the sunspot views there, just the magnetograms.

I would place my bets on weak to very weak. Are there sunspot images from the equivalent beginning of 23, assuming you are right and 24 has started? I could not find any in SOHO.

“I believe that all reviews should be published [as an electronic supplement to the paper] and I have never insisted on anonymity in reviewing. IMHO, the review is as important to the public as the paper itself.”

Amen! That would tell the world how rigorously the paper was reviewed and what elements were in question. Good luck in getting that past The Team.

I wouldn’t believe NCAR solar scientists and astrophysicists even if they were the spoken authority of the pope. NCAR is so tight with the pro-AGW climatologists they are practically twins. As time has shown, three years later their computer model is junk and their predictions are way off.

The prediction of cycle 21 was a bit too low. We now know that the measured values of the polar fields in 1976-77 were too low because of scattered light. Experiments in 1978 [using felt-eraser chalk] and this month [using Johnson & Johnson Baby Powder] quantify and confirm this.

I imagine a lot of astronomers and telescope operators would be very reluctant to apply that. Did they get “bribed” with the promise of recoating? Perhaps Rob Bateman would be happy to reproduce that experiment with his ‘scope the next time there’s a sunspot to draw.

As proof of the utility of our forecast [and of the ‘faith’ NASA really does have in it – regardless of the misleading press releases] I may note that NASA was thinking of developing a special mission to bring back Hubble, but we convinced the head of GSFC, Ed Weiler, and ultimately Michael Griffin, that Hubble would “fly over” SC24 because of low enough solar activity.

As far as I know, Hubble made it through the SC23 peak(s) okay. Do I just not know the right stuff or was the concern then that SC24 might be too big? And if NASA brought back Hubble, what’s the chance it would go back up soon after solar max? Was some of that thinking due to having to fix things that were not designed to be fixed in orbit?

Basil (06:08:50) :Then you must be a reasonable reviewer, i.e., one who recognizes that sometimes even “reasonable minds disagree,”

But I am, of course. And it is not about disagreements at all. All journals have the policy [AFAIK] that disagreement is not a reason for rejection. And it is not about a ‘certain standard’ either [although that must be met to a degree], but rather about whether the paper brings something to the table.

Science rejected the Livingston and Penn fading sunspot paper citing, IIRC, its reliance on statistics without proposing a mechanism. While I think they should have submitted to Icarus or some other astronomical journal, how often do papers that have statistics over mechanism get published? In retrospect, at least, they certainly brought something interesting to the table. I’ll grant it may not have appeared as likely then as it does now.

Leif Svalgaard (22:11:18) : I asked her how she could have faith in the correctness of the programming if the code was such a mess, but never got a good answer.

I can actually answer this. Code grows and mutates over time. You can follow all the threads, but it has layers of history in it. Prior to being “public” the programmer likes to go back and clean up the “cruft”. A crufty program is one that works well (or well enough) but has lots of messy and not “professional looking” bits in it (yes, there is a ‘term of art’ for this condition, it is so common). That does not make the code wrong.

I’ll give a fictitious example here. Say I wanted to add 2+2 and make a decision based on the result.
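The two versions of the example appear to have been eaten by WordPress. A purely hypothetical reconstruction in Python (the names and details are invented for illustration; the original was pseudocode, not anyone's actual program):

```python
# Version 1: "crufty" -- it works, but it has a misspelled name, dead code
# from earlier experiments, and leftover debug output.
def chek_result():
    x = 2
    # y = 3        # dead code from an earlier experiment
    y = 2
    z = x + y
    # print(z)     # leftover debug print
    if z == 4:
        return "take the branch"
    return "do not take the branch"


# Version 2: cleaned up, the way the author would want it seen in public.
def check_result():
    """Add 2 + 2 and branch on whether the sum is 4."""
    total = 2 + 2
    return "take the branch" if total == 4 else "do not take the branch"
```

Both functions return the same thing; only one of them would embarrass its author.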

Both of these would produce the same result. One of them would cause a fair amount of embarrassment in public.

Actual code can be much worse in "cruftiness" than this example… but it generally consists of a few specific things. Rarely does "management" care about these, so once the code works the "SHIP IT" mantra begins no matter how much the code is an embarrassment to the writer. This is part of why programmers are often a bit "cautious" about announcing when the code is "done and working". They want time to make the "presentation" nice…

(Having been both the programmer and the management, I can speak to both sides of this… I usually made my guys "ship it" as binary but gave them time to clean up the source code before moving ALL their time to the next project…)

1) Either a near complete lack of comments or comments that ought not to see the light of day.
2) Messy and/or poor "style" that works, but is not considered 'good style' today. (There are endless 'style wars' fought over things like "goto vs nested if " or the best way to evaluate a formula, and similar things).
3) "Dead code" from some prior iteration before you figured out what was wrong, but left around while you were figuring it out (good for the forensic "WTF Did I change?" that inevitably comes up when the program either starts working fine or breaks in a completely unexpected way ;-)
4) Things that work, but are embarrassing due to, for example, misspelling.
5) Poor physical formatting. Called "pretty printing", there is a formal process of going through and turning unreadable blocks of working code into a nicely formatted readable indented work of page art.


Ric Werme (10:02:23) :I imagine a lot of astronomers and telescope operators would be very reluctant to apply that. Did they get “bribed” with the promise of recoating?

We are putting in a new mirror anyway in a couple of weeks, so we can afford to mess with the old one.

Ric Werme (10:12:51) :how often do papers that have statistics over mechanism get published? In retrospect, at least, they certainly brought something interesting to the table. I’ll grant it may not have appeared as likely then as it does now.
Different journals have different policies in that regard. Livingston thinks that the rejection was OK. He is working on a new and more substantial paper [still no mechanism]. Several of my papers [even my very first one 41 years ago] have been rejected. They have usually turned out significantly improved the second [or third!] time around. Here is an example: http://www.leif.org/research/No%20Doubling%20of%20Open%20Flux.pdf First you’ll see the paper, then a resubmission [also rejected], and then reviewer reports and resulting whining by the authors. Eventually [years later], it turned out that we were correct after all and Lockwood has effectively abandoned his earlier views [except he’ll not admit it if asked outright].

E.M.Smith (10:30:18) :That does not make the code wrong.
Very true [it is probably her model/assumptions that are wrong, rather than the code]. But at the time, her refusal to show us the messy code made it impossible for us to gauge its quality, and more importantly the [often hidden] assumptions and adjustable parameters [she claims only one; I count at least a dozen in her paper and related papers].

This excerpt — especially the last sentence — from Richard Feynman’s 1974 talk on “Cargo Cult Science” is worth remembering when dealing with “models” that are based mainly on past performance:

I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science. In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas — he’s the controller — and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.

Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school — we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty — a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid — not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked — to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can — if you know anything at all wrong, or possibly wrong — to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

jh (03:58:04) : That said, de Jager and Duhau published another paper in 2008 based on some empirical observations on the long-term oscillation around these phase transitions and came up with this prediction.

“The regularities that we observe in the behaviour of these oscillations during the last millennium enable us to forecast solar activity. We find that the system is presently undergoing a transition from the recent Grand Maximum to another regime.

Oh Boy! “Cyclomania” in a peer reviewed paper!

See, it IS all about cycles! ;-)

The first of these oscillations may even turn out to be as strongly negative as around 1810 in which case a short Grand Minimum similar to the Dalton one might develop. This moderate to low activity episode is expected to last for at least one Gleissberg cycle (60-100 years).”

I agree, but without any decent foundation…

The old, old story of the beautiful maiden hypothesis and the ugly ogre, truth!

Stephen Wilde (11:03:41) :Could you suggest any evidence that, in your opinion, would reveal the Earth’s climate as being more sensitive to minor solar changes than is currently accepted ?

I think there must be a solar influence at some level below 0.1 degree, as the changes in TSI demand that [radiation balance]; anything more than that has very little evidence [IMO] going for it [in spite of the hundreds or thousands of claims]. If we are still debating in vain, and looking for [refining the methods until the data yields the desired result] the clear and unmistakable solar cycle signal at the below-0.1-degree level that everybody can agree on, then it seems even harder to accept ‘amplifiers’ and ‘feedbacks’ and ‘known or unknown unknowns’. Once we have ice cores and climatology for other planets we may have the data needed for a comparative study, as the solar signal may be a common factor. Before that, I don’t see much hope, barring a completely new approach that somebody still has to come up with.
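The "below 0.1 degree" figure follows from a simple radiative-balance argument; a back-of-the-envelope sketch in Python, with round illustrative numbers (288 K mean surface temperature, ~0.1% solar-cycle TSI swing):

```python
# For a body in radiative balance, emitted power scales as T^4 with the
# incoming flux S, so a small fractional change obeys dT/T = dS/(4*S).
T_mean = 288.0            # K, rough mean surface temperature
frac_tsi_change = 0.001   # ~0.1% TSI variation over a solar cycle

dT = T_mean * frac_tsi_change / 4.0
print(f"expected solar-cycle signal: about {dT:.2f} K")
```

With these numbers dT comes out near 0.07 K, i.e. below the 0.1-degree level mentioned above.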

E.M.Smith (11:35:10) :The old, old story of the beautiful maiden hypothesis and the ugly ogre, truth!
Well, you have to kiss a lot of frogs before you find one that turns into a princess [or prince, depending on your orientation].

“I am a full time trader and trade the stock markets for a living. All the big banks have these models that can predict past stock market behavior very accurately but are totally useless in predicting the future. They curve fit the historical data with their models and then use this model to try to predict the future. They failed miserably as evidenced by the present financial crisis.”

Not only that, they’ve failed to predict the current rally, and most quant-based funds (like D.E. Shaw) have shorted it and lost badly.

The more science relies on government funding, the more it is susceptible to distortions arising from political factors. IPCC is primarily a political entity, and like all political entities has a vital interest in its own continuation. The scientists promoting AGW are also self-interested. If they produce objective analyses showing that AGW is a minor factor, or a non-factor, in future climate then they have no rationale for continued funding of research in this area.

OT, smoking doesn’t kill people. It isn’t as if they’ll live forever if they don’t smoke. It may shorten their lives (in my ancestors’ cases, it probably lengthened them) but many things shorten lives.

Interesting point… the lengthening bit…

BTW, my Dad died of smoking induced cancer. With that said, when fishing together, he would blow smoke around himself and be happily mosquito free… while I would be the human pincushion covered in little red welts… (leaving me with a life long hatred of mosquitos…)

Now this was in a place with sporadic Malaria (one case every few years). Later, when decent repellants were invented, I stopped being the pincushion, but it left me to wonder:

To what extent does the nicotine level in the smoker and the smoke about the smoker prevent them from contracting any large number of bug vector diseases? Nicotine is an insecticide. You can use tobacco tea to protect your garden. Ought not a nicotine soaked person have some similar benefit?

I once asked him why he started smoking. He said that during WWII in a foxhole it mattered more that you be 1/10 second faster than the other guy and less what would happen in 30 years… Having “sampled” nicotine (via absorbing some of that “tea” while applying… use gloves…) I can tell you that it does speed you up some fair amount. It also kills various parasitic worms and microbes (and I think may have cured a small persistent skin lesion of unknown etiology I had — it ‘went away’ right after the “tea” soaking…)

So I’ve turned from a rabid anti-tobacco person into a “whatever you want, just out of my nose, please” person. Because maybe, just maybe, I was wrong to condemn it outright. To be so absolutely sure.

I think this is illustrative of what needs to happen with the same effect in the AGW ‘non-debate’ where the “right feeling answer with truthyness” is held as a high moral standard, despite what the reality might mean and despite that there may be some problems in that truthyness… (i.e. maybe you don’t die of cancer in 40 years, you die of a bullet now, or malaria, or worms, or… Similarly, maybe we don’t die of heat in 40 years, we die of malaria or economic collapse now.)

Basically, the ‘tobacco’ thing was something I was once absolute about. Not a single nanometer of quarter given. Now, not so much… I think a similar transition needs to happen to the AGW true believers. They need to realize the limits of their truthyness… and that maybe the cure is for the wrong disease…

E.M.Smith (12:05:37) :
Continuing on your OT
reminds me of Mediterranean anemia, the recessive gene protects from malaria, but when two appear in a child, the child dies before teens. A stiff price to pay for protection of the tribe, but evolution does not ask, whereas addictions are voluntary choices.

I had an aunt who was prescribed smoking for her asthmatic lungs, go figure, back in the 1940s.

To the evils of nicotine add that the dose from a single cigarette could start a spasm in plaque-coated arteries that would dislodge some of the plaque and create a heart attack, an embolism or what have you. Russian roulette after a certain age.

Jack, never! Never ever think about it! Even when they consist of utter nonsense, they do document that nonsense. And that is something worth documenting.

Retired Engineer (06:13:43) :

Descartes: “I think, therefore I am.”

Universe: “So ?”

@Retired Engineer. My version:

Descartes: I think, that I am.

Universe: So wattsupwiththat?

BTW: every numerical model created to simulate the past quite nicely has lost its freedom to prognosticate even the nearest future.

The way I learned it:
– you have to learn from the past and successfully try to understand it.
– then you may, more or less, understand the present.
– if so, you may somehow get a very rough guess of what will come.

@Leif

sorry to bother you. You provided the link to the 10.7 cm flux adjusted to 1 AU. I couldn’t find it. Can you repost it, please?
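For readers wondering what "adjusted to 1 AU" means: the observed flux is scaled by the square of the Sun-Earth distance so that the annual orbital variation drops out. A minimal sketch in Python (the function name and the sample numbers are illustrative, not from any real data series):

```python
def adjust_to_1_au(observed_flux: float, sun_earth_distance_au: float) -> float:
    """Scale a flux measured at the given distance to its value at 1 AU.

    Flux falls off as 1/d^2, so the 1-AU value is observed * d^2.
    """
    return observed_flux * sun_earth_distance_au ** 2

# Near perihelion (~0.983 AU) the observed flux runs a few percent above
# the adjusted-to-1-AU value.
print(adjust_to_1_au(70.0, 0.983))
```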

jeez. i wouldn’t chuck their concept just because it’s having trouble modeling a chaotic system. heck, let’s throw away meteorology! they might be on to something, even if it isn’t completely correct. the whole global warming “debate” has poisoned the atmosphere.

“Could it be that the climate models, too, are little more than exercises in curve fitting?”

In my humble opinion, the answer is clearly “yes,” when you consider the following:

(1) The warmists are using more than one model;

(2) These models cannot all be right, since some predict a lot more warming than the others;

(3) All of the models fit history pretty well.

In any event, it seems to me that a necessary condition for being able to predict surface temperatures on a 100 year time scale is to know what caused the Little Ice Age. If we don’t know what caused the LIA, there’s quite possibly an important forcing which is not being accounted for.

The other possibility is that the climate is chaotic on that time scale. In my opinion, many people are too dismissive of the possibility that we cannot predict temps over the next hundred years just as we cannot predict when the next big hurricane will hit New York City.

“Come on guys. Their model has proven to be inadequate, that’s all. I don’t fault them for having a theory in the first place, then building a model to test it. That’s how it works. There are competing theories among solar scientists, and that’s a good thing.”

It’s one step above the AGW shell game in that it makes clear and falsifiable predictions. Still, Dr. Dikpati deserves our scorn for what is very likely an exercise in curve fitting and self-deception.

A complex simulation of a complex phenomenon whose output matches history is very likely to be wrong.

Think about it: There are billions of possible simulation models out there for solar activity. Lots of them will match history by coincidence. At most, only a few are correct. And quite possibly, none are correct.
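That point can be demonstrated with a toy experiment in Python/NumPy (purely illustrative; this has nothing to do with the actual NCAR model): fit a flexible model to "history" and watch the hindcast succeed while the forecast fails.

```python
import numpy as np

t = np.arange(8, dtype=float)
# "History": a smooth cycle plus a small alternating measurement error.
noise = 0.1 * np.where(t % 2 == 0, 1.0, -1.0)
history = np.sin(t / 3.0) + noise

# A degree-7 polynomial through 8 points reproduces history essentially
# exactly: a perfect hindcast ("more than 98% accuracy" on the past).
coeffs = np.polyfit(t, history, deg=7)
hindcast = np.polyval(coeffs, t)
print("max hindcast error:", np.max(np.abs(hindcast - history)))

# But the same fit extrapolated a few steps ahead is wildly wrong: the
# polynomial has memorized the noise, not any underlying physics.
forecast = np.polyval(coeffs, 12.0)
print("forecast at t = 12:", forecast)  # far outside the data's range
```

Matching the past is cheap; the only real test of such a model is an out-of-sample forecast, which is exactly the test SC24 provides.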

Based on what I’ve read, if I were a betting man, I’d put money on this cycle being much bigger than past ones, possibly one of the biggest in recorded history.
I would also put money on the cycle peaking in 2012 to 2013.
However, most predictions are wrong, possibly including this one.

Yes, failure in a scientific endeavor is absolutely A OK, otherwise we would not be talking of science research but of engineering.

When in high school I was taught the following, but cannot quote the original author: Knowledge is like a circle; the more you know, the more there is that you do not know, as the growing perimeter of the circle increases the contact with the unknown. Science explores the unknown primarily with the brain and its imagination, secondarily with diligence and dedication. Often there be tigers, dragons and bogs, quicksand and chasms. Sometimes there is a breathtaking view of something incredibly beautiful and incredibly useful. Were it not for the explorers who fall in the bogs and are caught by quicksand, the beautiful and useful would remain unknown.

brazil84 (17:58:09) :

“Come on guys. Their model has proven to be inadequate, that’s all. I don’t fault them for having a theory in the first place, then building a model to test it. That’s how it works. There are competing theories among solar scientists, and that’s a good thing.”

It’s one step above the AGW shell game in that it makes clear and falsifiable predictions. Still, Dr. Dikpati deserves our scorn for what is very likely an exercise in curve fitting and self-deception.

A complex simulation of a complex phenomenon whose output matches history is very likely to be wrong.

Read the above paragraph. If scientists were not intrepid explorers there would not be any science to talk about. If they were fearful of ridicule and failure, they would not walk a step into the unknown. They have to believe they have got the pope by the beard (substitute $#%^), as a Greek proverb says, to have a chance of being effective in their speculations. Just a chance, like explorers for gold.

It is OK to be wrong in a scientific model/theory. Feynman himself was wrong about the parton model, but it did provide the fertilizer from which quantum chromodynamics sprang and could be proven. We would not have accepted that climate is chaotic, weather writ large, had we not fallen on our faces with this AGW nonsense. It is the mistakes that build the next level of scientific understanding, and chaos and complexity are new and rapidly growing interdisciplinary tools. Climate modelers will be forced to use them pretty soon, as will probably sun modelers, but these tools are new, and their need will be based on the failure of models like Dikpati et al (if it fails).

Unfortunately the damage to science by the politicization of climate is incredible and will be lasting, like the damage Lysenko did to biology in the Soviet Union, and worse, because it is worldwide.

Evidence of this is the attitude of quite reasonable, skeptical people toward science in general.

“Read the above paragraph. If scientists were not intrepid explorers there would not be any science to talk about. If they were fearful of ridicule and failure, they would not walk a step into the unknown”

It seems to me there is a difference between courage and folly; between calculated risk and wild speculation; between legitimate inquiry and frivolous wastes of time.

Here is a hypothesis for you to test: If you hold a sack of flour over your head in one hand; a burning hundred dollar bill in the other; and sing the national anthem; you will win $100,000,000 in the lottery.

Probably nobody has ever spent a hundred dollars and ten minutes of their time to step into the unknown and test this hypothesis. But everyone with an ounce of common sense knows that it’s a complete waste of time and money.

Your example is irrelevant to the science discussion we are supposed to be having.

We are talking of following the scientific method in increasing the knowledge of each field, using standard scientific means and procedures and that the results of research are not to be measured by engineering demands. It is not folly to propose a model not seen before. The research could end in being wrong, irrelevant, not important. Or it could hit the jackpot.

My analogies are just parables, to make a point and not a description of procedures. Dikpati et al followed normal scientific procedures, from using mathematics and modeling to submitting their report to peer review.

anna v (10:30:34) :Dikpati et al followed normal scientific procedures, from using mathematics and modeling to submitting their report to peer review.

I agree completely. They are all good scientists and being wrong is OK, we all are at times. Any blame [if you want to dole some out] should be put on NASA and NCAR on over-hyping this in press releases.

They are all good scientists and being wrong is OK, we all are at times. Any blame [if you want to dole some out] should be put on NASA and NCAR on over-hyping this in press releases.

This incessant need for advertisement of research results so as to get the necessary funding has to be addressed soon by the scientific community, not only of the US, where the fashion started, but also the EU where it has caught the fancy of the bureaucrats. It has played a large role in this snowball called global warming and renamed climate change. I hope when AGW deflates a reckoning and rethinking of all sorts will take place.

Maybe Anthony could start a thread where we could exchange ideas of how research could be funded without such overwhelming bureaucratic/political government interference and consequent need for promotion of research objectives and results as if they are products.

anna v (12:16:07) :This incessant need for advertisement of research results so as to get the necessary funding has to be addressed soon

My simple solution: if the result turns out to be wrong or contradicted by later research AND it was hyped at a press conference, either the PR person(s) directly responsible should be disciplined [fired, demoted, decapitated, … :-) ] or the principal investigator or both.

Leif Svalgaard (13:21:14): if the result turns out to be wrong or contradicted by later research AND it was hyped at a press conference, either the PR person(s) directly responsible should be disciplined [fired, demoted, decapitated, …]

The trouble will be deciding what to do if the researcher is not a scientist, say a railway engineer like the one at the IPCC, or a Nobel prize winner…

“[…] This incessant need for advertisement of research results so as to get the necessary funding has to be addressed soon by the scientific community, not only of the US, where the fashion started, but also the EU where it has caught the fancy of the bureaucrats. It has played a large role in this snowball called global warming and renamed climate change…

Maybe Anthony could start a thread where we could exchange ideas of how research could be funded without such overwhelming bureaucratic/political government interference […]”

I remember a time (50’s into the 60’s) in the US when many corporations had large R&D staffs that usually included a few scientists free to pursue any pure research of their own interest. Results were often kept secret as they might have great potential for future commercial application. Also, the space race was on and there was a lot of basic research conducted in support of that effort.

I’m betting there are a lot of readers here who remember a time when you not only were NOT in a hurry to race to the press with results but were in BIG trouble if you discussed your work with anyone, even people in another department.

Times have changed with the exception that there’s still a coffee or wine study published every week or two, just like always ;o)

When governments fund research, expect results to support political goals. When corporations fund research, expect results to support economic goals.

The following news report shows that Dikpati still has hopes for a very high peak of sunspot activity for S24:

“Not everyone agrees on whether it’ll be so quiet, however; Mausumi Dikpati from the High Altitude Observatory in Boulder, Colorado – who was one of the experts on the panel that came up with the conclusion she disagrees with – for example:

The panel consensus is not my individual opinion… It’s still in a quiet period. As soon as it takes off it could be a completely different story.

That story, she says, will be a cycle 50% more powerful than the last, and something to silence those of us who’re worried that the sun is slowly going out a la disaster movies, despite knowing better.”

It is true that there has been too much carrot in gaining research grants over the past generation or so of scientists. This has resulted in too much self-advertisement and self-aggrandizement by a lot of researchers and research groups that should have known better, all in order to acquire a larger share of the funding pie for their group or institute.

My feeling is that research was carried out much better before funding became centralized and bureaucrats found out they could wield power with apportioning the funds.

If there were a universally accepted research budget, it should be apportioned to institutes and universities in a democratic way: weights based on the number of researchers, number of students, location, and so on, combined with the results of previous years, should be devised so that a fair share falls to institutes rather than to individual researchers. The institutes could then fight it out within their own ranks over how it should be divided. There would again be politics and backstabbing, but on a much smaller scale, with little chance of the gross polarization of a discipline that we see with climate at present.
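The apportionment idea described above can be sketched in a few lines of code. Everything here is a hypothetical illustration of the commenter's proposal, not any real funding formula: the institute names, the metrics, the weights, and the budget figure are all invented for the example. Each metric is first normalized to its share of the total across institutes, then combined with the agreed weights.

```python
def apportion(budget, institutes, weights):
    """Split `budget` among institutes in proportion to a weighted score.

    Each metric is normalized to its share of the total across all
    institutes before weighting, so metrics on different scales
    (headcounts vs. ratings) are comparable.
    """
    # Total of each metric across all institutes, for normalization.
    totals = {k: sum(m[k] for m in institutes.values()) for k in weights}
    # Weighted, normalized score per institute.
    scores = {
        name: sum(weights[k] * m[k] / totals[k] for k in weights)
        for name, m in institutes.items()
    }
    norm = sum(scores.values())  # equals 1.0 when weights sum to 1
    return {name: budget * s / norm for name, s in scores.items()}

# Illustrative (invented) data: two institutes, three agreed criteria.
weights = {"researchers": 0.5, "students": 0.3, "prior_results": 0.2}
institutes = {
    "Institute A": {"researchers": 120, "students": 400, "prior_results": 0.8},
    "Institute B": {"researchers": 80, "students": 600, "prior_results": 0.6},
}
shares = apportion(1_000_000, institutes, weights)
```

The point of the sketch is only that the mechanics are simple once the weights are agreed; the hard part, as the comment says, is the politics of agreeing on them.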

Also, it would be good to have five- or seven-year funding plans, both internally in the institutes and externally, so there could be stability in research objectives and no great hurry to come out with half-baked results.

So my stick would be peer pressure within each institute. It would differ from institute to institute, so there would be no worldwide coherence, and consensus on provisional results could not become fashionable as easily as it does now. At present, the purse control of the central agencies creates the Hansens and the Gores.

anna v (22:38:38): If there were a universally accepted research budget
Just imagine, for example, by the United Nations… Absolutely NO!
Freedom is the precondition of any human adventure.
The consequence of such a “universally accepted” research budget would be a beehive or an anthill, a “Brave New World” indeed.

“If there were a universally accepted research budget”
Just imagine, for example, by the United Nations…Absolutely NO!

Here we run into language difficulties. By “universal” I did not mean global, but rather something reasonable, for example a percentage of GDP for a country, accepted universally, i.e. by all, within that country. A budget that should be divided among institutes and not persons. It is the turning of researchers into contractors that, in my opinion, has created the mess of AGW. It gives them carrots for sloppy science, self-advertisement, and filling their pockets.

Ideally researchers should be like monks, dedicated and chasing their vision. They should be given a salary in order to live well and respectably and no incentives to make more personal money out of their work.

“Here we report a new annual resolution 10Be record spanning the period 1389–1994 AD, measured in an ice core from the NGRIP site in Greenland. NGRIP and Dye-3 10Be exhibits similar long-term variability, although occasional short term differences between the two sites indicate that at least two high resolution 10Be records are needed to assess local variations and to confidently reconstruct past solar activity. A comparison with sunspot and neutron records confirms that ice core 10Be reflects solar Schwabe cycle variations, and continued 10Be variability suggests cyclic solar activity throughout the Maunder and Spörer grand solar activity minima. Recent 10Be values are low; however, they do not indicate unusually high recent solar activity compared to the last 600 years.”

The paper lends support to some recent conclusions of mine:
1) the sun is not coming down from an all-time high [20th century not a Grand Maximum]
2) solar magnetic cycle persists through Grand Minima [maybe Livingston and Penn have something]
3) Heliospheric magnetic field does not have large swings and cosmic ray modulation doesn’t either

Ok… What should we do, what should you think, what is happening, oh my God, we need to know everything!! Are things warming up, cooling down, is that star blinking faster than the other? Cyclic solar blabla under harmonic feedback of gravitational blabla and the plasma convection releasing blabla… The urge to give your opinion and postulate, argue, etc. is sooooo tiring. Humans are soooo pathetic. One year it’s this, the other year it’s that. (Baby voice: We have to learn more! We want to learn more! We want to know why! Tell us why! Explain to me why!) Have you ever stopped for an instant, so small an instant, just for a moment, and maybe realized that this unrelenting thirst for “knowledge” through scientific reasoning will never give you an answer on “why” things are the way they are, but only “how” they are? And even so, the answers to the “how” will always bring up more questions.
Big scientist heads with big ideas. A life of work in the field for what? Recognition for trying to understand something that doesn’t need to be understood? Laws of physics that fall into the logic of your interpretation? When you are thirsty, very thirsty, and you long for a glass of water, and you take that first big gulp of water out of the glass, does the quenching of your thirst depend on your understanding of why your thirst is being quenched? NO. And trying to understand everything else just kills the moment. Hey! Look! The sun! Ain’t it beautiful? Oh yes, but did you know that the sunspot count is now under 11 and the 10.7 flux is at 67? NO, AND WE DON’T CARE. The sun shines, and it helps us get warm, it makes plants grow, and it did well before you guys tried understanding why, and will still do well after you stop trying. The functioning of things is far from depending on puny arrogant humans who think highly of themselves and of their oh-so-precious knowledge of things. The Earth doesn’t need you. Neither does the sun. You just destroy everything in your path like locusts, and then try to pose as redeemers of the world. You are pathetic, useless, and lost. Nature’s solution to Nature’s problems is provided naturally by Nature. Humans only interfere with the normal process of things. Nature doesn’t NEED your understanding. It just needs you, and all your stupid arrogant friends, to leave her alone.