New calibration satellite required to make accurate predictions, say scientists

A new paper published in Philosophical Transactions of the Royal Society A explains weaknesses in our understanding of climate change and how we can fix them. These issues mean predictions vary wildly about how quickly temperatures will rise, which has serious implications for long-term political and economic planning. The paper’s lead author is Dr Nigel Fox of the National Physical Laboratory (NPL), the UK’s National Measurement Institute.

The Earth’s climate is undoubtedly changing, but how fast and what the implications will be are unclear. Our most reliable models rely on data acquired through a range of complex measurements. Most of the important measurements – such as ice cover, cloud cover, sea levels and temperature, chlorophyll (oceans and land) and the radiation balance (incoming to outgoing energy) – must be taken from space, and for constraining and testing the forecast models, made over long timescales. This presents two major problems.

Firstly, we have to detect small changes in the levels of radiation or reflection from a background fluctuating as a result of natural variability. This requires measurements to be made on decadal timescales – beyond the life of any one mission, and thus demands not only high accuracy but also high confidence that measurements will be made in a consistent manner.
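To illustrate why measurement uncertainty sets the detection timescale, here is a minimal back-of-envelope sketch. The trend, variability and uncertainty figures are hypothetical, chosen only to show the scaling, not taken from the paper:

```python
import math

def years_to_detect(trend_per_year, natural_sd, instrument_sd, z=2.0):
    """Rough time for a linear trend to emerge from combined natural
    variability and instrument uncertainty (illustrative only):
    detection when trend * t exceeds z * total noise sd."""
    total_sd = math.sqrt(natural_sd**2 + instrument_sd**2)
    return z * total_sd / trend_per_year

# hypothetical numbers: 0.02 units/yr trend, 0.10 natural variability (1 sd)
print(years_to_detect(0.02, 0.10, 0.10))  # current-class instrument
print(years_to_detect(0.02, 0.10, 0.01))  # ten-times-better instrument
```

Shrinking the instrument term pushes the total noise down towards the natural-variability floor, which is why better calibration shortens, but can never eliminate, the multi-year wait.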

Secondly, although the space industry adheres to high levels of quality assurance during manufacture, satellites, particularly optical instruments, usually lose their calibration during launch, and it drifts further over time. Comparable ground-based instruments would be regularly calibrated, traceably to a primary standard, to ensure confidence in the measurements. This is much harder in space.

The result is varying model forecasts. Estimates of global temperature increases by 2100 range from ~2-10°C. Which of these is correct is important for making major decisions about mitigating and adapting to climate change: for instance, how quickly are we likely to see serious and life-threatening droughts, and in which parts of the world; or if and when do we need to spend enormous amounts of money on a new Thames barrier. The forecast change from all the models is very similar for many decades, only deviating significantly towards the latter half of this century.

Dr Nigel Fox, head of Earth Observation and Climate at NPL, says: “Nowhere are we measuring with uncertainties anywhere close to what we need to understand climate change and allow us to constrain and test the models. Our current best measurement capabilities would require >30 yrs before we have any possibility of identifying which model matches observations and is most likely to be correct in its forecast of consequential potentially devastating impacts. The uncertainties needed to reduce this are more challenging than anything else we have to deal with in any other industrial application, by close to an order of magnitude. It is the duty of the science community to reduce this unacceptably large uncertainty by finding and delivering the necessary information, with the highest possible confidence, in the shortest possible time.”

The solution put forward by the paper is the TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies) mission, a concept conceived and designed at NPL. This would see a satellite launched into orbit with the ability not only to make very high accuracy measurements itself (a factor-of-ten improvement) but also to calibrate and upgrade the performance of other Earth Observation (EO) satellites in space. In essence it becomes “NPL in Space”.

The TRUTHS satellite makes spectrally resolved measurements of incoming solar radiation and the radiation reflected from the ground, with a footprint similar in size to half a rugby field. Its unprecedented accuracy allows benchmark measurements to be made of key climate indicators such as cloud amount, albedo (the Earth’s reflectance) and solar radiation, at a level which will allow differences between climate models to be detected within a decade (a third of the time required by existing instruments). Its data will also enable improvements in our knowledge of climate and environmental processes such as aerosols, land cover change, pollution and the sequestration of carbon in forests.

However, not only will it provide its own comprehensive and climate-critical data sets, it can also facilitate an upgrade in performance of much of the world’s Earth observing system as a whole, covering both satellite and ground-based data sets. By performing reference calibrations of other in-flight sensors through near-simultaneous observations of the same target, it can transfer its calibration accuracy to them. Similarly, its ability to make high-accuracy corrections for atmospheric transmittance allows it to calibrate ground networks measuring changes at the surface, e.g. flux towers and forests, and other reference targets currently used by satellites, such as the snowfields of Antarctica, deserts, oceans and the Moon. In this way it can even back-correct the calibration of sensors in flight today.

TRUTHS will be the first satellite to have high-accuracy traceability to SI units established in orbit. Its own measurements, and in particular the calibration of other sensors, will not only aid our understanding of climate change but also facilitate the establishment and growth of commercial climate and environmental services. One of the barriers to this market’s growth is customer confidence in the results and long-term reliability of service. TRUTHS would enable a fully interoperable global network of satellites and data with robust, trustable guarantees of quality and performance.

The novelty of TRUTHS lies in its on-board calibration system. The instruments on the TRUTHS satellite will be calibrated directly against an on-board primary standard – an instrument called a CSAR (Cryogenic Solar Absolute Radiometer). This compares the heating effect of optical radiation with that of electrical power – transferring all the difficulties associated with existing space-based optical measurements (drift, contamination, etc.) to more stable electrical SI units. In effect, this reproduces in orbit the traceability chain normally carried out on the ground.
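For readers unfamiliar with electrical-substitution radiometry, a toy sketch of the principle follows. The cavity model, servo gain and numbers are invented for illustration; a real CSAR is far more subtle:

```python
def substituted_power(target_rise_K, heater_K_per_W, steps=200):
    """Toy electrical-substitution servo: adjust electrical heater power
    until the cavity shows the same temperature rise it had under
    illumination; the matched electrical power (volts x amps, an SI
    quantity) then stands in for the optical power."""
    p_elec = 0.0
    for _ in range(steps):
        rise = p_elec * heater_K_per_W                    # steady-state cavity model
        p_elec += 0.1 * (target_rise_K - rise) / heater_K_per_W  # nudge towards match
    return p_elec

# cavity warmed 0.5 K under illumination; heater responds at 2.5 K/W,
# so the inferred optical power should converge to 0.2 W
print(substituted_power(0.5, 2.5))
```

The point of the substitution is that the hard-to-maintain optical measurement is replaced by an electrical one, which is far more stable in orbit.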

This would make climate measurements ten times more accurate and give us models on which we could make important decisions about the future.

The project, which would be led by NPL, is being considered by different organisations. The European Space Agency has recommended looking into ways to take it forward, possibly as a collaboration with other space agencies. NASA is also keen to collaborate formally.

Nigel concludes: “Taking this forward would be an excellent investment for the UK, or any other country which supports it. This is not only an effective way to address the problem of understanding climate change, but also an excellent opportunity for business. It would grow expertise in Earth Observation and showcase the UK’s leading space expertise – an industry which is growing by 10 per cent a year. It would also provide a platform to underpin some of the carbon trading which will be a big international business in the near future.”

The National Physical Laboratory (NPL) is the UK’s National Measurement Institute and one of the UK’s leading science facilities and research centres. It is a world-leading centre of excellence in developing and applying the most accurate standards, science and technology available.

NPL occupies a unique position as the UK’s National Measurement Institute and sits at the intersection between scientific discovery and real world application. Its expertise and original research have underpinned quality of life, innovation and competitiveness for UK citizens and business for more than a century.

93 thoughts on “Uncertain, impaired, models”

How uncertain is the future? Nigel should be a lead author on AR5, or at least be asked to account for the discrepancy between his call for more data and the no-need-for-more-data assertion by the IPCC-Gore Group-Grope.

The disconnect within the “consensus” group is further proof of the consensus. But of course consensus exists within the context of levels of disagreement; statistically significant error bars supported by peer-reviewed studies define “consensus” within a p>0.95 range such that outliers (in both senses of the word), e.g. skeptics, are recognizable and discountable.

The scary thing is that the above Orwellian statement may be defensible.

At first glance, this sort of thing is just what is needed. I do not yet know if this particular implementation is exactly what is needed. However, I say “Yes” to improved data collection before we do anything else.

Fox gives a brief criticism of climate models and existing data gathering techniques. Both are welcome. I hope he develops both in great detail. I especially like his point that present data gathering techniques will not yield adequate data for decision for perhaps thirty years.

The last thing the Warmistas want is any kind of empirical truth,
especially from a satellite called ‘Truth’. Only computer models need apply for that label.

I expect it will be launched on another Taurus rocket with a sticky fairing,
crashing in Antarctica with the previous two would-be sources of AGW-threatening data.

If launched it will just confirm Spencer’s ideas about no missing heat, Svensmark’s cosmic-ray ideas, and the negative-feedback reality of no catastrophe, impending or otherwise. Such an absolute disaster to the Warmista Bandwagon will never be allowed to happen. Watch for big funding cuts here.

Oh brother, an orbiting acronym. Funny how they seem to need that now. A calibration satellite, eh? Just diddle it any old way ya like, and it will TRUTH everything. They’re just figuring out that there’s a signal-to-noise ratio? Bother. “It is a world-leading centre of excellence in developing and applying the most accurate standards, science and technology available.” Now self-flagellation is part of the mix.

I worry about giving the Team the ability to re-calibrate the satellites to match the models. What controls will be in place to assure that the calibrations are being done in an open, transparent way that can be audited by independent (non-team, non-peers) scientists?

So between surface temperature readings that are questionable and satellite temperature measurements that are questionable and models that are questionable we have what assurance that global warming is occurring? With all this “questionable” data(?) how do we justifiably call “climate science” a science?

“The forecasted change by all the models is very similar for many decades only deviating significantly towards the latter half of this century.”

That’s the first time I’ve heard this. As far as I know, you can run the same model twice, with different random initial conditions of the atmosphere, and get wildly varying trends for the first decades of the model run.

Just been playing today with regional runs of the CMIP5 model under the RCP60 scenario (~SRES1B, realistic projected use of fossil fuels). Alas, they do not produce anything except hockey sticks, not being able to catch the 20th century record at all. So my recommendation to the modelling community is to replicate the known past first, then argue about 2100.
Station Debrecen, Hungary, with a 150-year-long record vs the CMIP model:

Good calibration through state of the art equipment seems like a sound approach. And looking to satellite sensors would remove pesky earth-based inaccuracies. So far so good.

However, he is essentially saying that all the climate models are accurate enough for the next 40 y (since they more or less agree) — significant divergence starts only thereafter (seems like a stretch to me, but OK). So we just need to know which models to pick within 10 y or so (which is possible ONLY with improved calibration, otherwise it would take 30 y). But we already have > 30 y of satellite data with which to compare models. Shouldn’t it be possible to run those same models on the reams of data over past 30 y in order to separate wheat from chaff?
Or did I miss something?

Looking another way, if the current accuracy in measurement is an order of magnitude below state of the art, how does he know that ANY of the existing models are correct? How does he know that once his improved data has been gathered ANY of the existing models are going to reflect reality?

Between the lines, he betrays an uncanny belief that climate models have it right (more or less). What will he do if/when NONE of them follows reality? Recalibrate? Ask for more funding?

RE: 10% annual growth (of the “Earth Observation” industry): One hopes that this will taper off soon, so governments can focus on truly anthropogenic crises facing us!

I’m not impressed. A lot of bold statements from people without the access to the knowledge needed to make those statements.

“NASA is also keen to collaborate formally.” Translation from PRese: “We called some guy at NASA and the guy didn’t hang up.”

“The novelty of TRUTHS lies in its on-board calibration system. The instruments on the TRUTHS satellite will be calibrated directly against an on-board primary standard – an instrument called a CSAR (Cryogenic Solar Absolute Radiometer). This compares the heating effect of optical radiation with that of electrical power – transferring all the difficulties associated with existing space based optical measurements (drift, contamination, etc) to more stable electrical SI units. In effect, this mimicks the traceability chain carried out on the ground in orbit.”

Sorry dude, old hat. This reads like a gee whiz “New Scientist” article.

BTW, I have 35 years of experience with electronic test and measurement, including uncertainty measurement and calibration. Half of this was for space-based electronics, some of which are on other worlds.

Don B says:
September 19, 2011 at 1:18 pm
“The climate is complex, and modeling it is very, very, very hard. Andy Revkin wrote a column about that 10 years ago in the NY Times, and not much has changed. Perhaps a little more money would help.”

Yes, by doubling the computer power you could increase your forecasting horizon from 5 to 5.5 days; and by doubling again you could improve it to 6 days. (The deviation of a chaotic system from its model increases exponentially with time)

There is not enough money in the world; THAT’s the crucial problem of climate science. And if they got all the computer power they needed, the heat dissipation of the computer would turn Earth WHITE hot; no greenhouse effect needed.
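The arithmetic behind the 5 → 5.5 → 6 day progression above is just exponential error growth. A sketch with hypothetical numbers (an e-folding time of about 0.72 days happens to reproduce the commenter’s figures):

```python
import math

def horizon(initial_err, tolerance, e_fold_days):
    """Forecast horizon when error grows as err(t) = initial_err * e^(t/tau):
    solve for the time at which err(t) reaches the tolerance."""
    return e_fold_days * math.log(tolerance / initial_err)

# each halving of the initial error buys only tau * ln(2) extra days,
# no matter how much computing that halving costs
print(horizon(0.001, 1.0, 0.72))    # ~5.0 days
print(horizon(0.0005, 1.0, 0.72))   # ~5.5 days
print(horizon(0.00025, 1.0, 0.72))  # ~6.0 days
```

This is why throwing ever more computer power at a chaotic system yields only logarithmic gains in forecast range.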

John W: “carbon trading which will be a big international business in the near future.”

That is the point I saw as being the ‘running joke’. This appears to be the UK science way of justifying carbon taxes. At great expense of many further satellite launches, of course, and certainly not at the expense of the UK.

Dr Nigel Fox, head of Earth Observation and Climate at NPL, says: “Nowhere are we measuring with uncertainties anywhere close to what we need to understand climate change and allow us to constrain and test the models. Our current best measurement capabilities would require >30 yrs before we have any possibility of identifying which model matches observations and is most likely to be correct in its forecast of consequential potentially devastating impacts. The uncertainties needed to reduce this are more challenging than anything else we have to deal with in any other industrial application, by close to an order of magnitude. It is the duty of the science community to reduce this unacceptably large uncertainty by finding and delivering the necessary information, with the highest possible confidence, in the shortest possible time.”

If any of you were CEO of a capitalistic company competing for market share, how many of you would go to your board of directors and thank them for the huge amounts of money spent on something with such dubious value that it needs to be essentially replaced?

NPL is truly a world centre of excellence in mensuration, and Nigel Fox makes good points concerning the need for accurate, precise and reliable climate data acquisition. His comments on the models are also worthy.

More than a decade ago a group of scientists from around the world was assembled to make presentations on climate change at the US Congress. There were three panels each of three persons (one of whom chaired the panel) and who each gave a presentation. The presentations from each panel were followed by a Q&A session. Fred Singer chaired the first panel on climate data, I chaired the second on climate model performance, and David Wojick chaired the third on policy implications.

The first question to the panel on climate models was voiced in an aggressive manner and was;
“There seems to be a dichotomy. The first session said the climate data is wrong, and this session says the models are wrong. Where do we go from here?”

Gerd Rainer-Weber rose to reply but – as chairman of the panel – I signalled him to desist then I looked the questioner in the eyes and replied;
“Sir, there is no dichotomy. The climate data are right or they are not. If the climate data are right then the models cannot emulate past climate. Alternatively, the climate data are not right. In either case, we cannot assess the models’ performance and, therefore, they are not useful as predictive tools. So, I agree your question, Sir, where do we go from here?”

The questioner studied his shoes and said nothing.

More than a decade has passed since then, but I would give the same answer to that question today.

“It would also provide a platform to underpin some of the carbon trading which will be a big international business in the near future.”
————————————–
The question is “How will they search differently and come up with a higher number for CO2?”
Did the NPL question or did they investigate Dr Jones, the Team, climategate, or the harry read me file?
I wonder if they will be the science wing of the IPCC.

From NPL in the head post: “Similar ground based instruments would be regularly calibrated traceable to a primary standard to ensure confidence in the measurements. This is much harder in space.”

Traceable to a primary standard means the measurements would have a known absolute accuracy, as contrasted to precision, with a known accuracy standard deviation. This is exactly what has been entirely missing from the surface air temperature measurements across the entire 20th century and right up through the present.

Comparing regional time series does not test for accuracy in surface air temperatures. Regional correlations in temperature records only imply that weather-induced systematic errors — most notably from solar loading and wind speed effects — can be as regionally correlated as the weather itself. Without an external reference standard, systematic sensor measurement errors are hopelessly convoluted into the surface air temperature record. And the surface air temperature record is one of the primary fine-tuners of climate models.

mpaul your question is at once highly appropriate and a tragic commentary on how the systematic dishonesty in climate science has cast a dark shadow of legitimate suspicion over the integrity of the entire scientific enterprise.

Twenty years ago, your question would have been so far beyond reason that it likely would have occurred to no one. Today, it’s perhaps among the first questions that come unbidden to mind. No one can trust these guys anymore, and for good evidential reasons. Not only that, but virtually every single major official science organization has taken a public stand in support of a position reached dishonestly. They are no longer worthy of trust to do the right thing.

Let’s eliminate 20 of the 22 climate models, take the funding and invest in high quality measurement of all things critical to climate science. And while we are at it, let’s measure this “positive feedback” thingy and put this issue to bed.

“Solar radiation is the driving force of the Earth’s climate and small changes in the total output of the Sun can have significant effects on the Earth’s surface. It is believed that a 0.3% change in Total Solar Irradiance (TSI) was responsible for the mini-ice-age of the 17th century, Figure 2. The TSI record relies upon the data from many different solar radiometers flown over the last 20 years, whose inherent variability (~0.8%) could affect the prediction of models of global temperature change by as much as 0.8 K.”

I love the bit about the Thames barrier… has no one read the literature from the ’60s up to now? The south-eastern ‘plate’ of England (which includes London) is subsiding at an incredibly well-defined rate. It is slowly dipping into the English Channel, so it is not a question of “if” but “when” a new barrier needs to be built.

Hard data is the enemy of AGW, but if someone tried to make a case for a satellite based on the idea that it disproves AGW the chance of funding would be zero. By wrapping it up as a way of distinguishing between AGW models the chance of funding is good. We get improved data, and data is data!

If CLOUD had been described as an experiment to DISPROVE solar influence on climate then funding would have been less likely to be delayed. If the experiment happened to show the opposite then it is not the experimenter’s fault – that is what science should be about.

“Sorry dude, old hat. This reads like a gee whiz “New Scientist” article.

BTW, I have 35 years of experience with electronic test and measurement, including uncertainty measurement and calibration. Half of this was for space-based electronics, some of which are on other worlds.”

=====

Thanks for posting that. Let me add that many satellites have included on board calibration standards for some sensors since the very early days of satellites. Unfortunately, it’s been nearly 50 years since I worked with those, and I no longer remember specific examples. And worse yet, even if I did remember, I probably couldn’t remember what was classified and what wasn’t and therefore couldn’t cite examples if I wanted to.

That said, if some types of observations really are a bit shaky and could do with better calibration — on board or ground based — who can argue? I would ask however why the improved instruments can’t just be developed and deployed on future satellites within existing observation programs. Do we really need a new satellite program to make these observations?

Us normal people know man cannot control Earth’s climate, so our plan is to try to adapt as Mother Nature does her thing. But my big question is, what are these “man-caused climate change” idiots going to do when they finally realize there’s not a darn thing man can do to prevent “climate change”? Or are these people so stupid they will never realize their mistake?

“Let’s eliminate 20 of the 22 climate models, take the funding and invest in high quality measurement of all things critical to climate science.”

This is an EXCELLENT suggestion. And I know at least one candidate for elimination…

I would also suggest an old fashioned approach. Instead of mooching relentlessly off the taxpayers, let’s have a telethon to raise needed funds for climate research! They could get entertainers, politicians, and movie stars to plead with the American people that climate research is vital. Maybe something like this…

Ted Dansen: “Just look at this poor little GCM, Martin!”

Martin Sheen: “[sniff] Yeah, it has no computer to run on…it just sits there, lonely, unfulfilled!”

Bono [to audience]: ” But YOU can help…don’t let this climate model die! Just a small donation of $10/month could give it a home on a nice parallel computer.”

Everyone [singing]: “We are the world! We are the children! We will be broke by 2012, unless you start givin’…….”

P Walker says:
How do they intend to prevent calibration loss during launch ?

Exactly what my first thought was. The only way to really calibrate the instrument is to fly a calibrating instrument aboard a Shuttle-like craft that is in orbit briefly and can do “crossing runs”. This is where the short-term calibrating instrument takes data at the same time and place as the original satellite instrument. The calibrating instrument is then returned to Earth and closes the loop as its final calibration is determined. This is not absolutely guaranteed to catch drift, because jumps at launch and landing might not be caught, but there is some chance to get to the TRUTH.

During the development of the F-16, the Air Force assigned an officer to ride herd on the weight budget for the plane. One day he visits the avionics department in a lather. He was sure that the plane was way over budget on weight. The manager goes over his table for avionics: “Nothing wrong that I can see.”

The lieutenant says, “No, those figures are just for the computers, radar and display. You are spending just as much money on computer programs and they are not in this table. That’s what’s missing.” Slowly, the manager said, “Software does not weigh anything.”

The flustered lieutenant left, only to return with a cart filled with trays of IBM punch cards. “These are only a part of one of your programs. You want to tell me they don’t weigh something!?!?” The manager replied, “We only use the HOLES.”

“Estimates of global temperature increases by 2100 range from ~2-10°C. Which of these is correct is important for making major decisions about mitigating and adapting to climate change”

The range does not include negative numbers, and that is because warming is presupposed.
Warming and cooling have been presupposed in decades past, and have turned out wrong.
What is there, that has been proven, about this time that makes it so special?

Excuse me but I’m not sure I understand how the science is settled when the data is “an order of magnitude” too uncertain for long term projections. And of course short term projections are impossible because that’s weather, not climate.

It appears that the science is settled like what I flush down my toilet is settled on the bottom of my septic tank. It’s a load of crap, basically.

So we can read a license plate number on an automobile of interest from orbit but we can’t park a satellite in geosync that can focus on the earth and determine total brightness to better than a few percent?

Forgive me but this sounds almost simple enough for a fifth grade science project. Focus the camera on the moon for a constant brightness reference then point it at the earth. Snap, snap, done. We don’t even have to return the film to the earth to be developed like we did with Keyhole spy satellites 60 years ago.

Unfortunately not. It has a number of difficulties, not least of which is that it can only take measurements once a month. As I recall it ran for five years around the turn of the millennium, and the measurements were not in satisfactory agreement with other methods employed to measure Earth’s albedo. I checked up on it recently and the earthshine boffins are now claiming they’ve “corrected” the data so it now agrees with other methods.

“consequential potentially devastating impacts”
No agenda here folks. Just another gubment employment program, lasting decades.
Since co2 has been proven to not be a significant climate temperature driver, there isn’t a damn thing anyone can do to change climate one way or the other.

Don B – nice article. In criticizing the models, which are necessarily imperfect given the complexity of Earth system, we often forget what a major intellectual achievement it is that they do as well as they do.

If there’s one area of climate science where people’s criticism isn’t backed up with meaningful specifics, constructive alternatives, or a basic understanding of the development process, modeling would be it. As proven by the comments here.

“Don B – nice article. In criticizing the models, which are necessarily imperfect given the complexity of Earth system, we often forget what a major intellectual achievement it is that they do as well as they do.

If there’s one area of climate science where people’s criticism isn’t backed up with meaningful specifics, constructive alternatives, or a basic understanding of the development process, modeling would be it. As proven by the comments here.”

Let me reconstruct your last paragraph to be true:

“If there is one area of climate science that isn’t backed up with complete specifics, empirical data, or a basic understanding of the complexity of the Earth, modeling would be it. As proven by the many posts and accompanying comments found here at WUWT for many months, even years.”

Apparently you haven’t been doing your homework. And to refute your first statement: “that they do as well as they do” simply isn’t a good enough basis for drastic changes in the way we live. You’d basically throw Western Civilization under the bus because you think they can predict the future. I can’t predict the future either, and I’m not making drastic, outlandish, suicidal proposals; but unlike your climate modelers, I’m freely willing to admit it. They aren’t.

“So we can read a license plate number on an automobile of interest from orbit but we can’t park a satellite in geosync that can focus on the earth and determine total brightness to better than a few percent?”

Dave, is your license plate number 8T6-W5E7 ? Just checking to see if we got it right. Ha ha.

The Spy satellites recently declassified had ground resolutions of a few FEET, so unless you had a really BIG license plate they probably could not read it.

They could detect your car/tank/missile/submarine/bomber and that is what they needed to do.

Dave also wrote;

“Forgive me but this sounds almost simple enough for a fifth grade science project. Focus the camera on the moon for a constant brightness reference then point it at the earth. Snap, snap, done.”

Actually, making absolute measurements of optical radiation (Visible/IR/Far IR) is one of the more challenging measurement problems that exist. IMHO it is much harder than measuring electrical power, distance, temperature, time, etc. etc.

Just look up the international standard for optical radiation; it is based on the flux coming from a molten pool of platinum. Why? Well, in part because they could not find a better absolute standard. Even with this standard and lots of careful measurements it is DIFFICULT to make absolute optical radiation measurements on the surface of the Earth to much better than a few percent (that’s in the visible region; in the infrared, better than 3-5% is state of the art). Doing it on orbit is orders of magnitude more difficult.

There is lots of info on this topic at the NIST website, among other locations.

Cheers, Kevin.

(FYI, I have worked on the ground calibration of the radiometric accuracy of Earth imaging satellites. I would much prefer measuring the absolute length of a stick to better than 0.001%)

RockyRoad, what level of agreement would you need from a range of models to be convinced that the climate will get a lot warmer, like nothing Western Civilization has experienced, under a BAU scenario?

Also, how are current models not continually compared against a lot of empirical data?

Also, how have models not gotten better in reflecting the complexity of the Earth?

In other words, what would be a sufficient model?

Or, alternatively, if you don’t like the current models, what’s your alternative? (Remembering that observations alone won’t cut it – any cause-and-effect attribution requires some kind of model.)

I enjoyed, ex officio, a tour of NPL a couple of years back, talked for some hours with some of the lead scientists and was given demonstrations of their work. They are very, very good at what they do (i.e. measure stuff). Brilliant in fact. Science at the level of art. Of course what happens afterwards to the measurements they obtain is anyone’s guess.

So we can read a license plate number on an automobile of interest from orbit but we can’t park a satellite in geosync that can focus on the earth and determine total brightness to better than a few percent?

My thoughts exactly! An Earth image that fills 100% of the measurement frame seems the logical path for metrology. The signal would be clean and hard to dispute. Are you aware of the status and priority of any such proposals? GK

Uhm… AMSU-A, which provides the data for the UAH record run by Dr Spencer, recalibrates itself several times per day by (if I recall correctly) comparing against readings from deep space that are known to be stable. Given the absolute glee with which his detractors reiterate the tiniest of errors he’s made, long after he’s corrected them, and publicly to boot, one would think they’d have gone after him big time if there were anything wanting in his calibration methodology. Sounds to me like NPL is hyping a problem that they haven’t even shown for certain exists, and I doubt that it does for the most part.
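For reference, the onboard recalibration described here is the standard two-point scheme: each scan cycle views cold space (~2.73 K) and an internal warm load of known temperature, and scene radiances are interpolated between those two anchors. A toy sketch, with all the counts and the warm-load temperature invented for illustration:

```python
def two_point_calibration(c_scene, c_cold, c_warm, t_cold=2.73, t_warm=300.0):
    """Linear two-point calibration: map raw radiometer counts to brightness
    temperature using views of cold space (t_cold) and a warm load (t_warm)."""
    gain = (t_warm - t_cold) / (c_warm - c_cold)  # kelvin per count
    return t_cold + gain * (c_scene - c_cold)

# Hypothetical counts: cold-space view reads 1000, warm-load view reads 9000.
# A scene at the midpoint count maps to the midpoint temperature.
print(two_point_calibration(5000, 1000, 9000))  # ~151.4 K
```

The scheme assumes the instrument response is linear between the two anchors; real processing adds nonlinearity corrections, but the two known references are what keep the gain from drifting.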

But the biggest possible objection to their plan is the notion of using ONE satellite to calibrate ALL THE OTHERS. That, frankly, is just insane. For starters, if their “truths” satellite turns out to have an error of its own, then ALL the data from ALL the satellites would be calibrated WRONG.

WORSE… we would have no way of knowing!

WORSER…. We’d be abandoning everything we ever learned about science and measurement. Just when I thought there was NOTHING LEFT TO ABANDON! The whole point of science and measurement is to use different methods of investigation. If you get the same answers via different methods, that’s evidence you got it right. If you get different answers… that’s evidence that there are one or more errors in one or more of the methods, and the manner in which they differ is a clue to figuring out what the errors ARE!

WORSER STILL… Measurement requires that you ALWAYS START AT ZERO. They don’t teach drafting anymore, just AutoCAD, which doesn’t teach the same lessons. If you draw a line 10 cm long, with tick marks at 4, 8 and 9 cm, you NEVER measure out four and then four more and then one more to get 4, 8, and 9. Why? Because if you make a mistake on 4, then 8 and 9 are now both wrong as well. Instead, you measure (from zero) 4 and (from zero) 8 and (from zero) 9. Any mistake is confined to that one measurement instead of all the measurements that follow.
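The drafting rule is just error propagation: chained measurements accumulate variance, while zero-referenced measurements keep each error independent. A quick simulation with made-up numbers (a 0.01 cm random error per measurement, tick marks at 4, 8 and 9 cm):

```python
import random

random.seed(42)
SIGMA = 0.01             # per-measurement error, cm (invented)
TICKS = [4.0, 8.0, 9.0]  # true tick positions, cm

def measure(length):
    """One length measurement with a random Gaussian error."""
    return length + random.gauss(0.0, SIGMA)

def chained():
    """Measure 4, then 4 more, then 1 more -- each error carries forward."""
    pos, prev, out = 0.0, 0.0, []
    for t in TICKS:
        pos += measure(t - prev)
        out.append(pos)
        prev = t
    return out

def absolute():
    """Measure every tick from zero -- errors stay independent."""
    return [measure(t) for t in TICKS]

# Compare the RMS error of the LAST tick over many trials.
N = 20000
rms = lambda f: (sum((f()[-1] - TICKS[-1])**2 for _ in range(N)) / N) ** 0.5
print(f"chained RMS:  {rms(chained):.4f} cm")   # ~ SIGMA * sqrt(3)
print(f"absolute RMS: {rms(absolute):.4f} cm")  # ~ SIGMA
```

The chained error on the nth tick grows like sigma times sqrt(n); the zero-referenced error stays at sigma no matter how many ticks you lay down.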

So sure…let’s put up ONE satellite using ONE method with NO WAY TO VERIFY it, and use it as the reference for EVERYTHING ELSE.

This is the kind of science that can only be explained by a runway model.

It is quite a bit more than a “fifth grade science project”. I have done it for a living, and I doubt I could have done it after the fifth grade.

Cheers, Kevin.

Please note I did not quote that part of DS’s comment. It was hyperbole for sure but some literary licence is permitted, I think. My comment was:

My thoughts exactly! An Earth image that fills 100% of the measurement frame seems the logical path for metrology. The signal would be clean and hard to dispute. Are you aware of the status and priority of any such proposals?

Climate models do not rely on a simple ln() formula for CO2. They rely on band models for their radiative transfer equations (RTE). You can even see intercomparison experiments where the band models are compared against line-by-line (LBL) models and against physical measurements.

In the 1980s, band models began to be incorporated routinely in climate models. An international program, the Intercomparison of Radiation Codes in Climate Models (ICRCCM), was inaugurated for clear-sky infrared radiative transfer, with results described by Ellingson et al. 1991 and Fels et al. 1991 (note Andy Lacis is a coauthor):

The ln formula is a simple curve fit that is used to EXPLAIN the logarithmic response. It is not used in any GCM.

As with all modelling, we have a hierarchy of models. At the top of the pyramid are LBL models. These are the most precise. They are validated against observation. They work. They make predictions precise enough that we actually use them to produce the data products you like to use, like SSTs measured from space. There is no need to model individual molecules. Below the LBL model is the band model. It’s faster (some even run onboard satellites) but slightly less precise.

At the bottom lies the phenomenological model. This is where the ln() model sits. It’s used to “illustrate” ideas in simple ways, like saying the trajectory of a projectile follows a certain curve: we all know that curve is a gross approximation, but it gets the general idea across.
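For reference, the phenomenological ln() fit under discussion is usually written as dF = 5.35 ln(C/C0) W/m^2, the Myhre et al. (1998) curve fit to line-by-line results. A couple of lines show what it yields; the 278 ppm preindustrial baseline is an assumption for illustration:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Myhre et al. (1998) simplified expression: dF = 5.35 * ln(C/C0) W/m^2.
    A curve fit summarizing line-by-line results -- illustrative only,
    not what GCMs actually integrate."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"doubling (278 -> 556 ppm): {co2_forcing(556.0):.2f} W/m^2")  # ~3.71
print(f"390 ppm vs preindustrial:  {co2_forcing(390.0):.2f} W/m^2")
```

The familiar “~3.7 W/m^2 per doubling” figure is just this fit evaluated at C = 2*C0; the band and LBL codes higher up the pyramid compute the same quantity spectrally rather than from one formula.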

Interstellar Bill says:(September 19, 2011 at 1:02 pm)
“The last thing the Warmistas want is any kind of empirical truth, especially from a satellite called ‘Truth’. … ”

Reminds me of the old Russian joke from the Soviet era, when they had two state newspapers called Pravda (“Truth”) and Izvestia (“Information”). They used to joke that you could get truths which weren’t informative, or information which wasn’t true, but nowhere could you find truthful information. You have to love the grimness of their humour… meanwhile, wait for a satellite called “Information”.

And yes. No re-calibration or re-interpretation of the results without complete openness, please.

Concerning climate models, at September 19, 2011 at 8:49 pm you asked RockyRoad:

“In other words, what would be a sufficient model?

Or, alternatively, if you don’t like the current models, what’s your alternative? (remembering that observations alone won’t cut it – any cause and effect attribution requires some kind of model).”

I offer the following correct, accurate and true answers to your questions.

A sufficient model has demonstrated predictive skill.
But
(a) no climate model has existed for sufficient time for it to have demonstrated any predictive skill for periods of 25, 50 or 100 years.
and
(b) each climate model has demonstrated that it lacked the predictive skill needed to predict the stasis in global warming of the last decade.

In other words, the only predictive skill demonstrated by the climate models is that they fail to predict global climate change at decadal time scales.

There are several alternatives to climate models that have similar demonstrated predictive skill for prediction of future climate. For example, one such alternative is the examination of chicken entrails.

Yes, they are obviously parametrized to fit the 1980-2000 period, but totally unable to replicate the periods before and after. Let’s look at the Arctic polar region, allegedly the area most sensitive to those thick forcing arrows in the K-H cartoons:
Models blindly follow the Keeling curve, and there is not a hint of the cyclical variability observed in reality. How long will the sine wave have to keep going down before they are invalidated?

“LBL models.. are the most precise. They are validated against observation. They work. They make predictions precise enough that we actually use them to produce the data products that you like to use.. like SSTs measured from space.”

Oh really? Let’s look at the North Atlantic, modelled vs observed:
I see no cyclical variation captured by the model, which again produces just another hockey stick. Where is the 1910-1940 warming and the 1940-1980 cooling in the models “which work and are the most precise”? What is so unusual about the post-1980 warming? What will the models and modellers do when the post-2005 cooling becomes obvious?

Global projections: where is the “AGW may be masked for a decade, our models predicted exactly that” warming lull?

Models are totally disconnected from reality. All that modelling is computerized curve-chasing of a 30-year natural trend, with the AMO/PDO in a positive phase. My guess is that all that radiative pseudophysics is BS, or overestimated 10x.

Trillions upon trillions of nanosecond-scale energy transfers are reduced to a formula covering a 20 km by 20 km by 3 km box. Just because LBL is more refined than a simple 5.35 ln(CO2) formula doesn’t make it all accurate.

What role do N2 and O2 play in LBL code? Most energy transfer in the atmosphere occurs through molecular collisions rather than radiative transfer.

…Or, alternatively, if you don’t like the current models, what’s your alternative? (remembering that observations alone won’t cut it – any cause and effect attribution requires some kind of model)…

I don’t think your question is framed properly.

The current modality’s inputs have been shown to be both biased and incomplete. To base any climate decisions on projections from a deeply flawed model would be folly! To take far-reaching, dramatic, consequence-laden actions based on this model is ludicrous. When reality mocks your model… It is time to put your hands into your pockets and leave them there. WUWT will let you know when the current modality has been improved enough. I certainly want to see the model with valid inputs (cloud effects) and validation runs forward and back.

Under the current information-via-propaganda paradigm, how long do you think that will take? GK

G. Karst: It is time to put your hands into your pockets and leave them there. WUWT will let you know when the current modality has been improved enough. I certainly want to see the model with valid inputs (cloud effects) and validation runs forward and back.

I definitely understand concerns about climate models and the need for much better ones, but your reply really can’t be taken seriously. CO2 is a significant greenhouse gas and its increases have to be taken seriously, even if models can’t predict all the future details. No one knows the future anyway, in terms of emissions, population growth, energy trends, et cetera, so no precise prediction is possible even in principle. We will have to act in the face of uncertainty, and we know, even without models, that climate sensitivity S is about 3 +/- 1.5 C from studying past climates.

Does it really matter much if S is only 1.5 C instead of 4.5 C when BAU CO2 levels will be something like 600 ppm by 2100? (If not higher.)

Shouldn’t we be taking action now, at least as a form of insurance against the worst-case scenarios? Waiting 25-50 years to see how today’s models play out — by which time we will have much better models anyway — does not seem to me a sensible approach to a potentially large problem.
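For what it’s worth, the arithmetic behind the “does S matter at 600 ppm” question is simple: equilibrium warming scales as the sensitivity S times the number of CO2 doublings. A sketch, assuming a 280 ppm preindustrial baseline and the S range quoted in the thread:

```python
import math

def equilibrium_warming(s_per_doubling, c_ppm, c0_ppm=280.0):
    """dT = S * log2(C/C0): equilibrium warming for sensitivity S
    (degrees C per CO2 doubling), relative to baseline c0_ppm."""
    return s_per_doubling * math.log(c_ppm / c0_ppm, 2)

# Spread of outcomes at an assumed 600 ppm, across the quoted S range.
for s in (1.5, 3.0, 4.5):
    print(f"S = {s} C  ->  {equilibrium_warming(s, 600.0):.1f} C at 600 ppm")
```

At 600 ppm that is about 1.1 doublings, so the quoted S range spans roughly 1.6 to 4.9 C of equilibrium warming; whether that factor of three “matters” is exactly what the two sides here are arguing about.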

“Shouldn’t we be taking action now, at least as a form of insurance against the worst-case scenarios? Waiting 25-50 years to see how today’s models play out — by which time we will have much better models anyway — does not seem to me a sensible approach to a potentially large problem.”

The worst-case scenario by far is a significant constraint on anthropogenic CO2 emissions. Such a constraint would reduce the use of fossil fuels, with the resulting deaths of billions of people, mostly children. No possible effect of AGW could be anywhere near as horrific as that.

So, the required “action now at least as a form of insurance against the worst case scenarios” is to oppose such emission constraints.

Mike Wryley: Millions of years ago there were not billions of people on the planet who depended on a large and honed agricultural system, or who lived in immovable cities on coastlines, or who depended on adequate freshwater supplies from rainfall and snow packs. Humanity does not have the adaptability that ecosystems and species did millions of years ago, and our climate is changing very fast compared to historical averages.

“If we are concerned about CO2…” We’re not. The “carbon” scare is trumped-up nonsense.

And:

“Millions of years ago there were not billions of people on the planet who depended on a large and honed agricultural system, or who lived in immovable cities on coastlines, or who depended on adequate freshwater supplies from rainfall and snow packs. Humanity does not have the adaptability that ecosystems and species did millions of years ago, and our climate is changing very fast compared to historical averages.”

Malthus himself could have written that.

Instead of scaring yourself, Jeff, try thinking rationally: those billions of people now live much longer, healthier lives on average than those who lived only a couple of centuries ago. Don’t buy into climate alarmism that is motivated by $billions. Just look around you at all the progress in health and living standards. The climate charlatans are just using lies to fool you for their own personal benefit.

Does it really matter much if S is only 1.5 C instead of 4.5 C when BAU CO2 levels will be something like 600 ppm by 2100? (If not higher.)

I buttonholed my plant friends and they all look forward to a warmer, wetter, CO2-enhanced climate. They assured me that even under your flawed projected range, they will most likely be able to feed 10 billion human mouths. They are very concerned that someone may be lying to them and that additional CO2 plant food may not affect temperature significantly. In fact, their northern friends have been feeling a little frigid. Mr. Seaweed reports cooling sea temps and flat sea levels and wants to know WTF. Some idiot had told them the ice was melting. They have warned me, however, that with any significant cooling they will retaliate by feeding only half of us.

Since my friends have fed me from birth and I owe any continuance to them, I tend to listen to them very carefully. If you mess with my friends you mess with me. GK

Smokey, I certainly accept the necessity of plentiful, affordable energy as fundamental to a richer, healthier life. This is especially necessary for progress in the developing world. I do not advocate depriving anyone of energy.

At the same time, I think we know enough to conclude that our large emissions of CO2 are risky and could change the climate. We don’t know all the details, and we might never know them, but we don’t have to know whether S is 2.5 C or 3.5 C in order to conclude we ought to stop emitting CO2, because with BAU we are going to rise to very high levels of CO2, 600 ppm or more. I don’t think that hiding from this is a solution, with all due respect.

We need both more energy and less CO2. Both camps are right. But how do we do it?

G.Karst, that’s a truly funny post, but I still don’t think you are addressing my concern. People aren’t lying about the CO2/T connection. Maybe they are wrong about the value of S. But maybe they are right. Or maybe they are wrong on the low side.

In any case, it doesn’t seem to me we can just say “they are wrong” and leave it at that. Maybe the people who say they are wrong are themselves wrong. Can we afford to bet the farm on them?

Won’t there always be some uncertainty? I think there will. Don’t we have to prepare for the possibility of a bad scenario? Just like in the rest of life: I’d rather not spend the money for a lot of safety features on my car; I don’t expect to get in an accident and I hope I don’t, but I might. I just can’t be sure of the future. At the same time, I don’t want to get rid of my car, but I also wouldn’t count on a 1930s model to protect me, and I buy insurance. Isn’t CO2 the same?

Won’t there always be some uncertainty? I think there will. Don’t we have to prepare for the possibility of a bad scenario? Just like in the rest of life: I’d rather not spend the money for a lot of safety features on my car; I don’t expect to get in an accident and I hope I don’t, but I might. I just can’t be sure of the future. At the same time, I don’t want to get rid of my car, but I also wouldn’t count on a 1930s model to protect me, and I buy insurance. Isn’t CO2 the same?

It takes real wealth to mitigate disaster. It takes, a lot of real wealth and resources and technology to decipher climate. Anything that impedes this chain will impede any realistic response. Broad, deliberate global actions can have disastrous, unintended consequences.

Even on an individual basis, drinking cyanide to kill a flu will be successful, but it has serious consequences for the patient.

This is why, when a novice is first introduced to an “in service” control panel, he is always sternly commanded to “keep his hands firmly in his pockets.” We just can’t start pushing climatic buttons and pulling CO2 levers until we predictively understand the system (a tested and validated model). No one is very certain whether we are going to resume warming or begin a new cooling trend. The solar minimum, cooling ocean temps, and the double-dip La Nina all point ominously and historically to cooling. And yet you would have us divert much of our resources to additional cooling, energy-decreasing measures? This is not precautionary; it is premature. It will not create, but will decrease, our viability inventory.

How many Third World malnutrition and starvation deaths due to food-for-fuel (i.e. corn) programs will it take for you to feel secure? And to what effect on temperatures? GK

Bill: I’m hardly an expert in climate science, but I do know enough to say that CO2 alone does not explain the temperature history of the last 570 million years. There are other big factors at work over that time, such as Milankovitch factors, changes in the amount of energy given off by the Sun, and large shifts in tectonic plates. Our planet simply wasn’t the same place hundreds of millions of years ago, and common analogies to that period aren’t meaningful. So I don’t see how your graphs apply to today’s situation. Can you explain?

I’ve read some about the Ice Ages, and I’ve never seen anyone explain them by changes in the energy from the Sun.

Great graphs. Why not see if Anthony will add them to the menu “Reference Temperature page”.

JeffG:

If the physics has held true for hundreds of millions of years, and non-correlation (CO2 vs temps) has existed for hundreds of millions of years, why would you suddenly draw different conclusions about what drives climate? CO2 just doesn’t have the muscle, as the graphs clearly show. GK