This is a guest post by Jamie L. Vernon, Ph.D., an HIV research scientist and aspiring policy wonk, who recently moved to D.C. to get a taste of the action

Well, today Chris is somewhere in the Mediterranean Sea. For those who aren’t aware, he is on the Center for Inquiry Travel Club Cruise with the likes of Joyce Salisbury, Lawrence Krauss and Phil Plait. I can only imagine the discussions they are having as they travel across seas that were once the battlegrounds for control of ideas and thought in the world. Most often those conflicts occurred between religious and scientific views, which in many cases is not very different from what is occurring today.

The cruise will be visiting ports of call in Italy and the Greek Islands, places no man or woman can visit without reflecting on the scientific history that began there.

Will Phil Plait take a late night stroll on the upper deck to catch a glimpse of our galaxy as it passes overhead? If so, will he think about the fact that our galaxy, the study of which has forced massive changes in religious thought, ultimately bears its name because of the story of a jealous Greek goddess?

Hera, the wife of Zeus, is said to have spilled the milk from her breast when she forced Heracles, the child born of one of Zeus’ adulterous escapades, to stop suckling. The spilled milk appeared in the sky and became known as the Milky Way.

Will Lawrence Krauss catch a glimpse of a star from his balcony and remember that, had Copernicus not proposed that the Earth revolves around the Sun rather than the universe revolving around the Earth, the Scientific Revolution may never have occurred?

Will Chris Mooney pace along the main deck and ponder the challenges Galileo faced after he developed the scientific methods to prove Copernicus’ theory of heliocentrism?

As these great minds explore the Mediterranean this week, I wonder how Galileo communicated his discoveries to the people of his time. Did he use different communication methods to reach the citizenry than those he used for his scientific peers? What did it take to convince his opponents to embrace his conclusions? Indeed, many of his opponents never did.

I can only imagine the angst that Galileo endured once he realized that his science would reorder human understanding of the universe. For the rest of his life, he would struggle internally and externally over the turmoil his discoveries would create. The challenges he faced then were similar to those we face today as we struggle to make a persuasive argument that the climate is changing in a not-so-good way. Only the stakes for Galileo were much higher. His discoveries put his life at risk and ultimately led to permanent imprisonment at the hands of the Catholic church.

But why? Well, the threat this posed to the Church was so great, and required such a reevaluation of the teachings of the Bible, that he was forbidden from defending his observations. Fortunately for the fate of humanity, he violated the Church’s demands. This, in the end, led to his incarceration.

Today, we see a similar form of scientific oppression, only the oppressor is not always the Church and the motive is not only to preserve the integrity of religious practice. Rather, as we see in the climate debate, the oppressors are big corporations, as they are legally bound to protect the profits of their investors. What some perceive to be evil intentions that arise from greed and gluttony are actually the result of legal bindings that require corporations to defend their profit-making practices.

For example, within the oil industry, companies whose profits are based on fossil fuels must consider the impact that investing in alternative fuels will have on the value of their stocks. If it is deemed that significant spending on research and development of alternative energy sources will have a negative overall impact on corporate value (by sending a signal that oil is not the future, thus driving investors to new technologies), these companies are legally bound to not participate in these activities. Further, if a corporation identifies a threat to the value of their stock, such as a competitor or an entity that wishes to limit the company’s profits, then the company must act to defend their product.

An oil company has many means of defending itself. It may use its funds to buy out a competitor or silence an antagonist. It might use its massive profits to form relationships with political leaders and persuade them to support legislation that protects its industry.

The company may also use its political power to motivate public officials to carry out acts that are grounded in law but are actually designed to quash the rise of opponents. We are currently witnessing this in climate scientist Michael Mann’s fight with Virginia Attorney General Ken Cuccinelli. While this fight is considered legal, it is designed to achieve ends that are counter to the interests of the public. Yet the oil companies are legally required to support these types of activities.

The company may also pay scientists to use their understanding of the science behind their opponents’ arguments to manufacture a controversy over things like climate change, which they then classify as a scientific controversy. Clearly, we are seeing this when fewer than 2% of climate scientists (many of whom are employees of, or benefit from, the fossil fuel industry) disagree that the planet’s climate is warming, and yet this 2% is given equal footing in the media. Perhaps this form of “balanced” media coverage is due to the fact that oil companies offer such lucrative advertising revenue for media outlets. Could this possibly be the case? Heh.

On a different front, the company may simultaneously carry out public relations campaigns that shed a positive light on its products. It may also create a campaign that leads the public to believe it is exploring alternatives, just enough to satisfy the public’s desire for the benefits of alternative fuel sources without putting the oil industry at risk of losing investors.

Today, while all of these devices are being put to use by the oil industry, we (myself included) must be cautious not to jump to the conclusion that these activities are of evil origin. Instead, we must understand that our legal and economic systems led to the creation of policies that necessitate these behaviors on the part of the company. It is survival of the fittest, corporate-style. If companies fail to live up to their legal commitments to their investors, they can be held financially responsible for the resulting losses.

Whereas Galileo, despite his contradictory scientific theories, was thought to have been a true religious believer until his death, some have argued that his proclamations of “belief” were merely survival tactics. A denunciation of the Church could have been a death sentence. Today, we are under no such religious persecution.

Aside: For oil corporations that rely on profits from fossil fuel sales, it truly is a life or death situation. If investors get a whiff of blood in the water that the world is genuinely moving away from oil, these companies will be doomed. Thus, those whose livelihoods rely on oil profits are truly fighting for their own survival.

While we may be concerned about livelihoods, we are not talking about losing our lives in the fight for science. Fortunately, even in this atmosphere, there are those who are willing to fight. Michael Mann may be financially devastated by his fight with Cuccinelli. Who knows? But the fight over climate change will go on. It will continue in the blogosphere and on television. It will continue in classrooms and board rooms. It will continue in Notting Hill and on Capitol Hill. Unlike Galileo, we have the American democratic system to give us the environment to hold this debate. We also have the authority to change the laws to release the oil companies from their legal bindings. We must apply all options. If we look at this as a legal problem rather than a war against evil, perhaps we can reach a mutually beneficial resolution. And, perhaps, the oil industry will be able to act on its knowledge that climate change is occurring and that it is likely due to human activities, specifically the burning of fossil fuels.

What we know about the Catholic Church is that it eventually came to accept heliocentrism. We can only hope that conditions change such that the oil industry follows suit in the climate debate.

The consensus of thousands of natural philosophers was to follow Aristotle. More than 98% of scholars supported the Church’s position. All the experts agree – the debate on heliocentrism was declared ‘over’ in the third century BC after Aristarchus proposed it. Galileo may have been inspired to his heresy by the devil, but we have to remember that he was not necessarily evil to be doing so – after all, if you’ve signed a contract in blood you’re legally bound.

Or then again, maybe the IPCC reports might not be infallible holy writ, and Galileo, like most sceptics, was motivated by a sincere belief that the evidence did not support the consensus? If we look at the heliocentrism debate as a legal problem rather than a war against evil, perhaps we can reach a mutually beneficial resolution.

“The consensus of thousands of natural philosophers was to follow Aristotle.”

First, the key word here is “philosophers.” It took Galileo to develop the scientific method and the instrumentation to provide testable scientific evidence to support his theories. This was not simply philosophy. It was the birth of modern science.
Second, 98% of religious scholars (who base their beliefs on the supernatural) are not equal to 98% of scientists (who base their theories on testable evidence).
Finally, even if the IPCC reports were considered “holy writ,” they would, by definition, still be fallible by scientific standards. As they actually are scientific documents, they are indeed subject to errors. However, those errors can be identified by scientific testing, not by prayer.
Gee whiz.

Exactly my point! Galileo is the primary example of how science is determined not by consensus but by evidence. And yet, despite this and hundreds of years of history, we still have people making assertions based on consensus and appeals to authority, with a distinct unwillingness to seriously discuss any evidence.

Take this “98% of scientists agree” business. Actually, it depends on precisely what question you ask. Some aspects of climate science are in more doubt than others, and few of the pollsters understand the issues well enough to come up with a precisely worded question that isn’t ambiguous. But several of the surveys over the past few years have asked about the general acceptance of the IPCC attribution question and the figure for climate scientists comes out around 85%. The only ones to get 98% use statistical cheats to distort the figures in the direction they want. But to know that, you have to actually look at the evidence, not take somebody’s word for it.

The vast majority of scientists may well base their beliefs on testable evidence, but that’s of no use if you don’t actually test it.

The problem is that everybody seems to have assumed that somebody else was testing it. And now that some sceptics have tested it and found it riddled with problems, nobody believes them because nobody believes any scientist could have got their statements to the pinnacle of public policy advice without their statements ever having been tested. Circular reasoning, but not uncommon – even in the practice of science.

As one scientist said: “No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves.” Do you understand why he said that?

“As one scientist said: ‘No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves.’ Do you understand why he said that?”

Because he wanted other scientists to yield to his authority and form a consensus around his preferred methods, it seems. Likewise the “Galileo Gambit” is nothing if not an appeal to authority.

This violent allergy to the concept of scientific consensus is based on absolutely nothing. It is political rhetoric, not an actual prescription that reflects how evidence is actually gathered and used. If you look at how real scientific practitioners work you can disprove it in about two seconds. Do astrophysicists point their telescopes towards the Earth’s surface or away from it? Why? Do cardiologists examine your chest, or your ankle? Why? Do chemists hang the ringstand above the bunsen burner, or on the opposite side of the room? Why? Do paleontologists dig in rocks or in shipping crates of marshmallows? Why, oh why? The answer: consensus. The very IDEA that something called “evidence” exists, and that this evidence can be used to draw conclusions, and likewise the very idea that evidence-based conclusions are better and more important than those without evidence, is in and of itself a consensually assented human enterprise.

The more people try to belittle the notion of consensus within the scientific process, the more they remind me not of Galileo but of Neo, insisting that we are actually all dreaming in the Matrix and that nothing is real at all.

“The very IDEA that something called “evidence” exists, and that this evidence can be used to draw conclusions, and likewise the very idea that evidence-based conclusions are better and more important than those without evidence, is in and of itself a consensually assented human enterprise.”

So you’re saying that the only reason that science has turned to demanding evidence over accepting the Aristotelian consensus is because everyone says that’s what they should do?! I love it! What a marvellous concept! I’d never have thought of that one.

“The vast majority of scientists may well base their beliefs on testable evidence, but that’s of no use if you don’t actually test it.

The problem is that everybody seems to have assumed that somebody else was testing it. And now that some sceptics have tested it and found it riddled with problems, nobody believes them because nobody believes any scientist could have got their statements to the pinnacle of public policy advice without their statements ever having been tested. Circular reasoning, but not uncommon – even in the practice of science.”

The data is public information, and evidence exists in multiple disciplines. The data have been analyzed. The conclusions have been analyzed. The ANALYZERS have been analyzed. Data, hypothesis, testing, peer review. It’s all been done, by a whole lot of people. Papers from these people are routinely being published, critiqued, and reviewed.

Check out Berkeley Earth sometime. It is yet another full-fledged test of the data. Careful attention is being paid to avoid any bias in testing, from funding, onwards. And yet they’ve already come up with clear results confirming human-driven climate change, and have testified before Congress about it.

But you would rather believe in “sceptics” who have not actually done testing, but rather have cherry-picked data to fulfill a pre-determined hypothesis that does not stand up under testing, nor under peer review. That is not skepticism. That’s denial.

Nullius in Verba’s stock in trade is an odd post modernist variant of the courtier’s reply: well, of course you’d believe the emperor (in this case the energy companies) is naked, but only because of your slavish trust in people who claim to be scientists, whom anyone genuine ought to doubt because that’s what John Galt would do. (I exaggerate only slightly.)

If we were arguing about the precession of Mercury’s orbit it would be funny. Were vaccines at issue there would be different wackos. Contraception or abortion? Actually, if either were contested this site would be swarmed, because those issues are of immediate practical consequence.

Great post Jamie. Casting fossil fuel interests in the role of the medieval church is brilliant.

Nullius, I honestly think you and many others misinterpret scientific consensus as argument from authority. It is not. It is an indicator (a metric) of the status of a scientific idea. When an idea is presented, the scientific community does its best to shoot it down. The attacks can go on for some time, but will usually die down in the case of theories which do a good job of surviving (often they need to be modified in the process). A consensus being reached is an indicator that the theory hasn’t yet been falsified. That’s as close to proof as any scientific idea gets: surviving attempts at falsification.

Another important aspect of this process is that any scientific attacks themselves become fair game for scientific attack. It’s not enough to simply raise an objection to bring down a proposed idea: that objection must prove itself able to withstand the same level of withering scrutiny. To be sure, amateurs and non-specialists have contributed to the scientific endeavor, but when they engage in this activity they are expected to play by these rules. Almost all of the skeptics’ complaints that I’ve seen, about the scientific establishment suppressing their ideas, are nothing more than entries into the arena, unprepared for the smackdowns that pros deliver to any other player in the ring, fair and square.

Please – hysterics aside, Galileo insulted the Pope by writing a book portraying the Pope as a ‘simpleton’. He spent the rest of his life under house arrest, not in a prison as implied above. Five minutes of research on Wikipedia will confirm this. He continued to write, research and entertain visitors. It really makes me wonder what else is embellished to make his point.

Please do not take this as a defense of the Inquisition or the many misdeeds of the church.

No. This is a common misconception. Some of the data is public, but other parts are not – and deliberately so. Otherwise there would never have been a need for Freedom of Information requests. Otherwise nobody would have written things like “The two MMs have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send to anyone.” or “One of the problems is that I’m caught in a real Catch-22 situation. At present, I’m damned and publicly vilified because I refused to provide McIntyre with the data he requested.” or “Why should I make the data available to you, when your aim is to try and find something wrong with it. ” or even “p.s. I know I probably don’t need to mention this, but just to insure absolutely clarify on this, I’m providing these for your own personal use, since you’re a trusted colleague. So please don’t pass this along to others without checking w/ me first. This is the sort of “dirty laundry” one doesn’t want to fall into the hands of those who might potentially try to distort things…”

Institutions and scientists vary. Some are very good about providing data and code. Some provide data but not code. Some provide processed data but not raw data. Some provide part of the data but not other bits that don’t show the results they expect. Some do not provide any at all. Pointing to the best of them does not exonerate the rest.

Berkeley Earth is an excellent example! This project exists in response to, and directly as a result of, the efforts of sceptics criticising what they did previously. Sceptics criticised the lack of transparency, the justifications for adjustments, the poor understanding of error estimation, and the fact that despite claims to the contrary, something like 80% of the weather stations in the US were poorly sited, being located on black asphalt parking lots, next to air conditioner outlets, under trees, next to buildings, and even comedy cases where they were located next to barbecues, incinerators, and the backwash from jet aircraft. The scientists didn’t even know – nobody had thought to check where the thermometers were.

So after the sceptics documented all this, the measurement community ***to their great credit*** acknowledged the issue, and promised to do something about it. Berkeley Earth is a part of that response. They promised open access to data and algorithms, the best statistical practice, with outside experts brought in to provide it, to take a more systematic approach to splicing and adjustment, and to be more responsive to the interested public. They promised to only publish results when they could also publish the supporting evidence in its entirety. In short, they offered everything that the sceptics had been asking for, and the sceptics (provisionally) praised them to the skies for it.

They have not completed this task yet. So far, no data or algorithms have been published, and they are not yet in a position to supply a result. Relationships with the sceptics soured somewhat when one scientist for the project broke the earlier promises and announced results based on a preliminary and partial test survey on a tiny fraction of the data with unfinished algorithms long before any of the evidence to support it was published. (And which in no way confirm or claim to confirm that climate change is human induced.) We’ve got claims put on the record, but no way to dispute them because the data hasn’t been published yet – and of course by the time it is the original claims will have become part of the background. But the scientist arguably had little choice in the matter, and if BEST lives up to its advertising, it would indeed go a long way towards satisfying the sceptics. We shall wait and see.

The problem is that Berkeley is only one project – and there should be dozens of them. This is the approach the climate science community should have taken to Climategate (or indeed, should have been taking all along), but instead most of them seem to have decided to circle the wagons and deny there was ever any problem. That won’t resolve the issue, it will only make things worse.

Sceptics have done proper testing, and these have stood up to testing and peer review. (Which considering they face peer reviewers who say things like “It won’t be easy to dismiss out of hand as the math appears to be correct theoretically” about sceptical papers that cross their desks, is a remarkable achievement.) By contrast, climate scientists have admitted to cherry picking both in testimony (“you have to pick cherries if you want to make cherry pie”) and in print (“this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.”)

It is a scientific scandal. Such things are not unknown. But it is still entirely remarkable how so many people defend the consensus as ‘good science’ with no detailed knowledge of the evidence arrayed against it. Or who reject anything that does not fit their beliefs as “cherry picking” or “taken out of context”. No it isn’t. We frequently do understand the context, and it is often worse than it appears at first glance. But I have no doubt the denial will continue for a few years yet.

It is through nothing but a face-value read–a consensus of self-styled “sceptics” and autodidacts–that the Climategate email quotes are taken to be significant at all. None of them are actual verified facts representing any proven behavior of any scientist, let alone any proven discrepancy between the reporting and reality. The very fact that it doesn’t matter whether the emails are accurate, only whether they exist–and that their initial existence is always weighted more heavily than the numerous exonerations from numerous investigatory committees that followed–is the prettiest proof of the lie behind the “sceptic” moniker.

One might as well steal an email from a primatologist in which he wished to see Bigfoot someday and then take that as proof that Bigfoot exists. And the more people who read that stolen email with its stolen wish, the stronger the consensus for the unspeakably criminally concealed existence of Bigfoot all along will become.

Interesting concept of using the “they have a legal obligation to defend the indefensible” angle on this. Except, they don’t. They have a legal obligation to keep the business going. They can and really do conduct alternative energy research, because they know that the oil will eventually run out, although not nearly as soon as most people think. They have also greatly diversified their holdings in recent years to give them other avenues for making money and keeping the stockholders happy. Of course, all of this leaves aside the fact that global warming is NOT settled science. If it is ever confirmed, it will still take years to prove it was caused by man, and not a natural cycle of a planet that has gone through these cycles before. And even if that can be confirmed, is there really anything that man can do to slow or reverse it? I don’t know yet, and neither does anyone else, definitively. And THAT’s why the oil companies, and most other non-oil-company-related people, aren’t doing more to combat global warming. I personally think it is perfectly sane and reasonable for us all to be more environmentally friendly without constantly deriding anyone who doesn’t “believe”. The Church might even help you on this part, since He tasks us with taking care of His creation. Just don’t tell them it’s an evolution of ideas…

Anthropogenic CO2 is by far the best candidate as the cause of global warming. Many of the proposed natural causes can be ruled out. For example, the amount of sunlight incident on the Earth has been constant to within less than 0.15% or so for the last 30 years, while the temperature has increased, so the warming isn’t due to an increase in solar radiation. Other mechanisms which have been invoked to explain ice age cycles, such as axial drifts and orbital eccentricity oscillations operate FAR too slowly to explain the observed warming in the last half-century, whereas the observed CO2 buildup can explain it.

Just as an almost trivial example of the sorts of things to consider – take a model in which the temperature change each year is a random number that varies with how cloudy/sunny it is that year. The temperature each year is 0.95 times the previous year’s temperature (to model a slow drift back to the mean) plus a normally distributed zero-mean random variable.

If you plot out a few hundred values, you will very often see what appear to be ‘trends’. The line will go up, or go down steadily over many decades (with short-term wiggles and jaggedness). Try it.

Conversely, you can also plot a linear trend line plus random noise. Do the two graphs look the same?

Can you rule out the possibility that the weather is controlled by something more like the former (only much more complicated) than the latter?
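For anyone tempted to actually “try it,” the two kinds of series described above can be generated in a few lines of Python. This is a minimal sketch of the suggestion, not code from the discussion; the 0.95 persistence factor follows the description, while the 0.01 trend slope and the 200-year length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Series 1: mean-reverting drift. Each year's anomaly is 0.95 times the
# previous year's, plus trendless random "weather" noise (an AR(1) process).
t1 = np.zeros(n)
for i in range(1, n):
    t1[i] = 0.95 * t1[i - 1] + rng.normal()

# Series 2: a genuine linear trend plus independent noise.
t2 = 0.01 * np.arange(n) + rng.normal(size=n)

# Fit a straight line to each. The AR(1) series often shows a sizeable
# apparent slope even though its driving noise has no trend at all.
slope1 = np.polyfit(np.arange(n), t1, 1)[0]
slope2 = np.polyfit(np.arange(n), t2, 1)[0]
print(slope1, slope2)
```

Plotted side by side (for example with matplotlib), the two series can look strikingly similar over a few hundred values, which is exactly the question being posed here.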

If you’re allowing temperatures to go up or down beyond that expected by reasonable variations in cloud cover, your model is not physical. Furthermore, you’re forgetting the energy balance. IR radiation output increases like temperature to the 4th power, so the restoring factor becomes far stronger at higher temperatures, severely dampening higher temperatures.

Long term temperature trends could be caused by long term trends in albedo, but there’s no evidence of that since the early 80s.

“Long term temperature trends could be caused by long term trends in albedo”

You missed the point. Trends in albedo are not being proposed – just trendless random variation. Try the example. Generate albedo data as a series of zero-mean Normal random numbers – with no trend – and then calculate the temperature series using the scheme I described. No trend in the input is required.

Note, I’m not claiming this is a realistic model, I’m suggesting this is the sort of phenomenon you should be considering.

Data since 1980 won’t tell you whether the centennial trend is real or not. You would need to have cloud data with and without the temperature trend.

You missed the point. Trends in albedo are not being proposed – just trendless random variation.

You’re missing my point: that statement itself is unphysical.

This condition in your model …

…plus a normally distributed zero-mean random variable.

…necessarily entails conditions of energy balance: incremental temperature changes must be attributed to a physical cause, such as incremental changes in albedo, and therefore correlate (or anti-correlate) with them. If one instance of temperature increment is greater than the mean increment, it is due to an instance of lower-than-average drop in albedo, and vice versa. This implies that any trends in temperature, upwards or downwards, are caused by corresponding trends (or countertrends) in albedo. The only way to accumulate an increase in temperature is to have an excess of below-average albedo days.

It’s a mistake to think of temperature as a random statistical variable. It is not. Since it is the average molecular kinetic energy, it is governed and constrained by all sorts of physical laws of thermodynamics (and other physics of the system, such as fluid dynamics, radiative transfer, etc.).

Temperature cannot change randomly without a cause. It is a metric of average system energy. It cannot go up unless energy is added to the system, and it cannot go down unless energy leaves the system. (And these are ultimately radiative in both cases). Changes in system temperature, and therefore energy, must be caused by corresponding changes in the energy inputs and outputs of the system, and therefore will correlate with them.

If you claim otherwise, that temperature can vary randomly and independently of the system inputs and outputs, your example is violating conservation of energy.

“It cannot go up unless energy is added to the system, and it cannot go down unless energy leaves the system. (And these are ultimately radiative in both cases).”

True.

“Changes in system temperature, and therefore energy, must be caused by corresponding changes in the energy inputs and outputs of the system,”

True.

“and therefore will correlate with them.”

False.

The point is admittedly subtle. But this last statement does not follow from the earlier statements. There can be a (short-term) trend in the temperature, with no trend in the albedo, and no correlation between temperature and albedo. The albedo causes the temperature to be what it is, but there is no trend in the albedo, while there is in the temperature.

Energy is conserved throughout. I agree that it is odd, but it’s a real effect.

Take a model in which the temperature change each year is a random number that varies with how cloudy/sunny it is that year. The temperature each year is 0.95 times the previous year’s temperature (to model a slow drift back to the mean) plus a normally distributed zero-mean random variable saying how much heat was added or lost.

i.e. A(t) ~ N(0, 1)
T(0) = 0
T(t+1) = 0.95*T(t) + A(t)

Plot A(t) versus t and T(t) versus t for t = 0..100.

A(t) has no trend. (Check that.) But for a high percentage of cases, T(t) does.

It’s called a “stochastic trend” or “spurious trend”, and was discovered by the mathematician Udny Yule in 1926.
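The scheme just given can be simulated as an ensemble to see how often these stochastic trends appear. The sketch below is an illustration (the run counts and series length are arbitrary choices): it fits a straight line to many trendless AR(1) realizations and compares the typical apparent slope against plain white noise of the same innovation size.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_runs = 100, 500
years = np.arange(n_years)

def fitted_slope(series):
    # Ordinary least-squares slope of the series against time.
    return np.polyfit(years, series, 1)[0]

# "Stochastic trend" ensemble: T(t+1) = 0.95*T(t) + A(t), A(t) ~ N(0,1).
ar_slopes = []
for _ in range(n_runs):
    t = np.zeros(n_years)
    for i in range(1, n_years):
        t[i] = 0.95 * t[i - 1] + rng.normal()
    ar_slopes.append(abs(fitted_slope(t)))

# Control ensemble: plain white noise with the same innovation size.
wn_slopes = [abs(fitted_slope(rng.normal(size=n_years))) for _ in range(n_runs)]

# The mean-reverting series shows far larger apparent trends, even though
# neither ensemble has any trend in its driving noise.
print(np.median(ar_slopes), np.median(wn_slopes))
```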

No argument that your construction gives a random walk. What I’m pointing out is that the construction cannot possibly describe physical reality, which invalidates your claim that temperature trends are some sort of random walk phenomenon.

For starters, your choice of the amplitude of 1 for A(t) is arbitrary, no? If you let that amplitude approach zero (a uniform, unchanging albedo), your temperature would drop endlessly. So much for energy conservation: it’s not even built into your model. That constant multiplier “simulating” cooling is just plain wrong, especially when T is below the radiative-balance steady-state temperature. (I’m assuming your T is in Celsius and not K. If that’s in K and you’re starting from absolute 0, you’ve got even worse problems.)

And, furthermore, speaking of radiation balance: A(t) is not the albedo. Albedo, a, comes into play in the first derivative of T:
dT/dt is proportional to (1-a)*S - B*T^4, where S is proportional to the solar flux, and B and S contain appropriate constants. Trends in delta-T will track with trends in a. (Incidentally, when integrating to get T, if the albedo a is truly random, it will integrate out to zero, leaving T hovering around or at the steady-state temperature.)
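As a sanity check on this balance equation, here is a small Python sketch (the constants S and B are illustrative placeholders, not real Earth values): with a constant albedo, forward-Euler integration of dT/dt = (1-a)*S - B*T^4 settles at the radiative steady state ((1-a)*S/B)^(1/4) and stays there.

```python
S, B = 100.0, 1e-6  # illustrative constants, not real Earth values

def integrate(albedo, T0=0.0, dt=0.01, steps=50_000):
    """Forward-Euler integration of dT/dt = (1 - a)*S - B*T**4."""
    T = T0
    for k in range(steps):
        T += dt * ((1.0 - albedo(k)) * S - B * T**4)
    return T

# With a constant albedo of 0.3, the temperature settles at the steady state:
T_star = ((1 - 0.3) * S / B) ** 0.25
T_final = integrate(lambda k: 0.3)
print(T_final, T_star)  # both ~91.5 in these made-up units
```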

To summarize, what you’re presenting is a mathematical construct, which does demonstrate random-walk behavior, but which has no bearing on reality and can’t be seriously considered as a cause of, or even a demonstration of, the observed warming trend.

My first point (amplitude of A) is wrong if you’re using C – I was confused and thinking in K at that point. If you’re using C, then the 0.95T term acts as a restorative toward 0 from both directions. My error.

My second point still stands, though: your A(t) is not the albedo; it’s more like an integral of the albedo a(t), or of 1-a(t), or rather some sort of excess of the integral of (1-a(t)) over or below the steady-state value. Hard to say, because your cooling term doesn’t match the radiative output term very well either.

Nevertheless, a consistently high run of A(t) will increase T(t), which means that energy is coming in from someplace, and if you argue that it’s due to albedo, then the albedo has to be trending to let more light in.

I wasn’t using either C or K. I was doing the delta from equilibrium in unspecified units. It would be a little more difficult to do it in absolute units, but entirely possible. It’s an illustration of the principle – trying to put in realistic details would just mislead and confuse as to the intention. There is no way to accurately model the entire Earth’s climate with a model so simple.

A(t) is something like one minus the albedo, times the solar constant, divided by the average thermal heat capacity of the ocean surface layers, defined at an annual time scale. In reality, A(t) is not independent of temperature – the sea temperature affects humidity, which affects cloudiness. It affects the heat capacity. It affects the motion of weather systems that lead to clear or cloudy weather. There are many other contributors, and they affect different parts of the Earth differently. And so on. You can’t predict the weather with a one-line equation.

“if you argue that its due to albedo, then the albedo has to be trending to let more light in”

No, the short-term mean has to be higher than it was previously, but that doesn’t mean there is a trend – at least, not at the same time as the temperature is rising.

I’m not claiming this is how temperature change works – at best it is one factor. The example above was of a model called AR(1), which is the simplest possible process that has these sorts of properties. As it happens, we know the weather isn’t AR(1), and most statistical researchers nowadays model it with an ARFIMA process (which stands for auto-regressive fractionally-integrated moving average). But that’s probably not a good example to start with when introducing the topic for the first time! Especially in these sorts of circumstances.

The point is, it is entirely possible to have an apparent trend in the output, with no trend in the input. There are infinitely many different mechanisms that can do so. I’m just saying it’s the sort of thing you need to consider.

“The point is, it is entirely possible to have an apparent trend in the output, with no trend in the input. There are infinitely many different mechanisms that can do so.”

After all this back and forth, you still haven’t substantiated this claim. You’ve provided a simulation with a delta-energy term supplied by a random number generator, without showing how this could possibly be physically connected to a real energy input and not correlate with it. Comparing and contrasting this with the hypothesis of AGW, which does supply a physical mechanism (on the output side of the radiation balance) and where temperature shows a nice correlation with the cause, as expected, AGW is the clear winner in this contest.

“After all this back and forth, you still haven’t substantiated this claim.”

Ah. And I thought you had already said “what you’re presenting is an mathematical construct, which does demonstrate random walk behavior”. So now you’re saying that it doesn’t? That I haven’t demonstrated that it is possible to get an output with trend-like random walk behaviour using trendless inputs?

So what correlation do you think there is between A(t) and T(t)? How big is the trend in A(t)? And if you agree that there is no trend in A(t), and given that you can see everything in the model, where do you think the trends in T(t) come from?

I have to apologize—I haven’t been running on all cylinders due to lack of sleep the last couple of days, and haven’t had a chance to sit down and really think this through properly until today, and haven’t been very coherent in my comments. Let me start this again.

Lets take your model in #26. Although we both agree its not physically accurate, I now agree that it IS illustrative, as you have said, although perhaps not in the way you thought.

The real issue with the model is not so much the nature of A(t) as it is the interpretation of the temperature T = 0: It is not, as you said in #29, the equilibrium temperature. That is because the T progression you define in #26 can never go below 0. The A(t) term is never negative, and the “cooling” term 0.95*T can never reduce T below 0.

A better interpretation of T = 0 is absolute zero. Time t=0 corresponds to a cold (T=0) body with a wildly varying albedo which is suddenly exposed to the Sun at time t=0. The “albedo input delta-T” term begins adding energy, averaging 0.5 degrees of increase per time interval. Since the cooling term scales linearly with T, there is little cooling at first, since the cooling delta-T loss is much smaller than the heating gain A(t) on average.

So the body begins heating, and continues, on average, doing so, until the cooling term grows to the point where the cooling loss is equal to the average albedo input increment (0.5) and the system reaches an equilibrium (or steady-state) temperature. That is roughly mean(A(t))/(1-0.95) = 0.5/0.05 = 10 degrees. So the numerical simulation will grow from zero until it reaches 10 degrees (which takes about 100-200 timesteps) and then levels off, with the temperature varying around the mean of 10 degrees.
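That steady-state arithmetic is easy to verify numerically (taking 0.5 as the assumed mean of A(t), per the paragraph above): iterating with the mean forcing alone converges to 0.5/(1 - 0.95) = 10.

```python
# Iterate T <- 0.95*T + 0.5 (the mean forcing only, noise stripped out);
# the fixed point is mean(A)/(1 - 0.95) = 0.5/0.05 = 10.
T = 0.0
history = []
for _ in range(300):
    T = 0.95 * T + 0.5
    history.append(T)

print(round(history[99], 2), round(history[-1], 2))  # ~9.94, then ~10.0
```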

Once the system has reached the equilibrium temperature, it can be seen that peaks in T(t)—specifically the first derivative of T(t)—are correlated with A(t). Here’s a test running the simulation to t=1000, well after equilibrium (or steady state) is reached. I argue this regime is much more relevant to Earth, for exactly that reason. To make the point a little better, I do a small amount of smoothing on A(t) (which I don’t think affects my point).

Several runs consistently yield a Pearson’s r of about 0.66 between the smoothed A(t) - mean(A(t)) and T(t) - T(t-1).

So fluctuations in T(t) can be tied to fluctuations in A(t) after steady state is reached. I may be in error calling peaks in T(t) and A(t) “trends” (maybe that’s the wrong term to use), but that’s more or less the point I was trying to make: we can tie behavior in T(t) to A(t), and therefore to a(t), even in your model, once it has reached the steady state. This regime is appropriate for Earth, and so I claim we should likewise be able to make the same connection there, and go looking for trends in the Earth’s albedo to see if they correlate with the temperature increase. If there’s no correlation, we can rule it out as a cause.
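The distinction being drawn here (correlation with the first difference of T, not with T itself) can be reproduced in Python (a reconstruction; the thread's own code is in R). Since T(t+1) - T(t) = A(t) - 0.05*T(t), the forcing should correlate strongly with the increments and only weakly with the level:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
A = rng.standard_normal(n)
T = np.zeros(n + 1)
for t in range(n):
    T[t + 1] = 0.95 * T[t] + A[t]

dT = np.diff(T)                          # T(t+1) - T(t), aligned with A(t)
r_diff = np.corrcoef(A, dT)[0, 1]        # strong: A drives the increments
r_level = np.corrcoef(A, T[:-1])[0, 1]   # near zero: the level integrates past forcing

print(round(r_diff, 2), round(r_level, 2))
```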

“I have to apologize—I haven’t been running on all cylinders due to lack of sleep the last couple of days, and haven’t had a chance to sit down and really think this through properly until today, and haven’t been very coherent in my comments. Let me start this again.”

No problem.

I’ve replied in greater depth, but it’s got stuck in the spam filter. (In case it takes a while for Jamie to spot it.)

BREAKING NEWS UPDATE! Something like 80% of the weather stations in the US were found to be poorly sited: located on black asphalt parking lots, next to air-conditioner outlets, under trees, next to buildings, and even, in comedy cases, next to barbecues, incinerators, and the backwash from jet aircraft.

No lesser luminaries among the “skeptics” than Watts and Pielke have now found the siting issue to be completely irrelevant, as shown in their own peer-reviewed research.

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite-signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications…

In the United States, where this study was conducted, the biases in maximum and minimum temperature trends are fortuitously of opposite sign, but about the same magnitude, so they cancel each other and the mean trends are not much different from siting class to siting class…

When using Menne et al.’s station set, ratings, and normals period, our aggregation method yields national trend values that differ from theirs on average by less than 0.002°C/century.

I think we are about to see another key differentiation point between genuine and mock skepticism.

The hypothesis that the siting of temperature stations could introduce significant error to records of global warming was confirmed by the paper.

For the record, I think the conclusions of the paper are very likely correct – provisional as usual on further examination/replication. In the US at least, the significant errors shown to be in the climate record by this study happened to cancel for the calculation of the mean trend. But that doesn’t really help you, and it doesn’t refute the point I was making above.

The point was about the quality of the science. It was in answer to the claim that the data had already been analysed thoroughly by the normal testing and peer-review process. And saying “we made a mistake and didn’t spot it, but luckily we made some more mistakes that we also didn’t spot that luckily cancelled out the effect of the first mistake!” does not speak highly of your quality, testing, or peer-review.

And neither does the fact that you see it as some sort of vindication.

I’m afraid that the “your methods are wrong but your measurements are coincidentally right” claim just doesn’t have any sting. The hypothesis that the global temperature rise was a measurement artifact due to urban heat islands has been tested, yet again, and has failed, yet again. The potential for that systematic error has been a concern for decades, and many have examined it and come to the conclusion that the effect, while likely real, doesn’t affect the global average by very much. Furthermore, the same warming trends are also evidenced by oceanic heat measurements, which don’t suffer from urban heat islands. It was a concern, but it’s been repeatedly ruled out.

And don’t fall into the trap of thinking the new measurements cited by TTT (#34) are method-perfect either. I would bet money on someone coming along sometime in the future and discovering a systematic or other problem with them. Not that it really matters at this point: it’s getting down to nickel-and-dime-level corrections.

“The hypothesis that the global temperature rise was a measurement artifact due to urban heat islands has been tested, yet again, and has failed, yet again.”

That hypothesis wasn’t tested – and is probably untestable given the data available.

First, the study only covers the United States – it is not “global”. There is no reason to assume that the rest of the world works the same way.

Second, the sheer number of poor-quality stations means that the best stations, against which everything else is being compared, number only about eighty out of a thousand. Trying to draw conclusions even about the trend in the US (3.8 million square miles), let alone the world, will result in seriously wide error bars.

Third, this only addresses poor siting, not the urban heat island effect, which applies at a slightly larger scale. And the cancellation only applies to the trend in the mean over a particular period, not to all the other trends, periods, and statistics people pay attention to.

Fourth, this isn’t the only adjustment, or potential source of bias. It’s a comparison between two trends over a particular interval both of which are subject to errors, some of which errors will be in common. You can draw conclusions about the effects of microclimate on the trend, but not on its accuracy. (In other words, it is a case of measuring precision rather than accuracy.)

And fifth, the trend being measured here for the United States is in fact very low compared to other continents – about 0.3 C/century over the reference period – and it has long been a topic of debate whether the record hottest US temperature was in the 1930s or more recently. It has been suggested by some sceptics that this low value is because the US operates the biggest, best-funded, highest-quality climate monitoring networks in the world, and its low trend actually reflects the reality better; the much poorer data from elsewhere give the misleadingly high trends. If the United States network is run this badly, how bad do you think it gets in central Africa, or the middle of Siberia, or the Antarctic? How good was it in 1900?

And sixthly, it’s been at least ten years since sceptics seriously claimed that global warming might be entirely due to UHI – what they more usually say is that the poor quality of the networks make the measurement uncertain – far more uncertain than is claimed. Do you really believe that with 80 decently-sited thermometers we can actually determine the average temperature of the entire United States to 0.01 C accuracy? Or even 0.1 C? Remember, every measurement was rounded to the nearest degree C before being averaged.

We get this fascinating attitude to scientific method and data quality all the time. I’m lectured about how the ‘official’ science is fine because it is scrutinised and tested and reviewed by professional scientists. I’m told here that sceptics have failed to live up to these high standards of scientific quality. And yet here we are, with an attitude that gross errors “don’t matter” if it turns out that some of the errors approximately cancelled out. We’ve got scientists saying they’re making stuff up, and corrupting databases with false codes – knowing that it is going to allow bad databases to pass and good databases to go bad – and the establishment not caring. We’ve got thermometer measurements corrupted by aircon vents, and nobody noticing. We’ve got data mislabelled, fudged, extrapolated, filled-in, spliced, short-centred, cherry-picked, and all the rest of it, and scientists denying that there is anything wrong with this. You could calculate it by interpreting the entrails of a chicken, and if the answer came out roughly right you guys would be happy to call it science. Nobody seems to care about the method, they only care about the conclusion.

It was a lesson I learnt at elementary school – getting the right answer by the wrong method was still wrong. Show your working – so your method can be checked. Kids getting the right answer by lucky guessing is not education. And getting the right answer by having all your many gross errors luckily cancelling is not science.

I’ve always wondered why the oil companies don’t diversify and get into producing renewable energy themselves. The explanation above doesn’t make sense to me, because they could sell it to the stockholders as hedging against the losses that will happen when oil runs dry. If there’s any panic about oil itself, that would push investors to put more money into the wind and solar divisions of the company.

Hi Matt,
I found this article during my research on this topic: http://money.cnn.com/magazines/fortune/fortune_archive/2007/04/30/8405398/index.htm
Investing in alternative energy research is just too risky for Exxon. Similarly, many oil companies refuse to invest as little as 10% of their profits. Why? Because it’s not good financial management at this time. There’s no guarantee of good ROI. It might be good PR, but as long as America is held hostage to fossil fuels, PR is not a priority.
Cheers,
Jamie

It’s a good question. Some of them have done, but only where they believe they can get long-term government-backed subsidies to support the business. This is essentially because renewable energy technologies are considerably more expensive, and are on a rational economic basis not necessary for another 40 years yet (at least), so nobody could sell it to more than a small niche of the severely devoted in a free market.

Even with the support of protectionist regulation it is very risky – as the Europeans have found out – because it only works economically if the regulation/subsidy is guaranteed for the next 40 years or so. Industry is increasingly uncertain about whether serious support for green politics can be sustained for years, let alone decades to come. Several European initiatives have collapsed after only a few years. They’ll happily take your money, but they have no intention of getting stiffed with the bill when everybody comes to their senses.

Ah, I see what your problem is. You are allowing for negative temperatures and the albedo term to go negative (which is not in itself unreasonable if you are looking at departures from equilibrium).

However, the 0.95T term, which might be interpreted as a cooling or loss term for positive T, is not such for negative T; it is a heating term for negative T. When T is negative, 0.95T moves T upwards, and thus effectively adds energy automatically under those conditions regardless of the A(t) term. Your model therefore implicitly includes another energy source which is uncorrelated with the random albedo energy source. Essentially there are two conditions that can cause T to go up: 1) A(t) > 0, and also 2) T < 0. And conversely, two independent ways for T to go down: 1) A(t) < 0, and 2) T > 0. Since the sign of A(t) is independent of the sign of T, it’s no surprise no correlation is seen with A(t). You’ve got something else BESIDES albedo built in that boosts the system energy when it’s below equilibrium.

So back to #24:

The point is admittedly subtle.

The math was, at least for me.

But this last statement does not follow from the earlier statements. There can be a (short-term) trend in the temperature, with no trend in the albedo, and no correlation between temperature and albedo. The albedo causes the temperature to be what it is, but there is no trend in the albedo, while there is in the temperature.

That’s because you implicitly buried a second energy source which is uncorrelated with the albedo. That’s cheating.

“which is not in itself unreasonable if you are looking at departures from equilibrium”

Yep.

I didn’t express it very clearly. Code does make it much easier.

“However the 0.95T term, which might be interpreted as a cooling or loss term for positive T, is not such for negative T; it is a heating term for negative T.”

Yes. Both the temperature and the albedo are departures from equilibrium. The hidden energy source is the constant subtracted from the albedo to give a delta from zero. The buried energy source is the albedo that leads to equilibrium.

The hypothesis that the siting of temperature stations could introduce significant error to records of global warming was confirmed by the paper…. Saying “we made a mistake and didn’t spot it, but luckily we made some more mistakes that we also didn’t spot that luckily canceled out the effect of the first mistake!” does not speak highly of your quality, testing, or peer-review. And neither does the fact that you see it as some sort of vindication.

An error source that does not actually produce erroneous results pretty much by definition cannot be called “significant.” In much the same way, subsequent authors using different methods to reproduce the same originally-claimed temperature trend is pretty much by definition a “vindication.”

That yarn you spun about your precious parking lots and jet exhaust has been documentarily falsified. You may now cease talking about it. Don’t worry: you don’t have to admit you were wrong. That would be the part of a legitimate scientific character that even a good con artist would have the most trouble faking. And in any case it would be redundant. I’m just glad I could help. You’re welcome!

If you subtract a constant from a set of data, the linear fit still has the same slope.
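A quick numerical confirmation of this point (hypothetical data, just to exercise the fit): subtracting a constant shifts only the intercept, never the slope.

```python
import numpy as np

# Subtracting a constant from the data moves the intercept of the
# least-squares fit but leaves the slope untouched.
rng = np.random.default_rng(1)
x = np.arange(50)
y = 0.3 * x + rng.standard_normal(50)

slope_raw = np.polyfit(x, y, 1)[0]
slope_shifted = np.polyfit(x, y - 7.5, 1)[0]  # 7.5 is an arbitrary offset

print(np.isclose(slope_raw, slope_shifted))  # True
```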

And as I’ve said repeatedly, it’s not supposed to be a complete model – it’s a toy example for the purposes of illustration.

I can, if you like, write out the energy balance as a differential equation, apply perturbation theory about the equilibrium point, linearise, and take discrete samples, and get exactly the equation above. But such lengthy details are not relevant to the point I’m making. I can do it if it’s the only way to stop you chasing off after distractions, but is there any point if all you’re doing is clutching at straws to try to avoid thinking about the conclusion?

Scientist – Space and time are curved like a rubber sheet with weights placed on it.
Audience – What sort of rubber, exactly? Is it vulcanised? Your ‘curvature’ hypothesis must be wrong, because with rubber there would be more friction, right?

#56,

“An error source that does not actually produce erroneous results pretty much by definition cannot be called “significant.””

Sigh. As we’ve just pointed out, it does produce erroneous results. If you ask about the trend in the maxima, it produces the wrong answer. If you ask about the trend in the minima, it produces the wrong answer. If you ask about the absolute value, it produces the wrong answer. If you ask about your degree of uncertainty, it produces the wrong answer. If you ask about the diurnal range, it produces the wrong answer. If you mix stations together with a homogenisation adjustment, it will produce the wrong answer. If you ask about regional/local trends, it produces the wrong answer. If you ask about mean trends for different time intervals than the one considered here – since the cancelling errors occurred at different times – you get the wrong answer.

The data is wrong. It produces wrong results. You are doing the equivalent of reading chicken entrails, and claiming that because one of your predictions came out pretty close your method is totally validated and correct and to be trusted.

Getting the right answer by the wrong method is still wrong.

But you do help me immensely by your style and content of argument, which I appreciate. Thank you!

@55: And as I’ve said repeatedly, it’s not supposed to be a complete model – it’s a toy example for the purposes of illustration.

I can, if you like, write out the energy balance as a differential equation, apply perturbation theory about the equilibrium point, linearise, and take discrete samples, and get exactly the equation above.

Not necessary, since the logic error has been shown with the model as presented in #26 and #46-48. To reiterate, you claimed, in #24

(1) The albedo causes the temperature to be what it is, but there is no trend in the albedo, while there is in the temperature.

and then stated in #26

(2) A(t) has no trend. (Check that.) But for a high percentage of cases, T(t) does.

It is now clear that A(t) as defined in (2) is not the albedo in (1), thus a fallacy of equivocation has occurred. That A(t) in (2) is uncorrelated with T (#26) does not imply at all that albedo (1) is uncorrelated with T.

You have thus not provided a valid example of this claim from #29

The point is, it is entirely possible to have an apparent trend in the output, with no trend in the input.

And I’ve already said that A(t) is the albedo offset by a constant (and scaled). Which doesn’t affect the correlation.

I agree that my original proposal was carelessly worded. I assumed the relationship of the anomalies with actual albedo/temperature would be obvious, and oversimplified the terminology. It’s exactly the same as when people direct me to graphs of “global temperature rise” and omit to mention that it is actually the global mean monthly adjusted mid-diurnal temperature anomaly – even though the scale shows it centred on zero, such language normally passes without comment or dispute.

What I’m not sure of is whether you’re just being picky about it, or whether you really think it is a serious issue that invalidates the more general point.

What I’m not sure of is whether you’re just being picky about it, or whether you really think it is a serious issue that invalidates the more general point.

That’s exactly what I’m saying.

And I’ve already said that A(t) is the albedo offset by a constant (and scaled).

I was raising the issue of the non-constant “restoring” term 0.95*T, a positive change for T < 0, which pushes T up regardless of A(t), but it turns out I didn't even need to make that point ….

Which doesn’t affect the correlation.

… because I’ve been looking at your linear fits to the first 100 points of T(t) and A(t) by running the code concatenated from #46 and #47, and I see a clear—I mean obvious—correlation between the fit slopes of the two quantities.

plot(Results$ATrnd, Results$TTrnd)
ccf(Results$ATrnd, Results$TTrnd) produces quite a spike in the cross-correlation between the two series!
cor.test(Results$ATrnd, Results$TTrnd) yields around 0.79 or 0.78 for each run!

and going further, looking at the correlation between A(t) and the 1st derivative of T(t) by replacing
Results$ATCor[j] = cor(ModelData$Albedo,ModelData$Temperature)^2
with
Results$ATCor[j] = cor(ModelData$Albedo[1:99],ModelData$Temperature[2:100]-ModelData$Temperature[1:99])^2

gives a very different picture – note the R-squared histogram in the lower left.

So I didn’t even need to raise my concerns about the model: the temperature vs. A(t) correlations are quite apparent in your model as it is. (Unless I’m doing something REALLY wrong, but darned if I can see what that might be.)
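For readers without R to hand, the experiment described above can be approximated in Python (a reconstruction under assumptions, since the code from #46/#47 is not reproduced in this thread): run the toy model many times, fit a straight line to both A(t) and T(t) over each run's 100-point window, and correlate the fitted slopes across runs.

```python
import numpy as np

x = np.arange(100)
rng = np.random.default_rng(7)

def run_once():
    """One realisation: return the OLS slopes of A(t) and of T(t) over 100 steps."""
    A = rng.standard_normal(100)
    T = np.zeros(100)
    for t in range(99):
        T[t + 1] = 0.95 * T[t] + A[t]
    return np.polyfit(x, A, 1)[0], np.polyfit(x, T, 1)[0]

slopes = np.array([run_once() for _ in range(1000)])
r = np.corrcoef(slopes[:, 0], slopes[:, 1])[0, 1]
print(round(r, 2))  # the within-window slopes are clearly correlated across runs
```

This is consistent with the comment's point: within any finite window, a run of high forcing both raises the fitted slope of A and pushes T upward, so the two window-level trends correlate even though the forcing process itself has no long-run trend.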