Searching for the missing pieces of climate change communication


Just one week ago I watched an interesting discussion on Reyers Laat (a talk show on Flemish television). It wasn't climate-related; it was about the terrorist attacks in Paris. Part of the discussion was about the role of the media in the current fear. By focusing on negative things, the media give the impression that we live in a worrisome world.

The host invited, among others, the editor-in-chief of the VRT television news, Björn Soenens. He stated that the media should play another role in society: they should provide a framework, give more background, insight, context. In that way the public would be better prepared to judge such situations. That is a noble mission, but the reaction of one participant in the discussion showed me that not everybody believed him. I wasn't impressed either. I knew that the VRT news wasn't exactly an example of these aspirations: its news program has been mentioned several times on this blog for its bias in climate communication.

When listening to the alarmist panel in the UK Parliamentary Inquiry on the science of IPCC AR5, I noticed a clear difference from the skeptic panel. The witnesses on the alarmist panel were very sure of themselves. The science was clear. There was a consensus. Climate scientists were skeptical themselves and kept the science sane. There was uncertainty, but it was small and accounted for. The models were reliable and could give valuable answers. They said all those things with great confidence.

How could there be such a large contrast with the skeptic panel? I think it certainly has to do with the role they are playing. As John Robertson said:

I like the idea that science tells us something and we have to agree because science says that is it.

Some sites made fun of this statement by John Robertson, but I think it goes right to the heart of the issue. Politicians like simplicity from scientists. They like them to say whether global warming is really happening, yes or no; whether it changes our climate; whether there is a need for action and what the right action should be. There is no shame in admitting that. For several decades now we have been told an extremely simple story:

It is simple and straightforward. So we also expect the same confidence from those who study it. They are considered the experts. This means they could not possibly backpedal now and say there are still huge uncertainties, that the models were way off from reality. That is what the skeptics are saying, and it doesn't serve them well. Therefore they avoid the inconvenient parts: the standstill in temperatures, the many uncertainties they face, the failing models, and so on.

Let's face it: people are attracted to confident statements from those they consider the experts. This is how issues are communicated most efficiently. That's why the "consensus" is so important and why alarmists put so much effort into declaring it. It is not because it is part of the scientific method or important in science. It is because people, especially those who don't want to look into the issue, are biased toward the side of the largest group, thinking "they can't all be wrong".

But isn't that a problem for the scientists involved? Sure it is. If their predictions/forecasts/projections don't materialize, they lose credibility. But the projections are far in the future. Many of the IPCC's projections are for 2100, more than 80 years ahead of us. There is no doubt all the current experts will be long gone before any of those predictions can be verified or falsified. So at this point it doesn't even matter what they project.

But shouldn’t we respect our experts? Sure, but we should keep being skeptical, especially in areas where there is little data and high uncertainties. The fact that scientists try to declare a consensus and high certainty in a complex system with little data should be a clear warning sign.

The alarmist panel in the inquiry knew what was expected of them. They delivered. They will likely be asked again.

If someone asked, "Are heatwaves increasing?", what would you answer? Probably something like, "That's common knowledge, we all know that!". Sure, but why? "We hear it everywhere".

I had heard it many times before and had no doubt it was true. At first glance it seems straightforward and logical: when temperatures go up (global warming, you know), the frequency of extreme temperatures goes up. We have been told that the frequency of heatwaves is unprecedented and even ramping up in recent years.

I hear around me that we now live in a more extreme world. Instead of just believing this storyline, I wanted to see for myself, and I came to realize that the data exists. We call extremes in temperature "heatwaves" and "coldwaves". Okay, heatwaves increase, but the problem started when I asked myself: "By how much?". The answer was not exactly what I expected it to be…

First things first: finding the dataset. The Belgian dataset seemed to be very hard to find and was really confusing. I found bits and pieces of it, and nothing seemed to match. I found that their definition of "heatwave" changed over time, which could well be the reason for those bits and pieces. The Belgian temperature dataset was not freely available either, so reconstructing it would not be possible. This wasn't going to be easy.

But our neighbors to the North kept their records as well and, more importantly, they share their data with the world. It was very easy to find the list of heatwaves in De Bilt from 1900 onward. It is not exactly what I was searching for, but the situation in the Netherlands should be relatively similar: De Bilt is only a couple of hundred kilometers from Brussels. With the same definition there will probably be somewhat more heatwaves recorded in Brussels (it lies further to the South), but considering that averages in the GISS dataset are made with stations up to 1,200 km apart, this should not be a problem.

This is what the Dutch call a heatwave (it is the current Belgian definition also):

A heatwave is a succession of at least five summer days (maximum temperature of 25.0 °C or higher), of which at least three are tropical days (maximum temperature of 30.0 °C or higher).

I thought the heatwave data would represent extremes well. Only temperatures above a threshold are counted, so only the extremes are visible. The more data exceeds this threshold, the more extreme the temperatures.
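For those who want to try this themselves, the definition above is easy to turn into a small counting routine. This is a minimal sketch in Python; the function names are mine and the sample temperatures are hypothetical, not real De Bilt data:

```python
def is_heatwave(max_temps):
    """Test the Dutch/Belgian heatwave definition on a run of
    consecutive daily maximum temperatures (in degrees Celsius):
    at least five summer days (max >= 25.0 C), of which at
    least three are tropical days (max >= 30.0 C)."""
    all_summer = all(t >= 25.0 for t in max_temps)
    tropical = sum(1 for t in max_temps if t >= 30.0)
    return len(max_temps) >= 5 and all_summer and tropical >= 3

def count_heatwaves(max_temps):
    """Count heatwaves in a series of daily maxima: collect each
    maximal run of summer days and test it against the definition."""
    count, run = 0, []
    for t in max_temps:
        if t >= 25.0:
            run.append(t)
        else:
            if is_heatwave(run):
                count += 1
            run = []
    if is_heatwave(run):  # run may extend to the end of the series
        count += 1
    return count

# Hypothetical week: five summer days (three of them tropical),
# then a cooler day that ends the run.
week = [26.1, 27.5, 31.0, 32.2, 30.4, 24.0]
print(count_heatwaves(week))  # -> 1
```

Feeding the full series of daily maxima for De Bilt through such a routine, year by year, reproduces the kind of count that KNMI publishes.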

So I fired up Calc and loaded the KNMI data into a spreadsheet. Then I plotted it as a graph. The result was surprising. I had expected, well, a steady increase. Maybe not as steep as people suppose, but still increasing. But that is not really what I saw.

Heatwaves De Bilt 1900 until 2013

It looks more like a cyclical event, in which temperatures didn't get above the threshold before 1911 and also not between 1951 and 1975 (smack in the middle of the rise and fall of the ice age scare).

Also, our much-feared heatwaves of the 1990s and 2000s look rather pale next to those of the 1940s. I knew that the 1930s-1940s were warmer globally, but I didn't expect to see it that clearly in data so close to home.

The heatwave data consists of summer days (maximum temperature ≥ 25.0 °C) and tropical days (maximum temperature ≥ 30.0 °C). The ratio between the two stays about the same. With "global warming" one would expect the tropical days to increase relative to the summer days. They don't.

But, but, why do those people say that there are more heatwaves than before? Do they use other data? No. Just look at a very small selection of quotes about more heatwaves than before.

[…] A new study examining six decades of global temperature data concludes that a sharp increase in the frequency of extremely hot summers can only be the result of human-caused global warming. […]

See a pattern emerging? They all talk about the last 60 years, the period from the 1950s, et cetera. They seem to forget there was life before 1950. If you take this period, where does that take us in the graph? Let's look at what they compare with:

Heatwaves De Bilt 1951 until 2013

This makes some things clear. The different time frame gives the (false) impression that:

this warming is unprecedented (it isn't; there was a period with similar or even more heatwaves in the past, but when one only looks from the 1950s onward this cannot be seen)

the intensity increased from the late 1990s until the mid-2000s (it did, but this had happened before).

there were no heatwaves at the beginning of this period and then, from the 1970s, heatwaves start popping up (that's true, but there was a period with similar, even hotter, temperatures before the 1950s)

heatwaves have been ever increasing since the 1970s (if you don't look at the period before 1950 they have; otherwise they haven't)

in 2006 the intensity increased from 1 to 2, so the speed is increasing! (it was, certainly as seen from 2006-2007, but there were identical or even bigger increases in the past).

This seems not much different from many things in climate communication: there is a core of truth in it, but it lacks balance. Although it is an excellent opportunity for those who want to dig into it (I learned a thing or two about climate/weather by figuring things out after being surprised by some claims), for the public this is misleading. If one only looks at the data from the 1950s onward, there is a reason for alarm. But that scare has more to do with a lack of perspective.

This is part of my story – you might want to read the category My Story first if you haven't already (begin at the bottom of the page and work your way up).

After my belief in global warming had crumbled, I got the desire to be able to check some things myself and not rely on the opinion of others. But what could I do, as an interested layman, to achieve this goal? The first thing I intended to do was take a closer look at the messages that appeared in the media, but not take them at face value as I did before. My impression until then was that the mainstream media tell a very one-sided story climate-wise, and it could be interesting to discover the neglected data…

This is about the first story I checked. It made an impression back then. The date was August 23, 2011. A big thunderstorm crossed Belgium. I experienced it while I was at work. It was surreal: at about 10 AM it gradually got darker and darker, until it was pitch black. It seemed like night time. Then came the wind, rain and thunder. Afterwards we learned that it was so dark because the thundercloud was about 16 kilometers high and not much sunlight could get through. A couple of days before (August 18) we also had a heavy local thunderstorm which killed 5 music lovers at the Pukkelpop festival when a tent collapsed; 140 others needed medical attention.

As expected, the next day I found articles in the newspapers connecting these storms with global warming. One such story was in the newspaper Het Laatste Nieuws of August 24, 2011. The title, over two full pages, was "87,000 lightning strikes and more rain than the previous storm", with the subtitle (over almost one page) "Warming of the earth increases storms" (translated from Dutch):

Coincidence? Yes and no
Two very severe thunderstorms in less than a week's time: could this still be a coincidence? "Yes and no," according to Luc Debontridder. It is a coincidence that these thunderstorms are so severe. But it is certain that there are more thunderstorms than in the past. We reexamined the numbers and noticed that the number of detected thunderstorms in the previous decades did increase: from on average 88 days per year in the 1980s to 94 days now. Be careful: literally every thunderstorm activity on Belgian territory is included in this number. A thunderstorm that wreaks havoc in, for example, Vlaams-Brabant but doesn't affect Limburg is counted as one thunderstorm day. But the increase from 88 to 94 thunderstorm days is irrefutable and, according to Debontridder, due to global warming.
The prominent Belgian climate expert Jean-Pascal van Ypersele confirms this. "When the climate warms by greenhouse gases and the temperature rises, there will be more evaporation from the oceans and seas; that is the way it is. Therefore clouds will hold more water vapor and will turn into thunderstorm clouds more quickly. I see those weather extremes increasing in the next 30 to 40 years anyway," says van Ypersele.

At first glance this seemed rather balanced: according to both of them, the severity of the two thunderstorms was a coincidence, but the frequency was not. There don't seem to be any strange-looking statements. Yet some things caught my attention.

[…] We reexamined the numbers […]

What do they mean by this exactly? Did they simply look at the numbers again when the last two thunderstorms came over? Or did they reexamine/adjust/normalize/reconstruct the historical data? Whatever the case, after this "reexamining" they noticed an increase of 6 thunderstorm days per year in comparison with the 1980s.

[…] literally every thunderstorm activity on Belgian territory is included in this number […]

It seems the author left a backdoor open. It tells us that these numbers alone don't say much. A local storm has exactly the same weight as a heavy storm over the entire country. There are six more storm days per year on average; these could well be small local storms. With such a way of counting, this doesn't necessarily mean a significant change.

[…] from on average 88 days per year in the 1980s to 94 days now […]

The way this is stated reminded me of a statement I read some time ago. I can't find the reference at the moment, but here is the gist: the mean temperature of the previous decade was compared with the mean temperature of the current decade, and the conclusion was that the current average was higher. But when one looked at the current trend, it went down. The mean was higher because the beginning of the period was much warmer, which pulled the mean upwards. With this in mind, the statement caught my attention. It also made me wonder whether there was a specific reason why they skipped the 1990s altogether and compared with the 1980s.

I went searching for what was known about thunderstorms in Belgium. I didn't find much (which didn't come as a surprise at all), but I found that the climate experts of the KNMI (Royal Meteorological Institute of the Netherlands) were not so sure about this connection (translated from Dutch):

[…] To what extent climate change affects thunderstorms and lightning is difficult to determine. The data series with observations of the number of thunderstorm days are of variable quality and say nothing about the frequency of lightning. […]

That is an interesting statement: "variable quality". Presumably it means there were no consistent measurements over time. This made me wonder whether the same was true for our country. After some searching, it seems the KMI (Royal Meteorological Institute of Belgium) used manual systems (which only partly detected lightning activity), and that these were replaced in August 1992 by SAFIR (Système d'Alerte Foudre par Interférométrie Radioélectrique), which can detect lightning with a high resolution. SAFIR was later extended to BELLS (BElgian Lightning Location System), with an even higher resolution.

For those who don't get it yet: 1992 is just after their base period of the 1980s. Let me rephrase that: in 1992 the KMI replaced a low-resolution system with a high-resolution system. So in the 1980s they used a system that didn't detect all thunderstorms, while in the 2000s they used an automatic system that detects many more thunderstorms, if not all. How did they manage to compare those two correctly? Maybe they had to adjust the previous measurement data to make them comparable? Is that what they meant by "reexamining" the numbers?

It is true that if you count the observed number of thunderstorm days, you will find more in this decade than in the 1980s; there is no doubt about that. But they "forgot" to mention that there are better detection methods now than back in the 1980s.


How can one not like free wind energy? It looks so simple: the wind is plentiful and blows free for everybody. Many green-minded people naively believe the idea. I too previously assumed that when the rotors of a wind turbine were turning, they were saving energy, because the electricity they produced no longer had to be produced by a fossil-fuel power plant somewhere else. Boy, was I wrong!

An example

When looking at the website of C-Power, I found the following statement concerning the first Belgian offshore wind farm:

The capacity of the completed wind farm will be 325.2 MW, enough to provide power to 600,000 inhabitants

This is not an isolated case of reporting it this way. Every time the mainstream media report on wind energy, for example, this kind of statement is given. Sometimes they give it as a number of people, or as "households" or "homes" that supposedly benefit from the turbine(s).

It is a nice statement on its own, but when digging deeper, things don't seem to add up well. Let's look at this in more detail. They give more detailed information about this farm on another page of their website:

The wind farm consists of 54 turbines with an installed capacity of 325.2 MW
…
According to calculations by C-Power, based on over 1.2 million historic wind measurements on the coast since 1986, the wind turbines will run for 8,440 hours per year, being 96% of the time.
The equivalent number of hours when the wind turbines are running at full power (equivalent full load hours) has been calculated to be approximately 3,300 hrs/year.

That’s 38% of the time, which is a normal figure for offshore turbines.

Digging deeper

This gives me the information to reconstruct the calculation behind the statement. Just a quick calculation: the 54 turbines were erected in three phases, with two different capacities.

Phase     Number   Capacity (MW)   Total capacity (MW)        MWh
I              6            5.00                  30.0     99,000
II-III        48            6.15                 295.2    974,160
Total         54                                 325.2  1,073,160

In kWh per person per year this comes to 1,073,160,000 kWh / 600,000 = 1,788.6 kWh. That is a plausible average for Belgium, so at first glance the statement seems to hold up very well.
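The whole reconstruction fits in a few lines of arithmetic. The sketch below uses only the figures quoted above (6 turbines of 5 MW, 48 of 6.15 MW, roughly 3,300 full-load hours per year, 600,000 people); any reader can rerun it:

```python
# C-Power figures: 6 turbines of 5 MW, 48 of 6.15 MW,
# about 3,300 equivalent full-load hours per year.
full_load_hours = 3300

# Work in kW so the arithmetic stays exact.
installed_kw = 6 * 5000 + 48 * 6150
print(installed_kw / 1000)                   # -> 325.2 (MW)

# Annual production at the stated full-load hours.
annual_kwh = installed_kw * full_load_hours
print(annual_kwh)                            # -> 1073160000 kWh, about 1 TWh

# Capacity factor: full-load hours as a share of the 8,760-hour year.
print(round(full_load_hours / 8760 * 100))   # -> 38 (%)

# Average consumption per person if spread over 600,000 people.
print(annual_kwh / 600_000)                  # -> 1788.6 (kWh per person per year)
```

So the 600,000-people figure is internally consistent with C-Power's own numbers; the problem lies elsewhere, as the next paragraphs show.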

So what is the problem then? They will produce 1 TWh in a year (which C-Power acknowledges) and this is equivalent to the yearly use of about 600,000 people (which the Thornton Bank site said and which the calculation seems to confirm). Yes, that's all true in theory, but in practice it is a completely different story.

The power produced by these turbines will obviously not really go to a group of 600,000 people; there are no power lines from the farm to their houses. The reality is that the power is put on the national grid, and such an amount of electricity can be used by around 600,000 people with an average power consumption of 1,788 kWh per year. This is a very important difference. The first statement seems to imply that the output of these turbines can fully satisfy the power needs of those 600,000 people. It cannot.

If this really was the case, they would have:

complete blackouts during the equivalent of 4% of the time (about 15 days per year)

a varying amount of power during an equivalent of 96% of the time (about 350 days per year)

…unless, that is, they also got their power from other sources.

Indeed, this power is not produced constantly. According to the specifications of the turbines, below a wind speed of 3.5 m/s a turbine doesn't produce anything. The same is true above 30 m/s (averaged over 10 minutes) or above a peak of 35 m/s. Below about 14 m/s the turbine produces only part of its installed capacity.

The company estimated the working time of the turbines at 96%, based on their wind measurements. But this doesn't take into account the time a turbine stands still because of defects or maintenance, so those figures are again too rosy. The nice 38% load factor only applies to brand-new turbines: according to data from the offshore wind farms in Denmark, the load factor declines over time (in the Danish case from 45% to less than 15% at 10 years of age). That is a third of the initial value left! There are several reasons for this. There is the wear and tear on the turbines and the blades in the humid, salty sea air. The turbines also experience more breakdowns, need more maintenance and take longer to put back online. If the Danish data is representative, the output could decline to one third after 10 years, which means the farm would only cater for "about 200,000 people" anymore.
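The "200,000 people" figure follows directly from scaling the claim by the load factor. A minimal sketch, assuming the Danish decline figures (45% when new, 15% at 10 years) carry over to this farm, which is of course an extrapolation:

```python
# Danish offshore data quoted above: load factor falling from
# about 45% when new to about 15% at 10 years of age.
people_claimed = 600_000
lf_new, lf_aged = 0.45, 0.15

# Energy output scales linearly with the load factor, so the
# number of "people served" scales the same way.
people_aged = people_claimed * lf_aged / lf_new
print(round(people_aged))  # -> 200000
```

The point is not the exact number but the linearity: a third of the load factor means a third of the people served, however the claim is phrased.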

Wind energy is intermittent and therefore not very suitable for power production, which needs a reliable output. Power consumption has a certain pattern over the day, but wind power generation follows the pattern of the wind and is not necessarily in accordance with actual consumption. If we really want to rely on wind energy, we need backup generators that fill in when necessary, which means such fast-reacting generators mostly run at suboptimal speed and use more resources than when running at optimal speed.

This is the dirty little secret of wind energy: when we want to rely on wind as a constant power source, we need backup plants that run at a fraction of the capacity they would run at if the wind farm were not there. Or, to put it another way: even when the wind turbines are working optimally, the backup system is still on standby AND uses (fossil-fuel) resources.

What did I learn?

My previous assumption that wind turbines were saving us fossil fuels seems to be unwarranted. This assumption was fed by waves of smooth stories in which one side is completely omitted, like the one above. Although the installed capacity of the generators and the wind data (the nice data) were reported correctly, the data on maintenance, capacity decline and backup needs (the inconvenient data) were completely ignored. Adding this information paints a completely different picture than the theoretical capacity story.

An argument that omits relevant evidence appears stronger than it is. No wonder this inconvenient side of the story is hardly ever told. No wonder the public at large still has false expectations about wind energy.