Noticeable that the new METcast doesn’t extend as far as the old one; ‘hide the decline’? My current guess is the inverse of the Met’s: cooler over the next few years and warming up a bit with a strong El Niño from late 2016. General trend – down.

Tim C: Thanks for the cover, in between other travails, I’ve had to configure another lappy to get online with.

“Noticeable that the new METcast doesn’t extend as far as the old one”

Check out the link in LB’s comment above. There is more than one “decadal forecast”. They are made annually, first one 2005 to 2014, next 6 to 15 etc. LB and I have been asking, to no avail, for updates on how they are progressing for years.

Greensand and I had discussions many times with Richard Betts, but he doesn’t seem active anymore over at Bishop Hill, keeping a low profile. I know you used to have his attention through Twitter; has there been any chatter lately? Perhaps we could have an explanation of the change in form?

tchannon, not a problem now that attention has been drawn to the differences.

LB: Well, it’s worse than that really. Just before Christmas, Richard Betts and Ed Hawkins were touting a new model prediction saying there was a 50% chance of every year breaking the ‘hottest evah’ record. I tried to get them into a bet, but it seems they weren’t prepared to show any confidence in their models…🙂

“A central plank in delivering the new Programme is the development of the physical science core of our new coupled climate model HadGEM3. For seasonal to decadal prediction we shall use the same configuration with horizontal resolutions of approximately 60km in the atmosphere (N216) and ¼ degree in the ocean.”

All above my head; it sounds like better resolution gives better results, but there is a simplified description here.

They’ve been arguing that for years. It’s the central plank of their demand for a bigger, better, more expensive computer.

It’s just crap. In 2008/9 their CEO announced the new, all-singing, all-dancing, multipurpose, high-resolution model which would allow them to sell forecasts 10 years ahead. The same model would be used for seasonal and medium-term forecasts. It was this model that gave them the “4 years after 2009 would be hotter than the hottest year so far” claim.

I think Betts is director of climate impacts. So a sort of PR job. Making sure the alarmism continues unabated. I truly hate those people and I can’t think of anyone else I actually hate.

Global average temperature is expected to remain between 0.28 °C and 0.59 °C (90% confidence range) above the long-term (1971-2000) average during the period 2013-2017, with values most likely to be about 0.43 °C higher than average (see blue curves in the Figure 1 below).

A range whose upper bound is more than double its lower, and they can only offer that within the 90% confidence limits. What the hell happens at 95% confidence?
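For what it’s worth, if one assumes (my assumption, not anything the Met Office states) that the 0.28–0.59 °C band is a symmetric normal 90% interval, the implied 95% range can be backed out in a few lines of Python:

```python
from statistics import NormalDist

low90, high90 = 0.28, 0.59            # Met Office 90% range for 2013-2017 (°C)
z90 = NormalDist().inv_cdf(0.95)      # z for a two-sided 90% interval (~1.645)
z95 = NormalDist().inv_cdf(0.975)     # z for a two-sided 95% interval (~1.960)

mid = (low90 + high90) / 2            # centre of the band
sigma = (high90 - low90) / (2 * z90)  # implied standard deviation

low95, high95 = mid - z95 * sigma, mid + z95 * sigma
print(f"95% range: {low95:.2f} to {high95:.2f} °C")  # -> 0.25 to 0.62 °C
```

So under that (assumed) normality, the 95% band would stretch to roughly 0.25–0.62 °C, which rather makes the commenter’s point about how little the stated range constrains.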

Richard Betts is “Head of the Climate Impacts strategic area, which includes climate impacts research and also the climate change consultancy unit”. His work therefore depends on accepting the Met Office alarmist view of anthropogenic global warming. Of course his work could also be carried out assuming the more realistic effects of natural climate change and the global cooling that we have started to see. But Richard Betts has to go along with, and support, the alarmist position of the Met Office, whether he believes the BS science or not. His salary, career and pension depend on not straying too far from the Met Office line (which he possibly was starting to do before he was reined in).

Apologies if this is a long post. Moderators please feel free to delete it if you think it is worthless…

I emailed the MO about the original graph in Feb 2012.

Here is the response with my questions in italics:

Thank you for your enquiry. Below are some answers (in blue) to your questions:

(1) The start of the new forecast is outside the 90% uncertainty area. So why use a similar or even steeper warming gradient when the previous forecast gradient was obviously overestimated?

We do not specify the gradient – it is predicted by our modelling system. The reasons why the observations are at the lower end of the uncertainty range of the forecast starting in 2005 are actively being investigated.

(2) Why use red as the uncertainty shading? Red usually signifies danger, not uncertainty. The shading therefore distracts the reader from the actual observed data.

Red is in no way meant to signify danger. It is simply used to contrast the forecast from the observations.

(3) The thick blue line shows about 0.45 deg C warming in the next 10 years, more than the entire warming of the previous 20 years. What evidence is there for this steeper gradient?

Changes over short time periods can be much greater than long term trends.

(4) Why does the forecast line (and red shading) shown extend until 2015 when you have already started a new forecast from now? Any red shading after 2011 is nugatory.

That is the range of the 10 year forecast – it started in 2005 and ends in 2015. This forecast is still valid, and we wish to compare it both to the observations as they become available, and to our updated forecasts.

(5) Why does the graph begin at 1950, just following a sharp cooling period of about 0.5 deg C when you have data from 100 years before 1950?

The graph has to begin somewhere! 1950 is a round number – there really was no selection to avoid previous warmer periods! Showing the data since 1870 would not leave much room on the graph for the actual forecast.

(6) Can you explain – and provide real-world evidence for – the sentence “The forecast trend of further global warming is largely driven by increasing levels of greenhouse gases”?

There have been many studies showing that our model, and others, do not predict a warming unless they are forced by changing concentrations of greenhouse gases.

Kind regards,

Xxxxx

I particularly liked the answer to Q5!

So, I responded and here it is: I never received a reply…

My response in italics

Dear Xxxxx,

Thank you for your reply to my questions regarding your decadal forecast. I appreciate your time but I have to say that, of the six questions, only two have been answered in any reasonable fashion.

(1) We do not specify the gradient – it is predicted by our modelling system. The reasons why the observations are at the lower end of the uncertainty range of the forecast starting in 2005 are actively being investigated.

You do specify the gradient because you specify the input data and algorithms upon which the models are based. The question was why the gradient was steeper than previous forecasts when those forecasts have largely been overestimated. The model obviously needs amending. As an aside, I suggest that the emphasis of your ‘active investigation’ should be on why the models are so much higher than the observations, not on why the observations are lower than the models. I am sure you agree that it is the data that is reality, not the models.

(2) Red is in no way meant to signify danger. It is simply used to contrast the forecast from the observations.

That is a clear answer. Thank you. It does seem strange, however, that the uncertainty area provides a greater visual impact than the observation…

(3) Changes over short time periods can be much greater than long term trends.

This is an unacceptable answer. The question was “What evidence is there for this steeper gradient?”. Please re-assess your answer. Of course short time periods can have a greater gradient, but there has to be a reason! What evidence leads your model to produce that gradient? The question is particularly relevant when you consider that your own data shows no overall warming since 1998.

(4) That is the range of the 10 year forecast – it started in 2005 and ends in 2015. This forecast is still valid, and we wish to compare it both to the observations as they become available, and to our updated forecasts.

That is a clear answer. Thank you. Is there a way I can see the previous decadal forecasts? I can only find the 2009 forecast.

(5) The graph has to begin somewhere! 1950 is a round number – there really was no selection to avoid previous warmer periods! Showing the data since 1870 would not leave much room on the graph for the actual forecast.

That is not the sort of answer I would expect from the Met Office. 1900 is a round number, as is 1850 (when your data started)! You can easily make more room on the graph by changing the units on the x-axis. It cannot have escaped your notice that you have effectively removed a very significant warming period – immediately prior to the start of your graph – of a similar anomaly amount (around 0.7 °C) to the entire warming since the start of the graph!

(6) There have been many studies showing that our model, and others, do not predict a warming unless they are forced by changing concentrations of greenhouse gases.

Another unsatisfactory answer. I asked: “Can you explain – and provide real-world evidence for – the sentence “The forecast trend of further global warming is largely driven by increasing levels of greenhouse gases”?” A model is not the real world. Your models make several assumptions. The fact that the recent models are so far removed from actual observations should indicate that there is something amiss with those assumptions. I merely want to know what real-world evidence you have to support the assumptions made in the models.

Thank you once again for your time, but I would prefer my questions to be substantially answered.

Any links between the MO and characters represented are purely coincidental. No reference to Slingo as a Mafia-style shark boss is intended, but a vegetarian shark may provoke examples of belief overcoming nature!

I’m afraid I find it virtually impossible to have a sensible scientific conversation on Bishop Hill these days, somebody nearly always jumps in and starts demanding answers to questions that are irrelevant to the topic, or over-analyses what I say, or is generally offensive – or they get offended when I dip in and out of conversations because of limited time. I am rather wary about posting here too, since I see Stephen Richards is here and he is often exceptionally rude, but I see he has so far refrained from actual abuse and avoided words like “scumbags” and “scam” so I will give him the benefit of the doubt for the moment🙂

The reasons for the new forecast being different to the previous one are that (a) as it says on the website, this is a new version of the model with a whole new setup for simulating the fluid dynamics of the atmosphere, and (b) the initial conditions are more recent so we have more information on what the coupled ocean-atmosphere system is doing at the moment. Inclusion of the initial conditions from a specific date in the very recent past is one of the critical aspects of decadal forecasting, as it allows the model’s simulation of year-by-year natural variability to be tested against specific years, as opposed to just looking at the general long-term statistics.

This is cutting-edge stuff and very exciting – you are right to be keeping a close eye on this area of climate science, it is where the big advances will be made in the next few years, as we are making forecasts for the next few years which are therefore testable within a useful timeframe (as opposed to many decades) and learning all the time.
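The point about initial conditions being critical to decadal forecasting can be illustrated with a toy persistence model (entirely my own sketch, nothing like what the Met Office actually runs): if the “ocean” anomaly persists from year to year, a forecast started from the observed current state beats one started from the climatological mean over the first few years.

```python
import random

random.seed(1)
a = 0.9  # year-to-year persistence of the toy "ocean" anomaly (illustrative value)

def step(x):
    """Advance the toy anomaly one year: persistence plus weather noise."""
    return a * x + random.gauss(0, 0.1)

def trial(horizon=5):
    # spin up to a typical state, then forecast 'horizon' years ahead
    x = 0.0
    for _ in range(50):
        x = step(x)
    now = x
    actual = []
    for _ in range(horizon):
        x = step(x)
        actual.append(x)
    # initialised forecast: decay the observed anomaly; climatological: predict zero
    e_init = sum(abs(a ** (t + 1) * now - obs) for t, obs in enumerate(actual)) / horizon
    e_clim = sum(abs(obs) for obs in actual) / horizon
    return e_init, e_clim

results = [trial() for _ in range(500)]
mean_init = sum(e for e, _ in results) / len(results)
mean_clim = sum(e for _, e in results) / len(results)
print(mean_init < mean_clim)  # initialised forecasts have smaller near-term error
```

The advantage of initialisation fades as the lead time grows, which is consistent with decadal forecasts being most informative in their first few years.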

When you say
“Inclusion of the initial conditions from a specific date in the very recent past is one of the critical aspects of decadal forecasting, as it allows the model’s simulation of year-by-year natural variability to be tested against specific years”
and
“it is where the big advances will be made in the next few years”

Are you expressing a belief that the solution of the Navier-Stokes equations as they apply to convective cells of sufficiently high resolution to be useful in modelling aggregate weather worldwide will be calculable within a few years?

Wouldn’t that mean us having to buy another, even bigger computer for the MET office?😉

The comment from Richard Betts is very encouraging since it acknowledges the extent to which the models are incomplete, experimental and in the process of ongoing refinement.

But how then can one attribute the observed warming (or possibly now cooling) to any one particular cause in the way that the Met Office does ?

This comment extracted by Arfur from someone else at the Met Office is very illuminating:

“There have been many studies showing that our model, and others, do not predict a warming unless they are forced by changing concentrations of greenhouse gases.”

Which suggests to me that natural variability on various timescales such as from MWP to LIA to date is entirely missing and presumably an assumption about the effects of CO2 has been plugged in instead.

The graph has to begin somewhere! 1950 is a round number – there really was no selection to avoid previous warmer periods! Showing the data since 1870 would not leave much room on the graph for the actual forecast

The comment from Richard Betts is very encouraging since it acknowledges the extent to which the models are incomplete, experimental and in the process of ongoing refinement.

Great hit, Steven. Models are useless except for playing games with. One must ask these morons why they are using experimental models, which have failed at every attempt to predict or project the climate, to advise the government to spend £120b when the people of the UK are dying from cold and lack of food caused directly by their counsel.

Good to hear from you again, apologies for bringing you into the firing line again but you do have a good way of explaining things to those of us that follow from the sidelines.

Was I correct in thinking that the new prediction graph is the output of HadGEM3? It seems a lot more plausible than the previous version, even though it may still prove to be too warm, in my humble opinion.
The reduction to a five-year prediction will, I would think, also help maintain credibility as the model is refined, but wouldn’t it be advantageous to compare against previous model predictions to promote the improvements rather than replacing them? Maybe I am jumping the gun a bit and you guys have a media launch planned, but it does cause concern when past examples disappear without trace. You know some of us like to reference stuff at times.

“since I see Stephen Richards is here and he is often exceptionally rude”

[snip] I spent some of my working life in the same environment in which you find yourself and I know exactly what is going on in the background. I had to guard what I said, I had to be careful not to overstep the groupthink mark. In the end I got sick of the dishonesty involved in doing so and got out so that I could speak my mind, and it paid much more handsomely than staying in.

I wish you only the best, but when you spout the party line you should at least be prepared for a ‘severe’ response. I have seen my parents and my children struggle to survive because of government policy promoted on the back of your unvalidated and unverified models, which you now say, yet again, are genius. Can you understand why I am ‘rude’? The Met Office thoroughly disgusts me, as does the UEA CRU.

I studied Lamb for years and was always impressed with his science and his method. His strategy from the outset of his creation of the CRU was spot on. Sadly, he became very disappointed with the people who followed him as he expressed before his death.

You just simply do not appear to comprehend the disaster creation of which you are part. Your job, and everyone’s at the MO, could disappear with the simplest of strategies: ADAPT. In my opinion that is the ideal strategy. A move of CRU/Met Office climate funding towards research on alternative sources of base-load power provision would solve all our problems in one fell swoop. The British and European economies would grow substantially and they would have more money to care for the poor of their countries.

I’m not rude. It is civil servants like you, who have never worked in the ‘real’ world, who think that plain speaking is rude. Get out and work in the real world. Try the USA first. You will find it invigorating, interesting and freeing of spirit. You will also find it rude if you do not come up to their mark.

[…] presented the recent update to the UKMO decadal forecast in his post Major change in UK Met Office global warming forecast. Figure 1 is a gif animation comparing last year’s forecast to the newer one. The UKMO has […]

Don’t you think it’s a tad misleading (I’m choosing my words carefully) that the new graph labels hindcasts as being “previous predictions”? Particularly as the MO has made actual previous predictions that are nowhere to be seen. (Presumably because they’re not very good.)

A couple of comments.
1. The original graph went up to 2021 and dipped slightly at the end. The new graph seems to stop around 2018 and dip steeply. What would have happened if they had continued the projection to 2022?
2. I have been arguing for some time that the Atlantic Multidecadal Oscillation is a major influence on climate (or at least a proxy for something that is). http://www.climatedata.info/Discussions/Discussions/opinions.php
It is now in decline, will continue in decline for a few decades and lead to a minimal increase in temperature for that period. Could it be that the Met Office has tweaked its projections to take account of this?

Given the fact that the science was “settled” some time ago it does seem “surprising” that the Met Office decadal forecast issued in January 2011 is somewhat different from the 2012 version issued last week!

The new forecast has a best estimate of no change in global temperature over the next four years, and error bars which will allow them to claim “skill” even if global temperature anomalies fall by 50%.

It’s even more “surprising” that the UKMO’s revised near-term temperature forecast for 2016, which predicts 50% less warming than 12 months ago, has received absolutely no publicity in the mainstream media.

If the UKMO’s best estimate is correct, by 2016 there will have been no warming for 20 years during which 37%* of all the CO2 ever produced by man will have been pumped into the atmosphere!

Stephen R: I understand your frustration and annoyance, but if Richard Betts stopped commenting at BH due to the stick he was getting, I’ll try to encourage everyone here to tread lightly while we wait to see if he’ll engage with the substantive scientific issues.

I saw the MO’s DePreSys Decadal Forecasts as a focus point upon which the “uncertainties” that RB and Tamsin brought to the BH forum could be based.

I find it impossible to comprehend “uncertainties” without some measure of magnitude. The resultant discussions just meander off without any possible resolution.

So my simplistic vision was to get the MO to append the forecasts on a monthly basis with actual observational data. A simple operation as the forecasts are made on a rolling monthly basis.

They are, as I understand, produced annually; therefore there would be an ongoing series giving an insight into how “we” (Homo sapiens) are progressing in our aim to gain skill to further understand the climate of our planet.

I understand that it is “new” science – 2005, so problems/issues should be expected. However I still think it is a good vehicle for the MO to demonstrate, transparently, the “uncertainties” they face. Publishing the forecasts has been done, now move on to the next stage and update on a monthly basis with the actual data, let’s see how “we” are doing. Rude? I think not, insistent, yes, but it is only one request. Though I do have to admit to being well and truly miffed about hindcasts being misrepresented as any sort of forecast and I will give RB his due in expressing the same concerns.

The latest forecast 5 years out is roughly 0.4 degrees lower than previously forecast and lies outside the previous 90% confidence interval.
The best estimate for temperature anomaly in 2018 appears to be slightly lower than that 20 years earlier and more likely to be in freefall than rising.
The white curve that the MO describes as “previous predictions” is actually an updated hindcast.

@Richard, what would be “really exciting and cutting edge” would be to replace water vapour positive feedback (because there is no physical evidence to support this) in your GCMs with an enhanced solar forcing (there are a number of recent papers to support this) and see what happens.
If the solar forcing was tuned to a similar magnitude as CO2/water vapour, I suspect you would find a pretty good fit with recent warming.

“NEMO (Nucleus for European Modelling of the Ocean) is a state-of-the-art modeling framework for oceanographic research, operational oceanography seasonal forecast and climate studies.” http://www.nemo-ocean.eu/

Should be right up your street, to let us know if this will improve forecasting.

The AGW theory is dead. 17 years of non-warming while CO2 levels kept on increasing linearly at the same rate as they had for the past 50 years proves the theory, which was more of a hypothesis than a theory, wrong.

Feynman taught us that when the theory does not fit the observations, throw away the theory, not the observations. AGW theory is dead in the water, but the MET shamans keep on dancing the warming dance in the hope of a temperature rise.

Alex, the proponents of the AGW hypothesis took a lot less than 15 years to reach the conclusion that they were right. Yet they seem to be demanding that we wait for 18 years of no warming before they’ll entertain the possibility they were wrong. Seems like there are some saucy geese among them.

I don’t have any time to look into this, but:
1) I’m hoping Richard Betts will address the tail end of Bob’s article.
2) If the confidence intervals (CIs) are estimated how I suspect they are, they’re useless. (ALL statistical inference is based on assumptions. Tolerance of patently false assumptions underlying statistical inference is pathologically widespread in academia and government administration. It’s cultural – and its defense is as evasively, intellectually weaselly as weaselly can ever get. There are contexts where CIs are sensible – this isn’t one of them. This is a deeply philosophical issue. Sensible parties need to band together to defeat misapplication of this paradigm in fields where lurking variables dominate. This is a far more serious issue than most realize, as so much purportedly “objective” vital planning is based on preposterous “logic”. We can’t afford such severe qualitative errors (ubiquity of lurking statistical paradox in nature). Stay AWAY from statistical inference in fields where there’s deep ignorance. This should perhaps be an ethically binding obligation in some contexts. For those who don’t realize: raw data exploration differs fundamentally from statistical inference; it’s not based on assumptions.)

“The Met Office Hadley Centre has pioneered a new system to predict the climate a decade ahead. The system simulates both the human-driven climate change and the evolution of slow natural variations already locked into the system. This is possible because the climate takes a long time to respond to some variations. In particular, the state of the ocean has an impact on climate for months and years into the future. In part, this is because it takes a long time for the ocean to heat up and cool down.

By starting this system in the 1980s and comparing the results with observations from the 1990s we have already demonstrated its skill at predicting the global climate. However, a major effect it cannot predict is volcanic eruptions, so the biggest differences between the model and the observations occur following the major eruption of Mount Pinatubo in June 1991.

We are now using the system to predict changes out to 2014. By the end of this period, the global average temperature is expected to have risen by around 0.3 °C compared to 2004, and half of the years after 2009 are predicted to be hotter than the current record hot year, 1998.”

Thanks Rog for moderating to keep the tone civil. Yes, the increased resolution to the point at which convection can be explicitly resolved is seen as one of the main strategies to improve the models – we need to reduce the extent to which we rely on parametrizations and approximations. And yes this does of course mean increased computing power – however, the economic benefits of useful seasonal forecasts would be huge so it would be a worthwhile investment.

Stephen Richards, please don’t make assumptions about my life. My parents were shopkeepers in the Midlands and I used to work in their shop; is that “real-world” enough for you? Also I do a lot of work for customers in the private sector, including major energy companies, the rail industry and multi-national mining companies. I can assure you that none of these guys tolerate airy-fairy academia or civil service bureaucrats; they want a service to be provided and do not pay if they don’t get it. I’ve been doing that for 5 years and now have a lot of repeat business, and indeed this part of our work is growing (which is a good thing, as this private-sector business is sorely needed in order to reduce the burden on the taxpayer – that is why the Met Office is a Trading Fund).

The decadal forecasting work is indeed all about informing adaptation – to climate *variability* as well as climate change (in fact probably more the former than the latter). Although some decision-makers (such as major infrastructure owners) are thinking decades ahead, most people and companies only need to look at the next few years. The long-term global warming trend from anthropogenic climate change is largely irrelevant to them – they need to know about climate variability at regional and local scales over the next few years, especially in precipitation. Although we have made progress in forecasting on these timescales, there is still a lot more to do before we can make really useful forecasts. This is the main thrust of developing the climate models here at the Met Office.

Top marks to Lord Beaverbrook for realising that the new model is HadGEM3. This is the version of the model which is currently receiving all the attention to improve skill in regional forecasts at seasonal to decadal timescales.

The previous version HadGEM2 (as used in CMIP5 and shown at the end of Bob’s post) has not had the improvements that are in HadGEM3, but importantly in this context it is not run in “initialised forecast” mode like the decadal forecasts are. In other words, the only initial data was in 1860, and then the model ran free from then on, driven only by observed emissions but with no further constraints on the meteorology. It does include natural variability (which emerges as a natural consequence of the fluid dynamics equations) but this would not be expected to match the observed variability in exact dates – it can only be expected to do the right thing in a statistical sense. So for the decade of the 2000s, we would not expect it to reproduce the flattening of the global temperature curve except by chance. Bob has shown the ensemble mean (i.e. the average of several model simulations) which by definition will average out the natural variability across the different simulations. Looking at a single simulation would show that there are decades of greater and lesser warming than the long-term trend, although as I say, it is pure chance whether a period of slower warming in the model coincides with that in the observations. There is a useful paper by Ed Hawkins on this in “Weather” – July 2011, Vol. 66, No. 7.
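The point about the ensemble mean averaging out natural variability is easy to demonstrate with a toy example (mine, not anything from HadGEM2): give each “run” the same underlying trend but a randomly phased multidecadal wiggle plus interannual noise, and the wiggle largely cancels in the ensemble mean while surviving in any single run.

```python
import math
import random

random.seed(0)
years = list(range(100))
trend = 0.01  # underlying forced warming, °C per year (illustrative value)

def run(phase):
    # trend + a ~60-year oscillation with arbitrary phase + interannual noise
    return [trend * t + 0.15 * math.sin(2 * math.pi * t / 60 + phase)
            + random.gauss(0, 0.05) for t in years]

runs = [run(random.uniform(0, 2 * math.pi)) for _ in range(20)]
ens_mean = [sum(r[t] for r in runs) / len(runs) for t in years]

def detrended_sd(series):
    """Standard deviation of a series after removing the known trend."""
    resid = [x - trend * t for t, x in zip(years, series)]
    m = sum(resid) / len(resid)
    return (sum((r - m) ** 2 for r in resid) / len(resid)) ** 0.5

single = sum(detrended_sd(r) for r in runs) / len(runs)
# single runs keep their decadal ups and downs; the ensemble mean is much flatter
print(single > 1.5 * detrended_sd(ens_mean))
```

Which is why comparing an observed decade against the ensemble mean, rather than against individual realisations, can make a model look like it “missed” variability it actually contains.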

Don Keiller, the water vapour feedback is not really something that is imposed on the models, it is an emergent property of the model physics. There *is* physical evidence that this exists, please see this paper and references therein. My colleagues who are working on seasonal to decadal forecasting *are* taking account of solar effects – they have no axe to grind on whether forcings are natural or anthropogenic, they just want to be able to make useful forecasts. One of their papers is Solar forcing of winter climate variability in the Northern Hemisphere.

Thanks everyone for your interest in this important area of the Met Office Hadley Centre’s work. You should definitely keep an eye on this for further developments!

Thank you for coming back with some answers.
If the main thrust of developing the climate models at the MO is commercial, is there no emphasis on inclusion of HadGEM3 in AR5? Would the process not benefit from this cutting-edge technology?
Especially as this may influence policy outcome in 2015.

External natural forcings are included in the historical simulations but not the future ones at the moment. The Ineson et al paper I linked to above suggests that UV effects may be important for seasonal forecasting in Europe, and arguably we should consider including the 11-year cycle as that makes a small difference on interannual to decadal timescales, but currently we don’t think solar effects would make a huge difference to long-term (centennial) projections – see this paper.

Oh I meant to say earlier in response to James Evans that, yes it would have been better to say “hindcasts”. I think my colleagues were trying to avoid jargon, but yes it does make it sound like the decadal predictions have been made for ages which is not the case – the first was published by Smith et al in 2007.

LB: it is too late to include HadGEM3 in AR5 – the literature cutoff for papers to be submitted for WG1 was July, so model simulations had to be completed a long time before that in order for papers to be written. However I don’t think this will necessarily affect long-term climate projections anyway, compared to what is already known. This is all about natural variability in the near-term. Colleagues of mine had already shown that recent observations implied that the higher rates of warming that had previously been seen to be consistent with observations now appear unlikely – but that long-term warming is still expected. See here.

So, don’t get *too* excited about all this – it certainly doesn’t blow AGW out of the water…..! However it does provide an opportunity to test the models over the next few years.

Thanks for the link to the Jones, Lockwood and Stott paper, we’ll have some interesting discussion about it I’m sure.
The principal problem with the paper (so far as I can tell from the abstract) is that it takes no account of the heat capacity, and concomitant thermal inertia, of the ocean.
This will lead it to a false conclusion.

Rog, I don’t think that’s particularly relevant to the decadal forecasting issue. If interannual to decadal variability *was* strongly forced by a predictable natural external forcing then it would be a heck of a lot easier🙂

“Oh I meant to say earlier in response to James Evans that, yes it would have been better to say ‘hindcasts’. I think my colleagues were trying to avoid jargon, but yes it does make it sound like the decadal predictions have been made for ages which is not the case – the first was published by Smith et al in 2007.”

The issue for me is that it makes it look as if the MO’s previous decadal predictions have been highly accurate. Which they haven’t. They’ve been highly inaccurate. The hindcasts are pretty good (which of course is pretty much a minimum requirement for us to even consider the computer program to be a climate model.)

Ha ha, I notice the C has gone from CAGW; still waiting for the A to become lower case. I will have to try to find a copy of that paper you suggest that isn’t paywalled.

Shame none of this is making it into AR5; it sort of makes another case for it being outdated before it is published. Still, the final draft should be less catastrophic than the last one if it is representative of the science🙂

Richard Betts says, regarding HADGEM3: “It does include natural variability (which emerges as a natural consequence of the fluid dynamics equations) but this would not be expected to match the observed variability in exact dates – it can only be expected to do the right thing in a statistical sense.”

If the models are to be used to determine the causes for the warming (anthropogenic versus natural factors), then the models need to be capable of matching the frequency, amplitude and duration of natural ocean processes: ENSO, for example. ENSO dictates how much heat is released from the tropical Pacific to the atmosphere, how much warm water is distributed from it to adjoining ocean basins, the extent of the teleconnection-related warming of global surface temperatures remote to the tropical Pacific, and how much heat in the form of warm water is created to fuel the El Nino events.

Hawkins begins:
“It is ‘very likely’ that humans have caused most of the warming of the Earth’s climate since the mid-20th century; this was a key conclusion of the 4th Assessment Report (AR4; Solomon et al., 2007) of the Intergovernmental Panel on Climate Change (IPCC).”

Since Hawkins does not dispute that finding, his paper is flawed. Satellite-era sea surface temperature data (Reynolds OI.v2) does not confirm anthropogenic warming. That is, the data indicates it warmed naturally, and it’s blatantly obvious. Anyone who can read two graphs can see it.

We would then have about 20 years without warming, meaning that GHG warming would have been completely offset by mostly natural factors.

By the same logic half of the warming from 1980-1998 would then have been caused by natural factors, which were then all in positive phases. Actually more than half, because AMO has not even turned negative yet, and that was still enough to stop warming.

The other issue is why the new graph is cut off after 5 years, right after it makes a turn downwards.

If you are discounting everything Ed Hawkins says because he agrees with the IPCC assessment of AGW, then you must discount everything I say too as I also agree with it. The evidence does support the existence of AGW. However we don’t know how large the future change will be, we’ve not yet seen a long enough period of anthropogenic forcing to closely constrain the models’ long-term responses to this forcing. The decadal forecasting we’ve been starting to do over the last few years is interesting because it gives us the chance to test the models in “forward mode” and then evaluate them against observations afterwards, but even then, we also need to test the models against the long-term trend.

My colleague Mat Collins is a co-author of Guilyardi’s. Yes there is a need to do more on improving ENSO, and indeed this will help improve the ability to forecast regional climate variability on seasonal to decadal timescales and beyond. Again this is an area where there is a lot of research going on at the moment.

Hi Roger, the net anthropogenic forcing is positive (even more so if aerosol forcing is less negative than previously thought!), the outgoing LW radiation measured by satellite shows increased absorption in CO2 absorption bands, the spatial patterns of observed warming are consistent with what we’d expect, we see warming over ocean as well as land, we see cooling at higher levels in the atmosphere, and we see increases in tropospheric water vapour.

But outgoing long-wave radiation has increased overall too, so I don’t think this proves AGW has had any effect. In any case, the error on the TOA measurement is 4 times bigger than the claimed signal. So you’re not getting any evidential confirmation there.

Good to see that our scientists have their nose to the grindstone right up to Christmas day. I do hope that they managed to get New Years day off as well.

It wouldn’t do to hang onto a revision until a holiday period came up to sneak it out without explanation, that might give the wrong impression as to the importance of the work.

Maybe the Met office news site will furnish the details, when they catch up. First major Met Office news story for 2013 scooped by tallbloke, best European weblog 2012, makes you wonder if the future of science reporting is outsourcing.🙂

The sea surface temperature anomalies of the East Pacific from pole to pole (90S-90N, 180-80W) haven’t warmed in 31 years:
It’s tough to say AGW is responsible for their warming if they haven’t warmed.

The sea surface temperature anomalies for the Rest-of-the-World (90S-90N, 80W-180) warm only in response to major El Niño events:

Contrary to what you’ve written, the sea surface temperature data does NOT support the existence of AGW. As I wrote earlier, the data indicates it warmed naturally, and it’s blatantly obvious. Anyone who can read two graphs can see it. Those are the two graphs. Let me expand on the discussion of the Rest-of-the-World data.

It’s easier to see the warming only occurs during those major El Niño events if we remove the effects of the El Niño events from the Rest-of-the-World data:

If you’re wondering why the Rest-of-the-World sea surface temperatures shift upwards in response to those significant El Niños, it’s because they do not cool proportionally during the trailing La Niña events. We can see this by detrending the Rest-of-the-World data and comparing it to scaled NINO3.4 sea surface temperature anomalies:
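For anyone who wants to try that detrend-and-scale comparison at home, here is a minimal NumPy sketch. The series here are made-up synthetic stand-ins for the Rest-of-the-World and NINO3.4 anomalies, so the numbers are illustrative only, not Bob’s actual data or method:

```python
import numpy as np

def detrend(y):
    """Remove the linear least-squares trend from an anomaly series."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def lsq_scale(index, target):
    """Least-squares factor that scales an ENSO index onto a detrended series."""
    return np.dot(index, target) / np.dot(index, index)

# Synthetic stand-ins: a NINO3.4-like oscillation, and an SST series that
# is 0.1 x that oscillation plus a small linear trend.
t = np.arange(360, dtype=float)          # 30 years of monthly points
nino = np.sin(2 * np.pi * t / 60.0)      # ~5-year cycle
sst = 0.1 * nino + 0.002 * t

scale = lsq_scale(nino, detrend(sst))    # recovers roughly 0.1
```

With real data you would use the published NINO3.4 index in place of the sine wave; the point of the exercise is just that scaling the index lets you see what is and isn’t explained by ENSO in the detrended series.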

Why don’t they cool proportionally during La Niña events? I’ve been discussing that at my blog and at WattsUpWithThat for almost 4 years. Last I remember, WUWT gets about 40,000-50,000 views daily, so I’ve reached a reasonably large audience so far. I’m hoping to reach a much larger audience this year, Richard.

Would you like me to subdivide Ocean Heat Content data as well? I only need to present the tropical Pacific and extratropical North Pacific to counter any thoughts you might have that they warmed via AGW or that AGW was responsible for those strong El Ninos.

He said: “They failed completely with their models to predict the flattening out of global warming. I think that they are just trying to bury bad news that their predictions in the medium and long-term have been pretty poor.”

Alex, the proponents of the AGW hypothesis took a lot less than 15 years to reach the conclusion that they were right. Yet they seem to be demanding that we wait for 18 years of no warming before they’ll entertain the possibility they were wrong. Seems like there are some saucy geese among them.

Meanwhile, over at Roy Spencer’s blog:

David Appell says:
January 3, 2013 at 1:53 PM

I calculated my number directly, but without including autocorrelation, which Skeptical Science does (as per Foster and Rahmstorf 2011).

Assuming their result is correct, what this really tells you is that 19 years is too short of a time period to make statistically significant conclusions about temperature trends — there is just too much thermal inertia in the system.
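The autocorrelation adjustment being referred to can be sketched roughly as follows. This is a simplified effective-sample-size correction in the spirit of Foster and Rahmstorf, not their exact procedure, and the data below is synthetic:

```python
import numpy as np

def ar1_adjusted_trend(y):
    """OLS trend of a series, with the slope's standard error inflated for
    lag-1 autocorrelation via an effective sample size
    n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = max(n * (1 - r1) / (1 + r1), 3.0)      # guard against r1 -> 1
    s2 = np.sum(resid ** 2) / (n_eff - 2)          # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return slope, se

# Synthetic example: a 0.02/yr trend plus AR(1) "red" noise.
rng = np.random.default_rng(0)
noise = np.zeros(50)
for i in range(1, 50):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.02 * np.arange(50) + noise
slope, se = ar1_adjusted_trend(y)
```

The wider standard error is exactly why a short, autocorrelated record supports fewer statistically significant conclusions than the raw point count suggests.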

@Richard Betts you say
“the water vapour feedback is not really something that is imposed on the models, it is an emergent property of the model physics. There *is* physical evidence that this exists, please see this paper and references therein.”

The paper you refer to is titled “A very simple model for the water vapour feedback on climate change”

Personally I prefer to believe in real world experimental data, rather than (clearly flawed) models.
This paper for instance.

Satellite observations published in this paper show that global water vapour has instead declined over the past 12 years despite steadily rising concentrations of CO2. These observations provide further support that the positive water vapour feedback in IPCC models is overstated and therefore claims of future warming are exaggerated.

Richard Betts says: “My colleague Mat Collins is a co-author of Guilyardi’s. Yes there is a need to do more on improving ENSO, and indeed this will help improve the ability to forecast regional climate variability on seasonal to decadal timescales and beyond. Again this is an area where there is a lot of research going on at the moment.”

“Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power et al. 2006).”

The fact that the UK Met Office had changed its near-term global warming forecast quietly on Christmas Eve was noticed by some Met Office watchers, especially the ever-interesting Tallbloke’s Talkshop website which reported it on January 5th. This piece started a flurry of blog comments. We at the GWPF republished the story the same day on our website.

The next day we internally discussed the Met Office’s revised forecast. The GWPF published my analysis of the considerable implications of the Met Office revision on the 7th January. The analysis was distributed via CCNet at 11:51 am to a list including hundreds of journalists.

[…] Met Office and they have given up the the notion of an imminent global surge in heat (see report at Tallbloke’s Talkshop). The old prognoses showed a temperature increase of almost 0.5°C by the end of the current […]

On the original Met graphs, a white line shows their previous forecasts. However, the white line on the new one, showing temperatures dropping over the last few years to today’s level, is totally different from the Dec 2011 version, which showed accelerated warming.

I’ve looked through the first half of the comments so apologies if this has been discussed:
I would like to know if the most recent forecast has been truncated? After all, there’s a great deal of emphasis from the Met Office on 10 year forecasts and the top graph (original forecast) does go to 2020 at least. It seems that the five years to 2017 on the new graph might show a convenient period of at least some warming, whereas that short kick down in 2017 might be the start of a long decline. Does anyone know if there’s another five years’ worth knocking around in their database or the Wayback machine?

Rog, Radiosonde data show that tropospheric specific humidity has increased over the past 40 years, see this paper.

Don, 12 years is a short timescale, we are back to the discussion of natural variability vs. the long-term trend.

Scute, Lord Beaverbrook, no, this forecast only goes out to 5 years, because the new model is much more computationally expensive than the old one.

Paul Homewood, no, we’re not “re-writing the past”, the “previous predictions” are actually the hindcasts (i.e. done retrospectively to check the performance of the model). See my comments at Bishop Hill.

Hi Richard, thanks for your continued participation in this discussion.

My plot of Specific Humidity *is* the NCEP re-analysis of the radiosonde data at 300mb – up near the tropopause where the radiative balance actually matters in terms of the equilibrium of the whole Earth system. The correlation shows that The Sun is the major factor here.

You are correct that tropospheric specific humidity has increased over the past 40 years *near the surface*, but although this raises air temperature a bit, it isn’t really important for the whole Earth energy budget, because it doesn’t affect oceanic bulk temperature much at all. Try warming a bath full of cold water with a hairdryer and you’ll see why.

Time you Met guys got a sense of proportion for the relative magnitude of effects, in my opinion. Try reading some of the technical discussions we’ve been having on the ideal gas law, gravity, atmospheric mass, surface level pressure and its effect on evaporation rates. These thermodynamic considerations are what is missing from the inadequately coupled atmosphere-ocean radiative-convective models.

I appreciate you commenting on this and other blogs. You are most welcome.

On the BH thread, you said: “The guys who work on the decadal forecast are definitely honest, objective scientists who are working hard in pushing the boundaries of science by trying to make forecasts of natural variability over the next few years.”

Would you care to comment on the answers given to me by the MO, as posted on this thread by me here:

[…] son at 11 could tell a joke about farting and it would be better. One of their scientists did it better, maybe they should get rid of the PR people. In fact please do – I would like less tax please. […]

Inevitably last week it didn’t take long for the bush fires set off by Australia’s “hottest summer ever” to be blamed on runaway global warming. Rather less attention was given to the heavy snow in Jerusalem (worst for 20 years) or the abnormal cold bringing death and destruction to China (worst for 30 years), northern India (coldest for 77 years) and Alaska, with average temperatures down in the past decade by more than a degree. But another story, which did attract coverage across the world, was the latest in a seemingly endless series of embarrassments for the UK Met Office.

Some of this story may be familiar – how on Christmas Eve the Met Office sneaked on to its website a revised version of the graph it had posted a year earlier showing its prediction of global temperatures for the next five years. Not until January 5 did sharp-eyed climate bloggers notice how different this was from the graph it replaced. When the two graphs were posted together on Tallbloke’s Talkshop, this was soon picked up by the Global Warming Policy Foundation which whizzed it around the media.

The Met Office’s allies, such as the BBC’s old warmist warhorses Roger Harrabin and David Shukman, were soon trying to downplay the story, claiming that the forecast had only been revised by “a fifth”, and that even if the temperature rise had temporarily stalled, due to “natural factors”, the underlying warming trend would soon reappear. But they were only able to get away with this by omitting to show the contrast between the two graphs.

In 2011, the Met Office’s computer model prediction had shown temperatures over the next five years soaring to a level 0.8 degrees higher than their average between 1971 and 2000, far higher than the previous record year, 1998. Whereas the new graph shows the lack of any significant warming for the past 15 years as likely to continue. Apart from how this was obscured by the BBC, there are several reasons why this is of wider significance for the rest of us.

For a start, it is not generally realised what a central role the Met Office has played in promoting the worldwide scare over global warming. The predictions of its computer models, through its alliance with the Climatic Research Unit at the University of East Anglia (centre of the Climategate emails scandal), have been accorded unique prestige by the UN’s Intergovernmental Panel on Climate Change, ever since the global-warming-obsessed John Houghton, then head of the Met Office, played a key part in setting up the IPCC in 1988.

A major reason why the Met Office’s forecasts have come such croppers in recent years is that its computer models since 1990 have assumed that by far the most important influence on global temperatures is the rise in atmospheric carbon dioxide. Yet as early as 2008, when temperatures temporarily plummeted by 0.7 degrees, equivalent to their entire net rise in the 20th century, it was already clear that something was fundamentally wrong with this assumption. The models were not taking proper account of all the natural factors governing the climate, such as solar radiation and shifts in the major ocean currents. Even the warmists admitted that it was a freak El Niño event in the Pacific which had made 1998 the hottest year in modern times.

But the Met Office was not going to abandon easily its core belief that the main force shaping climate was that rise in CO2. As its chief scientist, Julia Slingo, admitted to MPs in 2010, its short-term forecasts are based on the same “numerical models” as “we use for our climate prediction work”, and these have been predicting “hotter, drier summers” and “warmer winters” for decades ahead. Hence all those fiascos which have made the Met Office a laughing stock, from the “barbecue summer” that never was in 2009, to the “warmer than average winter” of 2010 which brought us our coldest-ever December, to its prediction last spring that April, May and June 2012 would probably be “drier than average”, just before we enjoyed the wettest April and summer on record.

Such a catastrophic blunder is scarcely mitigated by the Met Office’s sneaky attempt to hide that absurd 2011 graph. One day it will be recognised how the Met Office’s betrayal of proper science played a key part in creating the most expensive scare story the world has ever known, the colossal bill for which we will all be paying for decades to come.

Meanwhile, it is not just here that this latest fiasco, reported in many countries, has been raising eyebrows. Our ministers love to boast that British science commands respect throughout the world. They should note that the sorry record of our Met Office is beginning to do that reputation no good at all.

Britain’s Met Office has come under fire for two pieces of crystal-ball gazing involving global temperature and British rainfall. On Christmas Eve, the Met’s global temperature prediction was quietly revised downwards, and only merited a press release this week after physics blog Tallbloke’s Talkshop noticed the change.

According to the Met’s Richard Betts, an IPCC lead author and head of the Met’s Climate Impacts team, the new projection is the result of new climate models, with different inputs.

The new temperature prediction is 20 per cent lower than the previous estimate, with a mean deviation of 0.43°C above the 1971 to 2000 average over the next five years. If it holds true, then global temperatures will have experienced a 20-year standstill, with no statistically significant warming. The Met didn’t predict, as the BBC erroneously reported, a 0.43°C increase in global temperature over the next five years.
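For what it’s worth, the “20 per cent” figure is easy to check from the two central estimates quoted in this thread (0.54°C before, 0.43°C after):

```python
# Relative reduction in the central five-year forecast anomaly.
old, new = 0.54, 0.43          # degrees C above the 1971-2000 average
reduction = (old - new) / old  # fractional cut in the central estimate
print(f"{reduction:.0%}")      # prints "20%"
```

So “a fifth” and “20 per cent lower” are the same arithmetic, whichever way the BBC chooses to phrase it.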

Hello Richard Betts I have a question with respect to a comment you wrote earlier.

You wrote “However it does provide an opportunity to test the models over the next few years” and linked to a paper by your colleagues (Stott and Jones 2012).

Firstly here is the figure from that paper with the prediction.

If we are taking the green lines as the range of the confidence limits, then from eyeballing it the prediction is for this decade to be between about 0.7°C warmer than the 2000s and 0.1°C cooler. My problem is that this really doesn’t look that skilful. If you look at the previous century on that graph, just about every decadal change would fall into that range. Again eyeballing, it looks like 1890 to 1900 would be outside that range at about 0.2°C of cooling. Every decadal change in the 20th century seems to fit into the range of the model prediction, including the early decades when presumably GHG forcing was little changed. The prediction seems to span the range of decadal change from almost all of the observational record irrespective of the forcing. They’ve got all bases covered.

We all know that the odds at a casino always favour the house, but they are nothing compared with the odds in favour of the model in that prediction being correct. I could say I have skill at predicting the outcome of a coin toss if my prediction were that “it will either land heads or tails”. I realise this prediction isn’t quite that obvious, but it doesn’t seem too far away from that. Richard, I wonder whether you would comment on that?

I notice that Shukman has now edited out the ‘0.43C rise by 2017’ and replaced it with:

“It says the average temperature is likely to be 0.43 C above the long-term average by 2017, as opposed to an earlier forecast suggesting a difference of 0.54C.”

Meanwhile Matt McGrath’s blog negates that goodwill by saying:

“Yes, obviously the science is important and the issue is critical to our survival as species etc etc, but arguments about experimental models and degrees of difference seem really far removed from the concerns and interests of many people.” [before going on to address the concerns and interests of people affected by the Australian fires].

What he really means by “degrees of difference” (alluding to his colleague Shukman’s original, unedited article) is whether it’s a 0.43°C rise by 2017 or no rise by 2017.

So McGrath is saying that confusing the British public by telling them we’ll see a temperature rise over 5 years that’s equivalent to nearly half the rise in the previous century is “far removed from the concerns and interests of many people”… including, presumably, the many people like me paying bigger fuel bills to fund the £0.43 solar feed-in tariff that pays the full cost of £12,000 solar arrays on large roofs in leafy suburbia. Is that right, Matt?

Bob Tisdale suggests the omission of years 6-10 on the latest graph was due to lack of “confidence”. I commented on his site it was more likely a lack of comfort, if those years continue the steep dive!

“Scute, Lord Beaverbrook, no, this forecast only goes out to 5 years, because the new model is much more computationally expensive than the old one.”

I did muse on this and think it must be very computationally expensive indeed not to let it crunch for another 5 years. It can’t be due to a fear of chaotic divergence from inputs because they have happily done decadal forecasts before, even if they were wrong. Would 5 years more of crunching really have taken up that amount of computer time? After all a decadal forecast, if trusted and followed, has ramifications running into the tens of billions of pounds, euros and dollars, take your pick!