Monday, December 29, 2014

Been busy entertaining recently. Four of the people in these photos, plus the photographer, have a birthday within a week of Christmas... and then there's the Japanese emperor's birthday on 23rd December to consider, and, of course, that Jesus chap, on Christmas Day itself.

Friday, December 19, 2014

One of the more interesting talks for us at the Paris conference mentioned previously here was James Porter talking about UKCP09. It turns out there has been a social sciences project ("Project ICAD"), part of which involved looking at the UKCP09 project (and they are based at Leeds University, perhaps an additional reason for a visit there some time?). We were in Japan for the entire interval in which UKCP09 took place, and only had limited contact with the relevant parties, but perhaps know enough about the issues for our perspective to have some relevance. The speaker had spent some time embedded with the Hadley Centre and had talked to a lot of people involved in the production and review of the UKCP09 project.

A significant part of Porter’s talk looked at the question of how the probabilistic predictions were made, and in particular the UKCP choice of basing their probabilities primarily on their ensemble of HadCM3 simulations with different parameter values (perturbed parameter ensemble or PPE), rather than basing their results on the CMIP3 ensemble of different models (multi-model ensemble, MME). I was surprised to see this presented as such a major decision, as my recollection is that most of the critics at the time were really complaining about the willingness of UKMO to generate probabilistic predictions at all since (the critics argued) there was not really a sound basis for assigning numerical values by any method.

The main UKCP09 proposal was (according to their web page) funded in 2004 and at that time, it seemed quite widely accepted that PPEs were a better foundation for probabilistic prediction than the MME. In fact this era was very much the heyday of PPEs, with climateprediction.net, the Hadley Centre’s QUMP group and our own rather smaller ensemble research activity all making rapid progress. The UKCP09 approach was externally reviewed back in 2008/9. The full review doesn’t seem to be available (anyone know where it is?) but I don’t see any evidence in either the summary or response that the question of MME vs PPE was seriously raised by anyone even at that later time.

I believe (though I could be wrong and would welcome references) that we were actually the first to argue the contrary. The roots of our argument can be found in this Yokohata et al paper (which although published in 2010 was submitted back in 2008), which pointed to substantial inconsistencies between two PPEs based on our two different GCMs (MIROC and HadCM3). However it was actually our series of papers on ensemble analysis starting in 2010 (eg here, here, here and here) that most clearly argued not only that PPEs had serious problems, but also that the MME was much better than previously believed. So while I’m encouraged to see that this question is now high on the agenda, I really don’t think it was on the table at the outset of UKCP09 and it doesn’t really seem fair to use it as evidence of insularity or reflexive dismissal of outside ideas, which seemed to be the speaker’s point. Given the work they had already done by 2010 or so, the UKCP09 researchers actually made quite substantial and constructive efforts to account for the (by then) emerging failings of the PPE approach by effectively adding on the MME’s uncertainty to their results. While this may satisfy neither the resolutely anti-Bayesian nor the most purist pro-Bayesian, in my view it certainly improved the credibility of their results.
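To make the idea of "adding on the MME's uncertainty" a little more concrete, here is a deliberately minimal sketch. This is my own illustration, not the actual UKCP09 procedure, and all the numbers are hypothetical: it simply treats the PPE spread and the additional structural (MME) spread as independent Gaussian uncertainties and sums their variances.

```python
import math

# Toy illustration (NOT the actual UKCP09 method): if a perturbed-parameter
# ensemble (PPE) gives one spread, and the multi-model ensemble (MME)
# suggests additional structural uncertainty, one crude way to "add on"
# the latter is to assume independence and combine variances.

def combined_sigma(ppe_sigma: float, mme_sigma: float) -> float:
    """Standard deviation after inflating a PPE spread by an MME spread."""
    return math.sqrt(ppe_sigma**2 + mme_sigma**2)

# Hypothetical numbers (degrees C), purely for illustration:
ppe_sigma = 0.8   # spread across the perturbed-parameter ensemble
mme_sigma = 0.6   # extra structural spread implied by the multi-model ensemble
print(combined_sigma(ppe_sigma, mme_sigma))  # ~1.0, wider than either alone
```

The point of the toy is just that the resulting distribution is wider than the PPE alone would suggest, which is the direction of the adjustment described above.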

Some of the interviewees gave excuses for their apparent reluctance to air their doubts openly at the time. According to Porter, some of them said they were scared of being labelled sceptics! What a feeble excuse. Perhaps more plausible is the additional argument that the incestuous and cliquey nature of climate science in the UK made it a bit of a career risk in terms of future funding. But in any case, I certainly recall some people making their criticisms very plain. In particular, Lenny Smith argued eloquently about the risks of assigning probabilities where there was not really a sound basis for them. If the next model generates different results (which is entirely plausible) then someone is going to end up looking rather silly.

So I’m not going to stick the knife into the Hadley Centre for proposing in 2004 to base their probabilistic predictions on a PPE methodology that they had already started to work on. You would have had to be unusually prescient to anticipate our research by several years, although I’m encouraged to see it is now obviously high in people’s minds. On the other hand, the Hadley Centre’s apparent continuing preference for PPEs is hard to defend, now that they have a chance to regroup. To that extent, perhaps this Project ICAD analysis contains a truth that is deeper than the actual story they purported to tell :-)

The talks from scientists were generally straightforward, but the social science talks inevitably left us waiting for the punchline. They would get to the end and stop, before reaching any real conclusion. This has been a common impression we have both got from a number of similar events. The speakers tend to be long on historical description and retrospective analysis, and short on anything amounting to overall vision, substantive advice or predictive claims. (There have been notable exceptions to this general impression, but they are rare.)

A lunchtime discussion provided more insight than a day and a half of talks. We were sitting next to a social scientist, let's call him Bob (because his real name was Ian). jules asked him pointedly what the purpose of social science was, as it never seemed to say anything of substance to us, i.e. offer concrete advice. IanBob admitted rather frankly - proudly, even - that it wasn't supposed to have a point. Does it have to have utilitarian value to be worthwhile, he countered? I readily admitted that art and poetry have some value without making falsifiable predictions. But I had been hoping for more from the, um, scientists involved in social science. The entire system of science can perhaps be summed up as the making of falsifiable predictions, and this is what most clearly separates it from religion (probably a bit over-simplistic, I don't claim any great authority on the topic). So asking for some testable theories didn't really seem too unrealistic to me.

The first talk was actually a history of the establishment of confidence in climate modelling, and though it was basically a valid review of the literature, it fell a little short (in my view) in describing confidence and consensus as something that seemed to emerge by default over time, failing to recognise the emergence of consensus as primarily an indication of the limits of credible disagreement. IMO, this is the most fundamental aspect of consensus-forming and scientific progress (as we argued in this piece), but of course the failure to generate credible alternative theories is not really obvious from the literature. For a currently topical case, consider Tim Palmer's call for new high resolution climate modelling centres. Tim has some ideas for improvements to climate models and climate modelling, which may be right or wrong (his new article is at least an improvement on previous versions of his argument, IMO), but at least they are plausible and concrete. In contrast, Judith Curry waves her hands and asks for "fundamentally new model structural forms", but without any actual ideas as to how these fundamentally new models might be created, nor what they could bring to the table, it's just hot air and hand-waving. While I'm on the topic, if it's not the job of people like Curry to actually create such new models, then who exactly does she think should be doing it, and how? But I digress.

Anyway, back to the story, Ian argued that the main point of social science was to provide stories - his word - that described how human society worked. And these stories were to be judged primarily on how plausible or convincing they sounded. The concept of "truth" as a scientist would interpret it didn't come into the matter - truth was basically determined as whatever ideas were currently popular, nothing more. Of course scientific "truth" is actually a bit of a slippery concept. For example, Newtonian gravity is not actually true, but it's near enough for very accurate predictions over a wide range of applications. We don't think we are really describing truth, but we are at least attempting to approximate it and the demonstration of this is that the theories reproduce and predict the world, rather than merely being attractive to an audience. Note again the importance of useful predictions in this. Moreover, the stories were not expected to be generalisable to other situations. They were just what happened in that particular case. No over-arching theories, or even any consideration that this could - in principle - be one of the eventual goals.

Of course Ian's argument was somewhat undermined by the number of speakers wailing that "things need to change" (in order to make progress in the policy debate, which went almost without saying as the underlying purpose of the conference). This sounds almost like a predictive claim, i.e. that a change in behaviour might lead to some observable result, but it stops some way short, in that they didn't actually describe what the required changes were, nor what results would likely be observed. Next time I hear a social scientist going on along similar lines, I will simply sigh and try to treat it as a Just So Story, only not as good.

Thursday, December 04, 2014

Phew. Back at last to the land of feeble livestock-based humour. Talking of livestock, it was a major revelation quite how much cheese the French consume. We've visited France for weeks at a time before (although not Paris since we were both infants), but although we'd sampled many cheeses from many kinds of French livestock (sheep, goats, cows, chickens etc), we'd not really eaten with actual French people much before, so had failed to appreciate the Quantity. All that red wine is probably a necessary accompaniment to save the arteries from completely furring up.

This photo was taken on a bike ride before we went to France. It was frosty yesterday so the moo cows may well be in their sheds for the winter by now.

James has some posts half written about some of the slightly less cheese related French adventures, so they may appear soon.

Nothing should surprise me where parasitical publishing is concerned. The big headline news is that Nature have made articles free to view...FOR SUBSCRIBERS!

Big whoops and high-fives all round!

Worryingly, some people have fallen for it (Tim Harford re-tweeted approvingly, perhaps not having read carefully enough). As the link makes clear, all that Nature are doing is allowing subscribers to share a crippled DRM-protected read-only version of manuscripts that will obviously require proprietary software of some sort to view and therefore be thoroughly unhelpful for promotion of scientific research.

Considering that Nature already allowed people to put their own published pdfs up on their own website, openly readable by all and not crippled by DRM, I don't see how this can be anything other than a big bold step in the wrong direction. Let's hope it's the last gasp of a dying empire. For too long scientists have been paying parasitical publishers for the privilege of then having their own work sold back to them at hugely inflated prices. The journals don't pay the authors who write their material, and don't pay the reviewers, who are the only participants in the process who actually add any genuine value compared to an open archive (eg arxiv). Maybe Tim Harford would be less enthusiastic if he were paying the FT to print his columns!

Wednesday, December 03, 2014

Not sure why, but people seem interested in the “history of climate blogging”. See posts here and here, for example (there may be more, if so, they are hopefully linked). And I see Eli has already nicked my title, but for different purposes.

My first post was Jan 2005, some climate stuff came within a few months. Not really that long after Stoat and RC, then. But I didn't (and still don't) think of it as anything particularly revolutionary or trend-setting, it was just that the previously well-established discussion fora of usenet which had served their purpose well for many years were finally dying due to a surfeit of nuisance-makers and lack of moderation. I'm still not really a fan of the concept of blogs (too much of a personal soap-box for proper discussion) but they still seem to be the worst system available, apart from all the other ones. I was on sci.env a decade earlier (at least occasionally; climate change wasn't high on my agenda during my maths DPhil).

But one thing that thinking back on this does perhaps help to explain, is my cynicism at the supposed new dawn of revolutionary new(bie) climate bloggers trying to be nice to sceptics, in the hope that this will make a material difference to anything at all. It hasn't, and it won't. That Watts should post something likening climate scientists to Hitler, soon after having a supposedly collegial dinner with several of them, should surprise no-one. Plus ça change (scuse my french, I've just had mussels for dinner). That the scientists respond by writing an article for his blog basically excusing him for having posted it...umm...that worked well. For him.