Karl Popper and global warming

I am 'the other kind' of skeptic about global warming. I look at climate charts, past predictions, and the many PF threads and come away unconvinced that global warming is occurring at all -- to say nothing of its cause. But global warming opponents are even less convincing to me, using outdated science and cherry-picked statistics. (Admittedly, both sides use the latter.) So I remain skeptical -- probably because I'm accustomed to stronger proof from more established fields.*

But it occurred to me that most of my skepticism can be traced to the apparent lack of verifiability, in particular predicting future temperatures. This led me to ask myself: is the theory of global warming falsifiable?

Here's what I'd really like to see, in an ideal world. Two models, one using the scientific consensus about global warming and the other assuming no global warming. They could even include various future events (measured sunlight intensity and atmospheric particulates, for both, and concentrations of various greenhouse gasses for the GW model). Even better would be if there was just one model, where actual GW data could be replaced with average data from (say) 1980 to 2000 for the non-GW model.

More realistically, I'd be interested to see any falsifiable predictions or models about climate change. In the likely case that there is no paired non-GW prediction, average temperatures could be taken from some reasonable recent time.

Past papers would give me a head-start -- I wouldn't need to wait 10 years to see how they pan out. But I would naturally have some concern about cherry picking. If I read that a study from 30 years ago predicted a steep rise in temperature that would lead to Florida being under water about now, how could I know if that was the accepted consensus or just one paper a GW opponent dug up? (Suggestions on how to avoid this bias would be appreciated.)

* The evidence for plate tectonics and phrenology was weak when each was first proposed; research eventually showed the former, but not the latter, to be valid. Some came to that conclusion faster than others... I suppose I'm one of the slow ones here.

I am 'the other kind' of skeptic about global warming. I look at climate charts, past predictions, and the many PF threads and come away unconvinced that global warming is occurring at all -- to say nothing of its cause. But global warming opponents are even less convincing to me, using outdated science and cherry-picked statistics. (Admittedly, both sides use the latter.) So I remain skeptical -- probably because I'm accustomed to stronger proof from more established fields.*

I'm not sure what you mean by "occurring at all". The warming trend is a measurement.

Suppose you make measurements of a long period comet passing the Sun. Are those measurements scientific? You can't repeat them; the comet has gone. You can't falsify them in the sense of showing that the orbit was something different. Is this measurement at all dubious as science?

But it occurred to me that most of my skepticism can be traced to the apparent lack of verifiability, in particular predicting future temperatures. This led me to ask myself: is the theory of global warming falsifiable?

Here's what I'd really like to see, in an ideal world. Two models, one using the scientific consensus about global warming and the other assuming no global warming. They could even include various future events (measured sunlight intensity and atmospheric particulates, for both, and concentrations of various greenhouse gasses for the GW model). Even better would be if there was just one model, where actual GW data could be replaced with average data from (say) 1980 to 2000 for the non-GW model.

More realistically, I'd be interested to see any falsifiable predictions or models about climate change. In the likely case that there is no paired non-GW prediction, average temperatures could be taken from some reasonable recent time.

Past papers would give me a head-start -- I wouldn't need to wait 10 years to see how they pan out. But I would naturally have some concern about cherry picking. If I read that a study from 30 years ago predicted a steep rise in temperature that would lead to Florida being under water about now, how could I know if that was the accepted consensus or just one paper a GW opponent dug up? (Suggestions on how to avoid this bias would be appreciated.)

I think this emphasis on predictions is not really what Popper intended by the falsification criterion; there can be all kinds of empirical tests of a scientific idea. In any case, work on the philosophy of science since Popper has suggested that the principle of falsification is not so black and white. Scientific theories frequently get adjusted to deal with anomalous results. This could make an interesting general comment on the philosophy of science; the work of Imre Lakatos would be an important counter to an overly strict focus on falsification.

Consider your example of plate tectonics. Understanding how this theory became recognized as valid doesn't fit well with a requirement for "predictions".

With that quibble in mind, we can actually give some predictions in the sense you are requesting. I don't think this is "cherry picking". Papers that make predictions which clearly fail usually are describing theories that are falsified, and so do not go on to become part of the development of that field of science. (Although, as noted, sometimes they might; such a paper may not actually falsify a theory so much as show a limit or an additional consideration.)

The first paper to make quantified predictions on a time scale that can be checked in the present is, I think, this one.

This was based on some of the early, primitive climate models, and it still serves as a good introduction to the relevant physics. Because the paper is so early, it explains clearly things that are sometimes taken for granted in modern publications. At the time of this paper, there was no clear warming trend apparent above natural variations. There were hints, but not enough to be definite about global warming.

From the summary:

The global temperature rose by 0.2°C between the middle 1960's and 1980, yielding a warming of 0.4°C in the past century. This temperature increase is consistent with the calculated greenhouse effect due to measured increases of atmospheric carbon dioxide. Variations of volcanic aerosols and possibly solar luminosity appear to be primary causes of observed fluctuations about the mean trend of increasing temperature. It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980's.

The expectation that "the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century" has been met; around 1975 is now conventionally identified as the start of a strong warming trend that is specifically linked to carbon dioxide. In 1980 this trend was not observable.

The 1980s did indeed show strong warming.

The clearest prediction and comparison with a "no trend" null hypothesis is figure 7 of the paper:

I have superimposed on top of this the modern GISS temperature record in red. Note that the measured warming exceeds what was predicted; one of the reasons for this is apparent in the figure caption. The prediction was based on slow growth of CO2 emissions, and omits other trace gases. This climate model was also crude by comparison with modern models, in particular in the way it handled the ocean. Nevertheless, it was sufficient to give a reasonable estimate for global anomalies.

I include the 2006 reference, as it includes a relevant retrospective, looking back at their 1988 prediction and the subsequent observations. I originally considered citing the 1988 paper, but decided it was better to use the older reference.

The Goddard research group involved in these publications has an enviable record of good predictions. Not perfect, of course, in the sense of giving a lock step match with subsequent observations, but well within the bounds that conventionally represent a very successful research program.

That hasn't stopped some strange criticisms being made, of course. But in any case; there's the paper, it is a legitimate reference, and you are free to present criticisms on your own behalf with reference to this paper. It is the same paper that was the basis for the presentation made to Congress in the same year.

If you choose to get your information from some source that doesn't actually meet guidelines, that should be fine as far as I am concerned. It just means you need to do a bit more work in expressing your criticism with reference to the 1988 paper directly on its own merits, rather than by simply referring to a source of insufficient standing in its own right.

I've done this myself; used a blog or newspaper article to get some bit of information to get started, then checked it out more carefully and worked out a post which sticks to reliable sources for references. It's more work but a better result. Be careful though; there are good reasons for preferring the legitimate scientific literature, and there may be additional pitfalls if you go more widely.

The 1988 paper uses a more advanced model, and three scenarios. Figure 3a from the paper shows the scenarios, and as before I overlay the subsequent instrument record in red, for comparison.
http://lh6.ggpht.com/_WtnYwFZtgHI/SztsiM4C-iI/AAAAAAAAAiA/hcWope5eGpE/s800/Hansen1988_fig3a.JPG

Scenario B was singled out in 1988 as the most likely; and as it turns out has been quite close to observations. In fact, the match is so close that it is in part accidental. There are some uncertainties expected in a projection like this, and the match is well within those uncertainties.

The claims for a mismatch with prediction are alluded to in the 2006 paper. I'll quote an extract:

The first GCM calculations with transient greenhouse gas (GHG) amounts, allowing comparison with observations, were those of Hansen et al. (12). It has been asserted that these calculations, presented in congressional testimony in 1988 (13), turned out to be ‘‘wrong by 300%’’ (14). That assertion, posited in a popular novel, warrants assessment because the author’s views on global warming have been welcomed in testimony to the United States Senate (15) and in a meeting with the President of the United States (16), at a time when the Earth may be nearing a point of dangerous human-made interference with climate (17).

The congressional testimony in 1988 (13) included a graph (Fig. 2) of simulated global temperature for three scenarios (A, B, and C) and maps of simulated temperature change for scenario B. The three scenarios were used to bracket likely possibilities. Scenario A was described as ‘‘on the high side of reality,’’ because it assumed rapid exponential growth of GHGs and it included no large volcanic eruptions during the next half century. Scenario C was described as ‘‘a more drastic curtailment of emissions than has generally been imagined,’’ specifically GHGs were assumed to stop increasing after 2000. Intermediate scenario B was described as ‘‘the most plausible.’’ Scenario B has continued moderate increase in the rate of GHG emissions and includes three large volcanic eruptions sprinkled through the 50-year period after 1988, one of them in the 1990s.

Real-world GHG climate forcing (17) so far has followed a course closest to scenario B. The real world even had one large volcanic eruption in the 1990s, Mount Pinatubo in 1991, whereas scenario B placed a volcano in 1995.

[...]

Close agreement of observed temperature change with simulations for the most realistic climate forcing (scenario B) is accidental, given the large unforced variability in both model and real world. Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used (12), 4.2°C for doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3±1°C for doubled CO2, based mainly on paleoclimate data (17). More complete analyses should include other climate forcings and cover longer periods. Nevertheless, it is apparent that the first transient climate simulations (12) proved to be quite accurate, certainly not ‘‘wrong by 300%’’ (14). The assertion of 300% error may have been based on an earlier arbitrary comparison of 1988–1997 observed temperature change with only scenario A (18). Observed warming was slight in that 9-year period, which is too brief for meaningful comparison.

Hansen's reference 14 for a supposed 300% error, by the way, is Michael Crichton's State of Fear (Harper Collins, New York 2004). Crichton is, to say the least, a very poor guide on climate science.

Reference 12 is the 1988 paper. The 1981 paper is not listed, because it did not use all the various greenhouse gases; only carbon dioxide. The 1988 prediction is more sophisticated.

In this case the predictor and the verifier are the same; at the least, they are not independent. Is that science?

The verification is the data, which anyone can check. That's science.

The point of the 2006 paper is to deal specifically with widely repeated and completely bizarre claims of a failed prediction. For example, your google image search will turn up plenty of images that compare only with scenario A. It is appropriate to caution people that googling for images is not good science. Many of the images you turn up, particularly those only showing scenario A, are flatly dishonest.

I'll come back to the scenarios. For instance, what were the predicted and the actual emissions of greenhouse gasses?

Note that emissions are not a prediction. They are a scenario. Also, what counts for the climate response is atmospheric concentrations, and it is the change in atmospheric levels that is used for the calculations. Scenario B is the closest to what occurred for carbon dioxide levels, and this is the major forcing. The details are in appendix B of the 1988 paper. Page 9361 indicates that in scenario B the rate of growth peaks at 1.9 ppm/year in 2010. That's the current mean rate of increase.

Scenario B also includes some random volcanic eruptions, to give some realistic natural cooling. This is not a prediction, of course; but as it turns out we did have the big Pinatubo eruption in particular, which also makes scenario B the one to compare. It is singled out as the most likely in 1988, and it has turned out to be the closest, and it also gives a quite accurate climate response.

The prediction is so close, in fact, that Hansen specifically notes this level of match is basically an accident. Scientifically they expected to get somewhere near; as things turned out they got much closer than they had any right to expect, given the magnitudes of unmodeled short term natural variation.

I'm not sure what you mean by "occurring at all". The warming trend is a measurement.

You're right to call me out on that; I didn't properly explain. There is great variability in year-over-year temperature, and the underlying trend is not clear. Looking at 20th-century data, there appears to be significant warming, but this could be a statistical artifact or part of a usual cycle. Distinguishing the possibilities (that the temperatures are an aberration rather than a trend, that they are higher as part of a usual cycle, or that they are higher for other reasons, e.g. human emissions) can be difficult. For a non-scientist it's especially hard to distinguish.
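The statistical question raised here -- whether a warming trend can be distinguished from year-to-year noise -- can be illustrated with a toy least-squares calculation. Everything below (the trend, the noise level, the series length) is invented for the sketch; it is not real climate data.

```python
# Toy illustration: given noisy annual values, is an underlying linear
# trend distinguishable from zero? All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
true_trend = 0.007          # degrees per year (hypothetical)
noise_sd = 0.15             # year-to-year variability (hypothetical)
temps = true_trend * (years - years[0]) + rng.normal(0, noise_sd, years.size)

# Least-squares slope and its standard error
x = years - years.mean()
slope = (x @ temps) / (x @ x)
resid = temps - temps.mean() - slope * x
se = np.sqrt((resid @ resid) / (years.size - 2) / (x @ x))

print(f"estimated trend: {slope*100:.2f} +/- {2*se*100:.2f} degC/century")
```

With a century of data the trend stands well clear of the noise; over a decade or two the same calculation gives a much wider uncertainty band, which is essentially the difficulty described above.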

In any case, work on the philosophy of science since Popper has suggested that the principle of falsification is not so black and white. Scientific theories frequently get adjusted to deal with anomalous results. This could make an interesting general comment on the philosophy of science; the work of Imre Lakatos would be an important counter to an overly strict focus on falsification.

I completely agree -- but I would be most pleased to see falsifiables regardless. (I suspect PF will deliver on this one!) While questions of philosophy of science and mathematics interest me, I have no background. I'd need to read a lot more before I could even find the questions to ask. (If a thread on that starts, though, I'd be happy to read, especially if there are some nice justifications for the principle of parsimony... always seemed to be poorly justified though sorely needed.)

With that quibble in mind, we can actually give some predictions in the sense you are requesting. I don't think this is "cherry picking". Papers that make predictions which clearly fail usually are describing theories that are falsified, and so do not go on to become part of the development of that field of science.

Yes... though I still have those concerns. I could claim that though astrology has been falsified in the past, my New Brand of Astrology (tm) is not based on those failed theories... and yet you would still be justified to believe that mine will also be bunk. (Clearly this is an extreme example, but I trust you see my point.)

This was based on some of the early, primitive climate models, and it still serves as a good introduction to the relevant physics. Because the paper is so early, it explains clearly things that are sometimes taken for granted in modern publications. At the time of this paper, there was no clear warming trend apparent above natural variations. There were hints, but not enough to be definite about global warming.

From the summary:

The global temperature rose by 0.2°C between the middle 1960's and 1980, yielding a warming of 0.4°C in the past century. This temperature increase is consistent with the calculated greenhouse effect due to measured increases of atmospheric carbon dioxide. Variations of volcanic aerosols and possibly solar luminosity appear to be primary causes of observed fluctuations about the mean trend of increasing temperature. It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980's.

The expectation that "the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century" has been met; around 1975 is now conventionally identified as the start of a strong warming trend that is specifically linked to carbon dioxide. In 1980 this trend was not observable.

The 1980s did indeed show strong warming.

I will look up that paper. The prediction is not sufficient for me: a priori, there is a 50/50 chance that temperature would rise, and had it not risen I would not have started this post. But it is possible that the model in the paper could be used to 'predict' how much warming there would have been in the 1980s, based on actual CO2 levels during that period.

(Of course it would only be sensible to be lenient in the interpretation of its accuracy, in light of its early date.)

The clearest prediction and comparison with a "no trend" null hypothesis is figure 7 of the paper:

I have superimposed on top of this the modern GISS temperature record. Note that the measured warming exceeds what was predicted; one of the reasons for this is apparent in the figure caption. The prediction was based on slow growth of CO2 emissions, and omits other trace gases. This climate model was also crude by comparison with modern models, in particular in the way it handled the ocean. Nevertheless, it was sufficient to give a reasonable estimate for global anomalies.

Excellent. Modulo remaining concerns about cherry-picking, this completely addresses my issue. I appreciate the effort you have given to my request.

Scenario B was singled out in 1988 as the most likely; and as it turns out has been quite close to observations. In fact, the match is so close that it is in part accidental. There are some uncertainties expected in a projection like this, and the match is well within those uncertainties.

I don't really agree. Up to 2005 it matches well, but it diverges significantly after then. In that period, eyeballing it, the error between B and reality is roughly the same as that between average 1980-2000 temperature* and reality.

But I'm still essentially satisfied. The 1981 prediction matches reasonably well, and the 1988 paper isn't worse than the naive model. Perhaps I'll compare the 2006 predictions to 1986-2006 averages 4-5 years from now.

* Notice that I used the same year range I suggested in my original post, before looking at the data. I did this to avoid any danger of cherry-picking. If I'm going to worry about others doing it, I'd better be circumspect about avoiding it myself.
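The comparison being eyeballed above -- a model prediction scored against a naive "no trend" baseline -- can be made precise with a simple error metric such as RMSE. The numbers below are invented placeholders, not the actual GISS or scenario B values.

```python
# Sketch: score a model prediction against a naive constant baseline
# (e.g. a 1980-2000 mean) using root-mean-square error.
# All values below are hypothetical placeholders.
import numpy as np

observed   = np.array([0.54, 0.63, 0.62, 0.54, 0.68])  # anomalies, degC (invented)
scenario_b = np.array([0.70, 0.75, 0.80, 0.85, 0.90])  # model values (invented)
baseline   = 0.25                                      # "no trend" mean (invented)

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

print("model RMSE:   ", rmse(scenario_b, observed))
print("baseline RMSE:", rmse(np.full_like(observed, baseline), observed))
```

Whichever candidate has the lower RMSE against the instrument record is the better predictor over that window; this is the quantitative version of "the 1988 paper isn't worse than the naive model".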

The point of the 2006 paper is to deal specifically with widely repeated and completely bizarre claims of a failed prediction. For example, your google image search will turn up plenty of images that compare only with scenario A. It is appropriate to caution people that googling for images is not good science. Many of the images you turn up, particularly those only showing scenario A, are flatly dishonest.

Note that Hansen et al DO predict based on annual emission and NOT on increase in atmospheric greenhouse gas.

We have total CO2 emission data available here: http://cdiac.ornl.gov/ftp/ndp030/global.1751_2006.ems [Broken].

Now we can compare Hansens scenario A and B CO2 growth with reality until 2006 as follow:

Note:

All emission estimates are expressed in million metric tons of carbon.

in graphical form:

Edit/update: trying to get results for 2007 and 2008, I found this, which reports increases of 3.3% and 1.7% for 2007 and 2008 respectively. That would update the comparison table and graph as follows:

So, which CO2 scenario of Hansen is closest to reality? And what was that about growing long noses?
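The projection being compared in this post is just compound growth at 1.5% per year from a base year. A minimal sketch, using a rough placeholder base figure rather than the actual CDIAC number:

```python
# Sketch: project emissions growing at a fixed 1.5%/yr from a base year.
# The base figure is a rough placeholder, not the CDIAC value.
base_year, base_emissions = 1988, 6000.0  # million metric tons C (placeholder)

def projected(year, rate=0.015):
    """Compound growth at `rate` per year from the base year."""
    return base_emissions * (1 + rate) ** (year - base_year)

for year in (1990, 2000, 2006):
    print(year, round(projected(year)))
```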

Note that Hansen et al DO predict based on annual emission and NOT on increase in atmospheric greenhouse gas.

The extract you have quoted demonstrates that this was based on the increase in atmospheric gases, not annual emissions.

Note that it starts with the Keeling curve. This is a measure of atmospheric concentrations. Everything in that extract goes on to calculate with atmospheric concentrations, which for CO2 is measured in ppm.

You are not even reading the reference you quote! Everything there is explicitly quantified by looking at atmospheric concentrations directly. All your graphs are irrelevant, because they are in terms of industrial emissions; which is not what the reference is doing.

So, which CO2 scenario of Hansen is closest to reality? And what was that about growing long noses?

Scenario B is definitely closest to reality. Ignoring your own personal graphs, which have no good relationship to what you have quoted, you can see from the actual words of the paper that they propose growth rates for the atmospheric concentrations directly.

Charges of dishonesty are very serious. I don't apply them across the board. I can believe that most of the errors in describing climate science in popular discourse are simple honest mistakes; very naive but not actually dishonest. However, this confusion and error is fostered by people who have no such excuse. The initial choice to focus on scenario A -- despite the fact that scenario B is definitely closer to reality and was also singled out in 1988 as the most likely -- cannot be so excused. It's been passed on by people who don't really know much about it and have probably not even read the paper.

You've now quoted the paper. I suggest you read it again, more carefully.

Cheers -- sylas

PS. Added in edit. It's not enough to look at the word "emissions". The question is how this is quantified. The increase in atmospheric CO2 levels is from industrial emissions; but you can't map them one to one. The numbers that are used are atmospheric concentrations. Look at what is actually calculated and how. The 1.5% applies a growth in the annual increment of atmospheric CO2. In scenario B, this goes from about 1.5 ppm/year in the early 1980s to a projected 1.9 ppm/year in 2010.

The extract you have quoted demonstrates that this was based on the increase in atmospheric gases, not annual emissions.

Note that it starts with the Keeling curve. This is a measure of atmospheric concentrations. Everything in that extract goes on to calculate with atmospheric concentrations, which for CO2 is measured in ppm.

You are not even reading the reference you quote! Everything there is explicitly quantified by looking at atmospheric concentrations directly. All your graphs are irrelevant, because they are in terms of industrial emissions; which is not what the reference is doing.

Scenario B is definitely closest to reality. Ignoring your own personal graphs, which have no good relationship to what you have quoted, you can see from the actual words of the paper that they propose growth rates for the atmospheric concentrations directly.

[Highlights mine.] I cannot see words indicating concentrations; just the reverse. The word 'emissions' is used several times in that extract, and 'emissions' is used explicitly in section 4.1, where the paper formally defines the scenarios.

sylas said:

[...] The initial choice to focus on scenario A -- despite the fact that scenario B is definitely closer to reality and was also singled out in 1988 as the most likely

I suspect everyone now has read the "most likely" phrase. How this is used as some kind of fundamental basis of the argument escapes me. Hansen proposed three emissions scenarios to feed their model, that is, three guesses as to likely human industrial policy for the next several decades. As an aside, Hansen hazards a guess that B, a correction and slowdown in human emissions activity, was the 'most likely', and it followed of course that their predicted temperature anomaly for scenario B was most likely. As it happens, this (scenario B slowdown) did not occur, but so what? Scenario B was simply speculation on human activity, only an input to the climate model, and had nothing to do with the validity of the model by itself. The scientific meat of this paper is the predictions of global temperature given each of these inputs. Reality gave us emissions scenario A. Given scenario A, Hansen predicts global temperature anomaly A.

Hansen 88 Section 4.1 said:

Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions

sylas said:

[...]You've now quoted the paper. I suggest you read it again, more carefully.

I've read it as well.

sylas said:

PS. Added in edit. It's not enough to look at the word "emissions". The question is how this is quantified.

Oh? It appears to me to be quantified as a 1.5% yr-1 increase in emissions.

sylas said:

The increase in atmospheric CO2 levels is from industrial emissions; but you can't map them one to one.

Agreed. The various sinks are complicated and are a function of time from what I read elsewhere.

sylas said:

The numbers that are used are atmospheric concentrations.

That is not what is stated in the paper.

Andre has clearly shown here how emissions scenario A is closest to reality. The temperature anomalies in Figure 3A are the outcomes of the model, and from this model temperature anomaly A is what we should have seen. We did not.

Afterthought: Given the casual use of ppm metrics in the paper, I have the impression that Hansen et al assumed they knew the mapping from emissions to concentration, took it as relatively constant over this time period, and that assumption turned out to be wrong.

Afterthought: Given the casual use of ppm metrics in the paper, I have the impression that Hansen et al assumed they knew the mapping from emissions to concentration, took it as relatively constant over this time period, and that assumption turned out to be wrong.

No, given the use of ppm metrics, it is explicit that what is being estimated is the atmospheric levels. There's no mapping from emissions to concentration used anywhere. There's no attempt to quantify emissions as a number of tons of carbon or anything like that. When numbers are used, they are always an estimate of atmospheric levels. Of course the cause for the increasing levels of CO2 are emissions, and this is why they are mentioned, but at no point in the paper are the magnitudes of emissions used, and there's no carbon model employed.

The scenarios are given directly in atmospheric levels, all the way.

The extract given by Andre is from the section of Appendix B called "trace gas scenarios". Let's walk through it, focusing on CO2, because this is the largest factor involved.

Trace gas scenarios. Trace gas trends beginning in 1958 (when accurate measurements of CO2 began) were estimated from measurement data available in early 1983, when we initiated our transient simulations. References to the measurements are given by Shands and Hoffman [1987]. ....

[Note, this is measurement of atmospheric CO2]​

Specifically, in scenario A CO2 increases as observed by Keeling for the interval 1958-1981 [Keeling et al., 1982] and subsequently with 1.5% yr-1 growth of the annual increment. ...

[Note the Keeling curve is a measure of atmospheric levels, and the increment here is obviously the annual increment of atmospheric CO2]​

In scenario B the growth of the annual increment of CO2 is reduced from 1.5% yr-1 today to 1% yr-1 in 1990, 0.5% yr-1 in 2000, and 0 in 2010; thus after 2010 the annual increment in CO2 is constant, 1.9 ppmv yr-1.

[Again; this is the increment in atmospheric CO2, culminating at a fixed rate of increase of 1.9 ppmv yr-1 which is indeed about what we have today.]​

The use of ppm is not "casual"; it's the proper unit, consistently applied throughout the paper, for the CO2 levels.

Of course the paper discusses emissions, because this is the cause of the annual increment. All the calculations in the scenario use atmospheric levels, which are indeed increasing pretty much as given in scenario B.

Furthermore... this is the scenario; it's not a prediction. The actual predictions are the response to scenarios. The paper does identify B as the most likely, but that's a secondary issue; it is not what was being predicted. The real meat of the work is the estimation of temperature response to different scenarios, which is extremely close. That is, this prediction demonstrates that the model does give credible estimates of climate response to forcing.

You and Andre are mixing up emissions with atmospheric levels in a way that the paper does not. The paper is describing quantified scenarios for atmospheric levels, which are of course driven by emissions... but there's no explicit mapping provided. Just three alternatives for atmospheric levels, of which B is easily and unambiguously the closest to what has transpired.
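For concreteness, the scenario B prescription quoted from Appendix B (growth of the annual CO2 increment declining from 1.5%/yr to zero in 2010, then a constant increment) can be sketched numerically. The starting increment and 1983 concentration below are rough assumptions, and linearly interpolating the growth rate between the stated anchor years is my simplification, not something the paper specifies.

```python
# Sketch (with assumed starting values): reconstruct the scenario B
# annual CO2 increment, whose growth rate tapers from 1.5%/yr to
# 1%/yr by 1990, 0.5%/yr by 2000, and 0 by 2010.
import numpy as np

years = np.arange(1983, 2021)
# Linear interpolation between the anchor points quoted above
# (my simplification of the paper's prescription).
growth = np.interp(years, [1983, 1990, 2000, 2010], [0.015, 0.010, 0.005, 0.0])

increment = 1.5  # ppm/yr, roughly the early-1980s annual increment (assumption)
conc = 343.0     # ppm, approximate atmospheric CO2 in 1983 (assumption)
levels = []
for g in growth:
    conc += increment
    levels.append(conc)
    increment *= 1 + g  # grow next year's increment

print(f"annual increment after 2010: {increment:.2f} ppm/yr")
print(f"CO2 level in 2020: {levels[-1]:.0f} ppm")
```

The increment this produces settles near the 1.9 ppmv/yr figure quoted from the paper, and the resulting concentration path runs close to the observed Keeling curve, which is the sense in which scenario B tracks what actually transpired.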

I am 'the other kind' of skeptic about global warming. I look at climate charts, past predictions, and the many PF threads and come away unconvinced that global warming is occurring at all -- to say nothing of its cause.

Same here.

But it occurred to me that most of my skepticism can be traced to the apparent lack of verifiability, in particular predicting future temperatures. This led me to ask myself: is the theory of global warming falsifiable?

Got the same burr under my blanket. When GW first came to the attention of the general public, part of the prediction was a decrease in hurricane activity. Climatologists said that their computer models showed that GW would lead to a decrease, so if we see a decrease in hurricane activity, that is proof that GW is real. This "prediction" came near the tail end of a long period of decline in hurricane activity. Shortly after the prediction was published, the long decline ended, and hurricane activity has been on the rise ever since. Today, hurricane activity is nearly back up to the levels observed in the early years of record keeping.

Shortly after the increase in storm activity began, the consensus of climatologists (many of them the exact same climatologists that "predicted" a decrease) said that their computer models predict an increase in storm activity, so if we see the number and severity of hurricanes increasing, that too is proof that GW is a fact. My knee-jerk reaction was to think, "all they need now is a third model in which storm activity stays exactly as it is..." Anyhow, I too have been looking for falsifiable claims, and data I can accept as credible. I can find none of the former and, frankly, precious little of the latter.

I find this fairly frustrating, as I would like to have some position on the matter. But as things currently stand, I can't honestly say I believe anybody from either side with any degree of confidence.

Shortly after the increase in storm activity began, the consensus of climatologists (many of them the exact same climatologists that "predicted" a decrease) said that their computer models predict an increase in storm activity, so if we see the number and severity of hurricanes increasing, that too is proof that GW is a fact. My knee-jerk reaction was to think, "all they need now is a third model in which storm activity stays exactly as it is..." Anyhow, I too have been looking for falsifiable claims, and data I can accept as credible. I can find none of the former and, frankly, precious little of the latter.

I also found the predictions of storm activity non-scientific. But there seems to be good evidence in terms of temperature increase, no?

So I can easily reject storm-related predictions while accepting (provisionally) the general warming prediction.