
Bugs in the Climate System

Anyone who uses a computer knows the trouble they can cause. Faulty hardware, operating system glitches, misbehaving programs, improperly installed drivers: the list goes on and on. The average user faces no end of possible issues, many of them subtle and tricky to solve. A bug of sorts has recently been revealed in climate modeling systems, one that will need to be handled if any model is going to have credibility, and there is no tech support line to call when your climate model misbehaves.

The issue is not so much a specific bug, or "feature" as some might call it, in any one of the many climate models being tested right now, but rather a subtle problem with the computers the models are run on. It was recently discovered that numerous major models under observation are heavily sensitive to their starting conditions. Even a small change in those conditions produces massive variance in later output, yielding a spaghetti-like tangle of projections that cannot tell us which direction the climate is heading. Whatever one's position in the climate debate, the discussion becomes pointless chatter if none of the models agree.
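This kind of sensitivity is easy to demonstrate with a toy chaotic system rather than a real climate model. The sketch below (an illustration only, using the classic logistic map, not any of the models discussed here) shows two runs whose starting values differ by one part in ten billion: they agree at first, then diverge until they bear no resemblance to each other.

```python
# Illustrative sketch only -- the logistic map, a textbook chaotic system,
# standing in for a climate model's sensitivity to starting conditions.

def logistic(x, r=4.0, steps=60):
    """Iterate x -> r*x*(1-x) and return the full trajectory."""
    traj = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic(0.3)
b = logistic(0.3 + 1e-10)  # a tiny nudge to the starting condition

# Early on, the two runs are practically indistinguishable...
early_gap = abs(a[5] - b[5])
# ...but later steps disagree on the order of the values themselves.
late_gap = max(abs(a[i] - b[i]) for i in range(30, 61))
print(early_gap, late_gap)
```

Each iteration roughly doubles the initial discrepancy, so even a microscopic nudge eventually dominates the trajectory, which is exactly the "spaghetti" behavior described above.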

A group of researchers at Yonsei University in Seoul, Korea, decided to investigate this further. Just because models depend heavily on their starting conditions, and two models may therefore diverge wildly from each other, doesn't necessarily make either model wrong. The researchers decided to test a different question: what effect does the computer a model runs on have on its results?

The researchers tested several different climate models on different equipment, varying the processor used and the program used to compile each model, and ran tests with differing starting conditions as well. Each test simulated ten days using data from early May 1996. The results were quite startling: the runs deviated significantly from one another after only ten days. It is clear from this data that how a computer handles the model can alter its results.

The underlying issue comes from how different computers handle the rounding of calculations. Different setups round results slightly differently, and since each set of results is used to compute the next, the errors compound over time. There will always be some error, as no model is perfect, but the problem is that these errors feed on one another. Over time, every model affected by them becomes less and less reliable.
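To see how rounding alone can do this, consider the sketch below. It is an assumption-laden toy, not real climate code: one run keeps full double precision, while the other rounds each intermediate result to seven significant digits, mimicking a machine or compiler that rounds differently. Fed back into a chaotic iteration, the tiny per-step discrepancy grows until the two "identical" models disagree entirely.

```python
# Toy demonstration of compounding rounding error (not actual climate code).

def step(x, r=3.9):
    """One step of a chaotic iteration standing in for a model time step."""
    return r * x * (1.0 - x)

full, rounded = 0.5, 0.5
gap = []  # |difference| between the two runs at each step
for _ in range(60):
    full = step(full)
    # Simulate a machine that keeps only 7 significant digits per step.
    rounded = float(f"{step(rounded):.7g}")
    gap.append(abs(full - rounded))

# gap starts near the rounding error (~1e-8) and grows step by step,
# because each step amplifies the discrepancy left by the previous one.
print(gap[0], gap[-1])
```

The per-step rounding error here is around one part in ten million, yet because it is re-used as input to the next step, the divergence eventually reaches the same scale as the values themselves, which is the compounding behavior described above.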

For the sake of climatology, this issue must be addressed. New models need to be built that are not sensitive to these rounding errors and machine-to-machine differences. The problem affects all of climate research, and other fields as well, especially those that model chaotic, iterative systems, such as economics and physics. Thankfully, now that the problem has been identified, it can be addressed. There may be no phone number one can call to fix this bug, but at least researchers can work toward a solution.

Scott Michael Slone is C2ST’s resident intern. He is currently a senior at Illinois Institute of Technology, majoring in Materials Science and Engineering. He enjoys the work C2ST is doing to help promote science and technology in the Chicagoland region, and is glad to help them as well. His scientific interests include nanotechnology and molecular machines. He hopes you’ll enjoy his technical tidbits on these and other subjects.

Watts, A. (2013, July 27). Another uncertainty for climate models – different results on different computers using the same code. Retrieved from Watts Up With That: http://wattsupwiththat.com/2013/07/27/another-uncertainty-for-climate-models-different-results-on-different-computers-using-the-same-code/