The Case for Open Computer Programs

The authors of this “perspective” make the point that “without code, direct reproducibility is impossible”. By reproducibility they mean the ability to reproduce a “scientific paper’s central finding”, not the “replication of each specific numerical result down to several decimal places”. Reproducibility is part of the scientific method. I personally think that it is key to advancing science. Only by understanding what others have done can we link different concepts in our minds, which is the basis for novel thoughts.

Kyle Niemeyer points out in his Ars Technica article that key reasons against the publication of source code are selfishness (“to slow down the competition by keeping the results of hard work to yourself”) and hopes of making money from the source code. He points to an argument by Daniel Lemire, who notes that “open sourcing […] code not only makes […] work repeatable, but spreads the ideas faster and makes the code better in the long run, since other users can help debug it.”

The paper also mentions an important concept that improves reproducibility: literate programming, introduced by Donald Knuth. The concept was adopted early on by Mathematica notebooks, Sweave for R, and IPython notebooks for Python, among others.
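To give a flavor of the idea (my own minimal sketch, not an example from the paper): in literate programming the explanatory prose leads and the code is woven into it, so a reader can follow the reasoning and re-run every step.

```python
# A toy literate-style analysis in plain Python: comments carry the
# narrative, code fragments follow the reasoning step by step.

# Suppose we have eight measurements from a (hypothetical) experiment:
measurements = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# The central finding we want others to reproduce is the sample mean,
# i.e. the sum of the values divided by their count:
mean = sum(measurements) / len(measurements)

# Anyone with the data and this document can verify the result:
print(mean)  # 5.0
```

Notebook environments such as Mathematica, Sweave, and IPython take this further by letting the document itself execute.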

Three interesting bits from the article:

Microsoft affirms that the treatment of floating point numbers in its popular Excel spreadsheet “…may affect the results of some numbers or formulas due to rounding and/or data truncation.”
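This is not specific to Excel: binary floating point cannot represent most decimal fractions exactly, so rounding effects like the ones Microsoft describes show up in any language. A quick illustration in Python:

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so their sum is not
# exactly 0.3:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Repeated accumulation compounds the error:
total = sum(0.1 for _ in range(10))
print(total == 1.0)      # False

# Exact decimal arithmetic avoids this class of surprise:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```

Results like these are harmless in isolation, but they are one more reason why “reproducing each numerical result down to several decimal places” is the wrong standard for reproducibility.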

There are programming errors. Over the years, researchers have quantified the occurrence rate of such defects at approximately one to ten errors per thousand lines of source code.
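Applying that quoted range to a code base of illustrative size (the 20,000 lines here are my own hypothetical figure, not from the article) shows why the rate matters:

```python
# Defect rate quoted in the article: 1 to 10 errors per 1,000 lines.
lines = 20_000           # hypothetical size of a scientific code base
low_rate, high_rate = 1, 10

low = lines // 1_000 * low_rate
high = lines // 1_000 * high_rate
print(f"{low} to {high} latent defects")  # 20 to 200 latent defects
```

Even at the optimistic end, a published model of modest size likely ships with dozens of latent defects, which is precisely why opening the code for inspection matters.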

A study from IBM demonstrated that “fully a third of all the software failures in the study took longer than 5,000 execution years (execution time indicates the total time taken executing a program) to fail for the first time.”

Photo by nerovivo – http://flic.kr/p/zWeRv

Communication of Climate Projections in US Media Amid Politicization of Model Science

In this paper, the authors make a point that goes beyond reproducibility. Some models, climate models in this case, are so complex that this hinders “the communication of their science, uses and limitations.”

According to the authors, this hindrance is mostly due to a lack of belief in models among the public, combined with a decreasing number of mentions in the media:

“Of those surveyed in 2010, 64% reported either that they believed that scientists’ computer models are too unreliable to predict the climate of the future (41%), or that they did not know whether to trust them (23%)”.

The researchers first looked at articles published between 1998 and 2010 that mentioned climate change in the Wall Street Journal, New York Times, Washington Post, and USA Today. The quantity of coverage peaked in 2007, when the fourth IPCC report was released and public acceptance of climate science hit the high water mark. Yet even in 2007, climate models rarely got a mention. Over 4,000 articles (including opinion pieces) about climate change were published that year, but only 100 made reference to climate models. And that fraction continually declined through the period studied.

Scott Johnson points out in his Ars Technica article that one solution to this problem could be a public better educated in science.