Variable Variability

Sunday, 9 December 2018

I have to apologize to Peter Hadfield (better known as Potholer54) as I am not sure I have found the source of the talking point that we only have 12 years. Science journalist Hadfield always encourages real skeptics to check claims by searching for the source in the scientific literature. The best solution to the riddle I found is really disappointing, but independent of the source, the claim is terrible.

How much warming is seen as acceptable is a political compromise between how hard it is to change the energy system (against powerful vested interests) and how much damage people see as acceptable. All world leaders agreed in the Paris climate agreement on the following compromise.

Holding the increase in the global average temperature to well below 2°C [3.6°F] above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C [2.7 °F] above pre-industrial levels, recognising that this would significantly reduce the risks and impacts of climate change.

Previously the political compromise was to keep the warming below 2°C, and most scientific work thus focused on the impacts of 2°C warming and on possible ways to make the transition that fast. After Paris, politicians asked scientists to study how much the damage from climate change would be reduced and how much harder the transition would be if warming were limited to 1.5°C.

The Intergovernmental Panel on Climate Change (IPCC) thus brought all the research on this topic together and in October published a report on the difference between 1.5°C and 2°C warming. In the media reporting, a frequent talking point was somehow that we only have 12 years to stop climate change.

How much CO2 can we still emit?

Before looking into it, I had guessed the claim would be based on the carbon budget: how much CO2 we can still emit before we reach the amount that will likely warm the Earth by 1.5°C. Note that CO2 accumulates in the climate system and the warming is determined by the total historical amount we emit. I sometimes worry people think that when global warming becomes too devastating we can stop emitting CO2 and the problem is solved. When the French stopped dumping salt in the Rhine and Meuse, the water quality quickly improved. CO2 is not like that. When we stop emitting CO2, the warming will even continue for some time; the temperature will not go back to what we used to have, and many consequences (such as sea level rise) will keep getting worse, just more slowly.

The Earth has already warmed by about 1°C since the end of the 19th century.* Based on past emissions alone we would not reach the 1.5°C warming level yet, according to the IPCC report. A part of the carbon budget is still left. This is a bit more than 10 times how much we currently emit per year and could thus have been the source of the talking point. The Summary for Policy Makers of the IPCC states:

Limiting global warming requires limiting the total cumulative global anthropogenic emissions of CO2 since the pre-industrial period, that is, staying within a total carbon budget (high confidence). By the end of 2017, anthropogenic CO2 emissions since the pre-industrial period are estimated to have reduced the total carbon budget for 1.5°C by approximately 2200 ± 320 GtCO2 (medium confidence). The associated remaining budget is being depleted by current emissions of 42 ± 3 GtCO2 per year (high confidence). The choice of the measure of global temperature affects the estimated remaining carbon budget. Using global mean surface air temperature, as in AR5, gives an estimate of the remaining carbon budget of 580 GtCO2 for a 50% probability of limiting warming to 1.5°C, and 420 GtCO2 for a 66% probability (medium confidence).
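The budget numbers in this quote can be checked with simple arithmetic. The following sketch (my own back-of-the-envelope calculation, not from the report) divides the remaining budgets by current annual emissions:

```python
# Back-of-the-envelope check of the remaining carbon budget numbers
# quoted from the IPCC SR15 Summary for Policy Makers (all in GtCO2).
budget_50 = 580          # remaining budget, 50% chance of staying below 1.5 degrees C
budget_66 = 420          # remaining budget, 66% chance
emissions_per_year = 42  # current annual emissions, GtCO2 per year

years_50 = budget_50 / emissions_per_year
years_66 = budget_66 / emissions_per_year
print(f"50% budget lasts about {years_50:.0f} years at current emissions")
print(f"66% budget lasts about {years_66:.0f} years at current emissions")
```

This gives roughly 14 and 10 years, which is why the remaining budget is "a bit more than 10 times" our current annual emissions.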

This would have been a better reason for the talking point than the possible reason below, but even then we do not have 12 years, we should do more NOW. We cannot wait 12 years and then suddenly stop all emissions. We are already doing a lot, half of all new electrical generation capacity in the world is already renewable power, but we need to do more and do this now. The only time better than now is decades ago.

On the other hand, the Earth does not explode in 12 years, any action reduces damages and adaptation costs. If we do not manage to limit the warming to 1.5°C, it would be better to limit it to 1.6°C than to 1.7°C, and so on. There is no brick wall we crash into, there is no cliff we fall into, there is no "deadline", CNN. Any limitation of the warming makes life on Earth better. Being lied into one Iraq war is catastrophic, but still better than 2 or 3 wars.

Any analogy is imperfect, but a better analogy would be that climate change is like crossing a busy street without looking: it is an irresponsible risk and the farther you go the higher the risk. Another analogy, which Michael Mann often uses, is walking into a minefield. We do not know when the mines will explode; better to walk as little into the field as possible, not 12 meters in.

Reducing CO2 emissions means changing our energy system and agriculture. This is a big task, and not something we will finish within 12 years. If we do more now, we will also have more than 12 years to finish the task.

It is much better to say that, to achieve the climate goals we set ourselves in the Paris climate agreement, we have to be at zero emissions in a generation. Or that we have to halve emissions by 2030.

IPCC has to use a few circumlocutions to avoid giving a direct answer to this question (for reasonable and understandable reasons). I’m not quite so constrained…

There are many issues related to the feasibility question of which physical climate-related issues are only one. The basic issue is that the effort to reduce emissions sufficiently to never get past 1.5ºC would require a global effort to decarbonize starting immediately that would dwarf current efforts or pledges. This seems unlikely (IMO).
...
So my answer is… no.

I get that there is reluctance to say this publically – it sounds as if one is complicit in the impacts that will occur above 1.5ºC, but it seems to me that tractable challenges are more motivating than impossible (or extremely unfeasible) ones – I would be happy to be proven wrong on this though.

The craziness begins

However, the press articles and TV segments on the IPCC report do not talk about the carbon budget. In most cases they do not explain at all where the 12 years comes from. The Guardian headline is: "We have 12 years to limit climate change catastrophe, warns UN." Given that English is the global language, the difficulty the English have understanding international relations is rather surprising. The IPCC is not the UN. More importantly for this post, the headline is not explained anywhere in the article.

The most surprising place to see the claim is Fox News**: "Terrifying climate change warning: 12 years until we’re doomed." That is some contrast to their evening television opinion shows operating as the PR arm of the Republican party. The article was on their homepage, in the science section, under the category "Doomsday", republishing an article written by the New York Post. Again, I cannot find a justification for the headline anywhere in the article.

The Sunrise Movement will visit members of Congress to lobby for a Green New Deal on Monday, December the 10th, and would like climate scientists, who happen to be meeting at the AGU Fall Meeting in Washington DC, to join them. They also did not go to the source, but trusted the newspapers, writing in their call to action: "the latest UN report says we have 12 years to rapidly transform our economy to protect human civilization as we know it."

As an aside, had I been in Washington, I would have been happy to join them; I feel we need to do more to reduce climate change damages. But the Green New Deal is politics, not science, so I would not show up as a scientist (in one of those stereotypical white lab coats).

We may be getting a bit closer to the solution by listening to CNN. Interrupting their programming on the missing Malaysia Airlines aircraft, CNN titled a segment "Planet has only until 2030 to stem catastrophic climate change, experts warn" and said: "In Paris leaders pledge to keep the rise well below 2 degrees [Celsius]. This report now suggests we aim for 1.5°C. A benchmark we are predicted to reach in 2030."

(No, the politicians suggested we'd aim for 1.5°C.) Why does CNN think that experts warned about this? The reporting of LifeGate may give a hint:

If not curbed, this trend will lead the Earth to exceed the threshold of +1.5 degrees between 2030 and 2052 (according to the different scenarios the SR15 took into consideration). This means that in just 12 years we could reach the temperature rise that the Paris Agreement hypothesised for 2100.

LifeGate is a news organization calling itself "the leading point of reference for sustainable development since 2000". They at least describe the situation in sufficient detail that one can look at what the source says.

What does the IPCC say about 1.5°C, 2030 and 2052? The summary for policy makers states in their description of the current situation:

Human activities are estimated to have caused approximately 1.0°C of global warming above pre-industrial levels, with a likely range of 0.8°C to 1.2°C. Global warming is likely to reach 1.5°C between 2030 and 2052 if it continues to increase at the current rate. (high confidence)

Chapter one of the report has the details and confirms that the period 2030 to 2052 is based on an estimate of how much the world has warmed up to now and how fast it is warming. That fits: we have warmed about 1°C and the warming is about 0.2°C per decade. So one degree of additional warming would take five decades, and half a degree would take 25 years, which lands in the middle of the 2030 to 2052 interval.
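The extrapolation behind that interval is easy to reproduce. The sketch below (my own simplification, not code from the report) extends the current warming trend linearly from the numbers quoted above:

```python
# Linear extrapolation behind the IPCC's 2030-2052 interval,
# using the round numbers quoted in this post.
warming_so_far = 1.0  # degrees C above pre-industrial, around 2017
warming_rate = 0.2    # degrees C per decade at present
target = 1.5          # degrees C, the Paris aspirational limit

years_to_go = (target - warming_so_far) / warming_rate * 10
crossing_year = 2017 + years_to_go
print(f"1.5 degrees C would be crossed around {crossing_year:.0f}")
```

That gives about 25 years, a crossing year of roughly 2042, close to the middle of the likely range 2030 to 2052.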

This is the warming baseline that WUWT & Co., Big Coal and Big Oil, and the pro-torture politicians Trump, Jair Bolsonaro and Mohammad "Bone Saw" Salman are fighting for; in the remainder of the report it serves as the baseline for comparison with climate policies that benefit humanity.

The period of 2030 to 2052 is mentioned at the beginning of the report and was also mentioned early in the IPCC press conference. So it makes sense that it was noted in the press. But it is weird that they used the lower bound of the uncertainty range (12 years) and not the mean (26 years), and that they presented reaching the 1.5°C level as an immediate catastrophe or deadline, when it had a different function in the report.

Maybe I am too much of a scientist, but mentioning the lower boundary of an uncertainty range makes no sense without defining the range. The IPCC used the term "likely", which is defined as a probability between 66 and 100%. If you wanted to be more sure the period contains the year we will cross the 1.5°C level, for example "virtually certain" (99-100%), the range would have been much wider and the lower bound much earlier. Scientifically speaking the "12 years" without that context is meaningless.
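To illustrate how much the choice of confidence level matters, here is a purely illustrative sketch. It assumes, for simplicity only, that the crossing year is normally distributed and that 2030 to 2052 is a central 66% interval; the IPCC makes no such distributional claim, so the exact numbers are hypothetical.

```python
# Purely illustrative: how the crossing-year interval widens at a higher
# confidence level, under the (unrealistic) assumption of a symmetric
# normal distribution with 2030-2052 as the central 66% interval.
from statistics import NormalDist

mid_year = (2030 + 2052) / 2     # 2041, middle of the likely range
half_likely = (2052 - 2030) / 2  # 11 years

# Standard deviation implied by a central 66% interval of +/- 11 years:
z_66 = NormalDist().inv_cdf(0.5 + 0.66 / 2)
sigma = half_likely / z_66

# The central 99% interval ("virtually certain") would then be:
z_99 = NormalDist().inv_cdf(0.5 + 0.99 / 2)
lo, hi = mid_year - z_99 * sigma, mid_year + z_99 * sigma
print(f"99% interval: roughly {lo:.0f} to {hi:.0f}")
```

Under these toy assumptions the lower bound moves decades earlier than 2030, which is exactly why quoting a lower bound without stating the confidence level is meaningless.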

So what most likely happened is that we have scientists describing the progression of climate change. They give the uncertainty range and the press decides to mention only the lower boundary of this range. Then they somehow turn it into a deadline, put it in many headlines and never tell their readers where the number comes from. This made #12years a somewhat viral political meme. Chinese whispers of the worst kind. Journalists, please listen to Peter Hadfield: check the source.

In the ongoing climate negotiations in Poland, Saudi Arabia, the US, Russia and Kuwait objected to the conference "welcoming" the IPCC Special Report on 1.5°C warming, the BBC reports. The flood of messages on this BBC article suggests that the climate scientists who volunteered to write the report are not amused.

Thursday, 15 November 2018

The eleventh German Forum on Science Communication came to my home town, which was a good occasion to spread the word about Climate Feedback. I could write several blog posts inspired by this meeting, which is a good sign, and I might, but this post is just about the opening speech by Prof. Dr. Julika Griem, vice president of the German Science Foundation (DFG) and director of the Institute for Advanced Study in the Humanities. Julika Griem slaughtered a few holy cows in front of the vegan science communication folks, who were talking about it until the last session.

The conference offered a wide range of backgrounds: well-dressed communications people with slick, rehearsed talks; natural scientists like myself in t-shirts, speaking freely from PowerPoint bullet points; journalists somewhere in the middle. Griem gave the opening lecture in humanities style by reading out a written text. I would have preferred to read it myself, to be able to stop and think, so I was happy when they posted the text (in German). People who survive a humanities education deserve a medal.

Prof. Griem asked several controversial questions. Is it really such a good idea to focus so much on storytelling, adventures, heroes/personalisation and simplification? Aren't describing, explaining and arguing better suited to science? Modern science especially is about working together in large groups of many specialised scientists, in which management, administration and technical infrastructure are important. Documents, data and instruments are harder to turn into heroes than individuals are.

Later at the forum, the group The Debate, which organises science debates all over Germany, noted that their audiences missed talk about the process: how we do science, and why we are confident about certain results. In science journalism it is common to focus on the results themselves and on people.

This fits my blogging experience: posts on methodological and organisational matters are read surprisingly well, especially considering they are not spread on social media that much. Maybe everyone thinks they are the only one interested in such details, i.e. in how science works. Such meta-science topics do not fit that well in a storytelling framework. My explanation of how a scientific conference is organised will never win a Pulitzer Prize, but it was read well. (Alternative hypothesis: the title sounded nice, but the post was not what was expected, which would fit the linked example.)

Entertaining lecture on the excavation of Richard III by Turi King. The important insight for #scicomm: The communication process must not begin with the findings of Richard III but with the idea of wanting to find his remains. Communicate the process, not only results. #FWK18 pic.twitter.com/7TuA45ugz7

Related, but not from the conference: I discovered Street Epistemology on YouTube. With this method it is possible to have reasonably consistently friendly conversations about hard topics, even about religion, which you are normally not supposed to talk about because it escalates so easily.

The Street Epistemology strategy is: 1) do not attack the person, 2) do not question the conclusion (closely tied to identity), but 3) do talk about methodology. In that respect it fits well with the above suggestion to communicate better how we do science and why we know what we know.

As scientists we talk about methodology all the time, and we are normally able to have productive conversations at scientific conferences even though scientists come from a wide range of cultures and backgrounds. Outside of science, however, we hardly talk about methods. The media especially are very focused on people and conclusions.

Maybe we should do this more and see whether Street Epistemology (SE) also works to get friendlier chats about climate change. The first video below gives an example of an SE conversation, while the second explains the idea in more detail.

We should not forget in this debate that scientists and journalists have different interests. For a scientist, quality is much more important than quantity (the number of readers). When I give a talk it is not important whether 10 or 1000 people are in the audience; if there is one expert in the audience who will build on my work, it was a success.

A former journalist once gave a short media training here in Bonn. He wanted us to do exactly whatever journalists want us to do. He was utterly disappointed when I told him scientists have their own interests, and complained I had not been listening. I had, and it is good to know what your counterpart wants so you can find the best compromise for both, but I do have my own interests.

Griem also did not like hero stories, as the lone warrior evokes the same anti-institutional feelings as anti-science critics: stories where technocrats and bureaucrats at universities, in Bonn (where her DFG is) and in Berlin (politics), hold back intrinsically motivated people from doing ground-breaking work. I would see that as a reason to like hero stories: trying to make institutions better is the opposite of being anti-institutional, and there really is enough to complain about, from publish-or-perish to funding science via short-term projects rather than long-term relationships with people and institutions. Project-funded science stifles innovation and wastes enormous amounts of work on writing proposals, managing them, and each time building up a new group. Time and energy that could have been spent on doing ground-breaking research.

Prof. Dr. Julika Griem proposed that science communication should tenderly overstrain the public. That does not sound like a good general strategy: it would lead to misunderstandings and would limit the audience interested in science even more. However, it is good to have a range of strategies, and publishing a tenderly overstraining book like Gödel, Escher, Bach also belongs to science communication.

Tuesday, 6 November 2018

Opinion polling is hard because one needs to sample all groups evenly.

Election polling is even harder because you need to predict turnout as well. With the low turnout in the US (2014: 37% of eligible voters; 2012+2016: 61%): TURNOUT IS EVERYTHING.

If all young people voted, Republicans would have nearly no seats in the House. That is how important turnout is. (Take your friends with you when you vote.)

One advantage election polling normally has is being able to correct for biases based on past data. With turnout being this unprecedented, this will not help this time. Normally polling errors are about 3%. This midterm election my guesstimate is 7% nationally, 15% locally.

In other words: polling is nearly useless this time.

Historical results are also useless: in a West Virginia district that Trump won by over 40 points, Justice Democrat Richard Ojeda has a 50/50 chance.

A large confidence interval may not sound like much of a prediction, but it does matter: if you think you live in a safe state or district, you are wrong, so go out and vote. You may otherwise regret it on the day after the elections.

Let's talk about the issues

Trump wants to kill a bunch of poor people in the south of Mexico. Similar historical caravans suggest that only a few will arrive at the border, and only in several weeks. The so-called left-wing press does his bidding by spending hours on this non-issue. Wednesday this topic will again have the prominence it deserves: none.

Friday, 27 July 2018

The International Consortium of Investigative Journalists, which also uncovered the Paradise Papers and Panama Papers, investigated the world of predatory scientific journals and conferences. Most of the investigation was done by German journalists, and in Germany it has become a major news story that made the evening news.

The problem is much larger than I thought. In Germany it involves about five thousand scientists; about one percent of scientists sometimes use these predatory services. That is embarrassing and a waste of money.

While the investigators seem to understand scientific publishing well, it seems they do not understand science and the role of peer review in it. One piece of evidence is the preview picture of the documentary at the top of this article. It shows how the two reporters dressed up to present some nonsense at a fake conference, presumably dressed up the way they think scientists look. I have never seen such weird people at a real conference.

They may naturally claim they dressed like that to make it even more fake. But they also claim that no one in the audience noticed their presentation was fake; I guess they think so because they got polite applause. That is not a good argument: everyone gets applause, no matter how bad the presentation was.

The stronger evidence that the reporters do not understand the way science works is the highly exaggerated conclusions they draw, which may lead to bad solutions. At the end of the above documentary (in German) the reporter asks: "Was wenn man keinen mehr glauben kann?" ("What if you can no longer believe anyone?"). Maybe the journalists forgot to ask the interviewed scientists to assess the bigger picture, which is what I will do in this post. There is no reason to doubt our scientific understanding of the world because of this.

As an aside, the journalists of the International Consortium of Investigative Journalists are the good guys, but (Anglo-American) journalism that rejects objectivity is a bigger problem for the question of what we can believe than science is.

Fortunately most of the reporting makes clear that the main driving force behind the problem is the publish-or-perish system that politicians and the scientific establishment have set up to micro-manage scientists. If you reward scientists for writing papers rather than doing science, they will write more papers, in the worst case in predatory journals.

Those in power will likely prefer to make the micromanagement more invasive and prescribe where scientists are allowed to publish. The near monopolistic legacy publishers, who are the only ones really benefiting from this dysfunctional system, will likely lobby in this direction.

Peer review

Many outside of science have unrealistic expectations of peer review and of peer-reviewed studies. Peer review is just a filter that improves the average quality of the articles. Science does not deal in certainty (that is religion), and peer-reviewed studies certainly do not offer certainty. The claims of single studies (and single scientists) may be better than a random blog post, but reliable scientific understanding should be based on all studies, preferably a few years old, and interpreted by many scientists.

This goes very much against the mores of the news business, which focuses on single studies that are just out and may add some human interest by portraying single heroic scientists. The news likes spectacular studies that challenge our current understanding, and those are thus the studies most likely to be wrong. If this mess the public sees were science, we would not have much scientific progress.

As a consumer of science reporting, I would much prefer to read overviews of the state of understanding in a scientific community, and possibly of how it has changed over recent years. It does not have to be recent to be new to me. There is so much I do not know.

"the biggest threat to the proper public understanding of science is ... the lie we tell the public (and ourselves) that journal peer review works to separate valid and invalid science"
Michael Eisen

Peer review means nothing more than that (typically) two independent scientists read the study, give their feedback on things that could be improved, and advise the editor on whether the study is sufficiently interesting for the journal in question. Review is not there to detect fraud. Reviewers do not redo all experiments, do not repeat all calculations, and do not know every related study that could shine a different light on the results. They just think that other scientists would be interested in reading the article. The main part of the checking, the processing of the new information and the weaving of it into the scientific literature, happens after publication, when scientists (try to) build on the work.

The documentary starts with someone with cancer who is conned into a treatment by an unscrupulous producer pointing to their peer-reviewed studies, partially published in predatory journals. The journalists also criticize that articles from predatory journals were available in a database of a regulatory agency.

However, the treatment in question was not approved, and the agency pointed out that it had not used these articles in its assessments. For these assessments, scientists come together and discuss the entire literature and how convincing the evidence on its various aspects is. These scientists know which journals are reliable; they read the studies and try to understand the situation. One of the interviewed scientists looked at one of the studies on this cancer treatment in a predatory journal and found several reasons why the journal should not have accepted it in its present form.

Politicians, too, would often like every scientific study to be so perfect that you do not need any expertise to interpret it and can use it directly for regulation. That is not how science works, and also not what science was designed for. Science is not an enormous heap of solid facts. Science is a process in which scientists gradually understand reality a little better.

Trying to reach the "ideal" of flawless and final studies would make doing science much harder. Every scientist would have to be as smart and knowledgeable as the entire community working on a topic for years. Writing and reviewing scientific articles would become so hard that scientific progress would come to a screeching halt. New ideas especially would no longer have a chance.

Conned scientists

Like most scientists I get multiple spam mails a day for fake scientific journals and conferences. Most of them have the quality of Nigerian Prince Spam. People say this kind of spam still exists because the spammers only want really stupid people to respond as they are the easiest to con.

Thus I had expected that people who take up such offers know what they are doing; part of becoming a scientist is learning the publishing landscape of your field. But Open Access publishing reporter Richard Poynder mentioned several cases of scientists who were honestly deceived and tried to reverse their error when they noticed what had happened.

The first researcher who contacted me realised something had gone wrong when the manuscript that he and his co-authors had submitted was returned to them with no peer review reports attached and no suggested changes. There was, however, a note to say that it had been accepted, and could they please pay the attached invoice. They later learned that the paper had already been published.

Quickly realising what had happened, and desperate to recover the situation, the authors agreed to pay the publisher the journal’s full [Article Processing Charges] (over $2,000) – not for publishing their paper, but for taking it down.

Apparently there are also predatory journals with names very similar to those of legitimate ones, and a 1% error rate easily happens. I guess assessing the quality of journals is harder in large fields and in interdisciplinary fields. If the first author selects a predatory journal, the co-authors may not have the overview of the journals in the other field to notice the problem.

We need to find a way to help scientists who were honestly fooled and make it possible for authors to retract their articles themselves. Otherwise they can be held hostage by the predatory publishers, which would also fund the organised deception.

If the title of the real article is the same as that of the predatory article, it would be hard to put both on your CV or article list. Real publishers could be a bit more lenient there. A retraction notice for the predatory version in the acknowledgements of the real version should be "shameful" enough that people do not game the system by first publishing predatorily and then looking for a real publisher.

Predators

If there is evidence that scientists purposefully publish in predatory journals or visit fake conferences, that should naturally have consequences: that is wasting public money. One institute had 29 such publications over a time span of ten years; there is likely a problem there.

It should also have consequences when scientists sit on the editorial boards of such predatory journals. It may look nice on a CV to be an editor, but editors should notice that they are not involved in the peer review, or that it is done badly. It is hard to avoid the conclusion that they are aware they are helping these shady companies. Sometimes these companies put scientists on their editorial boards without asking; in that case you can at least expect the scientist to state on their homepage that they did not consent.

It is good to see that prosecutors are trying to take down some of these fake publishers. I wish them luck, although I expect this to be hard because it will be difficult to define how well peer review should work. Someone managed to get a paper published with the title "Get me off Your Fucking Mailing List". That would be a clear example of a failure and probably a case of one strike and you are out; at least scientifically, no idea about legally. In more subtle cases you probably need to demonstrate that this happens more often. Climate "sceptics" occasionally manage to publish enormously bad articles in real scientific journals. That does not immediately make those journals predatory.

Changing publishing

In the past, scientific articles were mostly published in paper journals to which academic libraries had subscriptions. This made it hard for the public and for many scientists to read scientific articles, especially scientists from the global South, but even I cannot read articles in one of the journals I regularly publish in.

Nowadays this system is no longer necessary, as journals can be published online. Furthermore, the legacy system is made for monopolies: a reader needs a specific article and an author needs a journal that most scientists subscribe to. As society replaces morality with money, and as the publishing industry concentrates and clearly prioritizes profits over being a good member of the scientific community, subscription prices have gone up and service has gone down. As an example of the former, Elsevier has a profit margin of 30 to 50 percent. As an example of the latter, in one journal I unfortunately publish in, the manuscript submission system is so complicated that you have to reserve almost a full working day to submit a manuscript.

The hope of the last decade was that a new publishing model would break open the monopoly: open access publishing. In this model, articles are free to read and in most cases the authors fund the journals. This reduces the monopoly power of the journals: readers can read the articles they need and authors can be sure their colleagues can read their article. However, scientists want to publish in journals with a good reputation, which takes years if not decades to build up, and this still produces a quite strong monopoly.

This has resulted in publishing fees of several thousand euros at the most prestigious open access journals. These journals are open to read, but no longer open to publish in for many researchers. They drain a lot of resources that could have been used for research, likely more than the predatory publishers ever will: my guess would be that the current publishing system is 50 to 90 percent too expensive, while the predatory journals have less than 1 percent of the market.

The legacy publishers defend their profits and bad service with horror stories about predatory open access journals. They prefer to ignore all the high quality open access journals. This investigative story unfortunately feeds this narrative.

[UPDATE The German Alliance of Scientific Organizations fortunately states that journal selection is part of the freedom of science. They furthermore state that the quality of a study does not depend on where it is published, and they want to help scientists with training and contact persons. They see a key role for the Directory of Open Access Journals (DOAJ).]

I just sent a nice manuscript to a new journal, which has no real reputation yet. Its topic fits my work very well, so I am happy my colleagues started this journal. I did my due diligence: I know several people on the editorial board to be excellent researchers, and I even looked through a few published articles. The same publisher has many good journals, and the journal is by now also listed in the Directory of Open Access Journals (DOAJ). The DOAJ was actually very quick and listed this journal after it had published only 11 articles. But getting those first 11 would be hard if the FWF policy wins out.

The opposite model is to create a black list. This has fewer problems, but it is quite hard to determine which journals are predatory. There used to be a list of predatory journals by Jeffrey Beall, but he had to stop because of legal threats to his university by the predatory publishers. There were complaints that this list discriminated against journals from developing countries. True or not, this illustrates how hard it is to maintain such a list. There is now, oh irony, a pay-walled version of such a list of predatory journals. The subscriptions should probably pay for the legal risks.

Changing publishing

A good solution would be to review the articles after publication. This would allow researchers to update their assessments when evidence from newer studies comes in and we understand the older studies better. PubPeer is a system for such post-publication peer review, but it mostly hosts reviews of flawed papers and thus does not give a good overview of the scientific literature.

F1000 Prime is an open access journal with post-publication review. I know of two more complete post-publication review systems: The Self-Journals of Science and, more recently, Peeriodicals. Here every scientist can start a journal, collect the articles that are worthwhile and write something about them. The more scientists endorse an article, the more influential it is. In these systems I miss reviews of articles that are not that important, but are valid and may still be informative for some readers. Furthermore, I would expect that the reviews would need to be organized more formally for these systems to be seen as worthy successors of the current quality control system.

That is what I am trying to build at the moment, and I have started a first such "grassroots journal" for my own field to show how the system would work. I expect that the system will be superior because these "grassroots journals" do not publish the articles themselves, only review them, and can thus assess all articles in one field in one place, while traditionally articles are spread over many journals. The quality of the reviews will be better because they use a post-publication review model. The reviews are more helpful to the readers because they are published themselves and quantify in more detail what is good about an article. As such, a grassroots journal performs the role of a supervisor in helping one find one's way in the scientific literature.

You get a similar effect from the always up-to-date review paper on sea surface temperature proposed and executed by my colleague John Kennedy. This makes it easy for others to contribute, while providing versioning and attribution. There is naturally less detail per reviewed article.

Changing the system

But even a better reviewing system cannot undo the damage of the fake competitive system currently used to fund scientific research.

Volker Epping, president of the University of Hannover, stated: "The pressure to publish is enormous. Problems are inherent to the system." I would even argue: Given the way the system is designed, it is a testament of the dedication of the scientists that it still works so well.

It is called "competitive", but researchers are competing to get their colleagues to approve the funding of their research. There is no real competition because there is no real market. If you did a good job, there are no customers who reward you for it. In the best case the rewards come as new funding decided by people who have no skin in the game, people who have no incentive to make good funding decisions. Given that situation, it is amazing that scientists still spend time writing good peer reviews of research proposals and dedicatedly compare them with each other to decide what to fund.

My proposal would be to return to the good old days. Give the funding to the universities, which give it to the professors, who allocate it to what they, as the most informed experts, think is interesting research, which furthers their reputation. Professors have skin in the game, their reputation is on the line, and they will invest the limited funds where they expect the most benefit. In the current system there is no incentive to set priorities; submitting more research proposals has no downside beyond the time it takes to write them. One of the downsides of the current model for science is that the best researchers are not doing research, but writing research proposals.

A compromise could be to limit the number of projects a science foundation funds per laboratory. The Swiss National Science Foundation uses this model.

The old and hopefully future system also allows for awarding permanent positions to good researchers. Now most researchers are on short-term contracts because project funding is not stable. With these better labour conditions one could attract much better researchers for the same salary.

Because project science requires so many peer reviews (of research proposals and of a bloated number of articles), a lot of time is wasted. (This waste is again much bigger than that of the predatory publishers.) This invites reviewers to take shortcuts: instead of assessing how good a scientist is, they assess how many articles the scientist writes and how prestigious the journals are in which the articles appear (bibliometric measures). Officially this system is illegal in Germany: the ethics rules of the German Science Foundation forbid judging researchers and small groups by their bibliometric measures, but it still happens.

My expectation is that without the publish-or-perish system scientific progress would go much faster and we certainly would not have the German public being shocked to learn about predatory publishers.

I hope the affair will inspire journalists to inform the public better on how science works and what peer review is and is not.

Why so many researchers publish via dubious routes (in German): Warum so viele Forscher auf unseriösem Weg publizieren.

Publish or perish is illegal in Germany, for good reason. German science has another tradition, trusting scientists more and focusing on quality. This is expressed in the safeguards for good scientific practice of the German Science Foundation (DFG). It explicitly forbids the use of quantitative assessments of articles.

Tuesday, 12 June 2018

After some blog posts about grassroots journals, it looks as if no one else will pick up the idea, so I have started creating a first grassroots journal myself.

(It is interesting how often fear of being scooped is mentioned as reason against Open Science. Typically good ideas are not recognised before they are presented in detail and even then it takes time. At least that is my impression with the small paradigm changes I was responsible for: surrogate clouds and adaptive parameterisations.)

Susan Arthur had seen her husband returning from the academic wars before. "Well," she said, trying to find something comforting to say, "I guess it wouldn't be a revolution, would it, if everybody believed in it at the start?"

The first grassroots journal is naturally on homogenisation of climate observations. That was a good way for me to check whether the idea would work in practice. I think it does. And a concrete example is a good way for everyone to see how such a journal would work.

Using a WordPress blog works, but I am learning how to code a WordPress site to make it more user friendly and to make it easy for everyone to start such a grassroots journal, just as easy as starting a blog. (Which is really easy. Hint.)

With that it also became time to spread the idea and I have written a short guest post for the blog of the OpenUp project, an EU project on Open Publishing. After all these years of blogging that was my first guest post.

I was supposed to write about how wonderful openness is. So I wrote about how wonderful the right mix of privacy and openness is. Scientists are natural contrarians.

The main message, however, was already presented in my last blog post: if we separate the two roles of peer review — 1) giving feedback to the authors and 2) advising the journal whether the article is important enough for them — we will get a much healthier quality assurance system.

In the feedback round I see no real reason to publish the content, all the (little) mistakes that were corrected. However, as it is just feedback, a friendly helping role, it would be easy to publish the name of the reviewer.

In the assessment round the review itself is very interesting for others as well and is best published. Because this means judging a colleague, anonymity makes it easier to be honest. I would give the reviewer the choice whether to be named.

Sunday, 25 March 2018

I once asked a friend and colleague about a wrong sentence in one of his scientific articles. He is a smart cookie and should have known better than that. His answer was that he knew it was wrong, but the peer reviewer requested that claim. The error was small and completely inconsequential for the results; no real harm was done. I wondered what I would have done.

Peer review has two roles: it provides detailed feedback on your work and it advises the editor on whether the article is good enough for the journal. This feedback normally makes the article better, but it is somewhat uncomfortable to discuss with reviewers who have a lot of power because of their second role.

My experience is that you can normally argue your case with a reviewer. Still, reaching a common understanding can take an additional round of review, which means that the paper is published a few months later. In the worst case, not agreeing with a reviewer can mean that the paper is rejected and you have to submit it to another journal.

It is quite common for reviewers to abuse their power by requesting that their own work be cited (more). Mostly this is somewhat subtle and the citation more or less relevant. However, an anonymous reviewer once requested that I cite four articles by one author, only one of which was somewhat relevant. That does not hurt the article, but it is disgusting power abuse and rewards bad behavior. My impression is that such requests are not all head fakes; when I write a critical review I make sure not to ask for citations of my own work, but recommend some articles of colleagues instead. Multiple colleagues, so as not to get any of them into trouble.

Grassroots journals

I have started a grassroots journal on homogenization of climate data and only recently started to realize that this also produces a valuable separation of feedback, publishing and assessment of scientific studies. That by itself can lead to a much more healthy and productive quality control system.

A grassroots journal assesses published articles and manuscripts in a field of study. One could also see it as a continually up-to-date review article. At least two reviewers write a review of the strengths and weaknesses of an article, everyone can comment on parts of the article, and the editors write a synthesis of the reviews. A grassroots journal does not publish the articles themselves, but collects articles published anywhere.

Every article also gets a quantitative assessment. This is similar to the current estimate of how important an article is based on the journal it was able to get into. However, it does not reward people for submitting their articles to too prestigious a journal, hoping to get lucky, which creates unnecessary work through duplicate reviews. For example, the publisher Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.

With traditional journals your manuscript only has to pass the threshold at the time of publishing. With the up-to-date rolling review of grassroots journals, articles of lasting value are rewarded.

I would not have minded making a system without a quantitative assessment, but there are real differences between articles, readers need to prioritize their reading, and funding agencies would likely not accept grassroots journals as a replacement of the current system without it.

That is the final aim: getting rid of the current publishing system that holds science back. That grassroots journals immediately provide value is hopefully what makes the transition easier.

The more widely the assessments made by grassroots journals are accepted, the less it matters where you publish. Currently there is typically one journal, sometimes two, with the right topic and prestige to publish in. The situation for the reader is even worse: you often need a specific paper and not just some paper on the topic. For this one specific paper there is one (legal) supplier. This near-monopolistic market leads to Elsevier making profits of 30 to 50% and suppresses innovation.

Another symptom of the monopolistic market are the manuscript submission systems, which combine the worst of pre-internet paper submission (every figure a separate file, captions in a separate file) with the internet-age adage "save labor costs by letting your customers do the work" (adding the captions a second time when uploading a figure, with a neat pop-up for special characters).

Separation of powers

Publishing is easy nowadays; ArXiv does it for about one dollar per manuscript. Once scientists can freely choose where to publish, the publishers will have to provide good services at reasonable costs. The most important service would be to provide a broad readership by publishing Open Access.

Maybe it will even go one step further and scientists will simply publish their manuscript on a pre-print server and tell the relevant grassroots journals where to find it. Such scientists would likely still like to get some feedback from their colleagues on the manuscript. Several initiatives are currently springing up to review manuscripts before they are submitted to journals, for example Peer Community In (PCI). Currently PCI makes several rounds of review until the reviewers "endorse" a manuscript, so that in principle a journal could publish it without further peer review.

With a separate independent assessment of the published article there would no longer be any need for the "feedback peer reviewers" to give their endorsement. (It doesn't hurt.) The authors would have much more freedom to decide whether the changes peer reviewers suggest are actually improvements. The authors, and not the reviewers, would decide when the manuscript is finished and can be published. If they make the wrong decisions, that would naturally be reflected in the assessment. If they do not add four citations for a peer reviewer, that would not be any problem.

There is a similar initiative in the life sciences called APPRAISE, but it will only review manuscripts published on pre-print servers. Once the journals are gone, this will amount to the same, but I feel that grassroots journals add more immediate value by reviewing all articles on one topic. Just like a review article should review the entire literature and not a random part.

This discussion may change when we separate feedback and assessment. Giving feedback is mostly doing the authors a favor and could more easily be done in the open. Rather than cumbersome month-long rounds of review, it would be possible to simply write an email or pick up the phone to clarify contentious points. On the other hand, anonymity makes it easier to give an honest assessment, and I expect this part to be performed mostly anonymously. The editors of a grassroots journal determine what is published and can thus ensure that no one abuses their anonymity.

The future

Concluding: in a decade, a researcher writes an article and asks their colleagues for feedback. Once the manuscript no longer changes much, it is sent to an independent proofreading service. Another firm or person takes care of the layout and ensures that the article can still be read in a century by producing versions in open standards.

The authors decide when their manuscript is ready to be published and upload it to the article repository. They send a notice to the journals that cover the topic. Journal A makes an assessment. Journals B and C copy this assessment, while journal D also uses it but requests an additional review for a part that is important to them and writes its own synthesis.

Readers add comments to the article using web annotations and the authors reply to them with clarifications. Authors can also add comments to share new insights on what was good and bad about the article.

Two years later a new study shows that one of the choices of the article was not optimal. This part was important for journals C and D and they update their assessments. The authors decide that it is relatively easy to redo their article with a better choice and that the article is sufficiently important to put in some work; they upload the updated study to the repository and the journals update their assessments.

A related proposal by Gavin Schmidt: Someone C.A.R.E.S., Commentary And Replication in Earth Science. Do we need a new venue for post-publication comments and replications?

Psychologist Henry L. Roediger, III on Anonymity in Scientific Publishing. A well-written article that lays out all the arguments, whether we talk about the authors, the reviewers or the editors. The author likes signed reviews. I feel that editors should prevent reviewers from taking advantage of their anonymity.

Tuesday, 13 February 2018

As I work in Bonn, I sometimes represent Germany, for example in the World Meteorological Organisation or in EU science projects. A colleague once noted that I am fond of making clear that while I represent Germany, I am actually Dutch.

Could be; at least I like the slogan "I did not start the war", meaning the Second World War. It is a truism, but Germans are not allowed to say so. So I say it on their behalf.

In addition, Germans are still treated as if they were personally responsible. So it is more pleasant if people know that I am not German.

My father traveled from The Netherlands to Austria on his motorbike, admittedly long ago, not too long after the war, when he was young. At the border between France and Germany the customs officers were not willing to speak German, until they saw his Dutch passport; then they were suddenly able to speak German fluently.

Before I moved to Germany in 2000 I told a few friends that it was nice that the hostile feelings against Germans were over. They each time looked at me like I was from another planet. They were right. I must have had a sheltered life with nice friends.

More recently I went to Dublin for a week of fun at the European Meteorological Society meeting and stayed at a bed and breakfast. As far as I could see, I got the worst room. The landlord asked a few times whether I was German, maybe because the answer, "No, I am Dutch", was too much of a shock to process in one step. The toilet was next to my pillow, the light was a naked bulb, and the backside of the water basin was unfinished lumber. The other rooms looked better and were not all occupied during the week. When reserving a room I normally ask for a firm mattress; maybe I should add: "I did not start the war".

I sometimes fear that
people think that fascism arrives in fancy dress
worn by grotesques and monsters
as played out in endless re-runs of the Nazis.

Fascism arrives as your friend.
It will restore your honour,
make you feel proud,
protect your house,
give you a job,
clean up the neighbourhood,
remind you of how great you once were,
clear out the venal and the corrupt,
remove anything you feel is unlike you...

It made no sense to me when my first German language teacher said she felt personally responsible for the war. She was born well after the war. But if people treat you like you are guilty, it is easy to start feeling guilty. We are social animals after all. In a survey published today, one in ten Germans agreed with the statement: "Even if I did not do anything bad myself, I feel guilty for the Holocaust".

This shows the power of social contagion: the way Germans are treated by others can make 10% believe something that is impossible. How much larger will this effect be in cases where it is hard to judge who is right, the person or the many others? If people treat you badly because of the way you look, it is too easy to say you should just ignore that. It will leave its trace.

Ironically, racists often call for an end to the shame and blame culture, while their counterparts in other countries produce it. While they are technically right that it is not logical to feel guilty, what they actually want is for people not to know the consequences of their ideology of hate and conflict.

Fortunately in the same survey 79% say it is important to teach history in school. The two main reasons for this are to learn about the damages caused by racism (79%) and to prevent a return of national socialism (84%). More than half reported to have victims of the Second World War in their families.

Not mentioned in the survey, but the independent public media are also very important. They feature German history regularly and show emotional interviews of victims of the Nazi regime. The most important lesson of such interviews may be that the Nazis did not start with the Holocaust, they started with parades, denigration, discrimination and deportation. It ended with the Holocaust.