Plagiarized gibberish? Some journals will happily publish that—for a fee.

Peer-reviewed scientific papers are the gold standard for research. Although the review system has its limitations, it ostensibly ensures that some qualified individuals have looked over the science of the paper and found that it's solid. But lately there have been a number of cases that raise questions about just how reliable at least some of that research is.

The first issue was highlighted by a couple of sting operations performed by Science magazine and the Ottawa Citizen. In both cases, a staff writer made up some obviously incoherent research. In the Citizen's example, the writer randomly merged plagiarized material from previously published papers in geology and hematology. The sting paper's graphs came out of a separate paper on Mars, while its references came from one on wine chemistry. Neither the named author nor the institution he ostensibly worked at existed.

Yet in less than 24 hours, offers started coming in to publish the paper, some for as little as $500. Others offered to expedite publication (at a speed that could not possibly allow for any peer review) for additional fees. The journals in this case are scams. Without the expense of real editors and peer review, they charge the authors fees and spend only a pittance to format the paper and drop it on a website. The problem is that it can be difficult to tell these journals from the real thing.

The Science sting was perhaps more disturbing, since a number of the journals taken in by an equally nonsensical paper are supposedly serious academic outlets. Although the Science sting focused on open access journals, the problem it highlighted probably extends into other journals as well: weak editorial oversight and limited, shoddy peer review.

Issues with peer review can crop up even at solid journals. For some of the large, cross-discipline studies that are increasingly popular, it can be tough to find reviewers with all the relevant expertise to evaluate the different fields of science the papers contain. The result can be the publication of something that has solid biology but ludicrously bad chemistry, to use an example highlighted by blogger Derek Lowe. Another problem is the intense pressure that people in many fields (including all the biological sciences) are experiencing right now, which probably limits the amount of attention that reviewers can spare.

But the Science sting suggests that for at least some lower-profile journals, the attention paid by the reviewers is minimal or non-existent. Otherwise, there's no reason that something like a deranged theory of everything should ever find its way to a journal. This sort of shoddy review is a problem that only scientists themselves can fix.

Unfortunately, by attempting to highlight the problem of lax review procedures, some computer scientists may have exacerbated it. Suspecting that some reviewers weren't doing a thorough job on conference papers, they put together a random gibberish paper generator for anyone who wanted to test whether reviewers were paying attention. That software has since been used to get 120 pieces of gibberish published.
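For the curious, generators of this kind work by recursively expanding a context-free grammar until only fixed phrases remain. Here is a minimal sketch in Python; the grammar below is invented for illustration (the real SCIgen grammar is vastly larger and emits complete LaTeX papers with figures and citations):

```python
import random

# Toy context-free grammar: nonterminals map to lists of possible
# productions; anything not in the dict is a terminal phrase.
GRAMMAR = {
    "TITLE": [["Towards the", "ADJ", "Emulation of", "NOUN"],
              ["Deconstructing", "NOUN", "with", "ADJ", "Models"]],
    "ADJ":   [["Decentralized"], ["Stochastic"], ["Metamorphic"], ["Bayesian"]],
    "NOUN":  [["Lambda Calculus"], ["DHTs"], ["Write-Back Caches"], ["IPv7"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into a random string."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)

if __name__ == "__main__":
    # e.g. "Towards the Stochastic Emulation of Write-Back Caches"
    print(expand("TITLE"))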

None of this is to say that there is a complete crisis in peer review. At the higher-profile journals with reputations to protect, most of the research is likely to be reliable (with interdisciplinary work being a potential exception). But it should certainly raise an added level of caution about some of the work that is published in the more obscure or overly specialized journals that have popped up in recent years.

SCIgen really is amazing; I suspect its (ludicrously silly) papers would pass muster with someone who isn't a computer scientist. They're crazy, but if you just look at the formatting, or don't know anything about the topic, they might fool you.

Sounds a bit like a rehash of what Alan Sokal did 18 years ago. Sokal, a physics professor, sent a totally meaningless paper to Social Text, and got it published. The title of the paper was "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity"...

Just shows the problems aren't exclusive to any single field, I suppose.

This reminds me a lot of those writing submission things where you submit a short story or a poem (no matter how awful it is) and they'll publish it in a book along with everyone else who submitted it, and then offer to sell you the book.

I think it's basically the same thing. In the literary world there are scam periodicals and publications that prey on people's need for validation, and there are periodicals and publications whose editors and staff only publish things that are up to par. The same now goes for science journals.

My sister-in-law once started to read one of my (legitimate) scientific articles. I got the impression from her response that it looked to her the way one of those generated CS gibberish articles looks to the rest of us.

That submit-anything-and-buy-the-anthology scheme is known as a vanity press. I see a lot more need for science journals to be held to a higher standard than literary publications.

All fun aside, though, I think the open-access movement is one of the better things to happen to journals in a long time. It has even pressured some more mainstream journals and grant-funding institutions to mandate that some or all articles be available without cost (though accomplishing this often involves publication fees like the ones open-access journals charge). I would hate to see the opportunistic actions of some "journals" set back these efforts. I would also note that PLOS ONE, the standard-bearer for open-access journals, passed the sting operation with flying colors.

Of COURSE Science went after open access and not the journal market as a whole. Science is one of those massive publishers that has much to lose as open access publishing makes further headway. And since open access publishing tends to carry costs for researchers in the form of publishing fees (to offset the lack of subscription and access revenues), it makes it a prime target for scams. That doesn't, in any way, denigrate the model as a whole, however. It just means that open access journals must be vetted. Certainly this is no worse than vanity publishing, a problem that continues to plague the traditional publishing industry.

When I pushed my thesis through to the school and subsequently to their open access web portal, I instantly started to get all kinds of emails from "publishers" saying things like "congratulations on your incredible breakthrough!" Some weren't as obvious, but all of them wanted me to submit the paper and, I'm sure, pay a fee to publish it through them. From what I remember, I got about 30-40 emails from different solicitors trying to get me to publish with them. All from a master's thesis.

The solution rests not only with scientists. Universities have placed so much emphasis for tenure on grant funding and publications that researchers will do whatever it takes to pad their bibliographies. I remember, back before easy electronic publishing, folks would turn their one big-idea paper into 6-7 small ones, each tossing out a snippet of a tease like some TV mini-series. Anyone else's paper would then have to reference them several times, adding to their status in the Citation Index. Pay-to-publish is just an easier path.

Let us also not forget the proliferation of political special-interest groups creating their own "peer-reviewed journals" to promote their agendas (the Journal of Intelligent Design...)

As for the reviewers, one does not have to be a specialist to get some sense of whether a paper has followed proper scientific method, and to see whether the interpretation makes sense based on the data. Ars writers do that whenever they summarize a journal paper. Hell, the Ars community does a pretty fine job evaluating your articles.

Like everything else on the interwebs, one must approach with a healthy dose of skepticism.

The issue is that the majority of these predatory journals ARE open access. It's the wild west of academic publishing. Online-only journals are easy to set up, and their entire revenue stream is derived from authors paying to have their articles published (possibly with some ad revenue). This contrasts with subscription journals, which have at least two revenue streams: the authors (who often still pay to publish, at times more than what open access charges) and the subscribers.

The selective pressure on consumption just isn't there for these fly-by-night journals, as their revenue is not directly tied to someone actually wanting the content.

You're quite right that some kind of vetting process helps. My lab has published in open access journals, but only after looking very closely at them. Sometimes it's easy to determine whether a publication is on the up and up, as in the case of PLoS or Frontiers (being associated with NPG really helps), but in many cases it's hard to judge.

An excellent point about Science's focus on open access. According to the Science article, roughly 45% of the overall submissions and 60% of the acceptances involved journals/publishers on the "Beall list" of known predatory publishers. The author actually used these publishers to help seed the submission destinations (along with a more mainstream list, the DOAJ). This may cause the problem, though real, to be overstated.

Also... really, Science? An experiment with no controls from the more traditional journals? Tsk tsk.

... The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago ... The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system. ...

I get email invitations to publish in these "journals" almost on a weekly basis. My colleagues and I wouldn't touch them with a barge pole, and I suspect the market for these outlets is people who want to give their CV some cachet with a couple of "scientific" publications but who are not actually scientists. And the crackpots, of course.

More seriously, it brings home the point that one should not trust a journal paper just because it has been peer reviewed. Sometimes it is just wrong, and you have to judge for yourself if you believe the findings. At other times, the paper is all hype and no substance. This is particularly true for currently popular disciplines, and this has the unfortunate side effect of making the discipline look sub-standard. The biggest problem this presents is to journalists, who, in my experience, do not want to be told that their story (which they sold to their editor) is a load of crap.

And then there is scientific fraud, but I think that is relatively rare.

Strange that I'm seeing this, as 20 days ago I missed a review request from a journal I'd never heard of, for a paper whose abstract involved the "Internet of Inteligences", the "pavement paradigm", an "application in choosing milk powder", and "zero elements of the matrix will be replaced with non-zero numbers."

It came up as an email reminder this morning. I refused to review it, saying the abstract looked like gibberish.

Researchers are reporting, as new findings, things I remember from lectures while doing an M.S. in Earth Science and from my wife's term papers when she was doing an undergraduate degree in biology.

Some of the worst junk is publicly financed by government grants.

Much of climate science is junk. Michael Mann's "hockey stick" achieved its fame simply because so few scientists have training in statistics. His reviewers were not competent to review his best-known paper, in which Dr. Mann stated that his method relied on conventional principal components analysis (PCA).

Unfortunately, when I was conducting graduate seminars in statistical modeling, I did not have Dr. Mann's methodology on hand to demonstrate how not to do PCA.
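For readers unfamiliar with the terminology: conventional PCA centers each data series on its mean over the full record before extracting components; centering on a sub-period instead was the departure from convention that critics of the hockey-stick method cited. A minimal numpy sketch of the conventional procedure, using made-up toy data:

```python
import numpy as np

# Toy data standing in for a set of proxy series; purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # 200 observations of 10 series

# Conventional PCA: subtract each column's FULL-record mean first.
X_centered = X - X.mean(axis=0)

# Principal components via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
explained = S**2 / np.sum(S**2)      # variance fraction per component
print("PC1 explains %.1f%% of variance" % (100 * explained[0]))
```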

The issue you run into with most scientific publishing is that it covers topics that are very specific and typically niche. Much of it is investigation into oddities or things that have not been explored widely. Some topics are very popular, but most are fairly obscure even by scientific standards (not everyone can work on the same things).

As a result, there are only a handful of people who can really dissect a paper and give good, technical criticism. It is not uncommon for only 20 people in the world to be knowledgeable enough about the subject to review a paper and fully grasp its intricate details. 6 of them work with you in your research group, so they're not good reviewers. Another 6 are in the group you collaborate with on everything, so they're not good reviewers either. That doesn't leave much of a pool. So you have 8 people left, half of whom are grad students, whom journals don't like to ask to review (or don't know about).

The reviewers get nothing directly for reviewing and are generally anonymous, so they have little direct motivation to do a good job other than a general sense of "doing good." When there are other constraints on your time, this isn't always that strong a motivation.

"None of this is to say that there is a complete crisis in peer review. At the higher-profile journals with reputations to protect, most of the research is likely to be reliable (with interdisciplinary work being a potential exception). But it should certainly raise an added level of caution about some of the work that is published in the more obscure or overly specialized journals that have popped up in recent years."

I tend to disagree somewhat with this statement. Nowadays, retractions from solid and even high-profile journals are not at all infrequent; I would argue that the impact of finding mistakes or misconduct in relatively novel science is higher than if they were found in boring, mundane science.

"None of this is to say that there is a complete crisis in peer review. At the higher-profile journals with reputations to protect, most of the research is likely to be reliable (with interdisciplinary work being a potential exception). But it should certainly raise an added level of caution about some of the work that is published in the more obscure or overly specialized journals that have popped up in recent years."

I tend to disagree somewhat with this statement. Nowadays, retractions ranging from solid to "high-profile" journals are not at all infrequent; I would consider that the impact of finding mistakes/misconduct in relatively more novel science is higher than if they were found in boring mundane science.

Perhaps the process needs another step. A high-profile journal receives a paper and sends it to a few subject-matter experts to review; the journal publishes the paper; a hundred subject-matter experts around the world can now read it; one of them finds a problem and brings it to the journal's attention; the journal consults experts; the experts agree there's a problem; the journal publishes a retraction, and science progresses.
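That loop is simple enough to capture as a toy state machine; a minimal sketch follows, with state and event names that are my own labels rather than any journal's actual workflow:

```python
# Illustrative states and transitions for the post-publication loop above.
TRANSITIONS = {
    "submitted":           {"review": "under_review"},
    "under_review":        {"accept": "published", "reject": "rejected"},
    "published":           {"challenge": "under_investigation"},
    "under_investigation": {"uphold": "published", "retract": "retracted"},
}

def step(state: str, event: str) -> str:
    """Advance a paper's status, refusing undefined transitions."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"cannot {event!r} from state {state!r}")

# Publish, let the wider community challenge, then retract if experts agree.
s = "submitted"
for event in ["review", "accept", "challenge", "retract"]:
    s = step(s, event)
print(s)  # -> retracted
```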

Here's a new 'journal' that appeared in my email this morning; such emails are quite regular. This one was an invitation to join the editorial board of the "Austin Journal of Biotechnology and Bioengineering". It sounds like a respectable journal, but the editorial group looks a bit odd, comprising two editors from dental schools, another from an institute for oral health, and a fourth from Brazil whose qualifications are difficult to locate. None seem qualified for a journal of biotech and bioeng. The editorial board alone raises a major red flag. Other suspicious indicators: poor grammar on the web page, they will only accept Word files, the FAQ is empty, the reviewer guidelines are empty, very few articles have actually been published, etc. Processing fees are not listed for this particular journal, but for the other 50-odd journals they publish, the fees run between 700 and 2,500 dollars. And finally, a quick look at http://scholarlyoa.com/, which lists what it calls predatory journals, shows the Austin group among them. Surprise, surprise.
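Those red flags practically beg to be a checklist. A minimal sketch of a toy scorer, with flags and weights invented for illustration (real vetting, such as checking the list at scholarlyoa.com, takes actual legwork):

```python
# Invented red flags and weights for a toy journal-suspicion score.
RED_FLAGS = {
    "editors_outside_journal_scope": 3,
    "poor_grammar_on_site":          2,
    "empty_faq_or_guidelines":       2,
    "very_few_published_articles":   2,
    "fees_hidden_or_unlisted":       3,
    "on_known_predatory_list":       5,
}

def suspicion_score(observed_flags: set[str]) -> int:
    """Sum the weights of the red flags observed for a journal."""
    return sum(RED_FLAGS[f] for f in observed_flags if f in RED_FLAGS)

# The "Austin Journal" example above would trip nearly every flag:
flags = {"editors_outside_journal_scope", "poor_grammar_on_site",
         "empty_faq_or_guidelines", "very_few_published_articles",
         "fees_hidden_or_unlisted", "on_known_predatory_list"}
print(suspicion_score(flags))  # -> 17
```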

I have been a reviewer for journals in the health literature. It is unpaid, and sometimes there's time pressure to turn the paper back around to the publisher. But it's still a privilege to be the first to judge a paper that explores new knowledge, even if it is not ground-breaking.

I have to say that it's great when the paper you're reviewing covers procedures, statistics, and rationale that you know something about and have experience with. Often it 'clicks' with your own knowledge and experience in the field. It's also interesting to see other researchers' approaches, which may be different from your own or uniquely creative.

But usually there are some aspects of the research, procedures, statistics, or cited works that I don't fully understand. There comes a point in reading the manuscript where one trusts the authors, at least in the absence of any bizarre interpretations or obviously inappropriate methods. I'm pretty good in my field, but health research is multi-disciplinary, and even within a discipline one might not know things that someone else does. As a reviewer you want to help the research process along, apply your expertise, and help others. But you don't want to look like an idiot to the publisher and tell them the manuscript is beyond your level, even if some of it is, though perhaps not the critical parts. And as for references, I look at them and am familiar with some of the cited articles, but I don't think any reviewer actually looks each of them up and checks for veracity. When you send it back to the publisher, with or without suggestions for revisions, there is often a feeling of 'I hope I got that right'.

The bottom line is that in the absence of glaring errors, you have to trust the authors of the manuscript to have integrity and assume they checked details. Which is usually the case.

There can be legitimate disagreements and "errors" among manuscript authors, reviewers, and publishers, though. Even something as seemingly cut-and-dried as statistics can be applied artfully and is sometimes a matter of judgment, interpretation, and opinion.

And scientific papers in research journals aren't the final answer on a subject. Lay people, especially those championing a cause related to some medical disorder, often cite research papers out of context, without knowing the terminology involved or the underlying theories, lines of evidence, and legitimate splintering that lead up to a published paper. They're not textbooks. They're research papers, not the final word on a subject. If you remember psychology as an undergraduate, for example, there were all kinds of theories on how memory or perception works that were tested over the decades, improved upon, discarded, looked at with new tools, and so forth. That is the purpose of publishing research papers in journals. A paper usually won't allude to all the negative or controversial reactions that will ensue from its findings, even if those findings are technically sound.

When I was a graduate student, one of my jobs was to go to the library and retrieve articles that were cited as references in other articles that my faculty member wanted. In a few of these articles, the references didn't match what was being cited in the main article at all. Wrong page numbers, not the same title; total mismatch. I don't know if this was from poor citation management by the original author or outright fraud.

Peer review is one step, but other researchers being able to duplicate the work is really the gold standard. I think there have been a generation or two now of the 'me' generation, indoctrinated with the mantra to "question everything," which they do, mindlessly. There is some inaccuracy out there, but on the whole research publishing is pretty good. The problems arise from the inappropriately intense need to publish and puff up resumes (itself a product of the 'me' generation), and from non-scientists trying to criticize, manage, and control science. I don't even discuss my work with friends and family, because it took so much knowledge to get where I am, and they just don't have the foundation to be able to discuss it intelligently.

Isn't the obvious solution just to make the peer review process more transparent? Then we can all tell who is and isn't doing peer review.

Even blind review can be done transparently. Journals would just need to share review metadata -- e.g., 'John Timmer reviewed 3 papers for Science during 2014' -- with an independent third-party service that can verify with the reviewer that the journal is indeed performing peer review. This can all be done without revealing which reviewer reviewed which article.

That's what we're trying at Publons (https://publons.com), where both reviewers and journals can record (and get credit for) their peer review activity -- even for blind and double-blind review.

Other initiatives (like https://www.qoam.eu/score) are trying to solve this problem with journal ratings (or journal reviews), where authors/reviewers etc can rate and review their experience with a particular journal.

Identifying untrustworthy journals is an inherently solvable problem, so I can't see this still being an issue in five years' time.
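A minimal sketch of what such a review-metadata record might look like, using the example from the comment above; the field names are invented for illustration, not Publons' actual data model:

```python
from dataclasses import dataclass

# Enough for a third party to verify that reviewing happened, without
# revealing which reviewer handled which manuscript.
@dataclass(frozen=True)
class ReviewRecord:
    reviewer: str      # e.g. "John Timmer"
    journal: str       # e.g. "Science"
    year: int          # e.g. 2014
    review_count: int  # reviews completed that year; no manuscript IDs

def verify_with_reviewer(record: ReviewRecord, confirmed_count: int) -> bool:
    """Third-party check: does the reviewer confirm the journal's claim?"""
    return record.review_count == confirmed_count

claim = ReviewRecord("John Timmer", "Science", 2014, 3)
print(verify_with_reviewer(claim, 3))  # -> True
```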

Unfortunately, this problem isn't even confined to science publishing. Vastly more important (or vastly less important, depending on your frame of reference) is nonsensical gibberish being passed into law. Sure, there are the big bills that are debated three ways to Sunday every... er, Sunday morning, where we expect every minor detail to be teased out, but what about the "little" bills? Like how much plutonium can be in soft drinks (hopefully on the low end).

Even the "well known" bills wind up being so massive that it would take several US House of Rep terms to read the things. Add that to the fact the main skill required for the people responsible for writing, passing, amending, and/or rejecting these bills is the knack of getting people to give them money, or at the very least putting a check by their name.

The worst thing this does is spoil the pool and make truly well-crafted, important bills like the following get lost in the chaos: