EVENTS

Gibberish papers discovered and withdrawn

That journals can sometimes be fooled into publishing nonsense papers has been well established in a few high-profile cases, but these tended to be seen as isolated instances, done deliberately, with the papers carefully constructed to prove a point.

Now the journal Nature reports on fake publications on a much larger scale, using computer-generated papers. More than 120 papers were published, not in peer-reviewed journals, but in conference proceedings published by IEEE and Springer. Conference proceedings can have widely varying levels of review prior to publication, and usually face less scrutiny than journal articles. The fraud is disturbing nonetheless, because Springer says that these proceedings were supposed to have been peer-reviewed. IEEE has not said whether its papers were peer-reviewed.

The fake papers were discovered by computer scientist Cyril Labbé of Joseph Fourier University in Grenoble, France, who says that they were generated using a freely available software program.

Labbé developed a way to automatically detect manuscripts composed by a piece of software called SCIgen, which randomly combines strings of words to produce fake computer-science papers. SCIgen was invented in 2005 by researchers at the Massachusetts Institute of Technology (MIT) in Cambridge to prove that conferences would accept meaningless papers — and, as they put it, “to maximize amusement” (see ‘Computer conference welcomes gobbledegook paper’). A related program generates random physics manuscript titles on the satirical website arXiv vs. snarXiv. SCIgen is free to download and use, and it is unclear how many people have done so, or for what purposes. SCIgen’s output has occasionally popped up at conferences, when researchers have submitted nonsense papers and then revealed the trick.
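The technique SCIgen uses, expanding a hand-written context-free grammar by making random choices at each step, is simple to sketch. The toy grammar and phrase-overlap check below are purely illustrative: this is not SCIgen's actual grammar, and Labbé's real detector is more sophisticated (it reportedly keys on the generator's characteristic vocabulary statistics), but the sketch shows why such text is both easy to produce and easy to fingerprint.

```python
import random

# Toy context-free grammar in the spirit of SCIgen (NOT SCIgen's actual
# grammar). Each nonterminal maps to a list of possible productions; a
# production is a list of terminals and/or nonterminals.
GRAMMAR = {
    "SENTENCE": [
        ["We", "VERB_PHRASE", "that", "NOUN_PHRASE", "is", "ADJ", "."],
        ["NOUN_PHRASE", "must", "VERB", "NOUN_PHRASE", "."],
    ],
    "NOUN_PHRASE": [["the", "ADJ", "NOUN"], ["NOUN"]],
    "VERB_PHRASE": [["argue"], ["confirm"], ["demonstrate"]],
    "VERB": [["emulate"], ["synthesize"], ["refine"]],
    "ADJ": [["scalable"], ["stochastic"], ["amphibious"]],
    "NOUN": [["lambda calculus"], ["write-back caches"], ["DHTs"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:            # terminal: emit it as-is
        return [symbol]
    words = []
    for part in rng.choice(GRAMMAR[symbol]):
        words.extend(expand(part, rng))
    return words

def fake_sentence(seed=None):
    """Generate one grammatical-looking but meaningless sentence."""
    words = expand("SENTENCE", random.Random(seed))
    # crude detokenization: no space before the trailing period
    return " ".join(words[:-1]) + words[-1] if words[-1] == "." else " ".join(words)

def looks_generated(text, phrase_bank, threshold=0.3):
    """Crude detector sketch: flag text that reuses too many phrases from a
    generator's known, fixed vocabulary. Labbé's actual method is more
    refined, but the underlying idea is similar: a template generator
    leaves a recognizable lexical fingerprint."""
    hits = sum(phrase in text for phrase in phrase_bank)
    return hits / len(phrase_bank) >= threshold
```

Because the generator can only ever recombine a fixed inventory of phrases, any sufficiently long output drifts toward the same tell-tale vocabulary, which is what makes automatic detection feasible.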
…

Labbé says that the latest discovery is merely one symptom of a “spamming war started at the heart of science” in which researchers feel pressured to rush out papers to publish as much as possible.

Some of the authors deny having had anything to do with the papers published under their names, while others have not responded to requests for comment.

Comments

The WMSCI conferences have been running for ten years, and last year’s meeting attracted nearly 3,000 papers. WMSCI 2005 advertises itself as “trying to bridge analytically with synthetically oriented efforts, convergent with divergent thinkers”.

The MIT team regards it as one of many conferences that have no scientific function and sell themselves through indiscriminate e-mails. “You see lists of speakers, and there’s no one you’ve ever heard of,” says Stribling. “They spam us.”

Such conferences have sparked anger in the field, as demonstrated by a WMSCI submission from David Mazières of New York University and Eddie Kohler of the University of California, Los Angeles. The title, text and figures of their ten-page paper consist entirely of the phrase “Get me off your fucking mailing list”.

“I don’t know why these conferences exist,” adds Frans Kaashoek, a member of the MIT computer-science group to which Stribling and his colleagues belong.

But the WMSCI’s general chairman, Nagib Callaos, who is based in Venezuela and has no listed academic affiliation, has defended the conference’s decision. “We did not receive reviews for some papers,” Callaos says. “Since we thought that it was not fair to reject those, we accepted them as non-reviewed ones.” The MIT paper has now been pulled.

I myself regularly receive “invitations” to such “conferences” with ridiculously broad topics; invariably, I haven’t heard of any of the “organizers” or any of the people they’ve “asked” to be “keynote speakers”, and even though the topics are so broad, they can rarely be claimed to include what I work on! They harvest my e-mail address from my published papers, as do the spammers who “invite” me to publish in new “journals”.

Their goal is to collect the “participation fee”. The fake journals tend to claim to be open-access author-pays journals and to offer me a first-time discount on the publication fee…

Usually they address me as “Dear Professor/Researcher”. It’s like the “Dear Sir/Ma” used by the people who desperately want to foist tens of megabucks on me that they inherited from “cocoa and GOLD merchants” who were “poisoned by business associates” anywhere from Ghana to Nigeria.

The new paper reveals that Springer (one of the big four science publishers) and the Institute of Electrical and Electronics Engineers have repeatedly been fooled into publishing the “proceedings” of such “conferences”, or perhaps have simply not cared. That is a scandal, but a different one than your post makes it sound like.

Again, these are fake conferences. From the new paper:

Labbé does not know why the papers were submitted — or even if the authors were aware of them. Most of the conferences took place in China, and most of the fake papers have authors with Chinese affiliations. Labbé has emailed editors and authors named in many of the papers and related conferences but received scant replies; one editor said that he did not work as a program chair at a particular conference, even though he was named as doing so, and another author claimed his paper was submitted on purpose to test out a conference, but did not respond on follow-up. Nature has not heard anything from a few enquiries.

and:

Ruth Francis, UK head of communications at Springer, says that the company has contacted editors, and is trying to contact authors, about the issues surrounding the articles that are being taken down. The relevant conference proceedings were peer reviewed, she confirms, making it all the more mystifying that the papers were accepted.

The IEEE would not say, however, whether it had contacted the authors or editors of the suspected SCIgen papers, or whether submissions for the relevant conferences were supposed to be peer reviewed. “We continue to follow strict governance guidelines for evaluating IEEE conferences and publications,” says IEEE spokeswoman Monika Stickel.

It looks like they weren’t peer-reviewed, because every one of these conferences was a scam in the first place.

There is a long history of journalists and researchers getting spoof papers accepted in conferences or by journals to reveal weaknesses in academic quality controls — from a fake paper published by physicist Alan Sokal of New York University in the journal Social Text in 1996, to a sting operation by US reporter John Bohannon published in Science in 2013, in which he got more than 150 open-access journals to accept a deliberately flawed study for publication.

I don’t review papers, but I do participate on a screening panel at the company where I work, where we decide which patent proposals will actually get sent to the lawyers to be patented (which is expensive). The process is pretty different in practice from peer review, but it’s a similar idea.

Once I got a proposal that, while clearly composed by a human, was completely unworkable. I think what happened is that in a meeting somewhere, somebody noticed that the description of algorithm X used some of the same words as problem Y, and said, “Hey, we should apply X to Y! You there, write up an invention proposal!” Except the words happened to mean different things in these different contexts, and the combination ended up being gibberish.

Here’s the thing: the first time I read it, I said to myself, “Oh man, that one is highly technical… I’ll come back to it later.” The next two times I thought, “Wow, this is quite subtle. It will take me some time to grasp the math.” It wasn’t until about the fourth read that I realized the problem was not with my understanding, but rather with the proposal itself. Even then, I had a co-worker read it over to make sure I wasn’t missing some key point. I think I spent more time on that nonsense proposal than I do on most serious ones.

I can definitely sympathize with these faulty peer reviewers, is what I am saying. If I was on a tight schedule, and all that was expected of me was a yea or a nay, and especially if I thought one or two other peer reviewers were going to be double-checking the paper, I might be pretty tempted to say, “Well, I don’t get it, but that’s because I’m not familiar with that particular algorithm. Whatever, I’m sure they did due diligence…” And of course that would have been a mistaken assumption, but a tempting one nonetheless!

I have reviewed papers and proposals, and it is not easy. It is time-consuming and difficult, exacerbated by the fact that many scientists are simply not good writers. So I too am sympathetic to papers passing through the filter, unless glaring errors were overlooked.

I think part of the problem is that reviewers may be reluctant to tell authors “Look, I just don’t understand what you are trying to say and/or do. Please rewrite it so that it is more clear to the reader.”


Yes, precisely. There are two parts of our invention review process that I think help immensely:

First, the primary reviewer is supposed to come to the panel with a short summary (usually 3-5 sentences) of what the invention is all about, in their own words. You can kinda fake it, and I’ve seen cases where the inventor basically writes the summary, but forcing the reviewer to attempt one at least makes that less likely, I think. With the example at hand, if all I had to do was check a box, I might have been tempted to just shrug my shoulders and move on (especially if I was really busy). But I knew I didn’t understand it well enough to write a summary, so that forced me to either make a conscious decision to circumvent the process, or else press on until I understood.

Second, you are supposed to talk to the inventor(s), at minimum via a brief email exchange, to make sure you understand. This, also, doesn’t always happen, but then there is at least a record that the process wasn’t followed to the letter.

Alas, the latter is probably not practical in academic peer review, since getting all of these disparate people in contact would be a logistical nightmare. (Even in the example I am giving, where everyone works at the same company, logistics sometimes sabotage that part of the process.) It’s too bad, because what a difference that could make…