In an editorial published online October 23, editor Carol Shoshkes Reiss notes that the decision to ban authors who plagiarize material stems from a rash of recent submissions containing overlapping text:

Recently we began to routinely scan each submitted manuscript for original language, using high-caliber plagiarism detection software. While it is not unusual to see common language describing methodology and reagents, we have found that 10–20% of the submissions in the last 4 months have had unacceptably high levels of strings of words or even full paragraphs that are taken verbatim from other published material. The most recent submission to come across my virtual desk had a 50% overlap with published articles found in the literature.
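iThenticate’s algorithm is proprietary, but the kind of check the editorial describes, flagging verbatim strings of words shared with already-published material, can be sketched as word n-gram overlap. The function names and the 7-word window below are illustrative assumptions, not the journal’s actual method:

```python
def ngrams(text, n=7):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, published, n=7):
    """Fraction of the submission's word n-grams that also appear
    verbatim in previously published text (0.0 to 1.0)."""
    sub = ngrams(submission, n)
    if not sub:
        # Submission shorter than the window: nothing to compare.
        return 0.0
    pub = ngrams(published, n)
    return len(sub & pub) / len(sub)
```

A real detector would compare against an indexed corpus of millions of articles and ignore boilerplate such as methods phrasing, but the score it reports is conceptually this kind of ratio: a submission sharing half its word strings with prior literature, like the one on the editor’s desk, would score around 0.5.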

The journal won’t just ban authors who submit overlapping text — it will notify the authors’ institutions and funding agencies, as well:

The action taken by DNA and Cell Biology is swift and final, and its policy is firm. Questionable submissions are immediately rejected for plagiarism. The Provost or Dean of Research of the home institution of the corresponding author is notified. Furthermore, if grant support was listed in the submission, the agency is notified. The authors are red-flagged in our electronic system and are barred from submitting future research communications for 3 years.

The journal routinely asks authors to recommend reviewers, but will no longer accept suggestions that come with non-institutional email addresses, which can easily be created under a fake name:

The Journal’s policy is to allow authors to suggest potential referees with names of experts in their fields. However, only suggested reviewers who have an academic and/or institutional e-mail address will be considered. E-mail addresses with generic domain names (Gmail, Hotmail, Yahoo, AOL, etc.) will be disregarded.

We haven’t come across a case of fake peer review affecting DNA and Cell Biology, so we asked editor Carol Shoshkes Reiss whether the move is a preventative one. She said the journal has “recognized fake email addresses,” though most (not all) were flagged as such:

I have confidence that 4 of 5 section editors never used the potentially fraudulent email addresses. After a recent flurry of internal messages, I am certain that the 5th will never do it in the future.

Reiss added that the faked emails didn’t result in a paper published by the journal. The recent decision to avoid reviewers with non-institutional emails is to prevent any future incidents, she said:

…yes, this is a pre-emptive preventative decision to avoid “tame” and fake critiques.

To make sure she is contacting the correct expert, Reiss told us:

I will get the university email address for the proposed experts and use that. Now, on occasion, the literature has personal email addresses for corresponding authors. I will use those.

Reiss told us the journal hasn’t established a set cut-off for how much plagiarism is acceptable:

We don’t have a specific figure. I examine the iThenticate report. If the materials and methods section is the only area of common phrases, I ignore it. If the introduction, results and discussion have long tracts of identical words, the paper is rejected. There was just a paper with a 31% score and >11% self-plagiarized.

She noted that the journal has spent four months scanning every submission, before which it only investigated papers that someone had flagged:

We only have 4 months experience where every submission is scanned. Before, if alerted by a reviewer, or in our search to select reviewers we found a concern, the submission was examined.

Dr. Carol Shoshkes Reiss, I have four queries for you:
a) what commercial software do you use to screen plagiarism? How is the % textual overlap factored into the decision of “plagiarism” and is any of this information detailed in your instructions for authors?
b) Have all back papers already published in DNA and Cell Biology been screened for plagiarism? If not, would this be fair to new authors, or excessively lenient to past authors?
c) Why use such a radical approach? Can you not accept human error to be a possible valid excuse?
d) I use a yahoo.com email account. Does that make me an invalid peer?
http://www.researchgate.net/profile/Jaime_Teixeira_Da_Silva/publications

1) we use iThenticate; decisions are made on a case-by-case basis.
2) no we have not screened previously published papers, only each new submission.
3) no, when you see the results, it is apparent that human error has not resulted in long blocks of copied text being integrated into a new MS
4) if your publications indicate that the corresponding address is your yahoo account, it would be included, should the section editors wish to invite you to evaluate a MS

Hello Carol, I think it is excellent that you respond publicly here to questions by scientists. Your accountability, frankness and openness are admirable. Unfortunately, I see anything but these three principles in most editors of plant science journals when queries are made publicly. So thank you for showing the correct way, even if the questions are highly unpalatable. I have one worry about your response to my query 2. How do you know that no plagiarism exists in all past published papers? If I were an author of an “old” paper who may have plagiarized text, I would be breathing a massive sigh of relief that my paper had not been scrutinized. May I suggest, to be fair to all new authors, that all old papers be screened with the exact same software, iThenticate. To be fair to all authors in all 34 previous volumes:
http://online.liebertpub.com/loi/DNA

Incidentally, will this policy be applied uniformly across all Mary Ann Liebert Inc. journals, or only to DNA and Cell Biology?

Either they don’t get many submissions or they are way overstaffed. This is a potential resource nightmare. Many journals have expected that human error exists and authors are given an opportunity to correct or explain minor infractions. This policy is more like the nuclear option–no second chances, no mistakes allowed. I will be interested to see if submissions drop.

Indeed, the revised policy looks like a nuclear option. However, as you know, post-WWII military strategies involving nuclear weapons are essentially based on the balance-of-power and “mutually assured destruction (MAD)” doctrines. I feel that Carol Shoshkes Reiss just wants to use a fair deterrence strategy, with the purpose of decreasing the plagiarism rate. The actual concern is about the “unacceptably high levels of strings of words or even full paragraphs that are taken verbatim from other published material”.
The good thing is that editors are not machines. Concretely, I’m sure that an editor receiving a submission in which the whole introduction is copied from a Wikipedia page will go easy on the authors. In the case of an inexperienced author, from a country or a lab with a weak scholarly culture, the best way is to send advice like “Hey guys! You’re locos! Take a look at my Editorial! Read the notes to authors and prepare your manuscript accordingly!”. In the end, no one will be banned, I hope.

Disclaimer: I haven’t read the Editorial (paywalled).
Disclaimer 2: A long long time ago, I myself attempted to save time and efforts by submitting a couple of papers having a rather large section in common (the papers were actually companion papers, dealing with the same topic). The auto-plagiarism was obviously spotted by the Editor, and this provided an opportunity for an exchange of correspondence full of humor. Both articles were eventually published, after revision.

Sylvaine, I agree wholeheartedly with you. If, in an unfortunate circumstance, similar text or plagiarism is detected, especially a tiny amount of it, then editors should be more “human” and alert the authors, requesting them to edit the text accordingly. Immediate and aggressive rejection without a fair chance to correct an honest error is precisely what I am fighting here with this Taylor and Francis / Informa journal:
http://retractionwatch.com/2015/09/24/biologist-banned-by-second-publisher/#comment-781705

Bottom line: we should be able to resolve such issues peacefully, as a conversation between authors and editors, not through aggressive acts of banning. I do admit, however, that if authors fail to address textual overlap or plagiarism, then they could be banned (but are co-authors also included in this ban?).

Reiss’s statement actually says: “I will get the university email address for the proposed experts and use that.”

This is narrower than “institutional” addresses and would appear to exclude addresses from e.g. governments, private or public companies, NGOs, independent researchers, etc.

On a broader point, mandating institutional (or university) addresses ignores the realities of the employment market. Many researchers are employed on short-term contracts, perhaps across multiple institutions, and regularly gain and lose institutional addresses. Personally, I’ve had over 15 separate email addresses across my career so far, 10 of which were institutional ones. I’ve lost access to those as my contracts have ended. The only email addresses to which I have reliable, long term access are the generic ones, and an “email for life” address I was issued as a student.

It’ll be interesting to see how quickly the review pool is reduced by this policy.

Exactly – this is pro-university bias, ignoring scientists who actually work in the applied field – industry and government. How about, instead, the editors just do an online search for a few minutes, and figure out reviewers to contact on their own?

I agree with this. I also note that the plagiarism policy assumes all authors have a provost.

They’re responding to a live problem, I’ll give them that. “Diversify your editorial staff” + “stop letting authors suggest their own reviewers” would be better, in my book, but what can I possibly know? I don’t have a provost 😉

I understand the concerns you raise, as I’ve encountered them myself. Journals are not the only places asking for institutional addresses, though. For example, ResearchGate requires such an address before allowing people to set up an account. I think an easy fix for people in transition, or even those who are established, is to set up their own academic website. Many already have these. Usually the website host provides a number of email addresses that are institutional in nature.

ELF, if you are the corresponding author of a recent peer reviewed paper that is relevant to the submission under consideration, we would use the email address of record in that publication. If you have moved to another place, and if the message is not forwarded, you would simply not be among the reviewers.

Or, they are committed to doing the right thing. Funny how the same amount of time can be spent doing a thing carefully as doing a thing carelessly. The integrity of the person/institution/journal is at stake. How much will the stature of those published in this journal rise? Markedly I predict.

I applaud the moves made by these journals to address unethical behavior. Increasingly we observe material reproduced verbatim, that really should never have been allowed to go to print. I believe their proactive approach will prevent this sort of thing, which is really very injurious to the original author(s) of the material. If, for example, the plagiarized material becomes frequently cited in other papers, then the original authors become obscured if not entirely ignored for their contributions. Worse still, novel works and ideas end up being attributed to someone that never contributed a lick. Especially for young investigators looking to advance themselves, citations are very important. Let’s hope all journals become as proactive in eliminating these predatory practices.

I applaud the editors for their decision to “notify the authors’ institutions and funding agencies” if plagiarized material is submitted. That could lead to some chilling reactions.

On the other hand, the issue about non-institutional/generic emails accounts is complex, and here, I would expect the editors simply to do their job: if something looks strange, it only takes 10 minutes to cross-check over faculty web pages, google scholar, Orcid, etc. Editors can distrust the hotmail accounts, however, I absolutely distrust institutional accounts, for many reasons.
Remember the recent Harvard scandal:
http://www.nytimes.com/2013/03/11/us/harvard-e-mail-search-stuns-faculty-members.html

David Sabaj Stahl
For example, ResearchGate requires such an address before allowing people to set up an account. I think an easy fix for people in transition- or even those who are established, is to set up their own academic website. Many already have these. Usually the website host provides a number of email addresses that are institutional in nature.

– They ask that people sign up with an institutional address, but you can change it later.
– If the institutional address isn’t recognised, you request an account and they process it manually.
– If you don’t have an institutional address, they ask for more details (e.g. publications, employment history) and process it manually.

As to setting up your own “academic” website, or getting a website that provides addresses “that are institutional in nature”… how does that prevent the fake email address issue? The only difference between that and a generic email account is that you may need to fork out a bit of money for a domain name. It doesn’t verify that the account holder is real or at an “institution”.

As you say, indeed, you could still fake it using a domain name with an associated website. But my point was that this procedure would overcome the objections posted here concerning the requirement of an institutional (i.e., non-generic) email address. I’m guessing that, as professionals, having non-generic addresses would have many other benefits beyond the journal’s requirements. It’s always a plus when academic cheaters are found out and punished, but I think it’s even better when Katy bars the door and they have the opportunity to cheat removed from the system. I find it difficult to believe, although not impossible, for a huckster to purchase five domains to get five email addresses to submit for five fraudulent reviewers. If they don’t have the wherewithal to write a legit pub, they probably don’t have the patience even for that.

Yes, good point. COPE does not recommend alerting the funding agencies in cases of plagiarism. Now that I think about it, that makes sense, as long as the plagiarism is limited to language. Stealing the work of another author is a different thing, but apparently that isn’t the point in the Editorial released by “DNA and Cell Biology”. Redundant publication would be a bit more controversial, since the success rate of grant proposals depends, at least in part, on the publication record of the applicant.

At the journal I edit (New Zealand Journal of Medical Laboratory Science, a peer-reviewed open access journal with no author fees), we have also adopted a policy of banning authors from submitting for at least 3 years, if their initial submissions contain significant plagiarism. We will also try to notify their institutions. There is another situation where we are considering banning authors. This is when they fail to re-submit a revised manuscript in light of reviewers’ comments. This has happened twice in the last couple of years, and we suspect that they use our journal to get useful comments to strengthen their article and then submit it to a “better” journal. To date we have not found these articles published elsewhere, but are closely monitoring the international literature.

Yes, it is not uncommon for authors to “shop” for constructive criticism… however, it is also possible that some authors submit to a more highly ranked journal, get rejected and resubmit (with or without revisions). I have gotten cover letters to editors of different journals that were not changed — very embarrassing, actually.

Dr. Reiss, does this ban also encompass all co-authors of the person being banned? What if co-authors of a banned author, who formed part of the team on a paper containing plagiarism, were to submit a paper to DCB as part of another team? Will they be banned? I ask because I was recently banned from ALL Taylor and Francis journals, and one of my concerns is that punishment is being handed out automatically to other scientists, namely my co-authors, who are not accountable for my views. Moreover, in the case of Taylor and Francis, I cannot see any clear policy regarding banning on any of its web pages, which makes me think that banning is an ad hoc situation, one that can negatively impact innocent scientists who form part of my team. I consider this to be highly incorrect, so your opinion on how you would avoid a similar situation at DCB, as you take an iron-fisted approach to dealing with plagiarism, would be greatly appreciated.

Yes, as each co-author bears responsibility for the content of manuscripts with their name in the author list and must assure that the work is valid, original and never previously published. If a MS is accepted, each co-author must sign the copyright release form.

David Sabaj Stahl
I find it difficult to believe, although not impossible, for a huckster to purchase five domains to get five email addresses to submit for five fraudulent reviewers. If they don’t have the wherewithal to write a legit pub, they probably don’t have the patience even for that.

Truly, it’s very, very easy to get new email addresses, and you don’t have to pay for a new domain name. Plenty of online sites offer free email accounts, “disposable” email addresses, or even full fake profiles (with street addresses and SSNs, depending on country), complete with domain names that aren’t Gmail, Hotmail, Yahoo, AOL, etc. The bottom line is that editors need to verify the identity of proposed reviewers, and domain names are not a great way to do it. Arbitrarily excluding some domains doesn’t solve the verification problem, and potentially excludes anyone using generic domains for perfectly legitimate reasons.

Carol Shoshkes Reiss
we would use the email address of record in that publication. If you have moved to another place, and if the message is not forwarded, you would simply not be among the reviewers.

Thank you for the clarification, Carol. I do not work in your field, so have no sense of how easy or hard it is to find appropriate reviewers. I can appreciate that using an email address linked to a recent publication may be useful for verifying identity.
I found this study on the emails of corresponding authors in the MEDLINE database:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1369259/
Over 24% of e-mail addresses had become invalid within a year of publication, and 49% were invalid at the time of the study! Clearly this is a huge problem too, both for editors looking for reviewers and for potential reviewers wanting to offer their services. I’d love to see this study replicated with more recent data.

Carol Shoshkes Reiss
I have gotten cover letters to editors of different journals that were not changed — very embarrassing, actually.

I managed to do this by uploading the wrong file. Yes, I was mortified, especially since I’d spent close to a day reformatting the document and reference list, and rewriting parts of the document to meet the 2nd journal’s instructions!

Most journals stop at rejecting a paper that they suspect has plagiarized content. However, the barrage of incidents of fake peer reviews certainly necessitates a stricter action against authors indulging in misconduct. The journal has taken a bold step and probably in the right direction. But this raises a question about the future of those authors who plagiarize unintentionally, in particular, non-native English authors. Would putting a ban on unsuspecting authors and making their mistake public impact their career and reputation permanently?

“Though there are many possible reasons for China’s problems for plagiarism, the most cited are the intense pressure upon Chinese researchers to publish and establish themselves internationally, a political system that is often seen as protective to those who are higher ranking giving them relative immunity to plagiarism allegations, and a general lack of understanding on the rules regarding plagiarism and research ethics.

However, another way to look at China’s plagiarism issues is in the context of its broader issues with intellectual property. China is routinely seen as a haven for knockoff, imitation and pirated goods and copyright law is struggling to catch up with the rest of the world. In fact, it was only earlier this month that China began to pass legislation protecting works by anonymous authors.

But whether the problem is the pressure to publish or a generally relaxed attitude toward intellectual property, the solution, for journals at least, has had to involve great awareness and enforcement.”

In that context, perhaps some plagiarism is “unwitting”, i.e. the authors are simply unaware that it is unethical?

I am not proposing public listing of corresponding and co-authors of papers; no public shaming is or was intended. However, they will not have the privilege of submitting to DNA and Cell Biology. They are not banned from other journals published by the same Publisher (or any other journals).

If this instance is/was unique for the group, they might get the message, and play by the expected standards of scientific publishing. If it represents a pattern (and only a thorough examination of their history could address that question), then their Research Integrity Officer and the Funding agencies should be involved in discipline for breaches in Academic Integrity.

If it was due to a Trainee’s naivete and the senior co-authors were negligent in their mentoring and supervision, then practices in the group need to be changed.

“[…] this raises a question about the future of those authors who plagiarize unintentionally”.

Just for the record: the concept of “unintentional plagiarism” makes very little sense, and that’s why plagiarism detection is merely a question of means (computing resources and software): the probability of producing the same string of 25 words twice by chance is close to 1E-75, assuming automatic writing of nonsensical sentences (when comparing two real articles, which share a language and a topic, the figure moves away from 0 a little, but remains vanishingly small).
Assessing whether a plagiarized document was created with the intent of deceiving the reader is far more difficult, and no software is available for that.
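The 1E-75 figure is consistent with a simple back-of-the-envelope model, under the assumption (not stated by the commenter) that each of the 25 word slots is filled independently and uniformly at random from a vocabulary of roughly 1,000 words:

```latex
% Probability of two independently generated strings of n = 25 words
% coinciding exactly, with each word drawn uniformly from a vocabulary
% of V = 1000 words:
P(\text{match}) = V^{-n} = 1000^{-25} = \left(10^{3}\right)^{-25} = 10^{-75}
```

Real prose is far from uniform, since word choices are constrained by grammar and topic, so the true probability of an innocent 25-word coincidence is higher than this, but it remains negligible, which is the commenter’s point.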