Chemistry World and Others on Dodgy Data

September 5th, 2013

Hello, friends. Pardon the radio silence of late. My first semester of teaching just started at SLU and my head is already spinning. I’ll have a full post on that subject soon, but I wanted to weigh in on a few recent pieces regarding the cases of suspicious data that were reported here and elsewhere.

Reporter Patrick Walter wrote a story earlier this week for Chemistry World that examined whether blogs are appropriate venues for policing the chemical literature for misconduct. I was interviewed for—and quoted in—the story, which I feel is thorough, is balanced, and represented my positions accurately. As you might imagine, I argue that blogs are indeed appropriate venues to report suspicious data and to analyze how the community should respond to misconduct.

There are plenty of people who disagree with me—to varying extents—and the article raises their concerns as well. That is fantastic, because this is a discussion that we need to have. I am happy to engage in thoughtful debate on the subject (see posts here and here) in hopes that we, as a community, can arrive at a more efficient system for removing manipulated data from the literature and preventing their publication in the future.

Mitch André Garcia, who runs both Chemistry-Blog and the chemistry subgroup of Reddit, is one of the people who took exception to my post on the manipulated spectra in Organic Letters. Here is what he wrote on Twitter:

@ChemBark I think you have lost sight of the line between witch hunting and the proper responsibilities of the online chemical community. :/

I’m left scratching my head here. How do the nanochopsticks he reported qualify as “acceptable to cover” for being “egregiously manipulated and…in a high impact journal” but not the erased impurities in the Anxionnat/Cossy spectra reported here? Seems pretty hypocritical. And if we can’t agree on whether these cases meet his standard for “egregiously manipulated” and “high impact”, how are we supposed to agree on anything?

My view on the matter is that anyone who wants to raise concerns publicly about data may do so, with the full realization that they are putting themselves on the line. If I raise concerns about the integrity of data in a paper, I am accountable to defamation law and the high intelligence and ethical standards of the readership here. I can only bring information to people’s attention. If that information is wrong or doesn’t support my opinions, I will be excoriated in the comments and lose credibility. If what I publish is defamatory, I will probably also be sued. The root cause of the outrage among chemists about these papers cannot be attributed to blogs; the data speak for themselves.

A few days ago, John at the blog It’s the Rheo Thing posted some cautionary advice to “activist [bloggers] that are confronting examples of fraud, plagiarism and other publishing infractions in the technical literature”:

What goes around, comes around. Many are pleased to bring the axe down hard on someone’s head, and hold as many people responsible as possible (from ALL the authors to the principal investigator and maybe even beyond that), but we need to keep in mind that publishing scientific research is a human effort and, as such, will be imperfect at times even when no harm, deceit or other nefarious activity is intended. Many of the commentators screaming for blood are young professionals who have yet to run a large, established research group, but who think that they will be able to do so flawlessly in the future. Of course that won’t happen. You will have failings and shortcomings and things will go wrong despite your most fervent intent to prevent it. Most people do not have a problem with that.

Most people. But there will be plenty of others wanting your head on the same chopping block and with an added level of glee since you were responsible for bringing so many down yourself. It’s human nature. We can’t change it, this perverse desire to bring down the people bringing down others. Worse yet, these efforts to trap you may be entirely without merit. That won’t matter. “A lie can travel halfway around the world while the truth is still putting on its shoes” (Mark Twain). Your name and reputation can be placed in the same trash heap as those truly deserving it far more easily than you can ever imagine. Despite your noble intents and purity of heart.

User “juicebokz” on Reddit called John’s post “a letter to ChemBark”, and I feel compelled to weigh in with the following points:

Do you seriously think that the responsibilities of running a modestly popular blog don’t weigh on me? Do you think that I don’t consider whether I am treating the subjects of these sorts of posts fairly? These posts are not aimed at destroying scientists; they are aimed at protecting science. I do not take joy in the downfall of others, but I am not going to let a miscreant’s potential downfall prevent me from discussing a topic that I feel is important. Should any researchers be “brought down” for data fabrication, I will not be the person responsible for bringing them down. They will have been the people responsible for their own downfall.

And I am by no means a perfect person. Everyone makes mistakes and does things of which they are not proud. The point is that you have to pay for your mistakes, then dust yourself off and go about living a productive life. Should anyone gather the motivation to search through my past, or present, they’re going to find stuff that will embarrass me…but they are not going to find any fabrication of data.

As for drawing attention to co-authors who very likely did not actively participate in the fabrication of data, I still stand by the position that authors must share the responsibility for the content of their papers. “Share” does not mean “share equally”, but all authors should at least read through their papers and keep an eye out for things that are obviously wrong. When you are a corresponding author, ensuring the integrity of the data in your papers must be one of your priorities. If you think I’m alone in this view, please go back and read Smith’s editorial in Organic Letters. Any punishment doled out regarding fabricated data in a paper should be proportional to (i) one’s active involvement in the fabrication and (ii) one’s responsibilities as a conscientious scientist and/or manager. These responsibilities should be the subject of more discussion among chemists.

Finally, does anyone really think I am helping my career by reporting on scientific misconduct? Do you have any idea how uncomfortable it is to send e-mails to the editor-in-chief of a high-impact journal in my field asking for comment about how he’s going to deal with manipulated data in a paper written by one of his associate editors? Was it lost on people that Smith’s response to my inquiry was addressed “Dear Bracher”? It’s certainly not the most cordial of salutations. I asked a follow-up question by e-mail and was not given the courtesy of a reply.

I don’t like these sorts of awkward interactions, but asking hard questions is part of doing a thorough job of reporting, so I’ll just bite the bullet. I can only hope these interactions don’t come back to hurt me down the road, but that’s a possibility. At the end of the day, I would love not to have to write about scientific misconduct because (i) chemists have stopped doing it or (ii) universities, journals, and government have created a good system for dealing with it.

71 Responses to “Chemistry World and Others on Dodgy Data”

It depends so much on writing style, doesn’t it? If sadistic glee just drips from your post and it’s clear from your style that your sole intention is to do a hatchet job, then it’s going to leave a bad taste in everyone’s mouth even if your objections are genuine (IMHO this happened a bit – unintentionally, I am sure – in the FWS post). But if your post reflects genuine concern and is expressed in moderate and respectful language, it leaves a very different impression; it tells people that you are not interested in simply tarring and feathering but want to advance the cause of your discipline. Personally, I think your style has evolved and matured over the years (as most serious bloggers’ styles do); the recent post fully conforms to high standards IMO, and I see no reason why anyone would think of it as a witch hunt.

Scientific misconduct should be pointed out and discouraged, plain and simple. But should it also be punished? Jan Hendrik Schön was stripped of his PhD (http://joaquinbarroso.com/2011/09/19/the-not-so-schon-controversy/). How different is erasing impurity signals from NMR spectra from making numbers up?
All of us scientists are trying to share our findings, but most of the time the system pushes us to just publish anything, and if you don’t have anything, people start making things up. Why well-established authors do it is beyond my understanding.
Peer review is not perfect, but it is there for a reason.
And what is this about a witch hunt? Are we not allowed to mention a thing about misconduct in our blogs? As Ash puts it, it all comes down to writing style. It is one thing to name names, and another to make sure manipulated data don’t make it into scientific journals, where they could influence further research.

There are indeed many better subjects to write about than scientific misconduct, and I cannot think of any blogger posting on misconduct, real or perceived, who does not agree. But we do have a problem with misconduct, and if you care to distil the problem (this is a chemistry blog, after all) and analyse the product, it is well known: corruption. Misconduct leads to corruption, and corruption is exceptionally corrosive and destructive.
We also teach and have a responsibility to the next generation. Can you look your students in the eye knowing that some may end up like the hapless “Emma” who was requested to make up an EA?
To use a naval analogy, we are the officer class: we have been placed in a position of responsibility and we have to shoulder it, rather than jump into the first and only lifeboat. I would also echo the sentiment regarding the dangers of this activity. Perhaps not so severe as those facing the scientist in Thailand whom Elizabeth Gibney wrote about today in the THE (http://www.timeshighereducation.co.uk/news/death-threats-in-thailand-for-uk-whistleblower/2006659.article#.UihkNltA23Y), who fears for his life, but nonetheless scary: you might be blackballed by a load of journals, your CV dies a slow death, and you can forget grants, promotion, tenure, etc.
I would agree too that the tone of blogs can border on the vitriolic, but that comes from frustration at the virtual lack of response from journals and institutions, who generally do not want to know. Again, if you are not convinced, look at the case of Klarlund Pedersen, the subject of a recent post on Retraction Watch: http://retractionwatch.wordpress.com/2013/09/02/regrettable-but-not-scientifically-dishonest-klarlund-pedersen-responds-to-danish-committee/
Standards, what standards?
Finally, misconduct can kill. There are plenty of examples where scientific misconduct in biomedical/medical research has led to the deaths of innocents. So rather than condemning, I would suggest working to help get science where it should be and that has to include blogging.

I said witch hunt because you pointed to some sketchy NMRs and literally said, “I have no definitive idea of what happened in the production of the spectra in this paper”. In my opinion, unless you have the smoking gun, it is better not to point it out. I think it was a commenter who eventually showed you the white-box layering trick in Adobe that gave you the smoking gun you should have had before you went to press; I could be wrong, though.

As far as your questions regarding high impact, Chemistry Blog has made the editorial decision to only uncover scientific fraud cases from journals with an impact factor greater than 12. “Egregiously manipulated” means the manipulation is so blatant that a statement like “I have no definitive idea of what happened in the production of the spectra in this paper” is unnecessary.

I think you’re doing a great job, and suspect that many of the people complaining about you have a vested interest (ie know the people you’re calling out/are the people you’re calling out). I think you have been very balanced in not leading a witch hunt, and simply bringing things to the attention of the community.

I was very pleased with Amos Smith’s editorial and the policy that went with it, so it’s sad to see him being much less forthcoming when the problem involves a member of his own editorial board. I don’t see how his editorial retains any credibility if he doesn’t ask Prof. Cossy to leave the editorial board. This is not a case of witch-hunting; she has admitted there were fraudulent data in her papers. While I’m sure it wasn’t she who did it, Smith’s editorial specifically stated that ultimate responsibility lies with the PI, so I don’t understand how she can be allowed to remain on the editorial board.

@Mitch: Do you have a definitive idea of what happened in the Pease paper? As in, 100% complete knowledge of what went down and how?

My exaggerated tolerance for some sort of a reasonable explanation was a direct response to the call for more fairness to those whose integrity was being brought into question. I don’t think one can mount a tenable argument that I was unfair to any of the authors of that paper or the journal.

Also, there are plenty of posts where I know more info than what is posted. Sometimes, less is more.

I have to say that I think Mitch is all wet on this particular subject, Paul. Under no reasonable circumstances can the affair with the Cossy spectra be called a “witch hunt.” There were clear problems with those data, and it was appropriate to call attention to them–especially given the position of authority occupied by the corresponding author. And choosing an arbitrary cut-off in impact factor is hard to justify. The fact is that many, many people read Organic Letters, and have come to expect a high degree of rigor from the journal. So potentially digitally modified spectra (that were ultimately shown to be seriously manipulated) are a big deal that should be discussed.

I am also completely baffled by the commenters who tell you that you should stick to other subjects. They’re not paying for your content, so why do they think they should be able to dictate what you cover? That is entirely your decision, and if people are not happy reading about these issues, they can either skip over those posts, or stop reading the blog entirely.

That said, I do have a small concern about the negative tone of some commenters. While you cannot reasonably be accused of conducting a witch hunt, there are some anonymous commenters who seem to take great pleasure in dragging everyone involved through the mud. I suppose that you could moderate your comments, but that would probably detract from the intense level of engagement that this blog generates in the organic community. The next best thing would be to reply to the most egregious with the appropriate disavowals, though at a risk of being accused of trolling your own comment threads.

Anyway, all of this is to say, “You’re doing a great job.” And I do truly appreciate the way you have chosen to stick your neck out as an untenured professor. Most would not have the cojones to do something like that.

I think you went off the rails, just a bit, when you implied (or said as much) that you needed to rush this story out in order to best ‘your able competitors’ (or something along these lines). Your blog is good; you will survive. Take a few lessons on being impassive from the bearded one.

@Franz: I share your distaste for some of the comments left here, but I’ve decided to do as little moderation of the comments as necessary. My reasons are explained at the links on the bottom of this page.

@Bob: Point taken, but this story, the report on the NMR erasures, was not rushed.

Yes, I cannot imagine that Amos intended to address you as “Bracher.” Most likely he left it blank intending to come back and fill it in after determining your degree/position status. The lack of response to the follow-up is a little more troubling, but overall I would not assume that the E-in-C of Org. Lett. has it in for you, Paul.

You say “Chemistry Blog has made the editorial decision to only uncover scientific fraud cases from journals with an impact factor greater than 12”.

Would you care to elaborate as to why the line is drawn there? JACS is therefore beneath your notice? Along with the vast majority of chemistry journals. I can understand the need to limit the workload, but this seems to be a very arbitrary delimiter, and I would be interested to hear the reasoning.

I disagree with Paul’s opinion on the extent to which Cossy deserves blame. I don’t think PIs of large groups can reasonably be expected to go through SI with a fine-tooth comb. But the important point is that ChemBark brought to light a case of fraud – and that is something we should all agree is a very good thing. The blog is serving a very valuable role in a community that has no other serious means for enforcement of rules that are of critical importance. I just don’t get people focusing on the tone of the blog rather than these very important issues. Keep up the good work, Paul!

@Catalyzer – All due respect, but the PI of a large group should most certainly carefully inspect the SI of papers bearing their names. That’s the data that underlies every figure and conclusion! If you haven’t seen the data with your own eyes, how do you know it’s accurate?

A synthetic organic group of ~15 people (medium-sized, I’d say) might publish 10-20 papers in a given year. Even Prof. Trost, who runs a 25-person group, published only 26 papers in 2012. Do you believe that’s too many (2/month) for him to personally check?

I appreciate that you do this without anonymity, but we’ve spoken about this ad nauseam and I suspect we’ve just got to agree to disagree. These incidents are, most likely, cases of a single person making bad decisions, and you’re throwing everyone in the author list into the fire. I mean, at the end of the day it’s the PI’s responsibility, but that doesn’t mean the PI is actually an unethical piece of shit (EAgate apart). Poor oversight can happen to anyone, and there’s no reason to run them through the wringer just because they fucked up by trusting someone too much. Come to think of it, sometimes this shit data might just be some middle author being a lazy asshole. Who knows?

Maybe you just don’t trust anyone. Maybe I trust people more than I should, I dunno. I just can’t get down with this. I dislike it, indeed, but I’ll defend your right to do it.

All of the developments of the past few years make me wonder: how much of the literature of the past century is unreliable? And how many of the falsifications actually have an impact? Some might be pretty big. Most are probably insignificant. If we had a few trillion dollars, we could go back and try to reproduce and confirm everything in the major journals. Until then, we need to trust in the integrity of the authors. Sadly, as Paul and others have been pointing out, that may not be a safe thing to do.

@Chemgeek …So if there have potentially been impactful falsifications throughout the history of chemistry, is the sky really falling as much as some purport? Is misconduct on the rise, or is our ability to identify it on the rise? Has there EVER been such a thing as pure reporting of scientific data across the field?

Catalyzer – I’m a PI with a decent sized group and I disagree with your opinion of the responsibility of a PI / corresponding author. If you are running a group so large that you can’t exert effective quality control of your output, you ought to downsize, establish a more effective mentoring/oversight structure, or offload other responsibilities. The buck stops with the PI.

Cossy certainly isn’t the only PI asleep at the wheel, and it appears that this situation has served as a wake-up call that her publishing process was ineffective.

The frustrating aspect of fraud in synthetic methodology papers is that the cost of reproducing the experiment within the lab that did the work is usually negligible. The entire point of this field is to discover, optimize, and report new reactions. There should be a bare minimum expectation that more than one student will have tested a new method, that NMRs (the currency of the realm) will be scrutinized, and that a standard analytical proof of yield (crude NMR, crude HPLC analysis?) should be provided in the SI.

These safeguards will not prevent all fraud, but they would significantly raise the rigor of a field with openly acknowledged issues with reproducibility and fake yields.

Seems like Chemistry-Blog is fighting with ChemBark! While ChemBark said “JACS is the finest chemistry journal on the planet”, Chemistry-Blog does not even consider JACS significant enough to search for fraud in.
Also, I always wonder why those nano journals rank so high in impact factor. It doesn’t seem like many people read them anyway.

“All of the developments of the past few years make me wonder, how much of the literature of the past century is unreliable?”

If I had a dollar for every dodgy 70s or 80s Tetrahedron procedure that I tried back in my small-molecule organic synthesis days… I don’t know, I would probably have a hundred bucks? Maybe less (30 bucks?), since I did that for only about two years. Still, very frustrating. But with that kind of chemistry you just curse the authors for a few seconds and move on to Tet. Lett. procedure number two.

@OldGuy: I started receiving a lot of tips about other cases of fraud and was asked to cover them by the tipsters. I brought this matter up with the active bloggers at Chemistry Blog, and a clear majority do not want Chemistry Blog in the business of revealing data manipulation. Taking everyone’s voice into account, I made the editorial decision to set the threshold for investigating journal manipulations at an impact factor > 12. It is an arbitrary cutoff, but it has the advantage of eliminating the need to investigate any of the tips we received, as they all occurred in lower-impact-factor journals. Other blogs are of course free to set their own arbitrary thresholds, but this threshold feels right for us at Chemistry Blog. Wish I had a more analytical answer for you. Going through this process has been a good learning experience for me personally, just one I’d rather not do very often.

@Mitch: I respect your polite and open answer to my question, thank you. Given the views of your bloggers, I personally think it might have been better to flatly rule out such coverage, or pick a discriminator of a different sort (potential impact in terms of OH&S, for example) that freed you from the vagaries of impact factor, but that’s just my opinion. There are no right answers here, and your clear communication is certainly helpful.

Paul: Since I’ve chipped in on this thread now, I wanted to add that I think what you are doing is important for the community, and you should be commended for being willing to put your head above the parapet. I don’t have an answer as to how to get “the system” to respond more effectively to these issues, but I suspect the publicity your efforts have generated will have many PIs reviewing their own activities. So that is a benefit already.

The issue that has occurred to me with some of the cases we’ve seen over the past few years is trust. Just how much trust should PIs put in their students? Science is a collaborative process and as such there has to be some trust between the leader and those putting in the grunt work in the labs and writing the papers. Does the PI need to sit beside their students when they upload anything to a journal, as Cossy says she will now do? Perhaps it’s something that starts much earlier where the PI has to take some serious time out to instil in all their students just what is acceptable and that they’d rather have the truth than some sugar coated perfect results.

I’d agree with the views that many of these problem journal articles are probably a result of poor oversight rather than deliberate unethical behaviour (although the evidence on some suggests that the latter may sometimes be true – “making up” elemental analysis??). However, where I disagree with Paul’s detractors is that this is somehow okay and we shouldn’t point it out. I see this failure of oversight as just as damaging to the field as deliberate unethical behaviour. Incompetence is no more tolerable than deliberate misdoings in terms of maintaining public trust – trust that is vital to ensure that taxpayers continue to fund important scientific research.

It is certainly not showing a lack of trust to ensure that, as a PI, you know what is going out of your group and into journals, especially if it is coming directly from PhD students and not experienced post-docs or research fellows. As a PI you have a duty to ensure that these early-career researchers are taught how to do science properly, to the highest standards, and not exercising proper oversight is letting them down. Even more experienced researchers can benefit from a critical eye on their work every now and then, and this should be seen as a positive thing (part of a mentoring process). The advantages of being active in the oversight of PhD students are clear. For example, my PhD supervisor always challenged and scrutinised my work, especially for publication, and that led me to a) construct a more robust scientific argument the first time round, including being open about what post-processing was applied to data, b) plan experiments better so that I got comprehensive data, and c) be prepared so that my viva (defence) was straightforward and didn’t hold any surprises – for that I am very thankful.

So, ideally the bulk of this mucking about with data should be eliminated within the group prior to publication if proper oversight is exercised and the rest should be dealt with in a timely and open manner by journals and universities. However, it is clear that this system isn’t quite up to the mark yet and so it is quite right that professional chemists and researchers should be able to raise their concerns in the wider media (including social media).

1) Again, I’m not sure how data manipulation/plagiarism issues would be better dealt with in private – the act of manipulating data in a journal article is public, not private, and the people who paid for the bogus research are not only the federal funders (mostly) but also the journal subscribers. As a matter of course, “Sunlight is the best disinfectant.”

2) I think Paul (having been dinged once) already knows about taking crap. I think people are well-meaning, but the warnings have the unintended effect of increasing the barrier to noting likely research fraud. The point of chemistry blogs, it seems in part, is to make note of issues that are discussed in secret and never dealt with in open and substantive ways – things that are important to chemists and science but which conflict with the interests of powerful people (despite how they may have gotten there). In theory, what we really want to do is lower the barrier to noting potential research fraud, though I can easily see that devolving into a poo-fest. Just as in other whistleblower issues, though, we are ambivalent about whistleblowing (federal whistleblowers have to get so much money because their careers are toast afterwards, since no one supports them), and as in those issues, the only ones who benefit are those willing to cheat. It is left as an exercise for the reader to determine what behavior is likely to result.

3) I don’t see the NMR post as fishing for witches. Witch hunts generally have poorly defined crimes, which allow publicly unpopular people to be cast into darkness under pretense. In this case, the manipulation of the NMR is pretty clear – it didn’t get that way by accident (so the crime was well-defined) and the appropriate agents could investigate. It seems to have been done in a way such that it had to be prosecuted publicly, which the powers-that-be didn’t like, and so people jump to conclusions.

However, if an open and transparent system for prosecuting potential fraud existed, then there would be no pitchfork-toting mobs or blog posts – you could bring the spectra to the police and they would look (and you could trust that they would). I assume Prof. Smith didn’t like the public embarrassment of his journal being found with bad spectra, but there isn’t a history of journals openly dealing with issues of data manipulation, and there is little trust that they actually would deal with it. If you can’t trust the police to protect you or your property, then unfortunately, you’re stuck with vigilantes, the mobs, or no protection at all.

Ultimately, if journals don’t want to be embarrassed by data manipulation, then they ought to deal with it openly. If you publish on the Web, sooner or later someone will find things you wish they hadn’t. Hoping everyone will keep quiet so that you’re not embarrassed is equivalent to asking them to conspire for your benefit and their harm, like witnesses to crimes of the Mafia or the Colombian cartels. I’m sure that’s the moral high ground journals aspire to, right?

I don’t blame Prof. Cossy for the rather sad attempt at NMR manipulation, but how can anyone read Prof Smith III’s editorial and not see how this is the exact situation he warned about? How can she stay on the board and OL not lose the standard it tried to attain with the editorial? I also find it disturbing how she plans on responding to this situation, namely by taking back the trust she had in her students. As a good manager, she’s responsible for the action of her students and for setting up an environment where data manipulation is not only not tolerated, but also isn’t encouraged by immense pressure to generate positive results. Unfortunately, academia seems to be a great place for immense pressure to fabricate data.

Also, I find it sad that Chemistry Blog has set an artificial level (IF 12) above which science becomes "important" enough to discuss data manipulation. I can understand why you have to have some sort of filtering device, but the severity of the data manipulation would seem more appropriate. Even data from shit journals (i.e., IF <12) are used for "higher impact" research, and if the initial reports are crap to begin with, where can the science go from there?

I'm also disappointed in the scientific community that this blog is getting crap for reporting scientific fraud/manipulation. The difference between the reporting here and a witch hunt is that there were never any witches, but it's clear there is manipulation of scientific data.

I think Paul’s posts on scientific misconduct have been useful, well written, and an overall service to the community. As a previous commentator pointed out, fraudulent yields and data just lead to a subsequent waste of money, time, and resources for those trying to repeat said procedure. It then becomes more corrosive to future students from that lab because if I’m interviewing one of them for a job, I immediately think back to the time I may have tried (and failed) to repeat a procedure from that group and I thereby conclude that I can’t trust students from that lab. By putting fraud out in the open, it puts pressure on academics to better police their lab’s data, which serves as a net benefit to everyone involved.

I keep reading comments suggesting that the PI should be given a bit of slack and not be really held responsible for the actions of one bad group member. I find this excuse complete BS. My advisor had extremely rigorous control processes in place (multiple edits from all authors and other senior group members). It almost became a competition in the group to see who could catch errors in data and who could pass a paper through the boss with the least edits. This training served me well in industry where faking your data would certainly never end well because you will get caught and promptly replaced by someone more competent.

Maybe part of the problem is that PIs start their careers with little experience in management. Whereas in industry it may take you many years before you actually have a group reporting to you (usually quite small), an academic can suddenly find themselves with >5 people and little handle on how to motivate people and manage their personalities. I’m sure that this transition can be difficult, and I look forward to reading posts from Paul on how a young academic navigates these unfamiliar waters.

@Mitch: I certainly understand your reluctance to devote additional space on the Chem Blog to scientific misconduct investigations. You control what topics you take on based on your interests and what you want the blog to be.

Likewise, you certainly can pick an impact factor cutoff to triage these stories, but why? The Pease story was newsworthy for the sheer audacity and poor quality of the fraud, such that it is hard to imagine the PI not being complicit or the reviewers taking more than a cursory glance at the images. The story is of note whether the paper appeared in Nano Letters (above your cutoff) or Chemistry of Materials (below it).

For future inquiries, could you just say, “Sorry, we’re not interested in covering scientific misconduct except in exceptional circumstances”? Journal IF is a terrible proxy for the importance of any one paper.

“Finally, does anyone really think I am helping my career by reporting on scientific misconduct?” Well, it actually could, because you become a known entity within the chemical community, which ultimately makes you a known name if you decide to run for election or work at C&EN. These jobs are quite lucrative, in some cases with seven-figure salaries. This isn’t to say that I am scolding you for covering these stories. My take is that you are not a knight in shining armor, but you have the right to cover these stories. People who are careful get pissed when others get ahead by cheating, even if it’s just erasing NMR impurities. Other people might put a lot of time into the purification, which uses up energy and time, making it difficult to get that extra pub that could put them on the hot list for a faculty position/award/grant, etc. Having said that, some instances of dishonesty are worse than others. Nano-chopsticks is the worst case. I thought they had an ACS Nano paper that also had questionable data? Erasing impurities is not as bad, but it pisses me off because of how much time I spent being diligent.

@Mitch: So with the cut-off set at IF > 12, you’ll report on Angewandte but not JACS? While I think most people would agree Angew is a slightly better journal than JACS, that seems like a rather arbitrary line to draw?

I feel that these posts are most certainly appropriate and warranted. Scientific findings are published, first and foremost, to stimulate discussion and other research. The padding of one’s CV is secondary. I don’t put a star after my name in the author lists to signify that the issues raised in the manuscript have been fully resolved and are no longer up for discussion. As the corresponding author, it is my duty to do just that: be open to correspondence about the manuscript.

The data are invariably imperfect. At the very least, they are most likely open to alternative interpretations. How strongly an experiment supports an argument is debatable. If an author isn’t willing to have his or her methods and data questioned, he or she should not publish them. If you’re not willing to account for the work either publicly or privately, then you should not be an author on the manuscript.

In extreme cases, the work is fatally flawed, either by being partially or entirely stolen from someone else or by being partially or entirely fabricated. There have been some calls in the comments to keep the handling of such instances private. Well, that hasn’t worked. Universities keep these cases as hushed up as possible (Columbia and Sezen). Journals tend to do this as well (Angewandte Chemie and La Clair). Even when they’re under public (and often blog-promoted) scrutiny, they release as little information as allowable. As a result, the culprits are free to publish more dodgy material, with the sole consequence being their promotion within the field with the aid of unearned grants and publications.

Quick follow-up: Actually, erasing impurities can be pretty bad because it could unintentionally (or intentionally) erase an overlapping peak of importance. This is also a problem with smoothing data. If you smooth your data, you should indicate that you smoothed it and include copies of the unsmoothed data in the SI.
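To make that point concrete, here is a toy sketch in plain Python (synthetic Gaussian “peaks”, not real NMR data; all the numbers are invented for illustration): an innocent-looking moving-average smooth can completely flatten a small impurity peak that sits on the shoulder of a larger resonance, which is exactly why smoothed data should be labeled and the raw data archived in the SI.

```python
import math

# Synthetic 1-D "spectrum": one tall peak plus a small, narrow
# impurity peak sitting on its shoulder (values are made up).
xs = [i * 0.01 for i in range(1001)]  # 0.00 .. 10.00

def signal(x):
    main = math.exp(-((x - 5.0) ** 2) / 0.05)               # main resonance
    impurity = 0.15 * math.exp(-((x - 5.3) ** 2) / 0.0005)  # overlapping impurity
    return main + impurity

raw = [signal(x) for x in xs]

def boxcar(y, half):
    """Moving-average ("boxcar") smoothing with a (2*half + 1)-point window."""
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

smoothed = boxcar(raw, 25)  # 51-point window

def count_peaks(y):
    """Count strict local maxima."""
    return sum(1 for i in range(1, len(y) - 1) if y[i - 1] < y[i] > y[i + 1])

print(count_peaks(raw))       # 2: main peak and impurity are both visible
print(count_peaks(smoothed))  # 1: the impurity peak has been smoothed away
```

The smoothed trace looks cleaner, but it silently reports one compound where the raw data show two, which is why the unsmoothed data belong in the SI.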

@Special Guest Lecturer: I agree IF is a terrible proxy for deciding these things, and I’m fine with saying, “Sorry, we’re not interested in covering scientific misconduct except in exceptional circumstances”. It just becomes hard to describe to someone what counts as exceptional circumstances. On a side note, I have told anyone who has sent me a tip to forward their concerns to ChemBark and to the editor of the journal where the alleged transgression occurred. So I assume these concerns in the chemistry community are running their course through the system.

@Nick: We are fine talking about it if someone else breaks the story. We are just drawing the line where we will personally start investigating ourselves. There is of course some wiggle room for “exceptional circumstances”.

If you all don’t like that Chemistry Blog limits its articles to journals with an impact factor of at least 12, then don’t read it, or better, start your own blog and investigate however you choose. As I see it, the people running these blogs are free to do whatever they want (within reason, under fair use and freedom of speech), but they are not paid for this service. They have every right to choose which articles to investigate and which ones not to.

Regarding the whole issue of these ‘blogs’ and reporting: the authors seem to try to do the best they can, in my opinion. They are not really investigative reporters, and they are not reporting these stories for a news outlet. I highly doubt (although feel free to prove me wrong) that they hold press credentials for their reporting efforts. My problem is the rush to judgement created by these blogs. You can look at my past comments. This should not be a guilty verdict such that the authors have to prove their innocence. The comments are usually the worst, with an ‘off with their heads’ attitude. I am not trying to downplay any of the potential cases of misconduct, but I do try to put myself in the authors’ shoes and take a different approach. E.g., is it possible that Pease had no idea about the paper, since he is not even a corresponding author? (Doubtful, but we just don’t know if the corresponding author put some mumbo jumbo in the submission system – why could he not have made a separate account for Pease using another gmail address?)

I think the post by Stoddard reflects my view as well. I would amend it by saying that the PI bears ultimate responsibility in these cases, but that does not mean the PI is always at fault.

These posts are very informative and educational. They spark several interesting discussions and comments about the roles of the different authors of a paper, as well as the impact of reporting scientific misconduct.

Did you know that some withdrawn papers are still cited several months after their withdrawal? Once you have a paper in your database, you will not think to verify whether it is still valid before including it in your references. This also means that your work might be based on untrustworthy data. I would like to see such a study done (maybe it already exists).

Concerning the role of the PI: in the case of the Organic Letters paper, I guess that when the first author came to tell the PI that he had synthesized several molecules, the PI asked for the yield table and told the student to prepare the NMR spectra for the SI. If that is what happened, it was really asking for trouble (and eventually finding it).

Now, it is also important to agree that responsibility does not mean complicity. A PI is responsible for what goes out of his lab. His level of involvement in a forgery cannot be evaluated by a blog; that can only be done via a thorough investigation by the PI’s university. Even in the case of the nano-chopstick paper, one cannot evaluate what the role of the PI was (even though it is very tempting). The only thing one can do is point out the responsibility of the PI and try to propose some solutions to prevent such things from happening again. That whole process can only start if one first identifies forged papers, which has been done well so far by Paul Bracher.

I think one of the main differences between an investigative reporter for C&E News and you is that you’re also a professor with your own research group. As a result, it can probably leave someone wondering whether your true passion is actually journalism, but the competition required to do it full-time left you following the career path in the sciences that you’ve taken, leaving this blog as a hobby of sorts that has turned into a second job. Maybe I’m reading too deeply into it, but that’s the feeling I get. While I agree with the reporting you’ve done, as I have grown weary of SIs that leave me wondering whether the authors really got the compounds and/or yields they reported, the backlash against these stories may be justified in some sense. The problem is, while the reporting may be justified, you’re dealing with people’s careers, involving at least 7–10 years of invested time and work. I understand that fabrication needs to be addressed, and I fully support that, but the collateral damage needs to be taken into account as well.

Well, this is a response that desperately needed to be put out there! I’m really impressed at the levelheadedness that seems to prevail in your posts. Whilst it seems obvious to me that you might not enjoy this aspect of blogging, it’s plain to see that not everybody else gets that, especially when you consider some of the almost-vitriolic responses generated in the comments sections of previous posts. Science is an endeavour in gaining knowledge, and I still maintain that fraud only muddies the waters and serves us no use. I think there is a reluctance in journals and universities to tackle this problem, and blogs (this one in particular) have helped to bring this debate closer to its “critical mass”. It doesn’t seem to be there quite yet, but I hope that there will come a point when the system changes, and I suspect that we’ll all look back to this blog as the catalyst for that change.

I hope that this doesn’t adversely affect your career. You are obviously right in thinking that it’s not going to help matters, though, and all I can say is that I admire your courage for doing what you think is right. The questions at the beginning of your post ought to spark some thought in people’s minds. Of course there will always be those who disagree, but that’s life. Keep on blogging about a range of subjects and I’ll keep on reading.

IF and quality of the work:
Reading some of the posts, one gets the feeling that the quality of a scientific work should be judged only by the IF of the journal in which it is published; that’s rather surprising to me!
IF is just an artificial evaluation of a journal, and its meaning is more commercial than scientific. A golden rule for a scientist: the level (quality) of the work should be exactly the same whatever the journal and its damned IF; what makes the difference is only the originality and the potential interest to a wide or a specialized readership (or the influence of the corresponding author, but that’s another issue). People who believe that low-quality work should be published in “low” IF journals are not scientists.
Anyway, whatever the journal, the results have to be reported honestly; that’s also a golden rule. In the particular case of synthetic methodology, if it is an important and useful new reaction (or reagent), it will be used by others; if not, it never will be, and the paper will be forgotten by the community (even if published in Nature or Science). Look at the outstanding reagents developed by Corey (PCC, PDC, …): most of them were reported in THL!!! And all of us use them because they are great reagents, not because of the journal in which they were published.
Finally, ethical issues are the same whatever the IF of the journal…. If Cossy’s work had been published in an obscure journal with a terribly low IF, I wonder whether the NMR data would have been posted/discussed here?
Maybe not, but I’m very happy that you did it in the case of the OL paper; well done, Paul. It may serve as a warning to PIs who are not seriously following the work done by their students, and all of us can learn from this (sad) experience. So thanks, Paul.

Regarding IF, I completely agree with you. The problem lies in the system that’s been established in the past decade, where your job/post-doc prospects can hinge on what journals you are published in as a graduate student, and this is somehow linked to how much clout your PI has based on how many papers he has in high IF journals. It is quite interesting how if you go back pre-2000, you’ll see most of the major groups published in JOC and TL, and you can even track some competition between groups in TL papers. Those days are gone, it seems.

…and we can add Lehn, who got a Nobel Prize for work initially published in THL. Similarly, Heeger, MacDiarmid, and Shirakawa received their Nobel Prize for a paper published in Chem. Commun. Whatever the journal in which the work is published, if it is an outstanding contribution, it will be recognized.
Just imagine a Nobel Prize for dendrimers this year; the first paper, by Vögtle, appeared in Synthesis if I remember correctly. That would be fun. It seems easier to publish in high-IF journals when you are working on hot topics, but it starts to become difficult when you are proposing something really new and different.
I am curious to see who will be awarded the Nobel Prize in 20 years, and in which journal the first paper was published. You can be sure that the IF of that journal will not be considered by the Nobel committee!

Nic, during my first post-doc I applied for a grant meant to help post-docs reach the next level of their career. Although I had 10 papers, the reviewers complained that only 1 was a JACS. Many of the others were in ACS journals and high-quality journals from other publishers (Chem. Commun., etc.). They connected my lack of pubs in “high IF” journals to my not being top-notch. Needless to say, I didn’t get the grant. We need more people like Dick Zare who care about the quality and importance of the work rather than who won the game of dice.

Umbisam, that was certainly just an excuse to favor the application of someone connected to your referee or someone else on the committee. Based on my long experience of selection committees, an argument used in one meeting to favor one application can be used in the next meeting to kill another one…
Unfortunately, many colleagues do not care about ethics and just care about their little bit of power (and the story of Cossy is a good example of this behavior, although she is in a system that pushes you in this direction).
Science is done by human beings and simply reflects our society. Over the last decade, science has been turning as crazy as our society. I am not sure it is all going in the right direction…

Nic, that’s pretty much how I felt about it. I don’t know who my specific referees were, but I do know that someone who served as a referee gave a presentation that had many similarities to my proposed project. One of the referees indicated that it was scientifically important, but that the community was not ready for it (or something like that). I wrote a letter to the grant committee, but nothing came of it, and of course I really don’t know whether that particular professor refereed my grant, just that one of his assistant profs told me he was a reviewer for that particular grant when this happened. I had a similar experience with a paper that got rejected (the referee who didn’t like it was not a native English speaker), only to find something very similar by a Chinese group (with an introduction eerily similar to mine) get published and highlighted on the ACS website as work of high importance. I didn’t fight it because I wasn’t sure how it would affect the reputation of my collaborators.

For papers reporting a new synthetic methodology, why not introduce typical procedures with a couple of compounds that would be carefully “checked” for reproducibility in the laboratory of a member of the Board of Editors or of the referees (not anonymous, of course): http://www.orgsyn.org
The IF of Organic Syntheses is maybe not high, but when I want to find a reliable procedure, it is the best place!
That would certainly prevent any attempts to incorporate manipulated sets of data!
Of course, it would be difficult to apply such a procedure to total synthesis. But why not ask for samples of key intermediates plus the final product, allowing others to perform NMR measurements and TLC analysis?
However, it is sad to see from your comments that scientists are more and more suspicious of the work done by their colleagues. Honesty is the basis of science. Maybe I am naive, but I still believe that all (well, almost all) my colleagues do their work honestly, and when reading a paper, I enjoy the science and am not suspicious (certainly critical, but that’s normal for a scientist).
All of us should also remember that science is done by human beings; is it so surprising to find mistakes in scientific papers? Not at all; just read more about the history of science: there are many famous examples. An excellent paper on errors/discoveries: Angew. Chem. Int. Ed. 2013, 52, 9362 (September issue).

Personally, I do not think that Paul Bracher’s reporting of scientific misconduct is helping his career. On the contrary, it might have hurt it: he eventually ended up at SLU despite his previously stunning educational/professional path.

I am very supportive of unveiling academic misconduct. But my general feeling is that his posts on ChemBark are sometimes too mean and judgmental.

If only these journals could do that! The reason why Organic Syntheses is so reliable is its laborious process of checking reproducibility. A single submission can take up to a year instead of a couple of months. Still, I trust Org. Synth. over any other journal.

I believe that we fundamentally share similar sentiments, although, in my opinion, the whole IF system is actually more worthwhile than you deem it to be. While I agree that a scientist’s published work should be of the same (hopefully high) quality regardless of the impact factor of the journal, I think you would agree that work from different people is of different quality.

I find that having a system that places different journals in a hierarchy is beneficial to us because academic life is complicated enough – if there isn’t some sort of indicator of the *potential* quality and impact of the articles that we are reading, then I will probably end up wasting a lot of time sifting through articles that I don’t necessarily want to read. The quality of work being published in the open literature does vary – that can be the result of, to name a few, creativity/originality; meticulousness; funding; human resources; politics (hopefully not!)… and so on. I think it would be really naïve of us to *not* think that “low quality work should be published in low IF journals”.

Let’s take Nature for example. I enjoy reading Nature like, I assume, many of us do. To me it’s not a journal with a high IF so that it can sell better – it’s somewhere I can instantly find out about the cutting-edge work happening in science, learn, and be inspired. In other words, the IF system promotes healthy competition, and the fact that it exists is because our community has long evolved since the days (when access to information was limited) when Nobel Prize-winning publications were found in what we now consider “mediocre” journals.

@Dr Barracuda

I may be too young to have experienced the whole effect of it but, at least from what I have heard from my mentors, the system may have changed in the last decade or so – and rightly so, because of increased competition. If you get a pile of applications, how else are you going to judge someone’s capability apart from 1. their publication record; 2. recommendations from your acquaintances; and 3. a research proposal (in some cases)? Well, you could always hold a competition of some sort and see what ideas they can come up with and how good their practical skills are, but that’s clearly impossible in most cases – so we resort to statistics.

I believe that a sensible scientist should always strive for the occasional scientific breakthrough while constantly publishing solid, high-quality methods in specialised journals of her/his own field of interest. The problem that we are seeing now with academic misconduct can be attributed to lazy people trying too hard to keep their jobs. There is nobody to blame but themselves; and they were likely simply unsuitable to be scientists in the first place.

That brings me to the thing you said that made me feel uneasy regarding “collateral damage”. As many of us have pointed out, including yourself, academic misconduct should be addressed simply because it just shouldn’t have happened in the first place (but it did). How do you propose that we deal with it to minimise “collateral damage”, though? It sounds as though you were suggesting that we should “give them a warning and let it slide”, but isn’t the “collateral damage” of that even greater?

I personally think that any author who had knowledge of fabricated data but remained silent should lose their job, no questions asked. I feel that I’m starting to touch on the subject of whether an *innocent* PI (one who *genuinely* has no idea of the academic misconduct) should be held responsible, so I might as well borrow this opportunity to say this: I think it’s unfortunate that many people higher up in the academic food chain fail to realise that their job, apart from producing results, is to inspire people. If you fail to instil in people the rigour and integrity necessary to produce honest, high-quality results, then such an incident should, as Nic has already said, serve as a warning/wake-up call.

I also find your comment comparing Paul to a C&E News reporter a bit puzzling – have you considered that he may actually be passionate about both being an academic and a scientific reporter? Many people have semi-professional hobbies outside of their regular jobs.

The system as I have experienced it hinges more upon 1) in what journals your papers have been published, 2) whose research group you’ve come from, 3) who you know, and 4) the quality and quantity of work you’ve done. Point 1 is very important if you don’t have a lot of publications – having a couple of high-IF pubs can be the equivalent of several pubs in more specialized journals. I am NOT championing this notion, but that has really appeared to be the state of affairs. Points 2 and 3 are probably more important, it seems. While you make your own career, who you know and who you’ve worked for is very important. Point 4 is most important once you get the interview, or when you have a potential employer looking at your work at a conference. You need, however, some combination of points 1, 2, and 3 to get to the point where point 4 becomes useful. I do agree that high competition has caused this, though I do not feel it has been a positive result overall.

What I mean by collateral damage is, for example, the graduate student or post-doc leaving a lab like Dorta’s or Cossy’s – they are the collateral damage that nobody is talking about. Their careers are tied to their PI’s reputation at some level, and now they are going to have to work incredibly hard to get those names off the references section of their CVs as quickly as possible. Also, I hold the position that co-authors who never saw the SI that was submitted could, in some of these cases, be collateral damage. Paul sticks by the notion that even co-authors should be held responsible, but to what point do you hold them responsible? If they put in their own data and commented on the first author’s data (as Cossy noted, she was aware there was a problem but thought it had been addressed), there comes a point where it’s just up to the first author and the corresponding author (the PI) to have it out over the data. I’m not advocating a simple slap on the wrist; I suggested retraction as an option for Cossy, and my opinion is that Anxionnat has screwed himself over. In the end, I’m arguing for erring on the side of caution when bloggers report on this stuff – it hasn’t yet, but it could get incredibly heated and litigious. Maybe we should go back to the days of having the corresponding author also be the first author (meaning the PI as first author) to put the responsibility for maintaining quality data back in the PI’s lap.

My comparison of Paul with a C&E News reporter was meant to make that point – he is obviously passionate about both, but his position as a professor also makes him quite vulnerable to the same types of attacks on his own research if there ever happens to be one slip-up. It’s fairly obvious he’s aware of that, but I wonder how often he has meditated on it, and what his thoughts now are on whistleblowing on his own colleagues. In an undergraduate class on scientific ethics, part of the course was spent on the dangers of whistleblowing even when you’re in the right, and how the entire scientific community can end up turning on you to the point where you can’t find employment. I see it as being in a tank of sharks at times.

There are many things I don’t write about here because I fear retribution that is simply not worth the good I think a post could achieve. But with the Bruno papers, we are talking about an obvious case of the cardinal sin of science, data fabrication, and how our field deals with the problem. I think this is a discussion worth having.

Perhaps you can liken the scenario to hearing a beating going on outside your apartment. You could choose to stay locked in your room and mind your own business, or you could choose to intervene. Intervention could help solve the problem, or it could just draw you in and make you a victim as well. Who knows what the case will be for me?

There are worse things in life than not earning tenure. Perhaps I would become a full-time reporter or a full-time teacher if I do not get tenure at SLU. Either job I would enjoy immensely. Also, in either job, I could pursue a broader range of stories on this blog and not have to worry about professional payback. Believe me, there is plenty to write about.

Thank you for taking the time to address my arguments; I really appreciate that you have done so. I have no arguments whatsoever with the four points you laid out about how the current system judges the potential of a scientist. I don’t think it’s just the system that you have experienced; at least I can say that I used to think the system was like that back when I was a doctoral student – and I detest it.

In some cases you are also right about how Points 1, 2, and 3 come before Point 4. For example, a colleague of mine recently told me in casual conversation that he gets invited to give talks whenever he goes to conferences simply because of who he works for; I just smiled and nodded.

Having said that, I honestly don’t think it’s always that… depressing. I personally think that Point 4 should be the foundation of a good scientist, but I also think there are merits in performing well at Points 1 and 3, as they are part of the continual development of a good scientist. Moreover, I have come to realise that there are actually still some sensible employers out there who weigh each of those things in a balanced and objective manner. There are also funding bodies doing the same – such as certain strong research fellowship programs. For these reasons I don’t think it’s actually all that bad!

Certainly, there may be lots of people out there who are simply gaming the system because of the amount of competition, but many of them don’t really survive through to the end, or they get there and become one of those sad people who are just desperately trying to hang onto their jobs. At the end of the day, it is still a good thing for the portion of us who actually care about doing what we do as more than “just a job”, because I truly believe that those who really care and work for it will eventually get there and make a difference. It’s just like politics – so many capable people simply give up halfway through, which is why most of the time we end up with crappy politicians (okay, that’s kind of a reverse argument, but you get the idea). :p

Anyhow, I think I’ve gone off-topic a little…

Coming back to the collateral damage thing again: you have probably already gathered that I am actually supportive of the way Paul has done it, but I think I now understand a bit better where you are coming from. I’ll address the lead-author/PI issue first: I think most of the time we can identify the role of each of the co-authors, such that the murky controversies are often only around the lead author and the PI, and, at the same time, the damage done to, say, the crystallographer is likely minimal. I think the rest of this is then up to the institution to sort out. One way or the other, the PI has to take responsibility, be it small or large, and I just want to say that it’s extremely worrying when the PI is not vocal about involvement (or the lack thereof) in academic misconduct throughout the whole incident. I mean, how many options are there…?

1. I offer my deepest apologies to the academic community, and I shall resign. I shall retract all related publications as a gesture to protect the honour of all authors who were not involved with the misconduct and declare that the lead author and I were the only parties involved.
2. I had absolutely no part in the fabrication of the data, and I acknowledge that it was my oversight that such work was published under my supervision. I shall retract all related publications as a gesture to protect the honour of all authors who were not involved with the misconduct and be held solely responsible for, and only for, the oversight.

Maybe I am a bit too confident to think that a PI in such a position will write a public statement like example 1. :p

As for the problem with graduate students and post-docs leaving a group where academic misconduct was found… well, I think we need to trust people to be sensible – much in the same way that I think most of us are able, given enough information, to identify whether a co-author was involved with the misconduct or not. If someone holds a prejudice against me because I came from a group where misconduct was found, then I probably won’t want to work for him or her anyway. I think the problem here lies not with the person who made the misconduct public – it’s the integrity of the potential employer that is the problem. Also, if the institution is sensible, then it should do something to protect its innocent graduates and collaborators as much as possible.

On the last point – I offer you my apologies because it seems that I completely misunderstood what you were saying. I feel the same way as you do. If I were a blogger using my real-life identity, I am ashamed to say that I would be very hesitant to cover something like that (though I probably would still end up doing it in the end), as my career is still young, and because of the scenario that you described. However, I do strongly believe that all the public reports on academic misconduct, championed by people like Paul, are only going to lead to better science in the future, where only the occasional mistake occurs and where mistakes are treated as something that provokes healthy discussion.

My other comment on the role of the blogosphere here – I feel that it is entirely appropriate to cover these obvious cases of fraud. But what about the good things that are happening in our field – not just research highlights, but positive developments of mentoring, healthy and productive group cultures, outstanding teaching practices, honest scientists correcting their own errors in the literature, and innovation in publishing and data handling?

Much like the murders and car crashes that headline news broadcasts, a story that generates a lot of hits and comments does not necessarily have long-term importance. There are often interesting aspects of the advisor/advisee relationship to discuss (negligence or worse on the part of the PI), but there is also a good bit of rubbernecking and sermonizing.

Another way to put this: A lot of eyeballs watch this space. What other kinds of stories could be covered that would positively influence our enterprise?

Paul, I admire your bravery in continuing to generate posts that highlight questionable data. I believe this needs to be done in some form, given the perceived (or actual) increase in “sketchy” data, and the people responsible should be held accountable. One would hope that the journal editors, professors, their group members, past group members, colleagues, friends, etc. would seek accountability, but one can never tell. It’s tough for young professors starting out these days (funding, publishing, etc.), and usually you see them trying not to rock the boat, so to speak, which makes you much braver than most. Keep up the good work and remember an old saying: “assume good intentions, but watch out for the dagger and poison.” Hang in there, Paul, and don’t let them dissuade you!!

@SGL: Perhaps the problem is, not a lot of exciting work gets published. Imagine you read JACS 10 years ago and then slept until the present. The first thing you do is read the current issue of JACS. Would something seem amiss, or would it just seem like the next week in 2003?

Umbisam: I completely disagree, although I can see that argument from the standpoint of organic synthesis and methodology. This is the best time in history to be a chemist from the standpoint of improved analytical methods and characterization within all sorts of non-traditional environments. We can do things that were not possible in 2003, or even 2008.

The discussion of scientific misconduct is good. It’s a hard pill to swallow, though, because no matter how justified you are in presenting a case, no one likes to hear the truth.

My only issue with the blogosphere is that redemption is essentially denied, especially to those who might be innocent. A good example is Emma Drinkle; the brunt of the press fell on her rather than on her advisor Dorta, who wrote the “make up…” comment in the SI. She remains guilty by association, and if someone does a web search of her name, this will always pop up. I think that’s a little unfair. Scrutinizing her doctoral thesis against the actual publication was overkill. I don’t believe it was ChemBark who did that, though.