
ananyo writes "One of the largest-ever studies of retractions has found that two-thirds of retracted life-sciences papers were stricken from the scientific record because of misconduct such as fraud or suspected fraud — and that journals sometimes soft-pedal the reason. The study contradicts the conventional view that most retractions of papers in scientific journals are triggered by unintentional errors. The survey examined all 2,047 articles in the PubMed database that had been marked as retracted by 3 May this year. But rather than taking journals' retraction notices at face value, as previous analyses have done, the study used secondary sources to pin down the reasons for retraction if the notices were incomplete or vague. The analysis revealed that fraud or suspected fraud was responsible for 43% of the retractions. Other types of misconduct — duplicate publication and plagiarism — accounted for 14% and 10% of retractions, respectively. Only 21% of the papers were retracted because of error (abstract)."

"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an end (keeping your job). The academic community needs to find a metric for researcher quality other than papers published. It's costing everyone the truth.

...The academic community needs to find another metric for researcher quality other than papers published...

such as?

Number of citations? No, it would take a 30-year probationary period before the trend was reliable.
Have experts evaluate your efforts? No, that would require extra effort on the part of expensive tenured experts.
Roll some dice? Hmm, maybe that could work.

I have the sudden urge to make a d20 Modern "Tenure of Educational Evil" module (if you don't get it, look up Temple of Elemental Evil) and propose that any researcher must be able to take their level 1 researcher through the module to get any funding (with funding based on how many objectives they complete along the way).

As an added byproduct, I bet A) your average tenured professor would start to look a bit different, and B) you gotta bet they would be taking way less shit from students and TAs...

Seriously though, I know in some circles it has been discussed that not every university should be structured in the same way. For the most part, most are more or less training centres rather than places of deep discovery.

Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).

Or you could measure net income from licensing of IP for creation of new technology. Notice I said licensing and creation, I wouldn't want to encourage our universities to continue acting like asses pursuing IP lawsuits as a means to make money.

How about calculated gross earnings of the students you have taught?
Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).

That would also make it in new professors' best interests to not teach the intro level courses where much of the class will change majors and doesn't want to be there in the first place. They'll instead focus on the upper level courses where the weak links have been weeded out.

Guess which courses new faculty get stuck doing now? That would be rewarding the really weaselly ones who were able to skip the hard work.

Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants. They're voting with their wallets for schools where research is valued more than instruction. So your solution is lacking a problem, at least according to the teachers and students of such schools.

Why? The research schools' selling point is that the teaching is better because the faculty is top notch.

Huh? Where did you get the idea that a good research faculty means a good teaching faculty? There's too much pressure on research faculty to do research to expect them to spend a lot of their time concentrating on teaching. (Yeah, some research faculty are good teachers, but there is no causation.)

You go to a research school if you want to get involved in research, because that's where the student jobs in research are. You go to a teaching school if you want to learn, because as an undergraduate you aren't g

Where did you get the idea that a good research faculty means a good teaching faculty?

Note the use of the phrase, "selling point" in my original post. Where does anyone get that impression that research means better teaching and/or better education? From research schools marketing that angle.

Fundamentally, your assertion that "the students don't care about quality teachers" is based on flawed premises. Prospective college students aren't well known for understanding the nuances of a college and an education. So why expect them to "know" that "smaller schools known more for teaching" have

Most students are going to choose institutions where the certificate they get at the end will have the highest prestige attached. Now if certificates from universities with better teaching would provide higher prestige (which would make sense, because the students from there should be better educated, after all), then students would select the universities with the best teaching, and thus universities in turn would have a higher interest in improving their teaching in order to get more students.

Why do we need to rate researcher quality in the first place? To label a scientist as first grade, second grade, third grade? Can't we just rate each piece of research instead? We've got a lot of examples of (so-called) mediocre researchers who had a brilliant idea later in their lives, while many young promising scientists produced very little after a good start.

With the public retreat from education, universities have to take their funding from more private sources. As a result, there is outside pressure to do research to favor these outside sources of funding, and you get a recipe for fraud and misconduct. Of course, the universities won't admit that they have had to make a deal with the devil to keep the doors open - and a large part of our (United States) political system is dead-set on taking us backward in terms of scientific progress to appease their less-

Well, you've got the devils on the corporate side, who may be trying to avoid bad press: say, large organic potato farmers who don't want to see studies that show the deleterious effects of carbohydrate intake on obesity, diabetes, heart disease and other chronic diseases. Fewer carbs sold means less profit to the company.

But then, you've got the devils on the government side, who also may be trying to avoid bad press, say the USDA regulators who don't want to see studies that show the deleterious effects of

With the public retreat from education, universities have to take their funding from more private sources.

Last I checked, there was no such retreat from education. There's been a remarkable decline in the quality of education and what public funds buy. But it's a dangerous illusion to claim that there has been a retreat from education when the problem is elsewhere.

Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.

Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.

That's what you might think, but getting (most) journals to publish negative results is very difficult.

The problem with biomed research is that the field is rife with people who don't understand models. Biomed research is not really science in that we are not yet at the point where we can express mathematical models to make predictions which are then falsified or not.

All too often, it is a case of "I knock down/over-express a gene, find that it does something, and then make up some bullshit where I pretend it'll cure cancer". In many cases, articles get published because the reviewers don't say "this claim i

A positive result is the rejection of a null hypothesis.
In the frequentist statistical paradigm, a failure to reject the null hypothesis is simply not significant. Insignificant results are not usually considered worthy of publication.
"If your study comes out a way you didn't expect," then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance. This way you can explain the significance of what you learned from the "failure" of your experiment, and there is no reason you should not be able to publish it.

That's the statistical paradigm. Results just aren't significant unless you can state them in a positive way.
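For readers who want the mechanics of that frequentist recipe, here is a minimal sketch (my own illustration, not from the thread; the data and the known sigma are made up). It tests a null hypothesis about a mean and reports the two-sided p-value that decides "significance":

```python
import math

def z_test(sample, mu0, sigma):
    """Two-sided z-test of H0: true mean == mu0, with sigma assumed known.

    Returns the z statistic and the two-sided p-value,
    p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    """
    n = len(sample)
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical measurements; H0 says the true mean is 0, with sigma = 1.
data = [0.9, 1.1, 0.8, 1.3, 0.7, 1.2, 1.0, 0.9]
z, p = z_test(data, mu0=0.0, sigma=1.0)
print(f"z = {z:.2f}, p = {p:.4f}")
print("reject H0" if p < 0.05 else "fail to reject H0 (a 'negative' result)")
```

If p comes out above the conventional 0.05, you "fail to reject" the null, and that is exactly the kind of result journals tend not to publish.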

then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance.

You have to be very careful here. In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.

In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.

There is also the danger of making an unjustified assumption of objectivity. In preliminary studies, scientists will have gathered data, analyzed it, looked for patterns, and tried to come up with all kinds of hypotheses that could be tested. Even the most final, definitively ob

often enough the scientist will have a pretty good idea of the expected nature of the data to be collected.

Sure. I was clarifying that you can't just change to a different method of analysis according to the data you got, just because your original analysis didn't give you the result you expected.
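A toy simulation (my own sketch, not from the thread) illustrates why switching analyses after seeing the data is dangerous: if an analyst is free to pick among many candidate tests on null data, "significant" findings turn up far more often than the nominal 5% rate.

```python
import random

random.seed(1)

def significant(sample_a, sample_b, threshold=1.96):
    """Crude two-sample z-like test; both samples come from N(0, 1),
    so any 'significant' difference flagged here is a false positive."""
    n = len(sample_a)
    mean_a = sum(sample_a) / n
    mean_b = sum(sample_b) / n
    # The difference of two sample means has standard deviation sqrt(2 / n).
    z = (mean_a - mean_b) / ((2.0 / n) ** 0.5)
    return abs(z) > threshold

trials, n = 1000, 30
false_positives = 0
for _ in range(trials):
    # 20 "outcomes" measured on the same null process; an analyst who
    # picks the test after looking at the data reports the best one.
    outcomes = [
        significant([random.gauss(0, 1) for _ in range(n)],
                    [random.gauss(0, 1) for _ in range(n)])
        for _ in range(20)
    ]
    if any(outcomes):
        false_positives += 1

print(f"{false_positives / trials:.0%} of trials found a 'significant' effect")
```

With 20 looks at pure noise, well over half of the trials produce at least one "discovery", even though each individual test holds its 5% error rate honestly.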

In other words, you simply can't change your plan once you've seen the data. In this lecture [lhup.edu], Feynman gives a very clear and real example of what can go wrong even if you're trying to be completely honest (look for the part where he talks about Millikan). That's a neat example because it's a very controlled experiment m

I think the OP was referring more to the fact that if you hypothesize that giving drug X to rats will turn their hair green, and instead drug X causes them, with a reasonable degree of statistical significance, to grow a third ear, you might still have a paper. What you thought was the case was wrong, but you still got an interesting result.

On the other hand the result "nothing happened" or "the rats all died" isn't necessarily all that interesting unless there was an exist

In biomed, there are no models to invalidate. The Michelson-Morley experiment was significant because it proved that a model of the universe was wrong. To do the equivalent in biomed would require someone to have a model in the first place...

These are not models in the sense that the Ether Theory was a model. You build gene regulation networks by accumulating data on what gene acts as a promoter/repressor of another and what are the activation cascades.

No one will ever invalidate that work through an experiment -- some of the network might be revised, but it is not the case that someone will come up with some experimental proof that there is no such thing as a gene expression network, which would be the equivalent of the MM experiment.

The point I think you're missing is that simple phenomena are amenable to simple models, while more complicated phenomena require more complicated models. The propagation of light is, in and of itself, a simple thing, and the luminiferous ether model and the photon model which replaced it are pretty simple too--which is why it was possible to choose one over the other based on simple experiments.

The interaction of gene regulation within a living organism, on the other hand, is tremendously complicated, and

Positive results are interesting results. Let's say you prove that watermelons cause 80% of all cancers; that is headline-stealing, front-page news. The report detailing how you proved watermelons have no link to cancer is not.

Similarly, your scientific paper on how to cure cancer is worth billions and extremely interesting news, while your paper on how NOT to cure cancer is neither.

It's also a great way to keep the grant money flowing in even after you have tenure, particularly if you're publishing findings that are likely to get you grants. And no one gets paid a nice bonus for finding inconclusive or negative results.

I've long held the view that science only gained the credibility it has because it was free from politics and power.

But since science has gained such credibility, people think we should now *trust* it with power. Which of course destroys the very thing that gave it that trust. Ye olde saying: 'power corrupts and absolute power corrupts absolutely'.

For one thing, we now have government funding for science. Sounds like a good idea... except of

"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an end (keeping your job). The academic community needs to find a metric for researcher quality other than papers published. It's costing everyone the truth.

I think the issue is not that they need a new metric for researcher quality but to realize that not every professor needs to be an active researcher their whole career.

It's not just tenure; even getting a good faculty position is dependent on publication of research in high-impact journals. I think that the major conflict of interest in my field, basic biomedical research, rather than funding by pharma etc., is the necessity to generate data suitable for publishing in big journals to get/keep jobs. I'm pretty sure this is going to get very messy if something isn't done to address the problem.

"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an end (keeping your job). The academic community needs to find a metric for researcher quality other than papers published. It's costing everyone the truth.

This sounds an awful lot like something teachers told me in grade school:

"Show me you're busy working on something or I'll send you to the office."

Then, an awful lot like something I heard when working during the dotcom era:

I've read the abstract and several stories that cite it, and I haven't seen some specific numbers that would make this story more relevant. They talk about the number of retractions being up sharply, and the number of those pulled for "misconduct" being up as well. The abstract and other sources have yet to put either number in relative terms. Of the number of papers published, is the percentage of those papers that are retracted up? Of those retracted, is the percentage of retractions due to misconduct up?

That's not the point. The point is that journals need to be clearer about why a paper is retracted. Fraudsters shouldn't be able to hide behind the assumption that a vague retraction notice means someone made an honest error. The authors specifically state that they cannot make statements about the fraud rate because they don't have a good measure of the total number of papers published.

The first figure of the PNAS paper shows that less than 0.01% (maybe 0.008%) of all published papers are retracted for fraud or suspected fraud, and that the rate has been increasing since 1975 (when it was maybe around 0.001%). The authors state that the current number is probably an under-report, because not all fraud is detected and retracted. It is possible that the 1975 numbers are less representative, since fraud might have been harder to detect back then (at least for duplicate publication and plagiarism).

This article [guardian.co.uk] has the title "Tenfold increase in scientific research papers retracted for fraud", but at least mentions some actual numbers:

In addition, the long-term trend for misconduct was on the up: in 1976 there were only three retractions for misconduct out of 309,800 papers (0.00097%) whereas there were 83 retractions for misconduct out of 867,700 papers at a recent peak in 2007 (0.0096%).
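Those figures are easy to double-check (my arithmetic, using only the numbers quoted above):

```python
# Retractions-for-misconduct rates quoted from the Guardian piece above.
rate_1976 = 3 / 309_800      # 1976: 3 of 309,800 papers
rate_2007 = 83 / 867_700     # 2007: 83 of 867,700 papers

print(f"1976: {rate_1976:.5%}")                   # 0.00097%
print(f"2007: {rate_2007:.4%}")                   # 0.0096%
print(f"increase: {rate_2007 / rate_1976:.1f}x")  # 9.9x, i.e. roughly tenfold
```

So the "tenfold increase" headline checks out, but both rates remain tiny in absolute terms.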

Percentage-wise, we're talking about a very small number of papers. They quote one of the authors:

I don't know if more students cheat now than when I attended grade school in pre-internet days, but the ease and temptation with the web are greater now. Surveys I've read suggest at least half of students cheat.
The mystery has been how one progresses from a cheating culture in grade school and then loses it by the time one reaches grad school and a professorship. Apparently some don't escape this culture. Significant science will be subject to replication attempts, and fraud discovered.

What do you think peer reviewers do? They sure as hell aren't replicating your work! The feedback you get also varies greatly in quality and importance.

From what I've seen, the average reviewer doesn't spend more than an hour or two of their time on you. Even if they set aside a whole week for you, they can't guard against a completely fabricated experiment. Short of some gross or obvious error, I can see how such a thing could easily slip past.

Most papers are too esoteric. The group of peers who have enough expertise to examine validity of the papers is usually very small (no more than a dozen in the world). In fact, many papers are so full of cross references that understanding them is akin to understanding inside jokes (too much history for anyone outside to follow). Anything that is stated vaguely enough will probably be skimmed over and not challenged by the reviewers. And unless the paper claims any far-reaching discoveries, it is likely

Well, FWIW, if Hendrik Schön [wikipedia.org] hadn't gotten stupid and made some pretty massive (and physics-defying) claims in his papers, and had instead stuck with semi-muddy results that looked pretty (as opposed to sexy) but were harder to replicate, his career would likely have lasted years, if not decades, before he got caught.

It all depends, from the fraudster's point-of-view, whether he wants rockstar status, or to make a comfortable living...

Eh, even if he had made up realistic-looking data, there were a lot of other red flags: not saving raw data or samples, no one else making measurements, all other groups unable to reproduce results, etc. In retrospect, it sounds like it only went on that long because he was at a private lab, but I see what you mean.

Yep. That's a very important, and very *missing* bit of information. Even if *ALL* of the retracted articles were for *blatant* and *intentional* misconduct (not duplicate publication), and all of them were published in the same year, and all of them were in PubMed, that would be a whopping 0.4% fraud rate.

It boggles my mind that this number wasn't asked for by the article's author.

Well, it *should*, but instead I'm just getting more cynical and assuming either incompetence (the author is writing about so

It boggles my mind that this number wasn't asked for by the article's author.

Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.

It boggles my mind that this number wasn't asked for by the article's author.

Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.

Here's a case in point. Someone does research on fraud in science and the first thing that you and the parent poster think, "What is the ulterior motive?" That's just another anti-scientific attitude.

> So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.

Those are only the ones that get discovered. I roll my eyes often when I read medical papers. The statistics are frequently hopelessly muddled (and occasionally just plain wrong on their face), the studies are set up poorly (as in, I would expect better study designs from first-year students), or they are obvious cases of plagiarism.

That assumes all fraudsters get caught, which, knowing the situation of medical science, is very far from the truth. The paper doesn't talk about the number of erroneous articles, only the ratio between the number of frauds and the number of genuine mistakes.

Respectfully, you're compounding the error by referring to "all fraudsters" and the "situation of medical science," implying, by language, that this is a much larger problem than statistics show when considering the enormous volume of scientific articles. I'm not a scientist but I'm very good at interpreting numbers.

I didn't say that fraud does not exist, or that there isn't pressure to produce publishable results that might affect accuracy or ethics (on occasion.) I said that this is a much smaller proble

Maybe the summary does. The article itself states pretty revealing numbers: 96 in a million vs. 10 in a million. These are hardly scandalous, but they are indicative of poorly managed incentives, and examining incentives is well within the purview of ethicists. The article is solid. The Slashdot summary is what it is.

Except that my point about the article -- that it implies that there is lots of fraud in science -- has already been made by the fact that a fair number of commenters jumped right to that unproven implication.

And it would be quite reasonable to complain about such a study of cancer deaths if the article implied that the deaths were substantially greater than might be expected in the general population, without offering evidence.

Life sciences. Not medical. That was in the first sentence of the summary.

Yes, math and physics have issues with misconduct. The article you link to mentions several physical scientists who think it's a problem. You identified a famous one. Retraction Watch lists others, quite a few in chemistry for some reason. Complete fabrication might be a bit less common for the reasons mentioned in your article, but I have no doubt that there's data pruning, faking extra results because you don't have time to do

And this is more proof that life science, medical science, about half of the articles seems to be "medical research" is not science. It is based too much on what people want to believe, too much on making a profit off pushing drugs, too little rigorous science. We know that many articles are paid for, written by ghost writers. We know that drug dealers want the drugs to be safe for kids, but really don't know or won't pay to do the proper research. We know that cancer is a business, and the research is

If a researcher follows proper procedure but ends up with an incorrect result, it's still valid science. Perhaps it's the exception to some theory that will lead to later breakthroughs in the future. Simply being incorrect is not a reason to retract. Rather, a retraction is wiping the slate clean, hoping to forget that the research was ever done. The only reason to do that is if the research itself was unethical.

That is like suspected murder. It needs to be clearly proven or the accused needs to admit to it. Just because there is a whisper campaign alleging fraud from someone doesn't mean it is automatically the case.

An honest journalist would have separated "demonstrated fraud" from "suspected fraud".

You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.

Parent is right. Small errors which don't affect the outcome are published as short "correction" notes. Larger, more subtle errors are corrected by the author and/or whoever noticed he was wrong writing a new paper which critiques the old one. But the original paper remains, because it's a useful part of the dialogue.

(And *that* is why you should always do a reverse bibliography search on any paper you read.)

The whole idea of an academic ecosystem distinct from the reality the rest of the world operates in is an elitist adaptation of medieval socio-political structures. Granting someone an insulated job from which they cannot be removed is ridiculous under any conditions. In the current model, whether someone publishes a peer-reviewed article on something is irrelevant to whether they know what they are talking about.

The REAL peers are the folks doing work in the profession day in and day out. As a rule

The REAL peers are the folks doing work in the profession day in and day out.

As an astrophysicist at a research university, I'd like to know where these REAL peers are. I thought I was the expert, but now you tell me there's someone working hard at an astrophysics day job — so hard, in fact, they're too busy to review the papers I write while quaffing champagne by the bucket-load in the penthouse suite of my ivory tower.

The REAL peers are the folks doing work in the profession day in and day out. As a rule most peer reviews are conducted by people with a decidedly academic focus - the experts in the field are working day jobs that don't afford them time to participate in silly self congratulatory exercises.

And in most scientific fields, those folks are overwhelmingly to be found at academic institutions, and most of those who aren't in academia are in government. Corporate R&D is almost all "D" these days. There used to be a lot more research and publication, and peer review, by people outside academia--in light of your username, you might want to consider the history of Bell Labs, and how sad that history's been in recent years.

Getting it wrong is an important part of doing science. Papers with errors should be corrected by new publications, not retracted. The incorrect paper inspired the correct one, and so is a useful part of the dialogue. Also, anyone else who has the same wrong idea can follow the paper trail, see the correction, and avoid making the same mistake again.

Classic but extreme example: the Bohr model of the atom, with the electrons orbiting the nucleus like planets around a star. It's wrong. Very wrong. But we s

Interesting. That's outside their declared scope: "PubMed comprises more than 22 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites."

One part of the problem is that peer review is set up to catch mistakes, not really to vet for misconduct. I have no idea what would be required to properly vet for misconduct, but I'm guessing it would be a good idea to statistically analyse any numbers presented; that should catch the most blatant cheaters.
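One well-known screening technique along those lines (a sketch of my own, not something the study used) is checking reported figures against Benford's law, which predicts how often each leading digit appears in many kinds of naturally occurring data:

```python
import math
from collections import Counter

def first_digit_deviation(values):
    """Total absolute deviation of the first-digit distribution of `values`
    from Benford's law, P(d) = log10(1 + 1/d).

    A large deviation is only a red flag inviting further scrutiny, never
    proof of fabrication: plenty of honest data sets aren't Benford-like.
    """
    # Leading nonzero digit of each value (zeros are skipped).
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    counts = Counter(digits)
    n = len(digits)
    return sum(
        abs(counts[d] / n - math.log10(1 + 1 / d))
        for d in range(1, 10)
    )

# Powers of 2 famously follow Benford's law; a narrow uniform range does not.
print(first_digit_deviation([2 ** k for k in range(1, 60)]))  # small
print(first_digit_deviation(list(range(100, 200))))           # large
```

Fully fabricated tables often fail checks like this because people are bad at inventing numbers with realistic digit distributions, though it would do nothing against fraudsters who simulate their "data".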

You've discovered representative traits of different societies. In many Asian societies, individual achievement is valued highly, so each individual must work the hardest to be outstanding. In many Indian societies, the collective effort is what's valued, so a team gathering bits and pieces from myriad sources and reassembling them into a new product is the respectable path to success. In many European and American societies, slacking off and blaming others for the consequences is a venerated tradition.