
sciencehabit writes "The National Science Foundation (NSF) is investigating nearly 100 cases of suspected plagiarism drawn from a single year's worth of proposals funded by the agency. The cases grow out of an internal examination by NSF's Office of Inspector General (IG) of every proposal that NSF funded in fiscal year 2011. James Kroll, head of administrative investigations within the IG's office, tells ScienceInsider that applying plagiarism software to NSF's entire portfolio of some 8000 awards made that year resulted in a 'hit rate' of 1% to 1.5%. 'My group is now swamped,' he says about his staff of six investigators."
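The "nearly 100 cases" figure lines up with the quoted hit rate: 1% to 1.5% of roughly 8000 awards is 80 to 120 proposals. A trivial sanity check (the 8000 figure is the approximate number given in the summary):

```python
# Sanity check on the summary's figures: a 1%-1.5% "hit rate"
# over ~8000 awards predicts roughly 80-120 flagged proposals,
# consistent with "nearly 100 cases".
awards = 8000
low = int(awards * 0.01)    # 1% hit rate
high = int(awards * 0.015)  # 1.5% hit rate
print(low, high)  # 80 120
```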

Well, at least the information is real... Some 'scientists' just make up their own data and publish it as if it were real data. http://en.wikipedia.org/wiki/Diederik_Stapel#Scientific_misconduct This, of course, is hardly possible in beta science...

I wonder if they're comparing these grants to other grants by the same researcher in different years? If you're studying gene X which is known to function in biochemical process Y there are probably a limited number of good ways to word your explanation of that fact in the introduction to grant application after grant application.

When researchers' lives are ruled by arbitrary metrics on volumes of papers published, people will cheat.
It's certainly true in computing. Try picking a few papers from the ACM Digital Library and start following the references. Rehash after rehash of other people's papers. Particularly their own...

Rehashing one's own work isn't considered misconduct. If it were, a lot of researchers would be guilty of submitting "new" papers that were about 95 percent rehashes of their previous work, plus (maybe) some new result that might hardly be worth reading about.

It isn't really a scandal until the cases of plagiarism are confirmed. I once tested some plagiarism software on published academic economics, and it produced many false positives, many of which required some knowledge to interpret. Notice that a grant application may seem a somewhat "safer" place to plagiarize, since only a few people will see the application. However, those few might well include the borrowed-from author: the granting agency will be sending the proposal for review to many researchers who have written on the topic before.

And might not grant proposal writers be purposefully including snippets of text or stylistic flourishes or word-usage choices characteristic of those high-level academicians whom they expect to be reviewing the grant proposal?? If they do, the reviewer might see that the proposer has some genius in them, since they are obviously on the correct trail and path!!! If they used techniques or buzzwords that are not "au courant" or standard canon fodder [joke, joke, pun intended], then they'd be seen as idiots.

You can't really apply this to grant proposals. Grant proposals are often submitted several times, sometimes with minor corrections and changes, to the same and different agencies, and even the same grant will be re-submitted by a different researcher with more 'credentials' so the grant can go through. It isn't easy to get a grant, and a lot of time is wasted in getting them; I would say close to 70% of a researcher's time involves paperwork (getting grants, getting audited, rewriting grants...).

Are they just using a web service such as turnitin.com? I've used that for classroom assignments, and it has a rather high rate of false positives, even after factoring out the direct quotes that students love to use too much to fill space.
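Those false positives are easy to understand once you see how this class of tool works at its core: shared word n-grams get counted as overlap, whether they are plagiarism or just boilerplate. A minimal sketch of that idea (real services like Turnitin are far more sophisticated; the example sentences are invented):

```python
# Minimal sketch of n-gram overlap, the basic mechanism behind
# plagiarism checkers. Two independently written methods sections
# that share standard boilerplate still score substantial overlap.
def ngrams(text, n=3):
    """Return the set of word n-grams in text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of the two texts' n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

p1 = "samples were analyzed using a mass spectrometer as described previously"
p2 = "all samples were analyzed using a mass spectrometer following standard protocol"
print(round(jaccard(p1, p2), 2))  # 0.42 -- high overlap, no plagiarism
```

Any threshold low enough to catch paraphrased copying will also flag this kind of shared stock phrasing, which is why a human still has to interpret each hit.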

Science is NOT 'believing what they tell us at face value'. Science IS asking (and getting) evidence before accepting their conclusions (the concept of repeatability). Yes, scientists are annoyingly human and therefore bedeviled with the same positive and negative attributes as the rest of humanity.

I'm not entirely sure what your point is. You seem to be confusing journalists with scientists. There is an unfortunate tendency for the press to dig around an abstract, find some enticing sentence and tart it up well past the point where the authors would even admit to writing about it - then have it repeated in the Internet echo chamber until it's a cure for cancer, scabies and annoying foot perspiration. But that isn't what science is.

Go to a conference about something and listen to the question and answer period. That's science.

Scientists have been proven to be human beings after all. They can lie and cheat like the rest of us. So that begs the question. Why should we all believe what they tell us at face value rather than using our deductive reasoning to figure out whether what they say is plausible? Why shouldn't we ask for hard evidence before accepting their conclusions at face value? If they are just as fallible as anyone else then why should we believe what they say rather than judging whether what they are saying makes any sense?

Well, you don't need to take what they say at face value (if you did, we could skip the tedious 'experiments' and 'peer review' and 'writing scientific papers' and 'data' and other boring stuff). More generally, the reason we don't generally use deductive reasoning is that it's somewhere between cumbersome and useless outside of toy examples written for deductive logic exercises. Induction is kind of lame by comparison; but it has the advantage of actually providing us with information about the world...

Short answer: You shouldn't. Long answer: You really shouldn't listen to them when it comes to areas outside their limited domain expertise. Some climatologist (to pick a current example) might tell you something about pollution or climate change, but you shouldn't listen to him on his policy advice. He is not an expert in economics, social studies, government, bureaucracy, history...

The reality: The public perception of *truth* remains as elusive as ever despite our ever-increasing access to information. Ulti

Why shouldn't we ask for hard evidence before accepting their conclusions at face value?

Go ahead and do so, if you want - no-one's stopping you. You could even go to college, get that degree, spend 10 years out in the field and do everything else the first guy did to check that it's all true. Personally I'm happy to accept the existence of electrons and quarks on trust at this point.

If they are just as fallible as anyone else then why should we believe what they say rather than [confirming] whether what they are saying [survives further scientific scrutiny]

Your argument is deliberately misleading. You are conflating scientists with science.

Individual scientists are often wrong. This is occasionally due to malfeasance, but more often it is just the state of the art. The truth is not always completely self evident and often multiple theories compete.

Science, as a human endeavor, makes progress because results are always being verified by multiple parties. Funding proposals and peer-reviewed publication are only a part of the process. The key operation is reproducible results. Other people use existing results as part of their own work. If the outcome differs from previously known work, it will be reported. This is normal and expected. Eventually there is a consensus and the scientific community moves forward.

Bad behavior by individuals slows down the process, but it does not derail it. Resources and time are wasted, but as long as the scientific method is employed, the results are trustworthy.

Your position is one sometimes taken by jealous members of the social sciences and humanities, particularly in academia. (This is one of my problems with Michel Foucault and postmodernism/structuralism.) They see the respect, status and money going to technical fields, so they try to reframe science as having no more validity than any other intellectual area. The existence of modern society proves this wrong, but since they reject the scientific paradigm they seem to have no trouble ignoring external facts. (Actually Foucault has a lot of value when his work is applied to culture and society, so I should not be quite so harsh.)

That's the long answer. The short answer is that you're a troll, and appear to be short in mental stature and emotional maturity.

The cases grow out of an internal examination by NSF's Office of Inspector General (IG) of every proposal that NSF funded in fiscal year 2011

It seems to me they are running the tool against things that are already funded. Wouldn't it make more sense to run the tool when receiving any proposal and then pass the results on to whoever is deciding whether that proposal should be funded?

I think the correct usage is "found so many cases of suspected plagiarism" (A) rather than "found so many cases of alleged plagiarism" (B).

B would imply that so many allegations of plagiarism were found, rather than so many instances of possible plagiarism. I think "alleged" is fast becoming the most misused word in American English, right next to "begs the question".

A plagiarism hit rate of only 1 to 1.5 percent is not that high, considering that many research grants are based upon the same core studies, use similar methods (e.g. "We will use a mass spectrometer with 8 plates of xxx"), and refer to prior studies in much the same way.

You call it plagiarism. I call it a good reason to retest your plagiarism software.

A more serious problem is duplication of human subjects in study designs. Many people with rare or recessive genetic problems like to volunteer for research.

Also, on further consideration, one of the problems with scientific research recently, is the lack of "duplicative" studies.

Seeing results for a scientific hypothesis from only one lab proves only that it deserves further study. To study it, and "prove" it, you need to replicate (duplicate) the study.

We should, in fact, see MORE studies with similar wording and language, in that they should have more than one study test the hypothesis. A study of the same condition should have a high "plagiarism" rate, since

It's hard to tell from the summary or article what is going on here. I suspect a decent fraction of these may be people submitting proposals under different programs for similar or overlapping projects. Sometimes a scientific project will kind of fall between programs and people will submit more-or-less the same proposal to two different parts of the NSF, hoping for funding from one. Given the low funding rate and the great deal of uncertainty about funding (thanks, oh-so-functional Congress!) it is pre