Posted by timothy on Thursday October 03, 2013 @04:05PM
from the how-to-be-a-famous-person dept.

sciencehabit writes "A sting operation orchestrated by Science's contributing news correspondent John Bohannon exposes the dark side of open-access publishing. Bohannon created a spoof scientific report, authored by made-up researchers from institutions that don't actually exist, and submitted it to 304 peer-reviewed, open-access journals around the world. His hoax paper claimed that a particular molecule slowed the growth of cancer cells, and it was riddled with obvious errors and contradictions. Unfortunately, despite the paper's flaws, more open-access journals accepted it for publication (157) than rejected it (98). In fact, only 36 of the journals solicited responded with substantive comments that recognized the report's scientific problems. The article reveals a 'Wild West' landscape that's emerging in academic publishing, where journals and their editorial staffs aren't necessarily who or what they claim to be."

Science Magazine did a bad experiment about submitting a spoof scientific report so that you would click on them! How can you trust a science magazine that uses bad scientific methods to make a point? Real scientists create experiments that can be reproduced and independently verified, and they did not. Q.E.D.

What is stopping someone from "independently" creating a bogus paper and submitting it to numerous peer-reviewed, open-access journals and analyzing the results? It seems reproducible and independently verifiable to me.

It would be a de novo experiment, as the first was not independent. It seems much like the Microsoft "Scroogled" ads; it made me think they must be desperate to employ such methods. "Coke says Pepsi sucks; Coke confirms it." It was not intended as a really serious poke at them, as I really like their magazine and they do have good articles in my opinion. It sounds like marketing-department logic at work here.

The null hypothesis is that the journals have sufficiently good review processes to avoid publishing papers with obvious and fatal flaws. If you submit a paper with obvious and fatal flaws and it is published, that hypothesis is not true. It's proof by counter-example, and no control is required for it to be valid.

And you think that trying to reject the null hypothesis in case of traditional journals is not worth the time? Given the fact that they apparently checked for the quality of the rejection letters, I think it would be worth their while to try.

Real scientists create experiments that can be reproduced and independently verified and they did not. Q.E.D.

This is less about a failing of science and open publishing journals than the fact that on the internet, reputations can be shed like a snake sheds its skin -- you're just a few clicks away from a new account and a new identity. This has been a long-studied problem in cryptography -- how to create trust networks in public key crypto with key signing parties, etc. That the lessons learned there apply to social networking sites and open publication journals as well requires only the smallest amount of creativity to see.

If you want honesty, you need to have some way of punishing people who are dishonest. It really is that simple; you need a way to saddle them with a cost that can't be shed by simply switching identities. And the best way to do that, for better or for worse, is a central authority in the real world that matches online identities to real-world ones. Everything else is varying degrees of broken.

Create a blacklist of people who have lied, and although you may be able to overwhelm the system for a while, it is self-correcting... eventually it will run out of people willing or able to get blacklisted, and the quality will then start to rise as people are forced to be responsible for what they say and do.

How many of the open access journals rely on click through advertising? Follow the money, I say.

I think they're all trying to figure out their business models.

I know some respected organizations have created open access journals, though they rely on member fees to pay for the costs. Others rely on the author to pony up some cash (some up to $1500) which pays for it.

I think the author-pays model is an interesting one, and quite possibly might be a way to cut down the number of bad articles. After all, if you're not willing to pony up, you probably don't have enough belief in your research.

I think the author-pays model is an interesting one, and quite possibly might be a way to cut down the number of bad articles. After all, if you're not willing to pony up, you probably don't have enough belief in your research.

That fails to address an issue like the Heartland Institute financially backing anyone with a paper which supports one of their own policies.

Few if any, I'd guess. Academic journals are usually not ad-based publications. The "open access" model described here means that either the author pays for publication, the author's institution pays, or the publication has an institutional grant to cover it.

When you follow the money, it leads you to two groups. At the bottom of the pole are the individual scammers who've set up these "journals" -- the article mentions a few professors who were at best slipping and at worst cynically and intentionally running scams.

Clicks are not the problem. Journals don't get any money from advertisement clicks. The real problem is this:

At present, "Open Access Publishing" mostly means "Author Pays." If the author is your customer, then obviously you publish whatever they want. We must abandon extortionate academic publishers like Elsevier altogether by building arXiv overlay journals: filters that take over the journal's role of reviewing papers and declaring them important. And these must be paid for with tax money, because the customer should be society.

Just like with universities: Britain has rampant grade inflation because the students all pay 15k USD per year (9k GBP). St Andrews has a 98% graduation rate. A 98% graduation rate tells me the university did basically no "selection" on their admitted students; all selection occurred when an admissions person read their test scores from high school. In other words, the student is the customer and the product is a little piece of paper. This is why Britain sucks so bad at engineering and must create that blatantly bullshit THES ranking system to make themselves look good.

In continental Europe, almost everyone who finishes high school can attend university without paying, but the universities select students by failing out the shitty ones, so society is the customer and the students are the product. It's infinitely more fair, because gaming the system in high school does nothing, and people who never really hit their stride until they find challenging material do well.

Seems like degree-mills are more common than actual universities by the same token.

To solve the problem, simply eliminate final exams and meaningless accreditation and implement entrance exams for jobs. Instead of the traditional degree mill, one of the other stories shows how Makerspaces could be the answer: "And here we have a 3D printed degree, and over here a Degree Lathe, and this one's a 3 axis CNC degree mill..."

What matters is the results from the top journals only, or maybe expand that to only the ones that people currently trust. The same argument is made about the number of crappy apps in a specific app store. Go ahead, add another ten million crappy apps to the library. It's irrelevant. Show the number of crappy apps that actually get downloaded or show up high in search results. Nobody cares how many fake journals are out there, as the majority are painfully obvious. What matters is whether the top ones have solid review processes.

The problem is that serious decisions are made by people who have no idea which journals are top quality. Bad tenure decisions, bad engineering choices, and god forbid bad medical decisions are being made daily on the basis of nothing more than "hey, the European Journal of Chemistry sounds legit."

In Norway, we have a "level" system that is used in academia throughout the country. It is used for evaluating researchers and research groups when it comes to employment, tenure, funding etc. Your "point score" is summed up, 2 points for publication in a "level 2" journal, 1 point for "level 1".

A journal is either "level 2", "level 1" or "level 0". "level 2" is a selection of top journals from each field in science, 2000 in total (for all of science, from computational physics to the sociology of music). "level 1" means the remaining serious peer-reviewed journals. "level 0" either means "bullshit journal" or "journal that was founded just last year".

Researchers may nominate journals for a change in status, e.g. 2->1, 0->1, etc. The decisions are made by a government-appointed body on a yearly basis. It's nowhere near perfect, but it's a lot better than nothing.
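The point scheme described above is simple enough to sketch in a few lines of Python. The journal names and their levels here are invented purely for illustration, not real ratings:

```python
# Hypothetical journal ratings under the Norwegian "level" system:
# 2 points per level-2 publication, 1 per level-1, 0 per level-0.
LEVELS = {
    "Journal of Serious Physics": 2,   # "level 2": top journal in its field
    "Acta Computatica": 1,             # "level 1": ordinary peer-reviewed journal
    "Intl. Journal of Everything": 0,  # "level 0": bullshit or brand-new journal
}

def publication_points(journals):
    """Sum a researcher's points over their publication venues."""
    # Unknown venues count as level 0, i.e. zero points.
    return sum(LEVELS.get(j, 0) for j in journals)

score = publication_points([
    "Journal of Serious Physics",
    "Acta Computatica",
    "Intl. Journal of Everything",
])
print(score)  # 2 + 1 + 0 = 3
```

The real system evaluates whole research groups the same way, just by summing points over everyone's publication lists.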

A journal is either "level 2", "level 1" or "level 0". "level 2" is a selection of top journals from each field in science, 2000 in total (for all of science, from computational physics to the sociology of music). "level 1" means the remaining serious peer-reviewed journals. "level 0" either means "bullshit journal" or "journal that was founded just last year".

Here's the problem with doing that so systemically: it is fundamentally anticompetitive, and leads to stagnation. Nobody would bother submitting to a "level 0" journal because it won't earn them any props at all, which means that the journal can never become anything more than a "level 0" journal. This means that you don't get fresh blood with new ideas on the review boards, so progress moves at a snail's pace. There's something to be said for disruptive innovation, even in academic publishing circles.

Also, the entire notion of judging the value of your scientific contribution based on what journal agreed to publish it is as absurd as judging the value of a car based on what dealer sold it. A paper should stand or fall on its own merits. A good article that pushes science forward, even if published in a minor journal, should weigh significantly in your favor for tenure, and a lousy article, even if published in a major journal, should not. A system that does the opposite is abject stupidity, pure and simple.

Yep, and quality papers are what they should be competing for. The journal ranking systems used by universities (not just in Norway) are designed to give more weight to journals that have a long track record of doing that. This is why the Nature and Science journals are at the top of the list; their long publishing history and track record of quality papers speaks for itself. A low-ranked journal will stay low ranked until its track record is such that it can be deemed a reliable source. If it does nothing to build that record, it stays where it is.

The whole journal publishing idea seems like one great big obfuscating scam. Publishing seems to have nothing to do with it at all; it is all about the reality of peer review, or its absence. The most important issue here is not publishing but the article peer review process: how many people reviewed it, who they are, and what their qualifications are with regard to suitability for the review process. When left to private for-profit enterprise, this seems basically to be a scam.

I need to move to Norway. For some reason, it seems to be the only sensible country on earth. Now if we could just tilt the rotational axis of the earth a little farther up, we'd solve its only problem: eternal days and eternal nights.

The total number of these journals is perhaps the more relevant part of this article. There are 304 journals that are potential relevant places for that one submission? How can anyone keep up with the current science in any field when there are 304 places to look? Never mind that many of those aren't sufficiently vetting the product.

And if you are just writing them off and basing your reading on the "top ones", of what value are these?

While science journals are often used by researchers to find out what their colleagues are doing and can thus be vetted by the reader, they are quite often the bases for undergraduate and graduate educations, and putting deliberate crap in front of them is a Bad Thing.

Search engines find words and phrases. They don't vet the material they return, nor do they usually return only what you are actually looking for. You've replaced the problem of finding relevant articles in the tables of contents of 304 journals with the problem of finding relevant articles in the 345,289 hits returned by Google.

And if you are going to just Google for the articles, why have 304 journals at all? Just put all the articles on the web and let the search engines sort them out.

No, that is not the point. The point of open-access papers is to allow wider dissemination of the papers by removing the barrier of ridiculously high access fees. Accessing a single paper can cost $50 for a researcher who does not have the proper subscription. Open-access journals are mainly designed to cut out the editors and publishers who charge ridiculously high publication fees or access costs.

Open access does not mean that anything gets published there. Though as a reviewer for many computer science journals, I can guarantee you that anybody can publish there... assuming the level of contribution and style are up to the standards of scientific method and writing. That is a very difficult thing to achieve for a non-academic because of the time commitment in "learning" how to write these papers.

There is a certain amount of irony in someone attempting to prove that open access journals publish bad science through the use of bad science. I read the article, and his only mention of testing closed publications is in his conclusion, quoting a colleague who suggested just such a step. He discounts this by restating his thesis (that open access journals are more numerous and publish more papers than closed ones) before shifting topics.

To actually make any of the conclusions (or inferences) about the quality or rigor of open-access journals REQUIRES a control group of traditional journals to be operated on in a similar manner. In other words, there needs to be a sting on both open-access and traditional journals simultaneously.

Without that, no claims can be made. None. Not even one. Because we DO NOT KNOW how many traditional journals, like Science, would also have accepted their falsified paper(s). It's possible the traditional journals could have lower standards of quality and rigor than the open-access group.

Science and AAAS (of which I'm presently ashamed to be a member) should be blasted for publishing this tripe. It needs to be retracted, immediately. If they want to have the slightest shred of credibility here, they should at least conduct scientifically rigorous stings.

Yes, but. This isn't entirely a binary scientific question. If the question were "are open-access journals worse than traditional journals?", you'd obviously need a control. But "Is the peer review process at open-access journals acceptable?" is not a scientific question, but one of values and personal preference. Most people would decide that a 50% failure rate is not acceptable, control or no control.
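For what it's worth, the summary's own numbers put the failure rate above 50%. A quick back-of-the-envelope check:

```python
# Numbers from the summary above: 157 journals accepted the hoax paper,
# 98 rejected it (the remainder of the 304 had not reached a decision).
accepted, rejected = 157, 98
decided = accepted + rejected          # 255 journals reached a decision
failure_rate = accepted / decided      # accepting a bogus paper = failure
print(f"{failure_rate:.1%} of deciding journals accepted it")
```

That comes out to roughly 61.6%, so the "50%" figure above actually understates the problem.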

Now, we're all *very* curious to know whether traditional journals fare better than open ones, and Science is showing bias and intellectual dishonesty by avoiding that question, BUT that doesn't mean that this study has no value.

No, it's not disgusting. It's biased and would be better with a control arm (which the author admits to). It also points out a significant issue on its own: that there are a lot of scams in open-access journals.

The more interesting question certainly is "are traditional, purportedly higher quality journals any better?" The author, or someone else, could certainly do that, and I suspect someone will. But his methodology stands alone. He was not trying to find who was better or worse, just whether there was a problem at all.

In fact, over the years, Science has published numerous scientifically fraudulent papers, some of which were pretty blatant. So, in a sense, we already have a control. In addition to control experiments, it needs three more things experiments usually need: a statistically representative data set, a justification, and a clearly defined hypothesis. It lacks all of those.

Peer review isn't meant to eliminate all errors from scientific papers, it's simply intended to make life a little easier for readers by weeding out papers they are probably not interested in. So, if the hypothesis is that "lower cost journals have less stringent peer review", that doesn't require any testing: it's almost certainly true, but it doesn't matter to anybody. Publishing a bad paper in a peer reviewed journal doesn't hurt anybody, except maybe the reputation of the journal.

From the start of this sting, I have conferred with a small group of scientists who care deeply about open access. Some say that the open-access model itself is not to blame for the poor quality control revealed by Science's investigation. If I had targeted traditional, subscription-based journals, Roos told me, "I strongly suspect you would get the same result." But open access has multiplied that underclass of journals, and the number of papers they publish. "Everyone agrees that open-access is a good thing," Roos says. "The question is how to achieve it."

So he didn't miss it; maybe he is doing this right now but isn't telling.

Science and AAAS (of which I'm presently ashamed to be a member) should be blasted for publishing this tripe. It needs to be retracted, immediately. If they want to have the slightest shred of credibility here, they should at least conduct scientifically rigorous stings.

A lot of people cite the democratizing power of "open access" and "crowd sourcing". I feel this is an example of the same principle at work.

On one hand, it is easier for those that are not entrenched within the bastions of power to be heard, but on the other hand, all data received from these sources must be treated much more cautiously.

In the past, "being published" was a big deal, as it required a fairly high bar of factual accuracy, and that is still the case for many prestigious journals. But in the rush to Twitter-ize research and accept as many publishable details as rapidly as possible in the name of profit and prestige, the barriers to entry have eroded.

In much the same way that hard investigative journalism with strong ethical guidelines, verifiable sources and solid editing will always have a place in my heart, these reputable journals can serve to establish a foundation of trust in the scientific arena. And now, in much the same way that one should treat any writing within the "blogosphere" as suspect until verified, many open access journals must now be treated with the same level of suspicion until it is proven otherwise that they hold themselves to a higher standard.

On the other hand, publishing nonsense can quickly be modded troll, if the journals have such a mechanism in place.

Moderating by scientists in the field seems better than letting some gatekeeper decide which new ideas get to see the light of day, and which get deep sixed simply because they are unpopular points of view at the moment.

How much actual damage can be done by publishing rubbish? (It's a serious question, because I don't pretend to know the answer.) Aren't all results subject to verification by peer review anyway?

Moderating by scientists in the field seems better than letting some gatekeeper decide which new ideas get to see the light of day, and which get deep sixed simply because they are unpopular points of view at the moment.

I take it you're unfamiliar with how journals such as Science decide whether or not a paper should be published?

AIUI what happens is they send the paper to a small group of reviewers whom they regard as experts in the field. The reviewers aren't supposed to know whose paper they are reviewing, but they can often figure it out anyway just from prior knowledge of who is doing what.

Some of those reviewers will do their job as honestly as they can (though I bet they will still be more favourable to stuff that confirms their beliefs); others will deliberately try to discredit any paper they see as being from a rival so they can get ahead.

Close. The reviewers are actually the anonymous party, and they see the author list. So there's much more potential for abuse.

A quality editor can usually see through bias in a reviewer, and I've seen them override a reviewer's decision if they think there's a problem. There have certainly been unfair, biased reviews from people with an agenda (arguably it's more common in grant reviews, in my experience), but this is not usually perceived as an endemic problem. In many journals, a submitter can recommend reviewers.

Moderating by scientists in the field seems better than letting some gatekeeper decide which new ideas get to see the light of day, and which get deep sixed simply because they are unpopular points of view at the moment.

Even for more trivial reasons, like disliking the author or where they are from. Science isn't meant to work by "argumentum ad populum" or "argumentum ad auctoritatem" in the first place.

How much actual damage can be done by publishing rubbish? (It's a serious question, because I don't pretend to know the answer.)

First the disclaimer. I do believe that professionally peer reviewed journals and reporting still has a place. I pay significant sums of money to subscribe to a newspaper, a few top magazines, as well as Science and Nature. They serve a purpose and, to me, are worth the costs.

That said, Science is not beyond reproach on accuracy. Both journals have had a very scandalous path over the past few years, accepting clearly fraudulent papers. In July, evidently, Alirio Melendez had a paper retracted; this researcher fooled many major journals with at least 13 papers. Science also published the paper on bacteria living on arsenic, which is generally seen as having major issues. I recall reading that a paper related to dancing and sexual attraction, maybe in Nature, was retracted due to fabricated data.

That said, there is little wrong with a single suspect paper being published. This is how scientists communicate. There is little protection against fraud such as occurred in this case because it is so patently silly. Building a system to protect against such silliness would mean that we would no longer be focused on science. The real problem here is that the popular media does not understand the difference between a single piece of research and the process of research. Places like /. should know better, but they don't. The process of science is to reproduce and extend results. When a bad paper corrupts the process, as has happened when Science and Nature have published suspect papers, that is a problem. These journals, having high impact factors, have a responsibility to proctor what they publish. A backwater online journal does not necessarily have such a responsibility, instead relying on the ethics of the researcher and a faith in the process of science to ferret out unethical and silly people like these.

What is truly alarming is the simple bad science present in this research project. This experiment has no control group and does not try to match the target journals to equivalent subscription journals.

If the research was done properly, the open-access journals would be matched with closed journals on the basis of several relevant criteria: impact factor, cost to publish, region predominantly served, or the like. This is the way research is done. One can't just go out onto the street, ask 10 people you don't like whether they ever thought of killing someone, then claim that everyone in the group is a murderer if 7 say yes.

The paper would then be submitted to all the journals, the results generated using well known statistical methods, and then, if there is some degree of confidence, the results published.
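As a minimal sketch of what "well known statistical methods" could mean here: a one-sided Fisher exact test on a 2x2 acceptance table, comparing an open-access arm against a traditional-journal arm. Note the real sting had no traditional arm, so the traditional numbers below are entirely made up for illustration:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    row 1 = open-access (accepted, rejected),
    row 2 = traditional (accepted, rejected).
    Returns P(row-1 acceptances >= a) under the null of equal rates."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    # Hypergeometric upper tail: sum P(X = k) for k = a .. min(row1, col1).
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Open-access arm from the sting (157 accept / 98 reject) versus a
# purely hypothetical traditional arm (10 accept / 20 reject).
p = fisher_one_sided(157, 98, 10, 20)
print(f"one-sided p = {p:.4f}")
```

In practice one would reach for something like scipy.stats.fisher_exact rather than hand-rolling the tail sum; the dependency-free version above is just to show the shape of the comparison the commenter is asking for.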

My prediction is that if you were paying a closed-access, low-ranking journal to publish a paper asserting that the moon was composed of coagulated casein in a mesh of lipids, they would not blink at printing it.

At the end of the day, in this case Science is no better than your average corrupt advertising agent.

First the disclaimer. I do believe that professionally peer reviewed journals and reporting still has a place. I pay significant sums of money to subscribe to a newspaper, a few top magazines, as well as Science and Nature. They serve a purpose and, to me, are worth the costs.

That said, Science is not beyond reproach on accuracy. Both journals have had a very scandalous path over the past few years, accepting clearly fraudulent papers. In July, evidently, Alirio Melendez had a paper retracted; this researcher fooled many major journals with at least 13 papers. Science also published the paper on bacteria living on arsenic, which is generally seen as having major issues. I recall reading that a paper related to dancing and sexual attraction, maybe in Nature, was retracted due to fabricated data.

True, but carefully fabricating data to support your conclusions, and fabricating data and the accompanying experimental method to clearly be bad science, are two very different things. In this case, the author was careful to be sure any decent peer review would reveal the flaws, in order to gauge how well journals conduct reviews.

That said, there is little wrong with a single suspect paper being published. This is how scientists communicate. There is little protection against fraud such as occurred in this case because it is so patently silly. Building a system to protect against such silliness would mean that we would no longer be focused on science. The real problem here is that the popular media does not understand the difference between a single piece of research and the process of research. Places like /. should know better, but they don't. The process of science is to reproduce and extend results.

Scientists communicate by publishing results of experiments that are designed to be scientifically valid. They may disagree about conclusions or methods and check the underlying data.

On one hand, it is easier for those that are not entrenched within the bastions of power to be heard, but on the other hand, all data received from these sources must be treated much more cautiously.

More cautiously than what? A Science paper? Science has published numerous scientific frauds over the years. In fact, if you're looking for a high profile scientific fraud, you're more likely to find it in Science than in an open access journal, because that's where the rewards are highest.

However, there is a clear difference between a fraudulent paper, and a shoddy paper in which the experimental results are clearly an error.

Catching fraud can be very tough for a reviewer, since they pretty much have to rely on the author's word that the primary evidence exists. They don't get to go look at the students' lab notebooks, or whatever. If someone wants to fabricate a graph, or photoshop a gel, that's going to be hard to figure out. It's only going to be caught when someone with the interest, knowledge, and access tries to reproduce the work.

"And now, in much the same way that one should treat any writing within the "blogosphere" as suspect until verified,..."

Oddly enough, the entire point of science is that all claims are suspect until verified. And I applaud anyone who approaches any testimonial in that fashion, whether it's in the blogosphere or the sciencesphere.

While you raise an interesting point -- that open journals should be treated with suspicion and scrutinized, probably more closely than closed systems -- the point is that this study was not at all scientific. If an experiment is done to show people accepting bad papers and only one group is tested, how is this "science"? More importantly, since this is an article, how is it "fair" journalism?

Since I see garbage on closed proprietary sites as well, why would they not also submit the same bogus papers to closed journals?

I've made comments before comparing science and religion, and too often people think that I'm a religious person trying to belittle a genuine quest for knowledge. On the contrary, I think the genuine quest for knowledge is an amazingly worthwhile thing. However, science has become a method for the "practitioners" and "priests" to exert social, economic, and institutional influence by swaying the beliefs of those who are not educated enough or informed enough to differentiate between genuine knowledge and blind dogma.

Very few are fooled. Sociologists/Psychologists/Economists can say they've 'proved' something till they're blue in the face. Nobody will take them seriously.

Umm, what are you talking about? Too many people take them seriously. Loads of people pop pills all the time because of what psychologists have decided is "normal." Heck, the livelihood of most people in most developed countries is highly dependent on the people in control of the money supply following various economic theories -- and when those theories fail, the economy tanks.

Maybe "hard scientists" won't take these things seriously. But the vast majority of people in the world seem to -- often with serious consequences.

Say that when a child psychologist helps kidnap your child and then tells the judge that your child is suffering from "Oppositional Defiant Disorder" because they keep saying they want to go home. Judges and juries take them seriously, and men with guns enforce the will of these judges and juries.

The institutions and scientists at the heads of those institutions have become corrupted (and purged) multiple times throughout history (Lord Kelvin the traditional example), but the principles of science seem sound and correct over time.

That's insufficient to explain what science is when it is what it should be. And anyway, science today is not what it should be. "The process of getting closer to truth by experiment" is not what most people are talking about when they talk about 'science'.

That's insufficient to explain what science is when it is what it should be. And anyway, science today is not what it should be. "The process of getting closer to truth by experiment" is not what most people are talking about when they talk about 'science'.

What does public perception have to do with what science is? They're two completely different things. How do you know what "most people" think about this?

You say science today is not what it should be. Do you realise it never has been? The founders of natural science were known to work based on strange religious ideas, not to mention the whole alchemy background thing. Christianity continued to shape science significantly right up to the 20th century. By that point, the social sciences you so seem to loathe were already taking shape.

How long is "over time"? From my perspective on history, scientific "principles" and methodology are continuously changing. The standards of what constituted valid "scientific argument" were vastly different over time -- Copernicus, Kepler, Newton, Lavoisier, Darwin, etc. all had incredibly different views on what constituted acceptable scientific methodology. Yes, they all collected data, and "experiments" have been performed in various ways over the centuries, but the foundational axioms of how theories are justified have shifted dramatically.

It may seem that the idea of doing an experiment and believing the results is obvious to you, but it wasn't obvious for millennia. That's basically it: science looks at the evidence, as opposed to what any authority might say.

Once again, to quote Feynman: "The principle of science, the definition, almost, is the following: The test of all knowledge is experiment."

On the contrary, I think the genuine quest for knowledge is an amazingly worthwhile thing. However, science has become a method for the "practitioners" and "priests" to exert social, economic, and institutional influence by swaying the beliefs of those who are not educated enough or informed enough to differentiate between genuine knowledge and blind dogma.

You have no idea what you are talking about. A paper published anywhere is just correspondence. It is intended for the scientific community. That's all.

If you can't tell a boson from a photon, or you don't know what HDL actually is beyond the talking points you see on TV, then journals are NOT INTENDED FOR YOUR CONSUMPTION. It is like reading the latest materials research when you don't know how to join two 2x4s together without using fasteners or glues. And journalists are just as bad as, or worse than, the general public.

Oh, I think you have a point. Lots of people deify 'science'. Even people who are supposed to know better (i.e., this crowd). It's hard; we're stupid humans, not Vulcans. Science is a weird, counterintuitive thing to most people. Scientific knowledge is also enormous: no one human being can understand more than a tiny fraction of what goes on, and thus be in a position to truly debate the merits of something. This crops up all of the time in the Climate Change debate. Yes, to really understand it you can go back to the underlying research, but almost nobody does.

However, science has become a method for the "practitioners" and "priests" to exert social, economic, and institutional influence by swaying the beliefs of those who are not educated enough or informed enough to differentiate between genuine knowledge and blind dogma.

While I agree with you on some aspects, you seem to miss that (in theory) other people can call upon those priests to verify their miracles. True, most people never do it. The scientific method requires that there are people to verify claims and catch mistakes. For most important problems, however, there are enough people to catch mistakes sooner rather than later.

Losing faith in science would be like losing faith in open source because most people aren't educated enough to view the source and verify it themselves.

I've made comments before comparing science and religion, and too often people think that I'm a religious person trying to belittle a genuine quest for knowledge. On the contrary, I think the genuine quest for knowledge is an amazingly worthwhile thing. However, science has become a method for the "practitioners" and "priests" to exert social, economic, and institutional influence by swaying the beliefs of those who are not educated enough or informed enough to differentiate between genuine knowledge and blind dogma.

A lot of people who don't understand how the scientific method works, and only get their science information from tabloid newspapers, think this way.

It doesn't make it true.

The difference between science and religion is that science actually questions itself: it is designed to be questioned, and if proven wrong, science has to change. Religion has no such requirement, and even when proven wrong beyond all doubt, has no impetus to change.

For years we have known that there is a glut of graduates in the system. I remember my freshman year at university, the attitude of a lot of students was "the Master's is the new Bachelor's; you have to have one to get an entry-level job", or, when I got closer to graduating, "well, I don't want to be done with school and my parents are helping me out, so I'm going to go for my Master's". While education is awesome, the fact is that you don't have to be all that smart anymore to get a Master's or a PhD.

Even as an undergrad I was pressured to publish. I didn't have the time or the resources to do anything meaningful, but my professors all said that I had to publish to even consider going to graduate school. They said that pretty much no matter what I did, even if it's not novel or valuable to the academic community, there will be a journal that will publish it. That's the current state of academics now.

Let's be clear: I'm not talking about MIT or Berkeley. I'm talking about the thousands of research institutions across the country that, while also doing amazing research, churn out Master's degrees and PhDs like a printing mill. When you dilute the pool of researchers, there is going to be subpar research. When there is a glut of subpar research, there will be journals that see the business opportunity and publish anything you pay them to publish. This is not new.

Science has an axe to grind here, obviously, and this "experiment" is seriously biased. It does not appear that the paper was submitted to any closed, for-profit journals (like Science). It would have been much more interesting to see how many of them would have accepted it.

Is something you very badly need, because you utterly failed to either comprehend or answer my question.

Have you ever heard of a "control" group? (Hint: It's a basic part of the scientific method)

Yes, I've heard of a control group. No, it's not applicable here. If you're testing open-access journals, you compare one to another (like for like). You don't do experiments on apples and use oranges as the control group. Or, to put it another way, you're a moron with no clue.

Yes, they should have submitted it to a similar number of similarly ranked closed-access journals and seen if there's any difference due to the open access policies specifically. As it stands it's sort of interesting, but doesn't tell us squat about open access.

It does not appear that it was submitted to any closed, for-profit journals (like Science).

But they did submit a bogus paper to Science. It was titled "Who's Afraid of Peer Review?" The paper lacked a control group, but Science published it anyway in spite of its obvious failure to measure up to scientific standards.

How can you say that? They didn't divide the paper up between those that published the fake paper, and those that did not. They divided the groups between open access journals and... well... we won't talk about the other group. They then painted the open access journals that rejected Science's fake paper with the same brush that they painted the ones who accepted it.

This article is being widely panned as lacking controls, published without any critical review, and driven by self-interest from a traditional publisher with the most to lose from Open Access taking off (as it is). Some have gone so far as to assert that it's such an overreach, given how badly it was done, that it will make Science as a journal look partisan.

I'm sure this will heat up some much needed debate about poor quality journals and the failings of peer review, but with the lack of any controls at all, it says basically nothing about open access as a model for publishing.

I've been studying this (publishing) for some time, in the context of learning, verifying assumptions, and the scientific method.

It turns out that there is really no bar in scientific publishing. A paper doesn't have to be understandable, nor innovative, nor even correct. You only need to be ethical (i.e., don't lie about the data), cite anything that you got from other sources, and show that there is less than a 1-in-20 chance that your result is due to chance alone (p < 0.05).
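A quick stdlib-only sketch of what that 1-in-20 bar actually buys you (function names and parameters here are mine, not from any study in the thread): if you run many experiments where the true effect is zero, roughly 5% of them will still clear p < 0.05 by luck alone.

```python
import math
import random

def p_value(sample):
    """Two-sided p-value of a z-test for mean 0 with known unit variance."""
    n = len(sample)
    # Sample mean of n standard normals has std 1/sqrt(n), so this z is N(0,1) under the null.
    z = (sum(sample) / n) * math.sqrt(n)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

def false_positive_rate(trials=20000, n=30, alpha=0.05, seed=1):
    """Fraction of null experiments that nonetheless pass the p < alpha bar."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # no real effect at all
        if p_value(sample) < alpha:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(false_positive_rate())  # close to 0.05, i.e. about 1 in 20
```

So the bar keeps the false-positive rate near 5% per experiment; it says nothing about whether the paper is understandable, innovative, or important.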

This is the same problem the internet has faced right from the beginning, and it is not confined to academia: who do you trust online, and how can you be sure they're on the level? Someone or something is needed to weed out the bad apples... in other words, moderation. And yes, the same basic principles apply equally to discussion forums like Slashdot as they do to online scientific research journals. Ultimately it comes down to reputation, and some form of karma system. Slashdot's system uses temporary moderation points handed out to randomly chosen users.

... the real problem is that as problem size increases, the human brain just can't deal with all the stress and energy one must expend to fact-check everything. This is why we need automation in checking papers for errors and contradictions: the number of facts you need to know grows exponentially as things get more complex. What we're really seeing is that the human brain is the biggest bottleneck, since human beings have limited time and energy. So no one should be surprised that it's easy to dupe or game a system: the resources you need to stop untrustworthy people are unrealistically expensive. Any area of human endeavor is only as good as the people themselves.

In all seriousness, I - as a researcher myself - understand the need for easy access to publications. However, I never supported the open-access models that came into existence and are being built and pursued today. Why? Because it's all about the money, and a lot of such journals absolutely do not care about quality, or about having big-name editors who perform thorough reviews and make proper decisions about paper acceptance. Big journals have good editorial and review staff, and they simply can't allow them to be bad and irresponsible, because they actually care about their reputation and credibility. The new breed of open-access journals, on the other hand, only cares about revenue.

The institute I work at has mandated open-access publication, as have others; however, it did not provide funding for us to actually publish open-access versions at big-name journals, so we try to play the system whenever we can and publish in traditional journals with traditional publication schemes. I do not care about some politician-flavored scientists' (most of them not even publishing) dreams of a utopian open-access world. I care about publications appearing in credible journals, reviewed by credible people, producing quality publications - even if they are only attainable for money.

Reputable journals are only marginally better. I just witnessed a back-and-forth where some research was attacked by a prominent scientist. The assumptions the latter made weren't quite on target, so the attacked researchers submitted a paper pointing this out. This passed anonymous peer review, but then the journal solicited the opinion of this star scientist. He dismissed the paper with the most bizarre arguments, giving the impression that he hadn't even read it. Then the prestigious journal turned around and rejected it.

Actually, as somebody else here has pointed out, they did submit one single dodgy paper to a reputable journal as well, and it got accepted! (meaning a 100% acceptance rate at the "reputable" journal, versus only 51% for open access...) So the study's conclusion should not really be what it looks like at first glance... OK, admittedly, the sample size of the control group is way too small, but that flaw is part of what makes the paper dodgy.
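The rates quoted above can be worked out directly from the numbers in the summary (157 accepted, 98 rejected, 304 submissions, plus the 1-of-1 "control"); the labels and the way of splitting the denominators below are mine, not the study's.

```python
# Numbers from the thread summary: 157 accepted, 98 rejected, 304 total submissions.
accepted, rejected, submitted = 157, 98, 304

# Rate over all submissions (pending/withdrawn count as non-acceptances) -> the "51%" figure.
open_access_rate = accepted / submitted

# Rate among journals that actually reached a decision.
decided_rate = accepted / (accepted + rejected)

# The "control": a single paper at a single reputable journal, which accepted it.
control_rate = 1 / 1

print(f"all submissions: {open_access_rate:.1%}")   # about 51.6%
print(f"decided only:    {decided_rate:.1%}")       # about 61.6%
print(f"control (n=1):   {control_rate:.0%}")       # 100%, but meaningless at n=1
```

The point the commenter is making falls out of the last line: a 100% rate from a sample of one tells you essentially nothing, which is exactly the kind of flaw the sting paper itself was built around.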

... but in any case, it's an interesting twist on the liar's paradox.

John Bohannon [wikipedia.org] is a biologist, science journalist, and dancer based at Harvard University. He writes for Science Magazine, Discover Magazine, and Wired Magazine, and frequently reports on the intersections of science and war. After embedding in southern Afghanistan in 2010, he was the first journalist to convince the US military to voluntarily release civilian casualty data. He received a Reuters environmental journalism award in 2006 for his reporting on collaborative research in Gaza. He was also involved in some controversy over an article he wrote critiquing the Lancet survey of Iraq War Casualties.

At Science Magazine, Bohannon also adopts the "Gonzo Scientist" persona, where he "takes a look at the intersections among science, culture, and art -- and, in true gonzo style, doesn't shrink from making himself a part of the story. The stories include original art and accompanying multimedia features." As the Gonzo Scientist, Bohannon's research on whether humans can tell the difference between pâté and dog food led to Stephen Colbert eating cat food on the Colbert Report.

Bohannon is probably best known for creating the Dance Your PhD competition, in which scientists from all around the world interpret their doctoral dissertations in dance form. Slate Magazine ran a profile on Bohannon and the competition in 2011. He performed with the Black Label Movement dance troupe at TEDx Brussels in November 2011, where he satirized Jonathan Swift's A Modest Proposal by modestly proposing that Powerpoint software be replaced by live dancers. Bohannon then went on to perform with Black Label Movement at TED 2012 in Long Beach.

While visiting the Harvard University Program in Ethics and Health, he is working on two areas of research: (1) torture, in particular the complicity of scientific and medical workers in torture, and (2) ethical problems involved with obtaining global health data, stemming from his journalistic coverage of the controversial attempts to estimate the health and mortality of the Iraqi population since the US-led invasion.

After completing a PhD in molecular biology at the University of Oxford in 2002, Bohannon focused on bioethics as a Fulbright fellow (2003-2004) in Berlin.