I was talking to two friends, both PhD students in genetics, and they expressed their frustration about the lack of a journal that collects papers describing experiments which failed to yield useful results. They suggested that the absence of such a database of "failed" experiments leads other researchers, unaware the work has already been done, to replicate those same experiments. They also suggested that the inability of scientists to publish such papers, and thus receive credit for their work, adds to the pressure to produce "useful" results, and that this in turn encourages an environment in which new methodologies are ignored in favour of more conservative ones.

What I am curious about is whether anyone here knows of such a journal and, if you don't, whether you think it would be useful to have one. The reason I have posted this in the suggestion forum is that the LoR seems a good place to conduct a survey of scientists and researchers to find instances of unintentional replication of "failed" experiments.

Also, there are so many ways your work can fail that it would be hard to write up useful information about what went wrong. And if you could isolate the error that precisely, it would be fairly easy for you to repeat the experiment and get it right.

Maybe it's my ignorance, but I thought falsification experiments were half the battle in science. Why wouldn't it be useful to compile a database like that? It might get big rather quickly, but it seems to me that it would help with the design and control of future experiments.


I can actually think of really good reasons this is a bad idea. It qualifies the entire experiment as a failure, when that is only an assumption, and that smells like voodoo.

How do you separate out the failures from the successes when you stop before you find out why it didn't work? Especially when you will never be able to account for all of the variables until you can reproduce the failure predictably? If you can't make it succeed predictably, then you have no basis for being able to reproduce a failure predictably.

If you can't isolate why it failed then you have no basis for reporting a failure.

Reason Bran, with two scoops of Objectivity in every box and loaded with Bran fiber goodness... you'll never be full of shit again.

Science - You can see why it works. Religion - You can't see why it doesn't work.

How do you separate out the failures from the successes when you stop before you find out why it didn't work?

This is my point. Do you think that the way academic publishing currently works might provide an incentive for researchers to stop before they find out why something didn't work? And if that is the case, might the opportunity to publish failures-in-progress alleviate such pressure? I ask only because my friends seemed to feel that they have had to drop lines of research, before they had been exhausted, because of the pressure to publish.


Pressure to publish is very real, and I suppose one would have to consider it an evolutionary pressure. You either do your work in a manner that allows you to compete, or you perish. That may or may not be unfortunate.

I think one must qualify "failure": if any conclusion can be reached based on evidence, then even a conclusion that does not support the initial hypothesis adds to the available information for a known, specific reason. It cannot be called a failure just because the results provide evidence contrary to the initial hypothesis.

Example: the Michelson-Morley experiment.

An experiment can only be qualified as a failure if no conclusion can be reached. Only then are you left with nothing to publish.

The more I think on it, the less helpful it seems to publish failures (where no conclusion can be reached).


I do take your point, but I still don't understand why such a paper wouldn't be useful as a signpost saying "dead end, don't bother". If failures aren't published, how can other researchers know whether or not they are about to embark on a four-year project which someone else has already shown to be fruitless? Surely this is part of the study of methodology.


Because if someone swapped grape Kool-Aid for the ninhydrin solution and nobody thinks to check, it may be a personal failure, but publishing it as a warning that "dragons be here" is counterproductive.

What you are suggesting is that it would be excusable to stop research or commercial development based upon "dunno."

There is no place in science for "dunno" except as a question. "Dunno" is never an answer.

If a publication can't add to knowledge, then the least one can expect is that it not stand in the way.


"Human error" holds no water here, as any experiment or piece of research may contain mistakes which go unnoticed. I was wrong to write "don't bother"; of course we should bother to go down the same line as before, but we should be informed when we do so. Say you've designed an experiment and want to check if anyone has tried it before. You see that someone did, ten years ago, and failed to get a useful result, so you think "OK, they ballsed it up somehow". But what if thirty different people have tried it independently of each other, and all failed to get a useful result? Surely that would cause you to re-examine your design, which might in turn lead to a better experiment which does finally add some knowledge. I agree with your last post as a response to my last post, which I see was badly worded; however, I still do not see why positive and negative results are published while null results are not.


In science we do exclude lines of inquiry, but we do so for a substantial reason.

Let's use Pegasus as an example.

If we apply reason and cladistic phylogenetics, we can discount Pegasi. There is no path through which an animal could inherit all of the genetic features required for it to have the traits of Pegasus. There are no mammalian hexapods, so regardless of whether some creatures evolved forelimbs into wings, there is no line through which one could also inherit an additional set of limbs which did not. We must not exclude them solely by their absence from the taxonomic record; rather, we must exclude that for which no possible line of genetic inheritance exists. It is a positive disproof rather than an exclusion by absent evidence.

We can then say that inquiry into the existence of pink Pegasi is a frivolous inquiry, and be justified. That is a valid exclusion, and necessary to avoid redundancy of effort. We MUST exclude that which isn't possible in order to make inquiry efficient. But we must have substantial reason to exclude.

When we design an inquiry, we substantiate the line of logic behind it with precedent whenever possible. We apply precedent either directly or through some apparently suitable analogy. If we don't get the results we seek, we may not exclude the strategy as possibly valid unless we can demonstrate that either the analogy used or the precedent used was invalid.

Publishing a failure excludes not merely the inquiry, but the foundation upon which the inquiry was based. If that can be done in such a way as to provide a positive proof that the precedent is invalid, then the inquiry did not fail. If it can't provide a reason that the foundation of the inquiry was in error, either an error in the analogy employed or an error in the precedent upon which the analogy was based, then it becomes an exclusion by absence rather than a positive proof that the inquiry is devoid of merit.

"I did it this way and it didn't work" is not sufficient unless you can isolate all possible reasons that the inquiry failed. No, you can't exclude human error at that point, because unless you know exactly why the inquiry failed, it is always possible that you overlooked something. One can't exclude human error by the absence of information... only by positive proof that the error lies elsewhere.

Thirty people before you can fail, and the information is meaningless unless a positive conclusion can be reached that the inquiry deserves no merit.

Yes, that means that science on the whole is a rather tedious and repetitious process. It does not advance by leaps (usually) but by the slow, measured progress of building upon precedent.


As much as I like what the OP suggests, there are simply too many ways to fail for them all to be recorded. (I know Failblog tries to capture them all... that's why there's so much lulz on the interwebs.)

I think the most parsimonious way forward is to present which methods work; it may then be induced from them which don't. If there are ways that work that we haven't found yet, then that's a consequence of being cautious yet inefficient. I don't think science, as an enterprise, needs a timeline for completion.

The search would have to be naive. In other words, each entry would not record the experiment itself but would only record a list of its dependencies. Each item would be listed in a strict format. The search could not assume suitability for purpose for any of the dependencies, but the properties of each dependency most essential to the inquiry could be sub-tabled... like "sulfur content not higher than 0.0003%", if that is important to the inquiry.

The search would then only correlate a failure condition across otherwise undescribed inquiries that have dependencies in common.

Something like this might be useful before the experiment had to be abandoned and would not prejudice future inquiries without cause.
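To make the idea concrete, here is a minimal sketch of the naive dependency search described above. Everything here (the record format, the field names, the example entries) is my own illustrative assumption, not an existing system: each entry records only a failure flag and a strictly formatted list of dependencies, never the experiment itself, and the search does nothing but count which dependencies recur across failed entries.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One anonymised record: no description of the inquiry, only its dependencies."""
    entry_id: str
    failed: bool
    dependencies: list  # strictly formatted names, e.g. "ninhydrin_solution"

def shared_failure_dependencies(entries, min_failures=2):
    """Return dependencies appearing in at least `min_failures` failed entries.

    The search is naive: it never assumes any dependency was fit for purpose,
    and it never inspects what the inquiries themselves were about.
    """
    counts = {}
    for entry in entries:
        if not entry.failed:
            continue
        for dep in set(entry.dependencies):  # count each dependency once per entry
            counts[dep] = counts.get(dep, 0) + 1
    return {dep for dep, n in counts.items() if n >= min_failures}

# Hypothetical example data.
entries = [
    Entry("exp-01", failed=True,  dependencies=["ninhydrin_solution", "spectrometer_a"]),
    Entry("exp-02", failed=True,  dependencies=["ninhydrin_solution", "buffer_x"]),
    Entry("exp-03", failed=False, dependencies=["spectrometer_a"]),
]
print(shared_failure_dependencies(entries))  # {'ninhydrin_solution'}
```

A correlated dependency is only a prompt to re-examine one's design, exactly as argued above; it is not a positive proof that any line of inquiry is without merit.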
