Opinion: Missing Methods

All of us know the apocryphal tale where the mother-in-law “shares” a secret family recipe, but as much as you try, the cookies never taste the same as Mama’s. Of course, Mama’s withholding of the full recipe is a move to sustain her generational authority, perhaps a last grasp for her son’s affection, and definitely, a chance to show up that good-for-nothing daughter-in-law. This is my current fear about science: that soon, we will no longer be able to make cookies the way Mama did.

A pillar of our scientific system is that “true” findings will be validated when other labs repeat experiments, and thus, there is almost a sacred obligation to clearly explain our technical details in the Methods or Procedures sections of our papers. Without a doubt, there has been a steady erosion of this process, making it difficult, if not impossible, to recapitulate the findings of others.

A benign version of this phenomenon is realized at journal clubs: so often, as we review some interesting data, someone asks, “How did they do their assay?” When we do not find the answer in the paper, we turn to the supplementary materials, if the presenter has thought to bring a copy. To our consternation, the technical details are often skeletal and incomplete, with evidence suggesting that the text went unedited. Or, we are made to enter an Alice in Wonderland rabbit hole where we are referred to a trail of previous papers, whose Materials and Methods sections refer us to yet other previous papers that ultimately lack the technical details we seek.

As researchers, we are by nature dogs with a bone; that is, we don’t give up easily. How many of you have followed up this failed exercise in technique archeology with an email or phone call to that trustworthy, stalwart colleague, the “Corresponding Author”? But oftentimes, that effort also leads nowhere, as we are simply told, “It’s all in the paper’s Materials and Methods.” This response is the harbinger of my worst suspicions: namely, that the authors don’t want our cookies to taste as good as theirs, or heaven forbid, that they can no longer get their cookies to taste good at all. In other words, are researchers hesitant to share their methodological details for fear that their research efforts will be scooped, or that their results will be revealed as irreproducible?

Regardless of the motivations, the consequences of this methodological deterioration are worrisome. We will be less capable of building on the key findings of others, thus slowing the course of science, or worse, faulty, even fraudulent data may go unnoticed.

How has this happened? How has the requirement to share every iota of technical detail with the research community given way to “as described elsewhere,” elsewhere being Never Never Land? First, I blame the journal editorial boards. The push in recent years to shorten papers and limit the number of figures has never been clearly rationalized to the research public. Perhaps the current push for brevity is simply the bottom line: publishers trying to milk more profit by using fewer sheets of paper. But in reality, papers are not shorter nowadays; they are longer, except that much of the work is tucked into the mystical realm of the Supplement.

What started as a purgatory for data no one ever really wanted to scrutinize has evolved into a quasi-regulated cloud, containing techniques, hard data, and extra references—that is, everything that won’t fit into the shrinking overhead compartment that is the main paper. If dogged in our intent and devout in our trust of the system, it is here we will find the sought-after “truths” of the paper. But my experience has been that many supplemental sections, even in top journals, do not receive the same level of scrutiny given to the paper proper. This goes well beyond typos and bad grammar; there is an onus on peer-reviewers to check that sufficient technical details are reported, or if the details are described elsewhere, that the authors are made to demonstrate this in a sidebar. (This policy would be similar to the requirement by many journals to offer proof of evidence cited by “personal communication.”)

Next, I blame the authors. Failure to transmit clear, detailed methods is not just a sin against the scientific community; it’s also indicative of poor internal mentoring skills. This is because subsequent waves of students in these labs rely on these same details in order to recapitulate and build on internal findings. There is a smell-test benchmark that all corresponding authors should use when assessing a paper’s methods section: is there sufficient detail for another researcher to redo our experiments and hopefully confirm our results? This brings up the worst consequence of our increasingly lax eye for technical detail: faster publication of findings in higher-impact journals will mean squat if the data will not stand the test of time, and in our field, this means experimental reproducibility.

My frustration is surely not an indictment of the whole field, because there are many, many outstanding papers that do provide highly detailed technical descriptions of the work performed, even if published as Supplements. The recent proliferation of smaller journals devoted solely to publishing novel methods and technologies is a great advance in this regard. Nonetheless, the general erosion of standards for publishing technical details, both at the level of editorial boards and authors, will not only leave us with bland cookies, it will eventually leave us with tasteless crumbs.

Irwin H. Gelman is the John and Santa Palisano Chair of Cancer Genetics at Roswell Park Cancer Institute, Professor of Oncology and Chair of the Cell and Molecular Biology Graduate Program of the State University of New York at Buffalo.

Comments

This problem actually has two edges. First is the loss of sharing "every iota of technical detail" of data collection, i.e., the problem of missing methodological detail, as discussed in this article. The second, more pernicious edge is the web-based sharing of megadata, with everything from RCTs to gene expression studies posted somewhere on the Web. Today, just about anybody can download just about any kind of massive dataset posted by the megalabs who generate it, and then post their own interpretation on their own blog. This is magically supposed to improve science, presumably by democratizing the analysis. This is a kind of community-based participatory research, which I call "COmmunity-Based Web-Enhanced Blog Science" (COBWEB Science, for short). We saw the same problem in epidemiology, where professionalization led to a cadre of data analysis specialists increasingly remote from the actual processes of data collection. Such a process necessarily leads to ignorance of the strengths, weaknesses, and assumptions implicit in such data. After all, data is not "given" but created. The result is a proliferation of studies of data of unknown quality and unknown characteristics, by persons who actually know nothing about the data. John E

When writing the materials and methods of a paper, we usually include everything necessary for co-authors to understand what has been done; then, slowly, along the editing and correction process, that part of the paper is really skinned. It is not new, and it seems to be part of editorial policy (like demanding cut gels instead of complete ones, etc.). In fact, there was a fashion when nobody cared about what was written in that section. Personally, I certainly wonder how a reviewer can decide whether he is reading good science or junk without a complete methods section!

I understand the frustration, but the lack of experimental details, or limited details, in published papers is not a new phenomenon. We frequently forget that science has a human face and that the reasons for publishing scientific results are many. Truly scientific papers whose goal is to allow other experimentalists to achieve the same results in their laboratories are rare and cherished. There are also papers reporting novel technologies of great commercial value with limited data. And, of course, miraculously accepted papers with sketchy details, authored by sloppy researchers seeking recognition.

Fortunately, the scientific community has created ways of dealing with the situation. Corresponding with the authors is the primary solution, leading to open discussions, collaborations, or corrections. There are a number of sources with checked lab protocols and recipes. In the end, all the hard-to-repeat papers or protocols are discarded and forgotten.

It seems one important factor has not been well accounted for here. Any amount of scientific research, and the methods associated with it, can potentially result in very large commercial gain for those involved or the organisations by whom they are employed (note Lech W. Dudycz's comment). I have seen papers published in very reputable journals that lack materials and methods sections to a degree that would, a few years ago, have rendered them unpublishable. As usual, the bottom line is money.

“Or, we are made to enter an Alice in Wonderland rabbit hole where we are referred to a trail of previous papers, whose Materials and Methods sections refer us to yet other previous papers that ultimately lack the technical details we seek.”

And all, of course, from the authors of the paper we originally consulted.

Another point is that, many times, the "Corresponding Author" was never at the bench and could not explain to another how to reproduce the data. In the section where we describe the "Author's contributions," we should also describe each author's participation in each particular experiment and include an e-mail for contact.

I personally think the science we do these days is no more than business. I am from chemistry, and I can safely say that the synthetic procedures we follow from older journals (1930–1980) are reproducible, but correct me if you can repeat the synthetic procedures from 1990 onwards. Now every reaction's details seem magical to me, and I am never sure whether I can repeat the procedure or not.

I think these days PIs, in pursuit of instant fame, will go to any length to get their papers published. It is a rat race over who comes first and how many papers we publish. I think we will continue to struggle to repeat any procedure in the papers and will have to live with this modern science.

People who are skeptical of science are, by and large, only skeptical of whether the "science" is really science, and not just something someone has assembled to try to buttress something they believe or want to believe.

The problem is compounded by the fact that journals and the publication world in general have never developed a clearly articulated policy on whether one can plagiarize one's own methods from a previous paper. According to some of the more draconian interpretations of plagiarism (and indeed repeated by editors I have consulted), any published repetition of even a sentence constitutes plagiarism. Thus a clearly written methods section from a previous paper is either referenced or shortened/modified/revised to avoid becoming the subject of a plagiarism accusation. It is time for journals to clearly state that authors can describe their methods verbatim from one of their own previous publications.
