Journalistic Data Analysis

Get the opportunity to publish (bias)!

Even though it is unintentional, scientists are misled by their own biases. I previously discussed one of the biggest, confirmation bias, but that is far from the only bias people have. In this blog post, I tell you something about opportunistic bias and publication bias.

Jamie DeCoster: “The opportunistic bias occurs when the reported relations are stronger or otherwise more supportive of the researcher’s theories than they would be without the exploratory process.”

Opportunistic bias occurs when researchers try out multiple analyses before deciding which one to report. This selection process makes it more likely to find significant results and large effect sizes, because you can pick the analysis that best supports your prediction or theory. According to DeCoster and Sparks, several procedures can shift your results towards significance: choosing the most favorable way of transforming variables, measuring a large collection of variables and reporting only the desirable results, and testing the same hypothesis with different analyses, different methods, or in different subgroups of participants. Another possibility is scrutinizing undesirable findings more closely than desirable ones (e.g. double-checking only the unexpected finding). Michèle Nuijten also mentioned several such practices, along with the rates at which professors admit to them. Below you’ll find the three most frequent ones:

Failing to report all of the study’s dependent measures (63.4%);

Deciding whether to collect more data after looking at the results and their significance (55.9%);

Selectively reporting the studies that worked (45.8%).

As a consequence of opportunistic bias, type I errors are easily made. P-values can no longer be interpreted as intended, because the actual probability of finding a significant result is much higher than the nominal one. Hence, opportunistic bias can produce a significant effect even when no effect exists. These wrongly drawn conclusions enter the general view in the literature (as people and researchers read biased articles) and systematically distort meta-analyses.
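To make this concrete, here is a minimal simulation sketch (not from the post; the numbers N = 40, K = 5 and the studies count are arbitrary illustrations). It compares the false-positive rate of one pre-planned test on pure noise with the rate you get when you measure five outcomes and report whichever one "works":

```python
import math
import random

random.seed(42)

N = 40          # participants per group
K = 5           # outcome variables measured per study
STUDIES = 2000  # simulated studies, all with zero true effect
CRIT = 1.99     # approximate two-sided critical t, alpha = .05

def significant():
    """One two-sample t-test on pure noise; True if |t| exceeds CRIT."""
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    ma, mb = sum(a) / N, sum(b) / N
    va = sum((x - ma) ** 2 for x in a) / (N - 1)
    vb = sum((x - mb) ** 2 for x in b) / (N - 1)
    t = (ma - mb) / math.sqrt(va / N + vb / N)
    return abs(t) > CRIT

# One pre-planned test per study vs. "best of K outcomes" per study.
single = sum(significant() for _ in range(STUDIES)) / STUDIES
any_of_k = sum(any(significant() for _ in range(K))
               for _ in range(STUDIES)) / STUDIES

print(f"false-positive rate, one pre-planned test: {single:.3f}")    # about .05
print(f"false-positive rate, best of {K} outcomes: {any_of_k:.3f}")  # about .23
```

With five independent chances at significance, the real error rate is roughly 1 − 0.95⁵ ≈ 23%, even though every single test nominally uses α = .05.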

Why do we want these significant results so badly? Why do we transform data in a preferable way or run different analyses to get them? One of the causes of opportunistic bias is closely related to publication bias.

Michèle Nuijten: “Publication bias is putting the non-significant results in the closet and publish the significant results in the journals.”

Publication bias distorts the whole picture in the literature: when only articles with significant results are published, researchers are triggered to submit (only) significant results, which in turn stimulates opportunistic bias. Our view of the world changes, and the (scientific) knowledge we have is not as objective as it should be. In the field of medicine, for instance, this could be dangerous. Publication bias is noticeable not only in the scientific world but also in journalism. For journalists, it is important to create remarkable and sensational stories so that people read their blog or newspaper and their bosses keep paying them. In this way, incorrect and biased information is encouraged (even more) in our society, resulting in a misguided worldview.
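The distortion of the literature can also be sketched in a few lines (again an illustration of my own, with arbitrary numbers, not something from the cited researchers): simulate many underpowered studies of the same small true effect, "publish" only the significant ones, and compare the average effect in the resulting "literature" with the truth:

```python
import math
import random

random.seed(7)

N = 30          # participants per group (underpowered for a small effect)
TRUE_D = 0.2    # true standardized effect size
STUDIES = 5000  # simulated studies of the same effect

all_effects, published = [], []
for _ in range(STUDIES):
    a = [random.gauss(TRUE_D, 1) for _ in range(N)]
    b = [random.gauss(0.0, 1) for _ in range(N)]
    ma, mb = sum(a) / N, sum(b) / N
    va = sum((x - ma) ** 2 for x in a) / (N - 1)
    vb = sum((x - mb) ** 2 for x in b) / (N - 1)
    d = (ma - mb) / math.sqrt((va + vb) / 2)  # observed Cohen's d
    t = d * math.sqrt(N / 2)                  # equivalent two-sample t
    all_effects.append(d)
    if abs(t) > 2.0:                          # "significant" -> it gets published
        published.append(d)

mean_all = sum(all_effects) / len(all_effects)
mean_pub = sum(published) / len(published)
print(f"true effect                    : {TRUE_D}")
print(f"mean effect across all studies : {mean_all:.2f}")
print(f"mean effect in the 'literature': {mean_pub:.2f}")
```

The published-only average comes out roughly three times larger than the true effect: a meta-analysis of this "literature" would inherit the inflation, which is exactly why tests for publication bias belong in the good-signs list below.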

The problem is clear: the motivation of the opportunistic bias is closely related to the publication bias. What can we do about it?

[1] Researchers must produce reliable articles with as little publication and opportunistic bias as possible. They should also preregister their studies, for instance on the OSF website (a preregistration cannot be changed once it is posted). When referring to other articles or previous theories, they should be cautious. To judge whether an article is adequate, researchers can look for bad and good signs.

Bad signs:

Statistical errors;

Many p-values just below .05;

Post hoc explanations of covariates;

Removing outliers without doing a sensitivity check;

Vague and inaccurate language in the method section;

Degrees of freedom that don’t match the sample size.

Good signs:

High power or large sample size;

Preregistered hypotheses, method, and analysis plan;

Openness (share data, analyses, material online);

Replication with high power and preregistration;

Meta-analysis of different studies (test for publication bias).

[2] What could journals do? They could adopt more rigorous and thorough reporting standards (e.g. reporting both the intended and the actual sample size, describing all variables, and stating which analyses were pre-specified and which were actually run). In addition, journals could require increased disclosure (e.g. researchers have to keep a log of all performed analyses and procedures). In my opinion, journals should also publish non-significant results, because a non-significant result is still a result. They can do this by accepting or rejecting research proposals on the basis of their theory, described method, and proposed analysis. Once a proposal is accepted, the journal would agree to publish it even if the results turn out non-significant. I do think this latter option should come with additional requirements to uphold the quality of research papers, though.

[3] What can journalists do? They should be cautious when referring to an article. In my opinion, many journalists are not: they are sensational rather than subtle. An example can be found in the news report “Even Casually Smoking Marijuana Can Change Your Brain, Study Says” in the Washington Post. The study it refers to only showed that the brains of casual pot users differed from those of nonusers; it did not show that these differences were caused by marijuana use (indeed, the study’s design could not establish causality at all). In this sense, journalists should be critical and more skeptical, and not just write whatever is exciting.


5 thoughts on “Get the opportunity to publish (bias)!”

I agree that they should be more critical and skeptical, but I don’t think this will change unless every article is double-checked by an external company. If an external company double-checked everything before publication, that might be a solution to the problems, but it would also cost a lot of money, and I don’t know where that money should come from.

I fully agree with your opinion that journals should also publish non-significant results and that these results are also results. I do get, though, that journals live by publishing articles that people really want to read and that they have to make choices sometimes. Just philosophizing here… but maybe there should be some sort of “editor’s note” section in a journal for studies producing non-significant results. That way journals can still pick their top “stories”, but with all context included.

Hey Genya! Your analysis was really good since you included a lot of perspectives. I was really interested in the “solution” you provided associated with journals. Yes, this COULD change things. However, I am fully convinced that the whole story begins and ends with the journalist. He is the one who can really make the change.

I think journalists are being pressured by their bosses to write interesting and sensational articles. In this way, the truth might be less important. Being more critical AND writing an interesting article shouldn’t be impossible right?
Your idea of hiring a company to fact-check articles is a pretty good one. I just think the costs would be very high, and most researchers might not be inclined to send their papers to such a company. Keep up the good work Genya!

Interesting blog! I totally agree with you that journals should also publish non-significant studies. In the part where you mention what journalists can do, you say that journalists focus more on writing sensational stories than on being subtle. I agree: a news article based on a scientific study usually focuses on the outstanding results instead of saying anything about how the research was conducted or pointing out the study’s limitations. But I think both researchers and journalists are under pressure to produce either significant or outstanding results. Your possible solution of having journals accept articles based on a research proposal instead of the outcomes is a very good option, but I doubt whether journals would embrace it. Hopefully, opportunistic and publication bias will become less important and, that way, the pressure on journalists and researchers will be (partly) relieved.