Replication: Can new metric crack science’s credibility problem?

A newly proposed, citation-based metric assesses the veracity of scientific claims by evaluating the outcomes of subsequent replication attempts. Introduced in an August bioRxiv preprint by researchers at the for-profit firm Verum Analytics, the R-factor was developed in response to long-standing concerns about the lack of reproducibility in biomedicine and the social sciences. Yet the measure, which its creators also plan to apply to physics literature, has already triggered concerns among researchers for its relatively simple approach to solving a complex problem.
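As described in the preprint, the R-factor is, roughly, the fraction of published attempts to test a claim that confirm it. A minimal sketch of that arithmetic (the function name and inputs here are illustrative, not Verum's actual implementation, which also involves classifying citing papers as confirming, refuting, or merely mentioning a claim):

```python
def r_factor(confirming: int, refuting: int) -> float:
    """Fraction of published tests of a claim that confirm it.

    Assumes citing studies have already been sorted into those that
    confirm the claim and those that refute it; papers that merely
    mention the claim are excluded from the count.
    """
    tested = confirming + refuting
    if tested == 0:
        raise ValueError("no published tests of the claim")
    return confirming / tested

# A claim confirmed by 7 of 10 published tests:
print(r_factor(7, 3))  # 0.7
```

The hard part, of course, is not this division but deciding which citations count as genuine replication attempts, which is where much of the criticism below is aimed.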

…

Although it takes on a critical flaw in modern science, the new metric has drawn plenty of criticism. Pseudonymous science blogger Neuroskeptic, who was one of the first to report on R-factors, writes that the metric fails to account for the fact that positive results are submitted and selected for publication more often than negative ones.

Another caveat is the tool’s simplicity, says Adam Russell, an anthropologist and program manager at the Defense Advanced Research Projects Agency who has called for solutions to improve the credibility of social and behavioral sciences research. “History suggests that simple metrics are unlikely to address the multifaceted problems that have given rise to these crises of reproducibility, in part because simple metrics are easier to game,” Russell says. Verum’s Rife, however, says R-factors are less susceptible to gaming than existing metrics are.

But the social and behavioral sciences are mostly PC bunk anyway. True, a few brave souls battle the tsunami of grant-enabled, grantor-pleasing PR that too often becomes policy. But no metric aimed at measuring scientific value can address that.

Question: Do fields like origin of life, evolution, and cosmology abound in looniness because the concept of replication is inherently difficult for them?

See also: The “Grand Challenge” for evolutionary psychology is that it is bunk

One Response to Replication: Can new metric crack science’s credibility problem?

Social “science” won’t reform for the same reason that media won’t reform and economics won’t reform.

In all three fields the customers are getting what they want. No need to mess with success.

The only thing that needs reforming is our attitude about these fields. We’ve bought the myths that economists are supposed to analyze the economy, and journalists are supposed to report the news, and social “scientists” are supposed to analyze human behavior.

Those myths have nothing to do with the actual missions of the fields.

Each field is assigned the task of creating unsolvable problems which require more budget and workforce to “solve” by creating more unsolvable problems which require more budget and workforce to “solve” by…