A practical solution to the reproducibility crisis.
One thing we all vaguely remember from science class is the scientific method, that noble set of rules for testing hypotheses that ensures results and conclusions are useful to other scientists. If you forgot the specifics, it boils down to one thing: show your work. Transparency about methodological decisions, specificity about variables and conditions, and full access to raw data all allow scientists to trust the reliability of research because, in principle, the findings can be replicated. Over the last decade, however, the scientific world has realized that the current method just isn’t working.
The so-called reproducibility crisis kicked off in 2005, when Stanford epidemiologist John Ioannidis argued, in his now-famous paper “Why Most Published Research Findings Are False,” that a majority of medical studies are distorted by researchers’ bias toward unlikely hypotheses, publishers’ bias toward novel claims, and similar pressures. Ensuing investigations have shown that 47 of 53 “landmark” cancer studies don’t hold up to scrutiny, that two-thirds of findings published in three leading psychology journals can’t be replicated, and that American scientists spend $28 billion each year on irreproducible biomedical research.
While these revelations are dramatic, the full extent of the problem is unknown. Repeating research for the sake of verification is a fundamental part of the scientific process, but in practice, scientists rarely take this step. Limited budgets, competition for professional advancement, and fragile egos all discourage researchers from rerunning old studies instead of conducting new ones. At the same time, since journals favor the sensational and strange, scientists who do test old findings struggle to publish their results.
The Preclinical Reproducibility and Robustness channel, which launched last Thursday, aims to fill the void. The online-only journal, based in London, is the first dedicated exclusively to the replication and testing of past experiments. “Because science depends on observations that are verifiable, science is at its core self-correcting,” co-founders Bruce Alberts and Alexander Kamb write in the journal’s introductory editorial. “It is our hope that … a vigorous new publishing culture can be established to enhance the crucial self-correcting feature of science.”
In recent years, several other initiatives have begun to tackle the problem, including Stanford’s Meta-Research Innovation Center, co-founded by Ioannidis and dedicated to studying the processes, policies, and methodologies of scientific research. But until now, there was no institution promoting independent replications and no outlet in which to publish them. Since it went live, the channel has published three replication studies, including a debunking of the claim that high-fat diets improve metabolism in mice.