Reproducibility in Scientific Publishing

Dec 17, 2019

The reproducibility problem is turning the growing body of scientific literature into a house of cards: time to pour a new foundation.

For a published study to become a meaningful part of the scientific literature, it must be possible for other researchers to replicate the experiments and (more or less exactly) reproduce the results. Across many fields, a lack of attention to detail and statistics when designing and reporting research methodologies, and when analyzing results, is generating a growing body of literature that is cited and built upon even though the work in those papers cannot itself be repeated. It is also crippling the translation of certain research into the promised real-world solutions.

Awareness of this issue, termed the reproducibility crisis, dates back almost ten years. The problem devalues the work done, wastes the time and effort of researchers trying to replicate results reported by others, and encourages a publish-and-forget culture in which an individual study is worth more as a piece of paper than as a genuine avenue of scientific communication.

Far from standing on the shoulders of giants, it would seem we are constructing a house of cards.

The Wrong Dose: A Case Study of Reporting Practices in Radiobiology

To demonstrate the scale of the problem, Dr. Yannick Poirier and colleagues at the University of Maryland School of Medicine reviewed 1,758 radiobiology articles published between 1997 and 2017 across 469 different journals. For each paper, they asked a simple question: does this study contain sufficient methodological detail for a reader to understand, interpret, and replicate the results?

Their review, published a few months back, found that less than half of the peer-reviewed, published papers reporting novel results met this basic criterion.

“Only 43.5% of papers reported the minimum information … to understand and interpret studies, let alone the technical details required to truly replicate them,” says Poirier. “Perhaps more importantly, highly-cited articles published in high impact factor journals tended to report the least in their experimental methodology, despite these journals spearheading the movement to bring light on the reproducibility crisis.”

“Only 43.5% of papers reported the minimum information to understand and interpret studies.”

Dr. Yannick Poirier, University of Maryland School of Medicine

Dr. Poirier points out that these results are specific to radiobiology, but they suggest a worrying trend for scientific communication more broadly. “We attribute this to strict word limits and an emphasis placed on results rather than methods in these [high impact factor] journals,” he comments, “as well as lack of qualified reviewers specializing in the manuscript’s field”.

Irreproducibility: A Spreading Culture?

The reproducibility problem has certainly raised the most eyebrows in the life sciences, but it deserves much broader discussion. A positive-result bias and sloppy reporting practices aren’t isolated to a single scientific field: they reflect a broader, more dangerous culture.

A survey of over 1,500 researchers, conducted by Nature in 2016, revealed that more than 70% of them had “tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments”. A third of respondents also reported having no procedure in place to handle these situations, though the past five years have seen a marked increase in institutions establishing such practices.

The pressure to publish and selective reporting were cited as the major reasons behind the issue, compounded by professional competition for funding and better positions.

More than 70% of the Nature survey’s respondents had “tried and failed to reproduce another scientist’s experiments”.

Monya Baker, Nature 533, 452–454 (2016)

More telling is that those surveyed worked across disciplines spanning chemistry, biology, physics and engineering, medicine, and Earth and environmental sciences, showing how widespread the issue is and, incidentally, reflecting the broad reach of materials science.

As scientific endeavours become increasingly problem-focused rather than discipline-focused, the issue of proper reporting and limited reproducibility has the potential to grow worse. As a collective, materials scientists necessarily tend to become generalists over the course of their careers. A great deal of materials science therefore sees researchers reporting outside their original discipline, where they are less likely to have formal training and can more easily skip over details that lie outside their comfort zone or do not seem immediately relevant to how they frame a study’s outcome.

Call to Action

The implications of inaction were described in a 2015 article, ‘Reproducibility in Science’, in which authors C. Glenn Begley and John P.A. Ioannidis warn that “addressing this concern is central to ensuring ongoing confidence of and support from the public”, not to mention ensuring the most efficient and effective use of limited research funds. Calculating the total cost of work that is not robust must take into account the time and resources expended on the initial study, as well as those consumed by every subsequent study that builds on it, by researchers around the world. Depending on the scientific field, it must also include the costs of scaled-up tests based on that work, along with the ethical problems and risks of tests performed on live subjects, such as animal trials or clinical studies.
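
To make that accounting concrete, here is a toy back-of-the-envelope sketch in Python. Every figure is a hypothetical placeholder of my own, not a number from Begley and Ioannidis; the point is only that the downstream terms quickly dwarf the cost of the original study.

```python
# Toy cost model: all figures are hypothetical placeholders (USD).
initial_study = 200_000        # direct cost of the original, non-robust study
replication_attempts = 5       # labs that later try (and fail) to reproduce it
cost_per_attempt = 150_000     # average cost of one replication attempt
followup_studies = 3           # studies that build on the flawed result
cost_per_followup = 250_000    # average cost of one follow-up study
scaleup_cost = 1_000_000       # e.g. an animal trial based on the result

total = (initial_study
         + replication_attempts * cost_per_attempt
         + followup_studies * cost_per_followup
         + scaleup_cost)

print(f"Total cost traceable to one non-robust study: ${total:,}")
# Total cost traceable to one non-robust study: $2,700,000
```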

To prevent reproducibility issues from spreading, especially as scientists increasingly slip between disciplines, greater awareness of the causes, greater attention to detail, and institutional support would certainly help. Begley and Ioannidis assert that “this is a multifaceted, multistakeholder problem. No single party is solely responsible, and no single solution will suffice”. They suggest, rather positively, that it is an opportunity “to introduce, demand, and reward a level of rigor and robustness in designing, conducting, reporting, interpreting, validating, and disseminating research”.

“This is a multifaceted, multistakeholder problem. No single party is solely responsible, and no single solution will suffice.”

‘Reproducibility in Science’, C. Glenn Begley and John P.A. Ioannidis, 2015

A number of solutions have been proposed: an emphasis on replication, sample sizes, and statistics in both education and funding practices; raising standards for the presentation of methods submitted for publication (for which online repositories and protocols journals already exist); stricter adherence to the established guidelines and practices of a given field; and greater awareness of positive-result bias, including making all acquired data openly available, during both the research and publishing stages.
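
On the first of those points, sample-size planning is one concrete habit that education and funding practices could reward. The sketch below is my own illustration, not a method from any of the works cited here: the standard normal-approximation formula for sizing a two-sample comparison, using only Python’s standard library. The effect size, significance level, and power shown are conventional defaults, not recommendations.

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def samples_per_group(effect_size: float, alpha: float = 0.05,
                      power: float = 0.80) -> int:
    """Participants needed per group for a two-sided, two-sample test
    to detect a standardized effect of size `effect_size` (Cohen's d),
    using the normal approximation."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# Even a "medium" effect (d = 0.5) needs ~63 subjects per group at the
# conventional 5% significance level and 80% power.
print(samples_per_group(effect_size=0.5))  # 63
```

Running studies far below such numbers, and then reporting only the positive outcomes, is exactly the combination the solutions above are meant to counter.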

I therefore encourage active participation in this debate at all levels, and reflection on the following unfinished sentences:

As a researcher, I can take action by…

As a teacher, I can take action by…

As a reviewer, I can take action by…

As an editor, I can take action by…

As a research institute leader, I can support these individual actions by…

As a funder of scientific research, I can support these individual actions by…
