Outcome reporting bias

From The Embassy of Good Science
Revision as of 16:45, 27 May 2020 by Marc.VanHoof (talk | contribs)


What is this about?

Outcome reporting bias refers to the selective or distorted reporting of results, or the biased interpretation of available information. This may involve omitting some results or choosing particular statistical methods to achieve a desirable, often pre-determined, outcome.

Why is this important?

Failing to report all aspects of research leads to the publication of incomplete or inaccurate results. It wastes valuable resources, such as peer reviewers’ and editors’ time, research funds, and collaborators’ efforts. Biased reporting of results undermines the integrity of science and, if undetected, can mislead future studies.

Outcome reporting bias affects the entire research ecosystem: research subjects and collaborators, research administrators and funders, and other researchers who may rely on the study’s results. It also erodes society’s trust in science.

For whom is this important?

What are the best practices?

The use of appropriate statistical analyses and the full publication of results (whether they support or refute the study’s hypothesis) are among the best practices. Preregistering research is another way of communicating research plans and improving the credibility of results[1].

In the case of clinical trials, the ICMJE advises research groups to register their trials, to link the analysis of each trial to the same registration, to ensure that there is no discrepancy between the methodology recorded in the registry and what is published in journals, and to state the registration number at the end of the abstract. Since July 2018, groups have also been asked to include a data sharing plan and data sharing statement (ICMJE 2018[2]).

Furthermore, to encourage the impartial and transparent use of statistical methods, the ICMJE asks groups to:

“Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to judge its appropriateness for the study and to verify the reported results. When possible, quantify findings and present them with appropriate indicators of measurement error or uncertainty (such as confidence intervals). Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size and precision of estimates. References for the design of the study and statistical methods should be to standard works when possible (with pages stated). Define statistical terms, abbreviations, and most symbols. Specify the statistical software package(s) and versions used. Distinguish pre-specified from exploratory analyses, including subgroup analyses” (ICMJE 2018, p.16-17[3]).
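As a minimal illustration of the quoted guidance — reporting an effect estimate with a confidence interval rather than relying on a p-value alone — the following Python sketch computes a mean difference between two groups and its normal-approximation 95% confidence interval. The data and function name are hypothetical, used only to show the reporting style; a real analysis would use pre-specified methods appropriate to the study design.

```python
import math
import statistics

def mean_diff_ci(group_a, group_b, z=1.96):
    """Mean difference between two groups with a normal-approximation 95% CI.

    Reporting the difference together with its CI conveys effect size and
    precision, which a bare p-value does not.
    """
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    # Standard error of the difference (unequal variances, sample variances)
    se = math.sqrt(statistics.variance(group_a) / len(group_a)
                   + statistics.variance(group_b) / len(group_b))
    return diff, (diff - z * se, diff + z * se)

# Hypothetical outcome measurements for a treated and a control group
treated = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
control = [4.2, 4.5, 4.1, 4.4, 4.0, 4.6]

diff, (lo, hi) = mean_diff_ci(treated, control)
print(f"mean difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Stated this way, a reader can judge both the magnitude of the effect and the uncertainty around it, in line with the ICMJE recommendation above.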

Other information

Virtues & Values
Good Practices & Misconduct