Publication bias (positive results)

From The Embassy of Good Science
Revision as of 10:42, 27 March 2021 by Benjamin.Benzon


What is this about?

This article presents the definition, sources, and consequences of publication bias in favor of positive results, as well as the misconceptions about data interpretation that give rise to it.

Why is this important?

Publication bias is defined as a conscious or unconscious decision to publish or distribute a manuscript based on the study results.[1] Reviews of the scientific literature indicate that papers with positive results are about three times more likely to be published than those with negative results. The reasons for this practice are multiple. The most common is that scientists simply do not submit studies with negative findings to journals. Unfortunately, negative findings are usually defined as those that do not reach the conventional threshold of statistical significance, p < 0.05. This issue was recently addressed in the American Statistical Association’s statement on the p value, which aims to steer future research away from the p value as the principal indicator of a result’s importance.[2] Part of the responsibility also lies with editors and with scientometric criteria, which by their nature favor studies with positive results.

Two dire consequences of publication bias are wasted resources and a distorting effect on meta-analyses. The former results from scientists repeating experiments that others have already conducted but never published (because the results were not positive); the latter means that meta-analyses become oversaturated with positive results, skewing their conclusions.[3][4][5]

  1. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):1-193.
  2. Yaddanapudi LN. The American Statistical Association statement on P-values explained. J Anaesthesiol Clin Pharmacol. 2016;32(4):421-423. doi: 10.4103/0970-9185.194772.
  3. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337(8746):867-72.
  4. Dickersin K, Min YI. NIH clinical trials and publication bias. Online J Curr Clin Trials. 1993(50).
  5. Taragin MI. Learning from negative findings. Isr J Health Policy Res. 2019;8(1):019-0309.
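
How selective publication skews pooled estimates can be illustrated with a small simulation; this is a hypothetical sketch, and the true effect size, sample size, and number of studies are illustrative assumptions, not values taken from the cited studies. Many studies of the same modest true effect are simulated, but only those reaching p < 0.05 are "published":

```python
import random
import statistics
from math import erf, sqrt

random.seed(42)

TRUE_EFFECT = 0.2   # modest true effect (illustrative assumption)
N_PER_STUDY = 30    # sample size per study (illustrative assumption)

def run_study():
    """Simulate one study; return (estimated effect, two-sided p value)."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / sqrt(N_PER_STUDY)
    z = mean / se
    # two-sided p value from the normal approximation
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return mean, p

all_effects, published_effects = [], []
for _ in range(5000):
    effect, p = run_study()
    all_effects.append(effect)
    if p < 0.05:                      # only "positive" results get published
        published_effects.append(effect)

print(f"mean estimate, all studies:       {statistics.fmean(all_effects):.2f}")
print(f"mean estimate, published studies: {statistics.fmean(published_effects):.2f}")
```

The average across all simulated studies recovers the true effect, while the average across only the "published" (significant) studies substantially overestimates it. A meta-analysis drawing on the published pool alone would inherit exactly the oversaturation with positive results described above.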

For whom is this important?

What are the best practices?

Everybody who has ever performed research has probably experienced the disappointment of obtaining a p value greater than 0.05. Instead of critically examining the data and results in the light of present knowledge and trying to assess their impact, the hypothesis is usually abandoned and another set of experiments initiated. There are some efforts to change the existing practice. A new course in editorial policy, under which manuscripts are judged only on methodological rigor and not on the direction of their results, was set by PLOS some 20 years ago. Another praiseworthy initiative is the Journal of Negative Results, which likewise treats methodological rigor as the only criterion for publication.