Intellectual conflicts of interest
What is this about?
The term ‘intellectual conflicts of interest’ refers to the potential for a researcher to be attached to a specific point of view.[1] For instance, a researcher could be convinced that intervention A is more effective than intervention B in treating a certain disease, even without the evidence needed to back this view. Intellectual conflicts of interest typically arise from the researcher’s prior research, education, and institutional or other personal affiliations.[1][2] For instance, a researcher who has received more training in intervention A than in intervention B may be more likely to favor intervention A.
1. Akl EA, El-Hachem P, Abou-Haidar H, Neumann I, Schünemann HJ, Guyatt GH. Considering intellectual, in addition to financial, conflicts of interest proved important in a clinical practice guideline: a descriptive study. Journal of Clinical Epidemiology. 2014;67(11):1222-8.
2. Bero L. What is in a name? Nonfinancial influences on the outcomes of systematic reviews and guidelines. Journal of Clinical Epidemiology. 2014;67(11):1239-41.
Why is this important?
There is concern that intellectual conflicts of interest may cause bias in research. The idea is that if researchers prefer one intervention over another, they will intentionally or unintentionally introduce bias into the study design, data analysis, or data interpretation, so that the results favor the preferred intervention. However, the evidence on this is limited. In psychotherapy research, researcher allegiance (i.e. intellectual conflicts of interest) has been shown to be associated with study results: therapies toward which researchers have a stronger allegiance tend to appear more effective in randomized clinical trials.[1][2][3] It is not clear, though, whether this association is causal. Researchers may have allegiance towards more effective interventions because they are effective, rather than the interventions appearing more effective because the researchers have an allegiance to them.[4] Some argue that the association between researcher allegiance and study results might be causal, since it persists even when there is evidence that two interventions are equally effective; nevertheless, the evidence on how researcher allegiance and intellectual conflicts of interest affect study results remains inconclusive.[4]
At the same time, some argue that intellectual conflicts of interest are unavoidable.[5] Researchers will have a certain educational background, prior research experience, and personal and professional affiliations, which will naturally make them more likely to favor one research outcome over another.[5] Not only is this unavoidable, but it is also what drives science forward: researchers would not spend years investigating a topic they did not feel passionate about. It is precisely this passion and intellectual interest that inspires researchers to delve into a line of research.
What conclusions can we make?
If intellectual conflicts of interest lead researchers to introduce systematic biases into their research (e.g. selection bias, reporting bias), then they can be said to be problematic.[6] More evidence is needed to establish whether this is the case. However, if intellectual conflicts of interest do not lead to systematic bias, it is hard to argue that they are problematic, even if they do affect study results.[6] For example, if researchers who prefer intervention A are more likely to obtain results that favor intervention A even in the absence of systematic bias, it could be that:
1) they are better trained in intervention A than other researchers, or
2) they know more about intervention A than other researchers, or
3) they carry out intervention A more diligently than other researchers.
It would be difficult to argue that possibilities 1, 2 or 3 are problematic. Innovation in science is driven by passionate researchers who develop and propose new hypotheses, which they often strongly believe in. If these passionate researchers, who know more about their hypotheses than anyone else, are unable to show that the hypotheses are true, then it is unlikely that anyone else will, which suggests that the hypotheses are not correct. If, on the other hand, the innovative researchers are able to provide evidence for their hypotheses, the hypotheses remain open to closer scrutiny by the rest of the scientific community.[6] Since science is a self-correcting process, it may not matter that a researcher with an intellectual conflict of interest is more likely to obtain study results in line with their allegiances: other researchers with different allegiances will be able to scrutinize and test the results, thereby correcting for the intellectual conflict of interest.
1. Luborsky L, Diguer L, Seligman DA, Rosenthal R, Krause ED, Johnson S, Halperin G, Bishop M, Berman JS, Schweizer E. The researcher's own therapy allegiances: A “wild card” in comparisons of treatment efficacy. Clinical Psychology: Science and Practice. 1999;6(1):95-106.
2. Miller S, Wampold B, Varhely K. Direct comparisons of treatment modalities for youth disorders: A meta-analysis. Psychotherapy Research. 2008;18(1):5-14.
3. Munder T, Gerger H, Trelle S, Barth J. Testing the allegiance bias hypothesis: a meta-analysis. Psychotherapy Research. 2011;21(6):670-84.
4. Leykin Y, DeRubeis RJ. Allegiance in psychotherapy outcome research: Separating association from bias. Clinical Psychology: Science and Practice. 2009;16(1):54-65.
5. Bero L. What is in a name? Nonfinancial influences on the outcomes of systematic reviews and guidelines. Journal of Clinical Epidemiology. 2014;67(11):1239-41.
6. Ioannidis JP. Most psychotherapies do not really work, but those that might work should be assessed in biased studies. Epidemiology and Psychiatric Sciences. 2016;25(5):436-8.
For whom is this important?