Data driven hypotheses without disclosure (‘HARKing’)
What is this about?
HARKing, i.e. Hypothesizing After the Results are Known, or post hoc testing, as it is more widely known, is familiar to many researchers. In scientific methodology or statistics classes in graduate school, many of us were told that the practice is flawed, but few of us have ever heard the rationale behind that judgement. HARKing is considered a detrimental research practice.[1] This thematic page addresses the logic behind HARKing and aims to shed some light on its nature and validity.
- ↑ Bouter LM, Tijdink J, Axelsen N, Martinson BC, Ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integr Peer Rev. 2016;1:17.
Why is this important?
HARKing can increase the chance of falsely rejecting the null hypothesis, i.e. of making a type I error.[1] Whenever a statistical analysis is performed, theories or hypotheses are formalized as mathematical models.[2] A model is built from a main outcome measure and the factors assumed to influence it.[3] Those factors are usually derived either from published research or from data gathered in experiments or surveys. Once a model with satisfactory explanatory or predictive properties has been built, it needs to be externally validated, i.e. tested on a new, similar dataset.[4] This is necessary because the model may be so well suited to the data on which it was built that it becomes too specific and loses the ability to generalize to similar datasets.[5] In more technical terms, some of the explanatory or predictive factors in the model may correlate with the real causes of the effect only in our dataset, and not in other, similar datasets.
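To make the inflation of type I error concrete, the simulation below is a minimal sketch in Python using NumPy and SciPy, with purely illustrative parameter values: the outcome and twenty candidate predictors are drawn as independent noise, and the analysis then, HARKing-style, reports only the strongest association as if it had been the single pre-specified hypothesis.

```python
# Minimal simulation (NumPy + SciPy; all parameter values are illustrative)
# of how picking the "best" hypothesis after seeing the data inflates the
# type I error rate. Outcome and predictors are independent noise, so every
# null hypothesis is true and any "finding" is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_obs, n_predictors, alpha = 2000, 50, 20, 0.05
false_positives = 0

for _ in range(n_sims):
    y = rng.standard_normal(n_obs)                  # outcome: pure noise
    X = rng.standard_normal((n_obs, n_predictors))  # candidate predictors: pure noise
    # HARKing step: test every predictor against the outcome, then report
    # only the smallest p-value as if it were the one pre-specified hypothesis.
    p_min = min(stats.pearsonr(X[:, j], y)[1] for j in range(n_predictors))
    false_positives += p_min < alpha

print(f"Nominal type I error rate: {alpha:.2f}")
print(f"Observed rate after HARKing: {false_positives / n_sims:.2f}")  # roughly 0.64
```

With 20 independent tests at α = 0.05, the probability that at least one comes out "significant" by chance alone is 1 − 0.95^20 ≈ 0.64, which is roughly what the simulation reports.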
Replication of studies is the way in which HARKing can be recognized,[6] but only after the damage has been done. Pre-registration of studies, with clearly stated hypotheses and a planned statistical analysis, is how we can hope to prevent HARKing.
- ↑ Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998;2(3):196-217.
- ↑ Introduction to Process Modeling. In: NIST/SEMATECH e-Handbook of Statistical Methods [Internet]. NIST; 2012 [cited 2019 Jun 27]. Available from: https://www.itl.nist.gov/div898/handbook/index.htm.
- ↑ Introduction to Process Modeling. In: NIST/SEMATECH e-Handbook of Statistical Methods [Internet]. NIST; 2012 [cited 2019 Jun 27]. Available from: https://www.itl.nist.gov/div898/handbook/index.htm.
- ↑ Burnham KP, Anderson DR. Inference and Principle of Parsimony. In: Burnham KP, Anderson DR, editors. Model Selection and Multimodel Inference. Berlin: Springer; 2010. p. 29-37.
- ↑ Burnham KP, Anderson DR. Inference and Principle of Parsimony. In: Burnham KP, Anderson DR, editors. Model Selection and Multimodel Inference. Berlin: Springer; 2010. p. 29-37.
- ↑ Van Bavel JJ, Mende-Siedlecki P, Brady WJ, Reinero DA. Contextual sensitivity in scientific reproducibility. Proc Natl Acad Sci U S A. 2016;113(23):6454-9.
For whom is this important?
What are the best practices?
The most prominent examples in practice are diagnostic studies and hypothesis-generating studies. When developing new diagnostic models, authors tend to combine multiple prognostic factors and then test such models using ROC analysis on the whole sample, without validating the model on a separate sample; sometimes the need for model validation is not even disclosed in the discussion section. Hypothesis-generating studies are usually performed on “big data” from databases such as The Cancer Genome Atlas. The primary goal of such studies is to build models based on large datasets and to “get a feel for the data”, or, in more technical language, to do exploratory data analysis; such studies, too, sometimes fail to disclose the need for model validation (i.e. confirmatory data analysis). Sometimes, after ANOVA, a correction for multiple comparisons (also known as post hoc testing) is applied; these post hoc tests use a more stringent statistical significance criterion intended to partly substitute for model validation. Whether a more stringent significance criterion can replace model validation, however, is a highly debated topic in statistics.
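The following sketch, in Python with scikit-learn and entirely hypothetical data, illustrates why split-sample validation matters: a “diagnostic model” built from noise factors achieves a seemingly useful ROC AUC when evaluated on the very sample it was fitted to, but falls back to chance on a held-out validation sample.

```python
# Minimal sketch (scikit-learn; data and model choices are hypothetical) of
# why a diagnostic model must be validated on a separate sample. The
# "prognostic factors" are pure noise, yet the ROC AUC computed on the same
# sample the model was fitted to looks far better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_factors = 120, 30
X = rng.standard_normal((n_patients, n_factors))  # prognostic factors: noise
y = rng.integers(0, 2, n_patients)                # disease status: unrelated to X

# Split once up front: develop the model on one half, validate on the other.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

auc_apparent = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"AUC on the development sample itself:  {auc_apparent:.2f}")   # optimistic
print(f"AUC on the held-out validation sample: {auc_validated:.2f}")  # near 0.5
```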
Another case, often confused with HARKing, is that of planned multiple comparisons after ANOVA. Here the fact that the comparisons are planned means that the model was built before the experiment; the comparisons themselves are simply carried out once the data have been gathered, as sketched below.[1]
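A minimal sketch of such planned comparisons, in Python with SciPy and simulated group data (all group means and sizes are made up for illustration): the two dose-versus-control contrasts are fixed before data collection, and a Bonferroni adjustment keeps the family-wise error rate at the nominal 5%.

```python
# Minimal sketch (SciPy; group means and sizes are simulated for
# illustration) of planned comparisons after a one-way ANOVA. The contrasts
# were specified before data collection, so they are simply tested once the
# data are in, with a Bonferroni adjustment to the significance threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, 30)
dose_low = rng.normal(10.5, 2.0, 30)
dose_high = rng.normal(12.0, 2.0, 30)

# Omnibus test across all groups first.
f_stat, p_omnibus = stats.f_oneway(control, dose_low, dose_high)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pre-specified (planned) comparisons, not chosen after looking at the data.
planned = {
    "low dose vs control": (dose_low, control),
    "high dose vs control": (dose_high, control),
}
alpha_adjusted = 0.05 / len(planned)  # Bonferroni correction
for name, (a, b) in planned.items():
    t_stat, p = stats.ttest_ind(a, b)
    verdict = "significant" if p < alpha_adjusted else "not significant"
    print(f"{name}: t = {t_stat:.2f}, p = {p:.4f} ({verdict} at adjusted alpha)")
```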
- ↑ Althouse AD. Adjust for Multiple Comparisons? It's Not That Simple. Ann Thorac Surg. 2016;101(5):1644-5.
The Embassy Editorial Team, Iris Lechner, Natalie Evans, and Benjamin Benzon contributed to this theme. Latest contribution: Mar 26, 2021.