From The Embassy of Good Science
 

Latest revision as of 15:02, 7 November 2025

What is reproducibility?

Instructions for: Trainee, Trainer
Goal
This module provides an in-depth introduction to reproducibility through the work of TIER2 and iRise. The purpose and concept of reproducibility are introduced, working definitions are provided for important concepts relating to reproducibility and replicability, and the futures of reproducibility are illustrated from the perspective of different stakeholders.
Duration (hours): 2

1. Enabling and Redoing

The research team within TIER2 present an analytical framework that supports epistemic diversity by examining the potential relevance and degree of feasibility of reproducibility for different modes of knowledge production. The team find current general typologies with the same aim wanting: these are top-down, enumerative lists of kinds of reproducibility, organised according to vaguely defined fields, disciplines, methods, or so-called research types. Such typologies cannot characterise different kinds of research and their varying research contexts at the granularity needed to understand how epistemic diversity and reproducibility relate, nor do they resolve the prevailing conceptual confusion surrounding reproducibility and replication.

To clarify matters, the team propose redoing as a common term for the acts of reproducing and replicating, and enabling for the acts of making something reproducible and replicable. They suggest mapping practices and epistemic functions to characterise which parts of a study should be redone or enabled, and for what intended purposes. They propose knowledge production modes (KPMs) as an organising construct to situate redoing and enabling within knowledge production's epistemic, social, and contextual conditions. Epistemologies determine epistemic norms and criteria. Social conditions influence how research is organised, practised, rewarded, reported, and discussed. Contextual conditions place boundaries and restrictions on research (for example, due to subject matter, environment, and the availability of resources and technologies); these are the 'local' conditions.

The framework clarifies the potential relevance of redoing and the degree of feasibility of redoing and enabling for a specific knowledge production mode. Relevance comprises research goals and epistemology. Epistemology is the basic assumption behind a knowledge production mode: it determines how knowledge claims are produced and justified, the systems of justification, the criteria for good and trustworthy research, and thus the epistemic norms. Different ways of knowing have different epistemic norms, practices, and criteria. Feasibility comprises the nature and complexity of the subject under investigation, the necessary investment for redoing or enabling, and the degree of theoretical and methodological uncertainty associated with the research. The framework works bottom-up: knowledge production modes are not defined a priori but derived from the analysis. It therefore supports epistemic diversity by being open and non-hierarchical and by working at a level of granularity fine enough to discern the diverse conditions of knowledge production. The team propose a framework that can clarify, not a cookbook. Enabling in some form always seems relevant in empirical work, irrespective of ways of knowing; the same is not true for redoing.


For the full paper, see: MetaArXiv Preprints, Knowledge Production Modes: The Relevance and Feasibility of Reproducibility (https://osf.io/preprints/metaarxiv/ujnd9_v1)


Reference

Ulpts, S., & Schneider, J. W. (2023, September 25). Knowledge Production Modes: The Relevance and Feasibility of Reproducibility. https://doi.org/10.31222/osf.io/ujnd9

2. Defining Reproducibility

Defining reproducibility and replicability has been a challenge in the research community, as different interpretations and even contradictory definitions are often used, and their use and understanding differ between fields of research. However, the European-funded iRise consortium developed a reproducibility glossary by critically reviewing the existing scientific literature. The glossary provides working definitions for the terms reproducibility, replicability and replication, as well as for related concepts.


References

Voelkl, B., Heyard, R., Fanelli, D., Wever, K., Held, L., Würbel, H., Zellers, S., & Maniadis, Z. (2024). Glossary of common terminology resulting from scoping reviews. https://osf.io/ewybt.

3. Futures of Reproducibility

Improving reproducibility is a multifaceted challenge requiring both behavioural and cultural change. The adoption of reproducibility practices has been sparked and embraced by the Open Science movement. However, many researchers are not fully aware of the implications of reproducibility or of how Open Science and reproducibility are connected and intertwined (Haven et al., 2022). To increase awareness and change research practices, several steps should be taken (Nosek, 2019). First, the infrastructure for the desired behaviour should be provided, to make it possible. Second, the user interface and experience of the infrastructure should be improved, to make the behaviour easy. Third, communities of practice should be fostered, to make the behaviour visible and thus increasingly normative. Fourth, incentives to enact the behaviour should be provided, to make it rewarding. Last, policies should be enacted, to make the behaviour required (Nosek, 2019). To further this work, we sought to explore the future of reproducibility for different stakeholders, asking what the next steps for reproducibility should be and how diverse epistemic contexts can adopt reproducibility in different forms. In this deliverable, we aim to add nuance to the reproducibility debate through a flexible investigation of diverse epistemic contexts (researchers in the field of machine learning and researchers working with qualitative methods), exploring the future of reproducibility through the lens of diverse research stakeholders: researchers, funders, and publishers.

In this context, we look to the future of reproducibility by exploring the preferred scenarios of multiple stakeholders, including how these scenarios can be realized. We reflect on the steps necessary for adherence to reproducibility-enabling practices and on what different epistemic contexts need to make reproducibility a priority. Lastly, we reflect on the new problems we may face when aiming to improve reproducibility. We believe exploring the possible futures of reproducibility is essential to discover the next steps different members of the scientific community should take to realize the preferred future, and the actions needed to avoid drifting towards dystopian futures.

We aim to highlight the essential role of institutions, funders and publishers in making reproducibility a priority by recognizing, rewarding, evaluating and monitoring it. Ultimately, we hope to steer and move forward the debate on reproducibility in the research community by addressing a set of core research questions about how key stakeholders in the academic community envision the way matters of reproducibility should be addressed in the future. More specifically, the study asks representatives of research funders, scholarly publishers, and researchers from diverse disciplinary backgrounds:

1. What are the preferred futures of reproducibility?  

2. What are the enablers and barriers on the way to the preferred future, or to reproducibility more generally?

Below we present the results of our study.


For more information, please refer to the full paper: MetaArXiv Preprints, How to get there from here? Barriers and enablers on the road towards reproducibility in research (https://osf.io/preprints/metaarxiv/gx9jq_v1).


4. Open Science interventions to improve reproducibility and replicability of research

Various interventions, especially those related to open science, have been proposed to improve the reproducibility and replicability of scientific research. To assess whether and which interventions have been formally tested for their effectiveness, the research team from TIER2 and OSIRIS conducted a scoping review of the literature on interventions to improve reproducibility. They systematically searched Medline, Embase, Web of Science, PsycINFO, Scopus and ERIC on August 18, 2023. Grey literature was requested from experts in the fields of reproducibility and open science. Any study empirically evaluating the effectiveness of interventions aimed at improving the reproducibility or replicability of scientific methods and findings was included. An intervention could be any action taken by individual researchers or by scientific institutions (e.g., research institutes, publishers and funders). We summarized the retrieved evidence narratively and in an evidence gap map. Of the 104 distinct studies included, 15 directly measured the effect of an intervention on reproducibility or replicability, while the remaining research questions addressed a proxy outcome that might be expected to increase reproducibility or replicability, such as data sharing, methods transparency or preregistration. Thirty research questions within the included studies were non-comparative and 27 were comparative but cross-sectional, precluding any causal inference. A possible limitation of our review is the search and selection strategy, which was carried out by a large team of researchers from different disciplines and with different levels of expertise. Despite studies investigating a range of interventions and addressing various outcomes, our findings indicate that the evidence base for interventions to improve the reproducibility of research remains remarkably limited in many respects.

The full preprint is available here: MetaArXiv Preprints, Open Science interventions to improve reproducibility and replicability of research: a scoping review (https://osf.io/preprints/metaarxiv/a8rmu_v1)


Dudda, L., Kormann, E., Kozula, M., DeVito, N. J., Klebel, T., Dewi, A. P. M., … Leeflang, M. (2024, June 17). Open Science interventions to improve reproducibility and replicability of research: a scoping review preprint. https://doi.org/10.31222/osf.io/a8rmu

5. TIER2 Reproducibility Training Modules

Designed for researchers, publishers, and funders, these specialised courses deliver both theoretical knowledge and practical tools to enhance reproducible research practices.

Our comprehensive training program explores the core principles, methodologies, and discipline-specific challenges of research reproducibility, providing actionable strategies that participants can implement in their work.

Currently, four specialised modules are available online:

  • Reproducibility primer for publishers  
  • Reproducibility primer for funders
  • Reproducibility primer for qualitative research  
  • Reproducibility primer for AI-driven research  

Additional modules covering epistemic diversity studies, tools and best practices, and more will be released soon to complete our training program.


Getting started is simple:

  1. Create a free account on the OpenPlato platform (https://openplato.eu/login/index.php)
  2. Enrol in either the comprehensive TIER2 Reproducibility Training Course (https://openplato.eu/course/view.php?id=543) or individual modules, based on your interests
  3. Learn at your own pace with the interactive content
