Improving reproducibility in science

What is this about?

The iRISE (Improving Reproducibility in Science in Europe) project is a Horizon Europe initiative aimed at strengthening the reproducibility and integrity of research across disciplines. It brings together stakeholders from social sciences, natural sciences, and biomedical sciences to identify, prioritise, and implement practices and practical tools that enhance research quality. Grounded in the principles of Responsible Research and Innovation (RRI), iRISE works to ensure that interventions to improve reproducibility are evidence-based, fit for purpose, and tailored to the needs of different stakeholder communities.

Within iRISE, our work focuses on stakeholder engagement and the prioritisation of interventions to improve reproducibility. Building on a comprehensive scoping review (WP2), we use a structured Delphi consultation process to reach cross-disciplinary consensus on which practices and tools should be adopted directly and which require adaptation before implementation. The Delphi method involves iterative rounds of expert consultation, enabling participants to review anonymised group feedback and refine their responses until consensus is achieved. This approach ensures transparency, inclusivity, and community alignment in setting priorities.

Here on The Embassy of Good Science, we share the outputs of the Delphi process as part of a living, community-informed knowledge base. The data presented comprise two priority lists: one of reproducibility measures and one of interventions. By making these results openly available, we aim to support ongoing dialogue, encourage community contribution, and facilitate the uptake, adaptation, and continuous improvement of practices that strengthen research reproducibility across disciplines and stakeholder groups.

For whom is this important?

Academic administrators and research leaders; Academic institutions; Academics and researchers; All stakeholders engaged in science policy; All stakeholders in research; All stakeholders in science communication; All stakeholders in science funding; All stakeholders in science implementation; All stakeholders in science oversight; Anyone interested in the broader culture of research integrity; Collaborating researchers; Computer scientists; Research institutes; Students, researchers, and research ethics reviewers; Research institutions; Researchers in collaborative projects; Research leaders; Data managers and research infrastructure teams; Authors; ENRIO member organisations; EU policymakers and legislators; Early-career researchers (PhD students, post-docs); EU policymakers, regulators, and standardisation bodies; Early-career researchers and students; Editors; Educational institutions and research organisations; Ethics and integrity bodies; Funders; Funding bodies and policymakers; Funding agencies and EU research programs; Funding institutions; Funding agencies and research integrity officers; Government science and innovation policy makers; Health and life scientists; Health researchers and clinical research staff; Institutional Publishing Service Providers (IPSPs); Journal editors and academic publishers; Journal editors and scholarly societies; Journal publishers; Open science coordinators and research managers; Open access advocates and researchers; Peer reviewers; PhD students, trainees, and visiting researchers; Policy makers and funding agencies; Policy makers and science governance bodies; Policy makers and regulatory authorities; Prospective research integrity trainers; Publishers, journals, and editors; Research administrators; Research Funding Organisations (RFOs); Research Integrity Officers; Research Performing Organisations (RPOs); Research partners in the Global South and North; Researchers, research ethics communities and research integrity offices; Science advisory bodies; Scientists, researchers and academic institutions; Senior researchers and supervisors; Universities and research institutions responsible for integrity frameworks.

What are the best practices?

The prioritised reproducibility measures, each shown as (mean score) - percentage agreement, are:

- Methodological quality (9.03) - 89.04 %
- Reporting quality (9.00) - 87.67 %
- Code and data availability and re-use (8.66) - 84.93 %
- Computational reproducibility (8.52) - 76.71 %
- Transparency of research plan (8.47) - 73.97 %
- Reproducible workflow practices (8.34) - 73.97 %
- Trial registration (8.21) - 78.08 %
- Materials availability and re-use (8.16) - 71.23 %

The prioritised interventions, in the same format, are:

- Data management training (8.52) - 75.34 %
- Data quality checks/feedback (8.33) - 72.60 %
- Statistical training (8.33) - 71.23 %
- Data sharing policy/guideline (8.22) - 72.60 %
- Protocol/trial registration (8.15) - 73.97 %
- Reproducible code/analysis training (8.11) - 71.23 %
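
Each entry above pairs a mean panel score with a percentage figure. As a minimal sketch of how such figures could be derived from raw panel scores, assuming the percentage is the share of panellists scoring the item 8–10 (an interpretation, not confirmed by the source) and using hypothetical scores:

from statistics import mean

# Hypothetical 1-10 Likert scores from ten panellists for one item
scores = [9, 8, 10, 7, 9, 8, 6, 9, 8, 10]

mean_score = mean(scores)                                    # 8.40
agreement = sum(8 <= s <= 10 for s in scores) / len(scores)  # share scoring 8-10

print(f"({mean_score:.2f}) - {agreement:.2%}")               # (8.40) - 80.00%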

In Detail

Round 1

In the first round, panellists scored each reproducibility measure and intervention on a 10-point Likert scale (1–10). Items scored 8–10 by at least 70% of panellists were added to the priority list; items scored 1–3 by at least 70% were discarded. Panellists could also comment on their scores. Reproducibility measures and interventions scored 4–7 with 70% agreement were carried forward for reassessment in the second round.
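
A minimal sketch of these Round 1 decision rules, under the same assumption that agreement is the share of panellists scoring within the stated band (the fallback branch for items reaching no 70% threshold is our addition; the source only notes that such items were revisited in later rounds):

def classify_round1(scores):
    """Apply the Round 1 rules to one item's panel scores (1-10 Likert)."""
    n = len(scores)
    def share(lo, hi):
        return sum(lo <= s <= hi for s in scores) / n
    if share(8, 10) >= 0.70:
        return "added to priority list"
    if share(1, 3) >= 0.70:
        return "discarded"
    if share(4, 7) >= 0.70:
        return "reassessed in Round 2"
    return "no consensus"  # assumption: revisited in a later round

print(classify_round1([9, 8, 10, 7, 9, 8, 6, 9, 8, 10]))  # added to priority list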

Round 2

The panellists reviewed the reproducibility measures and interventions that had scored 4–7 with 70% agreement. The rankings and anonymised comments from the first round were shared to help participants reassess their scores.

Final Round

The final round consisted of an online meeting with eight selected panellists (two researchers, two editors, two publishers, one funder, and one policymaker). During this session, the participants revisited the highest-scoring interventions that had not reached consensus in previous rounds. The panel was then tasked with reviewing the ranking order of the two prioritised lists. After the final round, there were no changes to the prioritised lists.