What are the best practices?
From The Embassy of Good Science
10 Things for Curating Reproducible and FAIR Research
Thing 1: Completeness
Thing 2: Organization
Thing 3: Economy
Thing 4: Transparency
Thing 5: Documentation
Thing 6: Access
Thing 7: Provenance
Thing 8: Metadata
Thing 9: Automation
Thing 10: Review
Reproducing any part of an article or book (figure, table, etc.) requires permission from the copyright holder. The copyright holder is usually the publisher, since authors typically transfer copyright to the publisher upon submission of their manuscripts.
COPE recommends retracting articles that contain fabricated data and reporting the case to the appropriate institutional misconduct body. Universities and research centres should treat this issue seriously by reprimanding or dismissing researchers involved in fabrication.
According to COPE, in matters relating to the addition or omission of an author, a request should be sent to the publishing journal. The journal will ask all authors for permission, with corrections made following their consent.
According to COPE, this is a clear case of guest or gift authorship. A researcher should not be added to the author list of an article if they do not fulfil the requirements for authorship. If an editor discovers an instance of gift authorship, COPE recommends removing the suspected gift author from the authorship list. It is strongly recommended that article submissions include a statement of contributions agreed by all contributors.
This practice is discouraged by COPE. Authors should resist such requests as much as possible.
On submission of an article, authors are usually asked to state whether their submission is under review elsewhere. Duplicate submission is a form of research misconduct. If a journal does not review a manuscript within a reasonable time, however, authors may withdraw it; the editor-in-chief should be informed beforehand, and the corresponding author should keep a record of all correspondence. Authors should never submit a manuscript to another journal before it has been properly withdrawn or a rejection has been issued.
This is a case of redundant publication. Authors are usually asked to provide a signed statement that the manuscript they are submitting has not been published elsewhere. Any violation of this statement is considered misconduct and can result in retraction. If a translation of a previously published article is to be submitted to another journal, prior permission should be obtained from the publisher of the first article, and the second manuscript should include an appropriate reference to the first publication.
Regulatory compliance
Data archiving and management
- Maintaining privacy
- Confidentiality and anonymity
- Protecting vulnerable groups
- Data sharing
Best practices include using deepfake therapy only under strict clinical supervision and when other therapeutic options have been exhausted. Therapists should obtain informed consent from patients and ensure transparency about the simulated nature of the interaction. Sensitive data used to create deepfakes must be securely stored and handled responsibly. Mental health professionals should also evaluate the patient’s psychological readiness and monitor potential risks such as re-traumatization or emotional overattachment to the simulated character.
Best practices include regularly auditing AI hiring systems to detect and reduce bias in recruitment decisions. Organizations should evaluate training data, monitor algorithm performance, and apply fairness metrics to ensure equal opportunities. Transparency tools such as system cards and system maps help explain how AI makes decisions. Human oversight is essential so recruiters can review algorithmic results, provide feedback to candidates, and ensure that hiring decisions remain accountable, fair, and ethically responsible.
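As a concrete illustration (not from the source), one widely used fairness metric in such audits is the demographic parity difference: the gap in positive-decision rates between demographic groups. The sketch below assumes hypothetical 0/1 hiring decisions and group labels; real audits would use richer data and additional metrics.

```python
# Minimal sketch of one fairness metric (demographic parity difference)
# for auditing automated hiring decisions. Data below is hypothetical.

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates across groups.

    decisions: list of 0/1 outcomes (1 = candidate advanced)
    groups:    list of group labels, one per decision
    """
    rates = []
    for label in sorted(set(groups)):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates.append(sum(outcomes) / len(outcomes))
    return max(rates) - min(rates)

# Hypothetical audit sample: group "a" advances at 0.75,
# group "b" at 0.25, so the disparity is 0.5.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value near 0 suggests similar selection rates across groups; larger values flag the system for closer human review, alongside other metrics such as equalized odds.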
Best practices include maintaining strong human oversight when AI systems flag potential hate speech. Police officers should critically evaluate AI-identified content to ensure it meets legal standards for investigation. Training in algorithmic literacy and bias awareness helps officers understand AI limitations and linguistic ambiguities. Transparent documentation of decisions influenced by AI is also important for accountability. Continuous monitoring, feedback loops, and model improvements can help reduce bias and improve the reliability of AI-assisted detection systems.
Best practices include ensuring strong data protection and clear privacy controls for personal and family information. Users should be able to review, edit, or delete stored data and understand how the assistant uses their information. Transparent communication about system functions and limitations is essential. Families should also establish shared rules on how the assistant handles personal preferences and information. Human awareness and responsible use help maintain healthy boundaries and prevent overdependence on AI assistants.
Best practices include ensuring that elderly care robots prioritize user safety, privacy, and transparency. Seniors and their families should clearly understand how personal and health data are collected and used. AI systems should support, rather than replace, human caregiving and social interaction. Designers should create user-friendly interfaces suited to older adults, while continuous monitoring and human oversight ensure that the technology functions reliably and ethically within healthcare and home environments.
Best practices include ensuring transparency about how AI systems monitor workers and how the data is used. Workers should receive training to understand how AI tools evaluate movements and safety compliance. Human oversight is necessary when interpreting AI-generated alerts to avoid unfair disciplinary actions. Organizations should also ensure proportional use of monitoring technologies, protect worker privacy, and allow employees to challenge incorrect AI assessments or false positives.
Best practices include maintaining strong human-in-the-loop oversight, where clinicians supervise and validate AI outputs rather than relying on them blindly. Medical professionals should develop algorithmic literacy to understand AI confidence scores, heatmaps, and potential errors. AI recommendations should be documented, including when they are accepted or overridden, to ensure accountability.
Systems should be regularly monitored and validated to detect failures or biases. Transparency with patients and colleagues about AI-supported decisions is also essential. Finally, integrating AI with training tools such as surgical simulators can help clinicians evaluate AI recommendations, improve their judgment, and safely adapt workflows to AI-assisted healthcare environments.
Best practices include maintaining strong human oversight when using AI-assisted safety analysis tools. Safety engineers should critically evaluate AI-generated suggestions rather than relying on them automatically. Developing algorithmic literacy helps engineers understand AI limitations, scenario coverage, and potential failure modes. Parallel manual analyses can be used to validate AI-supported results and ensure reliability. Proper documentation of why AI recommendations are accepted, modified, or rejected is also essential. Continuous monitoring of AI tool performance ensures compliance with safety standards and preserves the integrity of safety assessments.
This workbook discusses how to put the principle of AI Fairness into practice across the AI project workflow through Bias Self-Assessment and Bias Risk Management, as well as through the documentation of metric-based fairness criteria in a Fairness Position Statement.
