Text (Instruction Step Text)

From The Embassy of Good Science
Describe the actions the user should take to experience the material (including preparation and follow-up, if any). Write in an active way.


Close the exercise by underlining the importance of good communication in dealing with research integrity issues and dilemmas. Continue with the next fragment or the next part of the workshop.
Each subgroup presents the results of the last round to the other groups. The trainer thanks the participants for their work and for sharing personal examples, recaps the lessons learnt, and might refer to the objectives of the exercise that were presented at the start. End with an evaluation. Depending on the agreements made prior to the training, the trainer might take a photo of each sheet and share these photos with the participants so each can look back at the results.
Ask the group to reflect on the process, and to evaluate whether the learning objectives were met. Foster a brief dialogue on what might have been learned as a group. In this step the facilitator may ask participants questions such as:
- Was it easy or difficult to identify the relevant principles and virtues in the chosen dilemma?
- Did this exercise help you with identifying and connecting to formally defined principles (e.g. from the European Code of Conduct for Research Integrity)?
- Did most of the players agree or disagree with the final choice?
- What were the main points of contention?
- Why did people disagree (e.g. differences in experience, training, background, values, norms…)?
- What were the other options? Was any alternative option proposed?
- Did anybody change her/his mind as a result of the discussion?
- Why would you NOT follow the morally ideal course of action?
- What is needed to act morally in your work setting?
- What were the most convincing arguments used in the discussion?
- On which areas do you feel there is insufficient consensus?
- How can you best address future dilemmas in your daily work?
- How can shared values and principles be fostered?
This final part of the manual consists of two instructions, with the links listed below: [https://public.3.basecamp.com/p/R5e8zxXRHwd27Mz5PPfooByh Certification] [https://public.3.basecamp.com/p/vmLSq94iGyaNsbrWKFgFbCiN Recognition and networking]
Invite participants to reflect on the process as a whole: what lessons do they take away from this session? Try to draw specific conclusions by asking participants the following questions:
- Was it easy or difficult to relate the virtues and norms to each other? Why?
- Did putting yourself in the case presenter’s shoes broaden your perspective on the virtues and, consequently, on the norms and behaviours?
- Did the virtues and norms/behaviours identified by the other participants help you look at the virtues differently or more broadly? Do you think this will influence the way you think about the research integrity dilemmas you will encounter in practice?
Ask the group to reflect on the process as a whole and to evaluate whether the learning objectives of this exercise were met. Guide participants through a brief dialogue on what they learned from the exercise. At this stage the trainer may ask questions such as:
- Was it easy to identify the relevant principles and virtues for the chosen dilemma?
- Did this exercise help you identify formally defined principles (e.g. the European Code of Conduct for Research Integrity) and connect them to the cases?
- Did most of the players agree with the final decision?
- What were the main points of contention?
- Why did participants disagree on some points (e.g. differences in experience, training, background, values, norms, etc.)?
- What were the other options? Was any alternative option proposed?
- Did any participant change their mind as a result of the discussion?
- Why would you NOT do what is morally ideal?
- What is needed to achieve what is morally good in your work setting?
- What were the most convincing arguments used in the discussion?
- On which points do you think there was insufficient consensus?
- How can you best deal with such dilemmas in your future working life?
- How can more widely shared values and principles be achieved?
Finally, invite the participants to reflect on the entire process of the past exercise: what is the take-home message they draw from it? Try to capture some conclusions or insights by asking the participants:
- Was it easy to relate the values/virtues to the norms? Was it difficult? Why?
- Did trying to put yourself in the position of the person who experienced the example situation broaden your view of values/virtues and, consequently, of norms or behaviours?
- Did the values/virtues, norms or behaviours named by others help you think differently about the topic, for example by looking at values/virtues differently or more broadly? How will today’s experience from this exercise change your thinking about dilemmas in everyday research?
Invite participants to think about the entire process: what is the take-home message of this session for them? Try to draw conclusions by asking participants:
- Was it easy or difficult to relate the virtues and norms to each other? Why?
- Did putting yourself in the case presenter’s shoes broaden the way you looked at virtues and, consequently, norms and behaviors?
- Did the virtues and norms/behaviors identified by others help you to look at virtues differently or more broadly? Do you think that will influence your thinking on research integrity dilemmas in practice?
[[File:Man overlooking view.png|center|frameless|600x600px]] Kuhn suggested that all scientific knowledge is ‘situated’ knowledge and cannot represent a ‘view from nowhere’. We all view the world from within a particular set of social and epistemic practices. According to Kuhn, scientists working within different paradigms are effectively working in different worlds. But how do we know which paradigm we are working in?
Annaratone L, De Palma G, Bonizzi G, Sapino A, Botti G, Berrino E, Mannelli C, Arcella P, Di Martino S, Steffan A, Daidone MG, Canzonieri V, Parodi B, Paradiso AV, Barberis M, Marchiò C; Alleanza Contro il Cancro (ACC) Pathology and Biobanking Working Group (2021) Basic principles of biobanking: from biological samples to precision medicine for patients. Virchows Arch 479(2):233-246. doi: 10.1007/s00428-021-03151-0. PMID: 34255145; PMCID: PMC8275637. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8275637/
BBMRI-ERIC Common Service ELSI https://www.bbmri-eric.eu/services/common-service-elsi/
European Commission (2018) Data protection in the EU https://commission.europa.eu/law/law-topic/data-protection/data-protection-eu_en
Council of Europe (2016) Recommendation CM/Rec(2016)6 of the Committee of Ministers to member States on research on biological materials of human origin https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=090000168064e8ff
Harati, M.D., Williams, R.R., Movassaghi, M., Hojat, A., Lucey, G.M., Yong, W.H. (2019) An Introduction to Starting a Biobank. In: Yong, W. (ed.) Biobanking. Methods in Molecular Biology, vol 1897. Humana Press, New York, NY. https://doi.org/10.1007/978-1-4939-8935-5_2
Healthtalk.org What is biobanking and why is it important? https://healthtalk.org/experiences/biobanking/what-is-biobanking-and-why-is-it-important/
ISBER (International Society for Biological and Environmental Repositories) (2024) Best practices for repositories https://www.isber.org/page/BPR
NIH (2024) Genomic Data Sharing (GDS) Policy https://sharing.nih.gov/genomic-data-sharing-policy
OECD Guidelines for Human Biobanks and Genetic Research Databases (HBGRDs) https://www.oecd.org/health/biotech/guidelines-for-human-biobanks-and-genetic-research-databases.htm
World Medical Association (2016) Declaration of Taipei on ethical considerations regarding health databases and biobanks https://www.wma.net/policies-post/wma-declaration-of-taipei-on-ethical-considerations-regarding-health-databases-and-biobanks/
[[File:M10..png|center|frameless|600x600px]] In terms of ethics dumping, the previously mentioned TRUST global code of conduct for equitable research partnerships offers a simple, jargon-free [https://www.globalcodeofconduct.org/the-code/ ethics code] comprising 23 articles based on the moral values of Fairness, Respect, Care and Honesty, to help researchers ensure that international research is equitable and carried out without ‘ethics dumping’ or ‘helicopter research’. In terms of AI ethics, we recommend consulting the Ethics of AI in Healthcare: A Checklist for Research Ethics Committees, developed by irecs colleagues Alexei Grinbaum and Etienne Aucouturier at CEA (French Alternative Energies and Atomic Energy Commission), as well as the materials in the [https://classroom.eneri.eu/node/238 irecs AI and ethics module]. Chapter 5 of the [https://www.who.int/publications/i/item/9789240029200 World Health Organization’s Ethics and Governance of Artificial Intelligence for Health] outlines six key ethical principles for AI research in healthcare: protecting patient autonomy, promoting human wellbeing, ensuring transparency and explainability, fostering accountability, promoting inclusiveness and equity, and supporting AI that is both responsive and sustainable. These principles serve as essential reminders for researchers and policymakers to prioritise ethical considerations in the development and deployment of AI technologies in healthcare settings. Another significant issue in the development of AI technologies across all fields is the potential for bias and inaccuracies in algorithms, which in the healthcare domain can result in incorrect diagnoses and treatment recommendations. These risks disproportionately affect vulnerable populations, raising concerns about inclusivity and equity.
The [https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1 EU’s Ethics Guidelines for Trustworthy AI] emphasise that AI systems must be lawful, ethical, and robust throughout their life cycle. This includes compliance with applicable laws, adherence to ethical principles, and ensuring technical and social robustness. Importantly, these guidelines call for mechanisms to prevent algorithmic bias and protect privacy. Unethical applications involving AI are defined as those that risk violating physical or mental integrity, create addiction, or risk damaging social processes and public institutions (e.g. through social scoring or by contributing to misinformation). Projects must adhere to essential requirements, which include (but are not restricted to) the following:
* People must be made aware that they are interacting with an AI system, and of its abilities, limitations, risks and benefits.
* Mechanisms for human oversight, transparency and auditability must be built into the AI system.
* AI systems must be designed to avoid bias in input data and algorithmic design.
* Compliance with data protection and privacy principles must be demonstrated.
Our hypothetical proposal is not seeking funding from Horizon Europe; however, the [https://www.bbmri-eric.eu/wp-content/uploads/The-Ethics-Appraisal-Scheme-_BBMRI-webinar-september-2021_version-for-dessimination.pdf EU ethics appraisal scheme (pp. 74-80)] provides relevant guidance for several concerns in this case study. It highlights the importance of transparency, requiring that individuals interacting with AI systems be fully informed about the system’s capabilities, limitations, risks, and benefits. It also underscores the necessity of building human oversight, transparency, and auditability into AI systems, ensuring that AI development remains accountable and aligned with societal values.
Regulatory oversight has often lagged behind technological advancements, creating additional legal and ethical challenges. The WHO and EU guidelines, among others, stress the need for AI systems to comply with data protection and privacy principles, such as data minimisation, which ensures that only the necessary data is collected and used. This is crucial in building trust and safeguarding against the misuse of sensitive healthcare information. Remember that different guidelines and regulations apply to research projects depending on the requirements of the institutions, organisations and geographical locations involved. The further resources section lists sources on ethics dumping, on some of the ethics committees in Africa, and on the currently most relevant EU and international guidelines and standards related to AI in health and healthcare, but you may need to explore further afield to locate those that apply to different situations.
[[File:M10.png|center|frameless|600x600px]] While many research ethics codes and guidelines have something to say about the inclusion of vulnerable people in research, in general they promote the same two messages: first, that most vulnerabilities are associated with voluntariness, and second, that certain groups should be afforded more protection than others. When vulnerability is mentioned in research ethics codes, it is primarily in relation to the ability to provide informed consent. This can be associated with innate characteristics (for instance, young children or adults with severe cognitive dysfunctions). It can also be associated with circumstances that might impact upon the voluntariness of consent (for instance, with prisoners or employees). Some codes also mention risk-based vulnerabilities, whereby vulnerability stems from being at an increased risk of mental or physical harm (for instance, pregnant women). '''Exercise Feedback''' The Australian National Statement (2023, p12) provides an extensive list of the sorts of harm to which research participants might be vulnerable, including:
*physical harm: including injury, illness, pain or death;
*psychological harm: including feelings of worthlessness, distress, guilt, anger, fear or anxiety related, for example, to disclosure of sensitive information, an experience of re-traumatisation, or learning about a genetic possibility of developing an untreatable disease;
*devaluation of personal worth: including being humiliated, manipulated or in other ways treated disrespectfully or unjustly;
*cultural harm: including misunderstanding, misrepresenting or misappropriating cultural beliefs, customs or practices;
*social harm: including damage to social networks or relationships with others, discrimination in access to benefits, services, employment or insurance, social stigmatization, and unauthorized disclosure of personal information;
*economic harm: including the imposition of direct or indirect costs on participants;
*legal harm: including discovery and prosecution of criminal conduct.
[[File:AI img8.png|center|frameless|600x600px]]
[[File:GovProc7.png|center|frameless|600x600px]] Preparing an application for the ethics approval of a research study can be a time-consuming process, which is best approached methodically to ensure a coherent application with all required documentation. The process differs between institutions and organisations, but normally involves the following steps. [[File:GovProc8.png|center|frameless|600x600px]] Engagement with research ethics and integrity from the very start of a study helps researchers to design studies that are both ethical and of high quality. Nevertheless, identifying and dealing with ethical issues is not just something that happens at the beginning of a study; ethics awareness is required throughout, as researchers sometimes find themselves dealing with unforeseen consequences and navigating uncharted territory. This is especially the case for research involving new technologies. Some fields, like artificial intelligence and extended reality, are developing rapidly, and this requires ongoing assessment of challenges and of ethics guidance as it becomes available.
[[File:Ge2Image8.png|center|frameless|600x600px]] Treatments and therapies involving gene editing are already undergoing clinical trials for marketing approval in the EU and the US for certain diseases, and are likely to incur costs equivalent to those of the conventional gene-based therapies used for rare genetic diseases. Because they are very costly, they may be restricted to wealthy patients or to citizens of countries with corresponding health insurance or social security systems. This dilemma of resource allocation raises questions about the development of extremely expensive therapies.
[[File:AI Image9.png|center|frameless|600x600px]] Antonija Mijatovic shares her thoughts on challenges for data privacy and security. '''Challenges for data privacy and security''' When it comes to data security and privacy, the major issue is data breaches. Many AI applications involve health data, which is sensitive and confidential by nature, so data breaches can lead to privacy violations, identity theft and even health risks, and they result in financial losses for healthcare organizations: healthcare is the top industry targeted by ransomware, a common type of cyber-attack. Aside from ransomware, data breaches can occur through hacking, phishing, or the loss or theft of a device storing health information. Data breaches can also happen unintentionally, for example if patient data is emailed to the wrong recipient or posted online. These incidents happen very often; in the United States alone there were more than 500 cyber-attacks in the last year, which is why this issue is important to address. Researchers need to take multiple measures to ensure data security and privacy. These include cyber security measures such as strong passwords, restricted access, two-factor authentication, and encryption of very sensitive data. In addition, researchers should create backups of important folders, and because 90% of cyber-attacks are enabled by human error, researchers who work with sensitive data should receive proper training in the subject. Ethics reviewers need to check whether researchers took all necessary measures to ensure data privacy and security, and whether they adhered to regulatory compliance. For example, in the European Union personal data is regulated through the [https://gdpr.eu/what-is-gdpr/ GDPR] and personal data in AI is regulated through the [https://artificialintelligenceact.eu/ Artificial Intelligence Act], while in the United States there are several guidelines, such as the [https://aspe.hhs.gov/reports/health-insurance-portability-accountability-act-1996 Health Insurance Portability and Accountability Act] (HIPAA).
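One of the routine safeguards mentioned above — strong password handling — can be illustrated with a minimal, hypothetical sketch (the function names are illustrative, not drawn from any cited guideline or system): passwords protecting access to sensitive data should be stored as salted hashes, never in plaintext.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # PBKDF2 work factor; higher slows brute-force attacks


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext is never stored."""
    salt = secrets.token_bytes(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The per-password salt ensures that identical passwords produce different digests, and the constant-time comparison avoids leaking information through timing; both are standard practice rather than anything specific to the systems discussed in this module.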
[[File:Bio3Image10.png|center|frameless|600x600px]] In addition to the guidelines discussed below, we have produced a checklist for RECs on the use of biobanking in research, attached at the end of this page. We hope that this will be useful for REC members considering proposals involving biobanking. Please also see the further resources section, which includes the most relevant EU and international guidelines and standards related to biobanking, a bibliography and useful websites. In Europe, biobanking is governed by regulations in the [https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:102:0048:0058:en:PDF European Union's Clinical Trials Regulation and the Human Tissue and Cells Directive], which provide guidelines for sample collection, storage, and ethical considerations. Guideline 8 in the [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9357355/ CIOMS International ethical guidelines for health-related research involving humans] sets out recommended practices for the collection, storage and use of biological materials and related data. Also relevant is the [https://gdpr.eu/what-is-gdpr/ General Data Protection Regulation] (GDPR), which addresses the processing of personal data. The [https://www.isber.org/page/BPR International Society for Biological and Environmental Repositories] (ISBER) provides guidelines for best practices, and the [https://www.oecd.org/health/biotech/guidelines-for-human-biobanks-and-genetic-research-databases.htm OECD's Guidelines on Human Biobanks and Genetic Research Databases] offer international recommendations. [https://doi.org/10.3390/genes15010066 Multiple national and regional regulations] further shape biobanking practices worldwide, emphasising ethical, legal, and privacy considerations.
Remember that different guidelines and regulations apply to biobanks and related research projects depending on the requirements of the institutions, organisations and geographical locations involved. The further resources section lists and provides links to other relevant EU, African and international guidelines and standards related to biobanking in health and healthcare, but you may need to explore further afield to locate those that apply to different situations. [https://classroom.eneri.eu/sites/default/files/2024-11/Checklist%20for%20use%20of%20biobanking.pdf Checklist for RECs on the use of biobanking in research.]
[[File:Rep9.png|center|thumb|500x500px]] Now we return to the research ethics committee perspective. Below is a hypothetical debate, informed by the checklist, between members of a research ethics committee about whether this VR study should be approved. The debate involves the following five characters:
*Dr Taylor (Chair of the research ethics committee)
*Dr Evans (Bioethicist)
*Dr Brown (Psychologist)
*Dr Adams (Data privacy specialist)
*Ms Amanda Lee (Lay member)
Did the research ethics committee discuss all of the issues you identified in the proposal? Did they miss anything important? They are clearly concerned about safeguarding participant wellbeing, privacy and data protection, but have they checked for fair participant recruitment or identified all risks and benefits, for example?
[[File:G13.png|center|frameless|600x600px]]
Bostrom, Nick, and Rebecca Roache. “Ethical issues in human enhancement”. In New Waves in Applied Ethics, edited by J. Ryberg, T. Petersen, and C. Wolf, 120-152. Palgrave-Macmillan, 2007. (https://www.darpa.mil/program/insect-allies)
Cohen, Y (2019) Did CRISPR help or harm the first-ever gene-edited babies? Science, 1 August. https://www.science.org/content/article/did-crispr-help-or-harm-first-ever-gene-edited-babies
Kleiderman, Erika, and Ubaka Ogbogu. “Realigning gene editing with clinical research ethics: What the ‘CRISPR Twins’ debacle means for Chinese and international research ethics governance”. Accountability in Research 26 (9 May 2019): 257-64. https://doi.org/10.1080/08989621.2019.1617138
Palazzani, Laura. “Gene-Editing: Ethical and Legal Challenges”. Medicina e Morale 72, no. 1 (11 April 2023): 49-57. https://doi.org/10.4081/mem.2023.1227
Singh SM. Lulu and Nana open Pandora's box far beyond Louise Brown. CMAJ. 2019 Jun 10;191(23):E642-E643. doi: 10.1503/cmaj.71979. PMID: 31182462; PMCID: PMC6565397.
Smyth, Stuart J., Diego M. Macall, Peter W. B. Phillips, and Jeremy de Beer. “Implications of Biological Information Digitization: Access and Benefit Sharing of Plant Genetic Resources”. The Journal of World Intellectual Property 23, no. 3-4 (2020): 267-87. https://doi.org/10.1111/jwip.12151
The Lancet. “Human Genome Editing: Ensuring Responsible Research”. The Lancet 401, no. 10380 (March 2023): 877. https://doi.org/10.1016/S0140-6736(23)00560-3
Wei, X., & Nielsen, R. (2019). CCR5-∆32 is deleterious in the homozygous state in humans. Nature Medicine, 25(6), 909-910. https://www.nature.com/articles/s41591-019-0459-6