Text (Instruction Step Text)

From The Embassy of Good Science
Describe the actions the user should take to experience the material (including preparation and follow-up, if any). Write in an active way.


  • SA Foundation Data Type: Text
Showing 20 pages using this property.
Close the exercise by underlining the importance of good communication in dealing with research integrity issues and dilemmas. Continue with the next fragment or next part of the workshop.
Ask the group to reflect on the process and to evaluate whether the learning objectives were met. Foster a brief dialogue on what might have been learned as a group. In this step the facilitator may ask participants questions such as:
* Was it easy or difficult to identify the relevant principles and virtues in the chosen dilemma?
* Did this exercise help you with identifying and connecting to formally defined principles (e.g. from the European Code of Conduct for Research Integrity)?
* Did most of the players agree or disagree with the final choice?
* What were the main points of contention?
* Why did people disagree (e.g. differences in experience, training, background, values, norms…)?
* What were the other options?
* Was any alternative option proposed?
* Did anybody change her/his mind as a result of the discussion?
* Why would you NOT follow the morally ideal course of action?
* What is needed to act morally in your work setting?
* What were the most convincing arguments used in the discussion?
* On which areas do you feel there is insufficient consensus?
* How can you best address future dilemmas in your daily work?
* How can shared values and principles be fostered?
This final part of the manual consists of two instructions, with the links listed below: [https://public.3.basecamp.com/p/R5e8zxXRHwd27Mz5PPfooByh Certification] [https://public.3.basecamp.com/p/vmLSq94iGyaNsbrWKFgFbCiN Recognition and networking]
Invite participants to reflect on the process as a whole: what lessons did they take away from this session? Try to draw specific conclusions by asking participants the following questions:
* Was it easy or difficult to relate the virtues and norms to each other? Why?
* Did putting yourself in the shoes of the person presenting the case broaden your perspective on virtues and, consequently, on norms and behaviours?
* Did the virtues and norms/behaviours identified by other participants help you look at virtues differently or more broadly? Do you think this will influence how you think about the research ethics and integrity dilemmas you will encounter in practice?
Ask the group to reflect on the process as a whole and to evaluate whether the learning objectives were met in the context of this exercise. Guide participants into a brief dialogue about what they learned from the exercise. At this stage the facilitator may ask participants questions such as:
* Was it easy to identify the relevant principles and virtues for the chosen dilemma?
* Did this exercise help you identify formally defined principles (e.g. the European Code of Conduct for Research Integrity) and connect them to the cases?
* Did the large majority of the players agree with the final decision?
* What were the main points of contention?
* Why did participants disagree on some points (e.g. differences in experience, training, background, values, norms, etc.)?
* What were the other options?
* Was any alternative option proposed?
* Did any participant change their mind as a result of the discussion?
* Why would you NOT do what is morally ideal?
* What is needed to achieve what is morally good in your work setting?
* What were the most convincing arguments used in the discussion?
* On which points do you feel there was insufficient consensus?
* How can you best deal with such dilemmas in your future working life?
* How can more widely shared values and principles be achieved?
Finally, invite the participants to reflect on the entire process during the preceding exercise: what is the take-home message they draw from it? Try to capture some conclusions or insights by asking the participants:
* Was it easy to relate the values/virtues to the norms? Was it difficult? Why?
* Did trying to put yourself in the position of the person who experienced the example situation broaden your view of values/virtues, and thereby also of norms or behaviours?
* Did the values/virtues, norms, or behaviours mentioned by others help you think differently about the topic, for example by looking at values/virtues differently or more comprehensively? How will today's experience from this exercise change your thinking about dilemmas in everyday research?
Invite participants to think about the entire process: what is the take-home message of this session for them? Try to draw conclusions by asking participants:
* Was it easy or difficult to relate the virtues and norms to each other? Why?
* Did putting yourself in the case presenter’s shoes broaden the way you looked at virtues and, consequently, norms and behaviors?
* Did the virtues and norms/behaviors identified by others help you to look at virtues differently or more broadly? Do you think that will influence your thinking on research integrity dilemmas in practice?
[[File:Man overlooking view.png|center|frameless|600x600px]] Kuhn suggested that all scientific knowledge is ‘situated’ knowledge and cannot represent a ‘view from nowhere’. We all view the world from within a particular set of social and epistemic practices. According to Kuhn, scientists working within different paradigms are effectively working in different worlds. But how do we know which paradigm we are working in?
[[File:M10..png|center|frameless|600x600px]] In terms of ethics dumping, the previously mentioned TRUST global code of conduct for equitable research partnerships offers a simple, jargon-free [https://www.globalcodeofconduct.org/the-code/ ethics code] comprising 23 articles based on the moral values of Fairness, Respect, Care and Honesty, to help researchers ensure that international research is equitable and carried out without ‘ethics dumping’ or ‘helicopter research’.

In terms of AI ethics, we recommend consulting the Ethics of AI in Healthcare: A Checklist for Research Ethics Committees, which was developed by irecs colleagues Alexei Grinbaum and Etienne Aucouturier at CEA (French Alternative Energies and Atomic Energy Commission), as well as the materials in the [https://classroom.eneri.eu/node/238 irecs AI and ethics module].

Chapter 5 of the [https://www.who.int/publications/i/item/9789240029200 World Health Organization’s Ethics and Governance of Artificial Intelligence for Health] outlines six key ethical principles for AI research in healthcare: protecting patient autonomy, promoting human wellbeing, ensuring transparency and explainability, fostering accountability, promoting inclusiveness and equity, and supporting AI that is both responsive and sustainable. These principles serve as essential reminders for researchers and policymakers to prioritise ethical considerations in the development and deployment of AI technologies in healthcare settings.

Another significant issue in the development of AI technologies across all fields is the potential for bias and inaccuracies in algorithms, which in the healthcare domain can result in incorrect diagnoses and treatment recommendations. These risks disproportionately affect vulnerable populations, raising concerns about inclusivity and equity.
The [https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1 EU’s Ethics Guidelines for Trustworthy AI] emphasise that AI systems must be lawful, ethical, and robust throughout their life cycle. This includes compliance with applicable laws, adherence to ethical principles, and ensuring technical and social robustness. Importantly, these guidelines call for mechanisms to prevent algorithmic bias and protect privacy. Unethical applications involving AI are defined as those that risk violating physical or mental integrity, creating addiction, or damaging social processes and public institutions (e.g. by social scoring or by contributing to misinformation). Projects must adhere to essential requirements, which include (but are not restricted to) the following:
* People must be made aware that they are interacting with an AI system, as well as of its abilities and limitations, risks and benefits.
* Mechanisms for human oversight, transparency and auditability must be built into the AI system.
* AI systems must be designed to avoid bias in input data and algorithmic design.
* Compliance with data protection and privacy principles must be demonstrated.
Our hypothetical proposal is not seeking funding from Horizon Europe; however, the [https://www.bbmri-eric.eu/wp-content/uploads/The-Ethics-Appraisal-Scheme-_BBMRI-webinar-september-2021_version-for-dessimination.pdf EU ethics appraisal scheme (pp. 74-80)] provides relevant guidance for several concerns in this case study. It highlights the importance of transparency, requiring that individuals interacting with AI systems be fully informed about the system’s capabilities, limitations, risks, and benefits. It also underscores the necessity of building human oversight, transparency, and auditability into AI systems, ensuring that AI development remains accountable and aligned with societal values.
Regulatory oversight has often lagged behind technological advancements, creating additional legal and ethical challenges. The WHO and EU guidelines, among others, stress the need for AI systems to comply with data protection and privacy principles, such as data minimisation, ensuring that only the necessary data is collected and used. This is crucial for building trust and safeguarding against the misuse of sensitive healthcare information. It is important to remember that different guidelines and regulations will apply to a research project depending on the requirements of the institutions, organisations and geographical locations involved. The further resources section lists sources to explore on ethics dumping, some of the ethics committees in Africa, and the currently most relevant EU and international guidelines and standards related to AI in health and healthcare, but you may need to explore further afield to locate those that apply to different situations.
[[File:AI img8.png|center|frameless|600x600px]]
[[File:Ge2Image8.png|center|frameless|600x600px]] Treatments and therapies involving gene editing are already undergoing clinical trials for marketing approval in the EU and the US for certain diseases, and are likely to incur costs equivalent to those of conventional gene-based therapies used for rare genetic diseases. However, they are very costly and may thus be restricted to wealthy patients, or to citizens of countries with corresponding health insurance or social security systems. The dilemma of resource allocation therefore poses questions about the development of extremely expensive therapies.
[[File:AI Image9.png|center|frameless|600x600px]] Antonija Mijatovic shares her thoughts on challenges for data privacy and security. '''Challenges for data privacy and security''' When it comes to data security and privacy, the major issues are data breaches, because many applications of AI involve health data, and health data is sensitive and confidential by nature. Data breaches can lead to privacy violations, identity theft, and even health risks, and they result in financial losses for healthcare organizations, because healthcare is the top industry targeted by ransomware. Ransomware is a common cyber-attack, but aside from ransomware, data breaches can occur through hacking, phishing, and even if a device storing health information is lost or stolen. Data breaches can also happen unintentionally, for example if patient data is emailed to the wrong recipient or posted online. These incidents happen very often: in the United States alone, in the last year there have been more than 500 cases of cyber-attacks. This is why it is important to address this issue. Researchers need to take multiple measures to ensure data security and privacy. These include cyber security measures such as strong passwords, restricted access, two-factor authentication, and even encryption of very sensitive data. In addition, researchers should create backups of very important folders. And because 90% of cyber-attacks were enabled by human error, researchers who work with sensitive data should receive proper training in the subject. Ethics reviewers need to check whether researchers took all necessary measures to ensure data privacy and security, and they should also check whether researchers adhered to regulatory compliance. For example, in the European Union, personal data is regulated through the [https://gdpr.eu/what-is-gdpr/ GDPR] and personal data in AI is regulated through the [https://artificialintelligenceact.eu/ Artificial Intelligence Act].
In the United States, several laws and guidelines apply, such as the [https://aspe.hhs.gov/reports/health-insurance-portability-accountability-act-1996 Health Insurance Portability and Accountability Act (HIPAA)].
[[File:Bio3Image10.png|center|frameless|600x600px]] In addition to the guidelines discussed below, we have produced a checklist for RECs on the use of biobanking in research, attached at the end of this page. We hope that this will be useful for REC members considering proposals involving biobanking. Please also see the further resources section, which includes the most relevant EU or international guidelines or standards related to biobanking, a bibliography and useful websites. In Europe, biobanking is governed by regulations in the [https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:102:0048:0058:en:PDF European Union's Clinical Trials Regulation and the Human Tissue and Cells Directive], which provides guidelines for sample collection, storage, and ethical considerations. Guideline 8 in the [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9357355/ CIOMS International ethical guidelines for health-related research involving humans] sets out recommended practices for the collection, storage and use of biological materials and related data. Also relevant is the [https://gdpr.eu/what-is-gdpr/ General Data Protection Regulation] (GDPR), which addresses the processing of personal data. [https://www.isber.org/page/BPR The International Society for Biological and Environmental Repositories] (ISBER) provides guidelines for best practices, and the [https://www.oecd.org/health/biotech/guidelines-for-human-biobanks-and-genetic-research-databases.htm OECD's Guidelines on Human Biobanks and Genetic Research Databases] offer international recommendations. [https://doi.org/10.3390/genes15010066 Multiple national and regional regulations] further shape biobanking practices worldwide, emphasising ethical, legal, and privacy considerations.
It is important to remember that different guidelines and regulations will apply to biobanks and related research projects depending on the requirements of the institutions, organisations and geographical locations involved. The further resources section lists and provides links to other relevant EU, African and other international guidelines or standards related to biobanking in health and healthcare, but you may need to explore further afield to locate those that apply to different situations. [https://classroom.eneri.eu/sites/default/files/2024-11/Checklist%20for%20use%20of%20biobanking.pdf Checklist for RECs on the use of biobanking in research.]
[[File:Rep9.png|center|thumb|500x500px]] Now we return to the research ethics committee perspective. Below is a hypothetical debate, informed by the checklist, between members of a research ethics committee about whether this VR study should be approved. The debate involves the following five characters: *Dr Taylor (Chair of research ethics committee) *Dr Evans (Bioethicist) *Dr Brown (Psychologist) *Dr Adams (Data privacy specialist) *And Ms Amanda Lee (Lay member) Did the research ethics committee discuss all of the issues you identified in the proposal? Did they miss anything important? They are clearly concerned about safeguarding participant wellbeing, privacy and data protection measures, but have they, for example, checked for fair participant recruitment or identified all risks and benefits?
[[File:Mm10.png|center|frameless|600x600px]] The TRUST Code, a global code of conduct for equitable research partnerships, was designed to address ethics dumping. You can watch two videos here about the code: one gives an overview of the code, and one introduces its 23 articles.
[[File:Consent spelled out on blocks.jpg|alt=consent spelled out on blocks|center|frameless|600x600px|consent spelled out on blocks]] Informed consent is the cornerstone of ethical research with humans. It is of fundamental importance that people understand what a research project is about and provide their consent for taking part. There are seven key ingredients for valid informed consent. Match the following ingredients to their meanings.
[[File:Bio2Image11.png|center|frameless|600x600px]]
Helicopter research occurs when researchers from affluent countries extract data or resources from lower-income regions without considering local needs or ethical concerns. A notable example occurred during the Ebola crisis in 2014. Researchers from high-income countries requested access to vast amounts of mobile phone data from Sierra Leone, Guinea, and Liberia to track population movements, claiming that it would provide significant insights into Ebola transmission. The contrast between the handling of mobile phone data during the Ebola crisis and the German floods of 2021 underscores the double standards often present in helicopter research.
[[File:Ext.Image11.png|center|frameless|600x600px]] XR manipulation refers to the intentional alteration or distortion of reality within virtual environments. XR manipulation can alter users' perception of reality, creating illusions or deceptions that trick users into perceiving virtual content as part of their physical environment. It can also be used to control the narrative within immersive experiences in order to shape users' understanding, interpretation, and beliefs. The emergence of virtual beings (for instance, avatars representing deceased individuals) introduces complex ethical questions regarding identity and agency. Immersive technologies can also incorporate nudging techniques that are used to guide users' actions, shape their experiences, or promote certain outcomes. In the context of VR, ‘nudging’ refers to the application of certain measures to subtly influence the user’s decision-making. For instance, it may involve prompts, reminders, or visual cues; the presentation of options in specific ways; portraying particular behaviours as the social norm; or the offering of rewards or incentives. Given the intention to influence, the use of nudging techniques has ethical implications related to user autonomy and informed consent, and so needs to be considered carefully. While these facets can enhance immersion and entertainment value, they can also raise ethical concerns related to transparency, consent, and user agency. XR manipulation can be exploited for malicious purposes, such as spreading misinformation, creating deceptive experiences, or manipulating users' behaviour for financial or political gain. Safeguards need to be implemented to prevent misuse of XR technologies and protect users from harmful manipulation.