Text (Instruction Step Text)

Describe the actions the user should take to experience the material (including preparation and follow-up, if any). Write in an active way.


  • SA Foundation Data Type: Text
Showing 20 pages using this property.
[[File:GovProc9.png|center|frameless|600x600px]] You can try these questions to see whether your learning from this module addresses the intended learning outcomes. No one else will see your answers. No personal data is collected.
[[File:ReImg7..png|center|frameless|600x600px]] One aspect that is not fully described by the researchers or explored by the research ethics committee is the AI involvement in the study.

'''AI involvement in the study'''

Participants in the VR group will interact with AI-driven avatars. This is addressed to some extent: concerns were raised about supervision of the AI system, monitoring of interactions, and accountability, and it was noted that it was not clear whether there would be an AI expert on the research team. However, there is also no AI expert on the research ethics committee, which may be why the committee failed to address the issues associated with the inclusion of a generative AI element in the project. This is what the proposal states:

“Additionally, the study hopes to contribute to improving AI-driven avatars in VR environments, to help make them more lifelike and responsive during human-to-AI interactions.”

This statement implies that data from the project will be used to train AI models. Did you spot this in the proposal?

Generative AI models typically require significant amounts of data, often stored for long periods. This increases the risk of data breaches or unauthorised access, especially given the sensitive nature of the data collected in this project. Biometric data, coupled with detailed interaction metrics (e.g., frequency and duration of social interactions), can be highly personal, and although efforts can be made to anonymise data, the detailed nature of biometric and interaction data could lead to re-identification risks. Secure anonymisation protocols must be emphasised to protect participant identities. If people in the project are not fully aware that their data is being used to train AI models, this raises concerns about autonomy and trust.

Additionally, there are issues related to data bias and fairness, as unrepresentative datasets might skew the AI model’s predictions. The large datasets on which generative AI models are trained can contain biases or stereotypes, leading AI avatars to exhibit biased behaviours or make stereotyped assumptions. Participants from underrepresented groups might encounter responses from avatars that reflect these biases, and there may be a lack of nuanced cultural understanding, leading to responses that feel inappropriate or insensitive. Mitigating these risks will require transparency about the AI’s nature, strict data handling policies, and bias mitigation strategies, including fairness in participant recruitment.

For proposals like this, where the ethics issues cross both XR and AI, it will be helpful to also consult the document [https://classroom.eneri.eu/sites/default/files/2024-12/The%20use%20of%20XR%20technologies%20in%20research.pdf Ethics of AI in Healthcare: A checklist for Research Ethics Committees].
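To make the re-identification point above concrete, here is a minimal sketch (not part of the original module; the dataset and field names are hypothetical) that checks the k-anonymity of a small table of supposedly anonymised session records. If any combination of quasi-identifiers is shared by only one record (k = 1), that participant could potentially be singled out even though no names are stored.

<syntaxhighlight lang="python">
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by their
    quasi-identifier values; k == 1 means someone is unique."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy "anonymised" VR-session records: no names, yet the combination
# of coarse attributes can still single a participant out.
records = [
    {"age_band": "60-69", "region": "NW", "avg_session_min": 42},
    {"age_band": "60-69", "region": "NW", "avg_session_min": 42},
    {"age_band": "70-79", "region": "SE", "avg_session_min": 18},
]

print(k_anonymity(records, ["age_band", "region", "avg_session_min"]))
# -> 1: the third participant is unique on these fields alone, so
# releasing this table carries a re-identification risk.
</syntaxhighlight>

This is only an illustration of why detailed interaction metrics resist naive anonymisation; real protocols would also consider attacker background knowledge and techniques such as generalisation or differential privacy.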
[[File:AI Image10.png|center|frameless|600x600px]] Think about places where data relating to your own health might be stored – tick all that apply.
[[File:Mm11.png|center|frameless|600x600px]] In the next video you will hear from some of the people who were involved in developing the TRUST Code, summarising the 23 articles that promote equitable research partnerships in international collaborative research.
[[File:Ring by water.jpg|alt=ring by water|center|frameless|600x600px|ring by water]]
[[File:Gene Image12.png|center|frameless|600x600px]]
[[File:Ge-Hu2.png|center|frameless|600x600px]]

'''Human enhancement'''

The ability to edit genes raises ethical questions about the potential for "designer babies", where genetic enhancements are made for non-medical reasons. This raises concerns about social inequality, discrimination, and the potential misuse of gene editing technologies.

'''Immunogenicity'''

The use of gene editing tools, especially those involving viral vectors to deliver editing components, may trigger an immune response in the organism. The immune response could limit the effectiveness of the treatment or cause adverse reactions.

'''Off-target effects'''

Gene editing tools may unintentionally modify genomic regions other than the target, leading to unintended consequences. Off-target effects could potentially cause new genetic mutations or disrupt the function of other essential genes.

'''On-target effects'''

Gene editing tools may unintentionally modify the target DNA in the wrong way, with unwanted deletions or insertions. For instance, the DNA coding for the Cas protein may become built into the target sequence of the cell, which would lead to the gene in question not functioning properly.

'''Mosaicism'''

Genetic mosaicism is the presence of more than one genotype in one individual. Some cells in the target region undergo the desired genetic modification while others still carry the original DNA, resulting in a mosaic pattern of edited and unedited cells. This can lead to problems in communication between cells.

'''Slippery slope'''

If genetic enhancement becomes acceptable for certain characteristics, the boundaries of what is considered acceptable will soon be pushed further. Additionally, parents may begin to feel pressure to enhance their children to ensure that they are not disadvantaged in comparison with those who benefit from enhancements.
[[File:Bio2Image12.png|center|frameless|600x600px]] '''An example of a protocol to deal with incidental findings'''

In addition to collecting biological samples, the population-based [https://www.nature.com/articles/s41467-020-15948-9#Sec19 UK Biobank also collects multi-modal images] from some donors, including MRIs, X-rays, and ultrasounds. Its protocol for dealing with incidental findings included the recruitment of third-party consultant radiologists to review scans that the biobank radiologists had flagged after noticing signs of something clinically serious at the time of collection. The consultant radiologist then reviewed the images and advised the biobank on whether the donor and their healthcare practitioner should be notified. Of the donors who were notified of an incidental finding, all consulted their doctor, and 90% went on to further clinical assessments.
[[File:M11..png|center|frameless|600x600px]] This case study explored the ethical complexities of a malaria research proposal using AI to predict disease transmission hotspots in Sub-Saharan Africa. The project, led by researchers from high-income countries, illustrates critical ethical concerns around ethics dumping and AI ethics, particularly when research is conducted in low- and middle-income countries. Key ethics dumping issues included a lack of meaningful involvement of local researchers, limited benefit-sharing with participating communities, and insufficient local control over data ownership. The AI ethics concerns centred on the complexities of informed consent, data privacy, and transparency in AI processes, which is especially challenging when participants may not fully understand how their data is used by advanced algorithms. The principles of the TRUST Global Code of Conduct (fairness, respect, care, and honesty) highlight the need for equitable and respectful research practices. Addressing these values in the proposal would promote ethical research partnerships, prevent exploitative practices, and ensure that the benefits extend to the local communities and health systems involved. This case study underscores the importance of balancing innovative research with robust ethical standards to foster trust, inclusivity, and long-term impact in global health initiatives.
Now it is time to decide whether to approve the study. As a member of the research ethics committee, which option will you go for?

'''Feedback'''

While gene drive technology presents a potentially revolutionary solution to malaria control, it raises a host of ethical concerns, particularly regarding ecological impacts, human consent, and long-term consequences. Careful consideration of these risks and proactive steps to mitigate them will be essential to the responsible development and deployment of this technology.
[[File:A view of mountains high up on a hill.png|center|frameless|600x600px]] Which paradigm are you working in? Look at the descriptions at the end points of each question, and try to work out where you and your research project might fall along each continuum. Are you more of a realist or more of a relativist? Is your approach to knowledge generation more positivist or interpretivist? Do these aspects fit with the methodological stance that you take in your research? Most people operate somewhere between the extremes. Additionally, it is possible to alter one’s positionality in response to different contexts. For instance, when addressing a research question which requires broad statistics, one might take a more positivist stance; when in-depth inquiry of a qualitative nature is required, one might take a more interpretivist stance. The important point is that we are cognisant of our perspective and its influence upon the knowledge that we create.
<blockquote>Is it ethical to intentionally infect healthy volunteers with a deadly virus? This video discusses the ethical controversy surrounding human challenge studies, particularly those involving SARS-CoV-2, the virus that causes COVID-19. While these studies can be valuable for scientific progress, they raise concerns about the Hippocratic Oath, which states "first do no harm". On the other hand, human challenge studies have been used in the development of other vaccines, such as those for malaria.</blockquote>
[[File:S1.png|center|frameless|600x600px]] It is generally agreed across ethics codes that the involvement of people with these types of vulnerabilities requires special justification and special protections. Indeed, the Declaration of Helsinki states that medical research with a vulnerable group is only justified if the research is responsive to the health needs or priorities of this group and the research cannot be carried out in a non-vulnerable group (Article 20). Of course, we need to protect people from exploitation in research, but ethics codes and processes that aim to protect vulnerable populations might inadvertently lead to the exclusion of certain individuals if they are wrongly labelled as vulnerable, or if researchers do not understand how to mitigate their vulnerability in research.
[[File:Ext.Image12.png|center|frameless|600x600px]] Miltos Ladikas shares his thoughts on manipulation in XR.

People are susceptible to manipulation, particularly with extended reality, more than with any other technology, I believe, because extended reality intrudes into people's minds like no other technology that we know of. The reason is that extended reality creates, basically, reality. That's why it's called extended reality. And this reality, which is not exactly real (it might be augmented, it might be totally virtual), is in any case something that our brain, our mind, our cognitive functions accept as real. And once you get to that point, the possibilities for manipulation are manifold. And we know that from research that psychologists have done using extended reality, especially these CAVE systems, as they call them, whereby you are totally immersed in an extended reality environment. They have used that to achieve, and they have achieved, better results in psychotherapy, for example with acrophobia, fear of heights, and vertigo. And although this is, of course, a good use of extended reality, it shows how far it can go in manipulating our minds and even our belief systems, and how this manipulation can be really long term. So, yes, there is a real risk of manipulation through extended reality. We should be aware of that. As I said before, manipulation can have good aspects: in psychotherapy, for instance, we do need to manipulate the mind in order to create, to put it bluntly, better cognitive functioning. But one can imagine many cases where this kind of manipulation could be harmful, especially to younger people. And don't forget that extended reality is more accepted and more used by the younger generation, for a very simple reason: they are more immersed in new technologies in any case. They accept these new technologies; they have them in their lives every day, from social media to, well, any kind of new technology that is out there. It's part of growing up, I suppose, nowadays. But that also means that young minds, which are not yet well formed, socially at least, and even individually in many respects, can really be manipulated through extended reality for any kind of purpose imaginable by those who use, develop, and distribute extended reality.
The informed consent process should explain the study's risks and benefits fully and clearly, in terms of what is known, what is uncertain, and what is unknown.
In this lecture, Heidi Beate Bentzen examines the conflict between personal data protection and the objectives of Open Science. She begins with the principle of data minimisation, followed by an analysis of the barriers to global data sharing.

'''Watch the lecture and then answer the questions.'''

'''Further reading:'''

* Dennis, S., Garrett, P., Yim, H., Hamm, J., Osth, A. F., Sreekumar, V., & Stone, B. (2019). Privacy versus open science. Behavior Research Methods, 51(4), 1839–1848. https://doi.org/10.3758/s13428-019-01259-5
* Phillips, M., & Knoppers, B. M. (2019). Whose Commons? Data Protection as a Legal Limit of Open Science. Journal of Law, Medicine & Ethics, 47(1), 106–111. https://doi.org/10.1177/1073110519840489
* Paseri, L. (2023). Open Science and Data Protection: Engaging Scientific and Legal Contexts. Journal of Open Access to Law, 11(1), Article 1. https://doi.org/10.63567/1bnsyb91
[[File:AI Img10.png|center|frameless|600x600px]] You can try these questions to see whether your learning from this module addresses the intended learning outcomes. No one else will see your answers. No personal data is collected.
Thank you for taking this irecs module! Your feedback is very valuable to us and will help us to improve future training materials. We would like to ask for your opinions:

# To improve the irecs e-learning modules
# For research purposes, to evaluate the outcomes of the irecs project

To this end, we have developed a short questionnaire, which will take 5 to 10 minutes to answer. Your anonymity is guaranteed; you won't be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below. The link will take you to a new page.

[https://forms.office.com/e/K5LH08FyvQ https://forms.office.com/e/K5LH08FyvQ]

Thank you!
[[File:ReImg8..png|center|frameless|600x600px]] An important role for a research ethics committee is to establish what the risks are and to assess whether they are justified by the study's objectives. This also involves assessing the potential benefits of the study and checking the measures that will be employed to minimise the risks. By now, you should have a good idea of the potential risks and benefits associated with this study.

[[File:ReImg9..png|center|frameless|400x400px]]

Most research studies involving humans carry some level of risk, but there must be a realistic potential for benefits to justify the risks. Nevertheless, if the risks (for participants, society, or the environment) are high, it is unlikely that the study will be permitted even when there is great potential for benefits. In all cases, reliable measures must be put in place to mitigate or minimise any risks: risks must be minimised, and potential benefits maximised.

While there may appear to be many risks associated with this proposal, most can be minimised or mitigated. What might that involve? Many of the points in the document ''The use of XR technologies in research: A checklist for research ethics committees'' are relevant to risk minimisation and mitigation. For instance, Section 4, which is devoted to participant wellbeing and non-maleficence, asks about appropriate mitigation measures, such as regular breaks during sessions, monitoring of participants for signs of discomfort, and appropriate protocols for managing emotional distress and offering support. These mitigation measures must be reviewed to ensure participant welfare, but also to help assess whether the study is justified.

Do you think the potential benefits outweigh the potential risks involved in this proposed project?
[[File:AI Image11.png|center|frameless|600x600px]] Consider the following questions to reflect on the potential impact of AI technologies in healthcare on the patient-doctor relationship:

'''Awareness and Trust'''
* Do you think patients may have concerns or reservations about relying on AI-driven insights over traditional doctor-patient interactions?

'''Communication and Understanding'''
* In what ways do you think AI technologies could enhance or hinder communication between patients and healthcare professionals?

'''Personalisation and Empathy'''
* Do you think the integration of AI could impact the empathetic aspects of the patient-doctor relationship? If so, in what ways?

'''Role of the Healthcare Professional'''
* What role do you envision for healthcare professionals in a future where AI technologies play a significant role in diagnosis and treatment?
* How can doctors maintain their essential role as caregivers and decision-makers while working alongside AI systems?

'''Balancing Technology and Human Touch'''
* Reflect on the importance of finding a balance between AI technologies and the human touch in healthcare. How can technology enhance, rather than replace, the human connection between patients and healthcare providers?