Text (Instruction Step Text)

From The Embassy of Good Science
Describe the actions the user should take to experience the material (including preparation and follow-up, if any). Write in the active voice.


  • SA Foundation Data Type: Text
Showing 20 pages using this property.
Thank you for taking this irecs module! Your feedback is very valuable to us and will help us to improve future training materials. We would like to ask for your opinions: 1. To improve the irecs e-learning modules 2. For research purposes, to evaluate the outcomes of the irecs project. To this end, we have developed a short questionnaire, which takes 5 to 10 minutes to complete. Your anonymity is guaranteed; you won’t be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below. This link will take you to a new page: [https://forms.office.com/e/UsKC9j09Tx https://forms.office.com/e/UsKC9j09Tx] Thank you!  +
[[File:AI Image13.png|center|frameless|600x600px]] You can try these questions to check whether you have achieved the intended learning outcomes of this module. No one else will see your answers. No personal data is collected.  +
[[File:Question mark in speech bubble.jpg|alt=question mark in speech bubble|center|frameless|600x600px|question mark in speech bubble]] You can try these questions to check whether you have achieved the intended learning outcomes of this module. No one else will see your answers. No personal data is collected.  +
[[File:Bio2Image15.png|center|frameless|600x600px]] You can try these questions to check whether you have achieved the intended learning outcomes of this module. No one else will see your answers. No personal data is collected. What is a key distinction between the broad consent and study-specific consent models used for individuals donating biological samples to a biobank?  +
[[File:Self awareness blocks.png|center|frameless|600x600px]] When considering complex ethics issues, critical analysis ensures that decisions are based on well-founded information and sound arguments rather than speculation or faulty logic. This includes in-depth consideration of an issue from multiple perspectives. To be critical in this regard means recognising and questioning the basis of assertions, unfounded assumptions, flaws in logic, generalisations, and so on. Additionally, when undertaking critical analysis, it is vital to remain mindful of potential sources of bias through engagement in critical reflection. What might this look like in practice? In this module we have introduced many complex ideas. Of course, it is not possible to gain an in-depth understanding from this brief introduction; it takes time to develop the appropriate skills for critical thinking in ethics. If you want to delve into the subject further, please see the recommendations on the ‘further resources’ page. [https://classroom.eneri.eu/node/207 Next Page]  +
[[File:S5.png|center|frameless|600x600px]] Thank you for taking this irecs module! Your feedback is very valuable to us and will help us to improve future training materials. We would like to ask for your opinions: 1. To improve the irecs e-learning modules 2. For research purposes, to evaluate the outcomes of the irecs project. To this end, we have developed a short questionnaire, which takes 5 to 10 minutes to complete. Your anonymity is guaranteed; you won’t be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below. This link will take you to a new page: [https://forms.office.com/e/K5LH08FyvQ https://forms.office.com/e/K5LH08FyvQ] Thank you!  +
[[File:Ext.Image16.png|center|frameless|600x600px]] Do you know what impacts the use of XR has on the environment? Make a note of as many advantages and disadvantages as you can think of, and then watch the video to check your lists. The potential environmental impacts associated with the use of XR technologies are significant, in both positive and negative ways. Potential positive impacts include the reduction of travel through remote collaboration, training, and meetings; a reduction of material consumption and waste through virtual prototyping and design; and the use of immersive experiences to raise awareness about environmental issues and sustainable practices. However, the processing power required for high-fidelity VR and AR experiences can be significant, and there are concerns about the energy consumption in public places and cloud gaming server farms. Cloud-based XR applications rely on data centres for processing and storage, which consume substantial amounts of energy. Increasing demand for XR content and services may lead to the expansion of data centre infrastructure, exacerbating environmental concerns if not powered by renewable energy sources. The manufacturing of XR devices requires materials such as plastics, metals, and rare earth elements, which can have environmental impacts associated with extraction, processing, and disposal. The production of XR components and materials may also have indirect environmental impacts, such as habitat destruction, pollution, and resource depletion associated with mining and manufacturing processes. Finally, the disposal of XR hardware contributes to electronic waste. As XR devices evolve rapidly, older models may become obsolete, leading to disposal challenges and environmental pollution if not properly managed.
Addressing the environmental impacts of XR requires a holistic approach, including efforts to improve energy efficiency, reduce electronic waste, promote sustainable manufacturing practices, and invest in renewable energy infrastructure. Additionally, raising awareness about the environmental implications of XR technologies and fostering responsible consumption and disposal habits among users can contribute to minimizing their overall environmental footprint.  
Especially during pandemics, researchers who handle potentially infectious biological materials should be adequately trained and equipped to safeguard public health.  +
Try to answer the questions about the case.  +
[[File:Person sat at computer in office.jpg|alt=person sat at computer in office|center|frameless|600x600px|person sat at computer in office]] Thank you for taking this irecs module! Your feedback is very valuable to us and will help us to improve future training materials. We would like to ask for your opinions: 1. To improve the irecs e-learning modules 2. For research purposes, to evaluate the outcomes of the irecs project. To this end, we have developed a short questionnaire, which takes 5 to 10 minutes to complete. Your anonymity is guaranteed; you won’t be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below. This link will take you to a new page: [https://forms.office.com/e/K5LH08FyvQ https://forms.office.com/e/K5LH08FyvQ] Thank you!  +
[[File:Mm17.png|center|frameless|600x600px]] '''Articles and books''' Chatfield, K., Schroeder, D., Guantai, A., Bhatt, K., Bukusi, E., Adhiambo Odhiambo, J., ... & Kimani, J. (2021). Preventing ethics dumping: the challenges for Kenyan research ethics committees. Research Ethics, 17(1), 23-44. Available at: https://journals.sagepub.com/doi/full/10.1177/1747016120925064 (Free to download) Schroeder, D. (2007). Benefit sharing: it’s time for a definition. Journal of medical ethics, 33(4), 205-209. Schroeder, D., & Pisupati, B. (2010). Ethics, justice and the convention on biological diversity. Available at: https://clok.uclan.ac.uk/9695/1/Ethics,%20Justice%20and%20the%20convention.pdf (Free to download) Schroeder, D., Cook, J., Hirsch, F., Fenet, S., & Muthuswamy, V. (2018). Ethics dumping: case studies from north-south research collaborations. Springer Nature. Available at: https://link.springer.com/book/10.1007/978-3-319-64731-9 (Free to download) Schroeder, D., Chatfield, K., Singh, M., Chennells, R., & Herissone-Kelly, P. (2019). Equitable research partnerships: a global code of conduct to counter ethics dumping (p. 122). Springer Nature. Available at: https://link.springer.com/book/10.1007/978-3-030-15745-6 (Free to download) Schroeder, D., Chatfield, K., Muthuswamy, V., & Kumar, N. K. (2021). Ethics Dumping–How not to do research in resource-poor settings. Journal of Academics Stand Against Poverty, 1(1), 32-55. Available at: https://journalasap.org/index.php/asap/article/view/4 (Free to download) Wynberg, R., Schroeder, D., & Chennells, R. (2009). Indigenous peoples, consent and benefit sharing: lessons from the San-Hoodia case (Vol. 15). Berlin: Springer. 
'''Research ethics codes''' The San Code of Research Ethics, available from: [https://www.globalcodeofconduct.org/affiliated-codes/ https://www.globalcodeofconduct.org/affiliated-codes/] The TRUST Global Code of Conduct for Equitable Research Partnerships, available from: [https://www.globalcodeofconduct.org/ https://www.globalcodeofconduct.org/] '''Videos''' More videos can be found here: https://www.youtube.com/@trustandprepared1000
Thank you for taking this irecs module! Your feedback is very valuable to us and will help us to improve future training materials. We would like to ask for your opinions: 1. To improve the irecs e-learning modules 2. For research purposes, to evaluate the outcomes of the irecs project. To this end, we have developed a short questionnaire, which takes 5 to 10 minutes to complete. Your anonymity is guaranteed; you won’t be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below. This link will take you to a new page: [https://forms.office.com/e/cimWP1L4tx https://forms.office.com/e/cimWP1L4tx] Thank you!  +
[[File:ReImage13..png|center|frameless|600x600px]] '''Artificial intelligence system:''' in the narrow sense (excluding deterministic "expert systems"), a machine learning system trained on a dataset and designed to operate autonomously, demonstrating adaptability to different inputs and producing outputs, such as predictions, recommendations, decisions or other content. '''Machine learning:''' automatic process by which information is generated in the form of mathematical correlations from a training dataset. Types of machine learning include reinforcement learning, supervised learning (using data annotated by humans) and self-supervised or unsupervised learning (without human labeling). The result of machine learning is called an AI model. An AI model needs fine-tuning and alignment before deployment. When combined with a user interface, an AI model trained on a large dataset to perform a variety of tasks becomes a general-purpose AI system. '''Fine-tuning:''' tailoring an AI model to perform specific tasks, by refining its training on a specialized dataset. '''Alignment:''' the design and application of filters and control mechanisms to prevent undesirable behavior of the AI system. '''Explainability:''' the capacity to provide textual or visual content allowing users to achieve satisfactory understanding of the causes that have led to the output of the AI system. '''Reproducibility:''' the capacity to obtain identical or similar results on multiple runs of the AI system with the same input. '''Hallucination:''' plausible but false or unreal output produced by the AI system. '''Bias:''' distortions in the outputs that occur when AI systems are trained on non-representative or unbalanced datasets, producing false or discriminatory results which can lead to the loss of user trust. '''Emergent capability:''' as perceived by the user, unpredictable characteristics or behavior of the AI system emerging without any explicit intent of the designer.
'''Adversarial attack:''' an attack involving purposefully corrupted data or malicious inputs, designed to cause errors or induce undesirable behavior of the AI system. '''Synthetic data:''' simulated data produced by a very large AI system (itself trained on authentic or synthetic data) with the goal of training a smaller AI system. '''Ethics by design:''' a methodology for analyzing, as early as the design phase of an AI system, the technological choices likely to give rise to ethical tensions. It aims to translate ethical principles into operational measures, while adapting them to evolving standards. It also includes an ongoing evaluation of these measures on realistic use cases.  
Thank you for taking this irecs module! Your feedback is very valuable to us and will help us to improve future training materials. We would like to ask for your opinions: 1. To improve the irecs e-learning modules 2. For research purposes, to evaluate the outcomes of the irecs project. To this end, we have developed a short questionnaire, which takes 5 to 10 minutes to complete. Your anonymity is guaranteed; you won’t be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below. This link will take you to a new page: [https://forms.office.com/e/3puN6rfFYS https://forms.office.com/e/3puN6rfFYS] Thank you!  +
[[File:Quiz blocks.png|center|frameless|600x600px]] You can try these questions to check whether you have achieved the intended learning outcomes of this module. No one else will see your answers. No personal data is collected.  +
[[File:S6.png|center|frameless|600x600px]] '''Equality''' Equality implies that people are treated equally in terms of rights, access to services, and so on, without discrimination or unfair advantage. In the context of social justice and human rights, equality involves equal access to resources and opportunities, as well as ensuring that individuals are not disadvantaged or marginalised. This can include efforts to address systemic inequalities, discrimination, and barriers to full participation in society. '''Equity''' Equity refers to fairness in the distribution of resources, opportunities, and rights. It involves ensuring that everyone has access to what they need to thrive and reach their full potential, regardless of their background, identity, or circumstances. Unlike equality, which aims to treat everyone the same, equity recognizes that different individuals or groups may require different levels of support or resources to achieve equal outcomes. [[File:S7.png|center|frameless|300x300px]] '''Human Dignity''' Human dignity can be thought of as the inherent value and worth that every individual possesses. It encompasses the idea that each person is deserving of respect, honor, and ethical treatment, regardless of their background, identity, or circumstances. Respect for human dignity requires fostering environments that value diversity, protect human rights, and ensure the wellbeing and dignity of every individual within society. '''Inclusion''' Inclusion in research requires the active and meaningful involvement of individuals and groups from diverse backgrounds, identities, and experiences. It requires the creation of research designs, environments, systems, and policies that value and respect the contributions and perspectives of every individual, regardless of differences such as gender, ethnicity, religion, sexual orientation, disability, or socioeconomic status.
'''Solidarity''' Solidarity refers to cooperation and mutual support amongst individuals or groups, especially in pursuit of common values or goals. Solidarity recognizes the interdependence and interconnectedness of all members of society and emphasizes the importance of working together to create positive change and promote the wellbeing and dignity of all individuals. '''Structural Violence''' Unlike the more obvious physical violence, structural violence refers to a hidden violence that is embedded within the structures of social, economic, political, and cultural systems. It is rooted in unjust systems and power imbalances and operates through unequal power dynamics, systemic injustices, and institutionalized inequalities that provoke and perpetuate harm and suffering. Addressing structural violence requires challenging and transforming the underlying structures, systems, and ideologies that perpetuate inequality and oppression.  
[[File:Ext.Image17.png|center|frameless|600x600px]] Many of the ethics issues related to XR are not unique. For instance, matters related to data protection, privacy, and confidentiality in XR appear to be similar to those in other technologies, such as biobanking or the use of artificial intelligence in healthcare. However, some aspects may require specialist consideration: for instance, the involvement of children, perceptions of reality, work training, and cybercrime. An ethics-by-design approach is advocated by many, whereby ethical considerations are integrated into the design and development process of products, services, technologies, or systems from the outset. The goal is to proactively identify, address, and mitigate ethical risks and concerns throughout the entire lifecycle of a product or service, rather than addressing them as an afterthought or in response to ethical dilemmas that arise later. However, an ethics-by-design approach can be difficult to reconcile with standard procedures for ethics approval, which generally require a detailed description of how ethics issues will be addressed before the onset of a project. There are currently no specific EU or international guidelines governing XR. Listed in the Further Resources section of this module are the current most relevant sources of ethics guidance, as well as the most relevant related regulations.  +
Researchers should keep in mind how pandemic conditions may affect all stakeholders in a study (participants, healthcare staff, support staff etc.) and take appropriate measures to ease any additional burdens.  +
In this lecture, Panagiotis Kavouras discusses trust and trustworthiness in Open Science. The first segment describes trust and its relevance to science, and argues that trustworthiness is a more pertinent concept in this context. It further explains that transparency in research conduct is a condition of trustworthiness. The second segment examines how transparency relates to Open Science practices and how it can be viewed in the context of translational research innovation. '''Watch the lecture and then answer the questions.''' '''Further reading:''' Peels, R., & Bouter, L. (2023). Replication and trustworthiness. Accountability in Research, 30(2), 77–87. https://doi.org/10.1080/08989621.2021.1963708 Merton, R. K. (1942). The Normative Structure of Science. Retrieved July 22, 2025, from https://www.panarchy.org/merton/science.html Kerasidou, A. (2017). Trust me, I’m a researcher!: The role of trust in biomedical research. Medicine, Health Care and Philosophy, 20(1), 43–50. https://doi.org/10.1007/s11019-016-9721-6  +
Alzubaidi, Laith, Jinglan Zhang, Amjad J. Humaidi, Ayad Al-Dujaili, Ye Duan, Omran Al-Shamma, J. Santamaría, Mohammed A. Fadhel, Muthana Al-Amidie, and Laith Farhan. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data 8, no. 1 (31 March 2021): 53. https://doi.org/10.1186/s40537-021-00444-8. Bertuzzi, L., ‘AI Act: EU Countries Headed to Tiered Approach on Foundation Models amid Broader Compromise’, [http://www.euractiv.com/ www.euractiv.com], October 17, 2023. https://www.euractiv.com/section/artificial-intelligence/news/ai-act-eu-countries-headed-to-tiered-approach-on-foundation-models-amid-broader-compromise/ Bommasani, R. et al. (2022) ‘On the Opportunities and Risks of Foundation Models’. arXiv. Available at: https://doi.org/10.48550/arXiv.2108.07258. Castellino, Ronald A. Computer aided detection (CAD): an overview. Cancer Imaging 5, no. 1 (23 August 2005): 17-19. https://doi.org/10.1102/1470-7330.2005.0018. Chan, Heang-Ping, Lubomir M. Hadjiiski, and Ravi K. Samala. Computer-Aided Diagnosis in the Era of Deep Learning. Medical Physics 47, no. 5 (June 2020): e218-27. https://doi.org/10.1002/mp.13764. Chapman, Benjamin P., Feng Lin, Shumita Roy, Ralph H. B. Benedict, and Jeffrey M. Lyness. Health risk prediction models incorporating personality data: Motivation, challenges, and illustration. Personality Disorders: Theory, Research, and Treatment 10, no. 1 (2019): 46-58. https://doi.org/10.1037/per0000300. Coalition for Health AI (2023) Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare. https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf Davenport, Thomas, and Ravi Kalakota. The potential for artificial intelligence in healthcare. Future Healthcare Journal 6, no. 2 (June 2019): 94-98. https://doi.org/10.7861/futurehosp.6-2-94. Jones, E. (2023) Explainer: What is a foundation model?
Ada Lovelace Institute, 17 July 2023. https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/ Harrison, Conrad J., and Chris J. Sidey-Gibbons. Machine learning in medicine: a practical introduction to natural language processing. BMC Medical Research Methodology 21, no. 1 (31 July 2021): 158. https://doi.org/10.1186/s12874-021-01347-1. Shaikhina, Torgyn, and Natalia A. Khovanova. Handling limited datasets with neural networks in medical applications: A small-data approach. Artificial Intelligence in Medicine 75 (1 January 2017): 51-63. Sharon, T. (2018). When digital health meets digital capitalism, how many common goods are at stake? Big Data & Society, 5(2). https://doi.org/10.1177/2053951718819032 Sordo, Margarita. Introduction to neural networks in healthcare. Accessed 15 August 2023. https://www.academia.edu/20719514/Introduction_to_neural_networks_in_healthcare. Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: an introduction. Second edition. Adaptive computation and machine learning series. Cambridge, Massachusetts: The MIT Press, 2018.