Text (Instruction Step Text)
From The Embassy of Good Science
This part consists of five instructions, one for each exercise. The links to each instruction are listed below:
[https://public.3.basecamp.com/p/R4yJUVZ4mcprNtQrFpqbJVEi Debate & Dialogue]
[https://public.3.basecamp.com/p/MekBVkdTSuPp3ymjvm722tuZ Virtues & Norms]
[https://public.3.basecamp.com/p/nDoUx1DZC2rqGGS8YgFKCWQ9 The Middle Position]
[https://public.3.basecamp.com/p/Shd751EBvhJCBmVHy3FtsM61 Modified Dilemma Game]
[https://public.3.basecamp.com/p/hP14Zorvd8WANTar3gJegFi4 Self Declaration Approach] +
The goal of this module is to explore and question ethical issues in research using the themes of space science and space exploration. The aim is not to become a topic expert, but to use space exploration as a vehicle to speculate on and question issues around research integrity and research ethics. The themes will be explored using various teaching approaches, including a walking debate and a design task that involves building a space habitat. These are designed to challenge students to create, collaborate and question what research integrity looks like in an applied group challenge. +
To make sure participants know what they need to do to prepare for the training, ask them to read the "Preparation for the Virt2ue training" section (prepared for trainees). Set clear deadlines and make sure your instructions for submitting the required preparatory materials (the self-declaration form and the case commentary form) are clear. +
Before moving on to the next step, give participants the definitions of the concepts of virtue and norm (see the pages <u>[https://embassy.science/wiki/Theme:B4f7b2e3-af61-4466-94dc-2504affab5a8 Values and norms]</u> and <u>[https://embassy.science/wiki/Theme:520b3bc7-a6ab-4617-95f2-89c9dee31c53 Virtues related to research integrity]</u>). If anything has raised questions or doubts in the participants' minds, you can address these points at this stage. +
After your presentation, you may be asked to actively listen to what the other participants say. If the game is played in small groups, it can be useful to hold a plenary debriefing session so that you can ask each other questions and identify the dilemmas, the reasons behind the choices made, and even more general themes. +
Invite the participants to continue the session in plenary. Encourage them to reflect on their individual choices and to talk with one another using the dialogical approach. Help them become aware of what the socially acceptable answers are. You can do this by asking the participants the following questions:
- What would you do in this situation? Why?
- What would you ''ideally'' do in this situation? Why? +
Now have the two subgroups interact with each other again, but this time with a "dialogical attitude".
Same or a different case example?
* It is an advantage to use the same case example and the same groups as in step 3). At this point in the exercise, however, it can be challenging for participants to actively change their own basic attitude (from debate to dialogue). For this reason, it may be useful to introduce a new case example and assign new groups to represent the different positions of the dilemma; participants will then find it easier to adopt a different attitude.
Caution:
* Experience with this exercise has shown that participants tend to simply start debating again. As the facilitator of the dialogue, stay alert for signs of a debate (e.g. participants interrupting each other, not listening to each other, communicating with hasty judgements or judgemental gestures, or slipping into self-defence mode instead of asking clarifying questions). If that happens, stop the conversation and help the participants reflect on what just happened. The following questions can be useful:
o What is happening right now?
o What are you noticing right now?
o Can someone explain or describe what just happened?
o (After the description of what just happened:) What could you do instead? (Refer here to the characteristics of a dialogue.)
At this point, try to elicit statements that describe the attitude in the interaction just experienced as concretely as possible (e.g. we tried to convince each other, we interrupted each other, ...). Once you have worked this out, give the participants tips on how to take up a dialogue in the coming interaction, e.g.:
o What would help you understand the other position better?
o What questions could you ask?
o What can you do to get the other group to ask you questions?
o What can we change to strengthen the dialogue?
Before proceeding with the exercise, explain the concepts of values / virtues and norms to the participants using a short definition (see "Values and virtues" and "Virtues in Research Integrity"). If the participants have questions, or if there is doubt whether everything has been understood, now is the time to talk about it. +
EnTIRE is the project behind [[Main Page|The Embassy of Good Science]], a wiki-based, community-driven platform on research ethics and integrity that provides resources such as guidelines, cases and training materials for and by researchers and research stakeholders who want to support good science. +
Split up the group (if more than 10 people attend) into smaller groups. For this session you can pick among the following activities. Each of these activities has been presented in a separate module and applied to a specific topic. You can use these instructions as a guide and adapt the format to your training topic.
· Case study
· Mind mapping
<span lang="EN-US">· [[Instruction:1d832939-90e0-4879-a557-e60627c0555e|Role play]]</span>
<span lang="EN-US">· </span>Think out loud
At the end of the group work, facilitate a plenary reflection, reporting back and harvesting the results of the subgroups' discussions. +
The final kind of resource that you can add to the Embassy is interactive content, made with H5P! There are many different types of interactive content that you can make with H5P, whether games, quizzes, course presentations or interactive videos, all aimed at exploring concepts relating to research ethics and research integrity and at helping trainers teach and facilitate reflection on these topics.
'''We will cover how to create and save your own interactive content in the next section of this course.''' +
[[File:A spyglass.png|center|frameless|600x600px]]
Sometimes it's not just the presence but the viewpoint of the observer that changes the interaction with the observed.
The third influencing factor upon what is observed stems from the viewpoint of the observer. Researchers are not neutral processors of information. As human beings, they bring with them a host of assumptions and preconceptions.
Observation is dependent upon and coloured by our individual senses and our background beliefs and assumptions. In research, many of our background beliefs and assumptions are associated with the paradigm in which we operate, as we consider next. +
[[File:M7..png|center|frameless|600x600px]]
Thus far we have mainly focused on the ethics dumping issues identified in this proposal. However, it is also important to consider the proposed use of AI technologies in the study and to examine the ethics issues arising from this aspect of the research.
Some AI-related risks have already been touched on in Brad and Janet’s podcast (e.g. explainability, informed consent, accountability), and in Dr Langa’s advice on ethics dumping (e.g. data ownership and access, and capacity building), demonstrating that ethics issues may overlap different domains.
Professor Smith and Dr Jones request a report from their colleague Dr Corry to summarise the AI ethics issues and potential impacts relevant to the proposed study. Watch the video to access Dr Corry’s report. +
[[File:M7.png|center|frameless|600x600px]]
We now consider a short example case in which social justice is relevant. In this case, both the exclusion and inclusion of a specific population provoke ethical and methodological questions. We encourage you to reflect on the intersection of social justice and research ethics in this case to consider the pros and cons of inclusion and exclusion. +
[[File:Mm6.png|center|frameless|600x600px]]
Benefit sharing is a legal requirement for all countries that have adopted the CBD. But what constitutes appropriate benefits for those who share their resources in research? Click on the hotspots to see some suggestions.
[[Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity]]
Appropriate benefit sharing arrangements may include a package of both monetary and non-monetary benefits. Most important is that fair benefits arrangements are discussed and agreed locally before the research begins. +
[[File:AI img6.png|center|frameless|600x600px]] +
David Shaw shares his perspective as a REC chair working in Europe. +
The term "slippery slope" refers to an argument claiming that an initial action will trigger a series of further events that ultimately lead to undesirable outcomes. So, if we decide to allow a procedure that cures cystic fibrosis in patients, for example, the argument claims that this will lead to more controversial procedures, such as editing cells to alter people's height, and this will then lead to enhancements such as choosing people's eye colour, or other very controversial procedures.
The problem with human enhancement is that there is no consent, because human enhancement needs to be done before the birth of a baby. So, there's no consent from the baby, obviously. So, we would need to ask the parents. And that could be a problem, because the parents' intentions might not be aligned with the baby's intentions.
There's also the problem of accessibility, because obviously if people have to pay for it, then it would be accessible for rich people, but not for poorer ones. And that also leads to a problem of fairness.
And it could lead to a two-class system where rich people have access to enhancement, whereas poorer people don't have access. And that would be a problem for society as well. +
[[File:AI Image5.png|center|frameless|600x600px]]
Alexei Grinbaum shares his thoughts on reasons to be cautious about AI.
'''Reasons to be cautious about the use of AI'''
As with all artificial intelligence systems, there should be limits, filters, and controls. We shouldn't let it go completely unchecked. First, of course, there is the classic question of data: personal data, sensitive data. This is about health data, our data. Some of it is genetic. Some of it lets us identify people. How do we treat that data? That is a very classic question. But beyond that, there are very interesting questions about human autonomy. Will AI systems overtake the doctors? Will they still leave a place for human contact, human warmth?
If we seek advice from an AI system, does it mean that somehow the medical profession is changing completely? So, these kinds of questions are important. Again, they're not exactly specific to the medical sphere. They also exist, for example, for AI assistants. But in the medical sphere, there are interesting questions that are a little bit more specific. Some would say, cybersecurity, you know, it's everywhere. We have all heard about robustness and cybersecurity. But in the medical sphere, if you have a device that is interacting with your body, and if somebody can hack it, well then of course, it's a direct threat to our well-being, to our health. So, the questions of cybersecurity are also very touchy, I would say, in the medical sphere.
And then beyond that, we have classic big questions about organisations. Will the whole sphere of medical care with the hospitals, you know, the emergency rooms and all of these things, how will that evolve? Should it be managed not by humans, but by robots or AI systems? Will they respond faster? How will that change the way we build our social institutions? And that's another dimension of AI in health care.
So, there are definitely benefits at each of these levels. But there are also risks, or I would say, reasons to be cautious, reasons not to go too fast with the deployment of AI systems, because the human profession doesn't change in 10 days, right? We need time to evolve. We need time to learn new skills.
So not going too fast, teaching medical doctors and healthcare professionals to work together with these systems rather than be replaced by these systems. That is something that is very important. Take time. Take the time. Be cautious about bias, discrimination, accessibility, autonomy, control, all of those different things, and not go too fast.
