AI in Health: role play
What is this about?
This activity has been designed to support students in reflecting on and learning about AI in health care. Before participating in this exercise, students are asked to complete the online modules developed by IRECS on this topic.
By taking part in this activity, students work towards the following learning goals and become:
- knowledgeable about relevant literature, developments and regulations with regard to the topic addressed
- able to indicate which ethical issues are pressing in research concerning AI in healthcare contexts
- able to apply relevant ethical concerns to a case
- aware of how the learning materials are relevant for their professional context
Preparation
Before participating in this activity, students prepare by completing (one of) the e-modules:
- Case studies: AI in Healthcare
Introduction to the exercise
Every participant plays one of the following roles: Healthcare professional (physician); Representative of “HealthAI”; Patient rights advocate; Medical ethicist; Representative of human resources of the hospital; Representative of a health insurance company.
The experts are invited to engage in a dialogue and to learn from each other’s perspectives. The aim is to formulate advice for the executive board.
Before starting the exercise, it can be useful to emphasize that the groups are invited to engage in dialogue rather than debate.
To encourage the dialogue a list of questions has been prepared (see step 3).
Forming groups
Participants are divided into groups of 6 or fewer, with each participant in each group assigned one of the following roles:
- Healthcare professional (physician)
- Representative of “HealthAI”
- Patient rights advocate
- Medical ethicist
- Representative of human resources of the hospital
- Representative of a health insurance company
Case presentation
The case is presented to the group. It is helpful to distribute the case to students in printed or digital format so that they can access it during group work. The case is as follows:
In a large hospital, the executive board is considering purchasing a new AI system called “HealthAI”. HealthAI is designed to assist healthcare professionals in diagnosing patients based on personal data (such as lab results, DNA material and patient history); it can also be used to assist with administrative tasks. However, concerns have been raised at the national level regarding the use of HealthAI due to, for example, privacy risks and system errors (false-positive diagnoses). At the same time, the healthcare sector is facing an urgent shortage of health personnel as a result of societal challenges (e.g. an aging population).
Participants are invited to engage in a conversation about the presented case. Playing the role of experts, they are invited to have a dialogue and learn from each other's perspectives. The aim is to formulate advice for the executive board. The following questions can support reflection and stimulate dialogue. These are questions that each expert might ask from their own specific perspective:
Healthcare professional
- How will HealthAI affect your day-to-day work?
- How will HealthAI affect your relationship with the patient?
- Do you have concerns about informed consent? According to HealthAI, all patient data will be used for this AI system.
- What conditions need to be met, and what is needed, to implement HealthAI?
Representative of HealthAI
- Who bears responsibility when errors are made?
- A comment to bring to the table: the system prevents many human errors.
Patient rights advocate
- How can we ensure that patients have a good understanding of how HealthAI will handle their data?
- May patients refuse a treatment suggested by HealthAI or express a preference for another one?
- Can data security be fully guaranteed?
Medical ethicist
- Early experience with AI systems has shown that they can hallucinate and that biases (e.g. against certain patient groups) can be built into the algorithm. To what extent are these errors corrected in the system or by a healthcare professional?
- What ethical dilemmas or concerns do you foresee? How can we deal with them?
Representative of Human Resources of the hospital
- What kind of expertise do we need to have in our hospital to successfully implement HealthAI? Do we need to recruit new employees, or what training should be organized by whom?
- What kind of impact may it have on the reputation of the hospital?
Representative of a health insurance company
- How will this affect health insurance? Should we offer different types of insurance?
- If a cheaper and perhaps better AI system for health care becomes available outside our nation, can we recommend it and put it into practice?
Forming advice and reporting to plenary
Each group formulates its advice and reports it back in the plenary session. The teacher facilitates the presentation of each group's advice and encourages participants to ask questions and reflect on similarities and differences among the groups' opinions and advice.
Wrap up and take home message
Participants are supported in formulating a take-home message from the session. The teacher encourages them to consider the content of the modules they studied in preparation and asks them to reflect on how it relates to the case they have just explored.