Extended Reality: Ethics Issues

From The Embassy of Good Science


Instructions for: Trainee / Trainer
Goal

Module Introduction

The aim

The aim of this module is to support students, researchers, and research ethics reviewers in learning about, and reflecting upon, the ethics issues associated with the development and use of extended reality.

Learning outcomes

At the end of this module, learners will be able to:


  • Consider the primary ethical issues related to the development and use of XR technologies.
  • Outline the challenges related to privacy and personal data processing for XR technologies.
  • Identify the implications for energy and resource consumption in relation to the development and use of XR technologies.
  • Access guidelines and further resources for ethics assessment of XR research and development.
Duration (hours): 1
Part of: iRECS

1. An Example Project: Addressing Social Anxiety Disorder Through VR

[Image: Ext.Image1.png]


A private technology company is collaborating with a team of university researchers to design and develop a VR application that will provide therapy for individuals with social anxiety disorder. Social anxiety disorder is characterised by an overwhelming fear of social situations, such that even everyday activities (like shopping, going to work or speaking on the phone) can cause great distress.


The researchers intend to create immersive virtual environments where users can practice social interactions in realistic scenarios, such as job interviews, public speaking, or social gatherings. The VR therapy aims to help users overcome their anxiety by gradually exposing them to challenging situations in a controlled and supportive environment.


Imagine that you have been invited to be a participant in this study. What concerns might you have? Take a few minutes to put yourself in the participant's shoes and think about the concerns they might have. Then check them against the primary ethics issues identified below.


Feedback

This brief scenario highlights some of the ethical issues that need to be considered when research involves a VR application. These issues can have impacts at many levels, from the personal through to the societal and environmental. In the rest of this module, we consider these and the other primary ethical issues in more detail.

2. XR Data & Privacy Concerns

[Image: Ext.Image2.png]


The primary issues related to Extended Reality data processing concern privacy and confidentiality, data security and breaches, the volume of data extracted, and the lack of clarity around the sharing of information.


Feedback

XR platforms and applications can collect many different types of personal data from users. This includes biometric information (such as facial expressions or eye movements), location data and device identifiers, as well as potentially sensitive content from personal conversations, or confidential information that is shared within virtual environments.


As the technology develops, new types of data are being processed. For instance, eye-tracking technology has the potential to gather highly sensitive data about individuals, including their gaze patterns, attentional focus, and emotional responses. Such data can be used to infer personal preferences (including sensitive ones, such as sexual preferences) and certain health conditions, such as autism and attention-deficit/hyperactivity disorder (ADHD).
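
To make the breadth of this data concrete, here is a purely illustrative sketch; the type and field names below are hypothetical and are not drawn from any particular XR platform. It simply shows how many different categories of personal data a single XR session might record:

```typescript
// Hypothetical illustration only: not the data model of any real XR platform.
interface XrSessionRecord {
  deviceId: string;                               // device identifier
  location?: { lat: number; lon: number };        // location data, if collected
  biometrics: {
    gazeSamples: Array<{ t: number; x: number; y: number }>; // eye-tracking stream
    facialExpression?: string;                    // e.g. an inferred affect label
  };
  voiceChatTranscript?: string[];                 // potentially sensitive conversational content
  sharedDocuments?: string[];                     // confidential material brought into the environment
}

// Even this simplified record mixes device identifiers, biometric data and
// conversational content, each of which may need its own legal basis,
// retention period and sharing policy.
const example: XrSessionRecord = {
  deviceId: "hmd-0001",
  biometrics: { gazeSamples: [{ t: 0, x: 0.42, y: 0.17 }] },
};
```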

3. XR and Privacy

[Image: Ext.Image3.png]


Aside from the processing of personal data, other privacy matters concern the use of specific applications such as telemedicine (particularly regarding the confidentiality of medical information), the potential use of brain data, data collection for industry purposes, and governmental surveillance. Users need to be informed about the types of data collected, the purposes for which they will be used, and any third parties with whom they may be shared.


Informed consent from users may be needed to ensure that they understand and agree to the terms of data collection and usage within XR environments. Additionally, users should have control over their privacy settings and preferences, allowing them to adjust those settings, manage data-sharing permissions, and delete or anonymise their data as they wish.


But user consent and control may not always be possible. For instance, AR applications in public spaces raise concerns about reasonable expectations of privacy as they process and aggregate data about a user's broad surroundings in real time. This information gathering may require special consideration for bystander privacy, especially when government and law enforcement agencies use the technology.


Regardless of the setting, XR developers and platform operators must comply with relevant privacy laws and regulations, such as the General Data Protection Regulation (GDPR) in the European Union, or the California Consumer Privacy Act (CCPA) in the United States. This includes providing users with rights to access, rectify, and delete their personal data, as well as implementing mechanisms for data portability and transparency.
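
As a minimal sketch of what such mechanisms could look like in practice (the function and type names are hypothetical, and this is not a reference implementation of GDPR or CCPA compliance), a platform might route data-subject requests along these lines:

```typescript
// Hypothetical sketch of data-subject request handling; actual legal
// requirements depend on jurisdiction and on the data actually processed.
type SubjectRequest =
  | { kind: "access"; userId: string }                                  // right of access
  | { kind: "rectify"; userId: string; field: string; newValue: string } // right to rectification
  | { kind: "erase"; userId: string }                                   // right to erasure
  | { kind: "export"; userId: string };                                 // data portability

interface DataStore {
  read(userId: string): Record<string, string>;
  update(userId: string, field: string, value: string): void;
  delete(userId: string): void;
}

function handleRequest(store: DataStore, req: SubjectRequest): string {
  switch (req.kind) {
    case "access":
      return JSON.stringify(store.read(req.userId));
    case "rectify":
      store.update(req.userId, req.field, req.newValue);
      return "rectified";
    case "erase":
      store.delete(req.userId);
      return "erased";
    case "export":
      // Portability: return the data in a structured, machine-readable format.
      return JSON.stringify(store.read(req.userId));
  }
}
```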

4. XR, Healthcare and Wellbeing

[Image: Ext.Image4..png]


While the use of XR offers many potential benefits (for example, in training, design, or surgical planning), it also presents potential risks to health and wellbeing.

[Image: Ext.Image5.png]


Furthermore, given that the technology is relatively new, the long-term impacts upon physical health and wellbeing are not yet well understood or evidenced.

5. XR, Healthcare and Wellbeing (continued)

[Image: Ext.Image6.png]


The use of XR also poses risks to mental health and wellbeing, inducing new problems and/or exacerbating existing concerns. Psychological impacts, including addiction and the need for a resting period after extended XR use, underscore the mental health dimensions associated with immersive technologies. However, it must be stressed that many of the alleged risks to mental health and wellbeing are currently tentative. Because this is a fairly new field of application and research, the evidence for both harms and benefits is relatively scant.


Consequently, it is difficult to predict who precisely is at risk of harm, in what circumstances, and how this is best addressed. Nevertheless, adherence to the precautionary principle would entail that proactive measures to prevent harm are taken, even in the absence of conclusive scientific evidence. Click on the images to read more about some of the primary concerns for mental health and wellbeing that have been identified to date.


From a psychological perspective, there is ongoing debate about the welfare of children using immersive technologies. Children are identified as a highly vulnerable group and, as this module shows, there are many potential harms. Additionally, the long-term impacts upon psychological and emotional development are unknown and necessitate careful examination to ensure the healthy growth and development of young XR users.

6. Alexei Grinbaum - Moral Equivalence

[Image: Ext.Image7.png]


Moral equivalence in the context of VR and XR refers to the comparison of ethical behaviours and actions in digital environments with those in the real world. It raises questions about whether actions taken in virtual spaces, such as violence, harassment, or manipulation, should be judged or treated with the same moral standards as in physical reality. This concept highlights the challenge of defining ethical responsibility in immersive, simulated environments. In this expert interview, Alexei Grinbaum discusses moral equivalence and its potential impact on the wellbeing of users of VR and XR, and on wider society.


We wrote an article with a colleague about what we call moral equivalence between virtual worlds and material worlds. Now, what do we mean by that? In the material world, we have thoughts, feelings, emotions. In the virtual world, these are all imitated through computation by avatars. So, an avatar doesn't have thoughts or emotions, but it appears to have them. It creates an illusion in us humans, that if you meet an avatar in the virtual world, then this avatar may be happy or mad at us or, you know, maybe thirsty. But of course, an avatar is never really hungry or thirsty. But these projections, these, you know, feelings by projection, these are illusions because, of course, behind them there is just digital computation. But we feel that they somehow do something to us. So, they are kind of real to us through the relation that we establish with these avatars as we interact with them.


So, the question, the deep question is, these feelings by projection, these emotions by projection, they can go very far, responsibility by projection, crime and punishment by projection. When they do something to us through this relation, is that the same? Is it equivalent to what is happening in the material world? If there is an avatar that does something nasty, or something very nice to us in the virtual world and we feel for it. I mean, we do have feelings. Do these feelings matter as much as feelings in the material world?


That's the question of equivalence. And it's a dilemma. We can't really solve it here and now. We can't say yes or no because there are arguments for and against. But this is a deep philosophical and ethical dilemma of our relation to these non-material entities because they do something very real to us. Again, real, not in the sense of material, but in the sense of feeling real.


Virtual worlds are part of our reality. So how do we live with that? Do we need societal rules? Do we need laws? Do we need regulation? Do we let everyone decide for themselves or should society decide? Should a parliament decide? These are the big questions which all follow from this fundamental ethical dilemma, the question of equivalence, which doesn't have a simple solution.

7. XR and Harassment

[Image: Ext.Image8.png]


Here are some of the many forms that virtual harassment can take. For each of the following, match the type of harassment to the description.

Instances of harassment, hate speech, violent content, and XR pornography underscore the challenges surrounding user dignity and respect within virtual environments.


For instance, virtual harassment can have serious consequences for users that extend beyond the virtual environment, including emotional distress, anxiety, depression, and in extreme cases, suicide. Researchers and ethics experts need to be aware of the potential for harassment in virtual spaces, to ensure that measures are taken to protect potential users from harm.

8. XR and Harassment (continued)

[Image: Ext.Image9.png]


To address virtual harassment in VR environments, developers and platform operators can implement various measures, such as blocking and muting tools, personal-boundary settings, content moderation, and accessible reporting mechanisms; a simple sketch of how such settings might be represented is given below.
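
As a purely illustrative sketch (the settings and function below are hypothetical and not taken from any existing platform), such measures might be expressed as per-user safety settings that the platform consults before presenting another avatar to the user:

```typescript
// Hypothetical per-user safety settings; real platforms differ in naming and scope.
interface SafetySettings {
  blockedUserIds: Set<string>;    // blocked users are not rendered at all
  mutedUserIds: Set<string>;      // muted users are visible but silent
  personalBoundaryMetres: number; // minimum distance other avatars may approach
}

interface AvatarPresence {
  userId: string;
  distanceMetres: number;
}

// Decide how (or whether) another avatar should be presented to this user.
function presentAvatar(
  settings: SafetySettings,
  other: AvatarPresence
): { render: boolean; audio: boolean; pushedBack: boolean } {
  if (settings.blockedUserIds.has(other.userId)) {
    return { render: false, audio: false, pushedBack: false };
  }
  return {
    render: true,
    audio: !settings.mutedUserIds.has(other.userId),
    pushedBack: other.distanceMetres < settings.personalBoundaryMetres,
  };
}
```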

9. XR, Violence and Crime

[Image: Ext.Image10.png]


While the aims of the research projects that you encounter are likely to be positive and beneficial, it is essential to recognise that the growing use of XR technologies opens new opportunities for criminal behaviour, which might take the form of cybercrime.


Cybercriminals might exploit vulnerabilities in XR software or networks to steal personal information, financial data, or intellectual property. XR technologies might also be exploited for fraudulent activities and scams.


Fraudulent activities could include deceptive advertising or phishing schemes conducted through XR-enabled communication channels. Crimes can also be committed in the virtual world, for example rape or other forms of virtual assault or violence against an avatar, which, given the bond between a person and their avatar, can have a serious impact on the victim. Correspondingly, there is ongoing debate about the status of crimes in the virtual world and whether they are equivalent to crimes in the real world.


For instance, concerns about the sexual assault and rape of avatars raise questions about virtual crimes and their impact on users' emotional wellbeing.  Issues like violence, murder, or unwanted disappearance of avatars, as well as child crime in XR, demand ethical scrutiny regarding the depiction and consequences of violence in these digital spaces. Furthermore, the involvement of non-human agents, including AI, in violent or criminal acts complicates the attribution of responsibility in virtual environments.


If someone engages in harmful or aggressive behaviour towards another person in a virtual environment, do you think they should incur the same consequences as they would in the real world?


The continuum of immersive experience perceived by human users makes it impossible to draw a rigorous or fixed boundary between the material and the virtual environment beyond which the ethical questions of XR would no longer apply. XR technologies can provide highly immersive experiences, blurring the lines between the virtual and physical worlds. As a result, violent content in XR has the potential to be more emotionally impactful and realistic than traditional media forms, potentially desensitising users to violence or causing psychological distress.


Do you think that violent content in XR is likely to increase, decrease or have no effect upon the tendency for violent behaviours in the real world? There is ongoing debate regarding the impact of violent XR content on user behaviour. Some studies suggest that exposure to violent virtual environments may lead to increased aggression or desensitisation to violence, while others argue that the effects may vary depending on individual factors and context.


Addressing crime in the context of XR requires collaboration between XR developers, platform operators, law enforcement agencies, policymakers, and users. Strategies for addressing XR-related crime may include implementing robust security measures, enforcing community guidelines and codes of conduct, providing user education and awareness programs, and establishing legal frameworks to address virtual crimes that have real-world impacts.

10. XR Manipulation and Nudging

[Image: Ext.Image11.png]


XR manipulation refers to the intentional alteration or distortion of reality within virtual environments. It can alter users' perception of reality, creating illusions or deceptions that trick users into perceiving virtual content as part of their physical environment. It can also be used to control the narrative within immersive experiences in order to shape users' understanding, interpretation, and beliefs. The emergence of virtual beings (for instance, avatars representing deceased individuals) introduces complex ethical questions regarding identity and agency.


Immersive technologies can also incorporate nudging techniques that are used to guide users' actions, shape their experiences, or promote certain outcomes. In the context of VR, ‘nudging’ refers to the application of certain measures to subtly influence the user’s decision-making. For instance, it may involve prompts, reminders, or visual cues; the presentation of options in specific ways; portraying particular behaviours as the social norm; or the offering of rewards or incentives. Given the intention to influence, the use of nudging techniques has ethical implications related to user autonomy and informed consent, and so needs to be considered carefully.


While these facets can enhance immersion and entertainment value, they can also invoke ethical concerns related to transparency, consent, and user agency. XR manipulation can be exploited for malicious purposes, such as spreading misinformation, creating deceptive experiences, or manipulating users' behaviour for financial or political gain. Safeguards need to be implemented to prevent misuse of XR technologies and protect users from harmful manipulation.

