XR in Research: A Case Study

From The Embassy of Good Science


Goal

The aim of this module is to facilitate reflection upon the ethics issues associated with the development and use of extended reality in a research project.


Learning outcomes

At the end of this module, learners will be able to:

  1. Identify and analyse the ethics issues and dilemmas associated with an example research proposal.
  2. Make suggestions for how the ethics issues might be addressed.
  3. Identify ethics guidelines and policies that are relevant to the proposed research.
Duration (hours): 1

Part of: iRECS
1. A Question for You

Question: What experiences and understanding about the use of VR do you bring to this case study?


Feedback

Your opinions and assumptions about the use of XR and related ethics issues will likely be influenced by your prior experiences and understanding. How do you think your current understanding will impact upon your decision-making?


As you work through this module try to keep these thoughts in mind and notice whether your opinions or assumptions about XR in research change. Even if you don’t have any experience of XR, as a REC member who is reviewing this project proposal, you are being asked to make an evidence-informed and balanced judgement call.


If you are unfamiliar with the use of XR, it might be helpful to watch the following video about some of the benefits that XR has to offer.

2. Where’s the Ethical Dilemma?


Given the existing widespread use, and the many unexplored potential benefits, one might reasonably ask why it is necessary to question the involvement of XR in research. Surely, more research is what we need. What is the ethical dilemma here? In fact, the involvement of XR in research generates many ethical issues, as addressed in the module Extended Reality: Ethics Issues.


While XR holds incredible potential across many fields, it also has the potential to cause harm and infringe upon certain rights. Furthermore, there are significant gaps in our understanding, especially regarding the long-term psychological, neurological, social, and ethical impacts of XR use. You can watch this video to find out more.


Video Transcript

XR technology has advanced significantly in recent years, but there are many areas that remain underexplored or not fully understood. These gaps in our understanding span the technical, psychological, social, and health-related aspects of XR.


In particular, the long-term impacts of prolonged XR use are not well understood. This includes psychological factors like emotional regulation, attention span, and memory. Additionally, we don't yet fully understand how extensive use of XR for social interaction might impact real-world social skills and relationships. For instance, might heavy reliance on XR for communication lead to social withdrawal or decreased empathy in the real world?


Other unknowns concern the long-term effects on mental health or the impacts of immersive XR on feelings of detachment or brain plasticity. Might prolonged exposure to XR influence how the brain processes spatial awareness, or motor skills?


The long-term physical impacts are also not well defined. We know that use of XR can cause physical problems like motion sickness, eye strain, muscle strain and fatigue. But might it also have longer term impacts upon eye health or posture etc.?


Furthermore, we don’t know whether certain individuals might be more at risk. For instance, might children or adults with mental health issues be more susceptible to harm? As with all new technologies, research is essential to help us address the unknowns, but not at the expense of risky research.

3. The Research Proposal

As you learn about the proposed research study, please refer to the downloaded document The use of XR technologies in research: A checklist for research ethics committees to help you spot the possible ethical pitfalls. Note that while XR includes virtual reality, augmented reality, and mixed reality, this case study involves only the use of virtual reality (VR). Remember that you will be asked to assess whether to approve the study, ask for changes, seek further information, or disallow the study. Make a note of any points or questions that arise for you.

4. The Role of the Research Ethics Committee

Question: As a member of the research ethics committee, what is your first impression of this proposal? Do you think that you would approve this project?


Feedback

On what did you base your decision? If you answered ‘yes’, is it because you think there aren’t any potential ethics issues, or that they are addressed adequately in the proposal?

If you answered ‘no’ or ‘I don’t know’, is it because you would need more information before you could approve? Is it because there are aspects of the proposal that you don’t understand?

With complex proposals like this, it is normally advisable to seek more information before reaching a decision unless the researchers have already addressed all relevant points that are in the document The use of XR technologies in research: A checklist for research ethics committees. Research ethics committee members might also do their own research about the topic or seek expert advice if there is insufficient technical knowledge and experience amongst the committee members.

5. The Role of the Research Ethics Committee - Exercise


In the following exercise we ask you to consider the points in each section of the document The use of XR technologies in research: A checklist for research ethics committees in terms of your role as a research ethics committee member. The exercise will take you through one section of the checklist at a time. For each section, we ask you to select the relevant items from a list of the roles of a research ethics committee. For example, when considering the section on data processing, if you think this is relevant to three of the itemised roles, you should select all three from the list.

Role of XR technologies in the project

  1. Does the project use an XR device (e.g., a headset) /XR technique, develop an XR device/technique, or both?
  2. If an XR device/technique is developed in the project, up to which technology readiness level will it be developed (research / industrial prototype / scalable commercial product)? Are compliance checks and certification included?
  3. If a third-party XR device is used in the project, is it already commercially available or a research prototype? Is it certified? Is a user manual included and made available to all participants?

Use of Artificial Intelligence (AI) in the Project

  1. Will an AI system be used or developed in the project together with the XR device? If so, is it compliant with the AI Act?
  2. Is the AI system operated under human supervision? Is this supervision occasional or continuous? What control powers does the supervisor have?
  3. Is it clear to participants which avatars or interactions in XR are controlled by humans, and which are controlled by AI, to avoid confusion and/or potential manipulation?
  4. Does the AI system use unsupervised or self-supervised learning, particularly on brain data? Are measures to enhance explicability included?
  5. Does the project include AI ethics experts or an ethics committee to oversee the development of the AI system?
  6. Does the project provide a procedure for assigning responsibility in case of damages caused by the AI system?

Data processing

  1. What are the procedures for data collection and storage? Is financial (or other) compensation offered in exchange for the collected data?
  2. If sensitive data (including biometric data, face photographs, video or audio recordings of people) is collected, is it clear how it will be used, stored, and shared? Is data collection proportional to the purpose?
  3. If brain data is collected, is it clear how it will be used, stored, and shared? Is data collection proportional to the purpose?
  4. Does the project include a clear and comprehensive informed consent procedure for data collection? Do participants have the option to withdraw or delete their data?
  5. Will the dataset(s) be open?
  6. Is the data minimization principle respected to ensure that only necessary data is processed?
  7. Will any third parties (e.g., XR technology providers, cloud storage companies) have access to participants’ data?
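
To make the consent and data minimisation items above more concrete, here is a minimal Python sketch of two of the principles a committee would look for evidence of: data minimisation (keep only the fields the analysis needs) and pseudonymisation (replace direct identifiers with keyed hashes). All field names, values, and the key-handling scheme are invented for illustration; a real study would use vetted tooling and keep the key under separate access control.

```python
import hashlib
import hmac

# Hypothetical sketch only: field names, values, and key handling are
# invented. In practice the key must be stored separately from the data
# and managed under strict access control.
SECRET_KEY = b"key-held-separately-from-the-research-data"

def pseudonymise(participant_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be recomputed by anyone
    who does not hold the key, which reduces re-identification risk.
    """
    return hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, needed: set) -> dict:
    """Data minimisation: keep only fields necessary for the purpose."""
    return {field: value for field, value in record.items() if field in needed}

raw = {
    "participant_id": "P-0042",
    "full_name": "Jane Doe",           # direct identifier: not needed
    "home_address": "1 Example Road",  # not needed for the analysis
    "session_minutes": 23,             # needed
    "heart_rate_avg": 72,              # needed
}

clean = minimise(raw, {"session_minutes", "heart_rate_avg"})
clean["pseudonym"] = pseudonymise(raw["participant_id"])
# 'clean' now holds only the two analysis fields plus a pseudonym.
```

Because the same pseudonym is produced for the same participant every time, longitudinal data can still be linked across sessions without names being stored alongside the research data.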

6. The Role of the Research Ethics Committee - Exercise Cont.


Participant wellbeing and non-maleficence

  1. Will the use of XR in the project under reasonable conditions cause or exacerbate physical problems, e.g. motion sickness, eye strain, or fatigue? If so, are appropriate mitigation measures in place such as regular breaks during sessions or monitoring of participants for signs of discomfort?
  2. Will the use of XR in the project under reasonable conditions cause or exacerbate psychological problems, e.g. emotional stress, anxiety, or dissociation? If so, are appropriate protocols in place for managing emotional distress and offering support (e.g. mental health resources or professional support)?
  3. Are measures in place to monitor participants for potential cognitive or psychological impacts over time?
  4. Will the use of XR in the project under reasonable conditions cause lasting personality effects (e.g., detachment from reality, altered social behaviours)?
  5. What measures are in place to limit offensive, harmful, or violent behaviours in the virtual environment? Are there clear and accessible ways to mitigate and report such behaviours?


Autonomy and nudging

  1. Is the XR technique likely to undermine the autonomy of participants? What measures are in place to ensure that XR interactions with avatars (including AI-powered avatars) do not violate participants’ personal space or autonomy?
  2. Are potential emotional triggers (e.g., fear or joy) justified by the research goals and managed appropriately? Are appropriate measures in place to avoid or minimise the risks of emotional manipulation or excessive nudging in the virtual environment?
  3. Will participants be fully aware of the nature of the study, the role of XR technologies, the nature of the technologies and any potential impacts on their mental or physical state? Is information provided in a clear and comprehensible manner?
  4. Is the system likely to expose users to catfishing (deception)? Is deception explicitly used in the project? If so, is it minimal, justified, and followed by debriefing?


Inclusivity and accessibility

  1. Does the project ensure inclusion of diverse demographic groups (e.g., gender, age, cultural backgrounds)?
  2. What measures are in place to ensure sensitivity to cultural and societal norms in the design of virtual environments and interactions?
  3. Are people with disabilities involved in or affected by the project? If so, are accessibility measures planned?
  4. Does the project purposefully exclude certain groups of individuals (e.g. people with disabilities)?

7. The Role of the Research Ethics Committee - Exercise Cont.


Cybersecurity

  1. Does the XR device/technique include protection mechanisms against adversarial attacks (exploiting system vulnerabilities) or hacking?
  2. Are robust data security measures, such as encryption, implemented to protect stored data from unauthorized access or breaches?
  3. Is the XR device susceptible to misuse or diversion? Is misuse plausible?

As you can see from the above exercise, the inclusion of XR in a research project can raise a broad and complex range of ethical issues that require attention from the research ethics committee for a variety of purposes. Some of the items on the checklist are relevant to assessment of the potential harms and benefits, some to the assessment of legal compliance and so on. The research ethics committee have a lot of factors to consider. Use of the document The use of XR technologies in research: A checklist for research ethics committees will help them spot these factors so that they can fulfil all aspects of their role effectively.

8. Stepping into the Shoes of the Participant

Now we ask you to drop the stance of the REC member for a moment and try to step into the shoes of a potential participant.

Please answer the following question; your responses will be recorded anonymously.

Feedback
People join studies for many different reasons, and each person brings different experiences, expectations, and concerns. Read on to see some possible initial reactions.

9. Stepping into the Shoes of the Participant Cont.

It’s not difficult to imagine a wide range of reactions to an advertisement for this project. Some people might be curious to find out more; others will be concerned about possible implications. Why might it be important for researchers to consider what motivates or deters people from participating in the study? This is vital not only for reaching the target numbers, but also for promoting fair recruitment, diversity, and inclusion. It would be pointless and unfair if, for instance, only people who already enjoy using XR for socialisation volunteered to participate. With a recruitment strategy that relies upon self-selection, this is a real possibility.

Such deterring factors touch on some of the issues identified on the XR ethics checklist. Potential participants might not even be aware of many of the ethics issues, but it is the responsibility of the research ethics committee to ensure that all are addressed appropriately.

“I’m really curious about XR technologies and I would love to have the opportunity to immerse myself in a virtual world and engage with AI avatars. I would be excited to give it a go.”

“There’s no way I’m going to wear those tracking devices. I value my privacy and who knows where that data will end up.”

“The thought of feeling cut off from reality scares me. Will I feel like I’m in control? What if I have a panic attack?”

“Loneliness is a problem for me. This study will offer an opportunity to engage more, either with avatars or new people. I wouldn’t mind either way.”

“I spend a lot of time in virtual worlds anyway. The study will give me an opportunity to reflect on how virtual environments affect my emotions and social wellbeing.”

“I’ve tried using a VR headset previously. It’s a hassle to set up, uncomfortable to wear, and it caused terrible motion sickness and headaches.”

“My disability means that I often feel isolated. But as a person with mobility challenges, I assume I can’t participate.”

“The idea that interaction with an avatar can help loneliness is warped. It’s not real!”

10. Research Ethics Committee Discuss the Proposal


Now we return to the research ethics committee perspective. Below is a hypothetical, checklist-informed debate between members of a research ethics committee about whether this VR study should be approved. The debate involves the following five characters:


  • Dr Taylor (Chair of research ethics committee)
  • Dr Evans (Bioethicist)
  • Dr Brown (Psychologist)
  • Dr Adams (Data privacy specialist)
  • Ms Amanda Lee (Lay member)

Did the research ethics committee discuss all of the issues you identified in the proposal? Did they miss anything important? They are clearly concerned about safeguarding participant wellbeing, privacy, and data protection, but have they, for example, checked for fair participant recruitment or identified all risks and benefits?

11. AI Involvement in the Study


One aspect that is neither fully described by the researchers nor explored by the research ethics committee is the AI involvement in the study.

AI involvement in the study

Participants in the VR group will interact with AI-driven avatars. This is addressed to some extent, as concerns were raised about supervision of the AI system, monitoring of interactions, and accountability. Additionally, it was noted that it wasn’t clear whether there would be an AI expert on the research team.


However, there is no AI expert on the research ethics committee either, which may be why it failed to address the issues associated with the inclusion of a generative AI element in the project. This is what the proposal states: “Additionally, the study hopes to contribute to improving AI-driven avatars in VR environments, to help make them more lifelike and responsive during human-to-AI interactions.” This statement implies that data from the project will be used to train AI models. Did you spot this in the proposal?


Generative AI models typically require significant amounts of data, often stored for long periods. This increases the risk of data breaches or unauthorized access, especially given the sensitive nature of the data collected in this project. Biometric data, coupled with detailed interaction metrics (e.g., frequency and duration of social interactions), can be highly personal. Although efforts can be made to anonymise data, the detailed nature of biometric and interaction data could lead to re-identification risks, so secure anonymisation protocols must be emphasised to protect participant identities. If people in the project are not fully aware that their data is being used to train AI models, this raises concerns about autonomy and trust.
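
The re-identification risk just described can be illustrated with a simple k-anonymity check: even after names are removed, a combination of quasi-identifiers can single out one participant. The attribute names and values in this Python sketch are invented for illustration.

```python
from collections import Counter

# Hypothetical sketch: attribute names and values are invented.
# k-anonymity asks that every combination of quasi-identifiers
# (attributes that are individually harmless but jointly identifying)
# is shared by at least k records.

def smallest_group(records: list, quasi_identifiers: list) -> int:
    """Return the dataset's k: the size of the smallest group of records
    sharing the same combination of quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"age_band": "18-25", "gender": "F", "sessions_per_week": 3},
    {"age_band": "18-25", "gender": "F", "sessions_per_week": 3},
    {"age_band": "26-35", "gender": "M", "sessions_per_week": 1},
]

k = smallest_group(records, ["age_band", "gender", "sessions_per_week"])
# k == 1: the third record is unique, so that attribute combination could
# re-identify the participant if linked with another dataset.
```

A low k signals that records should be generalised (e.g., wider age bands) or suppressed before any dataset is shared or used for model training.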


Additionally, there are issues related to data bias and fairness as unrepresentative datasets might skew the AI model’s predictions. The large datasets on which generative AI models are trained can contain biases or stereotypes leading AI avatars to exhibit biased behaviours or make stereotyped assumptions. Participants from underrepresented groups might encounter responses from avatars that reflect these biases, and there may be a lack of nuanced cultural understanding, leading to responses that feel inappropriate or insensitive.


Mitigating these risks will require transparency about the nature of the AI, strict data handling policies, and bias mitigation strategies, including fairness in participant recruitment.

For proposals like this where there are ethics issues that cross both the involvement of XR and AI, it will be helpful to also consult the document Ethics of AI in Healthcare: A checklist for Research Ethics Committees.

12. Weighing Risks and Benefits


An important role for a research ethics committee is to establish what the risks are and assess whether the risks are justified by the study’s objectives. This also involves assessment of the potential benefits of the study and checking of measures that will be employed to minimise the risks.

By now, you should have a good idea of the potential risks and benefits that are associated with this study.




Most research studies involving humans involve some level of risk, but there must be a realistic potential for benefits to justify that risk. However, if the risks (to participants, society, or the environment) are high, the study is unlikely to be permitted even when there is great potential for benefit. In all cases, reliable measures must be put in place to mitigate or minimise any risks: risks must be minimised, and potential benefits maximised.


While there may appear to be a lot of risks associated with this proposal, most can be minimised or mitigated. What might that involve? Many of the points on the document, The use of XR technologies in research: A checklist for research ethics committees are relevant to risk minimisation and mitigation.


For instance, Section 4, which is devoted to participant wellbeing and non-maleficence, asks about appropriate mitigation measures such as regular breaks during sessions or monitoring of participants for signs of discomfort, and appropriate protocols for managing emotional distress and offering support. These mitigation measures must be reviewed to ensure participant welfare, but also to help assess whether the study is justified. Do you think the potential benefits outweigh the potential risks involved in this proposed project?

13. Time to Decide

Now that you have considered this case in more depth, it’s time to decide whether you will approve the project.

Please answer the following question; your responses will be recorded anonymously.

Question: What is your decision?

Feedback

Through this case study we have sought to explore the ethics issues associated with the use of VR, AI avatars and generative AI in a research proposal. The proposed project raises a broad range of concerns, some of which are anticipated in the proposal, but many of which are not mentioned. Hence the proposal is not yet ‘approval ready’. With the help of the document, The use of XR technologies in research: A checklist for research ethics committees, it should be possible for you to spot the omissions in the proposal and formulate a list of requirements.

However, even if the researchers address all of these requirements appropriately, there may still be disagreement about whether the study can be approved, that is, whether the potential benefits justify the potential risks. Whatever your decision, we ask you to reflect back upon the first question we raised: ‘What experiences and understanding about the use of VR do you bring to this case study?’ Have your opinions and assumptions about the use of VR changed?

14. Relevant policies and guidelines

In addition to the checklist that we have been referring to throughout this case study, we also recommend consulting the policies and guidelines below when reviewing proposals involving the use of XR technologies. There are currently no specific EU or international guidelines governing XR. Listed here are the most relevant sources of ethics guidance as well as the most relevant regulations.


A not-for-profit organisation, the Metaverse Standards Forum brings together most of the industry players involved in the metaverse, with the aim of creating the conditions for its worldwide interoperability: https://metaverse-standards.org/


Data Privacy and Security

General Data Protection Regulation (GDPR)

The GDPR is a comprehensive data protection law that applies to all research involving the personal data of individuals in the EU, regardless of where the research is conducted.


Guidelines for AI and Emerging Technologies

OECD Principles on Artificial Intelligence

These principles promote responsible stewardship of trustworthy AI, calling for transparency, accountability, and data privacy. For XR studies that integrate AI-driven avatars or generative AI, these guidelines emphasise transparency in AI functionality and accountability for AI-driven outcomes.


UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021)

UNESCO’s guidelines focus on AI ethics, covering respect for human rights, transparency, and accountability. This is particularly relevant to XR studies that use AI to create immersive experiences, as researchers must ensure that AI-driven interactions are fair, transparent, and respectful of human dignity.


European Commission’s Ethics Guidelines for Trustworthy AI

These guidelines emphasise the need for AI systems to be lawful, ethical, and robust. In XR research involving AI avatars, this means ensuring that AI interactions do not mislead or psychologically manipulate participants and that data collected from these interactions complies with ethical standards.


Ethics By Design and Ethics of Use Approaches for Artificial Intelligence (2021)

https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf

Provides a comprehensive framework to ensure the ethical development and deployment of AI-driven avatars. Ethics by Design guides the development of AI systems that prioritize privacy, transparency, and fairness from the outset. Ethics of Use emphasizes ethical considerations during deployment, including informed consent, impact monitoring, participant support, and continuous feedback mechanisms.

European approach to artificial intelligence

Includes links to the guidelines, strategies, and support currently relevant in the EU. It focuses on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.


World Health Organisation: Ethics and governance of artificial intelligence for health (2021)

https://www.who.int/publications/i/item/9789240029200

Although the WHO guidelines specifically address AI in health contexts, many principles are broadly applicable to XR research that uses AI and involves psychological and biometric assessments. The guidelines reinforce the importance of transparency, accountability, privacy, risk mitigation, fairness, and informed consent—all critical to conducting AI-driven research in immersive XR environments. While these guidelines are framed around health their principles apply well to any context where AI interacts with sensitive human experiences and data.


Standards for Biometric and Sensitive Data

ISO/IEC 27001 - Information Security Management

This international standard provides a framework for data security, emphasizing the need for strict measures to protect sensitive and biometric data (e.g., eye-tracking or heart rate data). For XR studies, adhering to ISO/IEC 27001 can ensure secure handling of biometric data.


ISO/IEC JTC 1/SC 42 - Artificial Intelligence Standards

This series of standards focuses on AI, including the use of biometric and biometric-enhancing technologies. Following these standards can help manage risks associated with XR’s use of biometric data, ensuring data accuracy, protection, and responsible use.


Digital Accessibility Standards

UN Convention on the Rights of Persons with Disabilities (CRPD)

This convention calls for inclusive access to technology, which applies to XR environments. Researchers must ensure that XR experiences are accessible to individuals with disabilities, including provisions for visual, auditory, or mobility-related needs.


Web Content Accessibility Guidelines (WCAG)

Although focused on web content, WCAG principles are increasingly applied to XR to ensure that immersive environments are accessible. Following these guidelines can ensure that XR experiences are inclusive and provide equitable access to all participants.

15. Module Evaluation

Thank you for taking this iRECS module!

Your feedback is very valuable to us and will help us to improve future training materials.

We would like to ask for your opinions:

1. To improve the iRECS e-learning modules

2. For research purposes to evaluate the outcomes of the iRECS project

To this end we have developed a short questionnaire, which will take from 5 to 10 minutes to answer.

Your anonymity is guaranteed; you won’t be asked to share identifying information or any sensitive information. Data will be handled and stored securely and will only be used for the purposes detailed above. You can find the questionnaire by clicking on the link below.

This link will take you to a new page: https://forms.office.com/e/UsKC9j09Tx

Thank you!

16. Glossary


Artificial intelligence system: in the narrow sense (excluding deterministic "expert systems"), a machine learning system trained on a dataset and designed to operate autonomously, demonstrating adaptability to different inputs and producing outputs, such as predictions, recommendations, decisions or other content.


Machine learning: automatic process by which information is generated in the form of mathematical correlations from a training dataset. Types of machine learning include reinforcement learning, supervised learning (using data annotated by humans) and self-supervised or unsupervised learning (without human labeling). The result of machine learning is called an AI model. An AI model needs fine-tuning and alignment before deployment. When combined with a user interface, an AI model trained on a large dataset to perform a variety of tasks becomes a general-purpose AI system.


Fine-tuning: tailoring an AI model to perform specific tasks, by refining its training on a specialized dataset.

Alignment: the design and application of filters and control mechanisms to prevent undesirable behavior of the AI system.

Explainability: the capacity to provide textual or visual content allowing users to achieve satisfactory understanding of the causes that have led to the output of the AI system.


Reproducibility: the capacity to obtain identical or similar results on multiple runs of the AI system with the same input.


Hallucination: plausible but false or unreal output produced by the AI system.

Bias: distortions in the outputs that occur when AI systems are trained on non-representative or unbalanced datasets, producing false or discriminatory results which can lead to the loss of user trust.


Emergent capability: as perceived by the user, unpredictable characteristics or behavior of the AI system emerging without any explicit intent of the designer.


Adversarial attack: an attack involving purposefully corrupted data or malicious inputs, designed to cause errors or induce undesirable behavior of the AI system.


Synthetic data: simulated data produced by a very large AI system (itself trained on authentic or synthetic data) with the goal of training a smaller AI system.


Ethics by design: a methodology for analyzing, as early as the design phase of an AI system, the technological choices likely to give rise to ethical tensions. It aims to translate ethical principles into operational measures, while adapting them to evolving standards. It also includes an ongoing evaluation of these measures on realistic use cases.

17. References and Related Materials


Adomaitis, Laurynas, Alexei Grinbaum, and Dominic Lenzi. ‘D2.2 Identification and Specification of Potential Ethical Issues and Impacts and Analysis of Ethical Issues’. Zenodo, 30 June 2022. https://doi.org/10.5281/zenodo.7619852.

Aucouturier, E., Grinbaum, A., et al. ‘Recommendations to Address Ethical Challenges from Research in New Technologies’, 2023. https://irp.cdn-website.com/5f961f00/files/uploaded/Deliverable_2.2.pdf.

Chasid, Alon. ‘Imaginative Immersion, Regulation, and Doxastic Mediation’, 2021. https://philarchive.org/rec/CHAIIR-2.

Langland-Hassan, Peter. Explaining Imagination. Oxford: Oxford University Press, 2020.

Liao, Shen-yi. ‘Immersion Is Attention / Becoming Immersed’, manuscript. https://philarchive.org/rec/LIAIIA.

Li, Yang, Jin Huang, Feng Tian, Hong-An Wang, and Guo-Zhong Dai. ‘Gesture Interaction in Virtual Reality’. Virtual Reality & Intelligent Hardware 1, no. 1 (1 February 2019): 84–112. https://doi.org/10.3724/SP.J.2096-5796.2018.0006.

Schellenberg, Susanna. ‘Belief and Desire in Imagination and Immersion’. The Journal of Philosophy 110, no. 9 (2013): 497–517.

Suzuki, Keisuke, Alberto Mariola, David J. Schwartzman, and Anil K. Seth. ‘Using Extended Reality to Study the Experience of Presence’. Current Topics in Behavioral Neurosciences, 3 January 2023. https://doi.org/10.1007/7854_2022_401.

This case is inspired by the SHARESPACE project: https://cordis.europa.eu/project/id/101092889

The IEEE Global Initiative on Ethics of Extended Reality: IEEE SA - The IEEE Global Initiative on Ethics of Extended Reality

EU research projects involving XR:

- Empower Refugee Women through XR supported Language learning (XRWomen): https://www.motion-digital.eu/post/project-xr-women
- Volumetric 3D Teachers in Educational reality: https://vol3dedu.eu/
- REalisation of Virtual rEality LearnING Environments (VRLEs) for Higher Education (REVEALING): https://revealing-project.eu/
- Extended Reality For DisasteR management And Media plAnning (xR4DRAMA): https://xr4drama.eu/_project_/
- Augmented Reality Instructional Design for Language Learning (ARIDLL): https://aridll.eu/
