Text (Instruction Step Text)

From The Embassy of Good Science
Describe the actions the user should take to experience the material (including preparation and follow-up, if any). Write in an active way.


  • Data type: Text
Showing 20 pages using this property.
Please watch the video carefully! The purpose of this exercise is to assess your understanding of the concept of circularity and its role in addressing today’s environmental challenges. Circularity is essential because it helps reduce resource extraction, waste, and pollution while keeping materials in use for as long as possible. By fostering more sustainable production and consumption patterns, circularity contributes to protecting ecosystems, supporting equitable economies, and achieving the Sustainable Development Goals.  +
Read the slides carefully and learn about the concept of climate mainstreaming within organisations and the key steps required for its successful implementation.  +
What do we know about measuring training effectiveness? Self-assessment is one of the most prevalent means of measuring the effectiveness of REI training. The second most frequently used method for assessing training effects was a moral reasoning test. While the developed tools were mostly used pre and post intervention (with or without control groups) and the results compared, other measures were added to evaluate the learning process or student progress. It also seems that tests designed for ethics training (like DIT, DEST, TESS) cannot be universally applied to all REI training, owing to the very different training formats and/or the availability of the tests. There are also qualitative options (like learning diaries, tasks submitted during other courses, etc.) for monitoring learning progress and, through that, assessing the effectiveness of training. See figure 1 outlining the identified measures and their application scale and feasibility (more details in D4.1): Figure 1. Measurement tools identified in the literature review (numbers indicate Kirkpatrick’s levels, see below) (tool descriptions in D4.1). [[File:Screenshot 2025-11-17 205639.png|center|frameless|500x500px]] As can be seen in figure 1, self-reporting (blue bubbles) is the most feasible measure and can mostly be implemented at large scale. It is no wonder that, based on the literature review, this is also the most used approach. The SPEEES and SOLKA tests also rely on self-reporting. Most tools measure the content or the learning process (green bubbles) – they give information about what was learned during the training. As indicated, the feasibility of these measures is not high – either a lot of work needs to be put into implementing the tool, the tools are not openly available, or they may be field-specific. Possibilities for measuring behaviour (yellow bubbles) are scarce. 
Comparing results collected with various tools is almost impossible because they measure different aspects of training with different analysis instruments. It is not possible to determine whether qualitative (indicated as ‘qual’) or quantitative (indicated as ‘stat’ in Fig 1) methods of analysis are more feasible. Feasibility depends on a combination of various aspects, such as access to the tool, the need for special equipment, and the competence required.  
The aim of educating secondary school students is to raise their awareness of the ethical and integrity challenges they may face. Resources for secondary school students include (please see the last section of the BEYOND trainer guide for an overview of materials divided by topics and target groups): *The [[Integrity Teacher Guide]] for secondary school students' education. *The [[Path2integrity Learning cards]] (Path2Integrity learning cards S) focusing on students in high schools, and a dedicated handbook (S-Series handbook). For training effectiveness measurement, facilitators can use the following tools for collecting learning outputs and for analysing the collected material: {| class="wikitable" |+ !'''Tool for collecting learning outputs''' !'''Details''' !'''Analysis instrument **''' |- |'''ProLearning app''' |''ProLearning'': https://www.epfl.ch/labs/chili/dualt/current-projects/realto/ |learning analytics |- |'''Engagement app''' |''ForgetNot'' (by EduLog): https://web.htk.tlu.ee/forgetnot |learning analytics |- |'''Self-Reflection Form/Compass''' |App under development, [https://docs.google.com/forms/d/17ORaVeaLjBYucufYNGF6TNjgtqqNdlk5BhSp5bfM5eA/copy form] * (for copying and editing) |SOLO taxonomy, reflection levels, content criteria |- |'''Pre-post texts''' |Collect a short text (e.g. a response to a case or a short essay) before and after the training |SOLO taxonomy, reflection levels, content criteria |- |'''Learning diaries''' |Ask learners to keep a diary over a certain period; for each submission provide some guiding questions or topics |SOLO taxonomy, reflection levels, content criteria |- |'''Group reports''' |Ask groups working together to provide a (short) group report (or provide a template with points to work on) |SOLO taxonomy, content criteria |- |'''Group discussions''' |Monitor the group discussions to evaluate the level of understanding and the content discussed (scaffold as appropriate) |SOLO taxonomy, content criteria |- |'''Group dynamics''' |''CoTrack'' application: https://www.cotrack.website/en/ |learning analytics |- |'''Retention check''' |After a certain time (a few weeks/months), ask learners to provide a short text (analysis of a case, or a short essay on an ethics topic/question). Compare the levels of understanding to another piece collected during or right after the training. |SOLO taxonomy, content criteria |} For instance, to measure participants’ reactions during or right after the training, the ProLearning app or the Self-Reflection Form can be used. In addition, if learners worked in groups and provided a group report, the learning process can be evaluated with the SOLO taxonomy to measure the levels of understanding. Moreover, if possible, a couple of months after the training an additional case study could be given to the same learners, and the content of their analysis could again be evaluated with the SOLO taxonomy. This kind of effectiveness measure makes it possible to triangulate the measurement at different time points.  
There is evidence of how such a framework can be used to analyse reflective journals/learning logs (see Bell et al., 2011). We have also tested the feasibility of this framework in the context of REI. Figure 1 illustrates how reflection levels are displayed during a 6-week diary-keeping period related to REI learning. As indicated, some participants (P1–P5) show varying levels, while others remain at a constant level. The exploration suggests that it is possible to analyse reflective journals/writing in the REI context by applying the framework of levels of reflective thinking. Figure 1. Example of analysis results (reflection levels) of learning diaries by 5 training participants (P1-5). [[File:Img7.png|center|frameless|500x500px]]  +
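As an illustration of the kind of coding behind such a diary analysis, the sketch below tabulates weekly reflection levels per participant and flags whether a trajectory varies or stays constant. The level names and the diary data are hypothetical placeholders, not the project's actual coding scheme.

```python
# Hypothetical ordered reflection levels (lowest to highest); replace with
# the coding scheme actually used when analysing the diaries.
LEVELS = ["habitual action", "understanding", "reflection", "critical reflection"]

# Illustrative coded diary entries, one per week, for two participants.
diaries = {
    "P1": ["understanding", "understanding", "reflection",
           "reflection", "critical reflection", "critical reflection"],
    "P2": ["habitual action"] * 6,  # a participant showing a constant level
}

def level_trajectory(entries):
    """Convert coded diary entries into ordinal scores (0 = lowest level)."""
    return [LEVELS.index(e) for e in entries]

for participant, entries in diaries.items():
    scores = level_trajectory(entries)
    trend = "varying" if len(set(scores)) > 1 else "constant"
    print(participant, scores, trend)
```

Plotting such per-participant score lists over the six weeks would reproduce the kind of overview shown in the figure.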
Ethical principles (Kitchener, 1985) can be used to evaluate the characteristics of the ethical dilemma: [[File:Img9.png|center|frameless|500x500px]] Figure 1. Ethical principles (Kitchener, 1985). In the case introduced in the section ‘What is this about?’, the following response may be given by the learners: Ethical issues that could emerge in this case: underage children, parental consent, the procedure of getting the consent, considering the wishes of the child, organising data collection, cooperation with the (pre)school, and whether it is more harmful for the child to be recorded (for research purposes) or to be upset by the procedure of fulfilling the parents’ demands. Ethical principles that can be at stake in this case: * respect for autonomy - in this case there is a conflict between the autonomy of the child and the autonomy of the parent - for the researchers there is no simple answer. The procedure of informing should be altered to prevent these situations from happening in the future. * doing no harm (non-maleficence) - in this case the researchers’ only chance of not harming the children whose parents had not given their consent would have been not proceeding with data collection, but in that case they would be harming the research/society/greater good. * benefiting others (beneficence) - the research is probably important to society, but it is not right to harm anyone to benefit others. * being just (justice) - the researchers must be fair towards the children, their parents, the school, and society - if their rights come into conflict, new means of data collection must be found. * being faithful (fidelity) - researchers must respect the wishes of the parents, but also of the children.  +
to set up and collect student responses (anonymously). The tools analyse the collected data instantly and provide teachers with an overview of the impact of the training. In addition to collecting learner reactions, the topics indicated by the app bring those aspects of learning into focus, and learners start paying greater attention to them – this may have a longer-term impact on their behaviour. ProLearning collects learner responses to teacher-generated questions (yes/no or a scale of 0-100), asks teachers to predict the learner responses, and asks them to write a short description of the learning situation. Only then can the teacher see the graph outlining learner responses (based on groups) in relation to their own predictions. If the teacher’s prediction of the performance of a group of learners is taken as the expected learning outcome, then this tool can provide a measure of the alignment or discrepancy between the expected effectiveness and the learners’ performance. In addition, engagement in activities may have an impact on how well people acquire competencies. For measuring engagement, the application ForgetNot (by EduLog) is available. In the application, learners can provide feedback on three aspects of the training: how they were engaged in the activities (behavioural aspect of learning), how they felt (emotional aspect of learning), and how relevant the knowledge was for them (cognitive aspect of learning). All these aspects are relevant for successful learning. The responses accumulate by group and provide the facilitator with information about how the training format was perceived by learners. Both applications are suitable for any educational context, including HE. Data collected by the tools provide an overview of group advancement and are more suitable for evaluating short-term effects of training at Kirkpatrick level 1 (learner reactions). 
These tools are more suitable for evaluating short-term trainings or specific activities during the training sessions. Both tools are freely available online; the teacher needs to create an account to set up inquiry sessions and see the results. Students do not need an account; they can join with the session code provided by the teacher. MMLA tools use statistics to analyse the collected information and create graphs to illustrate the results. Teachers need to be able to read the graphs and draw conclusions based on the data. Modifications can then be made to the training format and implementation based on the results. MMLA tools may be most suitable for beginners and for short trainings where collecting reactions quickly is relevant, but they can also be used with more advanced learners. The tools are available at: ProLearning: [http://www.prolearning.realto.ch www.prolearning.realto.ch] ForgetNot (by EduLog): https://web.htk.tlu.ee/forgetnot  
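The core idea of comparing a teacher's prediction with actual learner responses can be sketched as follows. This is an illustration of the idea only, not ProLearning's implementation; the 0-100 scale and the sample responses are assumptions for the example.

```python
def prediction_gap(predicted, responses):
    """Difference between the mean observed learner response and the
    teacher's predicted group response, both on a 0-100 scale.
    Positive = learners responded higher than expected."""
    mean = sum(responses) / len(responses)
    return round(mean - predicted, 1)

# A teacher predicts 70 for the group; four learners answer on the scale.
print(prediction_gap(70, [60, 80, 90, 50]))  # mean is 70, so the gap is 0.0
```

A gap near zero suggests the training outcome matched the teacher's expectation; a large gap in either direction signals a discrepancy worth examining.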
While pre- and post-tests are very common as a training effectiveness measure, we are proposing a pre- and post-text measure. Of course, tests are easy to implement and analyse (if statistical analysis is used), but an improvement in average scores may not provide the entire picture of the learning process. In addition, post-post texts can be used as a measure implemented several months after the training to assess the retention of the competencies – this may also provide insights into potential changes in the learner’s behaviour or practices (Kirkpatrick’s levels 2 and 3). The learner provides a text (either an essay or a short reflection on a case) prior to the training, then participates in training activities, and then submits another text (again a short essay or a discussion of a case). Optionally, another text can be produced several months after the end of the training. If the same analysis tool is used, the long-term impact can be measured. This measure is suitable for the HE context and for all disciplines, and it is simple to implement. Common analysis tools make the work simpler and the progress levels comparable. The texts can be evaluated based on the SOLO taxonomy and the reflection levels. Content criteria, such as ethical principles, ethical analysis, and ethical approaches, can also be sought in the texts. It may be challenging to use this measure with large groups, as reading and analysis take time, and it may be difficult to reach the learners months after the end of training. Ethics sections in doctoral dissertations can also be analysed as ‘pre- and post-texts’ if the final product can be compared to earlier drafts. The tool is suitable for use in training for all target groups in the HE context.  +
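The pre-/post-text comparison amounts to an ordinal comparison of coded levels. The sketch below assumes the texts have already been coded manually against the SOLO taxonomy; the numeric scoring is an illustrative convention, not a prescribed instrument.

```python
# SOLO taxonomy levels in ascending order of structural complexity.
SOLO = ["prestructural", "unistructural", "multistructural",
        "relational", "extended abstract"]

def solo_score(level):
    """Ordinal score for a coded SOLO level (0 = prestructural)."""
    return SOLO.index(level)

def progress(pre, post):
    """Change between pre- and post-text codes; positive = improvement."""
    return solo_score(post) - solo_score(pre)

# A learner coded 'unistructural' before and 'relational' after the training.
print(progress("unistructural", "relational"))  # prints 2
```

Adding a third, post-post text coded the same way would give the retention measurement at a later time point described above.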
While national REI surveys (barometers) may not address training directly, they can be used as macro-level, long-term reflections of the state of REI in a given context, and as such they may also reflect whether training efforts, in a broad sense, have been effective (Kirkpatrick’s level 4). As the evaluation of training effects cannot be tied to specific training at this macro level, such surveys may indicate the extent of challenges that could have been, or can be, alleviated through training, and point to training needs. For instance, surveys usually collect information about participation in trainings and ask how confident researchers feel in dealing with ethical issues during their research (statistical data analysis). In addition, national as well as institutional surveys may provide an opportunity to collect cases of questionable practices, and future trainings could address those topics. For collecting cases researchers consider confusing or problematic, open-answer questions could be added to the surveys. In addition, the health of the entire research community can be evaluated by monitoring the leadership aspect in the surveys. For analysing this aspect, a REI Leadership framework (Tammeleht et al., 2022, submitted) can be used. The meta-analysis provides information about the wider impact of research practices in research institutions, but also helps institutional leaders support everyone in their organisation in adopting ethical research practices. This tool is suitable for use in training with ECRs and active researchers.  +
How can researchers reflect on their values, imagine alternative futures, and build solidarity in the face of shared struggles? Listen to Josie Chambers, Rianne Janssen, and Lucy Sabin explore transformative research.  +
The purpose of this exercise is to facilitate an understanding of sustainability as a wicked problem. At the end of the video, some questions will help you reflect on what you have seen.  +
This is only the H5P content, for testing purposes.  +
This training consists of group discussion of ethics cases and aims at developing REI leadership competencies. The training uses the conversation format described in the posters and includes 6 steps. These can be used to structure group conversations among colleagues, team members, etc. The posters can be presented either in printed form or as a slide deck. Decide whether you want to use the physical posters or only slides, and print the posters if necessary. You may also choose and change the cases. Please have a look at the slides below.  +
<span lang="EN-US">Open the podcast episode below.</span>  +
<span lang="EN-US">In this episode, Lucy Sabin, Josephine Chambers, and Rianne Janssen engage in a conversation about transformative research and explain how this approach to research challenges the assumption that simply producing knowledge leads to societal change. Instead, this approach asks researchers to confront hidden narratives about their role, engage creatively with imagination, and recognize research as an emotional, ethical, and relational practice—not just a rational one. Through storytelling, creative expression, and reflection (e.g., the Omelas analogy), participants explore tensions between engaging societal agendas and preserving critical, imaginative independence.</span> <span lang="EN-US">Listening to this podcast we learn that transformation can happen on two levels: externally in systems and policies, and internally in researchers’ values, motivations, and identities. Taking this approach and reflecting on one's own research practices can recenter humanity in research, showing that imagination, creativity, and self-awareness are vital for shaping futures.</span>  +
<span lang="EN-GB">Listen to the third episode of “Earth to Research” and learn how research and innovation can be re-imagined in the context of ecological crisis.</span>  +
In ecology, field research aims to understand how ecosystems work, respond, and change. But whether we’re conducting observational surveys or setting up experiments, field activities can unintentionally damage the very ecosystems that we want to protect. This raises the central question of how we can minimize the environmental impact of our fieldwork in accordance with ethical standards. To answer this question, watch the video “Doing science responsibly: Minimizing ecological footprints in field research” to familiarize yourself with basic actions that can be implemented to minimize the environmental impacts of field research activities. Note down the actions that you find most relevant to your research.  +