Using 360-degrees interactive videos in patient trauma treatment education: design, development and evaluation aspects

Extremely catastrophic situations are rare in Sweden, which makes training opportunities important to ensure competence among the emergency personnel who are expected to be actively involved during such situations. There is a need to conceptualize, design, and implement an interactive learning environment that allows the education, training, and assessment of such catastrophic situations to take place more often, and in different environments, conditions, and places. To address these challenges, a prototype system has been designed and developed, containing immersive, interactive 360-degrees videos that are available via a web browser. The content of these videos includes situations such as simulated learning scenes of a trauma team working at a hospital emergency department. Various forms of interactive mechanisms are integrated within the videos, which learners should respond to and act upon. The prototype was tested during the fall term of 2017 with 17 students (working in groups) from a specialist nursing program, and with four experts. The video recordings of these study sessions were analyzed and the outcomes are presented in this paper. Different group interaction patterns with the proposed tool were identified. Furthermore, new requirements for refining the 360-degrees interactive video, as well as the technical challenges associated with producing this content, were identified during the study. The results of our evaluation indicate that the system can provide students with novel interaction mechanisms to improve their skills, and that it can be used as a complementary tool to the teaching and learning methods currently used in their education.


Introduction
The use of simulation in nursing education has been in practice for the last 50 years, aiming to prepare students and health care professionals to exercise complex caregiving decisions in a safe environment (Aggarwal et al., 2010; Nehring & Lashley, 2009). Simulation-based training provides students with an effective way to learn new skills (Hegland et al., 2017) and enables them to gain new knowledge and experience through "learning by doing" (Hope et al., 2011). An important aspect of this is the opportunity to try different procedures repeatedly (Andersson et al., 2013). The use of interactive technologies (e.g. virtual reality, mobile devices and sensors, serious games, 360-degrees videos) is quickly becoming a key instrument in simulating a real environment that can be used for training in the healthcare sector. One of the major advantages of this approach, in comparison to solely using traditional simulation techniques (e.g., mannequins), is having a rich model that resembles a real-world environment and can train any number of users simultaneously. Relying on the affordances provided by these technologies, features such as location and context awareness, content adaptability, and predictive analytics can be used to help increase the learners' level of engagement with their learning environment. According to Gros (2016), these are characteristics that can be considered when defining what a Smart Learning Environment (SLE) is. The author also states that an SLE should advise and predict the learners' needs at any time and place. A review of the literature regarding nursing education points out that little research has been carried out exploring how professional nursing teachers and students take advantage of interactive technologies, such as those described above, to support their teaching and learning.
Therefore, to address some of these challenges, we have designed and developed a prototype system, containing an interactive 360-degrees video that runs via web browsers (currently Google Chrome). The content of the video includes a realistic case about a patient trauma treatment at the ED unit of a central hospital in southern Sweden. Various types of interaction mechanisms are integrated into the video to which learners can act and respond. The learner's navigation in the video and the interactions with the content are saved in cloud storage, so these data sets can later be processed for further analysis. The primary purpose of collecting this data is to explore the users' navigation path and find the patterns in learners' interaction with the materials as well as issues related to their experience. In order to validate our approach, we have explored its use with 17 students, from a specialist nursing program (advanced level), and four medical experts. This paper aims at investigating how the students perceived the use of this 360-degrees interactive video to train them in treating trauma patients. The article is organized as follows: Section "Background and Related Work" provides some background about the use of simulations in nurse education and related work in this field; Section "Interactive 360-degrees video for patient trauma treatment" describes the various aspects related to requirements, design, and technical issues in connection to the 360-degrees interactive videos, and our proposed web-based solution. Section "Evaluation strategy" presents our evaluation strategy, while Section "Analysis of the Gathered Data" describes the video recording analysis of each study session conducted with the students. Section "Conclusions and Future Work" concludes the paper with an analysis of the observations, discussions, and a brief description of the future lines of research.

Simulation in nurse education
Simulated trauma exercises in nursing education can be performed in the following ways: (a) traditional exercises with mannequins or humans playing a role, or (b) computer-based exercises. The traditional exercises are the most common form of training in today's nursing education. Simulation with mannequins allows students to confidently exercise, for instance, technical skills (Hegland et al., 2017) and critical thinking abilities (Cant & Cooper, 2010). Exercising with low-fidelity mannequins (less advanced) allows students to practice, for instance, pulmonary auscultation and cardiopulmonary resuscitation (CPR). Furthermore, high-fidelity mannequins simulate human physiology, including the ability to talk, sweat, etc. (Healthy Simulation, 2014). The development of computer-based simulations in nursing education has increased during the last 10 years (Poikela & Poikela, 2012). Research results indicate that students can improve their knowledge using computer-based simulations (BinSubaih et al., 2009; Day-Black, 2015; Koivisto et al., 2017). Moreover, this approach provides the instructor with detailed information about the students' performance, and it also contributes to a safe environment, where the students are allowed to make mistakes and improve their skills through detailed feedback (Bauman, 2012). Technological components and approaches of such computer-based simulations include the following: (a) mobile technologies and sensors; (b) serious gaming technology; (c) videos; and (d) virtual reality (VR) technology. Mobile technologies use a platform, including mobile devices and sensors, where relevant information and facts are gathered. Additionally, this platform assists students in making correct decisions during the care of a simulated patient (Athilingam et al., 2016; Sharples et al., 2009). Mobile technologies are used in some nursing areas, such as obstetrics and gynaecology (Chang et al., 2017).
Another good example is using mobile devices in combination with augmented reality (AR) to teach clinical lab skills (Garrett et al., 2018). This approach allows for the active learning of theoretical material (different nursing equipment in labs), individually or in groups. Serious gaming presents simulations in a realistic environment (Bellotti et al., 2011), and videos provide students with a realistic picture of a patient (Ikegami et al., 2017). Both these approaches can help improve students' knowledge and their performance (Knight et al., 2010). It is important, however, to keep in mind that solely using such learning tools is not enough; they need to be complemented by regular knowledge-acquiring activities, as illustrated by Holzinger et al. (2009). Using simulation software can help students increase their motivation and their ability to learn in a non-threatening environment (Blakely et al., 2009). However, using interactive simulations can impose a considerable cognitive burden, hindering the learning process. Video simulations have demonstrated that they can be used to prepare students for a real working environment, making them feel more secure and supporting active learning (Del Blanco et al., 2017; Giannakos et al., 2016). The Pocket Nurse organization provides equipment and software for simulation in healthcare education; they provide multi-platform solutions (including VR, web, and mobile technologies) to support interactive virtual simulation systems for nursing education (Kitchen, 2018).
Even though the use and development of simulation environments is increasing, their wide utilization is not yet common practice. A research report on trauma education for nursing students in Sweden points out that traditional, simulated trauma exercises are insufficient in nursing education programs (Upstu & Persson, 2014). One possible explanation is that they are time-consuming and costly to plan and conduct, and allow only a restricted number of participants simultaneously (Rybing, 2018).
Current simulations used at the Health Care department at Linnaeus University (LNU), for supporting multidisciplinary specialist nursing students in assessing trauma patients, include the use of interactive dolls/mannequins. Teaching and learning under these circumstances requires additional resources, such as a dedicated room, a mechanized, remote-controlled mannequin with dedicated software, and an operator controlling the doll and the settings of the scenario. This simulation is a good step forward, but it has some limitations related to the number of students that can use it, as well as cost-related issues (one operator, usually a senior lecturer, for four students, per hour). To overcome some of these challenges, we have proposed and developed a web-based solution that includes interacting with the content of a 360-degrees video. Examples and advantages of using this in nurse education are described in the following section.

Using 360-degrees videos in nurse education
The use of virtual reality, and lately of 360-degrees videos, in medical education is increasing, in order to provide an immersive learning experience (Kilmon et al., 2010; Lateef, 2010). There are a number of 360-degrees videos available on YouTube, including virtual tours of nursing laboratories and nursing and midwifery clinical spaces. Recently, distance education and online courses have started to use 360-degrees videos to improve the educational experience of online learners (Dawson, 2017). Penn State World Campus is using 360-degrees videos in their first-level nurse education to help students identify unsafe spaces for elderly people. According to them, these videos allow students to experience the virtual environment and understand concepts better than the traditional simulation approach (Dawson, 2017). Recent efforts are being made to explore how nursing students experience 360-degrees video technologies, in order to find opportunities to improve learning and teaching in an interactive and meaningful way (Buchman & Henderson, 2018). These authors mention the advantage of having multidisciplinary teams of healthcare professionals in a 360-degrees video, which enhances interprofessional competencies such as communication within the team and better teamwork. Despite these latest developments and efforts, little research has been carried out when it comes to developing and using these videos in nursing education. The following section describes our approach to tackling this challenge, aiming at allowing nursing students to train in the treatment of trauma patients.

Interactive 360-degrees video for patient trauma treatment
The current simulation technique of using a mannequin to teach students has several limitations that a web-based solution can overcome. First, the only equipment the web-based solution requires is one device with a touch-screen surface connected to the Internet. Second, multiple groups can use the tool at once. Third, there is no need for supervision or for an operator at each training session: each group can perform the activity in any classroom, anywhere and at any time, without special equipment apart from a computer. Fourth, this learning environment can provide a more realistic simulation environment, by recording a 360-degrees video of an authentic scenario at a clinic's emergency department (ED) unit.
These initial requirements were gathered in our previous work (Herault et al., 2018) and can be described as follows: a) The tool must allow the students working in groups to collaborate and answer questions about medical emergencies; b) the tool must be inexpensive and can be utilized by multiple groups of students at the same time; c) the tool must be intuitive and allow for different kinds of interactions with the digital content; d) the tool must be usable in regular classrooms without uncommon technology, and e) students must be able to view specific parts of the activity as they wish and be able to identify easily the role of each person in the scene.
The content of the 360-degrees video is a realistic scenario of a trauma patient being treated, recorded at the ED unit of a central hospital in the city of Växjö, Sweden. A GoPro Omni 360-degrees camera rig was used to record the videos. Afterwards, a first prototype, described in the following subsection, was developed to make the video interactive. Figure 1 below depicts the interface of the web-based solution we developed.
More detailed information about aspects related to the web-interface can be found in our previous work (Herault et al., 2018). The next subsection provides an overview of the technical features and functionality of the proposed system.

Technical approach
While developing and implementing our web-based solution we used a combination of JavaScript libraries (Pannellum and Video.js) to display and provide interactivity to the 360-degrees trauma patient video. The single-page web application architecture was used to develop the first prototype (shown in Fig. 2).
The client side contains an html5 based 360-degrees video player and the server side contains a web service that collects user interaction data. The main client-side components are shown in Fig. 3.
There is a folder called videos that contains all the 360-degrees videos (videoA, videoB, videoC, etc.) that serve as input for the 360-degrees video player. The main controller coordinates the communication between the other components in the system. The Pannellum viewer component uses a 360-degrees video viewer from the Pannellum library (Pannellum.org, 2015), containing features such as adding hotspots to the video to make it interactive. A hotspot can contain any html5 content, which is a big advantage over other similar video players when adding interactive content to a 360-degrees video. In our case, hotspots display content-related questions (see Table 1) connected to the educational medical theory or knowledge in the treatment of trauma patients. These questions are represented as html templates in our system. The Video.js library acts as the video player; it is a standard video player with the possibility to connect to the Pannellum plugin to display the 360-degrees videos. The data collection component is responsible for collecting the different interaction data (e.g., camera movements, question answers, speech recordings, and others) and sending them to the server. The server side has been implemented as a web service, with GET/POST methods, to receive and save the data in our database. We are currently adding a learning analytics web service to analyze the interaction data collected during the different sessions. We also plan to integrate a contextualization web service (Sotsenko, 2017) to contextually trigger hotspots, depending on the user's navigation in the 360-degrees space. The contextual information consists of camera movement (pitch and yaw) data, zoom in/out data, and the video timeline position. Depending on the current position in the video timeline, a specific video scene will trigger a hotspot (with questions/information/notifications) related to the current context situation.
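The client-side data collection described above can be sketched as follows. This is a minimal, illustrative JavaScript sketch with hypothetical names (createInteractionLogger, the /api/interactions endpoint), not the actual prototype code: interaction events such as camera movements and answered questions are buffered on the client, together with the current video time, and sent to the server-side web service in a single POST request.

```javascript
// Sketch of the data collection component (hypothetical names, not the
// actual prototype code): interaction events are buffered on the client
// and periodically flushed to the server-side web service.
function createInteractionLogger(sessionId) {
  const buffer = [];
  return {
    // record one interaction event together with the current video time
    log(type, payload, videoTime) {
      buffer.push({ sessionId, type, payload, videoTime, loggedAt: Date.now() });
    },
    // empty the buffer and build the body for the POST to the web service
    flush() {
      const events = buffer.splice(0, buffer.length);
      return { sessionId, events };
    },
  };
}

// In the browser, the flushed payload would be sent with something like:
// fetch('/api/interactions', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(logger.flush()),
// });
```

Buffering events and sending them in batches, rather than one request per touch, keeps the number of HTTP requests low during a session with many camera movements.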
For instance, when a learner watching a scene zooms in on the monitor displaying the patient's health parameters, a hotspot can be triggered that presents more detailed information about the values shown on the monitor. Another example is a user observing a part of the 360-degrees video space where the current scene shows medical equipment; a hotspot is then triggered with either a related question or information about this equipment. Many other examples could be developed of how contextual information can be used to trigger hotspots, depending on the learning objectives and learning scenarios. The combination of different learning strategies with these features will enable a fully operational smart learning environment for nurses to train in the treatment of trauma patients. One of the initial requirements, based on the solutions described previously, was to provide interactivity. Thus, together with nursing education experts, we have developed a set of questions that could be integrated into the 360-degrees video content. The following section describes these questions.
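The contextual triggering exemplified above can be expressed as a predicate over the current camera view and a region of interest. The following JavaScript sketch uses Pannellum-style yaw/pitch/hfov values, where a smaller horizontal field of view (hfov) corresponds to a deeper zoom; the function name, region format, and thresholds are illustrative assumptions rather than the authors' implementation.

```javascript
// Hedged sketch of the planned contextual hotspot triggering: a hotspot
// fires when the camera direction (yaw/pitch), the zoom level (hfov), and
// the video timeline position all fall inside a region of interest.
// All names and thresholds are illustrative assumptions.
function shouldTriggerHotspot(view, region) {
  const withinYaw = Math.abs(view.yaw - region.yaw) <= region.tolerance;
  const withinPitch = Math.abs(view.pitch - region.pitch) <= region.tolerance;
  const zoomedIn = view.hfov <= region.maxHfov; // smaller hfov = deeper zoom
  const inTimeWindow = view.time >= region.start && view.time <= region.end;
  return withinYaw && withinPitch && zoomedIn && inTimeWindow;
}

// Hypothetical region of interest: the patient monitor, visible between
// 30 s and 60 s into the current segment.
const monitorRegion = { yaw: 90, pitch: -10, tolerance: 15, maxHfov: 60, start: 30, end: 60 };
```

In the monitor example above, a learner looking roughly at the monitor (yaw/pitch within tolerance) who zooms in during the right time window would satisfy the predicate, and the hotspot with the detailed health parameters would be shown.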

Questions integrated with the 360-degrees video
The video is divided into four segments. The first is a presentation of the medical team; the second covers receiving the patient and admitting him/her to the emergency room; the third covers the examination of the patient (Video A); and the fourth covers stabilizing the patient. The fourth segment has multiple outcomes, such as the patient stabilizing (Video B), the patient being in a critical condition (Video C), or the patient deceased (Video D).

Video A (examination of the patient):
Question 2: Process-based, general reflection about preparedness before the patient arrives; no alternatives; no right answer is revealed.
Question 3: Knowledge-based, patient-specific question about the blood pressure; three alternatives; the right answer is revealed in the video after the students answer.
Question 4: Process-based, general question about the lack of a vein catheter; three alternatives; no right answer is revealed.
Question 5: Knowledge-based, general question about blood transfusion; two alternatives; the right answer is revealed in the video after the students answer.
Question 6: Knowledge-based, patient-specific question about pharmaceuticals; two alternatives; the right answer is revealed in the video after the students answer.
Question 7: Knowledge-based, patient-specific question about the temperature goal; two alternatives; no right answer is revealed.
Question 8: Knowledge-based, general question about the triad of death; five alternatives (three are right); if the answer is right, the video jumps to Video C; no right answer is revealed.

Video B (unstable patient):
Question 9: Knowledge-based, patient-specific question about the changes in the patient's pulse; three alternatives; the right answer is revealed in the video after the students answer.
Question 10: Knowledge-based, general question about blood transfusion (not the same as above); two alternatives; a right answer jumps to Video C and a wrong answer to Video D; no right answer is revealed.
Question 11: Process-based, patient-specific question about the required level of care; two alternatives; the right answer takes the students to Video B and the wrong answer to Video D; the right answer is revealed in the video after the students answer.

The videos contain 17 questions in total (as described in Table 1): process-based (7 questions) and knowledge-based (10 questions). The questions have between two and seven alternatives as answers, and one question was aimed at promoting reflection and discussion (free-text type). The majority of the questions are related to the patient treatment. The students received feedback and explanations when answering nine of these questions; the remaining eight questions did not provide any feedback or explanations, either because all the options are correct or because the questions are only for reflection and discussion. There are four question triggers, meaning that if the students choose the wrong answer, the video shifts to another video with an unstable patient (Video B), and if they choose the right answer, it switches back to a more stable patient (Video C). Table 1 below describes the questions in detail.
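The question-trigger branching between video segments can be illustrated with a small JavaScript sketch. The function and field names, and the example data (modeled on Question 10), are hypothetical; the actual prototype logic may differ.

```javascript
// Minimal sketch of the question-trigger branching: for a trigger question,
// a right answer jumps to the more stable video segment and a wrong answer
// to the less stable one; non-trigger questions keep the current segment.
// Names and example data are hypothetical.
function nextSegment(currentSegment, question, answer) {
  if (!question.trigger) return currentSegment;
  return answer === question.correct ? question.onCorrect : question.onWrong;
}

// Hypothetical trigger question: a right answer jumps to Video C,
// a wrong answer to Video D.
const q10 = { trigger: true, correct: 'B', onCorrect: 'videoC', onWrong: 'videoD' };
```

Encoding the branching as data on each question (rather than hard-coding jumps in the player) makes it easy for the teachers to adjust which answers lead to which patient outcome.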

Evaluation strategy
An exploratory evaluation has been performed to investigate the use of the interactive 360-degrees video to support the learning experience regarding the treatment of trauma. The aim of this assessment was, on the one hand, to provide us with relevant insights related to the prototype and the novel ways of interacting with it. On the other hand, we also wanted to gain some insight into whether this kind of approach has the potential to enhance a student's overall learning and knowledge construction process, compared to traditional teaching approaches in this domain. The study was performed during the fall term of 2017. The participants were 17 specialist nursing students from the department of Health and Caring Science at LNU, aged 24 to 45, the great majority of them female; none had experienced immersive videos before, except for one who had watched a surgical intervention during a livestream. The prototype was also tested with four medical experts. During this study, several different data sets were collected. Prototype questionnaire: after performing the activity, each group was given a questionnaire (see Additional file 1: Appendix B) to gather data about "Ease of Use" and "Perceived Usefulness" (Davis, 1989). These questionnaires were analyzed and used in our previous work (Herault et al., 2018) to investigate the usefulness of the prototype, and how easy it was to understand the given task and answer the questions integrated in the video. Some questions were related to how easy it was to use the user interface (UI) and interact with the system in general. These answers were later compared to how the groups performed during the session, using the video recordings. Usability is a very important part of the use of immersive videos, as analyzed by Behringer et al. (2007).
However, the analysis during this first iteration of the project was of an explorative nature, and more emphasis and deeper elaboration on these data will follow in a future study. Prototype log data: data about the activity carried out by each group was collected and stored in a database. The initial analysis of these data sets can be found in Additional file 1: Appendix C.
Video recordings of the study sessions: two cameras were placed in the room to film the screen, the participants' interactions with the touchscreen, and their interactions with each other. In the following section, we analyze these videos to investigate how nursing students interact with the 360-degrees interactive video.
Each study session (five in total) with the students (see Herault et al., 2018) was recorded using two cameras (screen interaction recordings and group interaction recordings), as shown in Fig. 4 below. An observer was present to explain the basic context of the exercise to the students, as well as the basics of using the tool, and, if needed, to solve any technical issues. The video coding scheme (VCS) is one of the methods that can be used to analyze recorded videos such as the ones in our study. Kushniruk and Borycki (2015) have developed a method that combines grounded theory (creation of the codes while watching the videos) with a pre-existing coding scheme. Additional codes were added to this recommended scheme to fit the needs of our study; most of the pre-existing codes were useful, as they were developed for the healthcare field.
Each of the five videos was viewed and analyzed by two researchers. Each time an event fit within the VCS, the video was paused, notes were taken, and the time (beginning and end) of the event was noted down. Additional file 1: Appendix A describes the structure of the video coding, inspired by and adapted from the work of Kushniruk and Borycki (2015), together with a description of each code. The codes added for this research were the following: interaction, popup question, answering question, and strategy. These codes were then used in a spreadsheet to describe the segments of the video that were of interest. The researchers also modified the timestamp recommendation to record not only the time of the event, but the whole interval during which the event occurred. Example of coding from one of the video sessions: 00:01:03-00:01:28 - FONT - "What's written here? *mumbles the written text* Ah ok" - the user had to get closer to the screen to read.
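As an illustration of how such coded rows can be processed, the JavaScript sketch below computes an event's duration from its start/end timestamps. The row format and function names are assumptions made for illustration, not the actual analysis scripts.

```javascript
// Turn a coded spreadsheet row (code, start, end timestamps in hh:mm:ss)
// into a duration in seconds. Row format and names are assumed.
function toSeconds(hms) {
  const [h, m, s] = hms.split(':').map(Number);
  return h * 3600 + m * 60 + s;
}

function eventDuration(row) {
  return toSeconds(row.end) - toSeconds(row.start);
}

// The FONT event above, coded 00:01:03-00:01:28, lasted 25 seconds:
// eventDuration({ code: 'FONT', start: '00:01:03', end: '00:01:28' }) → 25
```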
After watching all the video recordings, the coding data (such as the example shown above) were collected into Excel spreadsheets for further analysis, which is described in the following section.

Analysis of the gathered data
This section presents the analyses of the data we gathered as described in the previous section (using prototype questionnaires and video recordings from the study session).

Questionnaires analysis
The results of the questionnaires show that the activity in general was perceived as good (47%) to very good (52%) by the participants, and the use of 360-degrees videos was appreciated by most students (94%). The use of the touchscreen was perceived as less enjoyable than the rest of the tool (12% reserved, 47% good, and 41% very good), due to difficulties some people faced in pressing specific buttons during the activity, as discussed later in the video analysis. However, answering questions during the video was very well received by all the participants, despite these issues. 12% of the participants had trouble and 18% had some trouble using the touchscreen; the rest had few issues (35%) or no issues (35%) using the web-based solution we developed. It is apparent from the video footage that, due to their placement (both left and right of the screen), some participants had trouble pressing the correct spots. This is an issue with touchscreens in general, and the experience will differ from screen to screen. These results reflect what the students felt while using the system. Since all sessions were recorded, it is possible to compare what the students felt with what actually happened during the session. For instance, most of them did not mention in the questionnaire the difficulties they had using the interface, and did not indicate any level of frustration about that point, even though they were somewhat frustrated during the session.

Video coding analysis
The five groups of students took different approaches and behaved differently while watching and interacting with the videos. Based on the data gathered with the video coding approach, we selected the events that occurred (i.e. if an event did not register during the video coding process, no analysis was necessary). During the analysis, the researchers also transcribed the content of the students' discussions: when and what they were saying, and whether it was accurate/correct, relevant, and timely. This information allows teachers to understand why an answer is wrong and then use it in discussions with the students, to delve deeper into the topic and make sure they understand why they made the mistake, so that they learn the appropriate response.
These discussions were also analyzed in terms of strategy. The students' approaches to reading and answering questions differed from group to group. Each group's strategy was analyzed from the beginning of the video, and during each question, to distinguish patterns in their approach. These patterns can be further analyzed to deduce which approach seems the most efficient; however, the size of the sample in this study (17 students) is not sufficient to draw such conclusions yet. This first study solely aims at discovering whether such patterns exist, and at seeing whether future groups will fit into these patterns or create new ones. Table 2 below depicts the summary of the video coding analysis. Here, we have clustered the video coding data into five main categories of events, which are as follows: interactions (the number of interactions with the screen), navigation (the number of times the students faced an issue while using a finger to navigate the 360-degrees space), instructions (the number of times a student asked for the observer's help), watch time (the time spent watching the video), and discussion time (the time spent on discussions while answering the questions). The analysis began by identifying the events that were the primary concerns for the first prototype: the number of interactions with the screen (i.e. was the touchscreen used), the number of navigation issues (i.e. was the touchscreen difficult to use), aspects related to instructions (i.e. was the prototype easy to use without external help), and the time spent watching and discussing (i.e. to see if the tool helped trigger discussion among the students). The difference in watching time from group to group was due to the correct/wrong answers the students provided; the total watching time differs based on the number of correct answers given. In the case of group 5, the time is abnormal due to a technical failure described below.
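The clustering of coded events into these five categories per group can be sketched as a simple aggregation. The event codes and field names in the JavaScript sketch below are illustrative assumptions, not the exact codes of our scheme: counts are kept for interactions, navigation issues, and instruction requests, while durations are summed for watching and discussion.

```javascript
// Illustrative aggregation of coded events into the five categories of
// Table 2 (assumed event shapes, not the exact codes of the scheme).
function summarizeGroup(events) {
  const summary = { interactions: 0, navigationIssues: 0, instructions: 0,
                    watchSeconds: 0, discussionSeconds: 0 };
  for (const e of events) {
    switch (e.code) {
      case 'INTERACTION':      summary.interactions++; break;
      case 'NAVIGATION_ISSUE': summary.navigationIssues++; break;
      case 'INSTRUCTION':      summary.instructions++; break;
      case 'WATCHING':         summary.watchSeconds += e.duration; break;
      case 'DISCUSSION':       summary.discussionSeconds += e.duration; break;
    }
  }
  return summary;
}
```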
The time spent in discussion varies dramatically from group to group. Three groups spent a lot of time discussing, while two groups answered the questions without speaking much to each other.
Some groups were more active in exploring the 360-degrees space, as the number of interactions illustrates. An interaction is any instance of the students interacting with the screen apart from answering questions (a touch, or a succession of brief touches on the screen, was counted as one interaction). Three groups had a high number of interactions, while two groups had fewer. The number of navigation issues that occurred within the 360-degrees space is quite low, with only six in total, and they occurred only in the groups that had many interactions. Instructions were requested only once, when the first question button appeared on the screen, pausing the video; the students asked the observer what they had to do, despite having been given the instructions before the test. This indicates that our tool can be used without an instructor or a technical assistant present on site. Several issues related to the UI and user experience of our proposed system occurred during the sessions. These were connected to the small icons in the video player control panel and the font size of the questions. Also, each group had issues pressing the different buttons in the pop-up window for the questions, due to their position in relation to the screen as well as the size of the buttons on the interface. This was because the prototype was originally developed for tablets and mobile devices, whereas a touch-enabled Microsoft screen was used during the study sessions. In the future version, this aspect of the prototype will be improved by modifying the user interface to facilitate the pressing of buttons. Another issue occurred with the internet connectivity for groups 4 and 5 (video recordings 4 and 5, respectively), as the interactive 360-degrees videos require a fast, stable internet connection to play in the video player.
All of these issues will be addressed in the second version of the prototype, which will also rely on a cabled, stable internet connection.

Conclusions and future work
In this section, we conclude the paper by discussing how the students perceived the interaction with our proposed system, and by outlining our future work. The content of the 360-degrees videos was much appreciated by all the students, who found it valuable and authentic, as it allowed them to interact with and explore learning materials that are usually offered in other forms and media. The first iteration of this project was very encouraging, as all parties (students, teachers, medical staff) agreed that the tool added value to the education of nursing students. It will not replace current methods; rather, it will complement and improve them. Recording the study sessions with two cameras was useful for observing both the interactions on the screen and those between the students. This analysis reinforces the idea of implementing a screen-recording functionality and using the screen's built-in webcam to record the participants. Although video coding proved very useful for analyzing the interactions between the students, the prototype is now equipped with an automatic data-collection function that logs each interaction on the screen, enabling faster and more precise analysis. This functionality will be tested and analyzed in a future study.
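As an illustration of what such an automatic data-collection function might look like, the sketch below logs on-screen interactions and merges a burst of brief touches into a single interaction, mirroring how interactions were counted manually in the study. This is a minimal sketch in plain JavaScript; the class name, event fields, and merge window are our own assumptions, not the prototype's actual implementation.

```javascript
// Minimal sketch of an interaction logger for the 360-degrees video player.
// A succession of brief touches within `mergeWindowMs` is counted as one
// interaction, as in the manual video coding used in the study.
class InteractionLog {
  constructor(mergeWindowMs = 1000) {
    this.mergeWindowMs = mergeWindowMs;
    this.events = []; // entries of the form { type, timestamp }
  }

  record(type, timestamp) {
    const last = this.events[this.events.length - 1];
    // Merge rapid successive touches of the same type into one interaction.
    if (last && last.type === type &&
        timestamp - last.timestamp < this.mergeWindowMs) {
      last.timestamp = timestamp; // extend the current interaction
      return;
    }
    this.events.push({ type, timestamp });
  }

  countByType() {
    return this.events.reduce((counts, e) => {
      counts[e.type] = (counts[e.type] || 0) + 1;
      return counts;
    }, {});
  }
}

// In the browser, the logger would be wired to touch events, e.g.:
// videoElement.addEventListener('touchstart',
//   e => log.record('navigation', e.timeStamp));
```

Summaries produced by `countByType` could then be exported per group for the kind of interaction-count comparison reported above.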

Interaction and learning aspects
One of the most interesting aspects for teachers is identifying the different strategies students use for problem solving. In the current study, this was performed manually, using the video coding scheme and analysis. These efforts revealed several patterns: (P1) active users: several students explored the scene and tried out the touchscreen before starting the video; (P2) passive users: the students were not interested in touching the screen and simply watched it as a static video; (P3) single-user pattern: one member of the group read and answered the questions; (P4) multi-role pattern: some members of the group read the questions, another approved the answers, while other members read the questions by themselves. We also noticed that, in the single-user pattern (P3), the students were familiar with each other's knowledge and therefore trusted one member's skills and experience to answer specific questions. This led to a single leader in the group (probably the member perceived to be the most knowledgeable), even though the scenario targets multi-disciplinary nursing specialists; the leader read and answered the questions with little discussion. In the end, this group gave several wrong answers to the questions. The video analysis also provided interesting information about the students and their knowledge. Analyzing the discussions they had while answering could help their teacher understand why they answered a question incorrectly. The ACCURACY/CORRECTNESS, RELEVANCE, and TIMELINESS code events (see Additional file 1: Appendix A) were used several times and allowed the researchers to understand the rationale behind a wrong answer. The latter could be used in future developments, if required, to send students personalized feedback and help them understand the subject better.

Technical and design related aspects
From a technical and design perspective, the outcomes of this study provided us with new information and insights. We found the following new technical requirements for the 360-degrees interactive video: (a) a high-speed internet connection is required; (b) the quality of the video stream needs to be high, since details such as the vital signs displayed on the monitor must remain readable at all times; (c) the video should be high definition with proper sound quality; and (d) the system should have a responsive user interface for the interactive content in the 360-degrees video, supporting mobile phones, tablets, and large touch screens. Our results also point to several issues related to the production and distribution of 360-degrees video content: (a) the placement of the recording equipment is crucial in these settings, since it must be arranged so as not to disturb the personnel during their tasks; (b) sound recording is problematic: one solution could be an independent environmental microphone in the center of the room, or several microphones placed at strategic locations in the emergency room or directly on the medical personnel; (c) a high-speed internet connection is required for high-quality 360-degrees videos, which limits access to the videos from mobile devices on low-speed networks.
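One way to reconcile the high-bandwidth requirement above with access from mobile devices on low-speed networks is to select a video rendition according to the measured connection speed. The sketch below illustrates this idea; the rendition labels and bitrate thresholds are purely illustrative assumptions on our part, not values from the study or the prototype.

```javascript
// Sketch: choose a 360-degrees video rendition from measured bandwidth.
// Thresholds are illustrative; the study only establishes that high-quality
// 360-degrees video (with readable monitor details) needs a fast connection.
const RENDITIONS = [
  { label: '4K', minMbps: 25 }, // vital signs on monitors stay readable
  { label: 'HD', minMbps: 10 },
  { label: 'SD', minMbps: 3 },
];

function pickRendition(measuredMbps) {
  const fit = RENDITIONS.find(r => measuredMbps >= r.minMbps);
  // Fall back to the lowest rendition on slow mobile networks.
  return fit ? fit.label : 'SD';
}
```

In practice, adaptive streaming protocols perform this kind of selection continuously, but even a one-off measurement at load time would let low-speed mobile users see a degraded rather than unplayable video.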

Limitations
Some limitations need to be mentioned, as they might affect any attempt to replicate the study. First, the prototype was tested only in Sweden, with Swedish medical standards, procedures, and medication; other countries can be expected to have different procedures and equipment available. Another limitation concerns the gender distribution of the participants: most of the students were female, owing to the overwhelming proportion of female nursing students at the university where the test was conducted. This can be important, as gender is still being researched as a pertinent factor in learning with computer simulation (Kickmeier-Rust et al., 2007).
To summarize, in this paper we have presented the outcomes of our efforts in developing a novel simulation solution that provides nursing students with easy access to learning content, anywhere and at any time. The proposed web-based tool, which makes use of interactive 360-degrees videos, offers nursing students the possibility to experience, in a novel way, how an emergency room and its medical personnel function during an intervention. In addition to navigating the 360-degrees videos, the students can interact with the learning content by answering different questions. Right or wrong answers have consequences for the patient's state, and different videos load accordingly. To validate our approach, a user study was conducted with 17 students at the central hospital in the city of Växjö, Sweden. The initial results of our analysis provide some indication that such a tool can be useful as an addition to the existing methods used in nursing education. Our upcoming research and implementation directions include the exploration of the following aspects: (a) adding contextualization software services for context-aware navigation and individual feedback; (b) improving the UI to provide a better user experience, and making it responsive to fit different types of screens; (c) adding more questions and scenarios to extend the use of the tool to other branches of the emergency services, such as the police force and firefighters.
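The branching behaviour described above, where a right or wrong answer changes the simulated patient's state and loads a different video, can be sketched as a simple lookup over a scenario graph. The scenario structure, question text, answer identifier, and file names below are hypothetical illustrations, not the prototype's actual content.

```javascript
// Sketch of the answer-driven branching: each question node names the
// correct answer and the video segment to load for each outcome.
// All identifiers and file names here are invented for illustration.
const scenario = {
  q1: {
    question: 'Which intervention should the trauma team prioritise?',
    correct: 'secure-airway',
    onCorrect: 'patient-stable.mp4',     // patient's state improves
    onWrong: 'patient-deteriorates.mp4', // patient's state worsens
  },
};

function nextVideo(scenario, questionId, answer) {
  const node = scenario[questionId];
  return answer === node.correct ? node.onCorrect : node.onWrong;
}
```

Keeping the branching data separate from the player logic in this way would also make it straightforward to add the further questions and scenarios planned for other emergency services.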