Work in Progress

Immersive In-Situ Prototyping: Influence of Real-World Context on Evaluating Future Pedestrian Interfaces in Virtual Reality

Published: 11 May 2024

Abstract

Pedestrian interfaces support people’s interaction with autonomous agents in traffic scenarios. Early studies relied on computer-generated (CG) environments to evaluate pedestrian interfaces in virtual reality (VR). More recently, real-world 360-degree videos have been used as an alternative to CG environments as they support immersive and realistic experiences. This paper reports on the combined use of 360-degree videos and dynamic CG interfaces as a new approach for evaluating pedestrian interfaces, referred to as immersive in-situ prototyping. We analyse participant feedback from two case studies that used this approach for evaluating pedestrian interfaces from a drone and from an autonomous vehicle. Results show that participants considered the immersive in-situ prototypes realistic, natural, and familiar and found them to facilitate connections to real-life experiences. We describe the process for developing immersive in-situ prototypes and offer technical considerations for future studies.


Figure 1: Immersive in-situ prototypes for evaluating futuristic pedestrian interfaces.


1 INTRODUCTION

With the rapid advancement of technologies such as artificial intelligence and the internet of things, the human-computer interaction (HCI) community is continuously researching novel human-machine interfaces (HMIs); for example, to support pedestrian interactions with autonomous vehicles (AVs) [4, 12] or to provide drone-aided navigation services [6, 20]. These kinds of HMIs, which support the activity of pedestrians in urban environments and which this paper collectively refers to as “pedestrian interfaces”, represent a new area of research within the field of HCI. Prototypes that convey novel HMI concepts to prospective stakeholders play an important role in evaluating the effectiveness and acceptance of early design proposals and in informing future development and deployment [7, 24]. However, some design concepts push the boundaries of existing technologies, introducing technical, legal, or risk challenges for early-stage testing in the real world; for instance, externally projected pedestrian crosswalks [32] or augmented driving head-up displays [45]. Evaluations of these HMIs therefore usually opt for virtual reality (VR) prototypes to ensure the safety of participants. Nevertheless, it is important for HMI prototypes to consider real-world dynamics and stimuli, as the physical deployment of the final product needs to account for contextual factors related to the location, environment, and local culture [14, 17, 27, 40].

Among prototyping methods that capture contexts, such as in-situ mockups and concept videos, extended reality (XR)1 offers promising platforms to simulate environments and scenarios where HMIs are intended for use. In recent years, computer-generated (CG) VR has gained considerable popularity for evaluating AV–pedestrian HMIs, as it is found to be immersive and flexible for rapid refinement [9, 31, 40]. Leveraging the naturalness of the physical world, HCI researchers have also started to develop traffic simulators based on realistic environments, including augmented reality (AR) vehicle-pedestrian simulators [28] and real-world video-based mixed reality (MR) driving simulators [48]. Considered a lightweight tool to construct XR applications [3, 50], 360-degree panoramic videos provide omnidirectional recordings of the real world. Previous studies found that 360-degree videos are both immersive and realistic when viewed through head-mounted displays (HMDs) [19, 35, 46]. In addition, they offer a relatively simple and inexpensive way (e.g., not requiring programming or 3D modelling skills) [2, 18, 50] to create contextualised environments in high fidelity [38, 46, 49]. While there is a growing interest in using 360-degree videos for immersive HMI evaluation [8, 15, 17], no research has yet explored the approach of combining 360-degree videos with visually dynamic CG pedestrian interfaces and its implications in supporting user evaluations.

Building on prior work, we present a rapid and cost-effective approach to introduce realistic contexts into early prototypes of futuristic and often speculative HMI proposals. The approach uses 360-degree recordings of the real world that are overlaid by 3D-rendered virtual objects (e.g., an AV with pedestrian interfaces). We refer to this prototyping approach as immersive in-situ prototyping. The term immersive denotes the prototypes being presented in a non-physical world (e.g., accessed via VR headsets). In-situ evaluation refers to evaluating a product in its real usage context [44]. In our method, the term in-situ captures the aspect of situating the HMI into its context of use with a fidelity that closely resembles reality. To provide early insights, we present two case studies that employed this approach for evaluating novel HMI proposals. Both studies investigated pedestrian interfaces related to intelligent traffic systems, notably drones and autonomous vehicles, in different urban settings. Based on our findings and prototyping processes, we discuss considerations for using and developing immersive in-situ prototypes.


2 RELATED WORK

2.1 Immersive Real-World Video Applications

360-degree videos, offering the ability to capture reality in panoramic views, have become an increasingly popular technique for creating immersive experiences [19, 35, 46, 49]. Since 360-degree videos contain ample on-site information, they have been utilised in areas like tourism (e.g., cultural heritage visits [3], destination promotion [47]), education (e.g., remote lectures [19], surgical training [49]) and journalism [22, 43]. Empirical studies in these fields have found that immersive 360-degree videos, i.e., viewed via HMDs, provide users with high audio-visual realism [38, 46, 49, 50], a sense of presence [19, 35, 43, 47], and situational awareness [46, 50]. Their applications in experiential media have been found to be engaging [3, 19, 38] and effective for storytelling [22, 43].

As 3D development platforms (e.g., Unity 3D2, Unreal Engine3) currently provide vast libraries and ease of deployment to various devices, researchers have started to explore methods to augment environments based solely on 360-degree video with virtual content. Hoggenmueller and Tomitsch [16] proposed the concept of “hyperreal prototyping” for urban pervasive displays, referring to the potential of such techniques to create VR simulations where the distinction between the virtual and the physical becomes blurred. Similarly, Lee et al. [21] proposed “augmented virtual reality” for comparing interior design plans, emphasising that real-world videos could help enhance the realism of fully CG VR. Using 360-degree videos to simulate a presence in the real world, some studies have prototyped AR experiences [5, 33] or added UI elements for interaction purposes [2, 3, 18, 50]. Our research builds on the conceptual and empirical foundations in the literature and contributes to prototyping pedestrian experiences with HMIs in urban traffic situations.

2.2 Simulating Human-Machine Interfaces in Traffic

VR simulators are increasingly recognised for their flexibility and safety in pedestrian research [9, 40], allowing for the creation of diverse traffic scenarios with reduced time, cost, and safety risks compared to physical setups [10, 27, 31]. They are also useful for developing mockups of speculative HMI concepts that are difficult to physically implement with existing technologies, thereby facilitating early user feedback and concept refinement [11, 25, 32, 41]. Studies also highlight the importance of contextual setups in VR; for example, the visual realism and social atmosphere of VR environments can influence experiential qualities like sense of presence, level of comfort, and feeling of naturalness [36, 37, 40]. Real-world videos provide authentic representations of reality and hence are often employed in traffic research to increase the ecological validity of simulations, such as monitor-based videos [1, 26], projector-based immersive “CAVE” setups [14], and 360-degree video-based VR [8]. The latter has gained increasing attention in recent years, demonstrating that immersive real-world videos are effective in conveying contextual information with high visual fidelity, spatial presence, and engagement [13, 17, 48].

A few studies related to traffic HMIs have started to apply rendered overlays onto real-world video-based VR, including driving simulators (interior UIs [15, 48] and other on-road cars [48]) and AV–pedestrian HMIs (preprocessed static interfaces [42] and synthesised sounds [13]). However, so far, there has been no empirical evaluation of dynamic visual overlays of pedestrian interfaces that are integrated into immersive real-world videos. Furthermore, it is unclear how realistic environments can impact pedestrian evaluations of futuristic HMI proposals.


3 METHODOLOGY

We report on two case studies, in which we created immersive in-situ prototypes for testing speculative HMIs designed for pedestrians: (1) Drone–Pedestrian: the HMIs provided drone-assisted crossing instructions at an uncontrolled road; (2) AV–Pedestrian: the HMIs conveyed the AV’s intention to stop in a pedestrianised zone. Both case studies involved an evaluation with users to gather feedback on the HMI proposals. Analyses of user data from both studies were combined to address our research question: How can real-world contexts in virtual reality influence pedestrian evaluations of futuristic human-machine interfaces?

3.1 Case Study Context

Investigating HMIs that can support pedestrian safety has gained considerable research attention in the last decade due to the rise of autonomous systems in urban mobility infrastructure [10, 31]. This is motivated by the critical role of HMIs in conveying vehicle intention and facilitating pedestrian interactions [4, 12]. Considering future traffic as an intelligent, interconnected network, the Drone–Pedestrian study investigated how drone-based interfaces could guide pedestrians through dangerous road situations, utilising the bird’s-eye view advantage of drones for traffic monitoring [16, 20]. Building upon this body of work, the AV–Pedestrian study explored design options of AV–pedestrian communication interfaces in highly urbanised areas [27, 40]. The speculative nature of the HMIs involved in both case studies introduced difficulties in evaluating them in the real world. Therefore, the immersive in-situ prototypes provided opportunities to collect early user feedback with the inclusion of real-world contexts.


Figure 2: An overview of the immersive in-situ prototyping process.


Figure 3: The prototype setups across the two case studies.

3.2 Prototype Development

We summarised the development procedures from the two case studies and present an overview of the process in Figure 2; Figure 3 reports details of the prototype setups. To record the 360-degree videos, we selected filming locations in a neighbourhood close to the city centre and used an Insta360 Pro 2 as our video and sound recording device. The Drone–Pedestrian study contained two scenarios requiring participants to cross back and forth, respectively, on a busy public road next to a popular park. The AV–Pedestrian study involved participants walking down a pedestrianised corridor connecting a main road to a university campus. We obtained the 3D quadcopter model from the Unity Asset Store and created the passenger transport pod using Autodesk 3ds Max (as a replica of one of our university’s real-world AVs). For the HMIs, the two studies encompassed a variety of interface modalities, ranging from displays to projections to tile changes (see Figure 3). Besides the drone displays and drone projections, which were modelled in Blender, the remaining interfaces were developed using Unity 3D libraries.

We developed the 3D scenes integrating the 360-degree videos and the virtual 3D objects in Unity 3D (see Figure 4 for the final scenes). To present a 360-degree video as the background environment, we applied the video as a render texture to a panoramic skybox material, so that viewers see the environment from the perspective of the 360-degree camera that filmed it. Based on the design scenarios, we mapped the 3D objects into the skybox environment with geometric adjustments, including scaling, position, and rotation, to overlay the objects in relation to the spatial layout in the video. Further, we animated the 3D objects, e.g., changing their movements or appearances, using scripts and Unity’s built-in animators. In this process, we repeatedly adjusted the parameters of the animations and of the geometric and visual properties of the objects to achieve good synchronisation with the videos. Finally, the prototypes in both studies were deployed to an Oculus Quest 2 for user evaluations.
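To illustrate this setup, the following is a minimal Unity C# sketch of routing a 360-degree video onto a panoramic skybox and driving an overlaid 3D object from the video clock rather than scene time. It is an illustrative assumption rather than our actual project code; the component name, texture resolution, and timing values are placeholders.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: plays a 360-degree clip on a panoramic skybox and moves a virtual
// overlay (e.g., the rendered AV) in sync with the video's playback time.
public class InSituSceneController : MonoBehaviour
{
    [SerializeField] private VideoClip backgroundClip;     // 360-degree recording
    [SerializeField] private Transform overlayObject;      // e.g., the AV model
    [SerializeField] private Vector3 startPosition;
    [SerializeField] private Vector3 endPosition;
    [SerializeField] private float overlayStartTime = 5f;  // seconds into the video (assumed)
    [SerializeField] private float overlayDuration = 8f;

    private VideoPlayer player;

    private void Start()
    {
        // Route the video into a render texture used by a panoramic skybox material.
        var videoTexture = new RenderTexture(4096, 2048, 0);
        player = gameObject.AddComponent<VideoPlayer>();
        player.clip = backgroundClip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = videoTexture;
        player.isLooping = true;

        var skybox = new Material(Shader.Find("Skybox/Panoramic"));
        skybox.SetTexture("_MainTex", videoTexture);
        RenderSettings.skybox = skybox;

        player.Play();
    }

    private void Update()
    {
        // Key the overlay's movement to the video clock so the virtual object
        // stays aligned with events in the recording.
        float t = Mathf.Clamp01(((float)player.time - overlayStartTime) / overlayDuration);
        overlayObject.position = Vector3.Lerp(startPosition, endPosition, t);
    }
}
```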


Figure 4: VR scenes: Drone–Pedestrian (top), AV–Pedestrian (bottom). Colour and text cues via a display equipped on the drone (A1-A2) and colour cues and countdowns via projections from the drone (A3-A4) to advise crossing. On-vehicle light strip (B1), pulsating vehicle exterior (B2), ground projection (B3), and paving tile lighting (B4) to convey the AV’s intention to stop.

3.3 User Evaluation

3.3.1 Participants and Tasks.

Eighteen participants (13 male, 5 female) between the ages of 20–34 years (M=24.8, SD=3.0) were recruited for the Drone–Pedestrian study. Twenty-five participants (10 male, 15 female) between the ages of 20–50 years (M=28.7, SD=6.6) were recruited for the AV–Pedestrian study. Both user studies were approved by the human research ethics committee at the University of Sydney. In the Drone–Pedestrian study, upon encountering each drone interface, participants were asked to indicate their street-crossing decision by pressing a trigger button on the right controller when they felt ready to start crossing. In the AV–Pedestrian study, as participants encountered the AV, they were asked to verbalise any thoughts, including immediate feelings or intended actions, using the think-aloud protocol. In each study, participants experienced the proposed designs in randomised order.
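As an illustration of how such a trigger-based response could be captured and timestamped against the scenario video, the sketch below uses Unity C# with the Oculus integration's OVRInput API for the Quest 2 controllers. It is an assumption for illustration, not the study's actual logging code; the file name and trigger mapping are placeholders.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;

// Sketch: logs the moment a participant presses the right-controller trigger,
// recording wall-clock time and the current playback time of the 360-degree video.
public class CrossingDecisionLogger : MonoBehaviour
{
    [SerializeField] private VideoPlayer scenarioVideo;
    private readonly List<string> log = new List<string>();

    private void Update()
    {
        // OVRInput requires the Oculus integration (OVRManager) in the scene;
        // SecondaryIndexTrigger corresponds to the right Touch controller's trigger.
        if (OVRInput.GetDown(OVRInput.Button.SecondaryIndexTrigger))
        {
            log.Add($"{System.DateTime.Now:HH:mm:ss.fff},videoTime={scenarioVideo.time:F2}");
        }
    }

    private void OnApplicationQuit()
    {
        // Write one CSV line per decision to the headset's persistent storage.
        System.IO.File.WriteAllLines(
            System.IO.Path.Combine(Application.persistentDataPath, "crossing_log.csv"), log);
    }
}
```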

3.3.2 Data Collection and Analysis.

To collect feedback specifically on the simulations, we asked participants to complete the ITC-Sense of Presence Inventory (ITC-SOPI) [23]. The questionnaire consists of 38 items on 5-point Likert scales to measure four factors, namely spatial presence (the feeling of “being there”), engagement (the intensity of the experience and the feeling of being involved), naturalness / ecological validity (how natural the displayed environment is and the sensation that the scenes are plausible), and negative effects (e.g., motion sickness). In addition, participants were asked to provide any comments on their VR experiences. All study sessions were audio-recorded. For quantitative analysis, we combined the scores for each of the four ITC-SOPI factors after confirming the internal consistency of the data, followed by a descriptive analysis. For qualitative analysis, we transcribed the audio recordings from the two studies and analysed comments pertaining to the effects of the real-world contexts on user evaluations. Initially, one researcher from each case study independently performed open coding. Then, both researchers collaboratively discussed common patterns from their findings.


4 RESULTS

4.1 Sense of Presence

Results of the ITC-SOPI questionnaire (see Figure 5) show similarly high ratings for the perceived naturalness / ecological validity of the VR environments in both studies. Engagement ratings are likewise generally high across the two studies. While spatial presence receives above-midpoint ratings in both studies, the rating for the AV–Pedestrian study is lower than that for the Drone–Pedestrian study. Negative effects are low in both studies.


Figure 5: Means (SD) of the ITC-SOPI questionnaire [23] across the two case studies.

4.2 Influence of Real-World Contexts

The qualitative analysis from both studies revealed three common patterns in how participants reacted to the prototypes.

4.2.1 Perceiving the environments as realistic and familiar.

After experiencing the prototypes in VR, the majority of participants reported a high degree of realism in the scenarios they encountered. They noted that the highly realistic scenes created a strong sense of presence, as if they were truly present in those real-world situations: “I feel that I’m in the real physical environment. I can see the pedestrians and the cars and hear traffic sounds” (P8, Drone). This sensation even extended to emotional aspects; for instance, one participant mentioned feeling genuinely nervous while preparing to cross a street in the simulation: “I did feel nervous to cross, even though I know it’s just VR. I really felt like I was there” (P17, Drone). Such experiences can largely be attributed to the scenes being recorded from the real world, where everything was considered natural and vivid, exhibiting high fidelity: “I can see that the real people on the street have their own goals and intentions” (P23, AV). Notably, the human behaviours within these environments were perceived by participants as very lifelike and consistent with those in reality: “people were acting very normal, like they were just pedestrians. They were standing there, chatting. So, the whole VR scene seems quite daily” (P22, AV), which further intensified the sense of authenticity: “there was one time when a pedestrian crossed at the back. I just looked and thought ‘oh, there’s a pedestrian behind me’. Yeah, the lady. At that moment. I felt like ‘oh, this seems to be very realistic”’ (P12, Drone). Additionally, since the filming locations were in areas familiar to most participants, many were able to immediately recall their real-life experiences in these settings, noting a strong sense of familiarity: “it’s the road around [name of the building], so it feels very realistic to me. I can relate myself and it reminds me of my daily life” (P11, Drone). Some participants further expressed their appreciation for this familiarity: “what I thought was really good was that it did look like [name of the street] and had that feeling” (P14, AV).

4.2.2 Making sense of the HMIs in relation to real-life observations.

As participants encountered the speculative designs, many comprehended the designs by drawing on their real-life observations. Some participants were intrigued or even perplexed by concepts that extended beyond their everyday experiences; for instance, one participant asked “how are the floors lighting up” (P8, AV). Another participant expressed that the screen display felt more realistic than the projected crossing because “there are indeed some drones [carrying a screen] like that...but you wouldn’t see a light on the road” (P5, Drone). P19 in the AV study noted that how one perceives such advanced concepts in VR “depends on how these are adopted as general public understanding”. Notably, many participants used their experiences in similar real-world situations to explain what they saw in the virtual overlays. For example, some conjectured the function or purpose of the AV as “security vans going around campus” (P20, AV) and “carrying something that needs to get from [name of the road] to us” (P18, AV). Participants in the Drone study interpreted flight patterns through norms they already held, as P1 stated “if a drone is very close to you, I think it’s like the drone has something to tell you”, and P12 noted “it’s very interesting when the drone came back and emphasised [that I can cross], so at that moment I understand it better”.

4.2.3 Forming preferences based on habitual behaviours in similar settings.

When assessing the design concepts, participants reflected on their own daily behaviours in similar settings and used these as a basis for forming their preferences. For example, some participants related the designs to their walking habits in pedestrianised zones and therefore considered certain interface modalities more suitable for them, e.g., “if I had my noise cancelling headphones on, I will definitely be able to see the flashing lights more clearly compared to the light strip” (P13, AV), “if I’m on my phone, I would probably see the ones on the ground a bit quicker than the ones in the air” (P7, AV). Interestingly, we found that participants sometimes formed contrasting preferences based on their own analyses of the situations. In the Drone study, while some participants preferred the drone to be closer to “catch the content on the screen” (P14, Drone), others perceived closer proximity as a safety risk, worrying “what if there is an issue in its system” (P11, Drone). Similarly, in the AV study, to avoid the car, some participants chose to move towards the side where other pedestrians (in the video) stood, seeing it as “a safe place to stand” (P4, AV), whereas others preferred the side with more open areas since “there are already people [on the other side] and there is more space over here” (P16, AV).


5 DISCUSSION

5.1 Implications for Using Immersive In-Situ Prototyping

Our results suggest that immersive VR environments created from real-world recordings can enhance the realism of scenes through their high naturalness and familiarity. Based on prior studies using 360-degree video-based VR, immersive environments recorded from reality might have inherent advantages in ecological validity compared to those synthesised by computers, even at high visual fidelity [17]. Creating interaction scenarios with a high degree of naturalness can be important for evaluating traffic HMIs, as it can reduce potential uncanny valley effects of avatars or distractions arising from the novelty of virtual simulations [37, 40]. Furthermore, since the 360-degree video method supports conveying narratives with high plausibility [22, 43], it can be used to set up various environmental or social aspects often considered in testing pedestrian interfaces, such as the influence of other pedestrians [9, 27].

Our results further indicate that the familiarity of the settings helped participants immerse themselves as pedestrians; moreover, participants explicitly recognised the scenes as part of their daily lives and actively related them to their everyday behaviours when assessing the futuristic interfaces. Such approaches open up opportunities for eliciting user requirements using representations of local contexts when conducting actual field studies is not feasible. This could serve as a useful phase in iterative design processes, for example, supporting the testing of early proposals with socio-cultural considerations, such as local traffic norms. With continued investigation, the immersive in-situ prototyping technique holds promise for assisting broader user-centred design research, as prior work using similar prototyping methods suggests its potential for user-engaged design [21], co-creation [50], and interface learning [35, 50].

5.2 Technical Considerations

In examining the efficacy of integrating 360-degree videos and virtual overlays, we observed that most participants did not emphasise or distinctly mention the technical aspect of overlaying effects in our VR simulations. Their qualitative feedback around VR experiences primarily highlighted the feeling of realism in the scenes in general. We suspect two reasons behind this observation: (1) participants were mostly captivated by the novelty of those interfaces; and (2) they focused on the assigned tasks, which diverted their attention from analysing the technical aspects of the simulations. Despite this, we found indications of good integration of the two “materials”: (1) most participants appeared to naturally engage with the scenes and express their views on the interfaces without any hindrance; (2) there were a few comments on the relationship between the rendered AV (including its interfaces) and the other real-world pedestrians in the videos, suggesting the perception that the virtual and the real elements were in the same world. Nevertheless, we report below technical challenges in blending the two materials, along with participant feedback about the potential unnaturalness associated with these challenges.

A notable challenge was the spatial mapping of the two materials. For example, we found that there could be slight discrepancies in the scale of objects at varying distances from the viewer (i.e., the camera’s position). This phenomenon was primarily due to the 360-degree videos being projected onto a spherical surface, i.e., the panoramic skybox. A practical solution that we employed was to use the player’s egocentric perspective, as seen in Unity’s game mode, to closely track the overlay effects during video playback. This approach enabled us to identify and adjust (e.g., through modifying scripts or animators) any misalignment at specific video moments. Nevertheless, three participants noted the feeling of the AV not being firmly attached to the ground, whereas this sensation was not reported in the Drone study, possibly because the drone was operating in the air. This could also explain why the spatial presence rating in the AV study was lower than that in the Drone study (Figure 5). Other reasons might include lower visibility at night and the different area sizes covered by the interfaces, among other factors.
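One way such moment-by-moment corrections could be authored is sketched below in Unity C#. This is an illustrative assumption rather than our actual implementation: hand-picked alignment keys (times, positions, scales) would be entered in the Inspector and interpolated against the video clock.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: corrects an overlay's position and scale at hand-picked video timestamps
// where misalignment with the 360-degree footage was observed during playback.
public class OverlayAlignmentCorrector : MonoBehaviour
{
    [System.Serializable]
    public struct AlignmentKey
    {
        public float videoTime;   // seconds into the 360-degree video
        public Vector3 position;  // corrected overlay position at that moment
        public float scale;       // corrected uniform scale at that moment
    }

    [SerializeField] private VideoPlayer video;
    [SerializeField] private Transform overlay;
    [SerializeField] private AlignmentKey[] keys;  // authored manually, ordered by videoTime

    private void LateUpdate()
    {
        float t = (float)video.time;

        // Blend between the two keys surrounding the current video time
        // so corrections apply smoothly rather than popping into place.
        for (int i = 0; i < keys.Length - 1; i++)
        {
            if (t >= keys[i].videoTime && t <= keys[i + 1].videoTime)
            {
                float u = Mathf.InverseLerp(keys[i].videoTime, keys[i + 1].videoTime, t);
                overlay.position = Vector3.Lerp(keys[i].position, keys[i + 1].position, u);
                overlay.localScale = Vector3.one * Mathf.Lerp(keys[i].scale, keys[i + 1].scale, u);
                break;
            }
        }
    }
}
```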

Furthermore, to enhance the natural appearance of overlays, it is essential to accurately simulate natural phenomena that support the intended interpretation of the modality. For example, a successful implementation in our work was the reflection of the paving tile lighting on the AV’s wheels, which participants readily perceived as light emanating from the ground. Nonetheless, one participant mentioned that the drone projections might be too bright to be considered fully realistic, even though we had increased the transparency of those projections.

5.3 Limitations and Future Work

Since immersive 360-degree videos are not inherently interactive beyond the viewer’s ability to turn their head and look in all directions [50], interactions initiated by pedestrians in our studies were represented through key presses on controllers (Drone) and through the think-aloud protocol (AV). While the two methods still allowed us to collect data essential for indicating participants’ decision-making, some participants expressed the desire to physically move around the environments. We are currently experimenting with (1) filming multiple videos at various positions to allow users to move through the space and (2) enriching controller- or gesture-based commands for interaction with the dynamic overlays [2]. Future research could explore methods with more sophisticated technical setups, such as live-streaming 360-degree videos [34] with dynamic insertion of real-time objects [39].


6 CONCLUSION

This paper proposes immersive in-situ prototyping for evaluating pedestrian interfaces within autonomous traffic systems, presenting a cost-effective approach for early-stage testing of HMIs in contexts that closely mimic real-world situations. Based on case studies with a drone and with an autonomous vehicle, we found that the immersive in-situ prototypes demonstrated high naturalness and were effective in eliciting participant resonance with real-life experiences, highlighting the potential of this approach to facilitate meaningful user feedback in assessing speculative proposals.

Moving forward, the application of immersive in-situ prototypes holds promise for supporting the rapidly evolving exploration of HMIs and for enhancing the understanding of how users may interpret and relate to interface proposals in real-world scenarios. Future studies would benefit from exploring the scalability of this approach and its applicability across a broader range of speculative interfaces.


ACKNOWLEDGMENTS

This study was funded by the Australian Research Council through grant number DP200102604 Trust and Safety in Autonomous Mobility Systems: A Human-Centered Approach.

Footnotes

1. In this paper, we use XR as an umbrella term to encompass VR, AR, and MR [3, 50], referring to blending physical and virtual environments through computer and display technologies [29, 30].
2. https://unity.com/
3. https://www.unrealengine.com/
Supplemental Material

Talk video: 3613905.3651071-talk-video.mp4 (MP4, 49.4 MB)

References

1. Claudia Ackermann, Matthias Beggiato, Sarah Schubert, and Josef F Krems. 2019. An experimental study to investigate design and assessment criteria: What is important for communication between pedestrians and automated vehicles? Applied Ergonomics 75 (2019), 272–282.
2. Telmo Adão, Luís Pádua, Miguel Fonseca, Luís Agrellos, Joaquim J Sousa, Luís Magalhães, and Emanuel Peres. 2018. A rapid prototyping tool to produce 360 video-based immersive experiences enhanced with virtual/multimedia elements. Procedia Computer Science 138 (2018), 441–453.
3. Lemonia Argyriou, Daphne Economou, and Vassiliki Bouki. 2020. Design methodology for 360 immersive video applications: the case study of a cultural heritage virtual tour. Personal and Ubiquitous Computing 24 (2020), 843–859.
4. Pavlo Bazilinskyy, Dimitra Dodou, and Joost De Winter. 2019. Survey on eHMI concepts: The effect of text, color, and perspective. Transportation Research Part F: Traffic Psychology and Behaviour 67 (2019), 175–194.
5. Matthias Berning, Takuro Yonezawa, Till Riedel, Jin Nakazawa, Michael Beigl, and Hide Tokuda. 2013. pARnorama: 360 degree interactive video for augmented reality prototyping. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication. 1471–1474.
6. Anke M Brock, Julia Chatain, Michelle Park, Tommy Fang, Martin Hachet, James A Landay, and Jessica R Cauchard. 2018. Flymap: Interacting with maps projected from a drone. In Proceedings of the 7th ACM International Symposium on Pervasive Displays. 1–9.
7. Marion Buchenau and Jane Fulton Suri. 2000. Experience prototyping. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. 424–433.
8. Chia-Ming Chang, Koki Toda, Xinyue Gui, Stela H Seo, and Takeo Igarashi. 2022. Can Eyes on a Car Reduce Traffic Accidents?. In Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 349–359.
9. Mark Colley, Marcel Walch, and Enrico Rukzio. 2019. For a better (simulated) world: considerations for VR in external communication research. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings. 442–449.
10. Shuchisnigdha Deb, Daniel W Carruth, Richard Sween, Lesley Strawderman, and Teena M Garrison. 2017. Efficacy of virtual reality in pedestrian safety research. Applied Ergonomics 65 (2017), 449–460.
11. Debargha Dey, Coen De Zeeuw, Miguel Bruns, and Bastian Pfleging. 2021. Shape-Changing Interfaces as eHMIs: Exploring the Design Space of Zoomorphic Communication between Automated Vehicles and Pedestrians. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 137–141.
12. Debargha Dey, Azra Habibovic, Andreas Löcken, Philipp Wintersberger, Bastian Pfleging, Andreas Riener, Marieke Martens, and Jacques Terken. 2020. Taming the eHMI jungle: A classification taxonomy to guide, compare, and assess the design principles of automated vehicles’ external human-machine interfaces. Transportation Research Interdisciplinary Perspectives 7 (2020), 100174.
13. Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, Martin Tomitsch, and Stewart Worrall. 2023. Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes. Multimodal Technologies and Interaction 7, 8 (2023), 82.
14. Lukas A Flohr, Dominik Janetzko, Dieter P Wallach, Sebastian C Scholz, and Antonio Krüger. 2020. Context-based interface prototyping and evaluation for (shared) autonomous vehicles using a lightweight immersive video-based simulator. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. 1379–1390.
15. Michael A Gerber, Ronald Schroeter, and Julia Vehns. 2019. A video-based automated driving simulator for automotive UI prototyping, UX and behaviour research. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 14–23.
16. Marius Hoggenmueller and Martin Tomitsch. 2019. Enhancing pedestrian safety through in-situ projections: a hyperreal design approach. In Proceedings of the 8th ACM International Symposium on Pervasive Displays. 1–2.
17. Marius Hoggenmüller, Martin Tomitsch, Luke Hespanhol, Tram Thi Minh Tran, Stewart Worrall, and Eduardo Nebot. 2021. Context-based interface prototyping: Understanding the effect of prototype representation on user feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
18. Robin Horst, Savina Diez, and Ralf Dörner. 2019. A 360 Video Virtual Reality Room Demonstration. In International Symposium on Visual Computing. Springer, 431–442.
19. Jihyung Kim, Kyeongsun Kim, and Wooksung Kim. 2022. Impact of immersive virtual reality content using 360-degree videos in undergraduate education. IEEE Transactions on Learning Technologies 15, 1 (2022), 137–149.
20. Pascal Knierim, Steffen Maurer, Katrin Wolf, and Markus Funk. 2018. Quadcopter-projected in-situ navigation cues for improved location awareness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–6.
21. Jin-Kook Lee, Sanghoon Lee, Young-chae Kim, Sumin Kim, and Seung-Wan Hong. 2023. Augmented virtual reality and 360 spatial visualization for supporting user-engaged design. Journal of Computational Design and Engineering 10, 3 (2023), 1047–1059.
22. Laurent Lescop. 2017. Narrative grammar in 360. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). IEEE, 254–257.
23. Jane Lessiter, Jonathan Freeman, Edmund Keogh, and Jules Davidoff. 2001. A cross-media presence questionnaire: The ITC-Sense of Presence Inventory. Presence: Teleoperators & Virtual Environments 10, 3 (2001), 282–297.
24. Youn-Kyung Lim, Erik Stolterman, and Josh Tenenberg. 2008. The anatomy of prototypes: Prototypes as filters, prototypes as manifestations of design ideas. ACM Transactions on Computer-Human Interaction (TOCHI) 15, 2 (2008), 1–27.
25. Andreas Löcken, Carmen Golling, and Andreas Riener. 2019. How should automated vehicles interact with pedestrians? A comparative analysis of interaction concepts in virtual reality. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 262–274.
26. Stefanie M. Faas, Johannes Kraus, Alexander Schoenhals, and Martin Baumann. 2021. Calibrating pedestrians’ trust in automated vehicles: does an intent display in an external HMI support trust calibration and safe crossing behavior?. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17.
27. Karthik Mahadevan, Elaheh Sanoubari, Sowmya Somanath, James E Young, and Ehud Sharlin. 2019. AV-Pedestrian interaction design using a pedestrian mixed traffic simulator. In Proceedings of the 2019 Designing Interactive Systems Conference. 475–486.
28. Philipp Maruhn, André Dietrich, Lorenz Prasch, and Sonja Schneider. 2020. Analyzing pedestrian behavior in augmented reality—proof of concept. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 313–321.
29. Paul Milgram and Fumio Kishino. 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems 77, 12 (1994), 1321–1329.
30. Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. 1995. Augmented reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and Telepresence Technologies, Vol. 2351. SPIE, 282–292.
31. Alexandre M Nascimento, Anna Carolina M Queiroz, Lucio F Vismari, Jeremy N Bailenson, Paulo S Cugnasca, João B Camargo Junior, and Jorge R de Almeida. 2019. The role of virtual reality in autonomous vehicles’ safety. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). IEEE, 50–507.
32. Trung Thanh Nguyen, Kai Holländer, Marius Hoggenmueller, Callum Parker, and Martin Tomitsch. 2019. Designing for projection-based communication between autonomous vehicles and pedestrians. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 284–294.
33. Nadine Pfeiffer-Leßmann and Thies Pfeiffer. 2018. ExProtoVAR: A lightweight tool for experience-focused prototyping of augmented reality applications using virtual reality. In HCI International 2018–Posters’ Extended Abstracts: 20th International Conference, HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part II 20. Springer, 311–318.
34. Taehyun Rhee, Stephen Thompson, Daniel Medeiros, Rafael Dos Anjos, and Andrew Chalmers. 2020. Augmented virtual teleportation for high-fidelity telecollaboration. IEEE Transactions on Visualization and Computer Graphics 26, 5 (2020), 1923–1933.
35. Michael A Rupp, Katy L Odette, James Kozachuk, Jessica R Michaelis, Janan A Smither, and Daniel S McConnell. 2019. Investigating learning outcomes and subjective experiences in 360-degree videos. Computers & Education 128 (2019), 256–268.
36. Martijn J Schuemie, Peter Van Der Straaten, Merel Krijn, and Charles APG Van Der Mast. 2001. Research on presence in virtual reality: A survey. CyberPsychology & Behavior 4, 2 (2001), 183–201.
37. Mel Slater, Pankaj Khanna, Jesper Mortensen, and Insu Yu. 2009. Visual realism enhances realistic response in an immersive virtual environment. IEEE Computer Graphics and Applications 29, 3 (2009), 76–84.
38. Martha M Snyder, Steven Kramer, Diane Lippe, and Sharan Sankar. 2023. Design and Implementation of 360-Degree Video Vignettes in Immersive Virtual Reality: A Quality Management in Higher Education Case. The Qualitative Report 28, 7 (2023), 2113–2155.
39. Joanna Tarko, James Tompkin, and Christian Richardt. 2019. Real-time virtual object insertion for moving 360 videos. In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. 1–9.
40. Tram Thi Minh Tran, Callum Parker, and Martin Tomitsch. 2021. A review of virtual reality studies on autonomous vehicle–pedestrian interaction. IEEE Transactions on Human-Machine Systems 51, 6 (2021), 641–652.
41. Tram Thi Minh Tran, Callum Parker, Yiyuan Wang, and Martin Tomitsch. 2022. Designing wearable augmented reality concepts to support scalability in autonomous vehicle–pedestrian interaction. Frontiers in Computer Science 4 (2022), 866516.
42. J Pablo Nuñez Velasco, Haneen Farah, Bart van Arem, and Marjan P Hagenzieker. 2019. Studying pedestrians’ crossing behavior when interacting with automated vehicles using virtual reality. Transportation Research Part F: Traffic Psychology and Behaviour 66 (2019), 1–14.
43. Paul Hendriks Vettehen, Daan Wiltink, Maite Huiskamp, Gabi Schaap, and Paul Ketelaar. 2019. Taking the full view: How viewers respond to 360-degree video news. Computers in Human Behavior 91 (2019), 24–32.
44. Alexandra Voit, Sven Mayer, Valentin Schwind, and Niels Henze. 2019. Online, VR, AR, lab, and in-situ: Comparison of research methods to evaluate smart artifacts. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
45. Tamara von Sawitzky, Philipp Wintersberger, Andreas Riener, and Joseph L Gabbard. 2019. Increasing trust in fully automated driving: Route indication on an augmented reality head-up display. In Proceedings of the 8th ACM International Symposium on Pervasive Displays. 1–7.
46. Nicola Walshe and Paul Driver. 2019. Developing reflective trainee teacher practice with 360-degree video. Teaching and Teacher Education 78 (2019), 97–105.
47. Xiaohong Wu and Ivan Ka Wai Lai. 2021. Identifying the response factors in the formation of a sense of presence and a destination image from a 360-degree virtual tour. Journal of Destination Marketing & Management 21 (2021), 100640.
48. Dohyeon Yeo, Gwangbin Kim, and Seungjun Kim. 2020. Toward immersive self-driving simulations: Reports from a user study across six platforms. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12.
49. Sutharsan Yoganathan, David A Finch, E Parkin, and J Pollard. 2018. 360 virtual reality video for the acquisition of knot tying skills: A randomised controlled trial. International Journal of Surgery 54 (2018), 24–27.
50. Sangar Zucchi, Simone Keller Füchter, George Salazar, and Karen Alexander. 2020. Combining immersion and interaction in XR training with 360-degree video and 3D virtual objects. In 2020 23rd International Symposium on Measurement and Control in Robotics (ISMCR). IEEE, 1–5.

Published in

CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, May 2024, 4761 pages. ISBN 9798400703317. Proceedings DOI: 10.1145/3613905. Article DOI: 10.1145/3613905.3651071.

Copyright © 2024 Owner/Author. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States.
