Work in Progress

GaitWay: Gait Data-Based VR Locomotion Prediction System Robust to Visual Distraction


Abstract

In VR environments, a user’s sense of presence is enhanced through natural locomotion. Redirected Walking (RDW) technology can provide a wider walking area by manipulating the trajectory of the user. Because knowledge of the user’s future position enables broader application of RDW, prior research has combined gaze data with past positions to reduce prediction errors. However, in VR content replete with creatures and decorations, gaze dispersion may degrade the quality of this data. Thus, we propose GaitWay, an alternative system that utilizes gait data, which correlates directly with user locomotion. This study involved 11 participants navigating a visually distracting three-tiered VR environment while performing designated tasks. We employed a long short-term memory network for GaitWay to forecast positions two seconds ahead and evaluated the prediction accuracy. The findings demonstrated that incorporating gaze data significantly increased errors in highly distracted settings, whereas GaitWay consistently reduced errors regardless of environmental complexity.


Figure 1: Walking in a monotonous experimental environment (left) and in the actual VR environment with visual distractions (right) while recording gaze and gait data.


1 INTRODUCTION

Natural walking in virtual spaces significantly enhances the sense of presence and promotes authentic interactions within virtual reality (VR) environments [41, 48], thereby establishing natural walking as a fundamental aspect of VR space design [28, 48]. Nevertheless, walking can result in collisions with physical obstacles, posing safety risks that must be mitigated. Redirected walking (RDW) technology has been developed to address these concerns, enhancing user safety and optimizing physical space utilization. RDW dynamically alters the VR environment in response to user movement, thereby facilitating exploration of more extensive VR areas [36]. However, heightened manipulation may induce simulation sickness [1, 9]. To counteract this, RDW incorporates a detection threshold (DT) that defines the permissible extent of virtual environment modification [13]. Consequently, the capacity of RDW to enlarge VR spaces is inherently limited. Within the confines of the DT, the RDW controller aims to guide users to strategic locations, maximizing VR space utilization while concurrently minimizing simulation sickness.

There are three types of RDW controllers: generalized, scripted, and predictive [35]. Predictive controllers have been extensively researched because they can handle general environments by predicting user positions [14, 18, 34, 38, 52, 53, 55]. Path prediction often involves analyzing the habitual behaviors of users [18] by employing past directional and speed data to anticipate near-future locations [34, 55]. Incorporating gaze data has advanced research on dynamic user positioning [38, 52] and on prediction in static environments [14, 53]. Nonetheless, VR environments, often adorned with elaborate backgrounds and numerous objects, can distract users visually, impacting the reliability of gaze data. Therefore, our study introduces a new sensing mechanism for predicting locomotion: pressure sensors. Gait data relates directly to a person’s actual movement and is therefore expected to provide more robust predictions than gaze data, which is only indirectly related. Gait data can characterize the repetitive movements and phases of walking [10, 22]. Moreover, gait data can indicate user action states, such as navigating ramps or stairs [40], and assist in distance estimation [47]. These findings suggest that gait data could be instrumental in predicting user locomotion in VR.

To assess the degree of visual distraction, we classified VR environments into three distinct categories: low, mid, and high visual distraction levels. The environments’ visual distraction was evaluated based on the average eye movement speed, gaze dispersion surveys, and the proportion of objects observed by users. For locomotion prediction, data regarding user position, rotation, gaze, and gait were collected. The gait data comprised the center of pressure (CoP) and the anterior–posterior acceleration of each foot. For data collection, we adopted the research scenario of Stein et al. [38], which includes three gaze interaction scenarios: search, follow, and avoid. These data were then used to predict positions two seconds ahead with a long short-term memory (LSTM) neural network architecture. Our research question is as follows: Can GaitWay or gaze data reduce locomotion prediction errors in VR environments with visual distractions?


2 BACKGROUND

2.1 Gaze During Walking and Visual Distraction

Previous studies have shown that eye movements are correlated with physical activities [26]. Typically, eye movements aligned with behavioral objectives are prioritized over other bodily movements [15, 25]. Therefore, eye movements can be used to estimate action intention, although people do not always fix their gaze on future goals [8, 16, 33, 44, 45]. Eye movements during walking allow walkers to discern both goals and obstacles along the path. For instance, pedestrians glance ahead to navigate safely, but their gaze tends to shift toward their goal as they near it [12, 17]. Moreover, eye movements are used to assess alternative targets and to identify a specific target amid distractions [50, 52]. Consequently, eye movements can be easily affected by factors such as visual distractions and task requirements [46].

2.2 Gait Data Sensing

Gait data, such as CoP and foot acceleration, effectively represent information about the current walking motion [10, 22, 40]. In Chen et al.’s study, plantar pressure sensors and inertial measurement units were used to detect foot pressure and recognize movement intentions across various activities [10]. Joo et al.’s research employed insole-type pressure sensors to gather plantar pressure, estimating the speed of the walking cycle and exploring the relationship between walking speed and pressure [22]. Truong et al.’s study leveraged gait pressure data to accumulate walking data over linear distances, employing phase information for precise walking distance estimation [47]. These studies collectively suggest that gait data, including walking pressure and foot acceleration, provide a comprehensive range of detailed information pertinent to walking. Moreover, since walking always requires moving the feet regardless of the task, gait data is directly related to human movement. Hence, we configured our GaitWay system to include anterior–posterior acceleration and total foot pressure in addition to foot CoP.

2.3 Redirected Walking and Locomotion Prediction

Natural walking in VR can offer a heightened sense of presence [41, 48] compared to other locomotion techniques such as treadmills [27] or teleportation [5], but it also presents safety challenges due to physical obstacles [36]. Although RDW was developed to address user safety and walking space [36], it can induce motion sickness owing to mismatches between the virtual and real environments [1, 9]. Thus, RDW incorporates a DT [13], which restricts the extent of user direction changes, limiting the capability of RDW to avoid obstacles and curtailing the expandable space.

Efforts to enhance the spatial capabilities of RDW have focused on two main strategies: 1) DT expansion, and 2) enhancement of RDW controllers. Firstly, the expansion of the DT has been pursued using insights from the human vestibular system or gaze, and these approaches can be categorized into stimulus-based methods [3, 19, 20, 21, 24, 29, 30, 32] and behavior-based methods [4, 30, 31, 42, 43]. Secondly, improving RDW controllers, particularly predictive controllers, can further improve the spatial expansion of RDW by making better use of the increased DT. There are three types of RDW controllers: generalized, scripted, and predictive [35]. Generalized controllers direct users to specific points (steer-to-center, steer-to-waypoint) or orbits (steer-to-orbit) [35], whereas scripted controllers alter direction based on predefined criteria at certain locations [36]. However, these two methods encounter limitations in enhancing RDW performance due to their static algorithms and predetermined directional guidance [54, 55]. Consequently, studies have attempted to develop predictive controllers that handle more diverse VR environments by dynamically determining the appropriate direction and manipulation intensity. Zmuda and Nescher developed algorithms that steer users toward directions with more expansive physical space at VR intersections by calculating the future real-world locations of potential walking paths [34, 55]. Zank and Kunz demonstrated that models incorporating additional gaze data in T-shaped corridors, based on walking patterns, yielded fast and accurate predictions [2]. Recent studies have further combined additional body data such as gaze [7, 14, 17, 26] or applied deep-learning methods [11]. Stein et al.’s work extended this concept by integrating gaze data into an LSTM model, demonstrating enhanced performance with this inclusion [38].

Our study aims to reduce locomotion prediction errors by suggesting GaitWay, which incorporates gait data into the deep learning framework.


3 METHOD

3.1 Apparatus

The experimental setup involved the Meta Quest Pro, featuring a 72 Hz refresh rate, a 105° field of view, and a display resolution of 1080 × 1920 pixels per eye. The VR environment was developed in Unity and wirelessly connected to a desktop with an RTX 2080 GPU. The participants were outfitted with a waist belt and had the nondominant-hand controller affixed to the belt to measure the body’s yaw. Object interactions during tasks were executed using the dominant-hand controller. Eye tracking was conducted using the Quest Pro at a 30 Hz tracking rate, achieving an average accuracy of 1.652° and an SD of 0.699° within a 15° field of view, on par with current VR headsets [37, 39, 49]. Prior to the experiment, participants underwent an eye calibration process using the Quest Pro’s built-in procedure. During each trial, participants fixated on five custom-designed fixation points [23] to calibrate potential slippage during walking. For tracking gait data, the Moticon OpenGo sensor was utilized, comprising 16 sensors with a sensitivity range of 0–50 N/cm² and a resolution of 0.25 N/cm². The tracking frequency for gait data was set at 50 Hz.
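Because the eye tracker (30 Hz), the insole sensors (50 Hz), and the HMD (72 Hz) report at different rates, their streams must be aligned on a common timeline before being combined into model inputs (Section 3.3.2 uses 75 samples per 3 s, i.e., 25 Hz). The paper does not detail its synchronization procedure; the following is a minimal NumPy sketch under the assumption of timestamped samples and simple linear interpolation onto a shared 40 ms grid, with random placeholder data.

```python
import numpy as np

def resample_stream(timestamps, values, grid):
    """Linearly interpolate one feature channel onto a common time grid (seconds)."""
    return np.interp(grid, timestamps, values)

# Hypothetical recorded streams (timestamps in seconds, one feature channel each).
gaze_t = np.arange(0.0, 3.0, 1 / 30)          # ~30 Hz eye tracker
gaze_yaw = np.random.randn(len(gaze_t))
gait_t = np.arange(0.0, 3.0, 1 / 50)          # 50 Hz insole sensor
cop_ap = np.random.randn(len(gait_t))

grid = np.arange(0.0, 3.0, 0.04)              # 25 Hz model timeline (40 ms steps)
aligned = np.column_stack([
    resample_stream(gaze_t, gaze_yaw, grid),
    resample_stream(gait_t, cop_ap, grid),
])
print(aligned.shape)                          # (75, 2): 75 samples cover the 3 s input window
```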

3.2 Experimental Settings

During the experiment, participants engaged in tasks within the Search Room, Corridor, and Obstacle Room. The overall experimental design was influenced by a previous study [11] in which three gaze interaction scenarios during locomotion (search, follow, avoid) were outlined. However, we introduced an element of randomness to this setup: we randomized the angle of the cylinder arrangement in the Search Room and the route direction in the Obstacle Room. The VR space was sized at 5.5 m × 6.5 m, matching the dimensions of the experimental location. Participants initiated the experiment by completing a 5-fixation-point gaze task and responding to a distraction question, “How much does it seem like your visual attention is scattered?”, on an 11-point Likert scale to measure perceived visual distraction. Upon completing a task in one room, they exited through a door, traversed the corridor to the opposite door, and entered the next room for the subsequent task. Corridors were situated on either side of the rooms. After each task, a door leading to either the right or left corridor was randomly generated, ensuring equal occurrences of each. The specific tasks in each room were as follows:


Figure 2: Three types of room: (a) search room, (b) corridor, and (c) obstacle room

Search Room. Participants were tasked with identifying the correct option among seven elliptical columns, as depicted in Fig. 2 (a). The orientation of each column was randomized for every task, and the hexagonal arrangement, with a 1.6 m gap, was rotated between 0° to 60°. Participants confirmed their selection by bringing the controller close to the sphere atop the column. The correct choices triggered a sound, whereas incorrect selections elicited no feedback. The position of the correct column varied randomly in each trial.

Corridor. The corridor served as a transitional space between the Search Room and the Obstacle Room. After the task, participants navigated through a randomly generated left or right corridor, designed to emulate scenarios of following either wall direction.

Obstacle Room. Participants walked toward a target, avoiding obstacles en route. Upon reaching the target and aligning the controller with it, a red platform materialized randomly. The participant then moved to this platform, triggering a new path creation. An obstacle was placed centrally between the platform and the target, potentially offset by 30 cm to the right, left, or directly along the path. Participants completed this process three times. Afterwards, they engaged in a fixation task and answered the question.

The experimental tasks were conducted against backgrounds with varying degrees of visual distraction, categorized into three levels (low, mid, and high) to reflect the range of VR content on the market. Each level was designed with distinct characteristics, while all task-related objects remained consistent across the environments. The three levels were designed to represent, respectively, a controlled environment like those used in most experiments, an environment with a plausible background that users might encounter in VR games, and an environment with additional active creatures and changes such as those that appear in battles or event situations. Through a pilot test with four lab members, the stages were adjusted until they showed significant differences in ‘survey response’ and ‘proportion of watching distractor’ (see Section 3.3.1).


Figure 3: Three distinct levels of distraction in the environment: (a) low, (b) mid, and (c) high distraction environments.

Low distraction. This environment was minimalistic, featuring only the essential objects for task completion. It lacked enclosing walls but included a monochrome gray-colored ground and white skybox.

Mid distraction. The surrounding space was crafted to emulate a snowy landscape, complete with snow-covered terrain, trees, and an ornate skybox, built from a free asset on the Unity Asset Store. Additionally, eight tall trees and five small bushes were placed around the perimeter of the user’s walking area, so that users would naturally see the trees in most situations.

High distraction. Dynamic elements were added to this setting. Participants encountered moving objects such as birds, a dragon, two tornadoes, a volcano, and fireworks. Five sitting birds were placed on a tree, the floor, or a stone, spaced roughly evenly around the area. The five remaining large distractors were placed at approximately 70° intervals to divert attention from all directions.

This study received approval from the institutional review board. Eleven participants (9 males, 2 females; M = 24.7 years, SD = 2.3 years) were enrolled. Three of them had experience in a VR environment using an HMD and had a history of participating in VR experiments related to RDW. The experiment comprised two phases: training data collection and test data collection. In the training phase, participants undertook 10 trials in a low distraction environment. For the test phase, they completed 5 trials in each of the low, mid, and high distraction environments.

3.3 Data Configuration

Throughout the experiment, we collected two types of data: 1) data for assessing visual distraction and 2) data for locomotion prediction.

3.3.1 Data for checking visual distraction.

This involved gathering users’ gaze data and administering a survey to determine three key metrics for evaluating visual distraction: 1) users’ survey responses, 2) mean eye movement speed, and 3) the proportion of time spent focusing on distractors. Because eye movement speed can indicate visual distraction [51], mean eye movement speed was quantified as the average two-dimensional velocity of both eyes. The proportion of attention was determined by categorizing the objects viewed by participants into two groups: ‘Task’ and ‘Distractor’. ‘Task’ includes all objects essential for task execution in each room (e.g., target, obstacle, ellipse floor, wall), whereas ‘Distractor’ includes all other elements (e.g., background, non-task-related objects).
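As an illustration of how the latter two metrics can be computed, here is a minimal sketch; it assumes a per-sample log of eye yaw/pitch angles and the label of the object hit by each gaze ray. The function names and the random placeholder data are hypothetical, not taken from the paper.

```python
import numpy as np

def mean_eye_speed(yaw_deg, pitch_deg, dt):
    """Average 2-D angular gaze speed (deg/s) from per-sample yaw/pitch of the eyes."""
    d_yaw = np.diff(yaw_deg)
    d_pitch = np.diff(pitch_deg)
    return np.mean(np.hypot(d_yaw, d_pitch) / dt)

def distractor_proportion(hit_labels, task_objects):
    """Fraction of gaze samples that landed on objects outside the 'Task' category."""
    is_task = np.isin(hit_labels, list(task_objects))
    return 1.0 - is_task.mean()

# Hypothetical per-sample gaze log for one trial.
dt = 1 / 30                                      # 30 Hz eye tracking
yaw = np.cumsum(np.random.randn(300))            # degrees
pitch = np.cumsum(np.random.randn(300))
hits = np.random.choice(["target", "wall", "tree", "bird"], size=300)

print(mean_eye_speed(yaw, pitch, dt))
print(distractor_proportion(hits, {"target", "obstacle", "ellipse floor", "wall"}))
```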

Data     | HMD 2D Velocity | HMD Yaw/Pitch | Body Yaw | Gaze Yaw/Pitch | L/R Foot ML/AP CoP | L/R Foot AP Acc. | L/R Foot Total Pressure | No. of Features
control  | O               | O             | O        |                |                    |                  |                         | 5
gaze     | O               | O             | O        | O              |                    |                  |                         | 7
cop      | O               | O             | O        |                | O                  |                  |                         | 9
GaitWay  | O               | O             | O        |                | O                  | O                | O                       | 13

Table 1: Four types of data configuration for input features. AP indicates Anterior–Posterior (forward–backward) direction, and ML indicates Medio–Lateral (left–right) direction.

3.3.2 Data for locomotion prediction.

Data for locomotion prediction were compared across four conditions: control, gaze, cop, and GaitWay, to examine whether our proposed GaitWay configuration is superior to the other data configurations. Each condition is described in Table 1. As the foot-related features were not influenced by the experimental environment, they were standardized separately. Details on additional processing for the remaining features can be found in Appendix A. The input sequence for the predictive model was established at 75 samples, corresponding to a 3 s timeframe, enabling the model to predict the position 2 s into the future based on the preceding 3 s of data.
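To make the windowing concrete, the following sketch shows how a per-trial feature matrix could be sliced into 75-sample (3 s) input sequences paired with the 2-D position 2 s ahead (50 samples at 25 Hz). The function and the placeholder arrays are illustrative assumptions rather than the authors’ code.

```python
import numpy as np

def make_windows(features, positions, seq_len=75, horizon=50):
    """Slice a per-trial feature matrix into (input window, future 2-D position) pairs.

    features:  (T, F) array sampled at 25 Hz (F = 5, 7, 9, or 13 per Table 1)
    positions: (T, 2) array of head X/Z positions on the same timeline
    horizon=50 samples corresponds to 2 s at 25 Hz; seq_len=75 covers the 3 s history.
    """
    X, y = [], []
    for t in range(seq_len, len(features) - horizon):
        X.append(features[t - seq_len:t])
        y.append(positions[t + horizon])
    return np.stack(X), np.stack(y)

# Hypothetical trial with the 13 GaitWay features.
feats = np.random.randn(1000, 13)
pos = np.cumsum(np.random.randn(1000, 2) * 0.01, axis=0)
X, y = make_windows(feats, pos)
print(X.shape, y.shape)   # (875, 75, 13) (875, 2)
```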

3.4 Prediction Model Specification


Figure 4: LSTM network structure used in locomotion prediction.

In the previous work of Stein et al., prediction errors were reduced when an LSTM was applied to data that included gaze data [38]. We used the same LSTM model for the GaitWay system to clarify whether changes in performance were due to visual distraction rather than the model. The model’s architecture includes two LSTM layers with 64 hidden units. Following these LSTM layers, a dropout layer with a probability of 0.3 was implemented, succeeded by a dense output layer. A weight decay of 1 × 10⁻⁴ was also applied. The training phase employed the mean squared error as the loss function, with a batch size of 256, and was conducted over ten epochs. 10% of the training data was utilized as validation data.
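The paper does not state which framework was used; the following PyTorch sketch mirrors the stated hyperparameters (two 64-unit LSTM layers, 0.3 dropout after them, a dense output head, weight decay of 1e-4, MSE loss, batch size 256). The learning rate, class name, and the 2-D output (the recentered x/z target from Appendix A) are assumptions.

```python
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    """Two stacked LSTM layers (64 hidden units), dropout 0.3, and a dense output head."""
    def __init__(self, n_features: int, n_outputs: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, num_layers=2, batch_first=True)
        self.dropout = nn.Dropout(0.3)
        self.head = nn.Linear(64, n_outputs)

    def forward(self, x):                              # x: (batch, 75, n_features)
        out, _ = self.lstm(x)
        return self.head(self.dropout(out[:, -1]))     # use the last time step

model = LocomotionLSTM(n_features=13)                  # 13 features for the GaitWay configuration
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # lr is assumed
loss_fn = nn.MSELoss()

# One hypothetical training step with batch size 256.
x = torch.randn(256, 75, 13)
target = torch.randn(256, 2)
loss = loss_fn(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```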


4 RESULTS

4.1 Analysis of Gaze Data According to Visual Distraction Level

The gaze-related metrics, depicted in Fig. 5, were analyzed using a one-way repeated-measures ANOVA (RM ANOVA), except for the Question metric, which violated the normality assumption. Post-hoc tests were then applied using the Bonferroni correction.


Figure 5: Question responses, mean eye-movement speed, and proportion of time spent watching objects in the distractor category, according to distraction level.

Question. Due to the violation of normality assumptions in one of the three datasets, the Friedman test was employed. The results revealed a significant difference, χ2(2) = 8.909, p = .012. Post-hoc Wilcoxon signed-rank tests showed significant differences in all comparisons (all p = .006).

Mean eye movement speed. The results identified a significant difference, with F(2, 20) = 16.345, p < .001. Post-hoc comparisons revealed significant distinctions between low-mid and low-high distraction levels. Notably, eye movement speed in the high distraction condition exceeded that of the mid condition, although this was not statistically significant.

Proportion of watching distractor. The gaze ratio across the three levels revealed a significant variance, F(2, 18) = 19.805, p < .001. Post-hoc testing displayed significant differences between low-mid and low-high levels.

Overall, the results indicate that the three distraction levels of our environments produced measurable differences. This suggests that when a user perceives a large gaze dispersion, actual gaze data such as eye movements are also affected.
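For reference, an analysis of this form could be reproduced with SciPy as sketched below; the participant scores are random placeholders, and the Bonferroni correction is applied by multiplying each post-hoc p-value by the number of comparisons.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical per-participant question scores (0-10) for the three distraction levels.
rng = np.random.default_rng(0)
low = rng.integers(0, 4, size=11)
mid = rng.integers(2, 7, size=11)
high = rng.integers(4, 11, size=11)

chi2, p = friedmanchisquare(low, mid, high)
print(f"Friedman: chi2(2) = {chi2:.3f}, p = {p:.3f}")

# Post-hoc Wilcoxon signed-rank tests with Bonferroni correction (3 comparisons).
pairs = {"low-mid": (low, mid), "low-high": (low, high), "mid-high": (mid, high)}
for name, (a, b) in pairs.items():
    stat, p_raw = wilcoxon(a, b)
    print(name, "corrected p =", min(p_raw * 3, 1.0))
```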

4.2 Comparison of Locomotion Prediction Performance

Table 2 presents the mean distance error (MDE) for predicting positions 2 s in the future. Using the MDE obtained from each trial as a data point, differences were then analyzed. After post-processing, over 90% of the collected data were utilized. All data passed the Shapiro–Wilk normality test. The MDE data, depicted in Fig. 6, were analyzed using a one-way RM ANOVA for data configurations and an ANOVA for distraction levels, followed by post-hoc tests using the Bonferroni correction.

Distraction Level | control  | gaze     | cop      | GaitWay
low               | 117.3 cm | 113.0 cm | 108.7 cm | 105.7 cm
mid               | 120.6 cm | 124.7 cm | 111.9 cm | 109.3 cm
high              | 120.5 cm | 126.5 cm | 112.6 cm | 109.2 cm

Table 2: MDE values obtained for each condition
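The MDE itself is simply the mean Euclidean distance between the predicted and actual 2-D positions over a trial; a minimal sketch, assuming positions in metres and a 2-D (x/z) prediction target, is shown below.

```python
import numpy as np

def mean_distance_error(pred_xz, true_xz):
    """Mean Euclidean distance (cm) between predicted and actual 2-D positions."""
    return np.mean(np.linalg.norm(pred_xz - true_xz, axis=1)) * 100  # m -> cm

# Hypothetical predictions and ground truth for one trial (positions in metres).
pred = np.random.randn(500, 2)
true = pred + np.random.randn(500, 2) * 0.8
print(f"MDE: {mean_distance_error(pred, true):.1f} cm")
```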


Figure 6: MDE results according to distraction levels and data configurations

GaitWay reduces the prediction error the most. In the low distraction environment, a considerable difference was observed, with F(3, 196) = 11.187, p < .001. Post-hoc analyses showed significant disparities between the cop–control, GaitWay–gaze, and GaitWay–control conditions. While the gaze condition did not reach statistical significance compared to control, it exhibited an error reduction of 4.3 cm. In the mid distraction environment, a significant discrepancy was observed, with F(3, 196) = 22.036, p < .001. Post-hoc tests revealed significant differences for all pairs except the gaze–control and GaitWay–cop conditions. The gaze condition, although not statistically significant, displayed an approximate error increase of 4.1 cm compared to the control condition. At the high distraction level, a significant difference was noted, with F(3, 196) = 26.601, p < .001, and post-hoc testing indicated significance for all pairs except the GaitWay–cop condition. A significant increase in error of approximately 6.0 cm was observed in the gaze condition compared with the control condition (p = .034). This implies that our GaitWay configuration consistently reduces prediction error across distraction levels. Consequently, GaitWay provides a robust enhancement for locomotion prediction.

Gaze data increases the prediction error when visually distracted. Significant differences across distraction levels were observed solely in the gaze condition, with F(2, 78) = 34.630, p < .001; all other configurations were nonsignificant (p > .1). Post-hoc test results for the gaze condition indicated significance for the low–mid and low–high pairs (p < .001). Although a difference of 1.8 cm between mid and high was noted, this did not achieve statistical significance (p = .180). These results imply that the gaze condition was significantly affected by visual distractions. Hence, using gaze data may degrade prediction in an actual VR environment, as opposed to a monotonous experimental environment, leading to a smaller usable walking space.


5 DISCUSSION

In this study, we introduced varying degrees of visual distraction in the background of the experimental environment while participants performed identical tasks. Our goal was to present a robust and refined locomotion prediction model using gait data. Regarding locomotion prediction, our GaitWay demonstrated a notable reduction in prediction error, independent of the level of visual distraction. Through this, we demonstrated that gait-related data can substantially aid 2D locomotion prediction. Conversely, gaze data was observed to increase prediction error under distracting conditions, though it reduced error in low distraction scenarios. Thus, GaitWay can improve predictive RDW controllers, widening the walking space in VR environments with task-irrelevant objects, whereas gaze data can be counterproductive for RDW improvement.

Moreover, we verified that incorporating additional data features (from 5 to 13) into the LSTM network still ensures feasible prediction times. The increase in input features did not adversely impact inference time. We observed an inference time of 4–5 ms per input–output pair, consistent across data configurations. This is lower than both the data recording interval of 20 ms and the sampling interval of 40 ms, indicating the viability of prediction without data loss.
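A timing check of this kind could be sketched as follows, reusing the hypothetical LocomotionLSTM class from the Section 3.4 sketch; the loop count and input shape are illustrative.

```python
import time
import torch

# Assumes the LocomotionLSTM class defined in the Section 3.4 sketch.
model = LocomotionLSTM(n_features=13).eval()
x = torch.randn(1, 75, 13)                     # one 3 s GaitWay input window

with torch.no_grad():
    model(x)                                   # warm-up pass
    start = time.perf_counter()
    for _ in range(1000):
        model(x)
elapsed_ms = (time.perf_counter() - start) / 1000 * 1e3
print(f"~{elapsed_ms:.2f} ms per input-output pair (should stay below the 40 ms sampling interval)")
```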


6 CONCLUSION AND FUTURE WORKS

Although gaze data remains a valuable component for locomotion prediction in VR, its effectiveness diminishes in the presence of visual distractions such as those found in VR games. In contrast, our study showed that GaitWay consistently reduces prediction error, irrespective of visual distractions. This implies that, with the GaitWay system, users can walk through broader spaces using RDW in actual VR content than before. However, it remains unclear how much an improvement in locomotion prediction performance translates into RDW enhancement. Furthermore, the integration of gaze data can still reduce prediction error, especially in low-distraction settings; this could be exploited by switching GaitWay’s data configuration depending on the distraction level. Prediction models developed after the LSTM could also be applied to enhance accuracy. We aim to pursue further research on the potential spatial-expansion benefits of applying an enhanced GaitWay system to RDW, and on whether the end-to-end inference time supports real-time use, through user usability evaluations.


ACKNOWLEDGMENTS

This work was supported by the GIST-MIT Research Collaboration grant funded by GIST in 2023 and by the ‘Project for science and technology opens the future of the region’ program through the Innopolis Foundation funded by the Ministry of Science and ICT (Project Number: 2022-DD-UP-0312).


A DATA PROCESSING EXCEPT FOR GAIT DATA

In our predictive model, we focused on forecasting the user’s position two seconds ahead of the moment data is received. Stein et al. [38] selected a prediction time of 2.5 s, whereas Cho et al. [11] opted for a 1 s interval. Although reducing the prediction time might decrease the error, the error is often proportional to the average distance a user traverses during that interval. In our study, the average distance covered by users in 2 s was 157 cm, consistent with the findings in [38]. Therefore, we selected this duration for our experiments. Consequently, the predicted position two seconds later is denoted as \(\left(X_{t+2s}^{H}, Y_{t+2s}^{H}, Z_{t+2s}^{H} \right)\).

To mitigate the influence of environmental factors, we recentered all coordinates around the user’s position, adopting the coordinate system outlined in [6]. This entailed calculating the average head orientation within the input sequence to determine the reference coordinate system \(\left(\overline{\Phi_{t-i}^{H}}, \overline{\Theta_{t-i}^{H}}, \overline{\Psi_{t-i}^{H}}\right)\). Following this, we rotated the future vector \(\overrightarrow{F}\), which indicates the position two seconds ahead, by the reference yaw angle:

(1) \(\overrightarrow{F}_{t} = \left(F_{t}^{X}, F_{t}^{Z}\right) = \left(X_{t+2s}^{H} - X_{t}^{H}, Z_{t+2s}^{H} - Z_{t}^{H}\right)\)

(2) \(f^{x}_{t} = \cos\left(-\overline{\Psi^{H}_{t-i}}\right) F^{X}_{t} - \sin\left(-\overline{\Psi^{H}_{t-i}}\right) F^{Z}_{t}\)

(3) \(f^{z}_{t} = \sin\left(-\overline{\Psi^{H}_{t-i}}\right) F^{X}_{t} + \cos\left(-\overline{\Psi^{H}_{t-i}}\right) F^{Z}_{t}\)

The velocity of the head, two of the seven features, was also rotated with respect to the reference coordinate system; the velocity along the y-axis was not included in this rotation:

(4) \(v^{x}_{t-i} = \cos\left(-\overline{\Psi^{H}_{t-i}}\right) V^{X}_{t-i} - \sin\left(-\overline{\Psi^{H}_{t-i}}\right) V^{Z}_{t-i}\)

(5) \(v^{z}_{t-i} = \sin\left(-\overline{\Psi^{H}_{t-i}}\right) V^{X}_{t-i} + \cos\left(-\overline{\Psi^{H}_{t-i}}\right) V^{Z}_{t-i}\)

The yaw and pitch of the head, the yaw of the body, and the yaw and pitch of the eyes were all expressed relative to the yaw and pitch of the reference coordinate system. As described in Section 3.1, the body yaw was collected using the nondominant-hand controller.

(6) \(\psi^{H}_{t-i} = \Psi^{H}_{t-i} - \overline{\Psi^{H}_{t-i}}\)

(7) \(\theta^{H}_{t-i} = \Theta^{H}_{t-i} - \overline{\Theta^{H}_{t-i}}\)

(8) \(\psi^{B}_{t-i} = \Psi^{B}_{t-i} - \overline{\Psi^{H}_{t-i}}\)

(9) \(\psi^{E}_{t-i} = \Psi^{E}_{t-i} - \overline{\Psi^{H}_{t-i}}\)

(10) \(\theta^{E}_{t-i} = \Theta^{E}_{t-i} - \overline{\Theta^{H}_{t-i}}\)
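A minimal NumPy sketch of this recentering (Eqs. (1)–(3) for the prediction target and Eqs. (6), (8), (9) for yaw-like angles; the velocity rotation of Eqs. (4)–(5) is analogous) is given below. The window length, the angle units (radians), and all placeholder data are assumptions for illustration.

```python
import numpy as np

def recenter(head_xz, head_yaw, future_xz, yaw_like_angles):
    """Rotate the future-position vector and re-reference yaw-like angles to the
    mean head yaw of the input sequence (Eqs. (1)-(3) and (6), (8), (9))."""
    ref_yaw = head_yaw.mean()                        # mean head yaw over the window
    F = future_xz - head_xz[-1]                      # Eq. (1): vector to the position 2 s ahead
    c, s = np.cos(-ref_yaw), np.sin(-ref_yaw)
    f_x = c * F[0] - s * F[1]                        # Eq. (2)
    f_z = s * F[0] + c * F[1]                        # Eq. (3)
    rel_angles = {k: v - ref_yaw for k, v in yaw_like_angles.items()}
    return np.array([f_x, f_z]), rel_angles

# Hypothetical 75-sample window: head X/Z positions, head yaw (rad), and the future position.
head_xz = np.cumsum(np.random.randn(75, 2) * 0.01, axis=0)
head_yaw = np.random.randn(75) * 0.1
future_xz = head_xz[-1] + np.array([0.8, 1.2])       # position 2 s later
target, angles = recenter(head_xz, head_yaw, future_xz,
                          {"head_yaw": head_yaw, "body_yaw": head_yaw + 0.05})
print(target, {k: v.mean() for k, v in angles.items()})
```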

Supplemental Material

Video Preview (mp4, 54.4 MB)
Talk Video: 3613905.3651073-talk-video.mp4 (mp4, 87.3 MB)
Video Figure: 3613905.3651073-video-figure.mp4 (mp4, 251.7 MB)

References

1. Hironori Akiduki, Suetaka Nishiike, Hiroshi Watanabe, Katsunori Matsuoka, Takeshi Kubo, and Noriaki Takeda. 2003. Visual-vestibular conflict induced by virtual reality in humans. Neuroscience Letters 340, 3 (2003), 197–200.
2. Gustavo Arechavaleta, Jean-Paul Laumond, Halim Hicheur, and Alain Berthoz. 2008. An Optimality Principle Governing Human Walking. IEEE Transactions on Robotics 24, 1 (2008), 5–14. https://doi.org/10.1109/TRO.2008.915449
3. Luke Bölling, Niklas Stein, Frank Steinicke, and Markus Lappe. 2019. Shrinking circles: Adaptation to increased curvature gain in redirected walking. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 2032–2039.
4. Benjamin Bolte and Markus Lappe. 2015. Subliminal reorientation and repositioning in immersive virtual environments using saccadic suppression. IEEE Transactions on Visualization and Computer Graphics 21, 4 (2015), 545–552.
5. Evren Bozgeyikli, Andrew Raij, Srinivas Katkoori, and Rajiv Dubey. 2016. Point & teleport locomotion technique for virtual reality. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. 205–216.
6. Gianni Bremer, Niklas Stein, and Markus Lappe. 2021. Predicting Future Position From Natural Walking and Eye Movements with Machine Learning. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). 19–28. https://doi.org/10.1109/AIVR52153.2021.00013
7. Hugo Brument, Iana Podkosova, Hannes Kaufmann, Anne Hélène Olivier, and Ferran Argelaguet. 2019. Virtual vs. Physical Navigation in VR: Study of Gaze and Body Segments Temporal Reorientation Behaviour. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 680–689. https://doi.org/10.1109/VR.2019.8797721
8. Dirk Calow and Markus Lappe. 2008. Efficient encoding of natural optic flow. Network: Computation in Neural Systems 19, 3 (2008), 183–212. https://doi.org/10.1080/09548980802368764 PMID: 18946836.
9. Polona Caserman, Augusto Garcia-Agundez, Alvar Gámez Zerban, and Stefan Göbel. 2021. Cybersickness in current-generation virtual reality head-mounted displays: systematic review and outlook. Virtual Reality 25, 4 (2021), 1153–1170.
10. Baojun Chen, Enhao Zheng, and Qining Wang. 2014. A Locomotion Intent Prediction System Based on Multi-Sensor Fusion. Sensors 14, 7 (July 2014), 12349–12369. https://doi.org/10.3390/s140712349
11. Yong-Hun Cho, Dong-Yong Lee, and In-Kwon Lee. 2018. Path Prediction Using LSTM Network for Redirected Walking. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 527–528. https://doi.org/10.1109/VR.2018.8446442
12. Szonya Durant and Johannes M. Zanker. 2020. The combined effect of eye movements improve head centred local motion information during walking. PLOS ONE 15, 1 (2020), 1–17. https://doi.org/10.1371/journal.pone.0228345
13. Richard C. Fitzpatrick, Daniel L. Wardman, and Janet L. Taylor. 1999. Effects of galvanic vestibular stimulation during human walking. The Journal of Physiology 517, Pt 3 (1999), 931.
14. Jonathan Gandrud and Victoria Interrante. 2016. Predicting Destination Using Head Orientation and Gaze Direction during Locomotion in VR. In Proceedings of the ACM Symposium on Applied Perception (SAP ’16). Association for Computing Machinery, New York, NY, USA, 31–38. https://doi.org/10.1145/2931002.2931010
15. Mary Hayhoe and Dana Ballard. 2005. Eye movements in natural behavior. Trends in Cognitive Sciences 9, 4 (2005), 188–194. https://doi.org/10.1016/j.tics.2005.02.009
16. Mark Hollands and Dilwyn Marple-Horvat. 1996. Visually guided stepping under conditions of step cycle-related denial of visual information. Experimental Brain Research 109 (1996), 343–356. https://doi.org/10.1007/BF00231792
17. M. A. Hollands, Aftab E. Patla, and Joan N. Vickers. 2002. “Look where you’re going!”: gaze behaviour associated with maintaining and changing the direction of locomotion. Experimental Brain Research 143 (2002), 221–230. https://api.semanticscholar.org/CorpusID:29510260
18. Courtney Hutton and Evan Suma. 2016. A realistic walking model for enhancing redirection in virtual reality. In 2016 IEEE Virtual Reality (VR). 183–184. https://doi.org/10.1109/VR.2016.7504714
19. S. Hwang, Y. Kim, Y. Seo, and S. Kim. 2023. Enhancing Seamless Walking in Virtual Reality: Application of Bone-Conduction Vibration in Redirected Walking. In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE Computer Society, Los Alamitos, CA, USA, 1181–1190. https://doi.org/10.1109/ISMAR59233.2023.00135
20. Seokhyun Hwang, Jieun Lee, YoungIn Kim, and SeungJun Kim. 2022. REVES: Redirection Enhancement Using Four-Pole Vestibular Electrode Stimulation. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 267, 7 pages. https://doi.org/10.1145/3491101.3519626
21. Seokhyun Hwang, Jieun Lee, Youngin Kim, Youngseok Seo, and Seungjun Kim. 2023. Electrical, Vibrational, and Cooling Stimuli-Based Redirected Walking: Comparison of Various Vestibular Stimulation-Based Redirected Walking Systems. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 767, 18 pages. https://doi.org/10.1145/3544548.3580862
22. Su-Bin Joo, Seung Eel Oh, Taeyong Sim, Hyunggun Kim, Chang Hyun Choi, Hyeran Koo, and Joung Hwan Mun. 2014. Prediction of gait speed from plantar pressure using artificial neural networks. Expert Systems with Applications 41, 16 (2014), 7398–7405. https://doi.org/10.1016/j.eswa.2014.06.002
23. Anuradha Kar and Peter Corcoran. 2017. A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms. IEEE Access 5 (2017), 16495–16519. https://doi.org/10.1109/ACCESS.2017.2735633
24. Lucie Kruse, Eike Langbehn, and Frank Steinicke. 2018. I can see on my feet while walking: Sensitivity to translation gains with visible feet. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 305–312.
25. Michael F. Land and Mary Hayhoe. 2001. In what ways do eye movements contribute to everyday activities? Vision Research 41, 25 (2001), 3559–3565. https://doi.org/10.1016/S0042-6989(01)00102-X
26. Michael F. Land and Benjamin W. Tatler. 2009. Locomotion on foot. In Looking and Acting: Vision and Eye Movements in Natural Behaviour. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198570943.003.0006
27. Eike Langbehn, Tobias Eichler, Sobin Ghose, Kai von Luck, Gerd Bruder, and Frank Steinicke. 2015. Evaluation of an Omnidirectional Walking-in-Place User Interface with Virtual Locomotion Speed Scaled by Forward Leaning Angle. In Proceedings of the GI Workshop on Virtual and Augmented Reality (GI VR/AR). 149–160. https://basilic.informatik.uni-hamburg.de/Publications/2015/LEGVBS15/
28. Eike Langbehn, Paul Lubos, and Frank Steinicke. 2018. Evaluation of Locomotion Techniques for Room-Scale VR: Joystick, Teleportation, and Redirected Walking. In Proceedings of the Virtual Reality International Conference - Laval Virtual (VRIC ’18). Association for Computing Machinery, New York, NY, USA, Article 4, 9 pages. https://doi.org/10.1145/3234253.3234291
29. Eike Langbehn, Frank Steinicke, Ping Koo-Poeggel, Lisa Marshall, and Gerd Bruder. 2019. Stimulating the brain in VR: Effects of transcranial direct-current stimulation on redirected walking. In ACM Symposium on Applied Perception 2019. 1–9.
30. Eike Langbehn, Frank Steinicke, Markus Lappe, Gregory F. Welch, and Gerd Bruder. 2018. In the blink of an eye: leveraging blink-induced suppression for imperceptible position and orientation redirection in virtual reality. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1–11.
31. Jieun Lee, Seokhyun Hwang, Aya Ataya, and SeungJun Kim. 2024. Effect of optical flow and user VR familiarity on curvature gain thresholds for redirected walking. Virtual Reality 28, 1 (2024), 35. https://doi.org/10.1007/s10055-023-00935-4
32. Jieun Lee, Seokhyun Hwang, Kyunghwan Kim, and SeungJun Kim. 2022. Auditory and Olfactory Stimuli-Based Attractors to Induce Reorientation in Virtual Reality Forward Redirected Walking. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 446, 7 pages. https://doi.org/10.1145/3491101.3519719
33. Jonathan Samir Matthis, Jacob L. Yates, and Mary M. Hayhoe. 2018. Gaze and the Control of Foot Placement When Walking in Natural Terrain. Current Biology 28, 8 (2018), 1224–1233.e5. https://doi.org/10.1016/j.cub.2018.03.008
34. Thomas Nescher, Ying-Yin Huang, and Andreas Kunz. 2014. Planning redirection techniques for optimal free walking experience using model predictive control. In 2014 IEEE Symposium on 3D User Interfaces (3DUI). 111–118. https://doi.org/10.1109/3DUI.2014.6798851
35. Niels Christian Nilsson, Tabitha Peck, Gerd Bruder, Eric Hodgson, Stefania Serafin, Mary Whitton, Frank Steinicke, and Evan Suma Rosenberg. 2018. 15 years of research on redirected walking in immersive virtual environments. IEEE Computer Graphics and Applications 38, 2 (2018), 44–56.
36. Sharif Razzaque, Zachariah Kohn, and Mary C. Whitton. 2001. Redirected Walking. In Eurographics 2001 - Short Presentations. Eurographics Association. https://doi.org/10.2312/egs.20011036
37. Alexandra Sipatchin, Siegfried Wahl, and Katharina Rifai. 2021. Eye-Tracking for Clinical Ophthalmology with Virtual Reality (VR): A Case Study of the HTC Vive Pro Eye’s Usability. Healthcare 9, 2 (2021). https://doi.org/10.3390/healthcare9020180
38. Niklas Stein, Gianni Bremer, and Markus Lappe. 2022. Eye Tracking-based LSTM for Locomotion Prediction in VR. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 493–503. https://doi.org/10.1109/VR51125.2022.00069
39. Niklas Stein, Diederick Niehorster, Tamara Watson, Frank Steinicke, Katharina Rifai, Siegfried Wahl, and Markus Lappe. 2021. A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays. i-Perception 12, 1 (2021). https://doi.org/10.1177/2041669520983338
40. Binbin Su, Yi-Xing Liu, and Elena M. Gutierrez-Farewik. 2021. Locomotion Mode Transition Prediction Based on Gait-Event Identification Using Wearable Sensors and Multilayer Perceptrons. Sensors 21, 22 (Nov. 2021), 7473. https://doi.org/10.3390/s21227473
41. Evan A. Suma, Gerd Bruder, Frank Steinicke, David M. Krum, and Mark Bolas. 2012. A taxonomy for deploying redirection techniques in immersive virtual environments. In 2012 IEEE Virtual Reality Workshops (VRW). 43–46. https://doi.org/10.1109/VR.2012.6180877
42. Evan A. Suma, Seth Clark, David Krum, Samantha Finkelstein, Mark Bolas, and Zachary Warte. 2011. Leveraging change blindness for redirection in virtual environments. In 2011 IEEE Virtual Reality Conference. IEEE, 159–166.
43. Qi Sun, Anjul Patney, Li-Yi Wei, Omer Shapira, Jingwan Lu, Paul Asente, Suwen Zhu, Morgan McGuire, David Luebke, and Arie Kaufman. 2018. Towards virtual reality infinite walking: dynamic saccadic redirection. ACM Transactions on Graphics (TOG) 37, 4 (2018), 1–13.
44. Bernard ’t Hart and Wolfgang Einhäuser. 2012. Mind the step: Complementary effects of an implicit task on eye and head movements in real-life gaze allocation. Experimental Brain Research 223 (2012). https://doi.org/10.1007/s00221-012-3254-x
45. Benjamin W. Tatler, Mary M. Hayhoe, Michael F. Land, and Dana H. Ballard. 2011. Eye guidance in natural vision: Reinterpreting salience. Journal of Vision 11, 5 (2011), 5. https://doi.org/10.1167/11.5.5
46. Benjamin W. Tatler and Sarah L. Tatler. 2013. The influence of instructions on object memory in a real-world setting. Journal of Vision 13, 2 (2013), 5. https://api.semanticscholar.org/CorpusID:20754143
47. Phuc Truong, Jinwook Lee, Ae-Ran Kwon, and Gu-Min Jeong. 2016. Stride Counting in Human Walking and Walking Distance Estimation Using Insole Sensors. Sensors 16, 6 (June 2016), 823. https://doi.org/10.3390/s16060823
48. Yolanda Vazquez-Alvarez and Stephen A. Brewster. 2011. Eyes-Free Multitasking: The Effect of Cognitive Load on Mobile Spatial Audio Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). Association for Computing Machinery, New York, NY, USA, 2173–2176. https://doi.org/10.1145/1978942.1979258
49. Shu Wei, Desmond Bloemers, and Aitor Rovira. 2023. A Preliminary Study of the Eye Tracker in the Meta Quest Pro. In Proceedings of the 2023 ACM International Conference on Interactive Media Experiences (IMX ’23). Association for Computing Machinery, New York, NY, USA, 216–221. https://doi.org/10.1145/3573381.3596467
50. Jan Malte Wiener, Olivier De Condappa, and Christoph Hölscher. 2011. Do you have to look where you go? Gaze behaviour during spatial decision making. Cognitive Science 33 (2011). https://api.semanticscholar.org/CorpusID:14148078
51. Huacai Xian, Lisheng Jin, Haijing Hou, Qingning Niu, and Huanhuan Lv. 2014. Analyzing Effects of Pressing Radio Button on Driver’s Visual Cognition. Vol. 215. 69–78. https://doi.org/10.1007/978-3-642-37835-5_7
52. Markus Zank and Andreas Kunz. 2016. Eye tracking for locomotion prediction in redirected walking. In 2016 IEEE Symposium on 3D User Interfaces (3DUI). 49–58. https://doi.org/10.1109/3DUI.2016.7460030
53. Markus Zank and Andreas Kunz. 2016. Where Are You Going? Using Human Locomotion Models for Target Estimation. The Visual Computer 32, 10 (2016), 1323–1335. https://doi.org/10.1007/s00371-016-1229-9
54. Markus Zank and Andreas Kunz. 2017. Optimized graph extraction and locomotion prediction for redirected walking. In 2017 IEEE Symposium on 3D User Interfaces (3DUI). 120–129. https://doi.org/10.1109/3DUI.2017.7893328
55. Michael A. Zmuda, Joshua L. Wonser, Eric R. Bachmann, and Eric Hodgson. 2013. Optimizing Constrained-Environment Redirected Walking Instructions Using Search Techniques. IEEE Transactions on Visualization and Computer Graphics 19, 11 (2013), 1872–1884. https://doi.org/10.1109/TVCG.2013.88
