Influence of sensory modality and control dynamics on human path integration

Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.


Imagine driving a car on a muddy or icy road, where steering dynamics can change rapidly. To avoid crashing, one must rapidly infer the new dynamics and respond appropriately to keep the car on the desired path. Conversely, when you drive out of an ice patch, control dynamics change again, compelling you to re-adjust your steering. The quality of sensory cues may also vary depending on environmental factors (e.g., reduced visibility in fog or twilight, sub-threshold vestibular stimulation under near-constant travel velocity). Humans are adept at using time-varying sensory cues to adapt quickly to a wide range of latent control dynamics in volatile environments. However, the relative contributions of different sensory modalities and the precise impact of latent control dynamics on goal-directed navigation remain poorly understood. Here we study this using path integration.

Path integration, a natural computation in which the brain uses dynamic sensory cues to infer the evolution of latent world states and continuously maintain a self-position estimate, has been studied in humans, but past experimental paradigms imposed several constraints. First, in many tasks, the motion was passive and/or restricted along predetermined, often one-dimensional, trajectories (Klatzky et al., 1998). There is a tight link between path integration and spatial navigation on the one hand, and internal models and control on the other.

To explore how temporal dynamics influence navigation across sensory modalities (visual, vestibular, or both), we built upon a naturalistic paradigm of path integration in which participants navigate to a briefly-cued target location. We find that vestibular signals on their own fail to support accurate path integration in the absence of sustained acceleration, and that an accurate internal model of control dynamics is needed to make use of sensory observations when navigating in volatile environments.

Task structure

Human participants steered towards a briefly-cued target location on a virtual ground plane, with varying sensory conditions and control dynamics interleaved across trials. Participants sat on a motion platform in front of a screen displaying a virtual environment (Fig. 1A). Stereoscopic depth cues were provided using polarizing goggles. On each trial, a circular target appeared briefly at a random location (drawn from a uniform distribution within the field of view; Fig. 1B,C), and participants had to navigate to the remembered target location in the virtual world using a joystick to control linear and angular self-motion. The virtual ground plane was defined visually by a texture of many small triangles, each of which appeared only transiently; they could therefore only provide optic-flow information and could not be used as landmarks. The self-motion process evolved according to Markov dynamics, such that the movement velocity at the next time step depended only on the current joystick input and the current velocity (Methods - Equation 1).

Figure 1, panels D and E (caption fragment): (D) The joystick input is lowpass filtered to mimic the existence of inertia. The time constant τ of the filter varies across trials. In our framework, the maximum velocity also varies according to the time constant τ of each trial to ensure comparable travel times across trials (see Methods - Control Dynamics). Right: the same joystick input (scaled by the corresponding maximum velocity for each τ) produces different velocity profiles for different time constants (τ = 0.6 corresponds to velocity control; τ = 3 corresponds to acceleration control; τ values varied randomly along a continuum across trials, see Methods). Also depicted is the brief cueing period of the target at the beginning of the trial (gray zone, 1 second long). (E) Markov decision process governing self-motion sensation (Methods - Equation 1). The three variables shown denote joystick input, movement velocity, and sensory observations, with subscripts denoting time indices. Note that, due to the 2-D nature of the task, these variables are all vector-valued, but we depict them as scalars for the purpose of illustration. By varying the time constant, we manipulated the control dynamics (i.e., the degree to which the current velocity carried over to the future, indicated by the thickness of the horizontal lines) along a continuum, such that the joystick position primarily determined either the participant's velocity (top; thin lines) or acceleration (bottom; thick lines) (compare with (D), top and bottom, respectively). Sensory observations were available in the form of vestibular (left), optic flow (middle), or both (right).

A time constant for the control filter (control timescale) governed the control dynamics: in trials with a small time constant and a fast filter, joystick position essentially controlled velocity, providing participants with responsive control over their self-motion, resembling regular road-driving dynamics. However, when the time constant was large and the control filter was slow, joystick position mainly controlled acceleration, mimicking high inertia under viscous damping, as one would experience on an icy road where steering is sluggish (Fig. 1D right and 1E, top vs bottom). For these experiments, as the control timescale changed, the maximum velocity was adjusted so that the participant could reach the typical target in about the same amount of time on average. This design ensured that the effect of changing control dynamics would not be confused with the effect of integrating sensory signals over a longer or shorter time.
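To make this distinction concrete, the following is a minimal simulation sketch (not the authors' code; the first-order lowpass form and the parameter values are assumptions based on the description above), showing how the same joystick input yields velocity-like control for a small time constant and acceleration-like control for a large one.

```python
# Minimal sketch: velocity-like vs. acceleration-like control under an assumed
# first-order lowpass ("leaky integrator") filter of the joystick input.
import numpy as np

def simulate_velocity(u, tau, v_max, dt=1/60.0):
    """Lowpass-filter joystick input u (range [-1, 1]) with time constant tau."""
    alpha = np.exp(-dt / tau)          # assumed discrete-time filter coefficient
    v = np.zeros(len(u))
    for t in range(1, len(u)):
        v[t] = alpha * v[t - 1] + (1 - alpha) * v_max * u[t]
    return v

dt = 1/60.0
t = np.arange(0, 8, dt)
u = (t < 4).astype(float)              # push the stick forward for 4 s, then release

v_fast = simulate_velocity(u, tau=0.6, v_max=2.0, dt=dt)   # ~velocity control
v_slow = simulate_velocity(u, tau=3.0, v_max=2.0, dt=dt)   # ~acceleration control
# v_fast tracks the joystick almost immediately; v_slow ramps up and decays
# slowly, so the stick effectively commands acceleration rather than velocity.
```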

Concurrently, we manipulated the modality of sensory observations to generate three conditions: 1) a vestibular condition in which participants navigated in darkness and sensed only the platform's motion (note that this condition also engages somatosensory cues, see Methods), 2) a visual condition in which the motion platform was stationary and velocity was signaled by optic flow, and 3) a combined condition in which both cues were available (Fig. 1E, left to right). Across trials, sensory conditions were randomly interleaved, while manipulation of the time constant followed a bounded random walk (Methods - Equation 2). Participants did not receive any performance-related feedback.

Effect of sensory modality

We first compared the participants' stopping locations on each trial to the corresponding target locations, separately for each sensory condition. We calculated the radial distance r̃ and angular eccentricity θ̃ of the participants' final position relative to the initial position (Fig. 2A), and compared them to the initial target distance r and angle θ, as shown for all trials (all time constants together) of a typical participant in Fig. 2B.

Figure caption fragment: Solid diagonal line has unit slope. Across participants, radial correlations, which were larger for the vestibular condition, were greater than angular correlations (see also Table 2). (D) Linear regression coefficients for the prediction of participants' response location (final position: r̃, θ̃; left and right, respectively) from the initial target location (r, θ) and the interaction between initial target location and the time constant (rτ, θτ) (all variables were standardized before regressing, see Methods). Asterisks denote the statistical significance of the difference in coefficient values of the interaction terms across sensory conditions (paired t-test; *: p<0.05, **: p<0.01, ***: p<0.001; see main text). Error bars denote ±1 SEM. Note a qualitative agreement between the terms that included target location only and the gains calculated with the simple linear regression model (Fig. 2B).

Three independent analyses revealed a substantial influence of control dynamics on steering response gain, which was exaggerated for the vestibular, as compared to the visual and combined, conditions.

We have shown previously that accumulating sensory noise over an extended time (~10 s) would lead to a large in vestibular signals. On the other hand, we observed performance differences across control dynamics within each sensory modality, so those differences cannot be attributed to differences in the reliability of self-motion cues (instantaneous uncertainty). However, it might seem that this effect of control dynamics must be due to either
unlikely explanation, since the expected effect of the different velocity profiles on the participants' responses is the opposite of the observed effect of the control dynamics (Suppl. Fig. S4B). Consequently, unlike the effect of

From a normative standpoint, to optimally infer movement velocity, one must combine sensory observations with knowledge of the time constant. Misestimating the time constant would produce errors in velocity estimates, which would then propagate to position estimates, leading control dynamics to influence response gain (Fig. 4A, middle-right). This is akin to misestimating the slipperiness of an ice patch on the road, causing an inappropriate steering response that culminates in a displacement that differs from the intended one (Suppl. Fig. S5). However, in the absence of performance-related feedback, participants would be unaware of this discrepancy, wrongly believing that the actual trajectory was indeed the intended one. We tested the hypothesis that participants misestimated the time constant using a two-step model that reconstructs the participants' believed trajectories. We modeled the prior and the likelihood over the variable z = log τ as Gaussians in log-space. We parameterized both the prior and the likelihood with a mean (μ) and standard deviation (σ). The parameters of the prior (μ, σ) were allowed to vary freely across sensory conditions, but were assumed to remain fixed across trials. On each trial, the likelihood was assumed to be centered on the actual value of the log time-constant τ* on that trial, according to μ = z* = log τ*, and its mean was therefore not a free parameter, but its width was free to vary across sensory conditions. Thus, for each sensory condition, we fit three parameters: the σ of the likelihood, as well as the μ and σ of the prior. As mentioned above, we fit the model to minimize the difference between the believed stopping locations and the experimentally-measured mean stopping locations (subjective residual errors), using a least-squares approach (Methods), and obtained one set of parameters for each condition. Finally, the participant's estimated time-constant τ̂ on each trial was taken to be the median of the posterior implied by the best-fit model, which equals the median of the distribution over z (Fig. 4A, left). By integrating the participant's joystick inputs on each trial using τ̂ rather than the actual time-constant τ, we computed the believed stopping location and the subjective residual errors implied by the best-fit model.

We then compared the correlations between the time constant and the residual errors computed from the real (actual) versus subjective (believed) stopping locations. The correlation was indeed significantly smaller when these errors were computed using the subjective (believed) rather than the real stopping location (Fig. 4B). In fact, subjective residual errors were completely uncorrelated with the time constant (see Methods), suggesting that the apparent influence of control dynamics on behavioral performance arose entirely because participants misestimated the time constant of the underlying dynamics.
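A minimal sketch of this two-step logic is shown below; the parameter names (mu_prior, sigma_prior, sigma_measure) and the first-order filter used for the trajectory integration are assumptions based on the description above, so this is an illustration of the model structure rather than the authors' fitting code.

```python
# Minimal sketch: Bayesian estimate of the time constant in log-space, then
# re-integration of the recorded joystick input with that estimate to obtain
# the believed stopping location.
import numpy as np

def estimate_tau(tau_true, mu_prior, sigma_prior, sigma_measure):
    """Posterior over z = log(tau); its median maps back to the estimated tau."""
    z_star = np.log(tau_true)                      # likelihood centred on the true log-tau
    w = sigma_measure**2 / (sigma_prior**2 + sigma_measure**2)
    mu_post = w * mu_prior + (1 - w) * z_star      # precision-weighted posterior mean
    return np.exp(mu_post)                         # median of the log-normal posterior

def believed_trajectory(u, tau_hat, v_max, dt=1/60.0):
    """Integrate the joystick input with the *estimated* time constant."""
    alpha = np.exp(-dt / tau_hat)                  # assumed first-order filter
    v, x = 0.0, 0.0
    for u_t in u:
        v = alpha * v + (1 - alpha) * v_max * u_t
        x += v * dt
    return x                                       # believed stopping distance

tau_hat = estimate_tau(tau_true=3.0, mu_prior=np.log(1.0),
                       sigma_prior=0.5, sigma_measure=0.8)
# A prior centred below the true tau pulls tau_hat toward the prior mean
# ("regression to the mean"), which in turn produces overshooting for large tau.
```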

The relationship between real and model-estimated time constants for all participants can be seen in Fig. 5A. In the vestibular condition, all participants consistently misestimated τ, exhibiting a substantial regression towards the mean (Fig. 5A, green). This effect was much weaker in the visual condition. Only a few participants showed

Alternative models

To test whether our assumption of a static prior distribution over time constants was reasonable, we fit an alternative Bayesian model in which the prior distribution was updated iteratively on every trial, as a weighted average of the prior on the previous trial and the current likelihood over z (dynamic prior model; see Methods). For this version, we once again modeled the likelihood and prior as normal distributions over the log-transformed variable z, where the likelihood was centered on the actual log τ and was therefore not a free parameter. Thus, we fit three parameters: the σ of the likelihood, and the μ and σ of the initial prior. On each trial, the relative weighting of prior and likelihood responsible for the update of the prior depended solely on the relationship between their corresponding σ (i.e., their relative widths). The performance of the static and dynamic prior models was comparable in all conditions, for both distance and angle, suggesting that a static prior is adequate to explain the participants' behavior on this task (Fig. 6).

This latter model performed comparably in the vestibular condition, but substantially worse in the visual and combined conditions (Fig. 6). This suggests that optic flow, but not vestibular signals, primarily contributes to inferring the latent velocity dynamics.

Although the time constants were correlated across trials, our models showed that participants did not take advantage of those correlations to improve their estimates.

(Körding et al., 2004). Thus, participants try to exploit the additional information that the dynamics contain about their self-motion in order to achieve the desired displacement.
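A sketch of the trial-by-trial prior update in the dynamic prior model described above might look like the following; the parameter names, the conjugate-Gaussian variance update, and the example values are assumptions, since the text specifies only that the weighting depends on the relative widths.

```python
# Minimal sketch: iterative update of the prior over z = log(tau) as a
# precision-weighted average of the previous prior and the current likelihood.
import numpy as np

def update_prior(mu_prior, sigma_prior, tau_true, sigma_measure):
    z_star = np.log(tau_true)
    k = sigma_prior**2 / (sigma_prior**2 + sigma_measure**2)   # weight on the likelihood
    mu_new = (1 - k) * mu_prior + k * z_star
    sigma_new = np.sqrt((1 - k) * sigma_prior**2)              # one possible (assumed) width update
    return mu_new, sigma_new

mu, sigma = np.log(1.0), 1.0               # assumed initial prior
for tau_trial in [0.8, 1.5, 2.7, 3.2]:
    mu, sigma = update_prior(mu, sigma, tau_trial, sigma_measure=0.8)
# With a narrow likelihood the prior tracks recent time constants; with a wide
# likelihood it stays close to its initial value, mimicking the static prior.
```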

Task performance was substantially worse in the vestibular condition, in a manner suggesting that vestibular inputs lack the reliability to precisely estimate control dynamics on individual trials. Nevertheless, the vestibular system could still facilitate inference by integrating trial history to build expectations about the statistics of those dynamics.

Consistent with this, the mean of the prior distribution over the dynamics fit to the data was very close to the mean of the experimental distribution of time constants.

Visual stimulus

Visual stimuli were generated and rendered using C++ and the Open Graphics Library (OpenGL), by continuously repositioning the camera based on joystick inputs to update the visual scene at 60 Hz. The camera was positioned at a height of 70 cm above the ground plane, whose textural elements had limited lifetimes (~250 ms) to avoid serving as landmarks. The ground plane was circular with a radius of 37.5 m (near and far clipping planes at 5 cm and 3750 cm, respectively), with the participant positioned at its center at the beginning of each trial. Each texture element was an isosceles triangle (base × height: 5.95 × 12.95 cm) that was randomly repositioned and reoriented at the end of its lifetime. The floor density was held constant across trials at 2.5 elements/m².

The target, a circle of radius 25 cm whose luminance was matched to the texture elements, flickered at 5 Hz and appeared at a random location between ±38° of visual angle, at a distance of 2.5-5.5 m (average distance of 4 m) relative to where the participant was stationed at the beginning of the trial. The stereoscopic visual stimulus was rendered in an alternate frame sequencing format, and participants wore active-shutter 3D goggles to view the stimulus.
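For concreteness, drawing a target location within the stated ranges could be sketched as follows; the uniform distributions and variable names are assumptions for illustration rather than the published sampling scheme.

```python
# Minimal sketch: sampling a target position within +/-38 deg and 2.5-5.5 m.
import numpy as np

rng = np.random.default_rng()

def sample_target():
    theta = rng.uniform(-38.0, 38.0)        # visual angle from straight ahead (deg)
    r = rng.uniform(2.5, 5.5)               # radial distance from the participant (m)
    x = r * np.sin(np.radians(theta))       # lateral position (m)
    y = r * np.cos(np.radians(theta))       # forward position (m)
    return r, theta, x, y
```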

Behavioral task - Visual, inertial and multisensory motion cues

Participants were asked to navigate to a remembered target ('firefly') location on a horizontal virtual plane using a joystick, rendered in 3D from a forward-facing vantage point above the plane. Participants pressed a button on the joystick to initiate each trial and were tasked with steering to a randomly placed target that was cued briefly at the beginning of the trial. A short tone at every button push indicated the beginning of the trial and the appearance of the target. After one second, the target disappeared, which was the cue for the participant to start steering. During steering, visual and/or vestibular/somatosensory feedback was provided (see below).

Participants were instructed to stop at the remembered target location, and then push the button to register their final position and start the next trial. Participants did not receive any feedback about their performance. Prior to the first session, all participants performed about ten practice trials to familiarize themselves with the joystick movements and the task structure.

The three sensory conditions (visual, vestibular, combined) were randomly interleaved. In the visual condition, participants had to navigate towards the remembered target position given only visual information (optic flow).

Visual feedback was stereoscopic, composed of flashing triangles that provided self-motion information but no landmarks. In the vestibular condition, after the target disappeared, the entire visual stimulus was also shut off, leaving the participants to navigate in complete darkness using only vestibular/somatosensory cues generated by the motion platform. In the combined condition, participants were provided with both visual and vestibular information during their movement.

Independently of the manipulation of the sensory information, the properties of the motion controller also varied from trial to trial. Participants experienced different time constants in each trial, which affected the type and amount of control required to complete the task. In trials with short time constants, joystick position primarily controlled velocity, whereas in trials with long time constants it primarily controlled acceleration (see Results). The joystick controlled both the visual and vestibular stimuli through an algorithm that involved two processes.

The first process varied the control dynamics (CD), producing velocities given by a lowpass filtering of the joystick input (Equations 1.1-1.3). The time constant τ of the lowpass filter determined the filter coefficient (Fig. S1A). Sustained maximal controller inputs (linear or angular input equal to 1) produce velocities that saturate at the maximum linear and angular velocities, v_max and w_max. We wanted to set v_max and w_max in such a way as to ensure that a target at an average linear or angular displacement is reachable in an average time T, regardless of τ; this constrains the corresponding input gains. We derived these desired gains based on a 1D bang-bang control model (i.e., purely forward movement, or pure turning), which assumes maximal positive control until a switch time t_s, followed by maximal negative control until the end of the trial at time T (Fig. S1A). Although we implemented the leaky integration in discrete time with a frame rate of 60 Hz, we derived the input gains in continuous time and translated them to discrete time.
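The filter equations themselves (Equations 1.1-1.3) are not reproduced above; a consistent reconstruction, under the assumption of a standard discrete-time leaky integrator whose output saturates at the maximum velocity for a sustained unit joystick input, would be:

$$v_{t+\delta t} = \alpha\, v_t + (1-\alpha)\, v_{\max}\, u_{x,t}, \qquad
w_{t+\delta t} = \alpha\, w_t + (1-\alpha)\, w_{\max}\, u_{\omega,t}, \qquad
\alpha = e^{-\delta t/\tau}.$$

Here $u_x$ and $u_\omega$ denote the linear and angular joystick inputs in $[-1, 1]$, $\delta t$ is the frame interval (1/60 s), and small $\tau$ ($\alpha \to 0$) yields velocity-like control while large $\tau$ ($\alpha \to 1$) yields acceleration-like control.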

The velocity at any time 0 ≤ t ≤ T during the control is given by Equation 1.4, where v_s is the velocity at the switching time t_s, when control switched from positive to negative. By substituting v_s into Equation 1.4 and using the fact that at time T the controlled velocity should return to 0, we obtain an expression that we can use to solve for t_s. Observe that v_max cancels in this equation, so the switching time is independent of v_max and therefore also independent of the displacement (see also Fig. S1A). Integrating the velocity profile of Equation 1.4 to obtain the distance travelled by time T, substituting the switch time t_s (Fig. S1A), and simplifying, we obtain the required maximum linear velocity (Equation 1.9) and, analogously, the required maximum angular velocity (Equation 1.10), where θ̄ is the average angle we want the participants to be able to steer within time T.
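Because the intermediate equations did not survive, the following is a sketch of how this derivation proceeds under the stated assumptions (first-order filter with time constant τ, maximal positive control until the switch time t_s, maximal negative control until T); the symbols are ours, and the published forms of Equations 1.4-1.10 may differ in notation.

$$v(t) = v_{\max}\bigl(1 - e^{-t/\tau}\bigr), \quad 0 \le t \le t_s; \qquad
v(t) = -v_{\max} + (v_s + v_{\max})\, e^{-(t - t_s)/\tau}, \quad t_s \le t \le T,$$

with $v_s = v_{\max}\bigl(1 - e^{-t_s/\tau}\bigr)$. Requiring $v(T) = 0$ gives $\bigl(2 - e^{-t_s/\tau}\bigr)\, e^{-(T - t_s)/\tau} = 1$, in which $v_{\max}$ cancels, so

$$t_s = \tau \ln\!\frac{1 + e^{T/\tau}}{2}.$$

Integrating $v(t)$ over $[0, T]$ gives a travelled distance of $v_{\max}(2 t_s - T)$, hence

$$v_{\max} = \frac{\bar{x}}{2\tau \ln\!\frac{1 + e^{T/\tau}}{2} - T}, \qquad
w_{\max} = \frac{\bar{\theta}}{2\tau \ln\!\frac{1 + e^{T/\tau}}{2} - T},$$

where $\bar{x}$ and $\bar{\theta}$ are the average linear and angular displacements to be covered in time $T$.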

Setting the control gains according to Equation 1.9 allows us to manipulate the control timescale τ while approximately maintaining the average trial duration for each target location (Fig. S1B). Converting these maximal velocities into discrete-time control gains using Equations 1.1-1.3 gives us the desired inertial control dynamics.

The time constant τ was sampled according to a temporally correlated log-normal distribution. The log of the time constant, φ = log τ, followed a bounded random walk across trials (Equation 2; Fig. S1C).

The velocity timescales changed across trials with their own timescale τ_φ, which determined the update coefficient λ of the random walk; we set the update interval to be one trial and τ_φ to be two trials. To produce the desired equilibrium distribution of φ, we set the scale of the random walk's Gaussian noise ε ~ N(μ_ε, σ_ε²) accordingly, with μ_ε proportional to (1 − λ).

The high-pass component of the desired acceleration (Fig. S2, 'Desired Platform Linear Acceleration') was produced by translating the platform, whereas the low-pass component was produced by tilting the platform (Fig. S2, 'Desired Platform Tilt').
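A minimal simulation sketch of this trial-by-trial process is given below; the exponential relation between λ and τ_φ, the bounds, and the noise scale are assumptions chosen to illustrate the description above, not the published values.

```python
# Minimal sketch: temporally correlated, bounded random walk on phi = log(tau).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500
lam = np.exp(-1 / 2)                       # assumed update coefficient for tau_phi = 2 trials
phi_lo, phi_hi = np.log(0.6), np.log(3.0)  # assumed bounds on tau (log-space)
phi_mean = 0.5 * (phi_lo + phi_hi)         # assumed equilibrium mean of the walk
sigma_eps = 0.2                            # assumed noise scale

phi = np.empty(n_trials)
phi[0] = phi_mean
for n in range(1, n_trials):
    step = lam * phi[n - 1] + (1 - lam) * phi_mean + rng.normal(0.0, sigma_eps)
    if step > phi_hi:                      # reflect at the bounds to stay in range
        step = 2 * phi_hi - step
    elif step < phi_lo:
        step = 2 * phi_lo - step
    phi[n] = step

tau = np.exp(phi)                          # trial-by-trial time constants
```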

Even though this method is generally sufficient to ensure that platform motion remains within its envelope, it does not guarantee it. Thus, the platform's position, velocity and acceleration commands were fed through a sigmoid function (Fig. S2, 'Platform Limits'). This function was equal to the identity function (f(x) = x) as long as motion commands were within 75% of the platform's limits, so such motion commands were unaffected. When motion commands exceeded this range, the function bent smoothly to saturate at a value set slightly below the platform's limits.

Response gains were calculated as the slope of the respective regressions (Fig. 2A). In addition, we followed the same process to calculate gain terms within three τ groups of equal size (Fig. 3A).
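A minimal sketch of this gain computation (assumed variable names; not the authors' analysis code) is shown below: the stopping position is first converted to a radial distance and angular eccentricity, and the gain is then the slope of a no-intercept regression of response on target.

```python
# Minimal sketch: polar response variables and response gain per condition.
import numpy as np

def polar_response(x_stop, y_stop):
    """Convert stopping position (lateral x, forward y, per trial) to polar response."""
    r_resp = np.hypot(x_stop, y_stop)                    # radial distance travelled
    theta_resp = np.degrees(np.arctan2(x_stop, y_stop))  # angular eccentricity (deg)
    return r_resp, theta_resp

def response_gain(response, target):
    """Slope of a regression through the origin of response on target."""
    return np.sum(response * target) / np.sum(target ** 2)

# Example with made-up numbers: a gain below 1 indicates undershooting.
r_target = np.array([3.0, 4.0, 5.0, 5.5])
r_resp = np.array([2.7, 3.5, 4.2, 4.8])
print(response_gain(r_resp, r_target))
```

Gains within the three τ groups would be obtained by applying the same regression to the corresponding subsets of trials.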

Correlation between residual error and time constant τ
To evaluate the influence of the time constant on the steering responses, we computed the correlation coefficient between the time constants and the residual errors from the mean response (estimated using the response gain), separately for distance and angle. Under each sensory condition, the radial residual error ε_r for each trial was given by ε_r = r̃ − g·r, where r̃ is the radial response and the mean radial response g·r is obtained by multiplying the target distance r by the radial gain g. Similarly, the angular residual error ε_θ was calculated as ε_θ = θ̃ − g_θ·θ, where θ̃ is the angular response, θ the target angle, and g_θ the angular gain.

Regression model containing τ

To assess the manner in which the time constant affected the steering responses, we augmented the simple linear regression models for response gain estimation mentioned above with τ-dependent terms (Suppl. Fig. S3; τ and τ·r for the radial response r̃, and τ and τ·θ for the angular response θ̃). Subsequently, we calculated the Pearson linear partial correlations between the response positions and each of the three predictors.

Rationale behind modeling approach

We tested the hypothesis that the τ-dependent errors in steering responses arise from participants misestimating the control dynamics. The prior distribution over τ was assumed to be normal in log-space with mean μ_prior and standard deviation σ_prior. The measurement distribution p(·|τ) was also assumed to be normal in log-space, with its mean determined by the actual value of τ and standard deviation σ_measure.
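For reference, the residual-error and τ-regression analyses described above could be sketched as follows (assumed variable names; standardization and the τ-interaction terms follow the description in the text, but this is an illustration rather than the authors' code).

```python
# Minimal sketch: tau-residual correlations and the augmented regression
# with standardized predictors and a target-by-tau interaction term.
import numpy as np

def residual_errors(r_resp, r_target, gain_r, theta_resp, theta_target, gain_th):
    """Residual errors relative to the mean (gain-scaled) response."""
    eps_r = r_resp - gain_r * r_target
    eps_th = theta_resp - gain_th * theta_target
    return eps_r, eps_th

def tau_regression(resp, target, tau):
    """Regress the standardized response on target, tau, and their interaction."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(target), z(tau), z(target * tau)])
    beta, *_ = np.linalg.lstsq(X, z(resp), rcond=None)
    return beta            # coefficients for [target, tau, target*tau]

# The tau-residual correlation is then, e.g., np.corrcoef(eps_r, tau)[0, 1].
```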

Note that whereas the prior p(τ) remains fixed across trials of a particular sensory modality, the mean of the measurement distribution is governed by τ and thus varies across trials. For each sensory modality, we fit three parameters, Θ = {μ_prior, σ_prior, σ_measure}.
As this update indicates, the relative weighting between prior and measurement on each trial depends solely on their relative widths σ. This model assumes that, across trials, σ_prior remains fixed within each sensory condition.

Suppl. Figure S2: (A) Flow diagram of the Motion Cueing (MC) algorithm. The participant pilots themself in a simulated environment using a joystick. The motion cueing algorithm aims at controlling a platform such that the sum of inertial and gravitational acceleration experienced when sitting on the platform (desired platform GIA, blue; the curve illustrates an example profile consisting of a single rectangular waveform) matches the linear acceleration experienced in the simulated virtual environment. "Desired" refers to the fact that the motion platform may not be able to match this acceleration exactly. The desired GIA is fed through a step impulse function to compute the desired linear acceleration of the platform. The difference between the desired linear acceleration and the GIA is used to compute the desired platform tilt. The desired platform motion (linear and tilt motion) is passed through a controller that restricts its motion to the actuators' limits (in terms of linear and angular acceleration, velocity, and position). The two actuator output commands are sent to the platform and are also used to compute the actuator GIA which is actually rendered by the platform. To ensure that the inertial motion produced by the platform matches the motion in the simulated environment, the actuator GIA is compared to the desired linear acceleration to compute an actuator GIA error feedback signal, which updates the simulated motion. (B) Acceleration profile of an actual trial. The first panel shows the desired GIA of the participant for that trial. The second and third panels show the desired linear acceleration (red) and desired tilt acceleration (green), respectively. The fourth panel shows the final GIA achieved (blue) and the GIA error (magenta). (C) Correspondence between visual acceleration and platform GIA (blue), measured independently from the motion cueing algorithm using an inertial measurement unit mounted next to the participant's head. There is an almost perfect match between the two. The gray histogram indicates the range of accelerations experienced by the participant.

Suppl. Figure S3: (A) Partial correlation coefficients for the prediction of stopping distance r̃ (relative to the starting position) from initial target distance r, τ, and the interaction of the two (rτ), for all participants across sensory conditions. Values at each bar group represent the average coefficient value across participants ±1 standard deviation. The contribution of the τ-only term was considered insignificant across all conditions. The simplified version of this model would therefore be r̃ = r·(a + b·τ), which implies that the radial gain is τ-dependent. (B) Partial correlation coefficients for the prediction of stopping angle θ̃ (relative to the starting position) from initial target angle θ, τ, and the interaction of the two (θτ), for all participants across sensory conditions. Values at each bar group represent the average coefficient value across participants ±1 standard deviation. In agreement with the findings for the response distance, the contribution of the τ-only term was considered insignificant across all conditions. The simplified version of this model would therefore be θ̃ = θ·(a + b·τ), which implies that the angular gain is also τ-dependent.
Suppl. Figure S4: (A) Open and filled circles denote statistical significance according to the legend. No consistent effect of the time constant on either travel duration or average velocity across participants was observed. Specifically, correlations are distributed around zero across all sensory conditions for both travel duration and average velocity, with only a subset of participants exhibiting statistical significance (filled circles). We designed the control dynamics based on a bang-bang controller to ensure that travel duration, and consequently average velocity, is comparable across time constants (Suppl. Fig. S1A, see Methods). Bang-bang control implies maximal positive control (forward and angular motion), followed by maximal negative control (braking) until stopping at a desired location. The type of control that participants actually implement would vary from this ideal control model, allowing for the range of correlation coefficients between the time constant and trial duration that we observe here.
(B) Left: Uncertainty (variance) of instantaneous self-motion velocity estimation. Illustration of a linear (blue) and a quadratic (orange) model of velocity estimation uncertainty as a function of the instantaneous velocity magnitude. The uncertainty of velocity estimates is accumulated over time to produce uncertainty in position. According to Lakshminarasimhan et al. (2018), higher uncertainty in position leads to more undershooting of the target locations. In this experiment, performance tended towards overshooting for higher time constants. Since different time constants produce different velocity profiles, we wanted to test whether the effect of the time constant on performance could be attributed to differences in the accumulated uncertainty of the different velocity profiles. Right: Correlation between the time constant and accumulated uncertainty for the linear and quadratic models. Error bars denote ±1 SEM. We found that the accumulated uncertainty is positively correlated with the time constant for both models (adding an intercept term to the models did not qualitatively change the results). This means that higher time constants yield larger uncertainty and, therefore, participants should undershoot more. However, this is the opposite of the observed effect of the time constant on the responses. Hence, differences in the accumulation of perceptual uncertainty cannot account for it.

Suppl. Figure S5: Changes in travel distance for a given control input under different control dynamics. Whether in the domain of velocity (top) or acceleration control (bottom), a control input that is appropriate to reach a certain target distance (horizontal black dashed line) under only a certain time constant (red vertical line) will produce erroneous displacements under any other time constant (blue line). For smaller time constants, the intended distance will be undershot, whereas larger time constants will lead to overshooting. In other words, assuming that the red vertical line denotes the believed dynamics of a controller, a larger actual time constant (underestimation) will lead to overshooting (relative to the intended displacement; horizontal black dashed line). Inversely, overestimation of the time constant would lead to undershooting. Note that, for acceleration control, we chose a bang-bang controller so that we could demonstrate that this holds true whether or not there is braking at the end of the trial.

Here, the current acceleration is calculated in the VR coordinates (x and y components). This is also where the GIA error feedback loop (see STEP 10) updates the VR acceleration, using the updated velocity from the previous timestep (explained in STEP 10). After the acceleration is obtained, it is transformed back to the participant's coordinates (x and y components), yielding the desired platform acceleration.

STEP 5:
This is the Motion-Cueing (MC) step. Here, the amount of tilt and translation that will be commanded is computed, based on the tilt-translation trade-off we set. First, the platform's desired acceleration is computed.

In a next step, we compute the motion command that should be sent to the platform. Note that the platform is placed at a height h below the head. Therefore, tilting the platform by an angle θ (in degrees) induces a linear displacement of the head corresponding to −h·θ·π/180, so a compensatory linear displacement is added to the platform's motion. Next, we limit the platform's acceleration, velocity and position commands to ensure that they remain within the limits of the actuators. For this purpose, we define a limiting function that leaves commands unchanged within a linear range and saturates smoothly beyond it (a sketch is given below). The same operation takes place for the y component of the acceleration, as well as for the platform velocity and position. The process is repeated for the tilt command itself.

We set the linear-region threshold to 75% of each limit, and the limits to 4 m/s² (linear acceleration), 0.4 m/s (linear velocity), 0.23 m (linear position), 300 °/s² (angular acceleration), 30 °/s (angular velocity) and 10° (tilt angle), slightly below the platform's and actuators' physical limits. This ensured that the platform's motion matched the motion cueing algorithm's output exactly, as long as it stayed within 75% of the platform's range.
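A sketch of such a limiting function, with the two stated properties (identity within 75% of each limit, smooth saturation slightly below the limit), is given below; the exact sigmoid used in the experiment is not specified, so tanh is used here purely for illustration.

```python
# Minimal sketch: soft-limiting of a motion command against an actuator limit.
import numpy as np

def soft_limit(x, limit, linear_frac=0.75, ceiling_frac=0.95):
    """Identity within linear_frac*limit; smooth saturation below ceiling_frac*limit."""
    a = linear_frac * limit          # end of the pass-through (identity) region
    b = ceiling_frac * limit         # asymptotic ceiling, slightly below the hard limit
    mag = np.abs(x)
    bent = a + (b - a) * np.tanh((mag - a) / (b - a))
    return np.where(mag <= a, x, np.sign(x) * bent)

# Commands well inside the envelope pass through unchanged; extreme commands
# are capped just below the actuator limit (here, 95% of it).
print(soft_limit(np.array([0.2, 0.7, 1.5, 10.0]), limit=1.0))
```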

Otherwise, the function ensured a smooth trajectory and, as detailed in STEPs 8 to 10, a feedback mechanism was used to update the participant's position in the VR environment, so as to guarantee that visual motion always matched inertial motion.

However, we found that this led to numerical instability, and instead we introduced a time constant τ_MC = 1 into the computation, as shown in STEP 3.