Abstract
Path integration is a process in which navigators estimate their position and orientation relative to a known location by using body-based internal sensory cues that arise from navigation-related bodily motion (e.g., vestibular and proprioceptive signals). Although humans are capable of navigating via path integration in small-scale space, a question has been raised as to whether path integration plays any role in human navigation in large-scale space because it is inherently prone to accumulating error. In this review, we examined whether there is evidence that path integration contributes to large-scale human navigation. It was found that navigation with path integration (e.g., walking in a large-scale environment) can enhance learning of the layout of the environment as compared with mere exposure to the environment without path integration (e.g., viewing a walk-through video while sitting), suggesting that the body-based cues are reliably processed and encoded through path integration when they are present during navigation. This facilitatory effect is clearer when proprioceptive cues are available than when the navigators receive vestibular cues only (e.g., driving or being pushed in a wheelchair). More specifically, path integration with proprioceptive cues may help build survey knowledge of the environment in which metric distance and direction between landmarks are represented. Overall, the existing data are indicative of path integration’s contributions to large-scale navigation. This suggests that instead of dismissing it as too error-prone, path integration should be characterised as a fundamental mechanism of human navigation irrespective of the scale of a space in which it is carried out.
Spatial navigation is crucial for human interaction and survival. Sensory guidance of navigation is achieved via two complementary processes: piloting and path integration (Loomis et al., 1999). In piloting, navigators determine their position by perceiving visual, auditory, and tactile landmarks in an environment. In this method, external sensing of the landmarks is necessary, but the navigators do not need to process information about their movement. In path integration (also known as dead reckoning), on the other hand, the navigators derive the velocity and acceleration of their locomotion from internal cues that are generated by the motion of the body (i.e., vestibular, proprioceptive, and efference-copy signals, collectively referred to as idiothetic signals; Mittelstaedt & Mittelstaedt, 2001), and use this information to track and update their location and orientation. There is abundant literature investigating the mechanisms of path integration and how it is utilised by humans in small-scale experimental environments (e.g., Chance et al., 1998; Klatzky et al., 1998; Mittelstaedt & Mittelstaedt, 2001). However, the involvement of path integration in real-world navigational situations remains poorly understood.
A current position on this issue highlights the error in estimating self-location when human navigators use path integration by itself. Generating, sensing, and integrating body-based cues each come with their own errors, and since the computation of the self-location is continuously updated as the navigators take every step, these errors carry over and accumulate as a locomotion path progresses (Cheung et al., 2007, 2008). According to this position, the rate of error accumulation is too rapid, and therefore path integration alone should be useful only for brief periods. Indeed, when Souman et al. (2009) had blindfolded participants attempt to walk straight in a large field for 50 minutes, on average, they could not go farther than 100 m from the starting location due to random veering. They reached this asymptotic level of displacement within just a few minutes, suggesting that beyond this point they were not able to maintain any systematic estimation of their location and orientation by path integration. On the basis of findings like this, path integration is recognised as a primitive mechanism that does not contribute much to navigation in the real-world context (Dudchenko, 2010; Eichenbaum & Cohen, 2014). When piloting is simultaneously possible, the navigators might predominantly rely on external landmarks and make little use of body-based cues (Foo et al., 2005, 2007).
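The error-accumulation argument can be made concrete with a small simulation. The sketch below is purely illustrative (all parameter values are hypothetical, and it is not a fit to Souman et al.'s data): an agent's heading drifts by uncorrected Gaussian noise at every step, in the spirit of the random veering described above, so its net displacement grows far more slowly than the distance actually walked.

```python
import math
import random

def blind_walk(steps=3000, step_len=0.75, heading_sd=0.3, seed=0):
    """Walk `steps` steps of `step_len` metres while the heading drifts
    by Gaussian noise (sd in radians) at each step; return the final
    straight-line displacement from the starting location."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    for _ in range(steps):
        heading += rng.gauss(0.0, heading_sd)  # uncorrected angular error
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return math.hypot(x, y)

path_length = 3000 * 0.75    # 2,250 m walked in total
displacement = blind_walk()  # far smaller than the path length
```

With the heading noise set to zero the walker covers the full 2,250 m in a straight line; with any appreciable noise, displacement scales roughly with the square root of path length rather than linearly, which is the signature of a navigator who can no longer maintain a systematic estimate of orientation.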
However, after controlling for or resetting the accumulated error, path integration has been shown to be capable of guiding blindfolded navigators accurately to a goal that is up to 20 m away (Andre & Rogers, 2006; Rieser et al., 1990; Thomson, 1983). Furthermore, although error increases beyond this distance, the primary mechanisms of path integration seem to remain consistent up to 500 m, generating less accurate but still systematic estimates of the direction and distance of travel (Cornell & Greidanus, 2006; Harootonian et al., 2020; Hecht et al., 2018). Thus, if it is the accumulation of error that makes navigation by path integration impractical, all it takes for navigators to utilise path integration may be to clear or reduce the error periodically (e.g., approximately every 20 m). The error is corrected when the navigators can determine their location and orientation using external landmarks (Etienne et al., 2004; Philbeck & O’Leary, 2005; Zhang & Mou, 2017; Zhao & Warren, 2015), which occurs frequently during navigation in everyday environments. For example, for the last author of this article to go from his desk to the post room, he walks 3 m to reach his office door, where path integration error is reduced to zero; then he walks 17 m along the corridor to a corner, and by turning there he obtains another opportunity to reset the error; in the final segment he walks 5 m to arrive at the post room. In this manner, even though the entire trajectory may exceed the limit of path integration, it often consists of a series of short segments, within each of which path integration may well offer a reliable navigation strategy.
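The segment-by-segment logic of the office-to-post-room example can be illustrated with a toy calculation. Assuming, purely for illustration, that path integration error grows linearly with distance at a hypothetical rate per metre, resetting the error at each landmark bounds the worst-case error by the longest segment rather than by the whole route:

```python
def position_error(segments, error_rate=0.05, reset=True):
    """Accumulate position error over path segments (in metres).
    `error_rate` is a hypothetical error growth per metre; with
    reset=True, the error returns to zero at each landmark between
    segments. Returns the worst error reached anywhere on the route."""
    err = worst = 0.0
    for length in segments:
        err += error_rate * length
        worst = max(worst, err)
        if reset:
            err = 0.0
    return worst

# The 3 m + 17 m + 5 m route from the text:
with_resets = position_error([3, 17, 5], reset=True)      # bounded by the 17-m segment
without_resets = position_error([3, 17, 5], reset=False)  # grows with the full 25 m
```

Under this (deliberately simplified) linear-growth assumption, the worst-case error with resets is determined by the longest landmark-free segment, which is why a route far longer than the 20-m limit can still be navigated reliably by path integration.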
Indeed, in small-scale space that includes external cues, it has been shown that internal cues are not disregarded but combined, often in a nearly optimal (i.e., Bayesian) fashion, with the external cues to mediate various aspects of navigational behaviour (Chen et al., 2017; Kalia et al., 2013; McNamara & Chen, 2022; Nardini et al., 2008; Newman & McNamara, 2021; Qi et al., 2021; Sjolund et al., 2018; Tcheang et al., 2011; ter Horst et al., 2015; Zhang et al., 2020; see also Harootonian et al., 2022; Zhao & Warren, 2015, 2018).
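The standard formalisation of such near-optimal combination is inverse-variance (maximum-likelihood) weighting of independent Gaussian cue estimates, in which each cue is weighted by its reliability and the combined estimate has lower variance than either cue alone. A minimal sketch, with hypothetical numbers:

```python
def combine_cues(estimates, variances):
    """Maximum-likelihood combination of independent Gaussian cues:
    each estimate is weighted by its inverse variance; the combined
    variance is the reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Hypothetical example: a visual landmark puts the goal 10.0 m away
# (variance 1.0); path integration says 12.0 m (variance 4.0).
combine_cues([10.0, 12.0], [1.0, 4.0])  # → (10.4, 0.8)
```

The combined estimate sits closer to the more reliable visual cue, yet its variance (0.8) is lower than that of either cue alone, which is the behavioural signature that the cue-combination studies cited above test for.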
Furthermore, in the animal literature, there is recent emphasis on the role of proximal, as opposed to distal, landmarks in determining how body-based cues drive both location-learning behaviour and the spatial tuning of hippocampal place cells (Jayakumar et al., 2019; Knierim & Hamilton, 2011; Knierim & Rao, 2003; Sanchez et al., 2016). Similar findings, albeit still limited in number, are available from human behavioural studies as well (e.g., Jabbari et al., 2021; Zhang & Mou, 2019), suggesting that path integration does interact with local external cues that identify specific locations in the immediate surroundings. Taken together, these ideas lead to the proposal that instead of dismissing path integration as too error-prone, human navigation in real-world situations may better be conceptualised as a multimodal process to which both landmarks and body-based cues contribute. From this point of view, this focussed review examines whether the role of path integration in large-scale navigation by humans is demonstrated in the current literature.
Scope of this article and related previous reviews
Chrastil and Warren (2012) provided a broad review of active and passive spatial learning, and as part of it, they discussed the roles of body-based cues in navigation and other related processes. Compared with their work, the current review is more narrowly focussed on possible contributions of path integration to navigation in large-scale space, which is defined here as an environment that is large enough to let navigators travel farther than the presumed limit of reliable path integration in the absence of error resetting (i.e., approximately 20 m; Andre & Rogers, 2006; Rieser et al., 1990; Thomson, 1983). Such navigation is referred to as large-scale navigation in this article. To meet this definition, the environment does not need to span a straight-line distance of 20 m; even a smaller environment can contain a path that is longer than 20 m (e.g., a maze-like environment). Typically, large-scale navigation involves going beyond locations that can be directly perceived from a single vantage point (Ittelson, 1973; Montello, 1993). This specific focus is justified because, as summarised above, this is the kind of navigation in which the utility of path integration is most debatable.
Ruddle (2013) and Waller and Hodgson (2013) also reviewed similar topics with the overarching goal of assessing whether and how body-based cues enhance spatial orientation and navigation in virtual environments. The scope of the current review is different from theirs—this review is concerned with path integration and its role in large-scale navigation in general, whereas the aims of their reviews were to contribute specifically to understanding and development of virtual-reality technology.
Although some key studies that predate the previous reviews (Chrastil & Warren, 2012; Ruddle, 2013; Waller & Hodgson, 2013) are discussed in this article, emphasis was given to recent data that became available after the publication of those reviews. In addition, other than a few references that are particularly relevant to the discussions below, this review is centred on findings about human spatial navigation. Comprehensive reviews are available for research on path integration by nonhuman species (Collett & Collett, 2000; Etienne & Jeffery, 2004; Heinze et al., 2018; McNaughton et al., 1996; McNaughton et al., 2006; Moser et al., 2014).
While this review was primarily developed through the authors’ knowledge of the literature, it was complemented with a systematic search of the PubMed database (https://pubmed.ncbi.nlm.nih.gov) to ensure comprehensive coverage of previous studies. The search was run in December 2021 with the following combination of keywords: (‘path integration’ or ‘dead reckoning’ or body-based or idiothetic or proprioceptive or vestibular or proprioception) and (navigation or wayfinding) and human. This search returned 513 items, 31 of which had already been included in the initial draft of this review. The remaining 482 items were screened using their titles, abstracts, and method sections, with the goal of identifying empirical papers that described studies on large-scale navigation by humans. To be considered for this review, the studies had to include independent variables through which the effects of body-based cues on large-scale navigation could be inferred (e.g., comparing navigation performance with and without those cues). This screening excluded 460 of the 482 papers, most commonly because navigation was not large-scale enough or no body-based cues were involved (e.g., navigation in a virtual environment using visual cues only). The same process was repeated in August 2022 and identified two more relevant papers (out of 15 new items). Thus, a total of 24 papers were added to this review as a result of the PubMed search.
The current understanding of path integration in large-scale navigation
To begin this review, we first give an outline of the current knowledge about the contribution of path integration to everyday human navigation. The goal of this section is to broadly survey the literature to set up a working hypothesis about possible roles of path integration in large-scale navigation. To this end, relevant previous studies are grouped into four types and summarised below.
Assessing spatial memory that results from navigation with or without path integration
In previous studies that aimed to evaluate human navigation in a large-scale environment, a common approach was to test spatial memory that participants formed after they navigated in the environment. In this approach, memory performance was used as a measure for assessing the degree to which a given kind of spatial information was acquired and processed in the service of navigation. When applied to research on path integration, these studies examined whether the presence of body-based cues during navigation enhanced learning of the environmental layout.
Waller et al. (2004) examined the effects of vestibular and proprioceptive cues on the acquisition of landmark locations in a university campus. Among several manipulations made in this study, critical conditions involved participants who navigated the campus either by walking in it (which allowed them to obtain vestibular, proprioceptive, and visual cues) or by watching a walk-through video of the campus while sitting still in a laboratory (which provided the visual cues only). Notably, the two groups of participants received identical visual cues because those who walked recorded the video with a head-mounted camera and watched it in real time through a head-mounted display (Fig. 1). Their memory for the landmark locations was tested by having them point to the landmarks and measuring absolute angular error in pointing. The walking group performed this test significantly better than the video group, showing that the vestibular and proprioceptive cues benefited spatial learning over and above what the visual cues did. Comparable patterns were found when participants learnt the layout of a building by navigating it on foot, being pushed in a wheelchair, or viewing a video captured by the participants who walked (Waller & Greenauer, 2007): Whereas the wheelchair and video groups (which had either visual and vestibular cues or visual cues only) did not differ from each other in a memory task in which they pointed to target locations along the complex paths they had experienced, the walking group (which had visual, vestibular, and proprioceptive cues) outperformed the other two groups. Together, these results indicate that body-based (particularly proprioceptive) cues were processed and encoded in mental spatial representations, which in turn suggests that path integration yielded reliable output during large-scale navigation.
More recently, Bonavita et al. (2022) took a similar approach by using two sets of spatial learning tasks: one involved experiencing a hospital campus via actual navigation, and the other required watching a video that simulated driving through a city. Participants’ spatial memories for the hospital campus and the city were assessed by having them indicate the direction of travel at each intersection, recognise individual landmarks, and specify the order in which the landmarks were encountered. Additionally, the participants either drew the navigated path on a blank map of the hospital campus or placed pictures of the landmarks on a map of the city on which the simulated driving path was already indicated. Performance in these tasks was evaluated in such a way that higher scores corresponded to more accurate memories for the environments (e.g., counting the number of correctly recognised landmarks). When the scores of each task were compared between the two learning conditions (actual navigation versus simulated driving), they showed statistically significant positive correlations (rs > .45, ps < .01), with the exception that the scores from the direction indication task were not related to each other (r = .02, p = .46). Bonavita et al. interpreted this pattern of correlations by postulating that the presence of body-based cues during spatial learning was important for memorising specific turn directions, but for the other aspects of navigation that were captured by their tasks, the body-based cues were not crucial and the participants formed comparable representations of large-scale space with or without these cues. However, the validity of this interpretation is unclear because the learning conditions were not equated: actual navigation was performed only on the hospital campus and simulated driving was carried out only in the city, creating multiple differences in what the participants learnt in each condition (e.g., different numbers of intersections and landmarks).
The tasks also differed in their details between the conditions. For example, while the participants were first shown the correct path in the navigation condition, they acquired it by trial and error in the simulated driving condition (i.e., they guessed the travel direction and received corrective feedback). Thus, although it is interesting that only the direction indication task did not yield a reliable correlation, the dissimilarities between the conditions and the tasks do not permit drawing a clear conclusion as to whether the body-based cues contributed to large-scale navigation in the Bonavita et al. study.
Using the same design as in the Waller et al. (2004) and Waller and Greenauer (2007) studies, Waller et al. (2003) specifically examined the effects of vestibular cues through the comparison between participants who rode in the back seat of a car that drove through a city neighbourhood (which allowed them to obtain both vestibular and visual cues) and those who were seated in a room and watched a video of the neighbourhood taken from this car (which gave them the visual cues only). The two groups were matched in the visual cues they received—those in the car viewed the same video through a head-mounted display as it was recorded. All participants performed memory tests in which they pointed to landmarks in the neighbourhood, estimated straight-line distances between them, and indicated the landmark locations by placing markers on a grid sheet. On none of these measures did the car group show enhanced spatial memory relative to the video group. These results are consistent with those from Waller and Greenauer (2007) and suggest that path integration on the basis of vestibular signals alone might not always provide spatial information about the environment above and beyond what is obtainable through vision.
It should be noted, however, that there are instances in the literature that are suggestive of vestibular contributions to large-scale navigation. For example, Jabbari et al. (2021) had participants drive along predetermined paths in a virtual town with or without vestibular cues by using a driving simulator on a motion platform that was capable of delivering vestibular stimulation according to the simulated vehicle’s movement. The participants learnt the paths by following signs, and then at test, they were to reach the same destinations as in the learning phase without the signs. The rate of success in arriving at the destinations within a given time limit was significantly higher in the presence of the vestibular cues, but this pattern was observed only when the virtual town contained proximal landmarks (as opposed to distal or no landmarks). Although it is yet to be specified how the vestibular cues interacted with the proximal landmarks in this driving task, results like this demonstrate that there are situations in which the vestibular signals can exert observable effects on navigation in large-scale space.
Another notable pattern of findings in the literature is that patients with vestibular damage are impaired in navigation in large-scale space. Biju et al. (2021) had those patients and age- and gender-matched control participants walk approximately 30-m-long paths on a hospital floor. The paths led them to designated destinations via circuitous routes. When the patients and controls were asked to retrace the learnt paths and also to go directly to the destinations, the patients showed less optimal performance than the controls by walking longer distances in both tasks. These results suggest that vestibular dysfunction negatively affected the patients’ navigation on the hospital floor. In interpreting these results, it should be noted that the patients exhibited the same impairments when they performed the tasks by navigating in an equivalent virtual environment using a joystick (i.e., under a condition in which navigation did not evoke vestibular self-motion cues). Thus, it is possible that the impaired navigation shown by the patients was not due to the lack of incoming vestibular signals during locomotion, but instead could be attributed to abnormality in higher-order navigation-related computation that could have resulted from the prolonged absence of vestibular stimulation (Bigelow & Agrawal, 2015; Schautzer et al., 2003; Vitte et al., 1996). Indeed, reduced volume of cortical and hippocampal grey matter has been reported in vestibular patients (Brandt et al., 2005; Göttlich et al., 2016; Hüfner et al., 2009; Kamil et al., 2018; Kremmyda et al., 2016), which can cause decline in spatial memory and navigation abilities (Guderian et al., 2015; Nedelska et al., 2012).
Nevertheless, regardless of whether the patients’ navigational behaviour was accounted for by the loss of online vestibular information or abnormal higher-order functions, the fact remains that there is a direct or indirect consequence of vestibular deprivation, suggesting that a certain role is played by the vestibular system in large-scale navigation.
Taken together, it can be inferred from the above studies that path integration contributes to navigation in large-scale space, insofar as proprioceptive input is available. It is possible that vestibular cues also play a part in large-scale navigation, but at this point, their role is less clearly characterised and may well be less salient than that of proprioceptive cues. These inferences are consistent with findings from research on path integration in small-scale space: blindfolded navigators track their location and orientation well while walking multisegment paths (i.e., with proprioception; Klatzky et al., 1990; Loomis et al., 1993; Philbeck et al., 2001; Yamamoto et al., 2014), but their self-tracking performance is poor, and can even be indicative of disorientation, when they follow the paths while sitting in a wheelchair (i.e., with vestibular cues alone; Sholl, 1989), showing the importance of proprioceptive cues for staying oriented and localised in the surroundings.
Looking for modality-specific effects of path integration on navigation
In the behavioural studies reviewed above, participants in varying conditions (e.g., walking, being in a car, and viewing a video) navigated an environment the same number of times, and differential memory performance that resulted was taken as evidence that different cues contributed to navigation differently. By contrast, Huffman and Ekstrom (2019) equated participants’ learning of the locations of landmarks across various cue conditions and examined brain activation patterns while the participants retrieved the landmark locations from memory. A strength of this approach is that if the patterns of activation differed between the conditions, this outcome would not be confounded by dissimilar levels of spatial learning and therefore could be unequivocally attributed to the effects of varied sensory cues.
Specifically, Huffman and Ekstrom (2019) had participants navigate in large-scale immersive virtual environments through various methods that differed in the types of spatial cues they afforded. For example, participants in the enriched group walked and turned on an omnidirectional treadmill (Fig. 2). This allowed them to receive visual, vestibular, and proprioceptive cues because unlike conventional linear treadmills on which users can walk only in a single direction (usually along their sagittal axis), omnidirectional treadmills allow for walking in any direction, enabling the users to make not only translational but also rotational body movements (Harootonian et al., 2020; Souman et al., 2011). On the other hand, participants in the impoverished group controlled movement entirely with a joystick, receiving mostly visual cues only. All groups of participants repeatedly learnt the environments until their accuracy in pointing relative directions among landmarks reached a common criterion. Subsequently, the participants performed the same pointing task one more time while their brain activity was recorded via functional magnetic resonance imaging (fMRI). Neuroimaging data showed very similar patterns of activation irrespective of which cues were available during navigation. In addition, behavioural performance in the pointing task did not reveal any effects of the cues—pointing accuracy was statistically indistinguishable between the groups even after learning each environment just once, and there was no group difference in the number of repetitions required to reach the criterion. These results led to the conclusion that path integration, even with proprioceptive cues, did not play any unique roles in this study.
This conclusion is consistent with evidence from behavioural studies in which comparable performance in spatial memory retrieval was found after experiencing environmental layouts through different sensory modalities, supporting the view that spatial representations are at least functionally equivalent, or perhaps even fully amodal (i.e., independent of the encoding modality), regardless of the way they are encoded in long-term memory (Avraamides et al., 2004; Bryant, 1997; Eilan, 1993; Giudice et al., 2011; Loomis et al., 2002; Valiquette et al., 2003; Wolbers et al., 2011; Yamamoto & Shelton, 2009). It should be noted, however, that findings that are not readily compatible with this view also exist in the spatial memory literature, showing that after learning object locations to criterion, participants still exhibited dissimilar levels of performance in remembering the layouts of the objects depending on encoding modalities (Newell et al., 2005; Yamamoto & Philbeck, 2013; Yamamoto & Shelton, 2007). For example, Yamamoto and Philbeck (2013) showed that when participants learnt object locations in a room to criterion by viewing them with or without eye movements (i.e., with or without proprioceptive cues from extraocular muscles), accuracy and response latency in pointing relative directions among them still differed between the two learning conditions—the presence of the extraocular proprioceptive cues during memory encoding facilitated subsequent retrieval and mental manipulation of spatial representations. Thus, although Huffman and Ekstrom’s (2019) results put forth the view that path integration does not make distinctive contributions to the learning of large-scale environmental layout, the debate as to whether the encoding modality leaves unique traces in spatial memory has not been settled (Shelton & Yamamoto, 2009).
In addition, although Huffman and Ekstrom’s (2019) approach had the advantage discussed above, it also had a disadvantage: for the learning conditions to be equated in terms of the memory performance they afford, the study must be designed in such a way that the most cue-impoverished, vision-only condition provides sufficient information for navigating and learning the environments as well as for retrieving memory for the environments later. This design is inherently conducive to finding no effects of information encoded via nonvisual modalities because this information is added to already sufficient visual information. Indeed, it has been suggested that for the merit of multimodal spatial learning to become observable through comparison against purely visual learning, to-be-learnt environments, or the tasks that assess spatial knowledge of them, must be sufficiently complex (e.g., Grant & Magee, 1998; Richardson et al., 1999; Sun et al., 2004; Yamamoto & Shelton, 2005, 2007). Thus, it is possible that the null result in the Huffman and Ekstrom (2019) study was a consequence of making the nonvisual information supplied by path integration noncritical or even irrelevant to the task.
Relating path integration ability to performance in large-scale navigation
Another approach to investigating path integration’s role in large-scale navigation is to measure abilities in path integration and navigation separately and examine the relationship between them. Two studies took this approach and yielded apparently inconsistent results: Hegarty et al. (2002) showed that better performance of path integration correlated with greater self-reported proficiency of navigation, whereas Ishikawa and Zhou (2020) claimed that improved path integration skills did not help construct more accurate spatial memories when participants navigated in a city.
Hegarty et al. (2002) measured participants’ path integration by guiding them along 60- or 180-ft multisegment paths of various configurations without vision and then asking them to point to the origin of each path. Subsequently, the participants’ preference and experience in everyday navigation were assessed using the Santa Barbara Sense of Direction (SBSOD) scale, a self-report questionnaire that has been shown to correlate with individuals’ spatial aptitude in large-scale environments (Hegarty et al., 2006; Labate et al., 2014; Schinazi et al., 2013; Sholl et al., 2006). Results showed that absolute angular error in pointing in the path integration task significantly correlated with the self-rating of navigation ability (the smaller the error, the higher the self-evaluation; r = −.40), suggesting that heightened sensitivity to body-based self-motion cues is one contributing factor for proficient navigation in large-scale space.
Ishikawa and Zhou’s (2020) path integration task was similar to Hegarty et al.’s (2002), except that the paths were shorter (approximately 3–16 m) and participants walked without guidance from the end of each path back to its origin. In addition, when the participants stopped at the location that they thought was the origin, they pointed in the direction of north. Some of the participants, whose sense of direction was poor as determined by the SBSOD scale (Montello & Xiao, 2011), were trained on the path integration task such that they performed it repeatedly with feedback. Subsequently, they were compared against other participants without such training on tasks in which all participants walked 450-m urban paths and learnt landmark locations along them. There were two groups of untrained participants: one in which participants’ sense of direction was as poor as that of the trained participants, and another in which participants had a better sense of direction that was considered average, both according to their scores on the SBSOD scale. The training significantly increased the accuracy of path integration within the trained group, particularly in the residual distance from each participant’s stopping point to the origin. However, it did not help achieve competent learning of the landmark locations. Specifically, the trained participants made smaller errors in judging directions between the landmarks than the untrained poor-sense-of-direction participants, but the effect size of this group difference was small, and the trained participants still performed the task poorly—that is, they were still less accurate than the untrained participants with the average sense of direction. In estimating pathway and straight-line distances between the landmarks and drawing maps of the travelled paths, the two poor-sense-of-direction groups did not differ from each other, showing no benefit of the training.
Ishikawa and Zhou interpreted these results to mean that the improved accuracy of path integration in small-scale space had limited effects on large-scale navigation performance in naturalistic settings.
The approach taken by these studies was promising, but its implementation had several issues. Hegarty et al. (2002) administered the SBSOD scale after participants performed the path integration task. Thus, it is possible that those who thought they did well on the path integration task rated their navigation ability higher in the questionnaire, leading to the observed correlation. However, given that items on the scale are largely focussed on episodes in large-scale navigation (e.g., ‘I very easily get lost in a new city’) and none of them explicitly ask about the experience of tracking a location while walking, it is not very likely that the study design caused the suspected carryover effect. On the other hand, a more serious issue in the Ishikawa and Zhou (2020) study is that they did not give the untrained participants the path integration task at all. This left it unclear how, either at baseline or after the training, the trained and untrained poor-sense-of-direction groups, as well as the trained poor-sense-of-direction and untrained average-sense-of-direction groups, differed in their path integration performance. For example, the possibility remains that despite the significant improvement within the trained group, the training did not sufficiently differentiate the trained and untrained poor-sense-of-direction groups. If this was the case, the mostly similar results these two groups yielded from the landmark-learning tasks might not have been surprising. Until this issue is resolved, it is difficult to draw any firm conclusions from these results.
Inferring the roles of body-based cues through manipulation of bodily self-consciousness
To examine the effects of path integration on navigation, a straightforward method is to manipulate sensory self-motion cues so that navigators carry out path integration to different degrees. Indeed, most of the studies reviewed in this article followed this methodology. On the other hand, Moon et al. (2022) took a unique approach in which they inferred the roles of body-based cues by having participants navigate in a virtual environment with or without an avatar while lying in an MRI scanner (i.e., in the absence of the body-based cues). The avatar was designed such that the participants felt some sense of ownership of the avatar’s body through mechanisms that were similar to those of rubber-hand and full-body illusions (Botvinick & Cohen, 1998; Ehrsson, 2007; Lenggenhager et al., 2007)—that is, by viewing the avatar’s posture and hand movements that were synchronous to the participants’ supine posture and hand movements for moving a joystick in the scanner, the participants experienced illusory self-identification with the avatar (Fig. 3). This illusion helped psychologically simulate navigation with body-based cues because under normal conditions bodily self-consciousness is thought to be achieved by having coherent visual, vestibular, and proprioceptive signals that are all anchored in one’s own body (Blanke et al., 2015).
Participants in the Moon et al. (2022) study first learnt object locations by freely exploring a large circular field (110 m in diameter in the virtual space) that, apart from the target objects, contained only distal landmarks (Fig. 3). Subsequently, the targets disappeared from the field, and the participants were asked to navigate back to the location of each target. The presence of an avatar enhanced performance in this task—compared with when the participants performed the same task without the avatar, they moved closer to the actual target location and also took a more efficient (i.e., shorter) path in doing so. In addition, the participants stayed farther away from the border of the field while navigating with the avatar. This result suggested that the avatar shifted the participants’ perceived self-location in the field (i.e., it was moved forward to the avatar’s location that was shown in front of the participants, which in turn had the effect of making them stop sooner when approaching the border), providing evidence that the avatar did induce the intended illusion (Dieguez & Lopez, 2017). Furthermore, the behavioural improvement in the task was associated with increased neural activity in the right retrosplenial cortex, a brain area that contributes to encoding and retrieval of spatial information about a large-scale environment during first-person navigation (Baumann & Mattingley, 2013; Byrne et al., 2007; Chrastil, 2018; Sherrill et al., 2013; Vann et al., 2009). Taken together, these results suggest that the simulated presence of body-based cues improved the participants’ learning of the object locations, showing the promise of Moon et al.’s approach for facilitating neuroimaging investigation of these cues’ roles in large-scale navigation.
Interim summary
In sum, there are several studies that specifically investigated the contribution of path integration to navigation in large-scale space. Although they have yet to converge on a clear conclusion, the evidence they present is sufficiently strong for drawing out a working hypothesis that path integration does take part in large-scale navigation when proprioceptive cues are available (Waller & Greenauer, 2007; Waller et al., 2004). On the other hand, more research is needed to clarify whether vestibular cues are crucial for competent performance in everyday navigation. While data from patients with vestibular loss are indicative of important roles played by the vestibular cues (Biju et al., 2021), those from participants with intact vestibular functions often identify limited benefits of having the vestibular cues on top of visual cues (Jabbari et al., 2021; Waller & Greenauer, 2007; Waller et al., 2003), suggesting that the vestibular roles may not be primarily defined by online spatial information the vestibular system provides to ongoing navigation.
Possible specific roles of path integration in large-scale navigation
Building on the working hypothesis formulated above, we now examine whether path integration makes any specific contributions to navigation in large-scale space. That is, if there are any unique roles that path integration plays, what can they be? Three possibilities regarding this question are considered below with the aim of generating more precise hypotheses about how path integration contributes to large-scale navigation in humans.
As shown below, the three ideas discussed in this section are interrelated. First, we consider the possibility that the chief role of path integration is to help acquire metric properties of a large-scale space. Second, we elaborate on this idea by specifying that path integration can be more important for encoding distance information than direction information during large-scale navigation. Third, we describe a claim by Wang (2016), in which she pushed the above ideas further by arguing that spatial information obtained through path integration is sufficient for constructing a detailed mental representation of an environment.
Providing metric information about an environment
To understand how path integration may uniquely contribute to human spatial navigation, it is useful to consider certain forms of mental spatial representation: route and survey knowledge (Siegel & White, 1975; Fig. 4). Route knowledge is composed of information about identities of landmarks and specific routes between them. For example, when navigators have travelled from Landmark A to Landmark B on one occasion and from Landmark A to Landmark C on another occasion, they can learn what the landmarks are and how particular pairs of them are connected (e.g., ‘to go from King George Square Station to the Library, turn right immediately after coming out of the station and walk one block; to go from King George Square Station to the Botanic Garden, walk straight from the station exit until the street ends’). With this coarse knowledge, the navigators would have trouble going directly from Landmark B to Landmark C because deriving this untravelled direct path requires metric information about the distance and direction of A–B and A–C pairs, but the route knowledge only consists of propositional and topological relationships between the experienced landmark pairs. When the navigators have come to acquire such metric details that allow them to take novel shortcuts between places, they are said to possess survey knowledge, or a cognitive map (O’Keefe & Nadel, 1978; Tolman, 1948), of the environment. As shown below, there is evidence that path integration, particularly with proprioceptive cues, can provide the metric information required for the acquisition of survey knowledge (Chrastil & Warren, 2012; Gallistel, 1990).
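The distinction can be made concrete with a toy computation: once metric A–B and A–C information is available, the untravelled B–C shortcut follows from simple vector arithmetic. The sketch below is a hypothetical illustration only; the landmark coordinates are invented and not drawn from any study.

```python
import math

# Hypothetical survey knowledge: landmark positions encoded as (east, north)
# displacements in metres from Landmark A. The values are illustrative only.
landmarks = {
    "A": (0.0, 0.0),
    "B": (120.0, 40.0),   # metric A-B information from one trip
    "C": (-30.0, 150.0),  # metric A-C information from another trip
}

def shortcut(frm, to):
    """Derive the distance and bearing of an untravelled direct path."""
    (x1, y1), (x2, y2) = landmarks[frm], landmarks[to]
    dx, dy = x2 - x1, y2 - y1
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # clockwise from north
    return distance, bearing

# The B -> C shortcut is computable even though it was never travelled.
dist, brg = shortcut("B", "C")
```

Route knowledge alone affords no such computation: it records only which routes were taken, not the displacements they produced.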
Precise definitions of survey knowledge vary in the literature in terms of how strictly it must follow the principles of Euclidean geometry (Chrastil, 2013; Chrastil & Warren, 2014; Gallistel, 1990; Gillner & Mallot, 1998; O’Keefe & Nadel, 1978; Peer et al., 2021; Poucet, 1993; Warren, 2019; Warren et al., 2017; Weisberg & Newcombe, 2016; Widdowson & Wang, 2022; Zetzsche et al., 2009). Performance in spatial memory tasks that tap into metric properties of large-scale environments often violates the Euclidean principles (e.g., McNamara & Diwadkar, 1997; Moar & Bower, 1983; Sadalla et al., 1980). This is accounted for either by permitting some biases and distortions in map-like representations (Shelton & Yamamoto, 2009) or by postulating an intermediate stage between route and survey knowledge (i.e., graph knowledge; Warren, 2019). Notably, Chrastil and Warren (2014) proposed the concept of a cognitive graph, which represents topological connections between places in a node-and-edge structure and local metric information about direction and distance of each connection using node labels and edge weights. In this manner, the exact nature of the representation of metric information continues to be debated. However, delineating the difference between these views goes beyond the scope of the current review. Rather, for its purpose, it is sufficient to broadly define survey knowledge as an internal representation of space that contains some forms of metric details of an environment in a way that allows for flexible navigational behaviour such as shortcutting.
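As a hypothetical illustration of the cognitive-graph idea, the sketch below stores places as nodes and experienced connections as edges carrying local distance and heading labels, with no global Euclidean embedding; the place names and values are invented, and this is our simplification rather than Chrastil and Warren’s (2014) formal proposal.

```python
# Minimal sketch of a cognitive graph: places as nodes, experienced
# connections as edges labelled with local metric estimates. There is no
# global coordinate frame; all metric information is edge-local.
cognitive_graph = {}

def add_connection(graph, a, b, distance_m, heading_deg):
    """Record a travelled connection (and its reverse) with local labels."""
    graph.setdefault(a, {})[b] = {"distance_m": distance_m,
                                  "heading_deg": heading_deg}
    # The return edge has the same length and the opposite local heading.
    graph.setdefault(b, {})[a] = {"distance_m": distance_m,
                                  "heading_deg": (heading_deg + 180.0) % 360.0}

add_connection(cognitive_graph, "station", "library", 90.0, 85.0)
add_connection(cognitive_graph, "station", "garden", 220.0, 10.0)

# Topological queries need no Euclidean embedding: the graph records which
# places connect and with what local metric labels, nothing more.
reachable_from_station = sorted(cognitive_graph["station"])
```

The design choice worth noting is that biases and distortions in behaviour are easy to accommodate here: each edge label can be independently erroneous without any requirement of global geometric consistency.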
Using a maze-like immersive virtual environment that was on the scale of a large room (11 × 12 m), Chrastil and Warren (2013) examined whether having vestibular and proprioceptive cues while navigating in the environment would help participants take shortcuts between landmarks later (Fig. 5). The participants first learnt the landmark locations by either walking in the maze, moving through it while sitting in a wheelchair, or viewing dynamic images that resulted from movements in the environment while seated in a stationary chair. All participants received similar visual cues, those who were in the wheelchair additionally obtained vestibular cues, and those who walked had proprioceptive cues on top of the vestibular cues. At test, the participants were brought back to a landmark and asked to go to another designated landmark by making a single turn and walking one straight path while the maze and all objects contained in it were made invisible. Results showed that the participants who walked during the learning phase turned more accurately when taking novel shortcuts than those who only viewed the dynamic images. The wheelchair and viewing-only groups did not differ from each other, suggesting that proprioceptive but not vestibular cues provided metric information about the environment that aided the participants in estimating unexperienced directions between the landmarks.
He et al. (2019) conducted a similar experiment by having participants navigate virtual shopping precincts (50 × 50 m) with or without body-based rotational self-motion cues. Each precinct contained nine square buildings that were arranged in a 3 × 3 grid pattern. Four sides of a building were occupied by different shops, creating 36 unique shopfronts. Nine of them were designated as targets and the participants were asked to go to the target shops by taking the shortest possible path. Half the participants viewed the virtual environments on a desktop computer monitor and controlled their navigation entirely via a joystick, receiving no body-based self-motion cues. The other half wore a head-mounted display to view the environments, and executed translational movements by using the joystick and rotational movements by physically turning their bodies. Thus, the latter group received rotational body-based cues (and visual cues). An interesting manipulation in this experiment was that it largely consisted of trials in which the participants were able to penetrate the buildings (penetrable trials; a video demonstration is provided by Qiliang He at https://osf.io/cmsug). In these trials, to achieve the shortest possible path, the participants had to move between shops in a straight line by going through the buildings. This manipulation presumably facilitated development of survey knowledge because it encouraged the participants to focus more on global distances and directions among the shops than on specific routes that would have been taken when moving from one shop to another in an ordinary fashion without penetrating the buildings. Results showed that the participants with the body-based cues travelled shorter distances in reaching the targets than those without the body-based cues, suggesting the benefit of having body-based self-motion signals in acquiring metric details of the environments. 
However, there is one caveat in this interpretation: The two groups differed not only in the availability of the body-based cues but also in the quality of visual cues they were presented with. By being immersed in the virtual environments, those with the body-based cues most likely had richer visual cues as well, making it unclear whether the superior performance exhibited by these participants was ascribed to the body-based cues themselves.
Notably, the experimental design employed by Chrastil and Warren (2013) helped separate the effects of path integration per se from those of other processes that often co-occur when path integration is carried out. For one thing, when navigators actively move in an environment (and therefore they can perform path integration), they make decisions about their movement (e.g., when to turn; how long they travel). Thus, cognitive decision making, not body-based sensory information, could be the primary cause of performance enhancement in navigation tasks that involve active exploration. However, it is likely that the role of decision making is limited—although half the participants in each group of the Chrastil and Warren (2013) study freely moved in the maze as they made navigational decisions during the learning phase, their performance in the shortcutting task did not differ from that of the other half who passively followed experimenters to learn the environment (for similar findings, see also Wan et al., 2010). In a follow-up study, Chrastil and Warren (2015) changed the task such that participants attempted to go from one landmark to another in the maze by taking the shortest route between them, which was not always experienced during learning (Fig. 6). As in the previous study, the participants either walked or viewed dynamic images for learning the maze (the wheelchair group was omitted in this study), and each group was split into two according to whether they controlled their navigation during the learning phase. In this case, participants who exercised their own navigational decisions did select ideal routes more frequently at test, but the benefit of decision making was confined to the walking group. Comparable facilitatory effects of decision making on spatial learning were reported by Guo et al. (2019) who had participants navigate in immersive virtual environments with rotational body-based cues (evoked by physically turning the body). 
These results, together with those of Chrastil and Warren (2013), suggest that being an active agent of navigation is advantageous to acquiring certain forms of metric environmental information, but its role in facilitating the formation of survey knowledge is not as fundamental as that of path integration.
For another, walking in an environment raises the level of physiological arousal, which may have general facilitatory effects on learning and memory regardless of the types of information being remembered (Salas et al., 2011). Therefore, the presumable contribution of path integration to spatial learning might better be explained by the effects of physiological arousal instead. This is a real possibility because the benefit of doing path integration seems to be greater when navigators walk than when they do not move their bodies by themselves (e.g., being pushed in a wheelchair). This pattern has been attributed to the difference in body-based (i.e., proprioceptive versus vestibular) cues the navigators receive during navigation, but it is also consistent with the alternative possibility. To address this issue, Lhuillier et al. (2021) varied when and how much participants walked in their experiment—they either (a) walked on a linear treadmill not only while viewing a walk-through video of a virtual city but also while performing memory tests in which they retrieved locations of landmarks in the city; (b) walked only while viewing the video; (c) walked only while doing the memory tests; or (d) stood still on the unmoving treadmill throughout the experiment. Whereas all participants who walked experienced some increase of physiological arousal, only those in the first two groups obtained proprioceptive cues that corresponded to the virtual walk in the city. Results showed that the participants with the proprioceptive cues indicated the landmark locations more accurately on a blank map of the city than the other groups of participants, indicating that path integration during the learning phase, not heightened physiological arousal that accompanied physical walking, was the main contributor to the enhanced spatial memory. 
Interestingly, the groups did not differ in another test in which the participants specified the directions of turning at given intersections that led to designated landmarks. Compared with the map-based test that assessed memory for the overall layout of the landmarks, this test focussed on remembering particular navigational actions that took place between two specific locations (i.e., route knowledge). These results further suggest that the role of path integration is more pronounced in the acquisition of survey than route knowledge.
Encoding distance information
The previous section concluded that path integration may be an important source of metric information about a large-scale environment. While it is a distinct claim in and of itself, it also calls for a further question: If path integration supplies metric information, what metric information is it, exactly—is it about distance, direction, or both?
In many of the studies reviewed above, when effects of body-based (in particular, proprioceptive) cues on spatial learning and navigation were found in large-scale space, they tended to be more pronounced in tasks that involved judging direction of landmarks than those that required estimating distance between them (Chrastil & Warren, 2013; Lhuillier et al., 2021; Waller & Greenauer, 2007; Waller et al., 2004). This pattern of results could mean that proprioceptive information feeds more into direction than distance representations in memory, but it should be noted that more accurate encoding of inter-landmark distances can lead to better estimation of relative directions between the landmarks via trigonometric computation. Thus, it is possible that the proprioceptive cues contribute metric distance information to survey knowledge, and the biased results were consequences of unspecified task demands that differed between direction and distance judgements (e.g., the extent of error in any direction judgement is limited to ±180°, whereas it is less constrained and can be unlimited in distance judgement). Indeed, there are both theoretical and empirical reasons to hypothesise that path integration with proprioceptive signals may help acquire distance information about a large-scale environment.
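The trigonometric point can be illustrated with a minimal example: given only the three pairwise distances among landmarks A, B, and C (the values below are hypothetical), the angle subtended at a landmark follows from the law of cosines, so accurate distance encoding alone can, in principle, support relative direction judgements.

```python
import math

# Hypothetical encoded inter-landmark distances, in metres.
ab, ac, bc = 100.0, 80.0, 60.0

# Law of cosines: bc**2 = ab**2 + ac**2 - 2*ab*ac*cos(angle_at_a), so the
# angle between the directions A -> B and A -> C is recoverable from
# distance knowledge alone, without any direction being sensed directly.
angle_at_a = math.degrees(math.acos((ab**2 + ac**2 - bc**2) / (2 * ab * ac)))
```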
The theoretical element that underlies this hypothesis is concerned with possible differential contributions of vestibular and proprioceptive signals, the two major sources of self-motion sensory cues in path integration, to encoding rotational and translational movements of navigators. For the rotational encoding, sensing head and gaze directions in the horizontal plane would be important, and the vestibular system may play a primary role here because it includes the sensory apparatus that is specialised for transducing changes of the head direction (i.e., semicircular canals). These changes also affect the gaze direction through the connection between the semicircular canals and extraocular muscles (e.g., the vestibulo-ocular reflex; Bronstein et al., 2015). On the other hand, although some proprioceptors in the neck and extraocular muscles may be closely involved in controlling the head and gaze directions (Crowell et al., 1998; Donaldson, 2000; Pettorossi & Schieppati, 2014), the majority of them in the other parts of the body would not directly encode these directions.Footnote 2 By contrast, for the translational encoding, proprioceptive signals generated from gait-related body motion should be critical. The vestibular system has the otolith organs that can sense linear acceleration of the body, and there is evidence that they do participate in tracking translational movements in small-scale space (Campos et al., 2012; Israël et al., 1997). However, it has been shown that impairment of these vestibular signals does not prevent participants from walking straight to remembered locations without vision (Arthur et al., 2012; Glasauer et al., 1994; Péruch et al., 2005), suggesting that the vestibular cues are not necessary for perceiving travelled distance in the presence of ambulatory proprioceptive cues. 
Considering that the effects of path integration on survey knowledge acquisition seem to primarily stem from proprioceptive cues (Chrastil & Warren, 2013), these characteristics of the vestibular and proprioceptive systems make it likely that path integration in large-scale space, when it is carried out via walking that evokes proprioceptive cues, provides distance information about the environment.
The empirical foundation for the hypothesis has been provided by studies that attempted to dissociate the rotational and translational components of path integration in the context of large-scale navigation. For example, Ruddle et al. (2011) had participants learn locations of targets by searching for them in maze-like immersive virtual environments that varied in size (9.75 × 6.75 or 65 × 45 m), and subsequently asked them to estimate straight-line distances between the targets. Some of the participants walked either in a real room or on an omnidirectional treadmill to move about the virtual environments, evoking both vestibular and proprioceptive signals that conveyed information about rotational and translational self-motion in the environments. Other participants made rotational movements by physically turning in place and translational movements by manipulating a joystick, receiving body-based self-motion cues that were mostly restricted to vestibular rotational signals. Additionally, for navigating in the larger environment only, another group of participants walked on a linear treadmill for moving straight and used the joystick to make turns. These participants had proprioceptive cues but few vestibular cues.Footnote 3 All of the participants viewed the environments through a head-mounted display and received comparable visual cues. Regardless of the size of the environments and the mode of walking (i.e., walking freely or on the treadmills), the walking groups were significantly more accurate in estimating the distances than the turning-in-place groups. These results showed the benefit of having the translational body-based cues, which should have been largely based on the proprioceptive signals as summarised above, to incorporating metric distance information into survey knowledge of large-scale space.
It should be noted, however, that findings about the role of proprioceptive cues in acquiring metric distance information have not been fully consistent in the literature. The inconsistency is notable when participants walked on treadmills instead of actually moving through space. On the one hand, Ruddle et al. (2011) found that translational body-based cues obtained through treadmill walking were useful for estimating straight-line distances between previously visited locations. Similarly, as discussed earlier, Lhuillier et al. (2021) showed that participants who walked on a treadmill during learning later placed landmarks on a blank map more accurately, demonstrating the benefit of treadmill-based proprioceptive signals to acquisition of inter-landmark distance (and direction). On the other hand, when Li et al. (2021) had participants view the floor of a virtual shopping centre (2,964 m2 of navigable space; Fig. 7) through a head-mounted display and navigate in it either by walking on an omnidirectional treadmill or by making translational movement with a controller and rotational movement by turning their heads while otherwise remaining stationary, the two navigation methods yielded equivalent estimates of direct and pathway distances between locations along travelled routes. These results showed no merit of having proprioceptive cues in building survey knowledge of the shopping centre, contrasting with those of Ruddle et al. and Lhuillier et al. The findings of Li et al. could be due to the particular method used in their study—Li et al. made a categorical assessment of the distance estimates by classifying them as correct or incorrect, instead of analysing quantified measures of the estimates. Nevertheless, further research should be conducted to resolve the conflict.
In particular, it should be examined whether body-based cues that result from less naturalistic ambulatory movement on an omnidirectional treadmill could make participants rely more on other (probably visual) cues, particularly when the other cues are of high fidelity as in the Li et al. study (Chen et al., 2017; Foo et al., 2005; Nardini et al., 2008; Zhao & Warren, 2015).
Interestingly, when studies included a condition in which participants physically walked in a real environment, they often learnt the environment better than those who navigated in a virtual replica of the environment using an omnidirectional treadmill (Hejtmanek et al., 2020; Li et al., 2021). Thus, although virtual environments combined with (omnidirectional) treadmills bring many advantages to spatial navigation research (e.g., enabling presentation of a large-scale environment in a physically limited space), it should be carefully examined what is critically different in navigation via treadmills as compared with unconstrained ambulation (Huffman & Ekstrom, 2021; Steel et al., 2021). Clarifying this difference might provide a clue to explaining the seemingly incompatible outcomes from the studies discussed in the previous paragraph.
Building survey knowledge
Previous studies reviewed in the above sections showed that path integration can feed metric details of an environment into survey-level representations. Wang (2016) advanced this idea further by proposing computational mechanisms that would allow for the construction of cognitive maps solely from path integration. Specifically, she argued that humans (and some other animals) should be able to maintain multiple path integrators, each of which is used to track a landmark location as they move about the environment. For example, when navigators traverse from Landmark A to Landmark B, they establish Path Integrator A that holds and updates the location of Landmark A. As they continue travelling from Landmark B to Landmark C, they set up Path Integrator B independently of Path Integrator A so that they can keep track of both Landmark A and Landmark B during their subsequent travel. By repeating this process, the navigators create a collection of path integrators that record locations of several landmarks. These path integrators enable the navigators to take a novel shortcut from Landmark C to Landmark A because Path Integrator A still monitors the location of Landmark A when the navigators arrive at Landmark C. The number of path integrators that can be run in parallel would be limited by each navigator’s working memory capacity, but this limitation may be overcome by storing outputs of active path integrators into long-term memory from time to time and retrieving them later when needed. In fact, at least in small-scale space, it has been shown that the number of locations that are to be tracked while moving without vision does not always affect how well the locations are mentally updated (i.e., there is no effect of set size), suggesting that some of the updating processes are carried out using enduring representations of the locations in long-term memory (Hodgson & Waller, 2006; Lu et al., 2020; Wang et al., 2006). 
Thus, theoretically, it is possible that path integration is not just one of many contributors to the construction of survey knowledge; rather, survey knowledge may even be developed directly out of the path integration system.
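As a rough computational sketch of this multiple-integrator scheme (our illustration, not Wang’s formal model), each integrator below tracks one landmark in egocentric coordinates and is updated by every rotation and translation the navigator makes, so that a novel shortcut back to any tracked landmark remains available.

```python
import math

class PathIntegrator:
    """Tracks one landmark's location in egocentric coordinates
    (x = metres to the right, y = metres ahead), from self-motion alone."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0  # landmark starts at the navigator's feet

    def turn_left(self, deg):
        # When the navigator turns left, the landmark's egocentric
        # position rotates clockwise (i.e., by -deg) around the navigator.
        rad = math.radians(-deg)
        self.x, self.y = (self.x * math.cos(rad) - self.y * math.sin(rad),
                          self.x * math.sin(rad) + self.y * math.cos(rad))

    def advance(self, metres):
        # Walking forward shifts every tracked landmark backwards.
        self.y -= metres

# Travel A -> B (50 m), then B -> C (turn 90 deg left, 50 m), spawning a
# new integrator at each landmark and updating all active ones in parallel.
integrators = {"A": PathIntegrator()}
for pi in integrators.values():
    pi.advance(50.0)
integrators["B"] = PathIntegrator()
for pi in integrators.values():
    pi.turn_left(90.0)
    pi.advance(50.0)

# At C, Integrator A still supports a direct, never-travelled path to A.
home_distance = math.hypot(integrators["A"].x, integrators["A"].y)
```

In this sketch the working-memory limit discussed above corresponds to the number of integrator objects that can be kept active at once; offloading an integrator’s current state to long-term storage and restoring it later would be a straightforward extension.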
Wang’s (2016) theory is unique in that it provides a mechanistic account of how the path integration system could play a fundamental role in building survey knowledge (or cognitive mapping; Downs & Stea, 1973) beyond what has been suggested in the literature (Foo et al., 2005, 2007; Ishikawa & Zhou, 2020). To our knowledge, direct tests of the theory have not been done yet, but there are some empirical data that are consistent with the theory. For example, if navigators maintain multiple path integrators for each of the landmarks they encounter during a trip, they should be able to go directly back to any of the landmarks from the end of the trip, not just to the beginning of the trip (the latter is called homing, which has traditionally been studied in the path integration literature, particularly in nonhuman species; Etienne & Jeffery, 2004; Heinze et al., 2018; Loomis et al., 1999). Humans are certainly capable of returning to intermediate landmarks via direct paths when they navigate by path integration (e.g., walking without vision)—indeed, children as young as five years old demonstrated this capacity after learning four locations in an 8 × 8-m room (Bostelmann et al., 2020). Similarly, Wan et al. (2012) showed that adult navigators were able to move straight back to one of two landmarks they passed while travelling multisegment paths of 15–25 m without external cues. These results suggest that at least in small-scale space with a limited number of landmarks, it is possible to construct survey-like knowledge from path integration alone. It remains to be seen whether Wang’s theory holds when navigators travel in large-scale space while dealing with a number of landmarks that exceeds their working memory capacity.
Concluding remarks
This review examined whether and how path integration would contribute to human navigation in large-scale environments. Although numerous studies have been conducted to characterise basic mechanisms of human path integration under controlled conditions, research on path integration’s role in navigation that goes beyond small laboratory spaces is still new. Now that the contribution of body-based cues to small-scale navigation has been well established (Chance et al., 1998; Chen et al., 2017; Kalia et al., 2013; Klatzky et al., 1998; Nardini et al., 2008; Qi et al., 2021; Sjolund et al., 2018; Tcheang et al., 2011; Zhang et al., 2020), it is time to extend the scope of investigation to explore what effects these cues have on navigation and other related processes that take place in more expansive naturalistic settings.
Previous studies that investigated this topic suggested that body-based, in particular proprioceptive, cues can enhance spatial learning that occurs during navigation in a large-scale environment (Waller & Greenauer, 2007; Waller et al., 2004). More specifically, navigators may acquire metric details of their movement through these cues, which play a vital role in constructing survey knowledge of the environment (Chrastil & Warren, 2013, 2015; Lhuillier et al., 2021). At this stage, however, findings from these studies are yet to be conclusive. Although it is theoretically and empirically possible that path integration with the proprioceptive cues primarily helps encode metric information about distance as opposed to direction in the environment, the previous studies have not attained this level of specificity in their conclusions. Results from relevant studies showed apparent discrepancies, which are largely ascribable to the use of virtual environments and treadmills that is common in this research (Lhuillier et al., 2021; Li et al., 2021; Ruddle et al., 2011). That is, these technologies created differences in the methods of walking as well as in the richness of external (particularly visual) cues, which prevented the studies from converging on a specific outcome. Considering that future research will most likely employ these technologies more frequently, and doing so is even necessary for investigating navigation in large-scale space with systematic manipulation of sensory and other cues, it is essential to clarify what peculiarities they bring to navigation as compared with natural walking in real environments (Steinicke et al., 2013). With a clearer understanding of the methodologies, the important next step is to scrutinise the conditions that determine what metric information path integration provides.
Another important issue that needs to be resolved in future research, which is also related to the above point, is that conditions of previous studies typically overlapped in terms of what cues they afforded (e.g., walking—proprioceptive, vestibular, and visual cues; being pushed in a wheelchair—vestibular and visual cues). Therefore, unless it is assumed that contributions of different kinds of cues would be linearly additive, comparison between these conditions does not necessarily allow for isolating effects that are unique to one particular cue type. For example, if a walking condition yielded better memory performance than a wheelchair condition, it could mean either that proprioception made a difference by itself or that proprioceptive and vestibular information interacted and this interaction was crucial for enhancing spatial memory. Given that different types of body-based cues are tightly coupled in the somatosensory system (Cabolis et al., 2018; Cullen, 2012; Ferrè et al., 2011; Ferrè et al., 2013), the assumption of linear additivity may be too simplistic. Thus, for now, the conclusion drawn from the studies reviewed in this article should be qualified accordingly—that is, it seems likely that path integration in the presence of proprioceptive cues does contribute to large-scale human navigation by facilitating encoding of metric environmental information, but it remains to be seen whether it is proprioception itself or the amalgamation of body-based cues including proprioception that brings this benefit to navigators.
As shown in this review, past research in this field generally focussed on assessing spatial memory that resulted from navigation in large-scale space. Although this is a valid and useful approach to inferring what mental operations are carried out during navigation, it inevitably creates room for possible confounds with any post-navigation processes that might occur in the course of retaining, consolidating, and retrieving spatial information from memory. Considering that spatial memories, particularly those of large-scale environments, are known to be susceptible to some stereotypical biases (Hirtle & Jonides, 1985; Mark, 1992; McNamara, 1986; McNamara & Diwadkar, 1997; Moar & Bower, 1983; Sadalla et al., 1980; Stevens & Coupe, 1978), it is important to devise paradigms through which effects of body-based cues on navigation can be measured en route as they unfold. Whether behavioural or neuroscientific, such paradigms would offer new insights into what contributions path integration makes to large-scale human navigation and whether those contributions are attributable to path integration per se. Some initial attempts in this regard have already emerged in the literature (e.g., Moon et al., 2022), and such efforts should be expanded in future research.
Notes
Another important source of input into the path integration system is optic flow, which is considered a special class of self-motion cues (referred to as allothetic cues; Loomis et al., 1999) because it is based on external visual information. However, possible contributions of optic flow to large-scale navigation are not discussed in this article in order to maintain a clear focus on the roles of idiothetic cues. A brief review of path integration by optic flow is available in Shelton and Yamamoto (2009).
There are leg and body muscles that make systematic movements during curvilinear walking, some of which may occur through dynamic interaction with head and gaze directions (Becker et al., 2002; Chia Bejarano et al., 2017; Courtine et al., 2006; Courtine & Schieppati, 2003; Hicheur et al., 2005; Imai et al., 2001). Thus, there should be proprioceptive components in the encoding of rotational body motion. However, to our knowledge, the role of these rotational proprioceptive cues in path integration has not been well characterised.
Strictly speaking, walking on a linear treadmill might evoke some vestibular signals. For example, as walkers step in place, their head moves up and down systematically according to their gait pattern (Hirasaki et al., 1999). Thus, if this head oscillation can be encoded by the vestibular system, it can inform the walkers about their supposed speed of forward locomotion in a virtual environment (Bossard & Mestre, 2018; Tiwari et al., 2021). However, compared with rich proprioceptive cues elicited by full-body ambulatory motion, these vestibular signals presumably carry much less information about the walkers’ translational movements.
References
Andre, J., & Rogers, S. (2006). Using verbal and blind-walking distance estimates to investigate the two visual systems hypothesis. Perception & Psychophysics, 68(3), 353–361. https://doi.org/10.3758/BF03193682
Arthur, J. C., Kortte, K. B., Shelhamer, M., & Schubert, M. C. (2012). Linear path integration deficits in patients with abnormal vestibular afference. Seeing and Perceiving, 25(2), 155–178. https://doi.org/10.1163/187847612X629928
Avraamides, M. N., Loomis, J. M., Klatzky, R. L., & Golledge, R. G. (2004). Functional equivalence of spatial representations derived from vision and language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 801–814. https://doi.org/10.1037/0278-7393.30.4.801
Baumann, O., & Mattingley, J. B. (2013). Dissociable representations of environmental size and complexity in the human hippocampus. Journal of Neuroscience, 33(25), 10526–10533. https://doi.org/10.1523/JNEUROSCI.0350-13.2013
Becker, W., Nasios, G., Raab, S., & Jürgens, R. (2002). Fusion of vestibular and podokinesthetic information during self-turning towards instructed targets. Experimental Brain Research, 144(4), 458–474. https://doi.org/10.1007/s00221-002-1053-5
Bigelow, R. T., & Agrawal, Y. (2015). Vestibular involvement in cognition: Visuospatial ability, attention, executive function, and memory. Journal of Vestibular Research, 25(2), 73–89. https://doi.org/10.3233/VES-150544
Biju, K., Wei, E. X., Rebello, E., Matthews, J., He, Q., McNamara, T. P., & Agrawal, Y. (2021). Performance in real world- and virtual reality-based spatial navigation tasks in patients with vestibular dysfunction. Otology & Neurotology, 42(10), e1524–e1531. https://doi.org/10.1097/MAO.0000000000003289
Blanke, O., Slater, M., & Serino, A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron, 88(1), 145–166. https://doi.org/10.1016/j.neuron.2015.09.029
Bonavita, A., Teghil, A., Pesola, M. C., Guariglia, C., D’Antonio, F., Di Vita, A., & Boccia, M. (2022). Overcoming navigational challenges: A novel approach to the study and assessment of topographical orientation. Behavior Research Methods, 54(2), 752–762. https://doi.org/10.3758/s13428-021-01666-7
Bossard, M., & Mestre, D. R. (2018). The relative contributions of various viewpoint oscillation frequencies to the perception of distance traveled. Journal of Vision, 18(2), Article 3. https://doi.org/10.1167/18.2.3
Bostelmann, M., Lavenex, P., & Banta Lavenex, P. (2020). Children five-to-nine years old can use path integration to build a cognitive map without vision. Cognitive Psychology, 121, Article 101307. https://doi.org/10.1016/j.cogpsych.2020.101307
Botvinick, M., & Cohen, J. (1998). Rubber hands ‘feel’ touch that eyes see. Nature, 391(6669), 756. https://doi.org/10.1038/35784
Brandt, T., Schautzer, F., Hamilton, D. A., Brüning, R., Markowitsch, H. J., Kalla, R., Darlington, C., Smith, P., & Strupp, M. (2005). Vestibular loss causes hippocampal atrophy and impaired spatial memory in humans. Brain, 128(11), 2732–2741. https://doi.org/10.1093/brain/awh617
Bronstein, A. M., Patel, M., & Arshad, Q. (2015). A brief review of the clinical anatomy of the vestibular-ocular connections—How much do we know? Eye, 29(2), 163–170. https://doi.org/10.1038/eye.2014.262
Bryant, D. J. (1997). Representing space in language and perception. Mind and Language, 12(3/4), 239–264. https://doi.org/10.1111/j.1468-0017.1997.tb00073.x
Byrne, P., Becker, S., & Burgess, N. (2007). Remembering the past and imagining the future: A neural model of spatial memory and imagery. Psychological Review, 114(2), 340–375. https://doi.org/10.1037/0033-295X.114.2.340
Cabolis, K., Steinberg, A., & Ferrè, E. R. (2018). Somatosensory modulation of perceptual vestibular detection. Experimental Brain Research, 236(3), 859–865. https://doi.org/10.1007/s00221-018-5167-9
Campos, J., Butler, J., & Bülthoff, H. (2012). Multisensory integration in the estimation of walked distances. Experimental Brain Research, 218(4), 551–565. https://doi.org/10.1007/s00221-012-3048-1
Chance, S. S., Gaunet, F., Beall, A. C., & Loomis, J. M. (1998). Locomotion mode affects the updating of objects encountered during travel: The contribution of vestibular and proprioceptive inputs to path integration. Presence: Teleoperators and Virtual Environments, 7(2), 168–178. https://doi.org/10.1162/105474698565659
Chen, X., McNamara, T. P., Kelly, J. W., & Wolbers, T. (2017). Cue combination in human spatial navigation. Cognitive Psychology, 95, 105–144. https://doi.org/10.1016/j.cogpsych.2017.04.003
Cheung, A., Zhang, S., Stricker, C., & Srinivasan, M. V. (2007). Animal navigation: The difficulty of moving in a straight line. Biological Cybernetics, 97(1), 47–61. https://doi.org/10.1007/s00422-007-0158-0
Cheung, A., Zhang, S., Stricker, C., & Srinivasan, M. V. (2008). Animal navigation: General properties of directed walks. Biological Cybernetics, 99(3), 197–217. https://doi.org/10.1007/s00422-008-0251-z
Chia Bejarano, N., Pedrocchi, A., Nardone, A., Schieppati, M., Baccinelli, W., Monticone, M., Ferrigno, G., & Ferrante, S. (2017). Tuning of muscle synergies during walking along rectilinear and curvilinear trajectories in humans. Annals of Biomedical Engineering, 45(5), 1204–1218. https://doi.org/10.1007/s10439-017-1802-z
Chrastil, E. R. (2013). Neural evidence supports a novel framework for spatial navigation. Psychonomic Bulletin & Review, 20(2), 208–227. https://doi.org/10.3758/s13423-012-0351-6
Chrastil, E. R. (2018). Heterogeneity in human retrosplenial cortex: A review of function and connectivity. Behavioral Neuroscience, 132(5), 317–338. https://doi.org/10.1037/bne0000261
Chrastil, E. R., & Warren, W. (2012). Active and passive contributions to spatial learning. Psychonomic Bulletin & Review, 19(1), 1–23. https://doi.org/10.3758/s13423-011-0182-x
Chrastil, E. R., & Warren, W. H. (2013). Active and passive spatial learning in human navigation: Acquisition of survey knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(5), 1520–1537. https://doi.org/10.1037/a0032382
Chrastil, E. R., & Warren, W. H. (2014). From cognitive maps to cognitive graphs. PLOS ONE, 9(11), Article e112544. https://doi.org/10.1371/journal.pone.0112544
Chrastil, E. R., & Warren, W. H. (2015). Active and passive spatial learning in human navigation: Acquisition of graph knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(4), 1162–1178. https://doi.org/10.1037/xlm0000082
Collett, T. S., & Collett, M. (2000). Path integration in insects. Current Opinion in Neurobiology, 10(6), 757–762. https://doi.org/10.1016/S0959-4388(00)00150-1
Cornell, E. H., & Greidanus, E. (2006). Path integration during a neighborhood walk. Spatial Cognition & Computation, 6(3), 203–234. https://doi.org/10.1207/s15427633scc0603_2
Courtine, G., & Schieppati, M. (2003). Human walking along a curved path. I. Body trajectory, segment orientation and the effect of vision. European Journal of Neuroscience, 18(1), 177–190. https://doi.org/10.1046/j.1460-9568.2003.02736.x
Courtine, G., Papaxanthis, C., & Schieppati, M. (2006). Coordinated modulation of locomotor muscle synergies constructs straight-ahead and curvilinear walking in humans. Experimental Brain Research, 170(3), 320–335. https://doi.org/10.1007/s00221-005-0215-7
Crowell, J. A., Banks, M. S., Shenoy, K. V., & Andersen, R. A. (1998). Visual self-motion perception during head turns. Nature Neuroscience, 1(8), 732–737. https://doi.org/10.1038/3732
Cullen, K. E. (2012). The vestibular system: Multimodal integration and encoding of self-motion for motor control. Trends in Neurosciences, 35(3), 185–196. https://doi.org/10.1016/j.tins.2011.12.001
Dieguez, S., & Lopez, C. (2017). The bodily self: Insights from clinical and experimental research. Annals of Physical and Rehabilitation Medicine, 60(3), 198–207. https://doi.org/10.1016/j.rehab.2016.04.007
Donaldson, I. M. L. (2000). The functions of the proprioceptors of the eye muscles. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 355(1404), 1685–1754. https://doi.org/10.1098/rstb.2000.0732
Downs, R. M., & Stea, D. (1973). Cognitive maps and spatial behavior: Process and products. In R. M. Downs & D. Stea (Eds.), Image and environment: Cognitive mapping and spatial behavior (pp. 8–26). Aldine Publishing.
Dudchenko, P. (2010). Why people get lost: The psychology and neuroscience of spatial cognition. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199210862.001.0001
Ehrsson, H. H. (2007). The experimental induction of out-of-body experiences. Science, 317(5841), 1048. https://doi.org/10.1126/science.1142175
Eichenbaum, H., & Cohen, N. J. (2014). Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron, 83(4), 764–770. https://doi.org/10.1016/j.neuron.2014.07.032
Eilan, N. (1993). Molyneux’s question and the idea of an external world. In N. Eilan, R. McCarthy, & B. Brewer (Eds.), Spatial representation: Problems in philosophy and psychology (pp. 236–255). Oxford University Press.
Etienne, A. S., & Jeffery, K. J. (2004). Path integration in mammals. Hippocampus, 14(2), 180–192. https://doi.org/10.1002/hipo.10173
Etienne, A. S., Maurer, R., Boulens, V., Levy, A., & Rowe, T. (2004). Resetting the path integrator: A basic condition for route-based navigation. Journal of Experimental Biology, 207(9), 1491–1508. https://doi.org/10.1242/jeb.00906
Ferrè, E. R., Bottini, G., & Haggard, P. (2011). Vestibular modulation of somatosensory perception. European Journal of Neuroscience, 34(8), 1337–1344. https://doi.org/10.1111/j.1460-9568.2011.07859.x
Ferrè, E. R., Longo, M. R., Fiori, F., & Haggard, P. (2013). Vestibular modulation of spatial perception. Frontiers in Human Neuroscience, 7, Article 660. https://doi.org/10.3389/fnhum.2013.00660
Foo, P., Warren, W. H., Duchon, A., & Tarr, M. J. (2005). Do humans integrate routes into a cognitive map? Map- versus landmark-based navigation of novel shortcuts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(2), 195–215. https://doi.org/10.1037/0278-7393.31.2.195
Foo, P., Duchon, A., Warren, W. H., Jr., & Tarr, M. J. (2007). Humans do not switch between path knowledge and landmarks when learning a new environment. Psychological Research, 71(3), 240–251. https://doi.org/10.1007/s00426-006-0080-4
Gallistel, C. R. (1990). The organization of learning. MIT Press.
Gillner, S., & Mallot, H. A. (1998). Navigation and acquisition of spatial knowledge in a virtual maze. Journal of Cognitive Neuroscience, 10(4), 445–463. https://doi.org/10.1162/089892998562861
Giudice, N. A., Betty, M. R., & Loomis, J. M. (2011). Functional equivalence of spatial images from touch and vision: Evidence from spatial updating in blind and sighted individuals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(3), 621–634. https://doi.org/10.1037/a0022331
Glasauer, S., Amorim, M.-A., Vitte, E., & Berthoz, A. (1994). Goal-directed linear locomotion in normal and labyrinthine-defective subjects. Experimental Brain Research, 98(2), 323–335. https://doi.org/10.1007/BF00228420
Göttlich, M., Jandl, N. M., Sprenger, A., Wojak, J. F., Münte, T. F., Krämer, U. M., & Helmchen, C. (2016). Hippocampal gray matter volume in bilateral vestibular failure. Human Brain Mapping, 37(5), 1998–2006. https://doi.org/10.1002/hbm.23152
Grant, S. C., & Magee, L. E. (1998). Contributions of proprioception to navigation in virtual environments. Human Factors, 40(3), 489–497. https://doi.org/10.1518/001872098779591296
Guderian, S., Dzieciol, A. M., Gadian, D. G., Jentschke, S., Doeller, C. F., Burgess, N., Mishkin, M., & Vargha-Khadem, F. (2015). Hippocampal volume reduction in humans predicts impaired allocentric spatial memory in virtual-reality navigation. Journal of Neuroscience, 35(42), 14123–14131. https://doi.org/10.1523/JNEUROSCI.0801-15.2015
Guo, J., Huang, J., & Wan, X. (2019). Influence of route decision-making and experience on human path integration. Acta Psychologica, 193, 66–72. https://doi.org/10.1016/j.actpsy.2018.12.005
Harootonian, S. K., Wilson, R. C., Hejtmánek, L., Ziskin, E. M., & Ekstrom, A. D. (2020). Path integration in large-scale space and with novel geometries: Comparing vector addition and encoding-error models. PLOS Computational Biology, 16(5), Article e1007489. https://doi.org/10.1371/journal.pcbi.1007489
Harootonian, S. K., Ekstrom, A. D., & Wilson, R. C. (2022). Combination and competition between path integration and landmark navigation in the estimation of heading direction. PLOS Computational Biology, 18(2), Article e1009222. https://doi.org/10.1371/journal.pcbi.1009222
He, Q., McNamara, T. P., Bodenheimer, B., & Klippel, A. (2019). Acquisition and transfer of spatial knowledge during wayfinding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(8), 1364–1386. https://doi.org/10.1037/xlm0000654
Hecht, H., Ramdohr, M., & von Castell, C. (2018). Underestimation of large distances in active and passive locomotion. Experimental Brain Research, 236(6), 1603–1609. https://doi.org/10.1007/s00221-018-5245-z
Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., & Subbiah, I. (2002). Development of a self-report measure of environmental spatial ability. Intelligence, 30(5), 425–447. https://doi.org/10.1016/S0160-2896(02)00116-2
Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence, 34(2), 151–176. https://doi.org/10.1016/j.intell.2005.09.005
Heinze, S., Narendra, A., & Cheung, A. (2018). Principles of insect path integration. Current Biology, 28(17), R1043–R1058. https://doi.org/10.1016/j.cub.2018.04.058
Hejtmanek, L., Starrett, M., Ferrer, E., & Ekstrom, A. D. (2020). How much of what we learn in virtual reality transfers to real-world navigation? Multisensory Research, 33(4/5), 479–503. https://doi.org/10.1163/22134808-20201445
Hicheur, H., Vieilledent, S., & Berthoz, A. (2005). Head motion in humans alternating between straight and curved walking path: Combination of stabilizing and anticipatory orienting mechanisms. Neuroscience Letters, 383(1), 87–92. https://doi.org/10.1016/j.neulet.2005.03.046
Hirasaki, E., Moore, S. T., Raphan, T., & Cohen, B. (1999). Effects of walking velocity on vertical head and body movements during locomotion. Experimental Brain Research, 127(2), 117–130. https://doi.org/10.1007/s002210050781
Hirtle, S. C., & Jonides, J. (1985). Evidence of hierarchies in cognitive maps. Memory & Cognition, 13(3), 208–217. https://doi.org/10.3758/BF03197683
Hodgson, E., & Waller, D. (2006). Lack of set size effects in spatial updating: Evidence for offline updating. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 854–866. https://doi.org/10.1037/0278-7393.32.4.854
Huffman, D. J., & Ekstrom, A. D. (2019). A modality-independent network underlies the retrieval of large-scale spatial environments in the human brain. Neuron, 104(3), 611–622. https://doi.org/10.1016/j.neuron.2019.08.012
Huffman, D. J., & Ekstrom, A. D. (2021). An important step toward understanding the role of body-based cues on human spatial memory for large-scale environments. Journal of Cognitive Neuroscience, 33(2), 167–179. https://doi.org/10.1162/jocn_a_01653
Hüfner, K., Stephan, T., Hamilton, D. A., Kalla, R., Glasauer, S., Strupp, M., & Brandt, T. (2009). Gray-matter atrophy after chronic complete unilateral vestibular deafferentation. Annals of the New York Academy of Sciences, 1164(1), 383–385. https://doi.org/10.1111/j.1749-6632.2008.03719.x
Imai, T., Moore, S. T., Raphan, T., & Cohen, B. (2001). Interaction of the body, head, and eyes during walking and turning. Experimental Brain Research, 136(1), 1–18. https://doi.org/10.1007/s002210000533
Ishikawa, T., & Zhou, Y. (2020). Improving cognitive mapping by training for people with a poor sense of direction. Cognitive Research: Principles and Implications, 5(1), Article 39. https://doi.org/10.1186/s41235-020-00238-1
Israël, I., Grasso, R., Georges-François, P., Tsuzuku, T., & Berthoz, A. (1997). Spatial memory and path integration studied by self-driven passive linear displacement. I. Basic properties. Journal of Neurophysiology, 77(6), 3180–3192. https://doi.org/10.1152/jn.1997.77.6.3180
Ittelson, W. H. (1973). Environment perception and contemporary perceptual theory. In W. H. Ittelson (Ed.), Environment and cognition (pp. 1–19). Seminar Press.
Jabbari, Y., Kenney, D. M., von Mohrenschildt, M., & Shedden, J. M. (2021). Vestibular cues improve landmark-based route navigation: A simulated driving study. Memory & Cognition, 49(8), 1633–1644. https://doi.org/10.3758/s13421-021-01181-2
Jayakumar, R. P., Madhav, M. S., Savelli, F., Blair, H. T., Cowan, N. J., & Knierim, J. J. (2019). Recalibration of path integration in hippocampal place cells. Nature, 566(7745), 533–537. https://doi.org/10.1038/s41586-019-0939-3
Kalia, A. A., Schrater, P. R., & Legge, G. E. (2013). Combining path integration and remembered landmarks when navigating without vision. PLOS ONE, 8(9), Article e72170. https://doi.org/10.1371/journal.pone.0072170
Kamil, R. J., Jacob, A., Ratnanather, J. T., Resnick, S. M., & Agrawal, Y. (2018). Vestibular function and hippocampal volume in the Baltimore longitudinal study of aging (BLSA). Otology & Neurotology, 39(6), 765–771. https://doi.org/10.1097/MAO.0000000000001838
Klatzky, R. L., Loomis, J. M., Golledge, R. G., Cicinelli, J. G., Doherty, S., & Pellegrino, J. W. (1990). Acquisition of route and survey knowledge in the absence of vision. Journal of Motor Behavior, 22(1), 19–43. https://doi.org/10.1080/00222895.1990.10735500
Klatzky, R. L., Loomis, J. M., Beall, A. C., Chance, S. S., & Golledge, R. G. (1998). Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychological Science, 9(4), 293–298. https://doi.org/10.1111/1467-9280.00058
Knierim, J. J., & Hamilton, D. A. (2011). Framing spatial cognition: Neural representations of proximal and distal frames of reference and their roles in navigation. Physiological Reviews, 91(4), 1245–1279. https://doi.org/10.1152/physrev.00021.2010
Knierim, J. J., & Rao, G. (2003). Distal landmarks and hippocampal place cells: Effects of relative translation versus rotation. Hippocampus, 13(5), 604–617. https://doi.org/10.1002/hipo.10092
Kremmyda, O., Hüfner, K., Flanagin, V. L., Hamilton, D. A., Linn, J., Strupp, M., Jahn, K., & Brandt, T. (2016). Beyond dizziness: Virtual navigation, spatial anxiety and hippocampal volume in bilateral vestibulopathy. Frontiers in Human Neuroscience, 10, Article 139. https://doi.org/10.3389/fnhum.2016.00139
Labate, E., Pazzaglia, F., & Hegarty, M. (2014). What working memory subcomponents are needed in the acquisition of survey knowledge? Evidence from direction estimation and shortcut tasks. Journal of Environmental Psychology, 37, 73–79. https://doi.org/10.1016/j.jenvp.2013.11.007
Lenggenhager, B., Tadi, T., Metzinger, T., & Blanke, O. (2007). Video ergo sum: Manipulating bodily self-consciousness. Science, 317(5841), 1096–1099. https://doi.org/10.1126/science.1143439
Lhuillier, S., Gyselinck, V., Piolino, P., & Nicolas, S. (2021). Walk this way: Specific contributions of active walking to the encoding of metric properties during spatial learning. Psychological Research, 85(7), 2502–2517. https://doi.org/10.1007/s00426-020-01415-z
Li, H., Mavros, P., Krukar, J., & Hölscher, C. (2021). The effect of navigation method and visual display on distance perception in a large-scale virtual building. Cognitive Processing, 22(2), 239–259. https://doi.org/10.1007/s10339-020-01011-4
Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G., Pellegrino, J. W., & Fry, P. A. (1993). Nonvisual navigation by blind and sighted: Assessment of path integration ability. Journal of Experimental Psychology: General, 122(1), 73–91. https://doi.org/10.1037/0096-3445.122.1.73
Loomis, J. M., Klatzky, R. L., Golledge, R. G., & Philbeck, J. W. (1999). Human navigation by path integration. In R. G. Golledge (Ed.), Wayfinding behavior: Cognitive mapping and other spatial processes (pp. 125–151). Johns Hopkins University Press.
Loomis, J. M., Lippa, Y., Klatzky, R. L., & Golledge, R. G. (2002). Spatial updating of locations specified by 3-D sound and spatial language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(2), 335–345. https://doi.org/10.1037/0278-7393.28.2.335
Lu, R., Yu, C., Li, Z., Mou, W., & Li, Z. (2020). Set size effects in spatial updating are independent of the online/offline updating strategy. Journal of Experimental Psychology: Human Perception and Performance, 46(9), 901–911. https://doi.org/10.1037/xhp0000756
Mark, D. M. (1992). Counter-intuitive geographic ‘facts’: Clues for spatial reasoning at geographic scales. In A. U. Frank, I. Campari, & U. Formentini (Eds.), Lecture notes in computer science: Vol. 639. Theories and methods of spatio-temporal reasoning in geographic space (pp. 305–317). Springer-Verlag. https://doi.org/10.1007/3-540-55966-3_18
McNamara, T. P. (1986). Mental representations of spatial relations. Cognitive Psychology, 18(1), 87–121. https://doi.org/10.1016/0010-0285(86)90016-2
McNamara, T. P., & Chen, X. (2022). Bayesian decision theory and navigation. Psychonomic Bulletin & Review, 29(3), 721–752. https://doi.org/10.3758/s13423-021-01988-9
McNamara, T. P., & Diwadkar, V. A. (1997). Symmetry and asymmetry of human spatial memory. Cognitive Psychology, 34(2), 160–190. https://doi.org/10.1006/cogp.1997.0669
McNaughton, B. L., Barnes, C. A., Gerrard, J. L., Gothard, K., Jung, M. W., Knierim, J. J., Kudrimoti, H., Qin, Y., Skaggs, W. E., Suster, M., & Weaver, K. L. (1996). Deciphering the hippocampal polyglot: The hippocampus as a path integration system. Journal of Experimental Biology, 199(1), 173–185. https://doi.org/10.1242/jeb.199.1.173
McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I., & Moser, M.-B. (2006). Path integration and the neural basis of the ‘cognitive map.’ Nature Reviews Neuroscience, 7(8), 663–678. https://doi.org/10.1038/nrn1932
Mittelstaedt, M.-L., & Mittelstaedt, H. (2001). Idiothetic navigation in humans: Estimation of path length. Experimental Brain Research, 139(3), 318–332. https://doi.org/10.1007/s002210100735
Moar, I., & Bower, G. H. (1983). Inconsistency in spatial knowledge. Memory & Cognition, 11(2), 107–113. https://doi.org/10.3758/BF03213464
Montello, D. R. (1993). Scale and multiple psychologies of space. In A. U. Frank & I. Campari (Eds.), Lecture notes in computer science: Vol. 716. Spatial information theory: A theoretical basis for GIS (pp. 312–321). Springer-Verlag. https://doi.org/10.1007/3-540-57207-4_21
Montello, D. R., & Xiao, D. (2011). Linguistic and cultural universality of the concept of sense-of-direction. In M. Egenhofer, N. Giudice, R. Moratz, & M. Worboys (Eds.), Lecture notes in computer science: Vol. 6899. Spatial information theory (pp. 264–282). Springer-Verlag. https://doi.org/10.1007/978-3-642-23196-4_15
Moon, H.-J., Gauthier, B., Park, H.-D., Faivre, N., & Blanke, O. (2022). Sense of self impacts spatial navigation and hexadirectional coding in human entorhinal cortex. Communications Biology, 5, Article 406. https://doi.org/10.1038/s42003-022-03361-5
Moser, E. I., Roudi, Y., Witter, M. P., Kentros, C., Bonhoeffer, T., & Moser, M.-B. (2014). Grid cells and cortical representation. Nature Reviews Neuroscience, 15(7), 466–481. https://doi.org/10.1038/nrn3766
Nardini, M., Jones, P., Bedford, R., & Braddick, O. (2008). Development of cue integration in human navigation. Current Biology, 18(9), 689–693. https://doi.org/10.1016/j.cub.2008.04.021
Nedelska, Z., Andel, R., Laczó, J., Vlcek, K., Horinek, D., Lisy, J., Sheardova, K., Bureš, J., & Hort, J. (2012). Spatial navigation impairment is proportional to right hippocampal volume. Proceedings of the National Academy of Sciences of the United States of America, 109(7), 2590–2594. https://doi.org/10.1073/pnas.1121588109
Newell, F. N., Woods, A. T., Mernagh, M., & Bülthoff, H. H. (2005). Visual, haptic and crossmodal recognition of scenes. Experimental Brain Research, 161(2), 233–242. https://doi.org/10.1007/s00221-004-2067-y
Newman, P. M., & McNamara, T. P. (2021). A comparison of methods of assessing cue combination during navigation. Behavior Research Methods, 53(1), 390–398. https://doi.org/10.3758/s13428-020-01451-y
O’Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Clarendon Press.
Peer, M., Brunec, I. K., Newcombe, N. S., & Epstein, R. A. (2021). Structuring knowledge with cognitive maps and cognitive graphs. Trends in Cognitive Sciences, 25(1), 37–54. https://doi.org/10.1016/j.tics.2020.10.004
Péruch, P., Borel, L., Magnan, J., & Lacour, M. (2005). Direction and distance deficits in path integration after unilateral vestibular loss depend on task complexity. Cognitive Brain Research, 25(3), 862–872. https://doi.org/10.1016/j.cogbrainres.2005.09.012
Pettorossi, V. E., & Schieppati, M. (2014). Neck proprioception shapes body orientation and perception of motion. Frontiers in Human Neuroscience, 8, Article 895. https://doi.org/10.3389/fnhum.2014.00895
Philbeck, J. W., & O’Leary, S. (2005). Remembered landmarks enhance the precision of path integration. Psicológica, 26(1), 7–24.
Philbeck, J. W., Klatzky, R. L., Behrmann, M., Loomis, J. M., & Goodridge, J. (2001). Active control of locomotion facilitates nonvisual navigation. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 141–153. https://doi.org/10.1037/0096-1523.27.1.141
Poucet, B. (1993). Spatial cognitive maps in animals: New hypotheses on their structure and neural mechanisms. Psychological Review, 100(2), 163–182. https://doi.org/10.1037/0033-295X.100.2.163
Qi, Y., Mou, W., & Lei, X. (2021). Cue combination in goal-oriented navigation. Quarterly Journal of Experimental Psychology, 74(11), 1981–2001. https://doi.org/10.1177/17470218211015796
Richardson, A. E., Montello, D. R., & Hegarty, M. (1999). Spatial knowledge acquisition from maps and from navigation in real and virtual environments. Memory & Cognition, 27(4), 741–750. https://doi.org/10.3758/BF03211566
Rieser, J. J., Ashmead, D. H., Talor, C. R., & Youngquist, G. A. (1990). Visual perception and the guidance of locomotion without vision to previously seen targets. Perception, 19(5), 675–689. https://doi.org/10.1068/p190675
Ruddle, R. A. (2013). The effect of translational and rotational body-based information on navigation. In F. Steinicke, Y. Visell, J. Campos, & A. Lécuyer (Eds.), Human walking in virtual environments: Perception, technology, and applications (pp. 99–112). Springer. https://doi.org/10.1007/978-1-4419-8432-6_5
Ruddle, R. A., Volkova, E., & Bülthoff, H. H. (2011). Walking improves your cognitive map in environments that are large-scale and large in extent. ACM Transactions on Computer-Human Interaction, 18(2), 10:1–10:20. https://doi.org/10.1145/1970378.1970384
Sadalla, E. K., Burroughs, W. J., & Staplin, L. J. (1980). Reference points in spatial cognition. Journal of Experimental Psychology: Human Learning and Memory, 6(5), 516–528. https://doi.org/10.1037/0278-7393.6.5.516
Salas, C. R., Minakata, K., & Kelemen, W. L. (2011). Walking before study enhances free recall but not judgement-of-learning magnitude. Journal of Cognitive Psychology, 23(4), 507–513. https://doi.org/10.1080/20445911.2011.532207
Sanchez, L. M., Thompson, S. M., & Clark, B. J. (2016). Influence of proximal, distal, and vestibular frames of reference in object-place paired associate learning in the rat. PLOS ONE, 11(9), Article e0163102. https://doi.org/10.1371/journal.pone.0163102
Schautzer, F., Hamilton, D., Kalla, R., Strupp, M., & Brandt, T. (2003). Spatial memory deficits in patients with chronic bilateral vestibular failure. Annals of the New York Academy of Sciences, 1004(1), 316–324. https://doi.org/10.1196/annals.1303.029
Schinazi, V. R., Nardi, D., Newcombe, N. S., Shipley, T. F., & Epstein, R. A. (2013). Hippocampal size predicts rapid learning of a cognitive map in humans. Hippocampus, 23(6), 515–528. https://doi.org/10.1002/hipo.22111
Shelton, A. L., & Yamamoto, N. (2009). Visual memory, spatial representation, and navigation. In J. R. Brockmole (Ed.), The visual world in memory (pp. 140–177). Psychology Press. https://doi.org/10.4324/9780203889770-11
Sherrill, K. R., Erdem, U. M., Ross, R. S., Brown, T. I., Hasselmo, M. E., & Stern, C. E. (2013). Hippocampus and retrosplenial cortex combine path integration signals for successful navigation. Journal of Neuroscience, 33(49), 19304–19313. https://doi.org/10.1523/JNEUROSCI.1825-13.2013
Sholl, M. J. (1989). The relation between horizontality and rod-and-frame and vestibular navigational performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(1), 110–125. https://doi.org/10.1037/0278-7393.15.1.110
Sholl, M. J., Kenny, R. J., & DellaPorta, K. A. (2006). Allocentric-heading recall and its relation to self-reported sense-of-direction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(3), 516–533. https://doi.org/10.1037/0278-7393.32.3.516
Siegel, A. W., & White, S. H. (1975). The development of spatial representations of large-scale environments. In H. W. Reese (Ed.), Advances in child development and behavior (Vol. 10, pp. 9–55). Academic Press. https://doi.org/10.1016/S0065-2407(08)60007-5
Sjolund, L. A., Kelly, J. W., & McNamara, T. P. (2018). Optimal combination of environmental cues and path integration during navigation. Memory & Cognition, 46(1), 89–99. https://doi.org/10.3758/s13421-017-0747-7
Souman, J. L., Frissen, I., Sreenivasa, M. N., & Ernst, M. O. (2009). Walking straight into circles. Current Biology, 19(18), 1538–1542. https://doi.org/10.1016/j.cub.2009.07.053
Souman, J. L., Giordano, P. R., Schwaiger, M., Frissen, I., Thümmel, T., Ulbrich, H., De Luca, A., Bülthoff, H. H., & Ernst, M. O. (2011). CyberWalk: Enabling unconstrained omnidirectional walking through virtual environments. ACM Transactions on Applied Perception, 8(4), 25:1–25:22. https://doi.org/10.1145/2043603.2043607
Steel, A., Robertson, C. E., & Taube, J. S. (2021). Current promises and limitations of combined virtual reality and functional magnetic resonance imaging research in humans: A commentary on Huffman and Ekstrom (2019). Journal of Cognitive Neuroscience, 33(2), 159–166. https://doi.org/10.1162/jocn_a_01635
Steinicke, F., Visell, Y., Campos, J., & Lécuyer, A. (Eds.). (2013). Human walking in virtual environments: Perception, technology, and applications. Springer. https://doi.org/10.1007/978-1-4419-8432-6
Stevens, A., & Coupe, P. (1978). Distortions in judged spatial relations. Cognitive Psychology, 10(4), 422–437. https://doi.org/10.1016/0010-0285(78)90006-3
Sun, H.-J., Chan, G. S. W., & Campos, J. L. (2004). Active navigation and orientation-free spatial representations. Memory & Cognition, 32(1), 51–71. https://doi.org/10.3758/BF03195820
Tcheang, L., Bülthoff, H. H., & Burgess, N. (2011). Visual influence on path integration in darkness indicates a multimodal representation of large-scale space. Proceedings of the National Academy of Sciences of the United States of America, 108(3), 1152–1157. https://doi.org/10.1073/pnas.1011843108
ter Horst, A. C., Koppen, M., Selen, L. P. J., & Medendorp, W. P. (2015). Reliability-based weighting of visual and vestibular cues in displacement estimation. PLOS ONE, 10(12), Article e0145015. https://doi.org/10.1371/journal.pone.0145015
Thomson, J. A. (1983). Is continuous visual monitoring necessary in visually guided locomotion? Journal of Experimental Psychology: Human Perception and Performance, 9(3), 427–443. https://doi.org/10.1037/0096-1523.9.3.427
Tiwari, K., Kyrki, V., Cheung, A., & Yamamoto, N. (2021). DeFINE: Delayed feedback-based immersive navigation environment for studying goal-directed human navigation. Behavior Research Methods, 53(6), 2668–2688. https://doi.org/10.3758/s13428-021-01586-6
Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208. https://doi.org/10.1037/h0061626
Valiquette, C. M., McNamara, T. P., & Smith, K. (2003). Locomotion, incidental learning, and the selection of spatial reference systems. Memory & Cognition, 31(3), 479–489. https://doi.org/10.3758/BF03194405
Vann, S. D., Aggleton, J. P., & Maguire, E. A. (2009). What does the retrosplenial cortex do? Nature Reviews Neuroscience, 10(11), 792–802. https://doi.org/10.1038/nrn2733
Vitte, E., Derosier, C., Caritu, Y., Berthoz, A., Hasboun, D., & Soulié, D. (1996). Activation of the hippocampal formation by vestibular stimulation: A functional magnetic resonance imaging study. Experimental Brain Research, 112(3), 523–526. https://doi.org/10.1007/BF00227958
Waller, D., & Greenauer, N. (2007). The role of body-based sensory information in the acquisition of enduring spatial representations. Psychological Research, 71(3), 322–332. https://doi.org/10.1007/s00426-006-0087-x
Waller, D., & Hodgson, E. (2013). Sensory contributions to spatial knowledge of real and virtual environments. In F. Steinicke, Y. Visell, J. Campos, & A. Lécuyer (Eds.), Human walking in virtual environments: Perception, technology, and applications (pp. 3–26). Springer. https://doi.org/10.1007/978-1-4419-8432-6_1
Waller, D., Loomis, J. M., & Steck, S. D. (2003). Inertial cues do not enhance knowledge of environmental layout. Psychonomic Bulletin & Review, 10(4), 987–993. https://doi.org/10.3758/BF03196563
Waller, D., Loomis, J. M., & Haun, D. B. M. (2004). Body-based senses enhance knowledge of directions in large-scale environments. Psychonomic Bulletin & Review, 11(1), 157–163. https://doi.org/10.3758/BF03206476
Wan, X., Wang, R. F., & Crowell, J. A. (2010). The effect of active selection in human path integration. Journal of Vision, 10(11), Article 25. https://doi.org/10.1167/10.11.25
Wan, X., Wang, R. F., & Crowell, J. A. (2012). The effect of landmarks in human path integration. Acta Psychologica, 140(1), 7–12. https://doi.org/10.1016/j.actpsy.2011.12.011
Wang, R. F. (2016). Building a cognitive map by assembling multiple path integration systems. Psychonomic Bulletin & Review, 23(3), 692–702. https://doi.org/10.3758/s13423-015-0952-y
Wang, R. F., Crowell, J. A., Simons, D. J., Irwin, D. E., Kramer, A. F., Ambinder, M. S., Thomas, L. E., Gosney, J. L., Levinthal, B. R., & Hsieh, B. B. (2006). Spatial updating relies on an egocentric representation of space: Effects of the number of objects. Psychonomic Bulletin & Review, 13(2), 281–286. https://doi.org/10.3758/BF03193844
Warren, W. H. (2019). Non-Euclidean navigation. Journal of Experimental Biology, 222(Suppl. 1), Article jeb187971. https://doi.org/10.1242/jeb.187971
Warren, W. H., Rothman, D. B., Schnapp, B. H., & Ericson, J. D. (2017). Wormholes in virtual space: From cognitive maps to cognitive graphs. Cognition, 166, 152–163. https://doi.org/10.1016/j.cognition.2017.05.020
Weisberg, S. M., & Newcombe, N. S. (2016). How do (some) people make a cognitive map? Routes, places, and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(5), 768–785. https://doi.org/10.1037/xlm0000200
Widdowson, C., & Wang, R. F. (2022). Human navigation in curved spaces. Cognition, 218, Article 104923. https://doi.org/10.1016/j.cognition.2021.104923
Wolbers, T., Klatzky, R. L., Loomis, J. M., Wutte, M. G., & Giudice, N. A. (2011). Modality-independent coding of spatial layout in the human brain. Current Biology, 21(11), 984–989. https://doi.org/10.1016/j.cub.2011.04.038
Yamamoto, N., & Philbeck, J. W. (2013). Peripheral vision benefits spatial learning by guiding eye movements. Memory & Cognition, 41(1), 109–121. https://doi.org/10.3758/s13421-012-0240-2
Yamamoto, N., & Shelton, A. L. (2005). Visual and proprioceptive representations in spatial memory. Memory & Cognition, 33(1), 140–150. https://doi.org/10.3758/BF03195304
Yamamoto, N., & Shelton, A. L. (2007). Path information effects in visual and proprioceptive spatial learning. Acta Psychologica, 125(3), 346–360. https://doi.org/10.1016/j.actpsy.2006.09.001
Yamamoto, N., & Shelton, A. L. (2009). Orientation dependence of spatial memory acquired from auditory experience. Psychonomic Bulletin & Review, 16(2), 301–305. https://doi.org/10.3758/PBR.16.2.301
Yamamoto, N., Meléndez, J. A., & Menzies, D. T. (2014). Homing by path integration when a locomotion trajectory crosses itself. Perception, 43(10), 1049–1060. https://doi.org/10.1068/p7624
Zetzsche, C., Wolter, J., Galbraith, C., & Schill, K. (2009). Representation of space: Image-like or sensorimotor? Spatial Vision, 22(5), 409–424. https://doi.org/10.1163/156856809789476074
Zhang, L., & Mou, W. (2017). Piloting systems reset path integration systems during position estimation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 472–491. https://doi.org/10.1037/xlm0000324
Zhang, L., & Mou, W. (2019). Selective resetting position and heading estimations while driving in a large-scale immersive virtual environment. Experimental Brain Research, 237(2), 335–350. https://doi.org/10.1007/s00221-018-5417-x
Zhang, L., Mou, W., Lei, X., & Du, Y. (2020). Cue combination used to update the navigator’s self-localization, not the home location. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(12), 2314–2339. https://doi.org/10.1037/xlm0000794
Zhao, M., & Warren, W. H. (2015). How you get there from here: Interaction of visual landmarks and path integration in human navigation. Psychological Science, 26(6), 915–924. https://doi.org/10.1177/0956797615574952
Zhao, M., & Warren, W. H. (2018). Non-optimal perceptual decision in human navigation. Behavioral and Brain Sciences, 41, Article e250. https://doi.org/10.1017/S0140525X18001498
Author note
The authors thank John Philbeck, Mary Hegarty, and Paul Cumming for helpful discussions of key ideas expressed in this article.
Ethics declarations
Conflict of interest
The authors have no known conflict of interest to disclose.
About this article
Cite this article
Anastasiou, C., Baumann, O., & Yamamoto, N. (2023). Does path integration contribute to human navigation in large-scale space? Psychonomic Bulletin & Review, 30, 822–842. https://doi.org/10.3758/s13423-022-02216-8