Cognitive Psychology

Volume 47, Issue 4, December 2003, Pages 402-431

Spatial updating of environments described in texts

https://doi.org/10.1016/S0010-0285(03)00098-7

Abstract

People update egocentric spatial relations in an effortless and on-line manner when they move in the environment, but not when they only imagine themselves moving. In contrast to previous studies, the present experiments examined egocentric updating with spatial scenes that were encoded linguistically instead of perceived directly. Experiment 1 demonstrated that, regardless of the mode of rotation (physical or imagined), egocentric updating takes place in a deliberate and backward fashion when the locations of objects are anchored in a mental framework. Experiment 2 involved only imagined rotations and showed that results remained unchanged when spatial labels were removed from the scene descriptions. Experiment 3 provided evidence that physical rotations—but not imagined rotations—lead to on-line updating of egocentric relations, provided that the objects of the scene are represented in a sensorimotor framework. The present results suggest that physical movements and sensorimotor encoding are both prerequisites of effortless egocentric updating.

Introduction

Performing everyday tasks such as driving home from work requires that we form and maintain spatial representations of physical space. However, the fact that both people and objects constantly change their locations in the environment entails that these spatial representations be continuously updated if they are to provide useful information to support spatial activity.

Research evidence suggests that animals are equipped with the capacity to continuously update their spatial representations. First, the seminal work with rats of O'Keefe and colleagues (O'Keefe & Nadel, 1978; O'Keefe & Burgess, 1996) shows that "place" cells in the hippocampus fire to track the position of the animal within an allocentrically represented environment. Second, humans seem to be able to update egocentric relations, that is, self-to-object directions and distances, in an effortless manner when they change either their position or simply their facing orientation in the environment (e.g., Amorim, Glasauer, Corpinot, & Berthoz, 1997; Loomis, Da Silva, Fujita, & Fukusima, 1992; Loomis, Lippa, Klatzky, & Golledge, 2002; Philbeck, Loomis, & Beall, 1997; Rieser, Guth, & Hill, 1986; Rieser, 1989; Simons & Wang, 1998).

Recently, a number of researchers have suggested that spatial updating is carried out by an internal mechanism which constantly computes the egocentric locations of objects as people move in the environment (e.g., Simons & Wang, 1998; Wang & Spelke, 2000). The efficient functioning of the updating mechanism has been shown to rely on information that is present during active locomotion of the organism (e.g., Easton & Sholl, 1995; Simons & Wang, 1998; Wang & Simons, 1999). Active locomotion typically provides two kinds of feedback which can support spatial updating (Mittelstaedt, 1985). First, allothetic information is derived from sensing the environment (e.g., visual and acoustic flow). This input specifies—in both allocentric and egocentric terms—not only the environment that is perceived, but also the current position (location and orientation) of the observer (Carlson, 1997; Gibson, 1998). Therefore, it can be used by a moving observer to keep track of her or his position and orientation while traversing an environment. Indeed, Beer (1993) has shown that subjects can make use of changes in optic flow to update a scene during visually simulated self-motion. Furthermore, Riecke, van Veen, and Bülthoff (2002), using virtual reality, provided evidence that people can successfully perform spatial tasks involving translations, rotations, and triangle completion solely on the basis of visual information. However, other studies have shown that allothetic information alone does not provide adequate support for the efficient operation of the updating mechanism (e.g., Chance, Gaunet, Beall, & Loomis, 1998; Klatzky, Loomis, Beall, Chance, & Golledge, 1998). The fact that people can locomote very efficiently to previously seen locations even without vision (e.g., Farrell & Thomson, 1999; Loomis, Klatzky, Philbeck, & Golledge, 1998; Philbeck et al., 1997) suggests that visual information is not essential for successful updating. Instead, idiothetic information (e.g., vestibular and proprioceptive signals), which is internally generated during movement, is essential for efficient spatial updating to take place (e.g., Chance et al., 1998; Klatzky et al., 1998). Indeed, in studies in which natural locomotion was disrupted, spatial performance was hindered, suggesting interference with the updating mechanism (Simons & Wang, 1998; Wang & Spelke, 2000).
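To make the computational claim concrete, the following minimal sketch (the function and the 2-D coordinate convention are illustrative assumptions of this sketch, not a model from the cited studies) shows what a mechanism that recomputes egocentric locations from idiothetic signals alone might amount to: every self-to-object vector is transformed whenever the observer turns or steps, with no visual (allothetic) input required.

```python
import math

def update_egocentric(objects, turn_rad=0.0, step=0.0):
    """Return updated egocentric (x, y) vectors after the observer turns
    by `turn_rad` radians (counterclockwise) and steps `step` units forward.
    Convention: x = rightward, y = forward, origin at the observer."""
    c, s = math.cos(-turn_rad), math.sin(-turn_rad)
    updated = {}
    for name, (x, y) in objects.items():
        # The observer's rotation rotates the whole scene the opposite way.
        xr, yr = c * x - s * y, s * x + c * y
        # Stepping forward shifts every object backward in egocentric terms.
        updated[name] = (xr, yr - step)
    return updated

scene = {"lamp": (0.0, 2.0), "door": (1.0, 0.0)}
# After a 90-degree leftward turn, the lamp (formerly straight ahead)
# ends up on the observer's right, at approximately (2, 0).
print(update_egocentric(scene, turn_rad=math.pi / 2))
```

On this view, on-line updating corresponds to running such a transformation continuously as the movement unfolds, rather than once the movement is over.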

In contrast to physical movements, imagined movements lack both allothetic and idiothetic feedback. However, imagery has been shown to evoke mental operations that are similar to the perceptual-motor operations that apply to real situations (Jeannerod, 1995; Kosslyn, 1994). In fact, it has been documented that the same areas of the brain are active during real and imagined movements (e.g., Deschaumes-Molinaro, Dittmer, & Vernet-Maury, 1992), and also that mental scanning can be used in imagery as a mental analogue to visual perception (Kosslyn, Ball, & Reiser, 1978). Despite the parallel nature of imagery and perception, experimental evidence suggests that spatial updating during imagined movements does not take place as effortlessly as it does with real movements. For example, Rieser et al. (1986) and Loomis et al. (1993) showed that real but sightless movements lead to more efficient updating of spatial orientation than imagined movements. Also, Klatzky et al. (1998) showed that when subjects perform a homing task during an imagined traversal of a path, they fail to update their perceived heading. Furthermore, while Piaget and Inhelder (1956) documented the difficulty children have with imagining how a layout would look from a different perspective, Huttenlocher and Presson (1978) showed that when the children physically adopt the new perspective they no longer make egocentric errors.

The typical way of constructing spatial representations is by interacting directly with the world. For this reason, the majority of previous studies that examined spatial updating have used visually perceived scenes. The typical paradigm used (e.g., Easton & Sholl, 1995; Presson & Montello, 1994; Rieser, 1989) involves presenting a layout of objects at various locations and requires participants to point to objects after they have locomoted to a new position in the array or after they have changed their facing direction. That is, participants respond to statements such as "face object x and point to object y." These experiments usually manipulate the mode of movement by having people either move physically or only imagine the movement to the new position or orientation and point to objects from the novel standpoint. Several interesting results have been reported from these studies. For translation trials (i.e., when the participant moved to a new position but did not change orientation), Rieser (1989) reported that participants were equally good at pointing to objects from an imagined novel standpoint and from their actual standpoint (but see Easton & Sholl, 1995). For rotation trials (i.e., when participants changed orientation but not position), Rieser reported a performance difference between physical and imagined rotations. In the physical rotation condition, participants pointed to objects equally well from all standpoints (novel and original). For imagined rotations, response latencies were greater for pointing to objects from novel standpoints than from their actual facing direction. Furthermore, latencies increased with the angle between the actual facing direction and the novel direction participants had to imagine adopting.
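As a rough illustration of what a rotation trial requires (a hedged sketch; the geometry and function names are my own, not materials from the cited studies), the pointing response amounts to computing the bearing of the target relative to a heading that is either physically adopted or merely imagined, and the latency effect tracks the angular disparity between the two headings.

```python
import math

def bearing(frm, to):
    """Bearing (radians, clockwise from the y-axis) of `to` seen from `frm`."""
    return math.atan2(to[0] - frm[0], to[1] - frm[1])

def rotation_trial(observer, actual_heading, face_obj, point_obj):
    imagined_heading = bearing(observer, face_obj)              # "face object x"
    response = bearing(observer, point_obj) - imagined_heading  # "point to y"
    # Angular disparity between actual and imagined headings: the quantity
    # that imagined-rotation latencies grew with (wrap-around omitted).
    disparity = abs(imagined_heading - actual_heading)
    return response, disparity

# Facing heading 0 while imagining facing an object 90 degrees to the right:
response, disparity = rotation_trial((0.0, 0.0), 0.0,
                                     face_obj=(1.0, 0.0), point_obj=(0.0, 1.0))
print(response, disparity)  # point 90 deg to the imagined left; 90 deg disparity
```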

The results from the rotation trials suggest that while performing physical rotations, participants were able to keep track of and update the changing self-to-object relations. By the time they had completed the rotation to the novel standpoint they had already updated these egocentric relations and were therefore ready to point to the target. However, when they performed imagined rotations such updating did not seem to take place. In fact, a number of other studies (e.g., Presson & Montello, 1994) have suggested that on-line spatial updating—that is, spatial updating that takes place effortlessly and concurrently with the change of spatial relations—depends on the availability of proprioceptive and vestibular information, explaining why additional processing is needed to update spatial relations after imagined movements. Such processing presumably takes place in a backward fashion, that is, after the imagined movements are completed, and therefore leads to the longer latencies that are documented in the literature. It should be pointed out that while on-line updating is holistic (that is, all self-to-object relations are updated at once), backward updating takes place in a piecemeal fashion (i.e., each egocentric relation is updated independently of the other self-to-object relations). When performing backward updating, a person updates the egocentric location of only one object at a time, and only when that location is probed (De Vega, 1995).
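The distinction can be summarized schematically. In the sketch below (a toy contrast; the class names and the queue of deferred movements are illustrative assumptions), on-line updating applies each movement to every represented object as the movement happens, whereas backward updating defers that work until a particular object is probed, which is where the extra probe-time latency would arise.

```python
class OnlineUpdater:
    def __init__(self, egocentric_scene):
        self.scene = dict(egocentric_scene)

    def move(self, transform):
        # Holistic: all self-to-object relations updated during the movement.
        self.scene = {name: transform(loc) for name, loc in self.scene.items()}

    def probe(self, name):
        return self.scene[name]          # no extra work at test: fast

class BackwardUpdater:
    def __init__(self, egocentric_scene):
        self.scene = dict(egocentric_scene)
        self.pending = []                # movements not yet applied

    def move(self, transform):
        self.pending.append(transform)   # nothing recomputed during movement

    def probe(self, name):
        loc = self.scene[name]
        for t in self.pending:           # piecemeal work deferred to probe
            loc = t(loc)                 # time, for this one object only: slow
        return loc

# Example: a 180-degree turn maps egocentric (x, y) to (-x, -y).
flip = lambda p: (-p[0], -p[1])
online, backward = OnlineUpdater({"door": (1, 0)}), BackwardUpdater({"door": (1, 0)})
online.move(flip); backward.move(flip)
assert online.probe("door") == backward.probe("door") == (-1, 0)
```

Both strategies arrive at the same answer; they differ only in when the work is done, which is exactly what the latency comparisons below are designed to detect.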

Direct experience with space by means of perception is probably the primary means for constructing spatial representations, albeit not the only one. Spatial representations can also be formed through various spatial artifacts such as maps and language (Ferguson & Hegarty, 1994; Taylor & Tversky, 1992). In fact, a vast number of studies on mental models (Johnson-Laird, 1983, 1996) or situation models (van Dijk & Kintsch, 1983) have documented that spatial representations derived from language preserve many of the properties of real environments (see Zwaan & Radvansky, 1998, for a review). Also, studies from the mental scanning literature (e.g., Denis & Cocude, 1989) have shown that, at least in terms of their geometrical properties, mental representations constructed from language are equivalent to those created through perception.

Research on many aspects of situation models suggests that they are embodied; that is, readers seem to experience vicariously the state of affairs described by the text (Segal, 1995; Zwaan, 1999). Reviewing the extensive literature on situation models, Zwaan and Radvansky (1998) provide evidence that readers readily imagine themselves within the narrated situation and identify themselves with the protagonist of the story. Indeed, other studies have shown that readers of narratives place themselves within the story by adopting the perspective—in space and time—of the protagonist (e.g., Bryant, Tversky, & Franklin, 1992; Zwaan, 1996) as well as his/her goals and moods (e.g., Trabasso & Suh, 1993). Furthermore, as in everyday life, the spatial position of the protagonist affects how accessible the objects in the scene are. For example, objects that are described to be near the story protagonist are accessed more easily than distant objects (Glenberg, Meyer, & Lindem, 1987; Morrow, Greenspan, & Bower, 1987).

Understanding how people make use of spatial representations derived from language is important from both a theoretical perspective (e.g., assessing the degree to which spatial representations from language are similar to those that result from perception) and a practical one. It is very common in our everyday lives to reason about space or perform actions in space on the basis of linguistic information; one such example is giving or following driving and walking directions, whether spoken or written. Nevertheless, it remains an open question whether the spatial representations that are derived from language are updated in a similar fashion to those derived from perception. In other words, it is unclear whether physical movements lead to on-line and effortless spatial updating, and imagined movements to backward and effortful updating, when the scenes are encoded by means of language. On one hand, the two types of representations have been previously shown to be very similar (e.g., Denis & Cocude, 1989). In fact, Bryant (1997; see also Loomis, Lippa, Klatzky, & Golledge, 2002) has proposed that a common spatial representation system creates amodal geometric representations of space from both perceptual and linguistic input (but see Barsalou, 1999, for criticism of amodal representations). This implies that the patterns of results found with perceptual scenes should also be expected with scenes that are encoded through language. On the other hand, the reference frames involved in encoding scenes from language are sometimes different from those used when perceiving scenes directly. With perceptual scenes the locations of objects are typically encoded relative to the observer's actual egocentric reference frame, hereinafter referred to as the ecological reference frame; that is, a coordinate system that consists of three orthogonal axes and is centered on the observer's body (but see Mou & McNamara, 2002, for evidence of reference frames intrinsic to the scenes). In these cases, the representation is believed to be grounded in a sensorimotor framework. This is also the case when language is used to describe an immediate environment (as in Loomis et al., 2002). However, when language is used to describe a non-immediate environment (e.g., an opera house when the observer is elsewhere), spatial relations are encoded relative to an imagined egocentric reference frame; that is, a coordinate system centered on an imagined representation of the reader (i.e., a reference frame centered on the story protagonist), which is not necessarily aligned with the reader's ecological reference frame. In other words, the reader needs to construct a reference frame that is distinct from his/her own ecological frame and project it onto the protagonist of the story. Because, as shown in the situation model literature, readers typically identify themselves with the protagonists, this new reference frame represents an imagined egocentric reference frame. In these cases, the reader is assumed to operate in a purely mental framework, from which his/her ecological self is detached. Because spatial updating has been studied primarily with perceptual scenes, it remains unclear whether the typical pattern of spatial updating results can be generalized to scenes that are formed without sensorimotor encoding.
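The distinction between the ecological and the imagined egocentric reference frame can be stated concretely. The sketch below (the Frame class and the 2-D simplification are assumptions of this illustration, not a construct from the article) treats each frame as an origin plus a heading: the ecological frame is anchored to the reader's actual body, whereas the imagined frame is anchored to the story protagonist and need not be aligned with the reader at all.

```python
import math

class Frame:
    def __init__(self, origin, heading):
        self.origin, self.heading = origin, heading   # heading in radians

    def to_egocentric(self, world_pt):
        """Express a world point as (rightward, forward) in this frame."""
        dx = world_pt[0] - self.origin[0]
        dy = world_pt[1] - self.origin[1]
        c, s = math.cos(-self.heading), math.sin(-self.heading)
        return (c * dx - s * dy, s * dx + c * dy)

# The ecological frame is anchored to the reader's actual body ...
ecological = Frame(origin=(0.0, 0.0), heading=0.0)
# ... whereas the imagined frame is anchored to the protagonist, who may
# stand elsewhere and face a different direction entirely.
imagined = Frame(origin=(10.0, 5.0), heading=math.pi)

bandstand = (10.0, 7.0)
print(ecological.to_egocentric(bandstand))  # far away, relative to the reader
print(imagined.to_egocentric(bandstand))    # ~2 units behind the protagonist
```

The same object thus receives two different egocentric descriptions, and updating it after a described rotation requires operating in the imagined frame while the reader's ecological frame stays put.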

The few studies that used situation models to study spatial relations have focused on how readers keep track of the story protagonist's allocentric location in the imagined space and did not examine whether egocentric relations are updated as well. Two paradigms have traditionally been used in these studies. The first is a reading-time paradigm in which participants read texts that sometimes contain a reference to information that is not consistent with the protagonist's most recent location. For example, a sentence could describe the protagonist standing inside the house when a previous sentence had already described the protagonist exiting the house. O'Brien and Albrecht (1992) as well as De Vega (1995, Experiment 1) have shown that reading times are longer for the location-inconsistent sentences, which suggests that readers are sensitive to information relating to the protagonist's location. The second paradigm involves a recognition task in which participants are probed with names of objects and are asked to report whether the objects were present in the stories they had read earlier. Probes could be consistent with either the current or a former location of the protagonist. De Vega reported no difference in recognition latency for the two types of probes and concluded that readers did not update the location of the protagonist on-line. However, Levine and Klin (2001) repeated the task after modifying the stories to make location information more salient and reported longer recognition times for probes that were consistent with the former location of the protagonist. They concluded that the accessibility of objects is affected by the current location of the protagonist, which suggests that readers were able to update the protagonist's location as the action in the story developed.

In short, the evidence suggests that under some circumstances readers are able to track and update the location of the protagonist in the environment described by a text. This is compatible with theories that propose a special status for spatial information in situation model construction. Garrod and Sanford (1990) have proposed that spatial information is held in an implicit focus, while Ericsson and Kintsch (1995) suggested that it is held in long-term working memory (see Zwaan & Radvansky, 1998, for a discussion). Both accounts imply that spatial information is maintained in a state of high availability and is updated immediately when changes in spatial relations are described. Nevertheless, no studies have directly examined the nature of the spatial information that is maintained. The studies reviewed above suggest that allocentric locations are retained and updated; there is hardly any evidence on whether egocentric relations are also held and updated on-line. To date, only a study by De Vega and Rodrigo (2001) has examined egocentric updating of scenes derived from texts.

In a series of experiments, De Vega and Rodrigo (2001) examined egocentric spatial updating after physical and imagined movements. In their studies, participants sat on a rotating chair and read stories on a laptop computer. When the story described a change in the protagonist's perspective, half of the participants were instructed to rotate physically so as to align their facing direction with that of the protagonist. The other half were simply instructed to imagine themselves rotating. In both cases, participants made spatial judgments from the perspective of the protagonist. In one experiment, participants responded by pointing to target objects, and in another they used spatial labels to indicate the egocentric locations of the target objects. Participants were faster at pointing when they performed physical rather than imagined rotations, but they were equally fast in the two modes of rotation when they used spatial labels to respond. De Vega and Rodrigo concluded that participants were able to update their spatial representations on-line when performing physical rotations in the pointing experiment but not in the labeling experiment.

Certain limitations of that study create concerns about the validity of these conclusions. First, the criterion used to assess whether effortless spatial updating took place does not seem appropriate. For example, the absence of a difference between the two rotation modes in the labeling experiment does not necessarily mean that participants failed to update their representation on-line; it could instead be the case that participants were able to do so in both modes of rotation. The fact that there were only four target objects in the stories and that cues (green circles placed at canonical orientations) were placed in the room to indicate the possible locations of objects might have made it easy for people to update egocentric locations even with imagined rotations. Second, the text in the stories could have made it hard for people to visualize themselves in the scenes. A couple of sentences were first used to set the environment (e.g., "You are in the middle of the Main Square") and then one sentence introduced the names of all the target objects. Next, participants named a direction (e.g., left, front) and the experimenter provided a sentence describing the object occupying that direction. These sentences did not describe the objects from the perspective of the protagonist. This could have discouraged people from adopting an embedded perspective (i.e., using an imagined egocentric reference frame centered on the protagonist) and instead encouraged them to form an allocentric representation of the scene. In addition, the sentences were not connected in a natural manner. For example, the sentence "You can hear a foxtrot sound coming from the BANDSTAND" could follow the sentence "The BANDSTAND columns have baroque ornaments." The repeated use of an object's name across sentences has previously been shown to impede comprehension (the repeated-name penalty; Gordon, Grosz, & Gilliom, 1993).

In light of these concerns, the present study examines in more detail whether and how spatial representations from texts are updated after physical and imagined rotations. The set of experiments presented here attempts to determine the circumstances under which spatial updating can take place with scenes learned from texts.

The experiments reported here use the Spatial Frameworks paradigm, developed by Franklin and Tversky (1990), and examine whether readers of narratives update their spatial representations in an on-line or a backward manner. In this paradigm, participants read narratives that describe them as protagonists in settings where objects are located at the extensions of their body axes (head-feet, front-back, and left-right). The reader/protagonist is occasionally described as reorienting in the scene. Probed with a direction, participants are then asked to indicate the name of the object that occupies that direction (or vice versa). When the protagonist is upright in the scene, results show that people are fastest reporting objects located on the head/feet axis, intermediately fast with objects on the front/back axis, and slowest reporting objects on the left/right axis (see also Clark, 1973). There is also an asymmetry within the front/back axis, such that objects in front of the central character are associated with faster responses than objects at the back (Bryant et al., 1992; Franklin & Tversky, 1990).
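For concreteness, the following toy sketch (the sample objects and the turn helper are illustrative inventions, not Franklin and Tversky's materials) captures the structure of a Spatial Frameworks scene and how a reorientation of the protagonist remaps the horizontal directions while leaving the head/feet axis untouched.

```python
# One object at each extension of the protagonist's three body axes.
scene = {
    "head": "lamp", "feet": "rug",
    "front": "door", "back": "window",
    "left": "clock", "right": "mirror",
}

def turn(scene, quarter_turns_right):
    """Remap the horizontal directions after the protagonist turns in place;
    rotations about the vertical axis do not affect the head/feet axis."""
    ring = ["front", "right", "back", "left"]   # clockwise from the front
    k = quarter_turns_right % 4
    turned = dict(scene)
    for i, direction in enumerate(ring):
        turned[direction] = scene[ring[(i + k) % 4]]
    return turned

# After a 90-degree right turn, the mirror (formerly on the right) is in
# front and the window (formerly behind) is on the right.
reoriented = turn(scene, 1)
print(reoriented["front"], reoriented["right"])  # -> mirror window
```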

The primary interest of the present experiments is not the latencies associated with each body direction but rather whether the imagined egocentric relations are updated at the time the reader/protagonist is described as reorienting in the scene. In order to assess this, latencies for locating objects from the original perspective (i.e., the perspective of the protagonist when the scene was first described) are contrasted with those from novel perspectives (i.e., the new perspectives later adopted by the protagonist). If participants update egocentric relations on-line at the time the protagonist is reoriented, then no difference in latencies between original and novel perspectives should be expected. If, however, participants update these relations in a backward manner, then performing the task from novel perspectives should take longer because of the additional cognitive processing that needs to be done when a spatial relation is probed. The criterion used to distinguish between on-line and backward updating is in fact the one used by most studies with perceptual scenes (e.g., Rieser et al., 1986), but it differs from the one used by De Vega and Rodrigo (2001), who simply compared overall latencies for physical and imagined rotations.
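Schematically, the criterion reduces to a comparison of mean latencies. In the sketch below the numbers are hypothetical placeholders, not data from this or any cited experiment: backward updating predicts a positive novel-minus-original difference, whereas on-line updating predicts a difference near zero.

```python
from statistics import mean

latencies_ms = {
    "original": [850, 910, 880],    # probes from the initially described perspective
    "novel":    [1240, 1310, 1190], # probes from perspectives adopted later
}

advantage = mean(latencies_ms["novel"]) - mean(latencies_ms["original"])
# A reliably positive difference is the signature of backward (probe-time)
# updating; a difference near zero is the signature of on-line updating.
print(f"novel-minus-original latency: {advantage:.0f} ms")
```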

Section snippets

Experiment 1

Studies that used visually perceived environments (e.g., Rieser, 1989) suggest that people update spatial relations more efficiently when they physically rotate than when they imagine themselves rotating. Perhaps this is because the internal updating mechanism relies on the presence of allothetic and idiothetic information, which is available only during physical movement.

The goal of Experiment 1 is to determine whether the results obtained from studies with perceptual scenes generalize to scenes that are encoded through language.

Experiment 2

In Experiment 1 participants located objects faster when the perspective of the protagonist at test was the one described in the initial portion of the narrative. This was the case regardless of whether the reader adopted the perspective of the protagonist by physical or imagined rotations. A possible explanation for this result is that when reading the narrative, participants formed a mental representation of the scene but did not update it every time the subsequent text described a change of perspective.

Experiment 3

Experiment 1 showed that locating objects is done more efficiently when the original perspective is reinstated at test, regardless of whether physical or imagined rotations are involved. Furthermore, it showed that the participants' actual perspective affects the ease with which spatial judgments are made. Experiment 2 ruled out the hypothesis that the advantage with the original perspective is due to the maintenance of surface information from the text.

In contrast with results from studies with

General discussion

A number of previous studies have established the difficulty of reasoning about space from imagined perspectives. Studies using the pointing paradigm (e.g., Presson & Montello, 1994) and the scene-recognition paradigm (e.g., Simons & Wang, 1998) have shown that although people have no problems locating objects after physical movements to a new observation point, such problems arise when the movements are only imagined. Rieser (1989; see also Klatzky et al., 1998) argued that the proprioceptive

Acknowledgements

I am grateful to Richard Carlson for many helpful discussions on this and other related work. I also thank Frank Ritter, Judy Kroll, Lael Schooler, and Jack Loomis for useful comments and suggestions, Barbara Tversky for generously providing narrative material and Jessica Glick, Allison De Grano, and Tom Jolly for their assistance with conducting the experiments. Rolf Zwaan and two anonymous reviewers have provided valuable comments on a previous draft of this article. The reported work has

References (75)

  • Amorim, M. A., et al. (1997). Updating an object's orientation and location during nonvisual navigation: A comparison between two processing models. Perception & Psychophysics.
  • Avraamides, M. N., et al. (2003). Egocentric organization of spatial activities in imagined navigation. Memory & Cognition.
  • Avraamides, M. N. Do people update spatial relations described in texts?
  • Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences.
  • Beer, J. M. A. (1993). Perceiving scene layout through an aperture during visually simulated self-motion. Journal of Experimental Psychology: Human Perception and Performance.
  • Bryant, D. J., et al. (1995). Spatial concepts and perception of physical and diagrammed scenes. Perceptual and Motor Skills.
  • Bryant, D. J. (1997). Representing space in language and perception. Mind & Language.
  • Bryant, D. J., et al. (1999). How body asymmetries determine accessibility in Spatial Frameworks. The Quarterly Journal of Experimental Psychology.
  • Carlson, R. A. (1997). Experienced cognition.
  • Chance, S. S., et al. (1998). Locomotion mode affects the updating of objects during travel: The contribution of vestibular and proprioceptive inputs to path integration. Presence.
  • Clark, H. H. (1973). Space, time, semantics, and the child.
  • Corballis, M. C., et al. (1976). The psychology of left and right.
  • Cowan, N. An embedded-processes model of working memory.
  • De Vega, M. (1995). Backward updating of mental models during continuous reading of narratives. Journal of Experimental Psychology: Learning, Memory, & Cognition.
  • De Vega, M., et al. (1996). Pointing and labeling directions in egocentric frameworks. Journal of Memory and Language.
  • De Vega, M., et al. (2001). Updating spatial layouts mediated by pointing and labelling under physical and imaginary rotation. European Journal of Cognitive Psychology.
  • Denis, M., et al. (1989). Scanning visual images generated from verbal descriptions. European Journal of Cognitive Psychology.
  • Ealy, M. G. (1988). Determining the shapes of land surfaces from topographical maps. Ergonomics.
  • Easton, R. D., et al. (1995). Object–array structure, frames of reference, and retrieval of spatial knowledge. Journal of Experimental Psychology: Learning, Memory, & Cognition.
  • Ericsson, K. A., et al. (1995). Long-term working memory. Psychological Review.
  • Farrell, M. J., et al. (1999). On-line updating of spatial information during locomotion without vision. Journal of Motor Behavior.
  • Farrell, M. J., et al. (1998). Mental rotation and the automatic updating of body-centered spatial relationships. Journal of Experimental Psychology: Learning, Memory, & Cognition.
  • Federico, T., & Franklin, N. (1997). Long-term spatial representations from pictorial and textual input. In S. C....
  • Ferguson, E. L., et al. (1994). Properties of cognitive maps constructed from text. Memory & Cognition.
  • Franklin, N., et al. (1990). Searching imagined environments. Journal of Experimental Psychology: General.
  • Franklin, N., et al. (1992). Switching points of view in spatial mental models. Memory & Cognition.
  • Garrod, S. C., et al. (1990). Referential processes in reading: Focusing on roles and individuals.