Learning Directions of Objects Specified by Vision, Spatial Audition, or Auditory Spatial Language

  1. Roberta L. Klatzky (1,4)
  2. Yvonne Lippa (2)
  3. Jack M. Loomis (2)
  4. Reginald G. Golledge (3)

  Affiliations: (1) Department of Psychology, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213, USA; (2) Department of Psychology and (3) Department of Geography, University of California, Santa Barbara, California 93106, USA

Abstract

The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.

Footnotes

  • (4) Corresponding author.

  • E-MAIL klatzky@cmu.edu; FAX (412) 268-3464.

  • Article and publication are at http://www.learnmem.org/cgi/doi/10.1101/lm.51702.

    • Received June 14, 2002.
    • Accepted September 4, 2002.