
Behavioural Brain Research

Volume 184, Issue 1, 22 November 2007, Pages 19-30

Research report
Linguistic and spatial information for action

https://doi.org/10.1016/j.bbr.2007.06.011

Abstract

Motor acts can be triggered according to either semantic or spatial object attributes, which are thought to predominantly involve the ventral and the dorsal stream of the visual system, respectively, but with different time constraints. To date, no study has specifically addressed the issue of a possible interaction between spatial and semantic information when responding to linguistic stimuli. In particular, it is not well established whether a motor response to a linguistic stimulus remains immune to the influence of concurrent spatial information. We therefore tested the influence of congruent and incongruent spatial information on a right–left motor response made towards a linguistic stimulus, as well as the reverse condition. Results showed that responses to a linguistic stimulus took longer than responses to a spatial stimulus. Furthermore, linguistic information did not interfere with response accuracy or reaction time when participants responded to spatial stimuli. In contrast, spatial information strongly interfered with responses to the linguistic stimulus, increasing reaction time and producing misdirected responses, predominantly for responses with short reaction times (300–500 ms) and in the presence of incongruent spatial information. In the latter condition, correct responses also tended to veer away from the distracting spatial stimulus. We conclude that response selection can be influenced by irrelevant visual information. This suggests that information processed within the ventral and the dorsal visual streams competes very early, under different time constraints, to specify the relevant visual signal for action.

Introduction

Motor acts are generally performed towards visual objects that have a particular shape, location and orientation in space. The function of the visual system is, thus, to isolate one or a few target-objects and to provide information about the relevant spatial characteristics that make possible the organisation of action, in relation to body constraints and motor abilities [1]. As a consequence, visual selection for action in the peri-personal space depends on the true limit of action-capability for the upper limbs [2], [5]. Generally, the selection of action-relevant objects involves processing of intrinsic (e.g., colour, shape or texture) and extrinsic (e.g., location or orientation) visual (and possibly auditory) attributes. It is, however, quite common to also use language for the spatial description of target-objects in the peri-personal space. For instance, one may use spatial descriptors such as the object “to the right”, “near …”, “close to …”. Though it seems intuitively accepted that words can be used for action selection, the processing of spatial characteristics and the processing of words are generally considered as two independent activities involving specific and different brain areas, even when words and spatial information are processed within the same visual modality and concern the peri-personal space [14], [17].

In this context, the dominant view of the visual system is that there is a sharp division of labour between ‘vision-for-action’, controlled by the dorsal pathway from primary visual cortex (V1) to the posterior parietal cortex (including the intra-parietal sulcus, IPS, and the superior parietal lobe, SPL), and ‘vision-for-conscious-perception’, controlled by the ventral pathway from V1 to the inferotemporal cortex [29]. One of the main aspects of the visual system for perception is that it deals with explicit holistic descriptions of the visual input, even when such information leads to errors in spatial processing (e.g., visual illusions). Conversely, the visual system for action deals with the absolute metrics of the visual input that are relevant for specific actions (e.g., reaching and grasping), and thus remains unaffected by illusory contexts that nevertheless lead to erroneous perceptual judgements. An interesting aspect of the ventral stream is that it is thought to participate in the processing of visual input associated with word reading and understanding. In a broader sense, word reading presumably entails basic sensory and motor components, as well as more central components, such as the analysis of visual word forms (orthography), the analysis of word sounds (phonology) and the analysis of word meaning (semantics). As a consequence, when focussing on the initial stage of word reading, increased activity was observed within specific brain areas, principally left-lateralised regions in occipital and occipito-temporal cortex, including the superior and middle temporal cortex [33], [11]. These areas are, thus, different from those involved in the processing of spatial properties of visual objects, which mainly include, at least for the perception of location and orientation, the posterior parietal cortex when representing potential targets for action [6], [29], [50].

Beyond the dissociation concerning the nature of the visual characteristics processed by the ventral and the dorsal visual streams, temporal dissociations between the two pathways have also been reported. Nowak and Bullier [30], for instance, reported that latencies to visual stimulation depend largely on the underlying type of neural conduction. According to these authors, the major factor influencing the speed of activation of a cortical area appears to be whether or not it is activated by the heavily myelinated, fast-conducting magnocellular channel, which is almost exclusively the case for the dorsal stream. Accordingly, visual latencies of around 40–80 ms were observed within the parietal areas, whereas latencies of around 100–150 ms were observed within the temporal areas, which are predominantly fed by parvocellular fibres characterised by a slower transfer of information. Moreover, synaptic relays from the visual cortex to the premotor areas are more numerous via the ventral than via the dorsal pathway [46]. As a direct consequence, reaction times in a location–discrimination task were reported to be within the range of 350–450 ms [37], [45], whereas reaction times to lexical and semantic stimuli when reading common words were found to be within the range of 400–650 ms [28], [39].

Although spatial processing for action and semantic knowledge are considered in some respects to be independent of one another [35], recent data suggest that this is not the case in all circumstances [e.g., 25]. A convincing illustration was reported by Creem and Proffitt [4], who found that inappropriate grasping of household tools occurred when object grasping was paired with a semantic dual-task (verbally completing a concurrent paired-associates task). This inappropriate grasping was, however, reduced when grasping was paired with a visuospatial dual-task (imagining a block letter and classifying its corners). Thus, semantic processing can have an interfering effect when grasping a meaningful object. Interestingly, a similar effect was also observed when subjects had to bisect a vertical line in the presence of a word (“top” or “down”) placed at the extremities of the line [22], [49]. However, controversial data exist concerning the interaction between semantic and sensorimotor processing [13], and most of the data available so far suggest that the semantic information carried by distracting words principally influences the kinematics of reaching and grasping movements [14], [15], [16], [17]. Gentilucci et al. [15], for instance, reported that the words “far” and “near” printed on to-be-grasped objects affected movement kinematics in a way comparable to greater or shorter distances between hand location and object. In the same vein, Glover and Dixon [17] reported that maximum grip aperture was enlarged when grasping an object with the word “large” printed on top, as compared to grasping the same object with the word “small”. However, no study has specifically addressed the issue of the interaction between spatial and semantic processing when responding according to a linguistic stimulus. In particular, it is not well established whether (1) motor performance varies when responding according to a linguistic rather than a spatial stimulus, and (2) a motor response to a linguistic stimulus remains immune to the influence of contextual spatial information. In this respect, previous studies on the effect of irrelevant spatial information in manual reaching tasks have shown that the reach path is affected by the presence of a distractor. For instance, Tipper et al. [47] found that the hand veered away from the distractor, even though it did not represent a physical obstacle to the reaching hand. Whether this effect holds when processing linguistic information in the presence of congruent and incongruent spatial information remains an open issue.

In the literature on interference effects, the classic paradigms used to study spatial and semantic interference are based on reaction times (for an overview, see [27]). To give some examples, Simon-like paradigms explore the interference effect of spatial stimulus-response compatibility [42]. In this task, participants are classically instructed to press a right or a left button in response to stimuli appearing either on the left or on the right side of a screen. When the spatial stimulus and response codes correspond, the response is speeded up. Conversely, when stimulus and response do not correspond, their spatial codes are in conflict and reaction times increase. Similarly, in the spatial Stroop task, participants are asked to respond to the physical location of stimuli whose content also designates a spatial location [27], [40]. Here, a word appears on the left or on the right side of a fixation point, while the word itself is either “left” or “right” [23], [32], [44]. Again, when the word position and its meaning are congruent, reaction times are faster than when they are not.
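As a concrete illustration of this congruency logic, the short sketch below codes Simon-task trials as congruent or incongruent and contrasts mean reaction times; the trial fields and RT values are illustrative assumptions of ours, not data from the studies cited above.

```python
# Minimal sketch of congruency coding in a Simon-like task (illustrative data).
from statistics import mean

trials = [
    {"stimulus_side": "left",  "response_side": "left",  "rt_ms": 410},
    {"stimulus_side": "left",  "response_side": "right", "rt_ms": 470},
    {"stimulus_side": "right", "response_side": "right", "rt_ms": 405},
    {"stimulus_side": "right", "response_side": "left",  "rt_ms": 465},
]

def congruency(trial):
    # Congruent when the stimulus location matches the response code.
    return "congruent" if trial["stimulus_side"] == trial["response_side"] else "incongruent"

for label in ("congruent", "incongruent"):
    rts = [t["rt_ms"] for t in trials if congruency(t) == label]
    print(label, round(mean(rts)), "ms")
```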

The common explanation of these Simon-like effects postulates two separate routes for perception and action [9], [20], [24]. In a conditional route, the appropriate response is intentionally selected and activated, whereas in a parallel, unconditional route the response that spatially corresponds to the stimulus location is automatically activated. Thus, right-sided stimuli facilitate right-sided responses, whereas left-sided stimuli facilitate left-sided responses. As a result, when stimulus and response locations are congruent, reaction times are reduced. In contrast, when they are incongruent, the two routes activate conflicting responses, leading to prolonged reaction times.
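To make this dual-route account concrete, the following minimal simulation treats it as a race between two response accumulators; all parameters (drift rates, the decay of automatic activation, threshold, noise) are illustrative assumptions of ours, not values taken from the cited models.

```python
# Minimal dual-route race sketch (parameters are illustrative, not fitted).
import random

def simulate_trial(congruent, threshold=0.8, max_steps=1000):
    correct, wrong = 0.0, 0.0
    for t in range(max_steps):
        # Conditional route: slow, intentional activation of the instructed response.
        correct += 0.004 + random.gauss(0.0, 0.01)
        wrong += random.gauss(0.0, 0.01)
        # Unconditional route: fast automatic activation of the response that
        # spatially corresponds to the stimulus location; it decays over time.
        automatic = 0.008 * max(0.0, 1.0 - t / 150.0)
        if congruent:
            correct += automatic  # both routes drive the same response
        else:
            wrong += automatic    # routes conflict: the wrong hand is primed
        if wrong >= threshold:
            return t, False       # fast error towards the stimulus side
        if correct >= threshold:
            return t, True        # correct response; t plays the role of RT
    return max_steps, True

for label, congruent in (("congruent", True), ("incongruent", False)):
    outcomes = [simulate_trial(congruent) for _ in range(2000)]
    mean_rt = sum(rt for rt, _ in outcomes) / len(outcomes)
    errors = sum(1 for _, ok in outcomes if not ok) / len(outcomes)
    print(f"{label:12s} mean RT = {mean_rt:6.1f} steps, errors = {errors:.1%}")
```

With these parameters, congruent trials finish earlier (the automatic activation helps the correct accumulator), while incongruent trials are slower and occasionally produce fast errors towards the stimulus side, the qualitative pattern the dual-route account is meant to capture.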

In the present study, our goal was to explore to what extent irrelevant spatial information influences goal-directed movements that are triggered by a linguistic stimulus. Instead of Simon-like paradigms, in which the position of the stimulus conflicts with the meaning of the stimulus itself (i.e., the word “left” presented on the right), we employed a spatial interference task [12], [47] in which the semantic information (“left”; “right”) was never in conflict with the response code, since the word stimulus was always located in a central position. More specifically, participants were required to respond according to (1) a word indicating the right or left direction, or (2) a spatial target appearing to the right or to the left of the mid-sagittal body axis. In some trials, the two types of information were simultaneously available, but the instruction encouraged subjects to consider only one of them. Assuming that linguistic and spatial information involve independent processing, no interference effect was expected when the linguistic and the spatial stimuli were presented simultaneously. Conversely, assuming that the processing of linguistic and spatial information interacts in some way, an interference effect was expected, but predominantly when responding according to the linguistic stimulus, owing to the difference in latency characterising the processing of information within the ventral and dorsal pathways of the visual system.
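For clarity, the sketch below enumerates the resulting design cells together with the prediction attached to each under the interaction hypothesis; the condition labels are our own shorthand rather than the authors' terminology.

```python
# Illustrative enumeration of the 2 x 3 design described above.
from itertools import product

relevant_stimulus = ("linguistic (central word)", "spatial (lateral target)")
context = ("alone", "congruent distractor", "incongruent distractor")

for rel, ctx in product(relevant_stimulus, context):
    # Under the interaction hypothesis, interference should arise mainly when
    # a slowly processed word response meets conflicting spatial information.
    predicted = rel.startswith("linguistic") and ctx.startswith("incongruent")
    print(f"respond to {rel:26s} | {ctx:22s} | interference expected: {predicted}")
```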

Section snippets

Participants

Nine volunteer students from the University of Lille 3 participated in the experiment (seven females and two males; mean age: 25.2 years). All participants had normal or corrected-to-normal visual acuity and were naive as to the purpose of the experiment. They all gave their informed consent prior to their inclusion in the experiment, which was approved by the University Charles de Gaulle ethical committee and conducted in accordance with the principles of the 1964 Declaration of Helsinki. They

Reaction time

Considering correct responses, mean reaction time was 577 ms (S.D.: 191 ms) and was shorter when responding to the spatial stimulus (mean: 484 ms, S.D.: 109 ms) than to the linguistic stimulus (mean: 670 ms, S.D.: 210 ms, F(1,8) = 11.86; p < 0.01). Reaction time was influenced by the context of stimulus presentation (F(2,16) = 3.90; p = 0.04), and an interaction between the two factors was revealed (F(2,16) = 10.82; p < 0.01). Analysis of simple effects associated with the interaction showed that reaction time
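An analysis of this kind, a 2 (stimulus type) x 3 (presentation context) repeated-measures ANOVA on mean reaction times, could be reproduced along the following lines; the data frame is a stand-in built from illustrative values (only the two condition means above are taken from the text), and the use of statsmodels is our assumption, not the authors' analysis software.

```python
# Sketch of a 2 x 3 repeated-measures ANOVA on mean RTs (illustrative data).
import random
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rows = []
for subj in range(1, 10):  # nine participants, as in the study
    for stim, base in (("spatial", 484), ("linguistic", 670)):
        for ctx in ("alone", "congruent", "incongruent"):
            # Qualitatively mirror the reported interaction: an interference
            # cost only for linguistic responses with an incongruent distractor.
            cost = 40.0 if (stim == "linguistic" and ctx == "incongruent") else 0.0
            rows.append({"subject": subj, "stimulus": stim, "context": ctx,
                         "rt": base + cost + random.gauss(0.0, 15.0)})

df = pd.DataFrame(rows)
print(AnovaRM(df, depvar="rt", subject="subject",
              within=["stimulus", "context"]).fit())
```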

Discussion and conclusion

The aim of the present experiment was to explore to what extent irrelevant spatial information can influence goal-directed movements that are triggered by a linguistic stimulus. Overall, the linguistic stimulus was processed much more slowly than the spatial stimulus. The temporal difference that we observed (186 ms) is consistent with the differences previously reported between location–discrimination tasks and word-processing tasks. This effect was statistically revealed despite the presentation

Acknowledgements

Supported by Région Nord Pas de Calais and University Charles De Gaulle grants, European Science Foundation, Eurocores CNCC CRP grant and ANR “Neurosciences, Neurologie et Psychiatrie” program from the French Ministry. The authors gratefully acknowledge the assistance of Yvonne Delevoye-Turrell in the preparation of the paper.

References (50)

  • S.H. Creem et al.

    Grasping objects by their handles: a necessary interaction between cognition and action

    J Exp Psychol Hum Percept Perform

    (2001)
  • Y. Coello et al.

    Effect of structuring the workspace on cognitive and sensorimotor distance estimation: no dissociation between perception and action

    Percept Psychophys

    (2006)
  • J.C. Culham et al.

    Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas

    Exp Brain Res

    (2003)
  • H. Damasio et al.

    Lesion analysis in neuropsychology

    (1989)
  • M.P. Deiber et al.

    Cerebral structures participating in motor preparation in humans: a positron emission tomography study

    J Neurophysiol

    (1996)
  • R. De Jong et al.

    Conditional and unconditional automaticity: a dual-process model of effects of spatial stimulus-response correspondence

    J Exp Psychol Hum Percept Perform

    (1994)
  • J.A. Fiez et al.

    Neuroimaging studies of word reading

    Proc Natl Acad Sci

    (1998)
  • M.H. Fischer et al.

    Distractor effects on pointing: the role of spatial layout

    Exp Brain Res

    (2001)
  • C. Garofeanu et al.

    Naming and grasping common objects: a priming study

    Exp Brain Res

    (2004)
  • M. Gentilucci et al.

    Influence of automatic word reading on motor control

    Eur J Neurosci

    (1998)
  • M. Gentilucci et al.

    Language and motor control

    Exp Brain Res

    (2000)
  • S. Glover

    Separate visual representations in the planning and control of action

    Behav Brain Sci

    (2004)
  • S. Glover et al.

    Semantics affect the planning but not control of grasping

    Exp Brain Res

    (2002)
  • S. Glover et al.

    Grasping the meaning of words

    Exp Brain Res

    (2004)
  • M.A. Goodale

    Perception and action in the human visual system
