Research report

Linguistic and spatial information for action
Introduction
Motor acts are generally performed towards visual objects that have a particular shape, location and orientation in space. The function of the visual system is thus to isolate one or a few target objects and to provide information about the relevant spatial characteristics that make the organisation of action possible, in relation to body constraints and motor abilities [1]. As a consequence, visual selection for action in the peri-personal space depends on the true limit of action capability for the upper limbs [2], [5]. Generally, the selection of action-relevant objects involves processing of intrinsic (e.g., colour, shape or texture) and extrinsic (e.g., location or orientation) visual, and possibly auditory, attributes. It is, however, quite common to also use language to describe the spatial properties of target objects in the peri-personal space; for instance, one may use spatial descriptors such as “to the right”, “near …” or “close to …”. Although it seems intuitively accepted that words can be used for action selection, the processing of spatial characteristics and the processing of words are generally considered two independent activities involving distinct brain areas, even when words and spatial information are processed within the same visual modality and concern the peri-personal space [14], [17].
In this context, the dominant view of the visual system is that there is a sharp division of labour between ‘vision-for-action’, controlled by the dorsal pathway from primary visual cortex (V1) to the posterior parietal cortex (including the intra-parietal sulcus, IPS, and the superior parietal lobe, SPL), and ‘vision-for-conscious-perception’, controlled by the ventral pathway from V1 to the inferotemporal cortex [29]. One of the main aspects of the visual system for perception is that it deals with explicit holistic descriptions of the visual input, even when such information leads to errors in spatial processing (e.g., visual illusions). Conversely, the visual system for action deals with the absolute metrics of the visual input that are relevant for specific actions (e.g., reaching and grasping), and thus remains unaffected by illusory contexts that nevertheless lead to erroneous perceptual judgements. An interesting aspect of the ventral stream is that it is thought to participate in the processing of visual input associated with word reading and understanding. In a broader sense, word reading presumably entails basic sensory and motor components, as well as more central components, such as the analysis of visual word forms (orthography), of word sounds (phonology) and of word meaning (semantics). Accordingly, when focussing on the initial stage of word reading, increased activity was observed within specific brain areas, principally left-lateralised regions of the occipital and occipito-temporal cortex, including the superior and middle temporal cortex [33], [11]. These areas are thus different from those involved in the processing of the spatial properties of visual objects, which, at least for the perception of location and orientation, mainly involve the posterior parietal cortex when representing potential targets for action [6], [29], [50].
Beyond the dissociation concerning the nature of the visual characteristics processed by the ventral and dorsal visual streams, temporal dissociations between the two pathways have also been reported. Nowak and Bullier [30], for instance, reported that latencies to visual stimulation depend largely on the underlying type of neural conduction. According to these authors, the major factor influencing the speed of activation of a cortical area appears to be whether or not it is activated by the heavily myelinated, fast-conducting magnocellular channel, which is almost exclusively the case for the dorsal stream. Accordingly, visual latencies of around 40–80 ms were observed within the parietal areas, whereas latencies of around 100–150 ms were observed within the temporal areas, which are predominantly fed by parvocellular fibres characterised by slower information transfer. Moreover, synaptic relays from the visual cortex to the premotor areas are more numerous via the ventral than via the dorsal pathway [46]. As a direct consequence, reaction times in a location-discrimination task were reported to be within the range of 350–450 ms [37], [45], whereas reaction times to lexical and semantic stimuli when reading common words were found to be within the range of 400–650 ms [28], [39].
Although spatial processing for action and semantic knowledge are considered in some respects to be independent of one another [35], recent data suggest that this is not the case in all circumstances [e.g., 25]. A convincing illustration was reported by Creem and Proffitt [4], who found that inappropriate grasping of household tools occurred when object grasping was paired with semantic information in a dual-task experiment (verbally completing a concurrent paired-associates task). This inappropriate grasping was, however, reduced when grasping was paired with a visuospatial dual-task (imagining a block letter and classifying its corners). Thus, semantic processing can have an interfering effect when grasping a meaningful object. Interestingly, a similar effect was also observed when subjects had to bisect a vertical line in the presence of a word (“top” or “down”) placed at the extremities of the line [22], [49]. However, controversial data exist concerning the interaction between semantic and sensorimotor processing [13], and most of the data available so far suggest that semantic information conveyed by distracting words influences principally the kinematics of reaching and grasping movements [14], [15], [16], [17]. Gentilucci et al. [15], for instance, reported that the words “far” and “near” printed on to-be-grasped objects had an effect on movement kinematics comparable to that of greater or shorter distances between hand location and object. In the same vein, Glover and Dixon [17] reported that maximum grip aperture was larger when grasping an object with the word “large” printed on top than when grasping the same object with the word “small”. However, no study has specifically addressed the interaction between spatial and semantic processing when responding according to a linguistic stimulus.
In particular, it is not well established whether (1) motor performance varies when responding according to a linguistic rather than a spatial stimulus, and (2) the motor response to a linguistic stimulus remains immune to the influence of contextual spatial information. In this respect, previous studies on the effect of irrelevant spatial information in a manual reaching task have shown that the reach path is affected by the presence of a distractor. For instance, Tipper et al. [47] found that the hand veered away from a distractor even though it did not represent a physical obstacle to the reaching hand. Whether this effect holds when processing linguistic information in the presence of congruent and incongruent spatial information remains an open issue.
In the literature on interference effects, the classic paradigms used to study spatial and semantic interference are based on reaction times (for an overview, see [27]). To give some examples, Simon-like paradigms explore the interference effect of spatial stimulus-response compatibility [42]. In this task, participants are classically instructed to press a right or left button in response to stimuli appearing either on the left or on the right side of a screen. When the spatial stimulus and response codes correspond, responses are speeded. Conversely, when stimulus and response do not correspond, their spatial contents conflict and reaction times increase. Similarly, in the spatial Stroop task, participants are asked to respond to the physical location of stimuli whose content also designates a spatial location [27], [40]. Here, a word can appear on the left or on the right side of a fixation point, while the word itself is either “left” or “right” [23], [32], [44]. Again, when the word position and its meaning are congruent, reaction times are faster than when they are not.
The common explanations of these Simon-like paradigms postulate two separate routes for perception and action [9], [20], [24]. In a conditional route, the appropriate response is intentionally selected and activated, whereas in a parallel, unconditional route the response that spatially corresponds to the stimulus location is automatically activated. Thus, right-sided stimuli facilitate right-sided responses, whereas left-sided stimuli facilitate left-sided responses. As a result, when stimulus and response locations are congruent, reaction time is reduced. In contrast, when they are incongruent, the two routes activate conflicting responses, leading to prolonged reaction times.
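This dual-route account can be caricatured with a toy additive-latency sketch. Everything here is illustrative only: the base latency and conflict cost are arbitrary numbers, not values fitted to any data, and the function names are our own.

```python
# Toy sketch of the dual-route account of Simon-like effects.
# BASE_RT and CONFLICT_COST are arbitrary illustrative values, not fitted ones.

BASE_RT = 400        # ms: latency of the conditional (instructed) route
CONFLICT_COST = 30   # ms: extra time needed to resolve a response conflict

def simulated_rt(stimulus_side: str, instructed_response: str) -> int:
    """The unconditional route automatically primes the response on the
    stimulus side; a mismatch with the instructed response adds a cost."""
    primed_response = stimulus_side           # automatic spatial priming
    if primed_response == instructed_response:
        return BASE_RT                        # congruent: no conflict
    return BASE_RT + CONFLICT_COST            # incongruent: conflict cost

# Congruent trials come out faster than incongruent ones:
assert simulated_rt("left", "left") < simulated_rt("left", "right")
```

The sketch only encodes the qualitative prediction of the two-route account: congruence between automatically primed and instructed responses shortens reaction time, incongruence lengthens it.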
In the present study, our goal was to explore to what extent irrelevant spatial information influences goal-directed movements triggered by a linguistic stimulus. Instead of a Simon-like paradigm, where the position of the stimulus conflicts with the meaning of the stimulus itself (i.e., the word “left” presented on the right), we employed a spatial interference task [12], [47], in which the semantic information (“left”; “right”) was never in conflict with the response code, since the word stimulus was always located in a central position. More specifically, participants were required to respond according to (1) a word indicating the right or left direction, or (2) a spatial target appearing to the right or to the left of the mid-sagittal body axis. In some trials, the two types of information were simultaneously available, but the instruction encouraged subjects to consider only one of them. Assuming that linguistic and spatial information involve independent processing, no interference effect was expected when the linguistic and spatial stimuli were presented simultaneously. Conversely, assuming that the processing of linguistic and spatial information interact in some way, an interference effect was expected, but predominantly when responding according to the linguistic stimulus, owing to the difference in latency characterising the processing of information within the ventral and dorsal pathways of the visual system.
Section snippets
Participants
Nine student volunteers from the University of Lille 3 participated in the experiment (seven females and two males, mean age 25.2 years). All participants had normal or corrected-to-normal visual acuity and were naive as to the purpose of the experiment. They all gave their informed consent prior to their inclusion in the experiment, which was approved by the University Charles de Gaulle ethical committee and conducted in accordance with the principles of the 1964 Declaration of Helsinki. They
Reaction time
Considering correct responses, mean reaction time was 577 ms (S.D.: 191 ms) and was shorter when responding to the spatial stimulus (mean: 484 ms, S.D.: 109 ms) than to the linguistic stimulus (mean: 670 ms, S.D.: 210 ms, F(1,8) = 11.86; p < 0.01). Reaction time was influenced by the context of stimulus presentation (F(2,16) = 3.90; p = 0.04), and an interaction between the two factors was revealed (F(2,16) = 10.82; p < 0.01). Analysis of simple effects associated with the interaction showed that reaction time
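The F values above follow a one-way repeated-measures design (nine subjects, two stimulus types, hence F(1, 8)). As a rough illustration of how such an F ratio is computed, here is a minimal stdlib-only sketch; the data layout (one row of per-condition mean RTs per subject) is an assumption for illustration, and any real analysis would use the experimental data.

```python
# Minimal one-way repeated-measures ANOVA (subjects x conditions),
# matching the structure of the RT analysis (9 subjects x 2 stimulus
# types would give df = 1 and 8). Illustrative sketch, not the authors' code.

def rm_anova(data):
    """data[s][c] = mean RT of subject s in condition c.
    Returns (F, df_effect, df_error)."""
    n = len(data)          # number of subjects
    k = len(data[0])       # number of conditions
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    cond_means = [sum(data[s][c] for s in range(n)) / n for c in range(k)]

    # Partition the total sum of squares into condition, subject and error.
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((data[s][c] - grand) ** 2
                   for s in range(n) for c in range(k))
    ss_error = ss_total - ss_cond - ss_subj

    df_cond = k - 1
    df_error = (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_error / df_error), df_cond, df_error
```

With nine subjects and two conditions, `df_cond` is 1 and `df_error` is 8, which is the F(1, 8) reported for the spatial versus linguistic comparison.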
Discussion and conclusion
The aim of the present experiment was to explore to what extent irrelevant spatial information can influence goal-directed movements triggered by a linguistic stimulus. Overall, the linguistic stimulus was processed much more slowly than the spatial stimulus. The temporal difference that we observed (186 ms) is consistent with the differences previously reported between location-discrimination tasks and word-processing tasks. This effect was statistically revealed despite the presentation
Acknowledgements
Supported by Région Nord Pas de Calais and University Charles De Gaulle grants, European Science Foundation, Eurocores CNCC CRP grant and ANR “Neurosciences, Neurologie et Psychiatrie” program from the French Ministry. The authors gratefully acknowledge the assistance of Yvonne Delevoye-Turrell in the preparation of the paper.
References (50)
- et al. Saccade target selection and object recognition: evidence for a common attentional mechanism. Vis Res (1996)
- et al. Space and the parietal cortex. Trends Cogn Sci (2007)
- et al. Visual presentation of single letters activates a premotor area involved in writing. Neuroimage (2003)
- The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia (1971)
- et al. Cognitive conjunction: a new approach to brain activation experiments. Neuroimage (1997)
- et al. Location vs. feature: reaction time reveals dissociation between two visual functions. Vis Res (1996)
- et al. A double dissociation between sensitivity to changes in object identity and object orientation in the ventral and dorsal visual streams: a human fMRI study. Neuropsychologia (2006)
- Selection for action: some behavioral and neurophysiological considerations of attention and action
- et al. Visual search is modulated by action intentions. Psychol Sci (2002)
- et al. Pathways for motion analysis: cortical connections of the medial superior temporal and fundus of the superior temporal visual areas in the macaque. J Comp Neurol (1990)