People often respond faster when the orientation of an object’s graspable part, irrelevant to the task, corresponds with the response location than when it does not (e.g., Bub & Masson, 2010; Tucker & Ellis, 1998). As a representative example, participants might make a left or right keypress in response to the category of an object with a graspable handle oriented to the left or the right. The basic finding is that responses are faster when the correct response is on the same side as the handle (e.g., a left key response when the handle is oriented to the left) than when it is on the opposite side (e.g., Tucker & Ellis, 1998). This object-based correspondence effect has been attributed by many authors to a grasping affordance (the object affordance account; e.g., Iani, Baroni, Pellicano, & Nicoletti, 2011; Pellicano, Iani, Borghi, Rubichi, & Nicoletti, 2010; Tucker & Ellis, 1998). However, other authors have proposed instead that the effect is due primarily, if not entirely, to location coding similar to the coding that underlies the correspondence effects obtained for stimuli displayed in left and right locations (the spatial-coding account; e.g., Bub & Masson, 2010; Cho & Proctor, 2010, 2011, in press; see also Bub, Masson, & Lin, 2013).

Goslin, Dixon, Fischer, Cangelosi, and Ellis (2012) recently reported a study that they interpreted as providing electrophysiological evidence for the affordance account of such correspondence effects. In that study, participants categorized a centrally located picture of an object, for which the handle was left-facing or right-facing, as a kitchen utensil or tool by pressing a left key with the left hand or right key with the right hand. Their logic was that “affordance would lead to facilitation when the hand . . . was congruent with the direction the handle of the object was facing, and affordance would lead to inhibition when the handle orientation and response hand were incongruent” (p. 152). To examine whether visual processing is modulated by the afforded grasping action, Goslin et al. measured the P1 and N1 components of the event-related potential (ERP). When attention is allocated to a particular location in the visual field, stimuli presented at that location are assumed to elicit an enlarged P1 (e.g., during the period 100–130 ms after stimulus onset) and N1 (e.g., 150–200 ms after stimulus onset), relative to stimuli at unattended locations, especially at parietal and occipital electrode sites (for reviews, see Luck, 2005; Mangun, 1995). Goslin et al. (2012) noted that other studies have shown these ERP components to be sensitive to object-based features such as orientation (e.g., Karayanidis & Michie, 1997), and reasoned that “if embodiment really is deeply embedded in early visual processing, we would also expect that the P1 and N1 components would . . . be modulated by the motor intentions of the participants” (p. 153).

In agreement with this reasoning, within a single experiment, Goslin et al. (2012) found the correspondence effect between response hand and left–right handle orientation of the object in response time (RT) and the P1 and N1 ERP components. The effect was also observed in the stimulus-locked lateralized readiness potential (LRP), an index of the activation of response-related processes (e.g., Coles, 1989; De Jong, Wierda, Mulder, & Mulder, 1988; Eimer, 1998). Goslin et al. (2012) concluded that this correspondence effect is due to grasping affordance and is driven by a binding between visual processing and action that takes place early in the sensory pathways.

However, consideration of related research and a close examination of Goslin et al.’s (2012) results suggest that their conclusion is premature. Bub and Masson (2010) noted that although object-based correspondence effects obtained with actual grasping responses likely are a consequence of representations associated with grasping the handle, there is little evidence to indicate that such effects obtained with keypress responses are. Object-based correspondence effects obtained with keypresses “instead involve more abstract spatial codes activated by the orientation of an object that affect any left–right response discrimination (e.g., index vs. middle finger of the same hand)” (p. 341). Consistent with a spatial-coding account, Cho and Proctor (2010, 2011, in press) have shown that such effects are relatively large (~25 ms or more) when the handles appear in distinct left and right display locations from one trial to the next, but not when the entire object is centered (on the basis of total object width) and location codes for the handle are not obvious. In Goslin et al.’s (2012) study, the correspondence effect on RT was indeed small, being only 5 ms overall. Moreover, the effect was evident for tools (10 ms) but not for kitchen utensils (0 ms), without any explanation provided for why the graspable utensils would not also yield an effect. Although both the P1 and N1 were modulated by the correspondence between response hand and handle orientation, these effects likewise were evident only for tools. Furthermore, the correspondence effects on the P1 and N1 were negligible (see their Fig. 1c).

Fig. 1

Panel A shows an example event sequence used in Experiment 1. Panel B shows the subset of objects from Goslin et al.'s (2012) stimuli that we used in Experiments 2 and 3. For the purpose of our study, we converted the color stimuli to grayscale and reduced the size of the objects to approximately 50% of the original size. Only left-handle, centrally located objects are shown. Permission to use Goslin et al.'s stimuli was obtained by e-mail on February 20, 2012

As noted above, Goslin et al. (2012) also found that the pooled LRPs across tools and kitchen utensils were modulated by the correspondence between response hand and handle orientation, around 100–200 ms after stimulus onset. They took this finding as evidence that response activation triggered by the intended grasping action (i.e., handle orientation) occurred during early visual processing. Nevertheless, caution should be exercised in accepting this interpretation. The divergence of LRPs for corresponding and noncorresponding trials in Goslin et al.'s study started about 100 ms after stimulus onset and lasted for at least 50 ms before the preparation shifted to the correct hand for the noncorresponding trials (see their Fig. 1b). This shift of motor preparation was not fully reflected in the correspondence effect on RTs, which was only 5 ms overall. Furthermore, as was pointed out by Masaki, Wild-Wall, Sangals, and Sommer (2004), stimulus-locked LRPs emerge from multiple streams of activation (e.g., response activation by nonspatial stimulus features and motor activation by selecting response hands). Thus, the LRP data by themselves do not provide an unambiguous picture regarding whether response activation is triggered by the intended grasping action.

The present study

Given that the affordance view has implications for how vision and action are integrated, it is critical to verify whether the grasping affordance account is correct. There is reason to be skeptical of Goslin et al.'s (2012) finding, because their data did not consistently show a correspondence effect for graspable objects. Thus, the present study was conducted to examine their claim and to test further the affordance account against the spatial-coding account of object-based correspondence effects. Our first step was to replicate Goslin et al.'s study in Experiment 1; we then examined whether the correspondence effect was driven primarily by object affordance (as suggested by the affordance account) or by object location (as suggested by the spatial-coding account) in Experiments 2 and 3.

Experiment 1

In Experiment 1, we closely replicated Goslin et al.’s (2012) procedures using their stimuli. The objects were centrally located, as in their study. Participants determined whether the object was a kitchen utensil or a tool by pressing the left key with their left hand or the right key with their right hand. The handle orientation, though irrelevant to the task, corresponded or not with the response hand/location. We measured the correspondence effect (noncorresponding minus corresponding) on both behavioral and ERP data, as in Goslin et al.’s study.

Method

Participants

A group of 29 undergraduate students from Oregon State University participated in exchange for extra course credit. Data from three participants were excluded from the final analyses due to excessive artifacts in the electroencephalographic (EEG) data (see below). The remaining 26 participants (16 females, 10 males) had a mean age of 20 years (range: 18–34). Four were left-handed and 22 were right-handed. All reported having normal or corrected-to-normal acuity.

Apparatus, stimuli, and procedure

Stimuli, displayed on a 19-in. monitor, were viewed from a distance of about 55 cm. The complete set of 84 colored pictures of objects—42 kitchen utensils and 42 tools—from Goslin et al. (2012) was used (see their Fig. 1a for an example of the stimuli; see Footnote 1). Within each category, half of the handles were on the left and half on the right. A trial started with a fixation cross in the center of the screen for 1,000 to 1,200 ms, randomly selected from a uniform distribution. The stimulus appeared in the center of the screen immediately after offset of the fixation cross and remained present until participants had made a response or until a 2,000-ms deadline was reached. The participants' task was to indicate whether the stimulus was a kitchen utensil or a tool by pressing the leftmost response-box button with their left index finger or the rightmost button with their right index finger. Feedback (a tone for an incorrect response or the fixation display for a correct response) was presented for 100 ms. The next trial then began with the fixation display (see Fig. 1, panel A).
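For readers who wish to implement a comparable trial sequence, the timing parameters above can be expressed as a short sketch. The following Python code is a minimal illustration only, not the presentation software actually used; the display and response functions (show_fixation, show_stimulus_and_wait, play_error_tone) are hypothetical placeholders for whatever stimulus-presentation package one adopts.

import random

FIXATION_MIN_MS = 1000       # fixation duration drawn from a uniform distribution
FIXATION_MAX_MS = 1200
RESPONSE_DEADLINE_MS = 2000  # stimulus stays on until a response or this deadline
FEEDBACK_MS = 100

def run_trial(stimulus, correct_key, show_fixation, show_stimulus_and_wait, play_error_tone):
    # Fixation cross for a random 1,000-1,200 ms
    fixation_ms = random.uniform(FIXATION_MIN_MS, FIXATION_MAX_MS)
    show_fixation(duration_ms=fixation_ms)

    # Stimulus remains present until a keypress or the 2,000-ms deadline
    key, rt_ms = show_stimulus_and_wait(stimulus, deadline_ms=RESPONSE_DEADLINE_MS)

    # No response within the deadline is scored as an error
    correct = (key == correct_key)
    if correct:
        show_fixation(duration_ms=FEEDBACK_MS)    # fixation display as correct feedback
    else:
        play_error_tone(duration_ms=FEEDBACK_MS)  # tone for an incorrect response
    return {"rt_ms": rt_ms, "correct": correct}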

Each participant completed two sessions, one with each of the two possible mappings of response hand/location to category (e.g., left hand for kitchen utensils and right hand for tools, or vice versa). Session order was counterbalanced between participants. Within each session, participants performed one practice block of 24 trials, followed by six experimental blocks of 84 trials each (a total of 504 experimental trials for each session/mapping). Thus, each object image was repeated 12 times (six times per session/mapping) for each participant. On half of the trials, the response hand/location corresponded with the orientation of the stimulus's handle; on the other half, it did not. Participants completed these two sessions within a single visit and were given breaks between blocks and between sessions.

EEG recording

EEG activity was recorded from electrodes F3, Fz, F4, C3, Cz, C4, P3, Pz, P4, P7, P8, PO7, PO8, O1, and O2. These sites and the right mastoid were recorded in relation to a reference electrode at the left mastoid. The ERP waveforms were then re-referenced offline to the average of the left and right mastoids (Luck, 2005). The horizontal electrooculogram (HEOG) was recorded bipolarly from electrodes at the outer canthi of both eyes, and the vertical electrooculogram (VEOG) was recorded from electrodes above and below the midpoint of the left eye. Electrode impedance was kept below 5 kΩ. EEG, HEOG, and VEOG were amplified using Synamps2 (Neuroscan) with a gain of 2,000 and a bandpass of 0.1–40 Hz. The amplified signals were digitized at 500 Hz.
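As a worked example of the offline re-referencing step, the following sketch shows the arithmetic for converting signals recorded against the left mastoid to an average-mastoid reference. It assumes the EEG is stored as a channels × samples NumPy array; it is an illustration of the computation, not the recording system's own software.

import numpy as np

def rereference_to_average_mastoids(eeg, right_mastoid):
    # eeg           : (n_channels, n_samples) array, each channel recorded as X - LM
    # right_mastoid : (n_samples,) array, recorded as RM - LM
    #
    # Re-referencing channel X to the mastoid average (LM + RM)/2 gives
    # X - (LM + RM)/2 = (X - LM) - (RM - LM)/2,
    # i.e., subtract half of the recorded right-mastoid signal from every channel.
    return eeg - right_mastoid / 2.0

# Example with simulated data (15 scalp channels, 1 s at the 500-Hz sampling rate)
rng = np.random.default_rng(0)
eeg = rng.normal(size=(15, 500))
right_mastoid = rng.normal(size=500)
eeg_avg_mastoid = rereference_to_average_mastoids(eeg, right_mastoid)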

Trials with artifacts were identified in two steps. First, trials with artifacts were rejected automatically using a threshold of ±75 μV for a 1,000-ms epoch beginning 200 ms before stimulus onset and ending 800 ms after stimulus onset. Second, each of these candidate artifact trials was then inspected manually. Three of the original 29 participants were eliminated because of artifact rejection on more than 25% of trials. Thus, only 26 participants’ data were included in the final analyses.
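The automatic step of this procedure amounts to flagging any epoch whose amplitude exceeds ±75 μV on any channel. The sketch below illustrates that step (the subsequent manual inspection obviously cannot be automated), assuming epochs are stored as a trials × channels × samples array in microvolts; it is illustrative rather than our actual processing code.

import numpy as np

THRESHOLD_UV = 75.0   # rejection threshold, in microvolts
# Epoch: -200 to 800 ms around stimulus onset, i.e., 500 samples at 500 Hz

def flag_artifact_trials(epochs_uv):
    # epochs_uv: (n_trials, n_channels, n_samples) array in microvolts.
    # A trial is flagged if any sample on any channel exceeds +/-75 microvolts.
    return np.any(np.abs(epochs_uv) > THRESHOLD_UV, axis=(1, 2))

# Example: 100 simulated trials, 15 channels, 500 samples per epoch
rng = np.random.default_rng(1)
epochs = rng.normal(scale=20.0, size=(100, 15, 500))
rejected = flag_artifact_trials(epochs)
rejection_rate = rejected.mean()   # participants above .25 would be excluded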

Results

We intended to exclude trials from the final analyses of the behavioral data (RTs and proportions of errors [PEs]) and ERP data if RTs were less than 100 ms, but no trials were in that range. Rejection of trials with EEG artifacts led to the elimination of 5% of trials, with no more than 14% rejected for any individual participant. Trials were also excluded from the RT and ERP analyses if the response was incorrect (note that trials were counted as incorrect if participants failed to respond within the 2,000-ms deadline). An alpha level of .05 was used to determine statistical significance.

Behavioral data analyses

An analysis of variance (ANOVA) was conducted including the within-subjects variables object category (kitchen utensils vs. tools) and response-hand/handle-orientation correspondence (corresponding vs. noncorresponding; see Footnote 2). Table 1 shows the mean RTs and PEs for each of these conditions.

Table 1 Mean response times (RTs, in milliseconds) and proportions of errors (PEs) as a function of object category (kitchen utensils vs. tools) and response-hand/handle-orientation correspondence (corresponding vs. noncorresponding) in Experiment 1
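The analysis just described is a 2 (object category) × 2 (correspondence) within-subjects ANOVA computed on subject-level condition means. A minimal sketch of such an analysis, using the AnovaRM class from statsmodels on a hypothetical long-format table (the column names and values below are illustrative, not our data), is as follows.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per participant x category x correspondence cell (hypothetical values)
df = pd.DataFrame({
    "subject":        [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "category":       ["utensil", "utensil", "tool", "tool"] * 3,
    "correspondence": ["corr", "noncorr"] * 6,
    "rt_ms":          [580, 584, 590, 592, 601, 600, 607, 611, 575, 577, 588, 586],
})

# 2 (category) x 2 (correspondence) repeated-measures ANOVA on mean RT
res = AnovaRM(df, depvar="rt_ms", subject="subject",
              within=["category", "correspondence"]).fit()
print(res)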

Mean RTs tended to be slightly longer for the tools (593 ms) than for the kitchen utensils (585 ms), F(1, 25) = 3.85, p = .0609, ηp² = .13. The overall correspondence effect was only 1 ± 4 ms (95% confidence interval) and was not significant, F < 1.0. The difference in the correspondence effect between kitchen utensils (4 ± 6 ms) and tools (–2 ± 6 ms) was not significant, F(1, 25) = 1.99, p = .1706, ηp² = .07, but note that the trend was in the opposite direction from that reported by Goslin et al. (2012), in which the kitchen utensils yielded a 0-ms effect and the tools a 10-ms effect.

The mean PE was .012 higher for the kitchen utensils (.055) than for the tools (.043), F(1, 25) = 16.07, p < .001, ηp² = .39. As in the RT data, neither the main effect of correspondence nor its interaction with category was significant, Fs(1, 25) ≤ 1.52, ps ≥ .2293, ηp²s ≤ .06. The correspondence effects were only .004 (± .007) for the kitchen utensils and .004 (± .006) for the tools.

ERP data analyses

P1 and N1

To quantify the overall magnitude of the P1 and N1 effects, we focused on the time windows of 100–130 ms and 150–200 ms after stimulus onset, respectively (see Footnote 3), and calculated the mean amplitude from parietal and occipital electrodes: P7, P8, O1, and O2. Figure 2 shows the scalp distributions of brain potentials during the critical time windows used to measure the P1 and N1. All ERP data were adjusted relative to the mean amplitude during a 200-ms prestimulus baseline period. Note that if the P1 and N1 were modulated by the correspondence between response location and handle orientation, corresponding trials should produce larger ERP amplitudes than noncorresponding trials (i.e., more positive for corresponding than noncorresponding trials). To ensure consistency, we measured the correspondence effect for ERPs using the same equation as for the behavioral data (noncorresponding minus corresponding). Thus, we would expect the correspondence effect to be negative in value in the ERP measures (P1 and N1) but positive in the behavioral measures (RT and PE).
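A compact sketch of this measurement is given below. It assumes one averaged waveform per condition (e.g., the mean of the P7, P8, O1, and O2 electrodes), sampled at 500 Hz over an epoch running from -200 to 800 ms; the function names are ours and the code is illustrative rather than the exact analysis script.

import numpy as np

SRATE_HZ = 500
EPOCH_START_MS = -200   # epoch runs from -200 to 800 ms around stimulus onset

def _to_sample(t_ms):
    return int(round((t_ms - EPOCH_START_MS) * SRATE_HZ / 1000))

def window_mean(wave_uv, start_ms, end_ms):
    # Mean amplitude (in microvolts) between start_ms and end_ms
    return wave_uv[_to_sample(start_ms):_to_sample(end_ms)].mean()

def p1_n1_amplitudes(wave_uv):
    # Baseline-correct against the 200-ms prestimulus period, then return
    # mean amplitudes in the P1 (100-130 ms) and N1 (150-200 ms) windows.
    corrected = wave_uv - window_mean(wave_uv, -200, 0)
    return window_mean(corrected, 100, 130), window_mean(corrected, 150, 200)

# Correspondence effect defined as for the behavioral data:
# noncorresponding minus corresponding mean amplitude.
rng = np.random.default_rng(2)
erp_corr, erp_noncorr = rng.normal(size=500), rng.normal(size=500)
p1_c, n1_c = p1_n1_amplitudes(erp_corr)
p1_nc, n1_nc = p1_n1_amplitudes(erp_noncorr)
p1_effect, n1_effect = p1_nc - p1_c, n1_nc - n1_c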

Fig. 2

Scalp distribution of event-related potentials for each 24-ms interval during the time window 100–200 ms after stimulus onset in Experiment 1. These topographies indicate increased activity at parietal and occipital sites during this time window

Following Goslin et al. (2012), we analyzed the P1 and N1 data as a function of object category (kitchen utensils vs. tools), handle orientation (left vs. right), response hand/location (left vs. right), electrode site (parietal [P7, P8] vs. occipital [O1, O2]), and electrode hemisphere (left [P7, O1] vs. right [P8, O2]). All factors were within-subjects variables. Figure 3 shows the mean P1 and N1 amplitudes, averaged across the P7, P8, O1, and O2 electrodes for the kitchen utensils and tools, as well as the pooled data from these two categories. Our primary interest was in whether the P1 and N1 were modulated by the correspondence between handle orientation and response hand. As in Goslin et al.’s study, therefore, we report only these effects below. The complete summary of the ANOVA is given in Appendix Table 4.

Fig. 3

Grand average P1 and N1 waveforms across the P7, P8, O1, and O2 electrodes for kitchen utensils and tools in Experiment 1. In addition, pooled data were obtained by averaging the P1s and N1s across the two object categories. Data are plotted as a function of whether the response hand and the handle orientation were corresponding (both left or both right) or noncorresponding (one left and one right). The unfilled rectangular boxes indicate the time windows used to assess the P1 effect (100–130 ms after stimulus onset) and the N1 effect (150–200 ms after stimulus onset). Negative is plotted upward, and time zero represents stimulus onset

For the P1 data, the correspondence effect (between response location and handle orientation) was negligible (–0.010 μV) and nonsignificant, F < 1.0. The correspondence effect did not differ significantly between kitchen utensils and tools, F(1, 25) = 2.41, p = .1334, ηp² = .09, and was very small for both categories (0.120 μV and –0.141 μV, respectively). Further analyses revealed that the correspondence effect was nonsignificant for both categories, |ts(25)| ≤ 1.26, ps ≥ .2191.

As in the P1 data, the N1 data showed that neither the interaction of response hand and handle orientation (i.e., the correspondence effect) nor its interaction with category was significant, Fs < 1.0. The correspondence effect was only –0.093 μV for kitchen utensils and –0.103 μV for tools. Further analysis also revealed that the correspondence effect was nonsignificant for both categories, |ts(25)| < 1.0. These findings suggest that neither the P1 nor the N1 was modulated by the correspondence between response hand and handle orientation.

LRPs

As we discussed above, the LRP data do not allow for a clear-cut interpretation of correspondence effects, nor do they provide insight regarding whether response activation is triggered by the intended grasping action. Nevertheless, we also report LRPs for the sake of completeness. As in Goslin et al. (2012), the LRPs were measured by calculating the difference waveforms between the C3 and C4 electrode sites using the following equation: LRP = (left hand [C4–C3] + right hand [C3–C4])/2. The average LRP amplitudes were analyzed over the five consecutive 100-ms time windows from 0 to 500 ms after stimulus onset, and were adjusted relative to the mean amplitude during a 200-ms baseline period prior to stimulus onset. The data were analyzed as a function of object category (kitchen utensils vs. tools) and correspondence between response hand and handle orientation (corresponding vs. noncorresponding). A complete summary of the ANOVA results is given in Appendix Table 5. Figure 4 shows the LRPs for kitchen utensils and tools, as well as the pooled data from these two categories.
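The LRP equation and windowing just described can be written out as follows. This sketch assumes per-trial C3 and C4 epochs (-200 to 800 ms at 500 Hz) already sorted by response hand; it is a minimal illustration, not the authors' analysis code.

import numpy as np

SRATE_HZ = 500
EPOCH_START_MS = -200

def _to_sample(t_ms):
    return int(round((t_ms - EPOCH_START_MS) * SRATE_HZ / 1000))

def stimulus_locked_lrp(c3_left, c4_left, c3_right, c4_right):
    # Each input: (n_trials, n_samples) array for one electrode and response hand.
    # LRP = (left hand [C4 - C3] + right hand [C3 - C4]) / 2, computed on the
    # trial-averaged waveforms and corrected against the 200-ms prestimulus baseline.
    left = (c4_left - c3_left).mean(axis=0)
    right = (c3_right - c4_right).mean(axis=0)
    wave = (left + right) / 2.0
    return wave - wave[_to_sample(-200):_to_sample(0)].mean()

def lrp_window_means(lrp_wave):
    # Mean LRP amplitude in five consecutive 100-ms windows from 0 to 500 ms
    windows_ms = [(0, 100), (100, 200), (200, 300), (300, 400), (400, 500)]
    return [lrp_wave[_to_sample(a):_to_sample(b)].mean() for a, b in windows_ms]

# Example with simulated single-trial data (40 trials per hand)
rng = np.random.default_rng(3)
sim = lambda: rng.normal(size=(40, 500))
lrp_wave = stimulus_locked_lrp(sim(), sim(), sim(), sim())
window_means = lrp_window_means(lrp_wave)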

Fig. 4

Grand average lateralized readiness potential (LRP) waveforms for kitchen utensils and tools in Experiment 1. In addition, pooled data were obtained by averaging the LRPs across the two object categories. Data are plotted as a function of whether the response hand and the handle orientation were corresponding (i.e., both left or both right) or noncorresponding (one left and one right). Negative is plotted upward, and time zero represents stimulus onset

The main effect of correspondence was significant for all time windows, Fs(1, 25) ≥ 5.51, ps ≤ .0324, ηp²s ≥ .17. The patterns of correspondence between response hand and handle orientation on the LRPs were similar for kitchen utensils and tools from 0 to 300 ms after stimulus onset, Fs(1, 25) ≤ 3.49, ps ≥ .07, ηp²s ≤ .12. Thus, the LRPs diverged early after stimulus onset, which cannot be attributed purely to response activation or selection (which typically occurs 200 ms after stimulus onset; see Masaki et al., 2004).

Discussion

The correspondence effects on RTs were negligible for kitchen utensils and tools, Fs < 1.0. Also, the correspondence between response hand and handle orientation had no effect on the P1 and N1 for either category, Fs < 1.0, arguing against the claim that vision–action binding takes place early in the sensory pathways. These results provide no evidence to support Goslin et al.’s (2012) claim that “some of the brain’s earliest responses to an individual visual object are modulated by the relation between the action associated with the object and the action intentions of the observer” (p. 156).

Experiment 2

Previous studies have suggested that correspondence effects with keypresses are driven by object location (the spatial-coding account; e.g., Bub & Masson, 2010; Cho & Proctor, 2010) rather than by the orientation of the grasping component. Specifically, Cho and Proctor (in press) provided evidence that the object-based correspondence effect is absent when objects are centered in the display. Thus, the absence of correspondence effects in the present Experiment 1 could be due to the lack of a left or right location code for the object (i.e., the object was located centrally). Experiment 2 tested this hypothesis in the following manner.

In addition to presenting the stimulus in the center of the screen (the central location condition), as in Experiment 1, we included a condition in which the stimulus was presented on the left or right side of the screen (the peripheral location condition). For the peripheral objects, the handle orientation (left vs. right) and object location (left vs. right) always corresponded, which allowed us to assess the impact of object location on the correspondence effect without pitting it against handle orientation. That is, when the object was presented at the left location, the handle was oriented toward the left; when the object was presented at the right location, the handle was oriented toward the right. Thus, the peripheral location condition included an explicit left–right spatial code that the central location condition did not. We used only eight kitchen utensils and eight tools from Experiment 1, each with a distinct base and handle, which made the direction of the grasping action toward the handle more explicit. The affordance view would predict correspondence effects in both the central and peripheral location conditions, whereas the spatial-coding view would predict an effect in the peripheral location condition but not in the central location condition.

Method

Participants

We recruited 24 new participants from the same participant pool as in Experiment 1. Data from four participants were excluded: three because their EEG artifact rejection rates exceeded 25% of trials, and one because of an EEG recording failure. Therefore, the data from 20 participants (15 females, 5 males), mean age of 20 years (range: 18–22), were included in the final data analyses. Two were left-handed and 18 were right-handed. All reported having normal or corrected-to-normal visual acuity.

Apparatus, stimuli, and procedure

The tasks, stimuli, and equipment were the same as in Experiment 1, except for the following changes. First, only eight kitchen utensils and eight tools with distinct bases and handles from Experiment 1 were used (see Fig. 1, panel B, for examples of the objects used in Exp. 2). Again, within each category, half of the handles were on the left and half on the right. Second, the object was presented centrally, as in Experiment 1, or peripherally (5.19° from central fixation). Object location (central vs. peripheral) was randomly intermixed within blocks. To manipulate the object location, it was necessary for us to convert the images to grayscale and reduce the size of the original object images used in Goslin et al. (2012) and in Experiment 1 by 50%. Third, we increased the total number of experimental trials from 1,008 (504 trials per session [i.e., per S–R mapping]) in Experiment 1 to 1,440 (720 trials per session) in Experiment 2. Thus, each object image was repeated 45 times in each location. As in Experiment 1, each participant performed two sessions with different S–R mappings (left hand for kitchen utensils and right hand for tools vs. right hand for kitchen utensils and left hand for tools). Session order (i.e., response-mapping order) was counterbalanced between participants.
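To make the peripheral offset concrete: at the 55-cm viewing distance used in Experiment 1, an eccentricity of 5.19° corresponds to roughly 5 cm of horizontal displacement on the screen, as the short computation below shows. The pixel conversion at the end depends on monitor geometry; the screen width and resolution used there are hypothetical values for illustration.

import math

VIEWING_DISTANCE_CM = 55.0
ECCENTRICITY_DEG = 5.19

# Horizontal offset on the screen for the given eccentricity
offset_cm = VIEWING_DISTANCE_CM * math.tan(math.radians(ECCENTRICITY_DEG))
# offset_cm is approximately 5.0 cm

# Converting to pixels requires the monitor's physical width and resolution;
# the values below are assumptions (a 5:4, 19-in. panel at 1280 x 1024).
SCREEN_WIDTH_CM = 37.7
SCREEN_WIDTH_PX = 1280
offset_px = round(offset_cm * SCREEN_WIDTH_PX / SCREEN_WIDTH_CM)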

Results

The data analysis was similar to that for Experiment 1. Application of the pre-determined RT cutoff (<100 ms) did not eliminate any trials (note that the 2,000-ms response deadline was also used in Exp. 2 as in our Exp. 1 and Goslin et al.’s, 2012, study). Rejection of trials with EEG artifacts led to the further elimination of 7% of trials, but no more than 19% for any participant.

Behavioral data analyses

The behavioral data were analyzed as a function of object location (central vs. peripheral), object category (kitchen utensils vs. tools), and response-hand/handle-orientation correspondence (corresponding vs. noncorresponding). All factors were within-subjects variables. Table 2 shows the mean RTs and PEs for each of these conditions.

Table 2 Mean response times (RTs, in milliseconds) and proportions of errors (PEs) as a function of object location (central vs. peripheral), object category (kitchen utensils vs. tools), and response-hand/handle-orientation correspondence (corresponding vs. noncorresponding) in Experiment 2

The mean RT was 8 ms longer for peripherally located objects (547 ms) than for centrally located objects (538 ms), F(1, 19) = 15.34, p < .001, ηp² = .45. A significant correspondence effect of 14 ms was observed, F(1, 19) = 53.46, p < .001, ηp² = .74. The correspondence effect was larger for the peripheral objects (25 ± 8 ms, 95% confidence interval) than for the central objects (3 ± 6 ms), F(1, 19) = 15.54, p < .001, ηp² = .45, with the former being significant, t(19) = 6.54, p < .0001, but not the latter, t(19) = 1.06, p = .3016. The mean RT was shorter for kitchen utensils (534 ms) than for tools (543 ms) when they were presented centrally, but was identical for the two categories when they were presented peripherally (547 ms for both), F(1, 19) = 4.81, p = .0409, ηp² = .20. No other effects were significant.

The mean PE was .007 higher for peripherally located objects (.046) than for centrally located objects (.038), F(1, 19) = 6.68, p = .0182, ηp² = .26. The PE was .008 higher for the kitchen utensils (.046) than for the tools (.038), F(1, 19) = 7.04, p = .0157, ηp² = .27. A significant correspondence effect of .012 was observed, F(1, 19) = 21.56, p < .001, ηp² = .53. As in the RT data, the correspondence effect was larger for the peripheral objects (.022 ± .009) than for the central objects (.001 ± .005), F(1, 19) = 19.43, p < .001, ηp² = .51. Further analyses revealed that the correspondence effect was significant for the peripheral objects, t(19) = 5.29, p < .0001, but not for the central objects, t < 1.0.

ERP data analyses

P1 AND N1

We analyzed the P1 and N1 data as a function of object location (central vs. peripheral), object category (kitchen utensils vs. tools), handle orientation (left vs. right), response hand/location (left vs. right), electrode site (parietal [P7, P8] vs. occipital [O1, O2]), and electrode hemisphere (left [P7, O1] vs. right [P8, O2]). All factors were within-subjects variables. Figure 5 shows the P1s and N1s averaged across the P7, P8, O1, and O2 electrodes for kitchen utensils and tools, as well as the pooled data from these two categories, for each location. As in Experiment 1, we report only effects involving the correspondence between handle orientation and response hand. A complete summary of the ANOVA results is given in Appendix Table 6.

Fig. 5

Grand average P1 and N1 waveforms across the P7, P8, O1, and O2 electrodes for kitchen utensils and tools when they were centrally versus peripherally located in Experiment 2. In addition, pooled data were obtained by averaging the P1s and N1s across the two object categories for each location. Data are plotted as a function of whether the response hand and the handle orientation were corresponding (both left or both right) or noncorresponding (one left and one right). The unfilled rectangular boxes indicate the time windows used to assess the P1 effect (100–130 ms after stimulus onset) and the N1 effect (150–200 ms after stimulus onset). Negative is plotted upward, and time zero represents stimulus onset

For the P1 data, although the interaction of response hand and handle orientation (i.e., the correspondence effect) was not significant, F < 1.0, their three-way interaction with object location was significant, F(1, 19) = 4.88, p = .0396, ηp² = .20. The correspondence effect on the P1 was larger for peripheral objects (–0.313 μV) than for central objects (0.185 μV). Further analyses revealed that only the former effect approached statistical significance, |t(19)| = 1.83, p = .0823, with the latter tending in the opposite direction, t(19) = 1.54, p = .1404. Object category did not interact with these variables, Fs < 1.0.

The N1 data showed no interaction of response hand and handle orientation, F < 1.0. However, as for the P1, the three-way interaction of these variables with object location was significant, F(1, 19) = 5.31, p = .0327, ηp² = .22. The correspondence effect on the N1 was larger for peripheral objects (–0.378 μV) than for central objects (0.239 μV). Further analyses again revealed that the correspondence effect approached the .05 significance level for peripheral objects, |t(19)| = 1.82, p = .0849, but not for central objects, t(19) = 1.48, p = .1556. Object category did not interact with these variables, Fs < 1.0. These findings suggest that both the P1 and N1 were modulated by the correspondence between response hand and handle orientation only when objects were presented in a peripheral location, not when they were presented in a central location.

LRPs

It should be noted that the LRP data do not allow us to test between the affordance account and the spatial-coding account. Any object displayed in one visual field, as in the peripheral location condition, will produce different brain potentials over the left and right hemispheres, in addition to the response code activated by the object itself. The lateralized brain activity could simply reflect differences in stimulus energy between the left and right visual fields in the display (for further discussion, see Lien, Ruthruff, & Cornett, 2010; Lien, Ruthruff, Goodin, & Remington, 2008; Luck, 2005). Furthermore, attention allocation to a lateralized object would also elicit an increased negativity over the posterior, occipital, and temporal scalp contralateral to an attended stimulus, starting roughly 170 ms after stimulus onset (e.g., Eimer, 1996; Luck & Hillyard, 1990; Luck, Heinze, Mangun, & Hillyard, 1990). This attention allocation takes place prior to any processing of nonspatial features (e.g., Hillyard & Anllo-Vento, 1998). These factors would contribute to the LRP with the same polarity in Experiment 2 (note that object location and handle orientation always corresponded in our design). Thus, the LRP (the difference in brain potentials between the left and right hemispheres [C3 and C4 electrodes]) would be exaggerated in the peripheral location condition (due to the combination of visual processing of object locations and features) as compared with the central location condition. Despite this limitation, we report the data here for the sake of completeness.

The LRP data were analyzed as a function of object location (central vs. peripheral), object category (kitchen utensils vs. tools), and correspondence between response hand and handle orientation (corresponding vs. noncorresponding). The complete summary of the ANOVA is in Appendix Table 7. Figure 6 shows the LRPs for kitchen utensils and tools, as well as the pooled data from these two categories, for each location.

Fig. 6

Grand average lateralized readiness potential (LRP) waveforms for kitchen utensils and tools when they were centrally versus peripherally located in Experiment 2. In addition, pooled data were obtained by averaging the LRPs across the two object categories for each location. Data are plotted as a function of whether the response hand and the handle orientation were corresponding (both left or both right) or noncorresponding (one left and one right). Negative is plotted upward, and time zero represents stimulus onset

The correspondence effect approached significance for the first two time windows (0–100 and 100–200 ms), Fs(1, 19) ≥ 3.86, ps = .06, ηp²s ≥ .17. The effect was more pronounced for the peripheral than for the central objects during the 100- to 200-ms window. This finding suggests that the LRPs were modulated by object location. Again, the moderate modulation of the LRPs by correspondence early in time (0–200 ms after stimulus onset) cannot be unambiguously attributed to response activation or selection.

Discussion

Replicating the findings of Experiment 1, central objects produced no significant correspondence effect in RTs (the 95% confidence interval was 3 ± 6 ms, averaged across both categories). Consistent with the behavioral data, the P1 and N1 showed no modulation by the correspondence between response hand and handle orientation. Once more, these findings are inconsistent with the grasping affordance account. Unlike central objects, peripheral objects elicited a large correspondence effect in RTs (25 ± 8 ms), similar in size to a standard Simon effect and to the object-based Simon effects obtained in Cho and Proctor’s (2010, 2011) studies with kitchen utensils whose handle locations varied. The P1 and N1 were modulated by the correspondence between response hand and handle orientation for peripheral objects. Thus, the presence or absence of the correspondence effect strongly depends on the object location, supporting the spatial-coding account.

For the peripheral objects, the handle orientation (left vs. right) and object location (left vs. right) always corresponded. One could still argue that the observed correspondence effect was due primarily to the joint influence of handle orientation and object location. Although a joint effect seems unlikely, given the absence of a correspondence effect of handle orientation in the central object condition, it is important to rule out this alternative explanation. Consequently, we conducted a behavioral experiment including only the peripheral object condition but with the handle orientation varied (see also Symes, Ellis, & Tucker, 2005). That is, when the object was presented in the left location, the handle was oriented toward the left for a random half of the trials and toward the right for the other half; the same was true when the object was presented in the right location. If the correspondence effect is due primarily to the spatial coding of the object location, and not to the joint influence of handle orientation and object location, then one would expect similar correspondence effects regardless of handle orientation.

Data from this control experiment with 92 new participants showed a significant overall correspondence effect of 12 ± 3 ms between object location and response location, F(1, 91) = 56.91, p < .0001, ηp² = .38. Critically, this effect did not depend on handle orientation, F(1, 91) = 2.55, p = .1139, ηp² = .03; the effect was 14 ± 4 ms and 10 ± 4 ms for the corresponding and noncorresponding handle orientations, respectively. These variables did not interact with category, either, F < 1.0. These results replicate Symes et al.'s (2005) Experiment 1, in which the correspondence effect between object location and response location did not depend on handle orientation. Thus, the finding from our control experiment converges on the conclusion that the object-based correspondence effect is due primarily to object location, not object affordance.

Experiment 3

In Experiment 2, the correspondence effect was observed only when the object was presented to the left or right (the peripheral condition), not when the object was presented centrally (the central condition). Along with the finding in Experiment 1, the results suggest that the object-based correspondence effect is primarily driven by object location (the spatial-coding account; e.g., Bub & Masson, 2010; Cho & Proctor, 2010) rather than by the orientation of an object’s grasping component (the object affordance account; Goslin et al., 2012; Iani et al., 2011; Pellicano et al., 2010; Tucker & Ellis, 1998).

Experiment 3 further tested our conclusion by employing Cho and Proctor's (2011) approach of using only central presentation, but with either the base of the object or the whole object centered. In the base-centered condition, the base of the object was in the center, so the handle was clearly positioned to the left or the right side. In the object-centered condition, the whole object was centered, so the handle was positioned near the vertical midline, as in the central object condition of Experiment 2. This presentation was similar to that in Experiment 1 and in Goslin et al.'s (2012) study, except for the color and size of the objects. If object affordance is the key factor triggering the correspondence effect, as suggested by the affordance account, then similar correspondence effects should be observed in both conditions. In contrast, the spatial-coding account predicts the presence of the correspondence effect in the base-centered condition but not in the object-centered condition, because of the lack of a spatial code in the latter case.

Method

Participants

We recruited 24 new participants from the same participant pool as in Experiment 1. Two of the participants’ EEG data failed to record. Therefore, data from 22 participants (12 females, 10 males), mean age of 21 years (range: 18–30), were included in the final data analyses. All were right-handed and reported having normal or corrected-to-normal visual acuity.

Apparatus, stimuli, and procedure

The tasks, stimuli, and equipment were the same as in Experiment 2, except for the following changes. The object always appeared at or near the center of the screen, depending on the condition. The object-centered condition was identical to the central object condition of Experiment 2 (see also Exp. 1); thus, the handle position was not clearly to the left or right. The base-centered condition was similar to the object-centered condition, except that the base of the object was presented centrally, so that the handle was positioned explicitly to the left or right. Again, within each condition, half of the handles were oriented to the left and half to the right. Object condition (object-centered vs. base-centered) was randomly intermixed within blocks. As in Experiment 1, each participant performed two sessions with different S–R mappings (left hand for kitchen utensils and right hand for tools vs. right hand for kitchen utensils and left hand for tools). Session order (i.e., response-mapping order) was counterbalanced between participants.
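The difference between the two centering schemes can be illustrated with a small sketch of how an object image might be shifted horizontally before display. Everything below is hypothetical (in particular, the base_fraction parameter standing in for each object's actual geometry); it is meant only to make the object-centered versus base-centered manipulation explicit.

def horizontal_shift_px(object_width_px, handle_side, centering, base_fraction=0.4):
    # Returns the horizontal shift (positive = rightward) applied to the image so
    # that either the whole object or its base is aligned with screen center.
    # handle_side : "left" or "right"; centering : "object" or "base".
    # base_fraction is a hypothetical fraction of the image width occupied by the base.
    if centering == "object":
        return 0   # whole object centered; handle not clearly left or right of center

    # Base-centered: put the midpoint of the base region at fixation, which leaves
    # the handle clearly to the left or right of center.
    if handle_side == "left":            # handle on the left, base on the right
        base_center_from_left = object_width_px * (1 - base_fraction / 2)
    else:                                # handle on the right, base on the left
        base_center_from_left = object_width_px * base_fraction / 2
    return round(object_width_px / 2 - base_center_from_left)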

Results

The data analysis was similar to that of Experiment 2. No trials fell outside the range of the pre-determined RT cutoff (<100 ms). Rejection of trials with EEG artifacts led to the further elimination of 6% of trials, but no more than 20% for any participant.

Behavioral data analyses

The behavioral data were analyzed as a function of object condition (object-centered vs. base-centered), object category (kitchen utensils vs. tools), and response-hand/handle-orientation correspondence (corresponding vs. noncorresponding). All factors were within-subjects variables. Table 3 shows the mean RTs and PEs for each of these conditions.

Table 3 Mean response times (RTs, in milliseconds) and proportions of errors (PEs) as a function of object condition (object-centered vs. base-centered), object category (kitchen utensils vs. tools), and response-hand/handle-orientation correspondence (corresponding vs. noncorresponding) in Experiment 3

An overall correspondence effect of 8 ms was significant, F(1, 21) = 12.19, p < .01, ηp² = .37; the effect was larger for tools (11 ms) than for kitchen utensils (4 ms), F(1, 21) = 5.54, p = .0284, ηp² = .21. Critically, the correspondence effect was larger for the base-centered condition (16 ± 6 ms, 95% confidence interval) than for the object-centered condition (–1 ± 5 ms), F(1, 21) = 28.09, p < .0001, ηp² = .57, with the former being significant, t(21) = 5.67, p < .0001, but not the latter, t < 1.0. The mean RT was shorter for kitchen utensils (509 ms) than for tools (524 ms), F(1, 21) = 30.99, p < .0001, ηp² = .60.

A significant correspondence effect of .011 was observed in the PE data, F(1, 21) = 11.00, p < .01, ηp² = .34. As in the RT data, the correspondence effect was larger for the base-centered condition (.025 ± .011) than for the object-centered condition (–.002 ± .007), F(1, 21) = 23.53, p < .0001, ηp² = .53. Further analyses revealed that the correspondence effect was significant for the base-centered condition, t(21) = 4.70, p < .001, but not for the object-centered condition, t < 1.0. Kitchen utensils and tools produced similarly large correspondence effects in the base-centered condition, whereas in the object-centered condition the effect was relatively small for tools and even reversed for kitchen utensils, F(1, 21) = 6.19, p = .0213, ηp² = .23 (see Table 3). No other effects on RTs or PEs were significant.

ERP data analyses

P1 and N1

We analyzed the P1 and N1 data as a function of object condition (object-centered vs. base-centered), object category (kitchen utensils vs. tools), handle orientation (left vs. right), response hand/location (left vs. right), electrode site (parietal [P7, P8] vs. occipital [O1, O2]), and electrode hemisphere (left [P7, O1] vs. right [P8, O2]). All factors were within-subjects variables. Figure 7 shows the P1 and N1 averaged across the P7, P8, O1, and O2 electrodes for kitchen utensils and tools, as well as the pooled data from these two categories, for each object condition. As in Experiment 2, we report only effects involving the correspondence between handle orientation and response hand. A complete summary of the ANOVA results is given in Appendix Table 8.

Fig. 7

Grand average P1 and N1 waveforms across the P7, P8, O1, and O2 electrodes for kitchen utensils and tools in the object-centered versus base-centered conditions in Experiment 3. In addition, pooled data were obtained by averaging the P1s and N1s across the two object categories for each condition. Data are plotted as a function of whether the response hand and the handle orientation were corresponding (both left or both right) or noncorresponding (one left and one right). The unfilled rectangular boxes indicate the time windows used to assess the P1 effect (100–130 ms after stimulus onset) and the N1 effect (150–200 ms after stimulus onset). Negative is plotted upward, and time zero represents stimulus onset

For the P1 data, neither the interaction of response hand and handle orientation (i.e., the correspondence effect) nor its interaction with object condition was significant, Fs < 1.0. The correspondence effect on the P1 was 0.009 μV for the object-centered condition and –0.036 μV for the base-centered condition. Object category did not interact with these variables, Fs < 1.0. Further analyses revealed that the correspondence effect was not significant for either kitchen utensils or tools in either object condition, |ts| < 1.0. The correspondence effects on the P1 were 0.054 μV and –0.037 μV for kitchen utensils and tools, respectively, in the object-centered condition, and –0.133 μV and 0.060 μV, respectively, in the base-centered condition.

As with the P1, the N1 data showed neither an interaction between response hand and handle orientation nor any higher-order interaction of these variables with object condition and/or category, Fs < 1.10. Again, further analyses revealed that the correspondence effect was not significant for either kitchen utensils or tools in either object condition, |ts| < 1.0; the correspondence effects on the N1 were 0.203 μV and 0.078 μV for kitchen utensils and tools, respectively, in the object-centered condition, and –0.106 μV and 0.108 μV, respectively, in the base-centered condition. These findings suggest that neither the P1 nor the N1 was significantly modulated by the correspondence between response hand and handle orientation in either the object-centered or the base-centered condition.

LRPs

The LRP data were analyzed as a function of object condition (object-centered vs. base-centered), object category (kitchen utensils vs. tools), and correspondence between response hand and handle orientation (corresponding vs. noncorresponding). The complete summary of the ANOVA is in Appendix Table 9. Figure 8 shows the LRPs for kitchen utensils and tools, as well as the pooled data from these two categories, for each condition.

Fig. 8

Grand average lateralized readiness potential (LRP) waveforms for kitchen utensils and tools in the object-centered versus base-centered conditions in Experiment 3. In addition, pooled data were obtained by averaging the LRPs across the two object categories for each condition. Data are plotted as a function of whether the response hand and the handle orientation were corresponding (both left or both right) or noncorresponding (one left and one right). Negative is plotted upward, and time zero represents stimulus onset

The correspondence effect on the LRPs was significant for every 100-ms time window between 100 and 500 ms, Fs(1, 21) ≥ 6.60, ps ≤ .0179, ηp²s ≥ .24. The effect was more pronounced for the base-centered than for the object-centered condition during the 200- to 500-ms windows, Fs(1, 21) ≥ 4.73, ps ≤ .0412, ηp²s ≥ .18. The correspondence effects on the LRPs did not differ significantly between kitchen utensils and tools in the 0- to 400-ms time windows, Fs < 1.0. These findings suggest that the early LRPs were modulated by object location rather than by category when the whole object or the object base was centrally located.

Discussion

Replicating the findings of Experiments 1 and 2, the correspondence effect was not evident when the midline of the object was presented at the screen center (–2 ± 5 ms). Nevertheless, a large, significant effect was observed when the base of the object was centrally located (16 ± 6 ms), so that the handle was clearly positioned to either the left or the right of center. This finding replicates that of Cho and Proctor (2011) and is consistent with the spatial-coding account, which predicts that the correspondence effect should be absent in the object-centered condition but present in the base-centered condition. The absence of an effect of the correspondence between response hand and handle orientation on the P1 and N1 in either condition provides no evidence that an intended grasping action modulates visual attention.

General discussion

The aim of the present study was to examine whether object-based correspondence effects with keypress responses are driven primarily by the orientation of an object's graspable component or by the object's location. According to Goslin et al. (2012), the correspondence effect is the result of vision–action binding in which the response is primed by deploying visual attention toward the object's graspable component. Consistent with this claim, they found correspondence effects on RT and on the sensory-evoked P1 and N1 ERPs, which have been assumed to reflect early processing of visual–spatial attention. However, their observed correspondence effects in both the RT and ERP measures were small (for tools) or even nonexistent (for kitchen utensils).

The close replication of Goslin et al.’s (2012) study in our Experiment 1 indeed yielded no significant correspondence effects in RTs (overall, 1 ± 4 ms). Consistent with the behavioral data, the P1 and N1 ERPs were also not modulated by correspondence between response hand and object handle orientation. Thus, these ERP results are inconsistent with the hypothesis that, when making keypress responses, visual processing is modulated by the intended object grasping action (e.g., Goslin et al., 2012; Handy, Grafton, Shroff, Ketay, & Gazzaniga, 2003).

Likewise, for centrally presented objects in Experiments 2 and 3, we found no significant correspondence effect on any of the measures. The pooled data for the centrally located object condition from a total of 68 participants across all three experiments revealed that the correspondence effect on RTs was negligible (1 ± 3 ms, 95% confidence interval) and not significant, F < 1.0, nor was the interaction of correspondence and object category significant, F < 1.0. The effects were only –1 ± 4 ms for kitchen utensils and 2 ± 4 ms for tools. Consistent with the RT data, the pooled P1 and N1 data showed neither a main effect of correspondence, Fs ≤ 1.38, nor an interaction with category, Fs < 1.0. Thus, even with a slightly larger sample size (68 in our study vs. 65 in Goslin et al., 2012) and twice as many trials (1,008 in our Exp. 1 vs. 504 in theirs), we found no evidence for the affordance account of the correspondence effect. It should also be noted that the correspondence effect reported in Goslin et al.'s (2012) single experiment was only 5 ms overall, with the effect being observed only for tools (10 ms) and not for kitchen utensils. Using G*Power 3 (Faul, Erdfelder, Lang, & Buchner, 2007), we estimated that the power to observe an effect of 5 ms (the size reported by Goslin et al.; two-tailed test, .05 level) with our pooled sample size of 68 was .94. Thus, our nonreplication of their single-experiment finding cannot be attributed to inadequate power to detect a meaningful effect. It is possible that Goslin et al.'s results were due to a Type I error.
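For reference, a power estimate of this kind can be approximated with the TTestPower class in statsmodels, given a standardized (within-subject) effect size. The snippet below is a sketch only: the effect size d is a hypothetical value (a 5-ms mean effect divided by an assumed 11.5-ms within-subject standard deviation of that effect), not a quantity reported by either study.

from statsmodels.stats.power import TTestPower

# Hypothetical standardized effect size; the 11.5-ms SD is an assumption.
effect_size_d = 5.0 / 11.5

power = TTestPower().power(effect_size=effect_size_d,
                           nobs=68,              # pooled sample size
                           alpha=0.05,
                           alternative="two-sided")
# Under these assumed values, power is approximately .94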

Experiments 2 and 3 provided further evidence that object locations rather than handle orientations produce a correspondence effect. In Experiment 2, the effect was observed when objects appeared peripherally but not centrally. In Experiment 3, the effect emerged only for the base-centered objects, in which the handle was clearly positioned to the left or right of center, a finding in agreement with Cho and Proctor’s (2011) results obtained with door-handle stimuli. The results from these three experiments contradict Goslin et al.’s (2012) claim and provide no evidence that correspondence effects result from an afforded grasping action. Instead, they show that spatial coding (i.e., object location) is the primary contributor to correspondence effects obtained with keypress responses.

Two findings were notable from the ERPs in the present study. First, in Experiment 2 the presence of P1 and N1 modulations by the correspondence for the peripheral objects and their absence for the central objects suggest that object location, not handle orientation, guides the allocation of visual/spatial attention. Although one might also have expected the base-centered condition to show the modulation of P1 and N1 by correspondence in Experiment 3, the object location (left and right of center) was not as distinct as it was in the peripheral object condition in Experiment 2. These findings are in line with Luck et al.’s (1990) study showing that P1 and N1 were enhanced in response to attended unilateral stimuli. They concluded that these ERP components reflect early visual processing of selections on the basis of stimulus location.

Second, the overall N1 effect was much larger in Experiments 2 and 3 than in Experiment 1. Previous studies have shown that the N1 effect is mediated by stimulus repetition—a decreased N1 effect associated with repeating auditory stimuli but an increased N1 effect associated with repeating visual stimuli (e.g., Olofsson & Polich, 2007; Sable, Low, Maclin, Fabiani, & Gratton, 2004). Given that our stimuli were presented visually and more often in Experiments 2 and 3 than in Experiment 1 (45 times vs. 12 times, respectively), one would expect increased N1 effects in Experiments 2 and 3, as we observed.

In conclusion, the present study confirms that effects on performance attributable to the correspondence of object locations with keypresses occur when both the locations and keypresses can be spatially coded as left or right. Moreover, in agreement with findings of Bub and Masson (2010) and Cho and Proctor (2011, in press), our results provide no evidence that object-handle orientations produce such correspondence effects with keypresses when the displayed object is centered. The pooled data from the present three experiments also further rule out the possible effect of handle orientation. Not surprisingly, correspondence effects also tend to appear in the electrophysiological measures of P1 and N1 when the objects are displayed in distinct left and right locations, but not when they are centered. Correspondence effects for handle location and keypresses do occur, however, when the object is presented in such a way that the handle itself is clearly positioned to the left or right side of the center (Cho & Proctor, 2010, 2011, in press), but this is because the handle produces a location code similar to that generated by varying the left–right location of the object in the present Experiments 2 and 3.

Goslin et al. (2012) emphasized intention in their account, stating that processing “is modulated by the action associations of objects and the intentions of the viewer” (p. 152). It should be noted, however, that the viewer’s intentions were to produce left or right keypresses and not to grasp the pictures of objects that are shown on the screen. These intentions therefore would prime left or right stimulus locations and weight these location codes more heavily in the decision process (e.g., Yamaguchi & Proctor, 2012). That intentions to make keypress responses do not as a rule prime the processing of object properties related to grasping was articulated clearly by Bub and Masson (2010) in explaining why they found no object-based correspondence effects in their Experiment 2, which used keypress responses:

Apparently, a key press is too far removed from any action compatible with the irrelevant object to evoke motor representations that favor one hand over another. Moreover, depressing a finger already resting on a response key does not involve the process of transporting the hand to a target location in space and forming the hand shape to fit that target. (p. 349)

We fully agree that “intentions of the viewer” are important in human information processing, but a wealth of data indicate that keypress responses are coded mainly in terms of spatial location (Proctor & Vu, 2006). Both the behavioral and electrophysiological evidence in the present study support the view that the “automatic” processing of objects that occurs (resulting in correspondence effects) in tasks requiring keypresses is a consequence of overlap of the spatial codes for the stimuli with those for the responses (e.g., Kornblum, Hasbroucq, & Osman, 1990), as in other spatial compatibility effects.