Our ability to comprehend and produce both spoken and written words requires bidirectional links between concepts (meanings) and their corresponding phonological (spoken) and orthographic (written) lexical forms. In language comprehension, words are mapped onto meanings. In language production, meanings are mapped onto words. The present study examined the extent to which these bidirectional mappings occur automatically. Specifically, we asked whether concepts automatically activate their corresponding phonological and orthographic forms, and, if so, whether these lexical activations modulate conceptual processes via feedback connections.

Models of lexical access vary in the way they capture the relationship between conceptual-semantic representations and their associated phonological and/or orthographic lexical codes. Modular, feed-forward models (e.g., Fodor, 1983; Forster, 1979; Levelt, 2001) assume that lexical and conceptual information is processed sequentially via separate mechanisms. According to this serial view, in word comprehension, meanings are accessed only when phonological or orthographic form-based processes are completed. As a result, higher level conceptual processes do not influence lower level phonological or orthographic processes. Similarly, in word production, it is only after meaning-based processes are completed that phonological or orthographic form-based processes begin. As a result, lower level phonological or orthographic processes do not influence higher level, semantic processes. In contrast, interactive models (Grainger & Ferrand, 1994; Grainger & Holcomb, 2009; Harm & Seidenberg, 2004; Seidenberg & McClelland, 1989) allow information from different domains (lexical and conceptual) to be shared freely throughout the cognitive system. As a result, phonological and orthographic form-based processes may modulate conceptual meaning-based processes, and vice versa.

In particular, models of speech production distinguish between two levels of lexical representations: an abstract lexical representation, termed “lemma,” that contains information about the word’s semantic (and syntactic) properties, and a lexical form representation that contains information about the word’s phonological properties (e.g., Dell, 1986; Jescheniak & Levelt, 1994; Levelt, 1989; but see Caramazza, 1997; Miozzo & Caramazza, 2005, for arguments against this distinction). Modular models of speech production (e.g., Levelt, 1989, 1999; Levelt, Roelofs, & Meyer, 1999) assume that words are retrieved in two separate stages: In the first stage, concepts activate their corresponding lemmas. In the second stage, the lemma selected for speech is phonologically encoded. Importantly, the activation of the word form begins only when a single lemma is selected for speech. Thus, according to this serial model, while concepts automatically activate their corresponding lemmas, only lemmas selected for speech will activate their corresponding lexical forms (e.g., Schriefers, Meyer, & Levelt, 1990). Alternatively, interactive models of speech production (e.g., Dell, 1986; Dell, Schwartz, Martin, Saffran, & Gagnon, 1997) assume that lemmas and word forms are bidirectionally connected, such that activation automatically spreads from concepts to lemmas to word forms, and vice versa. Thus, according to this interactive model, every concept that is activated in a speaker’s mind automatically activates not only its corresponding lemma but also its corresponding word form (e.g., Cutting & Ferreira, 1999; Meyer & Damian, 2007; Morsella & Miozzo, 2002; Navarrete & Costa, 2005).

To test the predictions of these two models, several studies examined whether exposure to a pictorial object results in the automatic activation of its name (i.e., its corresponding phonological form), even when naming is not required (e.g., Gorges, Oppermann, Jescheniak, & Schriefers, 2013; Jescheniak, Schriefers, Garrett, & Friederici, 2002; Meyer, Belke, Telling, & Humphreys, 2007; Zelinsky & Murphy, 2000). Empirical findings obtained in these studies, however, have not been monolithic. On the one hand, several studies have shown that the activation of the phonological form is restricted to tasks that require explicit or implicit naming (e.g., Jescheniak et al., 2002; Zelinsky & Murphy, 2000). For example, in an event-related potential (ERP) study conducted by Jescheniak et al. (2002), participants were presented with pictures and were asked either to name the picture (a linguistic task) or to make a natural size judgment (a nonverbal task). Before the performance of each task, an auditory target word was presented, which was either semantically related to the picture, phonologically related to it, or unrelated to it. While semantic effects (a significantly less negative ERP waveform in the related condition compared to the unrelated condition) were obtained in both tasks, phonological effects were obtained only in the naming task. Similarly, Zelinsky and Murphy (2000) measured the time participants spent looking at faces whose names they had just learned. Some of the names were long (three syllables) and some short (one syllable). Phonological effects (longer fixations on faces associated with longer names) were obtained in a memory task that required implicit naming. However, these word length effects were not found in a visual search task, whose completion did not require naming of any sort. Thus, consistent with modular, two-stage models of speech production, such findings have been taken to show that objects automatically activate their conceptual-semantic representations, but not their corresponding phonological representations.

On the other hand, recent studies suggest that objects’ names are automatically activated, even when naming is not explicitly (or implicitly) required. For example, in an eye movement tracking experiment, Meyer et al. (2007) used a procedure similar to that of Zelinsky and Murphy (2000). However, instead of unfamiliar faces with newly learned names, all stimuli were familiar objects. In addition, unlike in Jescheniak et al.’s experiment, the prime and target pairs did not merely have similar names but were phonologically identical (i.e., homophones such as bat–animal and bat–baseball). In this study, participants were presented with a target picture followed by a four-object search array, and their task was to indicate whether the target (e.g., bat–animal) was present in the array. In the critical trials, the search array contained either a phonological competitor (bat–baseball) or a semantic competitor. Compared to unrelated objects, both competitors attracted the participants’ visual attention and thereby delayed the participants’ decision. Thus, consistent with interactive models of speech production, these results indicate that objects automatically activate not only their corresponding semantic codes but also their corresponding phonological forms.

Although there is some evidence that speakers access the names of objects even when they are not required to name them (e.g., Gorges et al., 2013; Kuipers & La Heij, 2009; Mani & Plunkett, 2010, 2011; Meyer et al., 2007; Morsella & Miozzo, 2002; Navarrete & Costa, 2005), the nature of the information that is automatically accessed remains undetermined. As mentioned above, previous studies focused mainly on the automatic activation of phonological codes. However, because in most cases words that sound the same are also spelled the same (e.g., bat), it is impossible to determine which information is activated automatically: phonological (sound), orthographic (spelling), or both. Interactive “triangle” models of reading (e.g., Harm & Seidenberg, 2004; Seidenberg & McClelland, 1989) assume a bidirectional flow of activation between orthographic, phonological, and semantic codes. Thus, according to this interactive view, semantic representations, once activated, will automatically activate both their corresponding phonological and orthographic codes. The goal of our first experiment was therefore to examine whether concepts automatically activate not only their corresponding phonological forms but also their corresponding orthographic forms. In addition, while there is some evidence that concepts automatically activate their corresponding lexical forms (e.g., Meyer & Damian, 2007), the degree to which these lexical activations influence conceptual processing is still under investigation (see Harley, 2008, for a review). Thus, a second aim of this experiment was to examine not only whether concepts automatically activate their corresponding phonological and orthographic forms but also whether these lexical forms, once activated, modulate conceptual processes via feedback connections.

Experiment 1

To examine lexical effects in conceptual processing, participants were asked to decide whether two pictorial targets were semantically related or not. We compared response latencies and accuracy for semantically unrelated pairs in two conditions: ambiguous and unambiguous. In the ambiguous condition, each pair of pictures represented two distinct meanings of an ambiguous word (e.g., bat–animal and bat–baseball). In the unambiguous condition, the first picture (e.g., bat–animal) was replaced with an unambiguous control from a similar semantic category (e.g., eagle). In both conditions, the two pictures were semantically unrelated. However, in the unambiguous condition, the two pictures were both semantically and lexically unrelated (e.g., eagle, baseball bat), while in the ambiguous condition, the two pictures shared the same lexical form (e.g., bat).

To perform the task, participants were only required to activate conceptual-semantic representations associated with the pictorial stimuli. If nonlinguistic semantic judgments can be performed without activating lexical codes, then there should be no difference between lexically ambiguous and lexically unambiguous pairs, as both pairs are semantically unrelated. However, if semantic codes automatically activate their corresponding lexical forms, and if both codes (conceptual and lexical) influence the semantic decision process, then “ambiguous” picture pairs (bat–animal, bat–baseball), which are lexically related (i.e., the two pictures are associated with the same lexical form), will be more difficult to judge as semantically unrelated, relative to “unambiguous” picture pairs (e.g., eagle, baseball bat), which are semantically and lexically unrelated (i.e., associated with two different words). Thus, slower and less accurate responses to lexically ambiguous pairs (in comparison to their lexically unambiguous controls) would indicate automatic lexical activation during nonlinguistic conceptual processing (for a similar prediction with cross-language activations, see Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011).

Importantly, in order to distinguish between orthographic and phonological effects, three types of Hebrew ambiguous words were used, all very common in Hebrew (see Footnote 1): (1) Homonyms – two different concepts associated with a single phonological and orthographic representation (e.g., bat). (2) Homophones – two different concepts associated with a single phonological representation but with two different orthographic codes (e.g., flower/flour). (3) Homographs – two different concepts associated with a single orthographic representation but with different phonological codes (e.g., tear: /tɪər/, /tɛər/). Table 1 presents Hebrew examples of the three types of ambiguity.

Table 1 Hebrew examples of the three types of ambiguity

Differences between these three types of ambiguity would indicate the degree of phonological and/or orthographic effects. If concepts automatically activate their corresponding phonological codes but not their corresponding orthographic codes, then ambiguity effects (differences between ambiguous and unambiguous pairs) will be found in the case of homonyms and homophones that share a single phonological representation, but not in the case of homographs that are only orthographically related. Similarly, if concepts automatically activate their corresponding orthographic codes but not their corresponding phonological codes, then ambiguity effects will be found in the case of homonyms and homographs that share a single orthographic representation, but not in the case of homophones that are only phonologically related. However, if concepts automatically activate both their spoken and written lexical forms, then larger ambiguity effects are expected for homonyms (which are both orthographically and phonologically related) than for homophones (which are only phonologically related) or homographs (which are only orthographically related).

Method

Participants

A total of 36 students from Tel Aviv University, 20 males and 16 females, ages 20 to 40 years (M = 27.00, SD = 5.21), participated in exchange for 20 NIS (approx. $6.5). They were all native speakers of Hebrew, with normal or corrected-to-normal vision. Another 124 participants, from the same population, were recruited for the pretesting of the stimuli.

Materials

The experimental materials consisted of 84 pictures, representing the two meanings of 42 ambiguous Hebrew words – 14 homonyms (e.g., bat), 14 homophones (e.g., flower/flour), and 14 homographs (e.g., tear: /tɪər/, /tɛər/). Word length was controlled for, and two pretests were performed in order to ensure that the three types of ambiguous words were balanced in terms of frequency and polarity. The first pretest measured subjective overall word-form frequency. Forty judges were presented with the ambiguous words and were asked to rate their frequency on a 5-point scale, ranging from 1 (never encountered) to 5 (highly frequent). In order to test overall word-form frequency, homonyms (bat) and homographs (tear) were presented visually, and homophones (flower/flour) were presented orally. The average ratings on the frequency scale were 3.84, 3.86, and 3.95 for homonyms, homophones, and homographs, respectively, and did not differ, F < 1.

An additional pretest established the degree and direction of polarity of the ambiguous words. A booklet containing the ambiguous words and their paraphrased meanings was presented to 20 participants, who were instructed to choose the first meaning that came to their minds when presented with the ambiguous form. The dominant meaning of an ambiguous word was defined as the meaning chosen by more than half of the participants. Overall, the selected corpus was polarized, with the dominant meaning being chosen with means of .79, .79, and .87 for homonyms, homophones, and homographs, respectively. An analysis of variance revealed no reliable difference between the three types of ambiguity, F < 1. The three types of ambiguous words were also compared in terms of length. The means for number of syllables (homonyms: 2, homophones: 1.86, homographs: 2) and number of letters (homonyms: 3.71, homophones: 3.43, homographs: 3.50) did not differ, Fs < 1.

Pictures representing both meanings of each ambiguous word were selected from an online image bank. In order to make sure that all pictures truly activated their target ambiguous word, a pretest was conducted in which 44 participants were asked to name the pictures, and the percentage of naming using the desired, ambiguous word was recorded. Pictures that elicited the required response from less than 90 % of the participants were replaced by new pictures and retested. Within each pair, the picture that elicited a higher percentage of “correct” naming was selected as the picture that appeared first in the online experiment (henceforth, “Meaning1”). The average “correct” naming for “Meaning1” was .97 (.98 for homonyms, .97 for homophones, and .99 for homographs), with no significant difference between the three sets, F < 1. Since the order of the meanings within each pair did not correlate with dominance (i.e., the first picture could be either the dominant or the subordinate meaning of the ambiguous word), the results of the first polarity test were reanalyzed to indicate the proportion of dominance of “Meaning1.” An analysis of variance revealed that the three sets of ambiguous words were not significantly different, with means of .70, .79, and .85 for homonyms, homophones, and homographs, respectively, F(2, 39) = 1.69, p = .32, MSE = .21.

Unambiguous control pairs (henceforth, “Control”) were created by replacing the first picture (“Meaning1”) in each ambiguous pair (e.g., a picture of the animal bat) with an object from a similar semantic category that bore no lexical resemblance to the ambiguous word (e.g., a picture of an eagle). The ambiguous “Meaning1” pictures (bat–animal) and their “Control” pictures (eagle) were equated in terms of ease of object recognition and degree of familiarity. Twenty participants were asked to rate how easy it was to recognize the object in each picture, on a scale of 1 (very difficult to recognize) to 5 (very easy to recognize). The average ratings on the ease-of-recognition scale for the three types of ambiguous pictures (4.60, 4.58, and 4.64) and their unambiguous “Controls” (4.72, 4.44, and 4.86) did not differ: There was no main effect of lexical ambiguity (ambiguous vs. unambiguous names), F < 1, or of ambiguity type (homonyms, homophones, or homographs), F(1, 39) = 1.4, p = .26, MSE = .29, and no interaction was found between the two, F < 1.

The same participants were also asked to rate the familiarity of each object on a scale of 1 (very unfamiliar) to 5 (very familiar). Familiarity was defined as “the degree to which you come in contact with or think about the object in the picture” (Alario & Ferrand, 1999, p. 533). The average familiarity ratings for the three types of ambiguous “Meaning1” pictures (4.37, 4.41, and 4.62) and their unambiguous “Controls” (4.18, 4.30, and 4.45) did not differ: There was no main effect of lexical ambiguity, F(1, 39) = 2.47, p = .12, MSE = .20, or of the type of ambiguity, F(1, 39) = 2.07, p = .14, MSE = .23, and no interaction was found between these two variables, F < 1. Table 2 presents a summary of the pretest results.

Table 2 Summary of descriptive and inferential statistics of the pretests

The total experimental materials consisted of 126 pictures: 42 “Meaning1s,” 42 “Controls,” and 42 “Meaning2s” (the second picture, which remained the same in both conditions), creating 84 semantically unrelated pairs, half of which were lexically related “ambiguous” pairs (“Meaning1” + “Meaning2”) and half lexically unrelated “unambiguous” pairs (“Control” + “Meaning2”). Table 3 presents Hebrew examples of the six experimental conditions. (For a complete list of words used in this experiment, see Appendix Tables 4, 5, and 6).

Table 3 Examples of the six experimental conditions: 3 Ambiguity Types (homonyms, homophones, or homographs) × 2 Lexical Conditions (ambiguous pairs or unambiguous controls)

In addition to the experimental stimuli, two types of fillers were added. First, given that the 84 experimental pairs (42 ambiguous pairs and their 42 controls) were always semantically unrelated, 126 images were used to create two sets of 42 semantically related pairs that shared the same second picture (84 related pairs). Second, because in the experimental sets the second picture was always paired with a semantically unrelated object, whereas in these filler sets the second picture was always paired with a related object, participants might notice that whenever a target picture was encountered a second time, it required the same response it had the first time. Thus, to ensure that relatedness could not be determined simply from the identity of the second picture, another group of 120 images was used to create two sets of 40 filler pairs, in which the second pictorial object was paired once with a semantically related object and once with a semantically unrelated object. The total materials thus consisted of 372 pictures, creating 248 pairs, half of which were semantically related (the 124 related filler pairs) and half of which were semantically unrelated (42 ambiguous pairs, 42 control pairs, and 40 unrelated filler pairs).

Apparatus

The experiment was constructed and run using E-Prime software version 10.242, on an HP Compaq Elite 8300 Microtower desktop computer. Response latencies were collected using a PST Serial Response Box. The pictures were approximately 12.5 cm × 13.5 cm in size, fitted into a white rectangle of 14.5 cm × 15.5 cm, and presented in the center of the screen on a gray background.

Design and procedure

A 2 × 3 factorial design was used with Lexical Condition (ambiguous/unambiguous) and Ambiguity Type (homonym/homophone/homograph) as independent variables. In the ambiguous condition, participants saw a “Meaning1” picture followed by a “Meaning2” picture, and in the unambiguous condition, they saw a “Control” picture followed by a “Meaning2” picture (see Table 3).

Participants were tested individually in a sound-attenuated room, seated approximately 60 cm from the screen. At the beginning of the experiment, participants were instructed to decide, as quickly and accurately as possible, whether two pictures presented one after the other were semantically related or not, and to press a “yes” or “no” key accordingly. Semantic relatedness was defined as either a categorical or an associative relation. Four examples of semantically related and unrelated pairs were introduced, followed by a practice session of 20 pairs, half related and half unrelated. All trials had the same sequence of events (see Fig. 1). At the start of each trial, participants were presented with a central fixation marker for 2,000 ms. The offset of the marker was followed by the first picture, which remained on the screen for 1,500 ms. A blank interval of 500 ms followed, and then the second picture was presented until the subject responded or until 3,000 ms had elapsed. After a blank screen of 1,000 ms, the next trial began. If the second picture expired without a response, a tone signified the move to the next trial. Tonal feedback was also provided for incorrect decisions (responding “related” to unrelated pairs or “unrelated” to related ones). Response latencies were measured from the onset of the second picture. The stimuli were presented in a random order, with the restriction that no more than four semantically related pairs (“yes” responses) or semantically unrelated pairs (“no” responses) occurred in sequence.

Fig. 1 The sequence of events in a trial

Each participant completed two experimental sessions of 124 trials each, separated by a 5-minute break. Each “Meaning2” picture was presented twice in the experiment, once in each session. In one session it was preceded by “Meaning1” and in the other by its “Control.” The same division applied to the filler sets. The order of sessions was counterbalanced across participants, so that each “Meaning1” picture had the same chance as its “Control” for appearing in the first or second session. Each session was divided into four blocks of 31 trials each, separated by short rest breaks. The participant ended the breaks and resumed the test by pressing a key on the response box. Each session lasted about 20 minutes. To verify that “Meaning1” pictures elicited the expected name, after the completion of the online experiment, each participant was asked to name all “Meaning1” pictures. In the final part of the experiment, participants’ conscious awareness of the lexical ambiguity was assessed by asking them whether they noticed during the test that some of the pairs of objects shared similar names.

Results

For each participant, response times (RTs) and error rates were analyzed only for experimental trials whose “Meaning1” picture was named with the desired (ambiguous) word in the naming test that followed the online experiment. RTs were calculated only for accurately answered trials. In total, 95 trials were excluded from the analysis, constituting 6.25 % of all trials. Four additional items were omitted from the final analysis because they were “correctly” named by less than 75 % of the participants (4 items out of the 42: three homophones and one homograph). All the effects and interactions reported below were preserved when the analyses included the four omitted items. Responses to filler trials were not analyzed.

The RTs from correct trials (correct “no” responses) and the error rates for unrelated pairs were analyzed using a linear mixed effects (LME) model (Baayen, Davidson, & Bates, 2008). This computation allows the testing of hypotheses while taking into account the variance due to participants and to items simultaneously. The model was constructed for the analysis, with the effects of Lexical Condition (ambiguous/unambiguous controls) and Ambiguity Type (homonym/homophone/homograph) as fixed factors, and the effects of Participants and Items as random factors.
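For illustration, the following is a minimal sketch of how a mixed-effects model of this form could be specified in Python using the statsmodels package. This is not the analysis code used here; the data file and the column names (rt, lexical_condition, ambiguity_type, participant, item) are hypothetical placeholders, and crossed random intercepts for Participants and Items are approximated with variance components within a single dummy group.

```python
# Minimal sketch (assumed, not the original analysis code) of an LME model with
# Lexical Condition and Ambiguity Type as fixed factors and crossed random
# intercepts for Participants and Items.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("exp1_correct_no_trials.csv")  # hypothetical file of correct "no" trials
data["group"] = 1  # single dummy group, so Participant and Item effects can be crossed

interaction_fit = smf.mixedlm(
    "rt ~ lexical_condition * ambiguity_type",   # fixed effects and their interaction
    data=data,
    groups="group",
    vc_formula={                                  # variance components = random intercepts
        "participant": "0 + C(participant)",
        "item": "0 + C(item)",
    },
).fit(reml=False)  # maximum likelihood, so that nested models can be compared
print(interaction_fit.summary())
```

An equivalent specification in R would typically use lme4’s lmer with random-intercept terms such as (1|participant) + (1|item).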

RT analyses

The analysis of RT revealed that a model with the fixed factors Lexical Condition and Ambiguity Type in a two-way interaction, and the random factors of Participants and Items, resulted in the best fit for the data, χ2(2) = 6.14, p < .05, relative to the model that includes only the main effects. Within this model, a main effect for Lexical Condition was found, F(1, 2662.4) = 13.3, p < .001, indicating that ambiguous pairs were significantly slower (M = 802, SE = 26) than their unambiguous controls (M = 773, SE = 26). Importantly, the two-way interaction between Lexical Condition and Ambiguity Type was significant, F(2, 2662.3) = 3.07, p < .05. The estimated effects of ambiguity are illustrated in Fig. 2. Each value represents response latencies in each of the six conditions (2 Lexical Conditions × 3 Ambiguity Types).
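The χ2 values used for model comparison here are likelihood-ratio tests between nested models fitted by maximum likelihood. Continuing the hypothetical sketch above (same assumed data and variable names), such a comparison could be computed as follows:

```python
# Sketch of a chi-square likelihood-ratio comparison between the main-effects-only
# model and the model that adds the Lexical Condition x Ambiguity Type interaction.
from scipy import stats

def likelihood_ratio_test(reduced_fit, full_fit, df_diff):
    """Return the likelihood-ratio statistic and its upper-tail chi-square p-value."""
    lr = 2 * (full_fit.llf - reduced_fit.llf)
    return lr, stats.chi2.sf(lr, df_diff)

main_effects_fit = smf.mixedlm(
    "rt ~ lexical_condition + ambiguity_type",   # no interaction term
    data=data,
    groups="group",
    vc_formula={"participant": "0 + C(participant)", "item": "0 + C(item)"},
).fit(reml=False)

# The interaction contributes two extra parameters in a 2 x 3 design, hence df = 2,
# mirroring the chi2(2) comparison reported above.
lr, p = likelihood_ratio_test(main_effects_fit, interaction_fit, df_diff=2)
print(f"chi2(2) = {lr:.2f}, p = {p:.3f}")
```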

Fig. 2 Response times in the six experimental conditions: 3 Ambiguity Types (homonyms, homophones, or homographs) × 2 Lexical Conditions (ambiguous pairs vs. unambiguous controls), computed over Participants and Items simultaneously. Error bars indicate standard errors

In order to follow up the two-way interaction, we analyzed the effect of Lexical Condition (i.e., ambiguous vs. unambiguous conditions) in each Ambiguity Type (homonym/homophone/homograph), using the Bonferroni adjustment. As can be seen in Fig. 2, this difference was significant in the case of homonyms, χ2(1) = 14.98, p < .001, but not in the case of homophones, χ2(1) = 4.33, NS, or homographs, χ2(1) =.13, NS. Further analysis revealed a significantly greater ambiguity effect (longer RTs for ambiguous than for unambiguous pairs) for homonyms than for homographs, χ2(1) = 6.14, p < .05, but the ambiguity effect of homophones was not significantly different from either the homographs, χ2(1) = 1.5, NS, or the homonyms, χ2(1) = 1.53, NS.

Error analyses

The analysis of error rates revealed that the two-way model best fits the data, χ2(2) = 11.22, p < .005, relative to the model that includes only the main effects. Within this model, a main effect for Lexical Condition was found, F(1, 2755.5) = 19.9, p < .001, indicating that the error rate for ambiguous pairs was significantly higher (M = .043, SE = .008) than their unambiguous controls (M = .016, SE = .008). Importantly, the two-way interaction between Lexical Condition and Ambiguity Type was significant, F(2, 2755.52) = 5.61, p < .005. The estimated effects of ambiguity are illustrated in Fig. 3. Each value represents the error rate in each condition.

Fig. 3 Error rates in the six experimental conditions: 3 Ambiguity Types (homonyms, homophones, or homographs) × 2 Lexical Conditions (ambiguous pairs vs. unambiguous controls), computed over Participants and Items simultaneously. Error bars indicate standard errors

In order to follow up the two-way interaction, we analyzed the effect of Lexical Condition (ambiguous vs. unambiguous conditions) in each Ambiguity Type (homonym/homophone/homograph), using the Bonferroni adjustment. As can be seen in Fig. 3, this difference was significant in the case of homonyms, χ2(1) = 28.63, p < .001, but not in the cases of homophones, χ2(1) = 1.49, NS, or homographs, χ2(1) = 1.44, NS. Further analysis of the effects of ambiguity between the different ambiguity types revealed that the ambiguity effect was significantly greater in homonyms than in homophones, χ2(1) = 8.19, p < .05, and in homonyms than in homographs, χ2(1) = 8.56, p < .05, but there was no significant difference between the ambiguity effects of homophones and homographs, χ2(1) = .001, NS.

Discussion

Experiment 1 showed that the lexical status of the picture pairs (ambiguous vs. unambiguous) had a strong effect on participants’ propensity to judge the critical pairs as semantically unrelated. Recall that both the ambiguous pairs (e.g., bat–animal, bat–baseball) and their unambiguous control pairs (e.g., eagle, bat–baseball) were semantically unrelated. Moreover, since unambiguous conditions were created by replacing the first picture in the ambiguous condition (e.g., bat–animal) with an “unambiguous” object from a similar semantic category (e.g., eagle), we assumed there was no semantic difference between the two conditions. That is, the semantic distance between the two objects (between bat–animal and bat–baseball; or between eagle and bat–baseball) was similar in both conditions (for further discussion, see Experiment 2). However, while the two conditions were semantically similar, they were different in terms of their lexical status: In the unambiguous condition, the two semantically unrelated pictures were associated with two completely different lexical forms (e.g., eagle, bat–baseball), whereas in the ambiguous condition, both pictures were associated with a common linguistic form (e.g., bat–animal, bat–baseball). Results indicated that this lexical manipulation modulated the semantic response. Overall responses to ambiguous pairs were slower and less accurate compared to their unambiguous control pairs, suggesting that lexically ambiguous pairs were harder to judge as semantically unrelated, simply because the two objects share a common name.

Given that neither the stimuli (pictures) nor the task (a semantic judgment task) required the activation of lexical forms, these results support previous studies showing that an object’s name becomes automatically activated even in situations in which participants do not have the intention to name it (e.g., Gorges et al., 2013; Mani & Plunkett, 2010, 2011; Meyer et al., 2007). Furthermore, consistent with interactive models of language production (e.g., Dell, 1986; Dell et al., 1997), the results of this experiment suggest that information spreads in both directions, such that concepts automatically activate their corresponding lexical forms, and these lexical forms, once activated, influence the semantic decision task via feedback (lexical-conceptual) connections.

Importantly, these ambiguity effects (higher error rates and slower RTs in the ambiguous condition compared to the unambiguous condition) interacted with the type of ambiguity. In both errors and RT data, homonyms (which are both orthographically and phonologically related) elicited stronger ambiguity effects than homographs (which are only orthographically related) or homophones (which are only phonologically related). This suggests that concepts automatically activate not only their corresponding phonological codes but also their corresponding orthographic representations.

First, a significant ambiguity effect (slower and less accurate responses to ambiguous pairs compared to their unambiguous control pairs) was found for homonyms but not for homophones or homographs. This suggests that exposure to pictorial objects automatically activates both their phonological and orthographic codes, since an ambiguity effect emerged only when the two pictures shared both codes. Second, homonyms resulted in significantly larger differences between ambiguous and unambiguous pairs than both homographs (in error rates and RTs) and homophones (in error rates), whereas ambiguity effects for homographs and homophones did not differ from each other. This indicates that both lexical forms contributed to the obtained ambiguity effects. Taken together, the results of Experiment 1 support connectionist “triangle” models assuming a fully interconnected network in which semantic representations automatically activate their corresponding phonological and orthographic codes, and these lexical forms, once activated, modulate conceptual processes via feedback connections.

Experiment 2

The results of Experiment 1 suggest that concepts automatically activate (to some degree) both their corresponding phonological and orthographic lexical forms. The aim of the second experiment was twofold. First, we wanted to verify that the ambiguity effects observed in Experiment 1 were indeed a function of the linguistic representation common to both meanings of the ambiguous pairs. To accomplish this goal, the second experiment tested monolingual English speakers’ responses to the same picture pairs presented in Experiment 1. Given that the ambiguous pairs are lexically ambiguous in Hebrew, but not in English, lexical effects (differences between ambiguous and unambiguous pairs) should only occur with Hebrew speakers.

Our second goal was to investigate the extent to which objects’ names modulate the way we think about the objects they represent. For instance, do speakers perceive two objects as more similar simply because they share a common linguistic label? According to the linguistic relativity hypothesis (e.g., Boroditsky, 2001; Gentner & Goldin-Meadow, 2003; Whorf, 1956), the specific language we speak influences the way we think about the world. Several studies have shown that people’s thinking about objects may be influenced by the grammatical gender their native language assigns to their names (Boroditsky, Schmidt, & Phillips, 2003; Konishi, 1993). For example, in an offline study conducted by Konishi (1993), German and Spanish speakers were asked to rate concrete nouns (e.g., spoon) on a potency scale, potency being considered a masculine trait. Of interest were nouns that have opposite grammatical genders in the two languages. For example, the word spoon is masculine in German but feminine in Spanish, whereas fork is masculine in Spanish but feminine in German. Results indicated that German speakers rated “spoon” as more potent than “fork,” whereas for Spanish speakers the reverse order was found. This suggests that grammatical aspects that differ across languages can influence people’s thinking about objects.

Other studies, however, have challenged this conclusion (e.g., Cubelli, Paolieri, Lotto, & Job, 2011; Kousta, Vinson, & Vigliocco, 2008). For example, Cubelli et al. (2011) asked participants to judge whether two objects, whose names did or did not share grammatical gender, belonged to the same semantic category. Irrespective of semantic relatedness, responses were faster when the two pictures shared the same grammatical gender, indicating that grammatical gender affects semantic processing. Nonetheless, the fact that responses to gender-congruent pairs were always faster (irrespective of whether the two pictures were semantically related or unrelated) led the researchers to conclude that the locus of this interaction (between gender and meaning) lies at the linguistic level rather than at the conceptual level. This suggestion was further supported by the fact that the grammatical gender effect disappeared when participants performed the same task under articulatory suppression. Thus, according to Cubelli et al. (2011), while grammatical gender affects the speed of semantic processing, it does not shape the way people think about these objects.

In Experiment 2, we focused on lexical aspects that differ across languages. In particular, we aimed to investigate whether cross-linguistic differences in the relationship between objects and their corresponding lexical forms affect the way speakers of different languages perceive the relationship between two different concepts. For example, in English, the concepts “map” and “tablecloth” have two different names. In contrast, in Hebrew, these two different concepts are associated with a single lexical form (/mapa/, מפה). Do Hebrew speakers, who systematically refer to these two concepts with the same lexical form, consider these two concepts more similar than do English speakers, who systematically refer to them using two different lexical forms?

To test these unresolved issues, the second experiment used the same materials as the first experiment, with two main differences: (1) Both Hebrew speakers and English speakers participated in the experiment. (2) A scalar rather than a dichotomous measure was used, asking the participants to rate the picture pairs on a scale of varying degrees of relatedness, which better captures the more subtle variations in semantic relatedness judgments. Thus, in the second experiment, native Hebrew speakers and monolingual native speakers of English were asked to rate the semantic relatedness of the same picture pairs presented in Experiment 1, on a 5-point scale.

If the ambiguity effects (differences between ambiguous and unambiguous pairs), observed in the first experiment, are only a function of the objects’ names, then these effects will be obtained with Hebrew speakers but not with English speakers. In addition, if a shared lexical representation can strengthen the connection between two concepts as assumed by the linguistic relativity hypothesis, then Hebrew speakers will rate Hebrew ambiguous pairs as more related than their unambiguous controls, whereas English speakers will not distinguish between the two conditions. Finally, if both phonological and orthographic codes influence the semantic decision, then, for Hebrew speakers, ambiguity effects will be stronger for homonyms than for either homophones or homographs.

Method

Participants

The experiment included two groups of participants – Hebrew native speakers and monolingual English speakers. The Hebrew-speaking group was composed of 40 undergraduate students from Tel Aviv University, 38 women and 2 men, ages 21 to 29 years (M = 23.62, SD = 1.72), who volunteered to participate. The English-speaking group was composed of 40 Mechanical Turk workers, 22 men and 18 women, ages 22 to 38 years (M = 29.58, SD = 4.73), who were paid 75 cents for participation.

Materials

Materials were the same as those used in Experiment 1, including the same 42 experimental picture sets – 14 homonym sets, 14 homophone sets, and 14 homograph sets – and the 42 semantically related filler pairs. Each experimental set comprised an ambiguous pair and an unambiguous control pair sharing the same second picture.

Experimental design and procedures

A 2 × 3 factorial design was used with Lexical Condition (ambiguous/unambiguous) and Ambiguity Type (homonym/homophone/homograph) as independent variables. The picture pairs and rating scales were presented to the Hebrew-speaking participants on a laptop, in a MS PowerPoint presentation. The English-speaking participants saw the same pairs in the exact same order, in an online questionnaire constructed in Qualtrics software. Participants were asked to rate the semantic relatedness of each picture pair on a scale of 1 (no relation) to 5 (strongly related). Each participant was presented with half of the experimental trials, so that every participant saw each “Meaning2” picture once, preceded either by its “Meaning1” counterpart or by the “Control” picture. Overall, each participant was presented with 21 ambiguous pairs, 21 control pairs, and 42 filler pairs. The pairs were presented in a pseudorandomized order, with no more than two consecutive repetitions of one condition. Average ratings of ambiguous pairs were compared to those of their controls, within each type of ambiguity.

Results

The ratings of unrelated pairs were analyzed using a linear mixed effects (LME) model (Baayen et al., 2008). The model was constructed for the analysis with the effects of Lexical Condition (ambiguous/unambiguous), Ambiguity Type (homonym/homophone/homograph) and Participant Language (Hebrew/English) as fixed factors, and the effects of Participants and Items as random factors.

The analysis of the ratings revealed that a model with the fixed factors Lexical Condition, Ambiguity Type, and Participant Language in a three-way interaction provided the best fit for the data, χ2(2) = 6.85, p < .05, relative to the next best two-way model. Within this model, the three-way interaction between Lexical Condition, Ambiguity Type, and Participant Language was significant, F(2, 3236.4) = 3.42, p < .05. The estimated effects of ambiguity are illustrated in Fig. 4. Each value represents the rating values in each of the six conditions (2 Lexical Conditions × 3 Ambiguity Types). Panel A of Fig. 4 represents the Hebrew speakers’ ratings, whereas Panel B represents the English speakers’ ratings.

Fig. 4 Relatedness ratings in the six experimental conditions by Hebrew speakers (Panel a) and English speakers (Panel b). Error bars indicate standard errors

In order to follow up the three-way interaction, we analyzed the effect of Lexical Condition (i.e., ambiguous vs. unambiguous conditions) in each Ambiguity Type (homonym/homophone/homograph), using the Bonferroni adjustment, for Hebrew and English speakers separately. As can be seen in Panel A of Fig. 4, the ambiguity effect, in the case of Hebrew speakers, was significant in each of the Ambiguity Types: homonyms, χ2(1) = 99.54, p < .001; homophones, χ2(1) = 20.43, p < .001; and homographs, χ2(1) = 15.43, p < .001. Further analysis of the effects of ambiguity between the different ambiguity types revealed that the ambiguity effect was greater in homonyms than in homophones, χ2(1) = 14.78, p < .001, and in homonyms than in homographs, χ2(1) = 18.42, p < .001, but there was no significant difference between the ambiguity effects of homophones and homographs, χ2(1) = .19, NS.

Panel B of Fig. 4 illustrates that, in the case of English speakers, no significant ambiguity effects were observed for any of the ambiguity types (all χ2s < 1). Accordingly, no further analyses were conducted.

Discussion

As the results of Experiment 2 clearly demonstrate, Hebrew speakers rated picture pairs of objects that shared a common lexical form in Hebrew as significantly more similar than their lexically unrelated control pairs. In contrast, non-Hebrew speakers (monolingual English native speakers) did not distinguish between the two conditions. These findings suggest that cross-linguistic differences in the relationship between concepts and their corresponding lexical forms may influence purely semantic decisions regarding the degree of semantic similarity between two objects (for similar results with bilingual speakers, see Degani, Prior, & Tokowicz, 2011; Degani & Tokowicz, 2013).

Recall that the ambiguous and unambiguous pairs differ only in terms of their lexical status in Hebrew. In the ambiguous condition (e.g., a map and a tablecloth), the two objects are associated with a common lexical form (/mapa/, מפה), whereas in the unambiguous condition (e.g., a flashlight and a tablecloth), the two objects are associated with different lexical forms (/panas/ פנס and /mapa/ מפה). Importantly, in English, there is no difference between the two conditions, as each object in each pair is associated with a different name. The fact that English speakers rated “ambiguous” and “unambiguous” pairs similarly indicates that in terms of semantic relatedness, the two pairs are indeed identical. That is, the semantic distance between the two pictures in the two conditions (e.g., between a map and a tablecloth, and between a flashlight and a tablecloth) is not statistically different. This confirms that the ambiguity effects (differences between ambiguous and unambiguous pairs) obtained in both experiments, when Hebrew speakers were tested, are indeed a function of the linguistic representation common to both meanings of the ambiguous pairs. Thus, the results provide further evidence for the claim that semantic representations automatically activate their corresponding lexical forms, and that these lexical representations may influence semantic relatedness judgments.

One possible interpretation of these results, which is more consistent with the linguistic relativity hypothesis (e.g., Boroditsky, 2001; Gentner & Goldin-Meadow, 2003; Whorf, 1956), is that cross-linguistic differences in the relationship between concepts and their corresponding lexical forms may indeed influence the way speakers of different languages organize their conceptual knowledge. That is, the specific language we speak and read may change the organization of the semantic network, such that two concepts that are associated with a single lexical representation may become closer in the semantic network, relative to an equally matched pair in which each concept is associated with a different lexical form. This hypothesis suggests that the co-activation of conceptual and lexical representations results in long-term changes at the conceptual level itself. However, following Cubelli et al. (2011), an alternative explanation for these results is that cross-linguistic differences in the relationship between concepts and their corresponding lexical forms do not eventually lead to differences at the conceptual level. According to this view, pictorial objects automatically activate their corresponding lexical forms, and these lexical codes influence semantic judgments via connections at the lexical level, without changing the semantic representations themselves and/or the semantic distance between them. Importantly, however, both interpretations assume bidirectional connections between conceptual and lexical representations, such that the activation of a conceptual representation automatically activates its corresponding lexical form and vice versa. Whether these automatic lexical activations result in long-term conceptual differences, as assumed by the linguistic relativity hypothesis (e.g., Boroditsky, 2001), or not (e.g., Cubelli et al., 2011), remains an open question.

Finally, as in Experiment 1, differences between the three types of ambiguity indicate that concepts automatically activate not only their corresponding phonological codes but also their corresponding orthographic codes. In Experiment 1, ambiguity effects emerged only when the two pictures shared both their phonological and orthographic representations. In other words, while homonyms (which are both phonologically and orthographically similar) were significantly different from their unambiguous controls, homophones (which are only phonologically similar) and homographs (which are only orthographically similar) were not. In this experiment, Hebrew speakers rated ambiguous pairs as significantly more related than their unambiguous controls, irrespective of the type of ambiguity. That is, ambiguity effects were obtained when the two pictures were phonologically related (homophones), orthographically related (homographs), or both phonologically and orthographically related (homonyms). Ambiguity effects were thus overall stronger in this experiment, probably due to the scalar measure used here (as opposed to the dichotomous measure used in Experiment 1), which was able to detect more subtle differences between the two conditions (ambiguous vs. unambiguous).

Nevertheless, as in Experiment 1 (error data), the ambiguity effect was significantly larger for homonyms than for either homophones or homographs, while there was no significant difference, in terms of ambiguity effect, between the latter types. The fact that ambiguity effects were larger when both the spoken and the written lexical forms were shared provides further support for the claim that semantic, phonological, and orthographic codes are fully interconnected, such that semantic codes automatically activate their corresponding phonological and orthographic codes, and these, in turn, modulate semantic processes via feedback connections (e.g., Seidenberg & McClelland, 1989).

Experiment 3

Although the results of both Experiment 1 and Experiment 2 clearly demonstrate lexical effects in conceptual processing, one may argue that the particular design of the first two experiments, in which the two pictures were presented sequentially, contributed to this effect. Specifically, because the task required the consideration of both pictures, and because the pictures were presented one after the other, participants needed to hold the first picture in memory and wait for the second picture in order to generate a correct response. Given that both phonological and orthographic codes were found to be involved in tasks that require memorization (e.g., Alario, Perre, Castel, & Ziegler, 2007; Baddeley, 2007), it is possible that the activation of lexical information in our previous experiments was a consequence of the memorization demands of the task (e.g., greater involvement of the phonological loop) rather than an automatic process in semantic-conceptual processing. Thus, the goal of the third experiment was to rule out any “working memory” explanation of lexical effects in conceptual processing by presenting the two pictures simultaneously rather than sequentially.

Method

Participants

A total of 36 students from Tel Aviv University, 12 males and 24 females, ages 19 to 37 years (M = 25.92, SD = 3.68), participated in exchange for 20 NIS (approx. $6.5). They were all native speakers of Hebrew, with normal or corrected-to-normal vision.

Materials

Materials were the same as those used in Experiment 1.

Experimental design and procedures

The experimental design and procedures were exactly the same as in Experiment 1, except that the two pictures were presented simultaneously (in the center of the screen). All trials had the same sequence of events. At the start of each trial, participants were presented with a central fixation marker for 2,000 ms. The offset of the marker was followed by the simultaneous presentation of the two pictures: Picture1 (the picture presented first in Experiment 1) was always presented above the central fixation marker, and Picture2 was always presented below it. The two pictures remained on the screen until the subject responded or until 3,000 ms had elapsed.

Results

For each participant, response times (RTs) and error rates were analyzed only for experimental trials whose “Meaning1” picture was named with the desired (ambiguous) word in the naming test that followed the online experiment. RTs were calculated only for accurately answered trials. In total, 92 trials were excluded from the analysis, constituting 6.08 % of all trials. In addition, three homophones were omitted from the final analysis because they were “correctly” named by less than 75 % of the participants. All the effects and interactions reported below were preserved when the analyses included the three omitted items. Responses to filler trials were not analyzed.

The RTs from correct trials (correct “no” responses) and the error rates for unrelated pairs were analyzed using a linear mixed effects (LME) model (Baayen et al., 2008). This computation allows the testing of hypotheses while taking into account the variance due to participants and to items simultaneously. The model was constructed for the analysis, with the effects of Lexical Condition (ambiguous/unambiguous controls) and Ambiguity Type (homonym/homophone/homograph) as fixed factors, and the effects of Participants and Items as random factors.

RT analyses

To examine the RT data, two models were compared: The first model included the fixed main effects of Lexical Condition and Ambiguity Type and the random effects of Participants and Items. The second model included the fixed main effects of Lexical Condition and Ambiguity Type, the interactions between them, and the random effects of Participants and Items. Given that the second model did not provide a significant improvement over the first, χ2(2) = 0.33, NS, the model that includes only the fixed main effects was used for further analysis. The analysis of this model revealed a significant effect of Lexical Condition, F(1, 2514.81) = 15.53, p < .001, indicating that ambiguous pairs were significantly slower (M = 1267.5, SE = 46.2) than their unambiguous controls (M = 1213.5, SE = 46.1). The main effect of Ambiguity Type was not significant, F(2, 3568) = 0.15, NS, indicating that the three types of ambiguity were not significantly different. The estimated effects of ambiguity are illustrated in Fig. 5. Each value represents response latencies in each of the six conditions (2 Lexical Conditions × 3 Ambiguity Types).

Fig. 5 Response times in the six experimental conditions: 3 Ambiguity Types (homonyms, homophones, or homographs) × 2 Lexical Conditions (ambiguous pairs vs. unambiguous controls), computed over Participants and Items simultaneously. Error bars indicate standard errors

Error analyses

The analysis of error rates revealed that the two-way model (the model that includes the interaction in addition to the main effects) best fits the data, χ2 (2) = 23.67, p < .001, relative to the model that includes only the main effects. Within this model, a main effect for Lexical Condition was found, F(1, 2608) = 38.46, p < .001, indicating that the error rate for ambiguous pairs was significantly higher (M = .006, SE = .001) than their unambiguous controls (M = .017, SE = .009). Importantly, the two-way interaction between Lexical Condition and Ambiguity Type was significant, F(2, 2608) = 11.88, p < .001.

In order to follow up the two-way interaction, we analyzed the effect of Lexical Condition (ambiguous vs. unambiguous conditions) in each Ambiguity Type (homonym/homophone/homograph), using the Bonferroni adjustment. As can be seen in Fig. 6, this difference was significant in the case of homonyms, χ2(1) = 50.91, p < .001, and in the case of homographs, χ2(1) = 18.69, p < .001, but not in the case of homophones, χ2(1) = 0.04, NS.

Fig. 6 Error rates in the six experimental conditions: 3 Ambiguity Types (homonyms, homophones, or homographs) × 2 Lexical Conditions (ambiguous pairs vs. unambiguous controls), computed over Participants and Items simultaneously. Error bars denote ± one standard error

Further analysis of the effects of ambiguity between the different Ambiguity Types revealed significant differences between all three types, such that the ambiguity effect was significantly greater in homonyms than in homophones, χ2(1) = 23.64, p < .001, in homonyms than in homographs, χ2(1) = 3.88, p < .05, and in homographs than in homophones, χ2(1) = 9.02, p < .003.

To directly compare the magnitude of the ambiguity effect for the three types of ambiguity in Experiments 1 and 3, we tested for the interaction between Presentation Type (sequential vs. simultaneous), Lexical Condition (ambiguous pairs vs. unambiguous pairs), and Ambiguity Type (homonyms, homophones, or homographs) in both RTs and error rates. Within this model, a main effect for Presentation Type was found in RTs, F(1, 70) = 82.18, p < .001, but not in errors, F(1, 71.7) = 2.33, NS, indicating that the simultaneous presentation (Experiment 3) resulted in longer RTs compared to the sequential presentation (Experiment 1). Importantly, a main effect for Lexical Condition was found in both RTs, F(1, 5107.1) = 25.01, p < .001, and error rates, F(1, 5400.9) = 59.63, p < .001, indicating that, irrespective of Presentation Type, ambiguous pairs were responded to more slowly and less accurately than their unambiguous controls. Finally, a significant three-way interaction was found in the error rates, F(2, 5400.9) = 3.296, p < .05, but not in RTs, F(2, 5105.6) = 0.094, NS. In order to follow up the three-way interaction, we analyzed the interaction between Lexical Condition (ambiguous vs. unambiguous conditions) and Presentation Type (sequential vs. simultaneous) separately for each Ambiguity Type (homonym/homophone/homograph). This analysis revealed a significant interaction between Lexical Condition and Presentation Type only for homographs, χ2(1) = 6.36, p < .05. Further analysis revealed that for homographs, a significant ambiguity effect was present under simultaneous presentation, χ2(1) = 21.78, p < .001, but not under sequential presentation, χ2(1) = 1.21, NS.

Discussion

Overall, the results of Experiment 3 replicated and further extended those of Experiment 1. Ambiguous pairs (e.g., animal bat, baseball bat) were more difficult to judge as semantically unrelated than their unambiguous controls (e.g., eagle, baseball bat). In terms of RTs, significant ambiguity effects (slower responses to lexically ambiguous pairs in comparison to their lexically unambiguous controls) were obtained irrespective of ambiguity type (homonyms, homophones, and homographs), indicating that semantic representations automatically activate both their corresponding phonological and orthographic lexical forms. Interestingly, in terms of error rates, significant ambiguity effects were obtained in the case of homonyms (which are both phonologically and orthographically similar) and homographs (which are only orthographically similar), but not in the case of homophones (which are only phonologically similar). This indicates that the orthographic effect was overall stronger than the phonological effect. Nevertheless, even though homophones did not differ significantly from their unambiguous controls, phonological effects were evident in the fact that homonyms elicited stronger ambiguity effects than homographs. This difference can only be explained if the phonological representation of the word is also automatically activated.

It is important to note that because Experiment 3 did not require participants to hold the first picture in memory (since the two pictures were presented simultaneously), the data allow us to reach firmer conclusions regarding lexical effects in conceptual processing. In particular, the results of this experiment indicate that lexical activations during nonlinguistic conceptual processing are not restricted to tasks that require memorization. Nonetheless, although ambiguity effects (differences between ambiguous and unambiguous conditions) were observed in both experiments, the type of presentation (sequential or simultaneous) modulated the degree and nature of this effect. Specifically, whereas in Experiment 1 significant ambiguity effects were obtained only in the case of homonyms (which are both phonologically and orthographically similar), in Experiment 3, in terms of error rates, significant ambiguity effects were obtained only when the two pictures shared the same orthographic representation (in the case of homonyms and homographs). Thus, orthographic effects were more pronounced when the two pictures were presented simultaneously (Experiment 3) rather than sequentially (Experiment 1). A possible explanation for this difference may be related to the greater involvement of the phonological loop in the sequential condition. That is, it is possible that when the two pictures were presented sequentially, the phonological effect overshadowed the orthographic effect. Importantly, however, as in Experiment 1, homonyms (which are both phonologically and orthographically similar) elicited significantly larger ambiguity effects (in terms of error rates) than either homophones (which are only phonologically similar) or homographs (which are only orthographically similar). Thus, despite differences in degree of activation, the conclusion of both experiments (as well as Experiment 2) remains the same: Semantic representations automatically activate their corresponding phonological and orthographic codes, and these lexical forms, once activated, can influence semantic processes via feedback connections.

General discussion

The present study investigated the relationship between concepts and their corresponding phonological and orthographic lexical forms. In the first experiment, Hebrew speakers were asked to decide whether two pictorial targets were semantically related or not. In the second experiment, Hebrew speakers and English speakers rated the semantic relatedness of the same picture pairs on a linear scale. The third experiment was identical to the first experiment except that the two pictures were presented simultaneously rather than sequentially. In all experiments, we compared responses to semantically unrelated pairs in two conditions: In the ambiguous condition (e.g., a map and a tablecloth), the two pictures represented two distinct meanings of an ambiguous Hebrew word (e.g., /mapa/ מפה). In the unambiguous condition (e.g., a flashlight and a tablecloth), each object was associated with a different word (e.g., /panas/ פנס, /mapa/ מפה). To disentangle phonological and orthographic effects, three types of Hebrew ambiguous words were used: homonyms – two distinct meanings associated with a single phonological and orthographic form, such as bat; homophones – two distinct meanings associated with a single phonological form, but with two different orthographic forms, such as flower/flour; and homographs – two distinct meanings associated with a single orthographic form, but with two different phonological forms, such as tear (/tɪər/, /tɛər/).

In Experiments 1 and 3, ambiguous pairs were more difficult to judge as semantically unrelated than their unambiguous control pairs. In Experiment 2, while English speakers did not distinguish between the two lexical conditions, Hebrew speakers rated ambiguous pairs as significantly more related than their unambiguous controls. These results replicate earlier evidence indicating that speakers access the names of objects even when they are not required to name them (e.g., Gorges et al., 2013; Mani & Plunkett, 2010, 2011; Meyer et al., 2007). Thus, consistent with interactive (or cascade) models of speech production (e.g., Dell, 1986; Dell et al., 1997), our findings demonstrate that concepts automatically activate not only their corresponding lemmas, but also their corresponding lexical forms.

More importantly, our results suggest that concepts automatically activate not only their corresponding phonological forms but also their corresponding orthographic forms. In Experiment 1, a significant ambiguity effect (slower and less accurate responses to ambiguous pairs compared to their unambiguous control pairs) emerged only when the two pictures shared both codes. That is, an ambiguity effect was found for homonyms but not for homophones or homographs. Moreover, the fact that homonyms resulted in significantly larger differences between ambiguous and unambiguous pairs than both homophones (in error rates) and homographs (in error rates and RTs) indicates that both lexical forms contributed to the obtained ambiguity effects.

In Experiment 2, significant ambiguity effects were obtained irrespective of ambiguity type. Nevertheless, as in Experiment 1, these ambiguity effects were significantly larger for homonyms than for either homophones or homographs, again suggesting that both lexical forms are automatically activated during nonverbal conceptual processing. In Experiment 3, significant ambiguity effects, in terms of RTs, were obtained irrespective of ambiguity type, indicating automatic activation of both phonological and orthographic codes. Nevertheless, these lexical effects were not additive, as homonyms were not significantly different from either homophones or homographs. Interestingly, in terms of error rates, ambiguity effects were obtained only when the two pictures shared the same orthographic code, that is, in the case of homonyms and homographs, but not in the case of homophones. Nonetheless, the fact that homonyms (which are both orthographically and phonologically similar) resulted in significantly larger differences between ambiguous and unambiguous pairs than homographs (which are only orthographically similar) suggests that phonological information also contributed to the obtained ambiguity effects.

Taken together, the results of the present study demonstrate that irrespective of the type of presentation (sequential or simultaneous) or the type of measure (a dichotomous yes/no measure or a more graded scalar measure), exposure to nonverbal pictorial objects automatically activates not only their semantic and phonological representations, but also their orthographic codes. Thus, consistent with connectionist “triangle” models (e.g., Harm & Seidenberg, 2004; Seidenberg & McClelland, 1989), we show that semantic, phonological, and orthographic representations are fully interconnected, such that semantic representations automatically activate their corresponding phonological and orthographic codes, and these lexical forms, once activated, can influence semantic processes via feedback connections. Given that the current study was conducted in Hebrew, a question that remains open is whether these automatic lexical activations reflect a unique property of Hebrew or can be generalized to other languages and writing systems.

Another issue that remains unresolved is whether these automatic lexical activations result in long-term conceptual differences, as assumed by the linguistic relativity hypothesis (e.g., Boroditsky, 2001), or not (e.g., Cubelli et al., 2011). The results of the present study show that cross-linguistic differences in the relationship between concepts and their corresponding lexical forms can influence the way speakers of different languages perceive the relationship between two different concepts. In particular, if lexical information can influence semantic relatedness judgments, then two objects with different names should be judged as conceptually more dissimilar than two objects sharing the same name. Similarly, when required to judge whether two objects are semantically related or not, it should be more difficult to press “no” in response to semantically unrelated pairs that share the same name than to equally unrelated pairs that are associated with different names. This is exactly what we found. First, we show that speakers who systematically refer to two objects with a common name may judge these two objects as more similar than speakers of a different language who refer to the same two objects with two different names. Thus, while English speakers rated ambiguous and unambiguous pairs as equally unrelated, Hebrew speakers rated the ambiguous pairs as significantly more related than their unambiguous controls (Experiment 2). Second, responses to semantically unrelated pairs that share a common name (ambiguous pairs) were slower and less accurate compared to their unambiguous controls (Experiments 1 and 3). Taken together, these results suggest that exposure to nonverbal pictorial objects not only evokes lexical phonological and orthographic representations, but that these purely linguistic (phonological and orthographic) representations may influence nonverbal semantic decisions.

As mentioned above, one possible interpretation of these findings, which is more consistent with the linguistic relativity hypothesis (e.g., Boroditsky, 2001), is that the co-activation of conceptual and lexical representations results in long-term changes at the conceptual level itself. However, following Cubelli et al. (2011), an alternative interpretation of these results is that cross-linguistic differences in the relationship between concepts and their corresponding lexical forms do not eventually lead to differences at the conceptual level. According to this view, pictorial objects automatically activate their corresponding lexical forms, and these lexical codes influence semantic judgments via connections at the lexical level, without changing the semantic representations themselves and/or the semantic distance between them. Importantly, however, both interpretations assume bidirectional connections between conceptual and lexical representations, such that the activation of a conceptual representation automatically activates its corresponding lexical form and vice versa.

These results are inconsistent with models of speech production that postulate only feed-forward connections between conceptual-semantic representations and their corresponding lexical forms. In a discrete two-stage model (Levelt et al., 1999), semantic processes (i.e., lemma selection) must be completed before any activation of lexical form takes place. As a result, only lemmas selected for speech will activate their corresponding lexical forms (e.g., Schriefers et al., 1990). In a unidirectional cascade model of lexical access, information can spread from one level to the next before processing at the earlier level is complete. As a result, activation automatically spreads not only from concepts to lemmas but also from lemmas to word forms (e.g., Peterson & Savoy, 1998). Nevertheless, although these models make different assumptions regarding the automatic activation of lexical forms, they both assume that activation feeds only forward and does not feed back from the lexical (phonological or orthographic) level to the conceptual (semantic) level. Thus, according to both models, nonlinguistic conceptual processes are not expected to be influenced by lexical representations and processes. The ambiguity effects obtained in the present study are therefore problematic for both models.

Overall, our results support interactive models that permit not only cascading but also bottom-up feedback (e.g., Dell, 1986). In particular, the results of this investigation suggest that semantic, phonological, and orthographic representations are fully interconnected (e.g., Seidenberg & McClelland, 1989), such that (a) conceptual-semantic representations automatically activate both their corresponding phonological and orthographic lexical forms, and (b) these lexical forms, once activated, may in turn affect semantic-conceptual processes via feedback connections. Nevertheless, given that several studies have recently suggested hemispheric differences in the functional connectivity between orthographic, phonological, and semantic codes (e.g., Federmeier, 2007; Peleg & Eviatar, 2008, 2009, 2012), the next challenge is to investigate the neural mechanisms that support these interactions.