Iconicity in sign language production: Task matters

The present study explored the influence of iconicity on sign lexical retrieval and whether it is modulated by the task at hand. Lexical frequency was also manipulated to have an index of lexical processing during sign production. Behavioural and electrophysiological measures (ERPs) were collected from 22 Deaf bimodal bilinguals while performing a picture naming task in Catalan Sign Language (Llengua de Signes Catalana, LSC) and a word-to-sign translation task (Spanish written words to LSC). Iconicity effects were observed in the picture naming task, but not in the word-to-sign translation task, both behaviourally and at the ERP level. In contrast, frequency effects were observed in the two tasks, with ERP effects appearing earlier in the word-to-sign translation task than in the picture naming task. These results support the idea that iconicity in sign language is not pervasive but modulated by task demands. As discussed, iconicity effects in sign language would be emphasised when naming pictures because sign lexical representations in this task are retrieved via semantic-to-phonological links. Conversely, attenuated iconicity effects when translating words might result from sign lexical representations being directly accessed from the lexical representations of the word.


Introduction
It is broadly agreed that iconicity, understood as the non-arbitrary resemblance between the linguistic form and its meaning, is a general property of languages, hence including oral and sign languages (e.g., Dingemanse, 2018; Perniss et al., 2010). Both modalities include iconic examples in their linguistic repertoires (e.g., onomatopoeias), although they differ in the prevalence of this language feature. Sign languages stand out for a high incidence of iconicity. Relative to the aural-oral modality, the visual-manual modality provides a rich medium to express sign forms that mimic perceptuomotor properties of their referents (e.g., in Italian Sign Language -Lingua dei Segni Italiana-, 50% of the handshapes are considered to have an iconic motivation; Perlman et al., 2018; Pietrandrea, 2002). For instance, the sign TEAR in many sign languages is represented with the index finger moving down from the eye towards the cheek, resembling the path of a falling tear.
The singularity of sign languages regarding iconicity has stimulated increasing interest among scholars of language processing in determining the role of iconicity in fundamental language processes such as sign comprehension, production or learning (see Ortega, 2017; Thompson, 2011, for a review). Iconicity appears crucial during L2 adult sign learning, with numerous studies revealing that iconicity benefits sign learning. Iconic signs are more accurately learned and memorized than non-iconic signs (Baus et al., 2013; Campbell et al., 1992; Lieberth and Gamble, 1991; Ortega and Morgan, 2015a, 2015b; Poizner et al., 1981; see Ortega, 2017, for a review). These results have been interpreted as iconicity helping sign learning and processing because of a strengthened link between semantics and the form (manual representation) of iconic signs relative to non-iconic signs. In contrast to the generalised effect of iconicity on sign learning, its impact during sign language processing remains disputable, with some studies showing iconicity effects but not others.
In sign comprehension, accumulated evidence shows that iconicity influences sign recognition. Differences between iconic and non-iconic signs in picture-sign matching tasks (Grote and Linz, 2003; Thompson et al., 2009; Vinson et al., 2015), or phoneme monitoring tasks (Thompson et al., 2010; Vinson et al., 2015), have been taken as evidence that phonological forms of iconic signs automatically activate the corresponding semantic representations. Thompson et al. (2010) showed that Deaf signers were slower making phonological decisions about signs (whether handshapes involved straight or curved fingers) when those signs were iconic. The presence of iconicity effects in a task in which semantic activation was not necessarily required was taken as evidence that iconicity effects are not limited to tasks tapping into semantic representations. Notwithstanding, these results contrast with others in sign comprehension using tasks tapping into semantics that did not obtain priming effects related to iconicity (e.g., Bosworth and Emmorey, 2010; Mott et al., 2020).
In sign production, the same mixture of results has been obtained. For instance, Baus et al. (2013) showed no effect of iconicity when hearing L2 signers were translating words into signs. In contrast, Baus and Costa (2015) showed that iconic signs were produced faster than non-iconic signs when hearing L2 signers named pictures in Catalan Sign Language (Llengua de Signes Catalana, LSC; see also Pretato et al., 2018, Experiment 1, for the same effect testing Deaf signers). Importantly here, Pretato et al. (2018) showed that iconicity effects in sign production relied on the semantic access required by the task. In a picture naming task, the authors showed that the effect of iconicity disappeared when semantic-to-phonology mappings were bypassed by the task requirements (but see Thompson et al., 2010). Iconicity effects revealed when naming pictures in Italian Sign Language (Lingua dei Segni Italiana, LIS) were not observed when, for the same pictures, signers were asked to use a marked demonstrative pronoun (referring to the previous location of the object) plus the colour of the object, hence avoiding the picture's corresponding name. As these results showed, iconicity effects were only observed when the link between the picture concept and its corresponding phonological form was emphasised by the task demands.
These contrastive results on iconicity, both in sign comprehension and production, suggest that the influence of iconicity during online processing is not pervasive, but rather induced by different factors, including the type of materials (Grote and Linz, 2003; Thompson et al., 2009; Vinson et al., 2015), type of task (Navarrete et al., 2015; Pretato et al., 2018), sign language competence (Baus et al., 2013; Thompson et al., 2009), or the interaction of iconicity with other psycholinguistic variables (e.g., frequency, Baus and Costa, 2015; age of acquisition, Vinson et al., 2015). The present study aims to shed light on the functional role of iconicity during sign language production by exploring iconicity effects in two tasks differing in how they prompt semantic-to-phonology mappings: a picture naming task and a word-to-sign translation task. The use of these tasks was motivated by the observation that words do not tap into semantic representations as pictures do. Different studies, both in signed and oral languages, reveal that lexical retrieval is not mediated by semantics, or at least not to the same extent, when stimuli are words and the tasks involve word processing (Damian et al., 2001; Navarrete et al., 2015; Vigliocco et al., 2005). Of relevance, in sign language processing Navarrete et al. (2015) showed that Deaf signers exhibited a cumulative semantic cost effect when naming pictures in LIS, but the effect was not observed when naming Italian written words in LIS (i.e., a word-to-sign translation task). That is, when naming pictures in a sequence, picture-naming latencies increased for every successive exemplar within the same semantic category. Crucially, no increase in naming latencies was observed with printed word stimuli. The authors argued that in the word-to-sign translation task, Deaf bimodal bilinguals can directly access the sign language lexicon, without semantic mediation. In this sense, lexical access would rely on the direct mapping between the lexical representations of the spoken language and the production of lexical representations of the sign.
Following the same rationale in our study, we can articulate the following hypothesis: if iconicity effects stem from semantic activation (Pretato et al., 2018) and this is modulated by the task requirements (Navarrete et al., 2015), differences are expected depending on how semantic-to-phonology mappings are prompted by the task. To explore this issue, Deaf bimodal bilinguals performed two tasks while event-related potentials (ERPs) were recorded. Importantly, the experimental design was maintained in both tasks except for the input that triggered lexical processing: naming pictures in LSC (picture naming task) or signing the corresponding Spanish written words (word-to-sign translation task). If the influence of iconicity is automatic in nature and does not depend on semantic activation induced by the task (Thompson et al., 2010), iconicity effects should be found in both tasks. In contrast, if iconicity is modulated by semantic activation and this is modulated by the task requirements, iconicity effects are expected to be reduced or even cancelled out when signing written words compared to pictures (Navarrete et al., 2015; Pretato et al., 2018).
Electrophysiological measures are especially relevant to explore the influence of iconicity throughout the time course of sign production. Baus and Costa (2015) observed an early effect of iconicity (P100) when hearing L2 signers named pictures in LSC. Also in a picture naming task, McGarry et al. (2020) reported iconicity modulations associated with the N400 component. Despite some differences between studies, the same interpretation of the results was provided: iconicity effects resulted from the engagement of the conceptual system when signing, with greater activation of semantic features for iconic compared to non-iconic signs (Baus and Costa, 2015; McGarry et al., 2020; Navarrete et al., 2017). From these results, we can predict iconicity effects both at the behavioural and ERP levels in the picture naming task. Importantly, by comparing iconicity effects in the two tasks, we would be able to determine whether the influence of iconicity in sign processing is a general property of sign lexicalization or determined by the characteristics of the task at hand. We hypothesised that, if iconicity is mainly driven by pictures inducing the activation of semantic representations, we should observe ERP modulations associated with iconicity in the picture naming task but not in the word-to-sign translation task.
While the main interest was in iconicity, lexical frequency was also manipulated in our study. Lexical frequency effects, taken as an index of lexical processing, have been observed in both sign and oral languages and in both word and picture naming tasks. High-frequency words and signs are produced faster than low-frequency ones (Emmorey et al., 2013; Jescheniak and Levelt, 1994). Similarly, ERP modulations in sign processing have also shown differences between high- and low-frequency signs (Baus and Costa, 2015; Emmorey et al., 2020). Baus and Costa (2015) and Emmorey et al. (2020) reported ERP lexical frequency effects while hearing signers named pictures in LSC or performed a go/no-go semantic categorization task on videoclips of American Sign Language (ASL) signs. Thus, in the present study we expected to find lexical frequency effects both when naming pictures and when translating written words. Moreover, frequency effects at the ERP level offer a benchmark to evaluate the time course of iconicity effects in relation to when lexical processing takes place.
In sum, the purpose of the present study was to explore iconicity effects on sign lexical retrieval and how they vary depending on the task demands. To that end, we compared behavioural and electrophysiological measures from Deaf bimodal bilinguals while signing picture names or written words. Lexical frequency effects were taken as an index of lexical processing, and their timing was compared to that obtained for iconicity.

Participants
Twenty-two Deaf LSC-Spanish bilinguals participated in the present study (M age = 35.3 years, SD = 14.5 years). Participants reported normal or corrected-to-normal vision and completed an informed consent form before the experiment and a language background questionnaire (Table 1). In this questionnaire, similar ages of exposure to LSC and Spanish (t(21) = 0.77, p = 0.45) were reported. Self-ratings of proficiency on a 10-point scale showed that LSC was rated higher than Spanish (t(21) = 4.4, p < 0.001). All participants received monetary compensation for their participation in the experiment in accordance with the standards of the Center for Brain and Cognition (Universitat Pompeu Fabra). Three additional participants were run but excluded from the analyses due to an excessive number of artefacts.

Materials
The items employed, iconicity ratings, and lexical frequency values were obtained from the study of Baus and Costa (2015), which comprised two hundred forty pictures from different databases (Bates et al., 2003; Snodgrass and Vanderwart, 1980). Within this set, half of the items were categorised as corresponding to iconic signs and the other half as corresponding to non-iconic signs. Iconic signs included two types of iconicity: signs resembling perceptual features of the referent (e.g., shape; the sign BALL represents the shape of the ball) and signs depicting pantomimic elements of the referent (e.g., how the referent is used or how an animated referent moves; the sign KEY is performed by moving the hand as if turning a key inside a door lock).¹ To ensure that the choice of iconic and non-iconic signs was properly made, Baus and Costa (2015) obtained iconicity ratings from two different groups of participants. First, iconicity ratings were obtained from a group of hearing speakers (n = 12) with no knowledge of sign language. Raters were asked to evaluate the iconic relation between pictures and their corresponding signs. For each sign, raters evaluated how well the sign resembled the picture presented on a scale from 1 (not iconic) to 5 (very iconic). Iconicity ratings from this group were also compared with ratings provided by a second group of four Deaf signers. Iconicity ratings in both groups were highly correlated (r = 0.81, p < 0.001). Thus, one hundred twenty stimuli were considered as having iconic sign translations (mean = 3.8, SD = 0.9) and the remaining one hundred twenty stimuli as having non-iconic sign translations (mean = 2.2, SD = 1; t(238) = 12.3, p < 0.001). To ensure that iconicity ratings based on picture items did not bias results in the word-to-sign translation task, a new group of twelve hearing non-signers rated iconicity with written words instead of pictures. Iconicity ratings from written Spanish words (iconic: mean = 3.7, SD = 1; non-iconic: mean = 2.4, SD = 1.2; t(232) = 9.3, p < 0.001) were compared with iconicity ratings from pictures in Baus and Costa (2015). Both ratings were significantly correlated (r = 0.71, p < 0.001). Thus, it can be excluded that iconicity ratings in Baus and Costa (2015) were biased because picture stimuli were used in the rating task.
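The comparison between picture-based and word-based iconicity ratings amounts to an item-wise Pearson correlation. The following is a minimal sketch of that check on simulated ratings (the numbers and the generative assumptions are illustrative, not the original LSC norms):

```python
import numpy as np

rng = np.random.default_rng(3)
n_items = 234  # items with both picture- and word-based ratings (df = 232)

# Simulated mean iconicity ratings per item (1-5 scale): a shared "true"
# iconicity plus independent rating noise yields two correlated measures.
true_iconicity = rng.uniform(1, 5, n_items)
picture_ratings = (true_iconicity + rng.normal(0, 0.7, n_items)).clip(1, 5)
word_ratings    = (true_iconicity + rng.normal(0, 0.7, n_items)).clip(1, 5)

# Pearson correlation between the two sets of item means
r = np.corrcoef(picture_ratings, word_ratings)[0, 1]
print(f"r = {r:.2f}")
```

A sizeable positive r, as reported above (r = 0.71), indicates that items rated as iconic from pictures are also rated as iconic from written words.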
Considering lexical frequency, word frequency values were taken from the Spanish corpus B-Pal (Davis and Perea, 2005). Unfortunately, unlike for oral languages, there are no sign language databases available for psycholinguistic variables of signs based on millions of sign tokens. As a consequence, frequency values are usually taken either from subjective measures of sign familiarity or from spoken language databases (Baus and Costa, 2015; Carreiras et al., 2008; Emmorey et al., 2020). In Baus and Costa (2015), there was no LSC corpus available to assess whether the lexical frequency for the Spanish translation of the pictures was similar to the LSC translation, so the group of four Deaf signers also rated familiarity of the signs. Frequency values from the Spanish corpus B-Pal (Davis and Perea, 2005) correlated with familiarity ratings from the group of Deaf signers (r = 0.17, p < 0.01). Thus, half of the stimuli (n = 120) were considered high-frequency (mean frequency per million occurrences: 49.9, SD = 76.04) and half of the stimuli were considered low-frequency (mean frequency per million occurrences: 4.04, SD = 2.5). Stimuli varied orthogonally in iconicity and lexical frequency. Importantly, the degree of iconicity was similar between the high- and low-frequency sets (t(238) < 1), and frequency values were similarly distributed between the iconic and non-iconic sets (t(238) < 1; high-frequency iconic: mean = 54.8, SD = 93; high-frequency non-iconic: mean = 45, SD = 53; low-frequency iconic: mean = 4.5, SD = 2; low-frequency non-iconic: mean = 3.5, SD = 2).
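A design check of this kind (a large iconicity difference between the iconic and non-iconic sets, but t < 1 between the frequency sets) can be sketched as follows. The ratings are simulated from the reported means and SDs, so the exact values are illustrative; the pooled two-sample t statistic is the t(238) test used above:

```python
import numpy as np

rng = np.random.default_rng(0)

def t_independent(a, b):
    # Pooled-variance two-sample t statistic, df = len(a) + len(b) - 2
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Simulated iconicity ratings (1-5 scale) for the four 60-item cells;
# illustrative values only, not the original materials.
hf_iconic     = rng.normal(3.8, 0.9, 60).clip(1, 5)
lf_iconic     = rng.normal(3.8, 0.9, 60).clip(1, 5)
hf_non_iconic = rng.normal(2.2, 1.0, 60).clip(1, 5)
lf_non_iconic = rng.normal(2.2, 1.0, 60).clip(1, 5)

# Iconicity differs strongly between iconic and non-iconic sets (df = 238)...
t_icon = t_independent(np.concatenate([hf_iconic, lf_iconic]),
                       np.concatenate([hf_non_iconic, lf_non_iconic]))

# ...but is matched between the high- and low-frequency sets
t_freq = t_independent(np.concatenate([hf_iconic, hf_non_iconic]),
                       np.concatenate([lf_iconic, lf_non_iconic]))

print(f"iconic vs non-iconic: t(238) = {t_icon:.1f}")
print(f"high vs low frequency: t(238) = {t_freq:.2f}")
```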

Procedure
Participants were tested individually in a sound-attenuated, dimly lit room while engaging in two tasks: a picture naming task and a word-to-sign translation task. The order of the tasks was counterbalanced across participants. Before the experiment, participants were familiarized with the pictures and words selected. In this familiarization phase, the experimenter showed the participants each of the pictures and its related word, and asked them to perform the corresponding sign. If the participant knew more than one sign for the object or did not know the sign, the experimenter showed the appropriate sign and asked the participant to repeat it.
The following procedure was identical in the two tasks, except that participants were presented with either pictures (picture naming task in LSC) or words (Spanish written-words-to-LSC translation task). In both tasks, participants were asked to perform the sign corresponding to the picture/word while ERPs were continuously recorded. E-Prime 2.0® was used to present the stimuli and record signing latencies. In both tasks, stimuli were presented randomly in three blocks of eighty trials each, and each task began with a practice block of eight warm-up trials. At the beginning of each trial, an instruction message on the screen asked participants to press and hold the spacebar of the keyboard. Once the spacebar was pressed, a 500 ms central fixation point was presented, followed by a blank of 300 ms. The stimulus was then displayed and maintained for 3000 ms or until participants released the spacebar to perform the sign. Signing latencies were measured from the onset of the stimulus display until the moment participants released their hands from the spacebar. Participants' responses were recorded on video and checked for accuracy after the experiment ended.

Behavioural analysis
Signing latencies were analysed for each task in a 2 × 2 ANOVA (R Core Team, 2019; package ez; Lawrence and Lawrence, 2016). The factors included in the analysis were iconicity (iconic vs non-iconic) and lexical frequency (high-frequency vs low-frequency) as independent measures, and participants (F1) and items (F2) as random factors. Tukey's range test was applied for pairwise comparisons and corrected values are reported. Responses were considered errors and excluded from the analyses when the elicited sign was not correct or when participants released their hands from the keyboard but hesitated before performing the sign (picture naming task: 7.8%; word-to-sign translation task: 9.6%).
In addition, correlation analyses were conducted considering iconicity and lexical frequency as continuous variables. Signing latencies of items were correlated with lexical frequency (B-Pal; Davis and Perea, 2005) and iconicity ratings (a composite of iconicity ratings from pictures in Baus and Costa, 2015, and from the written words obtained here).
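As a rough sketch of the by-participants (F1) analysis: with only two levels per factor, each within-subject effect is equivalent to a paired t-test on per-participant condition means (F = t²). The following minimal illustration uses simulated latencies (all effect sizes are invented for the example; the actual analysis used R's ez package):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj = 22

# Simulated per-participant mean signing latencies (ms) for the four
# iconicity x frequency cells; values are hypothetical.
base = rng.normal(900, 80, n_subj)
rt = {
    ("iconic", "high"):     base + rng.normal(-40, 20, n_subj),
    ("iconic", "low"):      base + rng.normal(-10, 20, n_subj),
    ("non-iconic", "high"): base + rng.normal(-20, 20, n_subj),
    ("non-iconic", "low"):  base + rng.normal(30, 20, n_subj),
}

def paired_t(x, y):
    # Paired t on per-participant condition means; F1 = t ** 2, df = (1, n - 1)
    d = x - y
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Main effect of iconicity: collapse over frequency within each participant
iconic     = (rt[("iconic", "high")] + rt[("iconic", "low")]) / 2
non_iconic = (rt[("non-iconic", "high")] + rt[("non-iconic", "low")]) / 2
t_iconicity = paired_t(iconic, non_iconic)

# Main effect of frequency: collapse over iconicity within each participant
high = (rt[("iconic", "high")] + rt[("non-iconic", "high")]) / 2
low  = (rt[("iconic", "low")]  + rt[("non-iconic", "low")]) / 2
t_frequency = paired_t(high, low)

print(f"F1 iconicity: F(1,{n_subj - 1}) = {t_iconicity ** 2:.2f}")
print(f"F1 frequency: F(1,{n_subj - 1}) = {t_frequency ** 2:.2f}")
```

The by-items (F2) analysis follows the same logic with items, rather than participants, as the random factor.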

Table 1
Demographic information of the participants. Mean ratings (M) and standard deviations (SD).
1 Although exploring the type of iconicity was beyond the scope of the present study, a post-hoc analysis considering the influence of type of iconicity on performance was carried out. No differences were obtained between those signs resembling object forms and those resembling object actions, either at the behavioural or the ERP level.

EEG recording and analysis
EEG activity was continuously recorded from 30 Ag-AgCl electrodes, mounted on an elastic cap (ActiCap, Munich, Germany) and positioned according to the international 10-20 system. EEG was recorded online to a common reference located at electrode site FCz. Eye movements and blinks were monitored with two electrodes placed below the right eye and at the outer canthus of the left eye. EEG data were sampled at 500 Hz with a hardware bandpass filter of 0.1-125 Hz. Offline EEG data pre-processing and processing were carried out using Brain Analyzer 2.1 (Brain Products, Munich, Germany) and the EEGLAB (Delorme and Makeig, 2004) and ERPLAB (Lopez-Calderon and Luck, 2014) toolboxes. Signals were filtered offline with a bandpass filter of 0.03-20 Hz and re-referenced to the average activity of the two mastoids. Eye blinks and motor or low band-pass artefacts were corrected with the Infomax ICA decomposition algorithm of Brain Analyzer 2.1 (number of ICA steps: 512; number of computed components: 20; classic sphering). ERPs were computed offline for each participant in each condition, time-locked to the onset of the target stimuli presentation, relative to a 100 ms pre-stimulus baseline and until 750 ms post-stimulus onset. Epochs with incorrect responses, amplitudes above or below 100 μV, or with a difference between the maximum and the minimum amplitude larger than 75 μV were discarded from the analysis. ERP values were computed for the two factors of interest, iconicity and lexical frequency. An average of 53 trials per condition in the picture naming task (89% of the total number of epochs) and 49 trials in the word-to-sign translation task (82% of the epochs) were considered in the final analysis. Mean amplitudes for five stimulus-locked latency windows were submitted to repeated-measures ANOVAs: P1 (70-140 ms), N1 (140-210 ms), P2 (210-280 ms), N3 (280-350 ms), and 350-550 ms.
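The two amplitude-based rejection criteria can be sketched as a simple per-epoch check. This is a hedged illustration with synthetic data, not the Brain Analyzer routine actually used:

```python
import numpy as np

def keep_epoch(epoch_uv, abs_limit=100.0, p2p_limit=75.0):
    """Keep an epoch only if no channel exceeds +/-100 uV and no channel's
    peak-to-peak amplitude (max - min) is larger than 75 uV."""
    if np.abs(epoch_uv).max() > abs_limit:
        return False
    p2p = epoch_uv.max(axis=1) - epoch_uv.min(axis=1)  # per-channel range
    return bool((p2p <= p2p_limit).all())

# Synthetic epochs: 30 channels x 425 samples (-100 to 750 ms at 500 Hz)
rng = np.random.default_rng(2)
clean = rng.normal(0, 5, (30, 425))      # plausible ongoing EEG noise
blink = clean.copy()
blink[0] += 120                          # one channel drifting past the limit
print(keep_epoch(clean), keep_epoch(blink))  # True False
```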

Picture naming task
Only significant results related to our factors of interest are discussed below (see Figs. 3 and 4). The ERP analysis for the picture naming task at the 140-210 ms time window revealed a significant interaction between iconicity and electrode cluster (F(2.98, 62.48) = 2.73; p = 0.05, ηp² = 0.11). Follow-up comparisons revealed that the iconicity effect was significant at the Centro-Posterior Central region (t(47.9) = 2.19; p = 0.03, ηp² = 0.09). In this region, pictures whose corresponding signs were non-iconic elicited more positive-going waves compared to those pictures corresponding to iconic signs.
At the 210-280 ms time window, there was a significant interaction between iconicity and electrode cluster (F(3.02, 63.33) = 4.46; p < 0.01, ηp² = 0.17). Follow-up comparisons showed that the effect of iconicity was significant at the Posterior Right region (t(30.2) = 2.32; p = 0.03, ηp² = 0.15), following the same pattern of positivity as in the previous time window.
Finally, at the 350-550 ms time window, the analysis revealed a main effect of frequency (F(1,21) = 7.83; p = 0.01, ηp² = 0.27). In this late time window, pictures related to low lexical frequency elicited more positive waves compared to pictures related to high lexical frequency.

Word-to-sign translation task
Only significant results related to our factors of interest are discussed below (see Figs. 5 and 6). The analysis of the word-to-sign translation task revealed a significant effect of lexical frequency at the 140-210 ms time window (F(1,21) = 4.41; p = 0.05, ηp² = 0.17). High-frequency words elicited greater positivity compared to low-frequency ones.
At the 280-350 ms time window, a main effect of lexical frequency was observed (F(1,21) = 4.34; p = 0.05, ηp² = 0.17), following the same pattern of positivity observed in the early 140-210 ms time window.

Discussion
The present study explored the influence of iconicity and lexical frequency on sign lexical retrieval and how this is modulated by the type of task employed. Our working hypothesis was that if pictures and words differ in the extent to which semantic representations are activated, then iconicity, a semantic-related variable, but not frequency, a lexical-related variable, should be affected by the characteristics of the task. A group of Deaf bimodal bilinguals performed a picture naming task and a word-to-sign translation task while behavioural and electrophysiological measures were recorded. Both tasks required production of the same signs but differed in the stimuli triggering the sign response: pictures or Spanish written words. The results confirmed our hypothesis: iconicity effects in sign production were modulated by the processes induced by the task at hand.
Both at the behavioural and ERP levels, iconicity effects were observed in the picture naming task but not in the word-to-sign translation task. Participants were faster naming pictures related to iconic signs than pictures related to non-iconic signs. The effect of iconicity was greater for low-frequency signs compared to high-frequency ones, in line with previous findings showing a greater impact of iconicity when lexical retrieval entails some level of difficulty, such as naming low-frequency signs (Baus and Costa, 2015) or late-acquired signs (Vinson et al., 2015). Such an interaction was not observed at the ERP level, where only a main effect of iconicity was found around 200 ms after stimulus onset. Importantly, the iconicity ERP effect was only observed in the picture naming task but not in the word-to-sign translation task. At the 140-210 ms and 210-280 ms time windows, pictures related to non-iconic signs elicited more positive amplitudes than pictures related to iconic signs. Although the iconicity ERP effect appeared earlier in our study than in McGarry et al. (2020), the same polarity was obtained in both studies, with iconic signs eliciting a reduced positivity compared to non-iconic signs (see Baus and Costa, 2015, for a different polarity of the iconicity effect). Given the polarity and timing (N400 time range) obtained, McGarry et al. (2020) interpreted the effect of iconicity as having a semantic origin. In particular, differences between iconic and non-iconic signs were described as arising from a greater activation of semantic and sensory-motoric features (e.g., the physical properties of an object or how an object is used) denoted by iconic signs relative to non-iconic signs. Our results here expand McGarry et al. (2020) and other studies on iconicity (Baus and Costa, 2015; Bosworth and Emmorey, 2010) by showing that iconicity, like other semantic manipulations (cumulative semantic effect; Navarrete et al., 2015), only facilitates lexical retrieval when semantic representations are sufficiently activated by the task, as in the picture naming task.

M. Gimeno-Martínez and C. Baus

Table 2
Signing latencies (in ms) and errors by iconicity and frequency across tasks. Values for means, standard errors, confidence intervals, and percentages of errors are reported. Values in the iconic/non-iconic and high/low-frequency conditions were computed for half of the items in each factor sorted by its estimate values.
In contrast with the results observed in the picture naming task, results in the word-to-sign translation task showed that participants were unaffected by iconicity. The absence of significant effects replicates previous evidence testing semantically related manipulations (iconicity: Baus et al., 2013; cumulative semantic effect: Navarrete et al., 2015) and supports the idea that translating words into signs could be accomplished lexically, without semantic mediation. Navarrete et al. (2015) interpreted the lack of semantic effects in the word-to-sign translation as consistent with Kroll and Stewart's (1994) proposal that links between bilinguals' two languages are semantically driven from the L1 to the L2, and lexically driven from the L2 to the L1. As in Navarrete et al. (2015), participants in our study were Deaf signers translating printed words (L2) into signs (L1), which would indicate that lexical links between the bilinguals' two lexicons are not determined by the modality of bilingualism, whether unimodal or bimodal. Those proposals, though, cannot readily account for the lack of iconicity effects obtained in Baus et al. (2013) when hearing bimodal bilinguals translated printed words, their L1, into sign language, their L2. Considering all those results, a more plausible explanation of the lack of semantic effects in the word-to-sign translation task is that words do not trigger activation of semantic representations as pictures do, thus affecting the impact of semantic-related variables. In line with this view, Vigliocco et al. (2005) showed that words do not automatically activate imagistic conceptual representations unless the task motivates it. In a meaning similarity judgement task, hearing speakers were presented with triads of words from three semantic classes and were asked to group the two most similar in meaning. When English speakers were instructed to make a mental image of the words, their grouping of words was more similar to the grouping of signs (in British Sign Language, BSL) made by Deaf native signers. The authors argued that, while Deaf signers automatically activate imagistic representations related to sign forms (e.g., iconicity), English speakers only activate imagistic representations related to words when the task explicitly motivates it.
Altogether, our results on iconicity do not support the idea that the influence of iconicity is automatic in sign language processing (Thompson et al., 2010). The present behavioural and electrophysiological results are more in line with the idea that iconicity effects arise when the task enhances semantic-to-phonological links (Meteyard et al., 2015; Pretato et al., 2018). As such, differences between the two tasks in the present study would arise because pictures (but not words) lead to greater activation of semantics, thus promoting the engagement of the links between semantic concepts and sign phonological representations. Conversely, written words might activate phonological sign representations via a direct lexical route, bypassing semantic activation.
Our results on lexical frequency also emphasised the idea that differences between words and pictures were restricted to semantics and did not extend to other levels of processing. When lexical frequency was manipulated, both tasks revealed an effect: high-frequency items, pictures and words alike, were signed faster than low-frequency items. These results replicated the well-established phenomenon reported in the signed modality (Baus and Costa, 2015; Emmorey et al., 2012, 2013, 2020) as well as in the oral modality (Oldfield and Wingfield, 1964; see Brysbaert et al., 2018, for a review). In addition, frequency effects were obtained at the ERP level in the two tasks. While those results clearly reflect sensitivity to frequency during lexical access in sign production, there were some differences worth noting between the two tasks regarding the polarity and the latency of the frequency effect.
In the picture naming task, pictures related to high-frequency signs elicited a reduced positivity at the 350-550 ms time window in comparison to pictures related to low-frequency signs. These results replicate the polarity obtained in previous picture naming studies in both sign and oral languages (Baus et al., 2014; Baus and Costa, 2015; Qu et al., 2016; Strijkers et al., 2010; Strijkers and Costa, 2011) and the timing reported in picture naming studies in the signed modality (Baus and Costa, 2015). In contrast, ERP frequency effects in the word-to-sign translation task showed a reverse polarity and an earlier onset of the effect. High-frequency words elicited a greater positivity compared to low-frequency words, an effect starting in the 140-210 ms time window and being maximal at around 400 ms (350-550 ms).
Differences in polarity between pictures and words are not unusual and have previously been found in oral language experiments (e.g., Fairs et al., 2021), which supports the idea that task-related variables regulate not the presence of the effect but how lexical variables, such as lexical frequency, modulate the pattern of ERP components (e.g., Fischer-Baum et al., 2014;Strijkers et al., 2015).
Differences between the two tasks were also observed in the latency of the ERP frequency effects. Both tasks revealed a prominent effect of lexical frequency in the N400 time-range (although with opposite polarity), thus replicating previous findings in the oral modality, both written and spoken (Fischer-Baum et al., 2014; Winsler et al., 2018), and in the signed literature (Baus and Costa, 2015; Emmorey et al., 2020; Osmond et al., 2018). Frequency effects in this time-range have been unequivocally attributed to lexical processing, with the N400 indexing changes in the level of activation of lexical representations depending on their frequency. In the picture naming task, such ERP modulations most likely reflected the greater activation of high-frequency signs compared to low-frequency ones. In contrast, in the word-to-sign translation task, because both language modalities are involved (one in the input and the other in the output), the N400 frequency effect could reflect the impact of word frequency, sign frequency, or the parallel activation of both modalities (Gimeno-Martínez et al., 2021; Lee et al., 2019). Although the present data do not allow us to determine which modality of processing is reflected in the N400 time-range, two observations suggest that effects in the word-to-sign translation task reflect the processing of words. First, the early frequency effect obtained in the word-to-sign translation task resembled that reported in the oral modality, using words or pictures as stimuli (Baus and Costa, 2015; Dambacher et al., 2006; Fairs et al., 2021; Hauk and Pulvermüller, 2004; Strijkers et al., 2010, 2015; Winsler et al., 2018). Many of those studies interpreted the early frequency effect as the impact of frequency on sublexical processing during word recognition. Importantly, characteristics of the task influence participants' attention to those sublexical properties (Strijkers et al., 2015; Winsler et al., 2018). Considering those results, the written words in our study might have increased our participants' sensitivity to the sublexical properties of the oral modality. Second, the polarity of the frequency effect in the word-to-sign translation task was the same in the early and the late portions of the effect (but reversed relative to that obtained in the picture naming task). Thus, taking into account the polarity and latency of the frequency effect, a more parsimonious explanation is that frequency effects in the word-to-sign translation task might occur at multiple levels during word processing, including sublexical and lexical processing (e.g., Emmorey et al., 2020; Knobel et al., 2008; Winsler et al., 2018).
Note that, although the main interest of our study was iconicity, the results on lexical frequency across tasks revealed a very interesting pattern, worth exploring further in the future. Importantly, finding differences in the latency and polarity of the frequency effect across tasks does not undermine our conclusion that iconicity effects, but not frequency effects, are modulated by the characteristics of the task.
To summarize, the present study explored the influence of iconicity on lexical access during sign language production. By means of a picture naming task in LSC and a Spanish written-word to LSC translation task, we investigated iconicity as a constituent index of the sign language modality, and lexical frequency as a general index of lexical access across language modalities. Both the behavioural and the electrophysiological results reported here showed that the impact of iconicity on sign language production depends on the processes induced by the task at hand.

Fig. 1. Electrode montage used in the present study. Highlighted sites and regions were included in the analysis.

Fig. 3. Picture naming task. Main effects and interactions in the five analysis epochs. Significant results are highlighted in bold. Topographical p-value maps for lexical frequency effects (High-frequency - Low-frequency) and iconicity effects (Iconic - Non-iconic).

Fig. 4. Picture naming task. ERP amplitudes for the main effects of iconicity (left panel) and lexical frequency (right panel). Seven regions of interest are represented for the ERP amplitudes. Positive amplitudes are plotted down. For the iconicity effect panel, black lines represent iconic items and red lines represent non-iconic items. For the lexical frequency effect panel, black lines represent high-frequency items and red lines represent low-frequency items. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Fig. 5. Word-to-sign translation task. Main effects and interactions in the five analysis epochs. Significant results are highlighted in bold. Topographical p-value maps for lexical frequency effects (High-frequency - Low-frequency) and iconicity effects (Iconic - Non-iconic).

Fig. 6. Word-to-sign translation task. ERP amplitudes for the main effects of iconicity (left panel) and lexical frequency (right panel). Seven regions of interest are represented for the ERP amplitudes. Positive amplitudes are plotted down. For the iconicity effect panel, black lines represent iconic words and red lines represent non-iconic words. For the lexical frequency effect panel, black lines represent high-frequency words and red lines represent low-frequency words. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)