Made you look! Temporal and emotional characteristics of attentional shift towards gazed locations

Abstract Studies using the gaze-cueing paradigm show that attention is reflexively shifted to the gazed-at location. However, there is disagreement as to the factors that modulate attention orienting due to gaze cueing. In a series of three experiments, we investigated the role of the emotional expression of the cue (Exps. 1, 2 and 3), cue-target stimulus onset asynchrony (SOA) (Exps. 2 and 3) and emotional valence of the target (Exp. 3) in the participants' ability to attend to the target. Experiments 1 and 3 were discrimination tasks: participants had to differentiate between two neutral targets in Experiment 1 and between two emotionally laden targets (a "square" and a "circle" associated with positive or negative emotions) in Experiment 3. In Experiment 2, participants had to detect a single target presented at different time intervals. The results suggest that attention is oriented towards gazed-at locations regardless of the accompanying emotional expression, SOA and emotion-target association. Thus, eye gaze-mediated attention shifts in healthy adults seem to be unaffected by the experimental manipulations studied herein.


Introduction
Gaze following is a cognitive skill. We follow the gaze of other agents naturally and also expect to find objects compatible with the emotions expressed by the gazing face. Following fearful faces may lead to early discovery of potentially dangerous stimuli, e.g. a snake. Similarly, following a happy face may lead to a pleasant object and enhance social cohesion. Faces have evolutionary significance for visual search (Purcell & Stewart, 1986) and emotional appraisal (Eastwood, Smilek, & Merikle, 2001; Fox et al., 2000). Their emotional content influences how faces engage attention. While gaze cues may shift attention automatically, emotions play a critical role in this activity. We look at gaze cues because we expect objects in the environment whose valence is congruent with the emotion of the gazing face. The emotional content of gazing faces influences processing, since the valence of a face is processed within a brief glance (Calvo & Esteves, 2005; Streit et al., 2003). Identification of facial emotions seems to be an automatic and unconscious process (see Mumenthaler & Sander, 2015; Palermo & Rhodes, 2007 for reviews). However, faces displaying distinct emotions are detected at different intervals (Batty & Taylor, 2003; Liu, Ioannides, & Streit, 1999). While much research has shown how one detects the emotion of a face, there is no agreement as to how the emotional content of faces influences attention orienting.
In this study, we examined how the eye gaze of schematic emotional faces influenced attention orienting during a visual target detection/discrimination task. We also explored whether targets presented as "good" or "bad" receive preferential attentional allocation when cued by a matching emotional face. We presented faces centrally to measure the influence of gaze on target identification and discrimination in a spatial cueing task (Posner, 1978). Human participants automatically shift their gaze as the face they are looking at shifts its eyes in any direction (Driver et al., 1999; Friesen & Kingstone, 1998; Friesen, Ristic, & Kingstone, 2004). Friesen et al. (2004) suggest that this form of attention orienting may reflect an attentional mechanism distinct from the traditional endogenous and exogenous forms. This attention shift in the gazed-at direction is reflexive and automatic (Tipples, 2002). Gaze following thus has both a social and a cognitive basis. Even infants and children follow gazes during early stages of development to look at and approach objects in the environment (Klinnert, Campos, Sorce, Emde, & Svejda, 1983). Centrally presented gaze cues have been shown to shift attention automatically (Friesen & Kingstone, 1998); that is, as soon as one looks at a pair of gazing eyes, one starts to look in the direction of the gaze. Again, this orienting behaviour can be assumed to be linked to our intense familiarity with faces and gazes. However, whether the emotional expression of gazing faces influences attention orienting has been a much debated issue with contradictory results.
Several researchers have used the spatial cueing paradigm to investigate whether the distinct emotional content of faces leads to different patterns of attention orienting (Bayliss, Frischen, Fenske, & Tipper, 2007; Hietanen & Leppänen, 2003; Mathews, Fox, Yiend, & Calder, 2003; Tipples, 2006). This paradigm is an offshoot of the Posner spatial cueing paradigm: a face is presented centrally, followed by a gaze shift in some direction. In some trials, targets appear at the gazed-at locations; in others, they do not. It has been found that, just like centrally presented arrows, centrally presented gaze cues lead to facilitated target processing when targets appear at the gazed-at locations. Stimulus onset asynchrony (SOA) appears to be an important variable in this paradigm, as studies have shown differing results in gaze cueing tasks as a function of SOA (Tipples, 2006). Further, Tipples (2002) found that eye gaze and other symbolic cues such as arrows behave similarly in triggering automatic orienting to targets. Friesen et al. (2004) showed that directional arrows may lead to reflexive shifts of attention when they are non-predictive, but not when they are counterpredictive, whereas counterpredictive gaze cues can still automatically orient attention. The authors therefore suggest that gaze cues may be more strongly reflexive than arrow cues; in other words, participants immediately orient attention towards the location indicated by the gaze shift. Most researchers have obtained a cue validity effect with centrally presented gaze cues in spatial-orienting tasks. Yet it has been less clear whether the emotional content of gazing faces directly influences the focus of attention.
However, an important question in this regard is whether the emotional content of the gazing face influences gaze shift. Considerable evidence shows that the human attentional system is sensitive to different facial emotions (Fox et al., 2000; Hansen & Hansen, 1988). However, few studies have examined how the eye gaze of emotional faces affects attentional mechanisms. Facial emotion and gaze direction appear to be processed together (Ganel, Valyear, Goshen-Gottstein, & Goodale, 2005). Furthermore, it has been shown that participants may be faster in orienting to "fearful" faces than to "happy" faces (Tipples, 2006). This suggests a threat superiority effect manifested in an urge to look around for "bad" things when cued by a "fearful" face. While this argument predicts that observers should show selective attention orienting to different emotional faces (i.e. faster for fearful faces), the results are not conclusive across studies. It is quite possible that such discrepancies among studies resulted from differences in stimuli and other aspects of the experimental designs. For instance, different tasks (i.e. detection vs. discrimination) can play a role in eliciting the effect of the emotional content of faces on orienting, as can SOA (see for a review).
In detection tasks, there is a single target to which participants are asked to respond. During discrimination tasks, on the other hand, participants are asked to distinguish between two possible targets and make separate responses. Discrimination tasks are more attention demanding, as detecting a signal is easier for the visual system than discriminating between signals that share similar features (Sagi & Julesz, 1985). In a recent paper reporting a series of experiments on the effect of masked and unmasked neutral faces on the gaze cueing effect, it was found that gaze cueing effects for unmasked cues hold in discrimination, detection and localization tasks (Al-Janabi & Finkbeiner, 2014). However, for the particular case of discrimination and detection tasks, it is not known whether the type of task determines whether gazing emotional faces influence attention.
What types of emotions influence gaze-induced attention shifts? This is an important question, since faces exhibit both emotions and gaze. When we look at a gazing face, we also automatically process its emotion(s); it is therefore likely that facial emotion and gaze have a combined influence on attention orienting. Tipples (2006) presented happy, fearful and neutral gazing faces followed by targets after either a short (300 ms) or a longer (700 ms) SOA. The results showed a cue validity effect in target detection after both short and longer SOAs. However, participants were faster in orienting to fearful faces than happy faces. It was also observed that individuals high in trait fearfulness showed a greater orienting effect to fearful faces at the short SOA. This pattern of results suggests that gaze-related orienting is influenced by emotions, by cue-target intervals and by the interactions between them. Further, these effects are modulated by factors related to the psychological profiles of participants.
Happy faces have also been shown to influence attention orienting (Hori et al., 2005; Putman, Hermans, & van Honk, 2006). Why do happy faces modulate attention if they do not activate any sense of alertness? It has been suggested that happy faces have a processing advantage compared to other emotions: they are recognized faster and earlier (Kirita & Endo, 1995). Adams and Kleck (2003) observed that happy faces, like other emotional expressions, influence attention orienting. One expects disturbing stimuli following a fearful gaze; similarly, one expects pleasant stimuli following a happy face. It is, of course, debatable and difficult to establish which type of expression has been evolutionarily more useful for attention orienting in the environment. The smiling, happy face of a mother plays an important role in the emotional development of an infant (e.g. Tronick, 1989). Therefore, appreciation of happy faces, too, could trigger automatic attention orienting similar to sad or fearful faces.
Studies on this issue have not consistently observed an influence of the emotional expression of a gazing face on attentional orientation. Hietanen and Leppänen (2003) did not find any evidence of a distinct influence of facial expression on attention orienting beyond a validity effect, in that targets at cued locations were responded to faster than targets at uncued locations. In this study, the authors presented fearful, angry, happy and neutral faces, both as schematic drawings and as real pictures, in a spatial cueing task. There was an influence of the gaze cue at a very short SOA (i.e. 14 ms) but no influence of the corresponding emotion on the cueing effect. Additionally, the authors observed that the gaze cueing effect was stronger for schematic faces than real faces. Although the authors used SOAs ranging from 14 to 600 ms across several experiments, there was no measurable influence of emotion on the cueing effect. Therefore, it is probably not the case that longer SOAs allow the emotional effect to develop; indeed, Tipples (2006) observed the effect of emotion at both 300 and 700 ms. Hietanen and Leppänen (2003) thus suggest that orienting attention to facial gaze cues could be independent of the emotional analysis of such faces. Further, the authors speculate that the emotional content of faces may influence the "shift" and "disengage" components of attention differently, and that those influences may not be readily visible in reaction times (RT). It is also possible that the influence of emotional faces is more prominent not in attention orienting but in the social appraisal of stimuli. These proposals therefore suggest that the traditional spatial cueing paradigm, with traditional variables such as speed and accuracy to targets, may not be the ideal paradigm to investigate emotional effects on attention orienting. Nevertheless, many other studies have used the spatial cueing paradigm and have obtained effects that do suggest an influence of emotional faces on attention orienting.
However, methodological differences between studies cannot be ruled out as major factors that could have influenced the results and their interpretations.
For instance, enhanced orienting to fearful faces has only been observed in highly anxious participants (Mathews et al., 2003). Others have advanced the proposal that looking at an emotional face induces different types of expectancy related to particular objects. Therefore, one may see an influence of emotional faces on gaze shift if emotion-congruent objects are to be found in the gazed-at direction. It is one thing to expect faster or slower RTs as a result of attending to emotional faces and another to expect a "good" or "bad" object as a result of such emotional appraisal; for instance, expecting a "snake" following a fearful face as opposed to a "baby" following a happy face (Bayliss, Schuch, & Tipper, 2010). Some evidence suggests that the affective context of the target plays a role in gaze cueing by emotional faces (Bayliss et al., 2010; Pecchinenda, Pes, Ferlazzo, & Zoccolotti, 2008). Participants, therefore, may not just orient differently to emotional gaze cues; they may also "expect" to find targets matching such cues in valence. For example, Bayliss et al. (2007) observed that even if emotional content does not directly influence attention orienting as far as RTs are concerned, participants view objects differently (as "good" or "bad") depending on the type of emotion. In this study, the authors induced gaze cueing using emotional faces and asked participants to evaluate the attractiveness of the objects gazed at. People liked objects more and rated them more highly when the gaze shift was made by a happy face than by a sad face. The authors therefore proposed that regular RT measures may not show the combined effect of gaze cueing and emotion in a spatial attention task; rather, the effect of emotion on target evaluation is more subtle. Gaze cues accompanied by facial emotion enhance the subjective appraisal of the objects looked at. It is therefore possible that faces with different emotions induce unique expectations among participants.
Pecchinenda et al. (2008) argued that gaze cueing is modulated by facial expression only when participants have an evaluative goal towards the targets. These researchers presented happy, disgusted, fearful and neutral faces as cues, with positive and negative words as targets. Fearful and disgusted faces elicited stronger cueing effects, but only when the task was to evaluate whether the target words denoted something positive or negative; when the task was to respond based on the case of the target words (UPPERCASE/lowercase), gaze cueing effects were comparable across all the emotional faces. An explicit emotional evaluation of the target thus seems to be needed for such effects to emerge. Emotion effects have also been reported in tasks requiring the mapping of emotions onto space (Marmolejo-Ramos, Montoro, Elosúa, Contreras, & Jiménez-Jiménez, 2014). This type of evidence suggests that emotional modulation of the gaze cueing effect may not occur unless an explicit emotional assessment is required.

Current study
We examined these issues further in three experiments using schematic faces with different facial emotions (i.e. happy, sad and neutral) in target detection and discrimination tasks. (Note that most studies have used "angry" faces, and few have examined the effect of "sad" expressions.) We also examined whether emotional gaze cues selectively affect the processing of targets that themselves have "affective" status. While it has been shown that happy and sad gaze cues influence observers' affective ratings of targets, it is not clear whether a "sad" gaze helps find a "sad" object faster. In Experiment 1, participants were presented with emotional faces that shifted gaze in four directions and were asked to differentiate between two objects. This experiment was designed to examine the claims of earlier work in which evidence of emotional gaze cueing was found in a discrimination task but not in a detection task. In Experiment 2, we introduced a detection task along with two SOAs. Previous research has shown that the emotional content of faces takes at least 100-300 ms after the onset of the gaze cue to show its full effect on tasks (Friesen & Kingstone, 1998; Ristic, Friesen, & Kingstone, 2002). Based on this reasoning, one should expect the effect of emotional gaze cues to be enhanced at a longer SOA relative to a shorter SOA. It has also been shown that "fear" and "happy" emotions influence attention orienting differently at shorter and longer SOAs (Tipples, 2006). In Experiment 3, we examined whether facial emotion influenced attention orienting when observers look for "good" and "bad" targets; in other words, we tested whether emotional eye gaze facilitates detection of objects with congruent affective profiles.
Hence, if the facial expression of a cueing face influences attention orienting due to a gaze cue, the gaze cueing effects obtained with happy/sad faces should differ from those obtained with neutral faces. Also, such a gaze cueing effect should be larger at the long SOA. Additionally, we expect an effect of emotional gaze cueing modulated by the relationship between the emotion of the central face and the valence of the target object; that is, responses should be faster towards "happy"/"sad" objects when gazed at via centrally presented "happy"/"sad" faces.

Experiment 1

Participants
Forty-two students from the University of Hyderabad participated in the experiment. All participants reported normal or corrected-to-normal vision and consented to participate in the experiment (see Table 1). All participants gave written consent for their participation; and for this and subsequent experiments, the ethics committee from the University of Hyderabad approved the protocol.

Stimuli and procedure
Schematic diagrams of a face expressing happy, sad and neutral emotions were used in the experiments. The faces measured 6° × 6°. For each emotion type, the gaze direction of the face varied between frontal, left and right. The stimuli were presented on a 19″ LCD monitor with a screen resolution of 1,024 × 768 pixels and refresh rate of 60 Hz. Participants were seated at a distance of 60 cm from the monitor. Stimuli were designed and presented using DMDX software (Forster & Forster, 2003). The trials in each experiment started with a fixation cross at the centre subtending a visual angle of 2° × 2° which stayed for 1,000 ms. Participants were asked to fixate their gaze at this cross. The fixation cross was followed by a frontal-looking face displaying happy/neutral/sad emotion at the centre surrounded by four visible placeholders subtending 2° × 2°. The placeholders were located at an eccentricity of 9° horizontally (left, right) and 7° vertically (up, down). This event remained on the screen for 1,000 ms.
Immediately after this event, the face (displaying a happy/sad/neutral expression) shifted its gaze (up/down/left/right) for 100 ms. The face then returned to its original position with eyes looking forward and stayed for 1,000 ms, acting as a cue-back. The facial expression of the central face did not change during a trial. There were two kinds of targets to be differentiated: a square and a circle. The target (1° × 1°) appeared in one of the placeholders for 2,000 ms or until a response was made. Using a keyboard, participants pressed "s" if they saw a square and "c" if they saw a circle. If the target appeared at the location cued by the schematic face, the trial was considered "valid"; if the cued location differed from the target location, the trial was considered "invalid". There were equal numbers of "valid" and "invalid" trials in the experiment. On invalid trials, the target appeared only at the location exactly opposite the cue; for example, if the gaze cue was towards the left, the target appeared only in the right box. There were 240 trials in total: 80 trials for each emotion type, divided into 40 valid and 40 invalid trials. The location of the target was equally divided among the four placeholders. Participants were given a two-minute break halfway through the experiment (see Figure 1(A)).
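As a rough illustration of the design just described (this is not the authors' actual DMDX script, and all names are hypothetical), the 240-trial list of Experiment 1 could be generated as follows:

```python
import itertools
import random

EMOTIONS = ("happy", "neutral", "sad")
LOCATIONS = ("up", "down", "left", "right")
# On invalid trials the target appears diametrically opposite the gaze cue.
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def build_trials(reps=10, seed=0):
    """Build the 240-trial list of Experiment 1:
    3 emotions x 2 validity levels x 4 target locations x `reps` repetitions."""
    trials = []
    for emotion, validity, target in itertools.product(
            EMOTIONS, ("valid", "invalid"), LOCATIONS):
        for _ in range(reps):
            # Valid: gaze points at the target; invalid: at the opposite box.
            gaze = target if validity == "valid" else OPPOSITE[target]
            trials.append({"emotion": emotion, "validity": validity,
                           "gaze": gaze, "target": target})
    random.Random(seed).shuffle(trials)  # randomize presentation order
    return trials
```

This yields 80 trials per emotion (40 valid, 40 invalid) with target locations equally divided among the four placeholders, matching the counts reported above.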

Design and statistical analyses
The design consisted of the variables "emotion cue" (happy, neutral, sad) and "validity" (valid, invalid). As the variables were within-participants, repeated-measures ANOVAs were used to analyse the data. RT and accuracy rates (AR) were the dependent variables. Various statistical techniques were applied to the RT data before analysing them via parametric ANOVAs. This comprehensive analytical approach is a variation of the "side-by-side" data analysis approach (see the discussion section in Marmolejo-Ramos, Cousineau, Benites, & Maehara, 2015). It requires applying various statistical methods to the same data-set in order to find patterns in the data (see Notes in Table 2 for extra details). To assist in finding patterns in the data, expected and observed sizes of main effects and interactions were estimated (see Table 3). Table 2 summarizes the results of the parametric-only, "side-by-side" analyses performed on the data obtained in the experiment. Effect sizes for main effects and interactions (partial eta squared, η²p) and pairwise comparisons (Cohen's d) are also provided along with their observed power (here, op) (see Lakens, 2013). See Notes in Table 2 for interpretation benchmarks of the effect sizes. Table 3 focuses on observed and expected sizes of main effects and interactions.
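The "side-by-side" idea can be made concrete with a minimal sketch: given a vector of RTs for one participant in one condition, each of the four summaries (mean, log10, inverse, median) is computed and then entered into its own repeated-measures ANOVA. The function below is a hypothetical illustration, not the authors' analysis code:

```python
import math
from statistics import mean, median

def side_by_side_summaries(rts):
    """Compute the four per-condition RT summaries used in the
    "side-by-side" analyses: mean raw RT (A_M), mean log10 RT (A_log),
    mean inverse RT (A_inv) and median RT (A_Mdn).
    `rts` is a list of reaction times in milliseconds."""
    return {
        "A_M": mean(rts),                               # mean raw RT
        "A_log": mean(math.log10(rt) for rt in rts),    # mean log10 RT
        "A_inv": mean(1.0 / rt for rt in rts),          # mean inverse RT
        "A_Mdn": median(rts),                           # median RT
    }
```

Each of the four summaries would then be submitted to the same Emotion-Cue × Validity repeated-measures ANOVA, and convergence across the four analyses taken as evidence of a robust pattern.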

RT data
In Experiment 1, three of the analyses showed a medium effect of the Validity × Emotion-Cue interaction (see Table 2 and Figure 2).

AR data
No significant main effects or interactions were evident in Experiment 1 (all p > .05; see Table 2).

Discussion
In Experiment 1, we considered gaze cues with three different emotion types: happy, sad and neutral. We observed a main effect of validity; that is, responses were faster to targets at cued-at locations than otherwise. We also observed a significant interaction between the emotion of the cueing face and validity. However, pairwise comparisons showed that the difference in response speed between valid and invalid trials was significant only for the neutral face; "happy" or "sad" faces did not significantly modulate the cueing effect. However, we did not include a cue-target delay in Experiment 1, and this might have led to the null effect of the emotional expression of the face. Several studies have shown that the facial expression of gaze cues affects attention orienting at long SOAs (Driver et al., 1999; Friesen & Kingstone, 1998; Ristic et al., 2002). To investigate the role of SOA in attention orienting due to emotion cues, we decided to run another experiment including two different SOAs: 500 ms and 1,000 ms. Further, most cueing studies using emotional faces have not employed a cue-back procedure; the cueing face typically gazes at a location/target and stays there until a response. Hence, to be able to compare our results to existing studies, we used a similar procedure in which the gaze stayed directed at the cued-at location.

Notes (Table 2): The only available benchmarks for interpreting the partial eta squared effect size are .01 (small), .06 (medium) and .14 (large) (Kittler, Menard, & Phillips, 2007). The benchmarks to interpret Cohen's d are .2 (small), .5 (medium) and above .8 (large) (Cohen, 1992). A_M: ANOVA performed on mean RTs and ARs after participants with ARs < -2SD and trials with RTs < 100 ms or RTs > 1,200 ms were discarded. A_log: ANOVA performed on mean logarithm (base 10) transformed RTs after participants with ARs < -2SD were discarded. A_inv: ANOVA performed on mean inverse-transformed RTs after participants with ARs < -2SD were discarded. A_Mdn: ANOVA performed on median RTs after participants with ARs < -2SD were discarded. This procedure was used in Exps. 1 and 3; for Exp. 2, the percentage of catch-trial errors (> 2SD) was the criterion for discarding participants. After the "AR < -2SD" criterion was applied, three participants in Experiment 1, two in Experiment 2 and two in Experiment 3 were removed. When the "100 ms < RT < 1,200 ms" truncation criterion was applied, 7.1% of the data in Experiment 1, 14.02% in Experiment 2 and 2.17% in Experiment 3 were removed.
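The participant- and trial-exclusion criteria described in the Notes to Table 2 can be sketched as follows; this is a hypothetical reconstruction for illustration, not the authors' code:

```python
from statistics import mean, stdev

def exclude_participants(acc_by_participant):
    """Flag participants whose accuracy rate falls more than 2 SD below
    the group mean (the "AR < -2SD" criterion used in Exps. 1 and 3).
    `acc_by_participant` maps participant IDs to accuracy rates (0-1)."""
    accs = list(acc_by_participant.values())
    cutoff = mean(accs) - 2 * stdev(accs)
    return {p for p, acc in acc_by_participant.items() if acc < cutoff}

def truncate_rts(rts, low=100, high=1200):
    """Drop RTs outside the 100-1,200 ms window used for the A_M analysis."""
    return [rt for rt in rts if low < rt < high]
```

For Experiment 2, the same exclusion logic would apply with the percentage of catch-trial errors in place of accuracy rates.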

Figure 2. Mean RT in Experiments 1-3.
Notes: The average RTs correspond to those obtained in analysis A M (see Notes in Table 2). The within-subjects variables are "validity" (valid, invalid), "emotion cue" (happy, neutral, sad), SOA (500 ms, 1,000 ms), and "target association" (P = positive and N = negative). Although the Y axis was adapted to the range of RTs found in each experiment, the major unit (10 ms) was kept constant. Error bars represent ±1 SE.

Experiment 2

Participants
Nineteen students from the University of Hyderabad participated in the experiment. All participants reported normal or corrected-to-normal vision and consented to participate in the experiment (see Table 1). All participants gave written consent for their participation.

Stimuli and procedure
The stimuli used in Experiment 2 were identical to those used in Experiment 1. After the fixation cross, a frontal-looking face was presented for 1,000 ms. Immediately after this, the central face, with its gaze directed towards one of the boxes (left/right/up/down), stayed on the screen for 500 ms or 1,000 ms, after which the target appeared. The target remained on the screen for a maximum of 2,000 ms. The trials were divided into two blocks, one for each SOA, each consisting of 104 experimental trials. In this experiment, instead of a discrimination task, we administered a detection task: participants were instructed to press the "SPACE" bar as soon as they detected a circle in one of the placeholders. Thus, there were catch trials, although no AR were collected. There was a total of 240 trials, of which 32 were catch trials (see Figure 1(B)).

Design and statistical analyses
Experiment 2 consisted of the variables "emotion cue" (happy, neutral, sad), "validity" (valid, invalid) and SOA (500 ms, 1,000 ms). As the variables were within-participants, repeated-measures ANOVAs were used to analyse the data. RT was the dependent variable, and the statistical analyses were the same as those applied to the data from Experiment 1. Table 2 summarizes the results of the analyses performed on the data obtained in the experiment.

RT data
In Experiment 2, three of the analyses showed a large main effect of validity (M η²p = .40, SD = .06; M op = .85, SD = .08), such that participants responded faster on valid than invalid trials (M Cohen's d = .71, SD = .45; M op = .85, SD = .08). One of the analyses (A_Mdn) showed a significant three-way SOA × Validity × Emotion-Cue interaction, reflecting faster responses on valid than invalid trials when a happy face appeared for 500 ms, a neutral face appeared for 500 ms and a neutral face appeared for 1,000 ms (see Figure 2).

AR data
Given the design of Experiment 2, no accuracy rate analysis was performed. The total percentage of catch-error trials was found to be very low (3.53%). Hence, further analyses were not performed on the errors.

Discussion
In Experiment 2, we found that a delay between the gaze cue and the appearance of a target does not always affect attention orienting. Although we found some evidence that SOA, emotion cue and validity can interact, once again the analyses demonstrated that cue validity was the driver of the effect (see also Table 3). It is possible that the emotion of the cueing face modulates shifts in attention only when gaze cueing is towards an affective target. Previously, Friesen, Halvorson, and Graham (2011) showed that gaze cueing was stronger when people had to respond to emotionally valenced targets, and several studies have shown that the affective context of the target plays a role in attention orienting towards targets (Bayliss et al., 2010; Pecchinenda et al., 2008). Since the emotional valence of an object is subjective and temporally dynamic, we decided to assign symbolic emotional values to arbitrary objects, namely a square and a circle.

Experiment 3

Participants
Twenty-four students from the University of Hyderabad participated in the experiment. All participants reported normal or corrected-to-normal vision and consented to participate in the experiment (see Table 1). All participants gave written consent for their participation.

Stimuli and procedure
In Experiment 3, the stimuli and basic design of Experiment 2 were used again, but the task was different. Participants had to differentiate between a square and a circle, and they were instructed to press "A" or "L" on seeing the target; this ensured that participants used two different hands when responding. The correspondence between the response key ("A"/"L") and target type (circle/square) was counterbalanced across participants. Additionally, before the start of the experiment, participants were asked to form a positive association with target "X" and a negative association with target "Y". (The assignment of "X" and "Y" to circle/square was counterbalanced across participants.) For example, in the case of a positive association with the target, they were told "In this experiment, the circle represents positive emotions. Please think of events, experiences and words that evoke positive emotions within you. In other words, associate 'X' with something that makes you happy". Similar instructions were given with respect to target "Y" but with reference to a negative association. These instructions appeared together on the same screen after every 48 trials and stayed on for 30 s. The experiment had a total of 288 trials divided into six blocks. There were 96 trials for each emotion type, divided into 48 valid and 48 invalid trials. The trials were further divided equally between SOA and the type of emotion associated with the target. The trials within a block and the blocks themselves were presented in randomized order (see Figure 1(C)).
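Purely as an illustrative sketch (the paper does not specify its counterbalancing scheme in code, and all names here are hypothetical), the crossing of response-key mapping and affective association could be rotated across participants like this:

```python
import itertools

def counterbalance(participant_ids):
    """Rotate participants through the four combinations of
    response-key mapping ("A"/"L" -> circle/square) and affective
    association (circle = positive vs. circle = negative)."""
    key_maps = ({"A": "circle", "L": "square"},
                {"A": "square", "L": "circle"})
    associations = ({"circle": "positive", "square": "negative"},
                    {"circle": "negative", "square": "positive"})
    combos = list(itertools.product(key_maps, associations))  # 4 combinations
    # Assign combinations cyclically so each occurs equally often.
    return {pid: combos[i % len(combos)]
            for i, pid in enumerate(participant_ids)}
```

With 24 participants, each of the four key-mapping × association combinations is assigned to exactly six participants.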

Design and statistical analyses
Experiment 3 consisted of the variables "emotion cue" (happy, neutral, sad), "validity" (valid, invalid), SOA (500 ms, 1,000 ms) and "target association" (positive, negative). As the variables were within-participants, repeated-measures ANOVAs were used to analyse the data. RT and AR were the dependent variables, and the statistical analyses were the same as those applied to the data from Experiments 1 and 2.

Results
Tables 2 and 3 summarize the results of the analyses performed on the data obtained in the experiment.

RT data
In Experiment 3, all four analyses showed a large main effect of validity (M η²p = .48, SD = .09; M op = .96, SD = .05), in that valid trials were responded to faster than invalid trials (M Cohen's d = .94, SD = .49; M op = .96, SD = .05). Three of the analyses also showed a main effect of target association (M η²p = .29, SD = .03; M op = .80, SD = .06), in that negative target associations were responded to faster than positive ones (M Cohen's d = .77, SD = .59; M op = .80, SD = .06). In one of the analyses, an interaction between target association and SOA was present, such that at the 500 ms SOA negative target associations were responded to faster than positive ones. Finally, another of the analyses showed a three-way interaction between SOA, target association and emotion cue, reflecting faster responses to negative than positive target associations at the 500 ms SOA with the neutral emotion cue and at the 1,000 ms SOA with the happy emotion cue. Figure 2 summarizes the results of the experiment.

AR data
No significant main effects or interactions were evident in Experiment 3 (all p > .05).

Discussion
Experiment 3 once again showed that participants responded faster to targets at cued locations than to targets at uncued locations. However, we again found no evidence that either the emotion of the face or the SOA influenced the gaze-cueing effect, replicating our findings in Experiments 1 and 2 (see also Table 3). In addition, in this experiment participants attached positive and negative values to the targets to be differentiated. We found that the affective context of the target object modulated the speed at which participants responded: responses were faster to objects associated with negative emotion than to objects associated with positive emotion.

General discussion
A series of experiments was devised to investigate the factors affecting attention orienting by emotional gaze cues. The main message across the studies, the "side-by-side" analysis and the observed versus expected effect size estimations is that neither the expression of the cueing face nor the cue-target delay affects participants' responses to gaze cues. Instead, the cueing itself resulted in faster orienting to targets at the cued location, as supported by a large and significant effect of validity. (A trial was considered "valid" when the target location and the cued location were the same and "invalid" when they were not.) We also found that associating the target object with positive or negative valence modulated the speed at which participants responded: objects associated with negative emotion elicited faster responses than objects associated with positive emotion. In addition, in Experiment 1 we observed that gaze direction interacted significantly with the emotional content of the cueing face, but this effect was seen only for neutral faces, not for happy or sad faces, and it was not replicated in the next two experiments. It is possible that the cue-back procedure used in Experiment 1 produced this effect; however, few past gaze-cueing studies with emotional faces have employed a cue-back. Future studies should examine the influence of emotional faces when the eye gaze does not stay at the cued location.
The results of Experiments 1 and 2 are in line with research showing that photographs exhibiting facial affect do not influence spatial attention unless an explicit emotion evaluation task is required (Pecchinenda et al., 2008). They also support the idea that facial expression (be it real or schematic) and SOA have no effect on gaze-cued attention (Hietanen & Leppänen, 2003). The fact that the main effect of cue validity (i.e. valid gaze cues responded to faster than invalid ones) held across discrimination, detection and symbolic emotion discrimination tasks lends support to a recent study demonstrating that informative unmasked gaze-cueing effects are not task dependent (Al-Janabi & Finkbeiner, 2014). Further, Yamada and Kawabe (2013) found that gaze influences the mislocalization of a target object (i.e. an object not gazed at) at an SOA of two seconds, but not at an SOA of zero seconds. Our results complement theirs by showing that, for SOAs as short as 500 ms, gaze orients attention towards the gazed-at location regardless of the accompanying emotional face. This indicates not only that attention to external objects is unaffected by the location looked at (Yamada & Kawabe, 2013), but also that emotions have no effect on attended objects or gazed-at locations. It could be argued that the pictorial faces used in our study did not portray realistic emotional faces and thus did not produce an emotion effect. However, as shown in previous studies (Pecchinenda et al., 2008), using real or pictorial faces does not seem to modulate the effect.
Our Experiment 3 extended previous studies that used targets with affective contexts to examine the influence of attention orienting due to emotional faces (Bayliss et al., 2010; Pecchinenda et al., 2008). Our results show that the emotional valence of the targets modulated participants' response speed but did not modulate gaze cueing by the emotional face. Bayliss et al. (2010) observed larger cueing effects when the emotion of the cueing face matched the affective context of the target object. Similarly, Pecchinenda et al. (2008) found that the emotional content of the cueing face exerted an influence only when the targets had to be affectively evaluated. However, there are important differences between these studies and the present one. We did not use stimuli inherently associated with a particular affect (such as the picture of a snake or specific emotional words); instead, we used arbitrary objects that were temporarily associated with an emotional valence. It is possible that the association formed was not strong enough and that mapping emotions onto arbitrary symbols may not suffice to divert spatial attention.
This last result could be investigated further by examining the relationship between emotional gaze and attention to spatial locations from an embodied cognition viewpoint. Studies on the metaphorical mapping of emotions onto space indicate that positive concepts are mapped onto upper spatial locations, whereas negative concepts are mapped onto lower spatial locations (Ansorge & Bohner, 2013; Marmolejo-Ramos, Elosúa, Yamada, Hamm, & Noguchi, 2013; Marmolejo-Ramos et al., 2014). It could be predicted that targets in upper locations are detected faster when happy faces gaze at them than targets in lower locations gazed at by happy faces and, by the same token, that targets in upper locations are detected more slowly than targets in lower locations when these are gazed at by sad faces. If the gaze-cueing effect is resistant to factors such as emotional faces, SOA and emotion-target association, the metaphorical mapping of emotions onto space might likewise not interact with gaze cues. To our knowledge, however, these conjectures have not yet been tested.