Visuospatial perception is not affected by self-related information

Previous research suggests that attention is drawn by self-related information. Three online experiments were conducted to investigate whether self-related stimuli alter visuospatial perceptual judgments. In a matching task, participants learned associations between labels ('Yourself'/a friend's name/a stranger's name) and cues. Cues were coloured outlines (Experiment 1, N = 135), geometric shapes (Experiment 2, N = 102), or coloured gradients (Experiment 3, N = 110). Visuospatial perceptual bias was measured with a greyscales task, in which cues were presented prior to and/or alongside the greyscales. We hypothesized that there would be a bias towards the self-related cue. In all experiments, we found a self-related bias in the matching task, as well as an overall leftward visuospatial perceptual bias (pseudoneglect). However, we found anecdotal to moderate evidence for the absence of an effect of self-related cues on visuospatial perceptual judgments. Thus, although self-related stimuli influence how attention is oriented to stimuli, the attention mechanisms that influence perceptual judgements are seemingly not affected by a self-bias.


Introduction
We are surrounded by a large number of visual stimuli, of which only a small part is selected and registered. This selection process is called attention. Attention can seemingly alter visuospatial perception: it reduces perceptual thresholds and changes judgments about size, numerosity, and brightness (e.g., Nicholls et al., 1999). Attention is driven by bottom-up and top-down mechanisms. A particular example of top-down modulation of attention is the bias towards self-related information, which has been well replicated (e.g., Alexopoulos et al., 2012; Bortolon & Raffard, 2018; Brédart et al., 2006; Röer et al., 2013). It is unknown whether this specific type of top-down attention modulation can alter visuospatial perception. The current study aimed to evaluate the effect of the self-bias on visuospatial perception. If the self-bias modulates visuospatial perception, this would give an indication of whether the self-bias itself operates by modulating low-level perceptual mechanisms versus (only) by impacting later processing. Furthermore, it provides insight into the mechanisms of visuospatial attention, which could potentially be applied in the treatment of disorders of visuospatial functioning such as visuospatial neglect.
A bias for self-related information is not only seen for stimuli that are familiar to the participant, such as their own name or face, but also for neutral stimuli that participants learned to associate with the self, thereby controlling for effects of familiarity and low-level characteristics of the stimulus (Sui et al., 2009, 2012). Sui et al. (2012) showed this by having participants associate geometric shapes with their own name, a friend's name, and a stranger's name. In a perceptual matching task, participants had to indicate whether a shape-name combination was a match or mismatch. Performance was better for the self-related shape than for the friend- or stranger-related shapes. This self-bias towards abstract, temporarily self-associated stimuli is equivalent to the self-bias towards long-term self-associated stimuli such as names (Scheller & Sui, 2022).
In addition to the enhanced perceptual matching performance for self-related stimuli, self-related stimuli act as powerful exogenous and endogenous orienting cues (Sui & Rotshtein, 2019). Sui et al. (2009) had participants associate red or green arrows with the self or a friend, which were then used as endogenous cues in a cueing task. Self-referential cues shifted spatial attention to cued locations more effectively than cues that were related to others (Sui et al., 2009). Furthermore, self-related cues can cause more interference than friend- or stranger-related cues (Li et al., 2022). Dalmaso et al. (2019) observed that when self-related and stranger-related shapes were presented centrally, participants were slower to initiate a saccade away from the self-related central shape. This can be explained by difficulties in disengaging attention from self-related information, in line with longer fixations on familiar versus unfamiliar faces in visual search (Devue et al., 2009).
Self-related stimuli are not only hypothesized to strongly orient attention, but also to be processed without attention and to determine what is attended (attentional selection). Patients showing visual extinction identified a self-related shape more often than a shape associated with a friend, even when it was presented in their contralesional (neglected) field (Sui & Humphreys, 2017). Macrae et al. (2017) showed that self-relevance of stimuli enhanced visual awareness in healthy participants. In a continuous flash suppression task, participants had to identify geometric shapes that were associated with the self, a friend, or a stranger, and the self-related shapes were prioritized in visual awareness (Macrae et al., 2017). In a different study, however, where Gabor patches were related to the self or another person, there was no difference in breakthrough into awareness (Stein et al., 2016). Contrary to the experiment of Macrae et al. (2017), in the latter experiment the identity of the stimuli was irrelevant to the task, as participants only had to report the stimulus location. To summarize, self-related stimuli are strongly salient and thereby alter attentional orienting and attentional selection. This must involve top-down mechanisms, since the effects cannot be explained by low-level stimulus characteristics (see also the review by Humphreys & Sui, 2016). Top-down attention can also alter perceptual processing, by enhancing signals of task-relevant stimuli and inhibiting signals of task-irrelevant stimuli (e.g., Friedman-Hill et al., 2003; Moran & Desimone, 1985). Given the strong salience of self-related cues, and given that this salience is driven by top-down mechanisms, self-related cues may also alter our perceptual processing.
The aim of the current study was to investigate whether the bias for abstract, temporarily established, self-related stimuli in orienting spatial attention also affects visuospatial perception. To this aim, healthy participants performed several versions of a greyscales task (Mattingley et al., 2004). In the greyscales task, two horizontal rectangles with the same, but left/right mirrored, brightness gradients are presented one above the other. Participants must indicate which of the gradients is lighter (or darker) on average. Healthy people tend to show a slight but consistent preference to choose the gradient that is lighter (or darker) on the left-hand side (Bowers & Heilman, 1980; Jewell & McCourt, 2000; Mattingley et al., 1994). This pattern is referred to as 'pseudoneglect': a subtle leftward visuospatial bias seen in the healthy population that is similar in nature, but opposite in direction, to the rightward visuospatial bias seen in people with hemispatial neglect. Spatial cues presented before the onset of the greyscales stimuli can affect the degree of pseudoneglect (Nicholls & Roberts, 2002; Experiment 2). Similarly, Loftus et al. (2008) reported that overlays with low numbers reversed the rightward bias in left neglect patients, whereas overlays with high numbers had no effect on the rightward bias, which is explained by their relative positions on the mental number line (i.e., low numbers more leftward and high numbers more rightward). This shows that higher-order, internally driven representations of space can influence visuospatial perception.
We aimed to see whether lateralised self-related cues could alter visuospatial perceptual judgments. We hypothesized that people would tend to choose the gradient with the target (either the lighter or darker part) on the same side in space as where the self-related cue appeared. Thus, we expected that people would perceive that gradient as either lighter or darker (depending on the instructions). We conducted three experiments in which participants completed different versions of the greyscales task, in which the main stimuli were presented along with cues. The cues consisted of coloured outlines (Experiment 1) or geometric shapes (Experiment 2) on the left and right sides, or coloured gradients with different colours on the left and right sides (Experiment 3). First, participants learned associations between the labels 'Yourself', 'Friend', and 'Stranger' and the cues. Next, the greyscales task was administered. We expected that after learning the associations, there would be a perceptual visuospatial attention bias on the greyscales task towards the side of the self-related cue compared to the friend-related cue and the stranger-related cue.
In the first experiment, we additionally studied whether any potential cueing effect in the greyscales task could have been caused by having learned an association between the cue and label in general. To this aim, we included a second group of participants who learned associations between the labels 'Yourself' and 'Stranger', and were presented with a third, unlearned cue during the greyscales task. This group was included to help us interpret the results on our main question. If a stranger-related cue would bias visuospatial perceptual judgments as opposed to an unlearned cue, this would indicate that simply having rehearsed a cue-label association and/or having seen a stimulus more often makes it a more salient attentional cue. If we additionally found the hypothesized effect of self-related versus stranger-related cues, these findings combined would raise the question of whether more attention is possibly already spent on the self-related versus the stranger-related cue while learning the cue-label associations, which eventually makes the cue more salient. Conversely, if there were no difference between cues paired with a stranger's name versus unlearned cues, the hypothesized effect of the self-related cue could be completely explained by the mechanisms at play during the greyscales task.

Participants, equipment, and procedure
The first experiment was an exploratory study. Participants consisted of healthy controls who completed the task as part of a teaching exercise for an undergraduate psychology class at the University of Bath. As such, our sample size was based on the number of students who took part in the class rather than an a priori sample size calculation. We used G*Power (version 3.1.9.2) to compute the required effect size (sensitivity) to be detected with a one-sample t-test. With an alpha of 0.05 and a power of 0.80, the required effect size was small (0.27) with 86 participants (group 1), and medium (0.36) with 49 participants (group 2). Inclusion criteria were being aged between 16 and 30 years and having no severe visual deficits. Participants who completed the study could choose to either receive research participation credits or participate in a prize draw. The protocol was approved by the ethical committee at the University of .
The language of the experiment was English. The experiment was conducted online, performed at the participant's home or any other location of their choice, on a laptop or desktop computer with a screen size of at least 11.6 in. (29.5 cm). Participants opened the experiment by clicking on a link, after which Inquisit Player 6 (Inquisit 6.0.8.0, 2014) was installed. The experiment started with information about the study and an informed consent form. Subsequently, the screen size and number of pixels per cm were computed by asking participants to adjust the size of a rectangle on the screen to match the size of a credit card. Participants completed the following sequence of tasks (Fig. 1): (1) greyscales practice, (2) greyscales baseline, (3) matching practice, (4) matching main, (5) post-matching greyscales task, and (6) Edinburgh Handedness Inventory. The Edinburgh Handedness Inventory measures the extent to which a person uses their left hand (score -100 to -40) or right hand (score 40 to 100) for everyday activities (Oldfield, 1971). The English version of the Edinburgh Handedness Inventory was used. At the end of the experiment, participants were debriefed about the aim of the study. For all experiments, stimuli and experiment scripts can be found at https://osf.io/ekhsa/.

Greyscales task
Participants completed 2 blocks of 36 trials of the greyscales task before (i.e., baseline) and 6 blocks of 36 trials after (i.e., post-matching) learning associations between coloured cues and the labels Yourself, Friend, and Stranger (see section 2.1.3). An example sequence of a trial of the greyscales task is depicted in Fig. 2. Stimuli were presented on a white background. A trial started with a central fixation cross (black, 1 × 1 cm), presented for 500 ms. Next, two rectangles with coloured outlines (i.e., the cues) appeared, the left side of the rectangles being one colour and the right side being another colour. After 500 ms, the cues were joined by two gradients (i.e., the greyscales), which were presented for an additional 200 ms. The gradients were presented one above the other with a vertical separation of 1.78 cm, had a height of 1.78 cm, and a width of either 12.76, 15.88, or 19.0 cm. We used coloured outlines as cues so that they would be spatially close to the greyscales stimuli without altering their dimensions.
Participants were asked to choose which gradient was lighter or darker by pressing the 'y' key with the index finger of the right hand for the upper gradient, and the 'b' key with the index finger of the left hand for the lower gradient. The requested target (i.e., lighter/darker) was counterbalanced between participants. The colour cues were present in all of the greyscales blocks (i.e., practice, baseline, and post-matching) to ensure the stimuli were identical in this regard. However, only the post-matching greyscales block occurred after the colour cues had been associated with labels. Participants were not given any explicit instructions regarding the coloured cues. A trial was terminated after a response was given, or after 2000 ms. In the latter case, the text "Too slow, Press space to continue" appeared. In this way, the experiment paused automatically if participants stopped responding. The inter-trial interval was 500 ms.
Participants started with a block of 12 practice trials, in which one of the two gradients was darker than the other (i.e., 75% black/25% white versus 50% black/50% white). This block was repeated if the accuracy was below 75%, up to three times maximum. The response window was 5000 ms. In the baseline and post-matching greyscales tasks, the two gradients were mirror-image versions of the same gradient and contained an equal amount of black and white pixels. As there was no correct or incorrect answer, no feedback on accuracy was given in the baseline and post-matching greyscales tasks.
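To illustrate the logic of the stimuli, a mirrored gradient pair of the kind described above could be generated as follows. This is a minimal sketch, not the authors' stimulus code; the linear luminance ramp and the pixel representation are assumptions for illustration only.

```python
def make_greyscale_pair(width, height):
    # One bar ramps linearly from black (0.0) on the left to white (1.0)
    # on the right; the other bar is its left/right mirror image.
    # NOTE: the linear ramp is an assumption; the published stimuli need
    # only be mirror images with equal amounts of black and white pixels.
    row = [x / (width - 1) for x in range(width)]
    top = [row[:] for _ in range(height)]
    bottom = [row[::-1] for _ in range(height)]
    return top, bottom

top, bottom = make_greyscale_pair(width=100, height=10)
# Both bars contain the same total luminance, so neither is objectively
# lighter: any systematic preference reflects a spatial bias.
```

Because the two bars are exact mirror images, a "which is lighter?" judgment has no correct answer, which is what makes the task sensitive to lateral biases.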

Matching task
The goal of the matching task was to have participants learn associations between cues (in this experiment, coloured outlines) and name-labels. There were two groups. In group 1, participants learned associations between three different cues and the labels 'Yourself', a friend's name, and a stranger's name. In group 2, participants only learned associations between two different cues and the labels 'Yourself' and a stranger's name. The third cue was not part of the matching task (i.e., 'unlearned'). In this way, we could study whether any potential cueing effect in the greyscales task was caused by having learned an association between cue and label in general, in which case there would be a cueing effect of a stranger-related cue versus an unlearned cue. Immediately before completing the matching task, participants were asked to type into a text box the name of their best same-sex friend. This became the label for the 'Friend' cue. For the label 'Stranger', participants were asked to choose from a list of same-sex names and were instructed to choose a name that did not correspond to the name of someone familiar. We decided to limit the labels to same-sex friends or strangers to avoid any potential unknown sources of variability that might be introduced by allowing different combinations of same- and opposite-sex labels. Participants who did not indicate their sex as being either male or female could choose from all names.
The cues were coloured three-sided outlines (i.e., red, purple, and light blue; 6.38 × 1.78 cm; Fig. 3). All cue-label combinations were possible and randomized between participants. At the start of the matching task, participants were presented with their cue-label combinations and were instructed to memorise the combinations.
After the labels for the cues had been assigned, participants were presented with one cue-label combination at a time and had to judge whether it was a 'match' or a 'mismatch', using the keys 'z' and 'm' on the keyboard. Which key belonged to 'match' and which key to 'mismatch' was counterbalanced between participants. A trial started with a central fixation cross (black, 1.78 × 1.78 cm) presented for 500 ms. Next, a cue appeared 3 cm above fixation and the label (font style Arial, font size 14) appeared 3 cm below fixation for 200 ms. For all trials, feedback on accuracy was provided.
There were 18 unique trials, based on the cue (i.e., Yourself, Friend, Stranger), label (i.e., Yourself, Friend, Stranger), and cue opening (i.e., left, right). The trials were balanced in sets of 24 so that the cue-label combination was a match in 50% of trials and a mismatch in the other 50% of trials. Participants started with a block of 48 practice trials, for which the response window was 5000 ms. After the practice block, participants received feedback on their average RT and accuracy. They then completed the main matching task, consisting of 4 blocks of 48 trials (i.e., 192 trials in total; 32 trials per condition) with a response window of 2000 ms for each trial.

Statistical analysis
Analyses were performed using SPSS Statistics 26 and JASP 0.14.0.0. A significance level of alpha = 0.05 was used. The effect sizes for the t-scores were calculated with the formula r = √(t² / (t² + df)). The effect sizes for the z-scores were calculated with the formula r = z / √N. The following cut-offs were used to judge effect size: small effect (r = 0.10), medium effect (r = 0.30), large effect (r = 0.50) (Field, 2013).
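The effect-size conversions above can be expressed directly in code. This is an illustrative sketch (the function names are ours; the reported analyses were run in SPSS/JASP):

```python
from math import sqrt

def r_from_t(t, df):
    # r = sqrt(t^2 / (t^2 + df))
    return sqrt(t * t / (t * t + df))

def r_from_z(z, n):
    # r = z / sqrt(N)
    return z / sqrt(n)

def effect_size_label(r):
    # Cut-offs from Field (2013): small 0.10, medium 0.30, large 0.50
    r = abs(r)
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "medium"
    if r >= 0.10:
        return "small"
    return "negligible"

# A t of 2.0 with df = 96 gives r = sqrt(4/100) = 0.20, a small effect
print(effect_size_label(r_from_t(2.0, 96)))
```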
In addition to the frequentist statistics, we computed Bayes factors to indicate whether there was more evidence in favour of the null hypothesis versus the alternative hypothesis based on the observed data. We reported BF₁₀, the evidence in favour of the alternative hypothesis. We interpreted a BF₁₀ of 1-3 as providing anecdotal, 3-10 moderate, 10-30 strong, 30-100 very strong, and > 100 extreme evidence (Wagenmakers et al., 2018).
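The Wagenmakers et al. (2018) cut-offs used here map onto evidence categories as follows (an illustrative helper, not part of the authors' analysis pipeline):

```python
def bf10_category(bf10):
    # Descriptive labels for evidence in favour of H1 (Wagenmakers et al., 2018)
    if bf10 > 100:
        return "extreme"
    if bf10 > 30:
        return "very strong"
    if bf10 > 10:
        return "strong"
    if bf10 > 3:
        return "moderate"
    if bf10 > 1:
        return "anecdotal"
    # BF10 below 1 favours H0; BF01 = 1 / BF10 reads as evidence for the null
    return "favours H0 (BF01 = {:.2f})".format(1 / bf10)

print(bf10_category(7.35))  # moderate evidence for H1
```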
For all experiments, data and analysis scripts can be found at https://osf.io/ekhsa/. No raw data files are provided, as these contain potentially identifiable information (e.g., friend names).

Pseudoneglect in the greyscales task
We first examined whether participants showed pseudoneglect (i.e., a leftward bias) in both the baseline and the post-matching greyscales task, which would provide insight into whether our online experiment was sensitive enough to measure a visuospatial attention bias. We calculated the proportion of trials in which the greyscale with the target (i.e., lighter or darker) on the left side was chosen, averaged over all six cue pairs. Values above 0.5 indicate a bias towards the left; values below 0.5 indicate a bias towards the right. To evaluate whether this value was significantly higher than 0.5, a one-tailed t-test was used in case of normally distributed data, and a Wilcoxon signed-rank test in case of skewed data.
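The bias score and its test against 0.5 could be sketched as follows. These are hypothetical helper functions for illustration (the reported analyses were run in SPSS/JASP, and the t-test here assumes approximately normal data):

```python
from math import sqrt
from statistics import mean, stdev

def leftward_proportion(left_chosen):
    # left_chosen: per trial, 1 if the gradient with the target on the
    # left side was chosen, 0 otherwise; the mean is the bias score
    return mean(left_chosen)

def one_sample_t(scores, mu=0.5):
    # One-sample t statistic testing participants' bias scores against
    # 0.5 (i.e., no lateral bias)
    n = len(scores)
    return (mean(scores) - mu) / (stdev(scores) / sqrt(n))

# Example: four hypothetical participants with mostly-leftward bias scores
t = one_sample_t([0.6, 0.7, 0.5, 0.6])  # positive t = leftward bias overall
```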

Self-bias in the matching task
To confirm that participants had learned the cue-label associations, we examined whether there was a self-bias in the matching task, as has previously been established. A self-bias would manifest as faster responses and better discrimination between match and mismatch pairs for the Yourself-cue compared to the Friend-cue or Stranger-cue. For each condition (i.e., Yourself, Friend, and Stranger), we calculated the average RT and sensitivity (d').
D prime (d') was computed based on signal detection theory, which allows for the distinction between effects that can be attributed to changes in perceptual sensitivity (i.e., the ability to discriminate between match and mismatch) and effects that are related to response biases (i.e., whether participants have a preferred type of response throughout the task). Each response was first classified as a correct identification of a match ('hit'), a correct rejection of a mismatch ('correct rejection'), a misidentification of a mismatch as a match ('false alarm'), or a misjudgment of a match as a mismatch ('miss'). D' was computed as follows: d' = z(hit rate for matches) - z(false-alarm rate for mismatches). The sensitivity index (d') represents the ability to distinguish matching and mismatching cue-label pairs (Enock et al., 2020).
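The d' computation could be sketched as follows, using the standard-normal quantile from Python's standard library. The +0.5 correction for extreme rates is our assumption; the paper does not state how hit or false-alarm rates of 0 or 1 were handled:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate); adding 0.5 to each count
    # keeps the rates away from 0 and 1, where z would be infinite
    # (this correction is an illustrative assumption)
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# A participant who discriminates match from mismatch well gets a high d'
print(d_prime(30, 2, 3, 29))
```

With equal hit and false-alarm rates d' is zero, so the index isolates discrimination ability from any overall tendency to answer "match" or "mismatch".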
RT and d' were compared between conditions. To compare RT and d' between the Yourself, Friend, and Stranger conditions in the group with three learned cues (group 1), we conducted a repeated-measures ANOVA in case of normally distributed data, and a Friedman test in case of skewed data. A Bonferroni correction was applied for post-hoc pairwise comparisons, which were analysed with Wilcoxon signed-ranks tests in case of skewed data. To compare RT and d' between the Yourself and Stranger conditions in the group with two learned cues (group 2), we conducted a paired t-test in case of normally distributed data, and a Wilcoxon signed-ranks test in case of skewed data.

Influence of self-bias on visuospatial attention
To examine the effect of self-related cues on visuospatial attention in the greyscales task, we evaluated whether there was any evidence that participants' perceptual judgements were biased in the direction of one cue over the others, using the data of the post-matching greyscales task. A bias towards one cue versus the other manifests in the selection of the gradient that is lighter or darker (depending on the instruction) on the side of that cue. For each of the three conditions, the cue that we expected an attention bias towards was designated the reference cue. That is, Yourself was the reference cue when presented with Stranger or Friend, and Friend was the reference cue when presented with Stranger. Per condition (i.e., Yourself-Stranger, Yourself-Friend, Friend-Stranger), we computed the proportion of trials in which the greyscale with the target on the side of the reference cue was chosen, averaged over the left-right arrangements of the cue pairs of that specific condition (e.g., Yourself-Stranger and Stranger-Yourself). This resulted in a number between 0 and 1, with values above 0.5 indicating a bias towards the reference cue. Thus, a score higher than 0.5 represented a bias towards Yourself (in the Yourself-Stranger and Yourself-Friend conditions) or towards Friend (in the Friend-Stranger condition). To evaluate whether these values were significantly higher than 0.5, one-tailed t-tests were used in case of normally distributed data, and Wilcoxon signed-rank tests in case of skewed data.
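For a given condition, the bias score described above could be computed as follows (a sketch with a hypothetical trial representation, not the authors' code):

```python
def reference_cue_bias(trials):
    # trials: (reference_side, chosen_target_side) pairs, with sides
    # "left"/"right"; reference_side is where the reference cue (e.g.,
    # Yourself) appeared, and chosen_target_side is the side on which
    # the chosen gradient had its target (lighter/darker end)
    toward = sum(ref == chosen for ref, chosen in trials)
    return toward / len(trials)

# 3 of 4 choices fell on the reference-cue side: bias score 0.75 (> 0.5
# indicates a bias towards the reference cue)
score = reference_cue_bias([("left", "left"), ("left", "right"),
                            ("right", "right"), ("right", "right")])
```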

Relationship between self-bias in the matching task and the greyscales task
We considered the possibility that only people who showed a strong self-bias in the matching task would show a visuospatial attention bias towards the self-cue in the greyscales task. As a measure of self-bias in the matching task, we computed difference scores capturing how much faster, and how much better, match and mismatch pairs were discriminated in conditions with the Yourself cue versus conditions with the Stranger cue: we subtracted the RT in the Yourself condition from that in the Stranger condition, and subtracted d' in the Stranger condition from that in the Yourself condition. We correlated these values (i.e., one based on RT and one based on the sensitivity index) with the bias score of the Yourself-Stranger condition in the greyscales task, using Spearman correlations.
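The difference scores and their correlation with the greyscales bias score could be sketched as follows, with a pure-Python Spearman correlation for illustration (the reported analyses were run in SPSS/JASP):

```python
def rt_self_bias(rt_yourself, rt_stranger):
    # Positive values = faster responses in the Yourself condition
    return rt_stranger - rt_yourself

def dprime_self_bias(d_yourself, d_stranger):
    # Positive values = better discrimination in the Yourself condition
    return d_yourself - d_stranger

def _ranks(xs):
    # Average ranks, with tied values sharing their mean rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A positive correlation between these difference scores and the greyscales bias score would indicate that participants with a stronger self-bias in the matching task also show a stronger cueing effect.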

Participants
The experiment was started by 166 participants, of whom 139 completed it. Four participants did not obtain a score of ≥ 75% correct in the third practice block of the greyscales task and were therefore excluded. One participant reported being colour-blind, specifying protanopia. As the colours used were sufficiently distinctive for this type of colour-blindness, the data of this participant were not excluded. In total, 86 participants were included in group 1 and 49 participants in group 2. The study took 32.29 min (SD = 6.53) for group 1, and 27.89 min (SD = 7.10) for group 2. Demographic characteristics are shown in Table 1.

Table 1
Demographic characteristics and scores on the questionnaires, frequencies (%), and means (SD; range).
To summarize, in both groups there was very strong to extreme evidence that participants responded faster and discriminated better between match and mismatch pairs in the Yourself versus Stranger condition in the matching task. In group 1, there was extreme evidence that participants responded faster and discriminated better between match and mismatch pairs in the Yourself versus Friend condition. To conclude, there was a self-bias in the matching task.
To summarize, there was moderate evidence for the absence of an effect of yourself-, friend-, or stranger-related cues on a visuospatial perception bias in the greyscales task.
To summarize, there was anecdotal to moderate evidence for the absence of a relationship between the strength of the self-bias as seen in the matching task (i.e., based on RT and d') and a potential effect of self-related cues versus stranger-related cues in the greyscales task.

Discussion
The aim of Experiment 1 was to test whether self-related cues (i.e., coloured outlines) would yield a perceptual visuospatial bias as measured with a greyscales task. First, we replicated the effect of pseudoneglect (i.e., an overall leftward bias) in a greyscales task, as found in previous research (e.g., Mattingley et al., 1994). Second, there was a self-bias in the matching task (i.e., faster responses and better discrimination between match and mismatch pairs for self-cues versus stranger- or friend-cues), also replicating previous research (Sui et al., 2012). However, there was moderate evidence for no perceptual visuospatial bias for a self-related cue versus a stranger-related cue, a friend-related cue, or an unlearned cue. This was against expectations based on previous studies, in which a spatial attention bias towards a self-related cue was, for example, seen in attention orienting responses (Alexopoulos et al., 2012; Sui et al., 2009). Our results suggest that there is no top-down influence of the self-related bias on perceptual judgements of the greyscales stimuli. A possible alternative explanation for these unexpected findings could be the nature of the cue stimuli that were used (i.e., coloured outlines). Coloured outlines have not been used as cues in previous studies, and might not have been salient enough to yield the self-related bias, which is expected to be subtle. In a second experiment, therefore, we used geometric shapes as cues, which have been used in previous research (Sui et al., 2012).

Participants, equipment, and procedure
We used G*Power (version 3.1.9.2) to compute the minimum required sample size for a one-sample t-test comparing the bias score in the greyscales task with 0.5. With an alpha of 0.05 and a power of 0.80, it was estimated that at least 101 participants were needed to detect a small effect size (f = 0.25). This was a conservative estimate that was not based on previous data, because Experiment 1 yielded moderate evidence for no effect, which could have been due to its design. Participants were friends and relatives of the researchers, and undergraduate psychology students at Utrecht University and the University of Bath. The experiment was either in English or in Dutch. In the latter case, Dutch versions of the questionnaires were used. The inclusion criteria were the same as in Experiment 1, with the exception that participants could not have participated in the first experiment. The equipment, procedure, and statistical analyses were the same. The protocol was approved by the ethical committees of the University of Bath (protocol number 20-203) and Utrecht University (protocol number 20-0199).

Greyscales task
The greyscales task was broadly similar to that of Experiment 1, with the following adjustments. First, before the cues were presented, the fixation cross changed from black to red for 250 ms to retain attention, after which it changed back to black. Second, black geometric shapes (i.e., square, triangle, circle; 1.78 × 1.78 cm) were used as cues, rather than coloured outlines. These cues were presented 2 cm left and right of fixation for 500 ms. Subsequently, these cues were joined by the two gradients, next to which they were presented at a distance of 2 cm (Fig. 2). Third, the number of baseline trials was reduced to 6, so that participants could still practice the greyscales task without it greatly taxing their concentration early on in the experiment.

Matching task
The design of the matching task was similar to that of Experiment 1, except that the cues were geometric shapes (i.e., square, triangle, circle; 3.5 × 3.5 cm; Fig. 3). Each cue was matched to either 'Yourself', a friend's name, or a stranger's name for all participants. There was no 'unlearned' cue.

Participants
The experiment was started by 126 participants, of whom 106 completed the experiment. Four participants did not obtain a score of ≥ 75% correct in the third practice block of the greyscales task and were therefore excluded, leaving 102 participants. Most (97; 95.1%) completed the experiment in Dutch. Participants were aged between 17 and 27 years (M = 22.12, SD = 2.22), 54.9% were female versus 45.1% male, and 91.2% were right-handed versus 8.8% left-handed. The study took 37.93 min (SD = 51.67). The average Edinburgh Handedness Inventory score was 83.22 (SD = 46.36).
To summarize, there was extreme evidence that participants responded faster and discriminated better between match and mismatch pairs in the Yourself versus Stranger condition in the matching task. Also, there was extreme evidence for faster responses in the Yourself versus Friend condition, but anecdotal evidence for better discrimination between match and mismatch pairs between these conditions. Lastly, there was anecdotal to moderate evidence that participants responded faster and discriminated better in the Friend versus Stranger condition. To conclude, there was evidence for a self-bias in the matching task.

Relationship between self-bias in the matching task and the greyscales task
There was no relationship between the visuospatial perceptual bias score in the Yourself-Stranger condition in the greyscales task and the self-bias in the matching task based on RT (rs = -0.03, p = .386, τ = -0.02, BF₁₀ = 0.14/BF₀₁ = 7.35) or based on d' (rs = 0.09, p = .182, τ = 0.07, BF₁₀ = 0.22/BF₀₁ = 4.60). Thus, there was moderate evidence for no relationship between the strength of the self-bias as seen in the matching task (i.e., based on RT and d') and a potential effect of self-related versus stranger-related cues in the greyscales task.
A.F. Ten Brink et al.

Discussion
The aim of Experiment 2 was to investigate whether geometric shapes, instead of coloured outlines, would elicit a self-related bias. As in Experiment 1, we replicated pseudoneglect (i.e., a leftward attention bias) in the greyscales task, and we replicated a self-bias in the matching task. Also, once again, we found anecdotal to moderate evidence for no effect of self-, friend-, or stranger-related cues on visuospatial perception. These results provide further evidence for the absence of a self-related bias on visuospatial attention, and for the absence of a top-down influence of the self-related bias on perceptual judgements of the greyscales stimuli.
In both experiments, however, cues were presented 500 ms prior to the greyscales stimuli. Potentially, an orienting response towards the self-related cues was induced when they appeared in isolation, but had already dissipated by the time the cues were combined with the greyscales stimuli. Furthermore, previous studies have shown that the extent to which a self-related stimulus falls within the focus of attention, and whether it shares properties with the target, determine how strongly self-related information is prioritized (Devue, Van der Stigchel, Brédart, & Theeuwes, 2009). Therefore, we conducted a third experiment in which the cues were fully integrated with the greyscales stimuli rather than being separate features or objects.

Participants, equipment, and procedure
The target sample size was based on the same a priori power calculation as in Experiment 2: we conservatively estimated the effect size to be small (f = 0.25). We did not base the effect size on previous data, because no effects were found in Experiment 1 and Experiment 2, which could have been due to their designs. The sample population and inclusion criteria were the same as in Experiment 1, with the exception that participants could not have taken part in the first or second experiment. Furthermore, we additionally recruited participants via Prolific. An English version of the experiment was used for all participants. Participants recruited via Prolific had to have selected English as their primary language and the UK as their country of residence. Otherwise, the procedure and statistical analyses were the same as in the previous experiments.

Greyscales task
The design of the greyscales task was similar to that in Experiment 2, the main change being the integration of colours into the greyscale gradients. Specifically, rather than consisting of black and white gradients, the greyscales stimuli themselves were coloured, with the left half of both bars containing pixels of one colour and the right half of both bars containing pixels of another colour (Fig. 2). These colours constituted the cues (see section 4.1.3). Thus, although the 'greyscales' were no longer grey, the overall ratio of coloured to white pixels in the two horizontal bars was still, on average, the same. No cues were presented prior to the greyscales, because the cues were part of the greyscales figure itself. The instructions were the same as in the previous experiments: participants had to indicate which of the two coloured gradients was lighter or darker on average. Another difference was that the fixation cross at the start of each trial was red, changing to black when the greyscales stimuli appeared (after 500 ms), in order to capture attention and ensure that participants fixated the central cross.
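The defining property of a greyscales pair can be sketched as follows (our construction, not the authors' stimulus code): the two bars are mirror-image gradients that are objectively equal in mean darkness, so any consistent "darker" choice reflects a spatial bias in the observer rather than a property of the stimulus.

```python
# Minimal sketch of a greyscales pair: two mirror-image linear gradients
# with identical mean darkness.
def gradient_bar(n_pixels, dark_on_left):
    """Darkness per column, ramping linearly from 1.0 (dark) to 0.0 (light)."""
    ramp = [1 - i / (n_pixels - 1) for i in range(n_pixels)]
    return ramp if dark_on_left else ramp[::-1]

top = gradient_bar(100, dark_on_left=True)      # dark end on the left
bottom = gradient_bar(100, dark_on_left=False)  # mirror image: dark end on the right
mean_top = sum(top) / len(top)
mean_bottom = sum(bottom) / len(bottom)
# Neither bar is objectively darker on average.
assert abs(mean_top - mean_bottom) < 1e-9
```

A participant with pseudoneglect would nevertheless tend to judge the bar with its dark end on the left as the darker one.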

Matching task
The design of the matching task was similar to that in Experiment 1, with the main change that the cues were coloured gradients (i.e., red, blue, and green; 6.38 × 1.78 cm; Fig. 3). Since we did not control which device and monitor participants used, we could not control for luminance. However, cue-label combinations were fully counterbalanced between participants, so any effects of colour were thereby controlled for.
The cues were matched to 'Yourself', a friend's name, or a stranger's name for all participants. There were 36 unique trials, based on the cue (i.e., Yourself, Friend, Stranger), label (i.e., Yourself, Friend, Stranger), cue opening (i.e., left, right), and position of the dark part (i.e., up, down). The trials were balanced in sets of 48, so that the cue-label combination was a match in 50% of trials and a mismatch in the other 50%.
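One way to build such a 48-trial set from the 36 unique trials is sketched below (the exact repetition scheme is our assumption, not stated in the paper): each of the 3 matching cue-label pairings is shown twice per opening/dark-position variant, and each of the 6 mismatching pairings once, yielding 24 match and 24 mismatch trials.

```python
# Sketch of balancing 36 unique trials into a 48-trial set with a 50/50
# match/mismatch split by oversampling the (rarer) matching pairings.
from itertools import product
import random

cues = labels = ["Yourself", "Friend", "Stranger"]
openings = ["left", "right"]
dark_positions = ["up", "down"]

trials = []
for cue, label, opening, dark in product(cues, labels, openings, dark_positions):
    # 3 matching pairings x 4 variants x 2 reps = 24 match trials;
    # 6 mismatching pairings x 4 variants x 1 rep = 24 mismatch trials.
    reps = 2 if cue == label else 1
    trials += [(cue, label, opening, dark)] * reps

random.shuffle(trials)  # randomize presentation order within the set
assert len(trials) == 48
assert sum(cue == label for cue, label, _, _ in trials) == 24  # 50% match
```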

Participants
The experiment was started by 134 participants, of whom 117 completed the experiment. Six participants did not obtain a score of ≥ 75% correct in the third practice block of the greyscales task and were therefore excluded. One participant was colour-blind (protanopia); because red and green stimuli were used, this participant's data were excluded, leaving 110 participants.
To summarize, there was strong to extreme evidence that participants responded faster and discriminated better between match and mismatch pairs in both the Yourself versus Friend and the Yourself versus Stranger conditions. This shows a self-bias in the matching task.

Relationship between self-bias in the matching task and the greyscales task
There was no relationship between the visuospatial perceptual bias score in the Yourself-Stranger condition in the greyscales task and the self-bias in the matching task, based on RT (rs = 0.06, p = .254, τ = 0.05, BF10 = 0.16/BF01 = 6.19) or on d' (rs = 0.11, p = .136, τ = 0.07, BF10 = 0.23/BF01 = 4.36). Thus, there was moderate evidence for no relationship between the strength of the self-bias in the matching task (i.e., based on RT and d') and a potential effect of self-related versus stranger-related cues in the greyscales task.

Discussion
The aim of Experiment 3 was to investigate whether coloured cues, integrated with the greyscales stimuli, would elicit a self-related bias. As in Experiment 1 and Experiment 2, we observed moderate evidence for no effect of the self-, friend-, or stranger-related cues on visuospatial perceptual attention, despite finding pseudoneglect in the greyscales task and a self-bias in the matching task. Overall, based on the results of Experiment 3, we conclude that even when the self-related cues are fully integrated into, and are indeed part of, the greyscales stimuli, they do not elicit an overall change in visuospatial perception, and there is no top-down influence of a self-related bias on perceptual judgements of the greyscales stimuli.

Effects of self-prioritization have been found in an endogenous cueing task in which red or green arrows were associated with the self or a friend, and the arrows served as endogenous cues. Even though the self- and friend-related cues were equally predictive of the target location, and the colours (i.e., stimulus identities) were irrelevant to the task, self-referential cues shifted spatial attention to cued locations more effectively than cues related to others (Sui et al., 2009). Another study, however, did not observe self-prioritization when eye movements had to be made towards line orientations that were associated with the self and a stranger (Siebold et al., 2015). These conflicting findings may reflect differences in the measures and tasks of these studies.
Third, a stimulus can be completely task-irrelevant, in the sense that processing the stimulus is not required to perform the task at hand, for example when self-related stimuli serve as uninformative cues in cueing and attentional capture tasks, or as distractors in visual search tasks. Attentional biases for one's own versus a friend's face or name have been found in cueing and attentional capture tasks in which attention is oriented automatically (Alexopoulos et al., 2012; Liu et al., 2016). In visual search paradigms, whether such task-irrelevant self-related stimuli are distracting additionally depends on whether they are presented within the focus of attention and/or share properties with the target (Devue et al., 2009). In the current study, the self-related cues were always irrelevant to the task at hand. In our third experiment, the cues were integrated with the greyscales stimuli and therefore shared properties with the target; however, no identification of the cues was needed. Altogether, this suggests that when stimuli are task-irrelevant, long-term established knowledge might play a crucial role in whether or not there is a self-bias. In other words, whereas one's own name or face draws attention even when irrelevant to the task at hand, this might be less likely for abstract, temporarily established cue-label associations, as used in the current study.
Finally, we did not measure the initial orienting of attention to one side versus the other. Possibly, self-related information biases the automatic orienting of attention without producing a perceptual visuospatial bias.

Potential limitations
Since the experiment was conducted online, there was no control over the environment and circumstances in which participants performed the task. This may have added noise to the data, the nature and effects of which are unknown. To ensure participants understood the instructions, they had to perform a practice task and were excluded if they did not obtain a sufficient score on the third attempt. We believe it is unlikely that the lack of control over the environment affected the results, because in all three experiments we found moderate to extreme evidence for a self-bias in the matching task (Sui et al., 2012) and moderate to extreme evidence for pseudoneglect in the greyscales task (Mattingley et al., 1994), replicating previous findings. Therefore, it is unlikely that the absence of a visuospatial perceptual bias towards self-related cues in the greyscales task was due to the online format of the study. Furthermore, to test our hypothesis we conducted three independent experiments, two of which were powered to detect small effect sizes. This provides confidence that our results can be considered reliable.

Conclusion
In the current study, self-related stimuli did not elicit a perceptual visuospatial attention bias, suggesting that there is no top-down influence of a self-related bias on perceptual judgements of the greyscales stimuli. This contrasts with previous studies, in which self-related stimuli strongly influenced where attention was guided, even when abstract, temporarily established cue-label associations were used. The consistent absence of a self-bias across the three experiments of this study gives more insight into the working mechanisms of the self-bias. The degree of task relevance, long-term established knowledge (e.g., one's own face or name versus abstract, temporarily established cue-label associations), and, in the case of distractibility, whether the stimulus is in the focus of attention and/or shares properties with the target, likely influence the degree of self-prioritisation. Future studies on effects of self-related information on attentional processing could use cues that are more relevant to the task at hand, for example by asking participants to report the identity of both cues after each trial. Another direction could be to assess effects of self-related cues on attentional orienting on a sub-second scale, for example using pupillometry (Strauch et al., 2022). To conclude, the current results suggest that different mechanisms underlie the effect of a self-bias in visual perception and in other cognitive domains such as attention. Specifically, a self-bias appears to play a role in orienting and selective attention, but not in the allocation of attention underlying perceptual judgements.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1 .
Fig. 1. Procedure of Experiment 1; the number of blocks per task is depicted by the number of boxes.

Fig. 3 .
Fig. 3. Example of cue-label combinations for the matching task used in Experiment 1 (A), Experiment 2 (B), and Experiment 3 (C). Cue-label combinations were counterbalanced between participants. All of the cues shown for Experiment 1 and Experiment 3 are open on the right; however, cues could also be open on the left. Note that in Experiment 1, group 1 learned all cue-label combinations, whereas group 2 only learned the cue-label combinations for Yourself and Stranger.
Fig. 4 .
Fig. 4. Average reaction times (RTs) for correct responses (upper panels) and average sensitivity index (d') scores (lower panels) in Experiment 1, for group 1 (left-hand panels; N = 86) and group 2 (right-hand panels; N = 49), split per cue in the matching task. Error bars indicate 95% confidence intervals.

Fig. 5 .
Fig. 5. Proportion of trials in the post-matching task for Experiment 1 in which there was a preference for the side of one cue over the other (i.e., bias score), split for group 1 (N = 86) and group 2 (N = 49). A bias score > 0.5 indicates a preference for Yourself in the conditions with a Yourself cue, and a preference for the Friend/Unlearned cue in the conditions without a Yourself cue. A bias score of 0.5 (dashed line) indicates no bias. Error bars indicate 95% confidence intervals.

Fig. 7 .
Fig. 7. Proportion of trials in the post-matching task for Experiment 2 (N = 102) in which there was a preference for the side of one cue over the other. A bias score > 0.5 indicates a preference for the Yourself cue in the Yourself-Friend and Yourself-Stranger conditions, and a preference for the Friend cue in the Friend-Stranger condition. A bias score of 0.5 (dashed line) indicates no bias. Error bars indicate 95% confidence intervals.