Comparison with others influences encoding and recognition of their faces: Behavioural and ERP evidence

In daily life, faces are often memorized within contexts involving interpersonal interactions. However, little is known about whether such interaction-related contexts influence face memory. The present study addressed this question by investigating how a social comparison-related context affects face encoding and recognition. Forty participants were informed that they and another player each played a monetary game and were then presented with both of their outcomes (either monetary gain or loss). Subsequently, participants were shown the face of the player with whom they had just been paired. After all the faces had been encoded, participants performed a sudden old/new recognition task involving these faces. The results showed that, during the encoding phase, another player's monetary gain, compared to loss, elicited more negative N170 and early posterior negativity (EPN)/N250 responses to the relevant player's face when participants themselves encountered monetary loss, and a smaller late positive potential (LPP) response irrespective of self-related outcomes. In the subsequent recognition phase, a preceding monetary gain (versus loss) for the other player led to better recognition performance and stronger EPN/N250 and LPP responses to the relevant faces when participants had lost money. These findings suggest that the social comparison-related context, particularly self-disadvantageous outcomes within it, influences memory for comparators' faces.


Introduction
Faces convey information about a person's identity that allows us to individuate others. The ability to memorize faces is thus critical for taking appropriate actions when particular individuals are encountered again in the future. In daily life, however, faces are not viewed in isolation but rather together with a social context, particularly a social-emotional context. The question of interest is how such social-emotional contexts influence face memory (e.g., face encoding and recognition).
Face memory is often indexed by behavioural parameters and event-related potential (ERP) components [e.g., the N170, early posterior negativity (EPN), N250 and late positive potential (LPP)]. Regarding ERPs, the N170, a face-sensitive component that peaks approximately 170 ms after stimulus onset and is maximal at occipito-temporal scalp sites, has been suggested to reflect attention during structural encoding of faces (e.g., Eimer, 2000a, 2000b; Eimer and McCarthy, 1999; Holmes et al., 2003; Wronka and Walentowska, 2011). The EPN, which is distributed like the N170 and starts at approximately 200 ms, is thought to be associated with automatic attention (Schupp et al., 2003, 2004, 2006). The N250 component, which is similar to the EPN in terms of time range and scalp distribution, is thought to be associated with facial familiarity (e.g., Herzmann et al., 2004; Kaufmann et al., 2009; Schweinberger et al., 2002; Tanaka et al., 2006). We refer to the EPN and N250 as "EPN/N250" in this study, as these two components cannot be disentangled in face memory studies (Hagemann et al., 2016; Lin and Liang, 2023a). The LPP, which starts at approximately 500 ms and lasts for seconds, has been suggested to be associated with cognitive and complex encoding (Dillon et al., 2006; Labuschagne et al., 2010) and with conscious recollection of details about a previous episode. The LPP is widely distributed across fronto-parietal scalp sites, whereas effects involving social comparisons are greatest over central scalp sites (e.g., Li et al., 2010; Lin and Liang, 2021c; Zhou et al., 2010).
Previous studies have shown that negative emotions implied by a face (i.e., negative facial expressions) influence the encoding and recognition of that face. In the encoding phase, N170 and EPN/N250 responses have been shown to be greater for faces showing negative expressions than for faces showing neutral or even positive expressions (e.g., Balconi and Mazza, 2009; Batty and Taylor, 2003; Calvo and Beltrán, 2013, 2014; Hagemann et al., 2016; Müller-Bardorff et al., 2016; Righi et al., 2012; Smith et al., 2013). Negative facial expressions also affected LPP responses, although the direction of these effects varied according to factors such as expression intensity and attentional focus (Lin and Liang, 2023a; Müller-Bardorff et al., 2016). With respect to face recognition, accumulating evidence suggests that recognition performance (e.g., hit and error rates and d′ scores) and recognition-related ERP responses (e.g., N170, EPN/N250 and LPP) to faces are influenced by negative expressions of the relevant faces (e.g., Johansson et al., 2004; Hagemann et al., 2016; Lin and Liang, 2023a; Lin et al., 2015b; Righi et al., 2012; Sessa et al., 2011). Nevertheless, the direction of the effect is modulated by factors such as attentional focus and the facial expressions shown in the encoding and recognition phases (Lin and Liang, 2023a).
More importantly, previous studies have also investigated how social context, particularly negative social context, influences face encoding and recognition. In terms of face encoding, the N170 response was found to be greater for faces that received negative evaluations than for those that received neutral evaluations (e.g., Krasowski et al., 2021; Schindler et al., 2021). However, compared with neutral faces, contextually fearful faces were found to reduce the amplitude of the N170 to subsequently presented target faces (Furl et al., 2007; Lin et al., 2015b). In addition, compared with positive social scenes, negative ones also produced differential N170 responses to simultaneously presented faces, and the effect of social-emotional scenes depended on attentional focus (e.g., facial-related or facial-unrelated attributes; Cao et al., 2022; Diéguez-Risco et al., 2015; Hietanen and Astikainen, 2013; Righart and De Gelder, 2006, 2008). For the EPN/N250, the response was found to be greater for faces paired with negative evaluations than for those paired with neutral evaluations (Krasowski et al., 2021; Schindler et al., 2021; Wieser et al., 2014). However, the EPN/N250 response was smaller for faces preceded by cues indicating upcoming negative faces than for those preceded by cues indicating neutral faces (Lin et al., 2016). With respect to the LPP, the response was greater for faces paired with negative vocal expressions (Lin and Liang, 2019, 2023b) and social cues (Lin et al., 2016) than for faces paired with the corresponding neutral stimuli. However, the LPP response to target faces decreased when they were preceded by negative social scenes (Hietanen and Astikainen, 2013) or contextual facial expressions (Richards et al., 2013). In addition, the effects of negative evaluations on LPP responses to faces depended on attentional focus (e.g., emotional or non-emotional content; Diéguez-Risco et al., 2015; Krasowski et al., 2021; Schindler et al., 2021). Therefore, the findings generally suggest that negative social context influences face encoding, although the effect is shaped by factors such as the manipulation of the social context and attentional focus.
Several studies have shown effects of negative social contexts on subsequent face recognition performance (e.g., Lin et al., 2015b; Mattarozzi et al., 2019; van den Stock and de Gelder, 2012) and on the corresponding ERP responses (Abdel Rahman, 2011; Lin and Liang, 2019, 2023b). With respect to recognition performance, van den Stock and de Gelder (2012) required participants to view neutral faces paired with negative, positive and neutral social scenes and subsequently to recognize the relevant faces without the scene pairs. The findings revealed impaired recognition of neutral faces encoded together with negative and positive social scenes compared to faces encoded with neutral scenes. Our previous study revealed poorer recognition of target faces that had been encoded after contextual fearful faces than of those encoded after contextual neutral faces (Lin et al., 2015b). However, recognition performance was better for faces paired with negative behavioural descriptions during the preceding encoding phase than for faces paired with neutral descriptions (Mattarozzi et al., 2019).
For ERPs, Abdel Rahman (2011) reported greater recognition-related EPN/N250 and LPP responses during face recognition when the faces had previously been encoded with negative and positive biographical knowledge than when they had been encoded with neutral knowledge. Our previous study (Lin and Liang, 2019) showed that recognition of neutral faces elicited larger recognition-related N170 and LPP responses when the faces had previously been encoded together with angry vocal expressions than when they had been encoded together with neutral vocal expressions. Using similar stimuli, however, our other study (Lin and Liang, 2023b) reported smaller recognition-related LPP responses to neutral faces that had been encoded after angry vocal expressions than to faces that had been encoded after neutral vocal expressions. Taken together, these findings indicate that negatively valenced social contexts influence recognition performance and ERP responses during subsequent face recognition, and that the pattern of influence depends on how the social context is manipulated.
Note that although the abovementioned studies suggest that negatively valenced social contexts influence face memory (e.g., face encoding and recognition), the social contexts in these studies did not involve person-to-person interactions. In complex social circumstances, however, faces are frequently memorized in the context of social interactions, which also carry emotional valence. Thus, it is also of interest whether social interaction-related emotional contexts influence face encoding and recognition.
Regarding the influence of social comparison on face memory, Sugimoto et al. (2021) required participants to play a monetary game against several opponents and subsequently presented the participants with angry, happy or neutral faces. The valence of the facial expression signified the outcome of the social comparison: angry, happy and neutral expressions indicated participant-win/opponent-loss, participant-loss/opponent-win and a draw, respectively. Subsequently, in a sudden old/new recognition task, participants were asked to recognize whether the prompted faces had been presented before. The results showed that recognition performance was better for angry faces than for neutral and happy faces, suggesting better recognition of opponents' faces when individuals obtained self-advantageous outcomes.
Notably, in Sugimoto et al.'s (2021) study, social comparison-related outcomes were indicated by the valence of facial expressions. However, as negative facial expressions are thought to influence face recognition (e.g., Johansson et al., 2004; Hagemann et al., 2016; Lin and Liang, 2023a; Lin et al., 2015b; Righi et al., 2012; Sessa et al., 2011), the face recognition effect observed by Sugimoto et al. (2021) might be influenced not only by social comparison-related outcomes but also by the facial expressions themselves. Moreover, each facial expression signified both a self-related and a nonself-related outcome (e.g., an angry expression indicated one's own gain and the opponent's loss). Thus, it was unclear whether the altered recognition of opponents' faces was due to self-related outcomes (e.g., participants' wins), nonself-related outcomes (e.g., opponents' losses) or social comparison-related outcomes (e.g., participants' monetary outcomes being better than their opponents'). Taken together, facial expressions might not be suitable for manipulating social comparison-related outcomes.
Therefore, the present study aimed to further investigate whether the social comparison-related context, particularly the outcomes in this context, influences face memory. To address this issue, participants were informed that they and several other players each played a monetary game. To investigate the influence of social comparison, participants were then presented with either a positive or a negative outcome for themselves and for the other player; the outcomes for participants and the other player were independent of each other. This approach helps to determine whether face memory is influenced by self-related outcomes, nonself-related outcomes or social comparison-related outcomes. Previous studies have consistently used similar approaches to investigate the processing of social comparison-related outcomes (e.g., Boksem et al., 2011; Hu et al., 2017; Lin and Liang, 2021c; Qi et al., 2018; Wu et al., 2012; Zhang et al., 2020, 2021). Subsequently, participants were presented with a neutral face and were told that the nonself-related outcome had been obtained by the person they saw. Using neutral faces excluded the influence of facial expressions on face memory. After the encoding phase, participants performed a sudden old/new recognition task in which the faces were not preceded by outcomes. Electroencephalograms (EEGs) were recorded throughout the experiment.
In general, individuals feel unpleasant when they obtain a negative outcome. These feelings might be even worse when individuals experience a self-disadvantageous social comparison-related outcome (i.e., when others obtain a better outcome). This heightened unpleasant feeling might be directed towards the person who, to some extent, caused the self-disadvantageous outcome. Previous studies have shown that negative evaluations of a person enhance the encoding and recognition of that person's face (e.g., Krasowski et al., 2021; Mattarozzi et al., 2019; Schindler et al., 2021). Accordingly, we predicted that when individuals obtain a negative outcome, other players' positive outcomes would enhance the encoding of those players' faces, as reflected by ERP (N170, EPN/N250 and LPP) responses, as well as subsequent recognition performance and the relevant ERP responses.

Participants
Forty-seven undergraduate students were recruited as participants. Seven participants were excluded due to excessive EEG artefacts, leaving a final sample of 40 participants (22 women; 18-21 years; M = 19.36, SD = 0.69). We considered whether this sample size could detect a small to medium effect size based on our previous studies on social comparison (Lin and Liang, 2021c; Lin and Liang, in press). Thus, we performed a sensitivity power analysis using G*Power 3.1.7 (Faul et al., 2007). The analysis showed that the current sample size achieved a power of > 80 % to detect a small to medium effect size (f = 0.188) for the interaction between self-outcome and other-outcome. All participants were right-handed as determined by the Edinburgh Handedness Inventory (Oldfield, 1971). Participants had normal or corrected-to-normal vision and reported no neurological illness or relevant history. All participants provided written informed consent, in accordance with standard ethical guidelines from the Declaration of Helsinki. The study was approved by the Academic Committee of the Laboratory for Behavioral and Regional Finance, Guangdong University of Finance.
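The sensitivity analysis was run in G*Power; as an approximate cross-check, the Python sketch below computes power for the 1-df interaction from the noncentral F distribution, using G*Power's repeated-measures convention λ = f²·N·m/(1 − ρ). The correlation among repeated measures (ρ = 0.5) and the number of measurements (m = 4 cells) are assumed defaults here, not values reported in the text.

```python
# Approximate power for the 2 x 2 within-subjects interaction,
# following G*Power's repeated-measures convention:
#   lambda = f^2 * N * m / (1 - rho)
# rho = 0.5 and m = 4 are assumed defaults, not reported values.
from scipy.stats import f as f_dist, ncf

def rm_anova_power(n, f_eff, m=4, rho=0.5, alpha=0.05):
    lam = f_eff**2 * n * m / (1.0 - rho)    # noncentrality parameter
    df1, df2 = 1, n - 1                     # 1-df within-subject contrast
    crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under H0
    return ncf.sf(crit, df1, df2, lam)      # P(F' > crit | lambda)

power = rm_anova_power(n=40, f_eff=0.188)
```

Under these assumptions the estimate lands above the 80 % threshold reported above; with different ρ or sphericity settings G*Power would return somewhat different values.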

Stimuli
The stimuli included 300 neutral faces (150 female and 150 male) obtained from the Chinese Academy of Sciences' Pose, Expression, Accessory and Lighting (CAS-PEAL) Large-Scale Chinese Face Database (Gao et al., 2008) and the Chinese Academy of Sciences' Institute of Automation (CASIA) 3D Face V1 database (http://biometrics.idealtest.org/). All stimuli were adjusted to a size of 6.75 cm × 9 cm (horizontal × vertical). The facial pictures were cropped similarly around the face outline and centred, so that the eyes, noses and mouths were located at similar positions. External features (e.g., neck, shoulders, distant hair and jewellery) were removed. Because the facial stimuli in the CAS-PEAL database (Gao et al., 2008) were grayscale, we converted the other faces to grayscale using Adobe Photoshop CS6 so that all faces matched in colour. The physical features (e.g., luminance and contrast) of the facial pictures were also equated using Adobe Photoshop CS6. Finally, the background colour was set to black.
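The luminance/contrast equating was done in Photoshop; as a rough illustration of the same operation, this numpy sketch normalizes each grayscale image to a common mean and standard deviation. The target values (128 and 40) are arbitrary choices for the example, not the study's settings.

```python
import numpy as np

def match_luminance(images, target_mean=128.0, target_std=40.0):
    """Equate mean luminance and RMS contrast across grayscale
    images (pixel range 0-255). Targets are illustrative only."""
    out = []
    for img in images:
        img = np.asarray(img, dtype=float)
        z = (img - img.mean()) / (img.std() + 1e-12)  # z-score pixels
        out.append(np.clip(z * target_std + target_mean, 0.0, 255.0))
    return out
```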
Two hundred facial stimuli (100 female and 100 male) were used as target faces, presented in both the encoding and recognition phases; the remaining 100 facial stimuli served as novel faces presented only during the recognition phase. The target faces were separated pseudorandomly into 4 sets of 50 faces each (25 female and 25 male). These sets were assigned randomly to the experimental conditions (i.e., self-gain/other-gain, self-gain/other-loss, self-loss/other-gain and self-loss/other-loss), and the assignment of sets was counterbalanced across participants.
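One simple way to realize such counterbalancing is a cyclic (Latin-square style) rotation of the four sets over participants; the sketch below is one possible scheme, not necessarily the authors' exact assignment.

```python
CONDITIONS = ["self-gain/other-gain", "self-gain/other-loss",
              "self-loss/other-gain", "self-loss/other-loss"]

def assign_sets(n_participants):
    """For each participant, map every condition to one of the four
    face sets (0-3) by cyclic rotation, so that across participants
    each set serves each condition equally often."""
    assignments = []
    for p in range(n_participants):
        shift = p % 4
        assignments.append({cond: (i + shift) % 4
                            for i, cond in enumerate(CONDITIONS)})
    return assignments
```

With 40 participants, each of the 4 sets then serves each condition exactly 10 times.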

Procedure
After informed consent had been obtained and handedness had been determined, participants sat in a comfortable chair in a quiet room approximately 100 cm in front of a 22-inch computer monitor with a screen resolution of 640 × 480 pixels, so that the visual angle of the facial stimuli was 3.87° × 5.15° (horizontal × vertical). Stimulus presentation and behavioural data collection were controlled by E-Prime 2.0 software (Psychology Software Tools, Inc., Sharpsburg, PA, USA). All stimuli were presented against a dark background.
Prior to the actual experiment, each participant was told that they would play a monetary game. Participants were informed that numerous undergraduate students (i.e., the other players) from other universities or colleges had previously performed this game. Participants were informed that their outcomes would be presented in each trial together with those of the other players. In fact, there were no other players, and the outcomes of those nonexistent players were predetermined with experimental randomization. Participants were informed that they would receive basic compensation of 25 Chinese yuan for their participation. Moreover, they were told that they would gain or lose an amount of money in each trial according to the value in the box selection task (see the following paragraph for details), and that the overall compensation would be the basic compensation ± 50 % of the net gains/losses over all trials in the box selection task (e.g., if participants gained a net value of 10 over all trials, they would receive [25 + 10 × 50 %] Chinese yuan; if they had a net loss of 10, they would receive [25 − 10 × 50 %] Chinese yuan). In fact, the net gains or losses were randomized by the computer, ranging from −10 to +10.
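The payment rule reduces to a one-line formula; a minimal Python sketch (the function name is ours):

```python
def total_compensation(net_points, base=25.0):
    """Overall pay = basic compensation plus 50 % of the net
    box-task outcome, which ranged from -10 to +10."""
    return base + 0.5 * net_points
```

For the worked examples in the text, a net gain of 10 yields 30 and a net loss of 10 yields 20.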
As shown in Fig. 1, the experiment included an encoding phase and a recognition phase. In the encoding phase, each trial started with a white fixation cross for 500 ms. Subsequently, participants were presented with two white boxes on the left and right sides of the screen center. Participants selected one of the boxes by pressing the "F" or "D" key for the left or right box, respectively, using the index and middle fingers of the left hand. Participants were told that each box contained either −10 or +10 and that they would gain or lose the corresponding amount of money according to their selection. Participants were informed that this was a game of chance and that there was no correlation between the location of the boxes and monetary gain or loss. There was no time limit for the response. After a blank screen was presented for 800-1200 ms (M = 1000 ms), the outcomes of the participant and the other player were presented on the left and right sides of the screen center, respectively, for 1500 ms. The number signified the amount of money gained or lost, with the symbols "+" and "−" to the left of the number indicating monetary gain and loss, respectively. According to the self-related and nonself-related outcomes, there were 4 experimental conditions: self-gain/other-gain (i.e., self vs. other = +10 vs. +10), self-gain/other-loss (i.e., +10 vs. −10), self-loss/other-gain (i.e., −10 vs. +10) and self-loss/other-loss (i.e., −10 vs. −10). The presentation sequence of these conditions was randomized.
Participants were asked to view the outcome and subsequently rate its pleasantness on a 9-point scale (1 = very unpleasant, 9 = very pleasant) by pressing the corresponding number on the number keypad with the right hand. After a blank screen of random duration (800-1200 ms, M = 1000 ms), a neutral face was presented at the center of the screen for 2500 ms. Participants were told that the outcome they had just seen had been obtained by this person and were asked to view the face carefully.
After the encoding phase, a sudden old/new recognition task followed. Participants did not know about the recognition task until it commenced. In this task, each trial started with a white fixation cross for 500 ms. After a blank screen of random duration (800-1200 ms, M = 1000 ms), a neutral face appeared at the center of the screen for 2500 ms. Participants were asked to indicate whether the prompted face had been presented in the preceding phase by pressing the "F" or "J" key with the index finger of the left or right hand, respectively. Response assignments were counterbalanced across participants. Participants were told to respond as quickly and accurately as possible. The next trial started after a blank screen of 500 ms.
There were 50 trials per experimental condition during the encoding phase. During the recognition phase, there were also 50 trials per experimental condition plus 100 trials in the novel-face condition. Thus, the experiment consisted of 200 trials (i.e., 50 × 2 self-outcomes × 2 other-outcomes) during the encoding phase and 300 trials (i.e., 50 old faces × 2 self-outcomes × 2 other-outcomes + 100 novel faces) during the recognition phase. There were 3 breaks in the encoding phase and 2 breaks in the recognition phase, with break durations controlled by the participants. Before the encoding phase, there were 8 practice trials to familiarize participants with the procedure. Note that in the practice trials, participants were told that the nonself-related outcomes were presented randomly by the computer; thus, no other players' faces were presented. After the experiment, participants were asked whether they had participated in similar psychological experiments, whether they believed in the existence of the other players and whether they thought the outcomes for themselves and the other players had been presented randomly. All participants reported that they had no experience with similar experiments, believed that they had played with real persons and did not think the outcomes had been presented at random. The experiment (including practice trials) lasted approximately one hour.

Behavioural recordings
In the encoding phase, pleasantness ratings for all outcomes were recorded. During the later recognition phase, hit rates and reaction times (RTs) for each face were recorded. RTs were analysed for correct trials only.
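These two recognition measures can be computed per condition as follows (a trivial sketch; the trial representation is our own):

```python
def summarize_condition(trials):
    """trials: list of (correct, rt_ms) tuples for one condition.
    Returns (hit_rate, mean RT over correct trials only)."""
    correct_rts = [rt for ok, rt in trials if ok]
    hit_rate = len(correct_rts) / len(trials)
    mean_rt = (sum(correct_rts) / len(correct_rts)
               if correct_rts else float("nan"))
    return hit_rate, mean_rt
```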
Offline, the EEG data were further processed using BrainVision Analyzer 2.0 software (Brain Products GmbH, Munich, Germany). Ocular movements were identified and removed from the EEG signal using the algorithm of Gratton et al. (1983). The EEG data were then segmented into 2700 ms epochs from −200 ms to 2500 ms relative to the onset of the facial stimuli during the encoding and recognition phases, with the first 200 ms used for baseline correction. Artefact rejection was performed with an amplitude threshold of 100 μV, a gradient criterion of 50 μV/ms and a lowest-activity criterion of 0.5 μV per 100 ms. Artefact-free trials were then averaged for each channel and experimental condition. Averaged ERPs were low-pass filtered at 30 Hz (24 dB/oct, zero phase shift). Finally, ERPs were recomputed to an average reference excluding the vertical and horizontal EOG channels. On average, 85.65 % and 85.21 % of trials were retained in the encoding and recognition phases, respectively.

Data analysis
For the behavioural data, pleasantness ratings during the encoding phase and hit rates and RTs during the recognition phase were analysed with 2 × 2 repeated-measures analyses of variance (ANOVAs) with self-outcome (gain versus loss) and other-outcome (gain versus loss) as within-subject factors. For the ERP data, we averaged the amplitudes of the N170, EPN/N250 and LPP across all electrodes of interest, separately for the encoding and recognition phases. The amplitudes of these components were assessed with the abovementioned 2 × 2 ANOVA.
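For a fully within-subjects 2 × 2 design, each 1-df effect of the repeated-measures ANOVA is equivalent to a one-sample t-test on a per-participant contrast score, with F(1, n − 1) = t². A Python sketch of this equivalence (the cell ordering is our own convention; the authors used SPSS):

```python
import numpy as np
from scipy import stats

# Cell order: [self-gain/other-gain, self-gain/other-loss,
#              self-loss/other-gain, self-loss/other-loss]
CONTRASTS = {
    "self-outcome":  np.array([ 1,  1, -1, -1]),
    "other-outcome": np.array([ 1, -1,  1, -1]),
    "interaction":   np.array([ 1, -1, -1,  1]),
}

def rm_2x2_anova(cell_means):
    """cell_means: (n_subjects, 4) condition means. Each 1-df
    within-subjects effect is a one-sample t-test on the contrast
    score, and F(1, n-1) = t^2."""
    out = {}
    for name, c in CONTRASTS.items():
        scores = cell_means @ c
        t, p = stats.ttest_1samp(scores, 0.0)
        out[name] = {"F": float(t) ** 2, "p": float(p)}
    return out
```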
Furthermore, to understand the relationships between the encoding and recognition phases, we calculated the mean differences in responses to other-gain versus other-loss, separately in the self-gain and self-loss conditions, for each dependent variable and each participant across trials. We then analysed whether the mean differences in encoding-related dependent variables (i.e., pleasantness ratings and encoding-related N170, EPN/N250 and LPP responses) were correlated with the differences in recognition-related dependent variables (i.e., hit rates, RTs, and recognition-related N170, EPN/N250 and LPP responses). Statistical analyses were performed using IBM SPSS Statistics software (Version 26; SPSS Inc., an IBM company, Chicago, Illinois). The data that support the findings of this study are openly available in the Open Science Framework at https://osf.io/3najr/?view_only=50226046cfa64ba393bf591cb7e53f3e.
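This correlation step reduces to Pearson correlations between per-participant difference scores; a minimal sketch (variable names are ours):

```python
import numpy as np
from scipy import stats

def context_effect_correlation(enc_gain, enc_loss, rec_gain, rec_loss):
    """Correlate per-participant (other-gain minus other-loss)
    difference scores for an encoding-phase measure with those for
    a recognition-phase measure. Returns Pearson r and p."""
    enc_diff = np.asarray(enc_gain) - np.asarray(enc_loss)
    rec_diff = np.asarray(rec_gain) - np.asarray(rec_loss)
    r, p = stats.pearsonr(enc_diff, rec_diff)
    return float(r), float(p)
```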

Encoding-related ERP data
Encoding-related N170

The analysis showed only an interaction effect between self-outcome and other-outcome (F(1, 39) = 9.02, p = .005, ηp² = 0.188; Figs. 3 and 4). Further analysis of each level of self-outcome showed that when participants encountered monetary loss, the N170 response was more negative for faces in the other-gain condition than for those in the other-loss condition (F(1, 39) = 8.31, p = .006, ηp² = 0.176), whereas the effect of other-outcome was not significant when participants encountered monetary gain (p = .757). The main effects were not significant (p ≥ .119).
Encoding-related EPN/N250

There was only a significant interaction effect between self-outcome and other-outcome (F(1, 39) = 8.17, p = .005, ηp² = 0.173; Figs. 3 and 4). A separate analysis for each level of self-outcome revealed that when individuals encountered monetary loss, faces elicited a more negative response in the other-gain condition than in the other-loss condition (F(1, 39) = 9.14, p = .004, ηp² = 0.190), whereas the effect of other-outcome was not significant when individuals encountered monetary gain (p = .736). No main effects were significant (p ≥ .068).
Encoding-related LPP

The analysis showed only a main effect of other-outcome, with more negative responses to faces in the other-gain condition than in the other-loss condition (F(1, 39) = 5.31, p = .026, ηp² = 0.120; Figs. 4 and 5). The other main effect and the interaction were not significant (p ≥ .909).

Hit rates
There were no main effects of self-outcome or other-outcome (p ≥ .109), whereas the interaction between these two factors was significant (F(1, 39) = 6.36, p = .016, ηp² = 0.140; Fig. 2). A separate analysis for each level of self-outcome showed that when participants themselves encountered monetary loss, hit rates were greater for faces in the other-gain condition than for those in the other-loss condition (F(1, 39) = 9.71, p = .003, ηp² = 0.199), whereas the effect of other-outcome was not significant when participants encountered monetary gain (p = .549).

RTs
There were no main effects of self-outcome or other-outcome (p ≥ .521), but there was an interaction effect between these two factors on RTs (F(1, 39) = 4.19, p = .047, ηp² = 0.097; Fig. 2). However, follow-up analyses for each level of self-outcome did not reveal an effect of other-outcome in either the self-gain or the self-loss condition (p ≥ .142).

Recognition-related ERP data
Recognition-related N170

The analysis did not reveal any main effects or interactions (p ≥ .147; Figs. 6 and 7).
Recognition-related EPN/N250

There was a significant interaction effect between self-outcome and other-outcome (F(1, 39) = 5.33, p = .026, ηp² = 0.120; Figs. 6 and 7), although no main effects were significant (p ≥ .069). A separate analysis for each level of self-outcome revealed that in the self-loss condition, a preceding monetary gain for the other player elicited a more negative response to faces than did a preceding monetary loss (F(1, 39) = 10.30, p = .003, ηp² = 0.209), whereas the effect of other-outcome was absent in the self-gain condition (p = .903).
Recognition-related LPP

The analysis showed only a main effect of other-outcome (F(1, 39) = 4.53, p = .040, ηp² = 0.104; Figs. 7 and 8). The response was more positive for faces in the other-gain condition than for those in the other-loss condition. The other main effect and the interaction were not significant (p ≥ .188).
As shown in Figs. 7 and 8, however, the effect of other-outcome seemed to be prominent only in the self-loss condition and not in the self-gain condition. Exploratory post hoc tests for each level of self-outcome revealed that the effect of other-outcome was significant only in the self-loss condition (F(1, 39) = 4.62, p = .038, ηp² = 0.106) and not in the self-gain condition (p = .690), suggesting a potentially stronger effect of other-outcome in the self-loss condition.

Correlations between the encoding and recognition phases
The analysis revealed negative correlations between the mean differences in pleasantness ratings and hit rates (r = −0.320, p = .044) and between LPP amplitudes and hit rates (r = −0.365, p = .020; Fig. 9A) for other-gain versus other-loss outcomes in the self-loss condition. For this contrast, there were also positive correlations between encoding-related and recognition-related N170 responses (r = 0.375, p = .017), between the encoding-related N170 response and the recognition-related EPN/N250 response (r = 0.330, p = .037), between the encoding-related EPN/N250 response and the recognition-related N170 response (r = 0.320, p = .044) and between encoding-related and recognition-related EPN/N250 responses (r = 0.421, p = .007; Fig. 9B). There were no significant correlations between the other pairs of encoding-related and recognition-related dependent variables (p ≥ .069).

Discussion
The present study investigated whether face encoding and recognition are influenced by the social comparison-related context. The current findings revealed that, during the encoding phase, other players' monetary gain, compared to loss, resulted in larger encoding-related N170 and EPN/N250 responses to the players' faces in the self-loss condition and in a smaller encoding-related LPP response irrespective of self-outcome. In the subsequent recognition phase, a preceding monetary gain for the other players, compared to a loss, led to better recognition performance and greater recognition-related EPN/N250 and LPP responses to the relevant faces in the self-loss condition. These findings suggest that self-disadvantageous outcomes in the context of social comparison influence the encoding and recognition memory of comparators' faces.
Previous studies have suggested that the N170 is associated with attention during structural and perceptual encoding of a face (e.g., Eimer, 2000a, 2000b; Eimer and McCarthy, 1999; Holmes et al., 2003; Wronka and Walentowska, 2011). The EPN/N250 is thought to be associated with stimulus attention (Schupp et al., 2003, 2004, 2006). The LPP has been suggested to reflect attention allocation during cognitive and detailed encoding of a stimulus (Dillon et al., 2006; Labuschagne et al., 2010). Therefore, the current findings regarding the encoding phase might imply that others' positive outcomes, compared to their negative outcomes, increase attention during perceptual encoding of others' faces when one obtains a negative outcome, and decrease attention during elaborate encoding of the relevant faces irrespective of self-outcome. Individuals may perceive others as potential competitors even when their interests are independent of each other (Fukushima and Hiraki, 2006; Wang et al., 2017). In this potentially competitive context, individuals might compare themselves to others either intentionally or unintentionally (e.g., Festinger, 1954; Suls et al., 2002). When individuals obtain a negative outcome, they might experience envy and feel threatened by others who obtain a positive outcome (e.g., Feather and Nairn, 2005; Feather and Sherman, 2002; Leach and Spears, 2008; Lin and Liang, 2021a, 2021b; Parrott and Smith, 1993; Van de Ven et al., 2015). Envy and threat might lead individuals to direct their attention towards other-relevant information (e.g., others' faces) immediately after its presentation, increasing perceptual encoding of the relevant faces. Moreover, in late time ranges, individuals might continue to experience envy and threat, sustaining attentional focus on the faces and thereby reducing the recruitment of additional attention for detailed encoding of the relevant faces. In line with this interpretation, our current findings revealed stronger negative feelings towards other players' monetary gains than towards their monetary losses in the self-loss condition.
In contrast to the self-loss condition, the self-gain condition did not yield an effect of other-outcome on encoding-related N170 or EPN/N250 responses to faces, although the effect of other-outcome on encoding-related LPP responses was still evident. It has been suggested that individuals are more empathetic and cooperative with others when they are relatively successful (Shen et al., 2013). More relevantly, Zhang et al. (2020) suggested that when individuals receive a gain, they also want others to gain. Accordingly, in the current study, participants might also have wished that the other player would obtain a positive outcome when they themselves obtained such an outcome. This prosocial wish might decrease unpleasant feelings (e.g., envy and threat) even when the other players also obtained a positive outcome. This decrease might weaken immediate attention given to the other players' faces, leading to the absence of an effect of other-outcome on encoding-related N170 and EPN/N250 responses.
In the subsequent recognition phase, the current findings revealed better recognition performance and greater recognition-related EPN/N250 and LPP responses to other players' faces whose identities had been encountered with monetary gain than to faces whose identities had been encountered with monetary loss in the self-loss condition. In contrast to the encoding-related EPN/N250 and LPP, the recognition-related EPN/N250 and LPP are thought to be associated with face familiarity and conscious recollection, respectively. Therefore, the current findings might suggest that when individuals had previously encountered monetary loss, other players' faces in the other-gain condition were considered more familiar and more likely to be recollected than faces in the other-loss condition, and were thus recognized better.
As mentioned above, when individuals obtain a negative outcome, others' positive as compared to negative outcomes might cause individuals to feel envious and threatened and thus increase the encoding of others' faces. Moreover, it has been shown that high-arousal/intense emotional stimuli increase memory consolidation (e.g., Anderson et al., 2006). In the present study, the findings revealed that in the self-loss condition, others' gains were rated as more unpleasant than others' losses and thus might have been perceived as more arousing and intense. Increased emotional arousal and intensity might extend to the relevant faces, leading to increased memory consolidation. The increased encoding and consolidation might subsequently improve recognition performance and recognition-related ERP (e.g., EPN/N250 and LPP) responses to faces. Consistently, our current findings revealed negative correlations between pleasantness ratings/encoding-related LPP responses, which are related to emotional experiences, and hit rates, as well as positive correlations between encoding-related ERP responses (e.g., N170 and EPN/N250), which are also relevant to emotional experiences, and recognition-related ERP responses. However, there was no effect of other-outcome on recognition performance towards faces in the self-gain condition or on recognition-related N170 responses irrespective of self-outcome. During the encoding phase, attention, reflected by the encoding-related LPP response, was still sustained more automatically to encode the faces in the other-gain condition than the faces in the other-loss condition when participants obtained monetary gains. However, based on the pleasantness ratings, the difference in emotional arousal/intensity [i.e., the difference between the pleasantness rating and the neutral value (i.e., 5)] was smaller in the self-gain condition than in the self-loss condition. The smaller emotional arousal/intensity in the self-gain condition might have been insufficient to increase memory consolidation of the faces, leading to the absence of an effect of other-outcome on face recognition. In addition, the nonsignificant effect of other-outcome on recognition-related N170 responses might be because participants were unable to retrieve the context in which relevant faces had been encountered during the preceding encoding phase at such early time points.
Inconsistent with the findings of the present study, the study by Sugimoto et al. (2021) revealed greater recognition of opponents' faces in the self-advantageous condition (i.e., participant-win/opponent-loss). However, as mentioned in the introduction section, that study manipulated social comparison-related outcomes through the facial expressions of the opponents. With this approach, the effect of social comparison-related outcomes could be contaminated by facial expressions. Moreover, self-related and nonself-related outcomes were indicated by a specific category of facial expression. The covariance between self-related and nonself-related outcomes made it impossible to determine whether face memory was associated with social comparison-related, self-related or nonself-related outcomes. In the current study, self-related and nonself-related outcomes varied independently. These outcomes were presented prior to the presentation of neutral faces and were not indicated by facial attributes. Using these approaches, the current findings indicated the role of self-disadvantageous social comparison-related outcomes in face encoding and recognition.
Previous studies have repeatedly investigated the emotional effects of static social contexts (e.g., social scenes) on face encoding and recognition, suggesting that negatively valenced social contexts influence face memory (Abdel Rahman, 2011; Cao et al., 2022; Diéguez-Risco et al., 2015; Furl et al., 2007; Hietanen and Astikainen, 2013; Lin and Liang, 2019, 2023b; Lin et al., 2015b; Righart and De Gelder, 2006, 2008; van den Stock and de Gelder, 2012). In real social circumstances, however, faces are more frequently memorized within contexts involving interpersonal interactions. Only a limited number of studies have investigated face memory in such contexts (Matyjek et al., 2021; Niedtfeld and Kroneisen, 2020; Park et al., 2019; Sugimoto et al., 2021). In particular, Sugimoto et al. (2021) suggested that the social comparison-related context influences face recognition. However, those findings require further validation due to the abovementioned limitations. By circumventing the limitations of the previous study, the current findings indicated enhanced face encoding and recognition in the self-disadvantageous social comparison-related context (i.e., self-loss/other-gain). Given that this condition was evaluated as more negative than the other conditions based on pleasantness ratings, the current findings might imply that negative interpersonal interaction contexts, in particular contexts involving social comparison, have a strong impact on face memory, similar to other static social contexts.
In addition, our present study demonstrated the influence of social comparison on face memory. Previous studies have shown that social comparison also influences memory of nonfacial stimuli (e.g., words), even when social comparison is manipulated with other designs and tasks (e.g., with versus without social comparison; comparing one's own concrete outcomes with the average outcome of others; DiMenichi and Tricomi, 2017; Mano et al., 2011; Sugimoto et al., 2016; Suran et al., 2021). Taken together, the findings of previous studies and the current study suggest that social comparison might be a critical factor that alters individuals' memory systems irrespective of stimuli, designs and tasks.
Finally, we would like to mention some limitations of the present study and offer suggestions for future studies. First, previous studies have shown that ERP responses, in particular LPP responses, differ between successfully recognized stimuli and stimuli that cannot be recognized (e.g., Rugg et al., 1998; Wilding, 2000). Therefore, it might be ideal to analyze recognition-related ERP responses specifically for successfully recognized faces in the present study. However, because we used a sudden recognition task and hit rates were low (approximately 45 %), the number of correct-response trials was insufficient for ERP averaging. Future studies might increase hit rates by using an explicit memory task to further investigate this issue. Second, the analysis revealed correlations between pleasantness ratings/LPP responses and hit rates and between encoding- and recognition-related N170 and EPN/N250 responses as an effect of other-outcome in the self-loss condition. However, no other correlations (e.g., between encoding-related N170 or EPN/N250 responses and hit rates) were found for this effect, possibly due to the limited sample size. Future studies might expand the sample size to further investigate the correlations between encoding-related and recognition-related dependent variables. Finally, the current study demonstrated the effect of social comparison on face memory by independently varying self-related outcomes and the outcomes of another person. However, social comparison could also be manipulated with other designs and/or tasks (e.g., comparing one's own concrete outcomes with the average outcome of others). Future studies might further investigate the influence of social comparison on face memory across a range of different designs and tasks.

Conclusion
In general, the present study suggested that others' positive outcomes, compared with their negative outcomes, influence the encoding and recognition of relevant faces, particularly when individuals themselves obtain a negative outcome. These findings might contribute to the understanding of the role of the social comparison-related context, particularly self-disadvantageous outcomes within this context, in face memory.

Declaration of competing interest
The authors declare no conflicts of interest.

Fig. 1. Experimental procedure during the encoding phase (upper panel) and the recognition phase (lower panel).

Fig. 2. Bar charts and scatterplots for means and SEs of pleasantness ratings during the encoding phase (left panel) and hit rates (middle panel) and RTs (right panel) during the recognition phase for all the experimental conditions. The "***" symbol indicates p < .005.

Fig. 3. (A) ERP waveforms at electrodes of interest (PO7 and PO8 were pooled) for all the experimental conditions during the encoding phase. Shaded areas represent the time windows for the encoding-related N170 (165-195 ms) and EPN/N250 (240-330 ms). (B) Bar charts and scatterplots for means and SEs of encoding-related N170 and EPN/N250 amplitudes for all the experimental conditions. The "***" and "**" symbols indicate p < .005 and p < .01, respectively.

Fig. 4. Topography maps based on the mean amplitudes (μV) of the encoding-related N170 (165-195 ms), EPN/N250 (240-330 ms) and LPP (800-2500 ms) components for all the experimental conditions and for the differences between the other-gain and other-loss conditions for each level of self-outcome.

Fig. 5. (A) ERP waveforms at electrodes of interest (FCz, Cz and CPz were pooled) for all the experimental conditions during the encoding phase. Shaded areas represent the time window for the encoding-related LPP (800-2500 ms) component. (B) Bar charts and scatterplots for means and SEs of encoding-related LPP amplitudes for all the experimental conditions. The "*" symbol indicates p < .05.

Fig. 6. (A) ERP waveforms at electrodes of interest (PO7 and PO8 were pooled) for all the experimental conditions during the recognition phase. Shaded areas represent the time windows for the recognition-related N170 (165-195 ms) and EPN/N250 (240-330 ms) components. (B) Bar charts and scatterplots for means and SEs of recognition-related N170 and EPN/N250 amplitudes for all the experimental conditions. The "***" symbol indicates p < .005.

Fig. 7. Topography maps based on the mean amplitudes (μV) of the recognition-related N170 (165-195 ms), EPN/N250 (240-330 ms) and LPP (800-2500 ms) components for all the experimental conditions and for the differences between the other-gain and other-loss conditions for each level of self-outcome.

Fig. 8. (A) ERP waveforms at electrodes of interest (FCz, Cz and CPz were pooled) for all the experimental conditions during the recognition phase. Shaded areas represent the time window for the recognition-related LPP (800-2500 ms) component. (B) Bar charts and scatterplots for means and SEs of recognition-related LPP amplitudes for all the experimental conditions. The "*" symbol indicates p < .05.

Fig. 9. (A) Hit rates were negatively correlated with pleasantness ratings (upper-left panel) and encoding-related LPP responses (upper-right panel) as an effect of other-outcome in the self-loss condition. (B) Positive correlations between encoding-related N170 responses and recognition-related N170 and EPN/N250 responses (lower-left panel) and between encoding-related EPN/N250 responses and recognition-related N170 and EPN/N250 responses (lower-right panel) as an effect of other-outcome in the self-loss condition.