
NeuroImage

Volume 92, 15 May 2014, Pages 237-247

Brain lateralization of holistic versus analytic processing of emotional facial expressions

https://doi.org/10.1016/j.neuroimage.2014.01.048

Highlights

  • Whole- but not half-face expressions modulate right-hemisphere N170 and EPN.

  • Expression (N170) and emotional (EPN) encoding involve holistic face processing.

  • Separate analysis of visually salient smiles enhances left-hemisphere N170 activity.

  • Modelled high visual saliency of smiling mouth area precedes left-hemisphere N170.

  • Diagnostic smiles are discriminated earlier (P3) than diagnostic angry eyes (LPP).

Abstract

This study investigated the neurocognitive mechanisms underlying the role of the eye and the mouth regions in the recognition of facial happiness, anger, and surprise. To this end, face stimuli were shown in three formats (whole face, upper half visible, and lower half visible), and behavioral categorization, computational modeling, and ERP (event-related potentials) measures were combined. N170 (150–180 ms post-stimulus; right hemisphere) and EPN (early posterior negativity; 200–300 ms; mainly right hemisphere) were modulated by the expression of whole faces, but not by separate halves. This suggests that expression encoding (N170) and emotional assessment (EPN) require holistic processing, mainly in the right hemisphere. In contrast, the mouth region of happy faces enhanced left temporo-occipital activity (150–180 ms), and also enhanced LPC (late positive complex; centro-parietal) activity earlier (350–450 ms) than the angry eyes (450–600 ms) or other face regions. Relatedly, computational modeling revealed that the mouth region of happy faces was also visually salient by 150 ms following stimulus onset. This suggests that analytical or part-based processing of the salient smile occurs early (150–180 ms), is lateralized to the left hemisphere, and is subsequently used as a shortcut to identify the expression of happiness (350–450 ms). This would account for the happy face advantage in behavioral recognition tasks when the smile is visible.

Introduction

Facial expressions reflect emotions and intentions, motives and needs. Adaptive social behavior thus depends on expressers and observers conveying and interpreting such non-verbal information quickly and accurately. The human face contains two primary sources of expressive information, i.e., the eye and the mouth regions. Prior studies using behavioral and modeling measures have shown the relative weight of the eyes and the mouth in expression recognition for the different basic categories of facial affect (see Blais et al., 2012). While there is some agreement that angry and fearful expressions are mainly dependent on changes in the eye region, that disgust relies more on the mouth region, and that sadness and surprise may be similarly recognizable from both regions, different paradigms have shown the critical contribution of the smiling mouth to the recognition of facial happiness (Calder et al., 2000, Calvo et al., in press, Nusseck et al., 2008, Smith et al., 2005, Wang et al., 2011).

The special informative or diagnostic value of the smile can be attributed to its uniqueness as a distinctive facial feature. That is, the smiling mouth is systematically associated with facial happiness, whereas features in the other expressions overlap to some extent across categories (Calvo and Marrero, 2009, Kohler et al., 2004). Being a single diagnostic feature, the smile has been proposed to serve observers as a shortcut for quick and accurate categorization of a face as happy (Adolphs, 2002, Leppänen and Hietanen, 2007). The distinctiveness of the smile would thus account for the typical recognition advantage of happy expressions (e.g., Calvo and Lundqvist, 2008, Palermo and Coltheart, 2004, Tottenham et al., 2009; see Nelson and Russell, 2013). In the current study, we investigated the neurocognitive basis of the special diagnostic role of the smile relative to other expressive sources, and how this bears on the mechanisms of holistic versus analytic encoding and brain lateralization in facial expression processing.

Numerous studies using EEG measures have investigated the processes and time course of facial expression processing. Emotional expression modulates a wide range of ERP (event-related potentials) components, from earlier to later stages: (a) P1 (100–130 ms peak latency from stimulus onset; occipital scalp sites) or N1 (100–150 ms; widely distributed over the entire scalp; e.g., Luo et al., 2010, Pourtois et al., 2005); (b) N170 (150–200 ms; lateral occipital and infero-temporal; e.g., Batty and Taylor, 2003, Williams et al., 2006) and VPP or vertex positive potential (150–200 ms; central midline sites; e.g., Smith et al., 2013, Willis et al., 2010); (c) P2 (150–275 ms; frontal and central sites; e.g., Calvo et al., 2013b, Paulmann and Pell, 2009), N2 (200–350 ms; central; e.g., Ashley et al., 2004, Williams et al., 2006), and EPN or early posterior negativity (200–350 ms; temporo-occipital; e.g., Rellecke et al., 2011, Schupp et al., 2004); and (d) P3 and LPP or late positive potential (350–700 ms; widespread over fronto-central-parietal areas; e.g., Frühholz et al., 2009, Leppänen et al., 2007).
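
For orientation, the approximate latency windows and scalp distributions listed above can be gathered into a small lookup structure. This is only a sketch of the ranges cited in this paragraph, not the analysis parameters used in the present study:

```python
# Approximate ERP component windows (ms post-stimulus) and scalp regions,
# as summarized in the preceding paragraph; illustrative only.
ERP_COMPONENTS = {
    "P1":     {"window_ms": (100, 130), "scalp": "occipital"},
    "N1":     {"window_ms": (100, 150), "scalp": "widely distributed"},
    "N170":   {"window_ms": (150, 200), "scalp": "lateral occipital / infero-temporal"},
    "VPP":    {"window_ms": (150, 200), "scalp": "central midline"},
    "P2":     {"window_ms": (150, 275), "scalp": "frontal and central"},
    "N2":     {"window_ms": (200, 350), "scalp": "central"},
    "EPN":    {"window_ms": (200, 350), "scalp": "temporo-occipital"},
    "P3/LPP": {"window_ms": (350, 700), "scalp": "fronto-central-parietal"},
}
```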

Nevertheless, prior ERP research on emotional facial expressions has typically used only whole-face stimuli, rather than presenting the eye or the mouth regions separately.1 Therefore, the relative diagnostic value of these expressive sources could not be established. Leppänen et al. (2008) used an approach aimed at determining the role of specific expressive sources within a face (see also Meletti et al., 2012, Weymar et al., 2011).2 Leppänen et al. (2008) compared fearful and neutral expressions under different display conditions: whole faces (with the eyes visible), faces with eyes covered, faces with eyes and eyebrows covered, isolated eyes and eyebrows, and isolated eyes. Results showed a negative shift in ERPs for fearful relative to neutral expressions over occipital–temporal scalp sites starting at the latency of the N170 (160–210 ms post-stimulus), and also later over lateral–temporal electrode sites (210–260 ms; EPN). Such effects were observed not only for whole faces but also when the eyes and eyebrows were shown together in isolation; in contrast, the effects were abolished when isolated eyes were presented or when the eyes and eyebrows were covered. These findings reveal that the eye region (including the eyebrows) is critical for the rapid ERP differentiation between fearful and neutral faces.

We aim to extend this approach to other expressions and also to the mouth region. To this end, we used happy, angry, and surprised faces, in addition to neutral faces, each in three formats (whole face, eye region visible, or mouth region visible). Regarding the expressions, our selection was based on the results of prior behavioral research in which the eye and mouth regions were manipulated to examine their informative value (see above): for happy faces, the mouth is highly diagnostic; for angry faces, the eyes are more diagnostic than the mouth; and for surprised faces, both the eyes and the mouth are similarly (but not highly) diagnostic. Regarding the stimulus format, and given that the isolated eyes or mouth (with no surrounding facial context) lose informative weight (see Leppänen et al., 2008), we presented them within the top or bottom half of the face, respectively (see Calder et al., 2000). The other half of the face (bottom or top) was scrambled, rather than simply removed, to preserve the perceptual shape of a face and equivalent low-level properties. In an expression categorization task, participants had to identify the emotional expression conveyed by each face stimulus.

This paradigm allowed us to determine the role of the eye and the mouth regions in expression recognition, as well as the underlying neurophysiological processes and their time course. Following the rationale of the behavioral studies, we hypothesize that, if the mouth region of happy faces (or the eye region of angry faces) is diagnostic, it can be used as a cue to access the holistic template and build a cognitive representation of the whole facial expression (see Rossion, 2013). This implies that happy and angry faces will be explicitly categorized more accurately and faster when the mouth or the eye region, respectively, is visible—even in the absence of the whole face—relative to the other combinations of regions and expressions. To assess the processes involved, we recorded EEG activity for 800 ms following face stimulus onset, and ERP components were examined from the early P1 to the late LPP. This provided information about when and how brain processes are sensitive to each major expressive source in a face. The N170 and the EPN components are particularly related to the processing of facial expression (N170; for a review, see Rellecke et al., 2013) and emotional content (EPN; for a review, see Hajcak et al., 2012). Accordingly, if diagnostic sources (e.g., the happy smile or the angry eyes) are used to encode the expression or to extract emotional significance, they will enhance N170 or EPN activity, respectively.
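
To make the ERP measures concrete, the sketch below shows one way to quantify component amplitudes in epoched EEG using MNE-Python. The paper does not specify its analysis software; the condition labels (e.g., "happy_mouth"), electrode clusters, and exact windows here are illustrative assumptions, not the study's pipeline.

```python
import mne  # assumes MNE-Python; the paper does not state its analysis toolchain

# Illustrative left/right temporo-occipital clusters (10-10 channel names)
# and component windows (s) following the text above; both are assumptions.
CLUSTERS = {"left": ["P7", "PO7", "P9"], "right": ["P8", "PO8", "P10"]}
WINDOWS = {"N170": (0.15, 0.18), "EPN": (0.20, 0.30)}

def component_amplitude(raw, events, event_id, condition, hemisphere, component):
    """Mean amplitude (volts) of one ERP component for one condition and cluster.

    `raw`, `events`, and `event_id` are a preprocessed mne.io.Raw object, an
    events array, and a label-to-code mapping (e.g., {"happy_mouth": 3, ...});
    the condition labels are hypothetical.
    """
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.2, tmax=0.8, baseline=(None, 0),
                        picks="eeg", preload=True)
    evoked = epochs[condition].average().pick(CLUSTERS[hemisphere])
    tmin, tmax = WINDOWS[component]
    return evoked.copy().crop(tmin=tmin, tmax=tmax).data.mean()

# Example use (with your own data):
# n170_whole = component_amplitude(raw, events, event_id, "happy_whole", "right", "N170")
# n170_mouth = component_amplitude(raw, events, event_id, "happy_mouth", "right", "N170")
```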

The current approach is also relevant to the issues of holistic (or configural, or relational) versus part-based (or analytic, or featural) processing and of hemispheric lateralization, as applied to facial expression and emotional content. Holistic and part-based perception refer to the integrated versus isolated encoding of facial features or regions (e.g., the eyes and the mouth) in a face (e.g., Calder et al., 2000, Richler et al., 2012, Rossion, 2013). Holistic processing is thought to be preferentially executed by the right hemisphere, whereas the left hemisphere is regarded as more involved in part-based processing (see Ramon and Rossion, 2012). Using fMRI, Maurer et al. (2007) observed that areas showing greater activity for featural changes in a face (the shape or size of the eyes or mouth) were mostly located in left prefrontal regions, whereas areas of the right fusiform gyrus and the right frontal cortex showed more activity for configural changes (the relative location of, or distance between, the eyes and mouth) (see also Lobmaier et al., 2008). Consistently, with EEG measures, Scott and Nelson (2006) found that the right-hemisphere N170 was greater for configural than for featural changes, whereas the left-hemisphere N170 exhibited the opposite pattern. In the same vein, TMS (transcranial magnetic stimulation) research has shown that the right inferior frontal cortex is causally involved in configural processing, whereas the left middle frontal gyrus is involved in featural analysis (Renzi et al., 2013).

The prior studies on holistic versus analytic processing and lateralization have used face stimuli devoid of emotional expression (i.e., neutral faces) and have measured recognition or matching of face identity. In the current study, we extend this work to the recognition of emotional expressions. We hypothesize that, if the mouth or the eye region alone can drive analytic encoding of the facial expression or its emotional content, ERP modulations of the corresponding processes (e.g., N170 and EPN, respectively) will occur when the mouth region of happy faces, or the eye region of angry faces, is presented separately. Such part-based ERP modulations will, nevertheless, occur earlier for the happy mouth than for the angry eyes, owing to the greater saliency and distinctiveness of the former region (e.g., Calvo et al., 2014). In contrast, if expression or emotional processing requires holistic encoding, ERP modulations will occur only when the whole face is shown. Furthermore, to the extent that the part-based analysis and the holistic encoding of the diagnostic regions are lateralized, this will be reflected in enhanced neural activity in the left or the right hemisphere, respectively.

In sum, we used a part-whole paradigm to determine the role of configural processing of emotional facial expressions. Comparisons between the part (isolated-region) and whole-face conditions provide the relevant evidence. The whole-face condition allows for perceptual integration of all regions at the same time, and thus holistic processing is possible. In contrast, in the part-face conditions, only single expressive sources are available, which allows analytical processing but prevents on-line holistic processing. For behavioral measures, higher expression categorization accuracy and faster correct responses for the whole-face than for the part conditions would reveal holistic encoding, whereas equivalent (or higher) performance in the part conditions would indicate dependence on analytical encoding. For ERP measures, expression modulation of a given electrophysiological component only by whole-face stimuli would reveal holistic encoding, whereas modulation by part-face stimuli would reflect analytical encoding. Specifically, the N170 activity (at right temporo-occipital sites) is a neural signature of the structural processing of “faceness” (i.e., the configuration of a face as a face; Bentin et al., 1996, Rossion and Jacques, 2012). Modulation of this component in the right hemisphere by whole-face expressions would thus indicate holistic encoding, whereas modulation in the left hemisphere by face regions would indicate analytical encoding.

Section snippets

Participants

Twenty-two psychology undergraduates (15 female; aged 18–25 years) gave informed consent and received either course credit or payment (7 euros per hour) for their participation. All were right-handed and reported normal or corrected-to-normal vision and no neurological or neuropsychological disorder. Four additional participants were excluded because of excessive eye movements.

Stimuli

We selected 80 digitized color photographs from the KDEF (Lundqvist et al., 1998) stimulus set. The

Analysis of visual saliency

The mean visual saliency of the eyes and the mouth, the probability that each region was the first salient region, and the probability that the first saliency outburst occurred during the first 150 ms post-stimulus onset were analyzed by means of 4 (Expression: happy, angry, surprised, neutral) × 2 (Region: eye vs. mouth) ANOVAs, separately for each type of face format (whole face, upper half visible with lower half masked, and upper half masked with lower half visible). To decompose the
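
A minimal sketch of this analysis structure is shown below, assuming the saliency measures have been tabulated in long format with one row per face stimulus, expression, region, and format. The column names, the input file, and the use of the face identifier as the repeated-measures unit are assumptions, and statsmodels is used only as an illustrative tool, not as the paper's software:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM  # illustrative choice of library

# Hypothetical long-format table with columns:
# face_id, format, expression, region, saliency
df = pd.read_csv("saliency_by_region.csv")  # hypothetical file name

# 4 (Expression) x 2 (Region) repeated-measures ANOVA, run separately per face
# format, treating the face stimulus as the repeated-measures unit (an assumption).
for face_format, sub in df.groupby("format"):
    result = AnovaRM(data=sub, depvar="saliency",
                     subject="face_id",
                     within=["expression", "region"]).fit()
    print(face_format)
    print(result)
```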

Discussion

Prior ERP research on facial expressions has generally considered the face as a whole. The current study makes a contribution by investigating the ERP modulations produced by informative regions such as the eyes and the mouth separately. By means of behavioral, computational modeling, and EEG measures, we explored the mechanisms underlying the role of the eyes and the mouth in the recognition of facial happiness, anger, and surprise. A major finding that was common to all three measures

Conclusions

The present results support the view that holistic and analytic facial expression processing at early stages are lateralized. Early analytic encoding of separate facial regions occurs mainly in the left hemisphere: The mouth region of happy faces enhanced neural activity in the left (temporo-occipital) hemisphere, with this effect (150–180 ms after stimulus onset) being driven by visual saliency, rather than reflecting expression recognition or extraction of affective content. In contrast,

Acknowledgments

This research was wholly supported by Grant PSI2009-07245 from the Spanish Ministerio de Ciencia e Innovación, by the Agencia Canaria de Investigación, Innovación y Sociedad de la Información (NEUROCOG project) and the European Regional Development Fund, and by CEI CANARIAS: Campus Atlántico Tricontinental (a project supported by the Spanish Ministerio de Educación). We are grateful to Andrés Fernández-Martín for conducting the visual saliency computations.

References (65)

  • J.M. Leppänen et al. (2007). Differential electrocortical responses to increasing intensities of fearful and happy emotional expressions. Brain Res.

  • J.M. Leppänen et al. (2008). Differential early ERPs to fearful versus neutral facial expressions: a response to the salience of the eyes? Biol. Psychol.

  • D. Maurer et al. (2007). Neural correlates of processing facial identity based on features versus their spacing. Neuropsychologia

  • S. Meletti et al. (2012). Fear and happiness in the eyes: an intra-cerebral event-related potential study from the human amygdala. Neuropsychologia

  • J.K. Olofsson et al. (2008). Affective picture processing: an integrative review of ERP findings. Biol. Psychol.

  • G. Pourtois et al. (2005). Two electrophysiological stages of spatial orienting towards fearful faces: early temporo-parietal activation preceding gain control in extrastriate visual cortex. NeuroImage

  • M. Ramon et al. (2012). Hemisphere-dependent holistic processing of familiar faces. Brain Cogn.

  • J. Rellecke et al. (2011). On the automaticity of emotion processing in words and faces: event-related brain potentials from a superficial task. Brain Cogn.

  • J. Rellecke et al. (2012). Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biol. Psychol.

  • C. Renzi et al. (2013). Processing of featural and configural aspects of faces is lateralized in dorsolateral prefrontal cortex: a TMS study. NeuroImage

  • D.L. Santesso et al. (2008). Electrophysiological correlates of spatial orienting towards angry faces: a source localization study. Neuropsychologia

  • A. Schacht et al. (2009). Emotions in word and face processing: early and late cortical responses. Brain Cogn.

  • E. Smith et al. (2013). Electrocortical responses to NIMSTIM facial expressions of emotion. Int. J. Psychophysiol.

  • N. Tottenham et al. (2009). The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res.

  • H.F. Wang et al. (2011). Efficient bubbles for visual categorization tasks. Vis. Res.

  • M. Weymar et al. (2011). The face is more than its parts—brain dynamics of enhanced spatial attention to schematic threat. NeuroImage

  • L.M. Williams et al. (2006). The when and where of perceiving signals of threat versus non-threat. NeuroImage

  • M.L. Willis et al. (2010). Switching associations between facial identity and emotional expression: a behavioural and ERP study. NeuroImage

  • R. Adolphs (2002). Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behav. Cogn. Neurosci. Rev.

  • V. Ashley et al. (2004). Time course and specificity of event-related potentials to emotional expressions. Neuroreport

  • M. Balconi et al. (2009). Consciousness and emotion: ERP modulation and attentive vs. pre-attentive elaboration of emotional facial expressions by backward masking. Motiv. Emot.

  • D.V. Becker et al. (2011). The face in the crowd effect unconfounded: happy faces, not angry faces, are more efficiently detected in single- and multiple-target visual search tasks. J. Exp. Psychol. Gen.