Article

Differences in Facial Expressions between Spontaneous and Posed Smiles: Automated Method by Action Units and Three-Dimensional Facial Landmarks

1 Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
2 Dental Research Institute, Seoul National University, School of Dentistry, Seoul 08826, Korea
3 Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
4 Department of Computer Science, Sangmyung University, Seoul 03016, Korea
5 Seoul National University College of Medicine, Seoul 03080, Korea
6 Department of Education, Sejong University, Seoul 05006, Korea
7 Department of Human Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Korea
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2020, 20(4), 1199; https://doi.org/10.3390/s20041199
Submission received: 15 January 2020 / Revised: 20 February 2020 / Accepted: 20 February 2020 / Published: 21 February 2020
(This article belongs to the Special Issue Sensor Applications on Emotion Recognition)

Abstract

Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further examine the discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles using three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions of happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities in the upper face than in the lower face, whereas posed smiles showed higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity than the right eyebrow during spontaneous smiles. These findings suggest a potential application of landmark-based emotion recognition in which spontaneous smiles can be distinguished from posed smiles by measuring the relative intensities of the upper and lower face, with a focus on left-sided asymmetry in the upper region.

1. Introduction

The smile plays a pivotal role in the exchange of emotional states in non-verbal communication [1]. Generally recognized as an epitome of positive affect, the facial expression of happiness has long been a major focus of research on emotional expression. The most conspicuous facial movements of a typical smile engage the zygomaticus major, a facial muscle on the cheek that pulls the corners of the mouth. Despite being easy to identify, the smile is one of the more complex expressions because it can arise from varying internal states [2]. Smile-like facial features are regularly observed during communication without accompanying positive feelings; for instance, fake smiles may be used to obscure one’s true emotional state. Previous research has distinguished the Duchenne smile from a fake smile by additional movements around the eyes, in the orbicularis oculi region [3]. The presence of Duchenne markers has been widely adopted as an indicator of genuine, spontaneous smiles, while their absence indicates posed smiles [4,5].
Assessment of emotional state via facial expression is not limited to interpersonal communication. The demand for accurate evaluation of human emotion has grown with commercial applications in industries such as automotive and entertainment [6,7]. Unlike other devices that require direct contact, emotion recognition from facial expression can be administered with a camera, a readily available sensor [8]. Moreover, facial expression, as a non-invasive biomarker, has potential clinical application in detecting neuropsychological disorders such as depression [9,10], schizophrenia [11], and Parkinson’s disease [12]. Although the bare human eye can recognize facial expressions to a certain degree, computerized algorithms offer further advantages in this field. In recent years, there have been significant advances in machine analysis of changes in the dynamics of facial expressions [13,14,15]. Such movements can be analyzed with automated facial behavior analysis tools such as TAUD [16], Affdex [17], and OKAO [18]. OpenFace is an open-source toolkit that analyzes facial movements using convolutional neural network algorithms; it can detect facial landmarks, gaze, head pose, and facial expression based on facial action units (AUs) according to the facial action coding system (FACS) [19,20,21,22].
Although AUs are reliable indicators of facial muscle contraction, previous studies have suggested that asymmetries in facial movement can also indicate distinct emotional states. Facial asymmetry refers to relative differences in expression intensity between the left and right sides of the face that arise from hemispheric lateralization [2]. In general, the right cerebral hemisphere is more involved in emotional expression than the left cerebral hemisphere, which leads to relatively increased movement on the left side of the face [1]. The distinct emotion-processing mechanisms underlying spontaneous and posed expressions have been explored, but studies using asymmetry to distinguish posed from spontaneous emotions have been incongruous [23,24]. Moreover, being innervated from different regions of the brain, the motor control of the upper and lower facial muscles is also known to be independent during spontaneous expressions [25,26]. Therefore, each region of the face must be analyzed during posed and spontaneous positive emotion to observe both vertical asymmetries and horizontal discrepancies.
Most previous studies on automatic facial expression analysis were based on AUs, which have limited capacity to capture hemispheric lateralization. Therefore, the aim of the current study was to establish an algorithm capable of detecting overall discrepancies along both the vertical and horizontal axes of facial landmarks and to apply it to enhance the power to discriminate between posed and spontaneous facial expressions. To address this purpose, we focused on geometric operations based on 3D facial landmarks. The results of the analysis are discussed in terms of anatomy and neuropsychology.

2. Materials and Methods

2.1. Participants

Participants were recruited from SMG-SNU Boramae Medical Center and Sejong University in Seoul, South Korea; a total of 115 adult participants were enrolled in the study. The inclusion criteria were as follows: (1) aged between 18 and 40, (2) normal vision, hearing, and cognitive function, and (3) able to understand the overall experimental procedures. Glasses and hearing aids were allowed if needed, but data from participants wearing glasses were excluded from the analysis. All participants provided written informed consent before participating in the study. This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Institutional Review Board of SMG-SNU Boramae Medical Center (IRB No. 30-2017-63).

2.2. Posed and Spontaneous Facial Expression Task

Participants were given instructions for the facial emotion task to elicit posed and spontaneous facial expressions [27,28]. For example, a photograph of a person with a smiling face was presented; participants were asked to identify the emotion conveyed and to “make a happy face for 15 s towards the camera” while being video recorded (posed emotion). Then, they watched a short film of approximately 120 s in duration intended to elicit positive emotions while their faces were video recorded (spontaneous emotion). Facial expressions in a neutral emotional state were also collected to correct for each participant’s ordinary facial characteristics. Figure 1 shows the stimuli used in the task.

2.3. Data Acquisition

The participants’ facial expressions were video recorded with a Canon EOS 70D DSLR camera with a 50 mm prime lens at 720p resolution and 60 fps. The experiments were conducted in a separate room under normal lighting conditions. A screen was placed in front of the participants, about 60 cm from the seating position. The camera was mounted on a fixed stand about 120–140 cm above the ground to capture the participants from the chest up, ensuring that the whole face remained in view even with moderate head movements. The posed smiles were recorded for 15 s after a clear instruction to imitate a previously recognized happy face. The spontaneous facial expressions were video recorded for 60 s. The presence of a smile was detected by the OpenFace toolkit, and the facial expression videos were additionally cross-checked and confirmed manually. The movements of AU06 (cheek raiser) and AU12 (lip corner puller) were extracted as two measures; the toolkit provides trained predictors capable of determining the presence of AUs and their intensity [21,22]. The video data of the two smile types were fundamentally distinguished by the experimental design. In total, the smile videos contained 317 moments of spontaneous smiles and 185 moments of posed smiles. Of the 115 subjects, 58 who wore glasses were excluded from the analysis, because occlusion by the glasses’ frames and specular reflections on the lens surfaces can interfere with facial landmark extraction. Thus, data from a total of 57 participants were included in the final analysis. Facial behavior data such as AU06, AU12, and the facial landmarks were extracted from the final dataset to analyze the difference between spontaneous and posed smiles.
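As an illustration of this extraction step, the sketch below reads AU06 and AU12 intensities from an OpenFace 2.0 output file; the CSV column names (AU06_r, AU12_r, AU12_c, success) follow OpenFace’s standard output format, while the file names and the simple smile-presence rule are assumptions made for the example.

```python
# Sketch: pull AU06/AU12 intensities and a crude smile-presence flag from an OpenFace 2.0 CSV.
import pandas as pd

def load_au_intensities(csv_path):
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()          # OpenFace pads column names with spaces
    df = df[df["success"] == 1]                  # keep frames where face tracking succeeded
    smile_present = (df["AU12_c"] == 1) & (df["AU06_r"] > 0)   # illustrative presence rule
    return df.loc[smile_present, ["frame", "AU06_r", "AU12_r"]]

# Hypothetical file names for one participant's recordings
posed = load_au_intensities("subject01_posed.csv")
spont = load_au_intensities("subject01_spontaneous.csv")
print(posed[["AU06_r", "AU12_r"]].mean(), spont[["AU06_r", "AU12_r"]].mean())
```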

2.4. Self-Reported Measures

2.4.1. Beck Depression Inventory (BDI2)

The Korean version of BDI involves 21 questions to evaluate the severity of depression, with its score ranging from 0 to 63 [29,30]. A higher score indicates severe depressive symptoms; the cutoff score is 18 in the Korean version [31].

2.4.2. Beck Anxiety Inventory (BAI)

The Korean version of BAI utilizes 21 questions to measure the severity of anxiety, with its score ranging from 0 to 63 [32,33]. A higher BAI score indicates severe anxiety symptoms with its cutoff score of 19 [34].

2.4.3. Toronto Alexithymia Scale (TAS)

The twenty-item TAS was developed and validated to measure the severity of alexithymia, with scores ranging from 20 to 100 [35,36]; a cutoff score of 61 was used for the Korean version [37]. The TAS comprises three sub-scales: difficulty identifying feelings, difficulty describing feelings, and externally oriented thinking.

2.5. Data Analysis

2.5.1. Facial Behavior Analysis

Both posed and spontaneous facial expressions underwent a preliminary investigation to confirm the presence of a smile via OpenFace 2.0, an open-source toolkit for analyzing facial landmarks and AUs [22]. AU06 (cheek raiser) and AU12 (lip corner puller), which represent happiness, were evaluated and compared to those of the neutral facial expression [20,38].
The AU estimation model was the real-time AU detection and intensity estimation system provided by OpenFace 2.0 [39]. This model extracts appearance-based and geometric features, which are then classified to estimate AU intensity using a machine learning model. Moreover, the model improves AU classification and intensity estimation through a person-specific normalization technique that uses the neutral facial expression in a dynamic approach [21]. We used the AU estimation model to calculate and compare the AU intensities of posed and spontaneous smiles.
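The following sketch illustrates one way such a person-specific correction could be approximated on the extracted intensities, by subtracting each participant’s mean neutral-state AU intensity from the smile recordings; this frame-level baseline subtraction is a simplification for illustration, not the appearance-feature normalization described in [21], and the data-frame names are assumptions.

```python
# Sketch: subtract a participant's neutral-state AU baseline before comparing smile conditions.
import pandas as pd

def baseline_corrected_means(neutral_df, smile_df, aus=("AU06_r", "AU12_r")):
    baseline = neutral_df[list(aus)].mean()                      # resting-face AU intensities
    corrected = (smile_df[list(aus)] - baseline).clip(lower=0)   # remove resting-face bias, floor at zero
    return corrected.mean()

# Usage with the hypothetical frames loaded above:
# posed_mean = baseline_corrected_means(neutral, posed)
# spont_mean = baseline_corrected_means(neutral, spont)
```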
The 3D facial landmark intensity was estimated from the displacement of each landmark between the neutral expression and the smile expression. The 68-point facial landmark annotation used in this study is shown in Figure 2. The 3D facial landmarks were extracted using the convolutional experts constrained local model (CE-CLM) provided by OpenFace 2.0 [40]. The performance of CE-CLM in the OpenFace toolkit was measured on publicly available datasets such as 300-W [41] and Menpo [42]. The size-normalized median landmark error was used as the evaluation metric (ε), as shown in Equation (1).
$\varepsilon = \frac{1}{L}\sum_{x \in L}\frac{d(\tilde{x}, x)}{d_{scale}}$   (1)
In Equation (1), $L$ is the number of facial landmarks, $d(\tilde{x}, x)$ is the Euclidean distance between a predicted landmark and the ground-truth landmark at the same index, and $d_{scale}$ is a normalization term that compensates for differences in face size, generally the distance between the two eyes. Performance evaluation results of CE-CLM on the public datasets are shown in Table 1 [40].
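A minimal sketch of Equation (1) is given below; the use of the outer eye corners (indexes 36 and 45 in the 68-point scheme) as the scale reference and the array layout are assumptions for illustration, and the reported benchmark figures are medians of this per-face value across images.

```python
# Sketch of Equation (1): size-normalized landmark error for one face.
import numpy as np

def normalized_landmark_error(pred, gt, left_eye_idx=45, right_eye_idx=36):
    """pred, gt: (68, 2) or (68, 3) arrays of predicted and ground-truth landmarks."""
    d = np.linalg.norm(pred - gt, axis=1)                            # per-landmark Euclidean error
    d_scale = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])   # inter-ocular distance
    return d.mean() / d_scale
```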
For the 3D facial landmark intensity estimation, we propose a new landmark-based displacement estimation method that resolves issues with incorrect displacement measurements. The contribution of the proposed method is that it can correct displacements distorted by person-specific appearances, head movements, and rotations. Person-specific appearance problems occur because people have different sizes and shapes of eyes, noses, and mouths, and some people appear to grimace or smile even in their neutral state. In this study, the person-specific appearance problem was solved by measuring the displacement of the smile-expression landmarks using the neutral-expression landmarks as a baseline. Moreover, as shown in Figure 3, when facial landmarks were measured for the neutral expression and the posed and spontaneous smiles, the head position and orientation differed between expressions. Therefore, the 3D landmarks had to be aligned before the change in each landmark could be measured. We used Sorkine et al.’s rigid motion computation to align the landmark sets of the two expressions [43]. The aligning process of the two landmark sets is shown in Algorithm 1, and an example of aligning the landmark sets of the two expressions is shown in Figure 4. Through the processes above, we measured the change of each landmark in every frame and compared the facial movements of each area. After aligning about 250,000 pairs of the 57 subjects’ smile and neutral expressions using the rigid motion computation, the mean residual error between each pair of fiducial points was 0.062 mm. These fiducial points are predefined landmarks used to align the landmarks of the smile and neutral expressions; the landmarks with indexes 27, 28, 29, 30, 39, and 42 in Figure 2 were selected as fiducial points.
Algorithm 1. Left and Right Facial Movement Measurement Algorithm.
Input: Two 3D landmark sets: one of the neutral expression and one of either the spontaneous or the posed smile
Output: Distance of landmarks between the two expressions
 * Facial landmark points were measured with reference to the camera coordinate system.
1. Compute the weighted centroids of the fiducial point sets $(p_i, q_i)$ of the two landmark sets.
 * The fiducial point sets $(p_i, q_i)$ are predefined landmarks for the rigid body transformation; the landmarks with indexes 27, 28, 29, 30, 39, and 42 in Figure 2 were selected as fiducial points.
$\bar{p} = \frac{\sum_{i=1}^{n} w_i p_i}{\sum_{i=1}^{n} w_i}, \quad \bar{q} = \frac{\sum_{i=1}^{n} w_i q_i}{\sum_{i=1}^{n} w_i}, \quad i = 1, 2, \ldots, n.$
2. Compute the centered vectors
$x_i = p_i - \bar{p}, \quad y_i = q_i - \bar{q}, \quad i = 1, 2, \ldots, n.$
3. Compute the covariance matrix
$C = XWY^{T}$, where $X = [x_1, x_2, \ldots, x_n]$, $Y = [y_1, y_2, \ldots, y_n]$, and $W$ is the diagonal matrix of the weights $w_i$.
4. Compute the singular value decomposition
$C = U \Sigma V^{T}$
5. Compute the rotation matrix
$R = V \,\mathrm{diag}\big(1, \ldots, 1, \det(VU^{T})\big)\, U^{T}$
6. Compute the optimal translation
$t = \bar{q} - R\bar{p}$
7. Align the two 3D landmark sets using the rigid body transformation
$\hat{P} = (sR)P + t$, where $\hat{P}$ is the aligned 3D landmark set and $s$ is a scale factor.
8. Compute the distance of landmarks between the two expressions
$d_i = \lVert \hat{p}_i - q_i \rVert, \quad i = 1, 2, \ldots, 68.$
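A compact sketch of Algorithm 1 is given below, assuming uniform fiducial weights and omitting the scale factor (s = 1); the fiducial indexes follow Figure 2, and the function and variable names are illustrative only.

```python
# Sketch of Algorithm 1: weighted rigid (SVD-based) alignment of the neutral-expression landmarks
# to the smile landmarks using the fiducial points, followed by per-landmark displacements.
import numpy as np

FIDUCIAL_IDX = [27, 28, 29, 30, 39, 42]   # fiducial landmark indexes from Figure 2

def rigid_align(neutral, smile, weights=None):
    """neutral, smile: (68, 3) arrays of 3D landmarks in camera coordinates."""
    p = neutral[FIDUCIAL_IDX]
    q = smile[FIDUCIAL_IDX]
    w = np.ones(len(p)) if weights is None else np.asarray(weights, float)

    p_bar = (w[:, None] * p).sum(0) / w.sum()          # step 1: weighted centroids
    q_bar = (w[:, None] * q).sum(0) / w.sum()
    x = p - p_bar                                      # step 2: centered vectors
    y = q - q_bar
    C = x.T @ np.diag(w) @ y                           # step 3: covariance matrix C = X W Y^T
    U, _, Vt = np.linalg.svd(C)                        # step 4: singular value decomposition
    V = Vt.T
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(V @ U.T))          # step 5: handle possible reflection
    R = V @ D @ U.T                                    # rotation matrix
    t = q_bar - R @ p_bar                              # step 6: optimal translation
    return (R @ neutral.T).T + t                       # step 7: align all 68 neutral landmarks

def landmark_displacements(neutral, smile):
    aligned = rigid_align(neutral, smile)
    return np.linalg.norm(aligned - smile, axis=1)     # step 8: per-landmark distances d_i
```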

2.5.2. Behavioral Data Analysis

Descriptive statistics for demographic variables were calculated as means and standard deviations (SD). Additionally, we compared the differences in mean posed and spontaneous smile intensities using analysis of variance (ANOVA) procedures. Comparisons of AUs (smile expression: posed vs. spontaneous) were analyzed with repeated-measures ANOVA, and 3D facial landmarks (smile expression: posed vs. spontaneous × side: left vs. right) were analyzed with two-way repeated-measures ANOVA. Pairwise post-hoc comparisons with Bonferroni correction were conducted on the 3D facial landmarks. A p-value < 0.05 was considered statistically significant. All statistical analyses were performed using R software.
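The analyses were run in R; for illustration only, the sketch below shows an equivalent two-way repeated-measures ANOVA in Python using statsmodels, where the long-format column names (subject, expression, side, intensity) are assumptions.

```python
# Sketch: two-way repeated-measures ANOVA (expression x side) on per-region landmark intensities.
from statsmodels.stats.anova import AnovaRM

# df has one row per subject x expression (posed/spontaneous) x side (left/right)
# with the mean landmark intensity for one facial region, e.g. the eyebrows.
def region_anova(df):
    model = AnovaRM(df, depvar="intensity", subject="subject",
                    within=["expression", "side"], aggregate_func="mean")
    return model.fit()

# print(region_anova(eyebrow_df))   # F statistics for expression, side, and their interaction
```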

3. Results

Participants’ demographic information is described in Table 2. Age, education years, and the clinical scale data are presented by sex (19 males, 33.33%). Mean clinical scale scores for depression, anxiety, and alexithymia were within the normal range for both men and women.
A repeated-measures ANOVA was computed on posed and spontaneous smile intensities for AU06 (cheek raiser), AU12 (lip corner puller), and AU06+AU12 (see Table 3 and Figure 5). AU06, located in the upper face, showed significantly higher intensities in spontaneous smiles than in posed smiles. On the other hand, AU12, corresponding to the lower facial muscles, had lower intensities in spontaneous smiles than in posed smiles. The comparison of facial expression for the combination of AU06 and AU12 followed the same pattern as AU12.
With respect to the 3D facial landmarks, a two-way repeated-measures ANOVA was performed to examine the differences by smile expression (posed vs. spontaneous) and side of the face (left vs. right). There were significant main effects of smile expression for the eyebrow, eye, mouth, outline, and upper-face features. Pairwise t tests with Bonferroni correction revealed that eyebrow intensity was significantly higher on the left side (P < 0.001) and in the spontaneous expression (P < 0.001), and that the eyes showed higher intensities in the spontaneous expression (P < 0.001). Both the outline and the upper face indicated more intense expression in spontaneous smiles (Poutline < 0.001; Pupper face < 0.001). The mouth, on the other hand, showed stronger intensity in the posed expression (P < 0.001). The results are shown in Table 4, Figure 6 and Figure 7.

4. Discussion and Conclusions

4.1. AU Intensity Estimation

The facial movements of spontaneous and posed smiles displayed distinctive patterns. The intensity of AU06 was significantly higher in spontaneous smiles than in posed smiles, while the intensity of AU12 was significantly higher in posed smiles. In other words, the area around the eyes was more actively involved during a genuine smile, whereas the area around the mouth was more active when the smile was voluntarily posed. These results are consistent with previous findings that spontaneously occurring expressions of happiness or satisfaction can be distinguished from a fake posed smile by noticeable movements of the upper cheek and the muscles around the eyes [44]. These behavioral differences between the two regions result from differences in neural innervation, with the facial nerve specialized for communication [45]. Most people are capable of voluntary control of the mouth region, which is unilateral in nature and managed mostly by the frontal lobes. However, deliberate motion of the eye region is possible for only a limited number of people, about 20% [46]. Unlike the mouth and the lower part of the face, the upper regions are bilaterally controlled via the limbic system [47].
The overall intensity of smiles involving simultaneous activation of both the zygomaticus major and the orbicularis oculi (AU06+AU12) was higher in posed smiles than in spontaneous smiles. This result may be due to the stimulus for the spontaneous condition; a wedding scene from a movie may not arouse a degree of genuine emotional change sufficient for a wide, open-mouth smile. In fact, the process of a spontaneous smile is much more complicated than the voluntary act of posing a smiling face. Posed facial movement is generated from the lower portion of the precentral gyrus and projects to the facial nucleus [1]. However, the anatomical foundation of spontaneous emotional expression involves more structures and more complicated pathways than that of posed expression. Spontaneous expressions are known to arise from the thalamus and/or the globus pallidus and project to the facial nucleus via several different routes [1,3]. Since it is difficult to estimate how stimuli are perceived to arouse a natural smile, a trained and automated algorithm that can incorporate large samples is an indispensable aspect of facial expression analysis.

4.2. 3D Facial Landmark Intensity Estimation

We proposed a person-specific normalization method for facial landmark analysis to detect changes in the intensity of facial muscle movements. The algorithm in our method has shown proficiency in measuring partial differences in each region of the face across both hemifaces. The results of our analysis indicate significant differences in intensity between the left and the right eyebrows. The left eyebrow displayed greater movement intensity than the right eyebrow in both posed and spontaneous smiles. Moreover, the overall intensities of the eyebrows were significantly higher in spontaneous smiles than in posed smiles. On the other hand, the movement of the eyes displayed a relatively larger right-biased intensity, the opposite trend from the eyebrows. The zygomaticus major, the major muscle in the lower region, influences the face areas under the eyebrows [2]. Therefore, it is possible to see different lateralization patterns in the eyebrows and the eyes. Such results were not observed with conventional AUs, which sum up the whole movement in the eye area. Moreover, the aligning algorithm may have enabled the detection of subtle differences in intensity.
Although there was no significant interaction between expression and facial asymmetry, our observation of greater intensity in the left orbicularis oculi region is consistent with previous studies. According to the right hemisphere hypothesis, the right hemisphere dominates the mediation of emotion, and therefore facial expressions should generally be more evident on the left side of the face [25]. Our results also indicated that the upper face displayed significantly more movement in the spontaneous than in the posed condition, which could serve as a reasonable indicator of a genuine smile. This is consistent with previous research showing that facial movement discrepancies during spontaneous expressions are more evident between the upper and lower face than along the vertical axis [25,26]. The findings support the component theory of facial expressions, in which the upper and lower facial expressions are fundamentally distinct at the level of both behavior and emotion, the upper face being innervated by both right and left medial cortical projections and the lower face by lateral cortical projections [48].

4.3. Limitations and Future Directions

Although the landmark-based facial recognition system is sufficient for detecting differences between posed and spontaneous smiles, weighting each point differently may enable the system to uncover more latent facial features. The range of muscle movement varies greatly between facial regions; for instance, the landmarks around the nose have a significantly smaller range of motion than those of the mouth. Future studies could enhance discriminating power by incorporating the possible range of motion of each landmark point.
Another limitation to the current study pertains to a relative disparity between the training data and actual data from the participants. Ekman stated that different aspects of expression are both universal and culture specific [49]. The participants for the current study were all Koreans, while the training data used to detect FACS are from populations of different races and cultures. Moreover, the posed smile falls under the category of social-emotional response, which is learned through social contexts [26]. Therefore, unlike spontaneous smile, the posed smile is prone to discordance between the training data and the actual recordings of the participants.
As noted in the previous section, the authenticity of spontaneous smile is subject to individual differences. Repeated experiments with the same participants may significantly increase the authenticity of the data. Therefore, further research must consider follow-up experiments with previously enrolled participants for precise analysis and reliability of the current method. Future research may also consider additional participants with emotion regulation disabilities or the elderly population to explore areas of emotion expression and recognition in terms of healthy aging.

4.4. Conclusions

The results reported in the present study support that automated 3D facial landmark detection can effectively distinguish genuine smiles from posed smiles. According to our data, the upper face is more involved during spontaneous expression than the lower face. Specifically, the left eyebrow could serve as a key indicator of a positive emotional state. In addition, under the same circumstances, increased movement around the mouth with relatively lower intensity in the upper face may indicate that the smile is posed. Horizontal asymmetry and vertical discrepancy have proven to be useful measures of facial expression in positive emotion, and increased movement in the upper face may serve as a partial indicator of a genuine smile. The results of this study were obtained from normal adults and can be used as a basic methodology for analyzing facial expression data and identifying clinical features.

Author Contributions

J.-Y.L. and E.C.L. designed the study; J.-A.L., J.-I.L., H.K. (Hakrim Kim), S.-J.H., J.-S.K., S.P. (Soowon Park), and J.-Y.L. recruited participants and collected facial and clinical data; T.K. and S.P. (Soowon Park) wrote the protocol and performed interpretation of data; K.L. contributed to facial behavioral data analyses and wrote the methodology; S.P. (Seho Park) and H.K. (Hyunwoong Ko) undertook statistical data analyses; S.P. (Seho Park) and K.L. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education through the National Research Foundation of Korea (NRF), grant number NRF-2017R1D1A1A02018479, and by the Ministry of Science and ICT through the NRF, grant number NRF-2016M3A9E1915855.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Borod, J.C.; Haywood, C.S.; Koff, E. Neuropsychological Aspects of Facial Asymmetry during Emotional Expression: A Review of the Normal Adult Literature. Neuropsychol. Rev. 1997, 7, 41–60. [Google Scholar] [CrossRef] [PubMed]
  2. Ekman, P.; Friesen, W.V. Felt False and Miserable Smiles. J. Nonverbal Behav. 1982, 6, 238–252. [Google Scholar] [CrossRef]
  3. Ekman, P.; Friesen, W.V. The Duchenne Smile: Emotional Expression and Brain Physiology II. J. Pers. Soc. Psychol. 1990, 58, 342–353. [Google Scholar] [CrossRef] [PubMed]
  4. Schmidt, K.L.; Ambadar, Z.; Cohn, J.F.; Reed, L.I. Movement Differences between Deliberate and Spontaneous Facial Expressions: Zygomaticus Major Action in Smiling. J. Nonverbal Behav. 2006, 30, 37–52. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Cohn, J.F.M.; Schmidt, K.L. The Timing of Facial Motion in Posed and Spontaneous Smiles. Int. J. Wavelets Multiresolution Inf. Process. 2004, 2, 121–132. [Google Scholar] [CrossRef] [Green Version]
  6. Assari, M.A.; Rahmati, M. Driver drowsiness detection using face expression recognition. In Proceedings of the IEEE International Conference on Signal and Image Processing Applications, Kuala Lumpur, Malaysia, 16–18 November 2011; pp. 337–341. [Google Scholar]
  7. Mourão, A.; Magalhães, J. Competitive affective gaming: Winning with a smile. In Proceedings of the ACM International Conference on Multimedia, Barcelona, Spain, 21–25 October 2013; pp. 83–92. [Google Scholar]
  8. Ko, B.C. A Brief Review of Facial Emotion Recognition Based on Visual Information. Sensors 2018, 18, 401. [Google Scholar] [CrossRef]
  9. Reed, L.I.; Sayette, M.A.; Cohn, J.F. Impact of Depression on Response to Comedy: A Dynamic Facial Coding Analysis. J. Abnorm. Psychol. 2007, 116, 804–809. [Google Scholar] [CrossRef]
  10. Girard, J.M.; Cohn, J.F.; Mahoor, M.M.; Mavadati, S.; Rosenwald, D.P. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, Shanghai, China, 22–26 April 2013; pp. 1–8. [Google Scholar]
  11. Kohler, C.G.; Martin, E.A.; Stolar, N.; Barrett, F.S.; Verma, R.; Brensinger, C.; Bilker, W.; Gur, R.E.; Gur, R.C. Static posed and evoked facial expressions of emotions in schizophrenia. Schizophr. Res. 2008, 105, 49–60. [Google Scholar] [CrossRef] [Green Version]
  12. Simons, G.; Ellgring, H.; Smith Pasqualini, M.C. Disturbance of Spontaneous and Posed Facial Expressions in Parkinson’s Disease. Cogn. Emot. 2003, 17, 759–778. [Google Scholar] [CrossRef]
  13. Martinez, B.; Valstar, M.F. Advances, Challenges, and Opportunities in Automatic Facial Expression Recognition. In Advances in Face Detection and Facial Image Analysis; Springer: Cham, Switzerland, 2016; pp. 63–100. [Google Scholar]
  14. Sariyanidi, E.; Gunes, H.; Cavallaro, A. Automatic Analysis of Facial Affect: A Survey of Registration, Representation, and Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1113–1133. [Google Scholar] [CrossRef]
  15. Paulsen, R.R.; Juhl, K.A.; Haspang, T.M.; Hansen, T.; Ganz, M.; Einarsson, G. Multi-view consensus CNN for 3D facial landmark placement. In Proceedings of the Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; In Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2018; pp. 706–719. [Google Scholar]
  16. Jiang, B.; Valstar, M.F.; Pantic, M. Action Unit Detection Using Sparse Appearance Descriptors in Space-Time Video Volumes. In Proceedings of the 2011 IEEE International Conference on Automatic Face and Gesture Recognition FG, Santa Barbara, CA, USA, 21–25 March 2011; pp. 314–321. [Google Scholar]
  17. Affectiva. Available online: https://www.affectiva.com/ (accessed on 26 December 2019).
  18. OKAO. Available online: https://www.components.omron.com/mobile/sp?nodeId=40702010 (accessed on 28 December 2019).
  19. Ekman, P.; Friesen, W.V. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Palo Alto, CA, USA, 1978. [Google Scholar] [CrossRef] [Green Version]
  20. Ekman, P.; Friesen, W.V.; Hager, J.C. Facial Action Coding System: The Manual; University of California: Oakland, CA, USA, 2002. [Google Scholar]
  21. Baltrušaitis, T.; Mahmoud, M.; Robinson, P. Cross-Dataset Learning and Person-Specific Normalisation for Automatic Action Unit Detection. In Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015. [Google Scholar]
  22. Baltrušaitis, T.; Zadeh, A.; Lim, Y.C.; Morency, L.P. OpenFace 2.0: Facial Behavior Analysis Toolkit. In Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018. [Google Scholar]
  23. Okamoto, H.; Haraguchi, S.; Takada, K. Laterality of Asymmetry in Movements of the Corners of the Mouth during Voluntary Smile. Angle Orthod. 2010, 80, 223–229. [Google Scholar] [CrossRef]
  24. Sackeim, H.; Gur, R. Asymmetry in facial expression. Science 1978, 202, 434–436. [Google Scholar] [CrossRef]
  25. Ross, E.D.; Gupta, S.S.; Adnan, A.M.; Holden, T.L.; Havlicek, J.; Radhakrishnan, S. Neurophysiology of Spontaneous Facial Expressions: I. Motor Control of the Upper and Lower Face Is Behaviorally Independent in Adults. Cortex 2016, 76, 28–42. [Google Scholar] [CrossRef]
  26. Ross, E.D.; Gupta, S.S.; Adnan, A.M.; Holden, T.L.; Havlicek, J.; Radhakrishnan, S. Neurophysiology of Spontaneous Facial Expressions: II. Motor Control of the Right and Left Face Is Partially Independent in Adults. Cortex 2019, 111, 164–182. [Google Scholar] [CrossRef] [PubMed]
  27. Ekman, P.; Friesen, W.V. Constants across cultures in the face and emotion. J. Pers. Soc. Psychol. 1971, 17, 124. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Ekman, P. An Argument for Basic Emotions. Cogn. Emot. 1992, 6, 169–200. [Google Scholar] [CrossRef]
  29. Beck, A.T.; Ward, C.H.; Mendelson, M.; Mock, J.; Erbaugh, J. An inventory for measuring depression. Arch. Gen. Psychiatry 1961, 4, 561–571. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Sung, H.; Kim, J.; Park, Y.; Bai, D.; Lee, S.; Ahn, H. A study on the reliability and the validity of Korean version of the beck depression inventory (BDI). J. Korean Soc. Biol. Ther. Psychiatry 2008, 14, 201–212. [Google Scholar]
  31. Lim, S.Y.; Lee, E.J.; Jeong, S.W.; Kim, H.C. The Validation Study of Beck Depression Scale 2 in Korean Version. Anxiety Mood 2011, 7, 48–53. [Google Scholar]
  32. Beck, A.T.; Epstein, N.; Brown, G.; Steer, R.A. An Inventory for Measuring Clinical Anxiety: Psychometric Properties. J. Consult. Clin. Psychol. 1988, 56, 893–897. [Google Scholar] [CrossRef]
  33. Yook, S.P.; Kim, Z.S. A clinical study on the Korean version of Beck Anxiety Inventory: Comparative study of patient and non-patient TT—A clinical study on the Korean version of Beck Anxiety Inventory: Comparative study of patient and non-patient. Korean J. Clin. Psychol. 1997, 16, 185–197. [Google Scholar]
  34. Julian, L.J. Measures of Anxiety: State-Trait Anxiety Inventory (STAI), Beck Anxiety Inventory (BAI), and Hospital Anxiety and Depression Scale-Anxiety (HADS-A). Arthritis Care Res. 2011, 63, 467–472. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Bagby, R.M.; Parker, J.D.A.; Taylor, G.J. The Twenty-Item Toronto Alexithymia Scale-I. Item Selection and Cross-Validation of the Factor Structure. J. Psychosom. Res. 1994, 38, 23–32. [Google Scholar] [CrossRef]
  36. Lee, J.Y.; Rim, Y.H.; Lee, H.D. Development and Validation of a Korean Version of the 20-Item Toronto Alexithymia Scale (TAS-20K). J. Korean Neuropsychiatr. Assoc. 1996, 35, 888–899. [Google Scholar]
  37. Sang, S.S.; Chung, U.S.; Hyo, D.R.; Sung, H.J. Reliability and Validity of the 20-Item Toronto Alexithymia Scale in Korean Adolescents. Psychiatry Investig. 2009, 6, 173–179. [Google Scholar]
  38. Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, San Francisco, CA, USA, 13–18 June 2010; pp. 94–101. [Google Scholar]
  39. Baltrusaitis, T.; Robinson, P.; Morency, L.P. OpenFace: An Open Source Facial Behavior Analysis Toolkit. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–10. [Google Scholar]
  40. Zadeh, A.; Lim, Y.C.; Baltrušaitis, T.; Morency, L.P. Convolutional Experts Constrained Local Model for 3D Facial Landmark Detection. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 2519–2528. [Google Scholar]
  41. Sagonas, C.; Tzimiropoulos, G.; Zafeiriou, S.; Pantic, M. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, NSW, Australia, 2–8 December 2013; pp. 397–403. [Google Scholar]
  42. Zafeiriou, S.; Trigeorgis, G.; Chrysos, G.; Deng, J.; Shen, J. The menpo facial landmark localisation challenge: A step towards the solution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 170–179. [Google Scholar]
  43. Tsai, R.; Huang, T.; Zhu, W. Estimating three-dimensional motion parameters of a rigid planar patch, II: Singular value decomposition. IEEE Trans. Acoust. Speech Signal Process. 1982, 30, 525–534. [Google Scholar] [CrossRef]
  44. Schmidt, K.L.; Cohn, J.F. Dynamics of Facial Expression: Normative Characteristics and Individual Differences. In Proceedings of the IEEE International Conference on Multimedia and Expo, ICME, Tokyo, Japan, 22–25 August 2001; Volume 728, pp. 547–550. [Google Scholar]
  45. Rinn, W.E. The Neuropsychology of Facial Expression: A Review of the Neurological and Psychological Mechanisms for Producing Facial Expressions. Psychol. Bull. 1984, 95, 52–77. [Google Scholar] [CrossRef]
  46. Guo, H.; Zhang, X.H.; Liang, J.; Yan, W.J. The Dynamic Features of Lip Corners in Genuine and Posed Smiles. Front. Psychol. 2018, 9, 202. [Google Scholar] [CrossRef] [Green Version]
  47. Muri, R.M. Cortical control of facial expression. J. Comp. Neurol. 2016, 524, 1578–1585. [Google Scholar] [CrossRef] [Green Version]
  48. Scherer, K.R. The Dynamic Architecture of Emotion: Evidence for the Component Process Model. Cogn. Emot. 2009, 23, 1307–1351. [Google Scholar] [CrossRef]
  49. Ekman, P. Facial expression and emotion. Am. Psychol. 1993, 48, 376–379. [Google Scholar] [CrossRef]
Figure 1. Stimuli used in the posed and spontaneous facial emotion task. (A) Posed happy smiles. (B) Spontaneous happy smiles (film clip taken from the movie “About Time”).
Figure 2. 3D facial landmarks scatter plot and facial landmark index numbers.
Figure 3. Example of the alignment problem between facial landmarks in neutral and smile expressions: (A) head position and angle during neutral expression, (B) head position and angle during smile expression, (C) example of the misalignment between the two facial landmark sets.
Figure 4. Example of aligning the two expressions’ landmark sets: (A) scatter plot of the two landmark sets before aligning, (B) scatter plot of the two landmark sets after aligning.
Figure 5. Contrast between posed and spontaneous smile intensities of AUs. (A) AU06 (cheek raiser) mean score. (B) AU12 (lip corner puller) mean score. (C) AU06+AU12 mean score. (AU: Action Unit; *** P < 0.001.)
Figure 6. Classification of 3D landmarks in facial features for posed and spontaneous smiles with left and right faces. The difference in mean scores of posed/spontaneous smile intensities and left/right side for (A) Eyebrow, (B) Eye, (C) Mouth, (D) Outline. (*** P < 0.001).
Figure 7. Classification of 3D landmarks in facial features for posed and spontaneous smiles with left and right faces. The difference in mean scores of posed/spontaneous smile intensities and left/right side for (A) Upper face, (B) Lower face. (*** P < 0.001).
Table 1. The size-normalized median landmark error of CE-CLM on 300-W and Menpo [40,41,42].

Dataset | with outline (68) | without outline (49)
Helen and LFPW (300-W) | 0.00315 | 0.00230
iBUG (300-W) | 0.00531 | 0.00386
Menpo (frontal face) | 0.00223 | 0.00174

* These error values are the size-normalized distances between the annotated ground-truth landmark points and the predicted landmark points.
Table 2. Characteristics of participants.

 | Male (n = 19) | Female (n = 38) | Total (n = 57)
 | mean ± SD 1 | mean ± SD 1 | mean ± SD 1
Age | 22.68 ± 2.16 | 22.55 ± 4.32 | 22.60 ± 3.72
Education (years) | 15.05 ± 1.39 | 14.66 ± 1.02 | 14.79 ± 1.16
BDI 2 | 8.53 ± 5.30 | 12.13 ± 7.77 | 10.93 ± 7.20
BAI 3 | 2.47 ± 3.08 | 5.24 ± 5.08 | 4.32 ± 4.67
TAS 4 | 43.68 ± 7.59 | 45.79 ± 9.23 | 45.09 ± 8.71

1 SD: Standard Deviation, 2 BDI: Beck Depression Inventory, 3 BAI: Beck Anxiety Inventory, 4 TAS: Toronto Alexithymia Scale.
Table 3. Comparative mean AU 2 intensities according to type of smile expression.

 | Posed (n = 57), mean ± SD 1 | Spontaneous (n = 57), mean ± SD 1 | F | p
AU06 (cheek raiser) | 0.03 ± 0.07 | 0.35 ± 0.29 | 81.48 | <0.001
AU12 (lip corner puller) | 0.74 ± 0.54 | 0.09 ± 0.15 | 96.74 | <0.001
AU06+12 (both) | 0.77 ± 0.55 | 0.44 ± 0.33 | 18.80 | <0.001

1 SD: Standard Deviation, 2 AU: Action Unit.
Table 4. Comparison of smile expression by left and right face based on facial landmarks.

Region | Expression 1 | Left (n = 57), mean ± SD 2 | Right (n = 57), mean ± SD 2 | F (Expression) | F (Side) | Pairwise Comparison
Eyebrow | Posed | 76.93 ± 76.23 | 57.78 ± 50.43 | 20.03 *** | 18.26 *** | Spontaneous > Posed; Left > Right
Eyebrow | Spontaneous | 123.23 ± 100.15 | 94.45 ± 69.06 | | |
Eye | Posed | 27.13 ± 31.81 | 29.83 ± 38.17 | 13.28 *** | 6.15 ** | Spontaneous > Posed; Right > Left
Eye | Spontaneous | 42.03 ± 35.39 | 53.74 ± 61.97 | | |
Nose | Posed | 25.56 ± 13.03 | 26.94 ± 13.64 | 1.68 | 4.68 | Not significant
Nose | Spontaneous | 22.63 ± 11.11 | 25.72 ± 14.18 | | |
Mouth | Posed | 1035.13 ± 547.91 | 1033.77 ± 546.84 | 24.13 *** | 0.06 | Posed > Spontaneous
Mouth | Spontaneous | 746.59 ± 347.66 | 760.45 ± 321.07 | | |
Outline | Posed | 541.97 ± 415.72 | 540.11 ± 602.22 | 15.48 *** | 0.10 | Spontaneous > Posed
Outline | Spontaneous | 968.60 ± 768.52 | 935.30 ± 1070.48 | | |
Upper face (Eyebrow + Eye) | Posed | 104.07 ± 104.90 | 87.60 ± 83.05 | 18.32 *** | 8.96 | Spontaneous > Posed
Upper face (Eyebrow + Eye) | Spontaneous | 165.26 ± 132.66 | 148.19 ± 123.49 | | |
Lower face (Nose + Mouth + Outline) | Posed | 1602.65 ± 845.05 | 1600.82 ± 882.75 | 1.07 | 0.02 | Not significant
Lower face (Nose + Mouth + Outline) | Spontaneous | 1737.81 ± 953.92 | 1721.46 ± 1250.97 | | |

1 Expression (Posed, Spontaneous), 2 SD: Standard Deviation. ** P < 0.01, *** P < 0.001.
