Abstract
A 3D Modular Morphable Model (3DMMM) is introduced to address facial expression recognition. A 3D Morphable Model (3DMM) encodes the 3D shape and 2D texture of faces, conventionally extracted using Principal Component Analysis (PCA). In this work, a modular PCA approach is used instead: the face is divided into six modules according to facial features categorized by the MPEG-4 Facial Animation Parameters (FAPs), and each region is treated separately in the PCA analysis. Our aim is to recognize the six basic facial expressions, provided the characteristic properties of each expression are present. Given a 2D image of a subject displaying a facial expression, a matching 3D model is found by fitting the image to our 3DMMM. Fitting proceeds module by module, in order of each module's importance for facial expression recognition (FER), and each module is assigned a weighting factor based on its position in this priority list. The modules are then combined, and the expression is recognized by measuring the similarity (mean square error) between the input image and the reconstructed 3D face model.
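Since the pipeline is described only at a high level here, a minimal sketch may help fix ideas. The Python below illustrates per-module PCA and a weighted mean-square-error comparison; the component counts, weights, and all function names are assumptions for illustration, not the chapter's actual implementation.

```python
import numpy as np

# Illustrative sketch of modular PCA fitting (not the chapter's actual code).
# Each of the six FAP-based modules gets its own PCA sub-model.

def fit_module_pca(train_shapes, n_components):
    """PCA basis for one module: rows of `train_shapes` are training faces,
    columns are that module's stacked vertex coordinates."""
    mean = train_shapes.mean(axis=0)
    # SVD of the centered data gives the principal directions in vt's rows.
    _, _, vt = np.linalg.svd(train_shapes - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct(observed, mean, basis):
    """Project one observed module onto its basis and rebuild it."""
    coeffs = basis @ (observed - mean)
    return mean + basis.T @ coeffs

def weighted_mse(observed_modules, module_models, weights):
    """Weighted mean square error over all modules; `weights` encodes each
    module's priority in the FER fitting order."""
    total = 0.0
    for obs, (mean, basis), w in zip(observed_modules, module_models, weights):
        recon = reconstruct(obs, mean, basis)
        total += w * np.mean((obs - recon) ** 2)
    return total
```

Under these assumptions, recognition amounts to picking the expression whose module models yield the smallest weighted error against the input.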
References
Fasel B, Luettin J (2003) Automatic facial expression analysis: a survey. Pattern Recogn 36(1):259–275
Mao X, Xue Y, Li Z, Huang K, Lv S (2009) Robust facial expression recognition based on RPCA and AdaBoost. In: 10th workshop on image analysis for multimedia interactive services
Tena R, De la Torre F, Matthews I (2011) Interactive region-based linear 3D face models. ACM Trans Graph 30(4):76
Zhao W, Chellappa R, Phillips PJ, Rosenfeld A (2003) Face recognition: a literature survey. ACM Comput Surv 35(4):399–458
King I, Xu L (1997) Localized principal component analysis learning for face feature extraction and recognition. In: Proceedings of workshop on 3D computer vision, p 124
Chiang C-C, Chen Z-W, Yang C-N (2009) A module-based face synthesizing method. In: APSIPA annual summit and conference, p 24
Ekman P, Friesen W (1978) Facial action coding system: a technique for the measurement of facial movement. Consulting Psychologists Press, Palo Alto
Zhang Y, Ji Q, Zhu Z, Yi B (2008) Dynamic facial expression analysis and synthesis with MPEG-4 facial animation parameters. IEEE Trans Circuits Syst Video Technol 18(10):1383–1396
Romdhani S, Pierrard J-S, Vetter T (2005) 3D morphable face model, a unified approach for analysis and synthesis of images. In: Zhao W, Chellappa R (eds) Face processing: advanced modeling and methods. Elsevier
Lavagetto F, Pockaj R (1999) The facial animation engine: towards a high-level interface for the design of MPEG-4 compliant animated faces. IEEE Trans Circuits Syst Video Technol 9(2):277–289
Blanz V, Scherbaum K, Seidel H (2007) Fitting a morphable model to 3D scans of faces. In: IEEE 11th international conference on computer vision (ICCV), pp 1–8
Raouzaiou A, Tsapatsoulis N, Karpouzis K, Kollias S (2002) Parameterized facial expression synthesis based on MPEG-4. EURASIP J Appl Sig Proc 1:1021–1038
Deng Z, Noh J (2008) Computer facial animation: a survey. In: Deng Z, Neumann U (eds) Data driven 3D facial animation. Springer, pp 1–28
Gottumukkal R, Asari VK (2004) An improved face recognition technique based on modular PCA approach. Pattern Recognit Lett 25(4):429–436
ISO/IEC IS 14496-2 Visual (1999) http://kazus.ru/nuke/modules/Downloads/pub/144/0/ISO-IEC-14496-2-2001.pdf. Accessed 13 Feb 2012
Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I (2010) The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: IEEE computer society conference on computer vision and pattern recognition workshops (CVPRW), pp 94–101
Pantic M, Rothkrantz L (2000) Automatic analysis of facial expressions: the state of the art. IEEE Trans PAMI 22:1424–1445
Savran A, Alyüz N, Dibeklioğlu H, Çeliktutan O, Gökberk B, Sankur B, Akarun L (2008) Bosphorus database for 3D face analysis. In: Schouten B, Juul NC, Drygajlo A, Tistarelli M (eds) Biometrics and identity management. Springer, Berlin, pp 47–56
Velusamy S, Kannan H, Anand B, Sharma A, Navathe B (2011) A method to infer emotions from facial action units. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2028–2031
Appendices
Appendix 1
AU | Description | FAP number | FAP name | Module
---|---|---|---|---
1 | Inner brow raiser | 31 | raise_l_i_eyebrow | 5
| | 32 | raise_r_i_eyebrow |
2 | Outer brow raiser | 35 | raise_l_o_eyebrow | 5
| | 36 | raise_r_o_eyebrow |
4 | Brow lowerer | 31_ | raise_l_i_eyebrow | 5
| | 32_ | raise_r_i_eyebrow |
| | 37 | squeeze_l_eyebrow |
| | 38 | squeeze_r_eyebrow |
5 | Upper lid raiser | 19_ | open_t_l_eyelid (close_t_l_eyelid) | 4
| | 20_ | open_t_r_eyelid (close_t_r_eyelid) |
6 | Cheek raiser | 19 | close_t_l_eyelid | 5
| | 20 | close_t_r_eyelid |
| | 41 | lift_l_cheek |
| | 42 | lift_r_cheek |
7 | Lid tightener | 21 | close_b_l_eyelid | 4
| | 22 | close_b_r_eyelid |
9 | Nose wrinkler | 61 | stretch_l_nose | 1
| | 62 | stretch_r_nose |
10 | Upper lip raiser | 59 | raise_l_cornerlip_o | 3
| | 60 | raise_r_cornerlip_o |
12 | Lip corner puller | 59 | raise_l_cornerlip_o | 3
| | 60 | raise_r_cornerlip_o |
| | 53 | stretch_l_cornerlip_o |
| | 54 | stretch_r_cornerlip_o |
15 | Lip corner depressor | 59_ | lower_l_cornerlip (raise_l_cornerlip_o) | 3
| | 60_ | lower_r_cornerlip (raise_r_cornerlip_o) |
16 | Lower lip depressor | 5 | raise_b_midlip | 3
| | 16 | push_b_lip |
17 | Chin raiser | 18 | depress_chin | 3
20 | Lip stretcher | 53 | stretch_l_cornerlip | 3
| | 54 | stretch_r_cornerlip |
| | 5 | raise_b_midlip |
23 | Lip tightener | 53_ | tight_l_cornerlip | 3
| | 54_ | tight_r_cornerlip |
24 | Lip pressor | 4 | lower_t_midlip | 3
| | 16 | push_b_lip |
| | 17 | push_t_lip |
25 | Lips part | 3 | open_jaw (slight) | 3
| | 5_ | lower_b_midlip (slight) |
26 | Jaw drop | 3 | open_jaw (middle) | 3
| | 5_ | lower_b_midlip (middle) |
27 | Mouth stretch | 3_ | open_jaw (large) | 3
| | 5_ | lower_b_midlip (large) |
Appendix 2
Expression | Ekman and Friesen [7] | Raouzaiou et al. [12] (primary) | Raouzaiou et al. [12] (auxiliary) | Zhang et al. [8] | Deng and Noh [13] | Lucey et al. [16] | Velusamy et al. [19]
---|---|---|---|---|---|---|---
Anger | 4 + 5 + 7 + 23 | 2 + 4 + 5 + 7 + 17 | 2 + 4 + 7 + 23 + 24 | 17 + 25 + 26 + 16 | 2 + 4 + 7 + 9 + 10 + 20 + 26 | 4 + 5 + 15 + 17 | 23 + 7 + 17 + 4 + 2
Disgust | 9 + 15 + 16 | 5 + 7 + 10 + 25 | 9 + 10 | 17 + 25 + 26 | NIL | 1 + 4 + 15 + 17 | 9 + 7 + 4 + 17 + 6
Fear | 1 + 2 + 4 + 5 + 20 + 26 | 4 + 5 + 7 + 24 + 26 | 20 + (1 + 5) + (5 + 7) | 4 + 5 + 7 + 25 + 26 | 1 + 2 + 4 + 5 + 15 + 20 + 26 | 1 + 4 + 7 + 20 | 20 + 4 + 1 + 5 + 7
Happiness | 6 + 12 | 26 + 12 + 7 + 6 + 20 | 6 + 12 | 16 + 25 + 26 | 1 + 6 + 12 + 14 | 6 + 12 + 25 | 12 + 6 + 26 + 10 + 23
Sadness | 1 + 4 + 15 | 7 + 5 + 12 | 1 + 15 + 17 | 4 + 7 + 25 + 26 | 1 + 4 + 15 + 23 | 1 + 2 + 4 + 15 + 17 | 15 + 1 + 4 + 17 + 10
Surprise | 1 + 2 + 5B + 26 | 26 + 5 + 7 + 4 + 2 + 15 | 5 + 26 + 27 + (1 + 2) | NIL | 1 + 2 + 5 + 15 + 16 + 20 + 26 | 1 + 2 + 5 + 25 + 27 | 27 + 2 + 1 + 5 + 26
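Read together, the two appendices define a lookup from expressions to the modules their AUs touch, which is what orders the module-wise fitting. The sketch below transcribes that lookup; the dictionary and helper names are hypothetical, and AU 5B is recorded as AU 5 (intensity dropped).

```python
# AU -> module, transcribed from Appendix 1 (illustrative encoding).
AU_TO_MODULE = {
    1: 5, 2: 5, 4: 5, 5: 4, 6: 5, 7: 4, 9: 1,
    10: 3, 12: 3, 15: 3, 16: 3, 17: 3, 20: 3,
    23: 3, 24: 3, 25: 3, 26: 3, 27: 3,
}

# Expression -> AU combination, from the Ekman and Friesen [7] column
# of Appendix 2 (AU 5B recorded here as AU 5).
EXPRESSION_AUS = {
    "anger":     [4, 5, 7, 23],
    "disgust":   [9, 15, 16],
    "fear":      [1, 2, 4, 5, 20, 26],
    "happiness": [6, 12],
    "sadness":   [1, 4, 15],
    "surprise":  [1, 2, 5, 26],
}

def modules_for(expression):
    """Modules an expression involves, e.g. to prioritize during fitting."""
    return sorted({AU_TO_MODULE[au] for au in EXPRESSION_AUS[expression]})

print(modules_for("surprise"))  # -> [3, 4, 5]
```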
Copyright information
© 2013 Springer Science+Business Media Dordrecht
About this chapter
Cite this chapter
Ujir, H., Spann, M. (2013). Facial Expression Recognition Using FAPs-Based 3DMMM. In: Tavares, J., Natal Jorge, R. (eds) Topics in Medical Image Processing and Computational Vision. Lecture Notes in Computational Vision and Biomechanics, vol 8. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-0726-9_2
DOI: https://doi.org/10.1007/978-94-007-0726-9_2
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-0725-2
Online ISBN: 978-94-007-0726-9