
Emotion recognition based on facial components

Published in: Sādhanā

Abstract

Machine analysis of facial emotion is a challenging and innovative research topic in human–computer interaction. Although the human eye recognizes facial expressions immediately, it is very hard for a computer to extract and use the information they contain. This paper proposes an approach to emotion recognition based on facial components. Local features are extracted in each frame using Gabor wavelets at selected scales and orientations, and are passed to an ensemble classifier that detects the location of the face region. From the signature of each pixel on the face, the eye and mouth regions are detected using the same ensemble classifier, and their features are extracted using normalized semi-local binary patterns. The multiclass AdaBoost algorithm then selects and classifies these discriminative features to recognize the emotion of the face. The proposed methods are evaluated on the RML, CK and CMU-MIT databases and show significant performance improvement over existing techniques owing to their novel features.
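The feature-extraction stages described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the kernel size, scales and orientations are illustrative placeholders, and a plain 8-neighbour local binary pattern stands in for the paper's normalized semi-local binary patterns.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real-valued Gabor kernel at orientation theta and wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * xr / lam)    # sinusoidal carrier

def gabor_responses(img, scales=(4.0, 8.0), n_orient=4, ksize=9):
    """One filter-response map per (scale, orientation) pair."""
    maps = []
    for lam in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = gabor_kernel(ksize, sigma=lam / 2, theta=theta, lam=lam)
            # valid-mode convolution via sliding windows
            windows = sliding_window_view(img, kern.shape)
            maps.append(np.einsum('ijkl,kl->ij', windows, kern))
    return np.stack(maps)  # shape: (len(scales) * n_orient, H-ksize+1, W-ksize+1)

def lbp_codes(patch):
    """8-neighbour local binary pattern code for each interior pixel."""
    c = patch[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = patch[1 + dy:patch.shape[0] - 1 + dy,
                   1 + dx:patch.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit  # set bit where neighbour >= centre
    return codes
```

In the paper's pipeline, maps like those from `gabor_responses` would feed the ensemble classifier for face localization, and codes like those from `lbp_codes` (computed over the detected eye and mouth regions) would feed the multiclass AdaBoost stage.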



Author information


Corresponding author

Correspondence to P Ithaya Rani.


Cite this article

RANI, P.I., MUNEESWARAN, K. Emotion recognition based on facial components. Sādhanā 43, 48 (2018). https://doi.org/10.1007/s12046-018-0801-6

