DOI: 10.1145/3625008.3625014
research-article

Comparative Analysis of Facial Expression Recognition Systems for Evaluating Emotional States in Virtual Humans

Published: 06 January 2024

ABSTRACT

The digital animation process is a complex endeavour, requiring professional animators to acquire substantial expertise and technique through years of study and practice. Facial animation in particular, in which virtual humans must express specific mental states or emotions with a convincing degree of realism, is further complicated by the “Uncanny Valley” phenomenon. In this context, it is posited that pre-validated facial expressions for certain emotions could serve as references for novice or inexperienced animators during the facial animation and posing of their virtual humans using morph targets, also known as blend shapes or shape keys. This research presents a comparative study of two Facial Expression Recognition (FER) systems that employ pre-trained facial recognition models, applied here to emotion recognition in virtual humans. Since these systems were designed and trained for facial recognition in real humans rather than for this purpose, this study investigates how well they apply to scenarios where virtual humans are used instead of real humans. This assessment is a critical step towards evaluating the feasibility of integrating FER models into a support tool for the facial animation and posing of virtual humans. Through this investigation, this research provides evidence of the reliability of the FER library and Deepface systems for emotion recognition in virtual humans, contributing to new ways of enhancing the digital animation process and overcoming the inherent complexities of facial animation.
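As a rough illustration of the kind of comparison the study describes, the sketch below contrasts the dominant-emotion outputs of the two systems. The score dictionaries are hypothetical stand-ins for what the `fer` and `deepface` Python packages return (the real library calls are indicated in comments but not executed here); note that the FER library reports scores in the range 0–1 while Deepface reports percentages, so the comparison is made on the dominant label rather than on raw scores.

```python
# Hypothetical sketch of comparing the two FER systems on a rendered
# virtual-human face. The actual library calls would resemble:
#   from fer import FER
#   from deepface import DeepFace
#   fer_scores = FER(mtcnn=True).detect_emotions(image)[0]["emotions"]
#   df_scores  = DeepFace.analyze(img_path, actions=["emotion"])[0]["emotion"]
# Below, illustrative dictionaries stand in for those outputs.

def top_emotion(scores):
    """Return the (label, score) pair with the highest confidence."""
    label = max(scores, key=scores.get)
    return label, scores[label]

def systems_agree(scores_a, scores_b):
    """True when both systems predict the same dominant emotion,
    regardless of their different score scales."""
    return top_emotion(scores_a)[0] == top_emotion(scores_b)[0]

# Illustrative outputs for a virtual human posed with a "happy" blend shape
fer_scores = {"angry": 0.01, "disgust": 0.00, "fear": 0.02,
              "happy": 0.90, "sad": 0.01, "surprise": 0.04, "neutral": 0.02}
deepface_scores = {"angry": 0.5, "disgust": 0.1, "fear": 1.2,
                   "happy": 93.4, "sad": 0.6, "surprise": 2.8, "neutral": 1.4}

print(top_emotion(fer_scores))              # ('happy', 0.9)
print(systems_agree(fer_scores, deepface_scores))  # True
```

Comparing dominant labels rather than raw scores sidesteps the scale mismatch between the two libraries and mirrors the pass/fail style of assessment the abstract implies.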


Published in

SVR '23: Proceedings of the 25th Symposium on Virtual and Augmented Reality
November 2023, 315 pages
ISBN: 9798400709432
DOI: 10.1145/3625008

      Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

