
Deep Learning Based Audio-Visual Emotion Recognition in a Smart Learning Environment

Conference paper
In: Towards a Hybrid, Flexible and Socially Engaged Higher Education (ICL 2023)

Abstract

The main goal of this work is to develop a method for monitoring the emotional state of students and teachers during the study process, based on voice and facial expressions, that can be applied in a smart learning environment. We build a multimodal emotion detection model that combines voice and facial expression features using convolutional neural network (CNN) models, and we describe how the resulting model is deployed in the learning process as a web application that monitors the emotional state of students and teachers in a smart learning environment. We compare three types of emotion detection models: one based on audio features alone, one based on facial features alone, and one combining both, and we test their performance in a simulated study process. To evaluate and analyze the models' performance, k-fold cross-validation is applied, and classification accuracy, weighted F1 score, and confusion matrices are computed. The application developed in this study makes it possible to identify the overall emotional background of the learning environment and to determine the emotional state of students and academic staff in near real time during the learning process.
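Only the abstract is available in this preview, so the following is a minimal illustrative sketch of the kind of two-branch CNN the abstract describes, not the authors' published architecture: a 1D-convolutional branch over an MFCC sequence extracted from speech and a 2D-convolutional branch over 48x48 grayscale face crops (the FER-2013 input convention), fused before a softmax over emotion classes. All layer sizes, the MFCC input shape, and the seven-class output are assumptions.

```python
# Illustrative sketch only: a two-branch audio-visual emotion CNN of the
# kind the abstract describes. The layer sizes, the (174, 40) MFCC shape,
# and the 7-class output are assumptions, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7  # assumed: six basic emotions plus neutral

# Audio branch: 1D convolutions over an MFCC sequence
# (e.g. a librosa.feature.mfcc output, transposed to time-major).
audio_in = layers.Input(shape=(174, 40), name="mfcc")
a = layers.Conv1D(64, 5, activation="relu")(audio_in)
a = layers.MaxPooling1D(2)(a)
a = layers.Conv1D(128, 5, activation="relu")(a)
a = layers.GlobalAveragePooling1D()(a)

# Visual branch: 2D convolutions over 48x48 grayscale face crops.
face_in = layers.Input(shape=(48, 48, 1), name="face")
v = layers.Conv2D(32, 3, activation="relu")(face_in)
v = layers.MaxPooling2D(2)(v)
v = layers.Conv2D(64, 3, activation="relu")(v)
v = layers.MaxPooling2D(2)(v)
v = layers.Flatten()(v)

# Late fusion of the two modalities, then a softmax over emotion classes.
h = layers.concatenate([a, v])
h = layers.Dense(128, activation="relu")(h)
h = layers.Dropout(0.5)(h)
out = layers.Dense(NUM_EMOTIONS, activation="softmax")(h)

model = models.Model(inputs=[audio_in, face_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Dropping either input branch yields the audio-only and face-only baselines that the abstract compares against the fused model.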

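The evaluation protocol the abstract names (k-fold cross-validation with classification accuracy, weighted F1 score, and a confusion matrix) is standard; a scikit-learn version of that loop might look like the sketch below, where `build_model`, `X`, and `y` are placeholders and the fold count and epoch budget are assumptions.

```python
# Sketch of the evaluation protocol named in the abstract: stratified
# k-fold cross-validation reporting accuracy, weighted F1, and an
# aggregated confusion matrix. `build_model`, `X`, `y`, k=5, and
# epochs=30 are placeholders/assumptions. For the fused audio-visual
# model, each modality's feature array would be indexed with the same
# fold indices.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

def evaluate_kfold(build_model, X, y, k=5, epochs=30):
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    accs, f1s, cms = [], [], []
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()  # fresh, untrained model for every fold
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        y_pred = np.argmax(model.predict(X[test_idx]), axis=1)
        accs.append(accuracy_score(y[test_idx], y_pred))
        f1s.append(f1_score(y[test_idx], y_pred, average="weighted"))
        cms.append(confusion_matrix(y[test_idx], y_pred))
    print(f"accuracy:    {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
    print(f"weighted F1: {np.mean(f1s):.3f} +/- {np.std(f1s):.3f}")
    return np.sum(cms, axis=0)  # confusion matrix summed over folds
```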


Author information

Correspondence to Olga Dunajeva.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ivleva, N., Pentel, A., Dunajeva, O., Juštšenko, V. (2024). Deep Learning Based Audio-Visual Emotion Recognition in a Smart Learning Environment. In: Auer, M.E., Cukierman, U.R., Vendrell Vidal, E., Tovar Caro, E. (eds) Towards a Hybrid, Flexible and Socially Engaged Higher Education. ICL 2023. Lecture Notes in Networks and Systems, vol 899. Springer, Cham. https://doi.org/10.1007/978-3-031-51979-6_44
