Abstract
Reaction videos on YouTube offer a range of possibilities for investigating human behavior. This work presents a method for building a spontaneous facial expression dataset from YouTube reaction videos. We use Convolutional Neural Networks (CNNs) to classify emotions in facial expressions, supported by feature extraction tools. To understand how faces react to a given trailer, we first automatically select moments of interest in the video, where reactions are most intense, and then introduce two metrics, the Agreement and Continuity rates, which help identify spontaneous emotions that Neural Networks cannot classify with high confidence. Using these metrics, we found that, on average, 71% of the faces showed agreement in their classified emotions at the most intense moments of the video. We propose using these metrics jointly with the Neural Network accuracy, which may be lower than usual, in order to find spontaneous expressions. Finally, we show an example of a generated dataset.
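The abstract names the Agreement and Continuity rates but does not give their formulas. A minimal sketch of plausible formulations, assuming Agreement is the fraction of faces matching the majority emotion at a peak moment, and Continuity is the fraction of consecutive frames in which a single face keeps the same classified label (both definitions are assumptions, not taken from the paper):

```python
from collections import Counter

def agreement_rate(emotions):
    """Hypothetical Agreement rate: fraction of faces whose classified
    emotion matches the most frequent emotion at one moment of interest.
    `emotions` is a list of per-face labels, e.g. ["happy", "sad", ...]."""
    if not emotions:
        return 0.0
    # most_common(1) returns [(label, count)] for the majority emotion
    _, count = Counter(emotions).most_common(1)[0]
    return count / len(emotions)

def continuity_rate(labels):
    """Hypothetical Continuity rate: fraction of consecutive-frame pairs
    in which a single face keeps the same classified emotion.
    `labels` is the per-frame label sequence for one face."""
    if len(labels) < 2:
        return 1.0
    same = sum(a == b for a, b in zip(labels, labels[1:]))
    return same / (len(labels) - 1)
```

Under these definitions, four faces labeled `["happy", "happy", "sad", "happy"]` at one peak moment give an Agreement rate of 0.75, in the spirit of the 71% average reported in the abstract.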
Notes
- 1. Framerate or FPS (Frames Per Second).
- 2. https://github.com/pytube/pytube.
- 3.
- 4. Described and available at https://www.kaggle.com/msambare/fer2013.
- 5. Official Trailer - https://www.youtube.com/watch?v=sj9J2ecsSpo.
Acknowledgment
The authors would like to thank CNPq and CAPES for partially funding this work.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Peres, V.M.X., Musse, S.R. (2021). Towards the Creation of Spontaneous Datasets Based on Youtube Reaction Videos. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2021. Lecture Notes in Computer Science(), vol 13018. Springer, Cham. https://doi.org/10.1007/978-3-030-90436-4_16
DOI: https://doi.org/10.1007/978-3-030-90436-4_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-90435-7
Online ISBN: 978-3-030-90436-4
eBook Packages: Computer Science, Computer Science (R0)