
Towards the Creation of Spontaneous Datasets Based on Youtube Reaction Videos

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13018)

Abstract

Reaction videos from YouTube provide a range of possibilities for investigating human behavior. This research presents a method for creating a spontaneous facial expression dataset from YouTube reaction videos. We use Convolutional Neural Networks to classify emotions in facial expressions, supported by feature extraction tools. To understand the behavior of faces reacting to a given trailer, we first automatically select moments of interest in the video, where reactions are most intense, and then apply two metrics, the Agreement and Continuity rates, which help identify spontaneous emotions that neural networks cannot classify with high confidence. Using these metrics, we found that, on average, 71% of the faces presented similar classified emotions at the most intense moments of the video. Our proposal is to use these metrics jointly with the neural network's accuracy, which can be lower than usual, in order to find spontaneous expressions. Finally, we show an example of a generated dataset.
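To make the two rates concrete, here is a minimal Python sketch of how a per-moment agreement rate and a per-face continuity rate could be computed from the CNN's emotion labels. The function names and exact definitions below are illustrative assumptions, not the paper's formal formulation:

```python
from collections import Counter
from typing import Dict, List

# Hypothetical CNN output at one "interest moment": {face_id: emotion_label}.
# The definitions below are assumptions for illustration only.

def agreement_rate(labels_per_face: Dict[str, str]) -> float:
    """Fraction of faces whose classified emotion matches the most
    common (modal) emotion at this moment."""
    counts = Counter(labels_per_face.values())
    _, modal_count = counts.most_common(1)[0]
    return modal_count / len(labels_per_face)

def continuity_rate(labels_over_time: List[str]) -> float:
    """Fraction of consecutive frame pairs in which one face keeps
    the same classified emotion."""
    if len(labels_over_time) < 2:
        return 1.0
    stable = sum(a == b for a, b in zip(labels_over_time, labels_over_time[1:]))
    return stable / (len(labels_over_time) - 1)

# Example: five viewers reacting at the same trailer moment.
moment = {"face_1": "surprise", "face_2": "surprise",
          "face_3": "surprise", "face_4": "happy", "face_5": "surprise"}
print(agreement_rate(moment))  # 0.8

# Example: one face tracked across six consecutive frames.
track = ["neutral", "surprise", "surprise", "surprise", "surprise", "happy"]
print(continuity_rate(track))  # 0.6
```

Under these assumed definitions, a moment with high agreement across faces and high continuity within each face would be a candidate spontaneous expression even when the classifier's own confidence is low.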


Notes

  1. Framerate or FPS (Frames Per Second).

  2. https://github.com/pytube/pytube.

  3. https://zulko.github.io/moviepy/.

  4. Described and available at https://www.kaggle.com/msambare/fer2013.

  5. Official Trailer - https://www.youtube.com/watch?v=sj9J2ecsSpo.
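As a sketch of how reaction videos could be gathered and sampled with the pytube and moviepy packages cited in the notes above, consider the snippet below. The URL and filename are placeholders, and the pipeline is an assumption for illustration, not the paper's exact implementation:

```python
# Minimal sketch, assuming the pytube and moviepy packages from the notes.
from pytube import YouTube
from moviepy.editor import VideoFileClip

URL = "https://www.youtube.com/watch?v=..."  # placeholder reaction video

# Download a progressive MP4 stream of the reaction video.
stream = (
    YouTube(URL).streams
    .filter(progressive=True, file_extension="mp4")
    .order_by("resolution")
    .desc()
    .first()
)
path = stream.download(filename="reaction.mp4")

# Sample frames at a fixed framerate (see note 1) for later classification.
clip = VideoFileClip(path)
frames = list(clip.iter_frames(fps=1))  # one HxWx3 uint8 array per second
print(f"sampled {len(frames)} frames from {clip.duration:.0f}s of video")
clip.close()
```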


Acknowledgment

The authors would like to thank CNPq and CAPES for partially funding this work.

Author information


Correspondence to Vitor Miguel Xavier Peres or Soraia Raupp Musse.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Peres, V.M.X., Musse, S.R. (2021). Towards the Creation of Spontaneous Datasets Based on Youtube Reaction Videos. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2021. Lecture Notes in Computer Science, vol 13018. Springer, Cham. https://doi.org/10.1007/978-3-030-90436-4_16

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-90436-4_16


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90435-7

  • Online ISBN: 978-3-030-90436-4

  • eBook Packages: Computer Science, Computer Science (R0)
