ABSTRACT
In this paper, we present a video-based emotion recognition system submitted to the EmotiW 2016 Challenge. The core of this system is a hybrid network that combines a recurrent neural network (RNN) and a 3D convolutional network (C3D) in a late-fusion fashion. The RNN and C3D encode appearance and motion information in different ways: the RNN takes appearance features extracted by a convolutional neural network (CNN) from individual video frames as input and encodes motion afterwards, whereas C3D models the appearance and motion of a video simultaneously. Combined with an audio module, our system achieves a recognition accuracy of 59.02% without using any additional emotion-labeled video clips in the training set, compared to the 53.8% achieved by the winner of EmotiW 2015. Extensive experiments show that combining RNN and C3D noticeably improves video-based emotion recognition.
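The late-fusion scheme described above can be illustrated with a minimal sketch: each module (CNN-RNN, C3D, audio) produces its own class-probability scores for a clip, and the final prediction is taken from a weighted average of those scores. The fusion weights and the toy scores below are hypothetical placeholders, not the values used in the actual system.

```python
import numpy as np

def late_fusion(score_lists, weights):
    """Weighted score-level (late) fusion of per-module class probabilities.

    score_lists: list of 1-D arrays, one probability vector per module.
    weights: list of scalar fusion weights (hypothetical values below).
    """
    fused = np.zeros_like(score_lists[0])
    for scores, w in zip(score_lists, weights):
        fused += w * scores
    return fused / sum(weights)  # renormalize so the result sums to 1

# Toy 7-class emotion scores from the three modules (illustrative only)
rnn_scores   = np.array([0.10, 0.50, 0.10, 0.10, 0.10, 0.05, 0.05])
c3d_scores   = np.array([0.20, 0.40, 0.10, 0.10, 0.10, 0.05, 0.05])
audio_scores = np.array([0.30, 0.20, 0.20, 0.10, 0.10, 0.05, 0.05])

fused = late_fusion([rnn_scores, c3d_scores, audio_scores], [0.4, 0.4, 0.2])
pred = int(np.argmax(fused))  # index of the predicted emotion class
```

Score-level fusion of this kind lets each module be trained independently and keeps the combination step cheap; the weights can be tuned on the validation set.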