Abstract
In this paper, we report on viewer behavior by analyzing visual cues while users watch various TV broadcasts in a pilot setting. We detail the first results of an empathic analysis of viewers watching four distinct videos in dedicated recording sessions. Each viewer sits on a chair in front of a TV set in an unconstrained position (free posture, free head pose, and free body movement) and is recorded by a regular webcam at both low and high resolutions. We have extracted metrics related to head and global movement, changes in head orientation, and facial expressions (happiness, anger, surprise). We have also conducted preliminary studies on how the extracted metrics can be used to detect a viewer's interest, amusement, or distraction.
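Two of the abstract's metrics, global movement and head movement, can be sketched as simple frame-level signals. The following is a minimal illustration only, not the paper's implementation: it assumes grayscale frames are available as 2-D numpy arrays and that a face detector has already produced per-frame face-box centers; the function names are hypothetical.

```python
import numpy as np

def motion_energy(frames):
    """Global-movement proxy: mean absolute inter-frame pixel difference.

    `frames` is a sequence of grayscale frames (2-D arrays of equal shape).
    Returns one value per frame transition.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def head_displacement(centers):
    """Head-movement proxy: Euclidean distance between successive
    face-box centers `(x, y)`, e.g. from a face detector."""
    c = np.asarray(centers, dtype=np.float64)
    return np.linalg.norm(np.diff(c, axis=0), axis=1)
```

Thresholding such signals over a sliding window is one plausible way to flag segments of agitation (high movement) versus attentive stillness, under the stated assumptions.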
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Bilasco, I.M., Lablack, A., Dahmane, A., Danisman, T. (2015). Analysing User Visual Implicit Feedback in Enhanced TV Scenarios. In: Agapito, L., Bronstein, M., Rother, C. (eds) Computer Vision - ECCV 2014 Workshops. ECCV 2014. Lecture Notes in Computer Science(), vol 8925. Springer, Cham. https://doi.org/10.1007/978-3-319-16178-5_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-16177-8
Online ISBN: 978-3-319-16178-5