Affective Video Tagging Framework using Human Attention Modelling through EEG Signals

Shanu Sharma, Ashwani Kumar Dubey, Priya Ranjan
Copyright: © 2022 | Volume: 18 | Issue: 1 | Pages: 18
ISSN: 1548-3657 | EISSN: 1548-3665 | EISBN13: 9781799893820 | DOI: 10.4018/IJIIT.306968


APA

Sharma, S., Dubey, A. K., & Ranjan, P. (2022). Affective Video Tagging Framework using Human Attention Modelling through EEG Signals. International Journal of Intelligent Information Technologies (IJIIT), 18(1), 1-18. http://doi.org/10.4018/IJIIT.306968



Abstract

The explosion of multimedia content in recent years is unsurprising, and efficient methods for managing and analyzing it are in constant demand. The effectiveness of any multimedia content depends on human perception and cognition while watching it. Human attention is an important parameter here, as it reflects a viewer's engagement and interest in the content. Considering this aspect, a video tagging framework is proposed in which participants' EEG signals are used to analyze human perception while watching videos. A rigorous analysis of different scalp locations and frequency rhythms of brain signals was performed to formulate significant features corresponding to affective and interesting video content. The analysis presented in this paper shows that the extracted human attention-based features yield promising results, with an accuracy of 93.2% using an SVM-based classification model, which supports the model's applicability to various BCI-based applications for the automatic classification of multimedia content.
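The general pipeline the abstract describes — band-limited EEG features fed to an SVM classifier — can be sketched roughly as follows. This is an illustrative sketch, not the paper's actual protocol: the sampling rate, frequency bands, synthetic trial data, and SVM hyperparameters are all assumptions made for demonstration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 128  # sampling rate in Hz (assumed, not from the paper)
# Canonical EEG rhythms; the paper analyzes rhythms and scalp locations,
# but the exact bands/channels used here are illustrative.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Mean spectral power per EEG rhythm, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

# Synthetic stand-in trials: "engaging" trials carry stronger beta activity.
rng = np.random.default_rng(0)

def make_trial(engaging, n_sec=4):
    t = np.arange(n_sec * FS) / FS
    sig = rng.normal(0.0, 1.0, t.size)                      # background EEG noise
    amp = 2.0 if engaging else 0.5                          # attention-linked beta
    sig += amp * np.sin(2 * np.pi * 20 * t)                 # ~20 Hz beta component
    return band_powers(sig)

X = np.array([make_trial(engaging=(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

On this easily separable synthetic data the classifier scores near-perfectly; the paper's 93.2% figure refers to its own EEG dataset and feature set, not to this toy setup.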