Movie Video Summarization: Generating Personalized Summaries Using Spatiotemporal Salient Region Detection


Rajkumar Kannan, Sridhar Swaminathan, Gheorghita Ghinea, Frederic Andres, Kalaiarasi Sonai Muthu Anbananthen
Copyright: © 2019 |Volume: 10 |Issue: 3 |Pages: 26
ISSN: 1947-8534|EISSN: 1947-8542|EISBN13: 9781522565345|DOI: 10.4018/IJMDEM.2019070101

Kannan, R., Swaminathan, S., Ghinea, G., Andres, F., & Anbananthen, K. S. (2019). Movie video summarization: Generating personalized summaries using spatiotemporal salient region detection. International Journal of Multimedia Data Engineering and Management (IJMDEM), 10(3), 1-26. https://doi.org/10.4018/IJMDEM.2019070101

Abstract

Video summarization condenses a video by extracting its informative and interesting segments. In this article, a novel video summarization approach is proposed based on spatiotemporal salient region detection. The proposed approach first segments a video into a set of shots, which are then ranked by spatiotemporal saliency scores. The score for a shot is computed by aggregating the frame-level spatiotemporal saliency scores. The approach detects spatial and temporal salient regions separately, using different saliency theories related to objects present in a visual scene. The spatial saliency of a video frame is computed using color contrast estimation, color distribution estimation, and center prior integration. The temporal saliency of a video frame is estimated by integrating local and global temporal saliencies computed from patch-level optical flow abstractions. Finally, the top-ranked shots with the highest saliency scores are selected to generate the video summary. Objective and subjective experimental results demonstrate the efficacy of the proposed approach.
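The shot-ranking and selection step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the per-frame fusion of spatial and temporal saliency (here a simple sum), the shot-level aggregation (here a mean), and the function and parameter names (`rank_shots`, `shot_bounds`, `top_k`) are all assumptions introduced for illustration.

```python
def rank_shots(spatial, temporal, shot_bounds, top_k):
    """Return the indices of the top_k shots by aggregated saliency.

    spatial, temporal: per-frame saliency scores (equal-length sequences);
    shot_bounds: list of (start, end) frame-index pairs, end exclusive.
    Fusion and aggregation choices are illustrative assumptions.
    """
    shot_scores = []
    for start, end in shot_bounds:
        # Fuse spatial and temporal saliency per frame, then average
        # over the shot to obtain a single shot-level score.
        fused = [spatial[f] + temporal[f] for f in range(start, end)]
        shot_scores.append(sum(fused) / max(len(fused), 1))
    # Rank shots by score and keep the top_k, returned in temporal order
    # so the generated summary preserves the original narrative flow.
    ranked = sorted(range(len(shot_bounds)),
                    key=lambda i: shot_scores[i], reverse=True)
    return sorted(ranked[:top_k])
```

For example, with three shots of two frames each, the shot whose frames carry the highest combined spatial and temporal saliency is selected first, and the chosen shots are concatenated in their original temporal order to form the summary.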
