Visual Coherence Loss for Coherent and Visually Grounded Story Generation

Xudong Hong, Vera Demberg, Asad Sayeed, Qiankun Zheng, Bernt Schiele


Abstract
Local coherence is essential for long-form text generation models. We identify two important aspects of local coherence within the visual storytelling task: (1) the model needs to represent re-occurrences of characters within the image sequence in order to mention them correctly in the story; (2) character representations should enable us to find instances of the same character and to distinguish different characters. In this paper, we propose a loss function, inspired by a linguistic theory of coherence, for self-supervised learning of image sequence representations. We further propose combining features from an object detector and a face detector to construct stronger character features. To evaluate the input-output relevance that current reference-based metrics do not measure, we propose a character matching metric that checks whether models generate referring expressions correctly for the characters in the input image sequences. Experiments on a visual story generation dataset show that our proposed features and loss function are effective for generating more coherent and visually grounded stories.
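By way of illustration only: a character matching metric of the kind the abstract describes could, in spirit, compare the set of characters detected in the input image sequence against the set of characters the generated story refers to. The sketch below is a hedged assumption about what such a comparison might look like, not the paper's actual metric; the function name and both inputs are hypothetical.

```python
def character_match_f1(detected_chars: set, mentioned_chars: set) -> float:
    """Illustrative F1 between characters detected in the image
    sequence and characters mentioned in the generated story.
    Both inputs are sets of character identifiers (hypothetical:
    how identifiers are obtained is outside this sketch)."""
    if not detected_chars and not mentioned_chars:
        return 1.0  # vacuously perfect: nothing to detect, nothing mentioned
    overlap = detected_chars & mentioned_chars
    if not overlap:
        return 0.0
    precision = len(overlap) / len(mentioned_chars)  # mentions that are grounded
    recall = len(overlap) / len(detected_chars)      # detected characters covered
    return 2 * precision * recall / (precision + recall)
```

For example, if the images contain characters {"boy", "dog"} but the story mentions {"boy", "cat"}, precision and recall are both 0.5, giving an F1 of 0.5.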
Anthology ID:
2023.findings-acl.603
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9456–9470
URL:
https://aclanthology.org/2023.findings-acl.603
DOI:
10.18653/v1/2023.findings-acl.603
Cite (ACL):
Xudong Hong, Vera Demberg, Asad Sayeed, Qiankun Zheng, and Bernt Schiele. 2023. Visual Coherence Loss for Coherent and Visually Grounded Story Generation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9456–9470, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Visual Coherence Loss for Coherent and Visually Grounded Story Generation (Hong et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.603.pdf