Adversarial Text Generation via Sequence Contrast Discrimination

Ke Wang, Xiaojun Wan

Abstract
In this paper, we propose a sequence contrast loss driven text generation framework, which learns the difference between real texts and generated texts and exploits that difference to guide generation. Specifically, our discriminator contains a discriminative sequence generator instead of a binary classifier, and measures the ‘relative realism’ of generated texts against real texts by making use of both simultaneously. Moreover, our generator uses the discriminative sequences to improve itself directly, which not only replaces the gradient propagation from the discriminator to the generator, but also avoids the time-consuming sampling process used to estimate rewards in some previous methods. We conduct extensive experiments with various metrics, substantiating that our framework improves both training stability and the quality of generated texts.
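To make the abstract's description concrete, the sketch below illustrates one way a discriminator that emits a per-token score sequence (rather than a single binary label) could be contrasted against real texts, with the generator updating directly from those scores instead of through sampled reward estimates. This is a minimal illustration under assumptions of our own: the GRU architectures, the Gumbel-softmax relaxation, and the hinge-style contrast loss are placeholders chosen for demonstration and are not taken from the paper.

# Illustrative sketch only (PyTorch). Module names, the Gumbel-softmax
# relaxation, and the margin-based contrast loss are assumptions for
# demonstration; they do not reproduce the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, MAX_LEN, BATCH = 5000, 64, 128, 20, 16


class Generator(nn.Module):
    """Autoregressive GRU generator producing differentiable token samples."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample_soft(self, batch, length, tau=1.0):
        """Sample a sequence of straight-through one-hot vectors (differentiable)."""
        device = self.emb.weight.device
        tok = torch.zeros(batch, 1, dtype=torch.long, device=device)  # BOS id = 0
        soft_seq, hidden = [], None
        for _ in range(length):
            h, hidden = self.rnn(self.emb(tok), hidden)
            logits = self.out(h[:, -1])
            soft = F.gumbel_softmax(logits, tau=tau, hard=True)   # (B, VOCAB)
            soft_seq.append(soft)
            tok = soft.argmax(-1, keepdim=True)
        return torch.stack(soft_seq, dim=1)                        # (B, T, VOCAB)


class SeqDiscriminator(nn.Module):
    """Outputs a per-token 'realism' sequence instead of one binary score."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(VOCAB, EMB)     # accepts (soft) one-hot inputs
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.score = nn.Linear(HID, 1)

    def forward(self, onehot_seq):
        h, _ = self.rnn(self.proj(onehot_seq))
        return self.score(h).squeeze(-1)      # (B, T) per-token scores


def to_onehot(tokens):
    return F.one_hot(tokens, VOCAB).float()


gen, disc = Generator(), SeqDiscriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

real = torch.randint(0, VOCAB, (BATCH, MAX_LEN))   # placeholder for a real corpus batch

# Discriminator step: contrast per-token scores of real vs. generated texts.
fake = gen.sample_soft(BATCH, MAX_LEN).detach()
d_loss = F.relu(1.0 - disc(to_onehot(real)) + disc(fake)).mean()
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: raise its own per-token scores directly; gradients flow
# through the soft samples, so no reward-estimating rollout is needed.
fake = gen.sample_soft(BATCH, MAX_LEN)
g_loss = -disc(fake).mean()
g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")

In this toy setup the straight-through Gumbel-softmax stands in for whatever mechanism lets the generator learn from the discriminative sequences without REINFORCE-style sampling; the paper itself should be consulted for the actual sequence contrast loss.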
Anthology ID:
2020.findings-emnlp.5
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
47–53
URL:
https://aclanthology.org/2020.findings-emnlp.5
DOI:
10.18653/v1/2020.findings-emnlp.5
Cite (ACL):
Ke Wang and Xiaojun Wan. 2020. Adversarial Text Generation via Sequence Contrast Discrimination. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 47–53, Online. Association for Computational Linguistics.
Cite (Informal):
Adversarial Text Generation via Sequence Contrast Discrimination (Wang & Wan, Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.5.pdf
Optional supplementary material:
2020.findings-emnlp.5.OptionalSupplementaryMaterial.zip
Data
MS COCO