Abstract
Dialogue State Tracking (DST) is a core component of task-oriented dialogue systems. Although recent neural DST models have made great progress, they often ignore the fact that the current dialogue state is closely related to earlier dialogue states. In this paper, we introduce a slot embedding into the transformer so that it attends to the special tokens that appeared in earlier dialogue states. On this basis, we leverage a copy mechanism to predict the state from the dialogue utterances. Our model also adopts the architecture of a reading comprehension model to make full use of the current utterances. The experimental results verify the benefits of the slot embedding, and our model achieves significant improvements over the baselines on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets.
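The slot-embedding idea can be sketched as follows: alongside the usual token embeddings, the transformer input adds a second embedding that marks whether a token ever appeared in an earlier dialogue state, analogous to BERT's segment embeddings. This is a minimal illustration with hypothetical table names and sizes, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, num_slot_types, d_model = 100, 2, 8

# Hypothetical embedding tables (names and sizes are assumptions, not from the paper):
token_emb = rng.standard_normal((vocab_size, d_model))
# slot type 0 = ordinary token, 1 = token that appeared in an earlier dialogue state
slot_emb = rng.standard_normal((num_slot_types, d_model))

token_ids = np.array([5, 17, 42, 3])    # a toy utterance as token ids
slot_ids = np.array([0, 1, 1, 0])       # marks which tokens occurred in earlier states

# The transformer input is the element-wise sum of the two embeddings,
# so self-attention can distinguish previously-seen slot tokens.
x = token_emb[token_ids] + slot_emb[slot_ids]
print(x.shape)  # (4, 8)
```

In this sketch the slot embedding plays the same role as a segment or type embedding: it is summed into the input representation before the transformer layers, rather than concatenated.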
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.