Multi-Action Dialog Policy Learning from Logged User Feedback

Authors

  • Shuo Zhang, Xi'an Jiaotong University
  • Junzhou Zhao, Xi'an Jiaotong University
  • Pinghui Wang, Xi'an Jiaotong University
  • Tianxiang Wang, Xi'an Jiaotong University
  • Zi Liang, Xi'an Jiaotong University
  • Jing Tao, Xi'an Jiaotong University
  • Yi Huang, China Mobile Research
  • Junlan Feng, China Mobile Research

DOI:

https://doi.org/10.1609/aaai.v37i11.26636

Keywords:

SNLP: Conversational AI/Dialogue Systems, SNLP: Applications

Abstract

Multi-action dialog policy (MADP), which generates multiple atomic dialog actions per turn, has been widely applied in task-oriented dialog systems to provide expressive and efficient system responses. Existing MADP models usually imitate action combinations from labeled multi-action dialog samples. Due to data limitations, they generalize poorly to unseen dialog flows. While reinforcement learning-based methods have been proposed to incorporate service ratings from real users and user simulators as external supervision signals, they suffer from sparse and less credible dialog-level rewards. To cope with this problem, we explore improving multi-action dialog policy learning (MADPL) with explicit and implicit turn-level user feedback received for historical predictions (i.e., logged user feedback), which is cost-efficient to collect and faithful to real-world scenarios. The task is challenging because the logged user feedback provides only partial label feedback, limited to the particular historical dialog actions predicted by the agent. To fully exploit such feedback information, we propose BanditMatch, which addresses the task from a feedback-enhanced semi-supervised learning perspective with a hybrid learning objective combining semi-supervised learning (SSL) and bandit learning. BanditMatch integrates pseudo-labeling methods to better explore the action space by constructing full label feedback. Extensive experiments show that BanditMatch improves MADPL over state-of-the-art methods by generating more concise and informative responses. The source code and the appendix of this paper can be obtained from https://github.com/ShuoZhangXJTU/BanditMatch.
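The hybrid objective described above can be sketched in a simplified, illustrative form. The binary per-action feedback, sigmoid action scores, and the fixed confidence threshold below are our assumptions for exposition, not the paper's exact formulation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hybrid_loss(logits, logged_feedback, threshold=0.9):
    """Illustrative hybrid SSL + bandit objective over atomic dialog actions.

    logits          : model scores, one per atomic dialog action
    logged_feedback : {action_index: reward in {0, 1}} -- turn-level user
                      feedback observed only for the actions the logged
                      policy actually predicted (partial label feedback)
    threshold       : confidence above which an unobserved action receives
                      a pseudo-label, constructing full label feedback
    """
    probs = [sigmoid(z) for z in logits]
    bandit_term, ssl_term = 0.0, 0.0
    for i, p in enumerate(probs):
        if i in logged_feedback:
            # Bandit term: supervision exists only for logged actions.
            y = logged_feedback[i]
            bandit_term += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        else:
            # SSL term: pseudo-label confident unobserved actions.
            if p >= threshold:
                ssl_term += -math.log(p)       # pseudo-positive
            elif p <= 1.0 - threshold:
                ssl_term += -math.log(1 - p)   # pseudo-negative
    return bandit_term + ssl_term

# Example: feedback was logged for action 0 only; action 1 is confidently
# pseudo-labeled negative; action 2 is too uncertain and contributes nothing.
loss = hybrid_loss([2.0, -3.0, 0.1], {0: 1})
```

The key property this sketch captures is that the two terms partition the action space: logged actions are trained from real (bandit) feedback, while the remaining actions are explored via pseudo-labels.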

Published

2023-06-26

How to Cite

Zhang, S., Zhao, J., Wang, P., Wang, T., Liang, Z., Tao, J., Huang, Y., & Feng, J. (2023). Multi-Action Dialog Policy Learning from Logged User Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13976-13983. https://doi.org/10.1609/aaai.v37i11.26636

Section

AAAI Technical Track on Speech & Natural Language Processing