Joint Model-Based Attention for Spoken Language Understanding Task

Xin Liu, RuiHua Qi, Lin Shao
Copyright: © 2020 | Volume: 12 | Issue: 4 | Pages: 12
ISSN: 1941-6210 | EISSN: 1941-6229 | EISBN13: 9781799805823 | DOI: 10.4018/IJDCF.2020100103

MLA

Liu, Xin, et al. "Joint Model-Based Attention for Spoken Language Understanding Task." IJDCF, vol. 12, no. 4, 2020, pp. 32-43. http://doi.org/10.4018/IJDCF.2020100103

Abstract

Intent determination (ID) and slot filling (SF) are two critical steps in the spoken language understanding (SLU) task. Conventionally, most previous work has treated each subtask separately. To exploit the dependencies between the intent label and the slot sequence, and to handle both tasks simultaneously, this paper proposes a joint model (ABLCJ) trained with a united loss function. To use both past and future input features efficiently, a Bi-LSTM with contextual information learns the representation of each time step, which is shared by the two tasks. The model also uses sentence-level tag information learned by a CRF layer to predict the tag of each slot, while a submodule-based attention captures global features of a sentence for intent classification. The experimental results demonstrate that ABLCJ achieves competitive performance on Shared Task 4 of NLPCC 2018.
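The joint architecture the abstract describes — a shared Bi-LSTM encoder feeding a per-token slot head and an attention-pooled intent head, trained with one united loss — can be sketched roughly as below. This is a minimal PyTorch illustration, not the authors' ABLCJ implementation: all class and parameter names are hypothetical, and the paper's CRF slot layer is simplified to a per-token softmax here.

```python
# Hypothetical sketch of a joint SLU model in the spirit of ABLCJ.
# NOTE: the paper uses a CRF layer over slot tags; this sketch replaces
# it with independent per-token cross-entropy for brevity.
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, n_slots, n_intents):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Shared Bi-LSTM encoder: each step sees past and future context.
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # Slot-filling head: one tag prediction per time step.
        self.slot_head = nn.Linear(2 * hidden, n_slots)
        # Attention for intent: score each step, pool into a sentence vector.
        self.attn = nn.Linear(2 * hidden, 1)
        self.intent_head = nn.Linear(2 * hidden, n_intents)

    def forward(self, tokens):
        h, _ = self.encoder(self.emb(tokens))         # (B, T, 2H)
        slot_logits = self.slot_head(h)               # (B, T, n_slots)
        weights = torch.softmax(self.attn(h), dim=1)  # (B, T, 1)
        sent = (weights * h).sum(dim=1)               # (B, 2H)
        intent_logits = self.intent_head(sent)        # (B, n_intents)
        return slot_logits, intent_logits

def joint_loss(slot_logits, intent_logits, slot_tags, intent_label):
    # United loss: sum of the slot and intent losses, so one backward
    # pass updates the shared encoder for both tasks.
    ce = nn.functional.cross_entropy
    slot_loss = ce(slot_logits.reshape(-1, slot_logits.size(-1)),
                   slot_tags.reshape(-1))
    return slot_loss + ce(intent_logits, intent_label)
```

Sharing the encoder is what lets the intent label and the slot sequence inform each other: gradients from both heads flow into the same Bi-LSTM representations.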