
Multi-granularity Prompts for Topic Shift Detection in Dialogue

  • Conference paper
  • First Online:
Advanced Intelligent Computing Technology and Applications (ICIC 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14089)


Abstract

The goal of dialogue topic shift detection is to identify whether the current topic in a conversation has changed or needs to change. Previous work detected topic shifts by using pre-trained models to encode utterances, but failed to exploit the various levels of topic granularity in the dialogue or to fully understand dialogue content. To address these issues, we take a prompt-based approach that extracts topic information from dialogues at multiple granularities, i.e., label, turn, and topic. Experimental results on our annotated Chinese Natural Topic Dialogue dataset CNTD and the publicly available English TIAGE dataset show that the proposed model outperforms the baselines. Further experiments show that the information extracted at different levels of granularity effectively helps the model comprehend the conversation topics.
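The idea of combining label-, turn-, and topic-level information into a single prompt can be illustrated with a minimal sketch. The template wording, the function name `build_prompt`, and the use of pre-extracted topic keywords are illustrative assumptions, not the authors' exact prompts or pipeline.

```python
# Sketch: assembling a multi-granularity prompt for topic shift detection.
# Turn granularity  -> the utterances themselves, joined in order.
# Topic granularity -> keywords summarizing the preceding topic.
# Label granularity -> a masked slot the model fills with "shift"/"no shift".

def build_prompt(history, current, topic_keywords):
    """Combine turn-level context, topic-level keywords, and a
    label-level cloze slot into a single prompt string."""
    turn_part = " [SEP] ".join(history + [current])   # turn granularity
    topic_part = ", ".join(topic_keywords)            # topic granularity
    # Label granularity: a cloze slot for the pre-trained model to fill.
    return (f"Dialogue: {turn_part} "
            f"Topic keywords: {topic_part}. "
            f"The topic of the last utterance has <mask> shifted.")

prompt = build_prompt(
    ["I love hiking.", "Me too, mountains are great."],
    "By the way, have you seen the new phone?",
    ["hiking", "mountains"],
)
print(prompt)
```

In practice such a prompt would be fed to an encoder-decoder model (e.g. a T5-style model, as is common for prompt-based cloze tasks), which predicts the token filling the masked slot; the sketch shows only the input construction.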





Acknowledgements

The authors would like to thank the three anonymous reviewers for their comments on this paper. This research was supported by the National Natural Science Foundation of China (Nos. 62276177, and 61836007), and Project Funded by the Priority Aca-demic Program Development of Jiangsu Higher Education Institutions (PAPD).

Author information


Correspondence to Peifeng Li.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lin, J., Fan, Y., Chu, X., Li, P., Zhu, Q. (2023). Multi-granularity Prompts for Topic Shift Detection in Dialogue. In: Huang, D.S., Premaratne, P., Jin, B., Qu, B., Jo, K.H., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol 14089. Springer, Singapore. https://doi.org/10.1007/978-981-99-4752-2_42


  • DOI: https://doi.org/10.1007/978-981-99-4752-2_42

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4751-5

  • Online ISBN: 978-981-99-4752-2

  • eBook Packages: Computer Science (R0)
