
OTE: An Optimized Chinese Short Text Matching Algorithm Based on External Knowledge

  • Conference paper

Knowledge Science, Engineering and Management (KSEM 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13368)


Abstract

Short text matching is a key problem in natural language processing (NLP), with applications in journalism, the military, and other fields. In this paper, we propose OTE, an optimized Chinese short text matching algorithm based on external knowledge. OTE effectively resolves semantic ambiguity in Chinese text by integrating the HowNet external knowledge base. We use SoftLexicon to optimize the word lattice graph so that it carries more comprehensive multi-granularity information, and we combine the LaserTagger model with EDA for data augmentation. Experimental results show that OTE improves accuracy by an average of 1.5% over existing models on three datasets.
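Both building blocks named above are published techniques, so a rough illustration is possible even without the paper's code. Below is a minimal sketch of the SoftLexicon idea from reference 5, which OTE uses to enrich the word lattice: for each character, the lexicon words that match the sentence are grouped by the character's role in the word (Begin, Middle, End, Single). The function name, the toy lexicon, and the 4-character matching window are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of SoftLexicon-style BMES set construction (Ma et al., ref. 5).
# Not the OTE authors' code; the lexicon and all names are illustrative.
from collections import defaultdict

def soft_lexicon_sets(sentence, lexicon, max_word_len=4):
    """For each character index, collect matched lexicon words into
    B/M/E/S sets according to the character's position in the word."""
    sets = [defaultdict(set) for _ in sentence]
    n = len(sentence)
    for start in range(n):
        # Only scan windows up to max_word_len characters (an assumption).
        for end in range(start + 1, min(n, start + max_word_len) + 1):
            word = sentence[start:end]
            if word not in lexicon:
                continue
            if len(word) == 1:
                sets[start]["S"].add(word)          # single-character word
            else:
                sets[start]["B"].add(word)          # word begins here
                sets[end - 1]["E"].add(word)        # word ends here
                for mid in range(start + 1, end - 1):
                    sets[mid]["M"].add(word)        # character inside the word
    return sets

# Toy usage on a segmentation-ambiguous phrase.
lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
for i, s in enumerate(soft_lexicon_sets("南京市长江大桥", lexicon)):
    print(i, dict(s))
```

In SoftLexicon proper, each of the four sets is then compressed into a single frequency-weighted embedding and concatenated to the character representation, which lets a character-level model see every matched word without a full lattice encoder. Similarly, here is a compact sketch of two of the four EDA operations from reference 9, random swap and random deletion; synonym replacement and random insertion additionally need a synonym source (such as HowNet) and are omitted.

```python
# Hedged sketch of two EDA operations (Wei & Zou, ref. 9); illustrative only.
import random

def random_swap(tokens, n_swaps=1):
    """Swap two randomly chosen token positions, n_swaps times."""
    out = list(tokens)
    if len(out) < 2:
        return out
    for _ in range(n_swaps):
        i, j = random.sample(range(len(out)), 2)  # two distinct positions
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(tokens, p=0.1):
    """Drop each token independently with probability p."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(list(tokens))]  # never return empty
```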

Supported by the Ministry of Science and Technology of China (No. 2020AAA0105100).


References

  1. Tan, M., Dos Santos, C., Xiang, B., Zhou, B.: Improved representation learning for question answer matching. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 464–473 (2016)

  2. Chen, H.: Personalized recommendation system of e-commerce based on big data analysis. J. Interdisc. Math. 21, 1243–1247 (2018)

  3. Kilimci, Z., Omurca, S.: Extended feature spaces based classifier ensembles for sentiment analysis of short texts. Inf. Tech. Control. 47(3), 457–470 (2018)

  4. Chen, L., et al.: Neural graph matching networks for Chinese short text matching. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6152–6158 (2020)

  5. Ma, R., Peng, M., Zhang, Q., Huang, X.: Simplify the usage of lexicon in Chinese NER. arXiv preprint arXiv:1908.05969 (2019)

  6. Zhang, Y., Wang, Y., Yang, J.: Lattice LSTM for Chinese sentence representation. IEEE/ACM Trans. Audio Speech Lang. Process. 28, 1506–1519 (2020)

  7. Xu, J., Liu, J., Zhang, L., Li, Z., Chen, H.: Improve Chinese word embeddings by exploiting internal structure. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1041–1050 (2016)

  8. Dong, Z., Dong, Q.: HowNet: a hybrid language and knowledge resource. In: Proceedings of the International Conference on Natural Language Processing and Knowledge Engineering, pp. 820–824. IEEE (2003)

  9. Wei, J., Zou, K.: EDA: easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196 (2019)

  10. Malmi, E., Krause, S., Rothe, S., Mirylenka, D., Severyn, A.: Encode, tag, realize: high-precision text editing. arXiv preprint arXiv:1909.01187 (2019)

  11. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding (2018)

  12. Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)

  13. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)

  14. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: a lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942 (2019)

  15. Zhang, Z., Han, X., Liu, Z., Jiang, X., Sun, M., Liu, Q.: ERNIE: enhanced language representation with informative entities. arXiv preprint arXiv:1905.07129 (2019)

  16. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems, vol. 32 (2019)

  17. Zhang, X., Zhao, J., LeCun, Y.: Character-level convolutional networks for text classification. Adv. Neural Inf. Process. Syst. 28, 649–657 (2015)

  18. Sennrich, R., Haddow, B., Birch, A.: Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709 (2015)

  19. Xie, Q., Dai, Z., Hovy, E., Luong, M.-T., Le, Q.V.: Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848 (2019)

  20. Kobayashi, S.: Contextual augmentation: data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201 (2018)

  21. Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., Xing, E.P.: Toward controlled generation of text. In: International Conference on Machine Learning, pp. 1587–1596. PMLR (2017)

  22. Zhang, Y., Yang, J.: Chinese NER using lattice LSTM. arXiv preprint arXiv:1805.02023 (2018)

  23. Lai, Y., Feng, Y., Yu, X., Wang, Z., Xu, K., Zhao, D.: Lattice CNNs for matching based Chinese question answering. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 6634–6641 (2019)

  24. Lyu, B., Chen, L., Zhu, S., Yu, K.: LET: linguistic knowledge enhanced graph transformer for Chinese short text matching. arXiv preprint arXiv:2102.12671 (2021)

  25. Hirschberg, D.S.: Algorithms for the longest common subsequence problem. J. ACM 24, 664–675 (1977)

  26. Xu, L., Zhang, X., Dong, Q.: CLUECorpus2020: a large-scale Chinese corpus for pre-training language model. arXiv preprint arXiv:2003.01355 (2020)

  27. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Bengio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)

  28. Shen, T., Zhou, T., Long, G., Jiang, J., Pan, S., Zhang, C.: DiSAN: directional self-attention network for RNN/CNN-free language understanding. In: Proceedings of the AAAI Conference on Artificial Intelligence (2018)

  29. Niu, Y., Xie, R., Liu, Z., Sun, M.: Improved word representation learning with sememes. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2049–2058 (2017)

  30. Caruana, R.: Learning many related tasks at the same time with backpropagation. In: Advances in Neural Information Processing Systems, pp. 657–664 (1995)

  31. Wang, Z., Hamza, W., Florian, R.: Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814 (2017)

  32. Liu, X., Chen, Q., Deng, C., Zeng, H., Chen, J., Li, D., Tang, B.: LCQMC: a large-scale Chinese question matching corpus. In: Proceedings of the 27th International Conference on Computational Linguistics, pp. 1952–1962 (2018)

  33. Chen, J., Chen, Q., Liu, X., Yang, H., Lu, D., Tang, B.: The BQ corpus: a large-scale domain-specific Chinese corpus for sentence semantic equivalence identification. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4946–4951 (2018)

  34. Mueller, J., Thyagarajan, A.: Siamese recurrent architectures for learning sentence similarity. In: Proceedings of the AAAI Conference on Artificial Intelligence (2016)

  35. Cui, Y., et al.: Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101 (2019)


Author information

Correspondence to Zhaoyun Ding.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ma, H., Ding, Z., Li, Z., Guo, H. (2022). OTE: An Optimized Chinese Short Text Matching Algorithm Based on External Knowledge. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science (LNAI), vol 13368. Springer, Cham. https://doi.org/10.1007/978-3-031-10983-6_2


  • DOI: https://doi.org/10.1007/978-3-031-10983-6_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10982-9

  • Online ISBN: 978-3-031-10983-6

  • eBook Packages: Computer Science, Computer Science (R0)
