Integrating curriculum learning with meta-learning for general rhetoric identification

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Rhetoric is abundant and universal across human languages. In this paper, we propose a novel curriculum learning integrated with meta-learning (CLML) model to address the task of general rhetoric identification. Specifically, we first leverage inter-category similarities to construct a dataset with curriculum characteristics, which facilitates a more natural easy-to-difficult learning process. Then, imitating human cognitive thinking, we use the query set in meta-learning to guide an induction network toward accurate class-level representations; these representations are further improved by incorporating external class-label knowledge into TapNet to construct a mapping function. Extensive experiments demonstrate that our model consistently outperforms existing state-of-the-art models on four datasets.
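
To make the curriculum-construction step concrete, the sketch below is a minimal illustration, not the authors' implementation: it assumes each rhetorical category's sentences have already been encoded into vectors (e.g., by a pretrained sentence encoder), scores a category's difficulty by how similar its centroid is to the other categories' centroids, and splits the categories into easy-to-difficult training stages. The function names, the cosine-similarity difficulty measure, and the stage count are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of building a curriculum from
# inter-category similarities: a class whose centroid lies close to other
# classes is treated as harder, and training stages are ordered easy-first.
import numpy as np

def class_centroids(embeddings: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Mean-pool the sentence embeddings of each rhetorical category."""
    return {c: e.mean(axis=0) for c, e in embeddings.items()}

def difficulty(centroids: dict[str, np.ndarray]) -> dict[str, float]:
    """Score each class by its maximum cosine similarity to any other class;
    high similarity to a neighbour means the class is harder to separate."""
    names = list(centroids)
    mat = np.stack([centroids[n] for n in names])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    sims = mat @ mat.T
    np.fill_diagonal(sims, -np.inf)  # ignore self-similarity
    return {n: float(sims[i].max()) for i, n in enumerate(names)}

def curriculum(embeddings: dict[str, np.ndarray], n_stages: int = 3) -> list[list[str]]:
    """Split classes into easy-to-difficult stages for staged training."""
    scores = difficulty(class_centroids(embeddings))
    ordered = sorted(scores, key=scores.get)  # easiest (least confusable) first
    return [list(chunk) for chunk in np.array_split(ordered, n_stages)]

# Toy usage with random vectors standing in for encoded sentences.
rng = np.random.default_rng(0)
emb = {c: rng.normal(size=(20, 64)) for c in ["simile", "metaphor", "parallelism", "literal"]}
print(curriculum(emb))
```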

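The meta-learning classification step can be schematized in the same hedged spirit. In the sketch below, mean pooling stands in for the paper's learned induction network, a simple interpolation with class-label embeddings stands in for TapNet's label-enhanced mapping function, and the mixing weight `alpha` is a hypothetical parameter, not a value from the paper.

```python
# A schematic sketch of episode classification under stated assumptions:
# support examples induce class prototypes, external label embeddings refine
# them, and queries are assigned by cosine similarity in the shared space.
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def induce_prototypes(support: np.ndarray) -> np.ndarray:
    """support: (n_classes, k_shot, dim) -> (n_classes, dim) class vectors.
    Mean induction stands in for the paper's learned induction network."""
    return support.mean(axis=1)

def refine_with_labels(protos: np.ndarray, label_emb: np.ndarray,
                       alpha: float = 0.5) -> np.ndarray:
    """Interpolate prototypes with class-label embeddings; alpha is a
    hypothetical mixing weight, not a value from the paper."""
    return normalize(alpha * normalize(protos) + (1 - alpha) * normalize(label_emb))

def classify(queries: np.ndarray, refs: np.ndarray) -> np.ndarray:
    """Assign each query to the nearest refined class reference (cosine)."""
    return (normalize(queries) @ refs.T).argmax(axis=1)

# Toy 3-way 5-shot episode with random features.
rng = np.random.default_rng(1)
support = rng.normal(size=(3, 5, 32))
labels = rng.normal(size=(3, 32))   # e.g., encoded class-name descriptions
queries = rng.normal(size=(8, 32))
print(classify(queries, refine_with_labels(induce_prototypes(support), labels)))
```

In a full implementation both the induction and the projection would be learned end-to-end over training episodes; the sketch only shows the data flow from support set and label knowledge to query predictions.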

Data availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. IFLYTEK: a Chinese text classification dataset encompassing multiple categories, http://challenge.xfyun.cn.


Acknowledgements

The authors would like to thank all anonymous reviewers for their valuable comments and suggestions, which have significantly improved the quality and presentation of this paper. The work described in this paper is supported by the National Natural Science Foundation of China (62076158, 62106130, 62072294, 62306204), the Natural Science Foundation of Shanxi Province, China (20210302124084, 202103021223267), the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2021L284), and the CCF-Zhipu AI Large Model Foundation of China (CCF-Zhipu202310).

Author information


Corresponding author

Correspondence to Suge Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, D., Li, Y., Wang, S. et al. Integrating curriculum learning with meta-learning for general rhetoric identification. Int. J. Mach. Learn. & Cyber. (2024). https://doi.org/10.1007/s13042-023-02038-7
