DOI: 10.1145/3507548.3507585
Research article, CSAI Conference Proceedings

Script event prediction based on pre-trained model with tail event enhancement

Published: 9 March 2022

ABSTRACT

Script event prediction is a challenging task whose goal is to predict the subsequent event given a sequence of observed events. Since events are described in text, pre-trained language models have been applied to event representation. However, embeddings produced by pre-trained models are sensitive to the short-text format of events, and existing work does not handle this well. In addition, previous models focus on semantic similarity but ignore the effect of emergencies: a turning event at the tail of the event chain can easily change the direction of what follows. This paper proposes a new preprocessing method, consisting of cleaning, alignment, and connection, that yields richer event representations. On this basis, we concatenate the embedding of the CLS token with the event-sequence embedding to integrate the semantic and temporal features of the event chain. To handle event turning, we propose a tail event enhancement module, which adds the transition probabilities between tail events and candidate events to the prediction layer, so that prediction does not rely on semantic features alone. Extensive comparative and ablation experiments confirm the superiority of our model over the baselines.
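The abstract's tail event enhancement idea can be illustrated with a minimal sketch: blend a semantic similarity score (from the chain embedding, e.g. the CLS vector concatenated with pooled event-sequence features) with a transition probability from the tail event to each candidate. This is not the paper's actual model; the function name, weighting scheme, and inputs are assumptions for illustration only.

```python
import numpy as np

def score_candidates(chain_emb, cand_embs, trans_probs, alpha=0.5):
    """Illustrative scoring: combine semantic similarity with
    tail-event transition probability (a sketch, not the paper's model).

    chain_emb: (d,) embedding of the observed event chain.
    cand_embs: (k, d) embeddings of the k candidate events.
    trans_probs: (k,) transition probabilities from the chain's tail
        event to each candidate (assumed precomputed, e.g. from an
        event graph).
    alpha: weight balancing the semantic and transition terms.
    """
    # Cosine similarity between the chain and each candidate.
    sims = cand_embs @ chain_emb / (
        np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(chain_emb) + 1e-8
    )
    # Blend semantic similarity with the tail-event transition signal
    # so the prediction does not depend on semantics alone.
    return alpha * sims + (1.0 - alpha) * trans_probs
```

With a high transition probability, a semantically weaker candidate can outrank the semantically closest one, which is the intended effect when a turning event at the tail redirects the chain.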


Published in

CSAI '21: Proceedings of the 2021 5th International Conference on Computer Science and Artificial Intelligence
December 2021, 437 pages
ISBN: 9781450384155
DOI: 10.1145/3507548

Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

