ABSTRACT
Sarcastic comments are often used to express dissatisfaction with products or events. Mining their topics and targets provides clues to the underlying reasons behind the sarcasm, which helps in understanding user demands and improving product services. Existing research mainly focuses on mining a single facet of sarcasm, such as the topic or the target, ignoring the complex interrelations between them. To address these challenges, this paper proposes a Heterogeneous Information Network fused with Context-Aware Contrastive Learning (HINCCL) method. The approach models multi-view features, including syntactic style, domain knowledge, and textual semantics, through a hierarchical attention aggregation mechanism. Furthermore, a context-aware negative contrastive training strategy is designed to learn differentiated representations for different topic-target pairs. The effectiveness of the proposed method is validated on a dataset constructed in the digital domain.
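The context-aware negative contrastive training strategy described above is in the family of InfoNCE-style objectives, which pull an anchor toward a matching (positive) example and push it away from negatives. The following is a minimal sketch, not the paper's implementation: it assumes each topic-target pair has already been encoded as a dense vector, and the cosine similarity and `temperature` value are illustrative choices.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss for one anchor topic-target pair.

    anchor:    (d,)   embedding of the anchor topic-target pair
    positive:  (d,)   embedding of a semantically matching pair
    negatives: (n, d) embeddings of (context-aware) negative pairs
    Returns the scalar loss: -log softmax probability of the positive.
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_sim = cos(anchor, positive) / temperature
    neg_sims = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Numerically stable log-sum-exp over positive + negative logits.
    log_denom = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
    return -(pos_sim - log_denom)
```

Minimizing this loss increases the anchor's similarity to its positive relative to the negatives; the "context-aware" part of the paper's strategy lies in how negatives are selected, which this sketch leaves to the caller.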
A Heterogeneous Network fused with Context-aware Contrastive Learning for Sarcasm Topic-Target Pair Identification