Self-Supervised Graph Neural Networks via Diverse and Interactive Message Passing

Authors

  • Liang Yang, Hebei University of Technology; Chinese Academy of Sciences
  • Cheng Chen, Hebei University of Technology
  • Weixun Li, Hebei University of Technology
  • Bingxin Niu, Hebei University of Technology
  • Junhua Gu, Hebei University of Technology
  • Chuan Wang, Chinese Academy of Sciences
  • Dongxiao He, Tianjin University
  • Yuanfang Guo, Beihang University
  • Xiaochun Cao, Sun Yat-sen University

DOI:

https://doi.org/10.1609/aaai.v36i4.20353

Keywords:

Data Mining & Knowledge Management (DMKM)

Abstract

When Graph Neural Networks (GNNs) are interpreted as message passing from the spatial perspective, their success is attributed to Laplacian smoothing. However, this smoothing also leads to a serious over-smoothing issue when many layers are stacked. Recently, much effort has been devoted to overcoming this issue in semi-supervised learning. Unfortunately, it is more severe in unsupervised node representation learning due to the lack of supervision information. Thus, most unsupervised or self-supervised GNNs employ a one-layer GCN as the encoder. Essentially, the over-smoothing issue is caused by the over-simplification of existing message passing, which possesses two intrinsic limits: blind messages and uniform passing. In this paper, a novel Diverse and Interactive Message Passing (DIMP) scheme is proposed for self-supervised learning by overcoming these limits. First, to prevent the message from being blind and make it interactive between two connected nodes, the message is determined by both connected nodes instead of the attributes of a single node. Second, to prevent the passing from being uniform and make it diverse across attribute channels, different propagation weights are assigned to different elements of the message. Accordingly, a natural implementation of the message in DIMP is the element-wise product of the representations of the two connected nodes. From the perspective of numerical optimization, DIMP is equivalent to performing overlapping community detection via expectation-maximization (EM). Both the objective function of the community detection and the convergence of the EM algorithm guarantee that DIMP avoids the over-smoothing issue. Extensive evaluations on node-level and graph-level tasks demonstrate the superiority of DIMP in improving performance and overcoming the over-smoothing issue.
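
To make the core mechanism concrete, here is a minimal sketch of one message-passing step built around the abstract's two ideas: an interactive message formed as the element-wise product of both endpoints' representations, and diverse per-channel propagation weights. This is not the authors' implementation; the function name dimp_layer, the edge-list input, the mean aggregation, and the weight vector w are assumptions made for illustration, and the paper's actual EM-based formulation is more involved.

```python
import numpy as np

def dimp_layer(H, edges, w):
    """One diverse-and-interactive message-passing step (illustrative sketch).

    H     : (n, d) array of node representations
    edges : iterable of undirected (i, j) index pairs
    w     : (d,) per-channel propagation weights (the "diverse" part)
    """
    out = np.zeros_like(H)
    deg = np.zeros(H.shape[0])
    for i, j in edges:
        # Interactive message: the element-wise product involves BOTH
        # endpoints' representations, so neither node sends a "blind" message.
        msg = H[i] * H[j]
        # Diverse passing: each attribute channel gets its own weight,
        # instead of one uniform scalar for the whole message.
        out[i] += w * msg
        out[j] += w * msg
        deg[i] += 1
        deg[j] += 1
    deg = np.maximum(deg, 1.0)   # guard isolated nodes
    return out / deg[:, None]    # mean-aggregate incoming messages

# Toy usage: 4 nodes with 3 attribute channels on a path graph.
rng = np.random.default_rng(0)
H = rng.random((4, 3))
w = rng.random(3)
print(dimp_layer(H, [(0, 1), (1, 2), (2, 3)], w))
```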

Published

2022-06-28

How to Cite

Yang, L., Chen, C., Li, W., Niu, B., Gu, J., Wang, C., He, D., Guo, Y., & Cao, X. (2022). Self-Supervised Graph Neural Networks via Diverse and Interactive Message Passing. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 4327-4336. https://doi.org/10.1609/aaai.v36i4.20353

Section

AAAI Technical Track on Data Mining and Knowledge Management