DOI: 10.1145/3394486.3403150
Graph Structural-topic Neural Network

Published: 20 August 2020

ABSTRACT

Graph Convolutional Networks (GCNs) have achieved tremendous success by effectively gathering local features for nodes. However, GCNs commonly focus more on node features and less on graph structures within the neighborhood, especially higher-order structural patterns. Yet such local structural patterns have been shown to be indicative of node properties in numerous fields. Moreover, it is not single patterns but the distribution over all these patterns that matters, because networks are complex and the neighborhood of each node consists of a mixture of various nodes and structural patterns. Correspondingly, in this paper we propose the Graph Structural-topic Neural Network, abbreviated GraphSTONE, a GCN model that utilizes topic models of graphs, such that the structural topics capture indicative graph structures broadly from a probabilistic aspect rather than merely a few structures. Specifically, we build topic models upon graphs using anonymous walks and Graph Anchor LDA, an LDA variant that selects significant structural patterns first, so as to reduce complexity and generate structural topics efficiently. In addition, we design multi-view GCNs to unify node features and structural topic features, and utilize structural topics to guide the aggregation. We evaluate our model through both quantitative and qualitative experiments, where it exhibits promising performance, high efficiency, and clear interpretability.
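The structural-topic pipeline described above starts from anonymous walks: a random walk is mapped to the sequence of first-occurrence indices of its nodes, so walks with the same structure but different node identities collapse into the same pattern (e.g., the walk a→b→a→c becomes 0-1-0-2). As a rough sketch, not the paper's implementation (the function names and the plain uniform-random-walk sampler here are assumptions), a node's "document" of anonymous-walk pattern counts could be built like this:

```python
import random
from collections import defaultdict


def anonymize(walk):
    """Replace each node in a walk by the index of its first
    occurrence, yielding the walk's anonymous pattern."""
    first_seen = {}
    pattern = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        pattern.append(first_seen[node])
    return tuple(pattern)


def random_walk(adj, start, length, rng):
    """Sample a uniform random walk of `length` nodes from `start`.
    `adj` maps each node to a list of its neighbors."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk


def anonymous_walk_counts(adj, node, length, num_walks, seed=0):
    """Empirical counts of anonymous-walk patterns starting at
    `node` -- the node's 'document' over structural 'words',
    on which a topic model can then be fitted."""
    rng = random.Random(seed)
    counts = defaultdict(int)
    for _ in range(num_walks):
        counts[anonymize(random_walk(adj, node, length, rng))] += 1
    return dict(counts)
```

In GraphSTONE's framing, these pattern counts play the role of per-document word counts, on which Graph Anchor LDA then extracts structural topics after first selecting anchor patterns.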

Supplemental Material: 3394486.3403150.mp4 (mp4, 159.4 MB)


Published in:
KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
August 2020, 3664 pages
ISBN: 9781450379984
DOI: 10.1145/3394486
    Copyright © 2020 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article

Overall acceptance rate: 1,133 of 8,635 submissions, 13%
