UnboundAttack: Generating Unbounded Adversarial Attacks to Graph Neural Networks

  • Conference paper
Complex Networks & Their Applications XII (COMPLEX NETWORKS 2023)

Part of the book series: Studies in Computational Intelligence (SCI, volume 1141)


Abstract

Graph Neural Networks (GNNs) have demonstrated state-of-the-art performance in various graph representation learning tasks. Recently, however, studies have revealed their vulnerability to adversarial attacks. Available attack strategies perturb existing graphs within a specific budget, and recently proposed defense mechanisms successfully guard against this type of attack. This paper proposes a new perspective founded on unrestricted adversarial examples: instead of perturbing existing data points, we produce adversarial attacks by generating entirely new ones. We introduce a framework, called UnboundAttack, which leverages advances in graph generation to produce graphs that preserve the semantics of the available training data while misleading the targeted classifier. Importantly, our method assumes no knowledge of the underlying architecture. Finally, we validate the effectiveness of the proposed method in a realistic setting involving molecular graphs.
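To make the setup concrete, the sketch below shows the kind of generator-side training step such a framework could use: a graph generator is optimized against a joint objective that rewards both realism (a WGAN-style critic over generated adjacency matrices, with a Gumbel-Softmax relaxation to keep the discrete edges differentiable) and misclassification by the victim model. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: every module name (GraphGenerator, Critic, ToyGNN), size, and loss weight is an assumption, the critic's own updates on real graphs are omitted, and gradients flow through a stand-in classifier for simplicity, whereas the architecture-agnostic setting described above would instead rely on a surrogate of the victim.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_NODES = 9   # assumed maximum graph size (e.g., small molecules)
    Z_DIM = 32    # assumed latent dimension

    class GraphGenerator(nn.Module):
        """Hypothetical generator: latent vector -> binary adjacency matrix."""
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(Z_DIM, 128), nn.ReLU(),
                nn.Linear(128, N_NODES * N_NODES * 2),  # two logits per edge: absent/present
            )

        def forward(self, z, tau=1.0):
            logits = self.mlp(z).view(-1, N_NODES, N_NODES, 2)
            # Gumbel-Softmax makes the discrete edge choices differentiable.
            edges = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]
            upper = torch.triu(edges, diagonal=1)   # keep the upper triangle, no self-loops
            return upper + upper.transpose(1, 2)    # mirror into a symmetric adjacency

    class Critic(nn.Module):
        """WGAN-style critic scoring how realistic a generated graph looks."""
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(N_NODES * N_NODES, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, adj):
            return self.mlp(adj.flatten(1))

    class ToyGNN(nn.Module):
        """Stand-in for the frozen victim classifier; any graph classifier would do."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.lin1 = nn.Linear(1, 16)
            self.lin2 = nn.Linear(16, n_classes)

        def forward(self, adj):
            x = torch.ones(adj.size(0), N_NODES, 1)  # constant node features for simplicity
            h = torch.relu(adj @ self.lin1(x))       # one message-passing step
            return self.lin2(h.mean(dim=1))          # mean readout -> class logits

    gen, critic, victim = GraphGenerator(), Critic(), ToyGNN()
    victim.eval()
    for p in victim.parameters():                    # the victim is never updated
        p.requires_grad_(False)

    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)

    z = torch.randn(64, Z_DIM)                       # fresh samples; no input graph is perturbed
    adj = gen(z)
    realism_loss = -critic(adj).mean()               # look like the training distribution
    wrong_label = torch.ones(64, dtype=torch.long)   # push predictions toward a chosen wrong class
    attack_loss = F.cross_entropy(victim(adj), wrong_label)
    loss = realism_loss + attack_loss                # equal weighting is an assumption
    opt_g.zero_grad(); loss.backward(); opt_g.step()

In a full attack, one would alternate such generator steps with standard critic updates on real and generated graphs, then sample adversarial graphs directly from the trained generator rather than perturbing any existing input.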



Acknowledgements

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Author information

Corresponding author

Correspondence to Sofiane Ennadir.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ennadir, S., Alkhatib, A., Nikolentzos, G., Vazirgiannis, M., Boström, H. (2024). UnboundAttack: Generating Unbounded Adversarial Attacks to Graph Neural Networks. In: Cherifi, H., Rocha, L.M., Cherifi, C., Donduran, M. (eds) Complex Networks & Their Applications XII. COMPLEX NETWORKS 2023. Studies in Computational Intelligence, vol 1141. Springer, Cham. https://doi.org/10.1007/978-3-031-53468-3_9

  • DOI: https://doi.org/10.1007/978-3-031-53468-3_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53467-6

  • Online ISBN: 978-3-031-53468-3

  • eBook Packages: Engineering (R0)
