Abstract
Graph Neural Networks (GNNs) have demonstrated state-of-the-art performance on various graph representation learning tasks. Recently, studies have revealed their vulnerability to adversarial attacks. Available attack strategies apply perturbations to existing graphs within a specific budget, and proposed defense mechanisms already guard successfully against this type of attack. This paper proposes a new perspective founded on unrestricted adversarial examples: rather than perturbing existing data points, we produce adversarial attacks by generating completely new ones. We introduce UnboundAttack, a framework that leverages recent advances in graph generation to produce graphs that preserve the semantics of the available training data while misleading the targeted classifier. Importantly, our method does not assume any knowledge of the underlying architecture. Finally, we validate the effectiveness of the proposed method in a realistic setting involving molecular graphs.
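To make the generate-rather-than-perturb recipe concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a generator is trained so that its output graphs are misclassified by the victim model. Everything here is an illustrative assumption rather than the authors' implementation: the names GraphGenerator and SurrogateGNN, all hyperparameters, the sigmoid relaxation that keeps the adjacency matrix differentiable, and the use of a trainable surrogate in place of the unknown black-box victim; the realism (GAN critic) term of the full framework is only indicated by a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_NODES, N_FEATS = 9, 5  # e.g. heavy-atom count / atom-type channels for small molecules


class GraphGenerator(nn.Module):
    """Maps a noise vector to a soft adjacency matrix and node features."""

    def __init__(self, z_dim=32, n=N_NODES, f=N_FEATS):
        super().__init__()
        self.n, self.f = n, f
        self.mlp = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, n * n + n * f),
        )

    def forward(self, z):
        out = self.mlp(z)
        a = out[:, : self.n * self.n].view(-1, self.n, self.n)
        a = torch.sigmoid((a + a.transpose(1, 2)) / 2)  # soft, symmetric adjacency
        x = out[:, self.n * self.n:].view(-1, self.n, self.f)
        return a, x


class SurrogateGNN(nn.Module):
    """Tiny dense message-passing classifier standing in for the black-box victim."""

    def __init__(self, f=N_FEATS, h=64, n_classes=2):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(f, h), nn.Linear(h, h)
        self.readout = nn.Linear(h, n_classes)

    def forward(self, adj, x):
        h = F.relu(self.lin1(adj @ x + x))  # one propagation step: A x + x
        h = F.relu(self.lin2(adj @ h + h))  # second propagation step
        return self.readout(h.mean(dim=1))  # mean-pool graph readout


gen, victim = GraphGenerator(), SurrogateGNN()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)  # only the generator is trained
target = torch.zeros(16, dtype=torch.long)  # class the attacker wants to force

for step in range(200):
    z = torch.randn(16, 32)
    adj, x = gen(z)
    attack_loss = F.cross_entropy(victim(adj, x), target)
    # The full framework would add a GAN critic term here so that generated
    # graphs stay close to the training distribution; this sketch omits it.
    opt.zero_grad()
    attack_loss.backward()
    opt.step()
```

Note that only the generator's parameters are updated: gradients flow through the fixed surrogate classifier, which mirrors the black-box setting in which the true victim's architecture is unknown.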
Acknowledgements
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ennadir, S., Alkhatib, A., Nikolentzos, G., Vazirgiannis, M., Boström, H. (2024). UnboundAttack: Generating Unbounded Adversarial Attacks to Graph Neural Networks. In: Cherifi, H., Rocha, L.M., Cherifi, C., Donduran, M. (eds) Complex Networks & Their Applications XII. COMPLEX NETWORKS 2023. Studies in Computational Intelligence, vol 1141. Springer, Cham. https://doi.org/10.1007/978-3-031-53468-3_9
DOI: https://doi.org/10.1007/978-3-031-53468-3_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-53467-6
Online ISBN: 978-3-031-53468-3