Reinforcement Learning for Data Poisoning on Graph Neural Networks

  • Conference paper
Social, Cultural, and Behavioral Modeling (SBP-BRiMS 2021)

Abstract

Adversarial Machine Learning has emerged as a substantial subfield of Computer Science, driven both by a lack of robustness in the models we train and by crowdsourcing practices that enable attackers to tamper with data. In the last two years, interest has surged in adversarial attacks on graphs, yet the Graph Classification setting remains nearly untouched. Because a Graph Classification dataset consists of discrete graphs with class labels, related work has forgone direct gradient optimization in favor of an indirect Reinforcement Learning approach. We study the novel problem of Data Poisoning (training-time) attacks on Neural Networks for Graph Classification using Reinforcement Learning Agents.
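
Readers without access can still get a feel for the setup the abstract describes: edge insertions and deletions are discrete, so the poisoner's objective is non-differentiable in its actions, and a policy-gradient agent is a natural fit. What follows is a minimal, self-contained PyTorch sketch of that idea, not the authors' method; the victim model (ToyGCN), the flat edge-flip action space, the flip budget, and the reward (victim test error after retraining) are all illustrative assumptions.

import torch
import torch.nn as nn
from torch.distributions import Categorical

N, N_GRAPHS, BUDGET = 8, 40, 4  # nodes per graph, dataset size, edge-flip budget


def random_dataset(n_graphs=N_GRAPHS, n=N, seed=0):
    """Toy graph-classification task: label 1 iff the graph is denser than the median."""
    g = torch.Generator().manual_seed(seed)
    adjs = (torch.rand(n_graphs, n, n, generator=g) < 0.3).float()
    adjs = torch.triu(adjs, diagonal=1)
    adjs = adjs + adjs.transpose(1, 2)  # undirected: symmetric adjacency
    density = adjs.sum(dim=(1, 2))
    labels = (density > density.median()).long()
    return adjs, labels


class ToyGCN(nn.Module):
    """One-layer GCN with identity node features and mean pooling (a stand-in victim)."""
    def __init__(self, n=N, hidden=16):
        super().__init__()
        self.lin = nn.Linear(n, hidden)  # with X = I, (A + I) X W is a Linear on rows of A + I
        self.out = nn.Linear(hidden, 2)

    def forward(self, adjs):
        h = torch.relu(self.lin(adjs + torch.eye(adjs.size(-1))))
        return self.out(h.mean(dim=1))  # mean-pool node embeddings, then classify


def train_victim(adjs, labels, epochs=30):
    """Retrain the victim from scratch on the (possibly poisoned) training set."""
    model = ToyGCN()
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(adjs), labels).backward()
        opt.step()
    return model


def attack(train_adjs, train_labels, test_adjs, test_labels, episodes=20):
    """REINFORCE over a flat action space: each action flips one (graph, i, j) entry."""
    logits = torch.zeros(train_adjs.numel(), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.05)
    for ep in range(episodes):
        dist = Categorical(logits=logits)
        acts = dist.sample((BUDGET,))  # spend the whole poisoning budget at once
        poisoned = train_adjs.clone()
        for a in acts.tolist():
            g, rem = divmod(a, N * N)
            i, j = divmod(rem, N)
            poisoned[g, i, j] = 1.0 - poisoned[g, i, j]  # flip the edge on/off
            poisoned[g, j, i] = poisoned[g, i, j]        # keep the graph undirected
        victim = train_victim(poisoned, train_labels)
        with torch.no_grad():
            acc = (victim(test_adjs).argmax(dim=1) == test_labels).float().mean()
        reward = 1.0 - acc  # the attacker is rewarded for victim test error
        loss = -reward * dist.log_prob(acts).sum()  # REINFORCE policy-gradient step
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"episode {ep:2d}: victim test accuracy {acc.item():.2f}")


if __name__ == "__main__":
    adjs, labels = random_dataset()
    attack(adjs[:30], labels[:30], adjs[30:], labels[30:])

Using REINFORCE here mirrors the abstract's point about forgoing direct gradient optimization: the attacker never differentiates through the discrete graph edits, only through the log-probabilities of its own actions.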

Author information

Correspondence to Jacob Dineen.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Dineen, J., Haque, A.S.M.AU., Bielskas, M. (2021). Reinforcement Learning for Data Poisoning on Graph Neural Networks. In: Thomson, R., Hussain, M.N., Dancy, C., Pyke, A. (eds) Social, Cultural, and Behavioral Modeling. SBP-BRiMS 2021. Lecture Notes in Computer Science, vol 12720. Springer, Cham. https://doi.org/10.1007/978-3-030-80387-2_14

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-80387-2_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-80386-5

  • Online ISBN: 978-3-030-80387-2

  • eBook Packages: Computer Science, Computer Science (R0)
