Research Article
DOI: 10.1145/3581783.3612598

Simple Techniques are Sufficient for Boosting Adversarial Transferability

Published: 27 October 2023

ABSTRACT

Transferable targeted adversarial attacks against deep image classifiers remain an open problem. Depending on the space in which the loss is optimized, existing methods fall into two categories: (a) feature space attacks and (b) output space attacks. Feature space attacks outperform output space attacks by a large margin, but at the cost of training layer-wise auxiliary classifiers for each target class and greedily searching for the optimal layers. In this work, we revisit the output space attack and improve it from two perspectives. First, we identify over-fitting as a major factor that hinders transferability, and propose to augment the network input and/or feature layers with noise. Second, we propose a new cross-entropy loss with two ends: one pushing the sample away from the source class, i.e. the ground-truth class, and the other pulling it toward the target class. We demonstrate that these simple techniques are sufficient for achieving very competitive performance.
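The two ideas summarized above admit a compact implementation. Below is a minimal, hypothetical PyTorch sketch, not the authors' released code: dual_ce_loss combines the push/pull cross-entropy terms, and attack_step injects Gaussian noise into the input before each forward pass. All names, the epsilon budget, the step size, and the noise scale are illustrative assumptions.

```python
# Hypothetical sketch of (i) input-noise augmentation and (ii) a dual-ended
# cross-entropy loss for a targeted transfer attack. Names and hyperparameters
# are illustrative assumptions, not taken from the paper's code.

import torch
import torch.nn.functional as F


def dual_ce_loss(logits, source_label, target_label, alpha=1.0):
    """Pull the prediction toward the target class, push it away from the source class."""
    pull = F.cross_entropy(logits, target_label)   # minimized: close to target
    push = F.cross_entropy(logits, source_label)   # maximized: far from source
    return pull - alpha * push


def attack_step(model, x_adv, x_clean, source_label, target_label,
                eps=16 / 255, step_size=2 / 255, noise_std=0.05):
    """One L_inf-bounded iteration of the noise-augmented targeted attack."""
    x_adv = x_adv.clone().detach().requires_grad_(True)

    # (i) augment the input with Gaussian noise to reduce over-fitting
    logits = model(x_adv + noise_std * torch.randn_like(x_adv))

    # (ii) dual-ended cross-entropy objective
    loss = dual_ce_loss(logits, source_label, target_label)
    loss.backward()

    with torch.no_grad():
        x_adv = x_adv - step_size * x_adv.grad.sign()        # targeted: descend the loss
        x_adv = x_clean + (x_adv - x_clean).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                         # keep a valid image
    return x_adv.detach()
```

In this sketch the same step could instead add noise to intermediate feature maps (e.g. via forward hooks), which is the "feature layer" variant mentioned in the abstract.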

Published in

MM '23: Proceedings of the 31st ACM International Conference on Multimedia
October 2023, 9913 pages
ISBN: 9798400701085
DOI: 10.1145/3581783
Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
