An Improved (Adversarial) Reprogramming Technique for Neural Networks

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2021 (ICANN 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12891)

Included in the following conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

Neural networks can be repurposed via adversarial reprogramming to perform new tasks that differ from the tasks they were originally trained for. In this paper, we introduce a new and improved reprogramming technique that, compared to prior work, achieves better accuracy, scales better, and can be successfully applied to more complex tasks. While the prior literature focuses on potentially malicious uses of reprogramming, we argue that reprogramming can also be viewed as an efficient training method: it allows existing pre-trained models to be re-used and easily reprogrammed to perform new tasks, and it requires far less effort and hyperparameter tuning than training new models from scratch. We therefore believe that our improved and scalable reprogramming method has the potential to become a new way of creating neural network models.
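In general terms, adversarial reprogramming works by freezing a pre-trained source model, embedding each (typically smaller) target-task input into a source-sized input surrounded by a learnable additive perturbation (the "program"), and mapping a subset of the source labels onto the target labels; only the program parameters are trained. The PyTorch sketch below illustrates this general scheme as introduced by Elsayed et al.; it is a minimal illustration, not this paper's improved technique, and the model choice (an ImageNet ResNet-50, assuming torchvision 0.13 or newer), image sizes, hard label mapping, and all names are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ReprogrammingWrapper(nn.Module):
    """Wraps a frozen source model with a learnable adversarial program.

    Illustrative sketch only: sizes and the label mapping are assumptions,
    not the method proposed in this paper.
    """

    def __init__(self, source_model, target_size=28, source_size=224,
                 num_target_classes=10):
        super().__init__()
        self.source_model = source_model.eval()
        for p in self.source_model.parameters():
            p.requires_grad = False  # only the program is trained

        # Learnable program: a full-size additive perturbation, applied
        # everywhere except the region where the target image is embedded.
        self.program = nn.Parameter(torch.zeros(3, source_size, source_size))
        mask = torch.ones(3, source_size, source_size)
        pad = (source_size - target_size) // 2
        mask[:, pad:pad + target_size, pad:pad + target_size] = 0.0
        self.register_buffer("mask", mask)
        self.pad, self.target_size = pad, target_size
        self.num_target_classes = num_target_classes

    def forward(self, x):
        # x: (batch, 1 or 3, target_size, target_size) target-task images
        if x.size(1) == 1:
            x = x.repeat(1, 3, 1, 1)  # greyscale -> RGB
        canvas = x.new_zeros(x.size(0), 3, self.mask.size(1), self.mask.size(2))
        canvas[:, :, self.pad:self.pad + self.target_size,
                     self.pad:self.pad + self.target_size] = x
        # tanh keeps the program bounded; the mask leaves the image intact.
        adv_input = canvas + torch.tanh(self.program) * self.mask
        logits = self.source_model(adv_input)  # e.g. 1000 ImageNet classes
        # Hard label mapping: reuse the first k source labels as target labels.
        return logits[:, :self.num_target_classes]

# Usage sketch: reprogram an ImageNet ResNet-50 for a 10-class task (e.g. MNIST).
model = ReprogrammingWrapper(models.resnet50(weights="IMAGENET1K_V1"))
optimizer = torch.optim.Adam([model.program], lr=0.05)
loss_fn = nn.CrossEntropyLoss()  # applied to the remapped logits
```

Training then proceeds as ordinary supervised learning on the target task, except that the optimizer updates only model.program; the frozen source weights are never touched, which is what makes reprogramming cheap relative to training from scratch.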



Author information


Corresponding author

Correspondence to Eliska Kloberdanz.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Kloberdanz, E., Tian, J., Le, W. (2021). An Improved (Adversarial) Reprogramming Technique for Neural Networks. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. ICANN 2021. Lecture Notes in Computer Science, vol 12891. Springer, Cham. https://doi.org/10.1007/978-3-030-86362-3_1


  • DOI: https://doi.org/10.1007/978-3-030-86362-3_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86361-6

  • Online ISBN: 978-3-030-86362-3

  • eBook Packages: Computer Science (R0)
