Abstract
Neural networks can be repurposed via adversarial reprogramming to perform tasks different from those they were originally trained for. In this paper, we introduce a new and improved reprogramming technique that, compared to prior work, achieves better accuracy, scales better, and can be successfully applied to more complex tasks. While prior literature focuses on potential malicious uses of reprogramming, we argue that reprogramming can be viewed as an efficient training method. Our method allows existing pre-trained models to be re-used and easily reprogrammed to perform new tasks, requiring far less effort and hyperparameter tuning than training new models from scratch. We therefore believe that our improved and scalable reprogramming technique has the potential to become a new way of creating neural network models.
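To make the idea concrete, below is a minimal PyTorch sketch of the baseline adversarial reprogramming formulation of Elsayed et al. (ICLR 2019), not the authors' improved technique, which is described only in the full text. The frozen ResNet-50 backbone, the MNIST target task, the `program` tensor, and the hard-coded mapping of the first ten ImageNet labels to digit classes are illustrative assumptions: a learned, bounded perturbation is added around a small embedded input so that the unchanged host network solves the new task.

```python
# Sketch of baseline adversarial reprogramming (Elsayed et al., ICLR 2019).
# Illustrative assumptions: frozen ResNet-50 host, MNIST target task,
# first-10-ImageNet-labels output mapping; input normalization is omitted.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pre-trained ImageNet classifier; only the "program" is trained.
net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
for p in net.parameters():
    p.requires_grad_(False)

IMG, SMALL, N_CLASSES = 224, 28, 10  # host input size, MNIST size, MNIST classes

# Learnable adversarial program: a full-size perturbation, masked so the
# embedded MNIST digit in the center stays visible.
program = torch.randn(1, 3, IMG, IMG, device=device, requires_grad=True)
mask = torch.ones(1, 3, IMG, IMG, device=device)
lo = (IMG - SMALL) // 2
mask[:, :, lo:lo + SMALL, lo:lo + SMALL] = 0.0  # hole for the digit

opt = torch.optim.Adam([program], lr=0.05)

def reprogram(x_small):
    """Embed a batch of 1x28x28 digits into the masked, tanh-bounded program."""
    x = torch.zeros(x_small.size(0), 3, IMG, IMG, device=device)
    x[:, :, lo:lo + SMALL, lo:lo + SMALL] = x_small.repeat(1, 3, 1, 1)
    return x + torch.tanh(program) * mask

def train_step(x_small, y):
    # Reuse the first N_CLASSES ImageNet logits as digit predictions.
    logits = net(reprogram(x_small))[:, :N_CLASSES]
    loss = F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the backbone stays frozen and only the single `program` tensor is optimized, this setup trains far fewer parameters than a model built from scratch, which is the efficiency argument the abstract makes for reprogramming as a training method.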
Cite this paper
Kloberdanz, E., Tian, J., Le, W. (2021). An Improved (Adversarial) Reprogramming Technique for Neural Networks. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. Lecture Notes in Computer Science, vol. 12891. Springer, Cham. https://doi.org/10.1007/978-3-030-86362-3_1