ABSTRACT
A well-designed adversarial attack method can expose the security vulnerabilities of deep neural network models, thereby providing supporting examples for defense strategies such as adversarial training. This paper investigates adversarial attacks against the object detection model Faster R-CNN. First, this work takes Faster R-CNN as the target model and formulates the adversarial attack as a multi-objective optimization problem. Second, a constraint considering perturbation magnitude, class label scores, and bounding box coordinates is introduced to guarantee both the effectiveness and the concealment of the attack. Finally, the proposed method is verified on two benchmark datasets for object detection. The experimental results show that the generated adversarial examples reduce the mAP@[.5,.95] of Faster R-CNN from 39.9% to 0.8% on MSCOCO2017 and from 35.0% to 0.1% on TT100K. In addition, the generated perturbations achieve considerable concealment: the average L1-norm perturbation magnitude reaches only 13.99 and 0.71 on the two benchmark datasets, respectively.
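The multi-objective formulation described above can be pictured as a single weighted objective over three terms: perturbation magnitude, detection class scores, and bounding-box displacement. The following is a minimal illustrative sketch, not the paper's actual loss; the weight parameters `w_mag`, `w_cls`, and `w_box` are hypothetical, and toy NumPy arrays stand in for real detector outputs:

```python
import numpy as np

def attack_objective(delta, class_scores, boxes, boxes_clean,
                     w_mag=1.0, w_cls=1.0, w_box=1.0):
    """Illustrative combined attack objective (hypothetical weighting).

    delta        : perturbation added to the input image
    class_scores : detector confidences for surviving detections
    boxes        : box coordinates predicted on the perturbed image
    boxes_clean  : box coordinates predicted on the clean image
    """
    # (1) concealment term: keep the mean L1 magnitude of the perturbation small
    magnitude = np.abs(delta).mean()
    # (2) effectiveness term: drive class confidences toward zero
    cls_term = np.sum(class_scores)
    # (3) effectiveness term: reward boxes that drift from the clean predictions
    box_term = -np.abs(boxes - boxes_clean).sum()
    # minimizing this sum trades off concealment against attack strength
    return w_mag * magnitude + w_cls * cls_term + w_box * box_term
```

In such a formulation, lowering the objective simultaneously suppresses detections and bounds the visible perturbation, which matches the effectiveness/concealment trade-off the abstract reports.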