Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (10): 61-75. DOI: 10.3778/j.issn.1002-8331.2310-0362

• Hot Topics and Surveys •

Survey of Physical Adversarial Attacks Against Object Detection Models

CAI Wei, DI Xingyu, JIANG Xinhao, WANG Xin, GAO Weijie   

  1. Missile Engineering Institute, Rocket Force University of Engineering, Xi’an 710025, China
  • Online: 2024-05-15    Published: 2024-05-15

Abstract: Deep learning models are highly susceptible to adversarial examples: tiny perturbations added to an image, imperceptible to the naked eye, can disable a well-trained deep learning model. Recent research shows that such perturbations can also be realized in the physical world. This paper focuses on physical adversarial attacks against deep learning object detection models, clarifies the concept of a physical adversarial attack, and outlines the general pipeline of such attacks on object detection. Organized by attack task, recent physical adversarial attack methods against object detection networks are reviewed in two categories, vehicle detection and pedestrian detection; other attacks against object detection models, other attack tasks, and other attack modalities are then briefly introduced. Finally, the current challenges of physical adversarial attacks are discussed, leading into the limitations of adversarial training, and possible future research directions and application prospects are outlined.
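As an illustration of the digital attack idea the abstract opens with, the following minimal PyTorch sketch implements the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), a standard single-step perturbation from the literature rather than a method proposed in this survey; model, x, y, and epsilon are assumed placeholders for a trained classifier, an input batch, its labels, and the perturbation budget.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    # One sign-gradient ascent step on the loss (FGSM); the perturbation
    # is bounded by epsilon per pixel and thus nearly invisible.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

Physical attacks, the subject of this survey, differ mainly in that the perturbation (for example, a printable patch) is optimized in expectation over random transformations such as scaling, rotation, and lighting changes so that it survives printing and camera capture.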

Key words: adversarial attack, physical attack, deep learning, deep neural network