Survey of Physical Adversarial Attacks Against Object Detection Models
(indexed: OA, Peking University Core Journal (北大核心), CSTPCD)

Abstract

Deep learning models are highly susceptible to adversarial examples: tiny image perturbations that are imperceptible to the naked eye can disable well-trained deep learning models. Recent research indicates that such perturbations can also exist in the physical world. This paper focuses on physical adversarial attacks against deep learning object detection models, clarifies the concept of a physical adversarial attack, and outlines the general pipeline of such attacks on object detection. Grouped by attack task, recent physical adversarial attack methods against object detection networks are reviewed from two perspectives, vehicle detection and pedestrian detection. Other attacks against object detection models, other attack tasks, and other attack modalities are also briefly introduced. Finally, the current challenges of physical adversarial attacks are discussed, the limitations of adversarial training are pointed out, and possible future development directions and application prospects are outlined.
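The abstract's opening claim, that a pixel-level perturbation invisible to the naked eye can flip a trained model's prediction, can be illustrated with a minimal sketch of the classic Fast Gradient Sign Method (FGSM). This is a generic digital attack for illustration only, not a method from the surveyed paper; the model, image, label inputs and the epsilon budget below are assumed placeholders.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2/255):
    # Return an adversarially perturbed copy of `image` (shape [N, C, H, W], values in [0, 1]).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # classification loss w.r.t. the true label
    loss.backward()                              # gradient of the loss w.r.t. the input pixels
    # Step each pixel by at most epsilon in the direction that increases the loss,
    # then clamp back to the valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

The physical attacks surveyed in the paper differ from this single gradient step in that the perturbation (for example a printed patch) must remain effective under changes in viewpoint, distance, and lighting, which is typically handled by optimizing over a distribution of such transformations.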

蔡伟;狄星雨;蒋昕昊;王鑫;高蔚洁

College of Missile Engineering, Rocket Force University of Engineering, Xi'an 710025, China

Subject: Computer Science and Automation

Keywords: adversarial attack; physical attack; deep learning; deep neural network

《计算机工程与应用》 (Computer Engineering and Applications), 2024, No. 10

Pages 61-75 (15 pages)

Funding: national ministry and commission foundation.

DOI: 10.3778/j.issn.1002-8331.2310-0362
