Cite this article
  • GAO Hongchao, ZHOU Guangzhi, DAI Jiao, LI Zhaoxing, HAN Jizhong. Defense Against Physical Attacks on Object Detection Based on Entropy and Random Erasing[J]. Journal of Cyber Security, 2023, 8(1): 119-130
Defense Against Physical Attacks on Object Detection Based on Entropy and Random Erasing
GAO Hongchao1, ZHOU Guangzhi1,2, DAI Jiao1, LI Zhaoxing1, HAN Jizhong1
(1. Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China; 2. School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China)
Abstract:
Existing physical attacks mislead the inference of deep neural networks (DNNs) by adding perturbed adversarial patches to the attacked image, thereby disabling DNN-based applications. Such attacks are easy to mount and highly transferable, which poses a serious challenge to the security of DNNs. The amount of information contained in adversarial patches generated by existing physical attack methods is usually higher than that contained in patches of real natural-scene images. Exploiting this difference, this paper proposes a defense algorithm that is broadly applicable and clearly effective. The algorithm consists of two parts: an Entropy-based Detection Component (EDC) and a Random Erasing Component (REC). The EDC uses an entropy measure to detect perturbed adversarial patches and replaces them with gray patches; this significantly reduces the impact of adversarial patches on model inference and does not rely on large-scale training data. The REC improves the general training paradigm of deep learning: without changing the network structure, a model trained with REC not only defends effectively against existing physical attacks but also significantly improves image-analysis performance. Both components transfer well and require no additional training data, and their organic combination constitutes the defense strategy of this paper. Experiments show that the proposed algorithm effectively defends against physical attacks on object detection (mean average precision (mAP) rises from 31.3% to 64.0% on Pascal VOC 2007 and from 19.0% to 41.0% on the Inria dataset) and that it transfers well, defending against physical attacks on both image classification and object detection tasks.
Key words:  adversarial examples  physical attacks  adversarial patch  adversarial defense  object detection
DOI:10.19363/J.cnki.cn10-1380/tn.2023.01.09
Submitted: 2020-01-16    Revised: 2020-03-13
Funding: This work was supported by the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (No. 2020AAA0140000).
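
The abstract specifies only EDC's high-level behavior (entropy-based detection of adversarial patches, then gray replacement), not its windowing scheme, entropy estimator, or decision threshold. The sketch below is a minimal illustration of that idea; `patch_entropy`, `edc_filter`, the 32-pixel window, the stride, and the 7-bit threshold are hypothetical choices for illustration, not the paper's settings.

```python
import numpy as np

def patch_entropy(patch: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of the intensity histogram of a patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def edc_filter(image: np.ndarray, win: int = 32, stride: int = 16,
               threshold: float = 7.0, gray: int = 128) -> np.ndarray:
    """Replace high-entropy sliding windows with flat gray.

    Windows whose histogram entropy exceeds `threshold` are treated as
    suspected adversarial patches and overwritten with `gray`.
    """
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if patch_entropy(image[y:y + win, x:x + win]) > threshold:
                out[y:y + win, x:x + win] = gray
    return out
```

The premise stated in the abstract is that adversarial patches carry more information than natural patches, so their windows should score near the 8-bit ceiling of a 256-bin histogram, while smooth natural regions usually score well below it; that is what makes a fixed threshold plausible as a first cut.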
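REC modifies the training pipeline rather than the network. The abstract does not give its exact formulation; a common off-the-shelf realization of random erasing is torchvision's RandomErasing transform, sketched below with illustrative parameters (the probability, area scale, aspect-ratio range, and fill value are assumptions, not the paper's configuration).

```python
from torchvision import transforms

# Illustrative training-time augmentation: RandomErasing blanks a random
# rectangle in each image tensor, so the model learns to make correct
# predictions even when a patch-sized region is occluded.
train_transform = transforms.Compose([
    transforms.ToTensor(),                       # PIL image -> float tensor
    transforms.RandomErasing(p=0.5,              # erase half of the samples
                             scale=(0.02, 0.2),  # erased area as image fraction
                             ratio=(0.3, 3.3),   # aspect-ratio range of the box
                             value=0.5),         # fill with mid-gray
])
```

Training under such occlusions encourages robustness to exactly the kind of blanked-out region that EDC's gray replacement produces at inference time, which is consistent with the abstract's claim that the two components combine organically into one defense strategy.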