Adversarial Attacks on Face Recognition Systems in the Physical Domain
CAI Chuxin, WANG Yufei, ZHANG Liepiao, ZHUO Sichao, ZHANG Juanmiao, HU Yongjian
School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China; Sino-Singapore International Joint Research Institute, Guangzhou 511356, China; Guangzhou GRG Vision Co., Ltd., Guangzhou 510663, China; Guangzhou GRG Banking Equipment Co., Ltd., Guangzhou 510663, China
Abstract:
Adversarial sample attacks reveal both the potential insecurity of face recognition systems and the ways in which they can be attacked. Most existing adversarial attacks on face recognition systems are carried out in the digital domain; recent literature, however, shows growing interest in attaching physical objects that carry adversarial perturbations, such as eyeglass frames, paper stickers, and caps, to the face and its surrounding regions in order to mount attacks in the physical domain. This new type of attack can easily bypass most existing face liveness detection methods and directly affect the decision of a face recognition system. Although many methods have been proposed for generating adversarial samples in the digital domain, reproducing them in the physical domain is neither easy nor cheap. This paper proposes a method for generating adversarial samples that can be readily extended from the digital domain to the physical domain: by adding adversarial perturbations of specific shapes to an original face sample, the attacker can mislead the face recognition system so that it rejects the true identity (dodging attack) or accepts a chosen target identity (impersonation attack). The main contributions are as follows. First, facial landmarks are used to construct, for each face shape, a mask that constrains the shape of the adversarial perturbation. Second, an adversarial loss function is designed to train a generator that produces adversarial samples in the digital domain. Third, a printing score loss function is designed to reduce the color deviation introduced by printing, so that the adversarial samples can be reproduced in the physical domain; sample quality is further improved by data augmentation that simulates eyeglass wearing, real-world illumination changes, and similar conditions. Experimental results show that the generated adversarial samples not only break the typical face recognition system VGGFace10 with a high success rate in the digital domain, but can also be reproduced conveniently and in large quantities in the physical domain. The proposed method exposes potential security risks of face recognition systems and offers useful guidance for designing their defenses.
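To make the described pipeline concrete, the sketch below illustrates the overall idea under stated assumptions: a landmark-derived mask confines the perturbation to an eyeglass-like region, a cosine-similarity adversarial loss drives impersonation, and a simplified non-printability term stands in for the printing score loss. This is a minimal PyTorch-style illustration, not the paper's implementation: the paper trains a generator network, whereas this sketch optimizes the perturbation directly, and all names (`face_model`, `printable_colors`, etc.) are hypothetical.

```python
# Minimal sketch (assumptions: PyTorch; a pretrained face-embedding model `face_model`;
# 68-point facial landmarks as a (68, 2) tensor; images in [0, 1] of shape (1, 3, H, W)).
import torch
import torch.nn.functional as F

def landmark_mask(landmarks, image_size=112):
    """Binary mask covering a landmark-defined region (here, around the eyes),
    so the perturbation is restricted to an eyeglass-like shape."""
    mask = torch.zeros(1, 1, image_size, image_size)
    xs, ys = landmarks[36:48, 0], landmarks[36:48, 1]   # eye landmarks in the 68-point scheme
    x0, x1 = int(xs.min()) - 8, int(xs.max()) + 8
    y0, y1 = int(ys.min()) - 8, int(ys.max()) + 8
    mask[..., max(y0, 0):min(y1, image_size), max(x0, 0):min(x1, image_size)] = 1.0
    return mask

def printable_color_loss(patch, printable_colors):
    """Simplified non-printability score: mean distance from each patch pixel
    to its nearest color in a set of printer-reproducible RGB values (K, 3)."""
    pix = patch.permute(0, 2, 3, 1).reshape(-1, 3)       # (H*W, 3)
    dists = torch.cdist(pix, printable_colors)           # (H*W, K)
    return dists.min(dim=1).values.mean()

def attack(face, landmarks, target_emb, face_model, printable_colors,
           steps=200, lr=0.01, lam=0.1):
    mask = landmark_mask(landmarks, face.shape[-1])
    delta = torch.zeros_like(face, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(face + delta * mask, 0, 1)
        emb = F.normalize(face_model(adv), dim=-1)
        # Impersonation: pull the embedding toward the target identity.
        adv_loss = 1.0 - F.cosine_similarity(emb, target_emb).mean()
        # Printing term: keep the perturbation within printer-reproducible colors.
        color_loss = printable_color_loss(delta * mask, printable_colors)
        loss = adv_loss + lam * color_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.clamp(face + delta.detach() * mask, 0, 1)
```

For a dodging attack, one would instead maximize the distance to the source identity's embedding; the weight `lam` trades attack strength in the digital domain against how faithfully the perturbation can be reproduced by a printer.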
Key words:  face recognition  adversarial sample attack  digital adversarial samples  physical adversarial samples  printing score loss function
DOI:10.19363/J.cnki.cn10-1380/tn.2023.03.10
Received: October 14, 2021    Revised: December 26, 2021
Foundation items: This work was supported by the National Key Research and Development Program of China (No. 2019QY2202), the Guangzhou Development District International Cooperation Project (No. 2019GH16), and the Sino-Singapore International Joint Research Institute Project (No. 206-A018001).