Cite this article:
- 王伟, 董晶, 何子文, 孙哲南. 视觉对抗样本生成技术概述[J]. 信息安全学报, 2020, 5(2): 39-48
- WANG Wei, DONG Jing, HE Ziwen, SUN Zhenan. A Brief Introduction to Visual Adversarial Samples[J]. Journal of Cyber Security, 2020, 5(2): 39-48
DOI: 10.19363/J.cnki.cn10-1380/tn.2020.02.04
Received: 2020-01-03; Revised: 2020-02-20
Funding: This work was supported by the National Natural Science Foundation of China (Grants 61972395, U1736119, 61772529).
A Brief Introduction to Visual Adversarial Samples
WANG Wei¹, DONG Jing¹, HE Ziwen¹,², SUN Zhenan¹
(1. Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; 2. University of Chinese Academy of Sciences, Beijing 100049, China)
Abstract:
The advent of deep learning has brought new opportunities to artificial intelligence (AI) and set off a new wave of rapid development. At the same time, the privacy, security, and ethical issues it raises are attracting increasing attention. Adversarial sample generation, a representative new technique, directly exposes the vulnerability of AI models, especially deep learning models, and makes it essential to take such problems seriously when AI technology is deployed in practice. This paper gives a brief review of adversarial sample generation under white-box and black-box attack protocols, and summarizes the related techniques at three levels: the signal level, the content level, and the semantic level. We hope this overview helps readers better understand the nature of adversarial samples and inspires research on the robustness, security, and interpretability of machine learning models.
Key words: AI security; adversarial sample; white-box attack; black-box attack; distortion metric; adversarial defense
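The abstract distinguishes white-box attacks, in which the attacker can read the victim model's gradients, from black-box attacks, which must rely on query feedback or transferability instead. Purely as an illustrative sketch, and not as a method proposed in this paper, the snippet below implements the canonical one-step white-box attack, the fast gradient sign method (FGSM), in PyTorch; the classifier `model`, inputs `x` scaled to [0, 1], labels `y`, and the budget `epsilon` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Hypothetical FGSM sketch: perturb x along the sign of the loss
    gradient within an L-infinity budget epsilon (a signal-level distortion)."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # white-box: loss gradients are available
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in the valid range
```

Iterative variants such as PGD repeat this step with a smaller step size and re-project onto the epsilon-ball, whereas black-box attacks approximate the gradient from query responses or craft the perturbation on a substitute model and transfer it.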