• Journal of the China Computer Federation (CCF)
• CCF-recommended Chinese journal (Class B)
• Indexed in Scopus
• Indexed in CSCD (Chinese Science Citation Database)
• China Science and Technology Core Journal

A Survey on Model Robustness under Adversarial Examples
WANG Kedi, YI Ping
(School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China)
Abstract:
In recent years, research on and applications of artificial intelligence have developed rapidly, and machine learning models are now widely deployed in real-world scenarios, so the security and robustness of AI models have begun to attract attention. Recent studies have found that, for models built without defensive design, an attacker can easily induce misclassification by adding small perturbations that are imperceptible to the human eye; these perturbed inputs, known as adversarial examples, pose serious security problems. Adversarial examples have become a popular area of AI security research, and new attack methods, defense methods, and robustness studies appear constantly, yet there is still no complete and unified standard for measuring and evaluating model robustness. This paper therefore surveys current research on model robustness in adversarial settings, describes the mainstream research methods, discusses the progress of this research direction from a comprehensive perspective, and proposes some directions for future work.
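The perturbation attack the abstract refers to can be illustrated with a minimal sketch. This is not a method from the paper itself: it applies the well-known fast gradient sign method (FGSM) idea to a hypothetical two-feature logistic-regression model, where the weights, input, and epsilon are all illustrative values chosen so the flip is visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM sketch: move x along the sign of the input gradient
    of the cross-entropy loss, with per-feature step size eps."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy linear classifier and a correctly classified input (illustrative values).
w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([0.1, 0.1]), 1.0

x_adv = fgsm(x, y, w, b, eps=0.2)
pred_clean = int(sigmoid(w @ x + b) > 0.5)    # clean input: predicted class 1
pred_adv = int(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

Even though each feature moves by at most 0.2, the prediction flips from 1 to 0, which is the core phenomenon the survey studies: small, bounded perturbations that change a model's output.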
Keywords: adversarial examples; model robustness; artificial intelligence security
DOI:10.19363/J.cnki.cn10-1380/tn.2020.05.02
Received: 2019-09-20; Revised: 2019-11-14
Funding: This work was supported by the Key Research and Development Program (No. 2017YFB0802900).