Poisoning Attack and Defense on Deep Learning Models: A Survey
CHEN Jinyin, ZOU Jianfei, SU Mengmeng, ZHANG Longyuan
(College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)
Abstract:
Deep learning is at the core of the current rise of machine learning and artificial intelligence. As deep learning is widely deployed in security-critical applications such as autonomous driving, access-control screening, and face-recognition payment, the security of deep learning models has become a new research hotspot. Attacks on deep models can be divided by attack phase into poisoning attacks and adversarial attacks: the former take place during the training phase, the latter during the testing phase. This paper presents the first survey of poisoning attack methods in deep learning. It reviews existing poisoning attacks on deep learning, analyzes the feasibility of such attacks, and examines the existing defense measures against them. Finally, future research directions for poisoning attacks are discussed.
Key words:  deep learning  poisoning attack  artificial intelligence security
DOI:10.19363/J.cnki.cn10-1380/tn.2020.07.02
Received: 2019-12-22; Revised: 2020-04-10
Funding: Supported by the Natural Science Foundation of Zhejiang Province (No. LY19F020025) and the Ningbo "Science and Technology Innovation 2025" Major Project (No. 2018B10063).
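The distinction the abstract draws, that poisoning corrupts the training phase while adversarial attacks target the testing phase, can be illustrated with a minimal sketch. This toy example (not any model or method from the paper) uses a simple nearest-centroid classifier: an attacker injects a few mislabeled training points, and the resulting model misclassifies clean test data.

```python
import numpy as np

# Tiny 1-D dataset: class 0 clustered around -2, class 1 around +2.
X_train = np.array([[-2.2], [-2.0], [-1.8], [1.8], [2.0], [2.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[-2.0], [-1.9], [1.9], [2.0]])
y_test = np.array([0, 0, 1, 1])

def train_centroids(X, y):
    # "Model" = one mean vector (centroid) per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    # Distance from each sample to each class centroid; pick the nearest.
    d = np.stack([np.abs(X - model[c]).sum(axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)]

clean = train_centroids(X_train, y_train)
clean_acc = (predict(clean, X_test) == y_test).mean()

# Poisoning happens at TRAINING time: the attacker injects a few points
# labeled 0 but placed deep in class-1 territory, dragging the class-0
# centroid across the decision boundary.
X_poison = np.array([[8.0], [8.0], [8.0]])
y_poison = np.array([0, 0, 0])
poisoned = train_centroids(np.vstack([X_train, X_poison]),
                           np.concatenate([y_train, y_poison]))
poisoned_acc = (predict(poisoned, X_test) == y_test).mean()

print(clean_acc, poisoned_acc)  # 1.0 0.5 on this toy data
```

An adversarial (testing-phase) attack would instead leave the trained model untouched and perturb `X_test`; here the training data itself is the attack surface, which is why poisoning defenses focus on data sanitization before or during training.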