Cite this article
  • Li Peixuan, Huang Tu, Luo Shuqing, Song Jiaxin, Liu Gongshen. A Survey on Copyright Protection Technology of Deep Learning Model [J]. Journal of Cyber Security, Accepted.


A Survey on Copyright Protection Technology of Deep Learning Model
Li Peixuan, Huang Tu, Luo Shuqing, Song Jiaxin, Liu Gongshen
(School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University)
Abstract:
Deep learning models have achieved excellent performance on many tasks and are now widely deployed across numerous fields. Because training a high-performing deep neural network is expensive, a trained model can be regarded as the intellectual property of its owner. However, deep learning models were not designed with security in mind, and security problems have surfaced alongside their rapid development. With the deployment and application of cloud platforms for model training, the threat of deep learning models being stolen, maliciously distributed, or resold has grown considerably. Given the great practical value of deep learning models, illegal theft by malicious attackers seriously infringes on the rights and interests of model owners, so protecting the copyright of deep learning models is an urgent need. To address this problem, many copyright protection schemes for deep learning models have been proposed in recent years, including model ownership verification based on digital watermarking and model access control based on watermarking or encryption, but a systematic summary has been lacking. This paper reviews the current state of research and discusses possible future directions. It first introduces the basic concepts of deep learning model watermarking and backdoor attacks, as well as the requirements a model watermark should satisfy. It then gives a comprehensive and detailed summary and classification of existing copyright protection schemes along several dimensions: the function a scheme implements, how it is implemented, when it is applied, and how ownership is verified. It further surveys attacks against these protection schemes from four perspectives: detection attacks, evasion attacks, removal attacks, and fraud attacks. Finally, it summarizes the state of the field and outlines key directions for future research. We hope this detailed survey provides a useful reference for subsequent work in this area.
Key words: deep learning model security; copyright protection of deep learning models; model watermarking
DOI:
Received: 2022-12-17; Revised: 2023-03-21
Funding: National Natural Science Foundation of China (General Program, Key Program, Major Program)
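As a concrete illustration of the trigger-set ("backdoor") watermarking approach to ownership verification mentioned in the abstract, the sketch below shows how an owner might embed a secret trigger set during training and later verify ownership of a suspect model. This is a minimal PyTorch-style sketch under stated assumptions, not the method of any specific scheme surveyed in the paper; the function names (embed_watermark, verify_ownership), the loss weighting, and the verification threshold are illustrative choices.

# Minimal, illustrative sketch of trigger-set ("backdoor") model watermarking.
# Assumptions: a classification model, a DataLoader of normal task data, and a
# small secret trigger set (trigger_x, trigger_y) with owner-chosen labels.
import torch
import torch.nn.functional as F

def embed_watermark(model, train_loader, trigger_x, trigger_y, epochs=5, lr=1e-3):
    """Fine-tune the model so it memorises the owner-chosen labels on the trigger set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:                       # normal task data
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # add the watermark loss on the secret trigger set
            loss = loss + F.cross_entropy(model(trigger_x), trigger_y)
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def verify_ownership(suspect_model, trigger_x, trigger_y, threshold=0.9):
    """Claim ownership if the suspect model reproduces the secret trigger labels."""
    suspect_model.eval()
    pred = suspect_model(trigger_x).argmax(dim=1)
    acc = (pred == trigger_y).float().mean().item()
    return acc >= threshold, acc

In this black-box style of verification, the owner only needs query access to the suspect model; the trade-offs among such schemes (robustness to removal, evasion, and fraud attacks) are exactly what the survey classifies.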