A Survey of Privacy Security for Graph Neural Networks
CHEN Jinyin, MA Minying, MA Haonan, ZHENG Haibin
(Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023, China; College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)
Abstract:
Graph neural networks (GNNs) perform efficient information extraction and feature representation on the edge and node data contained in a graph, giving them an inherent advantage in processing graph-structured data. GNNs have been widely applied in many fields, such as social networks, natural language processing, computer vision and even the life sciences, greatly promoting the prosperity and development of artificial intelligence. However, existing research has shown that attackers can launch privacy-stealing attacks against the training data or the target model, causing privacy-leakage risks and even property losses. Exploring the privacy security of GNNs has therefore attracted widespread attention, and a series of methods have been proposed to mine the security vulnerabilities of GNNs and to provide privacy-protection capabilities. However, research on GNN privacy issues remains relatively scattered: the corresponding threat scenarios, stealing methods, privacy-protection techniques and application scenarios are treated largely in isolation, and no systematic survey exists yet. This article therefore presents the first analysis centred on the privacy security of GNNs. It first defines a theory of GNN privacy attack and defence. It then analyses and summarises privacy-attack and privacy-protection methods in terms of model input, attack and defence mechanisms, downstream tasks, influencing factors, datasets and evaluation metrics, and organises the common benchmark datasets and main evaluation metrics for the different tasks. It also discusses potential application scenarios of GNN privacy security, and analyses the differences and connections between the privacy security of GNNs and that of deep models for images or natural language processing. Finally, it discusses the current challenges facing GNN privacy-security research and potential future research directions, to further promote the development and application of this field.
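For readers unfamiliar with how a GNN extracts information from edge and node data, the propagation step of a standard graph-convolution (GCN) layer, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), can be sketched as follows. This is an illustrative NumPy sketch only, not code from the surveyed work; the toy graph, features and weights are invented for demonstration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops so a node keeps its own features
    d = A_hat.sum(axis=1)                   # node degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetrically normalised adjacency
    return np.maximum(A_norm @ H @ W, 0.0)  # aggregate neighbours, transform, ReLU

# toy graph: 3 nodes in a path, edges 0-1 and 1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                 # one-hot node features
W = np.full((3, 2), 0.5)      # toy weight matrix: 3 input dims -> 2 output dims
print(gcn_layer(A, H, W).shape)  # (3, 2): one 2-dimensional embedding per node
```

Because each output embedding is a weighted mixture of a node's own features and those of its neighbours, both node attributes and graph structure leave traces in the model, which is precisely what the inference and reconstruction attacks surveyed here exploit.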
Key words: graph neural network; inference attack; privacy protection; reconstruction attack; privacy security
DOI:10.19363/J.cnki.cn10-1380/tn.2025.05.08
Received: 2023-09-25; Revised: 2023-11-24
Funding: This work was supported by the Natural Science Foundation of Zhejiang Province (No. LDQ23F020001), the National Natural Science Foundation of China (No. 62072406, No. 62406286), the Key R&D Program of Zhejiang Province (No. 2022C01018), and the National Key R&D Program of China (No. 2018AAA0100801).