Cite this article:
He Bangyan, Li Qi, Sun Zhenan, Jing Lihua, Wang Rui. Improving the Transferability of 3D Point Cloud Attack via Logit-Level Augmentation [J]. Journal of Cyber Security, Accepted.

DOI:
Received: 2025-08-27; Revised: 2026-01-01
Funding: National Natural Science Foundation of China (General Program, Key Program, Major Program)

Improving the Transferability of 3D Point Cloud Attack via Logit-Level Augmentation

He Bangyan 1, Li Qi 2, Sun Zhenan 2, Jing Lihua 1, Wang Rui 1

(1. Institute of Information Engineering, Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences)
Abstract:
Deep learning-based point cloud processing models (hereafter, point cloud models) have become core technologies in safety-critical domains such as autonomous driving and aerospace engineering owing to their superior performance. Existing studies have shown that these models are notably vulnerable to 3D point cloud attacks. In such an attack, an adversary injects human-imperceptible adversarial perturbations into benign 3D point clouds, producing 3D adversarial point clouds that mislead models into incorrect decisions. Furthermore, the transferability of 3D point cloud attacks, i.e., the ability of 3D adversarial point clouds generated on white-box point cloud models to mislead black-box point cloud models, is both a core factor in building trustworthy point cloud models and a key metric for establishing a comprehensive robustness evaluation framework. Therefore, to systematically enhance the robustness of point cloud models under adversarial attack, the transferability of 3D point cloud attacks must be explored from multiple dimensions. However, existing studies have not fully investigated the potential influence of the Logit layer on this transferability. To fill this gap, this study proposes a plug-and-play module from the perspective of Logit-layer augmentation, termed the LA-Attack module. Specifically, a trainable Logit augmentation unit is designed within the LA-Attack module to augment the Logit distribution; the module then incorporates a joint optimization strategy that simultaneously optimizes the trainable Logit augmentation unit and the 3D adversarial point clouds through alignment constraints.
Extensive validation experiments were conducted on five benchmark datasets: ModelNet40, ShapeNetPart, ModelNet10, ModelNet-C, and ScanNet. Widely adopted mainstream point cloud models were selected, including PointNet, PointNet++, PointConv, CurveNet, DGCNN, PCT, and PointMamba, along with commonly used baseline attack methods such as 3D-Adv, KNN, AdvPC, AOF, PF-Attack, SS-Attack, and CFG. The experimental results validate the effectiveness and compatibility of the proposed LA-Attack module.
Key words: 3D computer vision; point cloud recognition; adversarial example; AI security; black-box attack
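
The joint optimization described in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration only, not the paper's implementation: the linear surrogate "model", the affine logit augmentation z -> a*z + b, the finite-difference gradients, the L-infinity budget, and the names `model_logits` and `la_attack` are all hypothetical.

```python
# Illustrative-only sketch: jointly optimize adversarial points and a
# trainable logit augmentation, with an alignment penalty that keeps the
# augmented class distribution close to the unaugmented one.
import numpy as np

rng = np.random.default_rng(0)

def model_logits(points, W):
    # Toy surrogate: average-pool the points, then apply a linear head.
    return points.mean(axis=0) @ W

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(logits, label):
    return -np.log(softmax(logits)[label] + 1e-12)

def la_attack(points, label, W, steps=30, eps=0.05,
              lr_pts=0.01, lr_aug=0.01, align_weight=0.1):
    """Ascend an attack objective computed on augmented logits a*z + b,
    updating both the points and the augmentation parameters (a, b)."""
    adv = points.copy()
    a, b = np.ones(W.shape[1]), np.zeros(W.shape[1])
    h = 1e-4

    def objective(adv_, a_, b_):
        z = model_logits(adv_, W)
        z_aug = a_ * z + b_
        align = np.sum((softmax(z_aug) - softmax(z)) ** 2)
        # Untargeted attack loss on augmented logits, minus alignment.
        return ce_loss(z_aug, label) - align_weight * align

    for _ in range(steps):
        base = objective(adv, a, b)
        # Finite-difference ascent on the point coordinates.
        g_pts = np.zeros_like(adv)
        for i in range(adv.shape[0]):
            for j in range(adv.shape[1]):
                probe = adv.copy()
                probe[i, j] += h
                g_pts[i, j] = (objective(probe, a, b) - base) / h
        # Signed step, clipped to an L-infinity ball around the clean points.
        adv = np.clip(adv + lr_pts * np.sign(g_pts),
                      points - eps, points + eps)
        # Coordinate-wise finite-difference ascent on (a, b).
        base = objective(adv, a, b)
        for k in range(a.size):
            a_p, b_p = a.copy(), b.copy()
            a_p[k] += h
            b_p[k] += h
            a[k] += lr_aug * (objective(adv, a_p, b) - base) / h
            b[k] += lr_aug * (objective(adv, a, b_p) - base) / h
    return adv

points = rng.normal(size=(16, 3))           # tiny stand-in "point cloud"
W = rng.normal(size=(3, 4))                 # 4-class toy head
label = int(np.argmax(model_logits(points, W)))
adv = la_attack(points, label, W)
print(float(np.abs(adv - points).max()))    # stays within the eps budget
```

The sketch only conveys the shape of the idea: in practice the surrogate would be a deep point cloud model, gradients would come from automatic differentiation, and the augmentation unit and alignment constraint would follow the paper's design rather than this affine/finite-difference stand-in.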