Cite this article:
WANG Bin, GUO Yankai, QIAN Yaguan, WANG Jiamin, WANG Xing, GU Zhaoquan. Defense of Traffic Classifiers based on Convolutional Networks against Adversarial Examples[J]. Journal of Cyber Security, 2022, 7(1): 145-156
DOI: 10.19363/J.cnki.cn10-1380/tn.2022.01.10
Received: 2021-04-01; Revised: 2021-06-16
Funding: This work was supported by the National Key R&D Program of China (No. 2018YFB2100400), the National Natural Science Foundation of China (No. 61902082), the Zhejiang Provincial Public Welfare Technology Application Research Program (No. LGG19F030001, No. LGF20F020007), and the Hangzhou Leading Innovation and Entrepreneurship Team Program (No. 201920110039).
|
Defense of Traffic Classifiers based on Convolutional Networks against Adversarial Examples |
WANG Bin1,2, GUO Yankai1, QIAN Yaguan1, WANG Jiamin1, WANG Xing2, GU Zhaoquan3
|
(1. School of Big Data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China; 2. Hangzhou Hikvision Network and Information Security Laboratory, Hangzhou 310052, China; 3. Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China)
Abstract: |
With the rise of deep learning, deep neural networks have been successfully applied in many fields, but recent research shows that they are vulnerable to adversarial example attacks. Convolutional Neural Networks (CNNs), one type of deep neural network, have also been successfully applied to the classification of network traffic, and recent research shows that CNNs are likewise vulnerable to adversarial examples. To improve the CNN traffic classifier's defense against adversarial example attacks, we first propose a batch-adversarial-training method, which exploits the error back-propagation of the training process to compute the example gradient and the weight gradient simultaneously in a single backward pass. This method significantly improves training efficiency. Moreover, since the adversarial examples used for training are generated on the target model, it can effectively defend against white-box attacks. To further improve the defense against black-box attacks, we propose an enhanced-adversarial-training method. To counter the transferability of adversarial examples, we craft the adversarial examples used in adversarial training on multiple substitute models, increasing their diversity; the benefit of this method is that the adversarial examples from these models have misaligned gradients. We conduct experiments on the real traffic dataset USTC-TFC2016, crafting traffic composed of adversarial examples to simulate attacks. The experimental results show that batch adversarial training improves the classification accuracy on adversarial examples from 17.29% to 75.37% under white-box attacks, and enhanced adversarial training improves it from 26.37% to 68.39% under black-box attacks. Due to the black-box nature of deep neural networks, there is no consistent understanding of their working mechanism or of the real cause of adversarial examples. Our next step is to further study the vulnerability mechanism of CNNs so as to find better ways to improve the effect of adversarial training.
Key words: traffic classification; adversarial examples; adversarial training
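
For concreteness, the following is a minimal PyTorch sketch of the batch-adversarial-training idea described in the abstract: a single backward pass fills both the input gradient (used here to craft an FGSM-style adversarial batch) and the weight gradients (used for the parameter update). It assumes flows are preprocessed into fixed-size tensors scaled to [0, 1], as is common for USTC-TFC2016; the function name, optimizer handling, and the FGSM step are illustrative assumptions, not the authors' exact algorithm.

```python
import torch.nn.functional as F

def batch_adv_train_step(model, optimizer, x, y, eps=0.02):
    # Track the input gradient so that ONE backward pass yields both the
    # example gradient (for crafting adversarial traffic) and the weight
    # gradients (for the parameter update) -- the efficiency gain claimed
    # for batch adversarial training.
    x = x.clone().detach().requires_grad_(True)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()          # fills x.grad AND every parameter's .grad
    optimizer.step()         # weight update from that same backward pass

    # FGSM-style perturbation from the already-computed input gradient
    # (eps and the clamp range are assumptions for [0, 1]-scaled traffic).
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    # Second update on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
    return loss.item()
```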
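Likewise, a hedged sketch of the enhanced-adversarial-training idea: adversarial examples are crafted on several substitute models, whose misaligned gradients make the crafted batch more diverse, and the defended classifier then trains on clean plus adversarial traffic together. The substitute models, the FGSM attack, and the way batches are mixed are assumptions for illustration; the paper's procedure may differ in these details.

```python
import torch
import torch.nn.functional as F

def craft_diverse_adv(substitutes, x, y, eps=0.02):
    # Craft one FGSM batch per substitute model; because the substitutes'
    # gradients are misaligned, the resulting examples are more diverse.
    parts = []
    for sub in substitutes:
        x_s = x.clone().detach().requires_grad_(True)
        F.cross_entropy(sub(x_s), y).backward()
        parts.append((x_s + eps * x_s.grad.sign()).clamp(0.0, 1.0).detach())
    return torch.cat(parts), y.repeat(len(substitutes))

def enhanced_adv_train_step(model, optimizer, substitutes, x, y, eps=0.02):
    # Train the defended classifier on clean traffic plus the diversified
    # adversarial batch to blunt transfer-based black-box attacks.
    x_adv, y_adv = craft_diverse_adv(substitutes, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(torch.cat([x, x_adv])),
                           torch.cat([y, y_adv]))
    loss.backward()
    optimizer.step()
    return loss.item()
```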