Cite this article
  • TU Bibo, XU Yang, ZHANG Kun, LI Chen. Review of Explainable Intrusion Detection Methods[J]. Journal of Cyber Security, accepted.
DOI:
Received: 2023-11-09; Revised: 2024-03-01
Funding:
Review of Explainable Intrusion Detection Methods
TU Bibo, XU Yang, ZHANG Kun, LI Chen
(Institute of Information Engineering, Chinese Academy of Sciences)
Abstract:
Intrusion detection systems (IDS) are an essential defense mechanism in network security, and their role keeps growing as network data volumes surge. Machine learning approaches, especially deep-learning-based black-box models, have become a focus of IDS research owing to their strong detection performance. However, the inherent opacity of black-box models and their susceptibility to gradient-based attacks prevent users from understanding and trusting the models' decision processes, which makes research on the explainability of black-box IDS especially important. This paper gives a formal definition of an explainable IDS as an intrusion detection method that can provide feature-level explanations. Based on when and how features are explained, it divides explanation models into three categories: model-embedded, local model estimation, and counterfactual analysis. Drawing on advances in explainable artificial intelligence (XAI), the survey selects 45 methods from these three categories that are suitable for intrusion detection, referred to as explainable IDS (X-IDS) methods, and analyzes them in detail. Six representative methods, two from each category, are then compared experimentally in terms of robustness, effectiveness, and sparsity, and the strengths and weaknesses of the three categories are discussed in light of the results. Finally, the paper analyzes the security and application challenges facing current X-IDS methods, offering guidance for the future application of XAI in intrusion detection.
Key words:  intrusion detection, machine learning, deep learning, explainable AI
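
The "local model estimation" category summarized in the abstract can be illustrated with a short, self-contained sketch. The Python example below is not taken from the paper: the random-forest detector, the synthetic flow features (duration, src_bytes, pkt_rate, failed_logins), and all parameter choices are assumptions made purely for illustration. It fits a LIME-style weighted linear surrogate around one instance classified by a black-box IDS model and reports feature-level attributions together with a simple sparsity score, one of the three evaluation axes (robustness, effectiveness, sparsity) the survey compares.

```python
# Illustrative sketch only; NOT the paper's method. Shows the general idea
# behind "local model estimation" explanations (in the spirit of LIME).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "flow features" standing in for an IDS dataset (assumed names).
feature_names = ["duration", "src_bytes", "pkt_rate", "failed_logins"]
X = rng.normal(size=(2000, 4))
# Toy ground truth: attacks correlate with high pkt_rate and failed_logins.
y = ((X[:, 2] + X[:, 3]) > 1.0).astype(int)

# The "black box" detector whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_surrogate_explanation(x, n_samples=500, scale=0.3):
    """Fit a proximity-weighted linear surrogate around instance x.

    Perturbed copies of x are labeled by the black box; a ridge model
    weighted by closeness to x approximates the local decision surface,
    and its coefficients serve as feature-level attributions.
    """
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    pz = black_box.predict_proba(Z)[:, 1]              # black-box outputs
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (2 * scale ** 2))  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
    return surrogate.coef_

x = X[0]
attributions = local_surrogate_explanation(x)
# Sparsity here: fraction of features with negligible attribution.
sparsity = np.mean(np.abs(attributions) < 0.05)
for name, a in zip(feature_names, attributions):
    print(f"{name:>13}: {a:+.3f}")
print(f"sparsity: {sparsity:.2f}")
```

The proximity kernel concentrates the surrogate fit on the neighborhood of the instance being explained, which is what makes the resulting explanation local rather than global; model-embedded and counterfactual methods, the other two categories in the survey's taxonomy, obtain feature-level explanations at different points in the pipeline.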