Cite this article
  • LIU Feng, LIN Liying, HUANG Yixin. Automatic Verification System Based on Finger Bimodal Features[J]. Journal of Cyber Security, 2024, 9(3): 80-93

Automatic Verification System Based on Finger Bimodal Features
LIU Feng1,2,3, LIN Liying1,2,3, HUANG Yixin1,2,3
(1. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China; 2. SZU Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518060, China; 3. Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China)
Abstract:
To address the shortcomings of current single-modal biometric recognition in stability and security, as well as the multi-device and multi-input difficulties of multimodal fusion recognition, this paper proposes a learning model that fully considers intra-class and inter-class metrics, and implements an automatic identity verification method and system based on finger bimodal features. Since finger veins and finger creases are difficult to alter and hard to forge, these two important hand features are selected for identity verification. By combining the two modalities and using an autoencoder network to represent intra-class features, a metric-learning-based siamese network model is constructed to extract intra-class and inter-class features. The extracted finger vein and finger crease features are then used for distance computation; the distances are fused and fed into a logistic regression model for a probabilistic judgment, finally achieving effective bimodal-fusion identity verification. To verify the effectiveness of the proposed method, we compared finger vein recognition performance. Experimental results show that our method achieves an equal error rate of 1.69% on a more challenging database, which is 2.96% lower than that of models proposed in existing representative papers. We also compared the constructed bimodal fusion model with the single-modal models: the fusion model combining finger vein and finger crease features achieves an equal error rate of 1.55%, which is 0.14% and 3.0% lower than the single-modal finger vein and finger crease models respectively, showing that the bimodal verification model performs better. Furthermore, we collected a more challenging database and developed a graphical interface to display the images and recognition results, finally realizing an end-to-end integrated automatic identity verification system from data acquisition to recognition and matching. Based on the above work, this paper proposes, for the first time, a multi-view automatic identity verification scheme based on finger vein and finger crease features, realizing a system that integrates accuracy, robustness, and effectiveness.
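For illustration only, the following is a minimal sketch of the kind of siamese autoencoder with a contrastive metric loss described above, written in PyTorch under our own assumptions; the layer sizes, margin, loss weight, and function names are illustrative choices, not the configuration reported in the paper.

```python
# Sketch: siamese autoencoder trained per modality (finger vein or finger crease)
# with a contrastive (metric) loss plus a reconstruction term for intra-class
# representation. All sizes and weights below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseAutoEncoder(nn.Module):
    def __init__(self, in_dim=64 * 64, feat_dim=128):
        super().__init__()
        # Encoder maps a flattened finger ROI to a feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )
        # Decoder reconstructs the input, encouraging the feature to capture
        # intra-class appearance.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def pair_loss(model, x1, x2, same_class, margin=1.0, recon_weight=0.5):
    """Contrastive loss on an image pair of one modality, plus reconstruction loss.

    same_class: float tensor of 1s (same finger) and 0s (different fingers).
    """
    z1, r1 = model(x1)
    z2, r2 = model(x2)
    d = F.pairwise_distance(z1, z2)  # intra-/inter-class distance metric
    contrastive = (same_class * d.pow(2) +
                   (1 - same_class) * F.relu(margin - d).pow(2)).mean()
    recon = F.mse_loss(r1, x1) + F.mse_loss(r2, x2)
    return contrastive + recon_weight * recon
```

In this sketch the contrastive term pulls genuine pairs together and pushes impostor pairs beyond the margin, while the reconstruction term plays the role of the autoencoder-based intra-class representation mentioned in the abstract.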
Keywords: bimodal fusion; siamese network; autoencoder; biometrics
DOI:10.19363/J.cnki.cn10-1380/tn.2024.05.06
Received: 2022-07-02; Revised: 2022-11-15
Funding: This work was supported by the National Natural Science Foundation of China (No. 62076163).
Automatic Verification System Based on Finger Bimodal Features
LIU Feng1,2,3, LIN Liying1,2,3, HUANG Yixin1,2,3
(1.College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China;2.SZU Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518060, China;3.The Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China)
Abstract:
To address the shortcomings of single-modal biometric recognition in terms of stability and security, and the multi-device, multi-input difficulties of multimodal fusion recognition, this paper proposes a learning model that fully considers intra-class and inter-class metrics to implement an automatic identity verification method and system based on finger bimodal features. Since finger veins and finger creases are difficult to change and hard to forge, this paper selects these two important hand-based features for verification. By combining the two modalities and using an autoencoder network to represent intra-class features, we construct a metric-learning-based siamese network to extract intra-class and inter-class features. The extracted finger vein and finger crease features are then used for distance computation, and the fused distances are fed into a logistic regression model to make a probabilistic judgment, yielding effective bimodal fusion verification. To verify the effectiveness of the proposed method, we first compared finger vein recognition performance: our method achieves an equal error rate of 1.69% on a more challenging database, which is 2.96% lower than that of models proposed in existing representative papers. We also compared the bimodal fusion model with the single-modal models and found that the fusion model combining finger vein and finger crease features achieves an equal error rate of 1.55%, which is 0.14% and 3.0% lower than the single-modal finger vein and finger crease models respectively, indicating that the bimodal verification model performs better. Furthermore, we collected a more challenging database, developed a graphical interface to display the captured images and recognition results, and finally built an end-to-end automatic verification system from data acquisition to feature matching. Based on the above work, a multi-view automatic verification scheme based on finger vein and finger crease features is proposed for the first time, realizing a system that integrates accuracy, robustness, and effectiveness.
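Likewise, here is a minimal sketch of the score-level fusion step, assuming scikit-learn's LogisticRegression as the probabilistic fusion model; the example distances, labels, and acceptance threshold are placeholder values, not results or settings from the paper.

```python
# Sketch: fuse finger-vein and finger-crease matching distances with a
# logistic regression classifier that outputs a genuine-match probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [vein_distance, crease_distance]; label 1 = same finger, 0 = different.
# These training pairs are illustrative placeholders.
train_scores = np.array([[0.12, 0.20], [0.85, 0.90], [0.10, 0.35], [0.70, 0.60]])
train_labels = np.array([1, 0, 1, 0])

fusion = LogisticRegression().fit(train_scores, train_labels)

def verify(vein_dist, crease_dist, threshold=0.5):
    """Return the genuine-match probability and an accept/reject decision."""
    prob = fusion.predict_proba([[vein_dist, crease_dist]])[0, 1]
    return prob, prob >= threshold

# Small distances in both modalities should map to a high genuine probability.
print(verify(0.15, 0.25))
```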
Key words: bimodal fusion; siamese network; autoencoder; biometrics