Gait recognition method integrating 3D-CBAM and cross-time scale feature analysis

DOI:
CSTR:
Author:
Affiliation: Changchun University of Science and Technology
Author Bio:
Corresponding Author:
CLC Number:
Fund Project: Jilin Province Science and Technology Support Project

    Abstract:

    To address the limitation of traditional gait recognition methods, which neglect the temporal information in gait features, we propose a gait recognition framework that integrates 3D-CBAM and cross-time scale feature analysis. An attention module incorporated into the model adaptively focuses on critical channels and spatial locations within the input gait sequences, improving recognition performance. Furthermore, the Enhanced Global and Local Feature Extractor (EGLFE) decouples temporal and spatial information to a certain extent during global feature extraction: an additional LeakyReLU layer inserted between the 2D and 1D convolutions increases the number of nonlinearities in the network, which helps expand the receptive field during gait feature extraction, strengthens the model's ability to learn features, and yields better global feature extraction. Local features are then fused in to compensate for the feature loss caused by partitioning. A multi-scale temporal enhancement module fuses frame-level features with short- and long-term temporal features, enhancing the model's robustness to occlusion. We trained and tested on the CASIA-B and OU-MVLP datasets. On CASIA-B, the average recognition accuracy reaches 92.7%, with rank-1 accuracies of 98.1%, 95.1%, and 84.9% under the normal walking (NM), bag-carrying (BG), and clothing-change (CL) conditions, respectively. Experimental results demonstrate that the proposed method performs well under both normal walking and complex conditions.
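
    The 3D-CBAM module referred to in the abstract is not spelled out on this page. The following is a minimal PyTorch sketch of one plausible reading: CBAM-style channel and spatial attention applied to 5D gait feature maps of shape (N, C, T, H, W). Class names, the reduction ratio, and the kernel size are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Pool over the spatio-temporal dims, then weight channels (CBAM channel branch in 3D)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                      # x: (N, C, T, H, W)
        avg = self.mlp(x.mean(dim=(2, 3, 4)))  # global average pooling over T, H, W
        mx = self.mlp(x.amax(dim=(2, 3, 4)))   # global max pooling over T, H, W
        w = torch.sigmoid(avg + mx)            # per-channel weights, shape (N, C)
        return x * w[:, :, None, None, None]

class SpatialAttention3D(nn.Module):
    """Pool over channels, then weight each (t, h, w) position (CBAM spatial branch in 3D)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                      # x: (N, C, T, H, W)
        avg = x.mean(dim=1, keepdim=True)      # (N, 1, T, H, W)
        mx = x.amax(dim=1, keepdim=True)       # (N, 1, T, H, W)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAM3D(nn.Module):
    """Channel attention followed by spatial attention, as in the original CBAM ordering."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention3D(channels, reduction)
        self.sa = SpatialAttention3D(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))
```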

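    Similarly, the EGLFE global branch (a 2D spatial convolution and a 1D temporal convolution with an extra LeakyReLU between them) and the multi-scale temporal enhancement (fusing frame-level with short- and long-term temporal features) can be read as the factorised spatio-temporal pattern sketched below. The kernel sizes, temporal window lengths, and additive fusion rule are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalSTConv(nn.Module):
    """Factorised spatio-temporal convolution: a 2D spatial convolution followed by a
    1D temporal convolution, with an extra LeakyReLU between the two to add one more
    nonlinearity per block."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.spatial = nn.Conv3d(in_channels, out_channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_channels, out_channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):                 # x: (N, C, T, H, W)
        x = self.act(self.spatial(x))     # extra LeakyReLU between the 2D and 1D convs
        return self.act(self.temporal(x))

class MultiScaleTemporal(nn.Module):
    """Fuse frame-level features with short- and long-term temporal context by
    max-pooling over increasingly long temporal windows (window sizes assumed)."""
    def __init__(self, windows=(3, 7)):
        super().__init__()
        self.windows = windows

    def forward(self, x):                 # x: (N, C, T, H, W)
        feats = [x]                       # frame-level features
        for w in self.windows:            # short-term and long-term windows
            feats.append(F.max_pool3d(x, kernel_size=(w, 1, 1),
                                      stride=1, padding=(w // 2, 0, 0)))
        return torch.stack(feats, dim=0).sum(dim=0)   # element-wise additive fusion

# Usage sketch on a batch of 30-frame, 64x44 silhouette feature maps.
if __name__ == "__main__":
    x = torch.randn(2, 32, 30, 64, 44)
    y = MultiScaleTemporal()(GlobalSTConv(32, 32)(x))
    print(y.shape)                        # torch.Size([2, 32, 30, 64, 44])
```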
History
  • Received: 2024-07-08
  • Revised: 2024-12-06
  • Accepted: 2024-12-11
  • Published online:
  • Published: