Cross-modal person re-identification algorithm based on multi-level joint clustering with subtle feature enhancement
DOI:
Author:
Affiliation:

School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

Author biography:

Corresponding author:

CLC number:

TP391.4

Fund project:

Supported by the National Natural Science Foundation of China (62271096)




    Abstract:

    Current research on cross-modal person re-identification focuses on reducing modality differences by extracting modality-shared features from global or local features via identity labels, but overlooks subtle yet discriminative features. This paper proposes a feature-enhanced clustering learning (FECL) network, which mines and enhances the subtle features of different modalities through global and local features, and combines a multi-level joint clustering learning strategy to minimize modality differences and intra-class variation. In addition, a random color transformation module is designed for the training data, which increases the interaction between modalities at the image input stage to overcome the influence of color deviation. Experiments on public datasets verify the effectiveness of the proposed method: in the all-search mode of the SYSU-MM01 dataset, Rank-1 and mAP reach 70.52% and 64.02%, respectively; in the V2I retrieval mode of the RegDB dataset, Rank-1 and mAP reach 88.88% and 80.93%, respectively.
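The abstract does not describe how the random color transformation module is implemented. One common way to increase cross-modality interaction at the image input in visible-infrared re-identification is to randomly collapse a visible RGB image to a single randomly chosen channel, mimicking the single-channel appearance of infrared images. The sketch below illustrates only that general idea; the function name, the channel-selection strategy, and the probability parameter `p` are illustrative assumptions, not the paper's actual design.

```python
import random
import numpy as np

def random_color_transform(img: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Randomly push a visible (RGB) image toward an infrared-like style.

    With probability `p`, replace all three channels of the H x W x 3
    image with one randomly chosen channel, producing a grayscale-like
    image; otherwise return the input unchanged. Strategy and default
    `p` are illustrative, not taken from the paper.
    """
    if random.random() >= p:
        return img                       # keep the original color image
    c = random.randrange(3)              # pick R, G, or B at random
    mono = img[:, :, c]
    return np.stack([mono, mono, mono], axis=2)
```

Applied only to visible-modality training images, such a transform removes color cues at random, so the network cannot rely on colors that the infrared modality never observes.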

Cite this article

FAN Xinyue, ZHANG Kuo, ZHANG Gan, LI Jiahui. Cross-modal person re-identification algorithm based on multi-level joint clustering with subtle feature enhancement[J]. Journal of Electronic Measurement and Instrumentation, 2024, 38(3): 94-103.

History
  • Received date:
  • Last revised date:
  • Accepted date:
  • Online publication date: 2024-05-17
  • Publication date: