Vehicle re-identification network with multi-view fusion and global feature enhancement
CLC number: TP391; TN919.8

Fund Project: Supported by the National Natural Science Foundation of China (61304205, 62272236), the Natural Science Foundation of Jiangsu Province (BK20191401, BK20201136), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX21_0363), and the College Students' Innovation and Entrepreneurship Training Program (XJDC202110300601, 202010300290, 202010300211, 202010300116E).


Abstract:

Vehicle re-identification is an important application in intelligent transportation. Most existing vehicle re-identification methods focus on pre-defined local-region features or on global appearance features. In complex traffic environments, however, traditional methods struggle both to locate the pre-defined local regions and to capture valuable global feature information of the vehicle. This paper therefore proposes an end-to-end dual-branch network that combines a multi-view-fusion hybrid attention mechanism with global feature enhancement, aiming to obtain more complete and diverse vehicle features by improving their representational ability and quality. A view parsing network segments the vehicle image into four viewpoint views, and a view-stitching method alleviates the information loss caused by inaccurate segmentation. To better highlight salient local regions in the stitched view, a hybrid attention module composed of a channel attention mechanism and a self-attention mechanism is proposed; it extracts key local information and the correlations among local regions from the stitched view, thereby emphasizing the fine-grained local details of the vehicle. In addition, a global feature enhancement module obtains the spatial and channel relations of the global feature through pooling and convolution. This module not only extracts semantically enhanced vehicle features but also preserves their detailed information, mitigating the influence of viewpoint and illumination changes on the captured vehicle images. Extensive experiments on the VeRi-776 and VehicleID datasets show that the method achieves 82.41% mAP, 98.63% CMC@1, and 99.23% CMC@5, outperforming existing methods.
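The abstract only outlines the two modules; the full design is given in the paper itself. As a purely illustrative, minimal PyTorch-style sketch (the module names HybridAttention and GlobalFeatureEnhancement, the SE-style channel branch, the reduction ratio, and the CBAM-like spatial branch are all assumptions, not the authors' implementation), the described combination of channel attention, self-attention, and pooling-plus-convolution enhancement could look as follows:

```python
# Minimal sketch of the two modules described in the abstract.
# All structural details (layer sizes, reduction ratio, exact composition)
# are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridAttention(nn.Module):
    """Channel attention followed by self-attention over spatial positions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels (SE-style; assumed).
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Self-attention: 1x1 convolutions produce query/key/value maps (assumed design).
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Channel attention: emphasise informative channels of the stitched view.
        w_ch = self.channel_fc(x.mean(dim=(2, 3)))            # (b, c)
        x = x * w_ch.view(b, c, 1, 1)
        # Self-attention: correlations between local positions in the stitched view.
        q = self.query(x).flatten(2).transpose(1, 2)           # (b, hw, c//8)
        k = self.key(x).flatten(2)                             # (b, c//8, hw)
        v = self.value(x).flatten(2)                           # (b, c, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)              # (b, hw, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out


class GlobalFeatureEnhancement(nn.Module):
    """Obtain channel and spatial relations of the global feature via pooling + conv."""

    def __init__(self, channels: int):
        super().__init__()
        # Channel relation from the globally pooled descriptor (assumed 1x1 conv).
        self.channel_conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial relation from channel-pooled maps (assumed 7x7 conv, CBAM-like).
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel relation: global average pooling, then 1x1 convolution.
        x = x * self.channel_conv(F.adaptive_avg_pool2d(x, 1))
        # Spatial relation: pool over channels (mean and max), then convolution.
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial_conv(spatial)
```

In a sketch of this kind, the hybrid attention module would act on the stitched multi-view feature map of the local branch, while the enhancement module would act on the global-branch feature map before the re-identification losses; the actual placement and layer configuration follow the paper.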

Cite this article:

孙伟, 徐凡, 张小瑞, 胡亚华, 赵宇煌. Vehicle re-identification network with multi-view fusion and global feature enhancement[J]. 电子测量与仪器学报 (Journal of Electronic Measurement and Instrumentation), 2023, 37(1): 78-86.

Online publication date: 2023-06-15