Sun Wei, Xu Fan, Zhang Xiaorui, Hu Yahua, Zhao Yuhuang. Vehicle re-identification network with multi-view fusion and global feature enhancement [J]. 电子测量与仪器学报, 2023, 37(1): 78-86
Vehicle re-identification network with multi-view fusion and global feature enhancement
|
DOI: 10.13382/j.issn.1000-7105.2023.01.009
Keywords: vehicle re-identification; view segmentation; view stitching; attention mechanism; feature enhancement
Funding: Supported by the National Natural Science Foundation of China (61304205, 62272236), the Natural Science Foundation of Jiangsu Province (BK20191401, BK20201136), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX21_0363), and the College Students' Innovation and Entrepreneurship Training Program (XJDC202110300601, 202010300290, 202010300211, 202010300116E)
|
Author | Institution
Sun Wei | 1. College of Automation, Nanjing University of Information Science & Technology; 2. Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing University of Information Science & Technology
Xu Fan | 1. College of Automation, Nanjing University of Information Science & Technology
Zhang Xiaorui | 2. Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing University of Information Science & Technology; 3. Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science & Technology; 4. Wuxi Research Institute, Nanjing University of Information Science & Technology; 5. College of Computer and Software, Nanjing University of Information Science & Technology
Hu Yahua | 1. College of Automation, Nanjing University of Information Science & Technology
Zhao Yuhuang | 1. College of Automation, Nanjing University of Information Science & Technology
|
Abstract:
Vehicle re-identification is one of the important applications in the field of intelligent transportation. Most existing vehicle re-identification methods focus on pre-defined local region features or global appearance features. However, in complex traffic environments, traditional methods struggle both to acquire the pre-defined local regions and to capture valuable global vehicle feature information. Therefore, an end-to-end dual-branch network with a multi-view-fusion hybrid attention mechanism and global feature enhancement is proposed. The network aims to obtain more complete and diverse vehicle features by improving both the representation ability and the quality of vehicle features. A view parsing network is used to segment the four views of the vehicle image, and a view stitching method alleviates the information loss caused by inaccurate segmentation. To better highlight salient local regions in the stitched views, a hybrid attention module consisting of a channel attention mechanism and a self-attention mechanism is proposed. Through this module, key local information and the correlations among local regions are obtained from the stitched views, respectively, which better highlights the detailed information of vehicle parts in the stitched views. In addition, a global feature enhancement module is proposed to obtain the spatial and channel relationships of global features through pooling and convolution. This module not only extracts semantically enhanced vehicle features but also preserves complete detail information in those features, mitigating the influence of factors such as viewpoint and illumination changes on the acquired vehicle images. Extensive experiments on the Veri-776 and VehicleID datasets show that mAP, CMC@1, and CMC@5 reach 82.41%, 98.63%, and 99.23%, respectively, outperforming existing methods.
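
The abstract describes segmenting a vehicle image into four views with a view parsing network and stitching them to limit the information lost by inaccurate segmentation. As one illustration of what such a stitching step might look like, the following PyTorch sketch crops each parsed view by its mask and concatenates the crops side by side; the label IDs, patch size, and layout are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical view-stitching step: crop each parsed view by its mask and place
# the crops side by side, keeping an empty slot for views that are not visible.
# Label IDs (1-4) and the 64x64 patch size are assumptions, not from the paper.
import torch
import torch.nn.functional as F

def stitch_views(image, view_mask, view_ids=(1, 2, 3, 4), patch_size=(64, 64)):
    """image: (3, H, W) float tensor; view_mask: (H, W) integer tensor of view labels."""
    patches = []
    for vid in view_ids:
        ys, xs = torch.nonzero(view_mask == vid, as_tuple=True)
        if ys.numel() == 0:                       # view not visible in this image
            patch = torch.zeros(3, *patch_size)   # empty slot keeps the layout fixed
        else:
            crop = image[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            patch = F.interpolate(crop.unsqueeze(0), size=patch_size,
                                  mode="bilinear", align_corners=False).squeeze(0)
        patches.append(patch)
    return torch.cat(patches, dim=-1)             # (3, 64, 4 * 64) stitched view
```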
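The hybrid attention module is described only as a combination of channel attention and self-attention applied to the stitched view. A minimal PyTorch-style sketch of one such combination, squeeze-and-excitation channel gating followed by non-local self-attention over spatial positions, is given below; the layer sizes and the composition order are assumptions.

```python
# Sketch of a hybrid attention block: channel gating followed by self-attention.
# This is one plausible realization, not the paper's exact module.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: global pooling -> bottleneck MLP -> per-channel gate
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        # Self-attention: 1x1 projections for query/key/value over spatial positions
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))     # learnable residual weight

    def forward(self, x):
        x = x * self.channel_gate(x)                  # emphasize informative channels
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c/8)
        k = self.key(x).flatten(2)                    # (b, c/8, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw) pairwise relations
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                   # residual self-attention
```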
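The global feature enhancement module is said to obtain the spatial and channel relationships of global features through pooling and convolution. One plausible, CBAM-style reading of that description is sketched below; it is an assumption about the design rather than the paper's exact module.

```python
# Sketch of a global feature enhancement block: a channel relation from pooled
# descriptors and a spatial relation from a convolution over pooled channel maps.
import torch
import torch.nn as nn

class GlobalFeatureEnhancement(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared MLP for the channel relation
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel relation: average- and max-pooled descriptors through the shared MLP
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        # Spatial relation: pool across channels, then a convolution produces a mask
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))
```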
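The reported metrics, mAP and CMC@k, are the standard re-identification measures. The NumPy sketch below shows how they are typically computed from a query-gallery distance matrix; variable names are illustrative, and the same-camera filtering used by the standard Veri-776/VehicleID protocols is omitted for brevity.

```python
# Sketch of re-identification evaluation: CMC@k and mAP from a distance matrix.
import numpy as np

def evaluate(dist, q_ids, g_ids, topk=(1, 5)):
    """dist: (num_query, num_gallery) distances; q_ids, g_ids: identity label arrays."""
    cmc = np.zeros(max(topk))
    aps = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                         # gallery sorted by distance
        matches = (g_ids[order] == q_ids[i]).astype(np.float64)
        if matches.sum() == 0:                              # query without ground truth
            continue
        first_hit = int(np.argmax(matches))                 # rank of first correct match
        cmc[first_hit:] += 1                                # counted for every k >= rank
        precision = np.cumsum(matches) / (np.arange(matches.size) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    results = {f"CMC@{k}": cmc[k - 1] / len(aps) for k in topk}
    results["mAP"] = float(np.mean(aps))
    return results
```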
|
|
|