Zhang Boxing, Ma Jingqi, Zhang Shouming, Li Chentong, Zhong Zhenyu. Person re-identification method based on global and local relation features [J]. Journal of Electronic Measurement and Instrumentation, 2022, 36(6): 205-212.
Person re-identification method based on global and local relation features
|
Keywords: person re-identification; ResNet50; spatial transformation network; feature fusion; multi-scale feature
Funding: Supported by the Key-Area Research and Development Program of Guangdong Province (2018B010108006), the Science and Technology Program of Guangzhou (202007040007), the Key-Area Research and Development Program of Guangdong Province (2020B090925002), and the Special Fund Project of the Guangdong Academy of Sciences for Building Domestic First-Class Research Institutions (2020GDASYL 20200302015).
|
|
Abstract (Chinese):
To address the low accuracy of person re-identification caused by large differences in image backgrounds and similar human appearances, a person re-identification method using feature fusion and multi-scale information is proposed. First, the global feature map of the person image is extracted with ResNet50_IBN. Second, a two-branch structure is designed: the first branch uses a spatial transformation network to adaptively align the global feature map in space, partitions the global feature map horizontally to obtain local features, and fuses the global feature with each local feature separately to mine the relations between features; the second branch adds convolution layers at four different scales to extract multi-scale features from the global image. Finally, at the inference stage, the features of the first and second branches are concatenated along the channel dimension and used as the pedestrian's comparison feature. Experiments on the Market-1501 and DukeMTMC datasets show that the proposed method outperforms feature-alignment and local-feature-extraction methods such as AlignedReID and EA-Net; on Market-1501, mAP and Rank-1 reach 86.77% and 94.83%, respectively.
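To make the first branch concrete, the following is a minimal PyTorch-style sketch of the steps named above: spatial-transformer alignment of the global feature map, horizontal partitioning into stripe features, and fusion of the global feature with each local feature. All module and parameter names here (SpatialTransformer, GlobalLocalFusionBranch, num_parts, the 1x1 fusion convolutions) are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: assumed module names and hyperparameters, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predicts a 2x3 affine transform and resamples the feature map (STN idea)."""
    def __init__(self, in_channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, 6)
        # start from the identity transform so early training leaves features unchanged
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data.copy_(torch.tensor([1.0, 0, 0, 0, 1.0, 0]))

    def forward(self, x):                              # x: (B, C, H, W)
        theta = self.fc(self.pool(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class GlobalLocalFusionBranch(nn.Module):
    """Aligns the global feature map, splits it horizontally, and fuses the
    global feature with each local (stripe) feature."""
    def __init__(self, in_channels=2048, num_parts=4):
        super().__init__()
        self.stn = SpatialTransformer(in_channels)
        self.num_parts = num_parts
        # one 1x1 reduction per (global, part) pair after channel concatenation
        self.fuse = nn.ModuleList(
            nn.Conv2d(in_channels * 2, in_channels, kernel_size=1)
            for _ in range(num_parts)
        )

    def forward(self, feat_map):                       # feat_map: (B, C, H, W)
        aligned = self.stn(feat_map)                   # adaptive spatial alignment
        g = F.adaptive_avg_pool2d(aligned, 1)          # global feature, (B, C, 1, 1)
        stripes = torch.chunk(aligned, self.num_parts, dim=2)   # horizontal split
        fused = []
        for conv, stripe in zip(self.fuse, stripes):
            l = F.adaptive_avg_pool2d(stripe, 1)       # local (stripe) feature
            fused.append(conv(torch.cat([g, l], dim=1)))   # global-local fusion
        return torch.cat([f.flatten(1) for f in fused], dim=1)  # (B, num_parts*C)
```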
Abstract (English):
A person re-identification method based on feature fusion and multi-scale information is proposed to address the low accuracy of person re-identification caused by large differences in image backgrounds and the similar global appearance of the human body. Firstly, the global feature map of the person image is extracted by ResNet50_IBN. Secondly, a branch structure is designed. In the first branch, a spatial transformation network is used to adaptively align the global feature map in space, and local features are obtained by horizontally partitioning the global feature map; the relation between the global feature and each local feature is mined by fusing the global feature with each local feature separately. The second branch adds convolution layers at four different scales to extract multi-scale features from the global image. Finally, at the inference stage, the features of the first and second branches are concatenated along the channel dimension and used as the comparison features of the pedestrian. Experiments on the Market-1501 and DukeMTMC datasets show that the proposed method performs better than feature-alignment and local-feature-extraction methods such as AlignedReID and EA-Net. On Market-1501, mAP and Rank-1 reach 86.77% and 94.83%, respectively.
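For the second branch and the inference step, the sketch below illustrates one way to apply convolutions at four different scales to the backbone feature map and then concatenate the two branch descriptors along the channel dimension. The kernel sizes, channel widths, and function names are assumptions for illustration and are not taken from the paper.

```python
# Sketch only: assumed scales and names; pairs with the branch-1 sketch above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBranch(nn.Module):
    """Applies convolutions at four different scales to the global feature map
    and pools each result into a vector."""
    def __init__(self, in_channels=2048, out_channels=256,
                 kernel_sizes=(1, 3, 5, 7)):           # four scales (assumed values)
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, k, padding=k // 2),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )

    def forward(self, feat_map):                       # feat_map: (B, C, H, W)
        feats = [F.adaptive_avg_pool2d(conv(feat_map), 1).flatten(1)
                 for conv in self.convs]
        return torch.cat(feats, dim=1)                 # (B, 4*out_channels)

@torch.no_grad()
def inference_feature(backbone, branch1, branch2, images):
    """At inference, concatenate the two branch descriptors along the channel
    dimension to form the pedestrian's comparison feature."""
    feat_map = backbone(images)                        # e.g. a ResNet50_IBN feature map
    return torch.cat([branch1(feat_map), branch2(feat_map)], dim=1)
```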
|
|
|