Chen Guangqiu, Wen Qizhang, Yin Wenqing, Duan Jin, Huang Dandan. Attentional residual dense connection fusion network for infrared and visible image fusion[J]. Journal of Electronic Measurement and Instrumentation, 2023, 37(8): 182-193.
Attentional residual dense connection fusion network for infrared and visible image fusion
  
Keywords: infrared and visible image fusion; auto-encoder network; residual dense connection; attention mechanism; spectral characteristic
Funding: Supported by the Major Scientific Instrument Development Project of the National Natural Science Foundation of China (62127813) and the Science and Technology Development Program of Jilin Province (20210203181SF).
Author and Institution
Chen Guangqiu  1. School of Electronics and Information Engineering, Changchun University of Science and Technology
Wen Qizhang  1. School of Electronics and Information Engineering, Changchun University of Science and Technology
Yin Wenqing  1. School of Electronics and Information Engineering, Changchun University of Science and Technology
Duan Jin  1. School of Electronics and Information Engineering, Changchun University of Science and Technology
Huang Dandan  1. School of Electronics and Information Engineering, Changchun University of Science and Technology
Abstract:
      To address the problems of missing scene information, blurred detail in target regions, and unnatural fused results that arise in current infrared and visible image fusion algorithms, an attentional residual dense fusion network (ARDFusion) for infrared and visible image fusion is proposed. The overall architecture is an auto-encoder network. First, an encoder containing max-pooling layers extracts multi-scale features from the source images. The attentional residual dense fusion network then fuses the feature maps at each scale: its residual dense blocks continuously store features and retain the feature information of every layer to the greatest extent, while its attention mechanism highlights target information and captures more detail related to targets and scenes. Finally, the fused features are fed into the decoder, where upsampling and convolutional layers reconstruct them into the fused image. Experimental results show that, compared with other typical fusion algorithms in the literature, the proposed network achieves better fusion results: it preserves the spectral characteristics of the visible image more faithfully while keeping infrared targets salient, and it attains good fusion performance in both subjective and objective evaluations.
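For concreteness, the sketch below illustrates in PyTorch the pipeline the abstract describes: a max-pooling encoder that extracts multi-scale features from each source image, a per-scale fusion stage built from a residual dense block followed by an attention gate, and a decoder that reconstructs the fused image with upsampling and convolutions. The module names (ARDFusionSketch, ResidualDenseBlock, ChannelAttention), channel widths, number of scales, block layouts, and the squeeze-and-excitation style attention are illustrative assumptions; the abstract does not specify the authors' actual design at this level of detail.

```python
# Minimal sketch of the encoder -> per-scale fusion -> decoder pipeline.
# All hyperparameters below are assumptions, not the published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDenseBlock(nn.Module):
    """Densely connected 3x3 convolutions with a residual shortcut (assumed layout)."""

    def __init__(self, channels, growth=16, num_layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.convs.append(nn.Conv2d(in_ch, growth, 3, padding=1))
            in_ch += growth
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # 1x1 conv back to the block width

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))  # dense connections
        return x + self.fuse(torch.cat(feats, dim=1))             # residual shortcut


class ChannelAttention(nn.Module):
    """Placeholder squeeze-and-excitation style gate; the paper's attention may differ."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # reweight channels to highlight salient responses


class ARDFusionSketch(nn.Module):
    """Encoder -> per-scale fusion -> decoder, with hypothetical widths 16/32/64."""

    def __init__(self, base=16, scales=3):
        super().__init__()
        chans = [base * 2 ** i for i in range(scales)]
        self.enc = nn.ModuleList(
            nn.Conv2d(1 if i == 0 else chans[i - 1], c, 3, padding=1)
            for i, c in enumerate(chans))
        self.fusion = nn.ModuleList(nn.Sequential(
            nn.Conv2d(2 * c, c, 3, padding=1),   # merge infrared and visible features
            ResidualDenseBlock(c),
            ChannelAttention(c)) for c in chans)
        self.dec = nn.ModuleList(
            nn.Conv2d(chans[i + 1] + chans[i], chans[i], 3, padding=1)
            for i in reversed(range(scales - 1)))
        self.out = nn.Conv2d(chans[0], 1, 3, padding=1)

    def encode(self, x):
        feats = []
        for i, conv in enumerate(self.enc):
            if i > 0:
                x = F.max_pool2d(x, 2)           # multi-scale features via max pooling
            x = F.relu(conv(x))
            feats.append(x)
        return feats

    def forward(self, ir, vis):
        f_ir, f_vis = self.encode(ir), self.encode(vis)
        fused = [m(torch.cat([a, b], dim=1))     # fuse each scale separately
                 for m, a, b in zip(self.fusion, f_ir, f_vis)]
        x = fused[-1]                            # start from the coarsest scale
        for conv, skip in zip(self.dec, reversed(fused[:-1])):
            x = F.interpolate(x, scale_factor=2, mode="nearest")  # upsample
            x = F.relu(conv(torch.cat([x, skip], dim=1)))
        return torch.sigmoid(self.out(x))        # reconstructed fused image


if __name__ == "__main__":
    net = ARDFusionSketch()
    ir = torch.rand(1, 1, 128, 128)   # single-channel infrared image
    vis = torch.rand(1, 1, 128, 128)  # visible luminance channel (assumption)
    print(net(ir, vis).shape)          # torch.Size([1, 1, 128, 128])
```

Fusing each scale independently before decoding mirrors the multi-scale design stated in the abstract; the actual ARDFusion network may differ in depth, attention form, colour handling, and training loss.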