Chen Guangqiu, Yin Wenqing, Wen Qizhang, Zhang Chenjie, Duan Jin. Polarization image fusion based on a dual-attention-mechanism generative adversarial network [J]. Journal of Electronic Measurement and Instrumentation, 2024, 38(4): 140-150.
Polarization image fusion based on a dual-attention-mechanism generative adversarial network
  
DOI:
Keywords: image fusion; polarization images; generative adversarial networks; attention mechanisms
Funding: Supported by the National Natural Science Foundation of China, Major Scientific Instrument Development Project (62127813)
Author Affiliations
Chen Guangqiu  School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Yin Wenqing  School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Wen Qizhang  School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Zhang Chenjie  School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Duan Jin  School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
Abstract:
      To address the problem that a single intensity image lacks polarization information and cannot provide sufficient scene information in adverse weather, this paper proposes a generative adversarial network with a dual attention mechanism for fusing intensity images and degree-of-polarization images. The network consists of a generator, comprising an encoder, a fusion module, and a decoder, together with a discriminator. The source images are first fed into the generator's encoder, where a convolutional layer and dense blocks extract features; the features are then fused in a texture-enhancement fusion module containing the attention mechanism, and the decoder finally produces the fused image. The discriminator consists mainly of two convolutional modules and two attention modules. During training, the adversarial game between generator and discriminator iteratively optimizes the generator's parameters, so that the generator outputs a high-quality fused image that retains the sparse features of the degree-of-polarization image without losing the information of the intensity image. Experiments show that the fused images obtained by this method are subjectively richer in texture and better match human visual perception, and that in the objective evaluation the SD metric improves by about 18.5% and the VIF metric by about 22.4%.
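The abstract's evaluation relies on SD (standard deviation of the fused image, a proxy for contrast) and on weighted fusion of the two source images. As a rough illustration only, the sketch below fuses an intensity image and a degree-of-polarization image with a softmax weighting over per-image activity; the function names and the weighting rule are illustrative assumptions, not the paper's learned attention modules, which are trained inside the GAN.

```python
import numpy as np

def activity_weights(intensity, dolp):
    """Softmax over mean absolute activity of each source image --
    a hand-crafted stand-in for learned attention weights (assumption)."""
    sa = np.abs(intensity).mean()
    sb = np.abs(dolp).mean()
    ea, eb = np.exp(sa), np.exp(sb)
    return ea / (ea + eb), eb / (ea + eb)

def fuse(intensity, dolp):
    """Pixel-wise weighted fusion of the two source images."""
    wa, wb = activity_weights(intensity, dolp)
    return wa * intensity + wb * dolp

def sd_metric(img):
    """SD metric used in the paper's objective evaluation:
    standard deviation of pixel values (higher = more contrast)."""
    return float(img.std())
```

Note that a fused image scoring well on SD need not score well on VIF, which models fidelity to human visual perception rather than raw contrast.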