Polarization Image Fusion Based on a Generative Adversarial Network with a Dual Attention Mechanism
DOI:
Author:
Affiliation:

School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, China

Author biography:

Corresponding author:

CLC number:

TP391.4

Fund project:

Science and Technology Development Program of Jilin Province (20210203120SF)




Abstract:

A single intensity image lacks polarization information and cannot provide sufficient scene information under adverse weather conditions. To address this problem, this paper proposes a generative adversarial network (GAN) with a dual attention mechanism for fusing intensity images and degree-of-polarization images. The network consists of a generator, which comprises an encoder, a fusion module, and a decoder, and a discriminator. The source images are first fed into the generator's encoder, where a convolutional layer and a dense block extract features; the features are then fused in a texture-enhancement fusion module equipped with an attention mechanism, and the decoder finally produces the fused image. The discriminator consists mainly of two convolutional modules and two attention modules. During training, the generator's parameters are iteratively optimized through the adversarial game between the two networks, so that the generator outputs a high-quality fused image that retains the sparse features of the degree-of-polarization image without losing the information of the intensity image. Experiments show that the fused images obtained by this method are subjectively richer in texture and better match human visual perception, and that on objective metrics the standard deviation (SD) improves by about 18.5% and the visual information fidelity (VIF) by about 22.4%.
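The abstract does not give implementation details of the attention-based fusion module, so the following is only an illustrative sketch, not the authors' code: a simple channel-attention weighted fusion of intensity and polarization feature maps, where each source's channels are gated by a sigmoid of their global average pooling and the two gates are normalized so the fusion is a convex combination. The function names and the sigmoid-gate design are assumptions for illustration.

```python
import numpy as np

def channel_attention(feat):
    """Per-channel attention gate: global average pooling followed by
    a sigmoid, giving one weight per channel. feat has shape (C, H, W)."""
    pooled = feat.mean(axis=(1, 2))          # (C,)
    return 1.0 / (1.0 + np.exp(-pooled))     # sigmoid gate in (0, 1)

def attention_fusion(feat_intensity, feat_polar):
    """Fuse two feature maps of shape (C, H, W) as a per-channel
    convex combination weighted by each source's attention gate."""
    w_i = channel_attention(feat_intensity)
    w_p = channel_attention(feat_polar)
    total = w_i + w_p                        # > 0 since sigmoids are positive
    w_i, w_p = w_i / total, w_p / total      # normalize so weights sum to 1
    return (w_i[:, None, None] * feat_intensity +
            w_p[:, None, None] * feat_polar)
```

Because the weights sum to one per channel, every fused pixel lies between the corresponding intensity and polarization feature values; a real texture-enhancement module would learn these weights (e.g., with convolutional layers) rather than compute them in closed form.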

History
  • Received: 2023-11-23
  • Revised: 2024-03-13
  • Accepted: 2024-04-11
  • Published online:
  • Publication date: