
Multi-attention generative low-light image enhancement method

  • Abstract: Low-light image enhancement is a key technique for improving the quality of images captured in low-light environments, thereby providing a reliable basis for subsequent vision tasks. To improve the generalization of low-light image enhancement in unseen scenes and to overcome the traditional reliance on paired training datasets, this paper proposes a low-light image enhancement method based on a single-path generative adversarial network (GAN) combined with multiple attention mechanisms. First, a multi-attention-guided generator network is constructed that uses a multi-dimensional attention mechanism to extract inverted illumination features from the image, which in turn guide the enhancement of the low-light input; an improved self-attention mechanism strengthens the long-range dependencies among image pixels, improving the network's generalization ability. Second, a dual-discriminator structure is built to judge image authenticity at both the global and local levels. Finally, network training is constrained by a self-feature-preserving loss and the adversarial loss of the GAN. Experimental results on unpaired datasets show that the method not only removes the paired-dataset restriction but also adapts to low-light enhancement tasks under complex illumination conditions, producing good visual quality and achieving the best objective and subjective evaluation results on the public MEF, LIME, and DICM datasets.
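The self-attention mechanism mentioned in the abstract is what lets every pixel attend to every other pixel, building the long-range dependencies the method relies on. The following is a minimal NumPy sketch of standard scaled dot-product self-attention over flattened feature-map positions; it is not the paper's improved variant, and all names, shapes, and projection matrices here are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, wq, wk, wv):
    """Scaled dot-product self-attention over spatial positions.

    feat: (N, C) array -- N = H*W flattened feature-map positions, C channels.
    wq, wk, wv: (C, D) projection matrices (learned in a real network).
    Returns an (N, D) map in which every position aggregates information
    from every other position, i.e. a long-range dependency.
    """
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (N, N) pairwise affinities
    attn = softmax(scores, axis=-1)           # each row sums to 1
    return attn @ v

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))           # a toy 4x4 feature map, 8 channels
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(feat, wq, wk, wv)
print(out.shape)  # (16, 8)
```

Because the (N, N) attention map couples all spatial positions, the output at one pixel can depend on illumination cues anywhere in the image, which is what distinguishes this from a purely convolutional (local) receptive field.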

     
