DewaterNet: A fusion adversarial real underwater image enhancement network

https://doi.org/10.1016/j.image.2021.116248

Highlights

  • We propose a simple and effective fusion adversarial network for enhancing real underwater images.

  • We employ a multi-term objective function to correct color casts.

  • Numerous experiments demonstrate visually promising results.

  • We conduct an ablation study to show the effect of each network component and loss term.

Abstract

Underwater image enhancement algorithms have attracted much attention in underwater vision tasks. However, these algorithms are mainly evaluated on different datasets and metrics. In this paper, we utilize an effective, public underwater benchmark dataset covering diverse underwater degradation scenes to enlarge the scale of testing, and we propose a fusion adversarial network for enhancing real underwater images. The multiple inputs and the well-designed multi-term adversarial loss not only introduce features from multiple input images, but also balance the impact of the individual loss terms. Tested on the benchmark dataset, the proposed network achieves better or comparable performance than other state-of-the-art methods in both qualitative and quantitative evaluations. Moreover, an ablation study experimentally validates the contribution of each component and the hyper-parameter settings of the loss functions.

Introduction

The quality of underwater optical imaging is of great significance to the exploration and utilization of the deep ocean [1]. However, raw underwater images seldom fulfill the requirements of low-level and high-level vision tasks because of the severe underwater degradation process depicted in Fig. 1. Underwater images are degraded by two major factors. First, owing to depth, lighting conditions, water type, and the wavelength-dependent attenuation of light, the color tones of underwater images are often distorted [2], [3]. For example, in clear water, red light disappears first, at a water depth of about 5 m. As the depth increases, orange, yellow, and green light disappear in turn. Blue light has the shortest wavelength and travels the furthest in water [4]. Second, both suspended particles and the water itself reduce scene contrast and produce haze-like effects by absorbing and scattering the light that reaches the camera lens [5].
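The wavelength-dependent attenuation described above can be sketched with the commonly used simplified image formation model I = J·t + B·(1 − t), where the transmission t decays exponentially with depth at a different rate per color channel. The attenuation coefficients and background color below are illustrative values for clear water, not parameters from this paper:

```python
import numpy as np

# Illustrative per-channel attenuation coefficients (1/m) for clear water:
# red is absorbed fastest, blue the slowest. Values are assumptions, not measured.
BETA = np.array([0.60, 0.15, 0.08])  # R, G, B

def degrade(scene, depth_m, background=np.array([0.1, 0.4, 0.6])):
    """Simplified underwater image formation:
    I = J * t + B * (1 - t), with t = exp(-beta * d) per channel."""
    t = np.exp(-BETA * depth_m)            # channel-wise transmission
    return scene * t + background * (1.0 - t)

# A pure-red patch loses almost all of its red signal by ~5 m depth,
# while the blue background color increasingly dominates.
red_patch = np.array([1.0, 0.0, 0.0])
shallow = degrade(red_patch, 1.0)
deep = degrade(red_patch, 5.0)
```

Running this shows the red channel collapsing toward the background value with depth, which is exactly the color-cast behavior the enhancement network must invert.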

To solve the above-mentioned problems, existing underwater image enhancement methods can be divided into three categories: non-model-based methods [6], [7], [8], [9], [10], [11], model-based methods [2], [12], [13], [14], [15], [16], [17], [18], and deep learning-based methods [19], [20], [21], [22], [23], [24], [25], [26]. Non-model-based methods focus on adjusting image pixel values to produce a subjectively and visually appealing image without modeling the underwater image formation process. Model-based methods recover underwater images by constructing a degradation model and then estimating its parameters from prior assumptions. Because of the lack of abundant training data and the complexity of real underwater conditions, neither pixel-value adjustment nor physical priors performs well across diverse underwater scenes. Deep learning [27] has achieved convincing success on low-level vision tasks such as image dehazing [28], [29], [30], image deraining [31], [32], [33], low-light image enhancement [34], and image super-resolution [35], and some researchers have applied deep learning to underwater image processing [19], [20], [21], [22], [23], [24], [25], [26], [36].

In this paper, we propose a novel fusion generative adversarial network named DewaterNet, and the main contributions of this paper are summarized as follows:

  • We propose a simple and effective fusion adversarial network that employs a fusion strategy to extract features from both the degraded underwater image and its enhanced counterpart, without relying on specialized priors.

  • A multi-term objective function combining the Lgt, Lfe, Lssim, and Lpsnr losses is leveraged to correct color casts effectively, and spectral normalization is utilized to improve image quality. In addition, an ablation study experimentally shows the effect of each network component and loss term in our method.

  • To cover different underwater conditions, results on a large-scale real-world underwater image enhancement benchmark dataset (UIEBD) demonstrate the superiority of the proposed method in both qualitative and quantitative evaluations. UIEBD includes greenish, bluish, limited-illumination, and fuzzy categories, which helps get rid of the constraints of specific underwater scenes. Furthermore, our method can enhance images degraded under different cameras and benefits underwater video.
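The multi-term objective in the second contribution can be sketched as a weighted sum of two L1 fidelity terms (to the ground truth x and to the fusion-enhanced input x_fe) plus SSIM- and PSNR-based terms. The weights, the global single-window SSIM, and the PSNR scaling below are illustrative assumptions; the paper's exact formulations and hyper-parameters are not reproduced here:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over the whole image (no sliding window), for images in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = ((x - y) ** 2).mean()
    return 10.0 * np.log10(peak ** 2 / mse)

def multi_term_loss(out, gt, x_fe, w=(1.0, 0.5, 0.2, 0.05)):
    """Weighted sum of the four terms; the weights w are illustrative only."""
    l_gt = np.abs(out - gt).mean()          # fidelity to the ground truth x
    l_fe = np.abs(out - x_fe).mean()        # fidelity to the fusion-enhanced x_fe
    l_ssim = 1.0 - ssim_global(out, gt)     # structural-similarity term
    l_psnr = -psnr(out, gt) / 40.0          # reward higher PSNR (scaled)
    return w[0] * l_gt + w[1] * l_fe + w[2] * l_ssim + w[3] * l_psnr
```

An output close to the ground truth yields a smaller loss than a heavily shifted one, which is what lets the combined objective penalize residual color casts while still rewarding structural fidelity.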

Related work

To solve the above-mentioned problems, existing underwater image enhancement methods can be divided into three categories: non-model-based methods [6], [7], [8], [9], [10], [11], [37], [38], model-based methods [2], [12], [13], [14], [15], [16], [17], [18], [39], and deep learning-based methods [19], [20], [21], [22], [23], [24], [25], [26], [40].

Methodology

To address color casts, low contrast, and haze-like effects, we effectively blend multiple inputs within a generative adversarial network [51]. As shown in Fig. 2, the generator employs two inputs in a fully convolutional network and combines two simple basic blocks. The Lgt loss and the Lfe loss preserve image features of the ground truth x and of the enhanced image xfe produced by the fusion enhancement method [8], respectively. Furthermore, the Lssim loss and the Lpsnr loss are utilized to improve the structural similarity and overall quality of the enhanced results.
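The second generator input xfe comes from the fusion enhancement method [8], which fuses white-balanced and contrast-enhanced versions of the raw image. As a rough stand-in, the sketch below applies only gray-world white balance followed by a mild gamma stretch; the actual method additionally builds saliency and contrast weight maps for the fusion, so this is an illustration of the preprocessing idea, not the cited algorithm:

```python
import numpy as np

def fusion_enhance(img):
    """Crude stand-in for the fusion-enhance preprocessing that produces x_fe:
    gray-world white balance followed by a mild gamma contrast stretch.
    img is an H x W x 3 float array in [0, 1]."""
    # Gray-world white balance: scale each channel so its mean matches
    # the global mean over all channels.
    means = img.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(img * (means.mean() / (means + 1e-8)), 0.0, 1.0)
    # Mild contrast stretch via gamma correction (gamma < 1 brightens).
    return balanced ** 0.8

# The generator would then receive the pair (raw, fusion_enhance(raw)).
```

For a bluish-tinted input, the channel means are pulled toward each other, which is exactly the kind of coarse color correction the Lfe loss asks the generator to preserve.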

Experiments

In this section, we first describe the detailed setup of DewaterNet and the UIEBD test dataset. We then qualitatively and quantitatively analyze the performance of the network by comparing it with other state-of-the-art methods. Finally, we conduct an ablation study to demonstrate the effect of each component.

Conclusion

This paper presents a fusion adversarial network named DewaterNet for underwater image enhancement. The proposed DewaterNet combines basic blocks with a multi-term loss function to correct color casts effectively and produce visually pleasing enhanced results; to our knowledge, this is the first attempt to blend two inputs in a generative adversarial network for underwater tasks. Numerous experiments on the benchmark dataset demonstrate the superiority of DewaterNet. The ablation study experimentally validates the contribution of each component and loss term.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 61701245 and 61701247, in part by the Startup Foundation for Introducing Talent of NUIST, China, under Grant 2243141701030, in part by the College Students Practice Innovation Training Program of NUIST, China, under Grant 201910300079Y, and in part by a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.

CRediT authorship contribution statement

Hanyu Li: Methodology, Software, Validation, Writing - original draft. Peixian Zhuang: Funding acquisition, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (64)

  • Berman, D., et al., Underwater single image color restoration using haze-lines and a new quantitative dataset, IEEE Trans. Pattern Anal. Mach. Intell. (2020)

  • Jaffe, J.S., Underwater optical imaging: the past, the present, and the prospects, IEEE J. Ocean. Eng. (2014)

  • Iqbal, K., et al., Enhancing the low quality images using unsupervised colour correction method

  • Ghani, A.S.A., et al., Underwater image quality enhancement through integrated color model with Rayleigh distribution, Appl. Soft Comput. (2015)

  • Ancuti, C., et al., Enhancing underwater images and videos by fusion

  • Fu, X., et al., A retinex-based enhancing approach for single underwater image

  • Gao, S.-B., et al., Underwater image enhancement using adaptive retinal mechanisms, IEEE Trans. Image Process. (2019)

  • Zhuang, P., et al., Underwater image enhancement using an edge-preserving filtering Retinex algorithm, Multimedia Tools Appl. (2020)

  • Li, C.-Y., et al., Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior, IEEE Trans. Image Process. (2016)

  • Drews, P.L., et al., Underwater depth estimation and image restoration based on single images, IEEE Comput. Graph. Appl. (2016)

  • Peng, Y.-T., et al., Underwater image restoration based on image blurriness and light absorption, IEEE Trans. Image Process. (2017)

  • Akkaynak, D., Treibitz, T., Sea-thru: A method for removing water from underwater images, in: Proceedings of the IEEE...

  • Song, W., et al., Enhancement of underwater images with statistical model of background light and optimization of transmission map, IEEE Trans. Broadcast. (2020)

  • Wang, Y., et al., Single underwater image restoration using adaptive attenuation-curve prior, IEEE Trans. Circuits Syst. I. Regul. Pap. (2017)

  • Li, J., et al., WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images, IEEE Robot. Autom. Lett. (2017)

  • Li, C., et al., Emerging from water: Underwater image color correction based on weakly supervised color transfer, IEEE Signal Process. Lett. (2018)

  • Fabbri, C., et al., Enhancing underwater imagery using generative adversarial networks

  • Li, C., et al., An underwater image enhancement benchmark dataset and beyond, IEEE Trans. Image Process. (2020)

  • Guo, Y., et al., Underwater image enhancement using a multiscale dense generative adversarial network, IEEE J. Ocean. Eng. (2020)

  • Liu, X., et al., MLFcGAN: Multilevel feature fusion-based conditional GAN for underwater image color correction, IEEE Geosci. Remote Sens. Lett. (2019)

  • LeCun, Y., et al., Deep learning, Nature (2015)

  • Cai, B., et al., Dehazenet: An end-to-end system for single image haze removal, IEEE Trans. Image Process. (2016)
The co-first authors contributed equally.