DewaterNet: A fusion adversarial real underwater image enhancement network
Introduction
Underwater optical imaging quality is of great significance to the exploration and utilization of the deep ocean [1]. However, raw underwater images seldom meet the requirements of low-level and high-level vision tasks because of the severe underwater degradation depicted in Fig. 1. Underwater images degrade rapidly for two major reasons. First, depending on depth, lighting conditions, water type, and light wavelength, the color tones of underwater images are often distorted [2], [3]. For example, in clean water, red light disappears first, at about 5 m depth; as depth increases, orange, yellow, and green light disappear in turn. Blue light has the shortest wavelength and travels the furthest in water [4]. Second, both suspended particles and the water itself reduce scene contrast and produce haze-like effects by absorbing and scattering the light reaching the camera lens [5].
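The wavelength-dependent attenuation described above is commonly written as a simplified per-channel image formation model, I_c = J_c·t_c + B_c·(1 − t_c) with transmission t_c = exp(−β_c·d). The sketch below illustrates this model only; the attenuation coefficients and veiling-light values are illustrative placeholders, not measured constants:

```python
import math

# Simplified underwater image formation model (per color channel c):
#   I_c = J_c * t_c + B_c * (1 - t_c),  with transmission t_c = exp(-beta_c * d)
# Coefficients below are illustrative only: red light attenuates fastest.
BETA = {"r": 0.60, "g": 0.20, "b": 0.05}   # hypothetical attenuation (1/m)
VEIL = {"r": 0.05, "g": 0.35, "b": 0.55}   # hypothetical background (veiling) light

def degrade_pixel(j, depth_m):
    """Degrade one RGB pixel j = {'r'/'g'/'b': value in [0, 1]} at a given depth."""
    out = {}
    for c in "rgb":
        t = math.exp(-BETA[c] * depth_m)   # channel transmission
        out[c] = j[c] * t + VEIL[c] * (1.0 - t)
    return out

gray = {"r": 0.8, "g": 0.8, "b": 0.8}      # a bright gray patch in the scene
shallow = degrade_pixel(gray, 1.0)
deep = degrade_pixel(gray, 10.0)
# Red fades much faster than blue with depth, producing the familiar bluish cast.
```

Running the sketch at increasing depths reproduces the color-cast behavior the paragraph describes: the red channel collapses toward the background light while blue survives.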
To solve the above-mentioned problems, existing underwater image enhancement methods can be divided into three categories: non-model-based methods [6], [7], [8], [9], [10], [11], model-based methods [2], [12], [13], [14], [15], [16], [17], [18], and deep learning-based methods [19], [20], [21], [22], [23], [24], [25], [26]. Non-model-based methods focus on adjusting image pixel values to produce a subjectively and visually appealing image without modeling the underwater image formation process. Model-based methods recover underwater images by constructing a degradation model and then estimating its parameters from prior assumptions. Owing to the lack of abundant training data and the complexity of real underwater scenes, pixel-value adjustment and physical priors do not perform well across diverse underwater conditions. Deep learning [27] has achieved convincing success on low-level vision tasks, such as image dehazing [28], [29], [30], image deraining [31], [32], [33], low-light image enhancement [34], and image super-resolution [35], and some researchers have applied deep learning to underwater image processing [19], [20], [21], [22], [23], [24], [25], [26], [36].
In this paper, we propose a novel fusion generative adversarial network named DewaterNet, and the main contributions of this paper are summarized as follows:
- We propose a simple and effective fusion adversarial network that employs a fusion method to extract features from both the degraded underwater image and its enhanced counterpart without specialized priors.
- A multi-term objective function combining four loss components is leveraged to correct color casts effectively, and spectral normalization is utilized to improve image quality. In addition, an ablation study experimentally shows the effect of each network component and each loss term in our method.
- To evaluate performance under different underwater conditions, enhanced results on a large-scale real-world underwater image enhancement benchmark dataset (i.e., UIEBD) demonstrate the superiority of the proposed method in both qualitative and quantitative evaluations. UIEBD includes greenish, bluish, limited-illumination, and fuzzy categories, which helps avoid the constraints of specific underwater scenes. Furthermore, our method can enhance images degraded under different cameras and benefits underwater video.
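Since the exact loss symbols are not reproduced in this snippet, the sketch below only illustrates the general pattern of a multi-term objective: a weighted sum of individual loss components. The components (L1 and MSE here) and the weights are placeholders for illustration, not DewaterNet's actual terms:

```python
# Minimal sketch of a multi-term objective: the total loss is a weighted sum
# of individual components. The component losses and weights below are
# placeholders for illustration only, not DewaterNet's actual terms.

def l1_loss(pred, target):
    """Mean absolute error over flat pixel lists."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse_loss(pred, target):
    """Mean squared error over flat pixel lists."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def total_loss(pred, target, weights=(1.0, 0.5)):
    """Weighted combination of component losses (hypothetical weights)."""
    w1, w2 = weights
    return w1 * l1_loss(pred, target) + w2 * mse_loss(pred, target)

pred = [0.2, 0.4, 0.9]     # toy predicted pixels
target = [0.0, 0.5, 1.0]   # toy reference pixels
```

In practice each component targets a different failure mode (e.g., pixel fidelity vs. structural similarity), and the weights balance their gradients during training.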
Section snippets
Related work
To solve the above-mentioned problems, existing underwater image enhancement methods can be divided into three categories: non-model-based methods [6], [7], [8], [9], [10], [11], [37], [38], model-based methods [2], [12], [13], [14], [15], [16], [17], [18], [39], and deep learning-based methods [19], [20], [21], [22], [23], [24], [25], [26], [40].
Methodology
To address color casts, low contrast, and haze-like effects, we effectively blend multiple inputs within a generative adversarial network [51]. As shown in Fig. 2, the generator employs two inputs in a fully convolutional network and combines two simple basic blocks. Two loss terms preserve the image features of the ground truth and of the enhanced images produced by the fusion enhancement method [8], respectively. Furthermore, two additional loss terms are utilized to…
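The two-input idea from this snippet, feeding the generator both the raw underwater image and a pre-enhanced version, can be sketched at the data level as channel-wise concatenation. This shows only the fusion of inputs; the convolutional generator itself is omitted, and the nested-list representation is an illustrative stand-in for a real tensor:

```python
# Sketch of the two-input fusion step: the raw underwater image and a
# pre-enhanced image (e.g., from a classical fusion method) are stacked
# along the channel axis before entering the generator. Nested lists
# stand in for an H x W x C tensor; this is illustrative only.

def concat_channels(raw, enhanced):
    """Stack two H x W x 3 images (nested lists) into one H x W x 6 input."""
    assert len(raw) == len(enhanced) and len(raw[0]) == len(enhanced[0])
    return [
        [raw[i][j] + enhanced[i][j]          # list concatenation: 3 + 3 channels
         for j in range(len(raw[0]))]
        for i in range(len(raw))
    ]

raw = [[[0.1, 0.2, 0.3]]]          # 1 x 1 RGB image
enh = [[[0.4, 0.5, 0.6]]]          # its pre-enhanced counterpart
fused = concat_channels(raw, enh)  # 1 x 1 x 6 input to the generator
```

The design choice is that the network sees both the degraded colors and a prior-free enhanced estimate, so it can learn which cues to trust per region.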
Experiments
In this section, we first describe the detailed setup of DewaterNet and the UIEBD test dataset. We then qualitatively and quantitatively analyze the performance of the network by comparing it with other state-of-the-art methods. Finally, we conduct an ablation study to demonstrate the effect of each component.
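Quantitative evaluation against a reference (as on UIEBD, where ground-truth images are available) is often reported with full-reference metrics such as PSNR. A minimal sketch over flat pixel lists in [0, 1] follows; the exact metrics used in the paper may differ:

```python
import math

# Peak signal-to-noise ratio between an enhanced image and its reference.
# Higher is better; identical images yield infinity. Flat pixel lists in
# [0, 1] stand in for full images; this is an illustrative sketch only.

def psnr(pred, ref, peak=1.0):
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

enhanced = [0.50, 0.52, 0.49, 0.51]         # toy enhanced pixels
reference = [0.50, 0.50, 0.50, 0.50]        # toy ground-truth pixels
score = psnr(enhanced, reference)
```

No-reference underwater metrics (e.g., scores designed for colorfulness and contrast) complement PSNR when no ground truth exists, which is the common case for real-world underwater footage.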
Conclusion
This paper presents a fusion adversarial network named DewaterNet for underwater image enhancement. The proposed DewaterNet, combining basic blocks with a multi-term loss function, corrects color effectively and produces visually pleasing enhanced results; it is the first attempt to blend two inputs in a generative adversarial network for underwater tasks. Numerous experiments on the benchmark dataset are conducted to demonstrate the superiority of DewaterNet. The ablation study experimentally…
Funding
This work was supported in part by the National Natural Science Foundation of China under Grants 61701245 and 61701247, in part by the Startup Foundation for Introducing Talent of NUIST, China, under Grant 2243141701030, in part by the College Students Practice Innovation Training Program of NUIST, China, under Grant 201910300079Y, and in part by a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
CRediT authorship contribution statement
Hanyu Li: Methodology, Software, Validation, Writing - original draft. Peixian Zhuang: Funding acquisition, Writing - review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (64)
- et al., Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light, IEEE Trans. Circuits Syst. Video Technol. (2020)
- et al., Automatic red-channel underwater image restoration, J. Vis. Commun. Image Represent. (2015)
- et al., Underwater scene prior inspired deep underwater image and video enhancement, Pattern Recognit. (2020)
- et al., Underwater image enhancement based on conditional generative adversarial network, Signal Process., Image Commun. (2020)
- et al., Single image rain removal via a deep decomposition–composition network, Comput. Vis. Image Underst. (2019)
- et al., A hybrid method for underwater image correction, Pattern Recognit. Lett. (2017)
- et al., Underwater image enhancement with global-local networks and compressed-histogram equalization, Signal Process., Image Commun. (2020)
- et al., A fusion-based enhancing method for weakly illuminated images, Signal Process. (2016)
- Computer modeling and the design of optimal underwater imaging systems, IEEE J. Ocean. Eng. (1990)
- D. Berman, T. Treibitz, S. Avidan, Diving into haze-lines: Color restoration of underwater images, in: Proc. British...
- Underwater single image color restoration using haze-lines and a new quantitative dataset, IEEE Trans. Pattern Anal. Mach. Intell.
- Underwater optical imaging: the past, the present, and the prospects, IEEE J. Ocean. Eng.
- Enhancing the low quality images using unsupervised colour correction method
- Underwater image quality enhancement through integrated color model with Rayleigh distribution, Appl. Soft Comput.
- Enhancing underwater images and videos by fusion
- A retinex-based enhancing approach for single underwater image
- Underwater image enhancement using adaptive retinal mechanisms, IEEE Trans. Image Process.
- Underwater image enhancement using an edge-preserving filtering Retinex algorithm, Multimedia Tools Appl.
- Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior, IEEE Trans. Image Process.
- Underwater depth estimation and image restoration based on single images, IEEE Comput. Graph. Appl.
- Underwater image restoration based on image blurriness and light absorption, IEEE Trans. Image Process.
- Enhancement of underwater images with statistical model of background light and optimization of transmission map, IEEE Trans. Broadcast.
- Single underwater image restoration using adaptive attenuation-curve prior, IEEE Trans. Circuits Syst. I. Regul. Pap.
- WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images, IEEE Robot. Autom. Lett.
- Emerging from water: Underwater image color correction based on weakly supervised color transfer, IEEE Signal Process. Lett.
- Enhancing underwater imagery using generative adversarial networks
- An underwater image enhancement benchmark dataset and beyond, IEEE Trans. Image Process.
- Underwater image enhancement using a multiscale dense generative adversarial network, IEEE J. Ocean. Eng.
- MLFcGAN: Multilevel feature fusion-based conditional GAN for underwater image color correction, IEEE Geosci. Remote Sens. Lett.
- Deep learning, Nature
- Dehazenet: An end-to-end system for single image haze removal, IEEE Trans. Image Process.
Cited by (41)
- Algorithms for improving the quality of underwater optical images: A comprehensive review, 2024, Signal Processing
- COC-UFGAN: Underwater image enhancement based on color opponent compensation and dual-subnet underwater fusion generative adversarial network, 2024, Journal of Visual Communication and Image Representation
- Learning by competing: Competitive multi-generator based adversarial learning, 2023, Applied Soft Computing
- Hierarchical attention aggregation with multi-resolution feature learning for GAN-based underwater image enhancement, 2023, Engineering Applications of Artificial Intelligence
- Multi-view underwater image enhancement method via embedded fusion mechanism, 2023, Engineering Applications of Artificial Intelligence
1 The co-first authors contributed equally.