Enhancement algorithm for high visibility of underwater images

Underwater image enhancement has been attracting researchers in the present-day scenario for exploring marine life. But underwater images suffer from various degradations, such as haziness, low contrast and faded colours, due to the absorption and scattering of light in water. To overcome these issues, the present paper proposes an enhancement method for underwater images. The proposed enhancement method comprises three steps, i.e. automatic white balancing, dehazing and Rayleigh stretching in the spatial domain. The automatic white balancing technique removes the colour cast from underwater images. The haze removal algorithm efficiently overcomes the haziness, but it also reduces the local contrast and makes the image appear dull. To improve the dehazing result and the visual quality simultaneously, the proposed method processes the image with a histogram stretching technique based on the Rayleigh distribution. The proposed method corrects the colour cast and improves the contrast, along with removing the haze from underwater images. Subjective and objective analyses on the U45 dataset validate the efficiency of the proposed method, and the enhanced images achieve better visual quality as compared to the existing methods. High values of EMEE, EME, UIQM and UCIQE for different types of underwater images further prove the potential and efficacy of the proposed method.


INTRODUCTION
A huge unexplored world beneath the oceans is attracting various researchers and scientists to study and investigate the variety of marine life forms, incredible landscapes, mysterious shipwrecks and beautiful coral reefs. The underwater environment is also gaining substantial attention due to the increase in optical robotic applications and marine engineering [1, 2]. But the complex underwater environment and poor lighting conditions are major challenges for capturing good-quality underwater images. Water is a denser medium than air, and sunlight is partially reflected and refracted as it enters the ocean due to the transition from one medium to the other. Less light is reflected when the ocean surface is calm and smooth than when it is turbulent. The part of the light that penetrates the ocean surface is refracted because light travels faster in air than in water. These refracted rays further interact with water molecules and suspended solid particles, leading to scattering and absorption. Scattering reduces the intensity of light, causing loss of contrast and haziness in underwater images, whereas absorption of light leads to diminished colour quality of underwater images [3, 4]. The light propagation model, along with its three components, i.e. the direct component and the backward and forward scattering components, is shown in Figure 1.
The light spectrum that is completely visible in the atmosphere is largely absorbed within 10 m (i.e. 33 feet) of water, and almost no light travels below a depth of 150 m (i.e. 490 feet). Water absorbs the wavelengths of visible light selectively. The longer wavelengths of red, orange and yellow can penetrate to approximately 49, 98 and 164 feet, respectively, whereas the shorter wavelengths of green, blue, indigo and violet can penetrate even further. Red light has the least energy in the visible spectrum, as it has the longest wavelength; wavelength decreases and energy increases from red to violet across the spectrum [3]. Therefore, blue penetrates the deepest, leading to the blue colour of underwater images. Figure 2 shows the absorption of the visible spectrum in the ocean.
Attenuation (i.e. scattering and absorption) of light in water introduces various degradations in underwater images, such as poor colours, diminished contrast, haziness and blurriness, making them unpleasant for the viewer (Figure 1 shows the light propagation model in water [3]; Figure 2 shows the colour penetration pattern in the ocean [4]). So, enhancement of underwater images is an interesting research area for marine biologists, researchers and scientists. Various enhancement and restoration methods [5-11] have been used in the literature to improve the visibility of such degraded underwater images. However, these methods still suffer from various issues, such as poor colours, low contrast and haziness, which limit their practicality despite their valuable achievements.
The present paper proposes a method to enhance the visible quality of underwater images and reduce the haze in them. The proposed method also intensifies the contrast and colours of underwater images, making it applicable to real-time scenarios. The proposed method is thus an amalgamation that balances all three parameters, namely colour, contrast and haze, to produce a good-quality underwater image.
The rest of the paper is structured as follows: Section 2 reviews the related work in the literature. Section 3 elaborates the white balancing stage and Section 4 describes the proposed method in detail. Section 5 presents the quantitative and qualitative result analysis. Finally, Section 6 concludes the paper.

LITERATURE SURVEY
This section reviews the major techniques and methods that have been considered for enhancing or restoring images captured in underwater scenarios over the last few years. These approaches can be divided into hardware-based and software-based approaches.
The traditional hardware approaches include polarisation, range-gated imaging and fluorescence imaging for the enhancement of underwater images. Intensity, polarisation and wavelength are the main characteristics of light. Natural light is unpolarised, but the light reaching the camera's imaging sensor suffers from biased polarisation. Some studies verify that polarisation of light can reduce backscatter in underwater images [12].
Based on the above theory, Liu et al. [5] proposed a polarisation imaging model for underwater images. Besides scattering and absorption effects, polarisation details are also emphasised in this model, i.e. backscattered light is polarised whereas direct light is unpolarised in the enhancement procedure. This model also estimates the total intensity lost to water absorption, and hence recovers the accurate target radiance. An optimised dark channel prior (DCP) method has also been proposed by Amer et al. [6] to decrease the diffusion effect in underwater images by involving a polarimetric imaging optical system.
Range-gated imaging is commonly used in laser imaging systems in turbid water. It rejects backscattered light from the target and increases the signal-to-backscattering noise ratio (SBR), modified fidelity index (MF) and contrast of underwater images. Depending on the propagation property of light in water, Wang et al. [7] proposed a 3D range-gated imaging method for deblurring underwater images. Depth-noise maps (DNM) of the target gate images are calculated and then subtracted from the images to obtain denoised, high range-resolution images.
A high-resolution range-gated camera system is proposed by Mariani et al. [8] to enhance underwater images. It combines "time of flight" (ToF) image sensors and pulsed laser illumination to reduce backscattering. It also calculates the distance of each illuminated object in the scene, and can provide an extended depth range for faraway objects. The Fluorescence Imaging System (FluorIS) proposed by Treibitz et al. [9] is a modification of a consumer camera that improves sensitivity to chlorophyll (i.e. fluorescence), with a wide field of view of the fluorophore, for surveying underwater scenes including coral reefs during both day and night.
Wang et al. [10] restored underwater images using a distance-dependent formation algorithm based on Poisson's equation, followed by homomorphic filtering to remove the non-linear disturbances from raw underwater images. The method proposed by Kareem et al. [11] utilises the RGB model for colour restoration and the YCbCr space for colour transformation; their Rayleigh-based integrated colour model outperformed the existing methods.
The dual-domain-based underwater image enhancement (DDUIE) method proposed by Mathur et al. [13] utilises both the spatial and frequency domains for the enhancement of underwater images. The approximation band of the DWT (discrete wavelet transform) image is contrast stretched. Then the intensities of the three colour channels of the RGB image are saturated in the spatial domain. To further improve the colour quality, the image is processed in the HSV (hue-saturation-value) colour space.
A background light estimation algorithm is proposed by Yang et al. [14] to enhance underwater images captured at 30-60 m depth with the help of artificial light. This method combines DCP and deep learning to obtain the red-channel information of the background light, which is further improved by adaptive colour deviation correction. A DCP-based approach has been proposed by Peng et al. [15] for image restoration in turbid media and ambient lighting conditions. Adaptive colour correction is also fused into the image formation model (IFM) to improve colours along with contrast.
A multiscale fusion technique proposed by Ancuti et al. [16] combines colour-compensated and white-balanced images with related weight maps to enhance the contrast, edges and colour of underwater images. Ghani et al. [17] also proposed a dual-image wavelet fusion approach which incorporates contrast-limited adaptive histogram specification (CLAHS) and homomorphic filtering for the enhancement of hazy underwater images.
The underwater convolutional neural network (UWCNN) is an underwater image enhancement method proposed by Porikli et al. [18]. It is based on a light-weight CNN model and reconstructs the underwater image from an underwater scene prior instead of estimating the parameters of an underwater imaging model. UWCNN works well on underwater image degradation datasets covering different water types and degradation levels. Xu et al. [19] proposed a two-stage dust removal approach for underwater images. The first stage removes fine dust by underwater red-green minimum channel prior de-scattering, whereas the second stage uses a deep convolutional neural network to eliminate haze from underwater images.
Li et al. [20] proposed an underwater image enhancement benchmark consisting of 950 real-world underwater images. 890 of the 950 images have corresponding reference images, and the remaining 60 images without reference images are treated as challenging data. Further, an underwater image enhancement network called Water-Net has been proposed and trained on this benchmark. Moghimi et al. [21] proposed a robust two-step enhancement algorithm for underwater images. The first step corrects the colour of underwater images and enhances the image quality by reducing haze and darkening artefacts. The second step enhances the resolution of the optimised image using a convolutional neural network (CNN) for super-resolution of underwater images under diverse artefact conditions and at different depths. Nowadays, generative adversarial networks are also increasingly used in underwater image enhancement. Takahashi et al. [22] proposed a generative adversarial network for the colour restoration of underwater images; the loss function of the network has been improved to train on the dataset, followed by detecting and enhancing the underwater images. Guo et al. [23] proposed a novel multiscale dense generative adversarial network (GAN) for enhancing underwater images. The generator presents a residual multiscale dense block that reuses previous features and hence improves the details. The training of the discriminator is stabilised by computationally light spectral normalisation, and a non-saturating GAN loss function (L1 loss + gradient loss) represents the image features of the ground truth. This method outperforms both CNN-based and non-CNN-based methods qualitatively as well as quantitatively.
The survey covers both the hardware- and software-based methods used for the enhancement and restoration of underwater images. The above discussion shows that software-based methods are preferable to hardware-based methods owing to lower computational time, economic efficiency, simple designs, easy modification and the absence of complex hardware.
The above-mentioned hardware- and software-based methods improve the visual quality of underwater images to a certain extent, but most of them fail to enhance the contrast and colours efficiently. In contrast, the proposed method not only removes the haze from underwater images but also corrects their colour cast while enhancing various image parameters. Moreover, the result analysis indicates that the greater number of matched local feature points and the larger number of detected edge points make it suitable for real-time application. The next section compares the various white balancing techniques required for correcting the colour cast of underwater images.

WHITE BALANCING
The different colours of white light, on entering the ocean, are absorbed according to their wavelengths, which imbalances the red, green and blue colour channels of underwater images. The colour distortion of these channels becomes even more challenging to eliminate as the depth increases. For the removal of this colour distortion, the proposed method uses an automatic white balancing algorithm [24]. Automatic white balancing is the first step of the proposed image enhancement method, as shown in Figure 4. Various white balancing techniques have been proposed to date, but determining which one suits underwater images is an important task. Different existing white balancing techniques are considered in this section, and the one that is both effective and suitable for underwater images is selected. Figure 3 compares various white balancing techniques present in the literature and shows which one is better for the underwater scenario. All automatic white balancing techniques comprise two steps. The first and hardest step estimates the ambient light or scene illumination, and the second step corrects the colour balance of the image. Once the ambient light is estimated, colour correction is a straightforward but very significant procedure.
White balancing reduces colour distortions and removes unrealistic colour casts from underwater images. The auto white balancing (AWB) technique [25] uses the white patch retinex (WPR) technique for illuminant estimation [26]. This illuminant estimate is then used in the white balance correction of Equation (1):

F_WB(x, y) = α × F(x, y) + β (1)

where F(x, y) represents the gray level values of the R, G and B channels at any pixel location in the image, F_WB(x, y) gives the white balanced output, and α and β are the gain factor and offset value, respectively, used in the white balance correction process. The W_B [27] technique estimates the illumination for each channel i as the sum of ref_i (the average colour of the scene) and a parameter μ. Equation (2) shows the illumination estimation procedure [27]:

illum_i = ref_i + μ (2)
The illuminant colour is estimated using ref and the Minkowski norm with p = 1 (a general solution that reduces to Gray-World [24]). Further, the value of the parameter is estimated by analysing the density and distribution of the R, G and B channels in the colour histogram. The parameter takes values in the range [0, 0.5] and varies inversely with the number of colours; consequently, a higher value is chosen when fewer colours are detected. Experiments show that a default value of 0.2 produces visually pleasing results because most of the processed underwater images have a relatively uniform colour distribution.
Cheng's white balancing technique, PCA_WB [28], estimates the illuminant by a spatial-domain method (Grey Edge [29]) based on the assumption that image gradients are achromatic. PCA_WB improves the Grey Edge method by ordering pixels along the direction of the mean image colour, synthetically introducing strong gradients, and retaining the top and bottom 5% of pixels. The retained pixels are then analysed with principal component analysis (PCA), and the first component is taken as the scene illumination.
The illuminant estimation in the white patch retinex technique, WPR_WB [26], assumes that every scene contains a bright patch which reflects the maximum light possible in each colour band; the illumination colour of the scene is derived from this bright patch.
Gray World [24] is an illuminant estimation technique based on the assumption that the world is achromatic (i.e. the average colour of the world is gray). Therefore, it computes the ambient light as the average of the red, green and blue channels of the image. The gray-world technique excludes the brightest and darkest pixels of the image, which could otherwise cause variations in the illuminant estimation; this makes the technique more robust.
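As a concrete illustration, the gray-world correction described above can be sketched as follows. This is a minimal sketch, not the exact implementation of [24]: the [0, 1] float image layout, the function name and the final clipping step are assumptions for illustration only, and the brightest/darkest-pixel exclusion is omitted.

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each channel so that its mean
    matches the global mean, assuming the scene averages to gray.
    img: float array in [0, 1] with shape (H, W, 3)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean of R, G, B
    gray_mean = channel_means.mean()                 # target gray level
    gains = gray_mean / channel_means                # per-channel gain
    return np.clip(img * gains, 0.0, 1.0)
```

After correction, the three channel means coincide, which removes the dominant blue-green cast typical of underwater scenes.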
The above discussion and the results of Figure 3 show that the most effective white balancing technique for underwater images is the gray-world [24] approach. One common problem in most of the white-balancing techniques mentioned above is the colour deviation that appears due to poor illumination: most underwater images retain an overall blue appearance after white balancing, whereas the results of the gray-world [24] approach are better both in terms of colours and visible clarity.

PROPOSED METHODOLOGY
Diminished colours, haziness and low contrast are the three main issues in underwater images due to the propagation of light from one medium (i.e. air) to the other (i.e. water). To overcome these issues, the present paper proposes a novel method for enhancing the visible quality of underwater images. The proposed enhancement method consists of three steps, i.e. automatic white balancing, dehazing and Rayleigh stretching in the spatial domain. Absorption of light in water leads to dull colours in underwater images; these dull colours are treated with the white balancing technique used in the proposed method. The haze removal algorithm lifts the veil of haze from the white balanced images, but in some situations it reduces the local contrast of the image, making it under-enhanced. So, the image is further processed by a histogram stretching technique based on the Rayleigh distribution. Figure 4 shows the block diagram of the proposed method. A detailed description of the above steps is presented in the following subsections.

Auto white balancing
This step aims at the elimination of unwanted and non-uniform colour casts caused by various illuminants, to enhance the image appearance. For the removal of this colour distortion, the proposed method uses an automatic white balancing algorithm. It is the fundamental step in the enhancement of underwater images for colour restoration. A detailed description and comparison of various white balancing techniques has already been given in Section 3. But white balancing alone cannot solve the visibility problem of underwater images, as the details and edges of the image are also affected by light scattering in the underwater scenario. A dehazing algorithm is therefore used in the next subsection to cope with the hazy nature of these white balanced images.

Dehazing
The present paper uses a dehazing algorithm to overcome the hazy/foggy appearance of the white balanced underwater image from the previous step. Owing to the similarity between foggy atmospheric images and underwater images, the basic haze removal algorithm known as dark channel prior (DCP) [30] has been used in this paper. But the results of the DCP algorithm show that the image has a darker background and the whole image appears dull. So, to improve on the imperfections of the DCP algorithm, the present paper proposes to use a combination of the DCP and Rayleigh stretching techniques. The DCP dehazing algorithm [30] consists of four steps and is built on the haze imaging model of Equation (3):

I(x) = J(x)T(x) + L(1 - T(x)) (3)

where J(x) is the dehazed image, I(x) represents the hazy image, and L and T(x) are the atmospheric light and transmission map, respectively. The transmission map represents the residual energy obtained when the foreground radiance traverses the medium, i.e. it describes the fraction of light reaching the camera. The haze removal algorithm aims at estimating J(x), T(x) and L. The DCP algorithm assumes that local regions in the background of the image have some very low intensity pixels in at least one colour (R, G or B) channel, and these low intensity pixels are known as dark pixels [30]. J_dark(x) is the dark channel at x and is given by Equation (4):

J_dark(x) = min_{y∈Ω(x)} ( min_{c∈{R,G,B}} J_c(y) ) (4)
In Equation (4), Ω(x) is a small area centred at x and J_c is one of the colour channels of the scene radiance J(x) [30]. For haze-free regions that do not belong to the sky, J_dark(x) approaches zero, hence the name dark channel. I(x), the intensity of the hazy image mixed with the atmospheric light L, is usually brighter than the corresponding scene radiance J(x). So, the dark channel of I(x) has a higher value than that of J(x), and this difference in values drives the haze removal. T(x) is calculated by dividing Equation (3) by the atmospheric light L [30], which gives Equation (5):

min_{y∈Ω(x)} min_c I_c(y)/L_c = T(x) min_{y∈Ω(x)} min_c J_c(y)/L_c + 1 - T(x) (5)
According to the DCP, the dark channel of the scene radiance (i.e. of the dehazed image) tends to zero [30], as shown in Equation (6):

min_{y∈Ω(x)} min_c J_c(y)/L_c → 0 (6)
Finally, combining Equations (5) and (6) gives the value of the transmission map T(x) [30], represented as Equation (7):

T(x) = 1 - min_{y∈Ω(x)} min_c I_c(y)/L_c (7)
A constant parameter ω (0 < ω ≤ 1) is introduced into Equation (7) to retain a small amount of haze and preserve the depth cues of the image [30]. This change is shown as Equation (8):

T(x) = 1 - ω × min_{y∈Ω(x)} min_c I_c(y)/L_c (8)

The transmission map is further refined by a soft matting technique [30].
Finally, the DCP algorithm recovers the dehazed image (scene radiance J(x)) via Equation (9), using the estimated atmospheric light L and transmission map T(x) [30]:

J(x) = (I(x) - L) / max(T(x), T_0) + L (9)

The transmission map T(x) is restricted to a lower threshold T_0, which preserves a small amount of haze in densely hazy areas of the image.
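The DCP steps above (dark channel, atmospheric light, transmission map and scene radiance recovery) can be sketched as follows. This is a simplified illustration of [30], not the implementation used in the present paper: the soft-matting refinement is omitted, the patch size, ω and T_0 values are typical defaults rather than values prescribed here, and the atmospheric-light heuristic (mean colour of the top 0.1% brightest dark-channel pixels) is one common choice.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels, then a min-filter over
    a patch x patch neighbourhood, as in Equation (4)."""
    h, w = img.shape[:2]
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """DCP dehazing sketch: estimate L from the brightest dark-channel
    pixels, compute the transmission map (Equation (8)) and invert the
    haze model (Equation (9)). img: float array in [0, 1], (H, W, 3)."""
    dark = dark_channel(img, patch)
    n = max(1, int(dark.size * 0.001))  # top 0.1% brightest dark pixels
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    L = img[idx].mean(axis=0)                         # atmospheric light
    t = 1.0 - omega * dark_channel(img / L, patch)    # Equation (8)
    t = np.maximum(t, t0)[..., None]                  # threshold T_0
    return np.clip((img - L) / t + L, 0.0, 1.0)       # Equation (9)
```

As the surrounding text notes, the output of this step tends to look dark and dull, which motivates the Rayleigh stretching stage that follows.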
The DCP algorithm has some shortcomings which make it unsuitable on its own for underwater image enhancement. It lacks background enhancement: if a hazy underwater image has a dark and small background, DCP cannot accurately dehaze it, as the background merges with the foreground in the DCP method. Moreover, thick haze also reduces the local contrast, making the whole image dull. To overcome these shortcomings, the DCP output is further improved with the Rayleigh stretching technique.

Rayleigh stretching
To overcome the limitations of the DCP method, the images obtained from the above step are Rayleigh stretched in the spatial domain. The dehazed images are fed as input to a contrast stretching algorithm, which corrects the image contrast by extending the pixel range to the desired values. According to the study in [31], the histogram of underwater images should follow the Rayleigh distribution for better contrast. The proposed method therefore contrast stretches the image and finally maps its pixels to the Rayleigh distribution curve. Histogram stretching is done on the basis of the composition of all three colours of the RGB model. In underwater images the red colour has the minimum composition, so its histogram is stretched towards the higher side (i.e. towards 255), from 5% to 100% of the dynamic range. Under-enhanced areas are removed by neglecting the lowest 5% of pixels, as this 0-5% range is responsible for darkness in the images. Equation (10) shows the mathematical representation [33]:

O_red = (I_red - I_min) × (255 - 0.05 × 255) / (I_max - I_min) + 0.05 × 255 (10)

where I_red and O_red are the input and output histograms, respectively, for the red channel, and the total dynamic range of the underwater image is 0-255. On the other hand, the blue colour has the maximum composition, so its histogram is stretched towards the lower side, from 0% to 95% of the dynamic range. Over-enhanced areas are removed by neglecting the upper 5% of pixels, as this 5% is responsible for whiteness in the images. Equation (11) shows the mathematical representation [33]:

O_blue = (I_blue - I_min) × (0.95 × 255 - 0) / (I_max - I_min) (11)
where I_blue and O_blue are the input and output histograms, respectively, for the blue channel, and the total dynamic range of the underwater image is again 0-255. The green colour composition in underwater images is intermediate, so its histogram is stretched towards both sides. After the composition of all three channels has been equalised by histogram stretching, the histograms are made to follow the Rayleigh distribution curve. This curve concentrates most of the intensity values in the centre and thus reduces the too-bright and too-dark areas of the image. The expression used for the contrast stretching of all three channels is given by Equation (12) [33]:

P_out = (P_in - I_min) × (O_max - O_min) / (I_max - I_min) + O_min (12)
where P_out and P_in are the output and input pixel intensities of the underwater image, respectively; I_max and I_min are the maximum and minimum input pixel intensities; and O_min and O_max are the minimum and maximum output pixel intensities. The above contrast stretching uniformly distributes the pixel intensity values to enhance the contrast effectively. Equation (13) shows the probability density function (PDF) of the Rayleigh distribution, with x representing the input pixel intensity and α the distribution parameter; α is fixed to 0.4 for the underwater scenario [33]:

PDF(x) = (x / α²) × exp(-x² / (2α²)), x ≥ 0 (13)
The proposed method applies histogram stretching to the underwater images, followed by mapping of their pixel intensities to the Rayleigh curve. So, to obtain the Rayleigh-stretched histogram, Equation (12) is combined with Equation (13) by substituting the stretched intensity P_out for x. The resultant expression is indicated as Equation (14) [33]:

y(x) = (P_out / α²) × exp(-P_out² / (2α²)) (14)
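A rough sketch of the channel-wise stretching and Rayleigh mapping described above is given below, for intensities normalised to [0, 1]. The linear stretch follows Equation (12) with the per-channel output ranges stated in the text; the Rayleigh mapping shown here uses inverse-CDF mapping with α = 0.4 as one plausible realisation, since the exact pixel-level implementation of Equation (14) is not reproduced. The function names and the final renormalisation are assumptions for illustration.

```python
import numpy as np

def stretch(channel, out_min, out_max):
    """Linear contrast stretch (Equation (12)): map [I_min, I_max]
    of the input channel onto [out_min, out_max]."""
    i_min, i_max = channel.min(), channel.max()
    return (channel - i_min) * (out_max - out_min) / (i_max - i_min) + out_min

def rayleigh_map(channel, alpha=0.4):
    """Map intensities toward a Rayleigh-shaped histogram via the
    inverse Rayleigh CDF: F(x) = 1 - exp(-x^2 / (2 alpha^2))."""
    u = np.clip(channel, 0.0, 1.0 - 1e-6)        # treat intensity as a CDF value
    x = alpha * np.sqrt(-2.0 * np.log(1.0 - u))  # inverse Rayleigh CDF
    return x / max(x.max(), 1e-12)               # renormalise to [0, 1]

def rayleigh_stretch(img):
    """Channel-wise stretching: red toward the upper range (5%-100%),
    blue toward the lower range (0%-95%), green over the full range."""
    r = stretch(img[..., 0], 0.05, 1.0)   # Equation (10)
    g = stretch(img[..., 1], 0.0, 1.0)
    b = stretch(img[..., 2], 0.0, 0.95)   # Equation (11)
    return np.stack([rayleigh_map(r), rayleigh_map(g), rayleigh_map(b)], axis=-1)
```

The asymmetric output ranges boost the weak red channel and restrain the dominant blue channel before the Rayleigh mapping concentrates intensities away from the extremes.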
The proposed method stretches the image histogram within certain limits and maps the intensity values to the Rayleigh distribution to minimise pitch-dark and dazzling regions in the image. The image colours are significantly improved, along with a reduction of the blue-green illumination.
To further improve the visual quality in the spatial domain, the pixels of all three colour channels of the RGB image are saturated at the highest 1% and lowest 1% to enhance the contrast of the image. The three colour channels are then combined to construct the enhanced image.
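The 1% saturation step above can be sketched as a per-channel percentile clip. The rescaling to [0, 1] after clipping and the function name are assumptions for illustration; the paper only specifies that the lowest and highest 1% of pixels are saturated.

```python
import numpy as np

def saturate_channels(img, low=1.0, high=99.0):
    """Clip each colour channel at its 1st and 99th percentiles and
    rescale to [0, 1], saturating the lowest and highest 1% of pixels."""
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo), 0.0, 1.0)
    return out
```

Saturating a small fraction of extreme pixels lets the remaining 98% of intensities occupy the full dynamic range, which is what produces the final contrast gain.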
The proposed method thus efficiently improves the contrast of the image along with its colour, making it applicable to real-time scenarios. The quantitative and qualitative result discussions in the next section further validate the proposed method.

RESULT DISCUSSION
Qualitative and quantitative results are discussed in this section to prove the effectiveness of the proposed method. The underwater dataset has been divided into four image categories, namely greenish-tone images, bluish-tone images, hazy images and dark images, for these analyses. The results presented are based on the U45 underwater image dataset [36]; however, results for only 4 images from each category are presented in this section due to the prescribed page limit. Various experiments on the U45 dataset demonstrate that the proposed method performs better than the existing methods in terms of quantitative and qualitative evaluations. For qualitative evaluation, the obtained results are compared with state-of-the-art methods such as He et al. [30], CLAHE [31], Ancuti et al. [32], Ghani et al. [33], VCIP [34] and RSUIE [35]. Quantitative result analysis has been done on four no-reference image quality evaluation metrics, i.e. the underwater image quality measure (UIQM) [37], underwater colour image quality evaluator (UCIQE) [38], measure of enhancement (EME) [39] and measure of enhancement by entropy (EMEE) [40]. Coloured underwater images of size 512 × 512 have been processed for result calculation and evaluation using the image processing toolbox of MATLAB 2018a.

Qualitative results
Underwater images have low contrast and dull colours with reduced object visibility. So, the qualitative evaluation of the proposed method is done in terms of colours, contrast and object visibility. Figures 5-8 show the comparative results for each category, i.e. blue, green, hazy and dark images, against different state-of-the-art methods. Figure 5 subjectively evaluates the visual quality of bluish-tone underwater images. A blue cast remains visible in the CLAHE [31] and He et al. [30] results, which affects the contrast and colours of the images. Ancuti et al. [32] and VCIP [34] are also unable to recover good-quality images, owing to the reddish tone introduced in their over-exposed regions by the white balancing algorithm. Colour information is completely lost by the Ghani et al. [33] method. Similarly, the RSUIE [35] method completely oversaturates the image with a reddish tone. On the contrary, the proposed method enhances the image with good colours, contrast and visibility.
The qualitative evaluation of green images is presented in Figure 6. The images of the CLAHE [31] and He et al. [30] methods are hazy and have a greenish tone, and hence their visual quality is unpleasing. The Ghani et al. [33] method is unable to recover the colours of green-tone images. The Ancuti et al. [32], VCIP [34] and RSUIE [35] methods eliminate the haze, but the colours and contrast of the images are not acceptable. The results of the proposed method in Figure 6, however, subjectively improve the contrast, colours and object recognition in the given underwater images.
Similarly, the result analysis for hazy images in Figure 7 indicates that the proposed method is the best among the existing state-of-the-art methods in terms of visibility, contrast and colours. Figure 8 shows the results for dark or non-illuminated images. These scenes have been captured without any artificial light. The result evaluation indicates that the visual quality is not adequate for any of the state-of-the-art methods, including the proposed method. The Ancuti et al. [32] and RSUIE [35] methods show a little visibility, but object recognition is still not possible. The reason may be that images captured without light are essentially black, and it is impossible to recover the details of such images.
The qualitative result evaluations indicate that the proposed method is characterised by higher robustness in extreme underwater cases. This has been demonstrated by the results of Figures 5-8, which show different challenging underwater scenarios. The results indicate that the proposed method performs better than the state-of-the-art methods in terms of colour, contrast and haze removal.

Quantitative results
Quantitative evaluation supports the visual quality of the resultant images in the subjective evaluation and further validates the effectiveness of the proposed method. The proposed method uses four no-reference quality evaluation parameters, i.e. the underwater image quality measure (UIQM) [37], underwater colour image quality evaluator (UCIQE) [38], measure of enhancement (EME) [39] and measure of enhancement by entropy (EMEE) [40], for the quantitative analysis of underwater images, as no reference or original image is available for underwater scenes. Equations (15) and (16) show the EME and EMEE scores [39, 40]. EME and EMEE calculate an absolute value of contrast for every image by using Fechner's law:

EME = (1 / (k1 × k2)) Σ_{l=1}^{k2} Σ_{k=1}^{k1} 20 log(I_max^{k,l} / I_min^{k,l}) (15)

EMEE = (1 / (k1 × k2)) Σ_{l=1}^{k2} Σ_{k=1}^{k1} α (I_max^{k,l} / I_min^{k,l})^α log(I_max^{k,l} / I_min^{k,l}) (16)

where the image is divided into k1 × k2 blocks and α is a scaling factor which handles randomness; as α increases, randomness or entropy is emphasised more. I_min^{k,l} and I_max^{k,l} are the minimum and maximum pixel values in each block of the enhanced image. Equation (17) shows the mathematical representation of UIQM [37], comprising three factors, i.e. the underwater image colourfulness measure (UICM), underwater image sharpness measure (UISM) and underwater image contrast measure (UIConM):

UIQM = C1 × UICM + C2 × UISM + C3 × UIConM (17)

UIQM is an evaluation parameter inspired by the human visual system for effectively measuring the quality of underwater images, where C1 is 0.0282, C2 is 0.2953 and C3 is 3.5753 [37]. A higher value of UIQM indicates a good-quality image. Equation (18) shows the UCIQE [38] metric for underwater images:

UCIQE = c1 × σ_c + c2 × con_l + c3 × μ_s (18)

where σ_c is the standard deviation of chroma, con_l is the contrast of luminance and μ_s is the average saturation, with c1 = 0.4680, c2 = 0.2745 and c3 = 0.2576 as weighting coefficients [38]. For a good-quality image, the UCIQE parameter should be high. Tables 1-4 give the quantitative scores of the state-of-the-art methods along with the proposed method for blue, green, hazy and dark underwater images, respectively. Values in bold indicate the best results for each metric.
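Under the blockwise definitions of Equations (15) and (16), EME and EMEE can be computed as sketched below for a single grayscale channel. The base of the logarithm (base 10 for EME, following the decibel convention, and natural log for EMEE), the α default and the small ε guard against zero-valued blocks are assumptions of this sketch rather than values fixed by [39, 40].

```python
import numpy as np

def eme(img, k1=8, k2=8, eps=1e-6):
    """Measure of enhancement (Equation (15)): average over k1 x k2
    blocks of 20*log10(I_max / I_min) per block."""
    h, w = img.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            total += 20.0 * np.log10(block.max() / (block.min() + eps))
    return total / (k1 * k2)

def emee(img, k1=8, k2=8, alpha=0.4, eps=1e-6):
    """Measure of enhancement by entropy (Equation (16)): blockwise
    alpha * (I_max/I_min)^alpha * log(I_max/I_min)."""
    h, w = img.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            ratio = block.max() / (block.min() + eps)
            total += alpha * ratio**alpha * np.log(ratio)
    return total / (k1 * k2)
```

A flat image has block ratios near 1 and scores near zero, while a high-contrast image scores highly, which is why these measures reward strong local enhancement.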
Result evaluation of Table 1 shows that the "Blue1" image has the best EME value for Ancuti et al. [32] and the best UCIQE value for the VCIP [34] method, but the subjective results for the corresponding images in Figure 5 are unsatisfactory; the high values of these parameters are due to over-enhancement of the red colour by these algorithms. Similarly, the quantitative evaluation of the "Blue3" and "Blue4" images does not match their subjective analysis. Inconsistencies therefore exist between the quantitative scores in Table 1 and the corresponding qualitative images in Figure 5. Hence, quantitative results with higher scores do not always correspond to good human visual perception for underwater images. The proposed method, however, has the advantage of correcting colour casts and obtains high UIQM values for all blue images. The "Green3" image in Table 2 shows high values of EME, UCIQE and UIQM, but its qualitative results in Figure 6 are unsatisfactory due to poor enhancement and distorted colours. Similarly, the UCIQE values of all hazy images in Table 3 are better for the VCIP [34] method, but its corresponding subjective results in Figure 7 are visibly unpleasant; the over-enhancement produced by that algorithm accounts for the high UCIQE values.
"Dark1" and "Dark3" images in Table 4 show high UIQM values. The disproportionate contrast in the results of He et al. [30] inflates the UICM value, which in turn increases the UIQM score. UCIQE values for all dark images are highest for the VCIP [34] method, but its visual results are oversaturated. The proposed method, however, produces subjective results consistent with its quantitative evaluation.
The above evaluations show that inconsistencies exist between the objective and subjective evaluation of the same underwater image. UIQM and UCIQE may thus yield discrepancies on the same dataset, as has also been found in the literature [36, 41]. Objective evaluation by EME and EMEE is also not well suited to underwater scenarios, as the quantitative analysis of the images demonstrates.
Subjective analysis evaluates the visual quality of the state-of-the-art methods along with the proposed method. Objective analysis has also been carried out on the underwater images using the EMEE, EME, UIQM and UCIQE evaluation metrics, but no metric is found suitable for measuring underwater image colour correction. Finally, application tests have been performed to support the effectiveness and robustness of the proposed method.

APPLICATION TEST ASSESSMENT
To further prove the effectiveness of the proposed method, several application tests have been performed using edge detection [42] and keypoint matching [43] between original and rotated underwater images. Figure 9 depicts the results of the application tests before and after applying the proposed method. Only a single image from each of the blue, green and hazy subsets is presented here due to space constraints. The results in Figure 9 show better edge detection and more matched keypoints than for the raw images, and hence provide evidence of the robustness of the proposed method.
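As a simplified stand-in for the edge-detection application test above (the paper uses the detector of [42]; here a plain Sobel gradient threshold is assumed, and a min-max contrast stretch stands in for the full enhancement pipeline), the following sketch shows how edge response can be compared before and after enhancement:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels, implemented with slicing."""
    I = img.astype(float)
    gx = (I[:-2, 2:] + 2 * I[1:-1, 2:] + I[2:, 2:]) \
       - (I[:-2, :-2] + 2 * I[1:-1, :-2] + I[2:, :-2])
    gy = (I[2:, :-2] + 2 * I[2:, 1:-1] + I[2:, 2:]) \
       - (I[:-2, :-2] + 2 * I[:-2, 1:-1] + I[:-2, 2:])
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_count(img, thresh=30.0):
    """Number of pixels whose gradient magnitude exceeds the threshold."""
    return int((sobel_magnitude(img) > thresh).sum())

def minmax_stretch(img):
    """Min-max contrast stretch to [0, 255], standing in for enhancement."""
    I = img.astype(float)
    lo, hi = I.min(), I.max()
    return (I - lo) / (hi - lo + 1e-6) * 255.0
```

On a low-contrast image, the stretched version exposes edges that fall below the detector's threshold in the raw input, which is the effect the application test measures: an enhanced underwater image should yield more detected edges and keypoints than its raw counterpart.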
The proposed method has also been evaluated on foggy images, as they resemble underwater images in appearance. Results in Figure 10 show that the proposed method is also effective for foggy images and enhances their colour and contrast efficiently. Hence, the results of these application tests further support the proposed method, as it provides satisfactory results for real-world underwater images with a wide range of diversities.

CONCLUSION
The present paper proposed an enhancement method for underwater images comprising automatic white balancing to remove colour casts, followed by dehazing to reduce haziness. Finally, colours, visibility and contrast of these images are enhanced by saturating the intensities of the three channels in the spatial domain. Quantitative and qualitative result evaluation on the U45 dataset indicates that the proposed method reduces haziness as well as minimising the bluish and greenish tones of underwater images. Comparative analysis of the proposed method with various state-of-the-art methods shows a significant improvement in the visual perception of underwater images. Various application tests provide further proof of the effectiveness and robustness of the proposed method. Future work includes the enhancement of dark underwater images to improve their colour, contrast and visibility. Further work will also be done in the area of object recognition for fish species classification.