Efficient Dehazing Method for Outdoor and Remote Sensing Images

As an atmospheric phenomenon, haze significantly reduces the visibility of outdoor and remote sensing images. Because remote sensing and outdoor imaging have different mechanisms, existing dehazing methods are difficult to apply to both outdoor and remote sensing images. In this article, an efficient dehazing method is proposed that can be applied to both outdoor and remote sensing images. The proposed method combines the advantages of dehazing methods based on image enhancement and those based on image restoration. To address the problem of inaccurate transmittance calculation in existing methods, a Gaussian-weighted image fusion is introduced to obtain a refined transmittance. In addition, to solve the problem of image color distortion after dehazing, an unsharp mask method is used to correct the dehazed image. Experiments on synthetic and real datasets show that the proposed method is able to dehaze both outdoor and remote sensing images and outperforms existing methods by visual inspection. On the RICE remote sensing image dataset, the proposed method achieves a peak signal-to-noise ratio of 27.08 dB and a structural similarity of 0.94, which are higher than those of other methods.


I. INTRODUCTION
TARGET detection based on outdoor and remote sensing (RS) images is used in many fields [1], such as civil and military applications [2], traffic surveillance [3], and disaster forecasting [4]. However, in harsh environmental conditions, as air contaminants increase, light is scattered in the air and the image becomes hazy [5]. In addition, with the development of industry, haze occurs more frequently [6], which results in poorly characterized ground targets [7]. Therefore, image dehazing is an important issue in target detection for outdoor and RS images. A variety of methods have recently been proposed to dehaze images, which can be broadly categorized into three types: methods based on image enhancement, on image restoration, and on deep learning. Dehazing methods based on image enhancement highlight useful information without considering the cause of haze. Representative methods include Retinex-based methods [8], [9], [10], [11], histogram equalization [12], [13], [14], [15], and wavelet transforms [16], [17], [18]. Dharejo et al. [15] proposed an algorithm that significantly improved dark images and reduced the influence of haze and noise. However, this method requires preset parameters. Song et al. [17] proposed a wavelet spatial attention based multistream feedback network (WSAMF-Net), which can utilize both frequency- and spatial-domain information for better edge and structure recovery from a hazy input image. However, the performance of the dehazing network on RS images is uncertain. Dharejo et al. [18] proposed a new wavelet hybrid (local-global combined) network (WH-Net) that uses a convolutional neural network (CNN) in the wavelet domain. However, the image contrast after dehazing is low. Despite their benefits, these methods have some disadvantages, including the loss of detailed information, noise, and oversaturation.
Image restoration methods analyze the effect of the atmospheric scattering model (ASM) on the image. On the basis of image restoration, several dehazing methods for outdoor images have been proposed [9], [19], [20], [21], [22], [23], [24], [25], [26], [27]. In [23], the dark channel prior (DCP) was used to dehaze outdoor images. However, the method calculates the transmittance over a fixed window, resulting in distorted image colors after dehazing. There are also dehazing methods for RS images [28], [29], [30]. Ji et al. [30] proposed using histogram equalization to dehaze RS images. However, the resulting image may be blurred as a result of the median filtering involved.
Over the past decade, deep-learning-based dehazing methods have developed rapidly; they learn the intrinsic relationships between images and obtain haze-free images through training. For outdoor images, several deep-learning-based dehazing methods have been proposed [31], [32], [33], [34], [35], [36]. In [34], a multiscale network (MSCNN) was proposed as a dehazing method for outdoor images. However, the network still needs to use the ASM to obtain haze-free images. Deeba et al. [36] presented a two-stage attention-based residual encoder-decoder image dehazing network, comprising an encoder-decoder structure and a color correction model. However, the network did not preserve the edge information of hazy images. In addition, deep learning can be used to dehaze RS images [37], [38], [39], [40], [41], [42]. In [40], an enhanced pixel-by-pixel dehazing network was proposed as a dehazing method for RS images, which is independent of the ASM. However, the dehazed image still has residual haze.
Existing dehazing methods are only suitable for either outdoor images or RS images. In addition, the existing methods calculate transmittance based on fixed windows and do not refine the image, which often results in inaccurate estimation of transmittance and poor image quality.
In this article, we propose an efficient dehazing method that is applicable to both outdoor and RS images. In Fig. 1, some results of the proposed method are shown. The main contributions of this article are as follows.
1) We propose an efficient dehazing method that can be applied to both outdoor and RS images. The proposed method takes advantage of both image-enhancement-based and image-restoration-based dehazing methods.
2) A Gaussian-weighted image fusion is proposed to obtain a refined transmittance.
3) To obtain a true-color image, the dehazed image is corrected with an unsharp mask (USM) method.
The rest of this article is organized as follows. Section II introduces the related work. Section III presents the proposed image dehazing method in detail. Section IV discusses and compares the comparative experimental results. Further analysis is provided in Section V. Finally, Section VI concludes this article.

II. RELATED WORK

A. Retinex Method
Retinex is widely used in image dehazing. In [43], Land and McCann proposed the Retinex method. As illustrated in Fig. 2, the following is a brief explanation of Retinex. An image S_c(x, y) can be expressed as

S_c(x, y) = R_c(x, y) L_c(x, y)    (1)

where R_c(x, y) is the reflectance at each pixel (x, y) and L_c(x, y) is the illumination at each pixel (x, y). In general, the Retinex method compensates for illumination-related changes in images. The pixel-by-pixel product of the ambient illumination and the scene reflectance is used to represent an image, and the Retinex theory aims to separate the two quantities. From (1), we can obtain the reflectance by dividing the image by the illumination.
Taking the logarithm of (1) gives

log S_c(x, y) = log R_c(x, y) + log L_c(x, y).    (2)

The illumination is estimated by a smoothed version of the image

L_c(x, y) ≈ K(x, y) ∗ S_c(x, y).    (3)

From (3), we get

r_c(x, y) = log S_c(x, y) − log[K(x, y) ∗ S_c(x, y)]    (4)

where r_c(x, y) represents the output image in the logarithmic domain, ∗ denotes convolution, and K(x, y) represents the center surround function given by

K(x, y) = τ exp[−(x² + y²)/θ]    (5)

where τ denotes the normalization parameter and θ is the scale parameter. Through numerous experiments, we set τ = 0.3 and θ = 0.5. K(x, y) satisfies the normalization condition

∬ K(x, y) dx dy = 1.    (6)
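As a concrete illustration, the single-scale Retinex computation of (4) can be sketched in Python with NumPy. This is a minimal sketch: the separable blur, the 3σ kernel radius, and the parameter values below are implementation choices, not values from the paper.

```python
import numpy as np

def gaussian_blur(channel, sigma):
    """Separable Gaussian blur with a normalized 1-D kernel."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()  # kernel sums to 1, mirroring the normalization of K(x, y)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, channel)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred

def single_scale_retinex(channel, sigma=15.0):
    """r(x, y) = log S(x, y) - log[K(x, y) * S(x, y)] for one color channel."""
    s = channel.astype(np.float64) + 1.0  # offset to avoid log(0)
    return np.log(s) - np.log(gaussian_blur(s, sigma))
```

On a uniform image the surround estimate equals the image away from the borders, so the Retinex output is zero there, which matches the intuition that a constant scene carries no reflectance detail.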
As shown in Fig. 3, dehazing using Retinex improves contrast, which increases image detail. However, Retinex-based dehazing methods can hardly remove haze completely from hazy images because they do not consider the image degradation process.

B. Atmospheric Scattering Model (ASM)
When it comes to dehazing, hazy images are typically described using the ASM [9], [44], [45], [46], defined as

I_c(x, y) = J_c(x, y) T(x, y) + A_c [1 − T(x, y)]    (7)

where I_c(x, y), J_c(x, y), T(x, y), and A_c represent the observed image, the haze-free image, the transmission map, and the atmospheric intensity, respectively. The hazy image degradation process described by the ASM is shown in Fig. 4. The principle of the ASM can be expressed as follows. The hazy image consists of the attenuated reflected light of the target [the first term on the right side of (7)] and the atmospheric light term [the second term on the right side]. The attenuated reflected light describes the global atmospheric light reflected by the scene object participating in the imaging after attenuation, and its pixel intensity changes with the depth of field (DOF). The global atmospheric light is affected by the concentration of atmospheric suspended particles and changes gradually with the DOF [47], [48].
Although haze can be removed from the image using the ASM [see Fig. 5], the dehazed image often suffers from low contrast.
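The degradation process in (7) can be sketched as a forward model. In the sketch below, the exponential depth-to-transmission mapping T = exp(−βd) is the standard assumption behind the ASM, and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def apply_asm(J, T, A):
    """Synthesize a hazy image with the atmospheric scattering model:
    I_c(x, y) = J_c(x, y) T(x, y) + A_c [1 - T(x, y)].
    J: (H, W, 3) haze-free image, T: (H, W) transmission, A: (3,) atmosphere."""
    return J * T[..., None] + A * (1.0 - T[..., None])

def transmission_from_depth(depth, beta=1.0):
    """Transmission decays exponentially with scene depth: T = exp(-beta * d)."""
    return np.exp(-beta * depth)
```

At zero depth the transmission is 1 and the hazy image equals the clear image; as depth grows, the pixel value drifts toward the atmospheric light A_c, which is exactly the whitening effect haze produces.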

III. PROPOSED METHOD
Existing methods cannot be applied to dehazing both outdoor and RS images. In particular, Retinex-based dehazing methods can hardly remove haze completely from hazy images, whereas ASM-based dehazing methods may result in low image contrast. Therefore, in this article, we propose a dehazing method based on Retinex and the ASM that can be applied to both outdoor and RS images. Our method consists of two main modules: the fusion model and image dehazing. The fusion model fuses Retinex with the ASM. The image dehazing module includes transmittance calculation, atmospheric intensity calculation, and color correction. A Gaussian-weighted image fusion method is proposed to refine the transmittance and improve its accuracy. As the dehazed image is subject to color distortion, a USM method is used to correct the image and obtain a clear, true-color image [49].
In Fig. 6, we illustrate the proposed method. An input image's transmittance and atmospheric intensity are calculated using a fusion model. Using color correction, a clear and true color image is obtained after dehazing.

A. Fusion Model
For Retinex, S_c(x, y) is a clear image of the object in a natural scene; for the ASM, J_c(x, y) is a clear image of the object after dehazing the hazy image. Both S_c(x, y) and J_c(x, y) are clear images of the object. Therefore, we let

S_c(x, y) = J_c(x, y).    (8)

By fusing the Retinex method with the ASM, the fusion model can be obtained as

I_c(x, y) = R_c(x, y) L_c(x, y) T(x, y) + A_c [1 − T(x, y)].    (9)

The new model fuses the advantages of Retinex and the ASM, enables image dehazing and image enhancement at the same time, and is capable of dehazing both outdoor and RS images. The principle of the model is shown in Fig. 7.

B. Image Dehazing
The image dehazing process recovers the clear image R_c(x, y) from the hazy image by solving for the transmittance T(x, y) and the atmospheric intensity A_c in (9). 1) Transmittance: When using the DCP to calculate T(x, y), assume that the atmospheric intensity A_c is known; then (9) can be expressed in a normalized form given as

I_c(x, y)/A_c = T(x, y) J_c(x, y)/A_c + 1 − T(x, y).    (10)

Applying two minimum operations to both sides of (10) gives

min_{Y∈Ω(x,y)} min_c [I_c(Y)/A_c] = T(x, y) min_{Y∈Ω(x,y)} min_c [J_c(Y)/A_c] + 1 − T(x, y)    (11)

where Ω(x, y) is the local area of the image and Y is a pixel in that area.
In outdoor images with the sky removed, almost every pixel has a channel with a gray value close to 0 according to the DCP [23], [50], given as

J_dark(x, y) = min_{Y∈Ω(x,y)} min_{c∈{r,g,b}} J_c(Y) → 0.

As a result of the minimum filtering, bright pixels at the edges of the dark channel image are underestimated. The dark channel edges are therefore enhanced using an image fusion method. Morphological dilation and erosion can be used to compensate the dark channel image: the difference between the dilated image and the eroded image yields the morphological edge image

I_edge(x, y) = [I ⊕ Ω₁(x, y)] − [I ⊖ Ω₂(x, y)]

where I_edge(x, y) represents the morphological edge of the image, Ω₁(x, y) represents the filter block (3 × 3) centered at (x, y), and Ω₂(x, y) represents the filter block (15 × 15) centered at (x, y).
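A minimal NumPy sketch of the dark channel and the morphological edge image follows. The brute-force window loops stand in for the optimized minimum/maximum filters of a production implementation; the 3 × 3 and 15 × 15 block sizes come from the text, but the test uses smaller blocks for speed.

```python
import numpy as np

def dark_channel(img, patch=15):
    """J_dark(x, y) = min over patch Omega(x, y) of min over channels c."""
    mins = img.min(axis=2)                 # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    H, W = mins.shape
    out = np.empty_like(mins)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def morphological_edge(gray, small=3, large=15):
    """I_edge = dilation (max filter, small block) - erosion (min filter, large block)."""
    def _filter(a, size, fn):
        pad = size // 2
        p = np.pad(a, pad, mode="edge")
        H, W = a.shape
        r = np.empty_like(a)
        for i in range(H):
            for j in range(W):
                r[i, j] = fn(p[i:i + size, j:j + size])
        return r
    return _filter(gray, small, np.max) - _filter(gray, large, np.min)
```

On a flat image both filters return the same constant, so the morphological edge is zero, confirming that the edge image only fires where intensity actually varies.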
There are three types of image fusion: pixel-level, feature-level, and decision-level [51]. Pixel-level fusion of the acquired morphological edge and the original dark channel image, which compensates for the dark channel pixel values at the edges of the image, is given as

I_dark1(x, y) = ω₁ I_dark(x, y) + ω₂ I_edge(x, y)

where I_dark1(x, y) represents the fused dark channel image, and ω₁ and ω₂ are weighting coefficients satisfying ω₁ + ω₂ = 1. ω₁ affects the quality of the image: too large a value overcompensates the edges of the dark channel image, whereas too small a value loses edge information in the dark channel image. Through repeated experiments, the selected weights are ω₁ = 0.5 and ω₂ = 0.5. As the obtained dark channel image is composed of the minimum-filtered image and the morphological edge, the pixel values in areas of the dark channel image with a large DOF are underestimated. Hence, the image can hardly be completely dehazed. In this article, a Gaussian-weighted function is used to improve the dehazing effect: the Gaussian-weighted dark channel image I_dark_g(x, y) is obtained by weighting I_dark1(x, y) with Gaussian weights G(i), normalized by the total weight β. Through numerous experiments, we set λ = 1. The Gaussian weight is

G(i) = exp[−i²/(2σ²)]

where σ is the standard deviation, and the weights lie in the range [0, 1]. Fig. 8(b) shows that substantial underestimation of dark channel values in the edge region decreases contrast and darkens colors after image restoration. As shown in Fig. 8(c), the Gaussian weight raises the pixel values at dark channel edges, preserves image details, and improves the image's color, making it more realistic.
As shown in Fig. 9, σ can affect the dehazing effect. Through repeated experiments, we chose σ to be 0.6 in this article.
Substituting the Gaussian-weighted dark channel into (11) and noting that the dark channel of the haze-free image tends to 0 by the DCP, the transmittance T(x, y) can be obtained as

T(x, y) = 1 − I_dark_g(x, y)/A_c.

2) Atmospheric Intensity: Atmospheric light in an image typically originates from a light source, which is the brightest area in the image. To calculate the atmospheric intensity A_c, the pixels are arranged from highest to lowest brightness and the top 0.1% are selected; the total number of selected pixels is

n = ⌊0.001 × M × N⌋

where ⌊·⌋ denotes the floor function and M × N is the total number of pixels in the image. Then, the gray values of these pixels are summed in the three RGB channels as

A_c_sum = Σ_{k=1}^{n} A_c(x_k, y_k)

where A_c(x_k, y_k) is the atmospheric intensity of the kth selected pixel in the R, G, and B color channels. The mean gray value of each channel is used as the global atmospheric intensity of that color channel

A_c = A_c_sum / n.

With T(x, y) and A_c, the image can be recovered from hazy outdoor or RS images by inverting (10)

J_c(x, y) = [I_c(x, y) − A_c]/T(x, y) + A_c.

According to (4), we can obtain

r_c(x, y) = log J_c(x, y) − log[K(x, y) ∗ J_c(x, y)]

and the dehazed image can be expressed as

R_c(x, y) = exp[r_c(x, y)].

3) Color Correction: Dehazed images show phenomena such as color distortion, as illustrated in Fig. 10(b). To solve the image color distortion problem, we use the linear USM to correct the color deviation.
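The atmospheric-intensity estimate (top 0.1% brightest pixels) and the ASM inversion can be sketched as follows. The lower bound `t0` on the transmittance is a common safeguard against division by near-zero values and is not stated in the text; brightness is taken as the channel mean, which is one reasonable choice.

```python
import numpy as np

def estimate_atmosphere(img, brightness, top=0.001):
    """Estimate A_c from the top 0.1% brightest pixels:
    n = floor(top * M * N), then average the RGB values per channel."""
    H, W = brightness.shape
    n = max(1, int(np.floor(top * H * W)))       # at least one pixel
    idx = np.argsort(brightness.ravel())[-n:]    # indices of brightest pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)  # per-channel mean = A_c

def recover(I, T, A, t0=0.1):
    """Invert the ASM: J = (I - A) / max(T, t0) + A."""
    T = np.maximum(T, t0)[..., None]
    return (I - A) / T + A
```

Round-tripping a synthetic hazy image through `recover` returns the clear image exactly, which is a quick sanity check of the inversion.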
First, R_c(x, y) is averaged to obtain the average pixel value, where c ranges over the three RGB color channels and R_c_sum(x, y) is the sum over all color channels of the image R_c(x, y). Each color channel is then normalized with respect to this average.
Finally, the normalized image R_c_final(x, y) is obtained by merging all color channels. High-frequency components are then enhanced using the USM method. Based on the input image R_c(x, y), an enhanced image R_c_E(x, y) is generated as

R_c_E(x, y) = R_c(x, y) + λ₁ R_c_H(x, y)

where R_c_H(x, y) represents the output correction signal calculated by the linear high-pass filter, and λ₁ is a positive scale factor used to control the contrast enhancement level at the output; we set λ₁ = 0.5.
A common choice for R_c_H(x, y) is the bidirectional Laplacian of the input image, given by

R_c_H(x, y) = 4R_c(x, y) − R_c(x−1, y) − R_c(x+1, y) − R_c(x, y−1) − R_c(x, y+1).
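A minimal sketch of the USM correction follows, assuming the standard 4-neighbour stencil stands in for the bidirectional Laplacian named above; edge padding is an implementation choice.

```python
import numpy as np

def laplacian(channel):
    """Discrete Laplacian via the standard 4-neighbour stencil."""
    p = np.pad(channel, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * channel)

def unsharp_mask(channel, lam=0.5):
    """R_E = R + lam * R_H, where R_H = -Laplacian(R) is the high-pass output."""
    return channel - lam * laplacian(channel)
```

A flat region passes through unchanged (the high-pass response is zero there), while isolated bright details are amplified, which is the contrast-enhancement behavior the scale factor λ₁ controls.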

IV. EXPERIMENTS AND EVALUATION
This section compares our method with existing dehazing methods on outdoor and RS image datasets. The outdoor images are divided into two parts: daytime and night-time images.

A. Datasets
The datasets were divided into synthetic and real hazy image datasets. Following extensive experiments, we chose four representative images from each dataset for comparison.

C. Evaluation Metrics
In addition to the subjective evaluation, three evaluation metrics are used to measure the dehazing effects: perceptual index (PI), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). Higher PSNR and SSIM indicate better results, and lower PI indicates better results. The three metrics are expressed in (31)-(37).
PI: It is a new criterion that bridges the visual effect with a computable index and has been recognized to be effective in image super-resolution [63]. The lower the image quality, the higher the PI. PI is formulated as

PI = ½ [(10 − Ma) + NIQE]

where Ma and NIQE are two image quality indexes, detailed in [64] and [65], respectively. PSNR: It is the most widely used objective evaluation standard for images [66]. Its value depends on how far the restored image is from the real image in terms of pixel match; thus, PSNR is highly sensitive to errors, and its unit is dB. PSNR should be maximized. The definition is as follows:

MSE = (1/n) Σ_{i=1}^{n} (X_i − Y_i)²
PSNR = 10 log₁₀ (255² / MSE)

where MSE represents the mean square error between the dehazed image X and the reference image Y, and n represents the number of pixels. SSIM: A perception model is used to assess image similarity, which takes into account three factors: brightness, contrast, and structure [67]. SSIM ∈ [0, 1]; the closer to 1, the higher the structural quality of the restored image

SSIM(X, Y) = l(X, Y) · c(X, Y) · s(X, Y)
l(X, Y) = (2 μ_X μ_Y + C₁) / (μ_X² + μ_Y² + C₁)
c(X, Y) = (2 σ_X σ_Y + C₂) / (σ_X² + σ_Y² + C₂)
s(X, Y) = (σ_XY + C₃) / (σ_X σ_Y + C₃)

where l(X, Y) is the brightness term, c(X, Y) is the contrast term, s(X, Y) is the structure term, μ_X and μ_Y are the means of the dehazed image X and the reference image Y, σ_X and σ_Y are their standard deviations, σ_XY is their covariance, and C₁, C₂, and C₃ are constants that keep the denominators nonzero.
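PSNR and a global (single-window) variant of SSIM can be computed as follows. Note that the full SSIM is usually evaluated over local windows and then averaged, so the global version below is a simplification; the C₁ and C₂ constants use the conventional 0.01 and 0.03 factors.

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """PSNR = 10 log10(peak^2 / MSE), in dB."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Global (single-window) SSIM with the usual C1, C2 constants."""
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x = x.astype(np.float64); y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

An image compared with itself yields SSIM = 1, and two images at maximum distance (all-zero vs. all-peak) yield PSNR = 0 dB, which bracket the metric ranges used in Tables I and II.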

D. Results on Synthetic Hazy Image Datasets 1) Qualitative Results:
We conducted some experiments on datasets of synthetic hazy images to compare the performance of our method with that of other state-of-the-art methods. a) Daytime: As shown in Fig. 11, our method was compared with AOD-Net [58], RVIS [36], BPPNet [59], CHAL [60], iGANet [61], and R-YOLO [62] on the RESIDE dataset. For synthetic hazy images, the AOD-Net, RVIS, BPPNet, CHAL, iGANet, and R-YOLO methods achieved good dehazing performance but had some shortcomings. For example, the CHAL and RVIS methods performed excessive dehazing, resulting in color distortion in the image. Relying on ASMs, the AOD-Net method based on deep learning also had a color distortion problem. The dehazing results of iGANet showed an apparently overexposed dehazed image. The R-YOLO method resulted in a serious loss of edge detail information in local areas, such as blurred edges of leaves. The dehazed image generated from BPPNet contained a large amount of haze residue. The proposed method effectively avoided these problems and restored the background light expected in a daytime image. The atmospheric light value obtained from the proposed method effectively reduced the image haze cover, preserved the image edge information, and increased the image contrast.
b) Night-time: As shown in Fig. 12, we compared our method with AOD-Net [58], RVIS [36], BPPNet [59], CHAL [60], iGANet [61], and R-YOLO [62] on the NightHaze-1 dataset. The dehazed results of these methods are shown in Fig. 12. BPPNet increased the image details but also introduced obvious color deviation and overexposure. RVIS enhanced the contrast of the image but introduced color deviations (such as distorted colors). After applying AOD-Net, CHAL, and R-YOLO, the image became dark. Overall, AOD-Net, RVIS, BPPNet, CHAL, iGANet, and R-YOLO were hardly effective in dehazing night-time hazy images. AOD-Net, RVIS, BPPNet, and iGANet were effective in dehazing daytime images. CHAL and R-YOLO were effective in dehazing RS images. Compared with these methods, our proposed method eliminated the residual haze in images, suppressed the sky region color, and preserved the image details. c) RS: As shown in Fig. 13, we compared our method with AOD-Net [58], RVIS [36], BPPNet [59], CHAL [60], iGANet [61], and R-YOLO [62] on the SateHaze1K dataset. CHAL and R-YOLO obtained a superior dehazing effect compared to AOD-Net, RVIS, BPPNet, and iGANet. The images were darker after RVIS and BPPNet. AOD-Net, iGANet, and R-YOLO recovered the image detail information but performed incomplete dehazing and retained residual haze in local areas of the image. CHAL had an overexposure problem, whereas the proposed method had a better recovery effect and more true colors. Our proposed method not only adequately recovered the synthetic image but also enhanced the image contrast and saturation. Compared with these methods, our proposed method eliminated the residual haze from the image and suppressed the color distortion while preserving the image detail information.
2) Quantitative Results: We performed further experiments on three synthetic hazy image datasets and used PI, PSNR, and SSIM to show the performance of our method when compared with other state-of-the-art methods in the field of image dehazing. Table I and Fig. 14 show the average PI, PSNR, and SSIM results on three synthetic hazy image datasets.
On the RESIDE and NightHaze-1 datasets, the iGANet method has the lowest PI and the highest PSNR and SSIM relative to the AOD-Net, RVIS, BPPNet, CHAL, and R-YOLO methods. On the SateHaze1K dataset, the R-YOLO method has the lowest PI and the highest PSNR and SSIM relative to the AOD-Net, RVIS, BPPNet, CHAL, and iGANet methods. On the RESIDE, NightHaze-1, and SateHaze1K datasets, the R-YOLO, CHAL, and AOD-Net methods perform the worst, respectively. Our method has higher PSNR and SSIM values than the comparison methods on all three synthetic datasets, indicating that our method can be applied to both outdoor and RS images. Furthermore, the dehazed image retains the detailed information and true colors of the original image.

E. Results on Real Hazy Image Datasets 1) Qualitative Results:
We conducted experiments on datasets of real hazy images to compare the performance of our method with that of other state-of-the-art methods in image dehazing tasks. a) Daytime: Our method was compared with AOD-Net [58], RVIS [36], BPPNet [59], CHAL [60], iGANet [61], and R-YOLO [62] on the O-Haze dataset. The O-Haze dataset contained 45 haze-free and 45 hazy images of different scenes.
As shown in Fig. 15, for real hazy images, the AOD-Net, RVIS, BPPNet, CHAL, iGANet, and R-YOLO methods achieved good dehazing performance but had shortcomings. The R-YOLO, RVIS, and BPPNet methods performed excessive dehazing, resulting in severe color distortion in the image. The AOD-Net method based on deep learning also had a color distortion problem because it relies to some extent on the ASM. The dehazing results of RVIS showed an apparently overexposed dehazed image. The CHAL method resulted in a serious loss of edge detail information in local areas, such as blurred edges of leaves, and the dehazed image of iGANet contained a large amount of haze residue. The proposed method effectively avoided these problems and restored the background light expected in a daytime image. The atmospheric light value provided by the proposed method effectively reduced the image haze cover, preserved the image edge information, and increased the image contrast.
b) Night-time: Our method was compared with AOD-Net [58], RVIS [36], BPPNet [59], CHAL [60], iGANet [61], and R-YOLO [62] on the NSID dataset. The dataset contained 30 haze-free images and 30 hazy images of different scenes. As shown in Fig. 16, iGANet increased the image details but also introduced obvious color deviation and overexposure. R-YOLO enhanced the contrast of the image but introduced color deviations (such as distorted colors). After applying AOD-Net and CHAL, the image became dark. Overall, AOD-Net, RVIS, BPPNet, CHAL, iGANet, and R-YOLO were hardly effective in dehazing night-time hazy images. AOD-Net, RVIS, BPPNet, and iGANet were effective in dehazing daytime images. CHAL and R-YOLO were suited to RS image dehazing. Compared with these methods, our proposed method eliminated the residual haze in images, suppressed the sky region color, and preserved the image details. c) RS: Our method was compared with AOD-Net [58], RVIS [36], BPPNet [59], CHAL [60], iGANet [61], and R-YOLO [62] on the RICE dataset. As shown in Fig. 17, AOD-Net and CHAL obtained a superior dehazing effect compared to RVIS, BPPNet, and iGANet. The images were darker after BPPNet dehazing. AOD-Net, CHAL, and BPPNet recovered the image detail information but performed incomplete dehazing and retained residual haze in local areas of the image. The R-YOLO method had an overexposure problem, whereas the proposed method had a better recovery effect and more true colors; not only did it adequately recover the real image, but it also enhanced the image contrast and saturation. Compared with these methods, our proposed method eliminated the residual haze from the image and suppressed the color distortion while preserving the image detail information.
2) Quantitative Results: We performed further experiments on real hazy image datasets and used PI, PSNR, and SSIM to illustrate the performance of our method relative to other state-of-the-art methods in the field of image dehazing. Table II and Fig. 18 present the average PI, PSNR, and SSIM results on the three real hazy image datasets. As shown in Table II and Fig. 18, on the O-HAZE and NSID datasets, iGANet obtained the lowest PI and the highest PSNR and SSIM relative to AOD-Net, RVIS, BPPNet, CHAL, and R-YOLO. On the RICE dataset, R-YOLO obtained the highest PSNR and SSIM relative to AOD-Net, RVIS, BPPNet, CHAL, and iGANet. On the three datasets, CHAL and AOD-Net had the worst performance. Our method obtained higher PSNR and SSIM values than the other methods on the three real datasets, indicating that our method is applicable to both outdoor and RS images. As established in Section IV-E1, the dehazed images of our method retained the detailed information and true colors of the original images.

F. Computational Complexity
Table III presents the average running time of each dehazing algorithm on 50 real hazy images of size 512 × 512 and 1024 × 1024 under the same experimental platform. The results showed that for images of the same size, CHAL was the least efficient, taking up to 0.15 s to process an image of size 512 × 512 and up to 0.21 s to process an image of size 1024 × 1024. R-YOLO was more efficient than the proposed method but slower in the preliminary training and learning of the network model, in addition to requiring more hardware. Therefore, the proposed method is more efficient and less costly in practical applications.

V. DISCUSSION
Adverse weather degrades image quality and reduces useful information in the image, which affects target detection, identification, and tracking. In addition, outdoor and RS images have different imaging mechanisms because of the different sensors in the imaging equipment used to capture them. Therefore, it is crucial to develop a method that can simultaneously dehaze outdoor images and RS images.
Existing dehazing methods can dehaze only outdoor images or only RS images and not both. Methods based on deep learning can better dehaze outdoor images and RS images; however, pretraining the deep learning models is time-consuming and requires high-quality training labels. Our proposed dehazing method can dehaze both outdoor images and RS images and does not require pretraining or a large number of samples.
The proposed dehazing method involves fusing the ASM and Retinex to form a new dehazing model, which enhances the image quality during dehazing. To solve the problem of wrong transmittance estimation, we proposed a Gaussian weight image fusion method to refine the transmission map, thereby improving the accuracy of transmittance and atmospheric intensity values.
However, as the dehazing effect is poor when the image contains dense clouds, more work is required to remove dense haze.
As outdoor images and RS images have different imaging mechanisms, to improve the dehazing effect on outdoor and RS images, further studies should pay more attention to the following:
1) proposing more effective ways of characterizing and fusing features of outdoor and RS images;
2) designing new priors and discovering deeper features in images;
3) improving the removal of dense haze and clouds through more refined features and combination strategies.

VI. CONCLUSION
We propose an efficient dehazing method that can be applied to both outdoor and RS images. This method takes advantage of both image-enhancement-based and image-restoration-based dehazing methods. To address the problem of inaccurate transmittance calculation in existing methods, a Gaussian-weighted image fusion is introduced to obtain a refined transmittance. Additionally, the dehazed images are corrected using a USM method to solve the color distortion problem. The experimental results showed that our method outperforms existing methods on all selected metrics.
In the future, we will work on decreasing the complexity of the method to further apply it to the real-time processing of image acquisition terminals.