Article

Enhancement of Asymmetrically Color-Cast Sandstorm Image Using Saturation-Based Color Correction and Hybrid Transmission Network

Department of Electronics Engineering, Pukyong National University, 45 Yongso-ro, Nam-gu, Busan 48513, Republic of Korea
Symmetry 2023, 15(5), 1095; https://doi.org/10.3390/sym15051095
Submission received: 27 March 2023 / Revised: 4 May 2023 / Accepted: 6 May 2023 / Published: 16 May 2023

Abstract

The images discussed in this manuscript were captured under atmospheric conditions such as smog, sandstorms, and dust. Because they were taken in various environments, they exhibit features such as dimness or a color cast: a smoggy image has a greenish or bluish color veil, while a sandstorm image has a yellowish or reddish color veil caused by sand particles. Various methods have been used to enhance dust-degraded images. However, if the color-cast components are not considered during enhancement, the enhanced image acquires a new, artificial color veil that was not present in the input, because a color-veiled image does not have uniform color channels; certain channels are attenuated by sand particles. Therefore, this paper proposes a saturation-based color-balancing method to correct the asymmetric color cast produced by this channel attenuation. Moreover, because the balanced image still contains dust and the distribution of hazy components is asymmetrical, a dehazing procedure is needed to enhance the image. This work used the original image and a reversed image to train a hybrid transmission network and generate the image's transmission map. Both objective and subjective assessment procedures were used to compare the performance of the proposed method with that of other methods, and the assessment shows that the proposed method performs better than the other methods.

1. Introduction

The images discussed in this paper have diverse features, such as a hazy appearance or a color veil, caused by various atmospheric circumstances. Hazy and dusty images are dim and unclear, and a sandstorm image additionally carries a yellowish or reddish color veil because certain color channels are attenuated by sand particles. Moreover, because the sandstorm image has low resolution and a scarce color channel in certain environments, it presents a challenge for computer vision and image recognition. Therefore, a sandstorm-image-enhancement procedure is needed. Because sandstorm images and dusty images are formed by a similar imaging process, a dehazing procedure is required to enhance both. However, existing dehazing methods have no color-balancing step; the enhanced image therefore contains a new artificial color cast that was not visible in the color-veiled input. Accordingly, to enhance the sandstorm image naturally, a color-balancing procedure is also needed.
Hazy-image-enhancement methods can be divided into two broad categories: machine-learning-based methods and non-machine-learning-based algorithms.
Numerous studies have used non-machine-learning-based algorithms to enhance hazy images. He et al. proposed a dehazing algorithm using the dark channel prior (DCP) [1]. This method is widely applied for dehazing; however, because it uses a constant kernel size to estimate the transmission map, the enhanced image shows block-like artifacts. Meng et al. used a boundary-constrained transmission map to enhance hazy images [2]; their algorithm compensates for the DCP method using boundary constraints. Because this method has no color-balancing procedure, if the image has a cast color, the enhanced image will have an artificial color. Narasimhan et al. proposed a dehazing method using the image's scene depth map, generated under different weather conditions [3], and also presented a hazy-image-enhancement algorithm based on changes in scene color under varying atmospheric conditions [4]. Although this method enhances hazy images, the enhancement effect weakens as scene depth increases. Zhao et al. enhanced hazy images using a transmission map combining pixel-wise and patch-wise estimates to compensate for the edge regions of the existing transmission map; if the enhanced image is too dark, an exposure procedure is applied [5]. Tarel et al. proposed an image-enhancement method using white balance, atmospheric veil inference, and corner-preserving smoothing [6]. Naseeba et al. enhanced hazy images using a depth-estimation module that refines the transmission map with median filtering, a color-analysis module based on the gray-world assumption, and a visibility-restoration module that adjusts the transmission map [7]. Schechner et al. proposed a hazy-image-enhancement algorithm using polarization [8]. Hong et al. enhanced hazy images using adaptive gamma correction with a saturation increase [9]; however, this method uses only a constant value that is not image-adaptive. Al-Ameen proposed a dusty-image-enhancement method using a tri-threshold with gamma correction [10]; although it enhances color-cast dusty images, the constant value is not suitable for varied image conditions. Shi et al. enhanced dusty images using a normalized gamma transform and contrast-limited adaptive histogram equalization [11]. This method enhances color-cast dusty images; however, because it balances the color components with a mean shift of the color ingredients, an artificial color cast can appear in the result. Cheng et al. enhanced sand-dust images using blue-channel compensation [12]; this method handles color-cast sandstorm images suitably, but if a color channel of the image is too scarce, a new artificial color veil can appear. Cheng et al. also proposed a sandstorm-image-enhancement algorithm using the blue channel prior and white balance [13]. Gao et al. established a sand-dust-image-enhancement method using the blue channel prior and a color-balancing method [14]; it enhances color-cast sand-dust images sufficiently, but for strongly attenuated images, in which certain color channels are scarce, the result can show a new color veil. Shi et al. proposed a sandstorm-image-enhancement algorithm using a compensated dark channel prior [15]. This method also uses a mean shift of the color ingredients to correct the color cast; however, a newly cast color can appear in the enhanced image.
Furthermore, many studies have focused on enhancing hazy images using machine learning. Zhu et al. enhanced hazy images using the color attenuation prior and a trained depth map [16]. Ren et al. enhanced hazy images using two multi-scale convolutional neural networks: one generating a transmission map and the other refining it [17]. Although their method enhances hazy images well, because the training images were taken in the daytime, nighttime images are not sufficiently enhanced. Wang et al. enhanced hazy images using an atmospheric illumination prior and a multi-scale convolutional neural network [18]. This method estimates the transmission map effectively; however, in some images, the sky region of the transmission map is not well estimated [18]. Lee enhanced sandstorm images using an image-adaptive eigenvalue and a brightness-adaptive dark channel network [19]. Santra et al. improved hazy images using a transmittance map and environmental illumination [20]. This method enhances hazy images; however, for images taken at nighttime, the result contains artifacts because the synthetic training images do not cover nighttime environments. Yu et al. enhanced hazy images using ensemble learning with a two-branch neural network [21]. Zhou et al. improved hazy images using robust polarization and neural networks [22]; however, since this method cannot estimate certain atmospheric conditions, the enhanced image shows artifacts in some cases [22]. Zhang et al. enhanced hazy images using a pyramid channel-based feature attention network with three modules (three-scale feature extraction, pyramid channel-based feature attention, and reconstruction) to extract diverse image characteristics [23]. Machine-learning-based dehazing methods are sometimes superior to non-machine-learning algorithms; however, creating a hazy dataset is a difficult task, and synthetic images cannot cover the range of circumstances required to train a neural network sufficiently.
The sandstorm image has a color veil due to the attenuation of color components. Therefore, to enhance the sandstorm image naturally, the asymmetrically cast color needs a color-balancing procedure. Moreover, because the color-balanced image then looks like a dusty image without a color veil, an image-adaptive dehazing procedure is needed to enhance it sufficiently, since the distribution of hazy particles is asymmetrical. Therefore, this paper proposes both an image-adaptive color-balancing method and a dehazing method.

2. Proposed Method

2.1. Saturation-Based Asymmetric Color-Channel Compensation

The sandstorm image has a certain color veil, either reddish or yellowish, due to color-channel attenuation, and the distribution of the color channels is asymmetrical. To balance the asymmetrical color components caused by this attenuation naturally, parameters that reflect the image's characteristics are needed. Hong et al. [9] enhanced hazy images using gamma correction on the value channel of the HSV domain together with an increase in the saturation channel. However, because a hazy image is merely dim and has no color cast, such a fixed variation in saturation can introduce a new color cast. The saturation of an image indicates how its colors are mixed and whether they are dark or light. The sandstorm image has a yellowish or reddish color cast; if the hue channel is adjusted to balance the color, the image's colors change, producing an artificial effect. Because saturation describes how the colors are mixed, a color-cast image should instead be balanced by adaptively controlling its saturation.
Figure 1 provides an overview of the color-balancing procedure used in the proposed method. Figure 1a is an asymmetrically color-cast sandstorm image; Figure 1b shows the variation in saturation (the brown dotted arrow and brown circle mark the saturation position of the yellowish-cast sandstorm image; the black dotted arrow and blue circle mark the saturation position of the color-balanced image); and Figure 1c is the color-balanced image. As Figure 1a–c shows, changing the image's saturation compensates for its color veil. Because the color-cast image can be balanced by a change in saturation, this work proposes an image-adaptive color-balancing algorithm based on saturation variation.

2.1.1. Color-Compensation Measure on Reddish or Orange Images

The sandstorm image has two types of color cast: reddish (yellowish), caused by sand particles, or greenish (bluish), caused by smog particles. In a reddish color-veiled image, the red channel has the highest mean value, and the blue and green channels have lower mean values than the red channel due to channel attenuation. In a greenish color-veiled image, the green channel has the highest mean value, and the red and blue channels have lower mean values than the green channel. Accordingly, to enhance reddish and orange color-cast sandstorm images, this paper proposes the following color-balancing parameters:
$$ratio_{rb} = m(I^{r}) - m(I^{b}),$$   (1)
$$ratio_{rg} = m(I^{r}) - m(I^{g}),$$   (2)

where $ratio_{rb}$ and $ratio_{rg}$ are the differences between the mean values of the red and blue channels and of the red and green channels, respectively, and $m(\cdot)$ is the averaging operator. If the sandstorm image has a reddish or yellowish color veil, the mean value of its red channel is larger than those of the other channels; therefore, $ratio_{rb}$ and $ratio_{rg}$ are always greater than zero. The ratios are applied as follows:

$$ratio_{RY} = (ratio_{rb} + ratio_{rg}) \cdot \omega,$$   (3)
$$\omega = \max\{m(I^{r}),\ m(I^{g}),\ 1 - m(I^{b})\},$$   (4)

where $ratio_{RY}$ is the ratio for the reddish or yellowish sandstorm image and $\omega$ is a controlling parameter that depends on the image condition. If the image has a heavy reddish or yellowish color cast, its blue channel is scarce and has the lowest average value; however, because the average value of the reversed blue channel is then the highest, the weight remains fairly uniform. Likewise, if the image has only a light color veil, the average values of the color channels are fairly uniform, and so is the average value of the reversed channel. Therefore, through Equation (4), the ratio adapts to the image condition.
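For concreteness, the following is a minimal NumPy sketch of Equations (1)–(4); the function name and the assumption that the input is an RGB image normalized to [0, 1] are illustrative, not part of the original method.

```python
import numpy as np

def reddish_ratio(img):
    """Compute ratio_RY of Equations (1)-(4) for an RGB image in [0, 1].

    Returns None when the image is not reddish/yellowish cast
    (i.e., when ratio_rb or ratio_rg is not positive).
    """
    m_r, m_g, m_b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
    ratio_rb = m_r - m_b                     # Equation (1): mean difference, red vs. blue
    ratio_rg = m_r - m_g                     # Equation (2): mean difference, red vs. green
    if ratio_rb <= 0 or ratio_rg <= 0:
        return None                          # not a reddish/yellowish cast
    omega = max(m_r, m_g, 1.0 - m_b)         # Equation (4): image-adaptive weight
    return (ratio_rb + ratio_rg) * omega     # Equation (3)
```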

2.1.2. Color-Compensation Measures for Greenish or Bluish Images

The reddish color-cast image has imbalanced color channels, with the red channel more abundant than the others. Meanwhile, if the image has a greenish or bluish color veil, the average value of the green channel is higher than those of the other channels; that is, it exceeds that of the red channel. Therefore, if $ratio_{rb}$ or $ratio_{rg}$ is less than or equal to zero, this paper uses the following differences between channel means to enhance the greenish color-cast image:

$$ratio_{gr} = m(I^{g}) - m(I^{r}),$$   (5)
$$ratio_{gb} = m(I^{g}) - m(I^{b}),$$   (6)

where $ratio_{gr}$ and $ratio_{gb}$ are the differences between the mean values of the green channel and of the red and blue channels, respectively.

$$ratio_{GB} = (ratio_{gr} + ratio_{gb}) \cdot \omega,$$   (7)

where $ratio_{GB}$ is the ratio for the greenish or bluish color-cast image.
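The greenish/bluish branch follows the same pattern; this sketch assumes the weight $\omega$ of Equation (4) is reused in Equation (7), as the text implies.

```python
def greenish_ratio(img):
    """Compute ratio_GB of Equations (5)-(7) for a greenish/bluish cast."""
    m_r, m_g, m_b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
    ratio_gr = m_g - m_r                     # Equation (5)
    ratio_gb = m_g - m_b                     # Equation (6)
    omega = max(m_r, m_g, 1.0 - m_b)         # assumed reuse of Equation (4)
    return (ratio_gr + ratio_gb) * omega     # Equation (7)
```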

2.1.3. Color Compensation Using Image-Adaptive Measures

The ratios obtained by Equations (1)–(7) are applied to balance the image based on saturation, as follows:

$$S_{p} = S \cdot ratio_{\varphi},\quad \varphi \in \{RY,\ GB\},$$   (8)

where $S_{p}$ is the saturation channel enhanced by the proposed method, $S$ is the saturation channel of the input image, and $ratio_{\varphi}$ is the image-adaptive ratio for the reddish or greenish color-cast image obtained by Equations (1)–(7). Hong et al. [9] enhance hazy images by increasing the saturation with a constant value; however, that value is not image-adaptive. With Equations (1)–(8), even when a color channel of the image is scarce, the enhanced image has naturally balanced color channels.
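A hedged sketch of how Equation (8) might be applied with OpenCV, reusing the ratio functions sketched above; the exact color-space handling is an assumption, since the paper does not spell it out.

```python
import cv2
import numpy as np

def balance_color(img_bgr):
    """Saturation-based color balancing, a sketch of Equation (8)."""
    img = img_bgr.astype(np.float32) / 255.0
    rgb = img[..., ::-1]                     # BGR -> RGB for the channel means
    ratio = reddish_ratio(rgb)
    if ratio is None:                        # fall back to the greenish/bluish branch
        ratio = greenish_ratio(rgb)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv[..., 1] = np.clip(hsv[..., 1] * ratio, 0.0, 1.0)  # S_p = S * ratio_phi
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```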
Figure 2 shows balanced results for both a color-cast image and a non-color-cast image. Figure 2b shows the color-balanced image obtained by Hong et al.'s [9] method; it still has a shifted color because that method increases saturation by a constant value. Meanwhile, the proposed color-balancing algorithm, shown in Figure 2c, performs well on both heavily color-cast and non-color-cast images owing to its image-adaptive saturation variation. Therefore, the proposed color-balancing algorithm is suitable for enhancing sandstorm images.

2.2. Hybrid Transmission Network

The color-balanced image still has hazy features, similar to a dim image. Moreover, because haze particles are distributed asymmetrically, a dehazing procedure is required. Existing dehazing methods usually use the dark channel prior (DCP) [1], which is useful for estimating the transmission map of a single image. However, because a constant kernel size is used to estimate the image's dark regions, the estimated transmission map exhibits block artifacts. Meanwhile, because a convolutional neural network (CNN) uses various kernel sizes, it can generate a transmission map naturally; therefore, this work estimates the transmission map using a CNN. Training a neural network requires varied training data, and because acquiring real transmission maps is a challenging task, a synthetic dataset is used. However, because a synthetic dataset does not contain all image circumstances, the enhanced image can contain artifacts. The transmission map is defined as [1]:
$$t(x) = e^{-\beta d(x)},$$   (9)

where $\beta$ is a scattering parameter and $d(x)$ is the depth map of the image. The transmission map changes with $\beta$ and takes on diverse features depending on whether $\beta$ is low or high. Therefore, this work generates a suitable transmission map by varying $\beta$ and calls the result the ground truth transmission map, obtained as follows:
$$t_{i}(x) = e^{-\beta_{i} d(x)},$$   (10)
$$t_{g}(x) = \frac{1}{N}\sum_{i=1}^{N} t_{i}(x),$$   (11)

where $t_{g}(x)$ is the ground truth transmission map, $N$ is the number of $\beta_{i}$ values, $t_{i}(x)$ is the $i$th transmission map, and $\beta_{i} \in [0.5,\ 2]$ at 0.1 intervals. The ground truth transmission map generated by Equations (10) and (11) has diverse features.
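For illustration, a minimal sketch of Equations (10) and (11), assuming a depth map d(x) that is already normalized to a suitable range:

```python
import numpy as np

def ground_truth_transmission(depth):
    """Average the maps t_i(x) = exp(-beta_i * d(x)) over beta_i in
    [0.5, 2.0] at 0.1 intervals (Equations (10) and (11))."""
    betas = np.arange(0.5, 2.0 + 1e-9, 0.1)            # 16 values of beta_i
    t = np.stack([np.exp(-b * depth) for b in betas])  # one map per beta_i
    return t.mean(axis=0)                              # t_g(x), Equation (11)
```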
To generate an image-adaptive transmission map, this work uses a hybrid transmission map that applies the theories of the dark channel prior (DCP) [1] and the bright channel prior (BCP) [24]. The DCP estimates the dark regions of an image; however, if the image has bright regions, such as the sky, the estimated regions remain bright rather than dark, and an image enhanced using the DCP alone shows an artificial effect. Because the BCP [24] estimates the bright regions, hybridizing the DCP [1] and BCP [24] yields a more natural transmission map. The DCP [1], BCP [24], and transmission map are obtained as follows:

$$I^{d}(x) = \min_{c}\ \min_{y \in \Omega(x)} \frac{I^{c}(y)}{A^{c}},$$   (12)
$$I^{b}(x) = \max_{c}\ \max_{y \in \Omega(x)} \frac{I^{c}(y)}{A^{c}},$$   (13)
$$t(x) = 1 - I^{d\ \mathrm{or}\ b}(x),$$   (14)

where $I^{d}(x)$ is the dark channel, $I^{b}(x)$ is the bright channel, $\Omega(x)$ is the patch used to estimate the dark or bright region, $A^{c}$ is the backscatter light, $c \in \{r, g, b\}$, and $t(x)$ is the transmission map obtained by reversing the dark or bright channel. In Equations (12)–(14), a fixed kernel size is applied to estimate the dark or bright regions, and because the transmission map is obtained by reversing the dark or bright channel, the enhanced image can acquire block artifacts from the constant kernel size.
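A possible NumPy/SciPy realization of Equations (12)–(14); the patch size of 15 and the use of scipy.ndimage filters are illustrative choices, not the paper's stated implementation.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dcp_bcp_transmissions(img, A, patch=15):
    """Dark/bright channels and transmission maps of Equations (12)-(14).

    img: HxWx3 RGB image in [0, 1]; A: length-3 backscatter light estimate.
    """
    norm = img / np.asarray(A)                             # I^c(y) / A^c
    dark = minimum_filter(norm.min(axis=2), size=patch)    # Equation (12)
    bright = maximum_filter(norm.max(axis=2), size=patch)  # Equation (13)
    return 1.0 - dark, 1.0 - bright                        # Equation (14)
```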
Lee [19] designed a neural network applying the DCP theory [1]. Therefore, to estimate a transmission map without block artifacts, this work uses a multi-scale CNN, which offers diverse kernel sizes, together with the DCP [1] and BCP [24] theories. The brief design of the neural network is as follows:

$$l_{t_{d}}(x) = 1 - l_{d}(x),$$   (15)
$$l_{t_{b}}(x) = 1 - l_{b}(x),$$   (16)
$$l_{t_{p}}(x) = \mathrm{cat}(l_{t_{d}}(x),\ l_{t_{b}}(x)),$$   (17)

where $l_{t_{d}}(x)$ is the transmission layer obtained by applying the DCP theory [1], $l_{t_{b}}(x)$ is the transmission layer obtained by applying the BCP theory [24], $l_{d}(x)$ is the dark-channel layer produced with minimum pooling, $l_{b}(x)$ is the bright-channel layer produced with maximum pooling, and $\mathrm{cat}(\cdot)$ is the concatenation layer; the rectified linear unit (ReLU) [25] is used as the activation function after each convolution layer and post-arithmetic operation. This design applies the DCP [1] and BCP [24] theories through minimum pooling and maximum pooling, respectively. Moreover, to capture various image characteristics, multi-scale convolutional neural networks are applied.
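For orientation, a hedged PyTorch sketch of Equations (15)–(17); the layer counts, channel widths, and sigmoid output head are illustrative assumptions rather than the exact architecture of Figure 3, and minimum pooling is realized as negated maximum pooling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridTransmissionStem(nn.Module):
    """Sketch of Equations (15)-(17): dark- and bright-channel transmission
    layers fused by concatenation. Not the full network of Figure 3."""

    def __init__(self, width=16):
        super().__init__()
        self.conv_d = nn.Conv2d(3, width, 3, padding=1)    # dark-channel branch
        self.conv_b = nn.Conv2d(3, width, 3, padding=1)    # bright-channel branch
        self.fuse = nn.Conv2d(2 * width, 1, 3, padding=1)  # hybrid fusion network

    def forward(self, x):
        # l_d(x): minimum pooling (DCP theory), realized as negated max pooling
        l_d = -F.max_pool2d(-F.relu(self.conv_d(x)), 3, stride=1, padding=1)
        # l_b(x): maximum pooling (BCP theory)
        l_b = F.max_pool2d(F.relu(self.conv_b(x)), 3, stride=1, padding=1)
        l_td = 1.0 - l_d                        # Equation (15)
        l_tb = 1.0 - l_b                        # Equation (16)
        l_tp = torch.cat([l_td, l_tb], dim=1)   # Equation (17): concatenation
        return torch.sigmoid(self.fuse(l_tp))   # single-channel transmission map
```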
Figure 3 provides an overview of the proposed neural networks and their individual subnetworks. Figure 3a gives the overview. Figure 3b shows the dark-channel network, where brown denotes a minimum pooling layer, sky blue a convolution layer, green an up-sampling layer, and dark blue a concatenation layer; this network has 10 convolution layers, 2 minimum pooling layers, 3 up-sampling layers, and 4 concatenation layers. Figure 3c shows the multi-scale bright-channel network, where yellow denotes a maximum pooling layer, sky blue a convolution layer, dark blue a concatenation layer, and green an up-sampling layer; this network has 8 convolution layers, 2 maximum pooling layers, 2 up-sampling layers, and 2 concatenation layers. Figure 3d shows the hybrid network, where sky blue denotes a convolution layer and dark blue a concatenation layer; this network has 2 convolution layers and 1 concatenation layer. The yellow rectangles in Figure 3b–d group unit layers; 1/2 and ×2 indicate size variation (downsizing by 1/2, upsizing by ×2), and the number below each layer indicates its channel size. The networks partially apply a U-Net [26] architecture with multi-scale resolution to capture various image characteristics.
Figure 4 shows ground truth transmission maps, the transmission map generated by the proposed algorithm, and the existing transmission maps. In the existing methods of He et al. [1], Santra et al. [20], Ren et al. [17], Meng et al. [2], and Zhao et al. [5], the bright region is estimated too dark or too bright; in contrast, the transmission map generated by the proposed algorithm estimates both bright and dark regions suitably. Therefore, the proposed algorithm is competitive in transmission map estimation.

2.3. The Training Environment Set

The color-balanced image has diverse features, such as dustiness or haziness; therefore, the training dataset should also be diverse. To train the neural network suitably, this work used the D-Hazy dataset [27], which has 1449 original images with synthetic hazy images and depth maps. During training, 10% of the 1449 images were used for validation and 90% for training. A hybrid loss function combining the mean squared error (MSE) and the structural similarity index measure (SSIM) [28] was applied, as follows:
$$L_{p} = L_{mse} + L_{\alpha},$$   (18)
$$L_{\alpha} = \frac{L_{mse} \cdot L_{ssim}}{L_{mse} + L_{ssim}},$$   (19)
$$L_{mse} = \frac{1}{N}\sum_{i=1}^{N} e_{i}^{2},$$   (20)
$$L_{ssim} = \frac{(2 \mu_{t} \mu_{G} + C_{1})(2 \sigma_{tG} + C_{2})}{(\mu_{t}^{2} + \mu_{G}^{2} + C_{1})(\sigma_{t}^{2} + \sigma_{G}^{2} + C_{2})},$$   (21)
where $L_{mse}$ is the MSE loss, $e$ is the error, $L_{ssim}$ is the SSIM loss, $\mu_{t}$ is the average intensity of the target image, $\mu_{G}$ is the average intensity of the generated image, $\sigma_{tG}$ is the cross-covariance between the target and generated images, $\sigma_{t}$ and $\sigma_{G}$ are the standard deviations of the target and generated images, and $C_{1}$, $C_{2}$ are constants. Using Equation (18), the loss value can be adjusted more suitably because SSIM [28] and MSE measure the similarity between two images in different ways. The Adam optimizer [29] was used. The training and validation batch sizes were both set to 8, the learning rate was 0.0001, and training was set to 20 epochs. During training, each epoch comprised 163 iterations of 8-image batches (1304 images), so 20 epochs covered 26,080 images; during validation, each epoch comprised 18 iterations of 8-image batches (144 images), so 20 epochs covered approximately 2880 images. The SSIM [28] measure was used to report accuracy during training. The hardware environment was an Intel® Core™ i7-8700 CPU @ 3.20 GHz, 32 GB RAM, a 12 GB GeForce RTX 2060, and a 6 GB GeForce GTX 1660 Super.
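A sketch of the hybrid loss of Equations (18)–(21) using global image statistics; note that Equation (21) gives the SSIM itself, so this sketch minimizes 1 − SSIM instead, a common convention that is flagged here as an assumption.

```python
import numpy as np

def hybrid_loss(generated, target, c1=0.01**2, c2=0.03**2):
    """Hybrid loss of Equations (18)-(21); c1, c2 are the usual SSIM
    defaults [28], assumed rather than stated by the paper."""
    e = generated - target
    l_mse = np.mean(e ** 2)                                 # Equation (20)
    mu_g, mu_t = generated.mean(), target.mean()
    sig_g, sig_t = generated.std(), target.std()
    sig_tg = np.mean((target - mu_t) * (generated - mu_g))  # cross-covariance
    ssim = ((2 * mu_t * mu_g + c1) * (2 * sig_tg + c2)) / (
        (mu_t ** 2 + mu_g ** 2 + c1) * (sig_t ** 2 + sig_g ** 2 + c2))
    l_ssim = 1.0 - ssim          # assumption: use dissimilarity for minimization
    l_alpha = (l_mse * l_ssim) / (l_mse + l_ssim + 1e-12)   # Equation (19)
    return l_mse + l_alpha                                  # Equation (18)
```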
Figure 5 shows the variation in the loss value and the accuracy during training; both converge gradually.
Additionally, the Detection in Adverse Weather Nature (DAWN) dataset [30], which has 323 natural sandstorm images, was used to validate the trained network.

2.4. Image Recovery

The sandstorm image has a color cast due to the color of sand particles. To correct this, this work proposed a saturation-based color-balancing algorithm; the balanced image then appears hazy. Therefore, to enhance the image, this work used the CNN with a hybrid transmission map. Using the color-balanced image and the generated transmission map, the image is recovered as follows [1,4,31,32,33]:
$$J^{c}(x) = \frac{I_{B}^{c}(x) - A_{B}^{c}}{\max(t_{p}(x),\ t_{0})} + A_{B}^{c},$$   (22)

where $J^{c}(x)$ is the enhanced image, $x$ is the pixel location, $I_{B}^{c}(x)$ is the color-balanced image obtained using the proposed method, $t_{p}(x)$ is the generated transmission map, $t_{0}$ is set to 0.1 to prevent division by zero, and $A_{B}^{c}$ is the backscatter light of the balanced image obtained by He et al.'s [1] method. Moreover, to refine the enhanced image, this work applied a guided image filter [34] as follows:

$$J_{G}^{c}(x) = G_{f}(J^{c}(x),\ K,\ eps),$$   (23)
$$J_{en}^{c}(x) = (J^{c}(x) - J_{G}^{c}(x)) \cdot ratio + J_{G}^{c}(x),$$   (24)

where $J_{G}^{c}(x)$ is the guided-filtered image, $G_{f}(\cdot)$ is the guided filter, $K$ is the kernel size (set to 16), $eps$ was set to $0.1^{2}$, $J_{en}^{c}(x)$ is the refined enhanced image, and $ratio$ was set to 5.
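A minimal sketch of the recovery and refinement steps of Equations (22)–(24); the self-guided use of the guided filter and the dependence on opencv-contrib-python are assumptions.

```python
import cv2
import numpy as np

def recover(balanced_bgr, t_map, A, t0=0.1, K=16, eps=0.1**2, ratio=5.0):
    """Scene recovery and refinement of Equations (22)-(24).

    Requires opencv-contrib-python for cv2.ximgproc.guidedFilter.
    balanced_bgr: color-balanced image in [0, 1]; t_map: transmission map;
    A: per-channel backscatter light.
    """
    t = np.maximum(t_map, t0)[..., None]                # max(t_p(x), t_0)
    J = (balanced_bgr - A) / t + A                      # Equation (22)
    J = np.clip(J, 0.0, 1.0).astype(np.float32)
    J_G = cv2.ximgproc.guidedFilter(J, J, K, eps)       # Equation (23), self-guided
    return np.clip((J - J_G) * ratio + J_G, 0.0, 1.0)   # Equation (24): detail boost
```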
Figure 6 shows the color-balanced image, the transmission maps, and the enhanced images obtained by the methods of He et al. [1] and Santra et al. [20]. Figure 6b shows the color-balanced image; Figure 6c,d shows the transmission map and enhanced image obtained by He et al.'s [1] method using Figure 6b; Figure 6e,f shows those obtained by Santra et al.'s [20] method using Figure 6b; and Figure 6g,h shows those obtained by the proposed algorithm using Figure 6b. The images enhanced by He et al. [1] and Santra et al. [20] contain artificial effects due to their transmission maps, whereas the image enhanced by the proposed algorithm does not.

3. Experiment Result and Discussion

The color-cast sandstorm image is balanced by the proposed algorithm, and the balanced image has hazy characteristics; to enhance it, this work applies a CNN-based dehazing algorithm. This section demonstrates the performance of the proposed algorithm for sandstorm image enhancement. The assessment is divided into two categories: subjective assessment and objective assessment. Moreover, because the sandstorm image has a cast color, the subjective assessment is split into two branches, color correction and image enhancement, through comparison with state-of-the-art methods.

3.1. Subjective Assessment

The sandstorm image has a yellowish or reddish color cast. Therefore, subjective assessment of the enhanced sandstorm image requires two procedures: color balancing and image enhancement. Accordingly, this work compares both the color-corrected and the enhanced images with state-of-the-art methods, namely those of Al-Ameen [10], Shi et al. [11], Shi et al. [15], Gao et al. [14], Ren et al. [17], He et al. [1], Meng et al. [2], Santra et al. [20], Zhao et al. [5], Hong et al. [9], and Yu et al. [21]. Moreover, to conduct comparisons in various environments, the Detection in Adverse Weather Nature (DAWN) dataset [30], which has 323 natural sandstorm and dust storm images, was used.

3.1.1. Color Correction

This section compares the color-balancing results with those of state-of-the-art methods, namely those of Al-Ameen [10], Shi et al. [11], Shi et al. [15], and Hong et al. [9], using the DAWN dataset [30].
Figure 7 and Figure 8 compare the color-balancing effect with that of state-of-the-art methods. Shi et al.'s methods [11,15] contain a color-balancing procedure; however, their balanced images show a bluish artificial effect because they balance the color channels by mean-shifting the color components. The image balanced by Al-Ameen's [10] method retains a yellowish or reddish cast because it relies on a constant, non-image-adaptive value. Because Hong et al. [9] increase saturation to enhance hazy images, a color-cast input yields a balanced image that still contains a color shift, which the saturation increase may even thicken. In contrast, the image balanced by the proposed method shows no color cast.

3.1.2. Enhanced Image

The image balanced by the proposed method is more natural than those produced by the other methods. This section therefore assesses the enhancement of hazy images against state-of-the-art methods, namely those of Al-Ameen [10], Shi et al. [11], Shi et al. [15], and Gao et al. [14]. Moreover, because the color-balanced image has hazy characteristics, existing dehazing algorithms are also compared, namely those of Ren et al. [17], He et al. [1], Meng et al. [2], Santra et al. [20], Zhao et al. [5], Hong et al. [9], and Yu et al. [21]. The methods of He et al. [1], Meng et al. [2], and Zhao et al. [5] enhance hazy images using the DCP; those of Ren et al. [17], Santra et al. [20], and Yu et al. [21] use CNNs; and Hong et al. [9] use gamma correction with increased saturation. Meanwhile, the methods of Gao et al. [14], Shi et al. [11], Shi et al. [15], and Al-Ameen [10] target sandstorm images and include a color-balancing procedure.
Figure 9 and Figure 10 compare the performance of the proposed method with state-of-the-art methods. The methods of He et al. [1] and Meng et al. [2] enhance hazy images; however, on color-cast images, the enhanced image shows an artificial color because these methods have no color-compensation procedure. Shi et al.'s algorithms [11,15] enhance sandstorm images even when they contain a color veil; however, due to the mean shift of color components used for balancing, their results sometimes show an artificial bluish color. Gao et al.'s [14] enhanced image appears dim because of its transmission map. Al-Ameen's [10] result retains a light color cast because the method uses a constant, non-image-adaptive value. The methods of Ren et al. [17] and Santra et al. [20] enhance hazy images using CNNs; however, they have no color-compensation procedure, so the enhanced image shows a color shift. The image enhanced by Hong et al. [9] also retains a cast color: the method lacks an image-adaptive color-balancing procedure and instead increases the image's saturation, which thickens the cast color veil. Yu et al. [21] enhance hazy images; however, with no color-compensation procedure, the result contains color-shift components. Zhao et al. [5] improve lightly color-cast sandstorm images; however, lacking a suitable color-correction procedure, their enhanced reddish or orange color-cast images still contain cast color components. Meanwhile, the image enhanced by the proposed algorithm shows no shifted color and no artificial effect. Therefore, the proposed algorithm is well suited to sandstorm image enhancement.
Figure 7. The performance comparison of color-balancing algorithms using state-of-the-art methods and the proposed method: (a) input; (b) Shi et al. [15]; (c) Shi et al. [11]; (d) Al Ameen [10]; (e) Hong et al. [9]; (f) Proposed method.
Figure 8. The performance comparison of color-balancing algorithms using state-of-the-art methods and the proposed method: (a) input; (b) Shi et al. [15]; (c) Shi et al. [11]; (d) Al Ameen [10]; (e) Hong et al. [9]; (f) Proposed method.
Figure 9. The performance of the enhanced image compared to state-of-the-art methods and the proposed method: (a) input; (b) He et al. [1]; (c) Meng et al. [2]; (d) Hong et al. [9]; (e) Shi et al. [15]; (f) Shi et al. [11]; (g) Gao et al. [14]; (h) Al Ameen [10]; (i) Ren et al. [17]; (j) Santra et al. [20]; (k) Zhao et al. [5]; (l) Yu et al. [21]; (m) Proposed method.
Figure 10. The performance of the enhanced image compared with state-of-the-art methods and the proposed method: (a) input; (b) He et al. [1]; (c) Meng et al. [2]; (d) Hong et al. [9]; (e) Shi et al. [15]; (f) Shi et al. [11]; (g) Gao et al. [14]; (h) Al Ameen [10]; (i) Ren et al. [17]; (j) Santra et al. [20]; (k) Zhao et al. [5]; (l) Yu et al. [21]; (m) Proposed method.

3.2. Objective Assessment

The color-cast sandstorm image is balanced by the proposed algorithm, and Figure 7 and Figure 8 show that its performance compares favorably with state-of-the-art methods; the dehazing results of the proposed method are likewise subjectively superior. This section assesses objectively how suitable the proposed method is for enhancing sandstorm images. Two metrics are used: the natural image quality evaluator (NIQE) [35] and the underwater image quality measure (UIQM) [36]. The NIQE [35] metric indicates how natural an image is; the lower the NIQE score, the more natural the enhanced image and the better its quality. Meanwhile, the UIQM [36] score reflects the image's contrast, colorfulness, and sharpness; the higher the UIQM score, the better the enhancement. Moreover, to assess the generated transmission map, the SSIM [28] and MSE metrics are used.
Table 1 shows how similar each transmission map is to the ground truth. According to the SSIM [28] scores, the transmission map of Ren et al. [17] is more similar to the ground truth than those of He et al. [1], Santra et al. [20], Zhao et al. [5], and Meng et al. [2], although it is dimmer. Meanwhile, according to the MSE scores, the transmission map of He et al. [1] is more similar to the ground truth than those of Ren et al. [17], Santra et al. [20], Zhao et al. [5], and Meng et al. [2]. The transmission map of the proposed method is the most similar to the ground truth by both SSIM [28] and MSE.
Table 2 and Table 3 show the NIQE [35] scores for Figure 9 and Figure 10; a lower NIQE score indicates a better-enhanced, more natural image. The NIQE score of He et al. [1] is higher than that of Gao et al. [14] in some images because He et al.'s [1] method has no color-compensation procedure. Gao et al. [14] obtained a higher NIQE score than Al-Ameen [10], even though the image enhanced by Gao et al.'s [14] method has a smaller color shift than Al-Ameen's [10]. Meng et al.'s [2] method has a lower NIQE score than Gao et al.'s [14], although its enhanced image contains a cast color. Shi et al. [15] obtained a lower NIQE score than Meng et al. [2] because Shi et al. [15] used a color-compensation procedure. Ren et al. [17] obtained a higher NIQE score than Shi et al. [15] because Ren et al.'s [17] method contains no color-compensation procedure, and Shi et al. [11] obtained a lower NIQE score than Ren et al. [17] in some images for the same reason. Santra et al. [20] obtained a higher NIQE score than Shi et al. [11] in some images because Santra et al.'s [20] method contains no color-compensation procedure. Hong et al. [9] obtained a higher NIQE score than Shi et al. [11] because Shi et al. [11] used a color-balancing procedure. Yu et al. [21] obtained a higher NIQE score than Zhao et al. [5] and Shi et al. [11] because Shi et al.'s [11] method contains a color-compensation procedure. Nevertheless, an image with a shifted color can sometimes obtain a lower NIQE score than a non-color-shifted image; therefore, the NIQE score is not an absolute measure but a reference. The proposed method has a lower NIQE score than all other methods.
Table 4 and Table 5 compare the enhanced images with state-of-the-art methods through the UIQM [36] score; a higher score denotes a better-enhanced image. He et al. [1] obtained a higher UIQM score than Gao et al. [14], although He et al.'s [1] method contains no color-compensation procedure. Gao et al.'s [14] method obtained a lower UIQM score than Al-Ameen's [10], although the image enhanced by Al-Ameen's [10] method has a cast color. Meng et al. [2] obtained a lower UIQM score than Al-Ameen [10] because Meng et al.'s [2] method has no color-compensation procedure. Shi et al. [15] obtained a higher UIQM score than Meng et al. [2] because Shi et al. [15] used a color-compensation procedure. Ren et al. [17] obtained a lower UIQM score than Shi et al. [15] because Ren et al.'s [17] method has no color-compensation procedure. Shi et al. [11] obtained a lower UIQM score than Ren et al. [17], although Shi et al.'s [11] method contains a color-compensation procedure. Santra et al. [20] obtained a higher UIQM score than Shi et al. [11] in some images, although Santra et al.'s [20] method contains no color-compensation procedure and its enhanced image has a cast color. Hong et al. [9] obtained a lower UIQM score than Shi et al. [11] because Shi et al.'s [11] method contains a color-compensation procedure. Zhao et al. [5] obtained a higher UIQM score than Yu et al. [21] and Gao et al. [14], although Gao et al.'s [14] method contains a color-compensation procedure. Because an image with a cast color can thus obtain a higher UIQM score than others, the UIQM is not an absolute measure but a reference. The image enhanced by the proposed method has a higher UIQM score than all other methods.
Table 6 and Table 7 compare the enhanced images with state-of-the-art methods through the averaged NIQE [35] and UIQM [36] scores over Figure 9 and Figure 10 and the DAWN dataset [30]. The existing dehazing methods contain no color-compensation procedure, yet a color-cast image can sometimes score a lower NIQE than a non-color-cast image; similarly, an enhanced image with a color shift can score a higher UIQM than a non-color-cast image. Therefore, the NIQE and UIQM metrics are not absolute measures but references. Nevertheless, the proposed method obtained the lowest NIQE score and the highest UIQM score among all methods.

4. Conclusions

The sandstorm image has an asymmetrically cast color, such as yellowish or reddish, due to the color-channel attenuation caused by sand particles. If the cast-color components are not considered when enhancing the sandstorm image, the enhanced image acquires an artificial color. Therefore, this work balanced the image using a saturation-based color-correction algorithm on the asymmetrically cast color. The balanced image contains no color veil but still appears hazy. Moreover, because the distribution of haze ingredients is asymmetrical, a dehazing procedure was needed; this work therefore obtained a transmission map with hybrid theories, the dark channel prior and the bright channel prior, based on a CNN. The enhanced image has no artificial effect and looks natural. The contribution of this work is a saturation-based color-correction algorithm that uses the differences between the average values of the color channels of sandstorm images with various color casts; it can compensate images easily and widely, even when a color channel is very scarce due to strong attenuation. Furthermore, with the hybrid transmission map, the proposed algorithm enhances images naturally even when they contain regions that are too bright or too dark. Future work will pursue image-adaptive measures to balance the color and estimate the transmission map in low-light circumstances and thick, hazy environments.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [PubMed]
  2. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the ICCV—IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013. [Google Scholar]
  3. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef]
  4. Narasimhan, S.G.; Nayar, S.K. Chromatic framework for vision in bad weather. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662), Hilton Head, SC, USA, 15 June 2000; Volume 1, p. 00662. [Google Scholar]
  5. Zhao, D.; Xu, L.; Yan, Y.; Chen, J.; Duan, L.-Y. Multi-scale Optimal Fusion model for single image dehazing. Signal Process. Image Commun. 2019, 74, 253–265. [Google Scholar] [CrossRef]
  6. Tarel, J.-P.; Hautiere, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009. [Google Scholar]
  7. Naseeba, T.; Binu, H. KP Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions. Int. Res. J. Eng. Technol. 2016, 3, 135–139. [Google Scholar]
  8. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1. [Google Scholar]
  9. Hong, N.M.; Thanh, N.C. A single image dehazing method based on adaptive gamma correction. In Proceedings of the 2019 6th NAFOSTED Conference on Information and Computer Science (NICS), Hanoi, Vietnam, 12–13 December 2019. [Google Scholar]
  10. Al-Ameen, Z. Visibility Enhancement for Images Captured in Dusty Weather via Tuned Tri-threshold Fuzzy Intensification Operators. Int. J. Intell. Syst. Appl. 2016, 8, 10–17. [Google Scholar] [CrossRef]
  11. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand–dust image enhancement. IET Image Process. 2020, 14, 747–756. [Google Scholar] [CrossRef]
  12. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. A Fast Sand-Dust Image Enhancement Algorithm by Blue Channel Compensation and Guided Image Filtering. IEEE Access 2020, 8, 196690–196699. [Google Scholar] [CrossRef]
  13. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. Blue Channel and Fusion for Sandstorm Image Enhancement. IEEE Access 2020, 8, 66931–66940. [Google Scholar] [CrossRef]
  14. Gao, G.; Lai, H.; Jia, Z.; Liu, Y.Q.; Wang, Y. Sand-Dust Image Restoration Based on Reversing the Blue Channel Prior. IEEE Photon. J. 2020, 12, 1–16. [Google Scholar] [CrossRef]
  15. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Let You See in Sand Dust Weather: A Method Based on Halo-Reduced Dark Channel Prior Dehazing for Sand-Dust Image Enhancement. IEEE Access 2019, 7, 116722–116733. [Google Scholar] [CrossRef]
  16. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [PubMed]
  17. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II 14. Springer International Publishing: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  18. Wang, A.; Wang, W.; Liu, J.; Gu, N. AIPNet: Image-to-Image Single Image Dehazing With Atmospheric Illumination Prior. IEEE Trans. Image Process. 2018, 28, 381–393. [Google Scholar] [CrossRef] [PubMed]
  19. Lee, H. Sandstorm Image Enhancement Using Image-Adaptive Eigenvalue and Brightness-Adaptive Dark Channel Network. Symmetry 2022, 14, 2310. [Google Scholar] [CrossRef]
  20. Santra, S.; Mondal, R.; Panda, P.; Mohanty, N.; Bhuyan, S. Image Dehazing via Joint Estimation of Transmittance Map and Environmental Illumination. In Proceedings of the 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), Bangalore, India, 27–30 December 2017. [Google Scholar]
  21. Yu, Y.; Liu, H.; Fu, M.; Chen, J.; Wang, X.; Wang, K. A two-branch neural network for non-homogeneous dehazing via ensemble learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  22. Zhou, C.; Teng, M.; Han, Y.; Xu, C.; Shi, B. Learning to dehaze with polarization. Adv. Neural Inf. Process. Syst. 2021, 34, 11487–11500. [Google Scholar]
  23. Zhang, X.; Wang, T.; Wang, J.; Tang, G.; Zhao, L. Pyramid channel-based feature attention network for image dehazing. Comput. Vis. Image Underst. 2020, 197, 103003. [Google Scholar] [CrossRef]
  24. Shi, Z.; Zhu, M.M.; Guo, B.; Zhao, M.; Zhang, C. Nighttime low illumination image enhancement with single image using bright/dark channel prior. EURASIP J. Image Video Process. 2018, 2018, 13. [Google Scholar] [CrossRef]
  25. Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), Haifa, Israel, 21–24 June 2010. [Google Scholar]
  26. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer International Publishing: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  27. Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C. D-hazy: A dataset to evaluate quantitatively dehazing algorithms. In Proceedings of the 2016 IEEE international conference on image processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016. [Google Scholar]
  28. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  29. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  30. Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature dataset. arXiv 2020, arXiv:2008.05402. [Google Scholar]
  31. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
  32. Fattal, R. Single image dehazing. ACM Trans. Graph. (TOG) 2008, 27, 1–9. [Google Scholar] [CrossRef]
  33. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233. [Google Scholar] [CrossRef]
  34. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  35. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  36. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
Figure 1. Overview of the color-balancing procedure: (a) sandstorm image; (b) overview of color-balancing procedure with [16] (blue and brown circles with brown and black dotted arrows are variations of saturation); (c) color-balanced image.
Figure 2. The performance comparison of color-balancing algorithms; (a) sandstorm image with asymmetrically color-casted or non-color-casted images; (b) improved image obtained by Hong et al. [9]; (c) color-balanced image obtained by the proposed method.
Figure 3. The hybrid transmission networks: (a) overview of the hybrid transmission network; (b) transmission network of the dark channel; (c) transmission network of the bright channel; (d) the hybrid transmission network.
Figure 4. The comparison of transmission maps: (a) input; (b) ground truth transmission map; (c) transmission map developed by Zhao et al. [5]; (d) transmission map developed by He et al. [1]; (e) transmission map developed by Meng et al. [2]; (f) transmission map developed by Santra et al. [20]; (g) transmission map developed by Ren et al. [17]; (h) transmission map developed by the proposed method.
Figure 5. The variation in loss value and accuracy: (a) loss value; (b) accuracy.
Figure 6. The comparison of the enhanced image with the transmission map: (a) input; (b) color-balanced image obtained by the proposed method; (c) transmission map obtained by He et al. [1]; (d) enhanced image obtained by He et al. [1]; (e) transmission map obtained by Santra et al. [20]; (f) enhanced image obtained by Santra et al. [20]; (g) transmission map obtained by the proposed method; (h) enhanced image obtained by the proposed method (the transmission map and enhanced image of comparison algorithms use the color-balanced image (b)).
Table 1. Comparison of transmission maps through the SSIM [28] and MSE metrics with state-of-the-art methods for Figure 4 and the D-Hazy dataset [27] (PM is the proposed method).

| Image | SSIM [1] | SSIM [20] | SSIM [17] | SSIM [2] | SSIM [5] | SSIM PM | MSE [1] | MSE [20] | MSE [17] | MSE [2] | MSE [5] | MSE PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.872 | 0.789 | 0.912 | 0.758 | 0.621 | 0.950 | 0.029 | 0.071 | 0.061 | 0.065 | 0.050 | 0.003 |
| 2 | 0.858 | 0.829 | 0.828 | 0.766 | 0.823 | 0.952 | 0.026 | 0.046 | 0.133 | 0.045 | 0.034 | 0.008 |
| 3 | 0.794 | 0.895 | 0.820 | 0.559 | 0.712 | 0.927 | 0.036 | 0.014 | 0.128 | 0.092 | 0.061 | 0.005 |
| 4 | 0.769 | 0.799 | 0.891 | 0.610 | 0.694 | 0.926 | 0.063 | 0.085 | 0.058 | 0.127 | 0.094 | 0.015 |
| 5 | 0.771 | 0.784 | 0.874 | 0.609 | 0.672 | 0.941 | 0.054 | 0.078 | 0.088 | 0.100 | 0.069 | 0.004 |
| AVG (5) | 0.813 | 0.819 | 0.865 | 0.660 | 0.704 | 0.939 | 0.042 | 0.059 | 0.094 | 0.086 | 0.062 | 0.007 |
| AVG (1449) | 0.789 | 0.778 | 0.878 | 0.636 | 0.680 | 0.920 | 0.052 | 0.083 | 0.070 | 0.107 | 0.076 | 0.010 |
Table 2. Comparison of enhanced images through the NIQE [35] metric with state-of-the-art methods in Figure 9 (a lower score denotes a better-enhanced image; PM is the proposed method).

| Image | [1] | [9] | [14] | [10] | [2] | [15] | [17] | [11] | [20] | [21] | [5] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 19.694 | 19.793 | 19.927 | 20.142 | 18.892 | 19.604 | 19.740 | 19.955 | 19.748 | 19.366 | 18.893 | 18.535 |
| 2 | 20.280 | 20.164 | 20.343 | 20.579 | 20.351 | 20.275 | 20.336 | 20.129 | 20.264 | 20.061 | 20.148 | 19.829 |
| 3 | 19.554 | 19.866 | 19.774 | 19.780 | 19.468 | 19.524 | 19.618 | 19.583 | 19.629 | 19.931 | 19.247 | 18.317 |
| 4 | 19.689 | 19.671 | 19.741 | 19.692 | 19.512 | 19.588 | 19.637 | 19.635 | 19.595 | 19.629 | 19.278 | 19.094 |
| 5 | 19.770 | 19.842 | 20.104 | 19.413 | 20.261 | 19.842 | 20.085 | 19.301 | 19.706 | 19.929 | 20.257 | 17.154 |
| 6 | 19.934 | 20.371 | 20.404 | 20.588 | 19.635 | 19.737 | 20.130 | 20.151 | 20.105 | 20.327 | 19.409 | 19.040 |
| 7 | 20.157 | 19.868 | 20.051 | 20.383 | 20.088 | 20.285 | 20.387 | 20.114 | 20.341 | 20.245 | 20.105 | 19.183 |
| 8 | 19.883 | 19.794 | 19.750 | 18.936 | 19.962 | 19.599 | 20.032 | 19.275 | 19.942 | 20.153 | 19.864 | 16.758 |
| 9 | 20.338 | 20.238 | 20.318 | 20.154 | 20.227 | 20.130 | 20.173 | 20.288 | 20.128 | 19.978 | 20.369 | 19.273 |
| 10 | 20.105 | 19.723 | 20.121 | 20.253 | 20.135 | 19.719 | 20.193 | 19.695 | 20.057 | 19.639 | 19.790 | 19.263 |
| AVG | 19.940 | 19.933 | 20.053 | 19.992 | 19.853 | 19.830 | 20.033 | 19.813 | 19.952 | 19.926 | 19.736 | 18.645 |
Table 3. Comparison of enhanced images through the NIQE [35] metric with state-of-the-art methods in Figure 10 (a lower score denotes a better-enhanced image; PM is the proposed method).

| Image | [1] | [9] | [14] | [10] | [2] | [15] | [17] | [11] | [20] | [21] | [5] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 19.627 | 19.815 | 20.158 | 19.617 | 19.440 | 19.824 | 20.092 | 19.614 | 20.571 | 19.668 | 19.483 | 17.613 |
| 2 | 20.985 | 21.170 | 21.436 | 21.353 | 20.302 | 21.090 | 21.307 | 20.918 | 21.176 | 21.492 | 20.356 | 19.672 |
| 3 | 19.614 | 19.742 | 19.695 | 19.259 | 19.516 | 19.466 | 19.193 | 19.138 | 19.439 | 19.671 | 19.764 | 16.848 |
| 4 | 20.204 | 20.420 | 20.347 | 20.309 | 20.209 | 20.048 | 20.194 | 20.221 | 20.132 | 21.216 | 19.744 | 19.530 |
| 5 | 19.897 | 20.016 | 19.959 | 20.105 | 19.688 | 19.738 | 19.944 | 19.918 | 19.909 | 20.153 | 19.517 | 19.336 |
| 6 | 20.781 | 20.363 | 20.300 | 19.806 | 20.706 | 20.232 | 20.881 | 20.222 | 20.740 | 20.448 | 21.109 | 17.506 |
| 7 | 19.904 | 19.822 | 19.757 | 19.781 | 19.553 | 19.697 | 19.810 | 19.728 | 19.836 | 20.046 | 19.523 | 17.430 |
| 8 | 19.495 | 19.278 | 18.424 | 19.116 | 19.691 | 19.040 | 19.617 | 18.561 | 19.700 | 19.175 | 18.437 | 16.002 |
| 9 | 19.825 | 19.947 | 19.829 | 19.713 | 19.847 | 19.783 | 19.887 | 19.939 | 19.894 | 20.328 | 19.915 | 19.023 |
| 10 | 19.826 | 19.595 | 19.774 | 19.888 | 19.900 | 19.633 | 19.871 | 19.600 | 19.786 | 19.771 | 19.652 | 19.181 |
| AVG | 20.016 | 20.017 | 19.968 | 19.895 | 19.885 | 19.855 | 20.080 | 19.786 | 20.118 | 20.197 | 19.750 | 18.214 |
Table 4. Comparison of enhanced images through the UIQM [36] metric with state-of-the-art methods in Figure 9 (a higher score denotes a better-enhanced image; PM is the proposed method).

| Image | [1] | [9] | [14] | [10] | [2] | [15] | [17] | [11] | [20] | [21] | [5] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.865 | 0.768 | 0.652 | 1.158 | 1.364 | 0.968 | 1.037 | 0.959 | 1.098 | 0.984 | 1.581 | 1.543 |
| 2 | 0.364 | 0.511 | 0.301 | 0.834 | 0.643 | 0.751 | 0.514 | 0.478 | 0.613 | 0.654 | 1.124 | 0.941 |
| 3 | 0.906 | 0.689 | 0.564 | 0.870 | 1.226 | 0.860 | 0.970 | 0.857 | 0.954 | 0.719 | 1.310 | 1.410 |
| 4 | 0.485 | 0.444 | 0.361 | 0.735 | 0.790 | 0.799 | 0.608 | 0.617 | 0.758 | 0.757 | 0.979 | 1.205 |
| 5 | 0.934 | 0.718 | 0.765 | 0.946 | 0.937 | 0.899 | 0.896 | 1.005 | 0.905 | 0.674 | 0.966 | 1.476 |
| 6 | 0.945 | 0.579 | 0.532 | 0.806 | 1.102 | 1.178 | 0.783 | 0.795 | 0.722 | 0.691 | 1.224 | 1.456 |
| 7 | 0.420 | 0.498 | 0.387 | 0.905 | 0.780 | 0.684 | 0.770 | 0.544 | 0.672 | 0.448 | 1.007 | 1.009 |
| 8 | 0.811 | 0.635 | 0.757 | 1.093 | 0.826 | 1.180 | 0.801 | 0.947 | 0.774 | 0.628 | 0.891 | 1.511 |
| 9 | 0.429 | 0.444 | 0.527 | 0.867 | 0.479 | 0.722 | 0.599 | 0.573 | 0.470 | 0.443 | 0.634 | 1.237 |
| 10 | 0.812 | 0.917 | 0.745 | 1.237 | 0.757 | 0.870 | 0.907 | 0.941 | 0.819 | 0.876 | 1.048 | 1.483 |
| AVG | 0.697 | 0.620 | 0.559 | 0.945 | 0.890 | 0.891 | 0.789 | 0.772 | 0.779 | 0.687 | 1.076 | 1.327 |
Table 5. Comparison of enhanced images through the UIQM [36] metric with state-of-the-art methods in Figure 10 (a higher score denotes a better-enhanced image; PM is the proposed method).

| Image | [1] | [9] | [14] | [10] | [2] | [15] | [17] | [11] | [20] | [21] | [5] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.017 | 0.729 | 0.679 | 1.081 | 1.314 | 0.998 | 1.002 | 0.956 | 1.239 | 0.842 | 1.290 | 1.671 |
| 2 | 0.577 | 0.602 | 0.463 | 1.010 | 0.927 | 0.956 | 0.790 | 0.661 | 0.735 | 0.509 | 1.031 | 1.163 |
| 3 | 0.728 | 0.652 | 0.609 | 0.863 | 0.979 | 0.878 | 0.989 | 0.845 | 0.841 | 0.647 | 1.096 | 1.377 |
| 4 | 0.492 | 0.478 | 0.398 | 0.967 | 0.806 | 0.918 | 0.680 | 0.580 | 0.600 | 0.459 | 1.041 | 1.096 |
| 5 | 0.442 | 0.395 | 0.295 | 0.822 | 0.806 | 0.961 | 0.521 | 0.462 | 0.436 | 0.514 | 1.042 | 0.984 |
| 6 | 0.573 | 0.638 | 0.835 | 1.099 | 0.563 | 1.024 | 0.736 | 0.810 | 0.580 | 0.534 | 0.627 | 1.310 |
| 7 | 0.531 | 0.493 | 0.643 | 0.880 | 0.717 | 0.898 | 0.686 | 0.775 | 0.552 | 0.523 | 0.776 | 1.589 |
| 8 | 1.021 | 1.003 | 1.149 | 1.286 | 1.058 | 1.142 | 1.176 | 1.168 | 1.100 | 0.926 | 1.121 | 1.462 |
| 9 | 0.440 | 0.412 | 0.463 | 0.899 | 0.637 | 0.803 | 0.663 | 0.593 | 0.789 | 0.591 | 0.938 | 1.209 |
| 10 | 0.629 | 0.671 | 0.573 | 1.087 | 0.689 | 1.020 | 0.652 | 0.772 | 0.670 | 0.682 | 0.943 | 1.382 |
| AVG | 0.645 | 0.607 | 0.611 | 0.999 | 0.850 | 0.960 | 0.790 | 0.762 | 0.754 | 0.623 | 0.991 | 1.324 |
Table 6. Comparison of enhanced images through the averaged NIQE [35] metric with state-of-the-art methods in Figure 9 and Figure 10 and the DAWN dataset [30] (a lower score denotes a better-enhanced image; PM is the proposed method).

| Dataset | [1] | [9] | [14] | [10] | [2] | [15] | [17] | [11] | [20] | [21] | [5] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AVG (20) | 19.978 | 19.975 | 20.011 | 19.943 | 19.869 | 19.843 | 20.056 | 19.799 | 20.035 | 20.061 | 19.743 | 18.429 |
| AVG (323) | 19.946 | 20.026 | 20.083 | 19.981 | 19.791 | 19.802 | 20.024 | 19.720 | 20.017 | 20.055 | 19.709 | 18.090 |
Table 7. Comparison of enhanced images through the averaged UIQM [36] metric with state-of-the-art methods in Figure 9 and Figure 10 and the DAWN dataset [30] (a higher score denotes a better-enhanced image; PM is the proposed method).

| Dataset | [1] | [9] | [14] | [10] | [2] | [15] | [17] | [11] | [20] | [21] | [5] | PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AVG (20) | 0.671 | 0.614 | 0.585 | 0.972 | 0.870 | 0.925 | 0.789 | 0.767 | 0.766 | 0.655 | 1.033 | 1.326 |
| AVG (323) | 0.812 | 0.720 | 0.695 | 0.990 | 0.991 | 0.926 | 0.895 | 0.856 | 0.874 | 0.730 | 1.120 | 1.410 |
