A remote-sensing image enhancement algorithm based on patch-wise dark channel prior and histogram equalisation with colour correction

Object identification within an image captured during rough weather conditions (such as haze or fog) is difficult due to the degradation of the image. Rough weather conditions lead not only to variation in the image's visual effect but also hinder post-processing of the image. Furthermore, they cause inconvenience for all types of instruments that rely on optical imaging, such as satellite remote-sensing systems, aerial photo systems, outdoor monitoring systems, and object identification systems. Hence, improvement and restoration of the visual effects and enhanced post-processing are needed. This research introduces a new image enhancement approach for image dehazing based on the dark channel prior and piecewise linear transformation; in addition, a histogram equalisation technique, contrast limited adaptive histogram equalisation, is applied. The dark channel prior is well known for its simplicity and productivity. In this work, the dark channel prior is first analysed from a new angle, where average patch sizes are estimated for the computation of haze densities. Furthermore, the sky is approximated as 5–10% of the hazy image, which has a good effect in removing the haze from the image. Using the dark channel, the proposed algorithm significantly boosts dark images as well as reduces the influence of haze and noise. Eventually, for colour correction, the piecewise linear transformation technique is applied, which brings the colour close to that of the original image. Experimental results demonstrate that the proposed method significantly improves visibility on dark remote-sensing images as well as on hazy natural images.


INTRODUCTION
The image dehazing technique is essentially a way to minimise or even eliminate interference through special techniques, to produce adequate visual effects and gain more valuable information. It is always challenging to remove artefacts and noise [1] from the dehazed image. A variety of dehazing algorithms have been proposed over the last few years. They can be broadly categorised into two groups: multiple-frame and single-frame methods. Multiple-image dehazing procedures use a range of images, such as constraint-based methods [2–4] and depth-based methods [5,6]. Recently, for single-image dehazing, certain types of priors and assumptions have been adopted. For instance, Tan [7] proposed a new approach based on the observation that the haze-free image has higher local contrast than the hazy image. Considering that the transmission and the surface shading of objects are statistically independent, Fattal removed the haze using independent component analysis [8]. Subsequently, he suggested estimating the transmission using the colour-line model in a small patch [9]. Berman et al. [10] demonstrated that the colours of a haze-free image form tight clusters that stretch into haze lines in the red, green, and blue (RGB) space, and proposed these haze lines as a non-local prior. Based on the colour attenuation prior, Zhu et al. [11] developed a linear scene-depth model for the hazy input image and learned its model parameters. For single-image dehazing, the dark channel prior was suggested: it is based on the observation that local patches of hazy images often contain pixels with very low intensity in at least one colour channel [12]. For the transmission calculation, Bui et al. used colour ellipsoids, which are statistically fitted to the haze-affected RGB pixel clusters [13]. A system for optimising transmission is also proposed in [14]. An edge-preserving decomposition-based approach is proposed in [15].
Using the internal relationship between gamma correction and the atmospheric scattering mechanism, Ju et al. proposed a dehazing mechanism [16]. Zhu et al. [17] suggested a new approach for estimating transmission based on finding dark pixels rather than the dark channel. Liu et al. [18] reformulated the haze-removal problem as a luminance reconstruction framework. For transmission evaluation, intensity and saturation are used in [19]. The transmission is determined by employing saturation alone in [20]. Image dehazing is an ill-posed problem (i.e. it has no unique solution) of image processing. Various optimisation and machine learning techniques are required to solve this problem, e.g. by assuming that there are numerous images of a scene. After the ideal performance of neural network models for image classification [21], they now also provide outstanding performance for single-image dehazing, such as the recently proposed all-in-one dehazing network (AOD-Net) [22], which recovers the final fog-free image through a new intermediate variable without evaluating the transmission map, with the limitation of a weak overall effect. Current single-image dehazing algorithms can remove haze in the image to a certain extent, but high-quality haze-free images cannot be obtained. Compared with existing single-image defogging methods, PFNet [23] has the best performance on the RESIDE dataset; it can restore most outdoor blurred images as expected, but when it is applied to indoor blurred images, it produces unwanted artefacts. Our recently proposed method [24] also has a good effect on remote-sensing (RS) images in a hazy environment; however, it does not deal well with a dark environment. A variety of dehazing algorithms, including multiscale CNN [12], DehazeNet [25], AOD-Net [22], etc., have been proposed with the rapid advancement of artificial intelligence (AI) technology. A more detailed study of various outdoor imagery dehazing methods can be found in [26].
Generally, the dark channel prior approach, for its simplicity and efficiency, is considered one of the best methods for removing haze. However, it has inherent difficulties in practice. As mentioned in [27], for example, the dark channel often underestimates the transmission when an object exhibits a grey intensity similar to the atmospheric light, which in turn over-saturates the result and makes the scene radiance unnatural. In optical RS images, clouds are also an inevitable barrier to effective observation by sensors [28]; therefore, it is essential to recover the original information for further post-processing in order to use it safely. In RS image enhancement, colour and contrast are ongoing challenges; for example, Imtiaz et al. [29] proposed methods capable of handling both grey-scale and colour images.
The existence of a light-scattering effect in an RS image causes the phenomenon of dark (haze) regions in the image. Pixels of an image are termed dark pixels when a very low intensity is observed in at least one colour (RGB) channel. To exploit the dark channel, the dark channel prior (DCP) defogging algorithm is used to remove the fog/haze in the image. Before applying DCP, the haze density, the most crucial factor affecting the quality of the dark image, is calculated by estimating the patch size. Image blocks that are more suitable for DCP are identified, and then the DCP method is employed as the primary denoising algorithm to eliminate fog in a dark environment. It was found that the DCP results darkened the background and the whole image. Therefore, to overcome this shortcoming of the DCP method, the technique of combining DCP with histogram equalisation is introduced [30], and finally the colour correction piecewise linear transformation (PWLT) technique is applied because it recovers the original colour well. By combining DCP and histogram equalisation, we can recover high-quality, fog-free images, and then apply the colour correction technique, PWLT, to the results to obtain the final output image. The final output image not only eliminates the haze but also restores the original colour of the image. The remainder of the paper is structured as follows. Section 2 presents the proposed approach, Section 3 describes the experiments and outcomes, and finally, the conclusion and future research are presented.

METHODOLOGY
Real-time dehazing provides fast, robust visibility for computer vision applications such as AI robots, autonomous vehicles, and RS technology. The DCP results are processed by histogram equalisation, i.e. the contrast limited adaptive histogram equalisation (CLAHE) technique [3], to eliminate the drawbacks of the DCP algorithm. Then, the obtained image is segmented into small blocks to estimate the haze density. The optimal patch size is then fed back to the DCP method, which gives excellent results. Furthermore, to recover the original colour, PWLT [31] is applied as the colour correction method to the output results.
In the image-processing sense, the histogram records how often each intensity value occurs in the image; histogram equalisation is the technique that increases the dynamic range of the histogram of the image. Histogram plots show the frequency of each image-intensity value. A histogram describes the image globally, providing valuable details for contrast enhancement. Low contrast means the image uses few intensity values, and high contrast means the image uses many distinct intensity values. The histogram of a digital image f is the discrete function H(r_k) = n_k, where r_k is the kth intensity value and n_k is the number of pixels in f with intensity r_k.
As hazy images lose contrast and colour, even for minor changes it is necessary to evaluate the differences between two images with a histogram. Four specific types of images and their respective histograms exist: bright, dark, low contrast, and strong contrast. We have used the histogram equalisation technique (CLAHE) because it is a process in which the histogram is evenly distributed to increase the contrast in an image.

FIGURE 1 Proposed approach block diagram
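As an illustration of the redistribution step that CLAHE builds on, the following is a minimal NumPy sketch of plain (global) histogram equalisation; CLAHE additionally applies this per tile with a clip limit, which is omitted here. This is illustrative code, not the implementation used in the experiments.

```python
import numpy as np

def equalise_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalisation for an 8-bit grayscale image.

    CLAHE extends this idea by applying it per tile with a clip limit;
    this sketch shows only the basic redistribution step.
    """
    hist = np.bincount(img.ravel(), minlength=256)   # H(r_k) = n_k
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                        # first non-zero CDF value
    # Build a lookup table that spreads the CDF over the full [0, 255] range.
    lut = np.clip(np.round((cdf - cdf_min) / max(img.size - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

For a low-contrast input whose intensities span only part of the range (e.g. 100–149), the output spans the full 0–255 range, which is exactly the dynamic-range increase described above.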
Our method has two objectives: the first is to restore a dark RS image when a dark environment affects the RS image; the other is dehazing, because fog is also a factor that affects RS image quality. Ultimately, our main goal is to restore the original image from a dark RS image. Histogram equalisation can have the adverse effect of increasing visual graininess in a low-contrast image. The overall flow of our method is shown in Figure 1.
Image dehazing is an ill-posed inverse problem in signal or image reconstruction. Its purpose is to restore an unknown clear image from a hazy image. The model can be expressed as follows:

I(x) = J(x) t(x) + A (1 − t(x)),    (1)

where I(x) is the observed image, J(x) is the clearer image we need to estimate, A is the global atmospheric light, and t(x) is the medium transmission.
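The atmospheric scattering model I(x) = J(x) t(x) + A (1 − t(x)) can be sketched directly. The following NumPy function synthesises a hazy image from a clear image, a transmission map, and a scalar atmospheric light (a scalar A is a simplification for illustration; in general A is per-channel).

```python
import numpy as np

def apply_haze(J: np.ndarray, t: np.ndarray, A: float) -> np.ndarray:
    """Forward atmospheric scattering model: I(x) = J(x) t(x) + A (1 - t(x)).

    J: clear image in [0, 1], shape (H, W, 3)
    t: transmission in (0, 1], shape (H, W)
    A: global atmospheric light (scalar here for simplicity)
    """
    # Broadcast t over the colour channels; the second term is the air light.
    return J * t[..., None] + A * (1.0 - t[..., None])
```

With t = 1 everywhere the scene is unchanged; as t decreases, every pixel is pulled towards the atmospheric light A, which is the washed-out appearance that dehazing tries to invert.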

Haze density estimation matrix
Haze density is the most important factor in a dehazing application, and because of changing weather conditions, haze density is not constant.
To calculate the haze density, we transform the RGB colour image I(x) into the hue, saturation, and value (HSV) colour model, I_HSV(x). Then, we obtain the distance D(x) of each pixel in the I_HSV(x) model,
where C is the brightness in the HSV colour model; its effect is such that the shorter the distance, the brighter the pixel, which indicates the maximum density value among all D(x) [12]. The R, G, and B components are all correlated with colour luminance (intensity), meaning that colour details cannot be isolated from lighting. HSV separates the colour details from the image luminance, which helps when analysing image/frame luminance. For cases where the representation of colour plays an integral role, HSV is often used.
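A minimal sketch of the brightness extraction that this step relies on, using NumPy. The paper's exact distance D(x) is not reproduced here; the mean-brightness density proxy below is an assumption for illustration only, not the paper's estimator.

```python
import numpy as np

def rgb_to_hsv_value(img: np.ndarray) -> np.ndarray:
    """Return the V (value/brightness) channel of an RGB image in [0, 1].

    In the HSV model, V = max(R, G, B); this is the brightness component
    that the per-pixel distance D(x) depends on.
    """
    return img.max(axis=-1)

def mean_brightness(img: np.ndarray) -> float:
    """A simple haze-density proxy (an illustrative assumption, not the
    paper's exact estimator): the mean V over the image. Brighter,
    washed-out images tend to be hazier."""
    return float(rgb_to_hsv_value(img).mean())
```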
Hue, saturation, and value are the key colour properties that allow us to distinguish between colours. Colour is a crucial element of photography: when used successfully, it can draw the viewer's eye towards the composition of the image and can affect its mood and emotion. For a given hue, saturation ranges from 0% to 100%; reducing saturation towards zero leads to more grey and a faded effect.
Value defines the brightness or intensity of the colour, from 0% to 100%, where 0 is totally dark and 100 is the brightest and shows the most colour.

Patch-size estimation
In our method, patch-size estimation is the main contribution to DCP technology. Conventionally, a fixed patch size is selected for the formation of the dark channel and the transmission. In general, the larger the patch, the better the dark channel; however, with a larger patch size the halo effect near edges may become more substantial, which violates DCP's premise of uniform transmission within a patch. Similarly, if we choose a smaller patch size, there will be excessive colour enhancement in the output [32].
Experiments have shown that if the input image is not clear (hazy), small patch sizes lead to better output. Therefore, the haze density has a significant influence on the selection of the patch size, but the fixed patch size used by most methods cannot reflect the best parameter choice. We calculate the average haze density through the haze density matrix.

Haze removal method based on DCP
There are many recent methods for RS image enhancement; still, dark channel prior methods are commonly used to recover natural haze-free images from noisy images in a dark environment. We use this technique to process dark RS images because bad weather and light scattering cause haze in the image, resulting in noisy dark particles, which the DCP method eliminates well. DCP states that local patches in the non-sky regions of an RS image usually contain some pixels with very low intensity in at least one of the RGB colour channels. This is captured by the dark channel Z^dark(x) at x:

Z^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} Z^c(y) ).    (5)
In Equation (5), Z^c is one of the RGB channels of Z, and the dark-channel operator extracts the minimum RGB colour value Z^c(y) over the coordinates y in the local patch Ω(x) centred at x. If x does not belong to a sky area, Z^dark(x) takes a low value approaching 0, which is why it is called the dark channel. S(x), the observed image intensity, is a mixture with atmospheric light, so S(x) is usually brighter than Z^dark(x) and corresponds to a lower medium transmission t(x). It is worth noting that, compared with the haze-free radiance J(x), the dark channel of the hazy image has a higher value, which helps to detect and eliminate the haze. Firstly, the global atmospheric light A is estimated; then the coarse transmission t̃ is calculated by dividing Equation (1) by A^c:

t̃(x) = 1 − min_{y∈Ω(x)} min_c ( S^c(y) / A^c ),    (6)

where t̃(x) is the coarse, patch-based transmission map and S^c(y)/A^c is an element-wise division. If Ω(x) is set to a large patch size (e.g. 16 × 16), the dark channel of the haze-free image tends to 0 [9], which justifies Equation (6). In addition, He [12] proposed a parameter w, used to keep a small amount of haze in the image so that depth can still be perceived, giving the medium transmission

t̃(x) = 1 − w · min_{y∈Ω(x)} min_c ( S^c(y) / A^c ),    (7)

where w is the rate of haze removal chosen for human perception. To optimise the medium transmission t̃ and find the accurate transmission t, a soft matting algorithm is applied, and the scene radiance is recovered as

J(x) = ( S(x) − A ) / max(t(x), t_0) + A,    (8)

where t̃(x) is the patch-based transmission map, t(x) is the refined transmission map, A is the atmospheric light, and t_0 is set to 0.1 in order to avoid division by a small value.
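The dark channel, the coarse transmission estimate, and the scene radiance recovery can be sketched as follows in NumPy. This is an illustrative sketch: it uses a plain minimum filter for the patch operation, omits the soft matting refinement, and assumes images scaled to [0, 1].

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Patch-wise dark channel: min over RGB channels, then over a
    patch x patch neighbourhood. img has shape (H, W, 3) in [0, 1]."""
    mins = img.min(axis=-1)                      # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    H, W = mins.shape
    out = np.full((H, W), np.inf)
    for dy in range(patch):                      # sliding-window minimum
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + H, dx:dx + W])
    return out

def estimate_transmission(img, A, w=0.85, patch=15):
    """Coarse transmission: t(x) = 1 - w * dark_channel(I / A)."""
    return 1.0 - w * dark_channel(img / A, patch)

def recover(img, A, t, t0=0.1):
    """Scene radiance: J(x) = (I(x) - A) / max(t(x), t0) + A."""
    t = np.maximum(t, t0)[..., None]
    return (img - A) / t + A
```

For a uniform image equal to the atmospheric light, the transmission collapses to 1 − w and the recovered radiance stays at A, as expected for a scene that is pure air light.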
The DCP improves the spatial resolution of the transmission map t(x) along object contours by exploiting the local structure of the dark-channel image. He et al. therefore refined the map with soft matting [12]; this processing incurs high computational cost and cannot be done in real time. Further research proposed a pixel-wise DCP [29] and a combination of the original patch-wise DCP in flat regions with a pixel-wise DCP around edge regions (Ibrahim [35]). Therefore, Z^C(y)/A^C in Equation (6) cannot be 0 when the correct patch size is set, as shown in Table 1, instead of a constant patch size; in the proposed method, Z^C(y)/A^C takes a small value in the range (0, 1).

Limitations in the DCP process
(i) In general, a digital image is made up of two parts. The first is the foreground, which carries the main content (information). The second is the background, which has little to do with the main content. DCP technology therefore has major flaws in background enhancement: if there is noise (haze) in a background with low contrast, DCP has an adverse effect in a dark, low-contrast environment, because it reduces the contrast of the local area, as shown in Figure 2(a). The local area is regarded as the image background of a scene with light-and-dark and misty shadow.
(ii) Some direct attenuation can be obtained from Equation (1) [6], which is nonlinear in the image intensity. DCP therefore underestimates foreground image fading, so its output contrast is lower than that of the fog-free image. The haze is eliminated, but when the background area of the haze-free image is large and the contrast is low, DCP produces a weak visual effect and reduces the foreground contrast.
(iii) Due to the influence of external light, DCP does not restore the primary colours well, so even some traditional methods based on DCP cannot restore the primary colours [12,33,34].

Sky region estimation
In previous methods, the lower bound of the transmission, t_0, is a fixed value (0.1). It is very important to select the sky region in the hazy image: the sky region is a part of the haze image, and to remove artefacts in the sky region, it must be processed. When a fixed value is used, the sky region shows artefacts because the transmission map is not smooth. Moreover, DCP may fail when the scene looks substantially like the air light; for example, experiments show that sky regions tend to degrade with severe noise and colour distortion. In our work, we take this into account and compute t_0 from the given input image. We assume that the sky area makes up 5–10% of the hazy image, so we calculate t_0 based on this 5–10% portion via the estimated patch size.

Air light estimation
Air light is the component of a hazy image contributed by dense haze; this light is assumed to travel towards the observer. We estimate it by computing a dark channel with a patch twice the size of the original one, in order to avoid mis-estimating the air light because of strong external light sources. After estimating this dark channel, the brightest 0.2% of its values are selected, and the air light is calculated as the average of the corresponding pixels. From Equation (1), we assume that the transmission t(x) in a local patch is constant; combining the resulting relations yields the air light estimate. A constant parameter ω, 0 < ω < 1, is introduced because some particles exist even in a clear atmosphere.
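A sketch of the air-light selection step in NumPy: average the input pixels at the brightest fraction of the dark channel. The 0.2% fraction follows the description above; the double-size patch is assumed to have been used when computing `dark`, and the implementation details are illustrative.

```python
import numpy as np

def estimate_airlight(img: np.ndarray, dark: np.ndarray,
                      frac: float = 0.002) -> np.ndarray:
    """Average the input pixels at the brightest `frac` of dark-channel values.

    img: hazy image, shape (H, W, 3); dark: dark-channel map, shape (H, W),
    ideally computed with a patch twice the usual size to avoid being
    misled by small bright light sources. Returns a per-channel A (3,).
    """
    n = max(1, int(frac * dark.size))            # at least one pixel
    idx = np.argsort(dark.ravel())[-n:]          # brightest dark-channel pixels
    flat = img.reshape(-1, 3)
    return flat[idx].mean(axis=0)
```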

Scene radiance
To restore the scene radiance, we select the brightest 0.1% of pixels in the dark channel. Since these pixels are the most haze-opaque, the final reconstructed image is calculated as

J(x) = ( S(x) − K_c ) / max(t(x), t_0) + K_c,

where the ambient light K_c is estimated as the average of these bright pixels in each of the three colour channels, and the lower bound t_0 suppresses noise when t(x) is close to 0.

Colour correction
For accurate colour correction, we apply the Buchsbaum (grey-world) colour correction model, as in Ibrahim [35], through the PWLT transformation. The grey-world assumption implies that the average colour of a scene is achromatic, equal to 128 for an 8-bit image. The method is modified into a piecewise (segmented) colour transformation, and the average value of the image is stretched towards 128 (Equation (17)). Based on the average value, we determine which colour direction is suitable for low-contrast and low-intensity values. Because the red wavelength is absorbed fastest, and because the original colour must be reconstructed, a corresponding correction is applied. As shown in Figure 2(b), the original colour is not well recovered in the scene radiance output before PWLT (it shows whitening), so we apply PWLT for colour correction, which is the most important part of image dehazing, as in many images the object colour resembles that of its surroundings. Applied in this way, it gives better contrasting features.
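The grey-world idea behind the colour correction can be sketched as follows. For simplicity this uses a single linear gain per channel rather than the paper's piecewise linear transformation, so it is a simplified stand-in, not the PWLT itself.

```python
import numpy as np

def grey_world_correct(img: np.ndarray, target: float = 128.0) -> np.ndarray:
    """Grey-world (Buchsbaum) colour correction for an 8-bit RGB image:
    scale each channel so its mean equals `target` (128 for 8-bit).

    A single linear gain per channel is used here as a simplified
    stand-in for the paper's piecewise linear transformation.
    """
    out = img.astype(np.float64)
    means = out.reshape(-1, 3).mean(axis=0)       # per-channel averages
    gains = target / np.maximum(means, 1e-6)      # gain pulling mean to target
    return np.clip(np.rint(out * gains), 0, 255).astype(np.uint8)
```

A colour-cast image (e.g. channel means 64, 128, 200) is mapped so that all three channel means sit at 128, which is the achromatic-average assumption in action.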

EXPERIMENTAL ANALYSIS
We have implemented our algorithm on the NWPU-RESISC45 [25] dataset, with both hazy and dark images, to evaluate the performance of the different algorithms quantitatively. We compared our method with Ibrahim [35], the adaptive gamma correction with weighting distribution (AGCWD) method [26], the dark channel prior (DCP) technique [12], the colour attenuation prior (CAP) method [11], and the gated fusion network (GFN) method [36], for both qualitative visual assessment and quantitative assessment. The experiment was performed on a PC with 8 GB RAM in MATLAB 2019b. When handling dense, hazy images, a moderate β is required; β = 1.0 is more than adequate in most situations. For the best results, we picked ω = 0.85, t_0 = 0.1, and, for colour correction, λ = 0.1. The images shown in Figure 3(a) come in different sizes, so for fast response we resize them to 256 × 256 pixels. The proposed dehazing method is sensitive to haze density; therefore, we compute the haze density first, as shown in Table 2, by choosing the average patch size. It was seen during the experiments that if the patch size is taken arbitrarily, as in earlier methods, the dehazing is much less effective, as shown in the results in Figure 3(b). Our main goal is to design the algorithm for dark RS satellite images. From the NWPU-RESISC45 dataset, we applied both hazy images (airport image) and dark images to our method.

FIGURE 3 Comparison results ((c) DCP method [12], (d) CAP method [11], (e) GFN method [36], and (f) proposed method)

From Figure 3(b), we can see that the method not only effectively removes haze but also effectively handles dark images, which is the purpose of the methodology. Some deep learning methods are effective against haze but do not behave well in a dark environment, especially on dark satellite images.

Objective evaluation
In the quantitative evaluation, PSNR and SSIM [37] are calculated as follows. A well-known consistency metric used to measure the error between the ground truth G and the restored image J is the mean square error (MSE), which ranges from 0 to ∞:

MSE = (1 / (H W K)) Σ_x Σ_{c=1}^{K} ( G_c(x) − J_c(x) )^2.    (19)

The actual pixel fidelity is evaluated by the peak signal-to-noise ratio (PSNR), which should be maximised:

PSNR = 10 log_10( MAX^2 / MSE ).    (20)

The structural similarity index (SSIM) measures the preservation of edge structure, which is ignored in the PSNR computation; an SSIM closer to 1 means a better-restored image:

SSIM(G, J) = ( (2 μ_G μ_J + c_1)(2 σ_GJ + c_2) ) / ( (μ_G^2 + μ_J^2 + c_1)(σ_G^2 + σ_J^2 + c_2) ).    (21)

In the above formulas, H is the image height, W is the image width, and K is the number of colour channels; G and J are the real image and the haze-removal result, respectively. The variable MAX in Equation (20) is the maximum possible pixel value of the ground truth image; μ_G and μ_J are the Gaussian-weighted means of G and J in the local region; σ_G and σ_J are the standard deviations of G and J; and σ_GJ is the covariance of G and J in the local region. The proposed method and the traditional methods are compared on real haze images as a qualitative visual evaluation. The average PSNR and SSIM are calculated as shown in Figure 4.
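The MSE and PSNR computations can be sketched directly in NumPy; SSIM needs Gaussian-weighted local statistics and is omitted here for brevity.

```python
import numpy as np

def mse(G: np.ndarray, J: np.ndarray) -> float:
    """Mean square error between ground truth G and restored image J."""
    return float(np.mean((G.astype(np.float64) - J.astype(np.float64)) ** 2))

def psnr(G: np.ndarray, J: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better.

    MAX is the maximum possible pixel value (255 for 8-bit images);
    identical images give infinite PSNR.
    """
    m = mse(G, J)
    return float('inf') if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```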

Visible edges ratio
In our method, the ratio of newly visible edges (e) and the average gradient (r̄) are used to provide further quantitative results. The results are compared with Ibrahim [35], the AGCWD method [26], the DCP technique [12], the CAP method [11], and the GFN method [36]. The metric e gives the improvement rate of the visible edges, and its mathematical expression is as follows [33]:

e = ( n_i − n_k ) / n_k,

where n_k and n_i represent the number of visible edges in the hazy and haze-free images, respectively. The higher the value of e, the stronger the edges of the haze-free image. To describe edge recovery and texture information, another parameter, r̄, averages the gradient ratios r_i = Δk / Δl at the visible edges of the fog-free image, where Δk and Δl are the gradients in the restored and hazy images, and {r_i} is the set of ratios over the visible edges. A larger r̄ shows that the corresponding defogging technique preserves edges better than the other methods. Tables 3 and 4 provide the details of e and r̄.
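A sketch of the visible-edge improvement rate e in NumPy. The gradient-magnitude threshold used here to decide which edge pixels are "visible" is a simplifying assumption for illustration, not the exact visibility criterion of [33].

```python
import numpy as np

def visible_edges(img: np.ndarray, thresh: float = 0.1) -> int:
    """Count 'visible' edge pixels as gradient magnitudes above a threshold
    (a simple stand-in for the visibility criterion of the metric)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return int(np.count_nonzero(np.hypot(gx, gy) > thresh))

def edge_ratio(hazy: np.ndarray, restored: np.ndarray) -> float:
    """e = (n_restored - n_hazy) / n_hazy: relative gain in visible edges."""
    n_hazy = visible_edges(hazy)
    n_rest = visible_edges(restored)
    return (n_rest - n_hazy) / max(n_hazy, 1)   # guard against n_hazy = 0
```

A faint step edge (below the visibility threshold) that dehazing restores to full contrast yields a large positive e, matching the interpretation that a higher e means stronger edges in the haze-free image.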

HE: In [35], a one-dimensional Gaussian filter is used to smooth the input image's histogram, and then the smoothed histogram is segmented based on local maxima.
AGCWD: This method [26] is an automatic transformation that modifies the histogram to increase the contrast of digital images; it can brighten a darkened image using gamma correction and the probability distribution of luminance pixels.
DCP: Using a local patch [12], this method relies on pixels with low intensity in at least one colour channel. The model uses this prior to estimate the haze thickness directly and restore a high-quality image.
CAP: This method [11] proposes a fast approach based on depth maps. This new prior uses the atmospheric scattering model to estimate the transmission and reconstruct the scene radiance.
GFN: In [36], an end-to-end neural network consisting of an encoder and a decoder is proposed, which can directly and effectively restore a haze-free image. The encoder extracts features from the derived inputs, and the decoder evaluates the contribution of each input to the final dehazed output by applying the learned feature representations from the encoder.
Our proposed dark channel based RS image method is simple and effective for image pre-processing. Before removing the haze in a single input image, instead of fixed local patches [11,12], we estimate the most appropriate patch size, which is conducive to reducing the haze. In our method, we do not apply any learned filter, which makes our method simpler than deep learning based methods. Therefore, the computational cost of our approach is very low, and compared with the neural network method [36], the visual effect is good enough owing to colour correction.

FIGURE 5 Average runtime for different methods

Computation complexity
The calculation time of any method is an essential factor in determining its practicality. To verify the speed of our approach, the calculation time of single-image dehazing is recorded and compared with Ibrahim [35], the AGCWD method [26], the DCP technique [12], the CAP method [11], and the GFN method [36]. To ensure fairness of comparison, all programs are run in the MATLAB 2019b environment. It can be seen from Figure 5 that the CAP method has an excellent runtime compared to the other methods, but on dark satellite images overall, our approach has a lower average runtime than all the other methods. Recently proposed machine learning dehazing methods, owing to their large numbers of parameters and convolutional layers, need very high computational resources and considerable time for training [22,36].

CONCLUSION
A novel method based on the dark channel prior and histogram equalisation is proposed in this work. Firstly, DCP technology is applied after estimating the haze density in order to select a suitable patch size. Because haze density has a significant influence on the selection of the patch size, we calculate the most suitable patch size, in contrast to a fixed patch size, which cannot deliver the best result. Therefore, the average patch size of each image is estimated first, and then the DCP approach is applied. We used the histogram equalisation technique in this method in order to test the robustness in comparison with other methods. Finally, the colour correction technique, PWLT, is applied to the final output. Compared with other methods, the best colour originality is obtained. In terms of real-time performance, our approach is superior to Ibrahim [35], the AGCWD method [26], the DCP technique [12], the CAP method [11], and the GFN method [36]. The performance of the method is quantified using technical indexes such as the visible edge metrics (e and r̄), PSNR, and SSIM. The results show that the method gives good qualitative and quantitative results.