A Novel Image Dehazing Algorithm via Adaptive Gamma-Correction and Modified AMEF

Captured outdoor images are often degraded by fog or haze, and image dehazing remains challenging in practice. This paper proposes a modified artificial multiple-exposure image fusion (AMEF) algorithm to remove haze from an image. In the algorithm, first, an adaptive gamma-correction transform based on the mean and standard deviation of each component of a hazy image is utilized to adjust the intensities. Second, the homomorphic filtering algorithm is introduced into the Gaussian pyramid and Laplacian pyramid to compute the artificially exposed images. Last, a modified Laplacian filter method is presented to calculate the contrast of the exposed images. Extensive experimental results demonstrate that the proposed algorithm has superior performance compared with some state-of-the-art methods, yielding higher contrast, richer details and a better visual effect in the dehazed image.


I. INTRODUCTION
The haze or fog phenomenon frequently reduces visibility in outdoor environments and blurs acquired outdoor images. A key effect is the attenuation of the scene radiance along its path towards the camera. Therefore, the colour quality degrades, the contrast of the image is reduced, and the visibility of distant areas deteriorates, especially in hazy, foggy or cloudy weather. These conditions can create many problems, such as decreased image quality, which affects the experiences of people who use a camera to take pictures. On the other hand, in the real world, images are broadly employed in image segmentation, image enhancement, vehicle license plate recognition, trajectory detection, etc.
[1]-[3]. However, hazy images are frequently obtained, especially in hazy, foggy, or cloudy weather. Thus, obtaining a haze-free image is an important issue in theory and practice. As a result, improving the visual quality of poor-quality images has attracted progressively more attention in recent years. Image dehazing techniques can have a constructive influence on the tasks of some optical instruments, for example, reconnaissance [4], remote sensing [5], [6], or self-driving in foggy weather [7].
The associate editor coordinating the review of this manuscript and approving it for publication was Larbi Boubchir.
The first physical model for improving the quality of a foggy image was introduced by Koschmieder [8]:

I(x) = J(x)t(x) + A(1 - t(x)),    (1)

where I(x) = (I_R(x), I_G(x), I_B(x)) denotes the foggy or hazy image; J(x) represents the fog- or haze-free image; A denotes a constant colour vector that describes the atmospheric light; and t(x) is the transmission map, a convex combination parameter that depends on the depth of the scene at each pixel x. The second term of equation (1) can be considered the airlight. Further, He et al. [9] proposed the dark channel prior (DCP), which can be expressed as

J_dark(x) = min_{y in Omega(x)} ( min_{c in {R,G,B}} J_c(y) ) ≈ 0,    (2)

where Omega(x) is a local patch centred at x. Equation (2) states that, in a haze-free image, at least one low-intensity pixel exists in some colour channel of the neighbourhood around every pixel.
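To make the dark channel prior of equation (2) concrete, a minimal sketch of its computation follows. This is an illustration, not the authors' implementation; the patch size of 15 is a commonly used value in the DCP literature and is an assumption here.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image (H x W x 3, values in [0, 1]).

    For each pixel, take the minimum over the colour channels, then the
    minimum over a patch x patch neighbourhood, as in equation (2).
    The patch size default is an illustrative assumption.
    """
    min_rgb = image.min(axis=2)          # per-pixel minimum over R, G, B
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

For a haze-free outdoor image, the DCP observation is that this map is close to zero almost everywhere; haze raises its values, which is what the prior exploits.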
Related works can be summarized as four types: augmenting contrast methods, dark channel prior (DCP) methods, machine learning methods, and image enhancement methods.
Reducing fog or haze is one of the main issues in image processing. Narasimhan and Nayar [10] investigated methods to restore the contrast of images affected by poor weather conditions. Since the scattering of natural light is governed by the atmospheric environment, Schechner et al. [11] investigated a partial-polarization approach to reduce the haze in an image. Both studies depended on peripheral sources of information. Nevertheless, it is generally difficult to obtain such outside information; thus, the application of this kind of method is restricted. To overcome this deficiency, single-image dehazing methods were proposed. These approaches presume no outside information about the scene depicted in the image. Such methods are vulnerable to spatial variance, since haze is a depth-related phenomenon. In this circumstance, methods resorting to physical models of haze formation were utilized to overcome the fault. He et al. [9] and Zhu et al. [12] employed prior knowledge of an image to augment the contrast or restore the weakened colours of foggy images. A linear transformation model was introduced by Wang et al. [13] to remove haze from an image, in which a linear relation between the foggy image and the fog-free image is assumed. Lee et al. [14] reviewed the related works on the dark channel prior (DCP). On the other hand, the localized maximal contrast of low-intensity pixels was exploited by Tan [15]. Further, Fattal [16] and Berman et al. [17] presupposed that the colour pixels obey a specific distribution in the RGB colour space. In addition, a colour ellipsoid geometry based on statistical results was employed to compute the transmission map, as introduced by Bui and Kim [18]. More research on the DCP is provided in reference [19].
VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
However, the validity of the presupposition is still requisite, and the result may be very poor if the presupposition does not hold.
Therefore, machine learning methods based on the Koschmieder model have been proposed to address foggy or hazy images. In these approaches, t(x) is estimated by a learning method. Tang et al. [20] introduced a learning framework to obtain the best feature combination for image dehazing. Multi-scale convolutional neural networks were utilized to learn the corresponding transmission maps; these networks were further utilized to dehaze images, as presented by Ren et al. [21]. Cai et al. [22] proposed a trainable system, named DehazeNet, to obtain the transmission map, which was based on medium transmission prediction. A new nonlinear activation function was embraced in the system and utilized to ameliorate a hazy image. Proximal Dehaze-Net [23], a deep architecture that incorporates a haze imaging model, the dark channel, and transmission priors, was proposed for dehazing an image. In some conditions, however, it is difficult to obtain a high-quality dehazing effect. A binocular image dehazing network (BidNet), which was proposed by Pang et al. [24], was based on a deep learning framework, and the left and right images of a binocular pair were utilized in the BidNet model.
In addition, image enhancement methods have been investigated by many researchers, although image degradation is not explicitly modelled. The main idea of these methods is to ameliorate the image quality; specifically, the contrast and details are enhanced. In this class, equation (1) is useful for comprehending the connection between a hazy image and the corresponding haze-free image, which is a prerequisite. However, it is not employed to predict t(x) or A for inverting the model and obtaining a high-quality dehazed image J(x). Rather, these methods can be understood as spatially variant contrast enhancement processes. The Retinex approach, frequency-domain enhancement, and histogram equalization (HE) methods are three main approaches for enhancing an image. Retinex approaches were proposed by Jobson et al. [25] and Rahman et al. [26]. These approaches can be summarized as two types: single-scale Retinex (SSR) [25] and multi-scale Retinex (MSR) [26]. To ameliorate the low contrast of images, a new approach based on SSR was studied by Al-Ameen and Sulong [27]. Based on SSR, Alharbi et al. [28] presented a new single-image dehazing approach. Since some images with uneven illumination are susceptible to a local halo, the SSR and CLAHE methods were combined to solve the problem, as proposed by Wu and Tan [29]. Xie et al. [30] introduced a new approach, in which the dark channel prior and MSR are combined, to improve a hazy image. The MSR with colour restoration method, introduced by Wang et al. [31] and Wang et al. [32], was utilized to address hazy-weather images. The experimental results demonstrated that the quality of a hazy-weather image is improved noticeably and the details are maintained sufficiently. In addition, based on the halo-reduced dark channel prior, a new dehazing method for foggy image enhancement (HRDIE) was proposed by Shi et al. [33].
For additional research on Retinex theory, please refer to Liu et al. [34], Mei et al. [35], Zhe et al. [36], Zhang et al. [37], and Li et al. [38]. In addition, to enhance an image, a model-assisted multiband fusion (MAMF) method is proposed by Cho et al. [39]. To address the nighttime single foggy image, a pixel-wise alpha blending (PWAB) approach is proposed by Yu et al. [40].
In this study, we propose an image dehazing method that builds on a series of adaptive gamma-correction operations. In the past two decades, adaptive gamma correction has been broadly investigated for enhancing images [41]-[44]. Huang et al. [45] presented an adaptive amendment to process the transmission function t(x) before inverting model (1). Further, the idea of Laplacian-based amelioration was introduced by Huang et al. [46].
In this paper, a novel adaptive gamma correction, which is based on the mean and standard deviation of each component (R, G, and B) of a colour image, is employed in the gamma function. The values of the gamma-correction parameters do not need to be presupposed, and the feature information of the hazy image is taken into account. To further improve the quality of the hazy image, a modified Laplacian pyramid, into which the homomorphic filtering algorithm is introduced, is presented in this study. A new equation is presented to calculate the contrast of the exposed images. In the equation, a new filter that is based on the Laplacian filter is employed, so that more detailed information can be preserved.
The main work of this paper can be summarized as follows:
• First, an adaptive approach is proposed to obtain the values of the gamma-correction parameters, in which the mean and standard deviation of each component (R, G, and B) of the colour image are employed. The characteristics of the hazy image are considered, and the parameter values do not need to be presupposed.
• Second, homomorphic filtering is introduced into the Laplacian pyramid in this study. Compared with the method of Galdran [43], a new filter is utilized in the Laplacian pyramid, and the experimental results demonstrate that the quality of the hazy image is evidently improved.
• Last, a new method for computing the weights is introduced in this paper. In the new method, a novel filter that is based on the Laplacian filter is introduced; that is, more pixels are considered when computing the weights in the filter. Therefore, the new filter can preserve more detailed information of an image.
The remainder of this paper is arranged as follows: The related works of this study are introduced in Section 2. The new method and the main innovations of this study are discussed in Section 3. In Section 4, light and heavy hazy images are utilized to test our method, and the experimental results and discussion are presented. The conclusions of this investigation are discussed in Section 5.

II. RELATED WORKS OF THIS STUDY
Since the work of this study is based on the artificial multi-exposure fusion (AMEF) technique proposed by Galdran [46], the main merits of AMEF and other related works are introduced in this section.
First, the gamma correction is expressed as

I'(x) = α I(x)^γ,    (3)

where α and γ are the parameters of the gamma correction. In general, an image contains more than one dark area, and the distinctions there are perceptually more conspicuous than the distinctions in highly intense regions. On the other hand, gamma correction can prevent the bright luminance regions from changing conspicuously when dehazing an image. Adaptive methods for calculating the parameters α and γ have been investigated in past decades. Huang et al. [40] proposed adaptive gamma correction based on the probability distribution of the luminance pixels of an image. To realize adaptability, an iterative approach was introduced by Rahman et al. [41], and another adaptive gamma-correction method was constructed on top of it. In the same year, an adaptive gamma correction based on a cumulative histogram was presented by Huang et al. [43]. These adaptive gamma corrections are based on global image transformations. On the other hand, Xiong and Pulli [44] proposed a method combining gamma and linear corrections to improve a hazy image. Thus, gamma correction is also utilized in our study. Here, previous approaches for predicting the α and γ values are compared, and a simple adaptive method based on the mean and standard deviation of each component (R, G, and B) of the colour image is presented.
Second, the Laplacian pyramid and Gaussian pyramid mappings were explored by some scholars, such as Wang and Chang [48], Wahyuni and Sabre [49] and Singh et al. [50]. However, as depicted by Wahyuni and Sabre [49], the Laplacian pyramid mapping can reduce the contrast of an image. The merit of homomorphic filtering is that it is a nonlinear method. Thus, homomorphic filtering is employed in the Laplacian and Gaussian pyramid mappings.
Since the Laplacian filter [47] considers only the second derivatives with respect to the vertical and horizontal variables, the mixed second derivative is not considered. Therefore, we propose a modified Laplacian filter in which the mixed second derivatives are considered. Thus, more information about an image can be obtained.
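To illustrate the difference, the following sketch contrasts the classical 4-neighbour Laplacian with an 8-neighbour variant that includes the diagonal (mixed-derivative) terms. The 8-neighbour kernel is one common way to realise such a modified filter and is offered here as an assumption, not as the paper's exact kernel.

```python
import numpy as np

# Classical 4-neighbour Laplacian: only the pure second derivatives.
LAPLACIAN_4 = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=float)

# 8-neighbour variant: diagonal terms included, so the mixed second
# derivatives also contribute to the response (illustrative choice).
LAPLACIAN_8 = np.array([[1,  1, 1],
                        [1, -8, 1],
                        [1,  1, 1]], dtype=float)

def convolve2d(img, kernel):
    """Plain 'same'-size 2-D convolution with edge replication."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = kernel[::-1, ::-1]                 # flip for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * k).sum()
    return out
```

Both kernels respond with zero on flat regions; the 8-neighbour kernel additionally reacts to diagonal structure, which is the extra information the modified filter aims to capture.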

III. MODIFIED ARTIFICIAL MULTI-EXPOSURE FOR IMAGE DEHAZING
In our study, three main improvements are proposed based on the original artificial multi-exposure fusion (AMEF) technique proposed by Galdran [47]: (1) the mean and standard deviation of a hazy image are considered in the gamma correction; (2) homomorphic filtering is inserted into the Laplacian pyramid and Gaussian pyramid, which are employed to enhance the contrast of the image; (3) a modified Laplacian filter is proposed to calculate the contrast, in which the mixed second derivative is considered.
The intent of this research is also to build on an image enhancement technique with a spatially varying method. The main idea is to remove the influence of the haze described by the transmission map and airlight in equation (1). The input hazy image I(x) is considered, and the pixels are normalized to [0, 1]. According to equation (1), t(x) is a convex combining function. Thus, t(x) ∈ [0, 1] for all x, and J(x) ≤ I(x) for every x.
Based on this conclusion, let E(x) = {I_1(x), I_2(x), · · · , I_N(x)} be a set of under-exposed sequences generated from the original hazy image I(x). Reduced intensity is always produced in the under-exposed versions of the hazy image I(x). In addition, if the global exposure of an image is incomplete, some feature information may still be contained in the low-intensity regions. Thus, an uncomplicated and effectual multi-exposure image fusion (MEF) approach is employed to fuse the images in E(x).
The presented dehazing framework is displayed in Figure 1. First, a sequence of adaptive gamma corrections is employed to extract a collection of under-exposed multi-exposure image sequences from a hazy image. Second, the Gaussian pyramid and Laplacian pyramid, which embrace the homomorphic filtering algorithm, are utilized to address the exposed images. Third, the images are amalgamated with the new filter, which is based on the Laplacian filter, to measure the contrast and saturation of the images. Last, a dehazed image is obtained. To better comprehend the image dehazing process in this study, the main differences from other works are depicted as follows: In the second step, gamma correction, whose parameters are calculated by equation (4), is utilized to prevent the bright areas from varying noticeably. Further, in the third step, the Laplacian pyramid and Gaussian pyramid with the homomorphic filter are utilized to address the image that has been transformed by the new gamma correction. Last, a modified Laplacian filter is employed to compute the contrast.
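The generation of the under-exposed sequence E(x) can be sketched as repeated gamma darkening of the hazy input. This is a minimal illustration; the list of exponents is an assumption for demonstration, not the adaptive values derived later from equation (4).

```python
import numpy as np

def exposure_sequence(image, gammas=(1.0, 1.5, 2.2, 3.0, 4.0)):
    """Generate artificially under-exposed copies of a hazy image.

    image: array with values in [0, 1]. Raising to a power gamma > 1
    darkens the image (consistent with J(x) <= I(x)), mimicking shorter
    exposures. The exponent list is illustrative, not the paper's values.
    """
    return [np.clip(image, 0.0, 1.0) ** g for g in gammas]
```

Each member of the returned list is no brighter than the input, so low-intensity scene details hidden by the haze are probed at several artificial exposure levels before fusion.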

A. ARTIFICIAL EXPOSURE MODIFICATION VIA ADAPTIVE GAMMA CORRECTION MAPPING
The global image intensity is modified by gamma correction (equation (3)), in which the parameters α and γ are real positive constants. Compared with high-intensity regions, low-intensity regions are noticeably changed by gamma correction. To overcome this defect, wider quantization intervals of the gamma correction are adopted in high-intensity regions, which keeps the luminance from changing conspicuously. Conversely, in the dark regions, narrow intervals are considered, which preserves the details of the dark areas. More details are described in Galdran [47]. If γ > 1, higher intensities are spread over a broad range after conversion, while lower intensities are compressed into a compact range. On the other hand, if γ < 1, the results are the converse. As described in Section 2, the parameters of the gamma correction can be obtained automatically by some methods. In most former studies, the parameters α and γ were obtained by learning methods, decomposition methods, or other methods. In our study, we propose to obtain the values of α and γ from the mean and standard deviation of I_n(x) (n = 1, 2, · · · , N). The gamma correction is also divided into bright regions and dark regions. The parameters α and γ are computed by equation (4):

α = std(I(x)) / mean(I(x)),
γ = mean(I(x)) / std(I(x)).    (4)

It is obvious that the two parameters are reciprocals of each other: if γ is large, then α is small, and if γ is small, then α is large. The advantage of equation (4) is thus clear for each component.
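The adaptive parameters of equation (4) and their use in equation (3) can be sketched as follows. The small eps guard against division by zero is our addition for numerical safety, not part of the paper's formulation.

```python
import numpy as np

def adaptive_gamma_params(channel, eps=1e-6):
    """Adaptive gamma-correction parameters per equation (4).

    alpha = std / mean and gamma = mean / std, computed per colour
    channel; eps (our addition) guards against division by zero.
    """
    m = float(channel.mean())
    s = float(channel.std())
    return s / (m + eps), m / (s + eps)

def adaptive_gamma_correct(channel, eps=1e-6):
    """Apply I' = alpha * I**gamma with the adaptive parameters."""
    alpha, gamma = adaptive_gamma_params(channel, eps)
    return alpha * np.clip(channel, 0.0, 1.0) ** gamma
```

Because the two parameters are reciprocals (up to eps), a large γ is automatically paired with a small α, which is exactly the compensation the surrounding text describes.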
To prevent an extensively broad range after conversion when γ > 1, α plays the role of compressing the intervals. The other situations can be analysed in the same way. Further, to assess the contrast of the image after gamma correction, equation (5) is utilized to compute the contrast of a prearranged area ℵ of an image I(x), which is defined as

C_ℵ(I) = (I_max^ℵ - I_min^ℵ) / (I_max^ℵ + I_min^ℵ),    (5)

where I_max^ℵ = max{I(x) | x ∈ ℵ} and I_min^ℵ = min{I(x) | x ∈ ℵ}. Equation (5) was proposed by Galdran [47] and Zheng et al. [51].
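A sketch of this patch-contrast measure follows, under the assumption that equation (5) takes the classical Michelson form built from the patch maximum and minimum (the two quantities the text defines); eps is our addition to avoid division by zero on black patches.

```python
import numpy as np

def patch_contrast(patch, eps=1e-6):
    """Michelson-style contrast of a patch: (max - min) / (max + min).

    Assumes equation (5) is the classical contrast built from the patch
    maximum and minimum; eps (our addition) avoids division by zero.
    """
    p_max = float(patch.max())
    p_min = float(patch.min())
    return (p_max - p_min) / (p_max + p_min + eps)
```

A flat patch scores 0, while a patch spanning the full dynamic range scores close to 1, so the measure directly reflects how much local contrast the gamma correction has retained.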

B. ARTIFICIAL MULTI-EXPOSURE IMAGE FUSION WITH MODIFIED LAPLACIAN FILTER
The first study on MEF was introduced by Burt [52], and MEF has since been scrutinized extensively. The main idea of most studies can be summarized as a single structure, in which the goal is to find the optimal weights W_k(x) in the following equation:

J(x) = Σ_{k=1}^{K} W_k(x) E_k(x),    (6)

where K indicates the number of differently exposed images E_k(x), J(x) denotes the fused image, and Σ_{k=1}^{K} W_k(x) = 1. Let G_k(x) and U_k(x) be the contrast and the saturation at each pixel x. We rewrite equation (6) as follows:

J(x) = Σ_{k=1}^{K} [ G_k(x)U_k(x) / Σ_{j=1}^{K} G_j(x)U_j(x) ] E_k(x).    (7)

How to compute G_k(x), U_k(x) and E_k(x) is presented below. To avoid blending artefacts, and because some details of an image are lost when the typical Laplacian pyramid is employed to reduce the haze of a hazy image, a Gaussian pyramid with a Gaussian kernel was utilized by Galdran [47], which can be expressed as

P_k^{l+1}(x) = ds[ P_k^l(x) ],    (8)

where ds[•] is an operator that embraces an image filter and halves the original dimension at each iteration. The main processes of the image filter are proposed as

F(u, v) = FFTSHIFT( FFT( I(x) - mean(I(x)) ) ),    (9)

where mean(•) is the average value function, ''FFT'' is the fast Fourier transform operator, and ''FFTSHIFT'' is a mapping that is utilized to shift the zero-frequency component to the centre of the spectrum.
The filter is defined as

H(p, q) = (H - L) [ 1 - exp( -C · D^2(p, q) / D_0^2 ) ] + L,    (10)

where H, L, D_0, and C are the filter parameters, and D(p, q) is the distance from the pixel at location (p, q) to the centre of the frequency plane.
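The frequency-domain application of such a high-frequency-emphasis filter can be sketched as below. The parameter values (high, low, d0, c) are illustrative assumptions, not the paper's settings, and the filter form follows the standard homomorphic-style Gaussian emphasis consistent with the parameters H, L, D_0 and C named above.

```python
import numpy as np

def homomorphic_filter(img, high=1.5, low=0.5, d0=10.0, c=1.0):
    """Frequency-domain high-frequency-emphasis (homomorphic-style) filter.

    H(p, q) = (high - low) * (1 - exp(-c * D^2 / d0^2)) + low, where D is
    the distance from the centred zero frequency. Parameter defaults are
    illustrative, not the paper's values.
    """
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2          # D(p, q)^2 on the grid
    h = (high - low) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + low
    spec = np.fft.fftshift(np.fft.fft2(img))        # centre zero frequency
    out = np.fft.ifft2(np.fft.ifftshift(h * spec))
    return np.real(out)
```

Because the gain rises from `low` at zero frequency to `high` at high frequencies, low-frequency illumination is attenuated while edges and fine detail are emphasised, which is the contrast-preserving behaviour sought inside the pyramids.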
Here, (•)^{-1} is the inverse transform operator of the mapping ''•''. To restore the original dimension of the image, the Laplacian pyramid is employed to handle the image in a similar way:

L_k^l(x) = P_k^l(x) - up[ P_k^{l+1}(x) ],    (11)

where up[•] is an operator that embraces a filter operator and doubles the dimension; the filter operator is the same as previously mentioned. Thus, how to compute E_k(x) is solved.
G_k(x) and U_k(x) are expressed here. The loss of contrast and saturation occurs because haze or fog exists in an image; thus, how to compensate the contrast and saturation has been broadly scrutinized. The dark channel prior (DCP), introduced by He et al. [9], is a typical method. However, some filter methods can also be used in the same way, such as the guided filter and the Laplacian filter. The Laplacian filter was utilized by Galdran [47]; four adjacent pixels located above, below, left and right are considered in that method, and the contrast and saturation are computed patch-wise over the image. In this study, a new simple approach that is similar to the methods presented by Mertens et al. [53] and Galdran [47] is proposed, in which more pixels are embraced in the new filter. Let

L_m = [ 1 1 1; 1 -8 1; 1 1 1 ];    (12)

then, the contrast G_k(x) and the saturation U_k(x) are calculated as follows:

G_k(x) = Σ_{y ∈ Ω(x)} | (L_m * E_k)(y) |,    (13)

U_k(x) = sqrt( (1/3) [ (E_k^R(x) - µ_k^R)^2 + (E_k^G(x) - µ_k^G)^2 + (E_k^B(x) - µ_k^B)^2 ] ),    (14)

where x, y ∈ Ω(x), Ω(x) is a small covering that is centred at pixel x, ''*'' denotes convolution, and µ_k^R, µ_k^G and µ_k^B are the average values of the R, G and B components, respectively, of E_k(x). Substituting equations (12), (13) and (14) into equation (7) yields a haze-free image.
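The weighting and fusion step can be sketched end to end as follows. This is an illustration under assumptions: the contrast term uses an 8-neighbour Laplacian as a stand-in for the modified filter, and the saturation term is the per-pixel standard deviation across the R, G, B channels in the spirit of Mertens et al.; neither is claimed to be the paper's exact formula.

```python
import numpy as np

def fuse_exposures(images, eps=1e-12):
    """Weighted fusion of exposure images (each H x W x 3 in [0, 1]).

    Per-pixel weights combine a contrast term (absolute response of an
    8-neighbour Laplacian on the grey image, a stand-in for the modified
    filter) with a saturation term (std across R, G, B, as in Mertens
    et al.). Weights are normalised to sum to one at every pixel.
    """
    kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)

    def laplacian_abs(grey):
        padded = np.pad(grey, 1, mode="edge")
        out = np.zeros_like(grey)
        for i in range(grey.shape[0]):
            for j in range(grey.shape[1]):
                out[i, j] = abs((padded[i:i + 3, j:j + 3] * kernel).sum())
        return out

    weights = []
    for img in images:
        contrast = laplacian_abs(img.mean(axis=2))   # grey-level contrast
        saturation = img.std(axis=2)                 # channel spread
        weights.append(contrast * saturation + eps)  # eps: our tie-breaker
    total = np.sum(weights, axis=0)
    fused = np.zeros_like(images[0])
    for img, w in zip(images, weights):
        fused += (w / total)[:, :, None] * img
    return fused
```

On flat inputs all weights tie and the fusion reduces to a plain average; where one exposure shows more local contrast and colour saturation, it dominates the blend at that pixel.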

IV. EXPERIMENTAL RESULTS AND DISCUSSION
In this section, the evaluation of the proposed method is performed in two ways: subjective and objective. The test images include images subjected to poor illumination conditions and non-uniform illumination. Figure 2 shows some examples of foggy/hazy images. All the experiments are performed in MATLAB R2016a on a PC with a 2.4 GHz CPU and 4 GB of RAM.
To assess the effectiveness of the proposed method, some state-of-the-art techniques are utilized for comparison: CLAHE [27], DCP [9], MAMF [39], AMEF [47], HRDIE [33], PWAB [40], ESIHR [22] and PLDN [23]. The assessment is performed in subjective and objective ways. For a fair comparison, the parameters of the selected methods are set according to the suggestions of the respective authors.

A. SUBJECTIVE EVALUATION
To subjectively evaluate the performance in this section, the test pictures are named image#1-16 and contain both light and heavy haze images. The light haze images are selected from the RTTS, HSTS and SOTS [54] public datasets, and the heavy haze images are chosen from the O-HAZE, I-HAZE and DENSE-HAZE [55]-[57] public datasets.
'Image#1' and its defogging versions achieved by different algorithms are displayed in Figure 3. A certain defogging effect is obtained by CLAHE (Figure 3 (b)), but the image details are seriously lost. The whole DCP-ed and MAMF-ed images look too dark, and some colour information is lost in the output images, as shown in Figure 3 (c) and (d). The image processed by AMEF obtains a certain improvement in defogging, but the foreground colour of the processed image is bright (Figure 3 (e)). HRDIE makes the image overenhanced and causes a loss of detail in the output image, as shown in Figure 3 (f). PWAB can obtain reasonable defogging results, but the colour of some areas deviates from the background, such as the building area, as indicated in Figure 3 (g). Some areas of the ESIHR-ed and PLDN-ed images still look dark, such as the tree area on the left side, as shown in Figure 3 (h) and (i). The output image of the proposed method exhibits a natural look, as indicated in Figure 3 (j).
Figure 4 shows 'image#2' and its defogging results achieved by various methods. The image processed by CLAHE is dim and loses detail, especially in the cloud area. DCP, MAMF, PLDN and ESIHR can achieve certain defogging results, but the whole images are slightly dark. The pavilion region of the AMEF-ed image is dim, as displayed in Figure 4 (e). HRDIE makes the contrast of the output image overenhanced, as displayed in Figure 4 (f). PWAB can effectively defog, but some areas look dim, such as the regions of the pavilion and tree, as indicated in Figure 4 (g). The proposed method can obtain a clear image, and the background colour of the image is closer to that of the original image, as indicated in Figure 4 (j).
Some defogging results for 'image#3' with different methods are shown in Figure 5. The background cannot be seen clearly in the original image. CLAHE and HRDIE have some effect on image defogging, but the information in some areas has been destroyed, such as the cloud area in Figure 5 (b) and the tree region in Figure 5 (f). DCP leads to a dark foreground, as indicated in Figure 5 (c). As displayed in Figure 5 (d, e and g), the MAMF, AMEF and PWAB methods create an effective improvement, but the tree areas on the left are unnatural compared with the proposed algorithm. In addition, the whole image processed by MAMF is slightly dark. The area without sunlight in the ESIHR-ed image is too dim, as displayed in Figure 5 (h), and the same phenomenon exists in the PLDN-ed image. The proposed algorithm yields excellent visual quality, as shown in Figure 5 (j).
The processed results of the different enhancement methods are shown in Figure 6. As shown in Figure 6 (a), 'image#4' is foggy with non-uniform illumination. The CLAHE-ed and DCP-ed images exhibit under-enhancement; thus, the pictures look too dark in some regions, as displayed in Figure 6 (b) and (c). AMEF can obtain reasonable defogging results, but some areas still look dim, as indicated in Figure 6 (e). The MAMF-ed and HRDIE-ed images exhibit colour distortion and a loss of details. By observing Figure 6 (g) and (j), it is clear that the PWAB and proposed methods have relatively acceptable defogging effects. However, the stone area looks too bright and unnatural in the PWAB-ed image compared with our proposed algorithm. The image processed by ESIHR obtains limited improvement in defogging, as shown in Figure 6 (h). As indicated in Figure 6 (i), the background colour deviates from the original colour in the PLDN-ed image. The suggested algorithm has the best naturalness and visibility in the output image, as indicated in Figure 6 (j).
'Image#5' and its defogging results obtained by different methods are indicated in Figure 7. As shown in Figure 7 (b-e), the floor colour distortions are serious, and the bottoms of the tables are dark. The HRDIE-ed image exhibits overenhancement, and the whole image looks too bright, as indicated in Figure 7 (f). The desk area is unnatural in the PWAB-ed image compared with the proposed method. Floor colour distortions are also serious in the ESIHR-ed and PLDN-ed images. The image enhanced by the proposed method obtains a satisfactory visual effect, as displayed in Figure 7 (j).
'Image#6' and its contrast-enhanced images achieved by different techniques are exhibited in Figure 8. As indicated in Figure 8 (b) and (f), CLAHE and HRDIE have a certain defogging effect, but detail information is seriously lost and the output images look unnatural. DCP and ESIHR can obtain a certain improvement in defogging, but some regions still look dark, such as the top of the corridor, as shown in Figure 8 (c) and (h). MAMF and AMEF can effectively defog the images, but unfortunately, the output images look slightly dark. The foreground is too bright in the HRDIE-ed image, as displayed in Figure 8 (f). PWAB (Figure 8 (g)) can obtain reasonable defogging results; unfortunately, some areas are still dark. The proposed method has the best visual effect compared with the other methods.
Some results for 'image#7' with the investigated methods are given in Figure 9. The image processed by CLAHE exhibits a loss of image information, as shown in Figure 9 (b). From Figure 9 (c), (d) and (f), we observe that the background colour deviates from the original colour in the DCP-ed, MAMF-ed and HRDIE-ed images. The AMEF method can achieve relatively satisfactory results, but the shadow of the rocks has low brightness, as indicated in Figure 9 (e). Limited improvement in defogging is obtained by the ESIHR method, as displayed in Figure 9 (h). PLDN causes the colour information to deviate from that of the original image. The results in Figure 9 show that the proposed method is excellent.
'Image#8' and its defogging versions achieved by different algorithms are displayed in Figure 10. The CLAHE and DCP methods cause serious distortion in the background colour of the images. The MAMF-ed and AMEF-ed images have a dark background, as displayed in Figure 10 (d) and (e). As indicated in Figure 10 (f), the whole HRDIE-ed image is bright, and the processed image looks unnatural. The foreground colour of the PWAB-ed image is bright. ESIHR and PLDN can defog the images; unfortunately, the processed images look slightly dark. As displayed in Figure 10 (j), the image processed by our method exhibits the best defogging effect.
'Image#9' and its defogging results obtained by different algorithms are indicated in Figure 11. CLAHE has poor defogging results in the image background area, and the loss of image colour information is severe, as shown in Figure 11 (b). As indicated in Figure 11 (c), (d) and (h), the DCP-ed, MAMF-ed and ESIHR-ed images can obtain certain defogging results, but the processed images still look dim. The AMEF method makes some regions of the output image dark, such as the tree area. HRDIE can obtain satisfactory defogging results compared with most methods, but the whole image is too bright. The background distortion of the PWAB-ed image is serious, and some regions look dark. The whole PLDN-ed image looks slightly dark, as indicated in Figure 11 (i). The proposed method effectively defogs the original image and avoids colour distortion, which makes the image more natural.
The processing results from the different methods for 'image#10' are shown in Figure 12. The CLAHE, HRDIE and PWAB methods make the processed image exhibit a loss of colour information. The DCP-ed and ESIHR-ed images are still foggy. The images processed by MAMF and PLDN look too dim. The AMEF-ed image obtains a certain defogging result. The proposed method has the best visibility.
Figure 13 displays the processed results for 'image#11'. Among these results, we discover that the result of the proposed method has the best visual quality; the images of the other compared methods are dark or lose colour information.
In Figure 14, which shows the results for 'image#12', CLAHE makes the image exhibit a serious loss of details. The DCP-ed, MAMF-ed and PLDN-ed images look dim. The ESIHR method enables only limited improvement in defogging. The proposed algorithm has a relatively acceptable defogging effect, and the output image looks clearer than those of the other methods.
The processed results for 'image#13' are indicated in Figure 15. Compared with all the other methods, the result of the proposed method has the best visual quality; the images of the other methods are dark or lose colour information.
'Image#14' and its defogging versions achieved by different methods are indicated in Figure 16. CLAHE causes serious distortion in the background colour of the image. The DCP-ed and AMEF-ed images have a dark background, as displayed in Figure 16 (c) and (e). As indicated in Figure 16 (f), the whole HRDIE-ed image is bright, and the output image looks unnatural. PWAB can defog the image; unfortunately, the processed image exhibits overenhancement. ESIHR obtains limited results in fog removal. PLDN can defog the image; unfortunately, the processed image exhibits a loss of colour information. The output image of the proposed method appears more natural than those of the other algorithms, as indicated in Figure 16 (j).
'Image #15' and its defogging results processed by different methods are displayed in Figure 17. The loss of image colour information is severe in the CLAHE-ed image, as shown in Figure 17. Several of the remaining methods achieve certain defogging results, but the processed images look slightly dim. The MAMF and PWAB methods make some regions of the output image exhibit a loss of detail information. HRDIE obtains a certain defogging result compared with most of the methods, but some regions are still dim. As indicated in Figure 17 (j), the image processed by our algorithm exhibits the best defogging effect. Figure 18 displays the processed results of 'image #16'. We discover that the proposed method obtains the best visual quality compared with all the other methods; with the other methods, the processed image has a limited defogging effect or suffers a serious loss of image details.

B. OBJECTIVE EVALUATION
Further, to evaluate the enhancement performance objectively, PSNR [58], RMS [59], MSSIM [60], and FSIM [61] are employed to compare the proposed algorithm with the state-of-the-art algorithms mentioned in Section I.
PSNR is generally utilized to evaluate the quality of the output image relative to the input image and can measure the degree of image contrast enhancement. A large PSNR value denotes that the processed image has the least degradation compared with the original image [58].
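The PSNR computation described above can be sketched as follows (a minimal NumPy implementation of the standard definition over the mean squared error, assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-size images.

    A larger value means the processed image deviates less from the
    reference, matching the interpretation used in the text.
    """
    reference = np.asarray(reference, dtype=np.float64)
    processed = np.asarray(processed, dtype=np.float64)
    mse = np.mean((reference - processed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For colour images, the MSE is typically averaged over all three channels before taking the logarithm, which this sketch already does since `np.mean` flattens the array.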
Some results are given in Table 1 for PSNR with the investigated algorithms, as previously mentioned. The proposed algorithm outperforms the compared methods in PSNR, as shown in Table 1.
RMS is commonly employed to measure the contrast of an image, in which a higher value represents better visibility of the processed image [59]. Some results are indicated in Table 2 for the RMS with the investigated methods, as previously mentioned. As indicated in Table 2, the PWAB method obtains the best results for images #3 and #9, while DCP has the best result for image #7. The proposed algorithm performs best on all the other test images, so the best overall enhancement performance is obtained by the proposed method.
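The RMS contrast used above is conventionally defined as the standard deviation of the pixel intensities; a minimal sketch, assuming a single-channel (grayscale) image:

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: the standard deviation of the intensities.

    A flat image has zero contrast; larger spread around the mean
    intensity yields a larger value, i.e. better visibility.
    """
    image = np.asarray(image, dtype=np.float64)
    return float(np.sqrt(np.mean((image - image.mean()) ** 2)))
```

For a colour image, the metric is usually computed on a grayscale conversion or averaged per channel; the paper does not specify which variant it uses, so that choice is left to the reader.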
MSSIM is the average value of the structural similarity (SSIM) index [60], which is an important index for measuring image quality. Some results are given in Table 3 for MSSIM with the investigated algorithms, as previously mentioned. The proposed method outperforms the compared methods in MSSIM, as shown in Table 3.
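The mean-SSIM computation can be sketched as follows. This is a simplification of the metric in Wang et al. [60]: it averages SSIM over non-overlapping uniform blocks instead of the Gaussian-weighted sliding windows of the original, and the block size of 8 is an illustrative choice, not a value from the paper:

```python
import numpy as np

def mssim(x, y, block=8, peak=255.0, k1=0.01, k2=0.03):
    """Mean SSIM over non-overlapping blocks (uniform windows)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2   # stabilizing constants
    scores = []
    h, w = x.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = x[i:i + block, j:j + block]
            b = y[i:i + block, j:j + block]
            mu_a, mu_b = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            # Luminance, contrast and structure terms combined:
            ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
                   ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
            scores.append(ssim)
    return float(np.mean(scores))
```

Identical images score exactly 1; any structural difference pulls the score below 1.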
Feature SIMilarity (FSIM) is an image quality assessment metric based on low-level features [61]. It builds on the notion that the human visual system understands an image mainly according to low-level features (such as step edges and zero-crossing edges). The FSIM algorithm uses two kinds of features, phase congruency and gradient magnitude; the steps of the algorithm are detailed in [61].
Some results are given in Table 4 for FSIM with the investigated methods, as previously mentioned. We determine from Table 4 that the proposed algorithm yields a larger FSIM value in most cases and performs best compared with the other methods.
Compared with the other algorithms, the performance of the proposed method on most of the nine test images is satisfactory. Therefore, in addition to the nine test images, the four objective evaluation functions (i.e., PSNR [58], RMS [59], MSSIM [60], and FSIM [61]) are applied to 500 test images selected from public datasets to further verify the performance and capability of the proposed method. Figure 19 shows the average values of the quantitative analyses for the 500 test images and indicates that the proposed algorithm has outstanding performance compared with the other state-of-the-art algorithms. On average, the images processed by the proposed method contain the highest amount of information. Our method has the highest RMS value, with an average of 69.25 over the 500 test images, which means that it maintains the richness and detail information in the output images. The proposed algorithm also outperforms all the other methods with the largest MSSIM value, which means that the processed images have a natural appearance with minimal artefacts compared with those of the other algorithms.
The 500 test images are further employed to test the execution time. All the experiments are carried out in MATLAB R2016a on a PC with a 2.4 GHz CPU and 4 GB of RAM. The test results of the different algorithms are shown in Table 5. The average processing time of the proposed method is shorter than those of most of the compared methods.
In summary, in view of the experimental results, the performance of the proposed method is acceptable.

V. CONCLUSION
In this paper, a modified AMEF image dehazing algorithm is proposed to restore hazy/foggy images. In the algorithm, first, an adaptive gamma transformation is applied to each component (R, G and B) of the colour image based on the mean and standard deviation values of each component; that is, the global characteristics of the hazy image are considered in our method, so different foggy images yield different parameter values in the gamma correction. Compared with the traditional gamma transformation, the parameters of the proposed gamma transformation do not need to be preset, and the adaptability of the gamma correction is stronger. Second, to further improve the quality of a hazy image, homomorphic filtering is introduced into the Laplacian pyramid in our study. The experimental results demonstrate that the modified Laplacian pyramid improves a hazy image better than the reference method of Galdran [47]. Last, a modified Laplacian filter is utilized to compute the contrast of the hazy image; because more pixels are considered in computing the weights of the new filter, it preserves more details of an image. The modified filter is based on the Laplacian filter. In the experiment section, lightly and heavily hazed images are utilized to test our method, and the subjective and objective evaluations of the images are displayed in the figures and tables, respectively. According to the experimental results, we conclude that our method achieves an excellent defogging effect on both light and heavy fog images, and the average running time is also acceptable.
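The per-channel adaptive gamma correction summarized above can be sketched as follows. Note that the paper's exact mapping from the channel mean and standard deviation to the gamma value is not reproduced here; the widely used heuristic gamma = log(0.5)/log(mean) serves only as a placeholder that likewise requires no preset parameter:

```python
import numpy as np

def adaptive_gamma(channel):
    """Gamma-correct one colour channel using its own statistics.

    Placeholder mapping (NOT the paper's formula): choose gamma so the
    channel's mean intensity is driven toward mid-gray, brightening
    dark (hazy) channels automatically without any preset gamma.
    """
    c = np.asarray(channel, dtype=np.float64) / 255.0
    m = np.clip(c.mean(), 1e-6, 1.0 - 1e-6)   # guard log(0) and log(1)
    gamma = np.log(0.5) / np.log(m)           # dark channel -> gamma < 1
    return (c ** gamma) * 255.0

def correct_rgb(image):
    """Apply the adaptive correction independently to R, G and B."""
    return np.stack([adaptive_gamma(image[..., k]) for k in range(3)],
                    axis=-1)
```

Because each channel derives its own gamma from its own statistics, different foggy images automatically receive different correction strengths, which is the behaviour the conclusion describes.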
As a future research direction, the modified Laplacian filtering can be applied to other image processing tasks. Further, a multi-block pyramid representation of an image can be investigated.