Article

Retinex-Based Fast Algorithm for Low-Light Image Enhancement

School of Mechanical Engineering, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(6), 746; https://doi.org/10.3390/e23060746
Submission received: 15 May 2021 / Revised: 7 June 2021 / Accepted: 8 June 2021 / Published: 13 June 2021
(This article belongs to the Special Issue Entropy in Image Analysis III)

Abstract
We propose a Retinex-based fast algorithm (RBFA) for low-light image enhancement that restores information hidden by low illuminance. The proposed algorithm consists of the following steps. First, we convert the low-light image from the RGB (red, green, blue) color space to the HSV (hue, saturation, value) color space and use a linear function to stretch the original gray level dynamic range of the V component. Then, we estimate the illumination image via adaptive gamma correction and use the Retinex model to enhance brightness. After that, we further stretch the gray level dynamic range to avoid low image contrast. Finally, we design another mapping function to correct color saturation and convert the enhanced image from the HSV color space back to the RGB color space, which yields the clear image. The experimental results show that images enhanced with the proposed method achieve better qualitative and quantitative evaluations at lower computational cost than other state-of-the-art methods.

1. Introduction

Images captured with a camera in weakly illuminated environments are often degraded: low contrast and low light reduce visibility, and object and detail information cannot be captured. This degradation reduces the performance of image-based analysis systems, such as computer vision systems, image processing systems and intelligent traffic analysis systems [1,2,3].
In order to address the above problems, a great number of low-light image enhancement methods have been proposed. Generally, the existing methods can be divided into three categories: HE-based (histogram equalization) algorithms, Retinex-based algorithms and non-linear transformations [4,5,6]. The HE-based algorithm is the simplest; its main idea is to adjust illuminance by equalizing the histogram of the input low-light image. To address the shortcomings of conventional HE algorithms, namely over-enhancement and loss of detail information, many improved HE-based methods have been proposed, such as contrast-limited adaptive histogram equalization (CLAHE), bi-histogram equalization with a plateau limit (BHE), exposure-based sub-image histogram equalization (ESIHE) and exposure-based multi-histogram equalization contrast enhancement for non-uniform illumination images (EMHE) [7,8,9,10,11,12]. However, HE-based methods neglect the noise hidden in the dark regions of low-light images. The Retinex model is a color perception model of human vision, which decomposes an image into illumination and reflectance [13,14]. Retinex-based algorithms aim to estimate the correct illumination image or reflectance image from the degraded image with different filters to enhance low brightness [15,16]. Classic algorithms include single-scale Retinex (SSR) and multi-scale Retinex (MSR). To address color distortion, multi-scale Retinex with color restoration (MSRCR) was proposed, which introduced color restoration into multi-scale Retinex. Later, improved algorithms introduced different filters to replace the traditional Gaussian filter, such as the improved Gaussian filter, improved guided filter and bright-pass filter [17,18,19]. Even though Retinex-based methods restore image texture details well, they introduce halo artifacts into enhanced images.
Common non-linear functions are gamma correction, the sigmoid transfer function and the logarithmic transfer function [20,21,22]; these methods are pixel-wise operations on natural low-light images. Compared with other non-linear functions, the gamma transfer function is widely used in image processing, but it has a limitation: if the parameter γ is too small, it amplifies the noise of the target image, whereas if γ is close to 1, satisfactory enhancement is not obtained. Therefore, estimating a suitable γ value is the key to obtaining satisfactory enhanced results.
In this paper, we utilize the gamma transfer function to estimate the illumination and achieve brightness enhancement via the Retinex model. The enhanced image achieves satisfactory light enhancement and global brightness equalization; thus, our method can restore more information than other methods. The final experimental results show that compared with other state-of-the-art methods, the enhanced images through our algorithm have better qualitative and quantitative evaluations. Some examples of natural low-light images and enhanced images with the proposed RBFA method are shown in Figure 1. All low-light images in Figure 1 were captured by the authors of this paper.
The rest of this paper is organized as follows: Section 2 describes the corresponding works of the proposed algorithm in this paper. In Section 3, the details of the proposed method are introduced. Section 4 presents the comparative experiment results with other state-of-the-art methods and describes the computational complexity comparison. The work is concluded in Section 5.

2. Related Work

We introduce the Retinex model, gamma correction and the HSV color space in this section; these form the basis of our method.

2.1. Retinex Model

The classical Retinex model assumes that the observed image consists of reflectance and illumination. The Retinex model can be expressed as follows [23].
$H = R \cdot L$ (1)
where $H$ is the observed image, and $R$ and $L$ represent the reflectance and the illumination of the image, respectively. The operator $\cdot$ denotes element-wise multiplication. In this paper, we apply a logarithmic transformation to reduce computational complexity, which gives the following expression.
$\log(H) = \log(R) + \log(L)$ (2)
Finally, we obtain Equation (3) to estimate the reflectance in the HSV color space.
$\log(R) = \log(V) - \log(L)$ (3)
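In the log domain the Retinex division reduces to a subtraction. As a minimal per-pixel sketch (the function name and epsilon guard are ours, not from the paper):

```python
import math

def reflectance(v, l, eps=1e-6):
    """Recover per-pixel reflectance from the observed value `v` and an
    estimated illumination `l` via log R = log V - log L (Eq. (3)).
    Inputs are assumed normalized to (0, 1]; `eps` guards against log(0)."""
    return math.exp(math.log(max(v, eps)) - math.log(max(l, eps)))
```

Exponentiating the log difference is equivalent to dividing $V$ by $L$; the log form is kept here to mirror the derivation.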

2.2. Gamma Correction

The gamma transfer function is widely used in the field of image processing, and the corresponding transfer function can be expressed as follows [24,25].
$g(x, y) = u(x, y)^{\gamma}$ (4)
where $g(x, y)$ denotes the gray level of the enhanced image at pixel location $(x, y)$, $u(x, y)$ is the gray level of the input low-light image at the same location, and $\gamma$ represents the parameter of the gamma transfer function. The shape of the transfer function is controlled by $\gamma$; the influence of different values of $\gamma$ is shown in Figure 2.
According to Figure 2, the enhanced gray level increases monotonically as $\gamma$ decreases; to obtain a higher gray level, $\gamma$ must fall within the range (0, 1). Conversely, the enhanced gray level decreases monotonically as $\gamma$ increases.
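The monotonic behavior described above can be checked with a tiny sketch (the helper name is ours, not from the paper):

```python
def gamma_correct(u, g):
    """Apply Eq. (4), g(x, y) = u(x, y)^gamma, to one normalized gray level.
    gamma < 1 raises dark levels; gamma > 1 suppresses them."""
    return u ** g

# A dark level of 0.25 is lifted by gamma = 0.5 and darkened by gamma = 2.
brightened = gamma_correct(0.25, 0.5)  # 0.5
darkened = gamma_correct(0.25, 2.0)    # 0.0625
```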

2.3. HSV Color Space

The HSV color space consists of a hue component (H), saturation component (S) and value component (V) [26,27]. The value component represents the brightness intensity of the image. The advantage of the HSV color space is that each component can be adjusted without affecting the others [28]; more specifically, transferring the input image from the RGB (red, green, blue) color space to the HSV color space eliminates the strong color correlation present in the RGB color space. Therefore, this work is based on the HSV color space [29]. Image enhancement in the RGB color space commonly must process all three components (R, G and B), whereas we only need to process the V component, which greatly reduces the image processing time.
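As an illustration of why working in HSV is convenient, the sketch below brightens a single RGB pixel by scaling only its V component, using Python's standard colorsys module; the scaling factor and function name are illustrative, not part of the proposed method:

```python
import colorsys

def brighten_pixel(r, g, b, factor=1.5):
    """Brighten one RGB pixel (channels in [0, 1]) by scaling only the V
    component in HSV space; hue and saturation are left untouched."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v = min(1.0, v * factor)  # clamp to the valid range
    return colorsys.hsv_to_rgb(h, s, v)
```

Because only V changes, the pixel's hue and saturation survive the round trip unchanged, which is exactly the property the method relies on.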

3. Our Approach

The details of the proposed algorithm are described in this section. Based on the descriptions in Section 2.3, we only adjust the V component to control the brightness of the low-light image; the flowchart of the proposed method is shown in Figure 3. We choose an image named “Arno” to illustrate the enhancement process of the proposed method; the stages of image enhancement and the corresponding histograms are shown in Figure 4.
In our method, we use gamma correction to estimate the illumination and the Retinex model to enhance brightness. Compared with using filters, estimating the illumination via gamma correction effectively reduces the computational time. The key to gamma correction is computing the value of the gamma parameter; the details of how the gamma parameter is determined are described as follows.

3.1. Brightness Enhancement

The gray levels of a low-light image are mainly concentrated in the low gray level area, and the dynamic range of these low gray levels is very narrow. As Figure 2 shows, the higher the gray level dynamic range of the input image, the higher the gray level dynamic range of the output image. Therefore, we use linear enhancement to stretch the gray level dynamic range before gamma correction, and we keep the stretched gray levels within the range (0, 1) to prevent over-enhancement. The linear function used in this paper can be expressed as follows.
$V_{max} = \max(V(x, y))$ (5)
$V_1(x, y) = \frac{1}{V_{max}} \cdot V(x, y)$ (6)
where $V_{max}$ denotes the maximum pixel value of the V component, $\max(\cdot)$ takes the maximum over all pixels, $V(x, y)$ is the pixel value of the original V component at location $(x, y)$, $V_1(x, y)$ is the stretched pixel value at that location and $\cdot$ represents multiplication.
The maximum value of a low-light image is usually lower than 1, so $1/V_{max} > 1$; this linear function therefore stretches the dynamic range of the low-light image, and we also obtain $V_1(x, y) \le 1$.
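A sketch of the stretch in Eqs. (5)-(6), operating on a flat list of normalized V values (the helper name is ours):

```python
def stretch(v_channel):
    """Divide every pixel of the V component by its maximum (Eqs. (5)-(6)),
    stretching the dynamic range so the brightest pixel reaches 1 while the
    relative ordering of levels is preserved."""
    v_max = max(v_channel)
    return [v / v_max for v in v_channel]
```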
After the gray level dynamic range is stretched, we adopt gamma correction to estimate the illumination. For a low-light image, the lower the brightness intensity, the lower the gray level, and we take this feature into consideration. First, based on the global histogram, we compute the mean gray level, which reflects the overall brightness level to a certain extent. The corresponding formula is expressed as Equation (7).
$m = \frac{\sum_{i=0}^{L} P(i) \cdot i}{\sum_{i=0}^{L} P(i)}$ (7)
where $m$ is the mean gray level, $L$ denotes the maximum gray level of the image and $P(i)$ is the histogram count at gray level $i$.
In this paper, we assume that the gray levels greater than zero and less than $m + 1$ are the extreme low gray levels. In fact, this part of the histogram is the key to determining the mean gray level of a low-light image. Based on the above, we design a formula that converts this part of the gray levels into a constant and use this constant to compute the gamma value. The corresponding transfer formula is expressed as Equation (8).
$c = \frac{\sum_{i=1}^{m} P(i) \cdot i}{128 \sum_{i=1}^{m} P(i)}$ (8)
where $c$ is the conversion result and is a positive number. Low-light images may have similar mean values, which leads to similar $c$ values. In order to enlarge the difference in $c$ values among different images, we use the following expression.
$c_1 = \frac{1}{1 + e^{-c}}$ (9)
where $c_1$ represents the enlarged $c$ value. In addition, the focus of brightness enhancement lies in the low gray level area rather than the high gray level area. Therefore, we take the distribution of the low gray levels as one of the important bases for estimating the gamma value, and we use the cumulative distribution function (CDF) to calculate it. In this paper, we consider gray levels below 128 to be the low gray level area.
$cdf(j) = \sum_{i=0}^{j} pdf(i)$ (10)
$pdf(i) = \frac{p(i)}{MN}$ (11)
where $p(i)$ is the number of pixels with gray level $i$, $M$ and $N$ are the length and width of the image and $j$ is the threshold point of the CDF, which we set to 128. We then combine the CDF value with the $c_1$ value in a weighted sum to obtain the gamma parameter.
$\gamma = w \cdot c_1 + (1 - w) \cdot cdf$ (12)
where $\gamma$ represents the gamma parameter and $w$ is the weight, set to 0.48. Combining Equations (4), (6) and (12), we obtain the final expression as follows.
$V_L(x, y) = V_1(x, y)^{w \cdot c_1 + (1 - w) \cdot cdf}$ (13)
where $V_L(x, y)$ denotes the illumination image at pixel location $(x, y)$. Combining Equations (3) and (13), we obtain the reflectance as follows.
$\log(R) = \log(V) - \log(V_L)$ (14)
We get the enhanced V component as follows:
$V_E = \exp(\log(V) - \log(V_L))$ (15)
The enhanced V component and corresponding histogram are shown in Figure 4c.
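The whole of Section 3.1 can be condensed into a short sketch: estimate γ from a 256-bin histogram of the stretched V component (Eqs. (7)-(12)), then apply Eqs. (13)-(15) per pixel. We assume gray levels are quantized to 0-255 for the histogram; function and variable names are ours, not the paper's:

```python
import math

def estimate_gamma(hist, total_pixels, w=0.48):
    """Estimate the gamma parameter from a 256-bin histogram `hist` of the
    stretched V component, following Eqs. (7)-(12)."""
    # Eq. (7): mean gray level m over the whole histogram
    m = int(sum(i * hist[i] for i in range(256)) / max(1, sum(hist)))
    # Eq. (8): collapse the extreme low levels 1..m into a constant c
    low_count = sum(hist[i] for i in range(1, m + 1))
    c = sum(i * hist[i] for i in range(1, m + 1)) / (128 * max(1, low_count))
    # Eq. (9): sigmoid to spread similar c values apart
    c1 = 1.0 / (1.0 + math.exp(-c))
    # Eqs. (10)-(11): CDF of the low gray level area (levels 0..128)
    cdf = sum(hist[i] for i in range(129)) / total_pixels
    # Eq. (12): weighted combination with w = 0.48
    return w * c1 + (1 - w) * cdf

def enhance_pixel(v1, gamma, eps=1e-6):
    """Eqs. (13)-(15): illumination V_L = V1^gamma, then the Retinex
    division V_E = exp(log V1 - log V_L) for one stretched pixel."""
    v1 = max(v1, eps)
    v_l = v1 ** gamma
    return math.exp(math.log(v1) - math.log(v_l))
```

Note that the division simplifies to $V_1^{1-\gamma}$; since γ typically falls in (0, 1) for a dark histogram, every stretched level is raised toward 1, which is the brightening effect shown in Figure 4c.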

3.2. Dynamic Range Expansion

After brightness enhancement, the pixel values tend to concentrate in the higher gray level range, which narrows the grayscale dynamic range and lowers the contrast of the enhanced image. We can adjust the contrast of the image by enlarging the gray level range of the V component [30,31]. To avoid pixel values concentrating in the higher gray level range, we use a piecewise function to further stretch the gray level dynamic range. The corresponding expression is as follows.
$V_E'(x, y) = \begin{cases} V_E(x, y), & V_E(x, y) \ge 0.5 \\ 2(V_E(x, y))^2, & V_E(x, y) < 0.5 \end{cases}$ (16)
The dynamic range enlarged V component and corresponding histogram are shown in Figure 4d.
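A direct transcription of the piecewise expansion in Eq. (16) (the helper name is ours):

```python
def expand_range(v_e):
    """Eq. (16): values at or above 0.5 are kept; lower values are pushed
    further down via 2*v^2, widening the post-enhancement dynamic range."""
    return v_e if v_e >= 0.5 else 2.0 * v_e * v_e
```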

3.3. Saturation Adjustment

In addition to brightness, color saturation directly affects the visual experience. In the HSV color space, the mean values of the S component and V component of a clear image should be approximately equal [32,33]. However, as the brightness is adjusted, the mean value of the V component changes greatly, which affects the image color. Based on the mean difference between the V component and the S component, Equation (20) is designed to adjust the S component. The details are as follows. First, we use Equation (17) to compute the mean difference between the V component and the S component.
$V_{ES} = V_{Emean} - S_{mean}$ (17)
where $V_{ES}$ is the mean difference, $V_{Emean}$ is the mean value of the enhanced V component and $S_{mean}$ is the mean value of the S component. $V_{Emean}$ is computed as shown below.
$V_{Emean} = \frac{\sum_{i} V_E(i) \cdot i}{MN}$ (18)
where $i$ denotes the gray level, $V_E(i)$ is the number of pixels with gray level $i$, and $M$ and $N$ are the length and width of the image. Similarly, Equation (19) computes $S_{mean}$.
$S_{mean} = \frac{\sum_{i} S(i) \cdot i}{MN}$ (19)
where $S(i)$ is the number of pixels with gray level $i$. We adjust the S component to reduce the mean difference between the $V_E'$ component and the S component, thereby achieving color saturation adjustment. After $V_{ES}$ is obtained, we use it to adjust the S component. According to Section 2.2, if we want to enlarge the value of the S component, the gamma parameter must lie in the range (0, 1); conversely, a parameter value greater than 1 reduces the value of the S component. Therefore, we use Equation (20) to achieve this step.
$S_1(x, y) = S(x, y)^{1 + (-1)^{2-n}(|V_{ES}|^2 + |V_{ES}|)}, \quad n = \begin{cases} 0, & V_{ES} < 0 \\ 1, & V_{ES} \ge 0 \end{cases}$ (20)
where $S_1(x, y)$ denotes the adjusted S component at pixel location $(x, y)$ and $S(x, y)$ is the original S component at that location. From Equation (17), if $V_{ES} < 0$ then $V_{Emean} < S_{mean}$, so we need to reduce the value of the S component; from Equation (20), $n = 0$ and the exponent $1 + (-1)^{2-n}(|V_{ES}|^2 + |V_{ES}|) > 1$, hence $S_1(x, y) < S(x, y)$. Similarly, when $V_{ES} > 0$, we get $S_1(x, y) > S(x, y)$. The original S component and corresponding histogram are shown in Figure 4e, and the adjusted S component and corresponding histogram are shown in Figure 4f.
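The saturation correction can be sketched as follows; the sign convention mirrors the behavior described above (an exponent above 1 shrinks S when the mean difference is negative, an exponent below 1 grows S otherwise), and the helper name and clamp are ours:

```python
def adjust_saturation(s, v_es):
    """Eq. (20)-style correction: raise the S value to a power driven by
    the mean difference v_es (Eq. (17)). v_es < 0 gives an exponent > 1,
    shrinking S; v_es >= 0 gives an exponent < 1, growing S."""
    shift = abs(v_es) ** 2 + abs(v_es)
    exponent = 1 + shift if v_es < 0 else 1 - shift
    return s ** max(exponent, 1e-6)  # keep the exponent positive
```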

4. Comparative Experiment and Discussion

This section describes the comparative experiments with existing methods and the experimental results. The comparative methods used are the LECARM algorithm [34], FFM algorithm [7], LIME algorithm [17], AFEM algorithm [1], JIEP algorithm [15] and SDD algorithm [35]. All comparative experiments are performed in MATLAB R2020b on a PC running Windows 10 with an Intel (R) Core (TM) i7-10875H CPU @ 2.30 GHz and 16 GB of RAM. Due to the length limitation of this paper, we use 10 images to illustrate the comparative results; the reference images are shown in Figure 5. All test images and reference images come from the public MEF dataset [36], which includes 24 low-light images in total.

4.1. Computational Time Comparison

We test the time consumed for different algorithms to process different size images, and the test results are shown in Table 1.
In Table 1, the shortest times are highlighted in bold and the second-shortest times are underlined. Table 1 shows that the proposed method takes the shortest time to process each image owing to its low computational complexity, whereas the FFM and SDD methods consume the longest times. JIEP's time consumption is higher than that of AFEM and lower than that of FFM. The time consumption of AFEM, LECARM and LIME is similar because they share the same computational complexity. Overall, the proposed RBFA algorithm consumes the least time on average and processes images the fastest.
We plotted the data in Table 1 as a line chart to analyze the computational cost of the different methods, as shown in Figure 6. Figure 6 shows that the computational complexity of the proposed RBFA method is O(N), the lowest among all the methods, while SDD's complexity is the highest. The computational complexity of the SDD method is O(N²), which is why it spends the most time on image processing. The computational complexity of both the FFM and JIEP methods is O(NlogN), but the time increment of FFM is higher than that of JIEP. The complexity of AFEM, LECARM, LIME and the proposed RBFA method is O(N); although their complexity class is the same, the time increment of the proposed algorithm is the smallest for the same amount of data, showing that it has the lowest computational cost.

4.2. Visual Comparison

Although the LECARM method preserves the original hue and saturation and raises the brightness of each image, it easily produces non-uniform global light, which degrades the visual experience; for example, the middle areas of Figure 7, Figure 8, Figure 9 and Figure 10 have higher illumination than other areas. The SDD results show regions that become blurred, such as the middle areas of Figure 10 and Figure 11. The performance of FFM is unstable, because some of its results show inadequate brightness enhancement, for instance, the whole of Figure 12 and Figure 13. The JIEP method focuses on moderate underexposure and does not perform well in extremely low illuminance regions, such as the middle region of Figure 8 and the bottom region of Figure 14. The LIME results clearly show uneven brightness and over-enhancement in some areas, such as the lower middle area of Figure 10, the middle areas of Figure 14 and Figure 15 and the wall in Figure 16. The AFEM results are not satisfactory because the brightness increment is too small to restore the details hidden in dark regions, such as the bottom areas of Figure 14 and Figure 16. In contrast, the proposed RBFA method achieves global brightness balance after enhancement, and the retained color is more natural than with the other methods.

4.3. Objective Assessment

Because human eyes often miss details when observing a picture, we choose one no-reference image quality metric (the perception-based image quality evaluator, PIQE), three full-reference metrics (mean-squared error (MSE), structural similarity (SSIM) and peak signal-to-noise ratio (PSNR)) and the lightness order error (LOE) to measure the quality of the enhanced images. The results of the different quality metrics are shown in Table 2; these values are averages. The best scores are highlighted in bold, and the second-best scores are underlined.
Table 2 shows that the proposed method obtained the best score four times and the second-best score once. The PIQE values of the different methods fall within the range 38.601 to 51.457, meaning that the quality of all enhanced images is similar, and the images enhanced with the proposed method obtained the best score. The smaller the LOE value, the more natural the enhancement effect; the LOE value of the proposed method is the best, indicating effective naturalness preservation. MSE is the average of the squared differences between the reference image and the enhanced image; the smaller the value, the higher the similarity between the two. The result of the proposed method is only 4.59 away from the best result, obtained by LIME. SSIM assesses the visual impact of three characteristics of an image: luminance, contrast and structure; the larger the SSIM value, the higher the image quality, and the image enhanced by the proposed method preserves the highest similarity to the reference image. The PSNR value of the proposed method is also the highest, which shows that our method is effective for low-light image enhancement. Overall, the image quality achieved by the proposed method is better than that of the other comparative methods.
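For reference, the two distortion metrics described above can be sketched in a few lines, operating on flat lists of 8-bit gray levels (the helper names are ours, not a library API):

```python
import math

def mse(ref, out):
    """Mean-squared error between two equal-size images given as flat
    lists of 8-bit gray levels; smaller means more similar."""
    return sum((a - b) ** 2 for a, b in zip(ref, out)) / len(ref)

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the enhanced image
    is closer to the reference."""
    e = mse(ref, out)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)
```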

5. Conclusions

We proposed a Retinex-based fast enhancement method in this paper. The method addresses uneven brightness and greatly improves the brightness of low-light areas while remaining computationally efficient. Combining the results of the comparative experiments, the computational complexity comparison and the quality assessment, the proposed RBFA algorithm performs better than other state-of-the-art methods. In other words, the proposed RBFA method is a simple and efficient low-light image enhancement algorithm.

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, W.L.; validation, L.H.; formal analysis, W.D.; investigation, L.H.; resources, Y.L.; data curation, L.H.; writing—original draft preparation, S.L.; writing—review and editing, S.L.; visualization, W.L.; supervision, W.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Science and Technology Department of Sichuan Province, grant number 2020JDRC0026.

Acknowledgments

This work was supported by Science and Technology Department of Sichuan Province, People’s Republic of China.

Conflicts of Interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the manuscript entitled ‘Retinex-Based Fast Algorithm for Low-Light Image Enhancement’.

References

  1. Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96. [Google Scholar] [CrossRef]
  2. Wang, Y.F.; Liu, H.M.; Fu, Z.W. Low-Light Image Enhancement via the Absorption Light Scattering Model. (in English). IEEE Trans. Image Process. 2019, 28, 5679–5690. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, W.; Wu, X.; Yuan, X.; Gao, Z. An Experiment-Based Review of Low-Light Image Enhancement Methods. IEEE Access. 2020, 8, 87884–87917. [Google Scholar] [CrossRef]
  4. Bora, D.J.; Bania, R.K.; Che, N. A Local Type-2 Fuzzy Set Based Technique for the Stain Image Enhancement. Ing. Solidar. 2019, 15, 1–22. [Google Scholar] [CrossRef]
  5. Yun, H.J.; Wu, Z.Y.; Wang, G.J.; Tong, G.; Yang, H. A Novel Enhancement Algorithm Combined with Improved Fuzzy Set Theory for Low Illumination Images. Math. Probl. Eng. 2016, 2016, 1–9. [Google Scholar] [CrossRef]
  6. Rahman, S.; Rahman, M.M.; Abdullah-Al-Wadud, M.; Al-Quaderi, G.D.; Shoyaib, M. An adaptive gamma correction for image enhancement. Eurasip. J. Image Vide 2016, 2016, 35–48. [Google Scholar] [CrossRef] [Green Version]
  7. Dai, Q.; Pu, Y.F.; Rahman, Z.; Aamir, M. Fractional-Order Fusion Model for Low-Light Image Enhancement. Symmetry 2019, 11, 574. [Google Scholar] [CrossRef] [Green Version]
  8. Reddy, E.; Reddy, R. Dynamic Clipped Histogram Equalization Technique for Enhancing Low Contrast Images. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2019, 89, 673–698. [Google Scholar] [CrossRef]
  9. Ooi, C.H.; Kong, N.S.P.; Ibrahim, H. Bi-Histogram Equalization with a Plateau Limit for Digital Image Enhancement. IEEE T Consum. Electr. 2009, 55, 2072–2080. [Google Scholar] [CrossRef]
  10. Singh, K.; Kapoor, R. Image enhancement using Exposure based Sub Image Histogram Equalization. Pattern Recogn Lett. 2014, 36, 10–14. [Google Scholar] [CrossRef]
  11. Tan, S.F.; Isa, N.A.M. Exposure Based Multi-Histogram Equalization Contrast Enhancement for Non-Uniform Illumination Images. IEEE Access 2019, 7, 70842–70861. [Google Scholar] [CrossRef]
  12. Zuiderveld, K. Contrast limited adaptive histogram equalization. Graph. Gems Iv 1994, 474–485. [Google Scholar] [CrossRef]
  13. Li, M.D.; Liu, J.Y.; Yang, W.H.; Sun, X.Y.; Guo, Z.M. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE T Image Process 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
  14. Zhou, Z.Y.; Feng, Z.; Liu, J.L.; Hao, S.J. Single-image low-light enhancement via generating and fusing multiple sources. Neural Comput. Appl. 2020, 32, 6455–6465. [Google Scholar] [CrossRef]
  15. Cai, B.; Xu, X.; Guo, K.; Jia, K.; Hu, B.; Tao, D. A Joint Intrinsic-Extrinsic Prior Model for Retinex. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4020–4029. [Google Scholar]
  16. Li, Z.; Xiaochen, H.; Jiafeng, L.; Jing, Z.; Xiaoguang, L. A Naturalness-Preserved Low-Light Enhancement Algorithm for Intelligent Analysis. Chin. J. Electron. 2019, 28, 316–324. [Google Scholar]
  17. Guo, X.J.; Li, Y.; Ling, H.B. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE T Image Process 2017, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
  18. Kim, W.; Lee, R.; Park, M.; Lee, S.H. Low-Light Image Enhancement Based on Maximal Diffusion Values. IEEE Access 2019, 7, 129150–129163. [Google Scholar] [CrossRef]
  19. Wang, W.C.; Chen, Z.X.; Yuan, X.H.; Wu, X.J. Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 2019, 496, 25–41. [Google Scholar] [CrossRef]
  20. Chang, Y.; Jung, C.; Ke, P.; Song, H.; Hwang, J. Automatic Contrast-Limited Adaptive Histogram Equalization With Dual Gamma Correction. IEEE Access 2018, 6, 11782–11792. [Google Scholar] [CrossRef]
  21. Srinivas, K.; Bhandari, A.K. Low light image enhancement with adaptive sigmoid transfer function. IET Image Process 2020, 14, 668–678. [Google Scholar] [CrossRef]
  22. Kansal, S.; Tripathi, R.K. Adaptive gamma correction for contrast enhancement of remote sensing images. Multimed Tools Appl 2019, 78, 25241–25258. [Google Scholar] [CrossRef]
  23. Al-Hashim, M.A.; Al-Ameen, Z. Retinex-based multiphase algorithm for low-light image enhancement. Traitement Du Signal 2020, 37, 733–743. [Google Scholar] [CrossRef]
  24. Ashiba, M.I.; Tolba, M.S.; El-Fishawy, A.S.; El-Samie, F.E.A. Gamma correction enhancement of infrared night vision images using histogram processing. Multimed Tools Appl 2019, 78, 27771–27783. [Google Scholar] [CrossRef]
  25. Kallel, F.; Ben Hamida, A. A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement. IEEE Trans Nanobioscience 2017, 16, 666–675. [Google Scholar] [CrossRef]
  26. Chandrasekharan, R.; Sasikumar, M. Fuzzy Transform for Contrast Enhancement of Nonuniform Illumination Images. IEEE Signal Proc Let 2018, 25, 813–817. [Google Scholar] [CrossRef]
  27. Li, Z.; Jia, Z.; Yang, J.; Kasabov, N. Low Illumination Video Image Enhancement. IEEE Photonics J. 2020, 12, 1–13. [Google Scholar] [CrossRef]
  28. Dhal, K.G.; Ray, S.; Das, S.; Biswas, A.; Ghosh, S. Hue-Preserving and Gamut Problem-Free Histopathology Image Enhancement. Iran. J. Sci. Technol. Trans. Electr. Eng. 2019, 43, 645–672. [Google Scholar] [CrossRef]
  29. Lyu, W.J.; Lu, W.; Ma, M. No-reference quality metric for contrast-distorted image based on gradient domain and HSV space. J. Vis. Commun. Image Represent. 2020, 69, 102797–102806. [Google Scholar] [CrossRef]
  30. Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Image enhancement based on intuitionistic fuzzy sets theory. Iet Image Process 2016, 10, 701–709. [Google Scholar] [CrossRef] [Green Version]
  31. Wang, Z.; Wang, K.; Liu, Z.; Zeng, Z. Study on Denoising and Enhancement Method in SAR Image based on Wavelet Packet and Fuzzy Set. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; pp. 1541–1544. [Google Scholar]
  32. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE T Image Process 2015, 24, 3522–3533. [Google Scholar] [CrossRef] [Green Version]
  33. Gupta, R.; Khari, M.; Gupta, V.; Verdu, E.; Wu, X.; Herrera-Viedma, E.; Crespo, R.G. Fast Single Image Haze Removal Method for Inhomogeneous Environment Using Variable Scattering Coefficient. Cmes-Comput. Modeling Eng. Sci. 2020, 123, 1175–1192. [Google Scholar] [CrossRef]
  34. Ren, Y.; Ying, Z.; Li, T.H.; Li, G. LECARM: Low-Light Image Enhancement Using the Camera Response Model. IEEE T Circ Syst Vid 2019, 29, 968–981. [Google Scholar] [CrossRef]
  35. Hao, S.; Han, X.; Guo, Y.; Xu, X.; Wang, M. Low-Light Image Enhancement with Semi-Decoupled Decomposition. IEEE T Multimed. 2020. [Google Scholar] [CrossRef]
  36. Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-Exposure Image Fusion by Optimizing A Structural Similarity Index. IEEE Trans. Comput. Imaging 2018, 4, 60–72. [Google Scholar] [CrossRef]
Figure 1. Top row (ac): natural low-light images, bottom row (df): enhanced images with our proposed RBFA method.
Figure 2. The shapes of gamma functions with different γ values.
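The curves in Figure 2 follow the standard power-law (gamma) mapping V_out = V_in^γ on intensities normalized to [0, 1]: γ < 1 lifts dark values and brightens the image, γ > 1 darkens it, and γ = 1 is the identity. A minimal sketch of this family of curves (the function name and sampled γ values are illustrative, not taken from the paper's code):

```python
import numpy as np

def gamma_correct(v, gamma):
    """Apply a power-law (gamma) mapping to intensities normalized to [0, 1]."""
    return np.power(v, gamma)

v = np.linspace(0.0, 1.0, 5)           # sample intensities
for g in (0.4, 1.0, 2.5):              # gamma < 1 brightens, gamma > 1 darkens
    print(g, np.round(gamma_correct(v, g), 3))
```

Note that the paper's adaptive gamma correction estimates the exponent from the image content rather than using a fixed γ; the sketch above only reproduces the fixed-γ curves plotted in the figure.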
Figure 3. The flowchart of the proposed method.
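The flowchart builds on the classic Retinex model, which treats the observed image S as the product of reflectance R and illumination L (S = R · L), so brightness enhancement amounts to dividing out the estimated illumination. A minimal sketch of that division step, assuming intensities in [0, 1] (the `eps` guard and names are our own, not the paper's implementation):

```python
import numpy as np

def retinex_reflectance(v, illum, eps=1e-6):
    """Recover reflectance R = S / L from the Retinex model S = R * L.

    `v` is the observed V channel and `illum` the estimated illumination,
    both normalized to [0, 1]; `eps` avoids division by zero in dark regions.
    """
    return np.clip(v / (illum + eps), 0.0, 1.0)
```

In the proposed pipeline the illumination map comes from the adaptive gamma correction step; any other smooth illumination estimate could be plugged into the same division.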
Figure 4. Low-light image enhancement process and corresponding grayscale histograms.
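The widening histograms in Figure 4 reflect the linear stretch applied to the V component's gray-level dynamic range. A minimal sketch of such a stretch, assuming intensities normalized to [0, 1] (the function is illustrative, not the paper's exact mapping):

```python
import numpy as np

def stretch(v, low=None, high=None):
    """Linearly map intensities so [low, high] fills the full [0, 1] range.

    By default the input's own min/max define the stretch endpoints;
    values outside [low, high] are clipped.
    """
    low = v.min() if low is None else low
    high = v.max() if high is None else high
    return np.clip((v - low) / (high - low), 0.0, 1.0)
```

For example, a low-contrast channel occupying only [0.2, 0.6] is remapped to span the full range, which is what spreads the histogram across the gray-level axis.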
Figure 5. Reference images (a–j).
Figure 6. Result of computational complexity comparison.
Figure 7. Comparing enhanced results of Arno with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 8. Comparing the enhanced results of Room with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 9. Comparing enhanced results of Farmhouse with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 10. Comparing enhanced results of House with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with the proposed RBFA method.
Figure 11. Comparing enhanced results of Cru with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 12. Comparing enhanced results of Office with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 13. Comparing enhanced results of Door with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 14. Comparing enhanced results of Capitol with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 15. Comparing enhanced results of Venice with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Figure 16. Comparing enhanced results of Venice with different methods. (a) Input image, (b) enhanced with LECARM, (c) enhanced with FFM, (d) enhanced with LIME, (e) enhanced with AFEM, (f) enhanced with JIEP, (g) enhanced with SDD, (h) enhanced with proposed RBFA method.
Table 1. Time cost of different methods.
| Method / Image Size | 100 × 100 | 700 × 700 | 1300 × 1300 | 1900 × 1900 | 2500 × 2500 | 3100 × 3100 | 3700 × 3700 | 4300 × 4300 |
|---|---|---|---|---|---|---|---|---|
| LECARM | 0.151 | 0.396 | 0.707 | 1.234 | 1.934 | 2.823 | 3.951 | 5.398 |
| AFEM | 0.048 | 0.204 | 0.566 | 1.136 | 2.014 | 3.075 | 4.674 | 5.959 |
| LIME | 0.030 | 0.124 | 0.394 | 0.825 | 1.437 | 2.203 | 3.209 | 4.363 |
| FFM | 0.182 | 5.043 | 17.071 | 36.819 | 65.744 | 95.577 | 142.183 | 197.190 |
| SDD | 0.222 | 8.882 | 34.930 | 79.808 | 139.754 | 209.301 | 345.587 | 526.162 |
| JIEP | 0.079 | 3.565 | 13.332 | 29.159 | 45.297 | 55.327 | 82.677 | 120.519 |
| Proposed | 0.013 | 0.071 | 0.249 | 0.519 | 0.909 | 1.419 | 2.076 | 2.804 |
Table 2. Results of image quality measure metrics with different methods.
| Metric | LECARM | AFEM | FFM | JIEP | LIME | SDD | Proposed |
|---|---|---|---|---|---|---|---|
| PIQE | 39.818 | 39.809 | 42.884 | 40.072 | 42.705 | 51.457 | 38.601 |
| LOE | 415.594 | 253.646 | 291.906 | 296.568 | 749.862 | 493.806 | 7.660 |
| MSE | 3777.2175 | 2021.305 | 2823.849 | 2241.768 | 1153.584 | 1617.479 | 1158.174 |
| SSIM | 0.531 | 0.747 | 0.709 | 0.732 | 0.739 | 0.751 | 0.753 |
| PSNR | 12.504 | 16.350 | 14.464 | 15.847 | 18.136 | 17.511 | 18.258 |
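The MSE and PSNR rows of Table 2 are related by the standard definition PSNR = 10 · log10(peak² / MSE), with peak = 255 for 8-bit images. A minimal sketch of both metrics (helper names are ours, not from the paper):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two 8-bit images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

ref = np.full((4, 4), 100, dtype=np.uint8)
test = np.full((4, 4), 110, dtype=np.uint8)   # uniform error of 10 -> MSE = 100
print(mse(ref, test), round(psnr(ref, test), 3))
```

Because the tabulated values are averaged over multiple test images (averaging MSE and PSNR separately), the MSE and PSNR rows need not satisfy the identity exactly when checked against each other.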
Liu, S.; Long, W.; He, L.; Li, Y.; Ding, W. Retinex-Based Fast Algorithm for Low-Light Image Enhancement. Entropy 2021, 23, 746. https://doi.org/10.3390/e23060746
