Retinal image enhancement based on color dominance of image

Real-time fundus images captured to detect multiple diseases are prone to quality issues such as poor illumination and noise, which reduce the visibility of anomalies. Enhancing retinal fundus images is therefore essential for a better prediction rate of eye diseases. In this paper, we propose a Lab color space-based enhancement technique for retinal images. Existing research works do not consider the relation between the color spaces of the fundus image when selecting a specific channel for retinal image enhancement. Our unique contribution is utilizing the color dominance of an image, quantified by the distribution of information in the blue channel, to perform enhancement in Lab space, followed by a series of steps to optimize overall brightness and contrast. The test set of the Retinal Fundus Multi-disease Image Dataset is used to evaluate the performance of the proposed enhancement technique in identifying the presence or absence of retinal abnormality. The proposed technique achieved an accuracy of 89.53 percent.

According to the World Health Organization's 2019 World Report on Vision, of the estimated 2.2 billion visually impaired people worldwide, 1 billion cases could have been treated or prevented 1 . The initial non-invasive procedure in a routine eye clinical setting is to capture retinal fundus images and analyze them for anomalies. Examination of the eye is also an early indicator of other conditions such as hypertension, diabetes, and cardiovascular disease [2][3][4] . Thus, screening and examination of the eye, along with proper treatment, help prevent vision loss and protect against the risk of other diseases.
Typical quality issues in fundus images are due to noise, illumination, contrast, and sharpened regions within the image. Ophthalmologists need to view the features of retinal images clearly to suggest appropriate treatment. Improper lighting may produce dark or bright images, reducing the visibility of anomalies 5 . To overcome illumination or contrast issues in captured retinal images and aid the visibility of anomalies, image enhancement is an essential step in the image pre-processing stage 6 . Existing enhancement algorithms follow one of three approaches: histogram-based, transformation-based, and filter-based 7 . Among histogram-based approaches, Contrast Limited Adaptive Histogram Equalization (CLAHE) is found to be effective [8][9][10] . Enhancement techniques are mainly applied in one of three ways: 1. converting the color image to grayscale and enhancing the grayscale image; 2. splitting the color space into channels (e.g., splitting BGR (Blue, Green and Red) into blue, green, and red channels), enhancing an individual channel, and merging the enhanced channel back; 3. performing enhancement directly on the color space 11,12 . A.W. Setiawan et al. chose the green channel from the RGB (Red, Green and Blue) color model and applied CLAHE 5 . Alwazzan et al. applied a Wiener filter followed by CLAHE on the green channel and merged it with the red and blue channels of the RGB color model 13 . Jin et al. converted the input image from RGB to Lab color space (the L and ab components represent lightness and chromaticity, respectively) and applied CLAHE on normalized individual components of the Lab (also called CIELAB, defined by the International Commission on Illumination (CIE)) color model 14 . The Related works section discusses further research in this domain.
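As a concrete illustration of the second method, the sketch below splits an RGB image into channels, equalizes the green channel, and merges the result back. This is a minimal NumPy sketch: plain global histogram equalization stands in for CLAHE (which additionally tiles the image and clips the histogram before equalizing), and the helper names are our own.

```python
import numpy as np

def equalize_channel(channel: np.ndarray) -> np.ndarray:
    """Global histogram equalization of one uint8 channel
    (a simplified stand-in for CLAHE, which adds tiling and clipping)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each grey level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[channel]

def enhance_green(rgb: np.ndarray) -> np.ndarray:
    """Method 2 from the text: split channels, enhance one, merge back.
    Assumes an H x W x 3 uint8 array with the green channel at index 1."""
    out = rgb.copy()
    out[:, :, 1] = equalize_channel(rgb[:, :, 1])
    return out
```

In a real pipeline the equalization call would be replaced by a CLAHE implementation (e.g. OpenCV's `cv2.createCLAHE`), but the split/enhance/merge structure is the same.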
Most existing retinal enhancement methods focus on improving contrast by choosing the green channel or the luminosity channel. The color information of a retinal image varies with the retinal diseases a patient suffers from, along with other image quality issues. Choosing only the green channel can lose information available in the optic disc and about anomalies like drusen and cotton wool spots. So, it is important to select the color channel that displays more artifacts, depending on the color dominance of the retinal image, when performing enhancement.
The unique contribution of this paper is utilizing the relation between color spaces to identify color dominance in a retinal image. The RGB and Lab color spaces are chosen for selecting the channel to be enhanced for efficient enhancement of fundus images. The variance of the blue channel is calculated to identify the color dominance of the selected retinal image, and accordingly either the a* or the b* channel of Lab color space is chosen to enhance its artifacts. Existing research works consider primarily the L channel of Lab space to enhance the contrast in fundus images; the other two channels are generally less explored for image enhancement. In this research, however, the information in the a* and b* channels is utilized to enhance a retinal image dataset covering multiple diseases. Instead of considering only the overall average value of performance metrics, this paper analyses the performance metrics for the various retinal disease categories individually, to understand the suitability of the proposed image enhancement for multiple fundus diseases with different anomalies. The rest of this paper is organized as follows: Sections "Related works" and "Proposed enhancement method" present the overview of related works and the proposed enhancement methodology; the experiments and discussion of results are under Sections "Results" and "Discussion"; and finally, the conclusion of this work is under Section "Conclusion".

Related works
Image enhancement is an essential process in the design of computer-aided diagnosis solutions, and retinal images are especially susceptible to image quality issues. Over the years, researchers have experimented with different methods to enhance the visibility of artifacts in fundus images. Gupta et al. 15 proposed an enhancement technique applying adaptive gamma correction on the luminosity channel of Lab color space, where the weights are calculated using the cumulative distribution of the histogram of input image pixels. The contrast of the enhanced image in Lab space is further improved by applying a quantile-based histogram method. The authors achieved a PSNR of 27.67 and an SSIM of 0.66 for quantile = 3, and a PSNR of 28.40 and SSIM of 0.69 for quantile = 5, on the MESSIDOR dataset. Mohammed et al. 16 applied CLAHE on the normalized luminosity channel of Lab space after segmentation of the retinal region; the enhanced luminosity channel is rescaled and merged with the two chromatic channels of Lab color space. The authors achieved a PSNR of 24.42, a local contrast index of 0.57, and an entropy of 5.63. Zhou et al. 7 proposed enhancement of fundus images based on luminosity and contrast: gamma correction is applied on the luminance gain matrix obtained by converting from RGB to the HSV color space, the result is converted to Lab color space (via an intermediate conversion to RGB), CLAHE is applied on the L channel, and the output is converted back to RGB color space. From a private dataset of 4000 images, 961 poor-quality images were extracted and tested; the average image quality assessment score improved from 0.0404 to 0.4565 for the low-quality images. Kumar et al. 20 followed a similar approach but applied a weighted average histogram on the luminosity channel instead of CLAHE.
The authors assessed the enhancement using the metrics edge-based contrast measure (EBCM), contrast-enhanced image quality (CEIQ), naturalness image quality evaluator (NIQE), visual saliency-induced index (VSI), and modified measure of enhancement (MEME). Navdeep et al. 17 addressed the non-uniform illumination problem by proposing two radiance-based histogram equalization methods for retinal vessel enhancement (RIHE-RRVE and RIHE-RVE), one recursive and the other non-recursive. A tuneable parameter is estimated to split the histogram into sub-bands and to calculate the radiance value; if the radiance value is less than the threshold, histogram equalization is applied. Performance is evaluated on the DRIVE, STARE, and CHASE databases using entropy, PSNR, a Euclidean measure, and visual quality inspection. Qureshi et al. 18 converted the RGB image to CIECAM02 color space and converted the lightness component of this color space to grayscale; texture features of the fundus image are then enhanced by applying a non-linear contrast enhancement technique on the resultant grayscale image. The performance, evaluated on all the images of the MESSIDOR and DRIVE datasets, gives mean values of 4.60 entropy, 23.78 PSNR, and 8.78 contrast-to-noise ratio. Dissopa et al. 19 enhanced local image contrast by applying CLAHE in Lab space, followed by histogram rescaling and stretching to standardize the brightness to Hubbard's brightness range of fundus images for different histogram clip limits; performance is measured in terms of quaternion structural similarity, global contrast factor, and lightness order error. In a technique by Wang et al. 21 , the fundus image is decomposed into three layers: base, detail, and noise.
A visual adaptation model is then framed to perform non-uniform illumination correction using a luminance map at the base layer, weighted fusion to enhance the detail layer, and denoising at the noise layer; the authors calculated the local contrast index and entropy measures for quantitative assessment. To improve blurred retinal images, Xiong et al. 22 applied techniques specific to the background and foreground on 319 images: background pixels are estimated using an illumination map and a transmission map, while foreground pixels are captured and enhanced by applying a combination of Mahalanobis distance and entropy-based enhancement methods. Table 1 presents an overall comparative analysis of the discussed image enhancement techniques. The analysis shows that current research works apply enhancement techniques mainly to the luminosity channels. To the best of our knowledge, current research works have not considered quantifying color information for performing image enhancement. In the proposed method, we quantify the spread of information present in the color channels by calculating the variance and perform image enhancement based on that color information.

Proposed enhancement method
The proposed retinal fundus image enhancement method consists of two stages. Figure 1 presents the flow of each stage, Stage 1 and Stage 2: Stage 1 focuses on selecting the color channel for image enhancement, and Stage 2 on noise removal and brightness and contrast optimization.

Fundus image dataset. The Retinal Fundus Multi-Disease Image Dataset (RFMiD) is chosen in this research to evaluate the performance of our proposed method because it contains images of different color dominance, and the proposed method is based on color dominance. RFMiD is recent (published in 2021) and the only publicly available dataset that includes 45 retinal disease categories plus one set of healthy fundus images 23 . Table 2 shows the distribution of healthy and unhealthy retinal images. To understand the suitability of the proposed enhancement method on other publicly available datasets, we tested the algorithm on the DRIVE and MESSIDOR datasets and tabulated the results in Tables 7 and 8.

Stage 1. The variance of the blue channel of the input image is calculated as

$$\sigma^2 = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(X(i,j)-\mu\bigr)^2 \quad (1)$$

where X(i, j), µ, m, and n represent the individual pixels, the mean, and the number of rows and columns of the blue channel, respectively. The variance measure is selected to understand the spread of data variability. Table 3 shows the variance calculated for each channel of the RGB, HSV, and Lab color spaces. Compared to the variance of all the channels listed in Table 3, the variance of the blue channel shows a direct relation with the color dominance of fundus images: red-dominant images have a low blue-channel variance, while non-red-dominant images have a high blue-channel variance. The underlying principle of human vision, like that of Lab color space, is the opponent color model; since Lab color space is based on human visual perception, it is chosen in this research. The a* channel of Lab space encodes the relation between red and green pixel values in an image, while the b* channel encodes the yellow-blue relation. So, the retinal image in RGB color space is converted to Lab space using the transformations described in Eqs. (2)-(5) 24 . The conversion from RGB to Lab involves transformation to the intermediate X, Y, and Z components of the CIE XYZ color space, where X_n, Y_n, Z_n represent the CIE XYZ tristimulus values of the reference white point.

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (2)$$

$$L^{*} = 116\, f(Y/Y_n) - 16 \quad (3)$$

$$a^{*} = 500\bigl(f(X/X_n) - f(Y/Y_n)\bigr) \quad (4)$$

$$b^{*} = 200\bigl(f(Y/Y_n) - f(Z/Z_n)\bigr) \quad (5)$$

where $f(t) = t^{1/3}$ for $t > (6/29)^3$ and $f(t) = \tfrac{t}{3(6/29)^2} + \tfrac{4}{29}$ otherwise.
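A minimal NumPy sketch of this RGB-to-Lab conversion is given below. The sRGB linearization step, the conversion matrix, and the D65 reference white are standard values assumed here (the paper cites the transformations but the exact constants are not reproduced in the text); the function name is our own.

```python
import numpy as np

# D65 reference white tristimulus values (X_n, Y_n, Z_n), a standard choice.
WHITE = np.array([95.047, 100.0, 108.883])

def rgb_to_lab(rgb: np.ndarray) -> np.ndarray:
    """sRGB (uint8, H x W x 3) -> CIELAB, following the standard CIE formulas
    (Eqs. (2)-(5) in the text)."""
    c = rgb.astype(np.float64) / 255.0
    # Inverse sRGB gamma (linearization), assumed here.
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (c @ m.T) * 100.0          # Eq. (2)
    t = xyz / WHITE
    # Piecewise f(t) from the CIELAB definition.
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116.0 * f[..., 1] - 16.0                 # Eq. (3)
    a = 500.0 * (f[..., 0] - f[..., 1])          # Eq. (4)
    b = 200.0 * (f[..., 1] - f[..., 2])          # Eq. (5)
    return np.stack([L, a, b], axis=-1)
```

In practice a library routine such as OpenCV's `cv2.cvtColor` or scikit-image's `rgb2lab` would be used; the sketch only makes the transformation explicit.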
After repeated experiments analyzing the relation between blue-channel variance and image color dominance, a threshold of θ = 1500 is fixed. For a red-dominant image, i.e., one with blue-channel variance σ 2 ≤ θ, CLAHE is applied on the a* channel of Lab space, and for non-red-dominant images with σ 2 > θ, CLAHE is applied on the b* channel. The enhanced channel is merged with the other two unchanged channels of Lab space, and the result is finally converted back to RGB. The transformation from Lab to RGB color space involves an intermediate conversion to CIE XYZ color space, described in Eqs. (6)-(9) 24 .

Stage 2. Stage 2 is focused on noise removal and on brightness and contrast optimization of the output image from Stage 1. Out of the red, green, and blue channels in RGB color space, the green channel is chosen for further performance improvement because the green channel is proportional to the L channel of Lab color space 7 . The green channel has better visibility of artifacts than the other two channels and is less noise-prone. So, in this research, the modified (RGB)' image from Stage 1 is split, CLAHE is applied on its green channel, and the enhanced green channel is merged with the red and blue channels. Due to the repeated enhancements, it is essential to perform noise removal. A bilateral filter is applied using Eq. (10), where the output image X_output is given by a weighted average of the pixels of the input image X_input 25 :

$$X_{output}[i] = \frac{\sum_{j} w_{ij}\, X_{input}[j]}{\sum_{j} w_{ij}} \quad (10)$$

where the weight w_ij is the product of the photometric distance between the pixel values X_input[i] and X_input[j] and the Euclidean distance between their positions p_i and p_j. A bilateral filter is preferred over the popular Gaussian filter because of its characteristic of preserving edges between regions in an image while reducing noise by applying a non-linear function over the image pixels.
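The Stage 1 channel-selection rule above (blue-channel variance tested against the threshold θ = 1500, Eq. (1)) can be sketched as follows. This is a minimal illustration assuming an RGB array with the blue channel at index 2; the function name is our own.

```python
import numpy as np

THETA = 1500.0  # empirically fixed threshold from the paper

def select_lab_channel(rgb: np.ndarray) -> str:
    """Stage 1 channel selection: a low blue-channel variance indicates a
    red-dominant fundus image, so CLAHE is applied on a*; otherwise on b*."""
    blue = rgb[:, :, 2].astype(np.float64)
    # Eq. (1): variance of the blue channel about its mean.
    variance = np.mean((blue - blue.mean()) ** 2)
    return "a" if variance <= THETA else "b"
```

The returned label would then pick which Lab channel receives CLAHE before the merge and conversion back to RGB.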
The brightness and contrast of the filtered image are auto-optimized by calculating the gain parameter α and the bias parameter β using Eqs. (11)-(13). The alpha and beta values are calculated automatically for each input image to produce the final enhanced retinal fundus image.

The Stage 2 procedure is summarized as follows.

Output: enhanced output image (Img_output)
1: Split the fundus image in (RGB)' color space into red, green, and blue channels.
2: Apply CLAHE on the green channel from Step 1 and merge the enhanced green channel with the red and blue channels (Img_(RG'B)').
3: Apply a bilateral filter on the image in (RG'B)' format (Eq. (10)).
4: Auto-optimize brightness and contrast:
   1. Convert the (RG'B)' image to grayscale.
   2. Calculate the minimum (Gray_min) and maximum (Gray_max) pixel values of the grayscale image.
   3. Calculate the alpha and beta values:

$$\alpha = \frac{255}{Gray_{max} - Gray_{min}} \quad (11)$$

$$\beta = -\alpha \cdot Gray_{min} \quad (12)$$

where α is the scale factor and β is the delta added to the scaled values.
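A minimal sketch of the auto brightness and contrast step is given below. The grayscale conversion by channel mean is an assumption (the text does not give the conversion weights), and the α and β formulas follow the min/max-based stretch that the described steps imply.

```python
import numpy as np

def auto_brightness_contrast(img: np.ndarray) -> np.ndarray:
    """Auto brightness/contrast: stretch the observed grey range to [0, 255]."""
    # Grayscale by simple channel mean (assumed; weights not given in the text).
    gray = img.mean(axis=2)
    g_min, g_max = gray.min(), gray.max()
    if g_max == g_min:                       # flat image: nothing to stretch
        return img.copy()
    alpha = 255.0 / (g_max - g_min)          # scale factor
    beta = -alpha * g_min                    # delta added to scaled values
    out = alpha * img.astype(np.float64) + beta   # Img_output = alpha*Img + beta
    return np.clip(out, 0, 255).astype(np.uint8)
```

After this step, the darkest grey level maps to 0 and the brightest to 255, which is the brightness and contrast optimization Stage 2 ends with.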

Brightness and contrast adjustment is performed using Eq. (13):

$$Img_{output} = \alpha \cdot Img_{(RG'B)'} + \beta \quad (13)$$

Method evaluation. The proposed enhancement technique is applied to the RFMiD dataset to test the disease prediction accuracy. A pre-trained VGG16 model, using the transfer learning technique, is trained with a training set (1920 images), validated on a validation set (640 images), and evaluated on a test set (640 images), and the accuracy is estimated. A fully connected layer with 512 nodes and a ReLU activation function, followed by a dropout layer and a final layer with 1 node and a sigmoid activation function, are added on top of the VGG16 base. The model is trained with a stochastic gradient descent optimizer, with dropout used for regularization. The code for the proposed method was developed in Python, and the VGG16 model was trained on a Power AI 9 server with 16 GB RAM at 8 Hz. The experimental results are analyzed and discussed in the following section.

Results
The proposed enhancement technique is implemented, trained, and validated on the training and validation sets of the RFMiD dataset using a pre-trained VGG16 model, and evaluated on the test set to identify the presence or absence of retinal abnormalities. Model performance is evaluated by calculating accuracy and F1 score. Accuracy is the ratio of the number of correct predictions to the total number of predictions; the F1 score is the harmonic mean of precision and recall. Visual analysis of the result is carried out in RGB color space as well as in grayscale. Figure 3 compares the original input, Stage 1 output, and Stage 2 output in color space, along with a comparison between the grayscale of the input image and the output of the proposed technique. Table 4 tabulates the distribution of retinal disorders in the RFMiD training set. The performance of the enhancement technique is evaluated on the training set of the RFMiD dataset in terms of the following metrics: mean square error (MSE), peak signal-to-noise ratio (PSNR), and Universal Quality Index (UQI). UQI combines correlation loss, contrast distortion, and luminance distortion into a single performance metric 26 . The similarity of the original input image and the enhanced image is evaluated using the structural similarity index measure (SSIM) and the Pearson correlation coefficient 27 , and the information variability of the input and enhanced images is estimated by comparing their Shannon entropies; a higher Shannon entropy indicates higher information variability in an image. The considered metrics are calculated using Eqs. (14)-(19) for each channel of RGB color space and averaged; the results are tabulated in Tables 5 and 6 and analyzed in the following section.
$$MSE = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(X(i,j) - Y(i,j)\bigr)^2 \quad (14)$$

where X and Y represent the enhanced and input images, respectively, and m and n represent the number of rows and columns.

$$PSNR = 10 \log_{10}\!\left(\frac{R^2}{MSE}\right) \quad (15)$$

where R represents the maximum fluctuation present in the input image.

$$UQI = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{(\sigma_x^2 + \sigma_y^2)(\bar{x}^2 + \bar{y}^2)} \quad (16)$$

where $\bar{x}$, $\bar{y}$ are the mean intensities and $\sigma_x^2$, $\sigma_y^2$, $\sigma_{xy}$ the variances and covariance of the input and enhanced images.

$$SSIM(x, y) = l(x, y)\, c(x, y)\, s(x, y) \quad (17)$$

where l(x,y), c(x,y), and s(x,y) represent the luminance, contrast, and structure comparisons between the input (x) and enhanced (y) images.

Pearson's correlation coefficient, r:

$$r = \frac{\sum_i (x_i - x_m)(y_i - y_m)}{\sqrt{\sum_i (x_i - x_m)^2}\sqrt{\sum_i (y_i - y_m)^2}} \quad (18)$$

where x_i and y_i denote the ith pixel intensities of images 1 and 2, respectively, and x_m and y_m the corresponding mean intensities.

Shannon entropy, H(X):

$$H(X) = -\sum_{i} p_i \log_2 p_i \quad (19)$$

where p_i is the probability of the ith intensity level.
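Several of the metrics in Eqs. (14)-(19) can be computed directly; the sketch below implements MSE, PSNR, the Pearson correlation coefficient, and Shannon entropy in NumPy, with function names of our own choosing. SSIM and UQI involve windowed local statistics and are best taken from a library such as scikit-image.

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Eq. (14): mean squared error between two images."""
    return float(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2))

def psnr(x: np.ndarray, y: np.ndarray, r: float = 255.0) -> float:
    """Eq. (15): peak signal-to-noise ratio; r is the maximum pixel value."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(r * r / m)

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Eq. (18): Pearson correlation coefficient over all pixels."""
    x = x.ravel().astype(np.float64)
    y = y.ravel().astype(np.float64)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

def shannon_entropy(img: np.ndarray) -> float:
    """Eq. (19): Shannon entropy of a uint8 image in bits."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))
```

As in the paper, these would be evaluated per RGB channel and averaged across the dataset.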

Discussion
We evaluated the performance of the proposed enhancement technique using the measures MSE, PSNR, and UQI. From the analysis, it can be inferred that high UQI is achieved for retinal images with artery or vessel inflammation-related disorders like RP, RS, and MS, maintaining an SSIM of 80% and above with a Pearson correlation coefficient of 96%. The Pearson correlation coefficient is above 90% for all disease categories except Exudation, at 88%. A minimum of 73% structural similarity is maintained between the input and enhanced images. The tabulated results show that the proposed enhancement method achieves an overall average UQI of 0.81 and PSNR of 29.12, enhancing the features of the input image from an average Shannon entropy of 5.82 to 6.12 while maintaining an average 78% structural similarity and a 94 percent Pearson correlation coefficient between the input and enhanced image. From Figure 4, it is evident that anomalies present in the path of retinal vessels, like red haemorrhages, are enhanced well. The path of blood vessels serves as vital evidence to identify diseases like retinal pigment epithelium disorders. Figure 4 shows the proliferated retinal vessels in the optic cup better in the enhanced retinal image than in the original input image. It is challenging to enhance both the retinal vessels and the optic cup in a fundus image because either gets suppressed while enhancing the other. The advantage of the proposed method is that it efficiently enhances both the vessels and the optic cup for images of different resolutions.
We tested the suitability of the proposed enhancement technique on two other publicly available datasets -MESSIDOR and DRIVE.

Conclusion
Retinal image enhancement is an essential pre-processing step for better viewing the retinal anomalies that identify the type of disease a patient suffers from. This paper proposes an efficient retinal image enhancement technique based on the color dominance of an image. In Stage 1, the variance of the blue channel is calculated to find the color dominance of the input image, and depending on the resultant value, the a* or b* channel of Lab color space is chosen for enhancement: if the variance is below the threshold, the information in the blue channel is low and the a* channel is chosen; for values above the threshold, the b* channel is chosen. The contrast limited adaptive histogram equalization (CLAHE) technique is applied to enhance the selected channel. The enhanced image in Lab space is passed to Stage 2, where the corresponding green channel is enhanced, followed by noise removal using a bilateral filter and auto-optimization of brightness and contrast.
The proposed image enhancement technique is analyzed using the metrics UQI, MSE, PSNR, Shannon entropy, SSIM, and Pearson correlation coefficient on the training set of the RFMiD dataset.

Table 8. Average PSNR and Entropy measure comparison on the DRIVE dataset.

Table 9. Average PSNR and Entropy measure comparison on the MESSIDOR and DRIVE datasets.