
A Novel Image Recuperation Approach for Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image

  • Somasundaram Krishnamoorthy ,

    Contributed equally to this work with: Somasundaram Krishnamoorthy, Alli P

    soma@psnacet.edu.in

    Affiliation Department of Computer Science and Engineering, PSNA College of Engineering and Technology, Dindigul, Tamil Nadu, India

  • Alli P

    Contributed equally to this work with: Somasundaram Krishnamoorthy, Alli P

    Affiliation Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai, Tamil Nadu, India

Retraction

It has been brought to the attention of the PLOS ONE editors that this article contains substantial overlap in text and scientific content with the publication below by the same authors:

Diagnosing and ranking retinopathy disease level using diabetic fundus image recuperation approach. ScientificWorldJournal. 2015;2015:534045. doi: 10.1155/2015/534045.

Figures 1 and 2 duplicate images included in the publication in The Scientific World Journal. There is also substantial overlap in the Introduction, Methods, Discussion and Conclusion sections.

The PLOS ONE editors retract this article as we consider that this constitutes redundant publication.

18 Jul 2017: The PLOS ONE Editors (2017) Retraction: A Novel Image Recuperation Approach for Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image. PLOS ONE 12(7): e0181891. https://doi.org/10.1371/journal.pone.0181891 View retraction

Abstract

Retinal fundus images are widely used in diagnosing and treating several eye diseases. Prior work on retinal fundus images detected the presence of exudation with the aid of publicly available datasets using an extensive segmentation process. Although computationally efficient, that work did not provide a diabetic retinopathy feature selection system for transparently diagnosing the disease state. The diagnosis also did not employ machine learning methods to categorize candidate fundus images in terms of true positive and true negative ratios, and many candidate fundus images lacked a detailed feature selection technique for diabetic retinopathy. To apply machine learning methods and classify candidate fundus images on the basis of a sliding window, a method called Diabetic Fundus Image Recuperation (DFIR) is designed in this paper. The initial phase of the DFIR method selects the optic cup features in digital retinal fundus images using a Sliding Window Approach, with which the disease state for diabetic retinopathy is assessed. Feature selection in the DFIR method uses a collection of sliding windows to obtain features based on histogram values, and the histogram-based feature selection with a Group Sparsity Non-overlapping Function provides more detailed feature information. In the second phase, a Support Vector Model with a Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of the disease level for each candidate set provides a promising basis for developing a practically automated diabetic retinopathy diagnosis system. Experimental work on digital fundus images evaluates the DFIR method in terms of sensitivity, specificity, ranking efficiency and feature selection time.

Introduction

The identification of exudates in the macular region plays a significant part in detecting diabetic macular edema and helps grade the severity of the disease with a high level of sensitivity. As a result, exudate detection is an important task for efficient diagnosis. In the design of computer-based diagnosis and ranking of diabetic retinopathy, many algorithms and techniques for efficient exudate detection have been presented and discussed.

The first part of this introductory section discusses the different ways in which researchers have applied image processing to the efficient diagnosis of diabetic retinopathy. A two-fold method for early detection of macular edema can be built with the help of mathematical morphology to identify the fovea region. By applying mathematical morphology, the computational load is minimized while the global characteristics of the images are efficiently captured.

Diabetic retinopathy (DR) is one of the most fundamental causes of blindness in the modern world, and Diabetic Macular Edema (DME) can be identified by detecting the exudates present in fundus images. A technique called Feature based Macular Edema Detection (FMED) [1] was introduced for diagnosing DME with the help of novel features, namely color, wavelet decomposition, lesion segmentation and so on. Even though the technique proved computationally efficient, a feature selection system for transparent disease diagnosis was not possible.

Macular edema is one of the retinal conditions that reduces vision. In [2], a two-fold technique for measuring the severity of DME was presented using classification of digital fundus images. A supervised learning model was framed using normal digital fundus images, where features were extracted to capture the dynamic form of the digital fundus images and efficient means were introduced to differentiate between normal and DME images. Though sensitivity and specificity were improved, the work assumes that an image is normal if it contains no lesions.

In order to effectively measure the severity of retinal diseases, vessel extraction is also considered an important means of analyzing digital fundus images. Contrast-Limited Adaptive Histogram Equalization (CLAHE) [3] efficiently segmented the retinal blood vessels and differentiated Proliferative Diabetic Retinopathy (PDR) images from normal digital fundus images using 2-D match (Gabor) filters, resulting in a higher level of accuracy. Though accuracy was improved, a threshold value had to be set for each digital fundus image. Another work on proliferative diabetic retinopathy presented in [4] was developed with the objective of analyzing the angiogenic potential.

A model that used a 2-D Gabor wavelet for vessel enhancement [5] was presented for significant vessel segmentation. The segmentation model extracted even the thinnest vessels under large illumination variation. Though accuracy was obtained, the time required increased with the extent of vessel segmentation.

Diabetic retinopathy, considered a critical eye disease, can result in blindness in a significant proportion (up to 50%) of cases. The optimally adjusted morphological operator approach of [6] investigated patient states and presented optimally adjusted morphological operators for efficient detection of exudates. One of the main drawbacks of this method was that the mechanically detected exudates did not include a detailed feature selection technique for diabetic retinopathy detection.

Effective diagnosis and treatment of diabetic retinopathy (DR) in young adults have improved considerably in recent years. In [7], a summary of methods was provided for early detection and treatment of DR. A framework was designed in [8] for benchmark databases, and novel measures for protocol design in the treatment of diabetic retinopathy within medical image analysis were also introduced.

In [9], an image processing technique using adaptive histogram equalization was presented for efficient detection of exudates in digital fundus images. Clustering algorithms were then applied for significant segmentation of the exudates, which were provided as inputs to an Echo State Neural Network (ESNN) to differentiate between normal and affected digital fundus images. Though the results classified the lesions accurately, various symptoms of diabetic retinopathy were not detected. Detection of microaneurysms (MAs) was used as a key for early detection of diabetic retinopathy.

In [10], a three-fold mechanism was presented for early MA detection with the help of filter banks, resulting in higher accuracy. In [11], a group of morphological operators was investigated for early detection of microaneurysms on non-dilated pupils and low-contrast retinal images, resulting in improved precision and accuracy. In [12], Non-Proliferative Diabetic Retinopathy (NPDR) features were identified from digital fundus images using segmentation based on feature extraction; classification in NPDR was performed on the basis of the intensity level of the features and their frequency measures, and the reported sensitivity and specificity supported diabetic retinopathy detection. However, although Feature based Macular Edema Detection (FMED) detected the presence of exudation in fundus images using a publicly available dataset, it did not include a retinopathy feature selection system for diagnosing the disease state.

Based on the aforementioned techniques and methods, an efficient mechanism for diagnosing and ranking the disease level of diabetic retinopathy, the DFIR method based on a sliding window approach, is presented. The study evaluates the effectiveness of the DFIR method for early identification of the disease state through the sliding window approach. The contributions of the DFIR method are as follows:

  1. To efficiently diagnose diseased fundus images using the DFIR method and observe the level of severity using the ranking model based on the Sliding Window Approach.
  2. To diagnose the diabetic retinopathy disease state during the initial phase by selecting the optic cup features in digital retinal fundus images.
  3. To provide more detailed feature information using the Group Sparsity Non-overlapping Function applied to histogram values.
  4. To effectively rank the diabetic retinopathy disease level using the Support Vector Model with the aid of the Spiral Basis Function.
  5. To provide a promising basis for developing a practically automated and assisted diabetic retinopathy diagnosis system.

The organization of the paper is as follows. Section 2 provides the framework for diagnosing the disease level with the aid of the image recuperation approach, followed by a detailed algorithm and an architecture diagram. Section 3 describes the experimental setup and Section 4 presents a detailed analysis of the results. Section 5 provides a detailed comparison with other state-of-the-art methods. Finally, Section 6 concludes the paper.

Methods

Ethics Statement

Experiments were conducted using DIARETDB1, the Standard Diabetic Retinopathy Database, in which patients were selected in accordance with benchmark diabetic retinopathy detection protocols. We included images obtained from 40 patients, using 35 color fundus images with sliding windows of size 7, for experimental purposes.

Study Subject

A comprehensive computerized search was carried out over 89 color fundus images, of which 84 contain at least mild non-proliferative signs of diabetic retinopathy. Forty patients (25 males, 15 females) aged between 60 and 75 years and having diabetes are included in this study. All 25 male patients were aged between 70 and 75 years, whereas the 15 female patients were aged between 60 and 65 years.

Statistical Analysis

Statistical analysis is performed using the diabetic fundus image recuperation approach to diagnose the disease level and measure its stage. The images of the 40 patients are extracted, and the histogram value of each image is obtained using the sliding window approach. The histogram intensity ranges of both the left and right boundaries are obtained for the 40 patients. The feature selection time for the 40 patients is reduced by applying the group sparsity non-overlapping function, and this feature selection time is evaluated. Finally, the disease-affected area of the color fundus images of the 40 patients is measured using the spiral basis function.

Materials and Methods

3.1 Diabetic Fundus Image Recuperation Approach for Diagnosing Disease Level

In this section, an efficient method for diagnosing the disease level of diabetic fundus images using the recuperation approach is explained with the help of an architecture diagram. The work starts with the design of the Sliding Window Approach, followed by the identification of disease using the optic cup and, finally, the design of a Support Vector Model for disease ranking.

Exudates are among the most commonly occurring lesions in diabetic retinopathy fundus images. The shape, intensity and position of candidate fundus images differ considerably among diabetic retinopathy patients. Existing diabetic macular edema work included information related to blood vessels, length, width, tortuosity and branching pattern, but did not provide much information about pathological feature changes and did not consider the disease severity of DR. The DFIR method, on the other hand, analyzes the disease severity of DR using the Spiral Basis Function. Initially, the Sliding Window Approach in the DFIR method divides the digital fundus images into sliding window blocks of varying dimensional length. The size of the digital fundus images and the dimensional length vary with different sets of patients.

Fig 1(A) shows the application of the Sliding Window Approach in the DFIR method, with the image divided into blocks. The sliding window based feature selection in the DFIR method initially divides the optic cup region of the digital fundus image into blocks of a specific window size. As shown in the figure, three blocks (in different colors) are marked; the blocks vary according to the patient, and three sliding windows are represented, namely sliding window 1, sliding window 2 and sliding window 3. For the sliding window blocks, the DFIR method evaluates two sets of histogram results: the first set is computed from the left boundary of each block, whereas the second set is computed from the right boundary. The results of the two sets are combined into one operational set using the Group Sparsity Non-overlapping Function, as sketched below.
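The block division can be illustrated with a minimal Python sketch (illustrative only, not the authors' MATLAB implementation; the image array, the window size and the assumption of non-overlapping blocks are ours):

import numpy as np

def split_into_windows(image, window):
    """Divide a 2-D fundus image (grayscale, as a NumPy array) into window x window
    blocks, sweeping left-to-right and top-to-bottom as in Fig 1(a). The blocks are
    taken as non-overlapping, matching the Group Sparsity Non-overlapping Function."""
    blocks = []
    rows, cols = image.shape
    for r in range(0, rows - window + 1, window):
        for c in range(0, cols - window + 1, window):
            blocks.append(image[r:r + window, c:c + window])
    return blocks

# Example: a synthetic 28 x 28 image split with the window size 7 used in the experiments.
blocks = split_into_windows(np.arange(28 * 28).reshape(28, 28), 7)
print(len(blocks))  # 16 blocks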

Fig 1. (a) Sliding Window on DFIR method (b) Distance Computation on Affected Diseased Optic Cup.

https://doi.org/10.1371/journal.pone.0125542.g001

The DFIR method uses the Group Sparsity Non-overlapping Function with non-overlapping pixels to evaluate the histogram intensity range. With the aid of the histogram intensity, features are easily spotted in the digital fundus images. The Group Sparsity Non-overlapping Function removes overlapping pixel values, and as a result the features are selected within a minimal time period.

The selected features are used in the second phase of the DFIR method to analyze the disease severity of DR through the ranking level. To measure the ranking level, a Support Vector Model is used in the DFIR method. The Spiral Basis Function measures the distance of the disease-affected area in the digital fundus image and ranks the patient's severity level accordingly.

Fig 1(B) shows the distance computation performed, i.e., the severity level, on digital fundus images. It represents one of the color fundus images that is slightly affected by a diseased optic cup. The optic portions are identified using a multivariate Gaussian distance obtained from a male patient aged 72 years. The Spiral Basis Function uses the multivariate Gaussian distance to identify the depth of the affected region in the digital image. With the aid of the multivariate Gaussian distance, the level of disease is obtained and ranked based on the computed value. The overall architecture of the DFIR method is depicted in Fig 2.

As illustrated in Fig 2, the DFIR method diagnoses the diseased fundus images and examines the level of severity using the ranking model. Initially, the DIARETDB1 fundus image database is taken for the experimental work; though Caucasian in nature, the method can be applied to different ethnicities. 35 color fundus images, collected from 40 different patients, are used to conduct the experiments. Due to space constraints, three images obtained from two male patients and one female patient are extracted and shown in the figure; all three patients are affected by diabetic retinopathy. The fundus images are divided into blocks using the sliding window approach and two sets of histogram values are computed. The computed histogram values are passed to the Group Sparsity Non-overlapping Function to remove overlapping pixels from the computation. The Group Sparsity Non-overlapping Function then selects the features and offers detailed feature information, which is used by the Support Vector Model to rank the diabetic retinopathy disease level. Finally, the Spiral Basis Function is used to compute the distance, i.e., the depth of the affected area, using the multivariate Gaussian distance.

3.2 Sliding Window Approach

The sliding window approach is used for localizing the features by dividing the fundus images into blocks {b1, b2, b3, …, bn} of size 'n'. The Sliding Window Approach in the DFIR method uses a rectangular boundary to perform the sliding window operation on fundus images. The window of size 'n' contains the block such that bn = op(a11, a12, a13, …, a1n) (1)

The sliding window division operation in (1) divides the optic cup 'op' into 'n' blocks. In the DFIR method, {a11, a12, a13, …, a1n} is the divided boundary of the 'n'-dimensional digital fundus images. The histogram intensity ranges of the left and right boundaries are computed as in (2) and (3). (2) (3)

Here 'LB' and 'RB' are the left and right boundaries of the fundus image with which the histogram intensity value is evaluated. The intensity range is measured based on the row and column pixels of the digital image. Smax in the DFIR method is the maximum group sparsity value used for the histogram value computation. In the left boundary range, the value is incremented as the window moves from the left end to the right end of the image; in the right boundary range, the value is decremented as the window moves from the right end to the left end of the image. The algorithmic steps of the Sliding Window Approach are described as follows:

//Sliding Window Approach

Begin

Input: Digital Fundus Images from DIARETDB1 database

Output: Feature Selected with detailed optic cup information

Step 1: Divide the fundus image into blocks b1, b2, b3, …, bn of size 'n'

Step 2: Compute bn = op (a11, a12, a13, ……, a1n)

Step 3: For i = Row point 1 to n

Step 4: For j = Column point 1 to n

Step 4.1: Evaluate LB (Histogram) using (2)

Step 4.2: Evaluate RB (Histogram) using (3)

Step 4.3: Compute Group Sparsity function to avoid overlapping of pixels

Step 4.4: Window Fundus image = LB (Histogram)+RB (Histogram)

Step 5: End (For j)

Step 6: End (For i)

End

The above algorithm describes the Sliding Window Approach through algorithmic steps with a for-loop structure. The histogram intensity values of the left and right boundaries are computed over the rows and columns in the DFIR method. The two sets of values are combined, and the overlapping condition is removed in Step 4.3 using the Group Sparsity Non-overlapping Function, which is described in the following section. An illustrative sketch of Steps 3–4.4 is given below.
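The following is a rough Python illustration of Steps 4.1–4.4 (a sketch under our own assumptions: the exact expressions of Eqs (2) and (3) are not reproduced, so the left and right boundary sweeps are approximated by histogramming the two halves of each block over 8-bit intensities):

import numpy as np

def boundary_histograms(block, bins=16):
    """Steps 4.1/4.2: one histogram accumulated from the left boundary sweep and one
    from the right boundary sweep of a block (8-bit intensities assumed)."""
    half = block.shape[1] // 2
    lb_hist, _ = np.histogram(block[:, :half], bins=bins, range=(0, 255))
    rb_hist, _ = np.histogram(block[:, half:], bins=bins, range=(0, 255))
    return lb_hist, rb_hist

def window_feature(block, bins=16):
    """Steps 4.3/4.4: splitting each block at its midpoint keeps the two sweeps
    non-overlapping; the combined histogram is the block's feature vector."""
    lb_hist, rb_hist = boundary_histograms(block, bins)
    return lb_hist + rb_hist

In this reading, window_feature would be applied to every block produced by the earlier split_into_windows sketch.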

3.2.1 Group Sparsity Non-overlapping Function

The optic cup histogram values from the right and left boundaries are combined to reduce the feature selection time, and the Group Sparsity Non-overlapping Function is computed as in (4). (4)

In (4), 'S' denotes the sparsity histogram value for window sizes from 1, 2, …, max. The Group Sparsity Non-overlapping Function (GS) is squared when the left and right boundary values are combined. The feature selection time taken by the DFIR method is formulated as in (5). (5)

The combined histogram value with time 'T' produces the result with features obtained from both LB(FI) and RB(FI), a combined value without any overlapping of pixel values. A sketch of this timed feature selection step follows.
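A minimal sketch of the timed feature selection step (the published forms of Eqs (4) and (5) are not reproduced; 'extractor' stands for any per-block feature function, e.g. window_feature from the sketch above):

import time
import numpy as np

def timed_feature_selection(blocks, extractor):
    """Collect one feature vector per block and record the feature selection time 'T'
    in milliseconds, standing in for Eq. (5)."""
    start = time.perf_counter()
    features = np.stack([extractor(b) for b in blocks])
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return features, elapsed_ms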

3.3 Support Vector Model

The DFIR machine learning method uses a Support Vector Model to rank the disease severity level of DR from digital fundus images. The features obtained through the group sparsity non-overlapping function are used in the Support Vector Model to identify the position of the disease-affected area in fundus images. In the DFIR method, the ranking of diseased fundus images is performed using the Spiral Basis Function.

3.3.1 Spiral Basis Function.

The Spiral Basis Function in the Support Vector Model of the DFIR method uses the multivariate Gaussian distance to identify the depth of the disease-affected position. With the identified depth level of the disease-affected position, the ranking of the retinopathy disease level is obtained. The Spiral Basis Function with the multivariate Gaussian distance is denoted as in (6). (6)

The multivariate Gaussian distance with σ1 and σ2 denotes the depth of the affected portion in the fundus image from left to right, whereas μ1 and μ2 denote the length over which the disease spreads in the four directions denoted by 1/4. The distance is computed and a threshold value is set to rank the disease level. If the value computed in (6) is higher than the threshold, the rank is increased, i.e., the disease severity level of DR increases; if the value is lower than the threshold, the rank is decreased, i.e., the disease severity level of DR decreases. The ranking in the DFIR method yields a practical assisted diabetic retinopathy disease ranking system; a minimal sketch of this thresholded ranking step is given below.
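A minimal Python sketch of the thresholded ranking (an interpretation, not the published Eq. (6): the multivariate Gaussian distance is taken here as a Mahalanobis-style distance from assumed reference statistics mu and sigma):

import numpy as np

def gaussian_distance(x, mu, sigma):
    """Mahalanobis-style multivariate Gaussian distance of a window feature 'x' from
    reference statistics (mu, sigma); stands in for the distance term of Eq. (6)."""
    diff = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(diff @ np.linalg.inv(sigma) @ diff))

def rank_severity(distances, threshold):
    """Threshold rule described for Eq. (6): every window whose distance exceeds
    the threshold raises the severity rank by one."""
    return sum(d > threshold for d in distances)

# Toy example with two feature dimensions (means mu1, mu2; spreads sigma1, sigma2).
mu = np.array([0.0, 0.0])
sigma = np.diag([1.0, 1.0])
print(rank_severity([gaussian_distance([2.5, 0.0], mu, sigma)], threshold=2.0))  # rank 1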

Results

The DFIR method is developed on the MATLAB platform and uses DIARETDB1, the Standard Diabetic Retinopathy Database. DIARETDB1 is a public database used for benchmarking diabetic retinopathy detection. The idea is to define a database of test images used as benchmark images for feature selection. The DIARETDB1 database includes 89 color fundus images; 84 of these contain at least mild non-proliferative signs of diabetic retinopathy, while the remaining 5 are considered normal and do not contain any signs of diabetic retinopathy. The DFIR method is evaluated using 35 color fundus images collected from 40 different patients.

4.1 Experimental Analysis of DFIR Method

The DIARETDB1 database is used to conduct the experiments, and the results of the defined testing method are compared with existing methods. The DFIR method is compared with the existing Feature based Macular Edema Detection (FMED) [1] and Optical Coherence Tomography (OCT) [13]. The experiments consider factors such as sensitivity, specificity rate, ranking efficiency and feature selection time. The sensitivity of the DFIR method in (7) measures the capability to diagnose patients with the retinopathy disease (in %), also referred to as a measure of a true positive test for retinopathy disease.

Sensitivity (%) = True Positives / (True Positives + False Negatives) × 100 (7)

Here True Positives refers to the number of diabetic retinopathy cases that were correctly detected, and False Negatives refers to the number of diabetic retinopathy cases that were not detected. The specificity of the DFIR method in (8) measures the capability to correctly identify patients without retinopathy disease (in %), also referred to as a measure of a true negative test for retinopathy disease.

Specificity (%) = True Negatives / (True Negatives + False Positives) × 100 (8)

Here True Negatives refers to the number of non-diabetic-retinopathy cases that were correctly identified as non-diabetic retinopathy, and False Positives refers to the number of non-diabetic-retinopathy cases that were wrongly detected as diabetic retinopathy. The ranking efficiency of the DFIR method is measured using the Support Vector Model based on the Spiral Basis Function evaluated in (6), and is expressed as a percentage (%). The time taken to select the features using the DFIR method is evaluated from (5) and is measured in milliseconds (ms). A short sketch of these two rates follows.
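These two rates follow the standard definitions and can be written directly in Python (the counts in the example are made up for illustration):

def sensitivity(tp, fn):
    """Eq. (7): percentage of diseased fundus images correctly detected."""
    return 100.0 * tp / (tp + fn)

def specificity(tn, fp):
    """Eq. (8): percentage of healthy fundus images correctly identified."""
    return 100.0 * tn / (tn + fp)

# Hypothetical counts: 30 diseased images with 27 detected, 10 healthy with 9 correctly identified.
print(sensitivity(tp=27, fn=3))  # 90.0
print(specificity(tn=9, fp=1))   # 90.0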

4.1.1 Result Analysis of DFIR.

The results of the DFIR method using the Standard Diabetic Retinopathy Database are compared with the existing Feature based Macular Edema Detection (FMED) [1] and Optical Coherence Tomography (OCT) [13]. S1 Table shows the sensitivity rates obtained from 40 patients using the MATLAB simulator, compared with the two other methods, FMED [1] and OCT [13].

Fig 3 shows the sensitivity obtained from 40 patients and compares the two other methods, FMED and OCT, with the proposed DFIR method. The proposed DFIR method provides a higher sensitivity rate than the FMED [1] and OCT [13] methods. The increase in sensitivity using the DFIR method is due to the incorporation of the Sliding Window Approach: by applying the Sliding Window Approach over a varying range, more elaborate feature information is obtained from the histogram values using the Group Sparsity Non-overlapping Function, improving the sensitivity by 7–18% compared to FMED. In addition, by using the intensity range based on the row and column pixels of the digital image, both the left and right boundaries are obtained, whereas OCT addressed only the left eye. As a result, the sensitivity of DFIR proved to be 20–27% higher than that of the OCT [13] method.

The comparison of specificity is presented in S1 Table with respect to the number of images, in the range 5–40. As the number of images increases, the specificity also increases, though not linearly, because the size of the digital fundus images varies from patient to patient and the specificity varies accordingly.

To ascertain the specificity performance, a comparison is made with the two other existing methods, Feature based Macular Edema Detection (FMED) [1] and OCT [13]. In Fig 4, the number of images used for the experiments is varied between 5 and 40. Fig 4 shows that the specificity is higher using the proposed DFIR method than with the two other existing methods. The increase in specificity is due to the application of the Support Vector Model: the DFIR method based on the Spiral Basis Function effectively ranks the diabetic retinopathy disease level, increasing the specificity by 10–15% compared to the FMED [1] method. Furthermore, ranking the disease level for each candidate set using the Spiral Basis Function with the multivariate Gaussian distance helps efficiently identify the affected portion of the fundus image from left to right. As a result, the specificity of the DFIR method is 16–3% higher than that of the OCT [13] method.

The ranking efficiency of the DFIR method is elaborated in S2 Table. We consider sliding windows of size 7 for the experiments in MATLAB. In Fig 5, the ranking efficiency attained using sliding windows of sizes 1 to 8 is depicted. Fig 5 shows that the ranking value achieved using the proposed DFIR method is higher than with the two other existing methods, Feature based Macular Edema Detection (FMED) [1] and OCT [13]. We can also observe that increasing the size of the sliding windows increases the ranking efficiency for all methods, but it is comparatively higher for the DFIR method because the Spiral Basis Function with the multivariate Gaussian distance efficiently identifies the depth of the disease-affected position. With the identified depth level points, the ranking of the retinopathy disease level is improved by 15–20% over FMED [1], where both the true positive and true negative ratios are evaluated using DFIR whereas no such testing was possible in FMED. In addition, the Spiral Basis Function efficiently identifies the distance of the disease-affected area and ranks the patient's severity level, improving the ranking by 23–32% compared to the OCT [13] method.

S2 Table and Fig 6 show the feature selection time versus the number of sliding windows, measured in milliseconds. The feature selection time using the proposed DFIR method is comparatively lower than that of the two other methods, and it decreases as the size of the sliding windows increases. This is because the Group Sparsity Function with non-overlapping pixels in the DFIR method evaluates the histogram intensity range, and with the aid of the histogram intensity the features are selected within a minimal time period. In addition, the two sets of values, i.e., the left and right boundaries, are combined to remove the overlapping condition, minimizing the feature selection time by 20–91% compared to the FMED [1] method and by 48–83% compared to the OCT [13] method.

Discussion

A major cause of blindness is diabetic retinopathy. Early identification and proper medication can prevent the disease and avoid blindness to a certain extent. One of the most dangerous retinal injuries, caused by gazing at the sun, is solar retinopathy, which occurs when a solar eclipse is observed without any precautionary measures. At the same time, one of the safest ways to view a solar eclipse is by indirect projection of the image, as suggested by many researchers.

Fourier-domain optical coherence tomography (OCT) [13] was introduced as an imaging tool to detect retinal damage in such individuals. Though it proved to be an efficient tool, it was applied only to the left eye. A hybrid approach for both the left and right eye, the Combined Cross-point Number method [14], was designed for efficient diagnosis of diabetic retinopathy; when applied to retinal fundus images, it proved highly precise and accurate, with a minimized false error rate, for treating diabetic retinopathy. In [15], different measures were taken to identify metabolite concentrations in patients suffering from type 2 diabetes mellitus and diabetic retinopathy, and the relationship between metabolites and the disease was measured.

An automatic method for efficient detection of exudates, the Differential Morphological Profile (DMP) [16], was constructed from digital color fundus images using three major phases. Gaussian smoothing and contrast enhancement were performed in the initial stage, after which the DMP was applied to the extracted features. Finally, the actual exudates were obtained on the basis of the optic disc location, shape index and area, resulting in good accuracy. Microperimetry [17] has the added advantage of combining a functional parameter with the morphologic status of the retina.

With the application of microperimetry, changes in the level of diabetes were efficiently measured, but accurate results were not obtained. To obtain accurate results, different techniques, including image normalization, an efficient compactness classification model, significant morphology operations, Gaussian filtering and a threshold factor, were introduced for early detection of neovascularization. To further improve accuracy, a region-based neovascularization classification method was applied.

Conclusion

Diagnosing and ranking the retinopathy disease level has become key for digital fundus images, with the main objective of improving sensitivity while reducing the time taken for feature selection. In this work, the performance of a machine learning method called the Diabetic Fundus Image Recuperation (DFIR) method is investigated. The DFIR method efficiently categorizes candidate fundus images based on the Sliding Window Approach, which increases sensitivity and greatly improves ranking efficiency. First, we study the use of the Sliding Window Approach to divide fundus images into blocks and propose the use of rectangular windows for feature selection in DFIR. Second, we develop a Group Sparsity Non-overlapping Function with non-overlapping pixels for evaluating the histogram intensity range, which works with the feature selection module to minimize the feature selection time. We also integrate the Spiral Basis Function with the features selected using the Support Vector Model for efficient ranking of diabetic retinopathy disease using DIARETDB1, the Standard Diabetic Retinopathy Database. The experiments conducted using DIARETDB1 show that the DFIR method outperforms the state-of-the-art methods in terms of sensitivity, specificity, ranking efficiency and feature selection time.

Supporting Information

S1 Table. Tabulation for Sensitivity and Specificity.

High percentage sensitivity and specificity identifies the best model.

https://doi.org/10.1371/journal.pone.0125542.s001

(DOCX)

S2 Table. Tabulation for Ranking Efficiency and Feature selection time.

High percentage Ranking Efficiency and Quick Feature Selection time identifies the best model.

https://doi.org/10.1371/journal.pone.0125542.s002

(DOCX)

Author Contributions

Conceived and designed the experiments: SK AP. Performed the experiments: SK. Analyzed the data: SK. Contributed reagents/materials/analysis tools: SK AP. Wrote the paper: SK AP.

References

  1. Luca G, Fabrice M, Thomas KP, Yaqin L, Seema G, Kenneth TW. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Med Image Anal. 2012; 16:216–226. pmid:21865074
  2. Deepak SK, Jayanthi S. Automatic Assessment of Macular Edema from Color Retinal Images. IEEE Trans Med Imaging. 2012; 31:766–776. pmid:22167598
  3. Girish SR, Vivek KN, Chandan C. Small retinal vessels extraction towards proliferative diabetic retinopathy screening. Expert Syst Appl. 2011; 39:1141–1146.
  4. Ponnalagu M, Dhananjay S, Ramasamy K, Perumalsamy N, Alan WS, Veerappan M. Angiogenic Potential of Vitreous from Proliferative Diabetic Retinopathy and Eales' Disease Patients. PLoS One. 2014; 10:1–8.
  5. Usman AM, Shoab KA. Multilayered thresholding-based blood vessel segmentation for screening of diabetic retinopathy. Eng Comput. 2012; 29:165–173.
  6. Akara S, Bunyarit U, Sarah B, Williamson TH. Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods. Comput Med Imaging Graph. 2008; 32:720–727.
  7. Dorota RN, Katarzyna Z, Beata U, Dominik Z, Andrzej S, Grahyna M, et al. Current Trends in the Monitoring and Treatment of Diabetic Retinopathy in Young Adults. Mediators Inflamm. 2014; 14:1–13.
  8. Tomi K, Joni-Kristian K, Lasse L, Valentina K, Iiris S, Hannu U, et al. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy. Comput Math Methods Med. 2013; 13:1–13.
  9. Jaya KC, Maruthi R. Detection of Hard Exudates in Color Fundus Images of the Human Retina. International Conference on Communication Technology and System Design, Procedia Engineering. 2012; 30:297–302.
  10. Usman AM, Shehzad K, Shoab KA. Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recognit. 2012; 12:319–336.
  11. Akara S, Bunyarit U, Sarah B, Williamson TH. Automatic Microaneurysm Detection from Non-dilated Diabetic Retinopathy Retinal Images. Proceedings of the World Congress on Engineering. 2011.
  12. Atul K, Abhishek KG, Manish S. A Segment based Technique for detecting Exudate from Retinal Fundus image. Second International Conference on Communication, Computing & Security (ICCCS-2012). 2012.
  13. Ya-Hsin K, Tsung-Tien W, Shwu-Jiuan S. Subtle Solar Retinopathy Detected by Fourier-domain Optical Coherence Tomography. J Chin Med Assoc. 2010; 396–398.
  14. Aibinu AM, Iqbal MI, Shafie AA, Salami MJE, Nilsson M. Vascular intersection detection in retina fundus images using a new hybrid approach. Comput Biol Med. 2010; 40:81–89. pmid:20386056
  15. Jianfei X, Zonghua W, Feifei Z. Association between Related Purine Metabolites and Diabetic Retinopathy in Type 2 Diabetic Patients. Int J Endocrinol. 2014; 14:65–76.
  16. Shraddha T, Krishna KS, Singh BK, Akansha M. Automatic Detection of Exudates in Retinal Fundus Images using Differential Morphological Profile. Int J Engg Tech. 2013; 13:85–96.
  17. Edoardo M, Stela V. Microperimetry in diabetic retinopathy. Saudi J Ophthalmol. 2011; 25(2):131–135. pmid:23960914