
Laterality Classification of Fundus Images Using Interpretable Deep Neural Network

Journal of Digital Imaging

Abstract

In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing clinicians' workload, but also highlights the key regions in each image and evaluates the uncertainty of each decision with appropriate analytic tools. The model was trained and tested on 25,911 fundus images (43.4% macula-centered and 28.3% each superior and nasal retinal fundus images). Activation maps were generated to mark the regions of the image that were important for classification, and uncertainties were quantified to help explain why certain images were misclassified by the proposed model. Our model achieved a mean training accuracy of 99%, comparable to the performance of clinicians. Strong activations were detected at the optic disc and the retinal blood vessels around it, which matches the regions clinicians attend to when determining laterality. Uncertainty analysis showed that misclassified images tend to be accompanied by high prediction uncertainty and are likely to be ungradable. We believe that visualizing informative regions and estimating uncertainty, alongside the prediction itself, enhances the interpretability of neural network models so that clinicians can benefit from the automatic classification system.
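
The abstract describes two interpretability tools, activation maps that highlight the image regions driving the decision and an uncertainty estimate for each prediction, without giving implementation details. The sketch below is a hypothetical PyTorch illustration of how such a pipeline could look; it is not the authors' code. A toy CNN (LateralityNet) with global average pooling yields class activation maps, and Monte Carlo dropout (repeated stochastic forward passes at test time) yields a predictive entropy that can flag uncertain predictions. The architecture, layer sizes, and all function names are assumptions made only for illustration.

```python
# Minimal sketch, not the authors' implementation: a toy CNN for left/right
# laterality classification of fundus images, with (a) a class activation map
# (CAM) to highlight the regions driving the decision and (b) Monte Carlo
# dropout to quantify prediction uncertainty. All names and layer sizes are
# hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LateralityNet(nn.Module):
    """Toy stand-in for a laterality classifier: conv features, global average
    pooling, dropout, and a linear layer over two classes (left eye, right eye)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling (needed for CAM)
        self.dropout = nn.Dropout(p=0.5)      # re-enabled at test time for MC dropout
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(self.dropout(x))


@torch.no_grad()
def class_activation_map(model: LateralityNet, image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """CAM: weight the last conv feature maps by the classifier weights of the
    chosen class. In a trained model, high values should concentrate on the
    regions driving the decision (per the abstract, the optic disc and nearby vessels)."""
    model.eval()
    feats = model.features(image)                          # (B, 32, H', W')
    weights = model.classifier.weight[class_idx]           # (32,)
    cam = torch.einsum("c,bchw->bhw", weights, feats)      # (B, H', W')
    cam = F.relu(cam)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalize to [0, 1]


@torch.no_grad()
def mc_dropout_predict(model: LateralityNet, image: torch.Tensor, n_samples: int = 30):
    """Run several stochastic forward passes with dropout kept on and return the
    mean class probabilities plus the predictive entropy as an uncertainty score."""
    model.eval()
    for m in model.modules():                 # turn dropout back on, leave the rest in eval mode
        if isinstance(m, nn.Dropout):
            m.train()
    probs = torch.stack([F.softmax(model(image), dim=1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                          # (B, num_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)  # high = uncertain
    return mean_probs, entropy


if __name__ == "__main__":
    model = LateralityNet()
    fundus = torch.randn(1, 3, 224, 224)      # placeholder for a preprocessed fundus photograph
    cam = class_activation_map(model, fundus, class_idx=0)
    mean_probs, entropy = mc_dropout_predict(model, fundus)
    print("CAM shape:", tuple(cam.shape),
          "P(classes):", mean_probs.squeeze().tolist(),
          "entropy:", entropy.item())
```

In line with the abstract's finding that misclassified images tend to carry high prediction uncertainty, a threshold on the predictive entropy could be used to route such images to manual review; the specific threshold would have to be tuned on held-out data.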

Funding

This study was supported by the Small Grant for Exploratory Research of the National Research Foundation of Korea (NRF), which is funded by the Ministry of Science, ICT, and Future Planning (NRF-2015R1D1A1A02062194). The funding organizations had no role in the design or conduct of this research.

Author information

Corresponding author

Correspondence to Kyu-Hwan Jung.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

Jang, Y., Son, J., Park, K.H. et al. Laterality Classification of Fundus Images Using Interpretable Deep Neural Network. J Digit Imaging 31, 923–928 (2018). https://doi.org/10.1007/s10278-018-0099-2
