
Improve Unseen Domain Generalization via Enhanced Local Color Transformation

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Abstract

Recent applications of deep learning in medical imaging achieve expert-level accuracy. However, accuracy often degrades greatly on unseen data, for example, data from different device designs and population distributions. In this work, we consider a realistic domain generalization problem in fundus image analysis: a model is trained on one domain but tested on unseen domains. Here, the known domain data come from a single fundus camera manufacturer, Canon. The unseen data are images from different demographic populations with distinct photography styles, taken with Topcon, Syseye, and Crystalvue cameras. Model performance is evaluated on two objectives: age regression and diabetic retinopathy (DR) classification. We find that performance on unseen domains can decrease significantly; for example, the mean absolute error (MAE) of age prediction can increase by 57.7%. To remedy this problem, we introduce an easy-to-use method, named enhanced domain transformation (EDT), to improve performance on both seen and unseen data. The goal of EDT is to achieve domain adaptation without labeling or training on unseen images. We evaluate our method comprehensively on seen and unseen data sets, considering demographic distribution, image style, and prediction task. All results demonstrate that EDT improves performance on seen and unseen data in both age prediction and DR classification. Equipped with EDT, the \(\mathrm{R}^2\) (coefficient of determination) of age prediction improves from 0.599 to 0.765 (n = 29,577) on Crystalvue images, and the AUC (area under the curve) of DR classification increases from 0.875 to 0.922 (n = 1,015).
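The abstract reports its age-regression results in terms of MAE and \(\mathrm{R}^2\). For readers unfamiliar with these metrics, a minimal sketch of both is below; the helper functions and toy age values are illustrative and not taken from the paper.

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error: average of |true - predicted|."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return 1.0 - ss_res / ss_tot

# Hypothetical ages (years) and model predictions.
ages = np.array([60.0, 65.0, 70.0])
preds = np.array([61.0, 63.0, 72.0])

print(mae(ages, preds))  # mean of 1, 2, 2 -> 5/3
print(r2(ages, preds))   # 1 - 9/50 -> 0.82
```

Under this definition, the abstract's "MAE could increase by 57.7%" is a relative change (e.g. an MAE of 3.0 years rising to about 4.73 years), and \(\mathrm{R}^2\) rising from 0.599 to 0.765 means the model explains a larger fraction of the variance in age on Crystalvue images.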



Author information


Corresponding author

Correspondence to Zongyuan Ge.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 2519 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Xiong, J. et al. (2020). Improve Unseen Domain Generalization via Enhanced Local Color Transformation. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_42


  • DOI: https://doi.org/10.1007/978-3-030-59713-9_42


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59712-2

  • Online ISBN: 978-3-030-59713-9

  • eBook Packages: Computer Science; Computer Science (R0)
