Abstract
The growing availability of portable and affordable retinal imaging devices has made early differential diagnosis more accessible. For example, color fundus photography is readily available in remote villages, where it can help identify diseases such as age-related macular degeneration (AMD), glaucoma, and pathological myopia (PM). Similarly, astronauts aboard the International Space Station use such cameras to screen for spaceflight-associated neuro-ocular syndrome (SANS). However, because experts are unavailable at these locations, the data must be transferred to an urban healthcare facility (for AMD and glaucoma) or a terrestrial station (for SANS) for more precise disease identification. Moreover, because of low bandwidth limits, the imaging data must be compressed for transfer between the two sites. Various super-resolution algorithms have been proposed over the years to address this, and with the advent of deep learning, the field has advanced to the point that ×2- and ×4-downsampled images can be restored to their original resolution without losing spatial information. In this paper, we introduce a novel model called Swin-FSR that combines the Swin Transformer with spatial and depth-wise attention for fundus image super-resolution. Our architecture achieves peak signal-to-noise ratio (PSNR) values of 47.89, 49.00, and 45.32 on three public datasets: iChallenge-AMD, iChallenge-PM, and G1020. Additionally, we evaluated the model's effectiveness on a privately held SANS dataset and achieved results comparable to previous architectures.
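The PSNR figures reported above compare a super-resolved image against its full-resolution reference. As a minimal sketch (not the paper's evaluation code, which is not shown here), PSNR for 8-bit images can be computed as follows; the image arrays and noise model are illustrative placeholders:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images (higher is better)."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a random "fundus patch" and a slightly perturbed reconstruction.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
perturbed = np.clip(
    ref.astype(np.int16) + rng.integers(-2, 3, size=ref.shape), 0, 255
).astype(np.uint8)
print(f"PSNR: {psnr(ref, perturbed):.2f} dB")
```

Values in the high-40s dB range, as reported in the abstract, indicate reconstructions that are nearly indistinguishable from the reference at the pixel level.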
Acknowledgement
Research reported in this publication was supported in part by the National Science Foundation under grant numbers OAC-2201599 and OIA-2148788, and by NASA grant no. 80NSSC20K1831.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hossain, K.F., Kamran, S.A., Ong, J., Lee, A.G., Tavakkoli, A. (2023). Revolutionizing Space Health (Swin-FSR): Advancing Super-Resolution of Fundus Images for SANS Visual Assessment Technology. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14226. Springer, Cham. https://doi.org/10.1007/978-3-031-43990-2_65
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43989-6
Online ISBN: 978-3-031-43990-2
eBook Packages: Computer Science (R0)