
Automated segmentation of choroidal neovascularization on optical coherence tomography angiography images of neovascular age-related macular degeneration patients based on deep learning

Abstract

Optical coherence tomography angiography (OCTA) has become a frequently used diagnostic method for neovascular age-related macular degeneration (nAMD) because it is non-invasive and provides a comprehensive view of the characteristic lesion, choroidal neovascularization (CNV). Studying CNV characteristics requires an automated method to identify and quantify it. Here, we have developed a deep learning model that automatically segments CNV regions from OCTA images. Specifically, we use the ResNeSt block as our basic backbone, which learns better feature representations through group convolution and split-attention mechanisms. In addition, because CNVs vary in size, we developed a spatial pyramid pooling module that uses different receptive fields to extract contextual information at multiple scales, further improving the segmentation performance of the model. Experimental results on a clinical OCTA dataset containing 116 OCTA images show that the CNV segmentation model has an AUC of 0.9476 (95% CI 0.9473–0.9479), with specificity and sensitivity of 0.9950 (95% CI 0.9945–0.9955) and 0.7271 (95% CI 0.7265–0.7277), respectively. In summary, the model performs satisfactorily in extracting CNV regions from the background of OCTA images of nAMD patients.

Introduction

Age-related macular degeneration (AMD) is a common degenerative eye disease in the elderly population and a leading cause of blindness worldwide. A meta-analysis showed that the prevalence of AMD in people aged 45–85 years was 8.69% in 2013, and the total number of patients is expected to reach 288 million by 2040 [1]. AMD particularly affects the macular region, leading to progressive loss of central vision or even irreversible blindness, placing a tremendous burden on patients' daily lives as well as healthcare resources. Thus, early detection and timely treatment are necessary. Advanced-stage AMD is classified into wet (also known as exudative or neovascular) and dry (also known as non-exudative or atrophic) forms, characterized by the presence of choroidal neovascularization (CNV) and geographic atrophy (GA), respectively [2]. In neovascular AMD (nAMD), hypoxia resulting from decreased oxygen diffusion from the choroid to the outer retina induces vascular endothelial growth factor (VEGF) production, leading to the formation of CNV [3, 4]. According to their histological patterns, CNVs can be divided into three types [5]. Both Type 1 and Type 2 CNV originate from the choriocapillaris but grow at different depths: Type 1 CNV penetrates Bruch's membrane and stays beneath the retinal pigment epithelium (RPE), while Type 2 CNV traverses the RPE and reaches the sub-retinal space. Unlike the other two types, Type 3 CNV arises from the retinal vessels, growing from the inner retina toward the outer retina. The abnormal permeability or rupture of CNV causes retinal fluid, hemorrhage, retinal pigment epithelial detachment, or fibrous scarring, which impairs vision to varying degrees [6].

To identify CNV accurately before substantial vision deterioration, a series of methods has been used in patients suspected of nAMD. Fluorescein angiography (FA), the gold standard for diagnosing CNV, indicates the location and activity of CNV by abnormal fluorescein leakage [7]. Another frequently used method, indocyanine green angiography (ICGA), has an advantage in detecting occult CNV, especially lesions covered by macular hemorrhage [8]. However, with the advent of optical coherence tomography (OCT) and OCT angiography (OCTA), their non-invasiveness and high efficiency have enabled them to gradually replace conventional invasive examinations such as FA and ICGA as the most common means of AMD diagnosis [9]. OCTA, in particular, provides quick and easy access to high-resolution en face images of both normal vessels and neovascularization [10]. By setting different scanning depths, it can show blood flow in specific layers of the retina. Automatically identifying CNV in OCTA images is therefore crucial for the subsequent quantification of CNV and analysis of AMD.

Existing solution

In order to quantitatively characterize the CNVs in OCTA images, researchers have applied machine learning methods to automatically segment CNVs from OCTA images. For example, one previous attempt detected the presence of CNVs based on saliency maps [11]. This method uses intensity-, orientation-, and location-based denoising and saliency detection to overcome problems such as projection artifacts and CNV heterogeneity. However, it relies heavily on image quality: when the CNV flow signal is stronger than the background noise and motion artifacts, it performs well, but when the CNV flow is similar to the background noise, it tends to classify that noise and those artifacts as CNV, producing a large number of false positives. In addition, this method tends to miss CNVs with thin vessels, whose flow signals are weak compared to those of thicker vessels.

Proposed solution

To address these issues, we propose a deep learning-based algorithm that automatically segments CNV from en face OCTA images of nAMD patients. Our model is based on the traditional U-net [12]. Specifically, we first replace the encoder and decoder units of the traditional U-net with a segmentation module based on the ResNeSt block [13], which learns superior feature representations through group convolution and split-attention mechanisms. In addition, the spatial relationships and contextual information between different anatomical parts of OCTA images may be overlooked by the limited receptive fields of traditional deep learning models such as the U-net [14]. One solution is to increase the model's receptive field; for example, the Pyramid Scene Parsing Network (PSPNet) [14] addresses this problem with a pyramid pooling module. Here, we take this a step further by adding an adapted spatial pyramid pooling module after each encoder to integrate contextual information at different scales, further improving the segmentation performance of the model. We conducted extensive experiments on an OCTA dataset collected in a clinical setting. The results show that our approach is significantly better than both the traditional saliency-based segmentation method and the deep learning baseline U-net.

Methodology elaboration

Our model is a modified U-net [12] with two key changes: ResNeSt blocks and spatial pyramid pooling. The overall structure of our model is shown in Fig. 1. It consists of an encoder path on the left and a decoder path on the right, with a total of four encoders and four symmetric decoders. A spatial pyramid pooling module is attached to each encoder to capture contextual information at different scales, and each encoder and decoder uses the ResNeSt block as its basic backbone. The input sizes of the five encoder stages (the last serving as the bottleneck) are \(64\times 152\times 152\), \(256\times 76\times 76\), \(512\times 38\times 38\), \(1024\times 20\times 20\), and \(2048\times 10\times 10\). Each decoder performs an upsampling operation, sequentially increasing the resolution. In addition, we introduce skip connections: feature maps of the same resolution from the encoder and decoder are concatenated as the input to the next decoder.
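As an illustration, the encoder–decoder pathway with skip connections can be sketched as follows. This is a generic sketch, not the authors' implementation: the stage modules are placeholders (the paper uses ResNeSt blocks), and the max-pooling and bilinear-upsampling choices are assumptions.

```python
import torch
import torch.nn.functional as F

def encoder_decoder_forward(x, encoders, decoders):
    """U-net-style forward pass: downsample through the encoders, then
    upsample through the decoders, concatenating same-resolution
    encoder features (skip connections) before each decoder."""
    skips = []
    for enc in encoders:
        x = enc(x)
        skips.append(x)                          # keep for the skip connection
        x = F.max_pool2d(x, 2)                   # downsample between stages
    for dec, skip in zip(decoders, reversed(skips)):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                          align_corners=False)   # upsample to skip resolution
        x = dec(torch.cat([x, skip], dim=1))     # concatenate, then decode
    return x
```

Each decoder must accept the concatenated channel count (its own input channels plus the skip's), which is why channel sizes roughly double at the concatenation points.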

Fig. 1

The overall structure of our model

ResNeSt block

ResNeSt [13] is a ResNet-style network structure incorporating a split-attention mechanism, which focuses on learning better feature representations through group convolution and attention mechanisms. Its overall structure is shown in Fig. 2. Specifically, an input feature map is first split equally into two feature maps, which are fed into two cardinality groups. In each cardinality group, the feature map is further split into two feature maps, which are then fed into two parallel convolution groups, each containing one \(1\times 1\) and one \(3\times 3\) convolution layer. These two split features are then fed into the split-attention module to integrate the features.

Fig. 2

The structure of ResNeSt and split attention module

The structure of the split-attention module is shown in Fig. 2. Specifically, we first sum the features from the two branches element-wise. A global pooling module is then applied to obtain channel-wise statistics, followed by two \(1\times 1\) convolution layers and a softmax layer to produce the attention coefficients. We then re-weight each branch feature with these coefficients and concatenate the results along the channel dimension to obtain the output features.
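A minimal PyTorch sketch of this split-attention computation might look like the following. The class and layer names and the channel-reduction ratio are illustrative assumptions; only the sequence of operations (sum, global pooling, two 1×1 convolutions, softmax, re-weight, concatenate) comes from the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    """Two-branch split-attention sketch (reduction ratio is an assumption)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        inner = max(channels // reduction, 8)
        self.fc1 = nn.Conv2d(channels, inner, 1)       # first 1x1 conv
        self.fc2 = nn.Conv2d(inner, 2 * channels, 1)   # second 1x1 conv: one score per branch/channel

    def forward(self, x1, x2):
        gap = F.adaptive_avg_pool2d(x1 + x2, 1)        # element-wise sum, then global pooling
        attn = self.fc2(F.relu(self.fc1(gap)))
        b, c = x1.shape[:2]
        attn = attn.view(b, 2, c, 1, 1).softmax(dim=1) # softmax across the two branches
        # re-weight each branch and concatenate along the channel dimension
        return torch.cat([attn[:, 0] * x1, attn[:, 1] * x2], dim=1)
```

Note that, following the text, the two re-weighted branches are concatenated rather than summed, so the output has twice the per-branch channel count.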

After obtaining the features of the two cardinality groups separately, we concatenate them and apply a \(1\times 1\) convolution layer. A shortcut connection is then added to obtain the final output features of the ResNeSt block.

Spatial pyramid pooling

In OCTA images, CNVs vary in shape and size. Traditional deep learning models such as the U-net cannot account for these varying sizes because of their limited receptive fields, and may miss small CNV regions; the spatial relationships and contextual information between different parts of an OCTA image may also be lost. For this reason, we introduce a spatial pyramid pooling module that learns contextual information at different scales to further improve the segmentation performance of the model.

Fig. 3

The structure of spatial pyramid pooling module

The structure of the spatial pyramid pooling module is shown in Fig. 3. Specifically, for an input feature map, we apply four pooling kernels of different sizes: \(2 \times 2\), \(3 \times 3\), \(6 \times 6\), and \(9 \times 9\), obtaining four pooled feature maps. We then upsample each of the four feature maps to a reference dimension, namely the input dimension of the next encoder, so that the four feature maps can be concatenated along the channel dimension. Finally, we concatenate the downsampled original feature map with these four feature maps and apply a \(1\times 1\) convolution layer to obtain the output of the spatial pyramid pooling module, which serves as the input to the next encoder.
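Under the assumption that the four kernel sizes denote adaptive pooling output grids (as in PSPNet), the module can be sketched as follows; the class name, the use of average pooling, and bilinear upsampling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Pyramid pooling sketch with grids 2x2, 3x3, 6x6, and 9x9."""
    def __init__(self, in_channels, out_channels, grids=(2, 3, 6, 9)):
        super().__init__()
        self.grids = grids
        # 1x1 conv fusing the resized input plus the four pooled maps
        self.fuse = nn.Conv2d(in_channels * (len(grids) + 1), out_channels, 1)

    def forward(self, x, ref_size):
        # ref_size is the spatial size expected by the next encoder
        base = F.interpolate(x, size=ref_size, mode="bilinear", align_corners=False)
        feats = [base]
        for g in self.grids:
            pooled = F.adaptive_avg_pool2d(x, g)     # pool to a g x g grid
            feats.append(F.interpolate(pooled, size=ref_size,
                                       mode="bilinear", align_corners=False))
        # concatenate along the channel dimension, then fuse with 1x1 conv
        return self.fuse(torch.cat(feats, dim=1))
```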

Experiments and results

To validate the segmentation performance of the proposed deep learning algorithm, we conduct experiments on a clinical OCTA dataset.

Dataset

Fig. 4

An example of OCTA images from a neovascular AMD patient. Superficial ranges from the internal limiting membrane (ILM) to the inner plexiform layer (IPL). Deep ranges from the IPL to the outer plexiform layer (OPL). Outer Retina is defined from the OPL to Bruch's membrane (BRM), while Choriocapillaris extends from BRM to BRM + 30 μm. The CNV is indicated by a red arrow on the Choriocapillaris image

Table 1 Demographic information of recruited patients

The demographic information is shown in Table 1. Of the 69 patients, 29 were female and 40 were male. The average age was 71.3 years, with a standard deviation of 8.8. Among the affected eyes, 39 were left (OS) and 30 were right (OD). Each OCTA scan consisted of four layers: Superficial, Deep, Outer Retina, and Choriocapillaris (Fig. 4). The reconstruction of en face OCTA images was based on automatic slab segmentation. CNV generally appears in the latter two layers; therefore, 54 Choriocapillaris images and 62 Outer Retina images were included in total. All OCTA images covered 6 × 6 mm centered on the fovea and had 304 × 304 pixels. The study was approved by the Medical Sciences Ethics Committee of Beijing Tsinghua Changgung Hospital and adhered to the tenets of the Declaration of Helsinki.

The CNV annotated by an experienced grader was considered the ground truth (Fig. 5). Of the 69 patients, 47 had OCTA images of both layers and 22 had OCTA images of only one layer, as there was no clear CNV in the other layer. We randomly selected 25 patients with OCTA images of both layers as the test set and used all the remaining OCTA images to train the segmentation model; the OCTA images in the training and test sets thus come from different patients. We performed 10 runs with different random seeds and report the average segmentation performance and 95% confidence intervals.
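The patient-level split described above can be sketched with a hypothetical helper; the function name and `patient_ids` argument are illustrative, and the authors' actual selection procedure may differ.

```python
import random

def patient_level_split(patient_ids, n_test=25, seed=0):
    """Split by patient (not by image) so train and test never share a patient."""
    rng = random.Random(seed)          # one seed per run; the paper uses 10 runs
    ids = sorted(patient_ids)
    test = set(rng.sample(ids, n_test))
    train = [p for p in ids if p not in test]
    return train, sorted(test)
```

Splitting at the patient level prevents leakage between a patient's two layer images, which would otherwise inflate test performance.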

Fig. 5

The manual delineation of CNV used as the ground truth

Experimental setup and implementation details

We ran all experiments using the PyTorch deep learning framework. The model was trained for a total of 200 epochs with the Adam optimizer, with the initial learning rate set to 5e-4, the batch size to 4, and the weight decay to 1e-4. We used the binary cross-entropy loss to train the model. During training, we applied data augmentation techniques such as random rotation and random horizontal/vertical flipping to improve the generalization performance of the model. The following metrics were used to assess the model: area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), specificity (SPE), intersection over union (IOU), and Dice coefficient (DICE).
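A minimal training-loop sketch with these hyperparameters is shown below. The one-layer stand-in network, toy tensors, and reduced epoch count are placeholders for illustration only, and `BCEWithLogitsLoss` is assumed as the binary cross-entropy implementation.

```python
import torch
from torch import nn

# Stand-in network; the real model is the modified U-net described above.
model = nn.Conv2d(1, 1, 3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-4)
criterion = nn.BCEWithLogitsLoss()            # binary cross-entropy on logits

images = torch.rand(4, 1, 304, 304)           # batch size 4, 304 x 304 OCTA images
masks = (torch.rand(4, 1, 304, 304) > 0.5).float()  # binary CNV masks

for epoch in range(2):                        # the paper trains for 200 epochs
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
```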

Comparison with the state-of-the-art methods

We compare our model with the traditional saliency-based segmentation model [11] and the deep model U-net [12]. Table 2 shows the segmentation performance of the different algorithms. The traditional saliency-based model achieves the worst performance, mainly because of the limited feature extraction capability of the traditional approach, which cannot generalize well to the test set. Our deep learning model outperforms both the saliency-based model and the U-net by a significant margin: compared to U-net, we achieve improvements of 2.17% in AUC, 0.96% in ACC, 4.11% in SEN, 2.66% in IOU, and 1.55% in DICE. To compare the performance differences between the algorithms, we performed two-sample t-tests [15] and report the p-values. The statistical significance level was set to 5%; a performance difference was therefore considered statistically significant if \(p < 0.05\). As shown in Table 3, in most cases our method performs significantly better than the other compared algorithms (\(p < 0.05\)).

Table 2 Segmentation performance of different algorithms on the test set
Table 3 The p-value of the two-sample t-test between our method and other comparative methods
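The pixel-wise metrics reported in Table 2 (other than AUC) can be computed from the confusion counts of binary masks, for example as below; this is a sketch, as the authors' exact evaluation code is not given.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise metrics for binary masks with values in {0, 1}."""
    tp = np.sum((pred == 1) & (gt == 1))   # true positives
    tn = np.sum((pred == 0) & (gt == 0))   # true negatives
    fp = np.sum((pred == 1) & (gt == 0))   # false positives
    fn = np.sum((pred == 0) & (gt == 1))   # false negatives
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "SEN": tp / (tp + fn),             # sensitivity (recall)
        "SPE": tn / (tn + fp),             # specificity
        "IOU": tp / (tp + fp + fn),        # intersection over union
        "DICE": 2 * tp / (2 * tp + fp + fn),
    }
```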

We also show the segmentation results of the different algorithms. As shown in Fig. 6, the traditional saliency-based model tends to classify some background noise and artifacts as CNV and therefore produces false positives. It also misses some small CNVs and CNVs with low contrast or blur. In contrast, our model can accurately identify CNVs of different sizes and contrasts, suggesting that the spatial pyramid pooling module and the ResNeSt block give our model more powerful feature extraction capabilities and thus greatly improve the segmentation results. In conclusion, our deep learning model shows satisfactory performance in automatically segmenting CNV in OCTA images.

Fig. 6

Visualization of segmentation results for different methods

Discussion

Over the past decade, artificial intelligence, especially deep learning (DL), has made great progress in medical research. Thanks to its robust capacity for image analysis, including image classification, recognition, and segmentation, DL-based algorithms perform particularly well in image-centered specialties such as radiology, pathology, and ophthalmology [16]. The availability of big data (e.g., high-resolution images and combined examination methods), together with powerful computing capabilities, enables DL to achieve performance comparable to human physicians in the diagnosis, grading, and prognosis of several common eye diseases, such as diabetic retinopathy [17, 18], glaucoma [19, 20], and AMD [21]. Several real-world studies have demonstrated that DL algorithms have great potential to assist ophthalmologists in routine tasks, and they also allow large-scale screening at extremely low cost, which would greatly improve the diagnosis rate of eye diseases, especially in low-income areas [22, 23]. In short, DL has a very promising future in ophthalmology. Its clinical application will greatly improve the diagnosis and treatment of eye diseases and, hopefully, offer a revolutionary opportunity to reduce the prevalence of blindness and the ensuing social burden.

For AMD, many DL algorithms have been developed to help ophthalmologists with empirical work. A multi-center study with a dataset of 35,948 AMD images showed that a DL system can reliably identify several retinal diseases, including AMD [18]. In addition, many groups have sought to automate AMD severity grading with DL [24, 25], offering an alternative tool for referral decisions and routine monitoring. Building on this, researchers have also developed algorithms that predict the progression risk of AMD [26,27,28]. Despite high accuracy on test datasets, this predictive performance has not yet been confirmed by any prospective study [16]. Therefore, some researchers attempt to obtain richer information about particular lesions with DL methods. DL algorithms were developed to automatically identify and quantify features including macular fluid and hyperreflective foci in OCT images [29, 30], and Schmidt-Erfurth et al. applied the quantitative results of macular fluid to direct precise anti-VEGF treatment [31].

The majority of DL systems described above are based on color fundus photographs or OCT images. The advent of OCTA provides a brand-new view of retinal structure and lesions, along with a series of new features, and is thus expected to provide more information about disease than conventional examinations. Researchers have used DL methods on OCTA images to remove artifacts [32], investigate non-perfusion areas [33], and extract the characteristics of normal vessels [34]. However, precise segmentation of CNV remains challenging. The accuracy of saliency-based methods is not satisfactory because of complicating factors such as artifacts and signal attenuation [11, 35]. Moreover, some segmentation approaches require optimal settings to achieve their best performance, which can vary among images and hampers comparison [36, 37]. This limits their application in clinical research. A DL network trained on raw OCTA images, with all their complicated texture, is advantageous here. Wang et al. developed an automated CNN-based CNV segmentation model that achieved much better results than the saliency-based methods, demonstrating the potential of DL in segmentation tasks [38].

Here, we presented a modified U-net model augmented with ResNeSt blocks and spatial pyramid pooling. It enables fully automated CNV segmentation and shows superior accuracy and DICE compared with conventional methods. Requiring no parameter adjustment or manual correction, the trained model is easy to use; it also ensures maximum comparability between images and high reproducibility. Our model still has some limitations. First, its sensitivity is relatively low. The border between CNV and normal vessels can be blurry, and the lack of absolute criteria for judging CNV at a given pixel, together with ground truth annotated by human ophthalmologists, may contribute to this problem. Second, complete exposure of the CNV on OCTA images is necessary for accurate segmentation. Intra-retinal or sub-retinal fluid often obscures neovascularization, leaving a black hole on the en face OCTA image; in such cases, our model may not reach the expected performance. In future research, we expect to automatically measure a group of parameters, both conventional and novel, based on the extracted CNV, and to establish associations between en face CNV characteristics and unsolved clinical problems of AMD, such as predicting treatment response. Fast, automated quantification allows us to retrospectively measure more complex parameters that faithfully reflect the nature of CNV and helps researchers identify those that best predict clinical outcomes.

Conclusion

In this paper, we propose a new deep learning model for segmenting choroidal neovascularization in OCTA images. Our model is based on the U-net segmentation model and contains two key improvements: the ResNeSt block and the spatial pyramid pooling module. The ResNeSt block learns better feature representations through group convolution and split-attention mechanisms, while the spatial pyramid pooling module uses multiple pooling kernels to capture contextual information at different scales. Extensive experiments on a clinical OCTA dataset validate the effectiveness of the proposed model. Future work will apply the model to larger medical image datasets.

Availability of data and materials

The dataset underlying the results presented in this paper is not publicly available at this time but may be obtained from the authors upon reasonable request.

References

  1. Wong WL, Su X, Li X, Cheung CMG, Klein R, Cheng C-Y, Wong TY. Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis. Lancet Glob Health. 2014;2(2):106–16.


  2. Ferris FL III, Wilkinson C, Bird A, Chakravarthy U, Chew E, Csaky K, Sadda SR, Macular Research Classification Committee group. Clinical classification of age-related macular degeneration. Ophthalmology. 2013;120(4):844–51.


  3. Campochiaro PA. Retinal and choroidal neovascularization. J Cell Physiol. 2000;184(3):301–10.


  4. Shweiki D, Itin A, Soffer D, Keshet E. Vascular endothelial growth factor induced by hypoxia may mediate hypoxia-initiated angiogenesis. Nature. 1992;359(6398):843–5.


  5. Spaide RF, Jaffe GJ, Sarraf D, Freund KB, Sadda SR, Staurenghi G, Waheed NK, Chakravarthy U, Rosenfeld PJ, Holz FG. Consensus nomenclature for reporting neovascular age-related macular degeneration data: consensus on neovascular age-related macular degeneration nomenclature study group. Ophthalmology. 2020;127(5):616–36.


  6. Ambati J, Fowler BJ. Mechanisms of age-related macular degeneration. Neuron. 2012;75(1):26–39.


  7. Gualino V, Tadayoni R, Cohen SY, Erginay A, Fajnkuchen F, Haouchine B, Krivosic V, Quentel G, Vicaut E, Gaudric A. Optical coherence tomography, fluorescein angiography, and diagnosis of choroidal neovascularization in age-related macular degeneration. Retina (Philadelphia, Pa). 2019;39(9):1664.


  8. Bermig J, Tylla H, Jochmann C, Nestler A, Wolf S. Angiographic findings in patients with exudative age-related macular degeneration. Graefes Arch Clin Exp Ophthalmol. 2002;240(3):169–75.


  9. An L, Wang RK. In vivo volumetric imaging of vascular perfusion within human retina and choroids with optical micro-angiography. Opt Express. 2008;16(15):11438–52.


  10. Giocanti-Auregan A, Dubois L, Dourmad P, Cohen SY. Impact of optical coherence tomography angiography on the non-invasive diagnosis of neovascular age-related macular degeneration. Graefes Arch Clin Exp Ophthalmol. 2020;258(3):537–41.


  11. Liu L, Gao SS, Bailey ST, Huang D, Li D, Jia Y. Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography. Biomed Opt Express. 2015;6(9):3564–76.


  12. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Wells WM, Frangi AF, editors. International conference on medical image computing and computer-assisted intervention. Cham: Springer; 2015. p. 234–41.


  13. Zhang H, Wu C, Zhang Z, Zhu Y, Lin H, Zhang Z, Sun Y, He T, Mueller J, Manmatha R. Resnest: split-attention networks. In: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2022. p. 2736–46.

  14. Zhao H, Shi J, Qi X, Wang X, Jia J. Pyramid scene parsing network. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 2881–90.

  15. Yuen KK. The two-sample trimmed t for unequal population variances. Biometrika. 1974;61(1):165–70.


  16. Ting DS, Peng L, Varadarajan AV, Keane PA, Burlina PM, Chiang MF, Schmetterer L, Pasquale LR, Bressler NM, Webster DR. Deep learning in ophthalmology: the technical and clinical considerations. Prog Retinal Eye Res. 2019;72: 100759.


  17. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama. 2016;316(22):2402–10.


  18. Ting DSW, Cheung CY-L, Lim G, Tan GSW, Quang ND, Gan A, Hamzah H, Garcia-Franco R, San Yeo IY, Lee SY. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. Jama. 2017;318(22):2211–23.


  19. Li F, Su Y, Lin F, Li Z, Song Y, Nie S, Xu J, Chen L, Chen S, Li H, et al. A deep-learning system predicts glaucoma incidence and progression using retinal photographs. J Clin Investig. 2022;132(11):e57968.


  20. Medeiros FA. Deep learning in glaucoma: progress, but still lots to do. Lancet Digit Health. 2019;1(4):151–2.


  21. Dow ER, Keenan TD, Lad EM, Lee AY, Lee CS, Lowenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI, et al. From data to deployment: the collaborative communities on ophthalmic imaging roadmap for artificial intelligence in age-related macular degeneration. Ophthalmology. 2022. https://doi.org/10.1016/j.ophtha.2022.01.002.


  22. Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B. Application of comprehensive artificial intelligence retinal expert (care) system: a national real-world evidence study. Lancet Digit Health. 2021;3(8):486–95.


  23. Ruamviboonsuk P, Tiwari R, Sayres R, Nganthavee V, Hemarat K, Kongprayoon A, Raman R, Levinstein B, Liu Y, Schaekermann M. Real-time diabetic retinopathy screening by deep learning in a multisite national screening programme: a prospective interventional cohort study. Lancet Digit Health. 2022;4(4):235–44.


  24. Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017;135(11):1170–6.


  25. Peng Y, Dharssi S, Chen Q, Keenan TD, Agrón E, Wong WT, Chew EY, Lu Z. Deepseenet: a deep learning model for automated classification of patient-based age-related macular degeneration severity from color fundus photographs. Ophthalmology. 2019;126(4):565–75.


  26. Burlina PM, Joshi N, Pacheco KD, Freund DE, Kong J, Bressler NM. Use of deep learning for detailed severity characterization and estimation of 5-year risk among patients with age-related macular degeneration. JAMA Ophthalmol. 2018;136(12):1359–66.


  27. Peng Y, Keenan TD, Chen Q, Agrón E, Allot A, Wong WT, Chew EY, Lu Z. Predicting risk of late age-related macular degeneration using deep learning. NPJ Digit Med. 2020;3(1):1–10.


  28. Grassmann F, Mengelkamp J, Brandl C, Harsch S, Zimmermann ME, Linkohr B, Peters A, Heid IM, Palm C, Weber BH. A deep learning algorithm for prediction of age-related eye disease study severity scale for age-related macular degeneration from color fundus photography. Ophthalmology. 2018;125(9):1410–20.


  29. Schlegl T, Waldstein SM, Bogunovic H, Endstraßer F, Sadeghipour A, Philip A-M, Podkowinski D, Gerendas BS, Langs G, Schmidt-Erfurth U. Fully automated detection and quantification of macular fluid in oct using deep learning. Ophthalmology. 2018;125(4):549–58.


  30. Moraes G, Fu DJ, Wilson M, Khalid H, Wagner SK, Korot E, Ferraz D, Faes L, Kelly CJ, Spitz T. Quantitative analysis of oct for neovascular age-related macular degeneration using deep learning. Ophthalmology. 2021;128(5):693–705.


  31. Schmidt-Erfurth U, Vogl W-D, Jampol LM, Bogunović H. Application of automated quantification of fluid volumes to anti-vegf therapy of neovascular age-related macular degeneration. Ophthalmology. 2020;127(9):1211–9.


  32. Camino A, Jia Y, Yu J, Wang J, Liu L, Huang D. Automated detection of shadow artifacts in optical coherence tomography angiography. Biomed Opt Express. 2019;10(3):1514–31.


  33. Guo Y, Hormel TT, Xiong H, Wang B, Camino A, Wang J, Huang D, Hwang TS, Jia Y. Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on oct angiography. Biomed Opt Express. 2019;10(7):3257–68.


  34. Sandhu HS, Elmogy M, Sharafeldeen AT, Elsharkawy M, El-Adawy N, Eltanboly A, Shalaby A, Keynton R, El-Baz A. Automated diagnosis of diabetic retinopathy using clinical biomarkers, optical coherence tomography, and optical coherence tomography angiography. Am J Ophthalmol. 2020;216:201–6.


  35. Hormel TT, Hwang TS, Bailey ST, Wilson DJ, Huang D, Jia Y. Artificial intelligence in oct angiography. Prog Retinal Eye Res. 2021;85: 100965.


  36. Zudaire E, Gambardella L, Kurcz C, Vermeren S. A computational tool for quantitative analysis of vascular networks. PLoS ONE. 2011;6(11):27385.


  37. Choi K-E, Yun C, Cha J, Kim S-W. Oct angiography features associated with macular edema recurrence after intravitreal bevacizumab treatment in branch retinal vein occlusion. Sci Rep. 2019;9(1):1–10.


  38. Wang J, Hormel TT, Gao L, Zang P, Guo Y, Wang X, Bailey ST, Jia Y. Automated diagnosis and segmentation of choroidal neovascularization in oct angiography using deep learning. Biomed Opt Express. 2020;11(2):927–44.



Acknowledgements

Not applicable.

Funding

This project is supported by Tsinghua University Initiative Scientific Research Program of Precision Medicine (10001020106).

Author information

Authors and Affiliations

Authors

Contributions

MD and BW obtained the dataset for the study and conducted the initial experiments. WF designed the methodology and the model. WF, MD, BW, LZ, YD, YZ, BW and YH were involved in the revision of the study objectives and methods. All authors were involved in editing and proofreading. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yuntao Hu.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Feng, W., Duan, M., Wang, B. et al. Automated segmentation of choroidal neovascularization on optical coherence tomography angiography images of neovascular age-related macular degeneration patients based on deep learning. J Big Data 10, 111 (2023). https://doi.org/10.1186/s40537-023-00757-w
