
A scale-aware UNet++ model combined with attentional context supervision and adaptive Tversky loss for accurate airway segmentation

Published in: Applied Intelligence

Abstract

Automated and accurate airway segmentation from chest computed tomography (CT) images is essential for the quantitative assessment of airway diseases and for intra-operative navigation in pulmonary intervention surgery. Although deep learning-based methods have achieved remarkable success in medical image segmentation, accurately and completely segmenting the airways from CT images remains challenging, especially for small airways. The vanishing features of small airways, the local discontinuities of small airway branches, and the varying degrees of class imbalance between foreground and background seriously degrade airway segmentation performance. This paper presents an improved UNet++-based model that introduces a novel supervision scheme and a new adaptive loss function to address these problems. Specifically, we propose an attentional context supervision (ACS) scheme, in which different supervision branches and attention mechanisms capture more discriminative multi-scale features. In addition, we present an adaptive Tversky loss (ATL) function that integrates radial distance information and a segmentation-wise focal term into the Tversky loss, enabling adaptive focus on learning the target airways under particular class-imbalance conditions. Experimental results on a public dataset show that the proposed ACS and ATL bring considerable performance gains. Moreover, our method achieves the best sensitivity and comparable accuracy on complete airway segmentation compared with state-of-the-art algorithms.
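For orientation, the building block that the ATL extends is the Tversky loss [8], optionally sharpened by a focal exponent as in the focal-Tversky loss [24]. The sketch below illustrates that baseline only; it is not the paper's full ATL (which additionally incorporates radial distance information and a segmentation-wise focal term), and all parameter names and default values are illustrative assumptions.

```python
def tversky_loss(pred, target, alpha=0.3, beta=0.7, gamma=1.0, eps=1e-6):
    """Tversky loss over flattened voxel values, with an optional
    focal exponent (the focal-Tversky variant).

    pred   -- predicted foreground probabilities in [0, 1]
    target -- binary ground-truth labels (0 or 1)
    alpha  -- weight on false positives
    beta   -- weight on false negatives; beta > alpha favours recall,
              which helps thin, under-represented structures like airways
    gamma  -- focal exponent; gamma > 1 emphasises poorly segmented cases
    """
    # Soft true-positive, false-positive, and false-negative counts.
    tp = sum(p * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    # Tversky index generalises Dice: alpha = beta = 0.5 recovers Dice.
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky_index) ** gamma
```

A perfect prediction drives the index to 1 and the loss to 0; raising gamma above 1 shrinks the loss on well-segmented samples so that gradient updates concentrate on hard ones, which is the lever the class-imbalance-adaptive variants pull.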



Data Availability

The datasets generated and analysed during the current study are available at http://www.pami.sjtu.edu.cn/Show/56/126 [6].

References

  1. Kirby M, Tanabe N, Tan WC et al (2018) Total airway count on computed tomography and the risk of chronic obstructive pulmonary disease progression: findings from a population-based study. Am J Respir Crit Care Med 197(1):56–65. https://doi.org/10.1164/rccm.201704-0692OC

  2. Wu X, Kim GH, Salisbury ML et al (2019) Computed tomographic biomarkers in idiopathic pulmonary fibrosis: the future of quantitative analysis. Am J Respir Crit Care Med 199(1):12–21. https://doi.org/10.1164/rccm.201803-0444PP

  3. Banach A, King F, Masaki F et al (2021) Visually navigated bronchoscopy using three cycle-consistent generative adversarial network for depth estimation. Med Image Anal 73:102164. https://doi.org/10.1016/j.media.2021.102164

  4. Shen M, Gu Y, Liu N et al (2019) Context-aware depth and pose estimation for bronchoscopic navigation. IEEE Robot Autom Lett 4(2):732–739. https://doi.org/10.1109/LRA.2019.2893419

  5. Ronneberger O, Fischer P, Brox T (2015) U-Net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 234–241. https://doi.org/10.1007/978-3-319-24574-4_28

  6. Qin Y, Gu Y, Zheng H et al (2020) AirwayNet-SE: a simple-yet-effective approach to improve airway segmentation using context scale fusion. In: 2020 IEEE 17th international symposium on biomedical imaging (ISBI), pp 809–813. https://doi.org/10.1109/ISBI45749.2020.9098537

  7. Zhou Z, Rahman Siddiquee MM, Tajbakhsh N et al (2018) UNet++: a nested U-Net architecture for medical image segmentation. In: Deep learning in medical image analysis and multimodal learning for clinical decision support, pp 3–11. https://doi.org/10.1007/978-3-030-00889-5_1

  8. Salehi SSM, Erdogmus D, Gholipour A (2017) Tversky loss function for image segmentation using 3D fully convolutional deep networks. In: International workshop on machine learning in medical imaging, pp 379–387. https://doi.org/10.1007/978-3-319-67389-9_44

  9. Lin TY, Goyal P, Girshick R et al (2017) Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision, pp 2980–2988

  10. Lo P, Van Ginneken B, Reinhardt JM et al (2012) Extraction of airways from CT (EXACT'09). IEEE Trans Med Imaging 31(11):2093–2107. https://doi.org/10.1109/TMI.2012.2209674

  11. Charbonnier JP, Van Rikxoort EM, Setio AA et al (2017) Improving airway segmentation in computed tomography using leak detection with convolutional networks. Med Image Anal 36:52–60. https://doi.org/10.1016/j.media.2016.11.001

  12. Yun J, Park J, Yu D et al (2019) Improvement of fully automated airway segmentation on volumetric computed tomographic images using a 2.5-dimensional convolutional neural net. Med Image Anal 51:13–20. https://doi.org/10.1016/j.media.2018.10.006

  13. Jin D, Xu Z, Harrison AP et al (2017) 3D convolutional neural networks with graph refinement for airway segmentation using incomplete data labels. In: International workshop on machine learning in medical imaging, pp 141–149. https://doi.org/10.1007/978-3-319-67389-9_17

  14. Juarez AGU, Tiddens HA, de Bruijne M (2018) Automatic airway segmentation in chest CT using convolutional neural networks. In: Image analysis for moving organ, breast, and thoracic images, pp 238–250. https://doi.org/10.1007/978-3-030-00946-5_24

  15. Qin Y, Chen M, Zheng H et al (2019) AirwayNet: a voxel-connectivity aware approach for accurate airway segmentation using convolutional neural networks. In: International conference on medical image computing and computer-assisted intervention, pp 212–220. https://doi.org/10.1007/978-3-030-32226-7_24

  16. Qin Y, Zheng H, Gu Y et al (2021) Learning tubule-sensitive CNNs for pulmonary airway and artery-vein segmentation in CT. IEEE Trans Med Imaging 40(6):1603–1617. https://doi.org/10.1109/TMI.2021.3062280

  17. Zheng H, Qin Y, Gu Y et al (2021) Alleviating class-wise gradient imbalance for pulmonary airway segmentation. IEEE Trans Med Imaging. https://doi.org/10.1109/TMI.2021.3078828

  18. Lee CY, Xie S, Gallagher P et al (2015) Deeply-supervised nets. In: Artificial intelligence and statistics. PMLR, pp 562–570

  19. Zhu Q, Du B, Turkbey B et al (2017) Deeply-supervised CNN for prostate segmentation. In: 2017 international joint conference on neural networks (IJCNN). IEEE, pp 178–184. https://doi.org/10.1109/IJCNN.2017.7965852

  20. Lin TY, Dollár P, Girshick R et al (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2117–2125

  21. Chou SY, Jang JSR, Yang YH (2018) Learning to recognize transient sound events using attentional supervision. In: IJCAI, pp 3336–3342

  22. Hesamian MH, Jia W, He X et al (2019) Deep learning techniques for medical image segmentation: achievements and challenges. J Digit Imaging 32(4):582–596. https://doi.org/10.1007/s10278-019-00227-x

  23. Milletari F, Navab N, Ahmadi SA (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 fourth international conference on 3D vision (3DV), pp 565–571. https://doi.org/10.1109/3DV.2016.79

  24. Abraham N, Khan NM (2019) A novel focal Tversky loss function with improved attention U-Net for lesion segmentation. In: 2019 IEEE 16th international symposium on biomedical imaging (ISBI), pp 683–687. https://doi.org/10.1109/ISBI.2019.8759329

  25. Lu X, Ma C, Ni B et al (2018) Deep regression tracking with shrinkage loss. In: Proceedings of the European conference on computer vision (ECCV), pp 353–369

  26. Lu X, Ma C, Shen J et al (2020) Deep object tracking with shrinkage loss. IEEE Trans Pattern Anal Mach Intell. https://doi.org/10.1109/TPAMI.2020.3041332

  27. Wang C, Hayashi Y, Oda M et al (2019) Tubular structure segmentation using spatial fully connected network with radial distance loss for 3D medical images. In: International conference on medical image computing and computer-assisted intervention, pp 348–356. https://doi.org/10.1007/978-3-030-32226-7_39

  28. Roy AG, Navab N, Wachinger C (2018) Concurrent spatial and channel ‘squeeze & excitation’ in fully convolutional networks. In: International conference on medical image computing and computer-assisted intervention, pp 421–429. https://doi.org/10.1007/978-3-030-00928-1_48

  29. Armato SG III, McLennan G, Bidaut L et al (2011) The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans. Med Phys 38(2):915–931. https://doi.org/10.1118/1.3528204

  30. Xu X, Wang C, Guo J et al (2020) MSCS-DeepLN: evaluating lung nodule malignancy using multi-scale cost-sensitive neural networks. Med Image Anal 65:101772. https://doi.org/10.1016/j.media.2020.101772

  31. Zhou K, Chen N, Xu X et al (2021) Automatic airway tree segmentation based on multi-scale context information. Int J CARS 16(2):219–230. https://doi.org/10.1007/s11548-020-02293-x

  32. Çiçek Ö, Abdulkadir A, Lienkamp SS et al (2016) 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: International conference on medical image computing and computer-assisted intervention, pp 424–432. https://doi.org/10.1007/978-3-319-46723-8_49

  33. Schlemper J, Oktay O, Schaap M et al (2019) Attention gated networks: learning to leverage salient regions in medical images. Med Image Anal 53:197–207. https://doi.org/10.1016/j.media.2019.01.012

  34. Isensee F, Jaeger PF, Kohl SA et al (2021) nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 18(2):203–211. https://doi.org/10.1038/s41592-020-01008-z

  35. Hatamizadeh A, Tang Y, Nath V et al (2022) UNETR: transformers for 3D medical image segmentation. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp 574–584


Acknowledgements

This work was supported by the National Science and Technology Major Project of China under Grant No. 2018AAA0100201 and the Major Science and Technology Project from the Science & Technology Department of Sichuan Province under Grant 2020YFG0473.

Author information


Corresponding author

Correspondence to Jixiang Guo.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ke, Z., Xu, X., Zhou, K. et al. A scale-aware UNet++ model combined with attentional context supervision and adaptive Tversky loss for accurate airway segmentation. Appl Intell 53, 18138–18154 (2023). https://doi.org/10.1007/s10489-022-04380-9

