
Multi-scale Fusion Methodologies for Head and Neck Tumor Segmentation

  • Conference paper
  • First Online:
Head and Neck Tumor Segmentation and Outcome Prediction (HECKTOR 2022)

Abstract

Head and neck (H&N) organ-at-risk (OAR) and tumor segmentations are essential components of radiation therapy planning. The varying anatomic locations and sizes of H&N nodal gross tumor volumes (GTVn) and the H&N primary gross tumor volume (GTVp) make them difficult to delineate accurately and reliably. Incorrect segmentation can, in turn, result in unnecessary irradiation of normal organs. As a step towards a fully automated radiation therapy planning algorithm, we explore the efficacy of multi-scale fusion based deep learning architectures for accurately segmenting H&N tumors from medical scans. Team name: M&H_lab_NU.
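The core idea behind multi-scale fusion is to combine feature maps computed at different resolutions, so that coarse branches contribute larger spatial context while fine branches preserve boundary detail. The following is a minimal toy sketch of that idea in NumPy; it is an illustration of the general technique only, not the authors' architecture, and the `downsample`/`upsample`/`multiscale_fuse` helpers are hypothetical names introduced here for clarity.

```python
import numpy as np

def downsample(x, factor=2):
    # Average-pool a 2D feature map by the given factor
    # (assumes both dimensions are divisible by the factor).
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor=2):
    # Nearest-neighbour upsampling back to the finer resolution.
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def multiscale_fuse(feature_map):
    # Fuse the full-resolution map with a coarser view of itself:
    # the coarse branch captures wider context, the fine branch detail.
    coarse = upsample(downsample(feature_map))
    return 0.5 * (feature_map + coarse)

x = np.arange(16, dtype=float).reshape(4, 4)
fused = multiscale_fuse(x)
print(fused.shape)  # (4, 4)
```

In real segmentation networks (e.g. MSRF-Net-style architectures), the downsampling and fusion weights are learned convolutions rather than fixed averages, and fusion is repeated across many scale pairs, but the resolution-exchange pattern is the same.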



Acknowledgment

This project is supported by the NIH funding: R01-CA246704 and R01-CA240639. The computations in this paper were performed on equipment provided by the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by the Research Council of Norway under contract 270053.

Author information

Corresponding author

Correspondence to Abhishek Srivastava.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Srivastava, A., Jha, D., Aydogan, B., Abazeed, M.E., Bagci, U. (2023). Multi-scale Fusion Methodologies for Head and Neck Tumor Segmentation. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds) Head and Neck Tumor Segmentation and Outcome Prediction. HECKTOR 2022. Lecture Notes in Computer Science, vol 13626. Springer, Cham. https://doi.org/10.1007/978-3-031-27420-6_11

  • DOI: https://doi.org/10.1007/978-3-031-27420-6_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27419-0

  • Online ISBN: 978-3-031-27420-6

  • eBook Packages: Computer Science (R0)
