Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging

  • Conference paper
  • Appears in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 Workshops (MICCAI 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14393)

Abstract

Most state-of-the-art techniques for medical image segmentation rely on deep-learning models. These models, however, are often trained on narrowly defined tasks in a supervised fashion, which requires expensive labeled datasets. Recent advances in several machine learning domains, such as natural language generation, have demonstrated the feasibility and utility of building foundation models that can be customized for various downstream tasks with little to no labeled data. This likely represents a paradigm shift for medical imaging, where we expect foundation models to shape the future of the field. In this paper, we consider a recently developed foundation model for medical image segmentation, UniverSeg [6]. We conduct an empirical evaluation in the context of prostate imaging and compare it against the conventional approach of training a task-specific segmentation model. Our results and discussion highlight several factors that will likely be important in the development and adoption of foundation models for medical image segmentation.
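
To make the evaluated paradigm concrete, the sketch below shows how an in-context segmentation model such as UniverSeg [6] is applied at inference time: a frozen, pretrained network is conditioned on a small labeled "support set" that defines the task, with no gradient-based fine-tuning. The call signature and tensor shapes follow the public UniverSeg repository (https://github.com/JJGO/UniverSeg); the random tensors are placeholders standing in for real prostate MRI slices and masks, so treat this as an illustrative sketch rather than the authors' exact evaluation code.

    # Illustrative sketch: in-context segmentation with a frozen UniverSeg model.
    # Assumes the public package from https://github.com/JJGO/UniverSeg is installed.
    import torch
    from universeg import universeg

    model = universeg(pretrained=True)  # pretrained weights; no task-specific training
    model.eval()

    # UniverSeg operates on 128x128 grayscale slices with intensities in [0, 1].
    B, S, H, W = 1, 16, 128, 128
    target_image = torch.rand(B, 1, H, W)       # slice to segment (placeholder data)
    support_images = torch.rand(B, S, 1, H, W)  # S labeled examples that define the task
    support_labels = (torch.rand(B, S, 1, H, W) > 0.5).float()  # binary masks (placeholder)

    with torch.no_grad():
        # The prediction is conditioned on the support set alone: swapping in a
        # support set for a different structure re-targets the frozen model.
        out = model(target_image, support_images, support_labels)  # (B, 1, H, W)
        mask = (torch.sigmoid(out) > 0.5).float()  # treating the output as logits

Because no weights are updated, adapting the model to a new structure amounts to curating a different support set, which is the property the paper's comparison against a conventionally trained task-specific model probes.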


Notes

  1. https://www.usa.philips.com/healthcare/product/HC784029/dynacad-prostate.

References

  1. Baevski, A., Zhou, Y., Mohamed, A., Auli, M.: wav2vec 2.0: a framework for self-supervised learning of speech representations. In: Advances in Neural Information Processing Systems, vol. 33, pp. 12449–12460 (2020)

  2. Bardis, M., Houshyar, R., Chantaduly, C., Tran-Harding, K., Ushinsky, A., et al.: Segmentation of the prostate transition zone and peripheral zone on MR images with deep learning. Radiol. Imaging Cancer 3(3), e200024 (2021)

  3. Billot, B., et al.: A learning strategy for contrast-agnostic MRI segmentation. arXiv preprint arXiv:2003.01995 (2020)

  4. Bommasani, R., Hudson, D.A., Adeli, E., Altman, R., Arora, S., et al.: On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021)

  5. Brohan, A., Brown, N., Carbajal, J., Chebotar, Y., Dabis, J., et al.: RT-1: robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817 (2022)

  6. Butoi, V.I., Ortiz, J.J.G., Ma, T., Sabuncu, M.R., Guttag, J., Dalca, A.V.: UniverSeg: universal medical image segmentation. arXiv preprint arXiv:2304.06131 (2023)

  7. Chen, C., Qin, C., Ouyang, C., Li, Z., Wang, S., et al.: Enhancing MR image segmentation with realistic adversarial data augmentation. Med. Image Anal. 82, 102597 (2022)

  8. Cheng, D., Qin, Z., Jiang, Z., Zhang, S., Lao, Q., Li, K.: SAM on medical images: a comprehensive study on three prompt modes. arXiv preprint arXiv:2305.00035 (2023)

  9. Cuocolo, R., Stanzione, A., Castaldo, A., De Lucia, D.R., Imbriaco, M.: Quality control and whole-gland, zonal and lesion annotations for the PROSTATEx challenge public dataset. Eur. J. Radiol. 138, 109647 (2021)

  10. Deng, R., Cui, C., Liu, Q., Yao, T., Remedios, L.W., et al.: Segment anything model (SAM) for digital pathology: assess zero-shot segmentation on whole slide imaging. arXiv preprint arXiv:2304.04155 (2023)

  11. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  12. Dice, L.R.: Measures of the amount of ecologic association between species. Ecology 26(3), 297–302 (1945)

  13. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., et al.: An image is worth 16 × 16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)

  14. Fischl, B.: FreeSurfer. Neuroimage 62(2), 774–781 (2012)

  15. Gao, Y., Xia, W., Hu, D., Gao, X.: DeSAM: decoupling segment anything model for generalizable medical image segmentation. arXiv preprint arXiv:2306.00499 (2023)

  16. He, S., Bao, R., Li, J., Grant, P.E., Ou, Y.: Accuracy of segment-anything model (SAM) in medical image segmentation tasks. arXiv preprint arXiv:2304.09324 (2023)

  17. Hu, M., Li, Y., Yang, X.: SkinSAM: empowering skin cancer segmentation with segment anything model. arXiv preprint arXiv:2304.13973 (2023)

  18. Huang, Y., Yang, X., Liu, L., Zhou, H., Chang, A., et al.: Segment anything model for medical images? arXiv preprint arXiv:2304.14660 (2023)

  19. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)

  20. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023)

  21. Litjens, G., Debats, O., Barentsz, J., Karssemeijer, N., Huisman, H.: Computer-aided detection of prostate cancer in MRI. IEEE Trans. Med. Imaging 33(5), 1083–1092 (2014)

  22. Ma, J., Wang, B.: Segment anything in medical images. arXiv preprint arXiv:2304.12306 (2023)

  23. Mattjie, C., de Moura, L.V., Ravazio, R.C., Kupssinskü, L.S., Parraga, O., et al.: Exploring the zero-shot capabilities of the segment anything model (SAM) in 2D medical imaging: a comprehensive evaluation and practical guideline. arXiv preprint arXiv:2305.00109 (2023)

  24. Mazurowski, M.A., Dong, H., Gu, H., Yang, J., Konz, N., Zhang, Y.: Segment anything model for medical image analysis: an experimental study. arXiv preprint arXiv:2304.10517 (2023)

  25. OpenAI: GPT-4 technical report (2023)

  26. Radford, A., Kim, J.W., Xu, T., Brockman, G., McLeavey, C., Sutskever, I.: Robust speech recognition via large-scale weak supervision. arXiv preprint arXiv:2212.04356 (2022)

  27. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  28. Rouvière, O., Moldovan, P.C., Vlachomitrou, A., Gouttard, S., Riche, B., et al.: Combined model-based and deep learning-based automated 3D zonal segmentation of the prostate on T2-weighted MR images: clinical evaluation. Eur. Radiol. 32, 3248–3259 (2022)

  29. Roy, S., Wald, T., Koehler, G., Rokuss, M.R., Disch, N., et al.: SAM.MD: zero-shot medical image segmentation capabilities of the segment anything model. arXiv preprint arXiv:2304.05396 (2023)

  30. Shi, P., Qiu, J., Abaxi, S.M.D., Wei, H., Lo, F.P.W., Yuan, W.: Generalist vision foundation models for medical imaging: a case study of segment anything model on zero-shot medical segmentation. Diagnostics 13(11), 1947 (2023)

  31. Stone, A., Xiao, T., Lu, Y., Gopalakrishnan, K., Lee, K.H., et al.: Open-world object manipulation using pre-trained vision-language models. arXiv preprint arXiv:2303.00905 (2023)

  32. Wald, T., Roy, S., Koehler, G., Disch, N., Rokuss, M.R., et al.: SAM.MD: zero-shot medical image segmentation capabilities of the segment anything model. In: Medical Imaging with Deep Learning, short paper track (2023)

  33. Wu, J., Fu, R., Fang, H., Liu, Y., Wang, Z., et al.: Medical SAM adapter: adapting segment anything model for medical image segmentation. arXiv preprint arXiv:2304.12620 (2023)

  34. Zhao, A., Balakrishnan, G., Durand, F., Guttag, J.V., Dalca, A.V.: Data augmentation using learned transformations for one-shot medical image segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8543–8553 (2019)

  35. Zhou, T., Zhang, Y., Zhou, Y., Wu, Y., Gong, C.: Can SAM segment polyps? arXiv preprint arXiv:2304.07583 (2023)

  36. Zhu, Y., Wei, R., Gao, G., Ding, L., Zhang, X., et al.: Fully automatic segmentation on prostate MR images based on cascaded fully convolution network. J. Magn. Reson. Imaging 49(4), 1149–1156 (2019)

  37. Zou, X., Yang, J., Zhang, H., Li, F., Li, L., et al.: Segment everything everywhere all at once. arXiv preprint arXiv:2304.06718 (2023)

Acknowledgements

This work was supported by NIH grants R01AG053949 and 1R01AG064027, NSF NeuroNex grant 1707312, and NSF CAREER grant 1748377.

Author information

Corresponding author

Correspondence to Heejong Kim.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 105 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kim, H., Butoi, V.I., Dalca, A.V., Sabuncu, M.R. (2023). Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging. In: Celebi, M.E., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 Workshops. MICCAI 2023. Lecture Notes in Computer Science, vol 14393. Springer, Cham. https://doi.org/10.1007/978-3-031-47401-9_14

  • DOI: https://doi.org/10.1007/978-3-031-47401-9_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47400-2

  • Online ISBN: 978-3-031-47401-9

  • eBook Packages: Computer Science, Computer Science (R0)
