Abstract
Deep learning has driven breakthroughs in medical image segmentation in recent years owing to its ability to extract high-level features without prior knowledge. In this context, U-Net is one of the most widely used medical image segmentation models, with promising results in mammography. Despite its strong overall performance in segmenting multimodal medical images, the traditional U-Net structure falls short in several respects. Modifications of the U-Net design, such as MultiResUNet, Connected-UNets, and AUNet, have improved performance in areas where the conventional architecture is deficient. Following the success of U-Net and its variants, we present two enhanced versions of the Connected-UNets architecture: ConnectedUNets+ and ConnectedUNets++. In ConnectedUNets+, we replace the simple skip connections of the Connected-UNets architecture with residual skip connections, while in ConnectedUNets++ we additionally modify the encoder-decoder structure. We evaluate the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast.
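The core change described above, replacing a plain skip connection (which simply forwards the encoder feature map to the decoder) with a residual skip connection (which adds a transformed version of the feature map back to itself), can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the single-channel `conv3x3`, `plain_skip`, and `residual_skip` functions below are hypothetical simplifications used only to show the residual idea.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, x)

def conv3x3(x, w):
    """Naive 'same'-padded 3x3 convolution on a single-channel 2D map."""
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def plain_skip(x):
    """Plain U-Net skip: the encoder feature map is forwarded unchanged."""
    return x

def residual_skip(x, w):
    """Residual skip: a convolved, activated copy is added back to the input,
    so the connection learns a residual on top of the identity path."""
    return relu(conv3x3(x, w)) + x
```

With zero-initialized weights the residual path contributes nothing, so `residual_skip` reduces to the identity, which is the usual motivation for residual formulations: the connection can do no worse than a plain skip and learns only the needed correction.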
Prithul Sarker and Sushmita Sarker have equal contribution and are co-first authors.
References
American Cancer Society: Breast cancer facts & figures 2019–2020. Am. Cancer Soc. 1–44 (2019)
Elter, M., Horsch, A.: CADx of mammographic masses and clustered microcalcifications: a review. Med. Phys. 36(6Part1), 2052–2068 (2009)
Jiang, Y., Nishikawa, R.M., Schmidt, R.A., Metz, C.E., Giger, M.L., Doi, K.: Improving breast cancer diagnosis with computer-aided diagnosis. Acad. Radiol. 6(1), 22–33 (1999)
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
Zaheer, R., Shaziya, H.: GPU-based empirical evaluation of activation functions in convolutional neural networks. In: 2018 2nd International Conference on Inventive Systems and Control (ICISC), pp. 769–773. IEEE (2018)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Litjens, G., et al.: A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017)
Baccouche, A., Garcia-Zapirain, B., Castillo Olea, C., Elmaghraby, A.S.: Connected-UNets: a deep learning architecture for breast mass segmentation. NPJ Breast Cancer 7(1), 1–12 (2021)
Sun, H., et al.: AUNet: attention-guided dense-upsampling networks for breast mass segmentation in whole mammograms. Phys. Med. Biol. 65(5), 055005 (2020)
Lee, R.S., Gimenez, F., Hoogi, A., Miyake, K.K., Gorovoy, M., Rubin, D.L.: A curated mammography data set for use in computer-aided detection and diagnosis research. Sci. Data 4(1), 1–9 (2017)
Moreira, I.C., Amaral, I., Domingues, I., Cardoso, A., Cardoso, M.J., Cardoso, J.S.: INbreast: toward a full-field digital mammographic database. Acad. Radiol. 19(2), 236–248 (2012)
Abdelhafiz, D., Bi, J., Ammar, R., Yang, C., Nabavi, S.: Convolutional neural network for automated mass segmentation in mammography. BMC Bioinform. 21(1), 1–19 (2020)
Ravitha Rajalakshmi, N., Vidhyapriya, R., Elango, N., Ramesh, N.: Deeply supervised U-Net for mass segmentation in digital mammograms. Int. J. Imaging Syst. Technol. 31(1), 59–71 (2021)
Li, H., Chen, D., Nailon, W.H., Davies, M.E., Laurenson, D.: Improved breast mass segmentation in mammograms with conditional residual U-Net. In: Stoyanov, D., et al. (eds.) RAMBO/BIA/TIA 2018. LNCS, vol. 11040, pp. 81–89. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00946-5_9
Ibtehaz, N., Rahman, M.S.: MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 121, 74–87 (2020)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
Oktay, O., et al.: Attention U-Net: learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 (2018)
Li, S., Dong, M., Du, G., Mu, X.: Attention Dense-U-Net for automatic breast mass segmentation in digital mammogram. IEEE Access 7, 59037–59047 (2019)
Hai, J., et al.: Fully convolutional DenseNet with multiscale context for automated breast tumor segmentation. J. Healthcare Eng. 2019 (2019)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Mahmood, T., Li, J., Pei, Y., Akhtar, F., Rehman, M.U., Wasti, S.H.: Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS ONE 17(1), e0263126 (2022)
Acknowledgements
Portions of this material are based upon work supported by the Office of the Under Secretary of Defense for Research and Engineering under award number FA9550-21-1-0207.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sarker, P., Sarker, S., Bebis, G., Tavakkoli, A. (2022). ConnectedUNets++: Mass Segmentation from Whole Mammographic Images. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science, vol 13598. Springer, Cham. https://doi.org/10.1007/978-3-031-20713-6_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20712-9
Online ISBN: 978-3-031-20713-6
eBook Packages: Computer Science, Computer Science (R0)