Abstract
Automatic surgical video captioning is critical to understanding surgical procedures and can support both intra-operative guidance and post-operative report generation. Sitting at the intersection of surgical workflow analysis and vision-language learning, this cross-modal task demands precise text descriptions of complex surgical videos. However, current captioning algorithms neither fully exploit the inherent patterns of surgery nor coordinate the knowledge of the visual and text modalities well. To address these problems, we introduce surgical concepts into captioning and propose the Surgical Concept Alignment Network (SCA-Net), which bridges the visual and text modalities via surgical concepts. Specifically, to enable the captioning network to accurately perceive surgical concepts, we first devise Surgical Concept Learning (SCL), which predicts the presence of surgical concepts from the representations of the visual and text modalities, respectively. Moreover, to mitigate the semantic gap between the visual and text modalities, we propose Mutual-Modality Concept Alignment (MC-Align), which coordinates each modality's encoded features with the surgical concept representations of the other modality. In this way, SCA-Net aligns surgical concepts across the visual and text modalities and thereby produces more accurate captions from the aligned multi-modal knowledge. Extensive experiments on neurosurgery videos and nephrectomy images confirm the effectiveness of SCA-Net, which outperforms state-of-the-art methods by a large margin. The source code is available at https://github.com/franciszchen/SCA-Net.
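The abstract describes two training objectives: SCL, a per-modality multi-label prediction of which surgical concepts are present, and MC-Align, which pulls each modality's encoded features toward the other modality's concept representations. Below is a minimal PyTorch sketch of how these two objectives could look; all module names, tensor shapes, and the cosine form of the alignment term are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

    # Minimal sketch of the two objectives, under assumed shapes and names.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConceptHeads(nn.Module):
        """Surgical Concept Learning (SCL): predict concept presence from each
        modality's encoded features via multi-label classification heads."""
        def __init__(self, dim: int, num_concepts: int):
            super().__init__()
            self.visual_head = nn.Linear(dim, num_concepts)
            self.text_head = nn.Linear(dim, num_concepts)

        def forward(self, vis_feat, txt_feat):
            return self.visual_head(vis_feat), self.text_head(txt_feat)

    def scl_loss(vis_logits, txt_logits, concept_labels):
        # Binary cross-entropy over concept presence, for both modalities.
        return (F.binary_cross_entropy_with_logits(vis_logits, concept_labels)
                + F.binary_cross_entropy_with_logits(txt_logits, concept_labels))

    def mc_align_loss(vis_feat, txt_concept_repr, txt_feat, vis_concept_repr):
        # Mutual-Modality Concept Alignment (MC-Align): coordinate each
        # modality's encoded features with the concept representations of
        # the *other* modality; sketched here as a cosine-distance term.
        v2t = 1 - F.cosine_similarity(vis_feat, txt_concept_repr, dim=-1).mean()
        t2v = 1 - F.cosine_similarity(txt_feat, vis_concept_repr, dim=-1).mean()
        return v2t + t2v

    # Toy usage with random tensors standing in for the encoded features and
    # concept representations that SCA-Net would produce internally.
    B, D, C = 4, 256, 12
    vis_feat, txt_feat = torch.randn(B, D), torch.randn(B, D)
    vis_cpt, txt_cpt = torch.randn(B, D), torch.randn(B, D)
    labels = (torch.rand(B, C) > 0.5).float()
    heads = ConceptHeads(D, C)
    total = (scl_loss(*heads(vis_feat, txt_feat), labels)
             + mc_align_loss(vis_feat, txt_cpt, txt_feat, vis_cpt))
    total.backward()

In this reading, SCL supervises each modality to recognize the same concept vocabulary, while MC-Align exchanges concept knowledge across modalities; both would be added to the standard captioning loss during training.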
Acknowledgments
This work is supported by the National Key R&D Program of China under Grant No. 2021YFE0205700 and the National Natural Science Foundation of China (Nos. 62276260, 62076235, 62176254, 61976210, 62002356, 62006230), and is sponsored by Zhejiang Lab (No. 2021KH0AB07) and the InnoHK program.
Cite this paper
Chen, Z. et al. (2023). Surgical Video Captioning with Mutual-Modal Concept Alignment. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14228. Springer, Cham. https://doi.org/10.1007/978-3-031-43996-4_3