Abstract
Data-driven approaches to operating room (OR) workflow analysis depend on large curated datasets that are time-consuming and expensive to collect. Meanwhile, a recent paradigm shift from supervised learning toward self-supervised and unsupervised learning has produced methods that learn representations directly from unlabeled data. In this paper, we leverage unlabeled data captured in robotic-surgery ORs and propose a novel way to fuse the multi-modal data associated with a single video frame or image. Instead of producing different augmentations (or "views") of the same image or video frame, as is common practice in self-supervised learning, we treat the multi-modal data as the different views and train the model in an unsupervised manner via clustering. We compare our method with state-of-the-art methods, and the results show the superior performance of our approach on surgical video activity recognition and semantic segmentation.
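As a rough sketch of the clustering idea described in the abstract — treating two modalities (e.g., RGB and depth) as the two "views" of a frame and letting the cluster assignment computed from one modality supervise the prediction from the other — one could write a SwAV-style swapped-prediction objective with Sinkhorn-normalized assignments. All function and tensor names below are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def sinkhorn(scores, eps=0.05, n_iters=3):
    """Sinkhorn-Knopp normalization: turn frame-to-prototype similarity
    scores (B, K) into soft cluster assignments whose mass is roughly
    balanced across the K clusters (avoids collapse to one cluster)."""
    Q = torch.exp(scores / eps).T        # (K, B)
    Q = Q / Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q = Q / Q.sum(dim=1, keepdim=True) / K   # equal mass per cluster
        Q = Q / Q.sum(dim=0, keepdim=True) / B   # unit mass per sample
    return (Q * B).T                     # (B, K), each row sums to 1

def swapped_prediction_loss(z_rgb, z_depth, prototypes, temp=0.1):
    """Cross-modal swapped prediction: the assignment computed from one
    modality is the (gradient-free) target for the other modality."""
    z_rgb = F.normalize(z_rgb, dim=1)
    z_depth = F.normalize(z_depth, dim=1)
    c = F.normalize(prototypes, dim=1)           # (K, D) cluster prototypes
    s_rgb, s_depth = z_rgb @ c.T, z_depth @ c.T  # similarities to prototypes
    with torch.no_grad():
        q_rgb, q_depth = sinkhorn(s_rgb), sinkhorn(s_depth)
    p_rgb = F.log_softmax(s_rgb / temp, dim=1)
    p_depth = F.log_softmax(s_depth / temp, dim=1)
    # swap: depth assignments supervise RGB predictions, and vice versa
    return -0.5 * ((q_depth * p_rgb).sum(1).mean()
                   + (q_rgb * p_depth).sum(1).mean())
```

In this sketch `z_rgb` and `z_depth` would be embeddings of the same frame produced by modality-specific encoders; because the two "views" come from different sensors rather than from augmentations of one image, no hand-crafted augmentation pipeline is needed.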
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Jamal, M.A., Mohareri, O. (2022). Multi-modal Unsupervised Pre-training for Surgical Operating Room Workflow Analysis. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13437. Springer, Cham. https://doi.org/10.1007/978-3-031-16449-1_43
Print ISBN: 978-3-031-16448-4
Online ISBN: 978-3-031-16449-1