
Multi-modal Unsupervised Pre-training for Surgical Operating Room Workflow Analysis

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13437)

Abstract

Data-driven approaches to operating room (OR) workflow analysis depend on large curated datasets that are time-consuming and expensive to collect. Meanwhile, there has been a paradigm shift from supervised learning toward self-supervised and unsupervised approaches that learn representations from unlabeled data. In this paper, we leverage unlabeled data captured in robotic surgery ORs and propose a novel way to fuse the multi-modal data available for a single video frame or image. Instead of producing different augmentations (or "views") of the same image or video frame, as is common practice in self-supervised learning, we treat the multi-modal data as different views and train the model in an unsupervised manner via clustering. We compare our method with other state-of-the-art methods, and the results show the superior performance of our approach on surgical video activity recognition and semantic segmentation.
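The core idea, treating different modalities (rather than augmentations) as the "views" in a clustering-based objective, can be sketched as a SwAV-style swapped-prediction loss with Sinkhorn-Knopp assignments. This is a minimal NumPy illustration, not the authors' implementation: the modalities (`z_rgb`, `z_depth`), the prototype matrix `C`, and all hyperparameters are illustrative stand-ins, and real encoders would replace the random embeddings.

```python
import numpy as np

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Sinkhorn-Knopp: convert prototype scores into soft, balanced
    cluster assignments (each row sums to 1; clusters used equally)."""
    Q = np.exp(scores / eps).T            # shape (K, B)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True); Q /= K   # balance clusters
        Q /= Q.sum(axis=0, keepdims=True); Q /= B   # normalize samples
    return (Q * B).T                      # shape (B, K)

def softmax(x, tau=0.1):
    e = np.exp(x / tau - np.max(x / tau, axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
B, D, K = 8, 16, 4                        # batch size, feature dim, #prototypes
z_rgb   = rng.normal(size=(B, D))                  # stand-in RGB embeddings
z_depth = z_rgb + 0.1 * rng.normal(size=(B, D))    # stand-in depth embeddings
C = rng.normal(size=(D, K))                        # trainable cluster prototypes

def proto_scores(z):
    # Cosine similarity between L2-normalized embeddings and prototypes.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    Cn = C / np.linalg.norm(C, axis=0, keepdims=True)
    return z @ Cn

q_rgb   = sinkhorn(proto_scores(z_rgb))   # soft targets from the RGB view
p_depth = softmax(proto_scores(z_depth))  # predictions from the depth view
# Swapped prediction: the depth view must predict the RGB view's cluster
# assignment (a full implementation would also do the symmetric direction).
loss = -np.mean(np.sum(q_rgb * np.log(p_depth + 1e-9), axis=1))
```

In training, the gradient of this loss would update both the modality encoders and the prototypes `C`, so the two modalities are pulled toward agreeing cluster assignments without any labels.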



Author information


Corresponding author

Correspondence to Muhammad Abdullah Jamal.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 102 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jamal, M.A., Mohareri, O. (2022). Multi-modal Unsupervised Pre-training for Surgical Operating Room Workflow Analysis. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13437. Springer, Cham. https://doi.org/10.1007/978-3-031-16449-1_43


  • DOI: https://doi.org/10.1007/978-3-031-16449-1_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16448-4

  • Online ISBN: 978-3-031-16449-1

  • eBook Packages: Computer Science, Computer Science (R0)
