
Explainable Image Classification with Improved Trustworthiness for Tissue Characterisation

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Abstract

The deployment of Machine Learning models intraoperatively for tissue characterisation can assist decision making and guide safe tumour resections. For the surgeon to trust the model, explainability of the generated predictions needs to be provided. For image classification models, pixel attribution (PA) and risk estimation are popular methods to infer explainability. However, the former lacks trustworthiness while the latter cannot provide a visual explanation of the model’s attention. In this paper, we propose the first approach that incorporates risk estimation into a PA method for improved and more trustworthy image classification explainability. The proposed method iteratively applies a classification model with a PA method to create a volume of PA maps. We introduce a method to generate an enhanced PA map by estimating the expectation values of the pixel-wise distributions. In addition, the coefficient of variation (CV) is used to estimate pixel-wise risk of this enhanced PA map. Hence, the proposed method not only provides an improved PA map but also produces an estimation of risk on the output PA values. Performance evaluation on probe-based Confocal Laser Endomicroscopy (pCLE) data verifies that our improved explainability method outperforms the state-of-the-art.
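The pipeline summarised in the abstract — sampling a volume of PA maps from repeated stochastic forward passes, taking the pixel-wise expectation as the enhanced PA map, and the pixel-wise coefficient of variation as the risk estimate — could be sketched as follows. This is a minimal illustration rather than the authors' implementation: the names `enhanced_pa_and_risk`, `pa_map` and `n_samples` are assumptions, and MC-dropout-style sampling is assumed as the source of the pixel-wise distributions.

```python
import torch

def enhanced_pa_and_risk(model, pa_map, image, n_samples=30, eps=1e-8):
    """Reduce a volume of stochastic PA maps to an enhanced map and a risk map.

    model     : classifier containing dropout layers (kept active while sampling)
    pa_map    : callable (model, image) -> 2-D attribution map of shape (H, W)
    image     : input tensor of shape (1, C, H, W)
    n_samples : number of stochastic PA maps in the volume
    """
    model.train()  # keep dropout active; a real pipeline might enable only the dropout layers
    maps = torch.stack([pa_map(model, image) for _ in range(n_samples)])  # (T, H, W)

    mean_map = maps.mean(dim=0)                          # pixel-wise expectation -> enhanced PA map
    cv_map = maps.std(dim=0) / (mean_map.abs() + eps)    # coefficient of variation -> risk map
    return mean_map, cv_map
```

Such a `cv_map` can then be overlaid on, or thresholded against, the enhanced PA map, so that pixels whose attributions are unstable across samples are flagged as higher risk alongside the attribution itself.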



Acknowledgement

This work was supported by the Engineering and Physical Sciences Research Council (EP/T51780X/1) and Intel R&D UK. Dr Giannarou is supported by the Royal Society (URF\R\201014).

Author information

Corresponding author

Correspondence to Alfie Roddan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Roddan, A., Xu, C., Ajlouni, S., Kakaletri, I., Charalampaki, P., Giannarou, S. (2023). Explainable Image Classification with Improved Trustworthiness for Tissue Characterisation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14221. Springer, Cham. https://doi.org/10.1007/978-3-031-43895-0_54


  • DOI: https://doi.org/10.1007/978-3-031-43895-0_54

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43894-3

  • Online ISBN: 978-3-031-43895-0

  • eBook Packages: Computer Science, Computer Science (R0)
