Abstract
The deployment of Machine Learning models intraoperatively for tissue characterisation can assist decision-making and guide safe tumour resections. For the surgeon to trust the model, explainability of the generated predictions needs to be provided. For image classification models, pixel attribution (PA) and risk estimation are popular methods to infer explainability. However, the former lacks trustworthiness, while the latter cannot provide a visual explanation of the model's attention. In this paper, we propose the first approach which incorporates risk estimation into a PA method for improved and more trustworthy image classification explainability. The proposed method iteratively applies a classification model with a PA method to create a volume of PA maps. We introduce a method to generate an enhanced PA map by estimating the expectation values of the pixel-wise distributions. In addition, the coefficient of variation (CV) is used to estimate pixel-wise risk of this enhanced PA map. Hence, the proposed method not only provides an improved PA map but also produces an estimation of risk on the output PA values. Performance evaluation on probe-based Confocal Laser Endomicroscopy (pCLE) data verifies that our improved explainability method outperforms the state-of-the-art.
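The core reduction described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `stochastic_pa_fn` is a hypothetical callable standing in for one stochastic forward pass of the classifier plus a PA method (e.g. an MC-dropout pass followed by a Grad-CAM-style map), and the number of passes and the epsilon guard are assumed values.

```python
import numpy as np

def enhanced_pa_map(stochastic_pa_fn, image, n_passes=32, eps=1e-8):
    """Build a volume of pixel-attribution (PA) maps from repeated
    stochastic passes, then reduce it to an enhanced PA map
    (pixel-wise expectation) and a coefficient-of-variation risk map."""
    # Stack n_passes PA maps into a (n_passes, H, W) volume.
    volume = np.stack(
        [stochastic_pa_fn(image) for _ in range(n_passes)], axis=0
    )
    # Pixel-wise expectation over the volume: the enhanced PA map.
    mean_map = volume.mean(axis=0)
    # Pixel-wise CV = std / |mean| estimates the relative risk
    # (uncertainty) attached to each attribution value.
    cv_map = volume.std(axis=0) / (np.abs(mean_map) + eps)
    return mean_map, cv_map

# Toy usage: a fixed saliency pattern perturbed by dropout-like noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.ones((4, 4))
    fn = lambda img: base + 0.1 * rng.standard_normal((4, 4))
    mean_map, cv_map = enhanced_pa_map(fn, None, n_passes=100)
    print(mean_map.shape, cv_map.shape)
```

High CV marks pixels whose attribution is unstable across passes, which is how the risk estimate qualifies the enhanced PA map.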
Acknowledgement
This work was supported by the Engineering and Physical Sciences Research Council (EP/T51780X/1) and Intel R&D UK. Dr Giannarou is supported by the Royal Society (URF\R\201014).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Roddan, A., Xu, C., Ajlouni, S., Kakaletri, I., Charalampaki, P., Giannarou, S. (2023). Explainable Image Classification with Improved Trustworthiness for Tissue Characterisation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14221. Springer, Cham. https://doi.org/10.1007/978-3-031-43895-0_54
DOI: https://doi.org/10.1007/978-3-031-43895-0_54
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43894-3
Online ISBN: 978-3-031-43895-0