Abstract
In critical operational contexts such as Mine Warfare, Automatic Target Recognition (ATR) algorithms still struggle to gain acceptance: despite performance approaching that of human experts, the complexity of their decision-making hampers understanding of their predictions. Much research has been done in the field of Explainable Artificial Intelligence (XAI) to counter this "black box" effect. This field of research attempts to explain the decision-making of complex networks in order to promote their acceptability. Most explanation methods applied to image classification networks produce heat maps, which highlight pixels according to their importance in the decision.
In this work, we first implement several XAI methods for the automatic classification of Synthetic Aperture Sonar (SAS) images by convolutional neural networks (CNNs). These methods all follow a post hoc approach, i.e., they explain a trained model without modifying it. We study and compare the resulting heat maps (a minimal sketch of one such method is given after the abstract).
Second, we evaluate the benefits and usefulness of explainability in an operational human-machine collaboration setting. To do so, we carry out user tests with different levels of assistance, ranging from classification by an unaided operator to classification with an explained ATR. These tests allow us to study whether heat maps are useful in this context.
The results show that the usefulness of heat-map explanations is disputed among operators. The presence of heat maps does not improve the quality of the classifications; on the contrary, it even increases response time. Nevertheless, half of the operators see some usefulness in heat-map explanations.
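To make the post hoc heat-map approach concrete, the sketch below computes a Grad-CAM-style heat map for a CNN image classifier. This is a minimal illustration under stated assumptions, not the study's implementation: it assumes PyTorch and torchvision are available, stands in for the SAS classifier with a pretrained MobileNetV2, and `sonar_tile.png` is a hypothetical input image.

```python
# Minimal Grad-CAM-style heat map for a CNN classifier (post hoc XAI sketch).
# Assumptions: PyTorch/torchvision installed; a pretrained MobileNetV2 stands
# in for the SAS classifier; "sonar_tile.png" is a hypothetical input image.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
target_layer = model.features[-1]  # last convolutional block

store = {}
target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("sonar_tile.png").convert("RGB")).unsqueeze(0)

logits = model(x)                      # forward pass records activations
model.zero_grad()
logits[0, logits.argmax()].backward()  # backward pass records gradients

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# combine, keep positive evidence, and upsample to the input resolution.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)        # [1, C, 1, 1]
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)[0, 0].detach()
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# `heatmap` (values in [0, 1]) can be overlaid on the sonar image
# to show which pixels drove the predicted class.
```

Other post hoc methods (e.g., LIME, SHAP, or Integrated Gradients) would replace the gradient-weighting step, but each likewise yields a pixel-importance map that can be presented to the operator and compared in the same way.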