Research Article

AI Explainability and Acceptance: A Case Study for Underwater Mine Hunting

Published: 06 March 2024

Abstract

In critical operational contexts such as Mine Warfare, Automatic Target Recognition (ATR) algorithms are still hardly accepted. The complexity of their decision-making hampers the understanding of their predictions, even though their performance approaches that of human experts. Much research has been done in the field of Explainable Artificial Intelligence (XAI) to avoid this “black box” effect; this field attempts to explain the decision-making of complex networks in order to promote their acceptability. Most explanation methods applied to image classification networks produce heat maps, which highlight pixels according to their importance in the decision.
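To make the heat-map idea concrete, here is a minimal sketch (not taken from the paper) of a gradient-based saliency map for a generic image classifier; the PyTorch model, the input tensor shape, and the target class are illustrative assumptions only.

```python
# Minimal sketch (not from the paper): a gradient-based saliency heat map
# for a generic image classifier. The PyTorch model, the (1, C, H, W) image
# tensor, and the target class index are illustrative assumptions.
import torch

def saliency_heat_map(model, image, target_class):
    """Per-pixel importance: |d class score / d pixel|, normalized to [0, 1]."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # score of the class to explain
    score.backward()                        # gradients w.r.t. input pixels
    # Aggregate channel gradients into one importance value per pixel.
    heat_map = image.grad.detach().abs().max(dim=1).values[0]
    return (heat_map - heat_map.min()) / (heat_map.max() - heat_map.min() + 1e-8)
```

The resulting map can be overlaid on the input image so that an operator sees which pixels most influenced the prediction.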

In this work, we first implement several XAI methods, all based on a post hoc approach, for the automatic classification of Synthetic Aperture Sonar (SAS) images by convolutional neural networks (CNNs), and we study and compare the resulting heat maps.
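As a concrete, hypothetical illustration of one post hoc approach (a sketch under assumptions, not the authors' implementation): occlusion sensitivity masks a patch of the image, re-runs the classifier, and records the drop in the class score; the model, patch size, and baseline value below are placeholders.

```python
# Minimal sketch (assumptions: a PyTorch classifier and a (1, C, H, W) image;
# patch size and baseline value are placeholders). Occlusion sensitivity is a
# simple post hoc method: mask a patch, re-run the model, record the score drop.
import torch

@torch.no_grad()
def occlusion_heat_map(model, image, target_class, patch=16, baseline=0.0):
    model.eval()
    _, _, h, w = image.shape
    reference = model(image)[0, target_class].item()
    heat_map = torch.zeros((h + patch - 1) // patch, (w + patch - 1) // patch)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = baseline
            score = model(occluded)[0, target_class].item()
            # A large score drop means the occluded region was important.
            heat_map[i // patch, j // patch] = reference - score
    return heat_map
```

Comparing such a perturbation-based map with the gradient-based sketch above illustrates how different post hoc methods can highlight different regions, which is the kind of comparison this work carries out.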

Second, we evaluate the benefits and usefulness of explainability in an operational collaboration framework. To do this, user tests are carried out with different levels of assistance, ranging from classification by an unaided operator to classification supported by an explained ATR. These tests allow us to study whether heat maps are useful in this context.

The results show that operators disagree on the utility of heat-map explanations. The presence of heat maps does not improve the quality of the classifications; on the contrary, it even increases response time. Nevertheless, half of the operators see some usefulness in heat-map explanations.



• Published in

  Journal of Data and Information Quality, Volume 16, Issue 1
  March 2024
  187 pages
  ISSN: 1936-1955
  EISSN: 1936-1963
  DOI: 10.1145/3613486

          Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 6 March 2024
          • Online AM: 21 December 2023
          • Accepted: 2 November 2023
          • Revised: 16 September 2023
          • Received: 9 December 2022
Published in JDIQ Volume 16, Issue 1
