A Comprehensive Survey of Explainable Artificial Intelligence (XAI) Methods: Exploring Transparency and Interpretability

  • Conference paper
Web Information Systems Engineering – WISE 2023 (WISE 2023)

Abstract

Artificial Intelligence (AI) is undergoing a significant transformation. In recent years, the deployment of AI models, from Analytical to Cognitive and Generative AI, has become widespread; however, this broad adoption has raised questions and concerns within the research and business communities regarding the transparency and interpretability of these models. A primary challenge lies in comprehending the underlying reasoning mechanisms employed by AI-enabled systems. A lack of transparency and interpretability in the decision-making process of these systems is a deficiency that can have severe consequences, e.g., in domains such as medical diagnosis and financial decision-making, where valuable resources are at stake. Based on the existing literature, this survey explores Explainable AI (XAI) techniques across the AI system pipeline. It covers tools and applications in various domains, assesses current methods, and addresses challenges and opportunities, particularly in the context of Generative AI.
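To make the surveyed idea of model-agnostic explanation concrete, one widely used technique is permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops. The sketch below is purely illustrative and not from the paper; the toy dataset, the fixed "black-box" predictor, and all names are assumptions made for the example.

```python
import random

random.seed(0)

# Toy dataset: the label depends only on feature 0; feature 1 is noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def predict(row):
    # A stand-in "black-box" model that, unknown to the explainer,
    # uses only feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(predict(r) == t for r, t in zip(rows, labels)) / len(labels)

baseline = accuracy(X, y)

importances = []
for j in range(2):
    # Shuffle column j, keeping all other columns fixed.
    col = [row[j] for row in X]
    random.shuffle(col)
    X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
    # Importance = drop in accuracy caused by destroying column j.
    importances.append(baseline - accuracy(X_perm, y))

print(importances)  # feature 0 matters; shuffling feature 1 changes nothing
```

Because the technique only needs predictions, it treats the model as a black box, which is exactly the property that makes such methods applicable across the diverse model families this survey covers.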



Acknowledgement

We acknowledge the Centre for Applied Artificial Intelligence at Macquarie University, Sydney, Australia, for funding this research.

Author information

Correspondence to Ambreen Hanif or Amin Beheshti.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Hanif, A. et al. (2023). A Comprehensive Survey of Explainable Artificial Intelligence (XAI) Methods: Exploring Transparency and Interpretability. In: Zhang, F., Wang, H., Barhamgi, M., Chen, L., Zhou, R. (eds) Web Information Systems Engineering – WISE 2023. WISE 2023. Lecture Notes in Computer Science, vol 14306. Springer, Singapore. https://doi.org/10.1007/978-981-99-7254-8_71

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7253-1

  • Online ISBN: 978-981-99-7254-8

  • eBook Packages: Computer Science, Computer Science (R0)
