
Defining Explainable AI for Requirements Analysis

  • Technical Contribution

KI - Künstliche Intelligenz

Abstract

Explainable artificial intelligence (XAI) has become popular in the last few years. The artificial intelligence (AI) community in general, and the machine learning (ML) community in particular, is coming to the realisation that in many applications, for AI to be trusted, it must not only demonstrate good performance in its decision-making, but must also explain its decisions and convince us that it is making them for the right reasons. However, different applications place different requirements on the information the underlying AI system must provide before we can consider it worthy of our trust. How do we define these requirements? In this paper, we present three dimensions for categorising the explanatory requirements of different applications: Source, Depth and Scope. We focus on the problem of matching the explanatory requirements of different applications with the capabilities of the underlying ML techniques to provide them. We deliberately avoid aspects of explanation that are already well covered by the existing literature, and we focus our discussion on ML, although the principles apply to AI more broadly.
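The matching problem sketched above, pairing an application's explanatory requirements with the explanatory capabilities of candidate ML techniques, can be pictured as comparing positions along the three dimensions. The Python sketch below is purely illustrative: the category values (MODEL_INTERNALS, ATTRIBUTION, LOCAL, and so on), the ExplanationProfile structure and the exact-match rule are assumptions made here for illustration, not the definitions or the matching procedure given in the paper.

    from dataclasses import dataclass
    from enum import Enum, auto


    class Source(Enum):
        # Illustrative values only: where the explanation comes from.
        MODEL_INTERNALS = auto()
        POST_HOC = auto()


    class Depth(Enum):
        # Illustrative values only: how deeply the decision is explained.
        ATTRIBUTION = auto()
        MECHANISTIC = auto()


    class Scope(Enum):
        # Illustrative values only: one decision vs. overall behaviour.
        LOCAL = auto()
        GLOBAL = auto()


    @dataclass(frozen=True)
    class ExplanationProfile:
        """A position along the three dimensions (hypothetical encoding)."""
        source: Source
        depth: Depth
        scope: Scope


    def satisfies(capability: ExplanationProfile,
                  requirement: ExplanationProfile) -> bool:
        """Return True if a technique's explanatory capability matches an
        application's requirement along all three dimensions."""
        return (capability.source == requirement.source
                and capability.depth == requirement.depth
                and capability.scope == requirement.scope)


    # Example: an application needing a local, attribution-style, post-hoc
    # explanation, matched against a (hypothetical) saliency-map technique.
    requirement = ExplanationProfile(Source.POST_HOC, Depth.ATTRIBUTION, Scope.LOCAL)
    saliency_map = ExplanationProfile(Source.POST_HOC, Depth.ATTRIBUTION, Scope.LOCAL)
    print(satisfies(saliency_map, requirement))  # -> True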



Author information

Corresponding author

Correspondence to Raymond Sheh.


Cite this article

Sheh, R., Monteath, I. Defining Explainable AI for Requirements Analysis. Künstl Intell 32, 261–266 (2018). https://doi.org/10.1007/s13218-018-0559-3

