
A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable, or Understandable

  • Conference paper

Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2021)

Abstract

Because of recent and rapid developments in Artificial Intelligence (AI), humans and AI-systems increasingly work together in human-agent teams. However, to effectively leverage the capabilities of both, AI-systems need to be understandable to their human teammates. The field of eXplainable AI (XAI) aspires to make AI-systems more understandable to humans, potentially improving human-agent teamwork. Unfortunately, the XAI literature suffers from a lack of agreement regarding the definitions of, and relations between, the four key XAI-concepts: transparency, interpretability, explainability, and understandability. Inspired by both the XAI and social sciences literature, we present a two-dimensional framework that defines and relates these concepts in a concise and coherent way, yielding a classification of three types of AI-systems: incomprehensible, interpretable, and understandable. We also discuss how the established relationships can be used to guide future research into XAI, and how the framework could be used during the development of AI-systems as part of human-AI teams.
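
The framework itself is conceptual, and the abstract does not spell out the two dimensions or the decision rule behind the three classes. Purely as an illustration of how such a classification could be operationalized in code, here is a minimal Python sketch that assumes one dimension captures whether a system is interpretable to its human teammate and the other whether it provides explanations of its behavior; `AISystem`, `classify`, and both boolean fields are hypothetical names, not the authors' definitions.

```python
from dataclasses import dataclass
from enum import Enum


class Comprehensibility(Enum):
    """The three classes of AI-systems named in the abstract."""
    INCOMPREHENSIBLE = "incomprehensible"
    INTERPRETABLE = "interpretable"
    UNDERSTANDABLE = "understandable"


@dataclass
class AISystem:
    # Assumed dimension 1: can the human teammate interpret how the system works?
    interpretable: bool
    # Assumed dimension 2: does the system explain its behavior to the human?
    provides_explanations: bool


def classify(system: AISystem) -> Comprehensibility:
    """Toy decision rule mapping the two assumed dimensions to the three classes."""
    if not system.interpretable:
        return Comprehensibility.INCOMPREHENSIBLE
    if system.provides_explanations:
        return Comprehensibility.UNDERSTANDABLE
    return Comprehensibility.INTERPRETABLE


# Example: an interpretable system that also explains itself.
print(classify(AISystem(interpretable=True, provides_explanations=True)))
# -> Comprehensibility.UNDERSTANDABLE
```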

Notes

  1. Also referred to as global explanations in the XAI literature.

  2. Also referred to as local explanations in the XAI literature.

  3. For example, ranging from “Totally Disagree” to “Totally Agree” on a 7-point scale.
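
Footnotes 1 and 2 map onto the standard XAI distinction between global explanations (of a system's overall behavior) and local explanations (of a single decision). As a minimal sketch of that distinction only, assuming a toy linear model, the global view ranks features by the magnitude of their weights while the local view scores each feature's contribution to one specific prediction; the model, feature names, and both functions below are hypothetical illustrations, not the paper's method.

```python
# Toy linear model: weights over three hypothetical features.
weights = {"speed": 0.8, "distance": -0.5, "battery": 0.1}


def global_explanation(weights: dict[str, float]) -> dict[str, float]:
    """Global view: which features matter most across all inputs."""
    return dict(sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True))


def local_explanation(weights: dict[str, float], x: dict[str, float]) -> dict[str, float]:
    """Local view: each feature's contribution to one specific prediction."""
    return {name: weights[name] * x[name] for name in weights}


print(global_explanation(weights))
# -> {'speed': 0.8, 'distance': -0.5, 'battery': 0.1}
print(local_explanation(weights, {"speed": 2.0, "distance": 1.0, "battery": 2.0}))
# -> {'speed': 1.6, 'distance': -0.5, 'battery': 0.2}
```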

Acknowledgements

This work is part of the research lab AI*MAN of Delft University of Technology.

Author information

Corresponding author

Correspondence to Ruben S. Verhagen.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Verhagen, R.S., Neerincx, M.A., Tielman, M.L. (2021). A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable, or Understandable. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds.) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2021. Lecture Notes in Computer Science, vol. 12688. Springer, Cham. https://doi.org/10.1007/978-3-030-82017-6_8

  • DOI: https://doi.org/10.1007/978-3-030-82017-6_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82016-9

  • Online ISBN: 978-3-030-82017-6

  • eBook Packages: Computer Science, Computer Science (R0)
