The “Why Did You Do That?” Button: Answering Why-Questions for End Users of Robotic Systems

  • Conference paper
  • Engineering Multi-Agent Systems (EMAS 2019)

Abstract

The issue of explainability for autonomous systems is becoming increasingly prominent. Several researchers and organisations have advocated the provision of a “Why did you do that?” button which allows a user to interrogate a robot about its choices and actions. We take previous work on debugging cognitive agent programs and apply it to the question of supplying explanations to end users in the form of answers to why-questions. These previous approaches are based on generating a trace of events in the execution of the program and then answering why-questions using the trace. We implemented this framework in the Agent Infrastructure Layer (AIL) and, in particular, the Gwendolen programming language it supports, extending it in the process to handle the generation of applicable plans and multiple intentions. To make the answers to why-questions comprehensible to end users we advocate a two-step process: first a representation of an explanation is created, and this is then converted into natural language in a way which abstracts away from some events in the trace and employs application-specific predicate dictionaries to translate the first-order logic presentation of concepts within the cognitive agent program into natural language. A prototype implementation of these ideas is provided.
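
To make the two-step process concrete, the sketch below is a minimal illustration of the idea rather than the paper's implementation: every class, function, and dictionary entry is hypothetical. It records trace events with causal links, then answers a why-question by walking those links and verbalising each event through an application-specific predicate dictionary.

```python
# Minimal illustrative sketch of the two-step idea: (1) build an explanation
# from causally linked trace events, (2) render it in natural language via an
# application-specific predicate dictionary. All names are hypothetical and
# the structure is far simpler than the AIL/Gwendolen trace in the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TraceEvent:
    kind: str                              # e.g. "belief", "goal", "action"
    predicate: str                         # first-order term recorded in the trace
    cause: Optional["TraceEvent"] = None   # event that triggered this one


# Application-specific dictionary translating predicates into natural language.
PREDICATE_DICTIONARY = {
    "low_battery": "the battery level was low",
    "goal(recharge)": "I had the goal of recharging",
    "action(dock)": "I docked at the charging station",
}


def verbalise(event: TraceEvent) -> str:
    """Translate a single trace event, falling back to the raw predicate."""
    return PREDICATE_DICTIONARY.get(event.predicate, event.predicate)


def explain_why(action: TraceEvent) -> str:
    """Step 1: collect the causal chain behind an action (the explanation
    representation). Step 2: convert it into a natural-language answer."""
    reasons = []
    event = action.cause
    while event is not None:
        reasons.append(verbalise(event))
        event = event.cause
    return verbalise(action) + " because " + " and ".join(reasons)


# Example why-question: "Why did you dock?"
belief = TraceEvent("belief", "low_battery")
goal = TraceEvent("goal", "goal(recharge)", cause=belief)
dock = TraceEvent("action", "action(dock)", cause=goal)
print(explain_why(dock))
# -> I docked at the charging station because I had the goal of recharging
#    and the battery level was low
```

In the framework described in the paper the trace is produced automatically during execution and the explanation additionally abstracts away some intermediate events, but the dictionary-based translation into natural language follows the same pattern.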

Notes

  1. Though it should be noted that the implementation of omniscient debugging in GOAL also handles GOAL’s module mechanism (although this is not reported in depth in [17]), which is not entirely dissimilar to the concept of intention in the AIL.

  2. The implementation of Gwendolen contains a sixth stage for message handling.

  3. A refinement of the AIL’s intention structure which is more general.

  4. To handle situations where the top deed on the intention is not \(\epsilon \) (“no plan yet”), \(\mathcal {G}\) returns the existing top tuple, so there is no change to the intention and it continues to be processed as normal. This somewhat baroque mechanism has its roots in Gwendolen’s origin as an intermediate language into which all BDI languages could be translated [10]. We ignore this type of applicable plan in our explanation mechanism and so do not refer to it further here; a rough sketch of this behaviour appears after these notes.

  5. Note this is not the complete set of trace events shown in Fig. 1. This is elaborated further in Sect. 3.1.

  6. It should be noted that our implementation does not yet enable such expansion of explanations.

  7. It is generally accepted that end users prefer natural language presentations, while developers often prefer something more compact, so this log presents the events with end users in mind; it nevertheless remains much more verbose than is required for an explanation.

  8. https://github.com/VerifiableAutonomy/BDIPython.
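
As a rough illustration of the special case in note 4, the sketch below shows how an applicable-plan generation function might behave when the top deed is not \(\epsilon \). It assumes intentions are represented as stacks of (event, deed) tuples; all names are invented for illustration and are not taken from the AIL.

```python
# Hedged sketch of the behaviour described in note 4, not the AIL's actual
# code: when the top deed of an intention is not epsilon ("no plan yet"),
# plan generation simply returns the existing top tuple, leaving the
# intention unchanged; otherwise it matches the triggering event against a
# (hypothetical) plan library.
EPSILON = "epsilon"  # marker deed meaning "no plan yet"


def generate_applicable_plans(intention, plan_library):
    event, deed = intention[-1]  # top (event, deed) tuple of the intention stack
    if deed != EPSILON:
        # Nothing to plan for: return the existing top tuple so the intention
        # continues to be processed as normal. This is the case the paper's
        # explanation mechanism ignores.
        return [(event, deed)]
    # Normal case: every plan whose trigger matches the event is applicable.
    return [(event, plan["body"]) for plan in plan_library
            if plan["trigger"] == event]
```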

References

  1. Bordini, R.H., Hübner, J.F., Wooldridge, M.: Programming Multi-agent Systems in AgentSpeak Using Jason. Wiley, Hoboken (2007)

  2. Bordini, R.H., Dastani, M., Dix, J., El Fallah-Seghrouchni, A. (eds.): Multi-Agent Programming: Languages, Platforms and Applications. Springer, Heidelberg (2005). https://doi.org/10.1007/b137449

  3. Bratman, M.E.: Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge (1987)

  4. Bremner, P., Dennis, L.A., Fisher, M., Winfield, A.F.: On proactive, transparent and verifiable ethical reasoning for robots. In: Proceedings of the IEEE Special Issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems (2019, to appear)

  5. Charisi, V., et al.: Towards moral autonomous systems. CoRR abs/1703.04741 (2017). http://arxiv.org/abs/1703.04741

  6. Dastani, M., van Riemsdijk, M.B., Meyer, J.J.C.: Programming multi-agent systems in 3APL. In: [2], chap. 2, pp. 39–67

  7. Dennis, L., Fisher, M., Webster, M., Bordini, R.: Model checking agent programming languages. Autom. Softw. Eng. 19, 1–59 (2011). https://doi.org/10.1007/s10515-011-0088-x

  8. Dennis, L.A.: Gwendolen semantics: 2017. Technical report ULCS-17-001, University of Liverpool, Department of Computer Science (2017)

  9. Dennis, L.A.: The MCAPL framework including the Agent Infrastructure Layer and Agent Java Pathfinder. J. Open Source Softw. 3(24), 617 (2018)

  10. Dennis, L.A., Farwer, B., Bordini, R.H., Fisher, M., Wooldridge, M.: A common semantic basis for BDI languages. In: Dastani, M., El Fallah Seghrouchni, A., Ricci, A., Winikoff, M. (eds.) ProMAS 2007. LNCS (LNAI), vol. 4908, pp. 124–139. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79043-3_8

  11. Dinmohammadi, F., et al.: Certification of safe and trusted robotic inspection of assets. In: 2018 Prognostics and System Health Management Conference (PHM-Chongqing), pp. 276–284, October 2018

  12. Fisher, M., et al.: Verifiable self-certifying autonomous systems. In: 2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 341–348, October 2018

  13. Harbers, M.: Explaining agent behaviour in virtual training. Ph.D. thesis, SIKS Dissertation Series no. 2011–35 (2011)

  14. Hindriks, K.V.: Programming rational agents in GOAL. In: El Fallah Seghrouchni, A., Dix, J., Dastani, M., Bordini, R.H. (eds.) Multi-Agent Programming, pp. 119–157. Springer, Boston (2009). https://doi.org/10.1007/978-0-387-89299-3_4

  15. Hindriks, K.V.: Debugging is explaining. In: Rahwan, I., Wobcke, W., Sen, S., Sugawara, T. (eds.) PRIMA 2012. LNCS (LNAI), vol. 7455, pp. 31–45. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32729-2_3

  16. Ko, A.J., Myers, B.A.: Extracting and answering why and why not questions about Java program output. ACM Trans. Softw. Eng. Methodol. 20(2), 4:1–4:36 (2010)

  17. Koeman, V.J., Hindriks, K.V., Jonker, C.M.: Omniscient debugging for cognitive agent programs. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI 2017, pp. 265–272. AAAI Press (2017)

  18. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2017)

  19. Pokahr, A., Braubach, L., Lamersdorf, W.: Jadex: a BDI reasoning engine. In: [2], pp. 149–174

  20. Rao, A.S., Georgeff, M.P.: Modeling agents within a BDI-architecture. In: Proceedings of 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR&R), pp. 473–484. Morgan Kaufmann (1991)

  21. Rao, A.S., Georgeff, M.P.: An abstract architecture for rational agents. In: Proceedings of International Conference on Knowledge Representation and Reasoning (KR&R), pp. 439–449. Morgan Kaufmann (1992)

  22. Rao, A.S., Georgeff, M.P.: BDI agents: from theory to practice. In: Proceedings of 1st International Conference on Multi-Agent Systems (ICMAS), San Francisco, USA, pp. 312–319 (1995)

  23. Rao, A.S.: AgentSpeak(L): BDI agents speak out in a logical computable language. In: Van de Velde, W., Perram, J.W. (eds.) MAAMAW 1996. LNCS, vol. 1038, pp. 42–55. Springer, Heidelberg (1996). https://doi.org/10.1007/BFb0031845

  24. Sheh, R.K.: “Why did you do that?” Explainable intelligent robots. In: AAAI-17 Workshop on Human-Aware Artificial Intelligence (2017)

  25. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. Report, IEEE (2017)

  26. Turner, J.: Robot Rules: Regulating Artificial Intelligence. Palgrave Macmillan, London (2019)

  27. Webster, M., Fisher, M., Cameron, N., Jump, M.: Formal methods for the certification of autonomous unmanned aircraft systems. In: Flammini, F., Bologna, S., Vittorini, V. (eds.) SAFECOMP 2011. LNCS, vol. 6894, pp. 228–242. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24270-0_17

  28. Wei, C., Hindriks, K.V.: An agent-based cognitive robot architecture. In: Dastani, M., Hübner, J.F., Logan, B. (eds.) ProMAS 2012. LNCS (LNAI), vol. 7837, pp. 54–71. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38700-5_4

  29. Winikoff, M., Cranefield, S.: On the testability of BDI agent systems. J. Artif. Intell. Res. 51, 71–131 (2015)

  30. Winikoff, M.: BDI agent testability revisited. Auton. Agents Multi-Agent Syst. 31, 1094–1132 (2017). https://doi.org/10.1007/s10458-016-9356-2

  31. Winikoff, M.: Debugging agent programs with Why? questions. In: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, pp. 251–259. International Foundation for Autonomous Agents and Multiagent Systems, Richland (2017)

  32. Winikoff, M., Dignum, V., Dignum, F.: Why bad coffee? Explaining agent plans with valuings. In: Gallina, B., Skavhaug, A., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2018. LNCS, vol. 11094, pp. 521–534. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99229-7_47

  33. Wooldridge, M.: An Introduction to Multiagent Systems. Wiley, Hoboken (2002)

  34. Wooldridge, M., Rao, A. (eds.): Foundations of Rational Agency. Applied Logic Series. Kluwer Academic Publishers, Berlin (1999)

  35. Wortham, R.H., Theodorou, A.: Robot transparency, trust and utility. Connect. Sci. 29(3), 242–247 (2017)

  36. Ziafati, P., Dastani, M., Meyer, J.-J., van der Torre, L.: Agent programming languages requirements for programming autonomous robots. In: Dastani, M., Hübner, J.F., Logan, B. (eds.) ProMAS 2012. LNCS (LNAI), vol. 7837, pp. 35–53. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38700-5_3

Acknowledgments

This research was partially funded by the EPSRC grants Verifiable Autonomy (EP/L024845/1) and the Offshore Robotics for Certification of Assets (EP/R026173/1) Robotics and Artificial Intelligence Hub.

Author information

Correspondence to Louise A. Dennis.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Koeman, V.J., Dennis, L.A., Webster, M., Fisher, M., Hindriks, K. (2020). The “Why Did You Do That?” Button: Answering Why-Questions for End Users of Robotic Systems. In: Dennis, L., Bordini, R., Lespérance, Y. (eds) Engineering Multi-Agent Systems. EMAS 2019. Lecture Notes in Computer Science, vol 12058. Springer, Cham. https://doi.org/10.1007/978-3-030-51417-4_8

  • DOI: https://doi.org/10.1007/978-3-030-51417-4_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-51416-7

  • Online ISBN: 978-3-030-51417-4
