
Architecting Explainable Service Robots

  • Conference paper
Software Architecture (ECSA 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14212)

Abstract

Service robots that collaborate tightly with humans are increasingly widespread in critical domains such as healthcare and domestic assistance. However, the so-called Human-Machine Teaming paradigm can be hindered by the black-box nature of service robots, whose autonomous decisions may be confusing or even dangerous for humans. Explainability thus emerges as a crucial property for the acceptance of these systems in society. This paper introduces the concept of explainable service robots and proposes a software architecture that supports the engineering of self-explainability requirements in these collaborative systems by combining formal analysis and interpretable machine learning. We evaluate the proposed architecture on an illustrative example in healthcare. Results show that our proposal supports the explainability of multi-agent Human-Machine Teaming missions featuring an infinite (dense) space of uncertain human-machine factors, such as the diverse physical and physiological characteristics of the agents involved in the teamwork.

Notes

  1.

    We refer the reader to [22] for a comprehensive treatment of the model and its accuracy w.r.t. a real-world deployment.

  2.

    A package with full mission specification, data and sources to replicate our results is available at https://doi.org/10.5281/zenodo.8110691.

  3.

    Our current implementation relies on uniform random sampling.

  4.

    Mission success occurs if \(P(\psi )\) is greater than a user-defined probability threshold.

  5.

    https://martinfowler.com/eaaDev/EventSourcing.html.
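
The sampling-and-thresholding scheme of notes 3 and 4 can be sketched as follows. All names, factor bounds, and the probability estimator are hypothetical stand-ins: in the paper, \(P(\psi)\) comes from statistical model checking of the mission model, not from the toy surrogate below.

```python
import random

# Hypothetical uncertain human-machine factors, each ranging over a dense
# interval (the paper's actual factors cover physical and physiological
# characteristics of the agents; these names and bounds are illustrative).
FACTORS = {"walking_speed": (0.4, 1.6), "fatigue_rate": (0.0, 1.0)}

def sample_factors(rng):
    """Uniform random sampling over the factor space (note 3)."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in FACTORS.items()}

def estimate_success_probability(factors):
    """Toy surrogate for the statistical model checker estimating P(psi);
    NOT the paper's model, only a stand-in for its interface."""
    p = (1.0 - 0.3 * factors["fatigue_rate"]
             - 0.2 * abs(factors["walking_speed"] - 1.0))
    return max(0.0, min(1.0, p))

def mission_success(factors, threshold=0.8):
    """Mission success iff P(psi) exceeds the user-defined threshold (note 4)."""
    return estimate_success_probability(factors) > threshold

# Label a batch of uniformly sampled configurations.
rng = random.Random(42)
dataset = []
for _ in range(100):
    s = sample_factors(rng)
    dataset.append((s, mission_success(s)))
```

The labeled pairs in `dataset` could then feed an interpretable classifier (e.g., a decision tree) to expose which regions of the factor space drive mission failure, in the spirit of the paper's combination of formal analysis and interpretable machine learning.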

References

  1. Angelov, P.P., Soares, E.A., Jiang, R., Arnold, N.I., Atkinson, P.M.: Explainable artificial intelligence: an analytical review. WIREs Data Min. Knowl. Discov. 11(5), e1424 (2021)

  2. Bersani, M.M., Camilli, M., Lestingi, L., Mirandola, R., Rossi, M.: Explainable human-machine teaming using model checking and interpretable machine learning. In: International Conference on Formal Methods in Software Engineering, pp. 18–28. IEEE (2023)

  3. Bersani, M.M., Camilli, M., Lestingi, L., Mirandola, R., Rossi, M., Scandurra, P.: Towards better trust in human-machine teaming through explainable dependability. In: ICSA Companion, pp. 86–90. IEEE (2023)

  4. Cámara, J., Silva, M., Garlan, D., Schmerl, B.: Explaining architectural design tradeoff spaces: a machine learning approach. In: Biffl, S., Navarro, E., Löwe, W., Sirjani, M., Mirandola, R., Weyns, D. (eds.) ECSA 2021. LNCS, vol. 12857, pp. 49–65. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86044-8_4

  5. Camilli, M., Mirandola, R., Scandurra, P.: XSA: Explainable self-adaptation. In: International Conference on Automated Software Engineering. ASE’22. ACM (2023)

  6. Cleland-Huang, J., Agrawal, A., Vierhauser, M., Murphy, M., Prieto, M.: Extending MAPE-K to support human-machine teaming. In: SEAMS, pp. 120–131. ACM (2022)

  7. David, A., Larsen, K.G., Legay, A., Mikučionis, M., Poulsen, D.B.: UPPAAL SMC tutorial. STTT 17(4), 397–415 (2015)

  8. David, A., et al.: Statistical model checking for networks of priced timed automata. In: Fahrenberg, U., Tripakis, S. (eds.) FORMATS 2011. LNCS, vol. 6919, pp. 80–96. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24310-3_7

  9. EU: Robotics 2020 Multi-Annual Roadmap for Robotics in Europe (2016). https://www.eu-robotics.net/sparc/upload/about/files/H2020-Robotics-Multi-Annual-Roadmap-ICT-2016.pdf

  10. García, S., Strüber, D., Brugali, D., Berger, T., Pelliccione, P.: Robotics software engineering: a perspective from the service robotics domain. In: ESEC/FSE 2020, pp. 593–604. ACM (2020)

  11. Hanley, J.A., McNeil, B.J.: The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143(1), 29–36 (1982)

  12. Hayes, B., Shah, J.A.: Improving robot controller transparency through autonomous policy explanation. In: HRI, pp. 303–312. IEEE (2017)

  13. Jovanović, M., Schmitz, M.: Explainability as a user requirement for artificial intelligence systems. Computer 55(2), 90–94 (2022)

  14. Kaleeswaran, A.P., Nordmann, A., Vogel, T., Grunske, L.: A systematic literature review on counterexample explanation. Inf. Softw. Technol. 145, 106800 (2022)

  15. Kang, H.G., Dingwell, J.B.: Differential changes with age in multiscale entropy of electromyography signals from leg muscles during treadmill walking. PLoS ONE 11(8), e0162034 (2016)

  16. Khalid, N., Qureshi, N.A.: Towards self-explainable adaptive systems (SEAS): a requirements driven approach. In: Joint Proceedings of REFSQ. CEUR Workshop Proceedings, vol. 2857. CEUR-WS.org (2021)

  17. Kordts, B., Kopetz, J.P., Schrader, A.: A framework for self-explaining systems in the context of intensive care. In: ACSOS, pp. 138–144. IEEE (2021)

  18. Köhl, M.A., Baum, K., Langer, M., Oster, D., Speith, T., Bohlender, D.: Explainability as a non-functional requirement. In: RE, pp. 363–368. IEEE (2019)

  19. de Lemos, R.: Human in the loop: what is the point of no return? In: SEAMS, pp. 165–166. ACM (2020)

  20. Lessmann, S., Baesens, B., Mues, C., Pietsch, S.: Benchmarking classification models for software defect prediction: a proposed framework and novel findings. IEEE Trans. Softw. Eng. 34(4), 485–496 (2008)

  21. Lestingi, L., Askarpour, M., Bersani, M.M., Rossi, M.: A deployment framework for formally verified human-robot interactions. IEEE Access 9, 136616–136635 (2021)

  22. Lestingi, L., Zerla, D., Bersani, M.M., Rossi, M.: Specification, stochastic modeling and analysis of interactive service robotic applications. Robot. Autonom. Syst. 163 (2023)

  23. Li, N., Cámara, J., Garlan, D., Schmerl, B.R., Jin, Z.: Hey! Preparing humans to do tasks in self-adaptive systems. In: SEAMS, pp. 48–58. IEEE (2021)

  24. Madni, A.M., Madni, C.C.: Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems 6(4) (2018)

  25. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)

  26. Mitchell, T.M.: Machine Learning, 1st edn. McGraw-Hill Inc., New York (1997)

  27. Molnar, C.: Interpretable Machine Learning. 2 edn (2022). https://christophm.github.io/interpretable-ml-book

  28. Ozkaya, I.: The behavioral science of software engineering and human-machine teaming. IEEE Softw. 37(6), 3–6 (2020)

  29. Paleja, R., Ghuy, M., Ranawaka Arachchige, N., Jensen, R., Gombolay, M.: The utility of explainable AI in ad hoc human-machine teaming. In: NEURIPS, vol. 34, pp. 610–623. Curran Associates, Inc. (2021)

  30. Scott, A.J., Knott, M.: A cluster analysis method for grouping means in the analysis of variance. Biometrics 30(3), 507–512 (1974)

  31. Stone, M.: Cross-validatory choice and assessment of statistical predictions. J. Roy. Stat. Soc.: Ser. B (Methodol.) 36(2), 111–133 (1974)

  32. Tantithamthavorn, C.K., Jiarpakdee, J.: Explainable AI for software engineering. In: ASE, pp. 1–2. ACM (2021)

  33. Tjoa, E., Guan, C.: A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 32(11), 4793–4813 (2021)

Author information

Corresponding author

Correspondence to Livia Lestingi.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bersani, M.M., Camilli, M., Lestingi, L., Mirandola, R., Rossi, M., Scandurra, P. (2023). Architecting Explainable Service Robots. In: Tekinerdogan, B., Trubiani, C., Tibermacine, C., Scandurra, P., Cuesta, C.E. (eds) Software Architecture. ECSA 2023. Lecture Notes in Computer Science, vol 14212. Springer, Cham. https://doi.org/10.1007/978-3-031-42592-9_11

  • DOI: https://doi.org/10.1007/978-3-031-42592-9_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42591-2

  • Online ISBN: 978-3-031-42592-9
