
Impact of Fidelity and Robustness of Machine Learning Explanations on User Trust

  • Conference paper
  • AI 2023: Advances in Artificial Intelligence (AI 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14472)


Abstract

eXplainable machine learning (XML) has recently emerged as a promising approach to addressing the inherent opacity of machine learning (ML) systems by providing insights into their reasoning processes. This paper explores the relationships among user trust, fidelity, and robustness in the context of ML explanations. To investigate these relationships, a user study is conducted in the context of predicting students’ performance. The study focuses on two scenarios: (1) a fidelity-based scenario, exploring the dynamics of user trust across explanations at varying fidelity levels, and (2) a robustness-based scenario, examining the dynamics of user trust with respect to robustness. For each scenario, we conduct experiments using two metrics: self-reported trust and behaviour-based trust. For the fidelity-based scenario, the behaviour-based trust results, though not the self-reported trust results, show that users trust both high- and low-fidelity explanations more than no explanations. Both metrics consistently indicate no significant differences in user trust between explanations at different fidelity levels. For the robustness-based scenario, the two metrics yield contrasting results: the self-reported trust metric shows no variation in user trust across robustness levels, whereas the behaviour-based trust metric suggests that user trust tends to be higher at higher robustness levels.
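
The abstract refers to explanations at different fidelity and robustness levels without defining how such levels are measured. As background, the sketch below illustrates one common way to operationalise both notions; it is a minimal illustration under stated assumptions, not the study's actual protocol. It assumes fidelity is measured as prediction agreement between the black-box model and an interpretable surrogate, and robustness as an empirical local Lipschitz estimate of a simple occlusion-style attribution (in the spirit of Alvarez-Melis and Jaakkola's robustness metric). The synthetic dataset, random-forest black box, and occlusion attribution are illustrative stand-ins for the paper's student-performance setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the student-performance prediction task:
# a synthetic binary (pass/fail) dataset and a black-box model.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Fidelity: fraction of held-out instances on which an interpretable
# surrogate (a shallow tree trained to mimic the black box) reproduces
# the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"fidelity (prediction agreement): {fidelity:.3f}")

baseline = X_train.mean(axis=0)

def occlusion_attribution(model, x):
    """Toy per-feature attribution: drop in the positive-class
    probability when a feature is replaced by its training mean."""
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    attr = np.empty_like(x)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = baseline[i]
        attr[i] = p - model.predict_proba(x_masked.reshape(1, -1))[0, 1]
    return attr

def local_lipschitz(model, x, eps=0.1, n_samples=20, seed=0):
    """Empirical robustness: worst observed ratio of attribution change
    to input change over random perturbations in an eps-box around x
    (lower values mean a more robust explanation)."""
    rng = np.random.default_rng(seed)
    attr_x = occlusion_attribution(model, x)
    ratios = []
    for _ in range(n_samples):
        z = x + rng.uniform(-eps, eps, size=x.shape)
        attr_z = occlusion_attribution(model, z)
        ratios.append(np.linalg.norm(attr_x - attr_z) / np.linalg.norm(x - z))
    return max(ratios)

print(f"robustness (local Lipschitz estimate): "
      f"{local_lipschitz(black_box, X_test[0]):.3f}")
```

Under an operationalisation like this, a study could construct high- and low-fidelity (or more and less robust) explanation conditions by selecting explanation methods whose scores fall above or below chosen thresholds.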



Author information

Correspondence to Bo Wang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, B., Zhou, J., Li, Y., Chen, F. (2024). Impact of Fidelity and Robustness of Machine Learning Explanations on User Trust. In: Liu, T., Webb, G., Yue, L., Wang, D. (eds) AI 2023: Advances in Artificial Intelligence. AI 2023. Lecture Notes in Computer Science, vol 14472. Springer, Singapore. https://doi.org/10.1007/978-981-99-8391-9_17


  • DOI: https://doi.org/10.1007/978-981-99-8391-9_17

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8390-2

  • Online ISBN: 978-981-99-8391-9

  • eBook Packages: Computer Science (R0)
