Abstract
While the previous chapter covered Human-in-the-Loop (HITL) and its importance in building responsible AI algorithms, it is equally important to ensure the transparency and explainability of ML models that follow HITL processes. This chapter reviews explainability, also known as explainable AI (XAI), and its implementation.
Copyright information
© 2023 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature
About this chapter
Cite this chapter
Duke, T. (2023). Explainability. In: Building Responsible AI Algorithms. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-9306-5_7
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-9305-8
Online ISBN: 978-1-4842-9306-5
eBook Packages: Professional and Applied Computing, Apress Access Books