Explainability

Chapter in Building Responsible AI Algorithms

Abstract

While the previous chapter covered Human-in-the-Loop (HITL) and its importance in building responsible AI algorithms, it is equally important to ensure the transparency and explainability of ML models that pass through HITL processes. This chapter reviews explainability, also known as explainable AI (XAI), and its implementation.
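As a brief illustration of the kind of XAI technique the chapter discusses, the sketch below computes permutation feature importance with scikit-learn, a common model-agnostic way to see which inputs a trained model relies on. The dataset, model, and top-5 reporting here are illustrative assumptions for demonstration, not the chapter's own example.

    # A minimal, model-agnostic explainability sketch using permutation
    # feature importance. Dataset and model choices are illustrative
    # assumptions, not taken from this chapter.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature on the held-out set and measure the drop in score;
    # larger drops indicate features the model relies on more heavily.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(
        zip(X.columns, result.importances_mean, result.importances_std),
        key=lambda t: t[1],
        reverse=True,
    )
    for name, mean, std in ranked[:5]:
        print(f"{name}: {mean:.3f} +/- {std:.3f}")

The output lists the five features whose shuffling most degrades held-out accuracy, which is one simple, global form of model explanation.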



Copyright information

© 2023 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature


Cite this chapter

Duke, T. (2023). Explainability. In: Building Responsible AI Algorithms. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-9306-5_7
