
Transparency and Interpretability for Learned Representations of Artificial Neural Networks

  • Book
  • © 2022

Table of contents (8 chapters)

About this book

Artificial intelligence (AI) is a concept whose meaning and perception have changed considerably over the last decades. Starting with individual, purely theoretical research efforts in the 1950s, AI has grown into a fully developed research field and may arguably emerge as one of the most important technological advancements of mankind. Despite this rapid progress, key questions about the transparency, interpretability, and explainability of an AI's decision-making remain unanswered. A young research field known under the general term Explainable AI (XAI) has therefore emerged from increasingly strict requirements for AI used in safety-critical or ethically sensitive domains. An important branch of XAI develops methods that facilitate a deeper understanding of the learned knowledge of artificial neural systems. This book presents a series of scientific studies that shed light on how to adopt an empirical, neuroscience-inspired approach to investigating a neural network's learned representations, in the same spirit as neuroscientific studies of the brain.

Authors and Affiliations

  • Institute for Technologies and Management of Digital Transformation, University of Wuppertal, Wuppertal, Germany

    Richard Meyes

About the author

Richard Meyes is head of the research group “Interpretable Learning Models” at the Institute for Technologies and Management of Digital Transformation at the University of Wuppertal. His current research focuses on the transparency and interpretability of decision-making processes in artificial neural networks.

Bibliographic Information

  • Book Title: Transparency and Interpretability for Learned Representations of Artificial Neural Networks

  • Author: Richard Meyes

  • DOI: https://doi.org/10.1007/978-3-658-40004-0

  • Publisher: Springer Vieweg Wiesbaden

  • eBook Packages: Life Science and Basic Disciplines (German Language)

  • Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2022

  • Softcover ISBN: 978-3-658-40003-3, published 28 November 2022

  • eBook ISBN: 978-3-658-40004-0, published 26 November 2022

  • Edition Number: 1

  • Number of Pages: XXI, 211

  • Number of Illustrations: 3 b/w illustrations, 70 illustrations in colour

  • Topics: Machine Learning, Neurosciences, Artificial Intelligence