The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research

  • Conference paper
  • In: Artificial Intelligence in HCI (HCII 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14050)

Abstract

In this paper, we investigate the practical relevance of explainable artificial intelligence (XAI), with a special focus on the producing industries, and relate it to the current state of academic XAI research. Our findings are based on an extensive series of interviews regarding the role and applicability of XAI along the machine learning (ML) lifecycle in current industrial practice and its expected relevance in the future. The interviews were conducted with a wide variety of roles and key stakeholders from different industry sectors. In addition, we outline the state of XAI research by providing a concise review of the relevant literature. This enables us to provide an encompassing overview that covers both the opinions of the interviewed practitioners and the current state of academic research. By comparing the interview results with current research approaches, we reveal several discrepancies: while a multitude of different XAI approaches exists, most of them are centered around the model evaluation phase and around data scientists, and their potential for other lifecycle stages is currently either insufficiently explored or not popular among practitioners. In line with existing work, our findings also confirm that more effort is needed to enable non-expert users to interpret and understand opaque AI models with existing methods and frameworks.

All authors contributed equally and are listed in alphabetical order.

Notes

  1. Examples include the explainable AI frameworks and tools by Google for the Google Cloud, the Captum library [60] by Meta/Facebook, Amazon Lookout, Amazon SageMaker Clarify Model Explainability, and Microsoft's InterpretML [83] Python library (comments of the authors and not part of the interview responses); a brief illustrative usage sketch follows after these notes.

  2. References are given by the authors and were not part of the interview responses.
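
As a concrete illustration of the tools listed in the first note, the following minimal sketch shows how a feature-attribution method from the Captum library [60] can be applied to explain a single prediction. The snippet is an illustrative sketch, not part of the interview material or the paper's own analysis; the model, input data, and target class are hypothetical placeholders for an opaque production model.

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # Hypothetical toy classifier standing in for an opaque production model.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    model.eval()

    inputs = torch.rand(1, 4)  # one hypothetical sample with four input features

    # Integrated Gradients attributes the score of the chosen class (here: class 0)
    # back to the input features, yielding one importance value per feature.
    ig = IntegratedGradients(model)
    attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)

    print(attributions)  # per-feature attribution scores
    print(delta)         # convergence diagnostic of the approximation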

References

  1. A bill. The Lancet 34(873), 316–317 (May 2022). https://doi.org/10.1016/S0140-6736(02)37657-8

  2. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)

  3. Agarwal, C., D’souza, D., Hooker, S.: Estimating example difficulty using variance of gradients. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10368–10378 (2022)

  4. Alkan, O., Wei, D., Mattetti, M., Nair, R., Daly, E., Saha, D.: Frote: feedback rule-driven oversampling for editing models. In: Marculescu, D., Chi, Y., Wu, C. (eds.) Proceedings of Machine Learning and Systems, vol. 4, pp. 276–301 (2022). https://proceedings.mlsys.org/paper/2022/file/63dc7ed1010d3c3b8269faf0ba7491d4-Paper.pdf

  5. Ancona, M., Ceolini, E., Öztireli, C., Gross, M.: Gradient-based attribution methods. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 169–191. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_9

  6. Ancona, M., Oztireli, C., Gross, M.: Explaining deep neural networks with a polynomial time algorithm for shapley value approximation. In: International Conference on Machine Learning, pp. 272–281. PMLR (2019)

  7. Arbesser, C., Muehlbacher, T., Komornyik, S., Piringer, H.: Visual analytics for domain experts: challenges and lessons learned. In: Proceedings of the Second International Symposium on Virtual Reality and Visual Computing, pp. 1–6. VR Kebao (Tiajin) Science and Technology Co., Ltd (2017). https://www.vrvis.at/publications/PB-VRVis-2017-019

  8. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140 (2015)

  9. Bae, J., Ng, N.H., Lo, A., Ghassemi, M., Grosse, R.B.: If influence functions are the answer, then what is the question? In: Oh, A.H., Agarwal, A., Belgrave, D., Cho, K. (eds.) Advances in Neural Information Processing Systems (2022)

  10. Basu, S., Pope, P., Feizi, S.: Influence functions in deep learning are fragile. arXiv preprint arXiv:2006.14651 (2020)

  11. Basu, S., You, X., Feizi, S.: On second-order group influence functions for black-box predictions. In: International Conference on Machine Learning, pp. 715–724. PMLR (2020)

  12. Bertossi, L., Geerts, F.: Data quality and explainable AI. J. Data Inf. Qual. (JDIQ) 12(2), 1–9 (2020)

  13. Bhatt, U., et al.: Explainable machine learning in deployment. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 648–657 (2020)

  14. Bodria, F., Giannotti, F., Guidotti, R., Naretto, F., Pedreschi, D., Rinzivillo, S.: Benchmarking and survey of explanation methods for black box models. arXiv preprint arXiv:2102.13076 (2021)

  15. Bradford, A.: The brussels effect. Nw. UL Rev. 107, 1 (2012)

  16. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of shap explanations. J. Artif. Intell. Res. 74, 851–886 (2022)

  17. Budhathoki, K., Janzing, D., Bloebaum, P., Ng, H.: Why did the distribution change? In: Banerjee, A., Fukumizu, K. (eds.) Proceedings of The 24th International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 130, pp. 1666–1674. PMLR (13–15 Apr 2021)

  18. Castro, J., Gómez, D., Tejada, J.: Polynomial calculation of the shapley value based on sampling. Comput. Oper. Res. 36(5), 1726–1730 (2009)

  19. Charpiat, G., Girard, N., Felardos, L., Tarabalka, Y.: Input similarity from the neural network perspective. Advances in Neural Information Processing Systems 32 (2019)

  20. Chefer, H., Gur, S., Wolf, L.: Transformer interpretability beyond attention visualization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 782–791 (2021)

  21. Chen, J., Song, L., Wainwright, M., Jordan, M.: Learning to explain: an information-theoretic perspective on model interpretation. In: International Conference on Machine Learning, pp. 883–892. PMLR (2018)

  22. Chen, J., Song, L., Wainwright, M.J., Jordan, M.I.: L-shapley and c-shapley: efficient model interpretation for structured data. In: International Conference on Learning Representations (2019)

  23. Cook, R.D.: Detection of influential observation in linear regression. Technometrics 19(1), 15–18 (1977)

  24. Covert, I., Kim, C., Lee, S.I.: Learning to estimate shapley values with vision transformers. arXiv preprint arXiv:2206.05282 (2022)

  25. Covert, I., Lee, S.I.: Improving kernelshap: practical shapley value estimation using linear regression. In: International Conference on Artificial Intelligence and Statistics, pp. 3457–3465. PMLR (2021)

  26. Das, A., Rad, P.: Opportunities and challenges in explainable artificial intelligence (xai): a survey. arXiv preprint arXiv:2006.11371 (2020)

  27. Dhanorkar, S., Wolf, C.T., Qian, K., Xu, A., Popa, L., Li, Y.: Who needs to know what, when?: broadening the explainable ai (XAI) design space by looking at explanations across the ai lifecycle. In: Designing Interactive Systems Conference 2021, pp. 1591–1602 (2021)

  28. Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)

  29. Erion, G., Janizek, J.D., Sturmfels, P., Lundberg, S.M., Lee, S.I.: Improving performance of deep learning models with axiomatic attribution priors and expected gradients. Nature Mach. Intell. 3(7), 620–631 (2021)

  30. EU High-Level Expert Group on AI: Ethics guidelines for trustworthy AI (2019)

  31. EU High-Level Expert Group on AI: Policy and investment recommendations for trustworthy AI (2019)

  32. European Commission: Proposal for a regulation of the european parliament and the council: Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts, com/2021/206 final (2021)

  33. Feifel, P., Bonarens, F., Köster, F.: Leveraging interpretability: Concept-based pedestrian detection with deep neural networks. In: Computer Science in Cars Symposium, pp. 1–10 (2021)

  34. Feldman, V., Zhang, C.: What neural networks memorize and why: discovering the long tail via influence estimation. Adv. Neural. Inf. Process. Syst. 33, 2881–2891 (2020)

  35. Floridi, L.: Establishing the rules for building trustworthy ai. Nature Mach. Intell. 1(6), 261–262 (2019)

  36. Floridi, L., Holweg, M., Taddeo, M., Amaya Silva, J., Mökander, J., Wen, Y.: capai-a procedure for conducting conformity assessment of ai systems in line with the eu artificial intelligence act. Available at SSRN 4064091 (2022)

  37. Frosst, N., Hinton, G.: Distilling a neural network into a soft decision tree. arXiv preprint arXiv:1711.09784 (2017)

  38. Galassi, A., Lippi, M., Torroni, P.: Attention in natural language processing. IEEE Trans. Neural Networks Learn. Syst. 32(10), 4291–4308 (2020)

  39. Ghai, B., Liao, Q.V., Zhang, Y., Bellamy, R., Mueller, K.: Explainable active learning (xal): Toward ai explanations as interfaces for machine teachers. Proc. ACM Hum.-Comput. Interact. 4(CSCW3) (2021). https://doi.org/10.1145/3432934

  40. Ghorbani, A., Kim, M., Zou, J.: A distributional framework for data valuation. In: International Conference on Machine Learning, pp. 3535–3544. PMLR (2020)

  41. Ghorbani, A., Zou, J.: Data shapley: Equitable valuation of data for machine learning. In: International Conference on Machine Learning, pp. 2242–2251. PMLR (2019)

  42. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  43. Gstrein, O.J.: European ai regulation: Brussels effect versus human dignity? Zeitschrift für Europarechtliche Studien (ZEuS) 4 (2022)

  44. Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., Giannotti, F.: Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820 (2018)

  45. Gulsum, A., Bo, S.: A survey of visual analytics for explainable artificial intelligence methods. Comput. Graph. 102, 502–520 (2022). https://doi.org/10.1016/j.cag.2021.09.002. https://www.sciencedirect.com/science/article/pii/S0097849321001886

  46. Hanawa, K., Yokoi, S., Hara, S., Inui, K.: Evaluation of similarity-based explanations. In: International Conference on Learning Representations (2021)

  47. Hara, S., Nitanda, A., Maehara, T.: Data cleansing for models trained with sgd. Advances in Neural Information Processing Systems 32 (2019)

  48. Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., Wallach, H.: Improving fairness in machine learning systems: what do industry practitioners need? In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–16 (2019)

  49. Jethani, N., Sudarshan, M., Aphinyanaphongs, Y., Ranganath, R.: Have we learned to explain?: how interpretability methods can learn to encode predictions in their interpretations. In: International Conference on Artificial Intelligence and Statistics, pp. 1459–1467. PMLR (2021)

  50. Jethani, N., Sudarshan, M., Covert, I.C., Lee, S.I., Ranganath, R.: Fastshap: real-time shapley value estimation. In: International Conference on Learning Representations (2021)

  51. Jia, R., et al.: Efficient task-specific data valuation for nearest neighbor algorithms. arXiv preprint arXiv:1908.08619 (2019)

  52. Jia, R., et al.: Towards efficient data valuation based on the shapley value. In: The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1167–1176. PMLR (2019)

  53. Jia, R., Wu, F., Sun, X., Xu, J., Dao, D., Kailkhura, B., Zhang, C., Li, B., Song, D.: Scalability vs. utility: do we have to sacrifice one for the other in data importance quantification? In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8239–8247 (2021)

  54. Keim, D., Andrienko, G., Fekete, J.-D., Görg, C., Kohlhammer, J., Melançon, G.: Visual analytics: definition, process, and challenges. In: Kerren, A., Stasko, J.T., Fekete, J.-D., North, C. (eds.) Information Visualization. LNCS, vol. 4950, pp. 154–175. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-70956-5_7

  55. Khanna, R., Kim, B., Ghosh, J., Koyejo, S.: Interpreting black box predictions using fisher kernels. In: The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3382–3390. PMLR (2019)

  56. Kim, B., Khanna, R., Koyejo, O.O.: Examples are not enough, learn to criticize! criticism for interpretability. In: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 29. Curran Associates, Inc. (2016)

  57. Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., et al.: Interpretability beyond feature attribution: quantitative testing with concept activation vectors (tcav). In: International Conference on Machine Learning, pp. 2668–2677. PMLR (2018)

  58. Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: International Conference on Machine Learning, pp. 1885–1894. PMLR (2017)

  59. Koh, P.W.W., Ang, K.S., Teo, H., Liang, P.S.: On the accuracy of influence functions for measuring group effects. Advances in neural information processing systems 32 (2019)

  60. Kokhlikyan, N., et al.: Captum: a unified and generic model interpretability library for pytorch. arXiv preprint arXiv:2009.07896 (2020)

  61. Kong, S., Shen, Y., Huang, L.: Resolving training biases via influence-based data relabeling. In: International Conference on Learning Representations (2021)

  62. Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., Lakkaraju, H.: The disagreement problem in explainable machine learning: a practitioner’s perspective. arXiv preprint arXiv:2202.01602 (2022)

  63. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 (2016)

  64. Kwon, Y., Rivas, M.A., Zou, J.: Efficient computation and analysis of distributional shapley values. In: International Conference on Artificial Intelligence and Statistics, pp. 793–801. PMLR (2021)

  65. Lee, D., Park, H., Pham, T., Yoo, C.D.: Learning augmentation network via influence functions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10961–10970 (2020)

  66. Liu, F., Avci, B.: Incorporating priors with feature attribution on text classification. In: Annual Meeting of the Association for Computational Linguistics (2019)

  67. Lundberg, S.M., Erion, G.G., Lee, S.I.: Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888 (2018)

  68. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in neural information processing systems 30 (2017)

  69. Marques-Silva, J., Ignatiev, A.: Delivering trustworthy ai through formal xai. In: Proc. of AAAI, pp. 3806–3814 (2022)

  70. Martínez-Plumed, F., et al.: Crisp-dm twenty years later: from data mining processes to data science trajectories. IEEE Trans. Knowl. Data Eng. 33(8), 3048–3061 (2019)

  71. Meng, L., et al.: Machine learning in additive manufacturing: a review. JOM 72(6), 2363–2377 (2020). https://doi.org/10.1007/s11837-020-04155-y

  72. de Mijolla, D., Frye, C., Kunesch, M., Mansir, J., Feige, I.: Human-interpretable model explainability on high-dimensional data. arXiv preprint arXiv:2010.07384 (2020)

  73. Miksch, S., Aigner, W.: A matter of time: applying a data-users-tasks design triangle to visual analytics of time-oriented data (2013)

  74. Mitchell, R., Frank, E., Holmes, G.: Gputreeshap: massively parallel exact calculation of shap scores for tree ensembles. PeerJ Comput. Sci. 8, e880 (2022)

  75. Mökander, J., Juneja, P., Watson, D.S., Floridi, L.: The us algorithmic accountability act of 2022 vs. the eu artificial intelligence act: what can they learn from each other? Minds and Machines, pp. 1–8 (2022)

  76. Molnar, C.: Interpretable Machine Learning. Lulu.com (2020)

  77. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., Bischl, B.: Explaining hyperparameter optimization via partial dependence plots. Adv. Neural. Inf. Process. Syst. 34, 2280–2291 (2021)

  78. Mougan, C., Broelemann, K., Kasneci, G., Tiropanis, T., Staab, S.: Explanation shift: detecting distribution shifts on tabular data via the explanation space. arXiv preprint arXiv:2210.12369 (2022)

  79. Mougan, C., Nielsen, D.S.: Monitoring model deterioration with explainable uncertainty estimation via non-parametric bootstrap. arXiv preprint arXiv:2201.11676 (2022)

  80. Munzner, T.: A nested model for visualization design and validation. IEEE Trans. Visual Comput. Graphics 15(6), 921–928 (2009). https://doi.org/10.1109/TVCG.2009.111

  81. Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., Clune, J.: Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Advances in neural information processing systems 29 (2016)

  82. Nigenda, D., et al.: Amazon sagemaker model monitor: a system for real-time insights into deployed machine learning models. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022, pp. 3671–3681. Association for Computing Machinery, New York (2022). https://doi.org/10.1145/3534678.3539145

  83. Nori, H., Jenkins, S., Koch, P., Caruana, R.: Interpretml: a unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223 (2019)

  84. Pruthi, G., Liu, F., Kale, S., Sundararajan, M.: Estimating training data influence by tracing gradient descent. Adv. Neural. Inf. Process. Syst. 33, 19920–19930 (2020)

  85. Rai, A.: Explainable ai: From black box to glass box. J. Acad. Mark. Sci. 48(1), 137–141 (2020)

  86. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should i trust you?” explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)

  87. Rieger, L., Singh, C., Murdoch, W., Yu, B.: Interpretations are useful: penalizing explanations to align neural networks with prior knowledge. In: International Conference on Machine Learning, pp. 8116–8126. PMLR (2020)

  88. Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., Díaz-Rodríguez, N.: Explainable artificial intelligence (xai) on timeseries data: a survey. arXiv preprint arXiv:2104.00950 (2021)

  89. Ross, A., Doshi-Velez, F.: Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)

  90. Ross, A.S., Hughes, M.C., Doshi-Velez, F.: Right for the right reasons: Training differentiable models by constraining their explanations. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 2662–2670 (2017). https://doi.org/10.24963/ijcai.2017/371

  91. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Mach. Intell. 1(5), 206–215 (2019)

  92. Schramowski, P., et al.: Making deep neural networks right for the right scientific reasons by interacting with their explanations. Nature Mach. Intell. 2(8), 476–486 (2020)

  93. Sculley, D., et al.: Hidden technical debt in machine learning systems. Advances in neural information processing systems 28 (2015)

  94. Sebag, M., Kimelfeld, B., Bertossi, L., Livshits, E.: The shapley value of tuples in query answering. Logical Methods in Computer Science 17 (2021)

  95. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626 (2017)

  96. Shao, X., Rienstra, T., Thimm, M., Kersting, K.: Towards understanding and arguing with classifiers: recent progress. Datenbank-Spektrum 20(2), 171–180 (2020). https://doi.org/10.1007/s13222-020-00351-x

  97. Sharma, A., van Rijn, J.N., Hutter, F., Müller, A.: Hyperparameter importance for image classification by residual neural networks. In: Kralj Novak, P., Šmuc, T., Džeroski, S. (eds.) DS 2019. LNCS (LNAI), vol. 11828, pp. 112–126. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33778-0_10

  98. Siegmann, C., Anderljung, M.: The brussels effect and artificial intelligence: How eu regulation will impact the global ai market. arXiv preprint arXiv:2208.12645 (2022)

  99. Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 (2013)

  100. Stammer, W., Schramowski, P., Kersting, K.: Right for the right concept: revising neuro-symbolic concepts by interacting with their explanations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3619–3629 (2021)

  101. Studer, S., Bui, T.B., Drescher, C., Hanuschkin, A., Winkler, L., Peters, S., Müller, K.R.: Towards crisp-ml (q): a machine learning process model with quality assurance methodology. Mach. Learn. Knowl. Extraction 3(2), 392–413 (2021)

  102. Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23(5), 828–841 (2019)

  103. Teso, S., Alkan, Ö., Stammer, W., Daly, E.: Leveraging explanations in interactive machine learning: an overview. arXiv preprint arXiv:2207.14526 (2022)

  104. Teso, S., Bontempelli, A., Giunchiglia, F., Passerini, A.: Interactive label cleaning with example-based explanations. Adv. Neural. Inf. Process. Syst. 34, 12966–12977 (2021)

  105. Teso, S., Kersting, K.: Explanatory interactive machine learning. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019)

  106. Wang, G., et al.: Accelerating shapley explanation via contributive cooperator selection. In: International Conference on Machine Learning, pp. 22576–22590. PMLR (2022)

  107. Wang, J., Ma, Y., Zhang, L., Gao, R.X., Wu, D.: Deep learning for smart manufacturing: methods and applications. J. Manuf. Syst. 48, 144–156 (2018)

  108. Wang, T., Yang, Y., Jia, R.: Improving cooperative game theory-based data valuation via data utility learning. arXiv preprint arXiv:2107.06336 (2021)

  109. Wang, T., Zeng, Y., Jin, M., Jia, R.: A unified framework for task-driven data quality management. arXiv preprint arXiv:2106.05484 (2021)

  110. Wang, Z., Zhu, H., Dong, Z., He, X., Huang, S.L.: Less is better: unweighted data subsampling via influence function. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 6340–6347 (2020)

  111. Wells, L., Bednarz, T.: Explainable ai and reinforcement learning-a systematic review of current approaches and trends. Front. Artif. Intell. 4, 550030 (2021)

  112. Wirth, R., Hipp, J.: Crisp-dm: towards a standard process model for data mining. In: Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, vol. 1, pp. 29–39. Manchester (2000)

  113. Wuest, T., Weimer, D., Irgens, C., Thoben, K.D.: Machine learning in manufacturing: advantages, challenges, and applications. Production Manufacturing Res. 4, 23–45 (2016). https://doi.org/10.1080/21693277.2016.1192517

  114. Yang, C., Rangarajan, A., Ranka, S.: Global model interpretation via recursive partitioning. In: 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pp. 1563–1570. IEEE (2018)

  115. Yang, J.: Fast treeshap: accelerating shap value computation for trees. arXiv preprint arXiv:2109.09847 (2021)

  116. Yang, S.C.H., Folke, N.E.T., Shafto, P.: A psychological theory of explainability. In: International Conference on Machine Learning, pp. 25007–25021. PMLR (2022)

  117. Yeh, C.K., Kim, J., Yen, I.E.H., Ravikumar, P.K.: Representer point selection for explaining deep neural networks. Advances in neural information processing systems 31 (2018)

  118. Yeh, C.K., Taly, A., Sundararajan, M., Liu, F., Ravikumar, P.: First is better than last for training data influence. arXiv preprint arXiv:2202.11844 (2022)

  119. Yeom, S.K., Seegerer, P., Lapuschkin, S., Binder, A., Wiedemann, S., Müller, K.R., Samek, W.: Pruning by explaining: a novel criterion for deep neural network pruning. Pattern Recogn. 115, 107899 (2021)

  120. Yoon, J., Jordon, J., van der Schaar, M.: Invase: instance-wise variable selection using neural networks. In: International Conference on Learning Representations (2018)

  121. Yu, P., Xu, C., Bifet, A., Read, J.: Linear treeshap. arXiv preprint arXiv:2209.08192 (2022)

  122. Zhang, H., Singh, H., Joshi, S.: “Why did the model fail?”: attributing model performance changes to distribution shifts. In: ICML 2022: Workshop on Spurious Correlations, Invariance and Stability (2022)

Author information

Corresponding author

Correspondence to Michael Lebacher.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Decker, T., Gross, R., Koebler, A., Lebacher, M., Schnitzer, R., Weber, S.H. (2023). The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_13

  • DOI: https://doi.org/10.1007/978-3-031-35891-3_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35890-6

  • Online ISBN: 978-3-031-35891-3

  • eBook Packages: Computer Science, Computer Science (R0)
