Published by Oldenbourg Wissenschaftsverlag, February 17, 2023

Explainable AI for sensor-based sorting systems

  • Mathias Anneken, Manjunatha Veerappa, Marco F. Huber, Christian Kühnert, Felix Kronenwett and Georg Maier

From the journal tm - Technisches Messen

Abstract

Explainable artificial intelligence (XAI) can make machine-learning-based systems more transparent. This additional transparency can enable the use of machine learning in many different domains. In our work, we show how XAI methods can be applied to an autoencoder for anomaly detection in a sensor-based sorting system. The sorting system consists of a vibrating feeder, a conveyor belt, a line-scan camera, and an array of fast-switching pneumatic valves; it separates a material stream into two fractions, realizing a binary sorting task. The autoencoder learns to mimic the normal behavior of the nozzle array and can thus detect abnormal behavior. XAI methods are then used to explain the output of the autoencoder. Both local and global approaches are employed, i.e., we obtain explanations for individual results as well as for the autoencoder as a whole. Initial results for both approaches are shown, together with possible interpretations of these results.
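To make the approach described above concrete, the following minimal sketch illustrates autoencoder-based anomaly detection via a reconstruction-error score, followed by a local, model-agnostic explanation of that score. This is not the authors' implementation: the network architecture, layer sizes, the 99%-quantile threshold, and the choice of SHAP's KernelExplainer are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's actual implementation).
# An autoencoder is trained on data from normal valve operation; at run time,
# a high reconstruction error flags abnormal behavior of the nozzle array.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, 8), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    """Per-sample mean squared reconstruction error, used as the anomaly score."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

n_features = 64                          # e.g., one signal per nozzle (assumed)
normal = torch.randn(1000, n_features)   # placeholder for recorded normal data
model = Autoencoder(n_features)

# Train on normal data only, so the model learns to reproduce normal behavior.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

# Samples whose error exceeds a high quantile of the errors observed on
# normal data are reported as anomalous.
threshold = reconstruction_error(model, normal).quantile(0.99)
test = torch.randn(5, n_features)
print(reconstruction_error(model, test) > threshold)

# Local explanation of the anomaly score with SHAP's model-agnostic
# KernelExplainer (one possible choice of XAI method, not necessarily the
# one used in the paper).
import shap  # pip install shap

score = lambda x: reconstruction_error(
    model, torch.as_tensor(x, dtype=torch.float32)).numpy()
explainer = shap.KernelExplainer(score, normal[:100].numpy())
attributions = explainer.shap_values(test[:1].numpy())  # per-feature contributions
```

In the same spirit, a simple global view could be obtained by aggregating such per-feature attributions over the whole dataset, indicating which nozzle signals drive the anomaly score overall.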


Corresponding authors: Mathias Anneken and Manjunatha Veerappa, Fraunhofer IOSB, Karlsruhe, Germany; and Fraunhofer Center for Machine Learning, E-mail: (M. Anneken), (M. Veerappa)

About the authors

Mathias Anneken

Mathias Anneken received a master’s degree in Electrical Engineering and Information Technology from the Karlsruhe Institute of Technology (KIT). He joined Fraunhofer IOSB in 2014 and is currently finishing his PhD in cooperation with KIT in the fields of artificial intelligence, machine learning, and anomaly detection in the maritime domain. Since July 2022, Mathias Anneken has led the research group Applied Explainable Artificial Intelligence at Fraunhofer IOSB.

Manjunatha Veerappa

Manjunatha Veerappa is a research employee at Fraunhofer IOSB, where he is responsible for developing XAI algorithms. Before joining Fraunhofer, he received a master’s degree specializing in real-time data processing and pattern recognition. Prior to this, he worked at Cognizant Technology Solutions as a programmer analyst with a primary focus on interpreting datasets using statistical tools.

Marco F. Huber

Marco F. Huber received his diploma, Ph.D., and habilitation degrees in computer science from the Karlsruhe Institute of Technology (KIT), Germany, in 2006, 2009, and 2015, respectively. From June 2009 to May 2011, he led the research group “Variable Image Acquisition and Processing” of Fraunhofer IOSB, Karlsruhe, Germany. Subsequently, he was a Senior Researcher with AGT International, Darmstadt, Germany, until March 2015. From April 2015 to September 2018, he was responsible for product development and data science services of the Katana division at USU Software AG, Karlsruhe, Germany, and at the same time was an adjunct professor of computer science at KIT. Since October 2018, he has been a full professor at the University of Stuttgart. He is also director of the Center for Cyber Cognitive Intelligence (CCI) and of the Department for Image and Signal Processing at Fraunhofer IPA in Stuttgart, Germany. His research interests include machine learning, planning and decision making, image processing, data analytics, and robotics. He has authored or co-authored more than 100 publications in high-ranking journals, books, and conference proceedings, and holds two U.S. patents and one EU patent.

Christian Kühnert

Christian Kühnert received his diploma in electrical engineering from TU Darmstadt in 2008 and joined Fraunhofer IOSB in the same year. In 2013, he finished his PhD at the Karlsruhe Institute of Technology (KIT). Since 2019, he has been a senior scientist at Fraunhofer IOSB.

Felix Kronenwett

Felix Kronenwett studied at the DHBW Karlsruhe and continued his studies in Electrical Engineering and Information Technology at the Karlsruhe Institute of Technology (KIT). He joined Fraunhofer IOSB in 2021.

Georg Maier

Georg Maier received the M.Sc. degree in computer science from Utrecht University, Utrecht, The Netherlands, in 2013. Afterwards, he joined the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, Germany. In 2021, he finished his PhD at the Karlsruhe Institute of Technology (KIT). His research interests include various aspects of image processing, in particular algorithmic aspects, with a focus on real-time capabilities.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: Funded by the Fraunhofer Research Center for Machine Learning within the Fraunhofer Cluster of Excellence Cognitive Internet Technologies.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

Received: 2022-10-31
Accepted: 2023-01-27
Published Online: 2023-02-17
Published in Print: 2023-03-28

© 2023 Walter de Gruyter GmbH, Berlin/Boston
