Abstract
The financial industry is a major player in the digital landscape and a key driver of digital transformation in the economy. The financial sector has recently come under scrutiny in the wake of emerging financial crises, particularly in high-risk areas such as credit scoring, where standard AI models may not be fully reliable. This highlights the need for greater accountability and transparency in the use of digital technologies in Fintech. In this paper, we propose a novel approach to enhancing the interpretability of AI models through knowledge augmentation using distillation methods. Our aim is to transfer knowledge from black-box models to more transparent and interpretable models, e.g., decision trees, enabling a deeper understanding of their decision patterns. We apply our method to a credit scoring problem and demonstrate that white-box techniques can provide insight into the decision patterns of black-box models. Our results show the potential for improving the interpretability and transparency of AI decision-making processes in Fintech scenarios.
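The distillation idea described above can be sketched as follows: an interpretable "student" decision tree is trained on the predictions of a black-box "teacher" so that the tree approximates the teacher's decision patterns. This is a minimal illustration only, using synthetic data and a gradient-boosting teacher as stand-ins; the paper's actual models, dataset, and procedure are not reproduced here.

```python
# Hedged sketch of black-box -> white-box distillation (hypothetical setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a credit-scoring dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the black-box "teacher" on the ground-truth labels.
teacher = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# 2. Distil: fit a shallow, interpretable "student" tree on the teacher's
#    predicted labels (not the ground truth), so the tree mimics the
#    teacher's decision function rather than the data directly.
student = DecisionTreeClassifier(max_depth=4, random_state=0)
student.fit(X_train, teacher.predict(X_train))

# Fidelity: fraction of held-out points where the student reproduces
# the teacher's decision.
fidelity = (student.predict(X_test) == teacher.predict(X_test)).mean()
```

The shallow student tree can then be inspected directly (e.g., via `sklearn.tree.export_text`) to read off the decision rules it has learned from the teacher.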
Acknowledgements
This research was supported by the Portuguese Recovery and Resilience Plan (PRR) through project C645008882-00000055, Center for Responsible AI.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Silva, C., Faria, T., Ribeiro, B. (2023). Towards Interpretability in Fintech Applications via Knowledge Augmentation. In: Moniz, N., Vale, Z., Cascalho, J., Silva, C., Sebastião, R. (eds) Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in Computer Science(), vol 14115. Springer, Cham. https://doi.org/10.1007/978-3-031-49008-8_9
Print ISBN: 978-3-031-49007-1
Online ISBN: 978-3-031-49008-8