
Towards Interpretability in Fintech Applications via Knowledge Augmentation

  • Conference paper
Progress in Artificial Intelligence (EPIA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14115)


Abstract

The financial industry is a major player in the digital landscape and a key driver of digital transformation in the economy. In recent times, the financial sector has come under scrutiny due to emerging financial crises, particularly in high-risk areas like credit scoring, where standard AI models may not be fully reliable. This highlights the need for greater accountability and transparency in the use of digital technologies in Fintech. In this paper, we propose a novel approach to enhance the interpretability of AI models by knowledge augmentation using distillation methods. Our aim is to transfer the knowledge from black-box models to more transparent and interpretable models, e.g., decision trees, enabling a deeper understanding of decision patterns. We apply our method to a credit score problem and demonstrate that it is feasible to use white-box techniques to gain insight into the decision patterns of black-box models. Our results show the potential for improving interpretability and transparency in AI decision-making processes in Fintech scenarios.
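The black-box-to-white-box distillation idea described in the abstract can be sketched in a few lines. This is an illustrative example, not the authors' pipeline: the scikit-learn `GradientBoostingClassifier` teacher and the synthetic data are assumptions standing in for the paper's boosted model and credit-scoring dataset.

```python
# Minimal sketch of surrogate-model distillation: train an interpretable
# decision tree on the *predictions* of a black-box teacher, then read off
# the tree's rules as an approximation of the teacher's decision patterns.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a credit-scoring dataset (features, default label).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 1. Train the black-box "teacher" on the true labels.
teacher = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Relabel the data with the teacher's predictions (hard labels here;
#    predicted probabilities could instead be fit with a regression tree).
y_teacher = teacher.predict(X)

# 3. Fit a shallow, human-readable "student" tree on the teacher's labels.
student = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_teacher)

# Fidelity: how often the student reproduces the teacher's decisions.
fidelity = (student.predict(X) == y_teacher).mean()
print(f"fidelity to teacher: {fidelity:.2f}")

# The tree's if-then rules expose the teacher's decision patterns.
print(export_text(student, feature_names=[f"f{i}" for i in range(8)]))
```

The key design choice is step 2: the student is evaluated on *fidelity* to the teacher rather than accuracy on the ground truth, since the goal is to explain the black-box model's behaviour, not to replace it.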


Notes

  1. https://xgboost.readthedocs.io/en/stable/


Acknowledgements

This research was supported by the Portuguese Recovery and Resilience Plan (PRR) through project C645008882-00000055, Center for Responsible AI.

Author information


Correspondence to Bernardete Ribeiro.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Silva, C., Faria, T., Ribeiro, B. (2023). Towards Interpretability in Fintech Applications via Knowledge Augmentation. In: Moniz, N., Vale, Z., Cascalho, J., Silva, C., Sebastião, R. (eds) Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in Computer Science, vol 14115. Springer, Cham. https://doi.org/10.1007/978-3-031-49008-8_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-49008-8_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-49007-1

  • Online ISBN: 978-3-031-49008-8

  • eBook Packages: Computer Science (R0)
