
Abstract

Inherently explainable Machine Learning (ML) models provide explanations for their predictions by virtue of their construction. Such explanations are more comprehensible when expressed in terms of the model's input features. This paper proposes an inherently explainable pipeline for document classification based on pattern structures and Abstract Meaning Representation (AMR) graphs. The pipeline justifies its classifications with two kinds of explanations: intermediate and final. Intermediate explanations are significant subgraphs found in the document graphs of test documents; final explanations are the sentences of the test documents that correspond to those significant subgraphs.
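To make the pipeline's mechanics concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes each sentence's AMR graph is reduced to a set of edge triples, a simple projection of the graph pattern structure under which the similarity (meet) operation becomes set intersection; the names `meet`, `classify`, and `final_explanation`, as well as the `min_size` significance threshold, are illustrative.

```python
from typing import FrozenSet, List, Tuple

# One AMR graph per sentence, represented as a frozenset of edge
# triples (source_concept, relation, target_concept). A document's
# description is the union of its sentences' triple sets. Reducing
# graphs to triple sets is a crude projection of the full graph
# pattern structure: the meet becomes plain set intersection.

Triple = Tuple[str, str, str]
Description = FrozenSet[Triple]


def meet(d1: Description, d2: Description) -> Description:
    """Similarity operation of the projected pattern structure."""
    return d1 & d2


def classify(test: Description,
             positives: List[Description],
             negatives: List[Description],
             min_size: int = 3) -> Tuple[str, Description]:
    """Lazy pattern-structure classification (illustrative).

    For each positive training document, intersect its description
    with the test document; the shared pattern counts as significant
    if it is large enough and contained in no negative example.
    The returned pattern is the intermediate explanation.
    """
    for train in positives:
        pattern = meet(test, train)
        if len(pattern) >= min_size and not any(pattern <= neg for neg in negatives):
            return "positive", pattern
    return "negative", frozenset()


def final_explanation(pattern: Description,
                      sentences: List[str],
                      sentence_graphs: List[Description]) -> List[str]:
    """Map the intermediate explanation (shared triples) back to the
    test sentences whose AMR graphs contain them."""
    return [s for s, g in zip(sentences, sentence_graphs) if pattern & g]
```

Restricting descriptions to triple sets trades expressiveness for tractability; the full pattern structure on graphs would instead take sets of maximal common subgraphs as the meet.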


Notes

  1. https://github.com/bjascob/amrlib.

  2. https://github.com/bjascob/amrlib-models.
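The footnoted repositories provide the AMR parsing layer such a pipeline needs. The snippet below uses amrlib's documented sentence-to-graph API together with the penman library (a dependency of amrlib) to obtain the per-sentence triples assumed in the sketch above; it presumes a parse model from the amrlib-models repository has already been installed.

```python
import amrlib
import penman  # amrlib serializes graphs in PENMAN notation

# Assumes a sentence-to-graph ("stog") model from the amrlib-models
# repository has already been downloaded and installed into amrlib's
# data directory, per the instructions in footnote 2.
stog = amrlib.load_stog_model()
graphs = stog.parse_sents(["The pipeline explains its classifications."])

# Decode the PENMAN string into triples -- the per-sentence
# representation assumed by the sketch above.
g = penman.decode(graphs[0])
print(g.triples)
```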


Acknowledgements

The work of Sergei O. Kuznetsov on this paper was supported by the Russian Science Foundation under grant 22-11-00323 and performed at HSE University, Moscow, Russia.

Author information

Corresponding author

Correspondence to Sergei O. Kuznetsov.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kuznetsov, S.O., Parakal, E.G. (2023). Explainable Document Classification via Pattern Structures. In: Kovalev, S., Kotenko, I., Sukhanov, A. (eds) Proceedings of the Seventh International Scientific Conference “Intelligent Information Technologies for Industry” (IITI’23). IITI 2023. Lecture Notes in Networks and Systems, vol 776. Springer, Cham. https://doi.org/10.1007/978-3-031-43789-2_39

