Abstract
In this expository article we highlight the relevance of explanations for artificial intelligence in general, and for the newer developments in explainable AI, referring to the origins of, and connections among, different approaches. We describe in simple terms explanations in data management and machine learning that are based on attribution-scores, and counterfactuals as found in the area of causality. We elaborate on the importance of logical reasoning when dealing with counterfactuals, and on their use for score computation.
L. Bertossi—Member of the Millennium Institute for Foundations of Data Research (IMFD, Chile).
Notes
1. If some other non-classical logic is used instead, \(\models \) has to be replaced by the corresponding entailment criterion [23].
2. Example 7 will show an actual cause that is not a counterfactual cause.
3. Less “trivial” cases will be shown in Example 7.
4. We are assuming that classifiers are binary, i.e. they return labels 0 or 1. For simplicity and uniformity, but without loss of generality, we will assume that label 1 is the one we want to explain.
5. Another \(\#P\)-complete problem is \(\#{ Hamiltonian}\), about counting the number of Hamiltonian cycles in a graph. Its decision version, about the existence of a Hamiltonian cycle, is \({ NP}\)-complete.
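The gap between the counting problem and its decision version can be made concrete with a small sketch (not part of the chapter): a brute-force counter for Hamiltonian cycles. Deciding existence only needs one witness, while counting must account for every cycle; the exhaustive enumeration below reflects the exponential cost that, by \(\#P\)-hardness, is not expected to be avoidable in general.

```python
from itertools import permutations

def count_hamiltonian_cycles(n, edges):
    """Count Hamiltonian cycles in an undirected graph by brute force.

    Vertices are 0..n-1; `edges` is a set of frozensets {u, v}.
    Each undirected cycle is counted once: fix vertex 0 as the start
    and keep only one of the two traversal directions.
    """
    adj = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(1, n)):
        if perm[0] > perm[-1]:  # skip the reversed traversal of the same cycle
            continue
        cycle = (0,) + perm + (0,)
        if all(frozenset((cycle[i], cycle[i + 1])) in adj for i in range(n)):
            count += 1
    return count

# The complete graph K4 has (4-1)!/2 = 3 Hamiltonian cycles.
k4 = {frozenset(e) for e in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}
```

For the decision version one could stop at the first cycle found; it is the exhaustive count that makes the problem \(\#P\)-complete.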
6. Interestingly, the decision version of the problem, i.e. deciding whether a formula in \({ Monotone}2{ CNF}\) is satisfiable, is trivially tractable: the assignment that makes all atoms true satisfies the formula.
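This contrast between trivial decision and hard counting can be illustrated with a small sketch (my example formula, not from the chapter): in a monotone 2CNF all literals are positive, so the all-true assignment satisfies every clause, whereas counting the satisfying assignments, \(\#{ Monotone}2{ CNF}\), is \(\#P\)-complete, and the brute-force counter below is exponential in the number of variables.

```python
from itertools import product

def satisfies(assignment, clauses):
    """Check a truth assignment (dict var -> bool) against a monotone
    CNF: each clause is a tuple of (positive) variable names."""
    return all(any(assignment[v] for v in clause) for clause in clauses)

def count_models(variables, clauses):
    """#Monotone2CNF by brute force: enumerate all 2^n assignments,
    matching the #P-hardness of the counting problem."""
    return sum(
        satisfies(dict(zip(variables, values)), clauses)
        for values in product([False, True], repeat=len(variables))
    )

# An example monotone 2CNF formula: (x or y) and (y or z).
variables = ["x", "y", "z"]
clauses = [("x", "y"), ("y", "z")]

# Decision is trivial: the all-true assignment always satisfies it.
all_true = {v: True for v in variables}
assert satisfies(all_true, clauses)
```

Here the formula has 5 models out of the 8 assignments, but finding that number required checking all of them.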
7. It could be transformed into a dDBC, but this would make the circuit grow. The transformation cost is always a concern in the area of knowledge compilation. For some classes of BCs, a transformation into another class can take exponential time, sometimes exponential in a fixed parameter, etc. [1, 26].
8. It is worth mentioning that ASP and DLV have been used to specify and compute model-based diagnoses, both in their abductive and consistency-based formulations [25].
References
Amarilli, A., Capelli, F., Monet, M., Senellart, P.: Connecting knowledge compilation classes and width parameters. Theory Comput. Syst. 64, 861–914 (2020)
Arenas, M., Barcelo, P., Bertossi, L., Monet, M.: The tractability of SHAP-scores over deterministic and decomposable Boolean circuits. In: Proceedings of AAAI (2021)
Arenas, M., Barcelo, P., Bertossi, L., Monet, M.: The tractability of SHAP-scores over deterministic and decomposable Boolean circuits. Extended version of AAAI 2021 paper. arXiv:2104.08015 (2021)
Arenas, M., Barcelo, P., Romero, M., Subercaseaux, B.: On computing probabilistic explanations for decision trees. In: Proceedings of NeurIPS (2022)
Baral, C., Gelfond, M., Rushton, N.: Probabilistic reasoning with answer sets. Theory Pract. Logic Program. 9(1), 57–144 (2009)
Bertossi, L., Salimi, B.: From causes for database queries to repairs and model-based diagnosis and back. Theory Comput. Syst. 61(1), 191–232 (2017)
Bertossi, L., Salimi, B.: Causes for query answers from databases: datalog abduction, view-updates, and integrity constraints. Int. J. Approximate Reasoning 90, 226–252 (2017)
Bertossi, L., Li, J., Schleich, M., Suciu, D., Vagena, Z.: Causality-based explanation of classification outcomes. In: Proceedings of 4th International Workshop on “Data Management for End-to-End Machine Learning” (DEEM) at ACM SIGMOD/PODS, pp. 6:1–6:10 (2020)
Bertossi, L.: Specifying and computing causes for query answers in databases via database repairs and repair programs. Knowl. Inf. Syst. 63(1), 199–231 (2021)
Bertossi, L.: Score-based explanations in data management and machine learning: an answer-set programming approach to counterfactual analysis. In: Šimkus, M., Varzinczak, I. (eds.) Reasoning Web 2021. LNCS, vol. 13100, pp. 145–184. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-95481-9_7
Bertossi, L., Reyes, G.: Answer-set programs for reasoning about counterfactual interventions and responsibility scores for classification. In: Katzouris, N., Artikis, A. (eds.) ILP 2021. LNCS, vol. 13191, pp. 41–56. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-97454-1_4
Bertossi, L.: Declarative approaches to counterfactual explanations for classification. Theory Pract. Log. Program. (2020). https://doi.org/10.1017/S1471068421000582, arXiv:2011.07423
Brewka, G., Eiter, T., Truszczynski, M.: Answer set programming at a glance. Commun. ACM 54(12), 92–103 (2011)
Bryant, R.E.: Graph-based algorithms for boolean function manipulation. IEEE Tran. Comput. C-35, 677–691 (1986)
Burkart, N., Huber, M.F.: A survey on the explainability of supervised machine learning. J. Artif. Intell. Res. 70, 245–317 (2021)
Chatila, R., et al.: Trustworthy AI. In: Braunschweig, B., Ghallab, M. (eds.) Reflections on Artificial Intelligence for Humanity. LNCS (LNAI), vol. 12600, pp. 13–39. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-69128-8_2
Chen, C., Lin, K., Rudin, C., Shaposhnik, Y., Wang, S., Wang, T.: An interpretable model with globally consistent explanations for credit risk. arXiv:1811.12615 (2018)
Chockler, H., Halpern, J.Y.: Responsibility and blame: a structural-model approach. J. Artif. Intell. Res. 22, 93–115 (2004)
Darwiche, A., Marquis, P.: A knowledge compilation map. J. Artif. Intell. Res. 17, 229–264 (2002)
Darwiche, A.: On the tractable counting of theory models and its application to truth maintenance and belief revision. J. Appl. Non-Classical Log. 11(1–2), 11–34 (2001)
de Kleer, J., Mackworth, A., Reiter, R.: Characterizing diagnoses and systems. Artif. Intell. 56(2–3), 197–222 (1992)
Eiter, T., Gottlob, G.: On the complexity of propositional knowledge base revision, updates, and counterfactuals. Artif. Intell. 57(2–3), 227–270 (1992)
Eiter, T., Gottlob, G.: The complexity of logic-based abduction. J. ACM 42(1), 3–42 (1995)
Eiter, T., Gottlob, G., Leone, N.: Abduction from logic programs: semantics and complexity. Theor. Comput. Sci. 189(1–2), 129–177 (1997)
Eiter, T., Faber, W., Leone, N., Pfeifer, G.: The diagnosis frontend of the DLV system. AI Commun. 12(1–2), 99–111 (1999). Extended version as Tech. Report DBAI-TR-98-20, TU Vienna, 1998
Ferrara, A., Pan, G., Vardi, M.Y.: Treewidth in verification: local vs. global. In: Sutcliffe, G., Voronkov, A. (eds.) LPAR 2005. LNCS (LNAI), vol. 3835, pp. 489–503. Springer, Heidelberg (2005). https://doi.org/10.1007/11591191_34
Gomes, C.P., Sabharwal, A., Selman, B.: Model counting. In: Handbook of Satisfiability, pp. 993–1014. IOS Press (2009)
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 1–93 (2018)
Halpern, J., Pearl, J.: Causes and explanations: a structural-model approach. Part I: causes. Br. J. Philos. Sci. 56(4), 843–887 (2005)
Halpern, J.: Actual Causality. MIT Press, Cambridge (2016)
Hahn, S., Janhunen, T., Kaminski, R., Romero, J., Rühling, N., Schaub, T.: Plingo: a system for probabilistic reasoning in Clingo based on LPMLN. In: Proceedings of RuleML+RR, pp. 54–62 (2022)
Hunter, A., Konieczny, S.: On the measure of conflicts: Shapley inconsistency values. Artif. Intell. 174(14), 1007–1026 (2010)
Huang, X., Izza, Y., Ignatiev, A., Cooper, M.C., Asher, N., Marques-Silva, J.: Tractable explanations for d-DNNF classifiers. In: Proceedings of AAAI, pp. 5719–5728 (2022)
Ignatiev, A., Narodytska, N., Marques-Silva, J.: Abduction-based explanations for machine learning models. In: Proceedings of AAAI, pp. 1511–1519 (2019)
Karimi, A.-H., von Kügelgen, J., Schölkopf, B., Valera, I.: Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. In: Proceedings of NeurIPS (2020)
Karimi, A.-H., Barthe, G., Schölkopf, B., Valera, I.: A survey of algorithmic recourse: contrastive explanations and consequential recommendations. ACM Comput. Surv. 55(5), 95:1–95:29 (2023)
Lee, J., Yang, Z.: LPMLN, weak constraints, and P-log. In: Proceedings of AAAI, pp. 1170–1177 (2017)
Lee, J., Yang, Z.: Statistical relational extension of answer set programming. In: Bertossi, L., Xiao, G. (eds.) Reasoning Web. Causality, Explanations and Declarative Knowledge. LNCS, vol. 13759, pp. 132–160. Springer, Cham (2023)
Leone, N., et al.: The DLV system for knowledge representation and reasoning. ACM Trans. Comput. Log. 7(3), 499–562 (2006)
Livshits, E., Bertossi, L., Kimelfeld, B., Sebag, M.: Query games in databases. ACM SIGMOD Rec. 50(1), 78–85 (2021)
Livshits, E., Bertossi, L., Kimelfeld, B., Sebag, M.: The Shapley value of tuples in query answering. Log. Methods Comput. Sci. 17(3), 22:1–22:33 (2021)
Livshits, E., Kimelfeld, B.: The Shapley value of inconsistency measures for functional dependencies. Log. Methods Comput. Sci. 18(2), 20:1–20:33 (2022)
Lundberg, S., Lee, S.-I.: A unified approach to interpreting model predictions. In: Proceedings of NIPS, pp. 4765–4774 (2017)
Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., Lee, S.-I.: From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2(1), 56–67 (2020)
Marques-Silva, J.: Logic-based explainability in machine learning. In: Bertossi, L., Xiao, G. (eds.) Reasoning Web. Causality, Explanations and Declarative Knowledge. LNCS, vol. 13759, pp. 24–104. Springer, Cham (2023)
Marquis, P.: Extending abduction from propositional to first-order logic. In: Jorrand, P., Kelemen, J. (eds.) FAIR 1991. LNCS, vol. 535, pp. 141–155. Springer, Heidelberg (1991). https://doi.org/10.1007/3-540-54507-7_12
Meliou, A., Gatterbauer, W., Moore, K.F., Suciu, D.: The complexity of causality and responsibility for query answers and non-answers. In: Proceedings of VLDB, pp. 34–41 (2010)
Meliou, A., Gatterbauer, W., Halpern, J.Y., Koch, C., Moore, K.F., Suciu, D.: Causality in databases. IEEE Data Eng. Bull. 33(3), 59–67 (2010)
Miller, T.: Contrastive explanation: a structural-model approach. Knowl. Eng. Rev. 36(4), 1–24 (2021)
Minh, D., Wang, H.X., Li, Y.F., Nguyen, T.N.: Explainable artificial intelligence: a comprehensive review. Artif. Intell. Rev. 55, 3503–3568 (2022)
Molnar, C.: Interpretable machine learning: a guide for making black box models explainable (2020). https://christophm.github.io/interpretable-ml-book
Papadimitriou, C.H.: Computational Complexity. Addison-Wesley (1994)
Pearl, J.: Causality: Models, Reasoning and Inference, 2nd edn. Cambridge University Press, Cambridge (2009)
Peirce, C.S.: Collected Papers of Charles Sanders Peirce. Hartshorne, C., Weiss, P. (eds.), vol. 2. Harvard University Press (1931)
Poole, D., Mackworth, A.K.: Artificial Intelligence. Section 5.7, 2nd edn. Cambridge University Press (2017)
Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32(1), 57–95 (1987)
Roth, A.E.: The Shapley Value: Essays in Honor of Lloyd S. Shapley. Cambridge University Press, Cambridge (1988)
Roy, S., Salimi, B.: Causal inference in data analysis with applications to fairness and explanations. In: Bertossi, L., Xiao, G. (eds.) Reasoning Web. Causality, Explanations and Declarative Knowledge. LNCS, vol. 13759, pp. 105–131. Springer, Cham (2023)
Shapley, L.S.: A value for n-person games. Contrib. Theory Games 2(28), 307–317 (1953)
Shi, W., Shih, A., Darwiche, A., Choi, A.: On tractable representations of binary neural networks. In: Proceedings of KR, pp. 882–892 (2020)
Struss, P.: Model-based problem solving. In: Handbook of Knowledge Representation, chap. 4, pp. 395–465. Elsevier (2008)
Ustun, B., Spangher, A., Liu, Y.: Actionable recourse in linear classification. In: Proceedings of FAT, pp. 10–19 (2019)
Valiant, L.G.: The complexity of enumeration and reliability problems. SIAM J. Comput. 8(3), 410–421 (1979)
Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. In: Proceedings of AAAI, pp. 6505–6513 (2021)
Verma, S., et al.: Counterfactual explanations and algorithmic recourses for machine learning: a review. arXiv:2010.10596 (2022)
Acknowledgements
Part of this work was funded by ANID - Millennium Science Initiative Program - Code ICN17002. The author is a Professor Emeritus at Carleton University, Ottawa, Canada; and a Senior Universidad Adolfo Ibáñez (UAI) Fellow, Chile. Comments by Paloma Bertossi on an earlier version of the article are much appreciated.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Bertossi, L. (2023). Attribution-Scores and Causal Counterfactuals as Explanations in Artificial Intelligence. In: Bertossi, L., Xiao, G. (eds) Reasoning Web. Causality, Explanations and Declarative Knowledge. Lecture Notes in Computer Science, vol 13759. Springer, Cham. https://doi.org/10.1007/978-3-031-31414-8_1
Print ISBN: 978-3-031-31413-1
Online ISBN: 978-3-031-31414-8