DOI: 10.1145/3600211.3604676

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

Published: 29 August 2023

ABSTRACT

Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations. These classes of approaches have been studied largely independently, and the few attempts at reconciling them have been primarily empirical. This work establishes a clear theoretical connection between game-theoretic feature attributions, focusing on but not limited to SHAP, and counterfactual explanations. After motivating operative changes to Shapley-value-based feature attributions and counterfactual explanations, we prove that, under certain conditions, they are in fact equivalent. We then extend the equivalence result to game-theoretic solution concepts beyond Shapley values. Moreover, through an analysis of the conditions of this equivalence, we shed light on the limitations of naively using counterfactual explanations to provide feature importances. Experiments on three datasets quantitatively show the difference in explanations at every stage of the connection between the two approaches and corroborate the theoretical findings.
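As background, the two families of explanations connected in this work are commonly formalized as follows. This is a sketch of the standard textbook definitions (the Shapley value with a SHAP-style value function, and a Wachter-style counterfactual objective), not necessarily the operative variants introduced and analysed in the paper.

    % Shapley-value attribution for feature i, with player set N and a
    % value function v (in SHAP, typically v(S) = E[f(x) | x_S]):
    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
        \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
        \bigl( v(S \cup \{i\}) - v(S) \bigr)

    % Counterfactual explanation for an input x under model f: a minimally
    % distant point x' that attains the desired outcome y':
    x^{\mathrm{cf}} \in \operatorname*{arg\,min}_{x'} \, d(x, x')
        \quad \text{s.t.} \quad f(x') = y'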


Published in

AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
August 2023, 1026 pages
ISBN: 9798400702310
DOI: 10.1145/3600211

Copyright © 2023 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States
