
VCNet: A Self-explaining Model for Realistic Counterfactual Generation

  • Conference paper
Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022)

Abstract

Counterfactual explanation is a common class of methods for producing local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest modification of feature values that changes the decision predicted by a machine learning model. One of the challenges of counterfactual explanation is the efficient generation of realistic counterfactuals. To address this challenge, we propose VCNet – Variational Counter Net – a model architecture that combines a predictor and a counterfactual generator that are jointly trained, for regression or classification tasks. VCNet both generates predictions and generates counterfactual explanations, without having to solve a separate minimisation problem. Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class. This is done by learning a variational autoencoder conditioned on the output of the predictor, in a joint-training fashion. We present an empirical evaluation on tabular datasets and across several interpretability metrics. The results are competitive with the state-of-the-art method.
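The architecture sketched in the abstract — a predictor jointly trained with a conditional variational autoencoder, where a counterfactual is obtained by decoding the instance's latent code under a different class label — can be illustrated with a minimal numpy sketch. All dimensions, weight shapes, and function names below are hypothetical illustrations, not the paper's actual implementation; randomly initialised weights stand in for a trained model, so the outputs are not meaningful counterfactuals, only a demonstration of the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
d_in, d_lat, n_cls = 8, 3, 2

# Randomly initialised weights stand in for a trained model.
W_enc = rng.normal(size=(d_in + n_cls, 2 * d_lat))  # encoder -> (mu, log_var)
b_enc = np.zeros(2 * d_lat)
W_dec = rng.normal(size=(d_lat + n_cls, d_in))      # decoder, class-conditioned
b_dec = np.zeros(d_in)
W_clf = rng.normal(size=(d_in, n_cls))              # jointly trained predictor head
b_clf = np.zeros(n_cls)

def predict(x):
    """Predictor head: class with the highest logit."""
    return int(np.argmax(x @ W_clf + b_clf))

def one_hot(c):
    v = np.zeros(n_cls)
    v[c] = 1.0
    return v

def counterfactual(x, target_class):
    """Encode x conditioned on its predicted class, then decode the latent
    code conditioned on the *target* class: one forward pass, with no
    per-instance optimisation problem to solve."""
    c = one_hot(predict(x))
    h = np.concatenate([x, c]) @ W_enc + b_enc
    mu, log_var = h[:d_lat], h[d_lat:]
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=d_lat)  # reparameterisation
    return np.concatenate([z, one_hot(target_class)]) @ W_dec + b_dec

x = rng.normal(size=d_in)
x_cf = counterfactual(x, target_class=1 - predict(x))
print(x_cf.shape)  # same feature space as the input: (8,)
```

Because the decoder is conditioned on the class label, sampling it under the desired class yields a point drawn toward the learned distribution of that class, which is what makes the generated counterfactuals realistic in the sense the abstract describes.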


Notes

  1. For Kingma et al. [12], what we call the "content" in this paper is denoted the "style". It refers to the writing style of digits in MNIST-like datasets.

  2. By self-explainable model here we mean that the predictor is constrained by the counterfactual generator during training, but the explanation is not directly used to produce the model output as in [1].

  3. Note that the quality of the generated counterfactual depends on the quality of the learned latent space.

  4. http://yann.lecun.com/exdb/mnist/.

References

  1. Alvarez Melis, D., Jaakkola, T.: Towards robust interpretability with self-explaining neural networks. In: Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS), pp. 7786–7795 (2018)

  2. Barr, B., Harrington, M.R., Sharpe, S., Bruss, C.B.: Counterfactual explanations via latent space projection and interpolation. Preprint arXiv:2112.00890 (2021)

  3. Blake, C.: UCI repository of machine learning databases (1998). http://www.ics.uci.edu/mlearn/MLRepository.html

  4. Cortez, P., Silva, A.M.G.: Using data mining to predict secondary school student performance. In: Proceedings of Annual Future Business Technology Conference, pp. 5–12 (2008)

  5. Downs, M., Chu, J.L., Yacoby, Y., Doshi-Velez, F., Pan, W.: CRUDS: counterfactual recourse using disentangled subspaces. In: ICML Workshop on Human Interpretability in Machine Learning (WHI), pp. 1–23 (2020)

  6. Elton, D.C.: Self-explaining AI as an alternative to interpretable AI. In: Proceedings of the International Conference on Artificial General Intelligence (AGI), pp. 95–106 (2020)

  7. FICO: Explainable machine learning challenge (2018). https://community.fico.com/s/explainable-machine-learning-challenge

  8. Guo, H., Nguyen, T., Yadav, A.: CounterNet: end-to-end training of counterfactual aware predictions. In: ICML Workshop on Algorithmic Recourse (2021)

  9. John, V., Mou, L., Bahuleyan, H., Vechtomova, O.: Disentangled representation learning for non-parallel text style transfer. In: Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pp. 424–434 (2019)

  10. Kaggle: Titanic - machine learning from disaster (2018). https://www.kaggle.com/c/titanic/overview

  11. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. In: Proceedings of the International Conference on Learning Representations (ICLR) (2014)

  12. Kingma, D.P., Mohamed, S., Jimenez Rezende, D., Welling, M.: Semi-supervised learning with deep generative models. In: Proceedings of the International Conference on Neural Information Processing Systems (NIPS), pp. 3581–3589 (2014)

  13. Kohavi, R., Becker, B.: UCI machine learning repository: adult data set (1996)

  14. Kuzilek, J., Hlosta, M., Zdrahal, Z.: Open university learning analytics dataset. Sci. Data 4, 170171 (2017)

  15. Laugel, T., Lesot, M.J., Marsala, C., Renard, X., Detyniecki, M.: The dangers of post-hoc interpretability: unjustified counterfactual explanations. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 2801–2807 (2019)

  16. Molnar, C.: Interpretable Machine Learning. C. Molnar, 2nd edn (2022). https://christophm.github.io/interpretable-ml-book

  17. Mothilal, R.K., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT), pp. 607–617 (2020)

  18. Nangi, S.R., Chhaya, N., Khosla, S., Kaushik, N., Nyati, H.: Counterfactuals to control latent disentangled text representations for style transfer. In: Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pp. 40–48 (2021)

  19. Nemirovsky, D., Thiebaut, N., Xu, Y., Gupta, A.: CounteRGAN: generating realistic counterfactuals with residual generative adversarial nets. Preprint arXiv:2009.05199 (2020)

  20. de Oliveira, R.M.B., Martens, D.: A framework and benchmarking study for counterfactual generating methods on tabular data. Appl. Sci. 11(16), 7274 (2021)

  21. Pawelczyk, M., Broelemann, K., Kasneci, G.: Learning model-agnostic counterfactual explanations for tabular data. In: Proceedings of the Web Conference (WWW '20), pp. 3126–3132 (2020)

  22. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206–215 (2019)

  23. Russell, C.: Efficient search for diverse coherent explanations. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT). Association for Computing Machinery, New York (2019)

  24. Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. In: Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS), pp. 3483–3491 (2015)

  25. Ustun, B., Spangher, A., Liu, Y.: Actionable recourse in linear classification. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT), pp. 10–19 (2019)

  26. Van Looveren, A., Klaise, J.: Interpretable counterfactual explanations guided by prototypes. In: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD), pp. 650–665 (2021)

  27. Wachter, S., Mittelstadt, B.D., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. J. Law Technol. 31(2), 841–887 (2018)


Author information

Corresponding author: Victor Guyomard.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 146 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Guyomard, V., Fessant, F., Guyet, T., Bouadi, T., Termier, A. (2023). VCNet: A Self-explaining Model for Realistic Counterfactual Generation. In: Amini, MR., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science(), vol 13713. Springer, Cham. https://doi.org/10.1007/978-3-031-26387-3_27

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-26387-3_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26386-6

  • Online ISBN: 978-3-031-26387-3

  • eBook Packages: Computer Science (R0)
