SafeAR: Safe Algorithmic Recourse by Risk-Aware Policies

Authors

  • Haochen Wu, University of Michigan
  • Shubham Sharma, J.P. Morgan AI Research
  • Sunandita Patra, J.P. Morgan AI Research
  • Sriram Gopalakrishnan, J.P. Morgan AI Research

DOI:

https://doi.org/10.1609/aaai.v38i14.29522

Keywords:

  • ML: Transparent, Interpretable, Explainable ML
  • PEAI: Accountability, Interpretability & Explainability

Abstract

With the growing use of machine learning (ML) models in critical domains such as finance and healthcare, the need to offer recourse to those adversely affected by the decisions of ML models has become more important; individuals ought to be provided with recommendations on actions to take to improve their situation and thus receive a favorable decision. Prior work on sequential algorithmic recourse---which recommends a series of changes---focuses on action feasibility and uses the proximity of feature changes to determine action costs. However, the uncertainties of feature changes and the risk of higher-than-average costs in recourse have not been considered. It is undesirable if a recourse could (with some probability) result in a worse situation from which recovery requires an extremely high cost. It is essential to incorporate risks when computing and evaluating recourse. We call recourse computed with such risk considerations Safe Algorithmic Recourse (SafeAR). The objective is to empower people to choose a recourse based on their risk tolerance. In this work, we discuss and show how existing recourse desiderata can fail to capture the risk of higher costs. We present a method to compute recourse policies that consider variability in cost, connecting the algorithmic recourse literature with risk-sensitive reinforcement learning. We also adopt the measures "Value at Risk" and "Conditional Value at Risk" from the financial literature to summarize risk concisely. We apply our method to two real-world datasets and compare policies with different risk-aversion levels using risk measures and recourse desiderata (sparsity and proximity).
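To make the risk measures named in the abstract concrete: Value at Risk (VaR) at level α is the α-quantile of a cost distribution, and Conditional Value at Risk (CVaR) is the expected cost in the worst (1 − α) tail. The sketch below is not the paper's implementation; it is a minimal illustration (with a hypothetical `var_cvar` helper and synthetic cost samples) of why two recourse policies with the same mean cost can differ sharply in risk.

```python
import numpy as np

def var_cvar(costs, alpha=0.9):
    """Return (VaR_alpha, CVaR_alpha) for a sample of recourse costs.

    VaR_alpha: the alpha-quantile of the empirical cost distribution.
    CVaR_alpha: the mean cost over the worst (1 - alpha) tail.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)
    cvar = costs[costs >= var].mean()  # average over the tail at or beyond VaR
    return var, cvar

# Synthetic example: two policies with roughly the same mean cost,
# but the second occasionally incurs a very high recovery cost.
rng = np.random.default_rng(0)
low_risk = rng.normal(10.0, 1.0, size=10_000)      # tight cost distribution
high_risk = np.concatenate([
    rng.normal(8.0, 1.0, size=9_000),              # usually cheaper...
    rng.normal(28.0, 2.0, size=1_000),             # ...but sometimes very costly
])

for name, costs in [("low-risk", low_risk), ("high-risk", high_risk)]:
    var, cvar = var_cvar(costs, alpha=0.9)
    print(f"{name}: mean={costs.mean():.1f}  VaR90={var:.1f}  CVaR90={cvar:.1f}")
```

A risk-neutral comparison (mean cost alone) ranks these two policies as near-equals, while CVaR exposes the heavy tail of the second one; this is the kind of distinction the paper's risk-aware policies are designed to surface.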

Published

2024-03-24

How to Cite

Wu, H., Sharma, S., Patra, S., & Gopalakrishnan, S. (2024). SafeAR: Safe Algorithmic Recourse by Risk-Aware Policies. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15915-15923. https://doi.org/10.1609/aaai.v38i14.29522

Section

AAAI Technical Track on Machine Learning V