Demystifying Algorithmic Fairness in an Uncertain World

Authors

  • Lu Cheng, University of Illinois Chicago

DOI:

https://doi.org/10.1609/aaai.v38i20.30278

Keywords:

Algorithmic Fairness, Uncertainty, Responsible AI

Abstract

Significant progress in fair machine learning (ML) has been made to counteract algorithmic discrimination against marginalized groups. However, fairness remains an active research area that is far from settled. One key bottleneck is the implicit assumption that the environments in which ML systems are developed and deployed are certain and reliable. In a world characterized by volatility, uncertainty, complexity, and ambiguity, it is far from obvious whether existing algorithmic fairness techniques can still serve their purpose. In this talk, I will first discuss how to improve algorithmic fairness under two kinds of predictive uncertainty: aleatoric uncertainty (randomness and ambiguity in the data) and epistemic uncertainty (a lack of data or knowledge). The former relates to historical bias reflected in the data; the latter corresponds to bias perpetuated or amplified during model training due to insufficient data or knowledge. In particular, the first work studies pushing the fairness-utility trade-off through aleatoric uncertainty, and the second work investigates fair few-shot learning. The last work introduces coverage-based fairness, which ensures that different groups enjoy identical treatment and receive equal coverage.
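
To make the coverage-based fairness notion concrete, below is a minimal, hypothetical Python sketch that measures per-group empirical coverage of prediction intervals and the largest coverage gap between groups. The function name group_coverage_gap and the toy data are illustrative assumptions for exposition only, not the formulation presented in the talk.

import numpy as np

def group_coverage_gap(y_true, lower, upper, groups):
    # Empirical coverage of prediction intervals per group, plus the largest
    # gap between any two groups; a smaller gap means the intervals cover
    # all groups more equally (one reading of coverage-based fairness).
    y_true, lower, upper, groups = map(np.asarray, (y_true, lower, upper, groups))
    covered = (lower <= y_true) & (y_true <= upper)           # per-example coverage indicator
    per_group = {g: float(covered[groups == g].mean()) for g in np.unique(groups)}
    gap = max(per_group.values()) - min(per_group.values())   # worst-case disparity
    return per_group, gap

# Toy usage: group 1 receives narrower intervals, so its coverage drops.
rng = np.random.default_rng(0)
y = rng.normal(size=500)
groups = rng.integers(0, 2, size=500)
preds = y + rng.normal(scale=0.8, size=500)        # noisy point predictions
width = np.where(groups == 0, 1.5, 0.6)            # unequal interval widths by group
per_group, gap = group_coverage_gap(y, preds - width, preds + width, groups)
print(per_group, gap)

In practice, a fairness-aware calibration or conformal procedure would adjust the intervals so that this gap shrinks toward zero; the sketch only measures the disparity.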

Published

2024-03-24

How to Cite

Cheng, L. (2024). Demystifying Algorithmic Fairness in an Uncertain World. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22662-22662. https://doi.org/10.1609/aaai.v38i20.30278