Can We Trust Fair-AI?

Authors

  • Salvatore Ruggieri University of Pisa, Pisa, Italy
  • Jose M. Alvarez University of Pisa, Pisa, Italy; Scuola Normale Superiore, Pisa, Italy
  • Andrea Pugnana Scuola Normale Superiore, Pisa, Italy
  • Laura State University of Pisa, Pisa, Italy; Scuola Normale Superiore, Pisa, Italy
  • Franco Turini University of Pisa, Pisa, Italy

DOI:

https://doi.org/10.1609/aaai.v37i13.26798

Keywords:

Fair Machine Learning, Fairness Metrics, Yule's Effect

Abstract

There is a fast-growing literature addressing the fairness of AI models (fair-AI), with a continuous stream of new conceptual frameworks, methods, and tools. How much can we trust them? How much do they actually impact society? We take a critical focus on fair-AI and survey issues, simplifications, and mistakes that researchers and practitioners often underestimate, which in turn can undermine trust in fair-AI and limit its contribution to society. In particular, we discuss the hyper-focus on fairness metrics and on optimizing their average performance. We instantiate this observation by discussing the Yule's effect of fair-AI tools: being fair on average does not imply being fair in the contexts that matter. We conclude that the use of fair-AI methods should be complemented by the design, development, and verification practices commonly summarized under the umbrella of trustworthy AI.
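The Yule's effect mentioned in the abstract can be sketched with a toy calculation (the numbers below are hypothetical, not taken from the paper): a classifier can satisfy demographic parity on average across all contexts while exhibiting a sizable disparity inside each individual context.

```python
# Hypothetical counts of (positive predictions, total predictions)
# per protected group and per context -- illustrative only.
counts = {
    ("A", "ctx1"): (80, 100), ("A", "ctx2"): (20, 100),
    ("B", "ctx1"): (60, 100), ("B", "ctx2"): (40, 100),
}

def positive_rate(group, contexts):
    """Aggregate positive-prediction rate for a group over the given contexts."""
    pos = sum(counts[(group, c)][0] for c in contexts)
    tot = sum(counts[(group, c)][1] for c in contexts)
    return pos / tot

# On average, demographic parity holds exactly: both groups get 50% positives.
overall_gap = positive_rate("A", ["ctx1", "ctx2"]) - positive_rate("B", ["ctx1", "ctx2"])
print(f"overall demographic-parity gap: {overall_gap:+.2f}")  # +0.00

# Yet within each context there is a 20-point disparity (in opposite directions).
for c in ["ctx1", "ctx2"]:
    gap = positive_rate("A", [c]) - positive_rate("B", [c])
    print(f"gap in {c}: {gap:+.2f}")  # +0.20 in ctx1, -0.20 in ctx2
```

The per-context disparities cancel out in the aggregate, so a tool that audits only the average metric would report the classifier as fair even though neither context is.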

Published

2023-09-06

How to Cite

Ruggieri, S., Alvarez, J. M., Pugnana, A., State, L., & Turini, F. (2023). Can We Trust Fair-AI?. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15421-15430. https://doi.org/10.1609/aaai.v37i13.26798